The Whys and Hows of Measuring Power in your Data Center
How much power your data center uses has not historically been a major concern for IT managers. But with costs and environmental savings now top-of-mind concerns, there are some simple steps you can take to measure -- and manage -- your facility's performance.
If your phone rings tomorrow and the CIO is on the line asking, “What are we doing about power consumption in our data centers?” what will you say? Typically, data center managers have not worried about power consumption, but this is quickly changing because 1) additional power is often not available, 2) power is becoming a significant portion of the cost of operating a data center, and 3) companies are placing a higher value on green initiatives.
Based on the simple premise that “you can’t manage what you can’t measure,” data centers are taking steps to measure device-level power consumption. Rule-of-thumb estimates no longer suffice, because they can turn out to be just plain wrong, leading to unnecessary and sometimes quite substantial costs. Devices that were thought to be consuming very little power may be consuming quite a lot, even while sitting idle doing no useful work.
The first step is to baseline current power consumption. Ideally, this will be done in a way that provides useful statistics to be compared over time. Early measurements and estimates may be rough, but can be refined as the power deployment inside and outside the data center is better understood and as the measurement quality improves.
There are many ways to manage power consumption in a data center but without some baseline measurements it is difficult to know where to start or what efforts will have the greatest impact. Also, without baseline measurements it is impossible to show management past levels of consumption and how you have improved.
Efficiency Metrics
An efficiency metric receiving a lot of attention is power usage effectiveness (PUE). It is the ratio of the total energy used by a data center, including IT equipment, to the energy consumed by the IT equipment alone. The total energy includes lighting, cooling and air movement equipment, and inefficiencies in electricity distribution within the data center. The IT equipment portion is the equipment that performs computational tasks.
PUE = Total Facility Power / IT Equipment Power
A data center that supplies power only to IT equipment would have a PUE of 1.0 because the numerator and denominator would both be IT equipment power. This is obviously not a realistic situation. Even in a lights-out data center, power will be consumed to provide cooling and air movement, and there will be electrical distribution inefficiencies. DCiE (data center infrastructure efficiency) is simply the inverse of PUE and won’t be covered here.
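To make the arithmetic concrete, here is a minimal Python sketch of both metrics; the 1,600 kW and 1,000 kW figures are illustrative, not measurements from any particular facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """DCiE: the inverse of PUE, usually expressed as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Illustrative: 1,600 kW drawn from the utility, 1,000 kW reaching IT gear.
print(pue(1600.0, 1000.0))   # 1.6
print(dcie(1600.0, 1000.0))  # 62.5 (percent)
```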
Corporate average data center efficiency (CADE) takes into account the energy efficiency of facilities, their utilization rates and the level of utilization of servers.
CADE = Facility Efficiency × IT Asset Efficiency
• Facility Efficiency = Energy delivered to IT / energy drawn from utilities
• IT Asset Efficiency = Average CPU utilization across all servers, often a small percentage such as 5 percent, until efficiency efforts like virtualization are undertaken.
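A hedged sketch of the CADE calculation using the two factors just defined; the 50 percent facility efficiency and 5 percent average CPU utilization inputs are illustrative assumptions.

```python
def cade(facility_efficiency: float, it_asset_efficiency: float) -> float:
    """CADE per the definitions above; both inputs are fractions (0..1)."""
    return facility_efficiency * it_asset_efficiency

# Illustrative: half of utility power reaches IT gear; 5% CPU utilization.
print(f"CADE = {cade(0.50, 0.05):.1%}")  # 2.5%
```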
Where and How to Measure — the Choices
In a data center there are several locations where power can be measured. Moving from the coarsest measurement to the most detailed, the first is the power entering the data center. If the data center is a stand-alone structure, this is simply the power feed from the utility. This would be the total power number in the numerator of a PUE calculation.
Very often it’s not this easy. The data center may be a floor in a building, in which case a submeter for that floor or room should be installed. This submeter would record the total power number, provided the data center doesn’t share power or building facilities such as cooling equipment. If facilities and power are shared, which is often the case particularly in urban data centers, then work will need to be done to at least estimate the total power consumption of the data center, possibly by combining several different sources, e.g., the submeter measuring the feed into the data center plus some percentage of the power used by the building cooling equipment.
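A simple sketch of that kind of estimate; the 60 percent share of building cooling attributed to the data center, and all the power figures, are illustrative assumptions.

```python
def estimated_total_kw(submeter_kw: float,
                       shared_cooling_kw: float,
                       data_center_share: float) -> float:
    """Total power estimate: the floor submeter reading plus the data
    center's assumed fraction of shared building cooling power."""
    return submeter_kw + shared_cooling_kw * data_center_share

# Illustrative: 800 kW on the submeter, 400 kW of building chillers,
# with the data center assumed to drive 60% of the cooling load.
print(estimated_total_kw(800.0, 400.0, 0.60))  # 1040.0 kW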
The next place where power is often measured is at the UPS. If it only provides power to IT equipment then this data can be used as an approximation for the denominator of a PUE calculation. However, this is only an approximation because the power inefficiencies of the UPS itself should not be part of the IT equipment power. The UPS may also provide power to rack-based cooling equipment.
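If only the UPS input is metered, the IT-load approximation can be improved by backing out the UPS's own conversion losses. A sketch, assuming a 94 percent efficient UPS; actual efficiency varies with load and model.

```python
def it_load_from_ups(ups_input_kw: float, ups_efficiency: float) -> float:
    """Approximate IT load from a UPS input reading by removing the
    UPS's own conversion losses (efficiency as a fraction, e.g. 0.94)."""
    return ups_input_kw * ups_efficiency

# Illustrative: 500 kW into a UPS assumed to be 94% efficient at this load.
# Any rack-based cooling fed by the UPS would still inflate this figure.
print(it_load_from_ups(500.0, 0.94))  # 470.0 kW
```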
A third place to measure power is at the rack itself with metered rack PDUs. These figures are generally considered to represent the IT equipment, aggregated to a rack, unless there are fans or rack-side cooling units.
A fourth place to measure power is at the individual outlets of a rack PDU. These intelligent PDUs also typically provide aggregated rack power consumption as well. Monitoring the power at the outlet level ensures that IT equipment power consumption can be uniquely identified for a PUE calculation. By providing power information at the individual device level, specific actions can be taken to improve efficiency.
The fifth place to measure power is at the CPU. This gives the purest measurement of what power is actually going into doing purely computational work. In practice, this is not widely used today. In terms of taking actual energy conservation actions, the CPU level is not very useful since, in most cases, an entire device, blade or other piece of IT equipment is what data center staff can change or decommission, not a CPU. The most typical approaches to measuring power consumption in a data center are metered rack PDUs and intelligent rack PDUs that monitor individual outlets.
What to Do with the Data Gathered
Depending on the measurement locations and method of measurement chosen, various energy efficiency initiatives may be taken. Individual outlet-level metering is recommended for IT equipment because it provides useful, actionable information.
Monitoring the power consumed at a rack allows data center managers to determine whether their original power allocations still make sense. Quite often, power is allocated to IT equipment on the basis of nameplate ratings, which are conservatively high. Even when a percentage of nameplate power is used, say 70 percent, power is often over-allocated. This means more power is going to an IT equipment rack than will actually be consumed. This “stranded power” could be deployed elsewhere, but how do you know you’re not leaving the rack vulnerable to running out of power in a peak load situation?
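A quick sketch of the stranded-power arithmetic; the 20 kW nameplate total, 70 percent planning factor and 9 kW measured peak are illustrative.

```python
def stranded_kw(nameplate_kw: float, planning_factor: float,
                measured_peak_kw: float) -> float:
    """Power allocated to a rack (a percentage of nameplate) that the
    measured peak shows is never actually drawn."""
    return nameplate_kw * planning_factor - measured_peak_kw

# Illustrative: a rack totaling 20 kW of nameplate ratings, planned at
# 70% of nameplate, whose measured peak never exceeds 9 kW.
print(stranded_kw(20.0, 0.70, 9.0))  # 5.0 kW that could go elsewhere
```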
Monitor each individual device at regular intervals, the shorter the better, to ensure that no peak periods are overlooked. With individual device power consumption figures it is possible to set up racks such that equipment power consumption patterns complement each other, so more IT equipment can be supported with the same amount of power. If a rack is close to consuming all the power allocated to it, and therefore at risk of tripping a breaker, having individual IT equipment power consumption data allows IT staff to remove equipment in a logical manner, minimizing the risk of a breaker tripping while maintaining useful loading levels.
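The sketch below shows the kind of analysis per-device interval data makes possible; the device names, readings and 7.5 kW rack budget are all made up for illustration.

```python
# Hypothetical per-device power samples (kW) taken at a fixed interval.
samples = {
    "web-01": [1.2, 1.4, 2.1, 1.3],
    "db-01":  [2.0, 2.2, 2.3, 3.1],
    "etl-01": [0.4, 2.8, 0.5, 0.4],
}

rack_budget_kw = 7.5

# Per-device peaks show which equipment drives the rack toward its limit.
peaks = {name: max(kw) for name, kw in samples.items()}

# Worst case if every device peaked at once vs. the observed rack peak;
# a gap between the two is what complementary load patterns buy you.
coincident_peak = sum(peaks.values())
observed_peak = max(sum(step) for step in zip(*samples.values()))

print(peaks)
print(f"sum of peaks: {coincident_peak:.1f} kW, "
      f"observed rack peak: {observed_peak:.1f} kW")
if coincident_peak > rack_budget_kw:
    print("At risk if loads stop complementing each other; review placement.")
```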
Through tests in its own data center, Raritan determined that rule-of-thumb percentages of nameplate ratings simply don’t work. Across 59 servers, average power consumption as a percentage of nameplate rating broke down as follows:
• 20 percent or less: 15 servers
• 21 to 40 percent: 29 servers
• 41 to 60 percent: 9 servers
• 61 to 80 percent: 4 servers
• 81 percent or more: 2 servers
Even at peak power consumption, 49 of the servers drew 60 percent or less of their nameplate rating. Many data center planners use 70 percent of nameplate, which means there is a lot of stranded power in many data centers.
On the other hand, at peak power consumption, 5 of the 59 servers were at 81 percent or more of nameplate and therefore at risk of shutting down. The message is that in terms of power consumption, it is important to know what is going on at the individual device level, not some aggregated average, which may mask problems on both the high and the low side.
Environmental Sensors: Their Impact on Power and Cooling Efficiency
Environmental sensors make an important contribution to power efficiency. It is common for cooling to consume 30 percent or more of a data center’s total power. IT equipment vendors provide inlet temperature specifications; as long as the inlet temperature is within the specification, the server will perform fine. These specifications are often substantially higher than the temperatures typically maintained in data center cold aisles. Thus, the temperature can often be turned up, which reduces the power consumed by the cooling equipment.
Temperature sensors should be placed at the bottom, middle and top third of racks on the cool air inlet side. Cooling IT equipment to temperatures lower than required consumes a lot of power without any beneficial effects. Due to a lack of at-the-rack instrumentation, data center managers often overcool to be confident IT equipment won’t fail.
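A minimal sketch of a sanity check against a vendor inlet specification; the sensor readings and the 27 °C limit are illustrative assumptions, so check your own equipment's documentation.

```python
# Hypothetical inlet readings (°C) from sensors at the bottom, middle and
# top third of a rack's cool-air intake side.
inlet_c = {"bottom": 18.5, "middle": 19.8, "top": 22.4}

vendor_max_inlet_c = 27.0  # assumed spec; confirm with your vendor's docs

# Margin between each inlet reading and the specification.
headroom = {pos: vendor_max_inlet_c - t for pos, t in inlet_c.items()}
tightest_margin = min(headroom.values())

print(headroom)
if tightest_margin > 2.0:
    print(f"All inlets at least {tightest_margin:.1f} °C below spec; "
          "the cold-aisle setpoint can likely be raised.")
```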
New Technologies Available
Taking a single snapshot of power consumption at one point in time is not sufficient. IT devices may consume a lot less power at 2 a.m. than they do at 8 a.m. and may hit peak power consumption at 4 p.m. on a Thursday. Power consumption can also vary by time of year, such as during December’s online sales season.
There are hardware devices that can take snapshots of power consumption at user-defined intervals, as often as once every few seconds. Software programs are available to turn these data points into calculations of energy usage, where the unit of measure is the kilowatt-hour (kWh). Sophisticated tools can calculate carbon footprints based on energy usage. With actual individual device information, data center staff can know the biggest contributors to carbon generation and therefore what needs to be most closely managed.
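A sketch of how interval snapshots become kWh and an estimated carbon footprint; the samples and the 0.5 kg CO2 per kWh emission factor are illustrative, as real grid factors vary widely by region and utility.

```python
# Hypothetical power snapshots (kW) taken every 10 seconds for one device.
samples_kw = [0.45, 0.47, 0.52, 0.50, 0.46]
interval_s = 10

# Energy is power integrated over time: each sample covers one interval.
kwh = sum(kw * interval_s / 3600.0 for kw in samples_kw)

# Carbon footprint from an assumed grid emission factor (kg CO2 per kWh);
# the 0.5 figure is illustrative only.
kg_co2 = kwh * 0.5

print(f"{kwh:.4f} kWh, {kg_co2:.4f} kg CO2")
```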
What to Look For in Power Measurement
Accurate: As carbon caps, credits and trading are adopted, accuracy becomes important. Accuracy of +/- 5 percent, which assumes perfect sine waves that rarely occur in the real world, may be acceptable for determining whether a rack is operating with about a 25 percent margin before circuit breakers trip. It is not acceptable when dealing with regulations or with carbon credits to be verified and traded on exchanges, nor is it accurate enough for billing or chargebacks.
Open and interoperable: Many data centers have deployed an IT management system. To tie such a system to power measurement, look for open standards for integration and interoperability with existing equipment. Ease of use is a key consideration, so that power management does not become a time-consuming project for already busy IT staff.
Secure: Power is the lifeblood of data centers, so it is important that access to the power management system be secure. Look for systems with high levels of encryption, such as 256-bit AES, and the ability to set authentication, authorization and permissions.
We hope that if the CIO calls you tomorrow and asks, “What are we doing about power consumption in our data centers?” you’ll refer to this article and outline a plan beginning with a program to gather information and establish some baselines. Collecting data now, and taking a first pass at data center metrics such as a PUE calculation, will put you on a path to managing power and power costs more efficiently, and to taking calls from CIOs with confidence.
Herman Chan is the Director of Raritan’s Power Solutions Business Unit, and Greg More is the Senior Product Marketing Manager for the same unit.
