In our previous post we noted that the state of the art in power monitoring for virtualized environments is much less advanced than power monitoring applied to physical systems. There is a larger historical context, and the economic implications for the planning and operation of data centers make this problem worth exploring.
Let's look at a similar dynamic in a different context: in the region of the globe where I grew up, water used to be so inexpensive that residential use was not metered. The water company charged a fixed amount every month, and that was it. Hence, tenants in an apartment building never saw a water bill. Water was a predictable cost component in the total cost of the building and was included in the rent. Water was essentially an infinite resource, and reflecting this fact, the system offered residents absolutely no incentive to rein in water use.
As the population increased, water became an increasingly precious and expensive resource. The water company started installing residential water meters, but bowing to tradition, landlords continued to pay the bills, which were still a very small portion of overall operating costs. Tenants still had no incentive to save water because they never saw the water bill.
Today there are very few regions in the world where water can be treated as an infinite resource. The cost of water grew so much faster than other cost components that landlords decided to expose it to tenants; hence the practice, common today, of tenants paying for the specific consumption of the unit they occupy. And because consumption is now exposed at the individual unit level, the historical data can serve as the basis for water conservation policies, for instance charging penalty rates for use beyond a certain threshold.
The use of power in the data center has been following a similar trajectory. For many years the cost of power was a noise-level item in the cost of operating a data center, and it was practical to fold the cost of electricity into the facilities bill. Hence IT managers never saw energy costs. This situation is changing as we speak; see, for instance, this recent article in Computerworld.
Recent Intel-based server platforms, such as the existing Bensley platform and, more recently, the Nehalem-EP platform to be introduced in March, come with instrumented power supplies that allow the monitoring and control of power use at the individual server level. This information allows compiling a historical record of actual power use that is much more accurate than the traditional method of using derated nameplate power.
This historical information is useful for data center planning because it yields a much tighter forecast, beneficial in two ways: it reduces the need to over-specify the power designed into the facility, and it maximizes the amount of equipment that can be deployed for a fixed amount of available power.
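To make the planning benefit concrete, here is a minimal sketch of the arithmetic involved. All figures (nameplate rating, derating factor, measured peak, power budget) are hypothetical illustrations, not vendor data:

```python
# Hypothetical capacity-planning sketch: how many servers fit a fixed
# power budget when provisioning from measured peak draw instead of
# derated nameplate power. All numbers are illustrative.

NAMEPLATE_W = 650        # label rating on the power supply
DERATING = 0.80          # a common rule of thumb: plan at 80% of nameplate
MEASURED_PEAK_W = 320    # observed peak from instrumented power supplies

def servers_per_budget(budget_w: float, per_server_w: float) -> int:
    """Whole servers deployable within a power budget."""
    return int(budget_w // per_server_w)

budget = 100_000  # 100 kW of provisioned circuit capacity

by_nameplate = servers_per_budget(budget, NAMEPLATE_W * DERATING)    # 192
by_measurement = servers_per_budget(budget, MEASURED_PEAK_W)         # 312

print(by_nameplate, by_measurement)
```

Under these assumed numbers, provisioning from the measured record rather than the derated nameplate fits roughly 60 percent more servers into the same power envelope, which is exactly the second benefit described above.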
From an operational perspective we can expect ever more aggressive implementations of power-proportional computing in servers, with large variations between power consumed at idle and power consumed at full load. Ten years ago this variation was less than 10 percent; today 50 percent is not unusual. Data center operators can therefore expect wider swings in data center power demand. Server power management technology provides the means to manage these swings, stay within a data center's power envelope, and still maintain existing service level agreements with customers.
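A quick back-of-the-envelope calculation shows how those percentages translate into the rack-level swings operators must absorb. The per-server peak draw and rack size below are assumptions for illustration:

```python
# Illustrative arithmetic for the idle-vs-full-load swing described above.
# A power-proportional server draws much less at idle; aggregated over a
# rack, this widens the demand swing the facility must absorb.
# All numbers are hypothetical.

PEAK_W = 300           # assumed per-server draw at full load
SERVERS_PER_RACK = 40  # assumed rack density

def rack_swing_w(peak_w: float, idle_fraction: float, servers: int) -> float:
    """Difference between all-busy and all-idle rack draw, in watts."""
    idle_w = peak_w * idle_fraction
    return (peak_w - idle_w) * servers

# Ten years ago: idle within ~10% of full load (idle draws ~90% of peak).
old_swing = rack_swing_w(PEAK_W, 0.90, SERVERS_PER_RACK)  # 1200 W
# Today: idle draw around 50% of peak.
new_swing = rack_swing_w(PEAK_W, 0.50, SERVERS_PER_RACK)  # 6000 W

print(old_swing, new_swing)
```

Under these assumptions the same rack's demand swing grows fivefold, from about 1.2 kW to 6 kW, which is the kind of variation server power management technology is meant to keep inside the facility's power envelope.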
There is still one more complication: with the steep adoption of virtualization in the data center over the past two years, starting with consolidation exercises, an increasing portion of business is transacted on virtualized resources. In this new environment, using a physical host as the locus for billing power may no longer be sufficient, especially in multi-tenant environments where the cost centers for the virtual machines running on a host may reside in different departments or even in different companies.
It is reasonable to expect that this mode of fine-grained power management at the virtual machine level will take root in cloud computing and hosted environments, where resources are typically deployed in virtualized form. Fine-grained power monitoring and management makes sense in an environment where energy and carbon footprint are a major TCO component. To the extent that energy costs are exposed to users alongside the MIPS consumed, this information provides the checks and balances, and the data, needed to implement rational policies for managing energy consumption.
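One simple way to bill power below the physical-host level is to apportion the host's measured draw across its virtual machines. The sketch below splits power by CPU-time share; the attribution policy, VM names, and numbers are assumptions for illustration, and a real scheme might also weight memory, I/O, or an idle-power floor:

```python
# A minimal sketch of attributing a host's measured power to the virtual
# machines it runs, proportionally to each VM's CPU-time share over an
# interval. Policy and figures are hypothetical.

def attribute_power(host_watts: float, cpu_seconds: dict) -> dict:
    """Split measured host power across VMs by CPU-time share."""
    total = sum(cpu_seconds.values())
    if total == 0:
        # No activity: split the (idle) draw evenly across tenants.
        share = host_watts / len(cpu_seconds)
        return {vm: share for vm in cpu_seconds}
    return {vm: host_watts * t / total for vm, t in cpu_seconds.items()}

# Hypothetical interval: the host drew 250 W while three tenant VMs ran.
usage = {"vm-finance": 120.0, "vm-web": 60.0, "vm-batch": 20.0}
bill = attribute_power(250.0, usage)
print(bill)  # {'vm-finance': 150.0, 'vm-web': 75.0, 'vm-batch': 25.0}
```

The point of the sketch is the accounting boundary, not the formula: once host power is split per VM, each tenant's cost center can be billed for its own consumption even when the VMs share one physical machine.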
Based on the considerations above, we see a maturation process for power management practices in a given facility happening in three stages.
Stage 1: Undifferentiated, one bill for the whole facility. Power hogs and energy efficient equipment are thrown in the same pile. Metrics to weed out inefficient equipment are hard to come by.
Stage 2: Power monitoring implemented at the physical host level. This exposes inefficient equipment. Many installations feel the pain of rising energy costs, but organizational inertia prevents passing those costs on to IT operations. Monitoring at this level may also be too coarse-grained, too little, too late for environments rapidly transitioning to virtualization, with inadequate support for multi-tenancy.
Stage 3: Power monitoring encompasses virtualized environments. This capability would align power monitoring with the unit of delivery of value to customers.