Turning The Tide: Challenges to Scaling the Platform

As I discussed in my blog two weeks ago, one of the most powerful technical strategies in turning the tide on data center energy consumption has been the drive toward more proportional scaling of platform power with workload.

[Figure: Sustained Efficiency Improvement. Configurations: two-socket systems, SPECpower_ssj2008 test results, testing by Hewlett-Packard. Performance and power consumption results are based on certain tests measured on specific computer systems; any difference in system hardware, software, or configuration will affect actual performance.]

Per the trend shown in this figure, while successive generations of platforms have delivered greater and greater performance, they have also delivered lower active power consumption. According to this data, the energy efficiency at 30% platform load has doubled every 16 months since 2004.

That is an astounding result when you put it in contrast to other industries. For example, the automobile industry increased fuel efficiency by about 7% per year from 1978 to 1984 and then saw essentially no improvement for more than two decades.

The trend toward proportional computing has produced real energy savings. The idea behind proportional computing is that a platform should consume energy in proportion to the work it is doing: when it is doing maximum work it consumes maximum power, and when it is doing no work it should (ideally) consume no power.
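To make this concrete, here is a minimal sketch (in Python, with hypothetical idle and maximum power figures) contrasting a platform whose power grows linearly with load but sits on a fixed idle floor against an ideally proportional one:

```python
# Hypothetical power figures, for illustration only.
def platform_power(load, idle_w=150.0, max_w=300.0):
    """Simple linear model of actual platform power at a given load (0.0 to 1.0)."""
    return idle_w + (max_w - idle_w) * load

def ideal_power(load, max_w=300.0):
    """Ideally proportional platform: zero work means zero power."""
    return max_w * load

for load in (0.0, 0.3, 0.5, 1.0):
    print(f"load {load:>4.0%}: actual {platform_power(load):6.1f} W, "
          f"ideal {ideal_power(load):6.1f} W")
```

At idle, the gap between the two curves is the entire idle power, which is exactly where the component-level challenges below show up.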


Improving platform scaling has taken us down an impressive road, but we face some serious challenges. These challenges are most easily recognized by looking inside the platform (or, to continue the car analogy, under the hood!) at the Power Scalability of its components.

To highlight this, let’s define Component Power Scalability = 1 – (Component Idle Power)/(Component Max Power). A perfectly proportional component would have a Power Scalability of one.
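Expressed as a small helper (the example wattages are made up purely for illustration):

```python
def power_scalability(idle_power_w, max_power_w):
    """Component Power Scalability = 1 - (idle power) / (max power).
    1.0 means perfectly proportional (zero idle power); 0.0 means the
    component draws the same power whether it is busy or idle."""
    return 1.0 - idle_power_w / max_power_w

# A hypothetical component idling at 4 W and peaking at 40 W:
print(power_scalability(4.0, 40.0))  # 0.9
```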

Here is a graph of a model system’s components. Dividing the chart roughly into quadrants, the upper left is best and the lower right is where the most work is needed.


Note that memory power scaling is nearly ideal: it has very low idle power and a Power Scalability of over 90%. In this particular system the CPUs have the highest idle power of any ingredient on the platform, but they also scale above 80%.

The components with the lowest scalability and the biggest impact on proportionality are readily apparent as those scaling much worse than 50% with high idle power. For instance, the power supply in this example has quite good efficiency at peak load (over 80%), but a scalability below 50% and a significant contribution to idle power.

Hard disk drives used in this platform are kept spinning at high speed even when the platform is at idle to reduce transaction latency and speed up performance. Each of the points clustered near the lower left contributes only a small amount to platform idle power, but together they form a challenging barrier to improving proportionality.
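As a rough sketch of how these components might map onto the quadrant view, here is the scalability metric applied to a set of hypothetical idle and maximum wattages, chosen only to mirror the qualitative picture described above (they are not the measured values behind the figure):

```python
def power_scalability(idle_w, max_w):
    """Component Power Scalability = 1 - idle / max."""
    return 1.0 - idle_w / max_w

# Hypothetical (idle W, max W) pairs, illustrative only -- not the figure's data.
components = {
    "Memory": (3.0, 40.0),    # low idle power, scalability over 90%: near the ideal
    "CPUs":   (45.0, 240.0),  # highest idle power on the platform, but scaling above 80%
    "PSU":    (35.0, 65.0),   # efficient at peak, yet scalability below 50%
    "HDDs":   (8.0, 14.0),    # kept spinning at idle; small idle share, poor scaling
    "Fans":   (7.0, 12.0),    # one of the "non-trivial many" near the lower left
}

for name, (idle_w, max_w) in sorted(components.items(),
                                    key=lambda kv: power_scalability(*kv[1])):
    print(f"{name:6s} idle {idle_w:5.1f} W  max {max_w:5.1f} W  "
          f"scalability {power_scalability(idle_w, max_w):4.0%}")
```

Sorting by scalability makes the worst offenders easy to pick out even before plotting them on the quadrant chart.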

Some technology solutions are becoming available to address these problems. For instance, improved fan speed control algorithms can improve the scaling of fan power consumption, and Solid State Drives can reduce the power consumed by storage while offering improved performance.
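To see why fan speed control is such a lever, recall that fan power rises roughly with the cube of fan speed (the fan affinity laws), so even modest speed reductions at low load cut fan power sharply. A quick sketch with a made-up maximum fan wattage:

```python
def fan_power(speed_fraction, max_fan_w=15.0):
    """Approximate fan power via the cube-law relationship between fan
    speed and power (fan affinity laws). max_fan_w is hypothetical."""
    return max_fan_w * speed_fraction ** 3

for speed in (1.0, 0.8, 0.6, 0.4):
    print(f"fan at {speed:>4.0%} speed: ~{fan_power(speed):5.2f} W")
```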

Within Intel, we are continuously focused on improving the scalability of our processors and silicon. We need industry innovation to continue improving the scalability of the rest of the platform. PSU scalability solutions stand out as the highest priority. Fan power scalability is another key opportunity. Beyond that, the “non-trivial many” components on the system will ultimately become limiters.

Does this proposal for looking "under the hood" make sense to you?

What else should we look at under the hood, and how?

What would you call the Component Power Scalability metric?

Input and comments welcome!

Together, we can “Turn the Tide.”