Data centers run 24x7, consuming power at high rates even when workload demand is relatively light. Servers can be shut down completely when they are not needed, but that can hurt performance if demand suddenly ramps up, and it can still leave power and cooling resources out of balance with performance needs. Intel Node Manager addresses this with policy-based power management capabilities that are already widely available in data centers today on servers based on the Intel Xeon processor.
But what are the trade-offs? Which workloads run most efficiently with power-capping policies in place? And at what level can power be capped without negatively impacting performance? Intel worked with Principled Technologies, an independent assessment firm, to conduct an in-depth analysis and answer these questions.
It all comes down to efficiency: finding the sweet spot in performance per watt. For instance, the study shows that capping power at 65% produced the optimal performance per watt for database storage-intensive workloads, such as those found in OLTP-style e-commerce sites. This is almost 20% more efficient than running without Intel Node Manager. CPU- and memory-intensive workloads, such as those found in virtualized desktop environments and in Java application servers, respectively, also showed efficiency improvements. Mixed-workload environments, such as Exchange, achieved up to a massive 42% efficiency boost. At the scale of thousands of servers in a data center, this adds up to significantly lower power costs without unduly sacrificing performance.
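The math behind the sweet spot is straightforward: if a power cap trims throughput only modestly while cutting consumption substantially, efficiency rises. The sketch below illustrates the performance-per-watt calculation. The throughput and wattage figures are hypothetical, chosen only to show the shape of the trade-off; they are not the numbers measured in the Principled Technologies study.

```python
# Illustrative sketch of the performance-per-watt metric used to compare
# power-capping policies. All workload numbers here are hypothetical.

def perf_per_watt(throughput: float, power_watts: float) -> float:
    """Efficiency metric: useful work delivered per watt consumed."""
    return throughput / power_watts

# Hypothetical OLTP server, uncapped: 10,000 transactions/sec at 400 W.
uncapped = perf_per_watt(10_000, 400)   # 25.0 tx/sec per watt

# Same server capped at 65% of peak power (260 W): throughput drops only
# modestly (hypothetically to 7,800 tx/sec) because the workload is
# storage-bound rather than CPU-bound.
capped = perf_per_watt(7_800, 260)      # 30.0 tx/sec per watt

improvement = (capped - uncapped) / uncapped
print(f"Uncapped: {uncapped:.1f} tx/s/W, capped: {capped:.1f} tx/s/W")
print(f"Efficiency gain: {improvement:.0%}")
```

With these illustrative numbers, the capped server does 22% less work but draws 35% less power, for a 20% gain in performance per watt. That asymmetry, modest performance loss for a large power saving, is exactly what the study set out to quantify per workload type.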
Take a look at the detailed report, including the underlying methodologies, and consider taking advantage of Intel Node Manager to optimize power efficiency in your data center.