Data Center Consolidation: Measuring Latency Effects on Application Service Quality

Intel IT’s data center strategy, described in “Data Center Strategy Leading Intel’s Business Transformation,” could make one wonder: “If consolidating data centers has a positive outcome, why not consolidate all of Intel IT’s data centers into a very small number, like two or three?”

The strategy must balance consolidation cost and resource utilization wins against service quality. A key part of that service quality is application performance. Intel chip designers use interactive applications that are highly sensitive to latency. The farther a server running these applications is from the user, the higher the latency. High latency is very noticeable to users of interactive design applications, negatively affecting their productivity. How do you quantify interactive application performance so you can make a data-driven decision about when to consolidate and when not to?

The Solution

Intel IT’s solution has been to create an automated application that performs graphically intensive operations similar to those performed by chip designers. This graphical benchmark includes operations like dragging and dropping, opening windows, and writing text. It records the time to complete these operations under normal conditions, along with other factors like network bandwidth. The automated test has the advantage of being reproducible, making it highly useful for establishing performance benchmark measurements that can be compared against proposed data center layouts and latencies, as well as for debugging problems.
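The core of such a benchmark is a harness that runs each scripted operation repeatedly and records its completion time. A minimal sketch of that timing harness is below; the operation names and the stand-in callables are hypothetical (a real harness would drive the display with a GUI automation tool rather than `time.sleep`):

```python
import time
import statistics

def run_benchmark(operations, repetitions=3):
    """Time each scripted UI operation several times; return mean seconds.

    `operations` maps an operation name to a zero-argument callable that
    performs it (e.g. drag-and-drop, open a window, type text).
    """
    results = {}
    for name, op in operations.items():
        samples = []
        for _ in range(repetitions):
            start = time.perf_counter()
            op()
            samples.append(time.perf_counter() - start)
        results[name] = statistics.mean(samples)
    return results

# Stand-ins for real GUI actions, for illustration only.
ops = {
    "open_window": lambda: time.sleep(0.01),
    "type_text": lambda: time.sleep(0.005),
}
baseline = run_benchmark(ops)
```

Because the operations are scripted, every run exercises the same sequence, which is what makes the measurements reproducible and comparable across data center layouts.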

The Evaluation

I first learned about this work when I was assigned to select which virtual desktop software package would be best for the design environment. In addition, I was asked to determine whether any of them performed well enough under higher latencies to possibly enable consolidation of some data centers. After modifying the graphical benchmark to work under any virtual desktop system, I ran it under various latencies over Intel’s network.
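Sweeping a benchmark across candidate latencies can be sketched as follows. This is a simplified model, not the actual test setup: it charges a hypothetical number of network round trips per operation via `time.sleep`, whereas the real evaluation injected latency on the network path itself:

```python
import time

def timed(op):
    """Return the wall-clock seconds taken by a zero-argument operation."""
    start = time.perf_counter()
    op()
    return time.perf_counter() - start

def with_simulated_rtt(op, rtt_s, round_trips):
    """Model a remote session by adding `round_trips` network round
    trips (an assumed count) on top of the local operation time."""
    def delayed():
        time.sleep(rtt_s * round_trips)
        op()
    return delayed

local_op = lambda: time.sleep(0.01)  # stand-in for one GUI operation

# Candidate RTTs in milliseconds, roughly spanning same-site to
# cross-region data center placements.
results = {
    rtt_ms: timed(with_simulated_rtt(local_op, rtt_ms / 1000, round_trips=5))
    for rtt_ms in (0, 50)
}
```

Even this toy model shows why placement matters: each added millisecond of round-trip time is paid many times over by a chatty interactive protocol.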

We used the slow-motion benchmarking[1] technique, a method of comparing interactive performance that contrasts network traffic under optimal conditions with traffic under the measured, higher-latency conditions. As expected, performance degraded under higher latencies for all the software we evaluated, so we could not move interactive design applications to other data centers without a significant penalty.
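The key idea in slow-motion benchmarking is to space operations out so each one’s network traffic can be isolated, then compare per-operation transfer rates rather than raw completion times. A sketch of that comparison, with the byte and time figures invented for illustration:

```python
def slowdown(ideal_bytes, ideal_seconds, test_bytes, test_seconds):
    """Slow-motion slowdown metric: the ratio of per-operation data
    transfer rates (bytes per second) under ideal conditions versus
    under the higher-latency test conditions. A value of 2.0 means the
    operation effectively ran half as fast."""
    ideal_rate = ideal_bytes / ideal_seconds
    test_rate = test_bytes / test_seconds
    return ideal_rate / test_rate

# Hypothetical numbers: the same operation moved about the same data,
# but took twice as long under the added latency.
ratio = slowdown(ideal_bytes=500_000, ideal_seconds=1.0,
                 test_bytes=500_000, test_seconds=2.0)
```

Using transfer rate instead of wall-clock time alone guards against thin-client systems that appear “fast” only because they silently drop screen updates under load.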

Design applications are not the only ones that require a local data center. Intel’s manufacturing generates so much data that having data centers near our factories is the only practical choice for running Advanced Data Analytics. Service quality, including latency, is a factor that we have incorporated into our data center strategy.

More Information

Data Center Strategy Leading Intel's Business Transformation

Improving Manufacturing with Advanced Data Analytics

1. Nieh, Jason, S. Jae Yang, and Naomi Novik. "Measuring thin-client performance using slow-motion benchmarking." ACM Transactions on Computer Systems (TOCS) 21.1 (2003): 87-115.