Almost Free Data Center Capacity

Ok, nothing is free, but some things are a pretty good deal. I spoke last time about the capacity boost delivered through virtualization. I threw out some big numbers, so here is a bit more detail. More accurately, this capacity comes from applying virtualization to a new model for data center management (you will have to do more than install a hypervisor). My 5x multiplier in five years felt pretty conservative.

Even if all you ever read is the in-flight magazine, you know virtualization is a big deal. Hype aside, virtualization is the foundation for realizing the "next-generation data center" (NGDC). Utilization on enterprise servers is pathetic. The number I used was 15%, but I have heard many customers talk of 5% or even less. The target I used for a super-efficient data center was 75% utilization, hence the 5x.

Getting to 75% average utilization will take a lot more than simple consolidation of physical servers onto a virtualized server. This is why I jump to the NGDC requirement. Reality says server utilization is all over the place, with odd spikes and many differences in where the bottleneck is. Capacity limitations can be in CPU, memory, disk, or network.
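To make the bottleneck point concrete, here is a minimal sketch (the workload utilization figures are invented for illustration) showing that how far you can consolidate a given workload is set by its busiest resource dimension, not its average:

```python
# Consolidation headroom is limited by the most-utilized resource.
# All utilization figures below are invented for illustration.
TARGET = 0.75  # target average utilization on the consolidated host

workloads = {
    "web-frontend": {"cpu": 0.05, "memory": 0.40, "disk": 0.02, "network": 0.10},
    "batch-report": {"cpu": 0.60, "memory": 0.15, "disk": 0.25, "network": 0.05},
}

for name, util in workloads.items():
    # The dimension closest to saturation caps the consolidation factor.
    bottleneck = max(util, key=util.get)
    factor = TARGET / util[bottleneck]
    print(f"{name}: bottleneck={bottleneck}, consolidation factor={factor:.1f}x")
```

A CPU-idle server that is already 40% memory-bound cannot simply be stacked 5x deep; the 5x average only emerges when the pool mixes workloads whose bottlenecks differ.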

The key to maximizing consolidation is achieving what I call "Dynamic Resource Management" (DRM), or sometimes Dynamic Resource Pooling. DRM is what moves the NGDC beyond simple consolidation to policy-based balancing of data center resources. In the DRM model, a server becomes a virtual collection of compute, storage, and network resources. This model is beginning to emerge in commercial offerings from VMware, Microsoft, Sun, Cisco, Virtual Iron, and others.

The trick here is to couple the ability to move a VM from one set of hardware to another (as with VMotion from VMware) with policy-based moves. In my view this makes data center efficiency "just" another logistics optimization problem, not unlike airline scheduling or package delivery: "a game to maximize utilization, minimize energy use, maximize availability, gracefully handle exceptions, and meet all my SLAs." In other words, a really hard problem. I have tried to capture this journey to NGDC in a compelling graphic, but all my attempts seem to fall short. (Thinly veiled request for better pictures of NGDC.)

For now, achieving the NGDC requires complex software stacks coupled with management heroics. Intel, IMHO, has the best roadmap and view of this future, as shown in the addition of virtualization features across compute, storage, and network. I would like to hear from others where you see barriers and bridges to the NGDC. Who are the rabbits leading the way to this dynamic data center?