Agility in the Datacenter

Since this is the first time I'm blogging on this web site, let me briefly introduce myself. I have been working at Intel since 1984, starting right after university to develop software mainly for the industrial automation industry (way back with good old iRMX for Multibus I/II). After a couple of years of running IT for several Intel sales offices in EMEA, I'm now running a team of technical pre-sales people who work with end customers in the enterprise space.

When working with end customers in the IT space, we often hear about the requirement to reduce costs while at the same time becoming more agile. This is particularly important to achieve in the datacenter, in order to quickly adapt to changing business requirements and thus swiftly enable business opportunities through IT. On the way to real business agility through IT, Gartner has defined the Infrastructure Maturity Model**. It consists of six stages, with the ultimate goal of delivering business agility in almost real time. Before a company can get there, however, one important stage is getting to a virtualized infrastructure.

In the storage area we have already seen quite some progress, and it has been adopted in a lot of medium and large companies. On the server side, server virtualization is one of the hot topics that almost every company is looking into or even deploying right now in order to achieve this datacenter agility, at least in the infrastructure area.

In the past, people typically used virtualization on large SMP machines to utilize them better; more recently, virtualization has been used to consolidate (mostly older) servers and applications onto 4-way Intel architecture based servers, to avoid a zoo of different machines and OS revisions that IT has to support. From a cost efficiency perspective, however, it is also worth considering 2-way servers for virtualization. When we discuss this with end customers, we sometimes hear the concern that the memory-per-core ratio is not good enough. While we have a great deal of processor performance, particularly through the quad-core technology that has been available in Intel's Xeon™ processors for more than a year now, the memory capacity of DP machines could not always live up to the desired ratio.

Recently, however, some new DP servers have come to market (e.g. the Sun Microsystems X4150) that implement the full specification of the memory interface, providing up to 64GB of memory for a dual-processor server hosting 8 cores altogether. I can hear you saying already that this would need the most expensive memory modules (the 4GB ones), but I can tell you that I was pleasantly surprised by an offer I recently got from one of our suppliers: the full 64GB, for one of our lab servers, for less than 5,900 Euros (about 8,400 US$; as you can see, I'm coming from Europe). 32GB of memory would have been just below 2,100 Euros (about 2,940 US$, using 2GB modules). Obviously prices may vary, but I just wanted to give a ballpark figure for what a DP server with 4-8GB of memory per core costs. With these types of systems you should easily be able to expand your 4-way virtualization pool at much reduced cost.
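To make those ballpark figures a bit more concrete, here is a minimal back-of-the-envelope sketch in C. It is purely illustrative; the prices are the supplier quotes mentioned above and will of course vary.

```c
/* Back-of-the-envelope sketch: memory per core and price per GB
 * for the two DP configurations mentioned above (prices will vary). */
#include <stdio.h>

int main(void)
{
    const int cores = 8;                               /* 2 sockets x quad-core */

    const double cfg64_gb = 64.0, cfg64_eur = 5900.0;  /* 4GB modules */
    const double cfg32_gb = 32.0, cfg32_eur = 2100.0;  /* 2GB modules */

    printf("64GB config: %.1f GB/core, %.0f EUR/GB\n",
           cfg64_gb / cores, cfg64_eur / cfg64_gb);
    printf("32GB config: %.1f GB/core, %.0f EUR/GB\n",
           cfg32_gb / cores, cfg32_eur / cfg32_gb);
    return 0;
}
```

So the 64GB configuration comes out at 8GB per core and roughly 92 Euros per GB, versus 4GB per core and roughly 66 Euros per GB for the 32GB configuration.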

But don't get me wrong here: I'm not suggesting that the complete server virtualization pool in a datacenter should consist only of 2-way systems. I just wanted to point out that, with the decreasing cost of higher-density memory modules and the increasing number of memory slots in dual-processor servers, you have a nice option to select the server type that best fits your needs. If you have, for instance, applications that need a lot of aggregated CPU performance or a lot of I/O performance, you would surely be better off using a 4-way server. But I'm sure there will be a blog soon covering the considerations of using 2-way versus 4-way servers in the virtualization space.

If you follow my train of thought, one thing should be obvious: analyzing the computing resources your current applications use, and doing capacity planning to meet the needs of your future business, is the key to a successful virtualization strategy. And here we come back to the Gartner model. As an IT organization, you can only deliver business value if you understand the business needs of your company.
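As a trivial illustration of what such an analysis boils down to (all the workload names and numbers below are invented for the example): add up what your current applications actually consume, apply some headroom for peaks and failover, and see how many hosts of a given size the pool needs.

```c
/* Toy capacity-planning sketch with invented numbers: how many DP hosts
 * (8 cores, 64GB RAM) does a given set of application workloads need,
 * keeping 25% headroom on each host? */
#include <math.h>
#include <stdio.h>

struct workload { const char *name; double cores_used; double mem_gb; };

int main(void)
{
    const struct workload apps[] = {
        { "erp-db",     6.0, 48.0 },
        { "web-farm",   9.0, 36.0 },
        { "mail",       3.0, 24.0 },
        { "file-print", 1.5, 12.0 },
    };
    const int napps = sizeof apps / sizeof apps[0];
    const double host_cores = 8.0, host_mem_gb = 64.0;
    const double headroom = 0.25;                 /* reserve 25% of each host */

    double cores = 0.0, mem = 0.0;
    for (int i = 0; i < napps; i++) {
        cores += apps[i].cores_used;
        mem   += apps[i].mem_gb;
    }

    double by_cpu = cores / (host_cores  * (1.0 - headroom));
    double by_mem = mem   / (host_mem_gb * (1.0 - headroom));
    int hosts = (int)ceil(by_cpu > by_mem ? by_cpu : by_mem);

    printf("need %d DP hosts (CPU-driven: %.1f, memory-driven: %.1f)\n",
           hosts, by_cpu, by_mem);
    return 0;
}
```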

When speaking about agility, you obviously need the ability to easily migrate a virtual machine from a 2-way system to a 4-way system. With the recent introduction of 4-way servers based on the Intel Xeon™ 7300 processor, this is possible too. The Xeon 5100/5300 processors share the same micro-architecture (Intel Core™ microarchitecture) as the 4-way servers (Xeon 7300 processor), which means you can live migrate VMs from DP to MP systems very easily. This live migration is offered in the various management suites from virtualization software vendors: in VMware's ESX it is called vMotion, at Virtual Iron, for instance, it is called LiveMigrate.

So those of you who carefully read Intel's announcements might rightfully say that all of the above is true, but that Intel has now introduced the new Xeon 5200/5400 series, which still uses the same Intel Core™ microarchitecture but with an extended instruction set, particularly for the SSE instructions. And you are right: if an application uses these new instructions, you cannot live migrate a VM from, say, a Xeon 5400 based system back to a Xeon 5300 based system. But here the Intel architecture offers some hooks (technologies) to still make this possible. For VMware, for instance, we have implemented a new functionality called VT FlexMigration. Since ESX has such a long history of virtualizing Intel architecture, it still uses binary translation for 32-bit OSs instead of Intel VT-x (the hardware-assisted virtualization). With VT-x, Intel offers the ability to mask some CPU functionality so that the OS/application, when running in a virtualized environment, only sees a certain instruction set and thus can easily be live migrated from a Xeon 5400 to a Xeon 5300 processor based system. VMMs such as Virtual Iron or Xen can use this feature because they require VT-x. In order to enable the same functionality in ESX, Intel worked closely with VMware and implemented a hardware hook that allows VMware, even in binary translation (meaning outside VT-x), to mask certain capabilities (here SSE4) from being seen by the OS, hence making sure the OS uses only those instructions also available in the Xeon 5100/5300/7300 processors.
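To illustrate the basic idea behind this kind of feature masking (this is a minimal sketch, not VMware's or Intel's actual implementation): the SSE4.1 and SSE4.2 capabilities are reported to software as bits 19 and 20 of the ECX register returned by CPUID leaf 1, so a VMM that wants to keep a guest migratable to the older Xeons simply has to hide those bits from the guest. The snippet below shows what such a masked feature word would look like; the helper name is made up for illustration.

```c
/* Minimal illustration (not the actual VMM code): compute the CPUID
 * leaf-1 ECX feature word with the SSE4.1/SSE4.2 bits masked out, i.e.
 * what a guest held to the Xeon 5100/5300/7300 baseline would see. */
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>                      /* GCC/Clang wrapper for the CPUID instruction */

#define CPUID_1_ECX_SSE4_1 (1u << 19)   /* SSE4.1 feature bit */
#define CPUID_1_ECX_SSE4_2 (1u << 20)   /* SSE4.2 feature bit */

static uint32_t masked_leaf1_ecx(void)  /* hypothetical helper name */
{
    unsigned int eax, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);   /* raw host feature bits */
    return ecx & ~(CPUID_1_ECX_SSE4_1 | CPUID_1_ECX_SSE4_2);
}

int main(void)
{
    printf("masked CPUID.1:ECX = 0x%08x\n", masked_leaf1_ecx());
    return 0;
}
```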

With this in mind, you can set up a very powerful combination of 2-way and 4-way Intel architecture servers that can be shared in a virtualized server pool and allow live migration between them, as the basis for a flexible and agile infrastructure. What you need on top of this is the management software orchestrating the use of this server pool. These are products like VMware's Infrastructure 3 or their management and automation tools such as VirtualCenter; at Virtual Iron, for instance, this would be their Virtualization Manager. These tools allow you to set rules and policies that automatically react to changes in the virtualization pool, such as a change in CPU load or memory requirements, and trigger an automated move of VMs between the servers so that SLAs are still fulfilled.
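Just to sketch the flavor of such a rule (this is not how any particular vendor's product implements it; the host names and the 80% watermark are made-up values): whenever a host in the pool exceeds a CPU watermark, pick the least loaded host with enough free memory as the target for a live migration.

```c
/* Illustrative sketch of a simple rebalancing rule for a mixed
 * DP/MP virtualization pool; all values are invented for the example. */
#include <stdio.h>

#define NUM_HOSTS 3
#define CPU_HIGH_WATERMARK 80.0         /* percent; hypothetical threshold */

struct host {
    const char *name;
    double cpu_util;                    /* current CPU utilization in percent */
    double mem_free_gb;                 /* free memory available for incoming VMs */
};

/* Pick the least loaded host (other than the source) with enough free memory. */
static int pick_target(const struct host *hosts, int n, int src, double vm_mem_gb)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (i == src || hosts[i].mem_free_gb < vm_mem_gb)
            continue;
        if (best < 0 || hosts[i].cpu_util < hosts[best].cpu_util)
            best = i;
    }
    return best;
}

int main(void)
{
    struct host pool[NUM_HOSTS] = {
        { "dp-node-01", 91.0,  6.0 },   /* overloaded 2-way box */
        { "dp-node-02", 35.0, 20.0 },
        { "mp-node-01", 55.0, 48.0 },   /* 4-way box with headroom */
    };
    double vm_mem_gb = 8.0;             /* memory footprint of the VM to move */

    for (int i = 0; i < NUM_HOSTS; i++) {
        if (pool[i].cpu_util <= CPU_HIGH_WATERMARK)
            continue;                   /* rule only fires above the watermark */
        int dst = pick_target(pool, NUM_HOSTS, i, vm_mem_gb);
        if (dst >= 0)
            printf("migrate one VM (%.0f GB) from %s to %s\n",
                   vm_mem_gb, pool[i].name, pool[dst].name);
    }
    return 0;
}
```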

So I hope I was able to share my view of an agile infrastructure in the datacenter. I realize that this is quite a hardware-centric view of it, but after all I still work for Intel, and server system oriented topics are the majority of my job.

I'm looking forward to hearing your opinions or questions about it.

Best regards,


*Other brands may be claimed as the property of others

**Source: Gartner, Inc. "Infrastructure Maturity Model," by Tom Bittman. Gartner Data Center Summit, 2006.