Fall IDF: Is Italian Pasta the Actual Inspiration for Server Virtualization?

I love food! Since I was a kid I’ve loved noodles, especially Italian pasta. I used to think that spaghetti was the general name for all Italian noodles. Learning how to twist spaghetti on a fork gave me a great sense of achievement and joy.

[Image: Bowl of spaghetti]

Many years later my wife and I travelled to Rome. Since we both really love food, we naturally spent a lot of time seeking out restaurants and trying new dishes. One of the wonderful dishes we had was pappardelle with duck ragù. It was my first encounter with pappardelle, a very wide form of pasta. You get only a few pappardelle on your plate, but it’s still the same amount of pasta. I found it impractical to twist the pappardelle around my fork, so I cut them into smaller pieces to eat them.


I thought about it recently when I looked at the back of a virtualized server. Looks similar, no?

[Images: Server with 1GbE cabling; bowl of spaghetti]

A typical virtualized server has eight to ten 1 Gigabit Ethernet (1GbE) ports and two Fibre Channel ports. This makes for a lot of cabling and many add-in cards. It translates to a lot of cost, power, and complexity (and thus reliability risk) for an IT shop. As a result, there’s a lot of buzz around high-speed networks, specifically 10GbE. That technology presents the opportunity to consolidate all these 1GbE ports into a much smaller number of higher-bandwidth 10GbE ports. It makes for a much tidier server.

[Image: Server with 10GbE]

Kind of like substituting pappardelle for spaghetti.

In that case, iSCSI or FCoE (Fibre Channel over Ethernet) could be used for the SAN connection over those same high-speed ports. Standards like Data Center Bridging (DCB) could add lossless behavior to the 10GbE link, making it friendlier to FCoE.

Few new solutions, though, come without new challenges. The common way for VMs to share I/O devices in today’s environments is through mediation by the hypervisor, using emulation or para-virtualization. That reduces the effective I/O bandwidth. It also becomes a fairly significant overhead to the server in its own right, reducing the server capacity available for application processing, and it adds latency. With the growing trend in IT to treat virtualization as the default deployment mode for any application, these issues become quite limiting.

We at Intel believe that the best way to overcome these issues is “direct assignment”. Using Intel® VT-d technology (launched with the Xeon 5500 platform), a VM can be assigned a dedicated I/O device. This nearly eliminates the overhead of the hypervisor mediation I mentioned above. A side benefit is that it increases VM-to-VM isolation and security. But assigning an individual I/O device to one VM is not very scalable…
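To make direct assignment concrete, here is a rough sketch of what it can look like on a Linux/KVM host. The PCI address, vendor/device IDs, and guest parameters below are all hypothetical, and the exact mechanism varies by hypervisor; this is an illustration, not a recipe:

```shell
# Hypothetical sketch of direct device assignment on a Linux/KVM host.
# The PCI address (0000:06:00.0), the vendor/device IDs, and the guest
# image name are illustrative. VT-d must be enabled in the BIOS and in
# the kernel (e.g. intel_iommu=on on the kernel command line).

# Detach the NIC from its host driver so a guest can own it exclusively:
echo 0000:06:00.0 > /sys/bus/pci/devices/0000:06:00.0/driver/unbind
echo 8086 10c9 > /sys/bus/pci/drivers/pci-stub/new_id

# Start the guest with the physical NIC assigned directly to it:
qemu-kvm -m 2048 -drive file=guest.img -device pci-assign,host=06:00.0
```

Once assigned, the guest’s own driver talks to the NIC directly; the hypervisor is no longer in the data path, which is where the overhead savings come from.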

This is where the PCI-SIG’s SR-IOV (Single Root I/O Virtualization) standard comes into play. This standard allows a single I/O device to present itself as multiple virtual devices. With SR-IOV, each virtual device can be assigned to a VM, adding scalability to the direct assignment model and effectively allowing the physical I/O device to be shared, while retaining the security and reliability benefits.

Another challenge with the direct assignment model is live migration. Hypervisors have typically assumed the software-mediated I/O model, so their live migration solutions need to be modified to accommodate direct assignment.

These technologies span many different components of the server platform. Intel® VT-d is necessary, so a Xeon 5500 (or later) platform must be used. SR-IOV-capable I/O devices (NICs or storage controllers) are required. The BIOS must be modified, as well as the hypervisor software. This is pretty heavy lifting.

So you can only imagine how excited I am to be able to showcase 4 different SR-IOV demos at IDF next week! The demos involve 2 server vendors; 3 VMM vendors, each implementing a different hypervisor architecture; and 3 different IHVs representing 2 different I/O technologies. We show the performance improvements, as well as VM live migration. It works!

Come and see it (Booths 517, 707, 709, and 711 in the IDF showcase)!