IDF Thoughts: SR-IOV and Random Tidbits

I’m a bit late in relaying my thoughts from Intel’s Developer Forum (IDF), but there was definitely some excitement around virtualization and high performance networking that I wanted to get the word out about!

In the past I’ve shared some details about SR-IOV and the advantages you can gain by presenting virtual LAN hardware directly to each Virtual Machine (VM), effectively bypassing the Hypervisor when presenting virtual devices to each VM.  The advantage is clear:  the less the hypervisor has to touch the networking stack, the less processing overhead the system needs to move the data.
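For the curious, here is a minimal sketch of what carving those virtual devices (Virtual Functions, or VFs) out of a physical NIC looks like. It uses the sysfs interface found in today’s Linux kernels rather than anything specific shown at IDF, and the interface name and VF count are purely illustrative:

```python
# Minimal sketch: carving SR-IOV Virtual Functions (VFs) out of a physical NIC
# using the sysfs interface on a current Linux kernel (requires root and an
# SR-IOV capable adapter/driver). "eth0" and the VF count are illustrative.
from pathlib import Path

def enable_vfs(iface: str, num_vfs: int) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    # The NIC advertises how many VFs it can expose.
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # The count must be reset to 0 before a new non-zero value is accepted.
    (dev / "sriov_numvfs").write_text("0")
    # Writing the count tells the PF driver to spawn that many VFs, each of
    # which appears as its own PCI function that can be handed directly to a
    # VM, bypassing the hypervisor's software switch.
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("eth0", 8)
```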

That’s all good because if you have a dual 10 Gigabit adapter, you can segregate those two physical pipes into perhaps 16 virtual pipes that get exposed to 16 VMs.  By segregating these LAN pipes at the hardware level with SR-IOV instead of using Hypervisor switching, the gains in both CPU utilization and maximum total throughput can be very large.  There were several demos at IDF with various configurations, but reductions in CPU utilization of 40% were possible, coupled with dramatic improvements in throughput!
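To put some rough numbers on that “two pipes into sixteen” idea, here’s a back-of-the-envelope sketch. The even bandwidth split is purely illustrative (real adapters schedule bandwidth dynamically), and the Linux sysfs enumeration at the end assumes a hypothetical interface name:

```python
# Back-of-the-envelope look at the "two physical pipes into 16 virtual pipes"
# split described above. The even split is illustrative only; real adapters
# schedule bandwidth dynamically among the VFs.
PORTS = 2          # dual-port 10 Gigabit adapter
LINK_GBPS = 10     # line rate per physical port
VFS_PER_PORT = 8   # 16 VFs total, one per VM

total_gbps = PORTS * LINK_GBPS
total_vfs = PORTS * VFS_PER_PORT
print(f"{total_vfs} VFs sharing {total_gbps} Gb/s "
      f"(~{total_gbps / total_vfs:.2f} Gb/s each under an even split)")

# On a Linux host you can also see which VFs the hardware actually exposed by
# listing the virtfn* links under the physical function's PCI device directory.
from pathlib import Path
pf = Path("/sys/class/net/eth0/device")            # hypothetical interface name
print(sorted(link.name for link in pf.glob("virtfn*")))
```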

But there is unfortunately one complication that I didn’t mention in my last post on SR-IOV:  when VMs move between physical boxes (a usage that is highly desired and commonplace these days), this direct-to-hardware capability runs into problems.  When the hypervisor owns the network hardware abstraction, performance is worse but functionality is better, because you can seamlessly migrate from one box to another and the virtualization software handles the details.  With SR-IOV, a new layer needs to be added so that the direct connection between the VM and the LAN hardware can be re-established on the new box.
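To give a feel for what that “new layer” has to do, here is a rough sketch of the general pattern: unplug the VF before migration, let traffic fail over to a paravirtual NIC inside the guest, migrate, then hand the guest a VF again on the destination. To be clear, this uses libvirt/KVM tooling purely as an illustration, not the Citrix or VMware mechanisms demonstrated at IDF, and the domain name, host URI, and device XML file are hypothetical:

```python
# Rough sketch of the general pattern for migrating a VM that has an SR-IOV VF
# attached: unplug the VF, fail traffic over to a paravirtual NIC inside the
# guest, migrate, then hand the guest a VF again on the destination. Uses
# libvirt/KVM tooling purely as an illustration (not the Citrix or VMware
# mechanisms shown at IDF). Domain name, host URI, and XML file are hypothetical.
import subprocess

def virsh(*args: str) -> None:
    """Thin wrapper around the libvirt CLI; raises if a command fails."""
    subprocess.run(["virsh", *args], check=True)

def migrate_with_vf(domain: str, dest_uri: str,
                    vf_xml: str = "vf-hostdev.xml") -> None:
    # 1. Hot-unplug the passthrough VF; the guest's bond/failover pairing
    #    shifts traffic onto its paravirtual NIC.
    virsh("detach-device", domain, vf_xml, "--live")
    # 2. Live-migrate the now fully software-backed VM.
    virsh("migrate", "--live", domain, dest_uri)
    # 3. If the destination NIC exposes VFs, attach one on the new host;
    #    otherwise the VM simply keeps running on the paravirtual path.
    virsh("-c", dest_uri, "attach-device", domain, vf_xml, "--live")

if __name__ == "__main__":
    migrate_with_vf("demo-vm", "qemu+ssh://dest-host/system")
```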

The really exciting part of IDF demos that I saw was the demonstration not just of the SR-IOV functionality on multiple hardware and virtualization configurations, but that these demonstrations also showed updated software from two virtualization vendors allowing mobility of the VMs while supporting SR-IOV!

There was a demo on Dell systems showing a fully functional SR-IOV implementation with Citrix’s virtualization suite.  There were also two separate demonstrations on Dell systems in which VMWare showed their new Network Plug-In Architecture (NPIA) solution, which allows SR-IOV-connected VMs to migrate seamlessly between servers.

For those hungry for more detail, I’ve included the three SR-IOV demonstration videos here:

The first is the Citrix demonstration on Dell and Intel hardware of SR-IOV with VM mobility:

The next two videos are demos on Dell and Intel hardware with VMWare and their NPIA software implementation.

Each virtualization demo shows the massive performance benefits, under various workloads, of moving from Hypervisor-based LAN segregation to an SR-IOV implementation.  But most importantly, each demonstration proves out the capability to migrate VMs between physical hardware.  The only system hardware requirement is that the server itself supports VT-d.  If the networking hardware in the destination box supports SR-IOV you get the better performance, and if not, the solution falls back on legacy Hypervisor virtualization.  Backwards compatibility is maintained!
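If you want a concrete feel for those two checks (is VT-d active, and does the destination NIC expose SR-IOV VFs?), here is a hedged sketch of how a Linux host’s readiness might be probed. The sysfs paths follow current Linux conventions and the interface name is hypothetical; the virtualization suites obviously handle this internally:

```python
# Hedged sketch of the two capability checks implied above for a Linux host:
# VT-d (an IOMMU) must be active for direct device assignment at all, and the
# NIC must expose VFs for the SR-IOV fast path; otherwise the VM falls back to
# the hypervisor's software switching. Paths follow current Linux sysfs
# conventions; "eth0" is a hypothetical interface name.
from pathlib import Path

def iommu_active() -> bool:
    # With VT-d enabled, the kernel populates /sys/class/iommu (e.g. dmar0).
    iommus = Path("/sys/class/iommu")
    return iommus.is_dir() and any(iommus.iterdir())

def nic_supports_sriov(iface: str) -> bool:
    total = Path(f"/sys/class/net/{iface}/device/sriov_totalvfs")
    return total.exists() and int(total.read_text()) > 0

if __name__ == "__main__":
    if not iommu_active():
        print("No VT-d/IOMMU: direct assignment unavailable; software switching only")
    elif nic_supports_sriov("eth0"):
        print("SR-IOV fast path available")
    else:
        print("VT-d present but NIC exposes no VFs: fall back to hypervisor switching")
```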

I didn’t get firm details on when this full support for SR-IOV and migration will be available in Citrix and VMWare’s releases, but the demos looked pretty clean, and hopefully these suites will be available soon with this new functionality.  The LAN and Server hardware ecosystems are ready today, and it looks like the software vendors are just around the corner.  Virtualization momentum continues!

While virtualization was the big takeaway for me from IDF, there were also several other interesting demos for us networking hounds.  I’ve linked a couple videos of them below for anyone still thirsting for more of the latest networking technology and performance details!

The first video is a demonstration of Intel’s 82599 10 Gigabit Ethernet-based adapter card with Fibre Channel over Ethernet (FCoE) support.  Storage and Ethernet together at last!

The second video is a demonstration of Intel’s NetEffect 10 Gigabit Ethernet card publishing 1 million messages per second in a simulated NYSE floor trading scenario.  Oh yeah, only 35 µs of latency.  That is fast.

So although I am writing two weeks after IDF, I hope some of you got a little taste of the networking excitement that took place.   Industry-wide, hardware and software vendors alike are delivering ultra-high-performance, low-latency applications for the financial services industry, as well as mainstream performance increases for virtualization.  The performance and technology beat moves forward.  Exciting times!


Ben Hacker