I/O Bottlenecks due to Virtualization? VMDq to the Rescue!

Virtualization is without a doubt a very hot topic these days. Companies continue to look to server virtualization to increase the utilization rates of their systems and lower overall deployment and management costs. The basic model of a virtualized server is depicted below:

Essentially, you have a VMM (Virtual Machine Monitor) software layer that sits between the hardware and the virtual machines and allows each virtual machine to use what it thinks is its own network port. This is a pretty straightforward model, and it directly addresses the main motivation for virtualization: a server often is not using its processing power to the fullest and is therefore wasting CPU cycles.

There is an interesting side effect of consolidating several Virtual Machines onto a single physical box. In addition to consolidating CPU workloads, you also consolidate I/O bandwidth and switching onto the same platform, because the VMM now has to switch traffic between the VMs in software. The overhead of that software switching limits bandwidth, consumes CPU cycles, and erodes the benefits of server virtualization. In some cases you may have traded one problem for another by creating an I/O bottleneck.

This makes a lot of sense if you consider that, in essence, you are merging 5-10 machines, each with 1 or 2 ports of Gigabit Ethernet (all connected via a switch), into a single machine. That new server probably needs at least 6 ports of Gigabit Ethernet, and may even require 10 Gigabit connections, just to support the consolidated workload.
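As a rough back-of-the-envelope sketch of why that is, here is a tiny calculation in Python. The machine count, port count, and link speed below are illustrative assumptions picked from the ranges above, not measurements:

```python
# Rough, illustrative estimate of aggregate I/O demand after consolidation.
# All numbers are assumptions chosen for the example.

machines_consolidated = 8   # somewhere in the 5-10 machine range
ports_per_machine = 2       # each old server had 1 or 2 GbE ports
port_speed_gbps = 1         # Gigabit Ethernet

aggregate_gbps = machines_consolidated * ports_per_machine * port_speed_gbps
print(f"Aggregate I/O the new server may need to absorb: ~{aggregate_gbps} Gb/s")
# ~16 Gb/s in this example -- more than a handful of GbE ports can carry,
# which is why a 10 Gigabit connection (or several) starts to look attractive.
```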

Enter Virtual Machine Device Queues (VMDq):

To help relieve the I/O congestion caused by this extra software switching in a virtualized environment, Intel implemented a technology called VMDq in our latest Ethernet NICs and silicon. VMDq offloads some of the packet switching that was done in the VMM to networking hardware built specifically for the job. This drastically reduces the switching overhead in the VMM, which greatly improves throughput and overall system performance.

Below is a diagram that summarizes the new virtualized server stack with VMDq enabled:

On the receive path, VMDq provides a hardware "sorter," or classifier, that does the pre-work for the VMM: it determines which VM each incoming packet is destined for and places it in that VM's queue. The NIC or LAN silicon is effectively providing a hardware assist for the VMM layer.
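To make the idea concrete, here is a minimal software sketch of that sorting step. This is not the actual NIC logic or any driver API; it just models classifying incoming packets into per-VM receive queues by destination MAC address (all addresses and packets below are made up), which is the kind of pre-work VMDq performs in hardware:

```python
from collections import deque

# Hypothetical per-VM receive queues, keyed by the MAC address assigned
# to each virtual machine's virtual NIC (addresses are invented).
rx_queues = {
    "00:1b:21:aa:00:01": deque(),   # queue for VM 1
    "00:1b:21:aa:00:02": deque(),   # queue for VM 2
    "00:1b:21:aa:00:03": deque(),   # queue for VM 3
}
default_queue = deque()             # traffic for unknown MACs / the VMM itself

def classify_rx(packet):
    """Place an incoming packet in the queue of the VM it is destined for.

    This mimics, in software, the sorting VMDq does in silicon so the VMM
    no longer has to inspect every packet to route it to the right VM.
    """
    rx_queues.get(packet["dst_mac"], default_queue).append(packet)

# Example: three packets arrive and land in the right per-VM queues.
for pkt in [
    {"dst_mac": "00:1b:21:aa:00:02", "payload": b"hello vm2"},
    {"dst_mac": "00:1b:21:aa:00:01", "payload": b"hello vm1"},
    {"dst_mac": "00:1b:21:aa:00:02", "payload": b"more for vm2"},
]:
    classify_rx(pkt)

print({mac: len(q) for mac, q in rx_queues.items()})
```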

On the transmit side, packets from the per-VM queues are serviced in round-robin fashion to avoid "head of line" blocking and to alleviate potential quality of service (QoS) issues.
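Again as a rough software model (not the hardware scheduler itself, and with invented queue contents), round-robin servicing of per-VM transmit queues looks something like this. No single busy VM can monopolize the wire, which is how head-of-line blocking is avoided:

```python
from collections import deque
from itertools import cycle

# Hypothetical per-VM transmit queues with uneven backlogs.
tx_queues = {
    "vm1": deque([b"a1", b"a2", b"a3", b"a4"]),
    "vm2": deque([b"b1"]),
    "vm3": deque([b"c1", b"c2"]),
}

def service_round_robin(queues):
    """Transmit one packet per VM per pass until all queues are drained."""
    order = cycle(list(queues))
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        vm = next(order)
        if queues[vm]:
            packet = queues[vm].popleft()
            remaining -= 1
            yield vm, packet   # in hardware, this is the frame going on the wire

print(list(service_round_robin(tx_queues)))
# vm2's single packet goes out during the first pass instead of waiting
# behind vm1's entire backlog -- no head-of-line blocking.
```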

The immediate question I expect is "So, don't the VMM vendors have to support this?" And the answer is yes. Intel is supporting this feature today on shipping platforms, but you do need to work closely with the VMM vendor to make sure the whole stack works as designed.

Just this week Intel announced that our VMDq capability will be supported in VMware's upcoming ESX release. This is certainly a big step towards wide support of network virtualization performance enhancing features.

Ethernet technology has grown and become more important over the last 25 years, and the trend appears to be continuing on course.

Ben Hacker


For more details on VMDq, there is a VMDq Whitepaper and an Intel® VT for Connectivity Datasheet on our website.