10 Gigabit Ethernet Update: Another Serving of Alphabet Soup

Back in March 2008, my colleague Ben Hacker wrote up a blog post that compared and contrasted the 10 Gigabit Ethernet (10GbE) interface standards. It was a geeky dive into the world of fiber and copper cabling, media access controllers, physical layers, and other esoteric minutiae. It was also a tremendously popular edition of Ben’s blog and continues to get hits today.

So here we are three years later. How have things shaken out since that post? Which interfaces are most widely deployed and why? Ben has left the Ethernet arena for the world of Thunderbolt™ Technology, so I’ve penned this follow-up to bring you up to date. I’ll warn you in advance, though – this is a long read.

Still here? Let’s go.

In “10 Gigabit Ethernet – Alphabet Soup Never Tasted So Good!” Ben examined six 10GbE interface standards: 10GBASE-KX4, 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, 10GBASE-CX4, and 10GBASE-T. I won’t go into the nuts and bolts of each of these standards; you can read Ben’s post if you’re looking for that info. I will, however, take a look at how widely each of these standards is deployed and how they’re being used.


The Backplane Standards: 10GBASE-KX4 and 10GBASE-KR

These standards support low-power 10GbE connectivity over very short distances, making them ideal for blade servers, where the Ethernet controller connects across the chassis backplane rather than through a cable. Early implementations of 10GbE in blade servers used 10GBASE-KX4, but most new designs use 10GBASE-KR, which runs over a single serial lane instead of KX4’s four and thus simplifies board design.

Today, most blade servers ship with 10GbE connections, typically on “mezzanine” adapter cards. Dell’Oro Group estimates that mezzanine adapters accounted for nearly a quarter of the 10GbE adapters shipped in 2010, and projects they’ll maintain a significant share of 10GbE adapter shipments in the future.


10GBASE-CX4

10GBASE-CX4 was deployed mostly by early adopters of 10GbE in HPC environments, but shipments today are very low. The required cables are bulky and expensive, and the rise of SFP+ Direct Attach, with its compact interface, less expensive cables, and compatibility with SFP+ switches (we’ll get to this later), has left 10GBASE-CX4 an evolutionary dead end. Dell’Oro Group estimates that 10GBASE-CX4 port shipments made up less than two percent of total 10GbE shipments in 2010, which is consistent with what Intel saw for our CX4 products.

The SFP+ Family: 10GBASE-SR, 10GBASE-LR, 10GBASE-LRM, SFP+ Direct Attach

This is where things get more interesting. All of these standards use the SFP+ interface, which allows network administrators to choose different media for different needs. The Intel® Ethernet Server Adapter X520 family, for example, supports “pluggable” optics modules, meaning a single adapter can be configured for 10GBASE-SR or 10GBASE-LR by simply plugging the right optics module into the adapter’s SFP+ cage. That same cage also accepts SFP+ Direct Attach Twinax copper cables. This flexibility is the reason SFP+ shipments have taken off, and Dell’Oro Group and Crehan Research agree that SFP+ adapters lead 10GbE adapter shipments today.
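As an aside, if you ever need to check which module is actually seated in an SFP+ cage, the module’s EEPROM will tell you. Below is a minimal sketch, assuming a Linux host with the ethtool utility installed and a NIC driver that supports module EEPROM dumps; the interface name and the exact output fields are assumptions, since they vary by driver and module.

```python
#!/usr/bin/env python3
"""Report what's plugged into a NIC's SFP+ cage on a Linux host.

A minimal sketch, assuming the ethtool utility is installed, the
command runs with sufficient privileges (typically root), and the
NIC driver supports module EEPROM dumps. Output fields vary by
driver and module, so the parsing below is illustrative only.
"""
import subprocess
import sys


def sfp_module_summary(ifname: str) -> str:
    # `ethtool -m <iface>` dumps the pluggable module's EEPROM
    # (SFF-8472 data for SFP/SFP+), which identifies the module type.
    out = subprocess.run(
        ["ethtool", "-m", ifname],
        capture_output=True, text=True, check=True,
    ).stdout
    keys = ("Identifier", "Transceiver type", "Cable technology")
    lines = [line.strip() for line in out.splitlines()
             if line.strip().startswith(keys)]
    return "\n".join(lines) or "No module details reported"


if __name__ == "__main__":
    # An SR optic might report a line like
    # "Transceiver type : 10G Ethernet: 10G Base-SR", while a Direct
    # Attach cable typically shows a passive copper cable technology.
    print(sfp_module_summary(sys.argv[1] if len(sys.argv) > 1 else "eth0"))
```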


10GBASE-SR

“SR” stands for “short reach,” though the name is a bit of a misnomer: 10GBASE-SR has a maximum reach of 300 meters over OM3 multi-mode fiber, far enough to connect devices across most data centers. A server equipped with 10GBASE-SR ports is usually connected to a switch in a different rack or in another part of the data center. 10GBASE-SR’s low latency and relatively low power requirements make it a good fit for latency-sensitive applications, such as high-performance compute clusters. It’s also a common backbone fabric between switches.
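To put that latency claim in perspective, here’s some back-of-the-envelope math for an SR link at its full 300-meter reach. The velocity factor below is a typical figure for light in glass fiber, not a spec value, so treat the numbers as rough.

```python
# Rough latency figures for a 10GBASE-SR link at its 300 m maximum reach.
# Light propagates through fiber at roughly two-thirds the speed of light
# in a vacuum, and a frame must also be serialized onto the wire at 10 Gb/s.

SPEED_OF_LIGHT_M_PER_S = 3.0e8
FIBER_VELOCITY_FACTOR = 0.67     # typical for glass fiber; an assumption
LINK_LENGTH_M = 300              # 10GBASE-SR maximum over OM3
FRAME_BYTES = 1500               # standard Ethernet payload size
LINK_BITS_PER_S = 10e9

propagation_s = LINK_LENGTH_M / (SPEED_OF_LIGHT_M_PER_S * FIBER_VELOCITY_FACTOR)
serialization_s = FRAME_BYTES * 8 / LINK_BITS_PER_S

print(f"Propagation delay : {propagation_s * 1e6:.2f} microseconds")   # ~1.49
print(f"Serialization time: {serialization_s * 1e6:.2f} microseconds") # 1.20
```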

For 2011, Dell’Oro Group projects SFP+ fiber ports will be a little more than a quarter of the total 10GbE adapter ports shipped. Of those ports, the vast majority (likely more than 95 percent) will be 10GBASE-SR.


10GBASE-LR

10GBASE-LR is 10GBASE-SR’s longer-reaching sibling; “LR” stands for “long reach” or “long range.” 10GBASE-LR uses single-mode fiber and can reach distances of up to 10km, though there have been reports of much longer distances with no data loss. It is typically used to connect switches and servers across campuses and between buildings. Given its specialized uses and higher costs, it’s not surprising that shipments of 10GBASE-LR adapters are much lower than shipments of 10GBASE-SR adapters. My team tells me adapters with LR optics modules account for less than one percent of Intel’s 10GbE SFP+ adapter sales. It’s an important one percent, though, as no other 10GbE interface standard provides the same reach.


10GBASE-LRM

This standard specifies support for 10GbE over older multi-mode fiber (up to 220m), allowing IT departments to squeeze extra life out of legacy cabling. I’m not aware of any server adapters that support this standard, but there may be some out there. Some switch vendors ship 10GBASE-LRM modules, but support for this standard will likely fade away before long.

SFP+ Direct Attach

SFP+ Direct Attach uses the same SFP+ cages as 10GBASE-SR and LR but does away with the optical modules that drive the signal. Instead, a passive copper Twinax cable plugs directly into the SFP+ housing, resulting in a low-power, short-distance, low-latency 10GbE connection. Supported distances for passive cables range from five to seven meters, which is more than enough to connect a switch to any server in the same rack. SFP+ Direct Attach also supports active copper cables, which reach greater distances at the cost of slightly higher power and latency.

A common deployment model for 10GbE in the data center has a “top-of-rack” switch connecting to the servers in its rack over SFP+ Direct Attach cables, with 10GBASE-SR uplinks to end-of-row switches that aggregate traffic from multiple racks.
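The arithmetic behind that design is worth a quick look. Here’s an illustrative oversubscription check for a hypothetical rack; the port counts below are assumptions for the sake of the example, not a recommendation.

```python
# Back-of-the-envelope oversubscription check for the top-of-rack model
# described above. All port counts are hypothetical; substitute your own.

SERVER_PORTS = 40        # SFP+ Direct Attach links to servers in the rack
SERVER_PORT_GBPS = 10
UPLINK_PORTS = 4         # 10GBASE-SR uplinks to the end-of-row switch
UPLINK_PORT_GBPS = 10

downlink_gbps = SERVER_PORTS * SERVER_PORT_GBPS   # 400 Gb/s toward servers
uplink_gbps = UPLINK_PORTS * UPLINK_PORT_GBPS     # 40 Gb/s out of the rack

# 10:1 in this example; if the rack's traffic mostly leaves the rack,
# you'd add SR uplinks (or trim server ports) to bring the ratio down.
print(f"Oversubscription: {downlink_gbps / uplink_gbps:.0f}:1")
```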

This model has turned out to be tremendously popular thanks to the lower costs of SFP+ Direct Attach adapters and cables. In fact, Dell’Oro estimates that Direct Attach adapter shipments overtook SFP+ fiber adapter shipments in 2010 and will outsell them by more than 2.5 to 1 in 2011.


10GBASE-T

Last, let’s take a look at 10GBASE-T. This is 10GbE over the twisted-pair cabling that’s widely deployed in nearly every data center today. It uses the familiar RJ-45 connector that plugs into almost every server, desktop, and laptop.

[Photo: RJ-45 cable end]

RJ-45: Look familiar? (Alternate title: Finally, a picture to break up the text.)

In Ben’s post, he mentioned that 10GBASE-T requires more power relative to other 10GbE interfaces. Over the last few years, however, manufacturing process improvements and more efficient designs have reduced power needs to the point where Intel’s upcoming 10GBASE-T controller, codenamed Twinville, will support two 10GbE ports at less than half the power of our current dual-port 10GBASE-T adapter.

This lower power requirement, along with a steady decrease in costs over the past few years, means we’re now at a point where 10GBASE-T is ready for LAN-on-motherboard (LOM) integration on mainstream servers – servers you’ll see in the second half of this year.

I’m planning to write about 10GBASE-T in detail next month, but in the meantime, let me give you some of its high-level benefits:

  • It’s compatible with existing Gigabit Ethernet network equipment, making migration easy; SFP+ Direct Attach, by contrast, is not backward-compatible with GbE switches. (A quick way to check what speed a link actually negotiated is sketched after this list.)
  • It’s cost-effective. The list price for a dual-port Intel Ethernet 10GBASE-T adapter is significantly lower than the list price for an Intel Ethernet SFP+ Direct Attach adapter. Plus, copper cabling is less expensive than fiber.
  • It’s flexible. Up to 100 meters of reach makes it an ideal choice for wide deployment in the data center.
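On that first point: because 10GBASE-T autonegotiates with Gigabit Ethernet gear, a 10GBASE-T port plugged into a GbE switch simply links at 1 Gb/s. Here’s a quick sketch of one way to confirm what speed a link actually negotiated on a Linux host; the interface name is an assumption.

```python
# Read the negotiated link speed from sysfs on a Linux host.
# A 10GBASE-T port reports 10000 when linked to a 10GbE switch and
# 1000 when plugged into legacy Gigabit Ethernet gear.

from pathlib import Path


def link_speed_mbps(ifname: str = "eth0") -> int:
    # sysfs reports the speed in Mb/s; some drivers report -1 (or fail
    # the read entirely) when the link is down.
    return int(Path(f"/sys/class/net/{ifname}/speed").read_text())


if __name__ == "__main__":
    print(f"Negotiated link speed: {link_speed_mbps()} Mb/s")
```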

For those reasons, we at Intel believe 10GBASE-T will grow to become the dominant 10GbE interface. Crehan Research agrees, projecting that 10GBASE-T port shipments will overtake SFP+ shipments in 2013-2014.

If you’re interested in learning about what it takes to develop and test a 10GBASE-T controller, check out this Tom’s Hardware photo tour of Intel’s 10 Gigabit “X-Lab.” It’s another long read, but at least there are lots of pictures.

In the three years that have passed since Ben’s post, a number of factors have driven folks to adopt 10GbE. More powerful processors have enabled IT to achieve greater consolidation, data center architects are looking to simplify their networks, and more powerful applications are demanding greater network bandwidth. There’s much more to the story than I can cover here, but if you are one of the many folks who read that first article and have been wondering what has happened since then, I hope you found this post useful.

Follow us on Twitter for the latest updates: @IntelEthernet