Connecting the Supercomputer: Network Technologies

If you were in Seattle last week, you shared the Emerald City with Supercomputing 2011 (SC11), one of the high-performance computing (HPC) community’s biggest shows. Attendees heard about the latest generation of supercomputers, the technologies that enable them, and advances that will make future high-performance systems even faster than the current models. In this post, I’ll take a quick look at the role of networking in HPC and share some Intel Ethernet information from the show.

Today’s typical HPC systems are custom-designed clusters of many physical servers, often hundreds or even thousands of them. These many compute nodes process and analyze massive amounts of data for traditional supercomputer tasks, such as climate research, quantum physics, and aerodynamic simulations, as well as financial applications, including high-frequency stock trading and data warehousing. As with any compute cluster environment, the network is a critical element of an HPC cluster. Each machine communicates constantly with its peers in the cluster.

Specialized fabrics have long been the traditional choice for networking HPC clusters, but in recent years many organizations have turned to Ethernet for these environments. The reasons are pretty straightforward:

  • Ethernet is everywhere. It’s a familiar, well-understood technology, and practically every server includes an integrated Ethernet connection. Ethernet’s ubiquity and a solid, established ecosystem make it easy to deploy in any environment, including HPC.
  • Ethernet is flexible and adaptable. Over the years, Ethernet has expanded to carry additional traffic types, including video, voice over IP, and storage (NFS, iSCSI, and Fibre Channel over Ethernet). Enhancements such as 10 Gigabit Ethernet (10GbE) and iWARP (for ultra-low-latency RDMA) have made Ethernet a viable solution for HPC.
  • Ethernet enables simplification. Consolidating multiple fabrics onto Ethernet eliminates the need to maintain and manage disparate network fabrics and the required equipment.

Looking for an example of someone deploying 10 Gigabit Ethernet in an HPC environment? Want to know more about upcoming products? Let me give you a few examples of what the Intel Ethernet team showcased at SC11:

NASA Case Study: HPC in the Cloud

Yep, HPC in the cloud. In the Intel booth, Hoot Thompson from NASA’s Center for Climate Simulation discussed how NASA moved modeling and simulation applications to a cloud-based cluster. This cloud deployment offered greater elasticity and agility than their bare-metal cluster, but they wanted to know how their applications would perform in the cloud. NASA consolidated the cloud cluster’s management and backbone networks onto 10GbE using Intel Ethernet 10 Gigabit Server Adapters, and found network performance comparable to that of the bare-metal cluster. In some tests, the cloud cluster even came out ahead. Support for single-root I/O virtualization (SR-IOV) in Intel Ethernet 10 Gigabit Server Adapters played a key role in these performance levels. Results like these show not only that Ethernet is a viable fabric for HPC, but also that cloud environments are suitable for HPC deployments.
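
If you’re curious what SR-IOV looks like in practice: it lets a physical adapter expose multiple virtual functions (VFs) that can be handed directly to guest VMs, bypassing the hypervisor’s software switch. Here’s a minimal sketch of enabling VFs on a Linux host; the interface name, VF count, and sysfs path are illustrative assumptions, and the exact mechanism varies by kernel version and driver (some drivers use a module parameter instead).

```python
# Hypothetical sketch: enabling SR-IOV virtual functions (VFs) on a Linux
# hypervisor host. Interface name, VF count, and sysfs path are assumptions;
# the exact mechanism depends on the kernel and NIC driver in use.
import os

IFACE = "eth2"    # hypothetical 10GbE interface on the host
NUM_VFS = 4       # number of virtual functions to expose to guest VMs

VF_PATH = f"/sys/class/net/{IFACE}/device/sriov_numvfs"

def enable_vfs(path: str, count: int) -> None:
    """Request `count` VFs; each VF can then be passed through to a VM."""
    if not os.path.exists(path):
        raise RuntimeError("SR-IOV sysfs entry not found; check kernel/driver support")
    with open(path, "w") as f:
        f.write(str(count))

if __name__ == "__main__":
    enable_vfs(VF_PATH, NUM_VFS)
    print(f"Requested {NUM_VFS} VFs on {IFACE}")
```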

Low Latency Switching: Intel® Ethernet Switch FM6000

The Intel® Ethernet Switch FM6000 is the latest 10GbE/40GbE switch silicon from Fulcrum Microsystems, which Intel acquired earlier this year. One of this product’s key features is its low-latency performance. Switch latency is the time a switch takes to forward a received packet to its destination, and it’s an important ingredient in high-performance computing, where the nodes in a cluster constantly communicate with each other through the switch. At SC11, we displayed a reference board that will make it easier for switch vendors to design products based on the Intel Ethernet Switch FM6000. We’ll have someone from that product team blog more about it soon, so stay tuned.
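
To make “latency” concrete: node-to-node latency through a switch is often characterized with a simple request/response micro-benchmark between two hosts. Here’s a rough sketch of the idea as a UDP ping-pong; the port, message size, and iteration count are illustrative assumptions, and real HPC latency tests (such as MPI ping-pong benchmarks) are far more rigorous.

```python
# Rough sketch of a two-node latency micro-benchmark: one host echoes UDP
# datagrams, the other measures the average round-trip time through the switch.
import socket
import sys
import time

PORT = 9000        # arbitrary port for the test
MSG = b"x" * 64    # small message, to emphasize latency over bandwidth
ITERS = 10000

def server() -> None:
    """Echo every datagram straight back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        sock.sendto(data, addr)

def client(server_host: str) -> None:
    """Send ITERS messages and report the mean round-trip time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.perf_counter()
    for _ in range(ITERS):
        sock.sendto(MSG, (server_host, PORT))
        sock.recvfrom(4096)
    elapsed = time.perf_counter() - start
    print(f"avg round-trip: {elapsed / ITERS * 1e6:.1f} microseconds")

if __name__ == "__main__":
    # Usage: run "server" on one node, "client <server_host>" on another.
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```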

The Public Debut of Integrated 10GBASE-T

If you’ve read any of my previous blog posts, you’ve heard of Intel’s upcoming 10GBASE-T controller, which will bring integrated 10GbE to rack and tower servers in 2012. At SC11, Supermicro and Tyan showcased next-generation motherboards that feature this controller, marking the first time motherboard integration has been shown publicly. We’re excited about this product because it will allow you to connect mainstream servers to 10GbE networks, including HPC clusters, without the expense of an add-in adapter.

And now, ladies and gentlemen, I give you integrated 10GBASE-T powered by Intel® Ethernet.


Supermicro motherboard with integrated 10GBASE-T (the two ports on the right)


Tyan motherboard with integrated 10GBASE-T ports on the upper left

10GbE in the Top 500

Need more proof that Ethernet is ready for HPC? Twice each year, the Top 500 organization publishes a list of the 500 fastest supercomputers on the planet. Number 42 on the list published last week is an Amazon EC2 cluster. It’s powered by the Intel® Xeon processor X5570 series, contains 7,040 cores, and is connected by 10GbE. Check out High Performance Computing Hits the Cloud by Amazon Web Services’ James Hamilton to learn how Amazon used their 10GbE pipes.

Those are just a few of the technologies that help make Ethernet a viable network fabric for HPC environments. While HPC systems perform vastly different tasks than typical data center servers, many of them rely on the same Ethernet technologies and products used in data centers. We expect the number of Ethernet-connected HPC systems to grow as more organizations recognize the benefits of moving to Ethernet and use it to connect systems that predict how big the next hurricane will be, process thousands of stock transactions in less than a second, or model what happens when two black holes collide.

Follow us on Twitter to learn more: @IntelEthernet