A World-Class Interconnect for a World-Class Scalable Processor Family

In the high-performance computing (HPC) community, there’s a lot of excitement about the new Intel® Xeon® Scalable platform. It offers the world’s most advanced compute core designed into a broad portfolio of balanced platforms to deliver compelling performance, scalability, and energy efficiency for diverse HPC workloads. These gains are driven in part by an integrated high-performance, low-cost fabric for HPC: Intel® Omni-Path Architecture (Intel® OPA).

Intel OPA helps users achieve supercomputing-class fabric performance while dramatically reducing fabric infrastructure costs and requirements. This end-to-end fabric solution is designed for low latency and cost-effective scaling across every level of HPC — from department clusters to supercomputers.

Let’s take a closer look at Intel OPA and the value it brings to the HPC community. This part of the story can be boiled down to three fundamental topics: performance, price/performance, and innovative fabric features needed for greater scalability. This value proposition is further enhanced through the integration of the Intel OPA fabric with the latest Intel processors.

When deployed with the new Intel Xeon Scalable platform, Intel OPA offers the added benefit of a fabric that is integrated into the processor socket. This brings several value vectors to the market: the potential for high-density server designs and deployments, reduced cost and power, and an option for improving latency. Because Intel OPA integrates directly with the processor rather than connecting through an add-in card plugged into a PCI Express* slot, systems can be more cost-effective, and users gain I/O flexibility because PCI Express slots are freed up for other uses.

On the performance front, Intel OPA delivers 100 Gbps of port bandwidth and a high message rate, with fabric latency that stays low even at extreme scale; Intel OPA can scale to tens of thousands of nodes. These capabilities helped Intel OPA become the leading 100 Gb fabric among the Top500 supercomputers. And with Barcelona Supercomputing Center’s MareNostrum and BASF’s QURIOSITY placing high on the Top500 list, some of the world’s most powerful systems are already demonstrating the performance benefits of combining Intel Xeon Scalable processors with Intel OPA.
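If you want to see these characteristics on your own cluster, point-to-point latency is typically measured with a simple MPI ping-pong between two nodes. Below is a minimal, illustrative sketch of such a measurement; in practice you would use an established suite such as the OSU Micro-Benchmarks or the Intel MPI Benchmarks, and the iteration count and message size here are arbitrary choices rather than Intel OPA test parameters.

/* Minimal MPI ping-pong sketch for estimating one-way latency between two
 * ranks placed on different nodes. Build with an MPI compiler wrapper
 * (for example, mpicc) and run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    enum { ITERS = 1000, MSG_BYTES = 8 };   /* small message for latency */
    char buf[MSG_BYTES] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* One-way latency is half the round-trip time per iteration. */
        printf("One-way latency: %.2f microseconds\n",
               elapsed / ITERS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}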

For exceptional price/performance, Intel OPA fabrics are designed around a 48-port switch chip, which enables more nodes to be supported per switch. That reduces the number of switches and cables needed and significantly lowers overall fabric costs. This reduced fabric infrastructure cost enables more servers and/or storage to be acquired within a given cluster hardware budget.[i]
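To see why switch radix matters, consider the standard two-tier full-bisectional-bandwidth (FBB) fat tree: each edge switch dedicates half its ports to nodes and half to uplinks, one per core switch. The sketch below works through that arithmetic for a 48-port chip versus a common 36-port chip; it is topology math only, not the priced comparison described in the footnote.

/* Two-tier FBB fat-tree capacity for an R-port switch chip:
 *   - R edge switches, each with R/2 node ports and R/2 uplinks
 *   - R/2 core switches, each with one link to every edge switch
 *   - maximum cluster size: R * R/2 nodes
 * Illustrative only; real deployments also use director-class switches. */
#include <stdio.h>

static void two_tier_fbb(int radix)
{
    int max_nodes     = radix * radix / 2;
    int edge_switches = radix;
    int core_switches = radix / 2;
    int total_chips   = edge_switches + core_switches;

    printf("%2d-port chip: up to %4d nodes, %2d switch chips "
           "(%.2f chips per 100 nodes)\n",
           radix, max_nodes, total_chips,
           100.0 * total_chips / max_nodes);
}

int main(void)
{
    two_tier_fbb(48);   /* Intel OPA switch ASIC radix        */
    two_tier_fbb(36);   /* a typical 36-port InfiniBand radix */
    return 0;
}

With a 48-port chip, a two-tier fabric reaches up to 1,152 nodes with 72 switch chips; a 36-port chip tops out at 648 nodes with 54 chips, which works out to about a third more switch chips per node, and anything larger requires a third switching tier.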

And then there are the innovative fabric features that improve performance, resiliency, and quality of service. These features include Traffic Flow Optimization (TFO), which delivers low, deterministic latency for mixed MPI and storage traffic. By keeping latency-sensitive MPI messages from getting stuck behind large storage transfers, TFO helps improve application performance and run-to-run consistency.

Another important feature, packet integrity protection (PIP), enables error detection and correction without adding latency. Bit errors become more prevalent at higher line speeds and larger scale, so detecting and correcting the inevitable bit errors without a performance penalty is a huge advantage. PIP virtually eliminates the end-to-end retries that can seriously impact application performance.
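A rough back-of-the-envelope model shows why recovering from bit errors at the link level matters. The numbers below (bit error rate, packet size, and recovery costs) are assumptions chosen purely for illustration, not Intel OPA specifications; the point is simply that a microsecond-scale link-level resend is orders of magnitude cheaper than a timeout-driven end-to-end retry.

/* Expected latency added by bit-error recovery, under assumed values.
 * Link-level recovery resends only the affected packet on one hop;
 * end-to-end recovery typically waits on a software timeout and then
 * retransmits across the whole path. Compile with -lm for pow(). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double ber          = 1e-12;    /* assumed raw bit error rate     */
    const double packet_bits  = 4096 * 8; /* assumed 4 KB packet            */
    const double link_retry_s = 2e-6;     /* assumed link-level resend cost */
    const double e2e_retry_s  = 5e-3;     /* assumed timeout-based retry    */

    /* Probability that at least one bit in a packet is corrupted. */
    double p_err = 1.0 - pow(1.0 - ber, packet_bits);

    printf("per-packet error probability : %.2e\n", p_err);
    printf("avg added latency, link-level: %.2e us\n", p_err * link_retry_s * 1e6);
    printf("avg added latency, end-to-end: %.2e us\n", p_err * e2e_retry_s * 1e6);
    return 0;
}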

Let’s take a step back and look at the bigger picture. The advances in the Intel Xeon Scalable platform deliver more cores, more cache, and higher frequencies. This is all great news for HPC users, but better compute capability alone isn’t a solution. To capitalize on all this CPU goodness, you need a better network interconnect. You need a high-performance, low-latency fabric that connects the nodes together to accelerate highly parallel HPC workloads. That’s Intel Omni-Path Architecture.

To learn more about Intel OPA, visit intel.com/omnipath. For a closer look at the Intel Scalable System Framework, see intel.com/ssf. Or for a one-on-one conversation about your needs, contact your preferred system vendor.

[i] Assumes a 750-node cluster, and number of switch chips required is based on a full bisectional bandwidth (FBB) Fat-Tree configuration. Intel® OPA uses one fully-populated 768-port director switch, and Mellanox EDR solution uses a combination of 648-port director switches and 36-port edge switches. Mellanox component pricing from www.kernelsoftware.com, with prices as of November 3, 2015. Compute node pricing based on Dell PowerEdge R730 server from www.dell.com, with prices as of May 26, 2015. Mellanox power data based on Mellanox CS7500 Director Switch, Mellanox SB7700/SB7790 Edge switch, and Mellanox ConnectX-4 VPI adapter card product briefs posted on www.mellanox.com as of November 1, 2015. Intel OPA power data based on product briefs posted on www.intel.com as of November 16, 2015. Power and cooling costs based on $0.10 per kWh, and assumes server power costs and server cooling cost are equal and additive. Intel® OPA pricing based on estimated reseller pricing derived from projected Intel MSRP at time of launch. All amounts in US dollars.

 

Scott Misage

About Scott Misage

General Manager, Data Center Group, Connectivity Group, Omni-Path Business Unit. Scott joined Intel in January of this year as the General Manager of the Omni-Path Business Unit. He is responsible for the end-to-end high-performance computing fabric business at Intel, one of the key growth adjacencies in the Data Center Group. Scott joins us after 19 years at HP, where most recently he was Vice President and General Manager of High-Performance Computing for HP Servers. In that role, he managed HP's worldwide hardware, software, and solution products business for high-performance computing, including fabrics, accelerators, storage, and HPC software. Misage holds a Bachelor of Science in Electrical Engineering and a Master of Electrical Engineering, both from Cornell University.