Next-Generation Intel HPC Fabric Takes Flight

In the months since its launch in late 2015, Intel Omni-Path Architecture (Intel OPA) has received a tremendous amount of market acceptance. Most HPC OEMs are adopting this next-generation fabric in their HPC solutions, while world-class research institutions are leveraging the architecture in leading-edge compute cluster deployments.

Some of these deployments will be in the spotlight in the week ahead at the International Supercomputing Conference, also known as ISC High Performance. The event, which takes place in Frankfurt June 19 – 23, will include several Intel and partner demos of applications running on clusters based on Intel OPA, a key element of Intel Scalable System Framework.

There are good reasons for all this momentum. Intel OPA delivers the performance and scalability required for tomorrow’s HPC workloads at an excellent price/performance ratio compared with competing fabric technologies. It allows organizations to achieve supercomputing-class performance for their HPC fabrics while reducing their infrastructure requirements by as much as 50 percent.[i]
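
To see where that infrastructure claim comes from, a quick back-of-the-envelope calculation helps (the numbers below are illustrative sizing math under the fat-tree assumptions described in footnote [i], not Intel’s published test data). In a two-tier full bisectional bandwidth fat tree built from k-port switches, each edge switch dedicates half of its ports to nodes and half to spine uplinks, so the topology tops out at roughly k × k/2 nodes:

    /* Illustrative two-tier FBB fat-tree sizing -- not Intel's published figures.
     * With k-port switches, each edge switch uses k/2 ports for nodes and k/2
     * for spine uplinks, so a two-tier FBB fat tree supports at most k * (k/2)
     * nodes. */
    #include <stdio.h>

    static int max_two_tier_nodes(int radix)
    {
        return radix * (radix / 2);
    }

    int main(void)
    {
        int radices[] = { 36, 48 };   /* 36-port ASIC vs. 48-port Intel OPA switch */
        for (int i = 0; i < 2; i++) {
            int k = radices[i];
            printf("%d-port switch: up to %d nodes in two switch tiers\n",
                   k, max_two_tier_nodes(k));
        }
        /* Prints 648 nodes for a 36-port radix and 1,152 nodes for a 48-port
         * radix, so a 1,024-node FBB cluster fits in two tiers only with the
         * larger switch. */
        return 0;
    }

In other words, a 1,024-node full bisectional bandwidth cluster fits within two switch tiers at a 48-port radix, while 36-port switches fall short at that scale and push the design into a third tier of switches, cables, and rack space, which is where the infrastructure savings come from.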

Our internal application testing shows Intel OPA delivering application performance comparable to or better than EDR, and we encourage our customers and partners to perform their own benchmarking. As a general practice, Intel always provides configuration data so that customers and partners can use it as a guideline for their own benchmarking, allowing them to make more informed choices.

As the benchmarks show, Intel OPA has all the right elements for very high MPI message rates and low latency in clusters running HPC applications and other data- and compute-intensive workloads, from entry-level to large-scale deployments.
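
For readers who want to kick the tires before running full application suites, established tools such as the OSU Micro-Benchmarks or the Intel MPI Benchmarks are the usual route. As a rough illustration of what those tools measure, here is a minimal MPI ping-pong sketch (our illustrative code, not an Intel benchmark) that reports one-way latency for small messages between two ranks:

    /* Minimal MPI ping-pong latency sketch (illustrative only; use a full
     * benchmark suite such as the OSU Micro-Benchmarks for real comparisons).
     * Run with exactly two ranks, e.g.: mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    #define MSG_SIZE   8        /* small message, in bytes */
    #define ITERATIONS 10000

    int main(int argc, char **argv)
    {
        int rank;
        char buf[MSG_SIZE] = { 0 };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        for (int i = 0; i < ITERATIONS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double elapsed = MPI_Wtime() - start;
        if (rank == 0) {
            /* One-way latency = half the average round-trip time. */
            printf("%d-byte one-way latency: %.2f microseconds\n",
                   MSG_SIZE, (elapsed / ITERATIONS / 2.0) * 1e6);
        }

        MPI_Finalize();
        return 0;
    }

Half the average round-trip time approximates one-way latency, and counting messages per unit time gives a crude view of message rate; real comparisons should use the established suites along with the published configuration details mentioned above.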

Early adopters of Intel OPA include research-focused organizations running leading-edge compute clusters for scientific and engineering research. A few examples:

  • The University of Tokyo and the University of Tsukuba recently announced plans to build a 25 petaflop system that incorporates Intel OPA. That cluster, named Oakforest-PACS, is expected to go online in December.[ii]
  • The Pittsburgh Supercomputing Center is using Intel OPA to improve the scalability and connectivity of its Bridges supercomputer.
  • The Texas Advanced Computing Center is using Intel OPA to support performance boosts and larger fabrics in its supercomputers.
  • The Tri-Lab partnership of Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory plans to use Intel OPA for its Commodity Technology Systems (CTS-1) program. For more on this project, see Intel Omni-Path Architecture for the Tri-Lab CTS-1.

This list of customer examples could go on and on. A key takeaway is that Intel OPA is a fabric for the future of HPC that is being deployed today in many data centers that require leading-edge cluster performance along with superior price/performance. That’s Intel OPA: an end-to-end fabric solution that provides 100 Gbps of port bandwidth, very high MPI message rates, and latency that stays low even at extreme scale.

To take a step back and look at the bigger picture, the Intel Omni-Path Architecture and the broader Intel Scalable System Framework are among the keystones of Intel’s long-term vision for the future of high-performance computing.  This future starts today.  To meet the performance and cost requirements of tomorrow’s HPC workloads and data analytics challenges, organizations are going to need a next-generation fabric that is woven tightly into a broader HPC framework. That’s clearly the direction of Intel Omni-Path Architecture.

For a closer look at the use of this next-generation fabric in prominent supercomputing centers, watch the videos on the Intel OPA site. You can also get a close-up look at the architecture in our “Ask the Architects” webinar, tentatively titled “Intel® Omni-Path Architecture: Live at ISC” and planned for 9 p.m. CEST on June 20. If you are attending ISC, stop by our booth and talk to our experts, and, of course, stay tuned for announcements from our growing community of Intel OPA partners.

[i] Reduction of infrastructure requirements claim based on a 1,024-node full bisectional bandwidth (FBB) Fat-Tree configuration, using a 48-port switch for the Intel Omni-Path cluster and a 36-port switch ASIC for either Mellanox or Intel® True Scale clusters.

[ii] Top500 Supercomputing Sites. “Japanese Universities Order 25 Petaflop Supercomputer From Fujitsu.”