By John Beck, General Manager, Intel Omni-Path Architecture Business Group
On November 14, 2016, at Supercomputing 2016 in Salt Lake City, Utah, Intel shared exciting updates on the continuing market momentum of Intel® Omni-Path Architecture, Intel’s 100Gb fabric for high performance computing. Intel® Omni-Path Architecture connects 28 systems on the current Top500 list, double the number for InfiniBand EDR, clearly demonstrating that Intel® Omni-Path Architecture is becoming the preferred 100Gb fabric for major deployments as a result of its strong performance and scalability. Intel® Omni-Path Architecture commands a total of 43.7 petaflops in the 100Gb Top500 arena, over 2.5x the total peak FLOPS of InfiniBand EDR systems (17.1 petaflops). The high speed, low latency, and scalability of Intel OPA are critical to delivering better performance and balanced clusters for HPC applications.
What a Difference Five Months Can Make
The graphs below illustrate the tremendous improvements in the number of systems and total performance that have propelled Intel® Omni-Path Architecture to a leadership position in 100Gb fabric.
We’re also very excited that Intel OPA is the first 100Gb fabric to appear in the Top 10! The 8,208-node Oakforest-PACS cluster (University of Tokyo and University of Tsukuba), which is based on Intel® Omni-Path Architecture, has broken into the #6 spot on the recently published Top500 list. Delivering 25 petaflops of peak performance and a measured 13.5 petaflops (Rmax), the Oakforest-PACS cluster supports over 500,000 Intel® Xeon Phi™ processor cores (formerly code-named Knights Landing). Intel® Omni-Path Architecture is specifically designed for large-scale HPC deployments, offering the bandwidth, low latency, and efficiency required by today’s biggest supercomputers. The #6 listing on the Top500 list for the Oakforest-PACS cluster demonstrates the ability of Intel® Omni-Path Architecture to deliver top performance and scalability.
Following the Oakforest-PACS cluster, in the #12 spot, is the 3,556-node CINECA Marconi-A2 cluster, sporting a peak performance of 10.8 petaflops and a measured ~6.2 petaflops (Rmax).
These are significant accomplishments, given that volume shipments started just nine months ago. Intel’s high-speed, low-latency fabric for high-performance computing has quickly established its presence around the world as the 100Gb HPC fabric of choice, primarily due to its performance, price/performance, scalability, and the customers’ satisfaction with their initial experiences after deployment. The vast majority of major HPC system OEMs integrating large and scalable machines have adopted the Intel fabric for their deployments. These include Dell, Lenovo, HPE, Cray, SGI, and Supermicro in the United States; Bull Atos and Clustervision in Europe; Fujitsu in Japan; and Sugon, Inspur, and Huawei in China.
Integrators and customers cite that competitive performance is not the only compelling reason to choose Intel® Omni-Path Architecture. The increased port density afforded by Intel’s 48-port radix switch silicon (compared to InfiniBand’s 36-port switch) reduces the complexity and cost of the network design required to enable large clusters. Penguin Computing, which deployed Intel® Omni-Path Architecture for the first Commodity Technology System program (CTS-1) for the U.S. government’s National Nuclear Security Administration (NNSA), said Intel’s switch was very beneficial for the CTS-1 design. Please check out what Sig Mair of Penguin has previously said about the benefits of Intel® Omni-Path Architecture in the following Top500.org article. There are now eight NNSA CTS-1 clusters on this Top500 list, a number of which are in the top 50.
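To see why switch radix matters so much for cluster design, here is a back-of-the-envelope sketch (an illustration of the standard fat-tree math, not a calculation from the article): a non-blocking two-tier fat-tree built from radix-r switches can connect up to r²/2 hosts, so a higher-radix switch lets a larger machine fit in two tiers before a third switching layer, with its added cables and latency, becomes necessary.

```python
def max_hosts_two_tier(radix: int) -> int:
    """Maximum hosts in a non-blocking two-tier fat-tree.

    Each edge switch dedicates half its ports to hosts and half
    to links up to the core tier, so a fabric of radix-r switches
    supports at most r**2 // 2 hosts at full bisection bandwidth.
    """
    return radix ** 2 // 2

# Compare a 48-port switch (Intel OPA) with a 36-port switch (InfiniBand EDR)
print(max_hosts_two_tier(48))  # 1152 hosts
print(max_hosts_two_tier(36))  # 648 hosts
```

Under this sketch, the 48-port silicon supports roughly 1.8x as many hosts in two tiers as a 36-port design, which is the density advantage integrators point to when sizing large clusters.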
The Proof is in the Performance
The proven quality and stability of Intel® Omni-Path Architecture allowed MIT to deploy their HPC system in record time this fall. Jeremy Kepner, MIT Lincoln Laboratory Fellow and head of the Lincoln Laboratory Supercomputing Center, shares that “Dell EMC and Intel have been great partners in enabling us to dramatically increase the capabilities of our supercomputing center. The Dell HPC and Intel teams were very knowledgeable and responsive and able to deliver, install, and benchmark our Petaflop-scale system in less than a month. This was a great example of a well-coordinated and dedicated organization that was able to allocate the appropriate resources to exceed customer expectations. We are very pleased with the efficiency and performance of our Dell EMC Networking H-Series fabric based on Intel® Omni-Path Architecture, which enabled our #106 ranking on the November Top500 list."
All of this has been accomplished in just the second Top500 list to feature Intel® Omni-Path Architecture; Intel’s leap in both the number of systems represented on the list and their overall performance demonstrates its commanding presence in the 100Gb fabric market. Intel OPA is a pillar of the Intel® Scalable System Framework, a consolidated strategy to bring highly integrated solutions to high performance computing.
To learn more about Intel® Omni-Path Architecture visit: http://www.intel.com/omnipath