HPC Innovation Delivers Best-in-Class Industry Solutions

By Terry Myers, Senior Manager of HPC & Big Data, HPC Market Management at HPE

 

High-performance computing (HPC) has long been perceived as beyond reach for the average enterprise due to complexity, cost, and resistance to change. Hewlett Packard Enterprise’s (HPE’s) differentiated, cutting-edge HPC solutions, backed by more than 80 years of innovation, are driving breakthrough results and competitive advantage for our customers.

A rapidly growing number of vertical industries are embracing HPC solutions to better manage and monetize massive quantities of data, while traditional HPC users in government and academia pursue more efficient, higher-performing solutions as they accelerate the race to exascale computing. As our customers seek new and more advanced technologies, HPE is delivering industry-leading solutions, system innovation capabilities, broad partner ecosystem technologies, and an extensive HPC portfolio of purpose-built hardware and software to furnish the next generation of integrated, workload-optimized solutions.

To extend “traditional” supercomputing and help new organizations realize the benefits of HPC for competitive differentiation, HPE offers Enterprise HPC solutions that are easy to deploy, mitigate risk through proven platforms, and integrate seamlessly into data centers. These breakthrough innovations deliver faster time to results with high-performance capabilities previously unavailable to standard IT environments.

HPE is bringing high-performance computing to the traditional enterprise market through optimized infrastructures that raise performance levels, backed by unmatched HPC domain expertise and the most comprehensive line of supercomputing platforms and solutions. HPE is ready to help you plan and deploy your first, or your next, high-performance computing solution.

One key area of development is the system architecture at the core of HPC solutions. Through the Intel and HPE HPC Alliance and our close partnership, we have continued to evolve and enhance HPE systems, architectures, and solutions. We are incorporating Intel’s next-generation technologies, which Intel collectively calls the Intel® Scalable System Framework (Intel® SSF), into our HPE Apollo and ProLiant DL platforms. HPE’s unique innovation in technology integration, density optimization, and the software environment, along with global services and support, delivers significant incremental value to customers beyond the Intel technologies. Intel® SSF is a game changer for HPC and Big Data because it is the first purposefully tailored technology component framework that dramatically improves performance, bandwidth, efficiency, and TCO while reducing latency.

Most importantly, the Intel® SSF framework promises to make it easier for developers to design HPC applications and for IT admins to manage HPC environments based on existing x86 architecture, using fewer tools and basic infrastructure. Key elements of HPE’s implementation of Intel® SSF include Intel® Omni-Path Architecture, a comprehensive family of high-performance, low-latency fabric solutions for the HPE Apollo and HPE ProLiant portfolios; Intel® Xeon Phi™ processors, supported on the HPE Apollo 6000; and Intel® Enterprise Edition for Lustre*, part of the HPE Apollo 4520 announced on April 4, 2016. This framework enables HPE purpose-built platforms to accelerate the performance of HPC and deep learning workloads while maximizing compute capacity and lowering TCO.
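To make “low latency” concrete: fabric latency is commonly characterized with a ping-pong microbenchmark that bounces a small message between two nodes and times the round trip. The sketch below, in Python with mpi4py, is an illustration of the idea only; it is not an Intel or HPE tool, and production fabric benchmarks (such as the OSU micro-benchmarks) are written in C with far more careful timing and warm-up.

```python
# Minimal ping-pong latency sketch between two MPI ranks.
# Illustrative only -- not an Intel or HPE benchmark tool.
# Assumes mpi4py is installed and the script is launched with
# mpirun/srun across two nodes connected by the fabric under test.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
assert comm.Get_size() >= 2, "run with at least 2 ranks"

ITERS = 1000
buf = bytearray(8)  # tiny message: exposes latency, not bandwidth

comm.Barrier()
start = MPI.Wtime()
for _ in range(ITERS):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
comm.Barrier()
elapsed = MPI.Wtime() - start

if rank == 0:
    # Each iteration is one round trip; half of that is one-way latency.
    print(f"one-way latency ~ {elapsed / ITERS / 2 * 1e6:.2f} us")
```

Run, for example, as `mpirun -n 2 python pingpong.py` with one rank placed on each node; the lower the reported one-way latency, the better the fabric serves tightly coupled HPC workloads.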

A key example of customer success enabled by Intel’s Omni-Path fabric and HPE’s purpose-built portfolio is the Pittsburgh Supercomputing Center’s (PSC) award-winning Bridges supercomputer. Bridges represents a new approach to supercomputing that focuses on research problems limited by data movement rather than floating-point speed. In addition to serving traditional supercomputing users, Bridges is already helping researchers tackle new kinds of problems in genetics, the natural sciences, and the humanities, where scientists are constrained by the volume of data rather than by computational speed. Users with different scales of data can now draw on a mix of memory, data bandwidth, and computational power customized to their problem.

In an early end-user success, Wenxuan Zhong and Xin Xing of the University of Georgia used Bridges to assemble 378 billion base pairs of bacterial DNA from the intestines of healthy patients and those with diabetes. The scientists sequenced short DNA fragments of all the species at once, using computation to sort out the different microbes’ sequences as they assembled them. This massive task leveraged Bridges’ internal Intel® Omni-Path fabric connections, the first such installation in the world, linking 20 HPE computational nodes to finish the calculation in a blistering 16 hours. The team is now using Bridges to test a new statistical method on the sequence data to identify critical differences in gut microbes associated with diabetes.
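The computational pattern behind this kind of run is worth sketching. The snippet below is a minimal, hypothetical illustration (not the Georgia team’s actual assembly pipeline) of how sequence reads can be partitioned across MPI ranks on a fabric-connected cluster; the synthetic reads and the k-mer counting step are assumptions chosen for illustration, since k-mer counting is a common first stage of assembly.

```python
# Minimal sketch: distributing DNA-sequence work across MPI ranks.
# Hypothetical example only -- not the actual Bridges assembly pipeline.
# Assumes mpi4py is installed and the job is launched with mpirun/srun.
from collections import Counter

from mpi4py import MPI

K = 21  # k-mer length; a common assembly choice, assumed here


def count_kmers(reads, k=K):
    """Count all length-k substrings across a list of DNA read strings."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts


comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Root rank loads (here: fabricates) the reads and splits them into
    # one chunk per rank; a real run would stream from the parallel
    # file system instead of building data in memory.
    reads = ["ACGTACGTGACCTGAACCGTA", "TTGACCTGAACGTACGGTACC"] * 1000
    chunks = [reads[i::size] for i in range(size)]
else:
    chunks = None

# Scatter chunks over the fabric, count locally, then gather on root.
local_reads = comm.scatter(chunks, root=0)
local_counts = count_kmers(local_reads)
partials = comm.gather(local_counts, root=0)

if rank == 0:
    total = Counter()
    for partial in partials:
        total.update(partial)
    print(f"{len(total)} distinct {K}-mers counted across {size} ranks")
```

Launched as, say, `mpirun -n 20 python kmer_count.py`, each rank works on its own slice of the data while the interconnect carries the scatter and gather traffic, which is exactly where a low-latency, high-bandwidth fabric like Omni-Path pays off for data-intensive workloads.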

Bridges’ 800-plus-node cluster comprises HPE Apollo 2000 servers, HPE ProLiant DL580 scale-up servers, and HPE Integrity Superdome X servers with 12 TB of memory, plus a 10-petabyte parallel file system, all seamlessly connected with a 100 Gbps Intel® Omni-Path fabric. Significantly, HPE and Intel’s collaboration with PSC has proven extremely successful, as measured by how quickly the cluster came up and by how quickly end users, both experienced and first-time HPC users, were running successfully on it, with very positive feedback on their experience.

HPE is making HPC more accessible to all types of users by tightly integrating Intel technologies into HPE systems and solutions and by providing customers with additional choice and flexibility in the form of workload-optimized, purpose-built HPC solutions. I encourage you to explore these solutions and the remarkable outcomes and distinct benefits they can bring to you.