The Future Unleashed with HPC, HPDA, and AI

Traditionally, high-performance computing (HPC) has focused on modeling and simulation. But things are evolving. New and complex workloads are creating demand for everything—from familiar simulation to High Performance Data Analytics (HPDA), visualization, and artificial intelligence (AI)—to run on one smart infrastructure.

AI and HPC Trends

As the worlds of HPC and AI converge in some areas, each brings its own trends. Anyone looking to bring AI into an HPC environment (or vice versa) should weigh these carefully, as they will shape important strategic decisions.

In HPC, one of the big trends is simply the scale of performance at work. Computing at the petaflop level is not uncommon, and exascale computing is now firmly on the radar.

At the same time, we’re seeing more users than ever before, from different parts of organizations, wanting to use HPC applications. For example, small engineering houses that until now have used workstations for their simulation needs are moving to HPC clusters. This democratization of HPC opens up all sorts of exciting opportunities for innovation and discovery. But it also means that those with the skills to create and run HPC environments face additional requirements to meet entirely new user needs.

Comparing HPC and AI Uses

One of the key challenges of HPC and AI convergence is the compute infrastructure. Traditional HPC uses numerical methods for modeling and simulation, and needs computers with high floating-point compute capability.

So, can AI run on an HPC infrastructure? Let’s call it “AI on HPC”. The answer, happily, is yes. AI applications such as those based on deep learning can use the scalable compute capabilities provided by a highly parallel HPC cluster environment based on Intel® Xeon® processors. This ability to speed up deep learning applications on Intel® technology-based supercomputers was successfully trialed by SURFsara, the national supercomputing center in the Netherlands, and is captured in detail in this case study.
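To make “AI on HPC” concrete, here is a minimal sketch of data-parallel training of a small neural network across the nodes of a CPU cluster. It assumes Horovod and a compatible TensorFlow version are installed and the script is launched with one process per node (for example via horovodrun or mpirun); the model, dataset, and hyperparameters are purely illustrative and are not taken from the SURFsara work.

# Minimal sketch: data-parallel Keras training with Horovod on a CPU cluster.
# Launch with e.g. `horovodrun -np 4 python train.py`; everything below is
# illustrative, not a reference implementation.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Scale the learning rate with the number of workers (a common heuristic),
# and wrap the optimizer so gradients are averaged across all ranks.
opt = tf.keras.optimizers.SGD(learning_rate=0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# Simple manual shard: each rank trains on every size()-th example.
x_shard = x_train[hvd.rank()::hvd.size()]
y_shard = y_train[hvd.rank()::hvd.size()]

callbacks = [
    # Make sure every rank starts from the same initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

model.fit(x_shard, y_shard, batch_size=64, epochs=2,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)

The same pattern scales from a handful of nodes to a large cluster: each rank sees a shard of the data, and the communication layer keeps the model weights in sync.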

On the other hand, modeling and simulation can use AI capabilities such as pattern classification and anomaly detection to improve HPC—let’s call it “HPC on AI”. These capabilities let you work backwards from analysis of the simulation data to develop a better model that can then be extrapolated into the future. In this way, AI can be used to make the original task of simulation easier and more effective.
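As one illustration of this direction, the sketch below uses a simple unsupervised anomaly detector to flag unusual time steps in simulation output, so that only the interesting regions need closer inspection or a higher-fidelity re-run. It is a minimal example assuming NumPy and scikit-learn are available; the features and data are synthetic stand-ins, not output from any particular solver.

# Minimal sketch: flag anomalous time steps in simulation output with an
# unsupervised model. The "features" are synthetic stand-ins for per-time-step
# summary statistics a solver might export (e.g. mean pressure, peak
# temperature, residual norm).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 3))          # placeholder data

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(features)          # -1 marks suspected anomalies

anomalous_steps = np.flatnonzero(labels == -1)
print(f"{anomalous_steps.size} time steps flagged for closer inspection")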

Getting Started with AI

You can begin your journey to AI on your existing HPC hardware. Intel collaborates with the industry to use, develop, and optimize tools that help you bring new AI capabilities into your existing Intel technology-based architecture. For example:

  • Existing AI frameworks like Caffe*, TensorFlow*, or BigDL* can help you get started quickly with AI applications.
  • Software optimizations for these popular deep learning frameworks can greatly increase the performance of AI applications on the Intel Xeon processors used in deep learning and HPC systems.
  • Software libraries like the Intel® Math Kernel Library (Intel® MKL) accelerate the math processing routines that underpin application performance; a minimal sketch follows this list.
  • Convolutional neural networks (CNNs) can be accelerated and deployed on Intel® platforms with the Intel® Deep Learning Deployment Toolkit, which is available as part of the OpenVINO™ toolkit and as a standalone.
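As a small, hedged example of the library point above, the following sketch reports which BLAS/LAPACK backend NumPy was built against (Intel MKL in many HPC software stacks) and times a large matrix multiply, one of the routines such libraries accelerate. The matrix sizes are arbitrary and the GFLOP/s figure is only a rough indicator, not a benchmark claim.

# Minimal sketch: inspect the linear-algebra backend and time a matmul.
import time
import numpy as np

np.show_config()  # prints the BLAS/LAPACK build information

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b  # dispatched to the underlying BLAS (sgemm)
elapsed = time.perf_counter() - start

print(f"{n}x{n} matmul: {elapsed:.2f} s, "
      f"~{2 * n**3 / elapsed / 1e9:.0f} GFLOP/s")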

At Intel, we are also working closely with customers—from academic research to government studies to enterprise applications—to help them design their first AI initiatives and run proofs of concept.

Meanwhile, we continue to evolve underlying technology with new developments such as the next generation of Intel® Omni-Path Architecture, with 200 Gbps being planned for roll-out in 2019. This new release is being designed to support the highly scalable cluster architectures that will be needed for simulation, analytics and AI. For information on Intel Omni-Path Architecture advancements, have a look at this newsletter from Intel.

Our industry collaboration has also created a series of pre-tested, quick-to-deploy infrastructure solutions that are optimized for analytics applications and HPC clusters. These Intel® Select Solutions are designed to help accelerate time to breakthrough, actionable insight, and new product design, and are available for key HPC and AI use cases such as data visualization, modeling and simulation, and genomic analytics.

Conclusion

Application areas such as life science, finance, and manufacturing are producing compute workloads like modeling/simulation, HPDA, AI, and visualization, all of which place heavy demands on the infrastructure. On the application side, combinations of these, such as modeling and AI, can provide the basis for new discoveries. It’s an exciting development, but it must all be underpinned by a smart infrastructure. For more insight on how to use HPC infrastructure to get going with AI applications, read this solution brief, ‘Optimizing HPC Architecture for AI Convergence’, or check out this blog, ‘HPC and AI: Intertwined Futures’.


Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at www.intel.com.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel® product specifications and roadmaps.



About Stephan Gillich

Stephan Gillich is Director of Technical Computing GTM at Intel Deutschland GmbH. He is responsible, via a cross-organizational team, for the strategy and execution of positioning Intel products in the HPC, workstation, and data analytics markets in EMEA. He joined Intel in 1992 in its Supercomputing Division as a computer scientist. Since then, he has gained broad experience covering client and server products in technical, consumer, and vertical industry management positions. As a member of the steering board of the DVB Organization, he has worked on advanced standardization in the media industry. Recently, Stephan managed enterprise competitive marketing. He holds a Diploma (master’s degree) in Computer Science from the Technische Universität München.