In Part 1 of this blog post, I explored the synergies between artificial intelligence and high performance computing, and how organizations are using HPC to accelerate AI and make new things possible. In this post, I'll dive into some of the work Intel is doing to help organizations bring HPC capabilities into their AI solutions.
My team at Intel focuses on HPC platform software. We work to ease the pain of creating and maintaining HPC systems on the software side, and to expose the full power of the underlying hardware to end users. In a nutshell, our goal is to make HPC ubiquitous and transparent to data scientists and analysts alike.
From my own experience, I know that what any business relying on analytics values is fast access to insights, and more of them. That is a lofty goal. If the IT infrastructure is set up so that data scientists and analysts can seamlessly take advantage of HPC systems for advanced analytics, whether on premises or in the cloud, we will have reached it.
Diving deeper, our product, Intel® HPC Orchestrator, is the Intel-supported version of a Linux Foundation project called OpenHPC.
OpenHPC is a new community project that has gained broad support from research institutions, system vendors, and ISVs working on some of the world's most difficult problems, such as finding a cure for cancer or analyzing patterns in nature to provide earlier warnings of disasters. These workloads, and many emerging ones, require tackling many parts of a problem simultaneously, which makes them well suited to parallel execution.
That’s where we come in. The goal of this community is to build a consistent software platform to support both traditional and emerging workloads that benefit from HPC. The use of HPC in machine learning and, more broadly, artificial intelligence is an example of these emerging workloads. To put it simply, with Intel HPC Orchestrator and the OpenHPC community, we are providing the base platform for artificial intelligence as well as traditional HPC. These applications, and many like them, can take advantage of parallel processing across multiple nodes in a cluster or supercomputing environment.
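To make the idea of parallel processing across nodes concrete, here is a minimal, hedged sketch of the data-parallel pattern these platforms enable: split a dataset across workers, compute partial results in parallel, then reduce them into one answer. On a real cluster this map/reduce step would typically run across nodes via MPI; this illustration (not Intel code) uses Python's multiprocessing on a single node, with function names invented for the example.

```python
# Illustrative sketch of the data-parallel pattern HPC clusters enable.
# On a real HPC system this map/reduce would span nodes via MPI;
# multiprocessing stands in here for a self-contained, single-node example.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    """Each worker computes its share of the global reduction."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the data into roughly equal chunks, one per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    # Reduce the partial results (MPI_Reduce plays this role on a cluster).
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    # Matches the serial result, but the work was done in parallel.
    print(parallel_sum_of_squares(data))
```

The pattern is the same whether the "workers" are cores on one machine or thousands of nodes; the platform software's job is to make the latter feel as routine as the former.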
Intel HPC Orchestrator is an essential part of the Intel® Scalable System Framework, which defines solutions optimized across all elements of compute, storage, networking, and software. Fast compute is paramount in our evolving ecosystem, and it is enabled through the Intel® Xeon® and Intel® Xeon Phi™ processor families; fast storage and I/O come through Intel® Optane™ technology and Intel solutions for Lustre; and on the fabric side we have Intel® Omni-Path Fabric and Silicon Photonics technologies.
Fusing it all together is Intel HPC Orchestrator, our HPC software platform, the glue that integrates these elements for the best performance. With this software platform, a data scientist running machine learning algorithms or a business analyst using HPDA applications can focus on their science and their insights while still squeezing every ounce of performance out of their computing environment.
Fueling Artificial Intelligence and Machine Learning
When it comes to building on the natural synergies of AI and machine learning, Intel has a lot to offer. Intel has a vast portfolio of offerings in the artificial intelligence/machine learning space—all the way from high-end processors to software platforms and libraries that support machine learning frameworks.
On the hardware side, we support mainstream artificial intelligence workloads with the Intel Xeon processor family. Our customers get an additional boost from Intel Xeon Phi processors on computationally heavy workloads, and because Intel Xeon Phi processors are binary compatible with Intel Xeon processors, no code changes or recompilation are necessary. Developers keep the same ease of programming when writing scalable applications, and by staying on a CPU architecture they can tap into vast scalability and directly addressable memory. That helps explain why Intel processors power more than 97 percent of AI workloads today. That is a huge number!
You may remember that we recently added Altera FPGAs to the Intel mix to provide best-in-class energy efficiency along with outstanding reconfigurability, which eliminates the need to replace hardware and leads to lower TCO. So watch for Intel® Arria® 10 FPGA related news.
Driving Communications Efficiency
Of course, parallel processing across multiple nodes raises concerns about communications efficiency. To meet this challenge head-on, Intel Omni-Path fabric improves price-performance while reducing communication latency.
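Why does communication latency matter so much? A classic way to characterize it is a ping-pong microbenchmark: time many small round trips between two endpoints and halve the average to get one-way latency. Real fabric measurements use MPI-based tools running between cluster nodes; the sketch below (an assumption for illustration, not an Intel benchmark) shows the same measurement idea between two local processes.

```python
# Minimal ping-pong latency sketch (illustrative only -- real fabric
# benchmarks run MPI point-to-point messages between cluster nodes).
import time
from multiprocessing import Process, Pipe

def echo(conn, iterations):
    # The "remote" side simply echoes every message back.
    for _ in range(iterations):
        conn.send_bytes(conn.recv_bytes())

def pingpong_latency(iterations=1000, payload=b"x"):
    parent, child = Pipe()
    p = Process(target=echo, args=(child, iterations))
    p.start()
    start = time.perf_counter()
    for _ in range(iterations):
        parent.send_bytes(payload)
        parent.recv_bytes()
    elapsed = time.perf_counter() - start
    p.join()
    # One-way latency is half the average round-trip time.
    return elapsed / iterations / 2

if __name__ == "__main__":
    print(f"one-way latency: {pingpong_latency() * 1e6:.1f} us")
```

At scale, every synchronization step in a distributed training run pays this latency cost, which is why shaving microseconds off the fabric translates directly into faster time to insight.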
Taking this a step further, on the software side, Intel offers a wide range of software resources to help accelerate data analytics and simplify the development of applications. You can learn more about these contributions in a blog by my colleague Bill Savage: Accelerating the Development and Optimization of AI Solutions.
In addition, Intel supports education by providing comprehensive training on machine learning tools and Intel Xeon Phi processors to academic institutions, helping foster academic research on next-generation algorithms.
Ready for a deeper dive? To learn more about simplifying the installation, management, and ongoing maintenance of HPC systems, explore the features of Intel HPC Orchestrator. And if you have thoughts you’d like to share on the use of HPC in AI, my team would welcome your input. Just send us a note via my Twitter account.