Enabling Innovation and Discovery across the HPC and AI Communities

ISC High Performance, previously known as the International Supercomputing Conference, is always a showcase for the great work being done by the high-performance computing (HPC) community in Europe and beyond. That’s absolutely the case this week, as the ISC 2017 conference gets under way in Frankfurt.

In countless presentations and demos, scientists and engineers are showing how they are using HPC to solve the really big problems, from enabling precision medicine and discovering new drugs to simulating complex weather systems and designing new products. The use cases for HPC go on and on, as do the societal benefits.

At Intel, we are committed to facilitating the great work being done by the legions of scientists and engineers in the HPC community. We believe that our commitment to the community is evident in the latest TOP500 rankings for supercomputers around the world, released this week at ISC. The rankings show that Intel processors were selected as the foundation for more than 90 percent of today’s TOP500 systems and that many users are also benefiting from other Intel products, such as Intel® Omni-Path Architecture (Intel® OPA), to accelerate highly parallel HPC workloads.

Our focus on helping the HPC community achieve its mission is further reflected in our newest processor platforms: the recently announced Intel® Xeon® Scalable Processors and the latest Intel® Xeon Phi™ processor, which accelerates artificial intelligence workloads.

Intel Xeon Scalable Processors give the HPC and artificial intelligence communities a significant leap forward in performance and efficiency. An innovative approach to platform design in this new processor family unlocks scalable performance for a broad range of HPC systems, from workstations and small clusters all the way up to the world’s largest supercomputers. Additionally, support for Intel® Advanced Vector Extensions 512 (Intel® AVX-512) enables Intel Xeon Scalable Processors to deliver up to double the peak FLOPS per clock cycle of the previous generation, significantly increasing performance for demanding HPC workloads[1]. These improvements show up in the latest performance benchmarks: Intel Xeon Scalable Processors deliver up to 8.2x more double-precision GFLOPS/sec than the Intel® Xeon® processor E5 (formerly code-named Sandy Bridge)[2] and a 2.27x increase over the previous-generation Intel Xeon processor E5 v4 (formerly code-named Broadwell)[3]. And for AI training and inference, Intel Xeon Scalable Processors deliver significantly more performance than the previous processor generation.
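To make the AVX-512 point concrete, here is a minimal sketch (not from the original post) of a double-precision dot product written with AVX-512 intrinsics; the 512-bit registers and fused multiply-add are the mechanism behind the higher peak FLOPS per clock. The function and file names are illustrative, and the loop assumes the array length is a multiple of eight.

    /*
     * Minimal sketch, assuming an AVX-512F capable compiler and CPU:
     * a double-precision dot product using 512-bit fused multiply-add.
     * Build example (file name is hypothetical): gcc -O2 -mavx512f dot512.c
     */
    #include <immintrin.h>
    #include <stdio.h>

    /* Dot product of two double arrays; n is assumed to be a multiple of 8,
       since one 512-bit register holds eight doubles. */
    static double dot_avx512(const double *a, const double *b, int n)
    {
        __m512d acc = _mm512_setzero_pd();
        for (int i = 0; i < n; i += 8) {
            __m512d va = _mm512_loadu_pd(a + i);
            __m512d vb = _mm512_loadu_pd(b + i);
            acc = _mm512_fmadd_pd(va, vb, acc);  /* acc += va * vb, 8 lanes per FMA */
        }
        double lanes[8], sum = 0.0;
        _mm512_storeu_pd(lanes, acc);            /* horizontal reduction in scalar code */
        for (int i = 0; i < 8; i++)
            sum += lanes[i];
        return sum;
    }

    int main(void)
    {
        double a[16], b[16];
        for (int i = 0; i < 16; i++) { a[i] = i; b[i] = 2.0; }
        printf("dot = %f\n", dot_avx512(a, b, 16));  /* expected: 240.0 */
        return 0;
    }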

Meanwhile, the upcoming Intel Xeon Phi processor, code-named Knights Mill, is optimized to help data scientists and other users accelerate deep learning training. This extension of the current Intel Xeon Phi processor family is designed to deliver higher performance on lower-precision workloads, one of the keys to training models faster. We expect this product to be in production in the fourth quarter of 2017.

Let’s look at the bigger picture. Around the world, software developers, engineers, and data scientists are working together to create new artificial intelligence (AI) systems that will bring us safer vehicles, smarter cities, better security systems, and much more. We all have a stake in their success. To that end, Intel is supporting the HPC community not only by offering products for AI, such as Intel Xeon Scalable Processors and Knights Mill, but also by optimizing leading AI frameworks and providing developers with software libraries that make AI solutions easier to deploy and higher performing. Our mission is to deliver solutions that enable discovery and innovation.
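As one illustrative example of what those libraries look like in practice (the post does not single out a specific API, so treat this as an assumption), the sketch below calls the standard CBLAS routine cblas_dgemm through Intel® Math Kernel Library to perform the dense matrix multiply that underlies most deep learning layers.

    /*
     * Illustrative sketch only: a small double-precision matrix multiply
     * through the standard CBLAS interface shipped with Intel MKL.
     * One common build line (not the only one): gcc gemm_example.c -lmkl_rt
     */
    #include <stdio.h>
    #include <mkl.h>   /* CBLAS prototypes and MKL_INT from Intel MKL */

    int main(void)
    {
        const MKL_INT m = 2, n = 2, k = 2;
        double A[4] = {1.0, 2.0, 3.0, 4.0};   /* m x k, row-major */
        double B[4] = {5.0, 6.0, 7.0, 8.0};   /* k x n, row-major */
        double C[4] = {0.0};                  /* m x n result */

        /* C = 1.0 * A * B + 0.0 * C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A, k, B, n, 0.0, C, n);

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
        return 0;
    }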

If you’re at the ISC conference this week, be sure to stop by the Intel booth to learn more about the new Intel Xeon Scalable Processors and the upcoming Intel Xeon Phi processor. To learn more about HPC technologies that are helping scientists and engineers fuel new insights, please visit intel.com/ssf.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Results are based on internal testing and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance.

[1] www.intel.com/avx512

[2] Baseline config: 1-Node, 2 x Intel® Xeon® Processor E5-2690 based system on Red Hat Enterprise Linux* 6.0, kernel version 2.6.32-504.el6.x86_64, using Intel® Distribution for LINPACK Benchmark. Score: 366.0 GFLOPS/s vs. 1-Node, 2 x Intel® Xeon® Scalable processor on Ubuntu 17.04 using MKL 2017 Update 2. Score: 3007.8 GFLOPS/s.

[3] Baseline config: 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Red Hat Enterprise Linux* 7.0, kernel 3.10.0-123, using Intel® Distribution for LINPACK Benchmark. Score: 1446.4 GFLOPS/s vs. estimates based on Intel internal testing on a 1-Node, 2 x Intel® Xeon® Scalable processor (code-named Skylake-SP) system. Score: 3295.57 GFLOPS/s.


About Trish Damkroger

Trish Damkroger is Vice President of Intel’s Data Center Group and General Manager of its Technical Computing Initiative. Her work helps shape Intel’s high-performance computing (HPC) products and services for the technical market segment. Under this umbrella are the next-generation platform technologies and frameworks that will take Intel toward exascale and advance the convergence of traditional HPC, big data, and artificial intelligence workloads. Damkroger has more than 27 years of experience in technical and managerial roles, both in the private sector and during her 15-year career with the United States Department of Energy. At the Department of Energy, she served most recently as the Associate Director of Computation (Acting) at Lawrence Livermore National Laboratory, heading a 1,000-person organization that is one of the world’s leading teams of supercomputing and scientific experts. Since 2006, Damkroger has been a leader of the annual Supercomputing Conference series, the premier international meeting for high-performance computing. She was the SC14 General Chair, headed the SC15 steering committee, led the SC16 Diverse HPC Workforce Committee, and has signed on as Vice Chair of SC18. Damkroger is also a certified coach and a strong advocate for women in science, technology, engineering, and math (STEM). She was named one of HPCwire’s People to Watch in 2014 and holds a master’s degree in electrical engineering from Stanford University.