ISC High Performance, previously known as the International Supercomputing Conference, is always a showcase for the great work being done by the high-performance computing (HPC) community in Europe and points beyond. That’s absolutely the case this week, where the ISC 2017 conference is under way in Frankfurt.
In countless presentations and demos, scientists and engineers are showing how they are using HPC to solve the really big problems, from enabling precision medicine and discovering new drugs to simulating complex weather systems and designing new products. The use cases for HPC go on and on, as do the societal benefits.
At Intel, we are committed to facilitating the great work being done by the legions of scientists and engineers in the HPC community. We believe that our commitment to the community is evident in the latest TOP500 rankings for supercomputers around the world, released this week at ISC. The rankings show that Intel processors were selected as the foundation for more than 90 percent of today’s TOP500 systems and that many users are also benefiting from other Intel products, such as Intel® Omni-Path Architecture (Intel® OPA), to accelerate highly parallel HPC workloads.
Our focus on helping the HPC community achieve its mission is further reflected in our newest processor platforms — the recently announced Intel® Xeon® Scalable Processors and the latest Intel® Xeon Phi™ processor, which accelerates artificial intelligence workloads.
Intel Xeon Scalable Processors give the HPC and artificial intelligence communities a significant leap forward in performance and efficiency. An innovative approach to platform design in this new processor family unlocks scalable performance for a broad range of HPC systems — from workstations and small clusters all the way up to the world’s largest supercomputers. Additionally, support for Intel® Advanced Vector Extensions 512 (Intel® AVX-512) enables Intel Xeon Scalable Processors to deliver up to double the peak FLOPS per clock cycle of the previous generation, significantly increasing performance for demanding HPC workloads. These improvements are demonstrated by the latest performance benchmarks, which show that Intel Xeon Scalable Processors deliver up to 8.2x more double-precision GFLOPS/sec than the Intel® Xeon® processor E5 (formerly codenamed Sandy Bridge) and a 2.27x increase over the previous-generation Intel Xeon processor E5 v4 (formerly codenamed Broadwell). And for AI training and inference, Intel Xeon Scalable Processors deliver significantly more performance than the previous processor generation.
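To see why wider vector registers raise the per-cycle peak, note that peak double-precision FLOPS per cycle can be estimated as vector lanes × 2 (a fused multiply-add counts as two floating-point operations) × FMA units per core. The sketch below illustrates that arithmetic; the specific FMA-unit counts are illustrative assumptions (they vary by processor SKU), not published specifications.

```python
# Rough estimate of peak double-precision FLOPS per core per clock cycle.
# A fused multiply-add (FMA) counts as two floating-point operations.

def peak_dp_flops_per_cycle(vector_bits: int, fma_units: int) -> int:
    """Peak DP FLOPS/cycle = lanes * 2 ops (FMA) * FMA units per core."""
    lanes = vector_bits // 64  # number of 64-bit doubles per vector register
    return lanes * 2 * fma_units

# AVX2 (256-bit) vs. AVX-512 (512-bit), each assumed to have 2 FMA units:
avx2 = peak_dp_flops_per_cycle(256, 2)    # 8 lanes-equivalent -> 16 FLOPS/cycle
avx512 = peak_dp_flops_per_cycle(512, 2)  # -> 32 FLOPS/cycle
print(avx2, avx512, avx512 // avx2)       # the wider registers double the peak
```

Doubling the register width doubles the lanes, and with the same number of FMA units per core the peak per cycle doubles as well, which is the "up to double" figure cited above.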
Meanwhile, the upcoming Intel Xeon Phi processor, code-named Knights Mill, is optimized to help data scientists and other users accelerate deep learning training. This extension of the Intel Xeon Phi processor family is designed to offer higher performance when running lower-precision workloads, which is one of the keys to training models faster. We expect this product to be in production in the fourth quarter of 2017.
Let’s look at the bigger picture. Around the world, software developers, engineers, and data scientists are working together to create new artificial intelligence (AI) systems that will bring us safer vehicles, smarter cities, better security systems, and much more. We all have a stake in their success. To that end, Intel is supporting the HPC community not only by offering products for AI like Intel Xeon Scalable Processors and Knights Mill but also by optimizing leading AI frameworks and providing developers with software libraries that make AI solutions easier to deploy and higher performing. Our mission is to deliver solutions that enable discovery and innovation.
If you’re at the ISC conference this week, be sure to stop by the Intel booth to learn more about the new Intel Xeon Scalable Processors and the upcoming Intel Xeon Phi processor. And to learn more about the HPC technologies that are helping scientists and engineers fuel new insights, please visit intel.com/ssf.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Results are based on internal testing and are provided to you for informational purposes. Any differences in your system hardware, software, or configuration may affect your actual performance. www.intel.com/avx512 Baseline config: 1-Node, 2 x Intel® Xeon® Processor E5-2690 based system on Red Hat Enterprise Linux* 6.0, kernel version 2.6.32-504.el6.x86_64, using Intel® Distribution for LINPACK Benchmark. Score: 366.0 GFLOPS/s vs. 1-Node, 2 x Intel® Xeon® Scalable processor on Ubuntu 17.04 using MKL 2017 Update 2. Score: 3007.8 GFLOPS/s.
Baseline config: 1-Node, 2 x Intel® Xeon® Processor E5-2699 v4 on Red Hat Enterprise Linux* 7.0, kernel 3.10.0-123, using Intel® Distribution for LINPACK Benchmark. Score: 1446.4 GFLOPS/s vs. estimates based on Intel internal testing on a 1-Node, 2 x Intel® Xeon® Scalable processor (codenamed Skylake-SP) system. Score: 3295.57 GFLOPS/s.