Last night at SC15, Diane Bryant spoke about the future of HPC—new applications, new audiences, new architectures. Intel has been involved in high-performance computing for over 25 years, and today, nearly 89 percent of the world’s 500 largest supercomputers run on Intel® architecture.
But HPC is at an inflection point, with growing demand from existing users as they grapple with more data and more complex models, and from new classes of applications that are turning to HPC to gain insights from Big Data streaming in from our connected world. This increase in data, complexity, and audiences requires a complete re-examination of how systems are designed.
The challenges of extracting continued performance from HPC systems are well documented. Taking HPC to the next level will require more than powerful processors. We need leaps forward in memory, I/O, storage, reliability, and power efficiency, and we need innovations in these areas to work together in a scalable and balanced way.
Intel has been working on these next-gen challenges for decades. Our Intel® Xeon® E5 processors and Intel® Xeon Phi™ coprocessors are designed for HPC. And our new Intel® Omni-Path Architecture is an HPC fabric that can scale to tens of thousands of nodes.
We’ve introduced innovative memory technologies like 3D XPoint technology, used to create fast, inexpensive, and nonvolatile storage memory. We’re also continuing to improve Lustre* software, the most widely used parallel file system software for HPC.
But next-gen HPC requires more than a collection of parts. The future will require a rethinking of the entire system architecture, a new system framework to ensure that all these parts work together seamlessly and efficiently.
That’s why we’ve developed the Intel® Scalable System Framework. It combines all the elements I just mentioned, and others, into a scalable system-level solution that is more deeply integrated than ever before. It is a flexible blueprint for designing balanced and efficient HPC systems that can scale from small to large, address data- and compute-intensive workloads, and ease procurement, deployment, and maintenance, while being based on standard x86 programming models. Customizability is key. This framework will allow users to adapt their HPC system procurement to their application needs—to tune for high I/O or compute, for example.
Soon, Intel will publish a reference architecture and a series of reference designs for the Intel® Scalable System Framework that will simplify the development and deployment of HPC systems for a variety of industries.
We’ve got to make it easier to use these systems, since HPC is moving beyond its traditional technical and scientific roots into business, education, even the world of dating. With the advent of Big Data analytics, everyone from retail chains to social media sites needs HPC-caliber systems to make sense of the reams of data cascading in. And when it comes to machine learning, Andrew Ng, Associate Professor at Stanford University, Chief Scientist at Baidu, and Chairman and Co-founder of Coursera, has a great quote: “Leading technology companies are making more and more investments in HPC and supercomputers. I think the HPC scientist could be the future heroes of the entire field of AI.”
Intel is working closely with a number of partners to bring the Intel® Scalable System Framework to market in 2016. Many of our partners have opened Centers of Excellence, where customers can collaborate with experts from Intel and our partners to optimize their codes for HPC systems. The ability to buy easy-to-use HPC systems will make HPC practical for organizations tackling new business and social problems—organizations that don’t have the technical staffing required by the massive academic and research HPC systems of today.
Since Intel entered the HPC field more than 25 years ago, we’ve worked to democratize HPC. Now we’re transforming it to enable the next 25 years of innovation with the Intel® Scalable System Framework!
Learn more about Intel® Scalable System Framework