Open Compute Project – Solutions Accelerating the Digital Transformation


This week, I again had the honor of presenting a keynote address at the Open Compute Project (OCP)* Summit.  I always welcome this opportunity to update the OCP community on our progress.

We are anticipating 50 billion connected devices by the year 2020¹, communicating over higher-bandwidth connections.  Data centers will take on more challenging workloads such as big data analytics and machine learning, and gaining actionable insights from them will require a data center transformation built on higher processing efficiency and better connectivity.  The world is deploying racks rather than individual systems.  OCP has helped shape the way our industry evolves toward more efficient and scalable computing environments.  Through collaborative innovation we are shaping the future data center.

I’m convinced the advances we’re achieving for the largest hyperscale service providers are helping organizations of all kinds and sizes deliver services more efficiently and meet the business challenges they face.  OCP is about open hardware, and Intel has supported many designs.  I want to take a moment here to highlight a few of the significant advances in compute, storage, and networking technologies and how they will affect the broader ecosystem.

One of the major new platform announcements at OCP Summit, Project Olympus*, was the result of a design collaboration with Microsoft*. The platform is designed for the next-generation Intel® Xeon® processor family (codename Skylake), which delivers new capabilities including Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Intel® Omni-Path Architecture, and Intel® Resource Director Technology.  With Intel® AVX-512, this next-generation Intel® Xeon® processor delivers up to a 2x increase in peak floating-point operations per clock cycle over Intel® AVX2², which is especially important for real-world workloads in high performance computing, data analytics, and cryptography. I’m excited to announce that integrated Intel® QuickAssist Technology will be available with the family, providing hardware-enhanced acceleration for the performance demands of securing and routing Internet traffic and for workloads such as compression and cryptographic data-security offload, thereby reserving processor cycles for application and control processing. The result of these innovations is that we will be delivering one of the highest-performance Olympus platforms at OCP.³  Select cloud service providers, including Google Cloud Platform*, have announced services based on this next-generation Intel® Xeon® processor, and additional features, systems, and software are being qualified for the general launch later this year.
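To put the “up to 2x” figure in context, here is a minimal sketch of the underlying lane-count arithmetic: AVX-512 doubles the vector width from 256 to 512 bits, so each fused multiply-add (FMA) instruction operates on twice as many elements. The assumption of two FMA units per core is illustrative only; actual counts vary by processor SKU.

```python
# Illustrative peak-FLOPS-per-cycle arithmetic (assumptions, not product specs):
# a fused multiply-add (FMA) counts as 2 floating-point operations, and we
# assume 2 FMA execution units per core for both instruction sets.

def peak_flops_per_cycle(vector_bits, element_bits, fma_units=2):
    """Peak floating-point operations per core per cycle for one SIMD width."""
    lanes = vector_bits // element_bits          # parallel elements per vector register
    return lanes * 2 * fma_units                 # 2 ops per FMA (multiply + add)

avx2_dp   = peak_flops_per_cycle(vector_bits=256, element_bits=64)   # 16 FLOPS/cycle
avx512_dp = peak_flops_per_cycle(vector_bits=512, element_bits=64)   # 32 FLOPS/cycle

print(f"AVX2 double precision:    {avx2_dp} FLOPS/cycle")
print(f"AVX-512 double precision: {avx512_dp} FLOPS/cycle")
print(f"Speedup from wider vectors: {avx512_dp / avx2_dp:.1f}x")
```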

Since we recognize the importance of high performance computing, I highlighted Intel® HPC Orchestrator, a comprehensive HPC systems management stack that combines more than 60 open source components.  Complementing this software infrastructure, we announced that we are contributing the Intel® Server Board S7200AP (Adams Pass), which supports Intel® Xeon Phi™ processors, to OCP.  This platform enables high-density computing with up to four boards in a 2U chassis.

We continue to increase storage capacity and access speed with NVM Express* (NVMe*) storage. The enhanced Intel-based Lightning design has been expanded to support up to 60 PCIe* 3.0 NVMe SSDs. This year we’ll expand our offering with SSDs in several new form factors, allowing users to select NVMe drives optimized for power efficiency, capacity, or performance.

In networking, we’re using wafer-scale integration to advance Intel® Silicon Photonics, now available in 100G CWDM4 modules and moving toward 400G in the future.

We continue to advance the technology to make compute, networking, and storage more scalable, because we believe that by 2025, 70 to 80 percent¹ of systems will be deployed in data centers where the ability to scale rapidly and efficiently will be critical to business success.  The rack is the building block of these hyperscale data centers.

Intel® Rack Scale Design (Intel® RSD) is where it all comes together. Intel RSD is a logical architecture and a set of APIs that allow us to create multi-vendor pools of compute, network, and storage resources and to compose systems from those pools as needed to support specific workloads.  We just released Intel RSD 2.1 to partners, which extends support to pooled NVMe storage such as Project Lightning.  We’re excited that this year there are OCP-compliant systems running Intel RSD on display at Intel and partner booths at OCP Summit.
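Composing a system from pooled resources is easiest to picture as a REST client asking a pod-level manager to assemble a logical node on demand. The sketch below shows roughly what such a request could look like against a Redfish-style composition endpoint; the host name, credentials, endpoint path, and payload fields are illustrative assumptions, not the documented Intel RSD 2.1 API.

```python
# Minimal sketch of composing a node through a Redfish-style composition API.
# The host, endpoint path, credentials, and payload fields below are
# illustrative assumptions, not the documented Intel RSD 2.1 interface.
import requests

PODM = "https://podm.example.com:8443"   # hypothetical pod manager address
AUTH = ("admin", "admin")                # placeholder credentials

def compose_node(name, cores, memory_gib, nvme_drives):
    """Request a logical node assembled from pooled compute, memory, and NVMe."""
    payload = {
        "Name": name,
        "Processors": [{"TotalCores": cores}],
        "Memory": [{"CapacityMiB": memory_gib * 1024}],
        "LocalDrives": [{"Type": "NVMe"} for _ in range(nvme_drives)],
    }
    resp = requests.post(
        f"{PODM}/redfish/v1/Nodes/Actions/Allocate",   # assumed composition endpoint
        json=payload, auth=AUTH, verify=False, timeout=30,
    )
    resp.raise_for_status()
    # Redfish conventionally returns the new resource's URI in the Location header.
    return resp.headers.get("Location")

if __name__ == "__main__":
    node_uri = compose_node("analytics-node-01", cores=16, memory_gib=128, nvme_drives=2)
    print("Composed node at:", node_uri)
```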

Learn more about the Open Compute Project, its partners, and how to get involved here.


1 Intel estimates, based on internal data

2 Refer to https://software.intel.com/en-us/blogs/2013/avx-512-instructions

3 https://azure.microsoft.com/en-us/blog/microsoft-reimagines-open-source-cloud-hardware/

Disclaimers

© Intel Corporation

Intel, the Intel logo, and Intel Xeon Phi are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Statements in this document that refer to Intel’s plans and expectations for the quarter, the year, and the future, are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel’s results and plans is included in Intel’s SEC filings, including the annual report on Form 10-K.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.