Unleashing an Unmatched Portfolio to Move, Store and Process Data

The data-centric era is upon us, with data being generated at a pace of 1.7MB per second for every person on earth. This data represents a monumental opportunity for our customers to drive new societal insights, create business opportunities, and redefine our world. Intel recognized this opportunity long ago and made a strategic shift in silicon innovation toward a data-centric infrastructure that will move, store and process data from core data centers to the intelligent edge, and everywhere in between. Today, we expanded our unmatched portfolio of silicon with a host of new products and unveiled collaborations with industry leaders around the world to deliver data-centric infrastructure and services.

Let’s start with the new 2nd Generation Intel® Xeon® Scalable processors, the lifeblood of our data-centric strategy and the product of more than 20 years of continuous innovation. We integrated workload acceleration for AI inference and network functions, and delivered a once-in-a-generation innovation with Intel® Optane™ DC persistent memory. We created both standard and custom SKUs that deliver the performance and efficiency our customers need across real-world workloads, spanning the data center to the intelligent edge.

The 2nd Gen Intel Xeon Scalable processor integrates Intel® DL Boost technology for improved AI inference. Building on the optimizations we delivered in the last generation with Intel AVX-512, this is the only CPU on the market with integrated inference acceleration. Because most inference is integrated into the workload or application, customers benefit from the performance and flexibility that built-in acceleration provides. The combination of hardware and software improvements means this new Intel Xeon Scalable platform delivers up to 14X the deep learning inference performance of first-gen Intel Xeon Scalable processors released in July 2017. With our Intel Xeon Platinum 9200 processors, a new family of advanced CPUs, we can deliver double that performance. We’re delighted to say that from day one, all major frameworks, including TensorFlow, PyTorch, Caffe, MXNet and PaddlePaddle, as well as AI tools and operating environments like ONNX and OpenVINO, support Intel DL Boost, and we’ve already seen customer performance scale with early examples including Alibaba, AWS, Baidu, JD.com, Microsoft and Tencent. We’re excited to see the traction of this technology in both data center and intelligent edge workloads.
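
For developers curious what DL Boost’s built-in acceleration looks like beneath those frameworks, here is a minimal C sketch of the AVX-512 VNNI INT8 dot-product instruction it is built around. It is illustrative only; in practice the frameworks listed above generate this code for you, and the file name and build line shown are assumptions, not a requirement.

/* Minimal sketch of the INT8 dot-product (VNNI) instruction behind Intel DL Boost.
 * Illustrative only; real inference workloads reach it through frameworks such as
 * TensorFlow or OpenVINO rather than hand-written intrinsics.
 * Example build (assumed): gcc -O2 -mavx512f -mavx512vnni vnni_dot.c
 */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 64 unsigned 8-bit activations and 64 signed 8-bit weights. */
    uint8_t act[64];
    int8_t  wts[64];
    for (int i = 0; i < 64; i++) { act[i] = 2; wts[i] = 3; }

    __m512i a   = _mm512_loadu_si512(act);
    __m512i w   = _mm512_loadu_si512(wts);
    __m512i acc = _mm512_setzero_si512();

    /* vpdpbusd: multiply groups of four u8*s8 pairs and accumulate them
     * into sixteen 32-bit lanes in a single instruction. */
    acc = _mm512_dpbusd_epi32(acc, a, w);

    /* Horizontal sum of the sixteen 32-bit partial results. */
    int32_t total = _mm512_reduce_add_epi32(acc);
    printf("dot product = %d\n", total);   /* expect 64 * 2 * 3 = 384 */
    return 0;
}
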

The race is on to virtualize network functions and deliver 5G-ready infrastructure. We completed hundreds of network function proofs-of-concept and deployments with leading communications service providers to deliver network-optimized processors. These processors are tuned for network function virtualization (NFV), with cores, frequency and thermals optimized for network requirements. In addition, our latest Intel® Speed Select technology enables dynamic control of core frequency to accelerate high-priority network services and maintain QoS in a virtualized environment. We’re delighted that industry leaders such as Advantech, Dell EMC, Ericsson, H3C, HPE, Huawei, Nokia, ZTE and more will deliver solutions featuring these processors, helping establish Intel as the architecture, from the network core to the intelligent edge, that will deliver the full promise of 5G. Those hundreds of POCs were delivered with the top 15 communications service providers in the market, the first movers racing toward NFV, who together represent 65% of the world’s networks. Today, 30% of the networks across the globe are virtualized, meaning there is plenty of opportunity ahead for cloud-like, Intel-based networks.
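
To make the per-core frequency-prioritization idea concrete, here is an illustrative C sketch that raises the minimum frequency of one core through the standard Linux cpufreq sysfs interface. This is a generic stand-in for the concept, not the Intel Speed Select tooling itself (which is managed through platform firmware and the intel-speed-select utility in the Linux kernel tree); the core number and target frequency are assumptions.

/* Illustrative-only sketch: prioritizing a core that hosts a latency-critical
 * virtual network function by raising its minimum frequency through the
 * standard Linux cpufreq sysfs interface. This is NOT the Intel Speed Select
 * interface itself; it only shows the per-core frequency-control idea.
 */
#include <stdio.h>

static int write_sysfs(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = (fputs(value, f) >= 0) ? 0 : -1;
    fclose(f);
    return rc;
}

int main(void)
{
    /* Hypothetical: core 4 runs the high-priority VNF; floor it at 2.4 GHz. */
    const char *min_freq_path =
        "/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq";
    if (write_sysfs(min_freq_path, "2400000") == 0)   /* value is in kHz */
        printf("raised minimum frequency for cpu4\n");
    return 0;
}
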

We also delivered Intel Optane DC persistent memory, a breakthrough that fundamentally disrupts decades of memory and storage thinking. This technology creates a persistent memory tier that allows users to affordably scale memory capacity, enables data persistence in main memory rather than disks, and unleashes in-memory software to deliver new levels of insight. We’ve seen incredible application advancements with this technology across in memory analytics and databases, high capacity virtualization environments, and content delivery networks. Some examples of breakthrough performance for historic memory bound applications include innovators like Aerospike, Microsoft, Oracle, Redis Labs, SAP, SAS, Apache Spark, SUSE, VMware and more. We’re delighted to see dozens of leading companies join us today on the path to ramp of optimized software, systems and services to the market.
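
To give a flavor of the programming model behind that persistent memory tier, here is a minimal C sketch using the open-source PMDK libpmem library to write data that persists in byte-addressable memory without any block I/O. The DAX mount point and file name are assumptions for illustration.

/* Minimal sketch of the persistent-memory programming model that Intel Optane
 * DC persistent memory enables in App Direct mode, using the PMDK libpmem
 * library. The mount point /mnt/pmem and file name are assumptions.
 * Example build (assumed): gcc pmem_hello.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Memory-map a file on a DAX-mounted persistent-memory filesystem. */
    char *addr = pmem_map_file("/mnt/pmem/hello", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Store data directly in byte-addressable persistent memory ... */
    strcpy(addr, "data survives a power cycle");

    /* ... and flush it to the persistence domain; no block I/O involved. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);   /* fallback when not real pmem */

    pmem_unmap(addr, mapped_len);
    return 0;
}
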

We’ve coupled this innovation with new security capabilities on the Xeon Scalable platform, adding hardware mitigations for side-channel attacks and integrating new security libraries to help developers more easily take advantage of every security feature we deliver in our technology.

Helping customers move, store and process data takes more than processor innovation. We’ve advanced the entire silicon and solution portfolio with 25/50/100Gb Ethernet connectivity via Intel Ethernet 800 Series Adapters, 24x7 storage availability with our latest dual-port Intel Optane DC SSDs, efficient warm storage with Intel SSDs based on QLC NAND, power- and space-efficient network processing with Intel® Xeon® D-1600 processors, and our next leap in programmable acceleration with Intel® Agilex™ FPGAs. We’ve also taken our latest technology and delivered 21 Intel® Select Solutions: reference designs with pre-verified configurations featuring the latest Intel technology, optimized software and unique optimizations from leading systems providers that accelerate deployment of customers’ most-demanded workloads across AI, analytics, multi-cloud, network transformation and HPC. To learn more about all of these innovations, I welcome you to check out my colleagues’ blogs on the Intel IT Peer Network and engage with the Intel team. Together, we’ll lay an infrastructure foundation to unleash data.

Check out Lisa’s conversation on the Intel Chip Chat podcast here.