Intel Architecture: A Critical Catalyst for the Hyperscale Computing Revolution and Beyond

Register now for Intel’s Launch Event! On July 11, hear from Intel’s executives, technology experts, and ecosystem partners about next-generation solutions that will transform businesses, industries, and ultimately lives.

Register today at launchevent.intel.com

At Intel, innovation is in our DNA. Co-founder Robert Noyce said, “Innovation is everything. When you're on the forefront, you can see what the next innovation needs to be. When you're behind, you have to spend your energy catching up.” We’ve been on the forefront of cloud computing for years, collaborating with cloud service providers (CSPs) around the globe not only to increase performance per dollar of total cost of ownership (TCO), an internal efficiency metric, but also revenue generated per dollar of TCO, a driver of differentiation and growth. Because of these deep collaborations, we have a clear view of what the next innovations need to be. This year, as AMD* tries to return to the server market and CSPs are faced with additional choices, I’d like to take a minute to share our perspective on what we’re doing to innovate ahead of the competition and enable the clouds of the future.

Enabling new and differentiated cloud services

Intel technology and product advances have already made clouds faster, more efficient and more secure, and the cloud is now poised for a new wave of applications based on analytics, artificial intelligence, visual computing, and scientific computing. Intel is hard at work on the technologies that will enable cloud service providers to deliver the new wave of applications efficiently, at scale, and with lower TCO.

This starts with our processors. Today, we deliver the system and per-core performance the industry demands, and as the upcoming Intel® Xeon® processor Scalable family (code-named Skylake) begins broadly shipping, performance will take another leap forward. Google* and Amazon* have already announced they’re utilizing these new processors to expand into new markets with cloud applications like scientific modeling, genomic research, 3D rendering, data analytics, and engineering simulation.

Beyond Intel Xeon processors, we’ve expanded our portfolio to better address a much wider range of workloads. The Intel® Xeon Phi™ processor is optimized to deliver massive parallelism and vectorization, making it well suited for analytics and machine learning applications. For emerging workloads still in development, we’ve delivered fast, low-power field programmable gate arrays (FPGAs) with IP libraries for encryption, compression, analytics, and machine learning. For service providers, advances like Intel Xeon Phi processors and FPGAs enable a broad range of high-performance, specialized applications running on a uniform, scalable hardware environment for improved TCO.
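The parallelism and vectorization these products target shows up most clearly in data-parallel kernels. As a minimal, hypothetical sketch (not Intel code, and the function and data below are illustrative placeholders), an analytics-style scoring step can be written as whole-array operations, the pattern that math libraries and compilers map onto wide vector units and many cores:

```python
# Illustrative only: a data-parallel scoring kernel expressed as whole-array
# operations. Libraries such as NumPy dispatch these to optimized native
# code; this style of workload is what wide vector units and many-core
# processors are designed to accelerate.
import numpy as np

def score(features: np.ndarray, weights: np.ndarray, bias: float) -> np.ndarray:
    """Logistic scores for a batch of samples (rows of `features`)."""
    logits = features @ weights + bias        # dense matrix-vector product
    return 1.0 / (1.0 + np.exp(-logits))      # element-wise, vectorizable

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((100_000, 64))    # 100K samples, 64 features each
    w = rng.standard_normal(64)
    print(score(x, w, 0.1)[:5])
```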

And true to Intel’s rich history of innovation, we’re actively focused on leading the next wave of compute evolution for artificial intelligence workloads with Intel® Nervana™ technology, silicon purpose-built for neural networks that delivers high performance for deep learning training. Intel’s sweeping processor portfolio addresses the varied and changing cloud workload landscape, something competitors cannot achieve with one or two products.

Optimizing cloud efficiency and TCO at scale

CSPs need more than just raw per-workload performance, though. For more than a decade we’ve collaborated on joint engineering with some of the largest service providers, customizing Intel processors for their unique application and infrastructure use cases so they can deploy at scale and grow their services on demand with lower TCO. We’ve innovated around security, integrating Intel® Trusted Execution Technology into Intel Xeon processors to make the cloud more secure. We’re also making cloud storage bigger, faster, and more efficient at scale, as demonstrated by our collaboration with IBM* on Intel® Optane™ Solid State Drive storage technology. And we’re using wafer-scale integration to push Intel® Silicon Photonics networking to speeds of 400 Gbps, dramatically increasing bandwidth and transfer speeds within the data center.

Efficiency at scale also extends to hypervisors, virtual machines (VMs), and software containers. Intel’s broad portfolio of processors has long been optimized for VMs, which lets service providers select the Intel solution best suited to each application while still ensuring VMs and containers can be deployed where needed, avoiding the VM compatibility and migration issues that can plague mixed-architecture data centers. To ensure the levels of virtualization flexibility, performance, and security that our customers demand, Intel has invested heavily in hardware-assisted virtualization enhancements (Intel® Virtualization Technology) and collaborated closely with the industry-leading hypervisors that underlie the majority of the world’s on- and off-premise virtualized resource pools. Introducing a different compute architecture, such as AMD processors, adds data center complexity by requiring customers to create isolated compute silos, since AMD VMs cannot be live-migrated to and from Intel-based cloud computing pools.
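To make the migration mechanics concrete, here is a minimal, hypothetical sketch of a live migration using the libvirt Python bindings. The host URIs and VM name are placeholders, not a reference deployment; live migration generally assumes the source and destination hosts expose compatible CPU feature sets, which is what a uniform-architecture pool preserves.

```python
# Illustrative sketch of a live VM migration between two hosts using the
# libvirt Python bindings (package `libvirt-python`). All names are
# placeholders; compatible CPU feature sets on both hosts are assumed.
import libvirt

SRC_URI = "qemu:///system"                 # placeholder source host
DST_URI = "qemu+ssh://dest-host/system"    # placeholder destination host
DOMAIN  = "example-vm"                     # placeholder VM name

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName(DOMAIN)

# VIR_MIGRATE_LIVE keeps the guest running while memory pages are copied.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

dst.close()
src.close()
```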

Unleashing innovation through open, ubiquitous cloud

The innovations that drive cloud computing also extend beyond high-performance Intel processors. It takes a community of innovators working together to create a standards-based ecosystem of hardware and software. We must not only develop advanced technology, but also work together to make it simple and accessible to developers in every field, so the power of the cloud can be brought to bear on business, scientific, and societal problems.

At Intel, we contribute expertise and technology to open source hardware communities like the Open Compute Project* (OCP) and the Open Data Center Committee*. We have introduced a new data center architecture called Intel® Rack Scale Design (Intel® RSD) that includes open-sourced management software based on the Distributed Management Task Force Redfish* API standard. Intel® RSD enables cloud providers to dynamically compose systems from physical hardware resource “pools” into the most efficient configuration for each workload. The recent OCP Summit showcased several OCP-compliant systems running Intel® RSD. We are also a large contributor to OpenStack* and other open source software projects that are critical to cloud efficiency and scale.
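As a rough, hypothetical illustration of what a Redfish-based management flow can look like (the endpoint, credentials, and composition payload below are placeholders; the allocation path is an assumption modeled loosely on Intel® RSD’s Pod Manager API and may differ by version and implementation), a client might enumerate managed systems and request a composed node like this:

```python
# Illustrative sketch of talking to a Redfish-style management service with
# the `requests` library. BASE, AUTH, and the composition endpoint/payload
# are placeholders and assumptions, not Intel RSD documentation.
import requests

BASE = "https://podm.example.com:8443"   # placeholder management endpoint
AUTH = ("admin", "password")             # placeholder credentials

# Standard DMTF Redfish: enumerate the systems the service manages.
systems = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    print(member["@odata.id"])

# Hypothetical composition request: ask the service to allocate a logical
# node with a minimum core count and memory capacity drawn from resource pools.
payload = {
    "Processors": [{"TotalCores": 16}],
    "Memory": [{"CapacityMiB": 65536}],
}
resp = requests.post(f"{BASE}/redfish/v1/Nodes/Actions/Allocate",
                     json=payload, auth=AUTH, verify=False)
print(resp.status_code, resp.headers.get("Location"))
```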

Intel’s mature and vibrant ecosystem is a result of continuous commitment and investment, alongside deep CSP partnerships. We are more than just a silicon provider, and that sets us apart from silicon vendors trying to enter the server market segment.

We continue to march to the drumbeat Robert Noyce established more than 30 years ago, and we’re doing it side-by-side with our customers and partners. Because we’re on the forefront of transformation, we’re spending our energy increasing performance, enabling scale, reducing TCO, creating technologies that differentiate cloud services, and helping cloud service providers bring their own innovations to the market quickly and profitably.

To learn more, visit www.intel.com/cloud or join the Intel® Cloud Insider Program. Sign in today and explore!

Intel, the Intel logo, Intel Nervana, Intel Optane, Intel Xeon Phi, and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.