AWS and Intel Partnership Remains Strong Amid Buzz of Custom Chip

This week at Amazon Web Services’ re:Invent* there was a flurry of announcements. Amazon Web Services (AWS) and Intel share a passion for constant innovation and have a long-term partnership to deliver new compute capabilities in the cloud. Intel was heavily involved in the momentum at re:Invent, helping to deliver new offerings for high performance computing, big data, and machine learning, and to democratize artificial intelligence: specifically, bringing reinforcement learning to the broad developer audience through the cool AWS DeepRacer* with Intel Inside®.

AWS announced new EC2 C5n instances, providing elastic, scalable networking for high performance computing, as well as new EC2 P3dn instances. Additionally, they highlighted the EC2 High-Memory Instances certified for SAP applications and the EC2 z1d instances for design automation that were announced earlier this year. All of these instances feature custom Intel® Xeon® Scalable processors designed through the Amazon and Intel collaboration! Intel-based instances are generally available across AWS regions wherever customers need new compute power, though not every instance type is offered in every region.

C5n, P3dn, z1d Instances

Amazon Web Services (AWS) launched two new cloud instances for compute- and data-intensive workloads, powered by Intel® Xeon® Scalable processors, that support 100 Gbps networking as well as a network interface supporting MPI communication that can scale to tens of thousands of cores.

The first instance is the compute-intensive C5n, which improves on the baseline C5 with more memory capacity and faster networking. Both the C5 and new C5n are powered by 3.0 GHz Intel® Xeon® Platinum 8000 series processors, and are offered in the same capacities, from a single core (2 vCPUs) all the way up to 36 cores (72 vCPUs). C5n provides up to 100 Gbps of network bandwidth, four times the network throughput of the recently launched C5 instances. A wide range of applications such as analytics, machine learning, big data, and data lake applications can benefit from this improved performance.

Fig. 1: Amazon Cloud

C5n instances are aimed at high-performance web serving, scientific modelling and simulations, batch processing, distributed analytics, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding. The new instances are currently available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and AWS GovCloud (US-West) regions.

In addition, the new P3dn.24xl instance delivers the fastest machine learning training in the cloud with 100 Gbps of network throughput. It features 96 vCPUs from the latest Intel® Xeon® Scalable processors with Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instructions, 768 GB of instance memory, and 2 TB of local NVMe storage.

Fig. 2: Amazon Cloud

Also highlighted at AWS re:Invent were the recently announced, high-frequency z1d instances. Z1d instances use custom Intel® Xeon® Scalable processors running at up to 4.0 GHz, powered by sustained all-core turbo boost, perfect for Electronic Design Automation (EDA), financial simulation, relational database, and gaming workloads that can benefit from extremely high per-core performance. The fast cores allow you to run your existing jobs to completion more quickly than ever before, giving you the ability to fine-tune your use of databases and EDA tools that are licensed on a per-core basis. A number of unique innovations underpin the z1d instance, including the joint engineering between AWS and Intel that made it possible.

Ladies and Gentlemen, Start Your Engines! AWS DeepRacer with Intel Inside

One of the most exciting announcements at the event, which included a workshop, a racetrack, and a new racing league, is the AWS DeepRacer—a 1/18th scale radio-controlled, four-wheel-drive car. There’s an Intel Atom® processor onboard, a four megapixel camera with 1080p resolution, fast (802.11ac) WiFi, multiple USB ports, and enough battery power to last for about two hours. The Intel Atom® processor runs Ubuntu* 16.04 LTS, ROS (Robot Operating System), and the OpenVINO™ computer vision toolkit.

Fig. 3: Intel Amazon AI

Like the programmable AWS DeepLens* camera that debuted last year, AWS DeepRacer is built on Intel® technology. The Intel® Distribution of OpenVINO™ toolkit optimizes convolutional neural network (CNN) workloads, extends them across Intel® hardware (including accelerators), and maximizes performance. The OpenVINO™ toolkit also improves time to market via a library of functions and pre-optimized kernels.

Reinforcement learning is one of the technologies used to make self-driving cars a reality; the AWS DeepRacer is the perfect way for you to go hands-on and learn all about it, or to share with the soon-to-be drivers in your family. This announcement has guaranteed me the position of “Best Mom Ever” at home with my twin teenage daughters!
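To give a flavor of what the DeepRacer is learning under the hood, here is a toy reinforcement-learning sketch. This is not the DeepRacer training stack (which runs in the AWS cloud); it is a minimal tabular Q-learning example where an agent learns to drive down a one-dimensional "track" to a finish line, illustrating the basic loop of acting, observing a reward, and updating value estimates.

```python
import random

# Toy Q-learning sketch: states 0..4 form a 1-D track; the finish line
# is state 4. Actions move the agent left (-1) or right (+1). This is
# an illustration of the reinforcement-learning idea, NOT DeepRacer's
# actual training pipeline.

N_STATES = 5
GOAL = 4
ACTIONS = (1, -1)            # right, left (ties in max() break toward right)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: mostly exploit the best-known action,
            # sometimes explore a random one.
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == GOAL else 0.0
            best_next = max(q[(s2, act)] for act in ACTIONS)
            # Standard Q-learning update rule.
            q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

def greedy_policy(q, s):
    """The learned behavior: pick the highest-value action in state s."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

After training, the greedy policy drives right from every state, i.e., straight toward the finish line. DeepRacer applies the same idea with a neural network instead of a table, and camera images instead of a state index.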

Improve Your TCO using Intel-powered AWS Instances

Our Intel booth was bustling with activity demonstrating how cloud end users can improve their overall total cost of ownership. Our GE Healthcare demo was popular; it highlighted the value of application optimization, using the Intel® Distribution of OpenVINO™ toolkit and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) to improve image recognition performance by up to 10 times on Intel® Xeon® Scalable processors. At the TSO Logic demo, attendees learned how moving to the latest Intel® Xeon® Scalable processor instances (e.g., C5, M5) can reduce per-instance infrastructure costs by as much as 57%.
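A per-instance cost reduction like the one TSO Logic described compounds quickly across a fleet. The back-of-the-envelope sketch below shows the arithmetic; the hourly rates are made-up placeholders, not real AWS prices, so substitute your own region's on-demand rates.

```python
# TCO back-of-the-envelope. The 57% figure in the text comes from the
# TSO Logic analysis; the rates below are illustrative placeholders only.

def annual_savings(old_rate, new_rate, instances, hours=24 * 365):
    """Yearly cost difference from moving a fleet to newer instances.

    Returns (dollars saved per year, fractional cost reduction).
    """
    old_cost = old_rate * instances * hours
    new_cost = new_rate * instances * hours
    return old_cost - new_cost, 1 - new_cost / old_cost

# Hypothetical fleet of 50 instances, old rate $0.20/hr, new rate $0.086/hr
# (chosen so the reduction works out to 57%, matching the cited figure).
saved, pct = annual_savings(old_rate=0.20, new_rate=0.086, instances=50)
```

With those placeholder numbers, a 50-instance fleet saves roughly $50,000 per year; the point is that even modest per-instance deltas are worth modeling at fleet scale.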

Function-as-a-Service Collaboration with AWS

What is Function-as-a-Service (FaaS)? FaaS is the model behind “serverless” computing and “serverless” architectures. I often find the term “serverless” an oxymoron because, well, the code still runs on servers, despite the high level of abstraction of the software function from the hardware. We support this new service model through our work on Intel® Clear Containers and support for Docker* Swarm and Kubernetes* orchestration. With AWS and the new Firecracker* technology, Intel is now able to support another FaaS offering. Firecracker takes a different approach: a lightweight micro-hypervisor that isolates microservices and functions with less performance overhead and improved security, and it currently supports only Intel® processors.
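To make the "function" in Function-as-a-Service concrete: with FaaS you write only a function, and the platform supplies the servers. The sketch below uses the AWS Lambda Python handler convention (an `event` dict and a `context` object); invoking the handler locally with a sample event, as done here, is just for illustration, since in production the platform calls it for you.

```python
import json

# Minimal FaaS sketch: a single function is the entire deployable unit.
# The (event, context) signature follows AWS Lambda's Python convention.

def handler(event, context=None):
    """Respond to an invocation event with a small JSON payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation with a sample event, mimicking what the platform does.
response = handler({"name": "re:Invent"})
```

Whether the platform runs this inside a container or a Firecracker microVM is invisible to the function itself, which is exactly the abstraction (and the "serverless" naming quirk) described above.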

There was so much to see, learn, and experience at AWS re:Invent, but the week really highlighted all the ways AWS and Intel work together to bring new and innovative services and solutions to market. I’d love to hear about your experience at re:Invent! Check out my interview with theCUBE and tweet to me @RaejeanneS.