Intel FPGAs Accelerate AI with Microsoft Project Brainwave

I’m excited about today’s announcement that Microsoft has chosen the Intel® Stratix® 10 FPGA to power its new deep learning platform, code-named Project Brainwave.  In this post I want to explain why real-time AI is so critical, describe how Intel FPGAs accelerate the performance of AI applications, and share how this project is a natural extension of a decades-long collaboration between Intel and Microsoft.

The Challenge of Real-Time AI

From its Bing search engine to its Azure cloud, Microsoft processes streams of data on an enormous scale, and much of that processing has to happen instantaneously. This is the case with the company’s new deep learning platform, code-named Project Brainwave, which is designed for real-time artificial intelligence (AI). Whether you’re talking about search queries, video analysis, sensor streams, or any other type of data, real-time AI means that the system applies the AI algorithm as fast as it receives data, with ultra-low latency.

For Microsoft’s engineers and data scientists, the biggest challenge here is not how fast they can train the AI model, but how they can apply AI algorithms to massive data streams in real time across a range of data types. This is a much more difficult, and computationally demanding, application than batch-processed AI or other more latency-tolerant AI applications. Real-time AI requires a special mix of software-like flexibility and hardware-like acceleration technologies in the supporting IT systems.
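To make the distinction concrete, here is a minimal sketch (not Microsoft's actual pipeline) of what a real-time inference loop implies: each item must be processed as it arrives, within a fixed latency budget, rather than accumulated and processed in batches. The `infer` function and the 5 ms budget are hypothetical placeholders.

```python
import time

LATENCY_BUDGET_MS = 5.0  # hypothetical per-item budget for a real-time system


def infer(item):
    """Stand-in for a hardware-accelerated model evaluation."""
    return item * 2  # trivial placeholder computation


def process_stream(stream):
    """Apply the model to each item as it arrives and check the deadline."""
    results = []
    for item in stream:
        start = time.perf_counter()
        results.append(infer(item))
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        # In a real-time system, exceeding the budget is a failure,
        # not merely slower service.
        assert elapsed_ms < LATENCY_BUDGET_MS, "missed real-time deadline"
    return results
```

In batch processing, throughput is what matters and latency can be amortized across the batch; in the real-time setting sketched above, every individual item carries its own deadline.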

Selecting the Right Tool for the Job

Intel offers a broad portfolio that meets AI requirements across a wide range of use cases, including our Intel® Xeon® Scalable processors, which form the basis for many of the world’s largest AI deployments, and our Intel® Xeon Phi™ accelerators, which, along with third-party GPUs, are known for AI training performance. However, as powerful as these tools are, they don’t provide the specific mix of capabilities that Microsoft wanted for Project Brainwave.

Microsoft’s need for real-time inference across a wide range of data types demands a high-performance AI hardware accelerator with software-programmable flexibility. Microsoft selected the Intel® Stratix® 10 field-programmable gate array (Intel® FPGA) for its on-chip communication blocks and synthesizable logic, which together provide high performance in deep learning across many types of data.

Intel Stratix 10 FPGA for Project Brainwave

In Project Brainwave, Intel Stratix 10 FPGAs are making real-time AI possible. Here’s how: Intel FPGAs provide completely customizable hardware acceleration that Microsoft can program and tune to achieve maximum performance from its AI algorithm and deliver real-time AI processing. Better still, these programmable integrated circuits are adaptable to a wide range of structured and unstructured data types, unlike the many specialty chips that are targeted at specific AI data types.

Intel FPGAs enable developers to design accelerator functions directly in the processing hardware to reduce latency, increase throughput, and improve power efficiency. FPGAs accelerate the performance of AI workloads, including machine learning and deep learning, along with a wide range of other workloads, such as networking, storage, data analytics and high-performance computing.

Intel FPGAs are particularly valuable when used in AI applications that require instantaneous response times. FPGAs are based on a highly parallel, low-latency architecture that supports the simultaneous processing of many operations. This is one of the keys to enabling real-time processing for applications like facial recognition, real-time language translation, and autonomous driving.
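A loose software analogy can illustrate that spatial parallelism (this is hypothetical illustrative code, not FPGA design code). On an FPGA, each stage of a computation is its own dedicated piece of hardware, so different data items occupy different stages at the same time; the sketch below models that dataflow by running each stage function in its own thread, connected by queues.

```python
import queue
import threading


def run_pipeline(stages, items):
    """Chain stage functions with queues, one worker thread per stage.

    While the last item is still entering the first stage, earlier items
    are already being processed by later stages -- loosely mimicking an
    FPGA dataflow pipeline, where every stage works in parallel.
    """
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    SENTINEL = object()  # marks end of the stream

    def worker(stage, q_in, q_out):
        while True:
            x = q_in.get()
            if x is SENTINEL:
                q_out.put(SENTINEL)
                return
            q_out.put(stage(x))

    threads = [threading.Thread(target=worker, args=(s, qs[i], qs[i + 1]))
               for i, s in enumerate(stages)]
    for t in threads:
        t.start()

    for item in items:
        qs[0].put(item)
    qs[0].put(SENTINEL)

    results = []
    while True:
        x = qs[-1].get()
        if x is SENTINEL:
            break
        results.append(x)
    for t in threads:
        t.join()
    return results
```

For example, `run_pipeline([lambda x: x + 1, lambda x: x * 2], [1, 2, 3])` streams results out in order as each item clears the final stage. The key property is that once the pipeline fills, a new result emerges every stage-interval rather than every full-pass interval, which is how deep pipelines keep latency low at high throughput.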

The Intel Stratix 10 FPGA is the industry’s first 14 nm FPGA, combining the benefits of Intel’s 14 nm tri-gate process technology with a revolutionary new architecture called HyperFlex™ to uniquely meet the performance demands of high-end compute- and data-intensive applications.

Another Innovation from our Long-Standing Partnership

Microsoft and Intel have a decades-long collaboration to maximize the performance and capabilities of data center infrastructure across a wide range of use cases. Project Brainwave is another example of the way our two companies work together, selecting the right tools for the job and rising to the challenges of today’s cloud data centers. Today, that collaboration has yielded one of the world’s most sophisticated and exciting AI deployments.



About Dan McNamara

Daniel (Dan) McNamara is senior vice president and general manager of the Network and Custom Logic Group (NCLG) at Intel Corporation. In this role, McNamara leads a global organization that delivers maximum value for Intel’s customers across the cloud, enterprise, network/5G, embedded, and IoT markets. He is responsible for the group’s product lines and business strategies, focused on powering a broad portfolio of Intel products that includes Xeon processors, SoCs, FPGAs, eASICs, full-custom silicon, software, IP, and systems and solutions. McNamara joined Intel in December 2015, upon the close of Intel’s acquisition of Altera Corporation, where he had served in various leadership roles, including vice president and general manager of Altera’s Embedded Division. McNamara has more than 25 years of experience in the semiconductor industry. Prior to Intel and Altera, he served as director of sales at StarGen Inc. and as co-founder and vice president of the startup Semitech Solutions Inc. McNamara received his bachelor’s and master’s degrees, both in electrical engineering, from Worcester Polytechnic Institute in Massachusetts.