As customers in every field seek the efficiencies of cloud computing, the need for greater versatility to run new, diverse applications is creating challenges for cloud-ready data center service providers. While the cloud traditionally thrived on hardware and software homogeneity, these new workloads—like analytics, artificial intelligence (AI), and high-performance computing—demand specialized computing resources that challenge homogeneity, tax capacity, and strain data center power. Field programmable gate arrays (FPGAs) offer cloud service providers the ability to accelerate these specialized workloads via fast, energy-efficient processors that can be customized for specific applications. At Intel, we’re working to make FPGAs more accessible to application developers, so they can apply this exciting technology to this new wave of evolving workloads. And we recently announced several advances designed to do just that.
FPGAs are hardware-programmable semiconductor chips that can be used to implement specialized functions. FPGAs optimize workloads alongside general-purpose CPUs, delivering significant performance gains with lower power consumption while still offering the service provider a uniform hardware environment. At Intel we offer an array of FPGAs, with a common software framework, that work in conjunction with Intel® Xeon® processors. FPGA accelerators are configured for specific workloads and environments, and they can be dynamically reconfigured in the data center as workload capacity changes. The key to tapping the benefits of FPGAs in the data center is twofold: create a development environment that is productive and compatible with the skills application developers already have, and offer a suite of tools compatible with data center orchestration and management tools for ease of deployment.
Last week we announced three new FPGA acceleration libraries providing pre-developed, off-the-shelf functionality in areas critical to many new applications. These libraries go beyond competing, FPGA-hardware-focused offerings: they optimize both the IP that runs on the FPGA and the IP that runs on the Intel® Xeon® processor. They are also used in a fashion similar to the Intel Architecture software-library programming paradigm that application developers already know. The acceleration libraries we’re introducing include support for:
- Artificial Intelligence: Image Recognition. Image recognition is becoming important to machine learning applications in many industries. Convolutional neural networks (CNNs), a key component of image recognition, are processor intensive and rely on parallelism to achieve acceptable performance. Real-time inference requires low latency to ensure customer service-level agreements are met, and this is where FPGAs can be used to accelerate inference. Intel’s CNN engine for FPGAs simplifies the effort to implement image-recognition inference within customer applications, with benefits in performance, performance/watt, and agility.
- Data Analytics: Recommendation Engine. Many new business applications incorporate recommendation engines that correlate product or media offerings with preferences a consumer has previously exhibited. Some of the underlying algorithms, like alternating least squares, map very well onto FPGAs, and we are creating a library for real-time processing of these algorithms.
- Data Compression. Many data-intensive applications need to compress data for storage or transmission over a network. This new IP library supports widely used compression algorithms with dynamic compression ratios, so developers can choose the compression algorithm best suited to their use case. When compression executes on Intel FPGAs, it can reduce processing latency and free the CPU to perform other operations.
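To see why CNN inference in the first item is so parallelism-hungry, consider the convolution at its core. The sketch below is a deliberately naive NumPy implementation, not Intel’s CNN engine: every output pixel is an independent multiply-accumulate, which is exactly the kind of work an FPGA can spread across many hardware units at once.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2D convolution -- the operation a CNN layer
    repeats millions of times per image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output element is an independent multiply-accumulate,
            # so all of them can be computed in parallel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny horizontal edge-detector kernel applied to a toy "image".
edge_kernel = np.array([[1.0, -1.0]])
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
result = conv2d(img, edge_kernel)
print(result)
```

On a CPU these multiply-accumulates run largely in sequence; an FPGA implementation lays them out spatially, which is where the performance and performance/watt gains come from.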
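The alternating least squares algorithm mentioned for recommendation engines can be sketched in a few lines of NumPy. This is a generic, dense-matrix toy version (it ignores missing ratings and is not Intel’s library IP): the point is that each alternating step reduces to many independent least-squares solves, the kind of regular, parallel arithmetic that maps well onto FPGAs.

```python
import numpy as np

def als(R, k=2, n_iters=20, reg=0.1, seed=0):
    """Minimal alternating least squares matrix factorization.
    R: (users x items) rating matrix. Returns U, V with R ~= U @ V.T."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.standard_normal((n_users, k))
    V = rng.standard_normal((n_items, k))
    I = reg * np.eye(k)  # ridge regularization keeps the solves stable
    for _ in range(n_iters):
        # Fix V, solve a small regularized least-squares problem per user.
        U = np.linalg.solve(V.T @ V + I, V.T @ R.T).T
        # Fix U, solve the symmetric problem per item.
        V = np.linalg.solve(U.T @ U + I, U.T @ R).T
    return U, V

# Toy ratings: two users with similar taste, one with the opposite taste.
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0]])
U, V = als(R)
print(np.round(U @ V.T, 1))  # low-rank reconstruction of R
```

The per-user and per-item solves in each half-step are independent of one another, which is why the algorithm parallelizes so naturally.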
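The compression trade-off described in the last item can be illustrated with the standard zlib (DEFLATE) codec available in Python; this stands in for the idea only, not for Intel’s compression IP. Higher compression levels shrink the data further but cost more compute, which is exactly the work an FPGA can take off the CPU.

```python
import zlib

# Repetitive payload so the effect of the compression level is visible.
data = b"cloud data center workload " * 1000

for level in (1, 6, 9):  # fastest ... default ... best ratio
    compressed = zlib.compress(data, level)
    # Lossless round trip: decompressing recovers the original bytes.
    assert zlib.decompress(compressed) == data
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")
```

A library exposing several algorithms and ratios lets the developer pick the point on this speed-versus-size curve that suits the use case, while the FPGA absorbs the compute cost.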
These FPGA acceleration libraries are available today for select customers using an Intel software development platform based on the Intel® Xeon® processor E5-2600 v4 product family and the Arria 10 FPGA. Intel will make them more broadly available in the future. We also demonstrated these libraries in the Intel booth at the Open Compute Project Summit this week.
FPGAs can be integrated into all aspects of data center infrastructure: servers, network, and storage. They can be deployed in any location from the data center to the edge. Unlike GPUs, they can support many functions and switch among them dynamically. And they do it with low latency and reduced power consumption.
We see cloud service providers enabling workload acceleration with FPGA IP libraries as a way to enable new services. Alibaba recently disclosed that they will enable their customers to develop and use Intel FPGAs through their Ali Cloud Service. As we combine the kind of IP library functionality I’ve described with the application developer environment of the Intel Architecture, more developers will be able to bring the FPGA acceleration advantages to the new wave of applications lining up for the cloud data center.
Learn more about our FPGA solutions here.
Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.