Supercharging Data Center Performance while Lowering TCO: Versatile Application Acceleration with FPGAs 

For a wide range of data center workloads, FPGAs can dramatically speed performance, minimize added power and lower Total Cost of Ownership (TCO).

For example, Intel partner Swarm64 realized a 10X improvement in real-time database query performance using Intel® FPGAs, resulting in a projected >40% TCO savings over three years.1 Similarly, using Intel® Arria® 10 FPGA-based acceleration, Intel and the Broad Institute achieved a 50X performance improvement over Intel® Xeon® E5 processors alone on the Pairwise HMM algorithm, previously a bottleneck in the genomic sequencing process.2 Using an Arria 10 FPGA, Attala Systems cuts through the software overhead of a traditional “OS plus NIC” approach to achieve up to a 72% latency reduction in emerging NVMe storage designs.3

To simplify and speed access to these types of FPGA-accelerated solutions, Intel developed a compelling new approach that combines hardware platforms, a software acceleration stack, and ecosystem support. Today, we unveiled the first in a family of Intel® Programmable Acceleration Cards which, along with the previously announced Acceleration Stack for Intel® Xeon® CPU with FPGAs, will make it easier and faster to deploy FPGAs in the data center.

An Easier Path to FPGA Acceleration

With the introduction of Intel® Programmable Acceleration Cards (Intel® PACs), FPGA acceleration can be accomplished much more quickly. The first card in this series, the Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA, plugs easily into any Intel Xeon processor-based server and boosts performance while minimizing power consumption for complex, data-intensive applications such as AI inference, video streaming analytics, database acceleration, and more.

Fig. 1 Intel Programmable Acceleration Card (Intel PAC) with Intel Arria 10 GX FPGA

The versatility of FPGAs enables acceleration of a wide range of applications across networking, storage, and computing infrastructure. In fact, FPGAs can be reconfigured on the fly to accelerate varying workloads as they change throughout the course of the day.

Enabling the Intel PACs is our recently announced Acceleration Stack for Intel® Xeon® CPU with FPGAs. The acceleration stack includes APIs, frameworks, software libraries, and tools that let application developers work at a higher level of abstraction, without worrying about the inner workings of the FPGA. It also makes it easier to migrate code to new platforms in the future.
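The idea of working at a higher level can be illustrated with a minimal sketch. Note that everything below is hypothetical for illustration only: the class and function names are invented and are not the actual Intel acceleration stack API. The pattern it shows is a thin dispatch layer that routes a named workload to a registered accelerated implementation when one is present, and falls back to a plain CPU path otherwise, so application code never touches the FPGA directly.

```python
# Illustrative sketch only: a minimal workload-dispatch layer in the
# spirit of an acceleration stack. All names here are hypothetical.

class AcceleratorRegistry:
    """Maps workload names to accelerated implementations."""

    def __init__(self):
        self._impls = {}

    def register(self, workload, impl):
        """Register an accelerated implementation for a workload name."""
        self._impls[workload] = impl

    def run(self, workload, data, fallback):
        """Run the accelerated implementation if one is registered
        (e.g. an FPGA-backed kernel); otherwise run the CPU fallback."""
        impl = self._impls.get(workload, fallback)
        return impl(data)


def cpu_sum_squares(values):
    """Plain CPU reference implementation of a toy workload."""
    return sum(v * v for v in values)


registry = AcceleratorRegistry()
# A real accelerator plug-in would register its own kernel here;
# for this sketch we simply stand in with the CPU routine.
registry.register("sum_squares", cpu_sum_squares)

# The application asks for the workload by name and never sees
# whether it ran on an FPGA or fell back to the CPU.
result = registry.run("sum_squares", [1, 2, 3], fallback=cpu_sum_squares)
```

The application-facing call site stays identical whether or not an accelerator is installed, which is the portability property the stack is meant to provide.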

Fig. 2 Acceleration Stack for Intel® Xeon® CPU with FPGAs

The stack also provides an easy way to drop in accelerator functions developed by the ecosystem for specific workloads. A rapidly growing list of worldwide ecosystem partners is developing market-specific solutions in the areas of artificial intelligence, real-time big data analytics, video processing, financial acceleration, genomics, cybersecurity, and more.

Fig. 3 Ecosystem Support for the Acceleration Stack for Intel® Xeon® CPU with FPGAs

What Applications Do You Want to Accelerate?

Taken together, the programmable acceleration cards, acceleration stack, and ecosystem solutions offer a fast path to the performance, power, and TCO benefits of FPGA-based acceleration. What applications do you have that need to be accelerated? Check out the resources below to get started.


1 Based on database queries run with Swarm64 acceleration vs. no acceleration. Testing performed by Swarm64. System configuration: Supermicro* SuperServer 2028U-TR4+ with Super X10DRU-i+ mainboard, 2X Intel® Xeon® E5-2695 v4 CPUs, 8X Samsung* 32GB DDR4-2400 ECC RAM. Note: this is SQL to a relational database, not SQL to semi-structured or unstructured data. Projected Total Cost of Ownership (TCO) savings for Swarm64DB over a PostgreSQL database over a 3-year period; Swarm64 estimate, derived from published commercial cloud service prices on Sept. 26, 2017.

2 Based on average Giga-Cell Updates per Second (GCUPS) performance using Intel® Arria® 10 GX FPGA acceleration compared to a single Intel® Xeon® processor core with Intel® Advanced Vector Extensions (Intel® AVX) technology. Testing performed by Intel. System configuration: Intel® Xeon® processor E5-2697 v2 @ 2.70 GHz, 2 sockets/12 cores per socket, 128 GB RAM, 2 TB Seagate HDD ST2000DM001 with Intel Arria 10 GX FPGA, compared to Intel® Xeon® processor E5-2699 v4 @ 2.20 GHz, 2 sockets/22 cores per socket, 256 GB RAM, 2 TB Intel® SSD Data Center P3700 Series with single-core Intel® AVX technology.

3 Read/write latency performance for the Attala Systems Development Host NVMe Adapter compared with a Mellanox ConnectX*-4 Lx EN RNIC with Linux* OS on an Intel® Xeon® processor E5-2600 v3. Testing performed by Intel. System configuration: Quanta D51B with Intel® Xeon® processor E5-2600 v3, Attala Systems Development Host NVMe Adapter with Intel® Arria® 10 FPGA, Intel SSD DC P3700 STD, compared to Quanta D51B with Intel Xeon processor E5-2600 v3, Mellanox ConnectX*-4 Lx EN RNIC, Intel SSD DC P3700 STD.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.



John C. Sakamoto

About John C. Sakamoto

John C. Sakamoto is vice president and general manager for the Data Center and Communications businesses in the Programmable Solutions Group at Intel Corporation. He guides teams working with customers in leading areas for FPGA use, including data center acceleration, NFV, and wireline and wireless infrastructure.

Sakamoto joined Intel in 2015 with the acquisition of Altera Corp., where he had served most recently as vice president for integration planning. Before being appointed to lead the integration team, Sakamoto had spent three years as vice president and general manager of sales for Europe, the Middle East and Africa. Based in London, he was responsible for overseeing Altera’s sales, technical support, strategic accounts and channel strategy throughout the region.

Earlier in his Altera career, Sakamoto managed the company’s operations organization, which included team members in Silicon Valley and Penang, Malaysia. His responsibilities in that role encompassed supplier management, planning, quality and reliability, worldwide customer service, and tester hardware development. Sakamoto spent his first years at Altera managing various business units dedicated to wireline, computer, storage, medical, test and military applications. He also managed international customer marketing and the European business operations groups, with teams in China, Japan, and Ireland.

Before joining Altera, Sakamoto was an ASIC design engineer at Hitachi America Ltd. Sakamoto holds a bachelor’s degree in electrical engineering from California Polytechnic State University, San Luis Obispo.