AI – Rapidly Changing How We Live, Work, and Play

AI is all around us, from the commonplace (talk-to-text, photo tagging, fraud detection) to the cutting edge (precision medicine, injury prediction, autonomous cars). The growth of data, better algorithms, and faster compute capabilities are driving this revolution in artificial intelligence.

Machine learning, and its subset deep learning, are key methods for the expanding field of AI. Deep learning is a set of machine learning algorithms that use deep neural networks to power advanced applications such as image recognition and computer vision, with wide-ranging use cases across a variety of industries (see the sketch after the list):

  • Cloud computing: Detection of inappropriate photos uploaded by users on social media platforms
  • Marketing: Tracking sponsor logos in televised sporting events, allowing sponsors to calculate a return on investment of their sponsorship dollars
  • Healthcare: Detection of anomalies in medical imaging, flagging unusual patterns that are not easily recognized for additional medical attention
  • Smart homes: Facial recognition in home security cameras to detect family members and automatically unlock the front door
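
To make "deep neural network" concrete, the sketch below implements the core operation of a convolutional neural network (CNN) layer, the kind of network behind the image-recognition use cases above: slide a small filter over an image and apply a nonlinearity. The 3x3 filter values are illustrative, not trained weights, and a real network stacks many such layers.

    import numpy as np

    def conv2d(image, kernel):
        """Valid 2-D convolution (cross-correlation) over a grayscale image."""
        h, w = kernel.shape
        rows = image.shape[0] - h + 1
        cols = image.shape[1] - w + 1
        out = np.empty((rows, cols))
        for r in range(rows):
            for c in range(cols):
                out[r, c] = np.sum(image[r:r+h, c:c+w] * kernel)
        return out

    image = np.random.rand(28, 28)             # stand-in for a 28x28 photo
    edge_filter = np.array([[-1.0, 0.0, 1.0],  # a simple edge detector
                            [-2.0, 0.0, 2.0],
                            [-1.0, 0.0, 1.0]])
    feature_map = np.maximum(conv2d(image, edge_filter), 0.0)  # ReLU
    print(feature_map.shape)                   # (26, 26)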

To deliver optimal solutions for each customer’s unique machine learning requirements, Intel offers the most flexible and performance-optimized portfolio of AI solutions. Customers can choose based on their primary decision criteria and unique environment, from general-purpose Intel® Xeon® processors to more workload-optimized solutions such as Intel® Xeon Phi™ processors or FPGA-based systems.

FPGA technology is uniquely positioned to address the challenges and opportunities of AI

In the rapidly evolving field of AI, algorithms and approaches are changing at breakneck speed, creating demand for new ways to extract value from growing amounts of data. FPGAs, or field-programmable gate arrays, can help address some of these challenges. As reprogrammable circuits that can be tailored to provide hardware acceleration of key functions, FPGAs bring increased throughput, lower latency, and improved power efficiency to important workflows. That same reprogrammability also gives FPGAs the flexibility to keep pace with the latest needs of the industry and to offer the most advanced capabilities.
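
Throughput and latency are the two figures of merit accelerators are judged on, and they pull in different directions as batch size grows. The hedged sketch below shows how they are commonly measured for a batched inference function; run_inference is a hypothetical placeholder (simulated with a sleep), not an Intel API.

    import time

    def run_inference(batch):
        """Placeholder for a hardware-accelerated inference call."""
        time.sleep(0.001 * len(batch))  # pretend 1 ms of work per image
        return [0] * len(batch)

    def benchmark(batch_size, iterations=100):
        batch = list(range(batch_size))
        start = time.perf_counter()
        for _ in range(iterations):
            run_inference(batch)
        elapsed = time.perf_counter() - start
        latency_ms = elapsed / iterations * 1000        # time per batch
        throughput = batch_size * iterations / elapsed  # images per second
        print(f"batch={batch_size}: {latency_ms:.2f} ms/batch, "
              f"{throughput:.0f} images/s")

    for size in (1, 8, 32):
        benchmark(size)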

How Intel is enabling AI with FPGAs

One of the biggest challenges of implementing FPGAs is the work needed to lay out the specific circuitry for each workload and algorithm, and to develop custom software interfaces for each application. To make this easier, the Intel® Deep Learning Inference Accelerator (Intel® DLIA) was designed to deliver the latest deep learning capabilities via FPGA technology as a turnkey solution. Intel® DLIA combines hardware, software, and IP into an end-to-end package that provides superior power efficiency for deep learning inference workloads.

The Intel® DLIA brings together an Intel® Xeon® processor and an Intel® Arria® 10 FPGA with Intel’s robust software ecosystem for AI and machine learning, including frameworks such as Intel-optimized Caffe and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN). This enables end users to develop accelerated deep learning inference solutions without having to spend time and money developing IP and middleware interfaces.
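
For orientation, here is what inference with pycaffe looks like; Intel-optimized Caffe exposes the same Python interface. The file names (deploy.prototxt, model.caffemodel) and blob names ('data', 'prob') are model-dependent assumptions, and the DLIA-specific device setup is not shown.

    import numpy as np
    import caffe  # pycaffe; Intel-optimized Caffe exposes the same API

    caffe.set_mode_cpu()

    # Load a trained classification model. The file names here are
    # placeholders; substitute your own network definition and weights.
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

    # Feed one 3-channel 224x224 image (random data stands in for a real,
    # preprocessed photo). The input blob name ('data') and output blob
    # name ('prob') depend on the model definition.
    net.blobs['data'].reshape(1, 3, 224, 224)
    net.blobs['data'].data[...] = np.random.rand(1, 3, 224, 224)
    probs = net.forward()['prob'][0]
    print('predicted class:', probs.argmax())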

Future-proof your hardware

The Intel® Deep Learning Inference Accelerator will come with state-of-the-art intellectual property (IP) for convolutional neural networks (CNNs), supporting targeted CNN-based topologies and variations, all reconfigurable through software. As the industry develops new models, adopts different precisions, and embraces new standards, new IP and software packages will become available, allowing the same hardware investment to adapt to new uses and higher performance.
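
As one example of what adopting "different precisions" means in practice, the sketch below performs symmetric linear quantization of float32 weights to int8, a common reduced-precision representation for inference. It is a generic illustration, not the Intel® DLIA's actual quantization scheme.

    import numpy as np

    def quantize_int8(weights):
        """Map float32 weights to int8 values plus a single scale factor."""
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(64).astype(np.float32)
    q, scale = quantize_int8(w)
    error = np.abs(w - dequantize(q, scale)).max()
    print(f"scale={scale:.5f}, max reconstruction error={error:.5f}")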

The Intel® Deep Learning Inference Accelerator will be available in early 2017. For more information, visit: http://www.intel.com/content/www/us/en/design/data-centers/server-accelerators/canyon-vista/intel-deep-learning-inference-accelerator.html