Intel at O’Reilly: Fast Forward AI in the Data Center

Artificial Intelligence (AI) is fast becoming the transformational technology for both business and society as a whole. Gartner recently added Artificial General Intelligence, Natural Language Processing, and Deep Learning to its 2017 hype cycle, and the Web overflows with think pieces on what AI is, how to develop an AI strategy, and how to build a business around AI. Research firm Tractica estimates that the AI software market will grow from approximately $1.3 billion in 2016 to approximately $60 billion in 2025, with the hardware market likely doubling that figure over the same timeframe. In a recent blog post, I discussed the power of AI to change the world, from safer manufacturing to smart agriculture to advances in personalized medicine. But before we can use AI to solve society’s most challenging problems, we have to put in place a complete, easily accessible technology portfolio to get us there. At the recent O’Reilly AI Conference, Intel made two announcements in support of our goal of making AI accessible to everyone.

Intel® Architecture Fuels the AI Revolution

To form the foundation for AI innovation, Intel offers a comprehensive AI product portfolio with scalable performance for the widest possible range of use cases. This “edge-to-data-center-to-cloud” portfolio addresses the diverse approaches to AI required today and in the future.

AI at the Edge

  • Movidius™ – The Myriad X Vision Processing Unit provides high performance for deep neural network compute with a low power footprint, making it a strong fit for use cases like drones and digital security devices. Movidius also offers the Neural Compute Stick, which enables rapid validation and deployment of DNN inference applications at the edge in a small form factor (see the sketch following this list).
  • Intel® Saffron™ – Solutions that combine unsupervised and supervised learning, using associative memory-based reasoning techniques and transparent analysis of heterogeneous data. Saffron technology can help find and prevent fraud and crime (financial services), resolve product quality issues (manufacturing), and improve health and wellness recommendations in sports and medicine.
  • Mobileye – An advanced driver assistance system that uses vision and machine learning, data analysis, localization, and mapping to warn drivers in real time when a threat is detected.
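
To make the Neural Compute Stick item above concrete, here is a minimal sketch of running DNN inference on the stick through the Movidius Neural Compute SDK’s Python API (mvnc), as it existed at the time. The compiled “graph” file and the dummy input tensor are illustrative assumptions; in practice the network would be compiled from a trained Caffe or TensorFlow model with the SDK’s tools and fed preprocessed image data.

    # Sketch: offload DNN inference to a Movidius Neural Compute Stick (NCSDK 1.x API).
    import numpy as np
    from mvnc import mvncapi as mvnc

    # Find an attached Neural Compute Stick and open it.
    devices = mvnc.EnumerateDevices()
    if not devices:
        raise RuntimeError("No Neural Compute Stick found")
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a network previously compiled for the stick (file name is illustrative).
    with open("graph", "rb") as f:
        graph_buffer = f.read()
    graph = device.AllocateGraph(graph_buffer)

    # Dummy input standing in for a preprocessed image; the device expects float16.
    input_tensor = np.random.rand(224, 224, 3).astype(np.float16)

    # Run inference on the stick and collect the output tensor.
    graph.LoadTensor(input_tensor, "user object")
    output, user_obj = graph.GetResult()
    print("Top class:", int(np.argmax(output)))

    # Release device resources.
    graph.DeallocateGraph()
    device.CloseDevice()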

AI in the Data Center

Intel has a portfolio of AI technology for the data center, brought together under our Intel® Nervana™ AI portfolio. It pairs software tools and framework optimizations that deliver exceptional performance for data-intensive workloads (see the configuration sketch after the list below) with the Intel® Xeon® Scalable processor (which meets the machine learning and deep learning needs of most enterprises), the Intel® Xeon Phi™ processor, AI accelerators like Intel® FPGAs, and AI-specialized Intel® Nervana™ technology (codenamed Lake Crest).

  • Intel® Xeon® Scalable processors – The heart of data centers worldwide, the recently launched Intel Xeon Scalable processor was designed with significant increases in memory and I/O bandwidth and delivers up to 2.2X higher deep learning training and inference performance over prior generations. Customers like Montefiore Medical Center have already deployed Intel Xeon Scalable processors to take on advanced analytics, power their AI workloads, and address large-scale personalized medicine.
  • Intel® Xeon Phi™ processors – Our upcoming generation of Intel Xeon Phi processors (codenamed Knights Mill) will deliver significantly higher throughput when training complex neural networks, targeting workloads that require dense compute and very high levels of parallelism.
  • Intel® Nervana™ technology – The upcoming Intel Nervana technology (codenamed Lake Crest) is an ASIC designed from the ground up for neural networks, targeting the highest performance for emerging deep learning use cases.
  • Intel FPGAs – Intel Arria® and Stratix® FPGAs function as programmable hardware accelerators for specialized deep learning inference processing when combined with Intel Xeon processors.
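
As one concrete example of the framework optimizations mentioned above, the sketch below shows a common way to tune an Intel-optimized (MKL-enabled) TensorFlow 1.x build on a Xeon-based system by setting OpenMP affinity variables and TensorFlow’s thread-pool sizes. The specific thread counts and matrix sizes are illustrative placeholders, not Intel-published settings; the right values depend on the core count and the workload.

    # Sketch: thread and affinity tuning for MKL-enabled TensorFlow on Intel Xeon.
    import os
    import tensorflow as tf

    # Intel OpenMP runtime hints; values are placeholders to adjust per system.
    os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"
    os.environ["KMP_BLOCKTIME"] = "1"
    os.environ["OMP_NUM_THREADS"] = "28"   # e.g. physical cores per socket

    # TensorFlow 1.x session-level thread pools.
    config = tf.ConfigProto(
        intra_op_parallelism_threads=28,   # threads used inside a single op (e.g. a matmul)
        inter_op_parallelism_threads=2,    # independent ops run concurrently
    )

    # A toy compute-bound graph to exercise the configuration.
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    c = tf.matmul(a, b)

    with tf.Session(config=config) as sess:
        sess.run(c)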

AI in the Intel® Nervana™ DevCloud

Getting started with AI can seem like a daunting task, which is why we announced the Intel® Nervana™ DevCloud, a free development environment available to Intel® Nervana™ AI Academy members and powered by Intel Xeon Scalable processors. The DevCloud lets AI Academy members tackle their machine learning and deep learning needs through a cloud computing service, frameworks, tools, training materials, and support. To realize the promise of AI, we believe the technology must be easily accessible to all. With a goal of supporting 200,000 Intel Nervana AI Academy members by 2018, the Intel Nervana DevCloud offers students, developers, and data scientists the resources to get started in AI without significant out-of-pocket expenses.
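
To give a sense of what getting started looks like in practice, here is the kind of small model-training job a DevCloud member might run. It is a generic scikit-learn example rather than a DevCloud-specific API, and the network size and hyperparameters are illustrative.

    # Sketch: a small, self-contained training job of the sort a member might run.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Built-in handwritten digits dataset (8x8 images), so no external data is needed.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # A small fully connected neural network; layer sizes are illustrative.
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)

    print("Test accuracy: %.3f" % clf.score(X_test, y_test))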

Artificial Intelligence Center of Excellence

In addition to the DevCloud, we announced a collaboration with Tata Consultancy Services (TCS) to create an Artificial Intelligence Center of Excellence. This partnership will help Intel Nervana AI Academy members develop AI solutions. The Center of Excellence brings together the latest Intel hardware platforms, AI-optimized software, and tools, coupled with experts and developers. The TCS team will augment Intel’s experts by supporting AI Academy members through online forums, workshops and hands-on labs, and new technical content. Intel’s goal is to produce not only an exceptional product portfolio for AI but also to make that portfolio accessible to the widest possible audience, with the right levels of support and training.

AI, the next technology revolution, is upon us, and Intel alone delivers a complete, flexible product portfolio while working with developers, data scientists, and the industry to make the technology accessible to everyone. For more on the Intel Nervana DevCloud and the Artificial Intelligence Center of Excellence, visit intelnervana.com/devcloud; for more on our Intel AI strategy, visit www.intelnervana.com.