Accelerating the Development and Optimization of AI Solutions


When it comes to advancing artificial intelligence, we’ve come a long way in a short period of time. Deep learning and other uses of big data have taken off in the last few years, yielding breakthrough results for consumers and businesses alike.

Software plays a significant role in these breakthroughs, but challenges remain. Software needs to be designed and optimized to take advantage of the features of the underlying hardware, or performance will lag. We also need frameworks, libraries, and other tools to streamline and accelerate the development process.

At Intel, we recognize these challenges and are working actively to help data scientists and software developers overcome them. We do this by delivering a broad range of software that builds on the compute foundation provided by Intel® architecture, and by expanding training to broaden the AI talent pool. These contributions include open, high-performance library building blocks that harness the power of Intel hardware, as well as high-productivity tools and platforms that simplify and streamline workflows.

Let’s look at some examples of these freely available software offerings, ecosystem contributions, and related resources:

AI for All: Deep Learning Compute Building Blocks

Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is an open source performance library for Deep Learning (DL) applications that is designed to accelerate DL frameworks on Intel architecture. Intel MKL-DNN includes highly optimized building blocks to implement convolutional neural networks (CNN) with C and C++ interfaces. We created this project to help the DL community innovate on Intel processors.
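To make the idea of a “building block” concrete, here is a deliberately naive sketch of the forward pass of a 2D convolution, the kind of CNN primitive that a library like Intel MKL-DNN implements in highly vectorized, cache-blocked form. This is illustrative pseudocode-grade Python, not the library’s actual C/C++ API; all names here are hypothetical.

```python
# Illustrative sketch only: a naive direct 2D convolution over plain Python
# lists. Libraries like Intel MKL-DNN provide this primitive as a highly
# optimized building block; a real implementation would block for cache,
# vectorize, and handle channels, strides, and padding.

def conv2d_naive(image, kernel):
    """Direct 2D convolution (technically cross-correlation), 'valid' padding."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out

image = [[float(4 * r + c) for c in range(4)] for r in range(4)]  # 4x4 ramp
edge = [[1.0, -1.0], [1.0, -1.0]]  # simple horizontal-edge kernel
result = conv2d_naive(image, edge)
print(result)  # every 2x2 window of the ramp has the same horizontal difference
```

The inner loops are exactly where an optimized library earns its keep: the same arithmetic, reorganized for SIMD units and the memory hierarchy, can run orders of magnitude faster on Intel processors.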

In addition, Intel has its own framework, Nervana™ Neon. This framework allows us to innovate in both hardware and software to bring blazing performance to deep learning. Our Neon team works closely with our customers on solutions in multiple domains, which yields insights that improve our own implementations as well as those across the industry.

Another building block for driving innovation in DL performance is the Intel® Nervana™ Graph. Graph compilation brings advanced capabilities to Nervana Neon and other DL frameworks. In concept, it builds a directed graph of operations and then optimizes that graph. It can combine or compound operations and partition graphs for efficiency and distribution across multiple nodes.
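The “combine or compound operations” step can be sketched with a toy example. The following is not the Nervana Graph API; it is a minimal, hypothetical illustration of the general technique a graph compiler applies: represent the computation as a directed graph of ops, then rewrite a multiply feeding an add into a single fused multiply-add node.

```python
# Toy sketch of graph compilation (hypothetical, not the Nervana Graph API):
# build a directed graph of operations, then "compound" add(mul(a, b), c)
# into one fused multiply-add (fma) node, reducing op count and memory traffic.

class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, list(inputs)

def fuse_mul_add(root):
    """Recursively rewrite add(mul(a, b), c) into fma(a, b, c)."""
    root.inputs = [fuse_mul_add(i) for i in root.inputs]
    if root.op == "add":
        for i, child in enumerate(root.inputs):
            if child.op == "mul":
                others = root.inputs[:i] + root.inputs[i + 1:]
                return Node("fma", child.inputs + others)
    return root

# y = (a * b) + c  becomes a single fused node after the rewrite pass.
a, b, c = Node("input"), Node("input"), Node("input")
y = fuse_mul_add(Node("add", [Node("mul", [a, b]), c]))
print(y.op)  # prints "fma"
```

A real graph compiler runs many such rewrite passes and also decides how to partition the optimized graph across devices or nodes, but the pattern-match-and-rewrite shape is the same.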

We have also worked to optimize a number of the most popular deep learning frameworks for Intel® architecture, including neon, Caffe*, Theano*, Torch*, and TensorFlow*, enabling Intel to deliver increased value and performance for data scientists on our platform.

AI for All: Data Center Solutions

At a broader level, Intel is a major contributor to several open source projects that are accelerating the availability of solutions. Our work with Apache Spark*, for example, has been particularly useful in scaling machine learning and deep learning solutions to very large data sets. Intel is one of the top contributors to this open source project, which provides a software foundation for processing the huge amounts of data that are at the heart of AI solutions.

Intel is also a major contributor to the Trusted Analytics Platform (TAP), an open-source software project designed to accelerate the creation of advanced analytics and machine learning solutions. TAP simplifies solution development with a collaborative, flexible integrated environment. It makes all tools, components and services accessible in one place for data scientists, application developers, and system operators.

Intel® Data Analytics Acceleration Library (Intel® DAAL) helps speed big data analytics by providing highly optimized algorithmic building blocks for all stages of data analysis, supporting offline, streaming, and distributed usages. It’s designed for highly efficient data access with popular data platforms, including Hadoop*, Spark*, R, and MATLAB*. Intel DAAL helps applications make better predictions faster and analyze larger data sets with the available compute resources.
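To illustrate what a “streaming” analytics building block means in practice, here is a hedged sketch (not the Intel DAAL API) using Welford’s online algorithm: statistics are updated one observation at a time, so the full dataset never needs to fit in memory at once.

```python
# Hedged illustration of a streaming analytics building block (not the
# Intel DAAL API): Welford's online algorithm maintains running mean and
# variance incrementally, one observation at a time.

class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):  # population variance
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:  # arrives as a stream
    stats.update(x)
print(stats.mean, stats.variance)
```

The same update could run on partial results computed on separate nodes and then be merged, which is the essence of the distributed usage mode the library supports.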

There’s also a full reasoning system in this mix. Saffron Technology™, an Intel company, provides a full software stack for a reasoning system built to drive insights and decisions. The platform’s associative learning techniques can learn faster with less data, while providing transparent insights into trends, anomalies, and other characteristics of datasets. Saffron learns dynamically and continuously.

AI for All: End-to-End Solutions

To empower the ecosystem and further drive the democratization of AI, we offer the Intel® Deep Learning SDK, a free set of tools for data scientists and software developers who want to develop, train, and deploy deep learning solutions. The SDK encompasses a training tool and a deployment tool that can be used separately or together in a complete deep learning workflow. With the SDK, developers can simplify the installation of popular deep learning frameworks; visually set up, tune, and run deep learning training in the data center; and deploy high-performance inference solutions on endpoints using trained models from any source.

You can access most of these resources via links on the new Intel® Nervana™ AI Academy. This portal extends the reach of the vibrant Intel Developer Program by providing a one-stop shop for frameworks, libraries, and tools, as well as tutorials and training that help developers and data scientists accelerate the development of solutions optimized for top performance on Intel architecture.

As these examples show, Intel is serious about enabling the AI ecosystem and arming data scientists and software developers with the tools and information they need to bring solutions to market in less time. And we’re serious about this for a reason: We want to help unleash the next wave of artificial intelligence and an ever-broader spectrum of AI use cases.

For a closer look at the projects and resources highlighted here, visit the Intel Artificial Intelligence site.