AI on Intel Architecture

Artificial Intelligence (AI) was born at a workshop organized by John McCarthy at Dartmouth in 1956. Ever since, AI has been the branch of computer science that studies the properties of intelligence and aims to synthesize it. Over the past 60 years, the field has seen at least four cycles of peak and trough in interest and sponsorship (the troughs are known as “AI winters”). During much of this time, researchers were stymied not only by a lack of funding, but also by a lack of data, hardware, and algorithms. In the last five years, however, all this has changed.

The global growth of online activity over the past two decades has generated enormous amounts of data on every aspect of human life. In turn, the availability of “big data” has accelerated the development of algorithms for web-scale search, for statistical analysis of the billions of transactions processed each day by financial institutions and governments, and for pattern detection and recognition in unstructured data such as images, video, speech, and text.

This preponderance of data drove the development of algorithms that learn from experience (data) rather than being explicitly programmed, which in turn has revitalized the decades-old subfield of machine learning by infusing new vigor into a class of algorithms called neural networks (now known as deep learning). The availability of open data sets, open source development models, and powerful yet low-cost hardware has allowed researchers to test and improve deep learning algorithms at a rapid pace. Finally, the tighter coupling and co-evolution of hardware and software has advanced deep learning in particular and promises to advance AI in general. AI spans a broad array of technologies for automated perception, cognition, and control, and Intel is investing to lead in AI at the level of ingredients, platforms, and solutions.

Technologies for automated perception, such as computer vision and speech understanding, have evolved rapidly thanks to the recent effectiveness of deep learning. Intel offers compute, storage, network, and memory components for deep learning workloads, and actively designs, tests, and validates hardware for deep learning such as Intel® Xeon Phi™ processors, Intel® Omni-Path network adapters and switches, and 3D XPoint™ memory components.

Intel’s acquisition of Nervana brings us a full stack of datacenter capabilities for deep learning, ranging from custom hardware designed to accelerate neural networks, to low-level software kernels, to higher-order software functions that enable cloud services for image classification and speech recognition. Soon afterwards, we announced the acquisition of Movidius, a chipmaker that builds processors for visual understanding in wearables, drones, and other low-power devices.

Intel has also gained traction within the deep learning research community by contributing optimizations to open source deep learning frameworks such as Caffe*, Theano*, TensorFlow*, Torch*, and CNTK. And Intel is working closely with deep learning ISVs to cultivate an ecosystem of commercial software platforms optimized for Intel® architecture. However, the enthusiasm for deep learning is somewhat tempered within enterprises running mission-critical operations, because deep learning models do not explain how they arrive at their predictions. The unreasonable effectiveness of deep learning produces black-box models, which are anathema to practitioners in highly regulated industries such as healthcare, financial services, and transportation. These industries often require interpretable machine learning models, such as decision trees, that can produce reason codes for their decisions.
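To make that contrast concrete, here is a minimal sketch of an interpretable model, using scikit-learn and its bundled iris data set purely for illustration (neither is named in this article): the tree’s learned rules can be printed and audited, whereas the millions of weights in a deep network cannot.

# Minimal sketch: a decision tree whose learned rules are human-readable,
# in contrast to the opaque weights of a deep neural network.
# Assumes scikit-learn; the iris data set stands in for real data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction traces back to explicit threshold tests --
# the kind of "reason codes" that regulators can review.
print(export_text(tree, feature_names=list(iris.feature_names)))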

Machine learning is the study, development, and application of algorithms that learn from data to improve the operation of a system without explicit programming. It is an essential technique for building predictive applications, and it is particularly effective when data sets are too large for humans to analyze manually. According to one market research company, machine learning is expected to grow the global predictive analytics market from $2.74 billion in 2015 to $9.20 billion by 2020, a compound annual growth rate (CAGR) of 27.4%.
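The phrase “without explicit programming” is easy to illustrate with a toy example (entirely hypothetical, not tied to any Intel product): rather than hard-coding the Celsius-to-Fahrenheit formula, we estimate it from example pairs.

# Toy illustration of learning from data rather than explicit programming:
# instead of hard-coding fahrenheit = celsius * 9/5 + 32, we fit the
# coefficients from example pairs by least squares.
import numpy as np

celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
fahrenheit = np.array([-40.0, 32.0, 68.0, 98.6, 212.0])

slope, intercept = np.polyfit(celsius, fahrenheit, deg=1)
print(f"learned: f = {slope:.2f} * c + {intercept:.2f}")      # ~ 1.80 * c + 32.00
print(f"prediction for 25 C: {slope * 25 + intercept:.1f} F") # ~ 77.0 F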

Yet, despite all this learning, one wonders: isn’t there more to AI than perception?

Some argue that the real promise of AI lies in the automation of higher-order cognitive tasks, beyond the perceptual tasks automated by deep learning. Cognitive computing technologies are designed to automate tasks that rely on long-term memory and on inferential and causal reasoning. The human mind organizes memories in intricate webs of connections and patterns that enable us to determine context and meaning. These connections and patterns shift dynamically to create new memories as we encounter new information.

In 2015, Intel acquired Saffron, a company that has developed a unique cognitive computing platform. The Saffron platform is a graph-oriented, semantic, and statistical knowledge store inspired by the associative structure and function of biological neural systems. With each new experience, Saffron learns from outcomes, builds more memories, and makes new connections — giving its users the ability to address new situations in profound and creative ways. This acquisition has enabled us to bring cognitive computing capabilities into the fold of AI at Intel.
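Saffron’s internals are proprietary, but the flavor of a statistical, graph-oriented associative memory can be suggested with a few lines of entirely hypothetical Python (this is not Saffron’s design): strengthen a connection each time two entities are observed together, then recall the strongest associations.

# Hypothetical sketch of an associative memory: not Saffron's actual
# design, just the general idea of a statistical, graph-oriented store.
from collections import Counter, defaultdict
from itertools import combinations

# Each "experience" is a set of co-occurring entities (an episode).
graph = defaultdict(Counter)

def observe(entities):
    """Strengthen the connection between every pair seen together."""
    for a, b in combinations(sorted(set(entities)), 2):
        graph[a][b] += 1
        graph[b][a] += 1

def associate(entity, top=3):
    """Recall the entities most strongly connected to `entity`."""
    return graph[entity].most_common(top)

observe(["patient-42", "fever", "travel:tropics"])
observe(["patient-42", "fever", "rash"])
observe(["patient-7", "fever"])

print(associate("patient-42"))
# -> [('fever', 2), ('travel:tropics', 1), ('rash', 1)]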

With the convergence of technologies for sensing, perception, and cognition under one virtual roof and on a shared hardware infrastructure, Intel is poised to lead in the AI space. With a platform that can support a wide variety of domain-specific AI solutions in healthcare, government, transportation, financial services, and beyond, we’re powering the creation of amazing human experiences.