The vision that computers could emulate human reasoning and decision making arose in the 1940s—soon after the development of modern computers themselves. There have been long periods when progress was scant, but Artificial Intelligence is now poised to take off. Today, I'm taking a closer look at the breakthroughs finally making AI a reality.
To understand why now is the time, let’s look for a minute at then. What has changed? The challenge with AI has always been to understand how humans represent knowledge and how they apply it to make decisions. Initially, most AI efforts revolved around creating expert systems. The idea was to capture the knowledge of experts along with a set of rules that governed how to apply it. This followed closely our approach to programming computers in general: if x is true, then do y.
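That "if x, then y" approach can be made concrete with a minimal sketch of a rule-based expert system. The rules and facts below are invented for illustration; a real expert system would encode hundreds or thousands of rules elicited from domain experts.

```python
# A minimal sketch of a rule-based "expert system": knowledge is captured
# as explicit if/then rules, and the engine fires whichever rules match
# the observed facts. (Rules here are invented, purely illustrative.)

def diagnose(facts):
    """Apply hand-written rules to a set of observed facts."""
    rules = [
        (lambda f: "fever" in f and "cough" in f, "possible flu"),
        (lambda f: "fever" in f and "rash" in f, "possible measles"),
        (lambda f: "cough" in f, "possible cold"),
    ]
    conclusions = []
    for condition, conclusion in rules:
        if condition(facts):                # if x is true...
            conclusions.append(conclusion)  # ...then do y
    return conclusions

print(diagnose({"fever", "cough"}))  # ['possible flu', 'possible cold']
```

Every piece of knowledge must be written down by hand, which is exactly why this approach proved so hard to scale.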
Early AI programs learned to play checkers and chess, sometimes well enough to beat skilled human players. Enthusiasm was high, other applications followed, and in 1965 Nobel laureate Herbert Simon declared that "machines will be capable, within twenty years, of doing any work a man can do."
But deployment of these expert systems proved challenging. For one thing, there aren't that many experts in some areas. And experts often can't describe how they apply their knowledge to make decisions; they rely on their gut. Finally, implementing the rule sets proved tedious and error-prone. So expert systems, while embedded in limited form in some applications, remained for many years impractical for large-scale AI.
An alternative to manually developing expert systems is machine learning. Rather than teaching the computer everything it needs to know, the idea is to let the computer learn what it needs in a way similar to how humans learn—by experiencing and observing the result. One requirement of this approach is that the computer must be able to quickly process large amounts of data. Researchers developed algorithms for organizing that data, along with models for representing and applying it inspired by the brain's own neural networks. Like human brains, machine learning applications get better and better the more they are used.
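The contrast with hand-written rules can be sketched with one of the oldest learning models, a single perceptron: instead of being told the rule, it adjusts its internal weights after each observed outcome and improves with experience. This is a toy illustration, not any particular production algorithm.

```python
# A minimal sketch of learning from examples rather than hand-coded rules:
# a single perceptron adjusts its weights after each observed outcome,
# getting better with experience. (Toy data; illustrative only.)

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred       # observe the result...
            w1 += lr * err * x1      # ...and adjust the model accordingly
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Learn the logical AND function purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
for (x1, x2), label in data:
    assert (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
```

Nobody wrote an AND rule here; the model recovered it from four labeled examples. Scale the examples up to millions and the need for vast data, storage, and processing power becomes obvious.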
However, machine learning has had its own limitations. Until recently, there just wasn't enough digital data for machine learning applications to learn from—nor were there systems capable of storing and processing it—so they couldn't deliver useful results.
So what has changed, and why is machine learning now carrying the banner of AI and promising to give us everything from better business decisions to more personalized Internet services to self-driving cars? Big Data is the answer.
Big Data is another way of referring to the availability of vast amounts of data coming from the Internet, sensors, and the coming Internet of Things — along with the technology to store and analyze it and the computing power to process it. And our ability to capture data, move it to where it needs to be, aggregate it, and process it is growing rapidly. 5G network services, for example, promise far greater bandwidth with less latency, and they offer the potential to converge disparate network technologies to create a seamless flow of information.
At Intel, we're developing processors that deliver the computing power needed for AI. The Intel® Xeon Phi™ processor, for example, "on loads" machine learning algorithm processing onto a bootable x86 CPU, rather than offloading it to a separate coprocessor. That eases development and dramatically enhances performance by distributing processing across many cores and processors. And the on-package high-bandwidth memory of Intel Xeon Phi processors brings all that data needed for machine learning algorithms even closer to the processor, which speeds performance even more. In other words, we deliver significantly higher machine learning performance without compromising the ease of software development and portability of code optimizations across Intel® Xeon® and Intel Xeon Phi processor-based platforms.
That’s why the time is now for us—as an industry—to strengthen our commitment to AI. Breakthroughs in AI techniques like machine learning will bring us new capabilities and new applications, so AI can be applied to more and more situations — and solve more and more problems.
We’re inspired by the possibilities of AI, and Intel is committed—not just to a particular approach and technology—but to the vision of its early proponents. We think they were right, and we think the time is now.