Intel Embraces the AI Spring

As described in my recent white paper, “Artificial Intelligence Reduces Costs and Accelerates Time-to-Market,” it’s an exciting time to be working with advanced analytics at Intel. Over the last several years, Intel IT has developed several successful artificial intelligence (AI) solutions, and interest in additional solutions is increasing exponentially (see the S-curve graph below). Every day—several times a day—I am approached by Intel chip design teams and others asking, “How can we use machine learning? How can we embed AI in our processes?” I see this huge spike in enthusiasm as a result of two converging trends.

[Figure: AI Adoption Increasing Exponentially]

AI Isn’t Just for Consumers—It Can Make Your Job Easier

(and it won’t eliminate it, at least if it’s up to us)

In our day-to-day consumer lives we’re surrounded by the benefits AI can provide, from great product or movie recommendations, to personal assistants in our phones or homes, to search engines “reading our minds” and knowing what we’re looking for before we even fully articulate it.

In the enterprise, the adoption of AI isn’t as mature. Reluctance to adopt AI in the enterprise can stem from several causes. First, it is more complicated and risky to make disruptive changes in an enterprise setting; there are many dependencies, legacy systems, and potentially harsh financial consequences for mistakes. Second, people tend to be more conservative and less patient with mistakes in a professional setting—they don’t mind when their phone’s personal assistant misunderstands them, but they are much less tolerant of erroneous alerts from the algorithm monitoring their production environment. Finally, AI is often viewed with suspicion or skepticism by people who fear it will be used to eliminate jobs.

But Intel IT is seeing a huge shift and much less resistance as AI proves its worth in the enterprise. Engineers at Intel are realizing AI isn’t just a tool to make their smartphones smarter, and it’s not about job elimination. Instead, AI can automate the dreary, repetitive tasks that machines handle well and let engineers focus on the work that requires human attention. In other words, AI lets engineers work more intelligently because the machines handle the simple stuff. At Intel, it is becoming obvious that AI is a powerful tool that lets experts better apply their expertise, not a way to get rid of the experts. Moreover, the AI solutions we’re working on assume there needs to be a “human in the loop” and are designed so that AI algorithms and human experts interact to get better, faster results. This is one of our guiding principles: we make sure to use the power of AI to improve people’s work lives.

In addition, AI can relieve the pressures of a demanding job that is quickly becoming humanly impossible. As the complexity, abundance, and variety of Intel’s products grow, Intel engineers are realizing that if they don’t change the way they work, they will literally be unable to do their jobs. Necessity is always a strong driver for change, and it helps engineers recognize the potential AI offers to make their work more exciting and to free them for new things they previously didn’t have time for.

Success Builds Trust and Enthusiasm

Over the last two years, we have worked with Intel’s product validation teams to develop two highly successful AI tools:

  • CLIFF (Coverage LIFt Framework). To speed validation in our chip-design process, CLIFF creates new tests for hard-to-validate functionalities so that hidden bugs are discovered as early as possible. CLIFF improves coverage of the targeted functionalities by 230x on average, compared to standard regression tests.
  • Following CLIFF’s success, we created an additional capability called ITEM (Intelligent Test Execution Management), which builds the best testing suite on a weekly basis. ITEM ensures that, from a bug-finding and functionality-coverage perspective, the teams run the most cost-effective tests. ITEM has reduced the number of required tests by 70 percent.
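ITEM’s internals aren’t described here, but the problem it addresses—picking the most cost-effective subset of tests that still covers the required functionalities—can be framed as weighted set cover and approximated greedily. The sketch below is purely illustrative (the test names, costs, and scoring are assumptions, not Intel’s actual algorithm):

```python
# Illustrative sketch: test selection framed as weighted set cover.
# Each test covers a set of functionalities and has a run cost; we greedily
# pick the test with the best new-coverage-per-cost ratio until the target
# functionalities are covered or no candidate adds coverage.

def select_tests(tests, targets):
    """tests: {name: (covered_functionalities, cost)}; targets: set to cover."""
    chosen, covered = [], set()
    remaining = dict(tests)
    while covered < targets and remaining:
        # Score each candidate by newly covered targets per unit cost.
        name, (cov, cost) = max(
            remaining.items(),
            key=lambda kv: len((kv[1][0] & targets) - covered) / kv[1][1],
        )
        gain = (cov & targets) - covered
        if not gain:
            break  # nothing left adds coverage; stop early
        chosen.append(name)
        covered |= gain
        del remaining[name]
    return chosen, covered

# Hypothetical weekly run: three candidate tests, four target functionalities.
tests = {
    "t1": ({"a", "b"}, 2.0),
    "t2": ({"b", "c", "d"}, 3.0),
    "t3": ({"d"}, 1.0),
}
suite, covered = select_tests(tests, {"a", "b", "c", "d"})
```

The greedy heuristic doesn’t guarantee the optimal suite, but it is a standard, fast approximation for this class of problem, which suits a weekly re-selection cadence.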

During the development of these tools, we built a solid partnership with the validation teams. As their trust in and understanding of AI grew, the validation teams became willing to take bold steps, letting us replace their key processes with AI-based tools. Using our key learnings from validation and other domains we’ve worked in, we have established a repeatable process for adding new AI applications. We use this same model to assure the success of AI projects in new domains or with new teams:

  1. We find an eager partner that is willing to invest and make changes. Their role includes:
    • Providing subject matter expertise, which is critical to success. They provide insights about the as-is state and how best to utilize AI, as well as help us get a deep understanding of the data.
    • Helping to harvest, clean, and adjust the data to maximize AI results.
    • Managing the change and removing any obstacles to implementing the solution. The “black box” nature of AI can create a lot of resistance, as I’ve outlined above.
  2. We choose the first project wisely. The first problem to work on should be valuable and solvable enough to prove the potential of AI.
  3. We assess the relevance and feasibility of a project from three perspectives:
    • Business value—How important is the project, is it well-defined, and is there significant potential to improve the current process?
    • Data—Does the data represent the problem we are trying to solve, and is it clean and reliable? Is the problem solvable using the available data?
    • Execution—Can we get the data in and out fast enough, and provide a reasonable level of performance to integrate with the business processes?
  4. When piloting the solution, we initially run it in parallel with the non-AI process, prove AI’s superiority in a risk-free setup, and gradually evolve toward replacement when results are good and stable enough.
  5. The first project and solution showcase what can be done. Once it begins to yield results, many new ideas arise, and we can gradually build on that to create additional capabilities and finally produce a full “AI offering” roadmap.
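Step 4—running the AI tool in parallel with the existing process and replacing it only once results are good and stable—can be made concrete with an explicit cutover criterion. The sketch below is a minimal illustration under assumed names and thresholds, not Intel’s actual criteria:

```python
# Illustrative sketch of a parallel-pilot cutover check: the AI tool runs
# alongside the existing process, and replacement is recommended only after
# it has clearly beaten the baseline for several consecutive periods.

def ready_to_replace(ai_scores, baseline_scores, min_weeks=4, margin=0.05):
    """Each list holds one quality score per parallel pilot week (higher is
    better). Recommend replacement only if, for the last `min_weeks` weeks,
    the AI beat the baseline by at least `margin` every single week."""
    if len(ai_scores) < min_weeks or len(ai_scores) != len(baseline_scores):
        return False
    recent = list(zip(ai_scores, baseline_scores))[-min_weeks:]
    return all(ai >= base + margin for ai, base in recent)

# Hypothetical pilot: the AI result is stable and clearly better
# for the last four weeks, so cutover would be recommended.
ai = [0.71, 0.78, 0.83, 0.84, 0.85, 0.86]
base = [0.70, 0.72, 0.73, 0.74, 0.73, 0.74]
ok = ready_to_replace(ai, base)
```

Requiring a sustained margin, rather than a single good week, reflects the “gradually evolve toward replacement” idea: the risk-free parallel run continues until superiority is both demonstrated and stable.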

The Future of AI at Intel

CLIFF and ITEM’s success offers concrete examples of how AI can bring high value to engineers’ work at Intel. The potent combination of interest and curiosity, necessity, and success is fueling the expansion of AI far beyond what we’ve done so far. These days we are working hard to scale the impact that AI brings to Intel even further. Our scale strategy includes joining the efforts of IT teams with the business units to accelerate the R&D pace of new AI capabilities while proliferating mature capabilities (like CLIFF and ITEM) quickly and intelligently to all relevant and critical teams across Intel. We are already applying and scaling AI to sales and marketing, and to testing processes for high-volume manufacturing. Engineers are also working on embedding AI into Intel’s products to make them smarter.

Read the IT@Intel White Papers, “Artificial Intelligence Reduces Costs and Accelerates Time-to-Market” and “Improving Sales Account Coverage with Artificial Intelligence” to find out more about how Intel IT is leading Intel’s digital transformation with AI.

Published in Categories: Artificial Intelligence, Machine Learning
Nufar Gaspar

About Nufar Gaspar

Nufar Gaspar joined Intel in 2012 after completing her Master’s in Industrial Engineering at Ben-Gurion University, with a focus on optimization, statistics, and scheduling problems. Publications from her thesis can be found at . During 2012–2015, Nufar played different roles in the creation of machine learning solutions for Intel’s design organizations. Since 2015, Nufar has led the Design Advanced Analytics team, which includes data scientists, big data software developers, product analysts, and product managers. Nufar and her team create AI and big data tools to help revolutionize how Intel designs and verifies its products. These capabilities include CLIFF (AI-based test creation to uncover hidden bugs), ITEM (intelligent test execution management to optimize the test suite used for validation), Gatekeeper Smart Filter (which predicts whether code submitted to Git is likely to be buggy), and many others. These days Nufar and her team run multiple R&D efforts utilizing state-of-the-art AI and optimization techniques, as well as scaling the existing capabilities across the various design organizations at Intel. Examples include Adaptive Testing, an online testing-manager agent that tunes its decisions in light of status versus user-defined goals, and Aided Debug, a collection of debug utilities that interact with human experts to speed up root-cause analysis.