Some manufacturers may find the journey to predictive analytics with machine learning a daunting initiative. The task of collecting, storing, and analyzing huge volumes of disparate data in a consistent, repeatable manner can overshadow the benefits of improved product quality, improved yield, and reduced maintenance costs. But with today’s advances in machine learning algorithms and compute performance, these models are manageable and deliver significant value.
At Intel, we use machine learning to help identify tool health, predict the quality of our wafers, and increase the overall yield. We use a standard Industrial Internet of Things (IIoT) framework that separates data from logic. The predictive analytics are based on three primary building blocks:
- Connectivity. First, we identify the data available from our existing sensors and the data we can collect by integrating new sensors. Data structures vary across sensor types, so standardizing the message structures by source and type simplifies the data integration. A service-oriented architecture (SOA) provides a stable foundation that minimizes the impact of future changes and allows quick, seamless updates to existing environments.
- Transforming the data. We simplify integration with standard message structures. We deliver standardized components for visualization and analytics using third-party and open source tools. These data structures, or messages, retain their unique origins, time stamps, and other identifying factors to ensure that we can trace the resulting insights back to the source.
- Building up the hierarchy. We started with one tool to demonstrate capabilities and identify how the machine was behaving. We then added additional tools of the same type to the framework to understand how they were behaving in the context of their counterparts. Data mining allowed us to correlate the statistical patterns and establish relationships across tools, processes, and products. We now use automated models based on the patterns observed in the initial data.
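To make the "standardized message structures" idea concrete, here is a minimal sketch of what such an envelope might look like. The field names (`source`, `sensor_type`, `timestamp`) and the example tool identifier are illustrative assumptions, not Intel's actual schema; the point is that every reading, whatever its origin, carries the identifying factors needed to trace an insight back to its source.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class SensorMessage:
    """A standardized envelope for one sensor reading (hypothetical schema)."""
    source: str        # originating tool or sensor, e.g. "etch_tool_07"
    sensor_type: str   # e.g. "pressure", "temperature"
    value: float
    unit: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Normalize a raw, vendor-specific reading into the common structure
raw = {"tool": "etch_tool_07", "kind": "pressure", "reading": 101.3}
msg = SensorMessage(source=raw["tool"], sensor_type=raw["kind"],
                    value=raw["reading"], unit="kPa")
print(asdict(msg))
```

Because every message shares the same shape, downstream visualization and analytics components can consume readings from any tool without per-sensor integration code.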
Many machine learning models use unstructured data, but machine data is typically numerical, which makes it easier to evaluate. For example, a pressure measurement in an engine pipe is tied to an event, such as powering on the engine; the measurement also captures the time it takes water to travel the length of the pipe, the temperature, and the condition of the motor when the event occurs. Subject matter experts understand the acceptable ranges and control values for the specific processes and tools. The experts’ analysis is crucial for determining when the model should take action.
Combining values from additional sources also strengthens the correlation between out-of-range values and other events in the process. The more data we collect, the better our insights are into tool and process condition.
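A minimal sketch of how expert-defined control limits and cross-source correlation might fit together: readings are checked against ranges supplied by subject matter experts, and out-of-range values that occur close together in time are grouped so excursions on different sensors can be examined as one candidate event. The limit values, metric names, and the five-second window are illustrative assumptions.

```python
# Control limits supplied by subject-matter experts (hypothetical values)
LIMITS = {"pressure_kPa": (95.0, 110.0), "temperature_C": (20.0, 80.0)}

def out_of_range(metric, value, limits=LIMITS):
    lo, hi = limits[metric]
    return not (lo <= value <= hi)

def correlated_excursions(readings, window_s=5.0):
    """Group out-of-range readings that occur within `window_s` seconds
    of each other, so excursions across sensors can be examined together."""
    flagged = sorted(
        (r for r in readings if out_of_range(r["metric"], r["value"])),
        key=lambda r: r["t"],
    )
    groups, current = [], []
    for r in flagged:
        if current and r["t"] - current[-1]["t"] > window_s:
            groups.append(current)
            current = []
        current.append(r)
    if current:
        groups.append(current)
    return groups

readings = [
    {"metric": "pressure_kPa", "value": 120.0, "t": 10.0},  # excursion
    {"metric": "temperature_C", "value": 95.0, "t": 12.0},  # excursion, nearby
    {"metric": "pressure_kPa", "value": 100.0, "t": 30.0},  # in range
]
groups = correlated_excursions(readings)
```

Here the pressure and temperature excursions land in a single group because they occur two seconds apart, which is the kind of cross-source coincidence that strengthens a correlation between an out-of-range value and another event in the process.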
Iterative Process Development
We use an iterative process to refine the data and improve the insights. When we first started, we manually ran the models and observed patterns. These patterns became the foundation for building models to detect tool nonconformance, as well as predict product yield and quality. We then moved to third-party applications, and now we use machine-learning models. Repeating the process helps filter out unnecessary data, identify misaligned data, and find new data to enrich our understanding.
With each iteration we discover new relationships, allowing us to further refine our data and models. The discoveries are limitless. We continue to deepen our insights and see even greater benefits. If you want to learn more about our journey, check out our paper, “Increasing Product Quality and Yield Using Machine Learning.”