Key Learnings for Migrating Supply Chain Management Applications to an In-Memory Data Platform

Over the last 18 months, Intel IT has migrated many of Intel’s supply chain management applications from a traditional data warehouse system to an in-memory data platform. As described in the white paper, “Optimizing Intel’s Supply Chain with an In-Memory Data Platform,” this platform is transforming Intel’s supply chain by providing real-time predictive business analytics that empower rapid, data-driven decisions.

In the paper, we discuss the four key lessons we learned during the migration, but there were many others as well:

  1. Verify consistency. The platform we chose includes an in-memory object store technology that can speed up material-planning scenarios. We found it useful to check, recheck, and check again for consistent performance and results before the production rollout. It was also important to have our users test and validate that their business processes were not adversely affected by the upgrade, which increased our confidence.
  2. Seek expert advice. Having access to an expert well versed in the in-memory platform’s features was invaluable throughout our migration.
  3. Trim the data warehouse. Reducing the data warehouse to the minimum amount of data (as suggested by the platform supplier’s migration guide) helps reduce downtime during the cutover. Some methods we used included table cleanup and deleting the persistent change area and change log, which together took up nearly a third of the data warehouse.
  4. Optimize disaster recovery replication. The initial replication to the disaster recovery site was slow. We found that using a 10-gigabit network link between the main and disaster recovery sites resolved the slowness. We also learned that the application servers and the in-memory data platform should reside on the same subnet.
  5. Plan for code remediation. Development and testing take time. We learned it is important to validate the critical process chains for code in the transformations (update/transfer rules) to verify that all SQL SELECT statements are ordered by the primary key and that the data processed by AT NEW statements is sorted accordingly.
  6. Upgrade early. We found it helpful to upgrade elements of the legacy system well ahead of the migration so we were working with the most recent and compatible components.
  7. Use migration best-known methods. Some of these include the following:
      • Use a dedicated import server
      • Make sure that key performance indicators are updated before migrating
      • Use the unsorted method for export
      • Use the time analyzer results and split large tables and packages
  8. Perform post-production tuning. After the production version went live, we used optimization services to move application logic into the database and to parallelize processing.
  9. Convert planning object structures. During the quality assurance phase of the project, we converted our planning object structures to a flat physical table, then imported the table into the new platform.
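The code-remediation check in item 5 can be partially automated. As a minimal, hypothetical sketch (not the tooling Intel used), the following Python helper scans exported transformation source for SELECT statements that lack an explicit ORDER BY clause. On a column store, result order is no longer guaranteed, so group-level processing such as AT NEW breaks unless the data is ordered by the primary key:

```python
import re

# Match each SELECT statement up to (but not including) its terminating
# period, across line breaks. This pattern and the helper below are
# illustrative assumptions, not part of any migration toolkit.
SELECT_RE = re.compile(r"\bSELECT\b.*?(?=\.)", re.IGNORECASE | re.DOTALL)

def find_unordered_selects(source: str) -> list[str]:
    """Return SELECT statements that have no ORDER BY clause."""
    return [
        stmt.strip()
        for stmt in SELECT_RE.findall(source)
        if "ORDER BY" not in stmt.upper()
    ]
```

A scan like this only flags candidates for review; each hit still needs a developer to confirm whether downstream logic actually depends on row order.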
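The export-splitting method in item 7 amounts to dividing a large table into fixed-size packages so each export job stays manageable. A simple sketch of the range-splitting logic follows; real migration tools derive the split points from the time-analyzer results rather than from a fixed package size:

```python
def split_into_packages(row_count: int, package_size: int) -> list[tuple[int, int]]:
    """Divide a table of row_count rows into half-open (start, end) ranges,
    each covering at most package_size rows, for parallel export."""
    packages = []
    start = 0
    while start < row_count:
        end = min(start + package_size, row_count)
        packages.append((start, end))
        start = end
    return packages
```

For example, a 10-row table split into packages of at most 4 rows yields three ranges, the last one smaller than the rest.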

We are excited by the results of the in-memory data platform and how it has improved supply chain management at Intel. The details are available in the paper, but here are just a few:

  • Database size reduced by 63 percent
  • Processing chains run 40 percent faster
  • Advanced business application runtime reduced by 24 percent
  • Runtime of batch jobs reduced by an average of 24 percent
  • Overall warehouse queries run 62 percent faster
  • Top ten transactions’ average response time reduced by 47 percent

The real-time nature of the in-memory data platform supports Intel IT’s goal of implementing a dynamic supply chain that can instantly respond to changes. More importantly, we’ll be able to build on what we learn with the supply chain implementation, applying real-time data in innovative ways in many other areas of the enterprise – reducing the time from event to decision to action. I’m sure Intel is not the only enterprise exploring in-memory data platforms as a way to speed decision making. Please join the conversation by leaving a comment below – share your stories, your questions, and your concerns!

Jeff Sedayao

About Jeff Sedayao

Jeff Sedayao is the domain lead for security in Intel's IT@Intel group. He has been an engineer, enterprise architect, and researcher focusing on distributed systems—cloud computing, big data, and security in particular. Jeff has worked in operational, engineering, and architectural roles in Intel's Information Technology group, done Research and Development in Intel Labs, as well as performed technical analysis and Intellectual Property development for a variety of business groups at Intel.