Look at the top trends in technology: autonomous vehicles, blockchain, edge deployment, natural language processing, and virtual reality all depend on analytics and artificial intelligence (AI). As society generates an unprecedented amount of data, there is an opportunity to make systems more efficient and to create entirely new, immersive services. The challenge is the journey from raw data to useful insights.
Intel is no stranger to this challenge. We have 315 petabytes of data across more than 140,000 different sources, and that volume is growing exponentially each year. Seven years ago, our data landscape probably looked like most enterprises' at the time: silos, puddles, and pools scattered across supply chain, marketing, customer, manufacturing, finance, and other functions. We set out on a mission to connect our corporate data where it made sense in order to drive more value, a journey that led us to create a unified data repository. By bringing all our data together and setting a framework for cleaning, storing, and managing it, we laid the groundwork for machine learning and set the stage for advanced analytics and AI.
As more and more data is generated by edge devices (as much as 75% of all data by 2025), an enterprise's data journey can't look like small puddles or a big lake; it needs to be a managed river system where data can be moved, stored, and processed wherever it's needed. You cannot expect to tap into machine learning and deep learning without first getting the rest of your data pipeline in working order.
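The "managed river system" idea can be sketched as a staged pipeline where data flows from ingest to insight. The stage names, field names, and records below are purely illustrative assumptions, not a description of any specific Intel or partner product:

```python
# Illustrative sketch of a staged data pipeline: ingest -> clean -> store -> analyze.
# All names and records here are hypothetical, chosen only to show the flow.

def ingest(raw_records):
    """Pull raw records from edge sources (here, just an in-memory list)."""
    yield from raw_records

def clean(records):
    """Drop malformed records and normalize field types."""
    for r in records:
        if "sensor_id" in r and r.get("value") is not None:
            yield {"sensor_id": r["sensor_id"], "value": float(r["value"])}

def store(records, warehouse):
    """Land cleaned records in a central store (a dict standing in for a repository)."""
    for r in records:
        warehouse.setdefault(r["sensor_id"], []).append(r["value"])
    return warehouse

def analyze(warehouse):
    """Compute a simple per-sensor average: the 'insight' end of the pipeline."""
    return {sid: sum(vals) / len(vals) for sid, vals in warehouse.items()}

raw = [
    {"sensor_id": "edge-1", "value": "10"},
    {"sensor_id": "edge-1", "value": "20"},
    {"sensor_id": "edge-2", "value": "5"},
    {"value": "99"},  # malformed: no sensor_id, dropped by clean()
]
insights = analyze(store(clean(ingest(raw)), {}))
print(insights)  # {'edge-1': 15.0, 'edge-2': 5.0}
```

The point of the staging is that each step can run wherever it makes sense (at the edge, in a data center, in the cloud) as long as the data keeps flowing downstream.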
Intel is working with several partners to create end-to-end solutions for the modern data pipeline. At the O’Reilly Strata Data Conference, some of these Intel partners, including Hazelcast and H2O.ai, are announcing new projects to help enterprises create more efficient data pipelines that are ready to take advantage of advancements in analytics and AI.
Hazelcast’s Project Veyron will focus on optimizing the performance of the Hazelcast in-memory computing platform for 2nd Generation Intel® Xeon® Scalable processors and Intel® Optane™ DC persistent memory. Hazelcast and Intel intend to provide an integrated edge-to-cloud IoT processing solution for the financial services, telecommunications, energy, manufacturing, and entertainment industries. The real-time and large-data-set demands of AI applications require new performance advancements across every component of the data processing architecture. Project Veyron will accelerate the completion of parallel in-memory tasks, enable more complex analyses for more sophisticated models, and support the use of structured and unstructured data sets.
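Hazelcast's own APIs are Java-based, so as a language-neutral illustration of the general pattern behind parallel in-memory tasks, here is a minimal Python sketch using only the standard library. The partitioning scheme and the per-partition task are assumptions for the example, not Hazelcast code:

```python
# Minimal sketch of fanning a computation out over an in-memory data set in
# parallel: the general pattern behind in-memory computing platforms.
# The 4-way partitioning and the sum-of-squares task are illustrative choices.
from concurrent.futures import ThreadPoolExecutor

def score_partition(partition):
    """Stand-in for a per-partition analysis task (here, a sum of squares)."""
    return sum(x * x for x in partition)

data = list(range(1, 101))                   # the full in-memory data set
partitions = [data[i::4] for i in range(4)]  # split the data across 4 workers

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(score_partition, partitions))

# Combine the partial results into the final answer.
total = sum(partials)
print(total)  # equals sum of squares 1..100 = 338350
```

Real in-memory platforms distribute the partitions across a cluster rather than across threads, but the split-compute-combine shape is the same.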
H2O.ai’s Project BlueDanube will focus on accelerating and scaling H2O.ai technologies on Intel platforms, including the new 2nd Gen Intel Xeon Scalable processors, so that enterprises can gain a competitive edge with a highly scalable, cost-effective path to AI insights and results. With the new “Make Your Own AI” recipe, which combines the Intel® Data Analytics Acceleration Library (DAAL) with the H2O.ai open source recipe repository, customers can now achieve machine learning at speed and scale. Listen to the Intel on AI podcast episode to learn more about how H2O.ai, a member of the Intel® AI Builders program, and Intel are working together.
The foundation of a valuable end-to-end solution is a consistent and flexible architecture. As advanced analytics and AI become part of nearly every process, it’s important for enterprises to get started now, but to do so within the appropriate framework. While the executive “AI mandate” holds for almost every industry, the reality is that “77% of organizations report business adoption of big data and AI initiatives as a big challenge.” The first step to overcoming those adoption challenges is to prioritize use cases that are critical and unique to the company, and to collaborate with internal business partners from the beginning.
By aiming for quick, deliverable wins, enterprises can prioritize the projects with the highest ROI and lowest complexity. Because there is no “one size fits all” approach to AI, a consistent architecture with open, flexible, and scalable AI software lets enterprises choose projects ranging from statistical analysis to machine learning to deep learning without investing in single-purpose products. Even when an AI project fails, the underlying infrastructure is ready for new projects to begin, and large incremental compute investments have not gone to waste.
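One way to make "highest ROI and lowest complexity" concrete is a simple scoring pass over candidate projects. The project names, scores, and the ratio-based scoring formula below are invented for illustration; real prioritization would use the enterprise's own estimates:

```python
# Hypothetical sketch: rank candidate AI projects by estimated ROI relative to
# complexity. Every name, number, and the formula itself are assumptions.
projects = [
    {"name": "demand-forecasting", "roi": 8, "complexity": 3},
    {"name": "chatbot-support",    "roi": 5, "complexity": 4},
    {"name": "defect-detection",   "roi": 9, "complexity": 8},
]

def quick_win_score(p):
    """Favor high ROI and low complexity; a quick, deliverable win scores highest."""
    return p["roi"] / p["complexity"]

ranked = sorted(projects, key=quick_win_score, reverse=True)
print([p["name"] for p in ranked])
# ['demand-forecasting', 'chatbot-support', 'defect-detection']
```

Note that the highest-ROI project (defect-detection) ranks last here because its complexity outweighs its return, which is exactly the trade-off the quick-win strategy is meant to surface.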
As the volume of data continues to rise, enterprises face new challenges in moving, storing, and processing data. Intel’s portfolio leadership comes from creating products that address multiple steps in the data journey, from Ethernet and silicon photonics to persistent memory and storage, from FPGAs and accelerators to world-class CPUs. These products are tied together by software partners and by our own team of over 15,000 software engineers across the globe, who optimize every layer of the system software infrastructure, from operating systems to applications, including libraries, industry frameworks, and tools. Our software strategy is to simplify and accelerate the development of end-to-end solutions that address the entire data analytics and AI pipeline, from ingest to insights. Just last week, one of our partners, Oracle, announced that Exadata X8M is the “fastest database machine in the world,” capable of reaching 16 million OLTP read IOPS with latency of 19 microseconds thanks to standardization on 2nd Generation Intel® Xeon® Platinum processors and Intel Optane DC persistent memory.
There is a lot to Intel beyond our CPUs; they’re only the foundation. To learn more about how your enterprise can reach new levels of performance, flexibility, and scalability, visit intel.com/yourdataonintel or software.intel.com/ai. And if you’re attending the O’Reilly Strata Data Conference, I’ll be speaking about how enterprises can unleash the power of data at scale on Wednesday, September 25, starting at 10 A.M. in room 3E and again at 11:20 A.M. in room 1A 03.