At Intel’s Data-Centric Innovation Summit, we disclosed some amazing new Intel® Optane™ DC persistent memory performance results from our internal testing and ecosystem partners. This new data center memory connects to our next Intel® Xeon® Scalable processor, code-named Cascade Lake. And, I’m thrilled to say, we’ve started revenue shipments of Intel® Optane™ DC persistent memory to select customers—a huge milestone for the program and, ultimately, the technology industry.
Customers today are trying to keep pace with the mountains of data piling up. Promises abound of transformative insights and breakthrough ideas waiting to be uncovered in the data pile. But data is just an expensive storage burden unless it is sorted, analyzed, and exposed through user-accessible services to deliver on its promised value.
To date, the systems that deliver these insights can be hampered by technology and cost limitations in memory and storage. Powerful processors and AI algorithms must be fed a steady stream of data, but DRAM, which is fast enough to deliver it, gets very expensive at scale, so memory configurations tend to be kept no larger than required to meet the target service level.
When the data isn’t in memory, the processors incur a latency penalty of several orders of magnitude to retrieve it from storage. Any data that must be stored permanently also makes the relatively slow trip out to the drives. There is a historical trade-off between the speed of DRAM and the capacity and permanence of disks. These are the gaps in the memory-storage hierarchy we’ve discussed before.
Our new Intel® Optane™ DC persistent memory product, based on the revolutionary Intel® 3D XPoint™ memory media technology, establishes a new tier in the memory-storage subsystem: Persistent Memory. It combines the speed of traditional memory with the capacity and native persistence of storage. Filling this gap means that performance-degrading workarounds to move data back and forth across it can be avoided, and applications are more likely to be able to quickly access the data they need, when they need it.
Big Memory, Big Results
The software developer ecosystem has long wished for a persistent memory tier, and has been getting ready for its introduction by modifying software to take full advantage of it. Moving to a workload-optimized system architecture, maximizing the working data available to an application, and reducing I/O operations with disks can deliver impressive results in many use cases.
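For developers, the key change is that persistent data becomes directly load/store addressable rather than being reached through read/write I/O calls. The sketch below is purely illustrative, not Intel library code: it uses an ordinary memory-mapped file to stand in for a persistent region, and the file name and record layout are hypothetical.

```python
# Illustrative sketch of the persistent-memory programming model:
# a byte-addressable region is mapped into the process, and data is
# updated in place with loads and stores instead of I/O calls.
# An ordinary file stands in for the persistent region here.
import mmap
import os
import struct

PMEM_FILE = "counter.pmem"   # hypothetical path; on real hardware this
                             # would sit on a DAX-mounted filesystem
REGION_SIZE = 4096

def open_region(path, size):
    """Map a file as if it were a persistent memory region."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    os.close(fd)             # the mapping keeps its own reference
    return buf

def increment_counter(buf):
    """Load, update, and store a value directly in the mapped region."""
    (count,) = struct.unpack_from("<Q", buf, 0)
    struct.pack_into("<Q", buf, 0, count + 1)
    buf.flush()              # stand-in for flushing CPU caches to media
    return count + 1

region = open_region(PMEM_FILE, REGION_SIZE)
print(increment_counter(region))   # the count survives process restarts
region.close()
```

On real persistent memory hardware, the region would live on a DAX-mounted filesystem and the flush would map to CPU cache-flush instructions rather than a page writeback; libraries such as Intel's PMDK handle those details for applications.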
These enhanced solutions have achieved an 8x reduction in query wait times1 for one popular analytics tool, supported 4x more virtual machines2 by quadrupling memory capacity, increased user and application throughput, and enabled entirely new use cases around consistency and replication. Native persistence also means that in-memory database restart times can drop from minutes to seconds3, dramatically improving uptime and operational efficiency.
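The restart-time improvement follows from data surviving in place: after a restart, a database can reattach to its in-memory structures rather than reloading or rebuilding them record by record from disk. A minimal sketch of that idea, again using an ordinary memory-mapped file as a stand-in (the file name and record format are hypothetical, not any vendor's actual layout):

```python
# Sketch of why restarts get faster: the data structure survives in the
# persistent region, so a "restart" is just a remap, not a reload.
import mmap
import os
import struct

PMEM_FILE = "kvstore.pmem"   # hypothetical stand-in for a persistent region
HEADER = "<Q"                # record count
RECORD = "<QQ"               # key, value pair
N = 1000

def mmap_file(path, size):
    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)
    buf = mmap.mmap(fd, size)
    os.close(fd)
    return buf

# "First boot": populate the store directly in the mapped region.
hdr = struct.calcsize(HEADER)
rec = struct.calcsize(RECORD)
size = hdr + N * rec
buf = mmap_file(PMEM_FILE, size)
for i in range(N):
    struct.pack_into(RECORD, buf, hdr + i * rec, i, i * i)
struct.pack_into(HEADER, buf, 0, N)
buf.flush()
buf.close()

# "Restart": reattach; no log replay or per-record reload is needed.
buf = mmap_file(PMEM_FILE, size)
(count,) = struct.unpack_from(HEADER, buf, 0)
key, value = struct.unpack_from(RECORD, buf, hdr + 42 * rec)
print(count, key, value)   # → 1000 42 1764
buf.close()
os.remove(PMEM_FILE)
```

The reattach step touches only the mapping metadata, which is why restart cost stops scaling with the size of the dataset.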
“Intel has been a key partner for Databricks as we constantly investigate ways to increase performance and efficiencies that help enterprises derive value from AI,” said Michael Hoff, SVP of Business Development at Databricks. “Based on initial testing, Intel’s Optane DC persistent memory will enable our customers to obtain significantly faster insights from Databricks' Unified Analytics Platform.”
Software developers can be a spirited community, so today, we announced the Optane DC Persistent Memory Developer Challenge. This competition will showcase the developer community’s creativity and technical savvy with persistent memory. Awards will be based on originality, performance improvement from baseline, and most comprehensive use of persistent memory. Complete details will be available shortly, so please visit here for more information about how you can participate.
Bringing an entirely new category of product to market takes the tireless work of experts from multiple disciplines and deep, active customer engagement. Intel® Optane™ DC persistent memory is not just the result of decades of materials science research, semiconductor innovation, and manufacturing expertise within Intel; it is also built on our deep CPU and computer architecture know-how, system and platform integration expertise, and extensive collaboration with Intel's global solutions ecosystem.
I’m thrilled to have achieved so many key milestones for this new product category as we begin its initial revenue shipments this year, and I am excited by the other platform innovations that Intel is delivering to the storage tier. Today, Intel also announced new SSD capabilities, including increased endurance for our Intel® Optane™ DC SSD products and the introduction of our Intel® QLC 3D NAND technology. A memory-storage revolution is underway!
Developers can learn more about Intel® Optane™ DC persistent memory at the Intel Developer Zone.
Check out all the news from the Intel® Data-Centric Innovation Summit here.
1. [8x reduced wait time] Performance results are based on testing as of August 02, 2018 and may not reflect all publicly available security updates. No product can be absolutely secure. Results have been estimated based on tests conducted on pre-production systems running OAP with 2.6TB scale factor on IO intensive queries, and provided to you for informational purposes.
2. [4x increased VMs] Performance results are based on testing as of July 31, 2018 and may not reflect all publicly available security updates. No product can be absolutely secure. Results have been estimated based on tests conducted on pre-production systems running Redis memtier and KVM hypervisor, and provided to you for informational purposes.
3. [In-memory database restart] Performance results are based on testing as of July 31, 2018 and may not reflect all publicly available security updates. No product can be absolutely secure. Results have been estimated based on tests conducted on pre-production systems running Aerospike* database, and provided to you for informational purposes.