The Growth of Non-Volatile Memory Architectures: Is It Really a Revolution?

Last week at the Intel Developer Forum 2012 we hosted more than a dozen companies and collaborators showcasing their solutions for Non-Volatile Memory (NVM) in a community forum. The rise of NVM began several years ago, when the SATA specification became inadequate for the I/O performance requirements of Solid State Drives (SSDs). The release of specifications designed for NVM has begun the first phase of industry innovation: developing the first new storage architecture of the 21st century.

Many people have speculated that this transition to SSDs is costly, overblown, and perhaps unnecessary. However, with data requirements in the data center growing at a CAGR of ~85% (according to IDC) over the balance of this decade, technology manufacturers and system administrators must rethink their current storage technology decisions. It requires all of us throughout the industry to re-examine our investments and architect new solutions for data management and future I/O architectures. At Intel, it has led us to form a new division, led by storage industry veteran Steve Dalton, to build products for a future where information and data can be accessed in real time. This new team is leading the development of NVM storage solutions for the first time within our company and collaborating with Intel’s SSD manufacturing team, led by Rob Crooke, to deliver NVM with unique optimizations for new usages in Big Data, databases, HPC, and search. These, however, are only the most obvious usages for NVM. Real-time data access will require system administrators and data center architects to build real-time and near-real-time data pools so that users have high-performance access to the most heavily used data sets. The difficult part of these solutions is the cost/benefit analysis required to justify the purchases: traditional price-per-GB analysis is not sufficient when weighing the value of “hot data” against “volume data”.
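To make the hot-data vs. volume-data point concrete, here is a minimal sketch of the two cost metrics. All prices, capacities, and IOPS figures below are hypothetical round numbers chosen for illustration only; they are not real product specifications.

```python
# Illustrative cost/benefit sketch: price-per-GB alone favors spinning disks,
# but price-per-IOPS tells the opposite story for frequently accessed "hot" data.
# All figures are hypothetical, for illustration only.

def cost_metrics(price_usd, capacity_gb, iops):
    """Return (cost per GB, cost per IOPS) for a drive."""
    return price_usd / capacity_gb, price_usd / iops

# Hypothetical drives (not real products or prices).
hdd_per_gb, hdd_per_iops = cost_metrics(price_usd=100, capacity_gb=2000, iops=150)
ssd_per_gb, ssd_per_iops = cost_metrics(price_usd=200, capacity_gb=240, iops=40000)

print(f"HDD: ${hdd_per_gb:.3f}/GB, ${hdd_per_iops:.3f}/IOPS")
print(f"SSD: ${ssd_per_gb:.3f}/GB, ${ssd_per_iops:.4f}/IOPS")
# The HDD wins on $/GB ("volume data"); the SSD wins on $/IOPS ("hot data").
```

With these illustrative numbers, the HDD costs a fraction of a cent per GB but roughly $0.67 per IOPS, while the SSD is far more expensive per GB but costs a fraction of a cent per IOPS. Which metric matters depends entirely on whether the data set is hot or cold, which is why a single price-per-GB comparison misleads.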

NVM provides an opportunity for software, compute, networking, and storage companies to build support for multiple storage usage models into the NVM solutions that we develop. For the first time that I can remember, we have the ability to use storage as a block file system, as first-level memory, and as an optimized cache, transparently to operating systems and application environments. While many of these technologies are in the early (alpha) stages of adoption, the early results have been promising. In future blog posts I intend to publish performance numbers showing these benefits with some of the industry’s most widely deployed application and database environments.
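The caching usage model above can be sketched in a few lines. The following is a toy illustration of a fast "hot" tier sitting in front of a bulk-capacity tier with least-recently-used (LRU) eviction; it is not any product's actual caching design, and the class name, capacities, and workload are invented for the example.

```python
from collections import OrderedDict

class TieredStore:
    """Toy sketch of a fast hot-data tier (NVM-like) in front of a
    bulk-capacity tier, with LRU eviction. Hypothetical illustration only."""

    def __init__(self, hot_capacity, bulk):
        self.hot = OrderedDict()          # fast tier: most-recently-used entries
        self.hot_capacity = hot_capacity
        self.bulk = bulk                  # slow tier: holds the full data set
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.hot:
            self.hits += 1
            self.hot.move_to_end(key)     # refresh recency on a hit
            return self.hot[key]
        self.misses += 1
        value = self.bulk[key]            # fetch from the slow tier
        self.hot[key] = value             # promote into the hot tier
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)  # evict the least-recently-used entry
        return value

# A small hot working set read repeatedly: first pass misses, later passes hit.
bulk = {f"block{i}": i for i in range(1000)}
store = TieredStore(hot_capacity=8, bulk=bulk)
for _ in range(3):
    for k in ("block1", "block2", "block3"):
        store.read(k)
print(store.hits, store.misses)  # prints 6 3
```

The point of the sketch is the access pattern, not the mechanism: once the working set is promoted, repeated reads are served entirely from the fast tier, which is the behavior that makes a small, expensive NVM tier pay for itself.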

IBM’s intended purchase of Texas Memory Systems is a good example of the industry’s aggressive move toward SSDs and NVM. At Intel, we have also decided to acquire new technology teams. I am pleased to announce that we have completed the acquisition of Nevex to improve server caching performance and deliver a new level of optimization for application performance. In July we acquired Whamcloud to increase our capabilities in HPC file system optimization. These two new additions to Intel, combined with the great team we have in place, make for an exciting time in the Data Center Group. We continue to actively invest in standards work led by Jim Pappas with SNIA, in hardware and software innovations, and in building solutions for broad industry adoption. NVM has become the catalyst technology for these innovations. In terms of our longer-term goals, this is only the beginning. NVM is proving to be an opportunity for the industry to transform our architectures, our solutions, and the way all of us interface with today’s applications. We look forward to working with the industry to continue to transform the future of memory and storage technologies.

To quote Charles Dickens from A Tale of Two Cities: “Tell Wind and Fire where to stop, but don’t tell me.”


Feel free to share your thoughts with me @jakesmithintel on Twitter.