The 3D NAND revolution

[Figure: the datasphere]

Today’s solid state drives are based on NAND flash, a mature, reliable storage medium. Unfortunately, NAND’s advancement in capacity and cost per gigabyte has slowed. Planar NAND is fundamentally a two-dimensional technology, which means the chip must grow in area to add capacity. This flat approach means the cost of additional silicon, and all of its related process costs, is continually added to the cost of SSDs. Unless density gains can be offset by more advanced process technology, NAND cannot affordably add capacity to keep pace with anticipated data center demands. Yet the demand within a server footprint of a few inches keeps growing as each machine gains more capable processors. So what can we do to reliably bring down the cost of NAND technology, so you have more choices in the datasphere above?

In August 2016, Intel is revolutionizing our NAND storage offerings with a denser memory technology called 3D NAND, in which NAND structures are stacked vertically rather than added to the edges of the chip at ever-higher cost. Like a city hemmed in by mountains or ocean, 3D NAND adds capacity by building up rather than out. This will fundamentally change the amount of data that can be stored affordably on an SSD. An Intel 3D NAND SSD will soon be able to store up to 10 TB in a 2.5-inch form factor: as much data as today’s mainstream hard drives can handle, but with many times faster access and throughput.

3D NAND solid state storage technology will help us redraw the storage pyramid more flexibly. The NAND SSDs in the warm tier will naturally convert to 3D NAND, but the biggest change will be in the cold tier. Mainstream servers will replace most spinning hard drives with NVM Express-attached 3D NAND SSDs that offer equal capacity, higher performance and reliability, and lower power and TCO. Hard drives will largely move to the archival tier, where they will provide long-term storage and disaster recovery services.

Advancement in the warm and cold tiers of the datasphere is certainly helpful and exciting, but if processors are starved for data in the hot tier, benefits in the lower tiers will be incremental at best, and insufficient to meet the coming data center challenges. The most straightforward answer to a large, fast hot tier is to add DDR4 DRAM to the platform. DRAM is fast but very expensive, especially at large capacities. There is always a sweet-spot DRAM DIMM size, but that size is often too small and limiting. And in a familiar challenge, DRAM isn’t “stackable,” so it’s unlikely to see a cost-reduction breakthrough like 3D NAND and the other 3D memories Intel is working on. A multi-terabyte hot tier will be extremely expensive without a disruptive new approach to how you view the pyramid and the choices within it.

So as we redraw the spheres of data closer to the CPU, we can now look forward to ever denser SSDs that make the data storage stack less of a “fixed” constraint. More capacity means more flexibility in what you build and how you build it. More choices in memory mean more freedom. The freedom in storage choices that Intel is providing this week with our 3D NAND launches is another major step in our evolution as both a memory company and a compute company. Your power as an architect of better IT, better user experiences, and more flexible computing systems just grew. Make good use of it!


Published in categories: Data Center, Storage

About Frank Ober

Frank Ober is a Data Center Solutions Architect in the Non-Volatile Memory Group at Intel. He joined Intel three years ago to delve into use cases for the emerging memory hierarchy, after a 25-year career in enterprise applications IT spanning SAP, Oracle, cloud manageability, and other domains. He regularly tests and benchmarks Intel SSDs against application and database workloads, and is responsible for many technology proof-point partnerships with Intel software vendors.