A while back, I talked to a data center IT professional who made an interesting observation on one of the problems with current approaches to storage—specifically hard disk drive (HDD) failures.
“We never get out of rebuild,” he said. He went on to explain that his IT team was constantly dealing with the effects of failing hard drives. On a regular basis, drives would go down and an operation would kick in to automatically rebuild them. And while systems rebuilt the failed drives, infrastructure performance would suffer.
These reliability issues point to one of the reasons why data centers need to move away from hard disk drives, which have a mean time to failure (MTTF) of about 30 years, and into the era of widespread use of reliable solid state storage, with an MTTF roughly double that of HDDs. But that’s just one reason. Another popular one is performance. HDDs offer on average 200-250 IOPS per drive with an average response time of 2-7 milliseconds, while today’s SSDs deliver at least 6,000 IOPS with an average response time of 100-500 microseconds, depending on the manufacturer. That is roughly a 30x improvement in performance (IOPS) and more than a 10x improvement in response time.
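As a quick sanity check on those figures, the improvement factors can be worked out from the midpoints of the ranges cited above (illustrative averages, not any vendor’s specifications):

```python
# Back-of-the-envelope HDD vs. SSD comparison.
# Figures are midpoints of the rough ranges cited above, not vendor specs.

hdd_iops = 225          # midpoint of 200-250 IOPS per drive
ssd_iops = 6000         # conservative SSD figure

hdd_latency_us = 4500   # midpoint of 2-7 ms, expressed in microseconds
ssd_latency_us = 300    # midpoint of 100-500 microseconds

iops_gain = ssd_iops / hdd_iops
latency_gain = hdd_latency_us / ssd_latency_us

print(f"IOPS improvement: ~{iops_gain:.0f}x")              # ~27x
print(f"Response-time improvement: ~{latency_gain:.0f}x")  # ~15x
```

Pick different points within the ranges and the multipliers shift, but the conclusion holds: tens of times more IOPS and an order of magnitude better response time per drive.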
These dramatic performance gains make it possible to better utilize today’s multi-core processors and high-speed networking across the rest of the infrastructure.
What can this mean for data center applications? Better performance for business processing applications, big data processing for data analytics, and faster processing of scientific and life science applications. For virtual environments this means more efficient server and desktop consolidation, and better performing applications. Overall better efficiency and performance in the data center means increased productivity and the capacity to handle more business and revenue.
And there is good news here. Next-generation solid state storage technologies are racing ahead as the new face of primary (hot/warm tiers) storage. We took a quantum leap forward with the arrival of NAND and flash memory. And now we are poised to take quantum leap No. 2 with the rise of persistent DRAM and non-volatile memory technologies, most notably 3D XPoint technology. These will become the building blocks for ultra-high performance storage and memory.
The new NVM technologies will wipe out the I/O bottlenecks caused by legacy primary (hot/warm tier) storage architectures. A case in point: 3D XPoint technology, developed by Intel and Micron, is 1,000 times faster than NAND. NAND latency is measured in hundreds of microseconds; 3D XPoint latency is measured in tens of nanoseconds. That is yet another order-of-magnitude leap.
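To put that change of units in perspective, here is the same arithmetic with 100 microseconds for NAND and 100 nanoseconds for 3D XPoint as representative figures (my assumed midpoints, not published specifications):

```python
# Representative media latencies in nanoseconds (assumed figures, not specs).
nand_latency_ns = 100_000    # ~100 microseconds, typical NAND read latency
xpoint_latency_ns = 100      # ~100 nanoseconds for 3D XPoint

speedup = nand_latency_ns / xpoint_latency_ns
print(f"3D XPoint latency is ~{speedup:.0f}x lower than NAND")  # ~1000x
```

Because the gap is measured in units (microseconds vs. nanoseconds) rather than percentages, even generous or stingy choices of representative values leave the ratio in the thousands.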
And, better still, 3D XPoint technology has 1,000 times the endurance of NAND and 10 times the density of conventional memory. Put it all together and you have a unique balance of performance, persistence, and capacity—the characteristics we are going to need for the storage landscape of the coming years.
All of this means that storage can now keep pace with the speeds of modern multicore processors and next-generation networks, along with an ever-larger deluge of data. And, in another important benefit, with the move to data centers dominated by solid state storage with no moving parts, primary storage will become more reliable—and less of a headache for data center IT professionals.
This doesn’t mean you will have to throw out your traditional disk arrays. They will still have a place in the data center, although they will play a different role. They will be repurposed for non-primary storage.
For a closer look at next-generation NVM technologies, including the new 3D XPoint technology, visit www.intel.com/nvm.