Throw out your hard disks: Revisited!
My original blog from October 2008 has been getting a bunch of traffic lately (it's linked below), but frankly... things have changed quite a bit in six years, so here's an update.
If I recall correctly, the 32GB Intel X25-E SSD was about $600 from any of my favorite Internet retailers, which works out to almost $20/GB... WOW! I did a quick Internet search this morning (April '14) and came up with $1.00/GB for the Intel SSD DC S3500 Series standard endurance drive and $2.25/GB for the Intel SSD DC S3700 Series high endurance drive. So today I can get a 480GB Intel SSD DC S3500 Series for $100 less than that X25-E cost me six years ago... That's a 15x larger drive at an 18x smaller cost per GB, not to mention the vast difference in performance! In addition, the SSDs are now certified by most of the major OEMs, software tools like the Intel SSD Toolbox have matured over the years, we now have 6Gbps SATA3 speeds instead of piddly 1.5Gbps SATA, and we get the consistent performance of Intel's 4th generation SSDs. Check out this blog at the Tech Report on Intel SSD reliability.
If you read my blog from a few weeks ago, Temp, Tier, and Cache use cases all make sense for SSD. But at $1.00/GB... the core Engineer in me starts to think about possibilities. I barely touched on this in that prior blog, but... what if I replaced the RAID 1 set of boot/swap disks in my typical server with SSDs? I certainly don't need 300/600GB for an OS, either Windows or Linux, so let's replace those SAS drives with, say, 240GB Intel SSD DC S3500 Series drives. Run the numbers and you'll see it doesn't significantly change the overall cost of your average server. Even with a large MTBF (mean time between failures) and a low AFR (annualized failure rate), which you can check in the spec sheets, I'm still going to put these in a RAID set. Then I'll split this up and give my OS 100GB and 'something else' the other ~140GB.
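On Linux, the mirror-then-split setup above might look something like the following sketch. The device names (`/dev/sda`, `/dev/sdb`) and partition labels are assumptions for illustration; adjust them for your own hardware, and note these commands are destructive to whatever is on the drives.

```shell
# Sketch only -- device names are hypothetical, and these commands
# will wipe the target drives.

# Mirror the two 240GB SSDs into a RAID 1 set for boot/swap.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Carve the mirror into ~100GB for the OS and the rest for 'something else'.
parted -s /dev/md0 mklabel gpt
parted -s /dev/md0 mkpart os ext4 1MiB 100GiB
parted -s /dev/md0 mkpart scratch ext4 100GiB 100%
```

In practice your OEM's hardware RAID controller may handle the mirroring instead of `mdadm`, in which case only the partitioning step applies.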
So what am I going to do with that other 140GB? I can read the SMART E9 (media wearout indicator) stat through most of my OEMs' RAID controllers, so I can monitor wear on the SSD, and I bought the drive from my OEM, so I know it's guaranteed and covered by warranty. What are the possibilities? If I'm in a Windows world, I'm engineering to swap to the page file as little as possible; maybe I could afford a little swapping now. In the Linux world, maybe I could increase my 'swappiness' a bit. Maybe I want to move some TempDB files to that 140GB, how about some specific web content, or perhaps I could point some of MS SharePoint's caching mechanisms at that 140GB of fast storage. If I'm running VMware I now have a local disk to use for VFRC (VMware Flash Read Cache), or in OpenStack maybe I use this space for my local base images.
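For the Linux-side ideas above, both the wear monitoring and the swappiness tweak come down to a couple of commands. This is a sketch assuming `smartmontools` is installed and the SSD is visible as `/dev/sda` (a hypothetical device name); on Intel SSDs the E9 attribute shows up as SMART attribute 233, Media_Wearout_Indicator.

```shell
# Read the SMART attributes and pull out the media wearout indicator
# (attribute 233 / E9 on Intel SSDs; starts at 100 and counts down).
smartctl -A /dev/sda | grep -i Media_Wearout_Indicator

# Check the current swappiness (the default is typically 60).
sysctl vm.swappiness

# Nudge it up so the kernel is more willing to use that fast swap space.
# Persist it in /etc/sysctl.conf (or a file under /etc/sysctl.d/) to
# survive reboots.
sysctl -w vm.swappiness=80
```

If the drive sits behind a hardware RAID controller, `smartctl` usually needs a controller-specific `-d` option to reach the physical device; check your OEM's documentation for the exact syntax.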
There are tons of possibilities, so it's time to let that core Engineer out to play! Now... with 4TB SATA drives running 4 cents/GB or less, I'm probably not going to throw out all my hard disks. But I think there are compelling reasons to start replacing those boot/swap disks with boot/swap SSDs to explore some possibilities. Solid-state storage has crossed over into the mainstream; it's time to start becoming as familiar with these devices as we are with their spinning predecessors.
Christian Black is a Datacenter Solutions Architect covering the HPC and Big Data space within Intel's Non-Volatile Memory Solutions Group. He comes from a 23 year career in Enterprise IT.
Follow Chris on Twitter at @RekhunSSDs.