Ceph is an increasingly popular software-defined storage (SDS) environment that requires highly consistent SSDs to get maximum performance at large scale. Here are three things about Intel NVMe drives that will make your Ceph deployment more successful.
1. Server density - you can consolidate onto NVMe PCIe drives without the need for hardware HBAs. Try using just 2 NVMe drives instead of 4 SATA drives for journals. This saves space, can save power, and provides a direct-from-the-processor I/O path for writes to this critical component in the Ceph architecture. Moreover, you can use NVMe SSDs for caching, further boosting a cluster's performance. Use the best drives for the most important work in a Ceph cluster.
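To make the journal consolidation concrete, here is a sketch of what pointing several OSD journals at partitions of a single NVMe drive might look like in ceph.conf. The device paths, OSD IDs, and journal size below are illustrative assumptions, not values from this post; adjust them to your own hardware and partition layout.

```ini
; Hypothetical example: two OSDs on SATA data disks,
; both journaling to partitions of one NVMe drive.
[global]
; journal size in MB (assumed value for illustration)
osd journal size = 10240

[osd.0]
; data on a SATA disk, journal on NVMe partition 1 (assumed paths)
osd journal = /dev/nvme0n1p1

[osd.1]
; second OSD journals to NVMe partition 2 (assumed paths)
osd journal = /dev/nvme0n1p2
```

Because every write to an OSD lands in the journal first, moving the journals to a low-latency NVMe device speeds up the whole write path even when the data disks themselves stay on SATA.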
2. Quality in latency - consistent latency is key to delivering fast writes to a journal under constantly varying conditions. Intel focuses on latency quality foremost.
3. NVM Express protocol (NVMe) efficiency and maturity - NVM Express is mature and ready for your Ceph journal needs.
If you have further questions about efficiency and maturity, see these blogs for more evidence:
Above all, take a look at this study on NVMe versus SATA drives done by Intel's Ceph team:
Intel Developer Forum 2015 (IDF) in San Francisco is right around the corner; it kicks off this Tuesday, August 18.
From 1 p.m. to 3 p.m. there is a Tech Chat on SSDs and Ceph, which I will attend with Dan Ferber and others from the Ceph domain within Intel.
Jian Zhang from our Ceph and storage R&D labs will give a talk on Ceph tuning and the CeTune tools, soon to be open sourced, for automated Ceph performance tuning and profiling. Jian's session is at 3:30 on Tuesday, and the direct link to it is here:
Finally, come out to the SSD Pavilion on the exhibit floor, where we have a booth covering Ceph and tuning.