Finding your new Intel SSD for PCIe (think NVMe, not SCSI)

Sometimes we see customers on Linux wondering where their new NVMe-capable SSD is on the filesystem. It isn't in the usual place, '/dev/sd*', like all those SCSI devices of the past 20+ years. So where is it? For all of you new to the latest shipping Intel SSDs for PCIe: they run on the NVMe storage protocol, not the SCSI protocol. That's a big deal, because it means efficiency and a protocol designed for "non-volatile memories" (NVM). Our newest P3700 and related drives use the same industry-standard, open source NVMe kernel driver. This driver drives I/O to the device and is part of the block driver subsystem of the Linux kernel.

So maybe it is time to refresh on some less-familiar Linux administrative commands to see a bit more. The simple part is to look in "/dev/nvme*". The controllers are numbered, and the actual block device has an n1 on the end to support NVMe namespaces. So if you have one PCIe card or front-loading 2.5" drive, you'll have /dev/nvme0n1 as a block device to partition, format, and use.
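As a quick sketch of that naming convention, a few lines of POSIX shell parameter expansion can split a name like nvme0n1p1 into its controller, namespace, and partition numbers. The device name here is just an illustration; substitute whatever appears on your system.

```shell
# NVMe block devices follow the pattern nvme<controller>n<namespace>[p<partition>].
dev="nvme0n1p1"            # example name only

rest=${dev#nvme}           # strip the "nvme" prefix  -> 0n1p1
ctrl=${rest%%n*}           # controller number        -> 0
rest=${rest#*n}            # drop through the "n"     -> 1p1
ns=${rest%%p*}             # namespace number         -> 1
part=${rest#*p}            # partition number -> 1 (if no "p", this repeats the remainder)

echo "controller=$ctrl namespace=$ns partition=$part"
# -> controller=0 namespace=1 partition=1
```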

These important Data Center Linux distributions:

Red Hat 6.5/7.0

Ubuntu 14.04 LTS

...all have in-box NVMe storage drivers, so you should be set if you are at these levels or newer.

Below are some basic Linux instructions and snapshots to give you a bit more depth. The data below was captured on a Red Hat/CentOS 6.5 distro.

#1 Are the drives in my system? Scan the PCI and block devices:

[root@fm21vorc10 ~]# lspci | grep 0953
04:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)
05:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)
48:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)
49:00.0 Non-Volatile memory controller: Intel Corporation Device 0953 (rev 01)

[root@fm21vorc07 ~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0   372G  0 disk
├─sda1        8:1    0    10G  0 part /boot
├─sda2        8:2    0   128G  0 part [SWAP]
└─sda3        8:3    0   234G  0 part /
nvme0n1     259:0    0 372.6G  0 disk
└─nvme0n1p1 259:1    0 372.6G  0 part
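If you want to script that check, a small awk filter over lsblk output picks out just the NVMe block devices. The snippet below runs the filter against a simplified captured sample so the pipeline itself is clear; on a live system you would feed it from lsblk directly, e.g. `lsblk -d -o NAME,SIZE,TYPE`.

```shell
# Simplified sample of the lsblk output above; replace with a live
# `lsblk -d -o NAME,SIZE,TYPE` on a real system.
sample='sda 372G disk
nvme0n1 372.6G disk'

# Print only device names that start with "nvme".
echo "$sample" | awk '$1 ~ /^nvme/ {print $1}'
# -> nvme0n1
```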

#2 Is the nvme driver built for my kernel? Check with modinfo:

[root@fm21vorc10 ~]# modinfo nvme
filename:       /lib/modules/3.15.0-rc4/kernel/drivers/block/nvme.ko
version:        0.9
license:        GPL
author:         Matthew Wilcox <>
srcversion:     4563536D4432693E6630AE3
alias:          pci:v*d*sv*sd*bc01sc08i02*
intree:         Y
vermagic:       3.15.0-rc4 SMP mod_unload modversions
parm:           io_timeout:timeout in seconds for I/O (byte)
parm:           nvme_major:int
parm:           use_threaded_interrupts:int

#3 Is my driver actually loaded into the kernel?

[root@fm21vorc10 ~]# lsmod | grep nvm
nvme                   54197  0

#4 Are my nvme block devices present?

[root@fm21vorc10 ~]# ll /dev/nvme*n1
brw-rw---- 1 root disk 259, 0 Oct  8 21:05 /dev/nvme0n1
brw-rw---- 1 root disk 259, 1 Sep 25 17:08 /dev/nvme1n1
brw-rw---- 1 root disk 259, 2 Sep 25 17:08 /dev/nvme2n1
brw-rw---- 1 root disk 259, 3 Sep 25 17:08 /dev/nvme3n1

#5 Run a quick test to see if you have a GB/s-class SSD to have fun with:

[root@fm21vorc07 ~]# hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT cached reads:  3736 MB in  2.00 seconds = 1869.12 MB/sec
 Timing O_DIRECT disk reads: 5542 MB in  3.00 seconds = 1847.30 MB/sec

Remember to consolidate workloads and create as much parallelism as possible. These drives will amaze you.
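One way to generate that kind of parallelism is with a tool like fio. The job file below is only a sketch: the queue depth, job count, and runtime are illustrative numbers rather than tuned recommendations, and /dev/nvme0n1 should be whichever device your system shows. Note that reading a raw block device is safe, but never point a write test at a device holding data you care about.

```ini
; Sketch of an fio job file for parallel 4K random reads on an NVMe SSD.
; iodepth and numjobs are illustrative values, not tuned recommendations.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=32
numjobs=4
runtime=60
time_based
group_reporting

[nvme-randread]
filename=/dev/nvme0n1
```

Run it with `fio jobfile.fio`; the group_reporting option sums the four jobs into one result line so you can see the aggregate bandwidth and IOPS.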

Have fun!


About Frank Ober

Frank Ober is a Data Center Solutions Architect in the Non-Volatile Memory Group of Intel. He joined 3 years back to delve into use cases for the emerging memory hierarchy after a 25 year Enterprise Applications IT career, spanning, SAP, Oracle, Cloud Manageability and other domains. He regularly tests and benchmarks Intel SSDs against application and database workloads, and is responsible for many technology proof point partnerships with Intel software vendors.