How to benchmark SSDs with FIO Visualizer

There are many software tools available for benchmarking SSDs today. Some are consumer oriented with very polished interfaces; others are command-line based and far less approachable. I'm not going to criticize any of them in this blog. Instead, I'll share the approach we use in the Solution Architecture team at the Intel NVM Solutions Group.

We rely on two proven tools for I/O benchmarking – Iometer (http://www.iometer.org) on Windows and FIO (http://freecode.com/projects/fio) on Linux. Both offer many advanced features for simulating different types of workloads. Unfortunately, FIO has no GUI; it is command-line only, and its amazing feature set alone was not enough to make it usable as a demo tool. That is how the idea of FIO Visualizer (http://01.org/fio-visualizer) appeared. It was developed at Intel and released as open source.

What is FIO Visualizer? It's a GUI for FIO. It parses the console output in real time and displays IOPS, bandwidth and latency for each device's workload. The data is gathered from the FIO console output at the assigned time interval and the graphs are updated immediately. It is especially valuable for benchmarking SSDs, particularly those based on the NVMe specification.

Let's have a quick look at the interface features:

  • Real time. The minimum interval is 1 second; it can be reduced further with a simple change to the FIO source code.
  • Monitors IOPS, bandwidth and latency for reads and writes, plus unique QoS analytics.
  • Multithread / multi-job support, which is particularly valuable for NVMe SSD benchmarking.
  • Single GUI window – no overlapping windows or complicated menus.
  • Customizable layout. The user defines which parameters to monitor.
  • Workload manager for FIO settings. Comes with the base workload settings used in all Intel SSD datasheets.
  • Written in Python with pyqtgraph; uses these third-party libraries to simplify the GUI code.

fiovisualizer.png – FIO Visualizer GUI screen with an example of a running workload.

The graph screen is divided into two vertical blocks for read and write statistics, and into three horizontal segments displaying IOPS, bandwidth and latency. Every graph supports auto-scaling in both dimensions, and each graph can also be zoomed individually; once zoomed, it can be returned to auto-scaling with a popup button. Individual graphs can be disabled, and the view of the control panel on the right can be changed.

multijob.PNG

This example demonstrates handling of multi-job workloads, which are executed by FIO in separate threads.

Running FIO Visualizer

Having the GUI written in Python gives us great flexibility to make changes and adopt enhancements. However, it uses a few external Python libraries which are not part of a default installation.

This means the tool has some OS-specific dependencies. Here are the exact steps to get it running under CentOS 7:

0. You should have Python and PyQt installed with the OS.
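On CentOS 7 the stock packages are usually enough; the package names below are an assumption and may differ depending on your repositories:

# yum install -y python PyQt4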

1. Install pyqtgraph-develop (0.9.9 required) from http://www.pyqtgraph.org

$ python setup.py install

2. Install Cython from http://cython.org. Version 0.21 or higher is required.

$ python setup.py install

3. Install Numpy from http://numpy.org

$ python setup.py build

$ python setup.py install

4. Install FIO 2.1.14 (latest supported at the moment) from http://freecode.com/projects/fio

# ./configure

# make

# make install
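To confirm which version ended up installed:

# fio --version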

5. Run the Visualizer as root.

# ./fio-visualizer.py

SSD Preconditioning

Before running the benchmark you need to prepare the drive. This is usually called "SSD preconditioning", i.e. bringing a "fresh" drive to its sustained performance state. Here are the basic steps to follow to get reliable results:

  • Secure Erase the SSD with vendor tools. For Intel® Data Center SSDs this tool is called the Intel® Solid-State Drive Data Center Tool.
  • Fill the SSD with sequential data equal to twice its capacity. This guarantees that all available memory, including the factory-provisioned area, is filled with data. dd is the easiest way to do so (a loop sketch follows the command below):

dd if=/dev/zero bs=1024k of=/dev/"devicename"
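A single dd pass stops when it reaches the end of the device, so one simple way to write the full capacity twice is to run the command in a loop. A minimal sketch, assuming bash and the same device-name placeholder (oflag=direct bypasses the page cache):

for i in 1 2; do
    dd if=/dev/zero bs=1024k of=/dev/"devicename" oflag=direct
done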

  • If you're running a sequential workload to estimate read or write throughput, skip the next step.
  • Fill the drive with 4k random data. The same rule applies: the total amount of data written is twice the drive's capacity.

Use FIO for this purpose. Here is an example script for an NVMe SSD:

[global]
name=4k random write 4 ios in the queue in 32 queues
filename=/dev/nvme0n1
ioengine=libaio
direct=1
bs=4k
rw=randwrite
iodepth=4
numjobs=32
size=100%
loops=2

[job1]
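Save the job description to a file (the name below is just an example) and start it with FIO:

# fio precondition.fio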

  • Now you're ready to run your workload. Measurements are usually started after 5 minutes of runtime to let the SSD firmware adapt to the workload and bring the drive into its sustained performance state. A sketch of such a measurement job is shown below.
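One way to build that warm-up into the measurement itself is FIO's ramp_time option, which discards the statistics gathered during the first part of the run. Below is a minimal sketch of a 4k random read measurement job; the parameter values are illustrative assumptions, not the settings shipped with the tool:

[global]
name=4k random read with 5 minute warm-up
filename=/dev/nvme0n1
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=32
numjobs=4
time_based=1
ramp_time=300
runtime=600

[job1]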

Workload manager

The workload manager is a set of FIO settings grouped into files, shipped with the FIO Visualizer package. Each file represents a specific workload and can be loaded directly into the FIO Visualizer tool, which then starts the FIO job automatically.

Typical workload scenarios are included in the package. These are the basic datasheet workloads used for Intel® Data Center SSDs plus some additional ones that simulate real use cases. The configuration files can easily be changed in any text editor, which makes them a great starting point for benchmarking; a sketch of the kind of file you might edit follows.
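For example, a mixed 70/30 random read/write workload of the kind often used to approximate real applications might look like the sketch below; the values are illustrative assumptions rather than the exact contents of the packaged files:

[global]
name=4k random 70/30 read/write mix
filename=/dev/nvme0n1
ioengine=libaio
direct=1
bs=4k
rw=randrw
rwmixread=70
iodepth=32
numjobs=4
time_based=1
runtime=600

[job1]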

workloadm.png

You may notice that some workload definitions have a SATA prefix while others have NVMe. There are important reasons to keep them separate: the AHCI and NVMe software stacks are very different. SATA drives use a single queue of at most 32 outstanding I/Os (AHCI), while NVMe drives are architected as massively parallel devices. According to the NVMe specification, a drive may support up to 64 thousand queues with up to 64 thousand commands each. In practice, this means certain workloads, such as small-block random ones, benefit from being executed in parallel. That is why the random workloads for NVMe drives use multiple FIO jobs at a time; check the "numjobs" setting.
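As a concrete illustration of the difference (the numbers are assumptions, not the exact packaged settings), a SATA random workload typically keeps a single job with one deep queue, while an NVMe workload spreads the same outstanding I/O across many jobs:

# SATA / AHCI: one job, one queue with up to 32 outstanding I/Os
iodepth=32
numjobs=1

# NVMe: many jobs, each with its own shallow queue
iodepth=4
numjobs=32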

To learn more about NVMe, please see these public IDF presentations, which explain it in detail:

NVM Express*: Going Mainstream and What’s Next

Supercharge Your Data Transfers with NVM Express* based PCI Express* Solid-State Drives