Does Database Performance Matter?

I’m posing that question somewhat rhetorically.  The answer happens to be the theme for Percona* Live 2016 – “Database Performance Matters!”  Databases are ubiquitous, if not also invisible, managing to hide in plain sight.  Reading this blog?  A database was involved when you signed in, and another one served up the content you are reading.  Did you buy something from Starbucks this morning and use their app to pay?  I’m not an expert on their infrastructure, but I will hazard a guess that at least one database was involved.

So why the mention of Percona Live 2016?  Well, I was recently offered the opportunity to speak at the conference this year.  The conference takes place April 18-21, and for those able to attend, the session I’m delivering is at 11:30am on April 19th.  The session is titled “Performance of Percona Server for MySQL* on Intel® Server Systems using HDDs, SATA SSDs, and NVMe* SSDs as Different Storage Mediums”, creative and lengthy, I know…  Without revealing the entirety of the session, I’ll go into a fair amount of it below.  I had a framework in mind that involved SSD positioning within MySQL, and set out to do some additional research before putting the proverbial “pen to paper” to see if there was merit.  I happened upon a talk from Percona Live 2015 by Peter Zaitsev, CEO of Percona, coincidentally titled “SSD for MySQL”.  It’s a quick read, eloquent and concise, and it got me thinking: just how much does storage impact database performance?  To help answer that, I first need to offer a quick definition of storage engines.

Database storage engines are an interesting topic (to me, anyway).  The basic concept behind them is to take a traditional database and make it function as much as possible like an in-memory database.  The end goal is to interact with the underlying storage as little as possible, because working in memory is far faster than working with storage.  Generally speaking, performance is good and consistent so long as the storage engine doesn’t need more memory than it has been allocated.  In situations where the allocated memory is insufficient, and these situations do arise, what happens next can make or break an application’s Quality of Service (QoS).
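As a rough mental model of why exceeding allocated memory hurts so much, consider a toy LRU page cache.  This is a deliberate simplification for illustration only, not how XtraDB is actually implemented:

```python
from collections import OrderedDict

class ToyBufferPool:
    """Toy LRU page cache: a simplified stand-in for a storage engine's
    in-memory cache, purely to illustrate memory pressure."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, page_id):
        if page_id in self.pages:
            self.hits += 1                      # served from memory
            self.pages.move_to_end(page_id)     # mark most recently used
        else:
            self.misses += 1                    # would fall through to storage
            self.pages[page_id] = True
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used page

# Working set (50 pages) fits in memory: the second scan is all hits.
fits = ToyBufferPool(capacity_pages=100)
for _ in range(2):
    for page in range(50):
        fits.read(page)

# Working set exceeds memory: with LRU and a sequential scan, each page is
# evicted before it is re-read, so every read goes back to "storage".
pressured = ToyBufferPool(capacity_pages=10)
for _ in range(2):
    for page in range(50):
        pressured.read(page)
```

When the working set fits, repeat reads are served from memory; once it exceeds the cap, a scan-style access pattern defeats the cache entirely and every read falls through to storage, at which point storage latency starts dictating QoS.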

Percona Server, with its XtraDB* storage engine, is a drop-in replacement for MySQL.  So, I figured it was time for a quick comparison of different storage solutions behind XtraDB.  One aspect I wanted to examine is how well XtraDB deals with memory pressure when a database’s working set exceeds the RAM allotted to it, which is greatly influenced by the storage subsystem where the database is ultimately persisted.

To simulate these situations, I ran a few benchmarks against Percona Server with its storage engine capped at sizes smaller than the raw size of the databases used in the benchmarks.  This created the necessary memory pressure to force interaction with storage.  For the storage side of the equation, I compared a RAID of enterprise-class SAS HDDs against a single SATA SSD and a single NVMe SSD.  My results are presented relative to those of the HDD solution.  Rather than report raw numbers, the focus here is to highlight the impact storage selection has on performance, not to promote any single configuration as a reference MySQL solution.
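For clarity, “relative” here just means each configuration’s raw score normalized against the HDD baseline.  A minimal sketch of that arithmetic, with hypothetical numbers rather than the measured data:

```python
def relative_to_baseline(results, baseline="HDD HW RAID 10"):
    """Normalize raw benchmark scores against a baseline configuration."""
    base = results[baseline]
    return {name: score / base for name, score in results.items()}

# Hypothetical throughput scores, not the numbers from these tests.
raw = {"HDD HW RAID 10": 1000.0, "SATA SSD": 1500.0, "NVMe SSD": 1800.0}
rel = relative_to_baseline(raw)
# e.g. rel["NVMe SSD"] == 1.8, i.e. an 80% gain over the HDD baseline
```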

I used the following tools to perform the benchmarking:

  • SysBench* 0.5: Open source, cross platform, scriptable, and well known in the MySQL world.  SysBench provides modules for testing multiple aspects of a server; for my testing, I used the modules for file I/O performance and database server performance (OLTP).  The SysBench results are not recapped below; I have to keep something for the talk at Percona Live, not to mention keep this blog brief.
  • HammerDB* 2.19: Also open source and cross platform, HammerDB provides a nice wrapper for running workloads derived from the TPC-C and TPC-H benchmarks created by the Transaction Processing Performance Council (TPC*).  The HammerDB results are illustrated below.

Moving on to the base server platform:

And the underlying storage configurations tested:

  • HDD: HW RAID 10 comprising the following:
    • 6x Enterprise-class 15K RPM, 600 GB, 12Gbps SAS HDDs
      • Raw capacity: 3600 GB
      • Capacity as configured: 1800 GB
  • NVMe SSD:

Next the software stack:

  • CentOS* 7.2 (64-bit)
  • Percona Server 5.7.11-4 (64-bit)
  • SysBench 0.5
  • HammerDB 2.19 (64-bit)
  • Inbox NVMe and RAID Controller drivers
  • XFS file system


The table below recaps some of the high-level observations from these tests:

  • HammerDB TPC-C (Relative Throughput)
    • Performance gains of up to 53% for SATA
    • Performance gains of up to 64% for NVMe
  • HammerDB TPC-H (Run Time)
    • Reduction in run time of up to 23% for SATA
    • Reduction in run time of up to 46% for NVMe
  • HammerDB TPC-H (Queries per Hour)
    • Up to 29% more Queries per Hour for SATA
    • Up to 84% more Queries per Hour for NVMe

Figure 1- HammerDB TPC-C Test: Relative Throughput Compared to HDD HW RAID 10
Figure 2- HammerDB TPC-H Test: Relative Run Time Compared to HDD HW RAID 10
Figure 3- HammerDB TPC-H Test: Relative Throughput Compared to HDD HW RAID 10


All in all, this was an interesting (if not fun) exercise.  Six HDDs or a single SSD?  Relative performance results aside, one should also consider power consumption, reliability, and the opportunity cost savings that derive from performance gains over the lifetime of a hardware platform, as these can often be more substantial than the upfront costs.  Speaking of upfront costs, the Percona Live talk also addresses the relative upfront cost of each storage configuration, which makes for an interesting conversation when that information is juxtaposed against usable capacity and performance results.

Additional configuration details:

Non-default configuration parameters for HammerDB and Percona Server used in these tests:

For HammerDB with TPC-C Option

  • Within HammerDB
    • Number of warehouses: 1140
    • Number of users: 31
    • Timed Test Driver Script: 2 minute ramp, 5 minute duration
  • The [mysqld] InnoDB settings in my.cnf:
    • innodb_buffer_pool_size=10240M (~1/10th the database size, to induce memory pressure)
    • innodb_log_file_size = 2560M
    • innodb_log_files_in_group = 2
    • innodb_log_buffer_size = 8M
    • innodb_flush_log_at_trx_commit = 0
    • innodb_checksums = 0
    • innodb_flush_method = O_DIRECT
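One way to sanity-check that a capped buffer pool is actually inducing memory pressure during a run is to compare InnoDB’s counters from SHOW GLOBAL STATUS: Innodb_buffer_pool_read_requests counts logical page reads, while Innodb_buffer_pool_reads counts the subset that had to go to storage.  A sketch of the arithmetic, with hypothetical counter values:

```python
def buffer_pool_hit_ratio(status):
    """Compute the InnoDB buffer pool hit ratio from SHOW GLOBAL STATUS
    counters.  A low ratio means reads are falling through to storage."""
    requests = status["Innodb_buffer_pool_read_requests"]  # logical reads
    disk_reads = status["Innodb_buffer_pool_reads"]        # reads from storage
    if requests == 0:
        return 1.0
    return 1.0 - disk_reads / requests

# Hypothetical counter values, not taken from the tests in this post.
sample = {"Innodb_buffer_pool_read_requests": 1_000_000,
          "Innodb_buffer_pool_reads": 150_000}
ratio = buffer_pool_hit_ratio(sample)  # 0.85
```

A well-warmed pool that fits its working set typically sits very close to 1.0; a ratio like the 0.85 above indicates that a meaningful share of reads is hitting the storage subsystem, which is exactly the regime these tests were designed to exercise.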

For HammerDB with TPC-H Option

  • Within HammerDB
    • Scale Factor: 10
    • Number of users: 24
  • The [mysqld] InnoDB settings in my.cnf:
    • innodb_buffer_pool_size=10240M (~½ the database size, to induce memory pressure)
    • innodb_log_file_size = 2560M
    • innodb_log_files_in_group = 2
    • innodb_log_buffer_size = 8M
    • innodb_flush_log_at_trx_commit = 0
    • innodb_checksums = 0
    • innodb_flush_method = O_DIRECT


Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as HammerDB, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Source: Internal Testing

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.

Tests document performance of components on a particular test, in specific systems.  Differences in hardware, software, or configuration will affect actual performance.  Consult other sources of information to evaluate performance as you consider your purchase.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2016 Intel Corporation.  All rights reserved.

*Other names and brands may be claimed as the property of others.


About Ken LeTourneau

Ken LeTourneau has been with Intel for 20 years and is a Solutions Architect focused on Big Data and Artificial Intelligence. He works with leading software vendors on architectures and capabilities for Big Data solutions with a focus on analytics. He provides a unique perspective to leading IT decision makers on why AI is important for 21st century organizations, advising them on architectural best practices for deploying and optimizing their infrastructure to meet their needs. Previously, Ken served as an Engineering Manager and Build Tools Engineer in Intel's Graphics Software Development and Validation group. He got his start as an Application Developer and Application Support Specialist in Intel's Information Technology group.