Server Performance Tuning Habit #7: Document and Archive

Here’s the 7th follow-up post in my 10 Habits of Great Server Performance Tuners series. This one focuses on the seventh habit: Document and Archive.

I assume the reasons for documenting and retaining data from any performance project are clear, so I won't go into them. Nor will I recommend particular documentation solutions – just find a database or filing solution you like that gets the job done. What I will do is list what needs to be documented.

Normally, performance tuning consists of iterating through experiments. So, for each experiment, it is important to document:

  • What changes were made – hopefully you weren’t trying too many things at once!
  • The purpose – why you tried this particular thing (including who requested it, if appropriate)
  • General information – date & location of testing, person conducting the test
  • Hardware configuration:
    • Platform hardware and version, BIOS version, relevant BIOS option settings
    • CPU model used, number of physical processors, number of cores per processor, frequency, cache size information, whether Hyper-Threading was used (CPU-Z* can help document all of this)
    • Memory configuration – number of DIMMs and capacity per DIMM, model number of DIMMs used
    • I/O interfaces – model number of all add-in cards, slot number for all add-in cards, driver version for all devices (on Windows*, msinfo32 can help with this; on Linux*, lspci – see the sketch after this list)
    • Any other relevant hardware information that affects your workload, such as NIC settings, external storage configuration, external clients used, etc.
  • Software configuration:
    • Operating System used, version, and service pack/update information (use msinfo32 on Windows* systems, uname -a on Linux* systems)
    • Version information for all applications relevant to your workload
    • Compiler version and flags used to build your application (if you are doing software optimization)
    • Any other relevant software information that affects your workload, such as third-party libraries, O/S power utilization settings, pagefile size, etc.
  • Workload configuration:
    • Anything relevant to how your experiment/application was run – for example, your application's startup flags, your virtualization configuration, benchmark information, etc.
  • Results and data – naturally, you would store all of the above information along with the results and data that accompany your experiment
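
Collecting all of this by hand for every run gets tedious fast. Here is a minimal sketch of what automated configuration capture might look like in Python, assuming a Linux system with lspci available; the experiment_config.json filename is just an illustrative choice, not part of the checklist:

```python
import json
import platform
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def capture(cmd):
    """Run a command and return its output, or a note if it isn't available."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout
    except (OSError, subprocess.CalledProcessError) as exc:
        return f"unavailable: {exc}"

config = {
    # General information: date of the test run.
    "date": datetime.now(timezone.utc).isoformat(),
    # Software configuration: OS name, version, and kernel details.
    "os": platform.platform(),
    "kernel": capture(["uname", "-a"]),
    # Hardware configuration: CPU model, cores, frequency, cache sizes.
    "cpu": Path("/proc/cpuinfo").read_text(),
    # Memory configuration: capacity (per-DIMM details need dmidecode and root).
    "memory": Path("/proc/meminfo").read_text(),
    # I/O interfaces: add-in cards and the slots they occupy.
    "pci_devices": capture(["lspci"]),
}

Path("experiment_config.json").write_text(json.dumps(config, indent=2))
```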

This blog entry is also the appropriate place for me to mention the role of automation in your tuning efforts. If you are going to run a significant number of experiments, invest the energy needed to set up an automation infrastructure – a way to run your tests and collect the appropriate data without human attention. I have mentioned automated ways to gather the above data where appropriate.
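
To make that concrete, here is a sketch of a minimal automation harness, again in Python. The workload command, the notes field, and the archive directory layout are all hypothetical placeholders – substitute your own test and metadata:

```python
import json
import subprocess
from datetime import datetime
from pathlib import Path

def run_experiment(name, command, notes):
    """Run one experiment unattended and archive its metadata and output."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    outdir = Path("archive") / f"{stamp}-{name}"
    outdir.mkdir(parents=True)

    # Record the what and why alongside the results.
    (outdir / "experiment.json").write_text(json.dumps({
        "name": name,
        "command": command,
        "notes": notes,  # what was changed, and why
        "started": stamp,
    }, indent=2))

    # Run the workload and keep its raw output for later analysis.
    result = subprocess.run(command, capture_output=True, text=True)
    (outdir / "stdout.log").write_text(result.stdout)
    (outdir / "stderr.log").write_text(result.stderr)
    return outdir

# Hypothetical usage: one call per experiment in your tuning iteration.
run_experiment("baseline", ["./my_benchmark", "--threads", "8"],
               "Baseline measurement before enabling Hyper-Threading.")
```

Paired with the configuration capture sketched earlier, each archive directory then holds the changes, purpose, configuration, and results that the checklist above calls for.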

Keep watching The Server Room for information on the other 3 habits in the coming months.