The Top500 is a closely watched list in the high-performance computing community. My employer, Intel, is in no way immune to bragging about its success in this space.
Recently I have heard many complaints that the measure of performance for the Top500 (LINPACK) is less than ideal. For about 20 years the LINPACK benchmark has defined "leadership" in the Top500 supercomputing list, causing governments and universities to focus on this one benchmark as a measure of status.
The problem is that LINPACK is a measure of compute, not a measure of work. In other words, just because a system can rock the LINPACK benchmark does not mean it will be the best at finding the next protein fold. There is even concern that some solutions can game the benchmark to score higher in the Top500 rankings. These complaints are now manifesting themselves in actual challenges to the status quo: Intel and IBM both questioned the Top500 criteria at the SC2010 conference.
I was excited today to read about a potential replacement for LINPACK: Graph500, a benchmark that measures actual work done. Adopting something like Graph500, or perhaps a series of Top500 benchmarks, could make HPC bragging rights much more relevant to what we actually want to use these machines for.
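To give a feel for the difference: Graph500 ranks machines by how fast they can run breadth-first search over a huge graph, reporting traversed edges per second (TEPS), a data-movement-heavy workload, rather than the dense floating-point math LINPACK stresses. The sketch below is my own toy illustration of the TEPS idea, not the actual Graph500 reference code (which generates enormous synthetic graphs and runs at scale):

```python
# Toy illustration of a TEPS-style measurement: BFS over a small
# adjacency-list graph, counting every edge examined. This is NOT
# the Graph500 reference implementation, just a sketch of the idea.
from collections import deque
import time

def bfs_teps(adj, source):
    """Run BFS from `source`, returning (edges traversed, edges/sec).
    `adj` maps each vertex to a list of its neighbors."""
    visited = {source}
    frontier = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            edges_traversed += 1  # every edge examined counts as work
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    elapsed = time.perf_counter() - start
    return edges_traversed, edges_traversed / elapsed

# Tiny undirected example graph (each edge stored in both directions).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
edges, teps = bfs_teps(adj, 0)
print(edges)  # 8 directed edges examined
```

The point of the sketch is that the "score" depends on chasing pointers through memory, which is much closer to what graph analytics, protein folding, and data-mining codes actually do than solving a dense linear system is.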
My favorite quote on Graph500: "Some, whose supercomputers placed very highly on simpler tests like the Linpack, also tested them on the Graph500, but decided not to submit results because their machines would shine much less brightly," said Sandia computer scientist Richard Murphy, a lead researcher in creating and maintaining the test.