My “HPC” take on March Madness

I recently read a couple of articles: "March was a multicore watershed month" and "HPC Madness: March is More Cores Month." The first rightly notes that users now have "a lot more choice on how they want to balance FLOPs with memory capacity, memory bandwidth, and I/O in different product niches."

As core counts escalate, an HPC customer weighing these choices should not lose sight of two additional factors: per-core performance and licensing costs. In HPC verticals such as Energy and CAE, where software is often still licensed per core or per instance rather than per socket or per node, getting the best performance out of every licensed core at the lowest possible software cost is essential. It matters because software and software-maintenance costs can often outweigh hardware acquisition costs.

When you weigh those two factors, per-core performance and licensing cost, there is a clear advantage to considering the latest generation of Intel® Xeon® processors among your HPC choices. Intel's per-core performance advantages start under the hood at the microarchitecture level: more instructions per clock, better branch prediction, higher clock frequencies, and processor innovations such as Intel® Hyper-Threading Technology and Intel® Turbo Boost Technology. When a processor delivers higher per-core performance and your software costs scale with cores, you spend less on software to accomplish the same task. Even where licensing is not an issue, better per-core performance can translate into needing fewer servers for a given job, lowering total costs.

I encourage you to compare per-core job performance against per-core license cost to determine the best performance-to-cost operating point when making your HPC buying decision. I think you'll find the old adage "working smarter, not harder" is often true.
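
To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it, jobs per core per day, per-core license price, node hardware cost, and amortization period, is a hypothetical placeholder I made up for illustration; plug in your own benchmark results and vendor quotes.

```python
# Hypothetical cost-per-job comparison of two node configurations.
# All figures below are illustrative placeholders, not measured data.

def cost_per_job(jobs_per_core_per_day, cores, license_per_core_per_year,
                 hw_per_node, amort_days=3 * 365):
    """Rough cost per job: amortized hardware plus annual per-core licenses."""
    daily_hw = hw_per_node / amort_days                      # hardware, per day
    daily_license = cores * license_per_core_per_year / 365  # licenses, per day
    daily_jobs = cores * jobs_per_core_per_day               # node throughput
    return (daily_hw + daily_license) / daily_jobs

# Config A: fewer, faster cores (higher per-core performance).
a = cost_per_job(jobs_per_core_per_day=4.0, cores=16,
                 license_per_core_per_year=2000.0, hw_per_node=12000.0)

# Config B: more, slower cores (same license price and hardware cost).
b = cost_per_job(jobs_per_core_per_day=2.5, cores=32,
                 license_per_core_per_year=2000.0, hw_per_node=12000.0)

print(f"Config A: ${a:.2f} per job")
print(f"Config B: ${b:.2f} per job")
```

With these made-up inputs, the configuration with fewer but faster cores comes out ahead on cost per job, because the per-core license fee dominates the total and faster cores squeeze more jobs out of each license.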