Recently at Intel, we were faced with an important question: Are our data centers driving up our software costs?
After several rounds of research and testing, we found our answer: for software licensed on a "per-core" basis, it is probably "yes." In the past few years, many enterprise applications, such as Microsoft Windows Server 2012, have moved toward licensing based on computing power. Rather than licensing per user or per physical processor, some software providers now offer licenses based on the number of cores in your server cluster.
By upgrading to single-socket servers, companies are seeing a substantial increase in performance while significantly decreasing their software costs. As Intel CIO Kim Stevenson revealed in a recent interview, we know this from experience:
"We are seeing up to 35% performance increase in our Electronic Design Automation application workloads. We have deployed more than 5,000 of these servers, achieving better rack density and power efficiency, while delivering higher application performance to Intel silicon design engineers."
By moving from one two-socket server to four single-socket servers, we have reduced the total number of cores in use while retaining equal throughput. Because a single-socket cluster contains fewer cores than a two-socket cluster of equivalent throughput, the cost of any per-core software licensing can drop dramatically.
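To make the licensing math concrete, here is a minimal sketch of the core-count comparison. The core counts and per-core price below are illustrative assumptions for the sake of example, not Intel's actual hardware specifications or license pricing:

```python
# Hypothetical comparison: per-core license exposure of one two-socket
# server vs. four single-socket servers of equivalent total throughput.
# All figures are assumed for illustration only.

CORES_PER_CPU_2S = 12      # assumed cores per CPU in the two-socket server
CORES_PER_CPU_1S = 4       # assumed cores per CPU in each single-socket server
COST_PER_CORE = 500        # assumed per-core license cost, in dollars

# One server with two 12-core sockets.
two_socket_cores = 2 * CORES_PER_CPU_2S        # 24 cores total

# Four servers, each with a single 4-core socket.
single_socket_cores = 4 * CORES_PER_CPU_1S     # 16 cores total

# Fewer licensed cores means a smaller per-core licensing bill.
savings = (two_socket_cores - single_socket_cores) * COST_PER_CORE

print(f"Two-socket cluster:    {two_socket_cores} cores")
print(f"Single-socket cluster: {single_socket_cores} cores")
print(f"Annual license savings: ${savings}")
```

Under these assumed numbers, the single-socket cluster licenses 16 cores instead of 24, an 8-core (and $4,000) reduction per replaced server, with the gap scaling across a fleet of thousands of servers.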
It’s a win-win: by upgrading to single-socket clusters, we’re seeing a 35 percent increase in performance while reducing the need for additional software licenses. If your data center seems to be driving up your software costs, it might be time to rethink your server clusters.