If you are using legacy UNIX infrastructure to host those applications…
What is a Mission Critical application?
Everybody has their own definition, but however the definition is framed, it boils down to this: Mission Critical applications are the applications required to run the business. If such an application is not running, the mission of the business is threatened or even shut down. Consequently, Mission Critical applications are hosted on the most robust and redundant hardware platforms available.
Thirty-five years ago it was unthinkable to host such an application on anything but a mainframe. Midrange servers existed, but only the VAX VMS Cluster could meet high availability requirements rivaling the mainframe's. When these servers were used in business at all, it was mostly for departmental operations; minicomputers were relegated to the periphery of the business.
Then SMP (Symmetric Multi-Processing) servers began showing up in the 1990s, allowing for large UNIX servers and the configuration of servers into high availability architectures. Processors used for these servers ranged from proprietary RISC processors like SPARC or POWER to industry-standard processors like the 386 and 486, and eventually the early Pentium processors from Intel. Lower costs compared to the mainframe led IT departments to host Mission Critical applications on these UNIX servers. The last ten years have seen another shift: these large UNIX servers are increasingly being supplanted by lower cost servers built on high performance, reliable Intel Xeon processors. The chart below, from Intel's promotion of the Itanium Processor, dramatically shows this shift.
But the shift in IT spending isn't the only message in this chart. The chart shows not only how the IT spend for hosting Mission Critical applications is distributed, but also that the total amount spent on hosting Mission Critical applications has GONE DOWN over the last decade! In 2001, IT spent $58.136B hosting Mission Critical applications. In 2011, that spend was $56.5B. That's about $1.6B less in 2011 than in 2001! In addition, the chart shows that roughly 26% of the total spend, the portion going to RISC servers, supports only about 3% of the hardware in the Data Center.
How can this be? The answer is that IT Directors are increasingly turning to Intel Xeon based servers to host their Mission Critical applications, even as, as we all know, the number of Mission Critical applications in the business keeps growing.
How is it possible that the number and scope of Mission Critical applications have skyrocketed while the amount of money IT spends to run these applications has gone down?
The answer lies in Moore's Law.
Basically, Moore's Law says that the number of transistors on a processor will double every 18 to 24 months. This means that a processor's capability to process data increases significantly on the same cadence. The low cost Intel Xeon processor has been adding functionality along with speed, and has become capable of running just about any Mission Critical application in IT's portfolio. So the capability and performance of Intel processors increase with Moore's Law. Instead of doubling the physical size of a processor every 18 to 24 months, Intel is relentlessly driven to shrink the manufacturing process used to fabricate the processor's features. Shrinking feature size keeps the processor die similar in size to its predecessor while doubling the number of transistors, all while sustaining high manufacturing volume. By manufacturing processors in high volume, Intel is able to sell them at a reasonable price, near the price point of the previous generation.
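The compounding effect of Moore's Law over the decade discussed here can be sketched with simple arithmetic. This is an illustrative projection only; the two-year doubling period and the starting count of 1 are assumptions for the sake of the example, not actual Intel figures.

```python
def transistors_after(start_count: int, years: int,
                      doubling_period_years: float = 2.0) -> int:
    """Project a transistor count forward under Moore's Law:
    one doubling every `doubling_period_years`."""
    doublings = years / doubling_period_years
    return int(start_count * 2 ** doublings)

# Over ten years at a two-year doubling period, transistor count
# (and, roughly, processing capability) grows by a factor of 2**5 = 32.
growth_factor = transistors_after(1, 10)  # -> 32
```

A 32x increase in capability per processor over the decade is what lets IT host far more Mission Critical workloads while spending less in total.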
Shrinking the feature size each generation requires new machinery in a whole new processor fabrication facility. Constructing a new fab today runs from $3 to $5 Billion, and it is estimated that the cost of a 300mm fab will soon rise to $10 Billion. The chart below shows that the number of companies with their own fabs will shrink in the next generation.
So what does this have to do with your paying too much for your Mission Critical compute infrastructure? It's pretty simple. Intel makes hundreds of millions of chips each generation. Fab costs, the costs of ramping up a new process technology, and R&D costs are amortized across those hundreds of millions of processors, allowing Intel to keep prices down for the current generation of processors and succeeding generations.
All processor developers face these issues in the development and manufacture of the processors they design. For companies making only hundreds of thousands, or even just a few million, processors, the fixed costs of a fab, the variable costs of ramping up a new process, and the variable costs of R&D have to be amortized across just those processors. With fewer processors to spread the costs across, something has to give: either the manufacturing process stays the same as the previous generation, or the price goes up to absorb the costs.
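The amortization argument above is back-of-the-envelope arithmetic. A minimal sketch, using the $3-5B fab cost cited earlier (taking $4B as a midpoint) and assumed, purely illustrative unit volumes:

```python
def cost_per_chip(fixed_costs: float, units: int) -> float:
    """Share of fixed costs carried by each processor produced."""
    return fixed_costs / units

FAB_COST = 4e9  # midpoint of the $3-5B fab construction cost cited above

# High-volume maker (hundreds of millions of chips per generation):
high_volume_share = cost_per_chip(FAB_COST, 200_000_000)  # $20 per chip

# Low-volume maker (a few million chips per generation):
low_volume_share = cost_per_chip(FAB_COST, 2_000_000)     # $2,000 per chip
```

At these assumed volumes, the low-volume maker carries one hundred times the fixed cost per chip, which is exactly the gap that shows up in the price of proprietary RISC servers versus Xeon servers.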
Intel, by contrast, has been able to provide increased performance and features generation after generation while keeping the price to the consumer at a reasonable level. Most IT managers are aware of this. They are faced with meeting ever tightening Service Level Agreements while their budgets are kept flat or even reduced, and they are able to meet those demands by putting their Mission Critical applications on servers built on the reliable and powerful Intel Xeon family of processors.