Today, I’d like to take a peek at what’s around the corner, so to speak, and put the spotlight on a new and exciting area of development. We’ve spent some time in this blog series exploring Software Defined Infrastructure (SDI) and its role in the journey to the hybrid cloud. We’ve looked at what’s possible now and how organisations early to the game have started to use technologies like orchestration layers and telemetry to increase agility whilst driving time, cost and labour out of their data centres. But where’s it all going next?
One innovation that we’re just on the cusp of is server disaggregation and composable resources (catchy, huh?). As with much of the innovation I’ve spoken about during this blog series, this is about ensuring the data centre infrastructure is architected to best serve the needs of the software applications that run upon it. Consider the Facebooks*, Googles* and Twitters* of the world – hyper-scale cloud service providers (CSPs), running hyper-scale workloads. In the traditional enterprise, software architecture is often based on virtualisation – allocating one virtual machine (VM) to one application instance as demand requires. But what happens when this software/hardware model simply isn’t practical?
This is the ‘hyper-scale’ challenge faced by many CSPs. When operating at hyper-scale, response times are achieved by distributing workloads over many thousands of server nodes concurrently, so a software architecture designed to run on a ‘compute grid’ is used to meet scale and flexibility demands. An example of this is the MapReduce programming model, used to process terabytes of data across thousands of nodes.
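For readers unfamiliar with the pattern, MapReduce splits a job into a map phase, run independently on each node’s shard of the data, and a reduce phase that combines the intermediate results. Here’s a toy single-machine word-count sketch of the pattern – purely illustrative, not the distributed implementation itself:

```python
from collections import defaultdict

def map_phase(shards):
    # Map: in a real cluster, each node would process its own shard
    # and emit (word, 1) pairs; here we just iterate in one process.
    for shard in shards:
        for word in shard.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: pairs sharing a key are combined into a single count.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

shards = ["the cloud", "the grid the cloud"]
print(reduce_phase(map_phase(shards)))  # {'the': 3, 'cloud': 2, 'grid': 1}
```

The point of the split is that the map phase is embarrassingly parallel, which is exactly what lets the work spread across thousands of nodes.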
However, along with this comes the requirement to add capacity at breathtaking pace whilst simultaneously achieving previously unheard-of levels of density to maximise space usage. Building new data centres, or ‘pouring concrete’, is not cheap and can adversely affect service economics for a CSP.
Mix-and-Match Cloud Components
So, what’s ‘The Big Idea’ with server disaggregation and composable resources?
Consider this: What if you could split all the servers in a rack into their component parts, then mix and match them on-demand in whatever configuration you need in order for your application to run at its best?
Let me illustrate this concept with a couple of examples. Firstly, consider a cloud service provider with users uploading in excess of 50 million photographs a day. Can you imagine the scale on which infrastructure has to be provisioned to keep up? In addition, hardly any of these pictures will be accessed after initial viewing! In this instance, the CSP could dynamically aggregate, say, low-power Intel® Atom™ processors with cheap, high-capacity hard drives to create economically appropriate cold storage for infrequently accessed media.
Alternatively, a CSP may be offering a cloud-based analytics service. In this case, the workload could require aggregation of high performance CPUs coupled with high bandwidth I/O and solid state storage – all dynamically assembled, from disaggregated components, on-demand.
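There’s no real API shown here – purely as a thought experiment, this minimal Python sketch models the idea: shared pools of disaggregated components from which a node is composed on demand and later disbanded. All pool names and capacities are invented for illustration:

```python
# Hypothetical rack-level resource pools; names and sizes are illustrative only.
POOLS = {
    "atom_cpu": 64,    # low-power compute modules
    "xeon_cpu": 32,    # high-performance compute modules
    "hdd_tb": 2000,    # high-capacity spinning disk, in TB
    "ssd_tb": 200,     # solid-state storage, in TB
}

def compose_node(requirements, pools=POOLS):
    """Reserve components from the shared pools to 'build' a server.

    Returns the allocation, or None if any pool lacks capacity."""
    if any(pools.get(r, 0) < qty for r, qty in requirements.items()):
        return None
    for r, qty in requirements.items():
        pools[r] -= qty
    return dict(requirements)

def release_node(allocation, pools=POOLS):
    # Disband the node: its components return to the pools for reuse.
    for r, qty in allocation.items():
        pools[r] += qty

# Cold storage node: cheap compute plus lots of disk.
cold = compose_node({"atom_cpu": 2, "hdd_tb": 100})
# Analytics node: fast compute plus solid-state storage.
hot = compose_node({"xeon_cpu": 8, "ssd_tb": 20})
```

The same pools serve both workloads; when either node is released, its components go straight back into circulation rather than sitting idle inside a fixed server.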
The Infinite Jigsaw Puzzle
This approach, the dynamic assembly of composable resources, is what Intel terms Rack Scale Architecture (RSA).
RSA defines a set of composable infrastructure resources contained in separate, customisable ‘drawers’. There are separate drawers for different resources – compute, memory, storage – like a giant electronic pick-and-mix counter. A top-of-rack switch then uses silicon photonics to dynamically connect the components together to create a physical server on demand. Groups of racks – known as pods – can be managed and allocated on the fly using our old friend the orchestration layer. When application requirements change, the components can be disbanded and recombined into new infrastructure configurations as needed – like having a set of jigsaw puzzle pieces that can be put together in infinite ways to create a different picture each time.
Aside from the fun of all the creative possibilities, there are a lot of benefits to this type of approach:
- Using silicon photonics, which transmits information by laser rather than by physical cable, means expensive cabling can be reduced by as much as 3x1.
- Server density can be increased by 1.5x and power provisioning reduced by up to 6x1.
- Network uplink bandwidth can be increased by 2.5x and downlink bandwidth by as much as 25x1.
All this means you can make optimal use of your resources and achieve granular control with high-level management. If you want to have a drawer of Intel Atom processors and another of Intel Xeon processors to give you compute flexibility, you can. Want the option of using disk or SSD storage? No problem. And want to be able to manage it all at the pod level with time left over to focus on the more innovative stuff with your data centre team? You got it.
All this is a great example of how software-defined infrastructure can help drive time, cost and labour out of the data centre whilst increasing business agility, and it will continue to do so as the technology evolves. Next time, we’ll be looking into how the network fits into SDI, but for now do let me know what you think of the composable resource approach. What would it mean for your data centre, and your business?
1 Improvement based on standard rack with 40 DP servers, 48-port ToR switch, 1GE downlink/server and 4x 10GE uplinks (cables: 40 downlink and 4 uplink) vs. rack with 42 DP servers, SiPh patch panel, 25Gb/s downlink, 100Gb/s uplink (cables: 14 optical downlink and 1 optical uplink). Actual improvement will vary depending on configuration and actual implementation.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.