Are You Smarter than a Data Centre?


I don’t know about you, but when I’m tired or jet-lagged, doing even the simplest thing feels like a chore. Basic tasks like plugging in my phone charger or making a cup of tea suddenly become a Herculean effort as my brain and limbs attempt to talk to each other. When they don’t quite manage it, clumsiness and frustration ensue.

Why am I sharing my coordination woes with you? It is relevant, I promise. My point is that the relationship between limbs and brain can be thought of as similar to that between data centre resources and the orchestration layer – the subject of today’s blog. Ideally, the orchestration layer should behave like your brain on a good day, keeping track of your environment and sending the right messages to your extremities – “walk around that pillar or you’ll hurt yourself”; “put the teabag in the mug before the boiling water goes in” – without you consciously thinking about it. It’s automatic and dynamic. When that channel of communication is interrupted, or too slow, you’re likely to end up with inefficient use of resources.

In the software-defined data centre, orchestration is what will transform your data centre management from a manual chore to a highly automated process. The idea is that it enables you to do more with less, helping to drive time, cost and labour out of your data centre while increasing agility in your journey to the hybrid cloud – the priorities I outlined in my last blog. It sounds like quite an engineering feat, but the process model for orchestration is actually relatively simple.

As we’ve seen, software-defined infrastructure’s primary focus is managing resources to ensure business-critical application SLAs are met. So, the application is the starting point for the orchestration layer process, which works as follows:

  • Watch. The orchestration layer continuously monitors the data centre’s compute, storage and network resources, tracking how applications are performing against their SLAs.
  • Decide. By analysing these ongoing observations, the orchestration layer can then draw conclusions about the causes of any sub-optimum levels. For example, is there a power outage somewhere that’s forcing data to be diverted away from the most efficient servers? Once these issues or bottlenecks have been identified, decisions can be made about how to resolve them.
  • Act. The orchestration layer can then make these changes quickly and automatically, for example by allocating additional compute resource to improve response times, or making more network bandwidth available during peaks in demand. Changes that could have taken weeks, or even months, for a human technician to get to can be reduced to minutes or seconds.

There’s one more important step though:

  • Learn. The orchestration layer automatically monitors the impact of any changes it makes in the software-defined infrastructure and uses these insights to improve future decision making.

This machine learning, or artificial intelligence (AI), may sound a little far-fetched, but it’s actually being used in a number of familiar environments today – whenever you’re offered a recommendation on Netflix, use Google voice search, or watch IBM’s Watson win at Jeopardy!, you’re experiencing machine learning in action!

In summary then, it’s a self-perpetuating cycle of Watch, Decide, Act, Learn; Watch, Decide, Act, Learn.
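
To make the cycle more concrete, here’s a minimal, purely illustrative sketch of what such a control loop might look like in Python. Everything in it (the function names, the SLA threshold and the simulated telemetry) is a hypothetical placeholder of my own, not any real orchestration API:

    import random
    import time

    SLA_MAX_RESPONSE_MS = 200  # hypothetical response-time target for a business-critical app

    def watch():
        """Watch: gather telemetry from the infrastructure (simulated here with random values)."""
        return {"response_ms": random.uniform(100, 400), "cpu_util": random.uniform(0.2, 0.95)}

    def decide(metrics):
        """Decide: diagnose the bottleneck behind any SLA breach and pick a remedy."""
        if metrics["response_ms"] <= SLA_MAX_RESPONSE_MS:
            return None  # SLA is being met, nothing to do
        return "add_compute" if metrics["cpu_util"] > 0.8 else "add_bandwidth"

    def act(action):
        """Act: apply the change (in a real system, a call out to the resource managers)."""
        print(f"Applying remediation: {action}")
        return {"action": action, "succeeded": True}

    def orchestration_loop(cycles=5, poll_interval_s=1):
        history = []  # Learn: outcomes recorded here would feed future decisions
        for _ in range(cycles):
            metrics = watch()
            action = decide(metrics)
            if action:
                history.append(act(action))
            time.sleep(poll_interval_s)
        return history

    if __name__ == "__main__":
        orchestration_loop()

In a real orchestrator, the Decide and Learn steps would of course be far more sophisticated, drawing on historical data and machine-learning models rather than a single threshold, but the shape of the loop is the same.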

Making Intelligent Connections

I must stress that at Intel we’re not in the business of providing the orchestration layer itself. Our primary role is to enable better orchestration by providing the telemetry and hardware hooks needed by the software to communicate with its data centre resources, like the neurons that carry information (in the form of electrical and chemical signals) to your brain from your hand.


    Figure 1. Intelligent Resource Orchestration
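
As a small illustration of the kind of per-host telemetry an orchestration layer consumes, the sketch below uses the open source psutil Python library to take a basic utilisation snapshot. It’s purely an example of the idea; a real deployment would draw on much richer platform telemetry exposed by the hardware itself.

    import psutil  # third-party library: pip install psutil

    def sample_host_telemetry():
        """Return a small telemetry snapshot an orchestrator could aggregate per host."""
        net = psutil.net_io_counters()
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),      # CPU load averaged over one second
            "memory_percent": psutil.virtual_memory().percent,  # share of RAM in use
            "net_bytes_sent": net.bytes_sent,
            "net_bytes_recv": net.bytes_recv,
        }

    if __name__ == "__main__":
        print(sample_host_telemetry())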

We’ve a long history of collaborating at a software engineering level with established names in this field, such as VMware and Microsoft, to enable them to take full advantage of these features.

We also collaborate with the open source community, working in the OpenStack arena and contributing code to close feature gaps and help make it enterprise-ready.

It’s still early days, and many end-user organizations working to implement a fully automated orchestration capability across all data centre resources are still in the innovator phase of the adoption curve.

However, there is lots of experimentation going on around how better orchestration can help create a more productive and profitable data centre, and I’m sure we’ll be seeing some great progress being made in this sphere over the coming months. Key to the success of any orchestration initiative, though, is having the right telemetry in place, and this is what I’ll be looking at in my next blog.

Meanwhile, do let me know your thoughts on the potential of automatic and dynamic orchestration — I’d love to hear where you think you could use it to reduce costs whilst boosting agility.

You can find my first and second blogs on data centres here:

Is Your Data Center Ready for the IoT Age?

Have You Got Your Blueprints Ready?

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.