Make Your Data Centre Think for Itself


Wouldn’t it be nice if your data centre could think for itself and save you some headaches? In my last post, I outlined the principle of the orchestration layer in the software-defined infrastructure (SDI), and how it’s like the brain controlling your data centre organism. Today, I’m digging into this idea in a bit more detail, looking at the neurons that pass information from the hands and feet to the brain, as it were. In data centre terms, this means the telemetry that connects your resources to the orchestration layer.

Even the most carefully designed orchestration layer will only be effective if it can get constant, up-to-date, contextual information about the resources it is controlling: How are they performing? How much power are they using? What are their utilisation levels, and are there any bottlenecks due to latency issues? And so on and so forth. Telemetry provides this real-time visibility by tracking resources’ physical attributes and sending that intelligence back to the orchestration software.

Let me give you an example of this in practice. I call it the ‘noisy neighbour’ scenario. Imagine we have four virtual machines (VMs) running on one server, but one of them is hogging a lot of the resource and this is impacting the performance of the other three. Intel’s cache monitoring telemetry on the server can report this right back to the orchestration layer, which will then migrate the noisy VM to a new server, leaving the others in peace. This is real-time situational feedback informing how the whole organism works. In other words, it’s the Watch, Decide, Act, Learn cycle that I described in my previous blog post – doesn’t it all fit together nicely?
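To make the noisy-neighbour scenario concrete, here is a minimal sketch of that Watch, Decide, Act loop in Python. All the names, the telemetry structure and the 50% threshold are illustrative assumptions, not the real cache monitoring or orchestration interfaces:

```python
# Hypothetical sketch: an orchestrator reacting to per-VM cache-occupancy
# telemetry of the kind cache monitoring hardware can report.
from dataclasses import dataclass

@dataclass
class VmTelemetry:
    vm_id: str
    cache_occupancy_mb: float  # shared-cache footprint reported for this VM

# Illustrative policy: flag any VM using more than half of the shared cache.
CACHE_HOG_THRESHOLD = 0.5

def find_noisy_neighbour(samples, total_cache_mb):
    """Watch + Decide: return the VM dominating the shared cache, if any."""
    for s in samples:
        if s.cache_occupancy_mb / total_cache_mb > CACHE_HOG_THRESHOLD:
            return s.vm_id
    return None

def rebalance(samples, total_cache_mb, migrate):
    """Act: hand the noisy VM to a migration callback, leaving the rest in peace."""
    noisy = find_noisy_neighbour(samples, total_cache_mb)
    if noisy is not None:
        migrate(noisy)  # e.g. live-migrate to a less contended server
    return noisy
```

The Learn step would then feed the outcome of each migration back into the threshold or placement policy over time.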

Lessons from History…

Of course, successful telemetry relies on having the right hardware to transmit it. Just think about another data centre game changer of the recent past – virtualisation. Back in the early 2000s, demand for this technology was growing fast, but the software-only solutions available put tremendous overhead demand on the hardware behind them – not an efficient way to go about it. So, we at Intel helped build efficiencies into the hardware with solutions like Intel® Virtualization Technology, bringing greater memory addressability and huge performance gains. Today, we’re applying that same logic to remove SDI bottlenecks. Another example is Intel® Intelligent Power Node Manager, a hardware engine that works with management software to monitor and control power usage at the server, rack and row level, allowing you to set usage policies for each.
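The monitor-and-control idea behind power capping can be sketched in a few lines. This is an illustrative model only, assuming made-up policy levels and wattage numbers rather than the actual Node Manager interface:

```python
# Hypothetical sketch of power-cap policy enforcement at several levels
# (server, rack, row). Names and numbers are illustrative assumptions.

def throttle_decision(watts_now, cap_watts, headroom=0.05):
    """Compare a live power reading against its policy cap.

    Returns 'throttle' when the cap is exceeded, 'relax' when usage is
    comfortably below it, and 'hold' inside the headroom band.
    """
    if watts_now > cap_watts:
        return "throttle"
    if watts_now < cap_watts * (1 - headroom):
        return "relax"
    return "hold"

def check_policies(readings, caps):
    """Evaluate every policy level against its current telemetry reading."""
    return {level: throttle_decision(readings[level], caps[level])
            for level in caps}
```

In practice the management software would apply these decisions continuously, clamping CPU power states when a cap is breached and releasing them as readings fall back under the policy.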

However, we’re not just adding telemetry capabilities at the chip level and boosting hardware performance, but also investing in high-bandwidth networking and storage technologies.

…Applied to Today’s Data Centres

With technologies already in the market to enable telemetry within the SDI, there are a number of real-life use cases we can look to for examples of how it can help drive time, cost and labour out of the data centre. Here are some examples of how end-user organizations are using Intelligent Power Node Manager to do this:


Another potential use case for the technology is reducing reliance on intelligent power strips. You could also throttle back server performance to extend the life of your uninterruptible power supply (UPS) in the event of a power outage, helping lower the risk of service downtime – something no business can afford.

So, once you’ve got your data centre functioning like a highly evolved neural network, what’s next? Well, as data centre technologies continue to develop, the extent to which you can build agility into your infrastructure is growing all the time. In my next blog, I’m going to look into the future a bit and explore how silicon photonics can help you create composable architectures that will enable you to build and reconfigure resources on the fly.

To pass the time until then, I’d love to hear from any of you that have already started using telemetry to inform your orchestration layer. What impact has it had for you, and can you share any tips for those just starting out?

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to

You can find my previous blogs on data centres here:

Is Your Data Centre Ready for the IoT Age?

Have You Got Your Blueprints Ready?

Are You Smarter than a Data Centre?

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.

*Other names and brands may be claimed as the property of others.