Economies of Scale in the Datacenter: Enabling Webscale for Information & Communications Technology

By Deirdré Straughan, Technology Adoption Strategist, Business Unit Cloud & IP, Ericsson

To keep up with today’s fast-changing workloads and storage demands, data centers need distributed computing environments that can scale rapidly and seamlessly, yet remain cost-effective. This is known as hyperscale computing, and today it happens inside a handful of giants (Amazon, Facebook, Google) that design their own systems – from data centers to cooling and electrical systems to hardware, firmware, software, and operational methodologies – to achieve economics that drive down capex and opex, enabling them to be profitable providers of cloud and other massive-scale online services.

All businesses are becoming software companies, and we all need – but often can’t afford – this kind of webscale information and communications technology (“webscale ICT”). Most companies don’t have the in-house resources to design and manufacture bespoke systems the way Amazon, Facebook, and Google do. Legacy IT vendors, content to maintain their current margins as long as possible, have not stepped up to fill this gap.

At the same time, we are all beginning to recognize the data security and governance concerns that have so far deterred 74% of companies* from moving critical operations into the cloud. With daily news of data breaches across all sectors, and rapidly changing global laws on privacy and customer data, it is clear that traditional IT architectures and approaches, even when used strictly in-house, have become inadequate for today’s security needs.

Putting the C into ICT

Ericsson, which has been providing telecommunications equipment and services since 1876, approaches this problem from a different angle. We bring long, global experience in building and maintaining the real-time, reliable, predictable, and secure communications network infrastructure that our operator customers – and their billions of subscribers in the remotest corners of the globe – demand and rely upon.

We set out to analyze from first principles what it actually means to run the world’s most efficient data centers, and how those practices can be applied in every data center and telco central office. We have identified a cycle in the industrialization of IT, a continuous loop of:

•   Standardization of hardware, software, operational methodologies, and economic strategy.

•   Combination and consolidation to drive the highest possible occupancy, utilization, and density.

•   Abstraction, for complete programmability of all functionality and capabilities.

•   Automation of anything that is done more than three times.

•   Governance of performance, scalability, quality, economics, compliance, and security.

How is this to be achieved for hyperscale computing?

Hardware Standardization, Combination, and Consolidation

We first need an off-the-shelf system that can be designed, purchased, and managed in a completely customized fashion, able to integrate with legacy hardware and fit into existing data centers, yet capable of evolving rapidly as needs and workloads change: a software-defined, workload-optimized infrastructure, architected for hyperscale efficiency.

This is made possible by Intel’s Rack Scale architecture, which introduces features such as hardware disaggregation, silicon photonics, and software that pools disaggregated hardware over a rack scale fabric for higher utilization and performance.
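To make the idea concrete, here is a minimal sketch in Python of how an orchestrator might compose a logical node from pooled, disaggregated resources through a rack-level management API. The pod-manager address, endpoint paths, and payload fields are illustrative assumptions for this example, not the actual Rack Scale interface.

import requests

# Hypothetical pod-manager endpoint; a real deployment would talk to the
# management API exposed by its own rack-scale fabric controller.
POD_MANAGER = "https://pod-manager.example.com/api/v1"

def compose_node(cpu_cores, memory_gib, storage_gib):
    """Request a logical node assembled from pooled CPU, memory, and storage.

    The resources live in different trays of the rack and are stitched
    together over the rack-scale fabric (e.g. silicon photonics links).
    """
    spec = {
        "processors": {"totalCores": cpu_cores},
        "memory": {"capacityGiB": memory_gib},
        "remoteDrives": [{"capacityGiB": storage_gib}],
    }
    resp = requests.post(f"{POD_MANAGER}/nodes", json=spec, timeout=30)
    resp.raise_for_status()
    return resp.json()["nodeId"]

def release_node(node_id):
    """Return the node's resources to the shared pools for reuse."""
    requests.delete(f"{POD_MANAGER}/nodes/{node_id}", timeout=30).raise_for_status()

# Example: node_id = compose_node(cpu_cores=16, memory_gib=128, storage_gib=2048)

The point is that a “server” becomes a software request against shared pools, and its resources go back into those pools the moment the workload no longer needs them.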

Monitoring and lifecycle-management software provides full awareness of every detail of the hardware infrastructure and its workloads – the knowledge needed to achieve new levels of capex and opex savings.
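As a rough illustration of how that awareness turns into savings, the sketch below aggregates per-sled utilization telemetry and flags hardware that could be consolidated or returned to the pool. The record format, sled names, and thresholds are assumptions made up for the example; real figures would come from the monitoring layer.

from statistics import mean

# Illustrative telemetry records; in practice these would be pulled from
# the monitoring software rather than hard-coded.
samples = [
    {"sled": "rack1/sled03", "cpu_util": 0.12, "mem_util": 0.30},
    {"sled": "rack1/sled04", "cpu_util": 0.78, "mem_util": 0.81},
    {"sled": "rack2/sled01", "cpu_util": 0.07, "mem_util": 0.15},
]

def underutilized(records, cpu_threshold=0.20, mem_threshold=0.35):
    """Flag sleds whose average utilization suggests their workloads could be
    consolidated elsewhere and the hardware returned to the resource pool."""
    by_sled = {}
    for r in records:
        by_sled.setdefault(r["sled"], []).append(r)
    flagged = []
    for sled, recs in by_sled.items():
        if (mean(r["cpu_util"] for r in recs) < cpu_threshold
                and mean(r["mem_util"] for r in recs) < mem_threshold):
            flagged.append(sled)
    return flagged

print(underutilized(samples))  # ['rack1/sled03', 'rack2/sled01']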


Another way to use hardware efficiently is to abstract formerly hard-wired features into software. Ericsson’s telco customers are already familiar with SDN (software-defined networking) and NFV (network functions virtualization), technologies that are enabling new efficiencies in telco systems. Software itself can be further abstracted and modularized via APIs.
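As a simplified sketch of what that abstraction looks like in code, the example below models network functions as interchangeable software modules behind a common interface, so a service chain can be rearranged programmatically instead of by re-cabling appliances. The class and method names are illustrative only, not an Ericsson or standard NFV API.

from abc import ABC, abstractmethod

class NetworkFunction(ABC):
    """A virtualized network function: a software module that processes
    packets, standing in for what used to be a dedicated appliance."""

    @abstractmethod
    def process(self, packet):
        """Return the (possibly modified) packet, or None to drop it."""

class Firewall(NetworkFunction):
    def __init__(self, blocked_ports):
        self.blocked_ports = set(blocked_ports)

    def process(self, packet):
        return None if packet["dst_port"] in self.blocked_ports else packet

class Nat(NetworkFunction):
    def __init__(self, public_ip):
        self.public_ip = public_ip

    def process(self, packet):
        packet["src_ip"] = self.public_ip  # rewrite the source address
        return packet

def run_chain(packet, chain):
    """Push a packet through an ordered service chain of network functions."""
    for nf in chain:
        packet = nf.process(packet)
        if packet is None:
            return None  # dropped somewhere along the chain
    return packet

chain = [Firewall(blocked_ports=[23]), Nat(public_ip="203.0.113.7")]
print(run_chain({"src_ip": "10.0.0.5", "dst_port": 443}, chain))

Swapping, adding, or reordering functions in the chain is now an API call or a configuration change, not a hardware change.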


Compelling economics, efficiencies, and ease of use, however, will not be enough in today’s increasingly insecure yet regulated world of data. The requirements for true data security and compliance go well beyond today’s role-based access control (RBAC), public-key encryption, and so on. On the front end, systems must set and enforce policy as software is deployed. Then, once data is moving through a system, its integrity must be independently verifiable wherever it goes, whenever anyone touches it, throughout its lifetime.
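One way to picture “independently verifiable wherever it goes” is a tamper-evident chain of hashes, appended each time the data is touched, that any later party can replay to detect modification. The following is a minimal, self-contained sketch of that idea, not the mechanism of any particular product.

import hashlib, json, time

def record_touch(chain, actor, payload):
    """Append an entry binding who touched the data, a digest of the data,
    and the hash of the previous entry, forming a tamper-evident chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "actor": actor,
        "payload_digest": hashlib.sha256(payload).hexdigest(),
        "prev_hash": prev_hash,
        "timestamp": time.time(),
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record_touch(chain, "ingest-service", b"customer record v1")
record_touch(chain, "analytics-job", b"customer record v1")
print(verify(chain))           # True
chain[0]["actor"] = "mallory"  # tamper with the history...
print(verify(chain))           # ...and verification fails: False

In a real system each entry would also be digitally signed, so that the verifier does not have to trust whoever stores the chain.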


In the last 200 years, the telecommunications industry has brought the power of communication to an ever-larger share of the world’s people, at ever lower cost, resulting in unimaginable technical and social changes. The next step is to democratize compute and storage in the same way, bringing the power of IT to everyone, while maintaining the security and reliability that we expect from our telecommunications systems.

The Ericsson HDS 8000 hardware system announced today is a first step – a big step! – taken together with Intel, towards webscale ICT: massive-scale systems, reliable and secure, available to all. We don’t yet know what changes this will enable in the world – but it’s going to be fun finding out!

*Cloud Connect and Everest Group, “Enterprise Cloud Adoption Survey 2014,” p. 6. See also: Cloud Connect and Everest Group, “Enterprise Cloud Adoption Survey 2013.”

Deirdré Straughan, Technology Adoption Strategist for Ericsson, has been communicating online since 1982 and has worked in technology nearly as long. She operates at the interfaces between companies and customers, technologists and non-technologists, marketers and engineers, and anywhere else that people need help communicating with each other about technology. Learn more at