Edge Computing, the Next Great IT Revolution

Commercial 5G services are no longer a pipe dream. We at Intel have worked with communication service providers (CommSPs) around the world to lay the foundation for a new era of data-centric services, but we have only scratched the surface of what’s possible.

The promise of 5G and the rise of computationally intensive AI applications have accelerated network transformation from the data center and cloud to the network core and edge, delivering significantly higher bandwidth, lower latency and end-to-end quality of service (QoS) and enabling the proliferation of private wireless networks.

Cloud transformation dominated IT headlines as it evolved from infrastructure-as-a-service and platform-as-a-service to software-as-a-service and function-as-a-service models. Edge computing will have similar prominence over the next decade, because placing resources to move, store and process data closer to the data’s source or the point of service delivery will be central to a new class of applications that require faster decision making. Edge computing evolves and extends cloud computing, transforming the underlying architecture and creating an environment ripe for application, service and business model innovation.

The edge demands decentralized and heterogeneous discovery, deployment, and orchestration to support key performance indicators (KPIs) around higher quality of experience, total cost of ownership improvements, security and compliance with data sovereignty requirements, and delivery of timely and actionable insights. We believe edge computing represents the future of computing and service delivery. Let’s look at some of our preparations to support it.

Network Functions Virtualization + Cloud: Fundamental for Intelligent Edge

Intel’s leadership in network infrastructure and application virtualization opens up interesting possibilities. With network functions virtualization (NFV) as the foundation across the entire network cloud architecture, the radio access network (RAN), next generation central office (NGCO) and universal customer premises equipment (uCPE) at the enterprise site are ideal locations to support new edge services and more fully monetize infrastructure investments.

The picture above shows how edge locations are expected to support different levels of KPIs, such as latency. Whether it’s a pole-mounted video surveillance application, a server deployed at a radio tower, an on-premises enterprise data center or a rack of servers in an NGCO, each location and use case has different performance, form factor, power and thermal requirements. Along with our partner ecosystem, we have disaggregated these platform differences from the services to give CommSPs the flexibility to move services around to support these various KPIs.

Edge Key Performance Indicators: Location Matters

There are three KPIs driving the edge value proposition:

  • Latency & Determinism – Latency and Determinism are fundamental value drivers for advanced applications such as content delivery, factory automation with robotics, industrial control systems, video surveillance and security, immersive media applications, autonomous vehicles and more. These applications demand less than 20ms in end-to-end latency and deterministic response. A hyperscale cloud cannot meet this requirement.
  • Bandwidth - By 2022, 82 percent of IP traffic will be video content. This traffic is expensive to transport to the cloud and process at a hyperscale cloud data center. CommSPs are therefore motivated to process video data at the edge to reduce jitter, improve video quality and create new revenue with value-added services such as content delivery networks (CDNs), cloud gaming, or video analytics for enhanced retail experiences, among others.
  • Data Locality & Regulatory Compliance – Data privacy and security are driving the need for pervasive edge deployments as governments implement policies to protect customer data and maintain data within their geographies, such as the General Data Protection Regulation (GDPR).

These KPIs, among others, are the driving force behind investments in Intel architecture, open source innovation and a maturing ecosystem. Intel’s overarching commitment to quality of service is focused on providing fine-grained control over the performance of platform components to deliver the right balance among these KPIs as required by various workloads.

Supporting edge KPIs with optimized hardware and software for infrastructure, applications and ecosystem

Intel offers a number of options to support NFV and virtual network applications anywhere in the network. Standardizing on a common architecture also simplifies the software implementation by providing easy portability and reuse of software for various applications. This is one of the reasons that so many 5G deployments and trials are fueled by Intel® Xeon® based platforms. Let’s briefly look at how Intel has tailored its silicon and optimized its software for different network locations to support edge KPIs.

Intel® Atom® C3000 System on a Chip (SoC) with integrated Ethernet and Security IP delivers a compelling option for small form factor deployments with space, power and cost constraints, such as IoT gateways, virtual CPE (vCPE) and uCPE.

“Snow Ridge” is the code name for a forthcoming 10nm-based network SoC developed specifically for 5G wireless access and edge computing.

Intel Xeon D-1600 and Xeon D-2100 series SoCs with high performance Xeon cores and integrated Ethernet and Security IP provide a great scale-up option for dense form factor requirements such as NGCO and on-premises deployments in industrial and factory automation use cases, among many others.

2nd Generation Intel Xeon Scalable Processors with high performance Xeon cores are deployed as the network optimized NFV servers for 5G infrastructure and high-performance edge applications to support performance, QoS and security. Network or “N” SKUs in the Intel Xeon Scalable family are designed with the network in mind. They are fine-tuned to deliver up to 50 percent performance increase for the most commonly deployed virtual network functions as compared to the first-generation Intel Xeon Scalable platforms.1

The Xeon Scalable “N” SKUs represent some of the optimizations targeting the unique characteristics of network and edge workloads. Other hardware and software enhancements include:

  • Intel Speed Select Technology complements “N” SKUs to improve overall network workload performance by boosting base frequency on critical cores and removing bottlenecks.
  • Intel Optane™ DC persistent memory delivers up to 33 percent better cost/performance density while supporting new use cases that require persistent memory.1
  • Intel® RDT provides fine-grained control of platform resources, such as cache and memory, to avoid the “noisy neighbor” effects of NFV and deliver QoS for real-time edge applications (a minimal sketch follows this list).
  • Optimized data plane software, such as Data Plane Development Kit (DPDK) with its device abstraction improvements for Cloud Native applications, unlocks improved networking performance.
  • Intel Deep Learning Boost technology with Vector Neural Network Instructions (VNNI) supports high performance artificial intelligence (AI) inferencing at the edge for content delivery network (CDN), video analytics and other visual cloud workloads.
  • Intel Ethernet 800 Series with Application Direct Queuing (ADQ), enhanced Dynamic Device Provisioning (DDP) and QoS scheduler support more efficient data traffic flows at the edge.
  • Intel QuickAssist Technology delivers acceleration of cryptography, compression and public key encryption (PKE). Combined with Intel Trusted Execution Technology (TXT), it helps assure end-to-end security of data through the network – data in use, data stored on the platform and data in motion.
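To make the resource-control point above concrete, here is a minimal sketch of how Intel RDT cache allocation is typically exercised on Linux through the kernel’s resctrl interface (run as root, with resctrl already mounted). The group name, cache-way bitmask and process ID are illustrative assumptions; the number of available ways and the cache domain IDs depend on the platform.

```python
# Minimal sketch: isolating L3 cache ways for a latency-sensitive edge service
# via the Linux resctrl interface, which exposes Intel RDT cache allocation.
# Assumes resctrl is mounted: mount -t resctrl resctrl /sys/fs/resctrl
import os

RESCTRL = "/sys/fs/resctrl"
GROUP = os.path.join(RESCTRL, "edge-rt")   # hypothetical resource group name

os.makedirs(GROUP, exist_ok=True)

# Give this group four dedicated L3 cache ways on cache domain 0.
# The bitmask (0x00f) and the domain ID are platform-specific examples.
with open(os.path.join(GROUP, "schemata"), "w") as f:
    f.write("L3:0=0x00f\n")

# Assign the packet-processing task (PID 12345 is illustrative) to the group
# so its working set is shielded from "noisy neighbor" workloads.
with open(os.path.join(GROUP, "tasks"), "w") as f:
    f.write("12345\n")
```

The same interface also carries memory bandwidth allocation (the MB: lines in schemata), which is how the fine-grained cache and memory control described above maps onto the platform.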

But optimizing silicon alone doesn’t deliver a complete, ready-to-deploy solution. It takes a village – or rather an ecosystem – that can rapidly consume technology and partner to create innovative solutions that solve industry problems and customer needs. The Intel Network Builders ecosystem is delivering commercialized, multi-vendor network transformation solutions. Intel Select Solutions offers verified, reliable infrastructure configurations with partners that target specific customer workload demands around NFV infrastructure, uCPE, visual cloud delivery networks and now media analytics. Intel FlexRAN is another example of a reference architecture being used to support multi-vendor transformation projects at the network edge. We also believe that industry efforts such as the GSMA-led Common NFVi Telco Taskforce (CNTT), the OPNFV Verification Program (OVP), Scalable Video Technology for AV1 (SVT-AV1) and DPDK are vital to tackling the enormous IT and operational challenges of edge computing and 5G.

Making edge easier with OpenNESS, Open Visual Cloud and OpenVINO

Intel has a long history of leadership and investments in open software initiatives, such as the Data Plane Development Kit (DPDK), FD.io, OPNFV, OVS-DPDK, Tungsten Fabric, OpenStack, Kubernetes and ONAP. More recently, we have been driving software disaggregation and hardware abstraction in a concerted push toward cloud native network applications. We are working on bringing DPDK-style optimizations into the Linux kernel via AF_XDP, a socket interface built on XDP (eXpress Data Path) that provides a high-performance packet processing path in the kernel. These enhancements will drive lower latency, improve throughput and deliver QoS for edge applications.

In order to make the edge a new playground for innovation, we need to focus on delivering tools, frameworks and libraries to the developer ecosystem and a new generation of cloud developers. With this in mind, Intel has recently announced initiatives focused on service creation and deployment: OpenNESS, Open Visual Cloud and OpenVINO.

OpenNESS

OpenNESS is an open source reference toolkit designed to foster application innovation, open collaboration and portability of applications across the cloud, the network and the on-premises enterprise edge. It is the easy button for cloud and IoT developers to engage with a worldwide ecosystem of hardware, software and solution integrators to develop new 5G and edge use cases and services. OpenNESS helps securely onboard and manage new edge services in on-premises and network edge environments.

Open Visual Cloud

Open Visual Cloud is an open source project that offers a set of pre-defined reference pipelines for various target visual cloud use cases. These reference pipelines are based on optimized open source ingredients across four core building blocks (encode, decode, inference and render), which are used to deliver visual cloud services.
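As an illustration of how those four building blocks compose, here is a minimal Python sketch of a decode–inference–encode loop. It uses OpenCV’s video I/O and a placeholder inference step as stand-ins for the optimized media and analytics ingredients that the Open Visual Cloud pipelines actually ship; the file names are assumptions.

```python
# Sketch of the decode -> inference -> encode flow behind a visual cloud
# pipeline. OpenCV stands in for the optimized media ingredients, and
# run_inference() is a placeholder for the analytics stage (see the OpenVINO
# example below for what that stage can look like).
import cv2

def run_inference(frame):
    """Placeholder analytics step; a real pipeline would invoke a model here."""
    return frame

cap = cv2.VideoCapture("input.mp4")                  # decode (assumed input file)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("annotated.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))  # encode

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(run_inference(frame))                  # inference + render/encode

cap.release()
out.release()
```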

OpenVINO™ toolkit

Short for Open Visual Inference and Neural Network Optimization, the OpenVINO toolkit provides developers with improved neural network performance on a variety of Intel processors and helps them further unlock cost-effective, real-time vision applications. The toolkit enables deep learning inference and easy heterogeneous execution across multiple Intel platforms (CPU, Intel Processor Graphics), providing implementations from cloud architectures to edge devices. This open source distribution gives the developer community the flexibility to innovate on deep learning and AI solutions. OpenVINO supports a comprehensive set of deep learning models out of the box: some 40 public models and about 40 Intel pre-trained models are supported through the Intel Model Zoo.
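For a sense of how the toolkit is driven, here is a minimal sketch using the OpenVINO Python runtime available in recent releases (installable with pip install openvino). The model file name, input shape and device are illustrative assumptions; a real application would load an IR or ONNX model converted for its own use case.

```python
# Minimal sketch: loading a model and running CPU inference with the
# OpenVINO Python runtime. "model.xml" and the 1x3x224x224 input shape are
# placeholders for an actual converted model.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")             # IR (or ONNX) model file
compiled = core.compile_model(model, "CPU")      # other devices, e.g. "GPU", also work

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in for a preprocessed frame
results = compiled([frame])                      # synchronous inference
output = results[compiled.output(0)]             # first output tensor
print(output.shape)
```

On 2nd Generation Intel Xeon Scalable processors, this CPU path is where Intel Deep Learning Boost (VNNI) can be picked up automatically for int8-quantized models.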

As you can imagine, the combination of these three innovations—OpenNESS, Open Visual Cloud and OpenVINO—is a powerful foundation for easy creation, deployment and management of new Edge applications. I encourage you to download and work with the tools and contribute with your feedback.

What’s next for the Edge Revolution

Over the next decade the edge will be the epicenter of innovation, creating new ecosystems and applications and disrupting industries in ways we could not previously imagine. I believe there is a wonderful opportunity for the edge to evolve to support multi-tenancy, reduce total cost of ownership and support KPIs for new-age services. We can shape the edge to be discoverable (advertising the attributes it supports), decentralized (from a decision-making perspective) and even collaborative (to support mobility considerations). To make edge computing pervasive across millions of deployments and locations, we will need open collaboration across a robust developer ecosystem, easy-to-use software infrastructure and significant hardware investments that stay a generation ahead of network, application and customer requirements.

The industry will also need to address questions about edge discovery, distributed computing, heterogeneous computing and decentralization, among others. Intel continues to work with academia to drive longer-term research on edge computing, and we are collaborating with our industry partners on innovations spanning communication, cloud, IoT and AI technologies to accelerate the evolving world of edge computing.

Written by Rajesh Gadiyar, Vice President, Intel Corp, and CTO, Network and Custom Logic Group.


1 For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.