A Broad View of Data Center Connectivity is Crucial to Long-Term Success

“Without the right kind and the right amount of I/O between the components of a system, all of the impressive feeds and speeds of the individual components don’t amount to more than a pile of silicon and sheet metal.”

– Nicole Hemsoth, The Next Platform

I couldn’t have said it better myself. Looking holistically at the data center is the only way to achieve the needed performance for a variety of workloads, which continue to shift and change. As I wrote back in April, being able to efficiently move data should be as much of a priority for data center managers as processing and storing that data. But we’ve hit a wall across several I/O forms at the network adapter and controller, switch, and interconnect levels. Intel is investing at all three points to solve bottleneck issues and prepare for exponential data growth. We’re at an inflection point and what we do next really matters.

Network Adapters and Controllers

Driven by the speed and latency requirements of hyperscale data centers, Intel has created world-class adapters and network interface controllers (NICs) that meet the most demanding server workloads while providing broad interoperability across a range of media types and Ethernet speeds.

With speeds up to 100GbE, the Intel® Ethernet 800 Series is a perfect example of the advanced capabilities that Intel can deliver. For example:

  • Application Device Queues (ADQ), which filters and steers application traffic into a dedicated set of hardware queues, increasing predictability in application response time, reducing application latency, and improving application throughput.
  • Dynamic Device Personalization (DDP), which allows dynamic reconfiguration of the packet processing pipeline to meet specific protocol use case needs.
  • Support for three NVMe over Fabrics (NVMe-oF) transport protocols, so organizations can choose the one that works best for their environment.
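To give a concrete sense of how queue isolation of the ADQ kind is typically expressed on Linux, here is an illustrative sketch using the standard `tc` utility. The interface name `eth0` and the application port 6379 are assumptions for the example, not values from this post; consult the adapter's documentation for the exact supported options.

```shell
# Split the device's queues into two traffic classes (illustrative layout):
# queues 0-1 carry default traffic, queues 2-5 are dedicated to the application.
tc qdisc add dev eth0 root mqprio num_tc 2 map 0 0 queues 2@0 4@2 hw 1 mode channel

# Attach a classifier hook, then steer the application's flows
# (TCP port 6379 here, purely as an example) into traffic class 1,
# with filtering offloaded to the NIC hardware (skip_sw).
tc qdisc add dev eth0 clsact
tc filter add dev eth0 protocol ip ingress flower ip_proto tcp dst_port 6379 skip_sw hw_tc 1
```

Because the application's packets land only on its dedicated queues, they are not interleaved with unrelated traffic, which is what makes response times more predictable.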

Our team at Intel understands that features that increase NIC intelligence – including emerging SmartNICs – still need to be easy for our customers to adopt and implement. Therefore, we continue to add new capabilities to foundational NICs and adapters, like the Intel Ethernet 800 Series, that can be deployed quickly in organizations of any size. And as SmartNICs emerge from Intel, we will seek to make them broadly deployable as well.

Ethernet Switches
As the industry advances to 200GbE and eventually 400GbE, Ethernet switch silicon will lead the way. The acquisition of Barefoot Networks expands Intel’s portfolio further to address the needs for agile cloud-scale fabrics. Barefoot’s Tofino series of high-performance Ethernet switch application-specific integrated circuits (ASICs) deliver full data-plane programmability with the open and community-owned P4 programming language. The P4 ecosystem is thriving with multiple P4 data-plane applications being created. Plus, these Ethernet switches create opportunities for future silicon photonics integration.
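To give a flavor of what data-plane programmability means in practice, here is a simplified P4_16 sketch of an ingress control that forwards packets by destination MAC address. It is illustrative only: the header and metadata type names are assumptions, and a real Tofino program would also define parsers, deparsers, and the rest of the pipeline.

```
control MyIngress(inout headers_t hdr,
                  inout metadata_t meta,
                  inout standard_metadata_t std_meta) {

    action drop() { mark_to_drop(std_meta); }
    action forward(bit<9> port) { std_meta.egress_spec = port; }

    // The control plane fills this table at run time; what P4 makes
    // programmable is the match-action pipeline structure itself.
    table l2_fwd {
        key = { hdr.ethernet.dstAddr : exact; }
        actions = { forward; drop; }
        default_action = drop();
    }

    apply { l2_fwd.apply(); }
}
```

Because the pipeline is defined in an open language rather than fixed in silicon, new protocols and in-network applications can be deployed without waiting for a new ASIC.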

Interconnects
Interconnect technologies capable of carrying the ever-growing volume of data being created and processed are essential, and it’s clear that optics will be the foundation. Though many challenges remain, Intel is working diligently to reduce the power and size of optical interconnects. We’re already seeing a remarkable expansion of Intel® Silicon Photonics, and as the industry adopts 100GbE products in larger quantities, 400GbE speeds are close at hand.

From silicon to NIC adapters, to switch systems and optimized software solutions, Intel® Omni-Path Architecture (Intel® OPA) is changing fabric economics for demanding HPC and AI environments. My colleague James Erwin recently showed how easy it is to implement IP routing to Ethernet with Intel OPA for legacy storage access. We’ll continue to publish these types of guides on how our end-to-end solutions can quickly and seamlessly be incorporated into data centers so that organizations can realize the true value of their data.

If you’ll be attending The Next I/O Platform on September 24th, come say hello. I’d love to chat with other industry professionals about the future of connectivity, especially about how edge services are already starting to create unique opportunities in routing and management. At the event, I’ll be speaking just after lunch at 1:00pm about why incremental improvements in interconnects aren’t enough for how quickly the industry is changing today. I expect the on-stage interview to be an honest, relevant discussion about the technical challenges large-scale data centers must overcome.

For more information about Intel’s vision for data center connectivity, visit intel.com/connectivity.


About Mike Zeile

Vice President, Data Center Group, and General Manager, High-Performance Fabrics Division, Connectivity Group (CG). Zeile leads the Strategy and Marketing team for CG, which is responsible for developing a DCG-wide strategy for leadership connectivity products, solutions and IP. He joined Intel in 2011 through the company’s acquisition of Fulcrum Microsystems. After the acquisition, he led planning for DCG networking products, initially within the Networking Division and later as a member of the DCG Strategic Planning organization. Before the acquisition, Zeile served as COO of Fulcrum Microsystems, responsible for all operational aspects of the business. He has published a number of papers on networking technology, trends and architectures, and he holds process patents related to networking and the secure storage and delivery of value items over the internet. Zeile holds a bachelor’s degree in business with an emphasis on computer science from UCLA.