High-Performing IP Routing to Ethernet with Intel® Omni-Path Architecture

Intel® Omni-Path Architecture (Intel® OPA) can be connected to different types of existing networks, such as Ethernet for legacy storage access. In this article we show how easy it is to implement IP routing to Ethernet and how near-line-rate performance can be achieved using minimal router resources.

Easy to Configure

Setting up an Intel OPA-to-Ethernet IP router is very straightforward and uses many common Linux* commands. For more details, please refer to the Intel® Omni-Path IP and LNET Router Design Guide. In this example, the router is a dual-socket Intel® Xeon® Gold 5117 processor-based server in a 1U chassis with a basic Red Hat Enterprise Linux* 7.5 (RHEL*) distribution installed. It has one 100Gb Intel® Omni-Path Host Fabric Interface (Intel® OP HFI) and one 100Gb Ethernet NIC installed on separate CPU sockets. The complete system configuration is given at the end of the article. An even simpler and potentially less expensive configuration could be a single-socket system with both the Intel OPA HFI and the Ethernet NIC connected to the same CPU.

Figure 1 is a simplified illustration of the configuration. There can be any number of switches, servers, and clients on both networks. The Intel OPA fabric is assigned one IP subnet and the 100GbE network another. IP routing between the two interfaces is implemented simply by enabling IPv4 forwarding on the Intel OPA and Ethernet interfaces on the IP router. Routes on the Intel OPA and Ethernet nodes must be defined to use the router to reach the other network, as shown in Figure 1. The Ethernet network is configured with a 9K "jumbo" MTU. Other performance tunings are summarized at the end of the article.
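As a rough sketch, the configuration steps above might look like the following. The interface names ("ib0", "eth0") and subnets are placeholders for illustration, not taken from the tested system:

```shell
# On the IP router: enable IPv4 forwarding so packets are relayed
# between the Intel OPA and Ethernet interfaces.
sysctl -w net.ipv4.ip_forward=1

# Set the 9K "jumbo" MTU on the Ethernet side of the router.
ip link set dev eth0 mtu 9000

# On each Intel OPA node: reach the Ethernet subnet via the router's
# OPA-side address (10.0.0.1 is a placeholder).
ip route add 192.168.1.0/24 via 10.0.0.1

# On each Ethernet node: reach the OPA subnet via the router's
# Ethernet-side address (192.168.1.1 is likewise a placeholder).
ip route add 10.0.0.0/24 via 192.168.1.1
```

To persist across reboots, the equivalent settings would go in /etc/sysctl.conf and the distribution's network configuration scripts; see the Intel® Omni-Path IP and LNET Router Design Guide for the full procedure.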

Figure 1: Simple schematic for IP routing configuration

High Performance

Taking advantage of an Intel OPA feature enhancement known as Accelerated IPoFabric (AIP), near-100Gbps line-rate performance can be achieved with a simple benchmark test. AIP enhances the OpenFabrics IPoIB driver and makes use of additions to recent operating systems and the TCP/IP stack. (For a full list of supported operating systems, please see the latest Intel OPA Fabric release notes.) In addition, AIP enables a large IPoFabric MTU of 10236 to improve bulk-data efficiency and increase throughput, and, unlike previous implementations of IPoFabric, it scales send and receive processing across multiple CPU cores. Using the popular iperf2 benchmark, close to 100Gbps "line rate" is demonstrated between just one Intel OPA node and one 100GbE node. Sixteen parallel threads are launched between the client and server with the -P 16 option passed to iperf2. The write direction is tested using the Ethernet node as the "server" and the Intel OPA node as the "client"; the read direction is tested using the Ethernet node as the "client" and the Intel OPA node as the "server". The following table shows the measured performance, averaging ten successive runs with the iperf2 tool and all other options at their defaults.

Direction                                    Throughput / Line rate (Gb/s)
Intel® OPA Read (from Ethernet)              96.9 / 100
Intel® OPA Write (to Ethernet)               98.6 / 100
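The benchmark described above can be reproduced with commands along these lines (the server address shown is a placeholder, not from the tested configuration):

```shell
# On the receiving ("server") node:
iperf -s

# On the sending ("client") node: 16 parallel streams (-P 16) to the
# server at the hypothetical address 192.168.1.10, all other options
# at their defaults.
iperf -c 192.168.1.10 -P 16
```

Swapping which node runs the server and which runs the client tests the read and write directions, as described above.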

Router Utilization

During the above benchmarks, only 3.2% of the available memory was used on the router, with an average total CPU utilization of 8%. This suggests that less expensive hardware could be used for the router, since memory and CPU utilization are not a performance bottleneck here. Routers connecting to a large number of clients and handling many simultaneous connections and packet demands would require additional resources.
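The system configuration notes at the end of the article state that CPU utilization was computed from /proc/stat counters. A minimal sketch of that approach (not the exact tooling used in the measurements) is:

```shell
# Sample the aggregate CPU counters from /proc/stat twice, one second
# apart, and report the busy percentage over the interval.
read -r _ user nice system idle iowait irq softirq _ < /proc/stat
t0_busy=$((user + nice + system + irq + softirq))
t0_total=$((t0_busy + idle + iowait))
sleep 1
read -r _ user nice system idle iowait irq softirq _ < /proc/stat
t1_busy=$((user + nice + system + irq + softirq))
t1_total=$((t1_busy + idle + iowait))
pct=$(( 100 * (t1_busy - t0_busy) / (t1_total - t0_total) ))
echo "CPU utilization: ${pct}%"
```

Averaging such samples over the duration of a benchmark run gives the total utilization figure quoted above.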


Summary

Intel OPA can be connected to Ethernet-based networks with high-throughput, easy-to-configure IP routers, which is important for legacy storage access. The IP router in this example uses a basic dual-socket Intel® Xeon® Scalable processor-based server with a standard RHEL release. Only a few settings are required to enable IP routing, and the performance tunings, including the newly released Accelerated IPoFabric (AIP) feature, provide near-line-rate routing from Intel OPA to 100Gb Ethernet. Please explore intel.com/omnipath for customer case studies and more information about the advantages of deploying Intel Omni-Path Architecture for your solutions.

System Configuration & Disclaimers

Server and client are dual-socket Intel® Xeon® Platinum 8170 processor servers; router is a dual-socket Intel® Xeon® Gold 5117 processor server. All nodes have 12x16GB 2666 DDR4 per node. Red Hat Enterprise Linux* Server release 7.5 (Maipo), kernel 3.10.0-862.el7.x86_64. Microcode: 0x2000043; variants 1, 2, and 3 mitigated (load fences, IBRS, IBPB, PTI). Memory utilization captured on router with htop. CPU utilization computed via /proc/stat counters. generic-segmentation-offload and generic-receive-offload disabled for both networks. Other tunings are described in the Intel® Omni-Path Performance Tuning User Guide, version 15.0.
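The generic-segmentation-offload and generic-receive-offload settings mentioned above can be toggled with ethtool. A sketch with assumed interface names (not from the tested configuration):

```shell
# Disable GSO and GRO on both router interfaces; "ib0" and "eth0"
# are placeholder names for the OPA and Ethernet interfaces.
ethtool -K ib0 gso off gro off
ethtool -K eth0 gso off gro off
```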

Intel OPA: Accelerated IPoFabric enabled with 10236 MTU. Intel® Fabric Suite (IFS) with num_sdma=8 and krcvqs=1 on the IP router; 100 Series HFI and 48-port 1U 100 Series Edge switch.

Ethernet: MLNX_OFED_LINUX-4.5- on server/client node; in-box Ethernet drivers on router node. Mellanox Technologies MT27800 Family [ConnectX-5]. RSS enabled with 14 RX rings and the Toeplitz hash function on the router node. SN2700 1U Open Ethernet Switch.

Performance results are based on testing as of May 14 2019 and may not reflect all publicly available security updates. See configuration disclosure for details.

No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.

Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel provides these materials as-is, with no express or implied warranties.

Intel, the Intel logo, Intel Core, Intel Optane, Intel Inside, Pentium, Xeon and others are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.