Tackling the Dynamic Service Chaining Challenge of NFV/SDN Networks with Wind River and Intel

As part of the launch of the Intel® Open Network Platform Server Reference Design, we’ve asked key development partners to blog on their work with Intel on this reference architecture and their views on network transformation.  Today’s blog is by Craig Griffin of Wind River.

NFV is great. SDN is great. But there’s still work to be done to implement both of these paradigm-shifting technologies in a way that matches all of the capabilities of legacy, appliance-based networks.

One issue we addressed last week at the Intel Developer Forum Fall 2014 was dynamic service chaining: steering packets through virtualized network applications. Wind River demonstrated this capability in our booth, and I discussed the concept in a lecture session alongside Brian Skerry of Intel.

Making Packets Go Where They Don’t Want To Go

At its most basic, service chaining is translating a company’s networking policies into a data flow that takes packets through designated applications (services) before they are switched to their destination.

Firewalling, load balancing, VPNs, IPS/IDS, and deep packet inspection are all common services that fit this model. In a legacy network, service chaining was accomplished by deploying appliances in the data path between the WAN and the LAN switch. While this architecture is easy to build, it is expensive, and it can limit overall throughput to that of the slowest appliance in the chain.
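Conceptually, a service chain is just an ordered list of services that a packet must traverse before it is switched onward. The following is a minimal sketch of that idea; the service names and packet fields are hypothetical, purely for illustration, and not part of any product described here.

```python
# Hypothetical sketch: a service chain modeled as an ordered list of
# functions. Service names and packet fields are illustrative only.

def firewall(packet):
    # Drop packets to a blocked port; pass everything else through.
    if packet["dst_port"] == 23:
        return None
    return packet

def dpi(packet):
    # Mark the packet as having passed deep packet inspection.
    packet["inspected"] = True
    return packet

def load_balancer(packet):
    # Pick one of two backends by hashing the source address.
    packet["backend"] = hash(packet["src"]) % 2
    return packet

# The policy "firewall, then inspect, then balance" as an ordered chain.
CHAIN = [firewall, dpi, load_balancer]

def apply_chain(packet, chain=CHAIN):
    """Pass the packet through each service in order; None means dropped."""
    for service in chain:
        packet = service(packet)
        if packet is None:
            return None
    return packet
```

The point of the abstraction is that the policy lives in the ordering of the chain, not in the physical placement of boxes, which is exactly what NFV makes possible and what makes the routing problem harder.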

With NFV, however, services are now applications running in virtual machines (VMs) on an Intel® architecture-based server. This raises several challenges, starting with the fact that virtual appliances are not in a fixed physical location and may not even be addressable (as with bump-in-the-wire appliances). There is also no built-in mechanism to indicate whether a packet has already been processed by a given application and is ready to move on. Those are just some of the packet flow challenges; more issues arise at every layer of the protocol stack.
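One way to address the "has this packet been processed yet?" problem is to carry chain state with the packet itself, an idea later standardized by the IETF as the Network Service Header (NSH). The sketch below illustrates the principle only; the path table and field names are assumptions for illustration, not the demo's mechanism.

```python
# Hypothetical sketch of a service-header approach (cf. the IETF Network
# Service Header): each packet carries a service path ID and an index, so
# a forwarder always knows which hop in the chain comes next.

# Illustrative path table: path 1 means firewall -> IDS -> router.
PATHS = {
    1: ["firewall", "ids", "router"],
}

def next_hop(header):
    """Return the next service for a packet, or None when the chain is done."""
    path = PATHS[header["path_id"]]
    if header["index"] >= len(path):
        return None
    return path[header["index"]]

def after_service(header):
    """Each service advances the index before re-injecting the packet."""
    header["index"] += 1
    return header
```

Because the position in the chain travels with the packet, the forwarding decision no longer depends on where a service physically sits, which is precisely what virtualized, movable appliances require.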

Defining Service Chaining When a VM Moves

Our dynamic service chaining demonstration at IDF is not a product, nor is it the only way to solve the problem. In keeping with the ecosystem and open source tool set ethos of the Intel® Open Network Platform Server Reference Architecture, we showed how the tools and technology of Wind River and Intel could be combined to provide a service chaining solution.

In the demo, we built two Intel® Xeon® processor-powered NFV servers running a range of virtual applications, including VPN, antivirus, virtual routing, and WAN optimization. The operating system foundation of the servers was Wind River Open Virtualization Profile combined with OpenStack compute-layer software. The servers also used Open vSwitch accelerated with the Intel® Data Plane Development Kit (DPDK) for switching between virtual machines.

Other technology in the demonstration included the Intel® Communications Chipset 89XX devices for encryption / decryption acceleration and a 40 Gigabit Intel® Ethernet Controller XL710-based converged network adapter.

The demonstration showed traffic flowing through the services on one server before being forwarded to services on another server for further processing. One service was then migrated to the other server, and the demo showed not just virtual machine migration but also how the service chain can be dynamically rerouted so that the packet flow still complies with the network policy.
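The rerouting step can be pictured as a registry that maps each service to the host currently running it: when a service migrates, only the registry changes, and the chain's hops are recomputed from it. This is a minimal sketch of that idea under assumed names ("server-a", "server-b", and the service labels are illustrative, not the demo's actual components).

```python
# Hypothetical sketch of dynamic rerouting: a registry maps each service
# to its current host, so routes recompute automatically after migration.
# Host and service names are illustrative only.

def route(chain, locations):
    """Return (service, host) hops for a chain from the current registry."""
    return [(svc, locations[svc]) for svc in chain]

def migrate(service, new_host, locations):
    """Move a service; subsequent route() calls pick up the new location."""
    locations[service] = new_host
```

A usage pass makes the behavior concrete: with "antivirus" on server-a, the chain routes through server-a; after `migrate("antivirus", "server-b", ...)`, the same chain definition routes through server-b, with no change to the policy itself.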

What’s Next for Service Chaining

Robust service chaining remains a gap in NFV/SDN today. What's needed are service chaining abstractions that can be incorporated into OpenStack and OpenDaylight, so the solution is available to all SDN/NFV implementations and interoperable with any operating system, VM, or network service.

Our goal is to see the SDN / NFV market grow and deliver on its promise of a lower cost, higher performance network infrastructure. We designed our demo to inspire ecosystem members to learn from what we’ve created and take it to the next level.