Data Center Fabric

Whenever I see the word fabric, I immediately think of cloth, which takes my mind to "helping" pick out curtains, which puts me in imminent boredom mode. Pardon the offense to those of you who actually know what color curtains you own, but there are some very cool things happening in data center communication.

Today the boxes in the data center (servers, storage, switches, ...) communicate over some combination of media and protocols. While some protocols have become less common (DECnet, Token Ring), there is still a lot of InfiniBand, Fibre Channel, and Ethernet out there. Guess what: Ethernet is going to win. Ok, that was an unsupported prognostication, but Ethernet has won in every other arena it has entered.

Ethernet is what I really want to talk about today. A series of changes is happening that allows Ethernet to be cast in the role of data center fabric. The first is simple throughput: 10GbE has the capacity to support the needs of detached storage. What really makes this possible is a QoS feature called "priority pause" (priority-based flow control). This extension to the Ethernet standard enables Ethernet to support QoS for differentiated services and to minimize or eliminate packet drops.
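To make that concrete, here is a rough Python sketch of what a priority pause frame looks like on the wire. The field layout follows IEEE 802.1Qbb as I understand it; the constants and helper names are mine, so treat this as an illustration rather than a reference implementation.

```python
import struct

# Sketch of an IEEE 802.1Qbb priority-based pause (PFC) frame.
# Unlike classic 802.3x PAUSE, which halts the entire link, PFC
# pauses individual priority classes, so storage traffic can be
# made lossless while other traffic classes keep flowing.

PFC_DEST_MAC = bytes.fromhex("0180C2000001")  # MAC control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101  # classic PAUSE uses opcode 0x0001

def build_pfc_frame(src_mac: bytes, pause_times: list) -> bytes:
    """Build a PFC frame covering the 8 priority classes.

    pause_times[i] is the pause duration (in 512-bit-time quanta)
    for priority class i; 0 means "do not pause this class".
    """
    assert len(pause_times) == 8
    # Bit i of the class-enable vector marks pause_times[i] as valid.
    enable_vector = sum(1 << i for i, t in enumerate(pause_times) if t > 0)
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *pause_times)
    frame = PFC_DEST_MAC + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")  # pad to the minimum Ethernet frame size

# Pause priority 3 (commonly assigned to storage traffic) for the
# maximum duration, leaving the other seven classes untouched.
frame = build_pfc_frame(bytes.fromhex("001B21AABBCC"), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
print(frame.hex())
```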

This new "Ethernet" enables rich SOE ( Storage over Ethernet) beyond iSCSI to FCOE( fiber channel over Ethernet). Intel has open[-sourced|] FCOE software, and the network community is actively discussing the future of Fiber Channel.

Consolidating on 10GbE reduces required port counts, and a single protocol reduces server hardware and switch infrastructure. All of this saves energy and simplifies data center wire management. These are good things. The extensions to the Ethernet specs were the result of collaboration between Intel and other industry leaders. This new spec should make it simpler to choose which curtains will go best in the data center.
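To put rough numbers on the port-count claim, here is a quick back-of-envelope calculation; every count in it is hypothetical.

```python
# Back-of-envelope port-count math for consolidating on 10GbE.
# All server counts and per-server port counts are made-up examples.

servers = 100
before = {  # a typical discrete setup, per server
    "1GbE NIC ports": 4,
    "Fibre Channel HBA ports": 2,
}
after = {"10GbE converged ports": 2}  # a redundant pair carrying LAN + storage

ports_before = servers * sum(before.values())
ports_after = servers * sum(after.values())

print(f"Switch ports needed: {ports_before} -> {ports_after}")
print(f"Cables eliminated: {ports_before - ports_after}")
```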

Intel, the leader in the add-on server NIC business, has great products available in the 10GbE NIC space. Intel's NICs, when used on an Intel-based platform, also support VMDq, part of Intel's "Virtualization Technology for Connectivity". At VMworld, Intel demonstrated that VMDq technology in Intel network adapters boosted maximum throughput on a 10GbE virtualized connection from 4Gb/s to ~9Gb/s, nearly the theoretical maximum.
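VMDq is easier to picture with a toy model. The sketch below mimics what the hardware does: program a filter per VM MAC address, and let the NIC drop each incoming packet directly onto the owning VM's receive queue instead of making the hypervisor sort every packet in software. All names and structure here are illustrative, not Intel's actual API.

```python
from collections import defaultdict

class VmdqNic:
    """Toy model of VMDq-style packet sorting in the NIC."""

    def __init__(self):
        self.queues = defaultdict(list)  # one receive queue per VM
        self.mac_to_vm = {}              # layer-2 filter table

    def assign_queue(self, mac: str, vm_id: str) -> None:
        """Program a hardware filter mapping a VM's MAC to its queue."""
        self.mac_to_vm[mac] = vm_id

    def receive(self, dst_mac: str, payload: bytes) -> None:
        """Place an incoming packet directly on the owning VM's queue."""
        vm_id = self.mac_to_vm.get(dst_mac, "default")  # unknown MACs go to the hypervisor
        self.queues[vm_id].append(payload)

nic = VmdqNic()
nic.assign_queue("00:16:3e:00:00:01", "vm1")
nic.assign_queue("00:16:3e:00:00:02", "vm2")
nic.receive("00:16:3e:00:00:01", b"packet for vm1")
nic.receive("00:16:3e:00:00:02", b"packet for vm2")
print({vm: len(q) for vm, q in nic.queues.items()})
```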

You can buy the bits today! There are about 30 different vendors with products in the 10GbE space. Clearly this is a ripe area for innovation and entry. The question remains: glass or copper, optical or electrical? There are pros and cons for each, and both are supported by this next-generation Ethernet. I would love to hear which you are choosing and why.