Best Practices for Simplifying Your Network for Cloud Computing

*Please note: a version of this blog first appeared as an Intel industry perspective on Data Center Knowledge as Tips for Simplifying Your Cloud Network.

"Ethernet is the backbone of the Cloud."

Bold statement? Not at all. Any data center, cloud or otherwise, depends on its Ethernet network to allow servers, storage systems, and other devices to talk to each other. No network means no data center. Today, as IT departments prepare to deploy internal cloud environments, it’s important for them to consider how network infrastructure choices will impact their cloud’s ability to meet its service level agreements (SLAs). Terms commonly used to describe cloud computing capabilities, such as agility, flexibility, and scalability, should absolutely apply to the underlying network as well.

With that in mind, I’d like to take a look at some recommendations for simplifying a private cloud network. You can consider this post a sort of CliffsNotes* version of a white paper we completed recently; you’ll get a basic idea of what’s going on, but you’ll need to read the full piece to get all the details. It’s a great paper, and I recommend reading it.

Consolidate Ports and Cables

Most cloud environments are heavily virtualized, and virtualization has been a big driver of increasing server bandwidth needs. Today it’s common to see virtualized servers sporting eight or more Gigabit Ethernet (GbE) ports. That, of course, means a lot of cabling, network adapters, and switch ports. Consolidating the traffic of those GbE connections onto just a couple of 10 Gigabit Ethernet (10GbE) connections simplifies network connectivity while lowering equipment costs, reducing the number of possible failure points, and increasing the total amount of bandwidth available to the server.

[Image: GbE configuration]

With 10 Gigabit Ethernet, you can consolidate this . . .

[Image: 10GbE configuration]

. . . down to this.
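
To put rough numbers on that consolidation, here's a minimal Python sketch. The port counts and the "failure points" heuristic are illustrative assumptions for the sake of comparison, not measurements from any particular deployment.

```python
# Rough comparison of an 8 x 1GbE layout vs. a 2 x 10GbE layout.
# All counts and the "failure points" heuristic are illustrative assumptions.

def summarize(name, ports, gbps_per_port):
    cables = ports                 # one cable per adapter port
    switch_ports = ports           # one switch port per cable
    bandwidth = ports * gbps_per_port
    # Treat each adapter port, cable, and switch port as a potential failure point.
    failure_points = ports + cables + switch_ports
    print(f"{name}: {ports} ports, {cables} cables, "
          f"{bandwidth} Gbps total, ~{failure_points} failure points")

summarize("8 x 1GbE ", ports=8, gbps_per_port=1)
summarize("2 x 10GbE", ports=2, gbps_per_port=10)
```

Fewer ports and cables, fewer things that can break, and more than twice the aggregate bandwidth; that's the whole argument in a dozen lines.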

Converge Data and Storage onto Ethernet Fabrics

10GbE’s support for storage technologies, such as iSCSI and Fibre Channel over Ethernet (FCoE), takes network consolidation a step further by converging storage traffic onto Ethernet. Doing so eliminates the need for storage-specific server adapters and infrastructure equipment. IT organizations can combine LAN and SAN traffic onto a single network or maintain a separate Ethernet-based storage network. Either way, they’ve made it easier and more cost-effective to connect servers to network storage systems, reduced equipment costs, and increased network simplicity.
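
As one concrete illustration of Ethernet-based storage, the sketch below wraps the open-iscsi `iscsiadm` command-line tool from Python to discover iSCSI targets over an ordinary Ethernet connection. It assumes a Linux host with open-iscsi installed; the portal address is purely a placeholder.

```python
# Minimal sketch: discover iSCSI targets over the existing Ethernet network.
# Assumes the open-iscsi tools are installed; the portal address is a placeholder.
import subprocess

PORTAL = "192.0.2.10"  # placeholder IP of the iSCSI storage portal

def discover_targets(portal):
    """Return the iSCSI target names advertised by a SendTargets portal."""
    result = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    )
    # Each output line looks like "192.0.2.10:3260,1 iqn.2003-01.org.example:target0"
    return [line.split()[-1] for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for target in discover_targets(PORTAL):
        print("Found target:", target)
```

The point is that nothing in this path requires a Fibre Channel HBA or a separate fabric; the storage conversation rides the same Ethernet the LAN traffic does.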

Maximize I/O Virtualization Performance and Flexibility

Once you have a 10GbE unified network connecting your cloud resources, you need to make sure you’re using those big pipes effectively. Physical servers can host many virtual machines (VMs), and it’s important to make sure bandwidth is allocated and balanced properly between those VMs. There are different methods for dividing a 10GbE port into smaller, virtual pipes, but they’re not all created equal. Some methods allow these virtual functions to scale and use the available bandwidth of the 10GbE connection as needed, while others assign static bandwidth amounts per virtual function, limiting elasticity and leaving unused capacity in critical situations.
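
To make that difference concrete, here's a small Python sketch contrasting a static split of a 10GbE port across four VMs with a dynamic scheme that lets a busy VM borrow bandwidth its idle neighbors aren't using. The VM demands are made-up numbers, not tied to any particular adapter or hypervisor.

```python
# Illustrative comparison of static vs. dynamic sharing of a 10 Gbps port.
# VM demands (in Gbps) are made-up numbers for the sake of the example.
LINK_GBPS = 10.0
demands = {"vm1": 6.0, "vm2": 0.5, "vm3": 0.5, "vm4": 1.0}

# Static partitioning: each VM gets a fixed 2.5 Gbps slice, used or not.
static_share = LINK_GBPS / len(demands)
static_alloc = {vm: min(d, static_share) for vm, d in demands.items()}

# Dynamic sharing: satisfy every demand up to the link rate, so a busy VM
# can use the capacity that idle VMs leave on the table.
total_demand = sum(demands.values())
scale = min(1.0, LINK_GBPS / total_demand)
dynamic_alloc = {vm: d * scale for vm, d in demands.items()}

print("static :", static_alloc, "-> delivered", sum(static_alloc.values()), "Gbps")
print("dynamic:", dynamic_alloc, "-> delivered", sum(dynamic_alloc.values()), "Gbps")
```

In this example the static scheme caps the busy VM at 2.5 Gbps while 5.5 Gbps of the link sits idle; the dynamic scheme delivers the full demand.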

Enable a Solution That Works with Multiple Hypervisors

It’s likely that most cloud deployments will consist of hardware and software, including hypervisors, from multiple vendors. Different hypervisors take different approaches to I/O virtualization, so it’s important that network solutions optimize I/O performance for each of those software platforms; inconsistent throughput in a heterogeneous environment could create bottlenecks that affect the delivery of services. Intel Virtualization Technology for Connectivity, built into Intel Ethernet server adapters and controllers, provides Virtual Machine Device Queues (VMDq) and support for Single Root I/O Virtualization (SR-IOV) to improve network performance across all major hypervisors.
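
On a Linux host, for instance, you can see how many SR-IOV virtual functions an adapter exposes through sysfs. The sketch below is a minimal example that assumes a Linux system, an SR-IOV capable adapter, and an interface name of `eth0` (a placeholder); actually enabling VFs also requires root privileges and BIOS/hypervisor support.

```python
# Minimal Linux-only sketch: query (and optionally enable) SR-IOV virtual
# functions via sysfs. "eth0" is a placeholder interface name.
from pathlib import Path

IFACE = "eth0"
dev = Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())
current_vfs = int((dev / "sriov_numvfs").read_text())
print(f"{IFACE}: {current_vfs} of {total_vfs} virtual functions enabled")

# To enable, say, 4 VFs (requires root and an SR-IOV capable adapter/platform):
# (dev / "sriov_numvfs").write_text("4")
```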

Utilize Quality of Service for Multi-Tenant Networking

Like a public cloud, a private cloud provides services to many different clients, ranging from internal business units and departments to external customers, and all of them have performance expectations of the cloud. Quality of Service (QoS) helps ensure those expectations are met.

Technologies are available that provide QoS both on the network and within a physical server. QoS between devices on the network is delivered by Data Center Bridging (DCB), a set of standards that defines how bandwidth is allocated to specific traffic classes and how those policies are enforced. For traffic between virtual machines within a server, QoS can be controlled in either hardware or software, depending on the hypervisor. When choosing a network adapter for your server, consider its support for these types of QoS.
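
As a rough illustration of DCB-style bandwidth allocation, the Python sketch below assigns percentage weights to a few traffic classes on a 10GbE link. The class names and percentages are illustrative assumptions; in practice this is configured on the adapter and switch with DCB-capable management tools, not in application code.

```python
# Illustrative DCB/ETS-style bandwidth allocation on a 10 Gbps link.
# Traffic classes and their weights are made-up example values.
LINK_GBPS = 10.0
traffic_classes = {
    "LAN (VM and tenant traffic)": 40,   # percent of link bandwidth
    "Storage (iSCSI / FCoE)":      40,
    "Management / migration":      20,
}

assert sum(traffic_classes.values()) == 100
for name, pct in traffic_classes.items():
    guaranteed = LINK_GBPS * pct / 100
    print(f"{name}: guaranteed {guaranteed:.1f} Gbps ({pct}% of the link)")
# These act as minimum guarantees; a class can typically use more
# of the link when the other classes are idle.
```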

Again, keep in mind that these are high-level summaries of the recommendations. The white paper I’m summarizing goes into much greater detail on the hows and whys behind each one. If you’re thinking about deploying a cloud data center, it’s highly recommended reading.

Follow @IntelEthernet for the latest updates on Intel Ethernet technologies.