Shower Curtains in the Datacenter? – Live from the Blades Systems Insight Event

I've been in Las Vegas this week for the Blades Systems Insight event, talking about data center transformation and data center efficiency (no white tiger sightings...just technology this week in Vegas).  This event draws attendees who are deploying high density compute platforms in their data centers and dealing with the power and cooling challenges that come along with these environments. So I was excited to share some of Intel's thoughts on power and cooling optimization beyond pure system refresh.  If you read the blogs on the Server Room you know plenty about the compelling financial benefits associated with refresh...and if you haven't seen this yet, check out my friend Chris Peters' blog here.

But back to the show and the shower curtains...If you dig a bit deeper into the challenge of data center efficiency, three primary focus areas emerge:

Power: The underlying power cabling and infrastructure coming into your datacenter.  Ultimately you want the most efficient power delivery possible.

Cooling: The HVAC systems, fans, and ducting installed to remove heat from your datacenter and let you avoid thermal environments that make Las Vegas feel chilly.

Compute: Server, network and storage gear that drive business productivity for your organization.  This is why you have datacenters to begin with, so the ultimate goal is to maximize the percentage of power flowing to compute and the productivity you get out of every kW of power within your compute infrastructure (a quick way to put a number on this is sketched just below).
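Since "percentage of power flowing to compute" will come up again, here's a minimal Python sketch of that arithmetic. The kW readings are made-up placeholders; the ratio itself is simply the inverse of PUE (sometimes reported as DCiE), so higher is better.

```python
# Minimal sketch: what share of facility power actually reaches compute?
# The 400 kW / 700 kW readings below are made-up placeholders, not real measurements.

def compute_power_fraction(it_power_kw: float, total_facility_power_kw: float) -> float:
    """Share of total facility power consumed by IT (compute) gear.

    This is the inverse of PUE (sometimes called DCiE); higher is better.
    """
    return it_power_kw / total_facility_power_kw

fraction = compute_power_fraction(it_power_kw=400, total_facility_power_kw=700)
print(f"Power reaching compute: {fraction:.0%}")  # ~57%
print(f"Implied PUE: {1 / fraction:.2f}")         # ~1.75
```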

At the Blades event we were discussing the impact of high density environments on this fragile ecosystem.  High density environments require more power: more than the typical 750W per square foot that an average rack requires, and far more than the 75-100W/sq foot that a typical datacenter facility supports.  They also produce a lot of heat that needs to be dealt with by cooling systems that are often close to their cooling capacity.  So how much density is a good thing for datacenters, and how do we deal with that gap between power delivered and power required?  I'd like to offer a few concepts, but ultimately every datacenter is different, so I'd love to hear from you on how you've dealt with this as well. In this blog I'm going to start with cooling capacity, as there are a lot of options to consider.
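Before diving into those options, here's a rough back-of-the-envelope on the density gap itself. It's a hypothetical Python sketch - the facility capacity, rack power, and rack footprint figures are all assumptions for illustration, not measurements from any particular facility.

```python
# Hypothetical back-of-the-envelope for the density gap described above.
# Every figure here is an assumption for illustration purposes.

facility_capacity_w_per_sqft = 100   # design point of a typical facility (assumed)
rack_power_w = 20_000                # a loaded high density blade rack (assumed)
rack_footprint_sqft = 25             # rack plus service clearance (assumed)

# Floor area whose power/cooling budget this single rack actually consumes
area_needed_sqft = rack_power_w / facility_capacity_w_per_sqft
print(f"One {rack_power_w / 1000:.0f} kW rack consumes the budget of "
      f"{area_needed_sqft:.0f} sq ft of floor space")            # 200 sq ft

# How far the load has to be spread relative to the rack's own footprint
spread_factor = area_needed_sqft / rack_footprint_sqft
print(f"That's roughly {spread_factor:.0f}x the rack's physical footprint")  # ~8x
```

With that picture of the gap in mind, here are the cooling options: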

#1 Warmer datacenters.  ASHRAE recently updated their datacenter temp and humidity recommendations with a range of 18-27 C.  What this means is that server inlet temps can be set higher than what many datacenters are running today...the first steps here are to measure your server inlet temps to get a picture of how your facility is operating, check your manufacturer's warranty spec, and measure your power usage difference when altering the datacenter temp - remember to take before and after readings on your cooling power usage (a minimal sketch of that comparison follows).
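To make the before-and-after step concrete, here's a minimal sketch of the comparison. The inlet temps and cooling power readings are made-up placeholders; the only real numbers are the 18-27 C bounds of the ASHRAE recommended envelope mentioned above, and your server vendor's warranty spec is still the limit that matters.

```python
# Minimal sketch of the before/after comparison described in #1.
# The readings below are made-up placeholders; substitute your own measurements.

baseline = {"inlet_temp_c": 20.0, "cooling_power_kw": 180.0}  # before raising the setpoint
adjusted = {"inlet_temp_c": 25.0, "cooling_power_kw": 150.0}  # after raising the setpoint

# Sanity check against the ASHRAE recommended envelope (18-27 C)
ASHRAE_MIN_C, ASHRAE_MAX_C = 18.0, 27.0
assert ASHRAE_MIN_C <= adjusted["inlet_temp_c"] <= ASHRAE_MAX_C, \
    "New inlet temperature is outside the ASHRAE recommended range"

savings_kw = baseline["cooling_power_kw"] - adjusted["cooling_power_kw"]
savings_pct = savings_kw / baseline["cooling_power_kw"]
print(f"Cooling power reduced by {savings_kw:.0f} kW ({savings_pct:.0%})")
```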

#2 Cool aisle containment: This is a pretty simple concept - placing barriers to control cool air and confine it to the area where servers need it.  Think about this as constructing a type of wall or ceiling around the cool aisle to control air flow.  So what are these walls made of? I've seen them made of plexiglass and plastic sheeting...and this week at the conference I heard about one of the largest banks in America that is experimenting with shower curtains to control air flow and reporting a 15 degree drop in temperature after installation.  Now...last time I checked a shower curtain costs a few bucks, so we're not talking about a major investment to test this in your datacenter.

#3 Ambient air cooling: Even in Las Vegas, datacenters are utilizing outside/filtered ambient air economizers instead of their chillers to deliver cooled air at least part of the year.  The concept is simple - it's like using your furnace's fan setting to cool your house instead of your AC, and in many regions of the country you can take advantage of this much of the year at a fraction of the cost of running a chiller.
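For a first-order feel of how many hours per year free cooling could carry the load in your region, a sketch like this one works. The randomly generated temperature data and the 23 C cutoff are pure assumptions - swap in real hourly weather data and your own facility's limits.

```python
# Hypothetical estimate of how many hours per year an air-side economizer
# could carry the cooling load. Temperature data and cutoff are assumptions.

import random

random.seed(0)
# Stand-in for a year of hourly outside-air temperatures (C); use real weather data here.
outside_temps_c = [random.gauss(18, 9) for _ in range(8760)]

ECONOMIZER_CUTOFF_C = 23.0  # assumed max outside temp at which free cooling suffices

economizer_hours = sum(1 for t in outside_temps_c if t <= ECONOMIZER_CUTOFF_C)
print(f"Economizer-viable hours: {economizer_hours} "
      f"({economizer_hours / 8760:.0%} of the year)")
```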

#4 Liquid cooled cabinets - think of these essentially as a good Sub-Zero for the datacenter, especially applicable to the high density environments that we were focused on at the blade conference.  They enclose a rack of compute equipment and chill it with liquid cooling.  This is a great way to isolate highly dense racks from your datacenter's cooling equation completely, and it works especially well in heterogeneous environments where cooling requirements vary from rack to rack.

I will be back to you on the power and compute vectors; in the meantime I'd love to hear whether your datacenter has implemented any of these approaches and any results you've been able to measure.