The Bandwidth Paradox—How Pokémon GO Pushes Connectivity Boundaries and Data Center Demands

It’s Saturday. No work emails to return, PowerPoint presentations to perfect, or staff evaluations to consider. That’s because you’re an IT data center manager who, removed from the concerns of predictive capacity planning, energy consumption, and cooling monitoring, can finally attend your daughter’s afternoon soccer match.

You look over at your wife. Sure enough, she’s at the sideline facing the field of play—but with her head deep into her iPad. You think, “Why not?” and reach for the large-screen smartphone poking out of your cargo shorts—the one with the 128GB storage capacity to accommodate both your iTunes library and the 50 or so apps you’ve downloaded. A quick game of “Words with Friends”—no one will notice.

That’s when your daughter comes off the field and hands you her iPhone. She wants you to take a Snapchat video of her in action. Because you’re old enough to remember the Polaroid Instant Camera, you can’t fully grasp the purpose of shooting and transmitting video that will self-destruct 10 seconds after the recipient sees it. But you’re family. And so you run along the pitch, framing up your daughter as best you can as she chases down a loose ball, fully aware that 50KB of data will be stored on a server somewhere, albeit for only 30 days.

One hundred fifty million people use Snapchat every day. No matter how ephemeral, that’s no small amount of data. Moreover, the playing field for mobile apps is much broader than MapQuest or Pokémon GO, the free-to-play, location-based augmented reality game for iOS and Android devices, which became so wildly popular this summer that there were reports of it causing some players to walk blindly off a cliff and others into traffic. Pokémon GO currently has more than 75 million users; with the average player logging about 4 hours of play time a week and generating roughly 250MB of data in the process, that works out to about 18.75PB of data per week—assuming of course players are doing this on the weekend. Next year, Statista predicts that mobile app downloads will reach 268.69 billion. To adequately manage this unprecedented level of capacity, data centers need to up their compute, storage, and memory capabilities.
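As a back-of-the-envelope check, the Pokémon GO figures above multiply out as follows (a sketch using the cited numbers, with decimal units: 1PB = 1 billion MB):

```python
# Rough weekly data-volume estimate from the figures cited above:
# 75 million players, each generating ~250MB per week.
players = 75_000_000
mb_per_player_per_week = 250

total_mb = players * mb_per_player_per_week
total_pb = total_mb / 1_000_000_000  # decimal units: 1 PB = 1e9 MB

print(f"{total_pb} PB per week")  # 18.75 PB per week
```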

IN FREEDOM BEGIN RESPONSIBILITIES

Consumers’ insatiable demand for connectivity doesn’t begin and end with mobile apps. This year, Gartner predicts that 6.4 billion connected things will be in use worldwide, up 30% from 2015, and reaching 20.8 billion by 2020.

While there’s an undeniable sense of freedom in being able to sneak in a game of digital Scrabble or to Shazam the song you hear on the radio during the drive from the game to your home, where a smart thermostat is remotely adjusted to the perfect room temperature, the paradox is that all this anywhere, anytime, anything connectivity is not, actually, free. For one, analysts believe the Internet of Things (IoT) will be the single largest driver of IT expansion in larger data centers. The repercussions ripple forth from the rack level, to the data center’s energy efficiency and power consumption, to its impact on the electrical grid and carbon footprint, to the welfare of the planet. According to a recent report by the market research firm IDC, in 3 years, IoT will need 750% more data center capacity in service-provider facilities than it consumes today. By 2020, data centers in the U.S. alone will require six times the electricity of New York City, and produce nearly 100 million metric tons of carbon pollution per year. Clearly, there’s no sign that consumers’ obsession with connectivity will slow down, forcing IT and data center operators to responsibly deal with this bandwidth paradox head-on.

BRAVE NEW WORLD OF CONNECTED PEOPLE, PLACES, AND THINGS

Ironically, most IoT smart devices aren’t in your phone or home; they’re in factories (the Industrial Internet of Things, or IIoT), in retail businesses (beacons paired with mobile apps are being used in stores to monitor customer behavior and deliver advertisements), and in healthcare. They’re increasingly in your car: by 2020, it’s estimated that 90% of cars will be connected to the internet. And they’re where the rubber meets the road: by equipping street lights with sensors and connecting them to the network, cities can dim lights to save energy, bringing them to full brightness only when the sensors detect motion from automobiles.

They’re on your body: Wearables will become a $6 billion market this year, with 171 million devices sold, up from a $2 billion market and just 14 million devices in 2011. They’re in your body, but that’s actually old news: In 2008, Proteus Digital Health created a pill with a tiny sensor inside of it. The sensor transmits data when a patient takes their medication and pairs with a wearable device to inform family members if it’s not taken at the right time.

We’re living in a brave new world of connected people, places, and things, and the expectations for these technologies are high. In today’s 24/7 business and leisure worlds, the ability to transmit, process, and receive data at lightning-fast speeds is critical. Customers expect on-demand access to services and don’t want to wait for pages to load, apps to function, or connected devices to respond.

Businesses’ and consumers’ bottomless appetite for connectivity is impacting how enterprises evaluate solutions that boost efficiency inside the data center to keep up with every Snapchat, Netflix binge session, or Instagram selfie. With always-on connectivity, predicting and planning for unexpected upticks in traffic is a complex process. So when Gartner predicts that another 5.5 million new things will get connected every 24 hours this year, short of packing up the family and moving to the Kingdom of Bhutan for a permanent digital detox, what is the data center manager or IT facility administrator to do?

SOUND DATA CENTER DECISION MAKING REQUIRES GRANULAR DATA

Data center managers require accurate information concerning power consumption, thermals, airflow, and utilization in order to take appropriate actions. Before you can intelligently reduce your data center’s energy consumption, you need to know what its current consumption is. Monitoring energy consumption is the first step to managing a data center’s energy efficiency, and benchmarking helps you understand the existing level. Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCIE) are internationally accepted benchmarking standards developed by the Green Grid consortium that compare the power a data center uses for actual computing functions against the power consumed by lighting, cooling, and other overhead. A data center that operates at a PUE of 1.5 or lower is considered efficient. Identifying where power is lost is the key to making a data center run more efficiently. By addressing inefficiencies at the rack level, you can optimize by row and eventually address the entire data center’s efficiency. This will reduce power consumption and related energy costs—in both operating and capital expenses—and thereby extend the useful life of your data center.
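The two benchmarks reduce to simple ratios. Here’s a minimal sketch (function and variable names are illustrative, not from any particular DCIM product):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by power
    delivered to IT equipment. 1.0 is the ideal; at or below roughly
    1.5 is generally considered efficient."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: IT power as a percentage
    of total facility power (the reciprocal of PUE)."""
    return 100 * it_equipment_kw / total_facility_kw

# Example: a facility drawing 3,000kW overall, 2,000kW of it for IT gear.
print(pue(3000, 2000))   # 1.5
print(dcie(3000, 2000))  # ~66.7%
```

Everything above the IT-power line—cooling, lighting, power conversion losses—shows up as the gap between your PUE and 1.0, which is why the rack-by-rack hunt for lost power pays off.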

Data center infrastructure management (DCIM) is software that converges IT and building facility functions to provide engineers and administrators with a holistic view of a data center’s performance. DCIM provides increased levels of automated control that empower data center managers to receive timely information to manage capacity planning and allocations, as well as cooling efficiency.

Utilizing our own Intel Data Center Manager solution across data centers in multiple countries, Intel was able to receive detailed information about server power characteristics that helped us set fixed-rack power envelopes and enabled us to safely increase server count per rack, which improves data center utilization. Meanwhile, real-time power consumption and thermal data have allowed us to manage data center hotspots and perform power usage planning and forecasting. Leveraging power usage planning and real-time monitoring allows you to fit more servers in your rack, increasing the compute capacity available. More often than not, rack density increases in the range of 25%–50%. With 50% more servers, there would be less pressure to keep up if, say, your new AR game took off and had 75 million active users instead of the 25 or 50 million originally forecasted.
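A simplified version of that capacity math, assuming a fixed per-rack power envelope and a measured (rather than nameplate) per-server draw — the numbers and the headroom figure here are hypothetical, and real DCIM tools derive them from live telemetry:

```python
def servers_per_rack(rack_envelope_w: float, measured_server_w: float,
                     headroom: float = 0.10) -> int:
    """How many servers fit under a fixed rack power envelope while
    keeping a safety headroom (e.g. 10%) for power spikes.
    Illustrative sketch only."""
    usable_w = rack_envelope_w * (1 - headroom)
    return int(usable_w // measured_server_w)

# Nameplate ratings often overstate real draw. Suppose monitoring shows
# a "500W" server actually peaking near 350W under production load:
print(servers_per_rack(10_000, 500))  # 18 servers budgeted on nameplate
print(servers_per_rack(10_000, 350))  # 25 servers with measured draw
```

Going from 18 to 25 servers in this hypothetical rack is a density gain of roughly 39%—squarely inside the 25%–50% range cited above.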

Because heat is a leading cause of downtime in data centers, a lot of energy is expended to keep rooms filled with racks of computers and other heat-producing IT equipment cool. Some experts claim a data center’s infrastructure may be responsible for as much as 50% of the data center’s energy bill, with a sizable portion of that coming from cooling equipment. The energy required by this cooling equipment may come at the expense of actual compute power, so reducing the power fed to a data center’s cooling solution may allow greater utilization of its power resources for actual business.

Operating a data center in a High Temperature Ambient (HTA) environment, otherwise known as running a server “hot,” raises the inlet temperature of a server while staying below component specifications. This can decrease data center chiller energy costs and increase power utilization efficiency. Some DCIM platforms can extract temperature ratings from storage devices, power distribution units, and even networking devices to provide information on cooling and heating. Hence, the transparency these software platforms provide into power and thermal management directly impacts an organization’s bottom line. While it may seem counterintuitive, a 4-degree temperature increase in the average 300-rack, 3MW facility can save 20% in cooling costs.
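To put that 20% figure in context, here’s a rough illustration that combines it with the earlier estimate that cooling can account for up to half of the energy bill — the 50% cooling share is an assumption carried over from above, not a measured value:

```python
# Rough HTA savings illustration for the 3MW facility cited above,
# ASSUMING cooling accounts for ~50% of facility power (per the
# "some experts claim" figure earlier) and that a 4-degree inlet
# temperature increase trims cooling costs by the cited 20%.
facility_kw = 3000
cooling_share = 0.50        # assumed share of power going to cooling
hta_cooling_savings = 0.20  # cited savings from running hotter

cooling_kw = facility_kw * cooling_share
saved_kw = cooling_kw * hta_cooling_savings

print(f"{saved_kw:.0f} kW saved")                   # 300 kW
print(f"{saved_kw / facility_kw:.0%} of facility")  # 10% of facility
```

Under those assumptions, a few degrees of inlet temperature frees up hundreds of kilowatts that can go to compute instead of chillers.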

So, join your son for that game of “League of Legends,” sit down for that Pinterest tutorial with your daughter, “Like” the latest Facebook video posting of your wife’s Zumba class, and buy that Moov Now fitness wearable. If you deploy a DCIM solution at your data center, you’ll probably have a lot more time on your hands. And you won’t want to put on any weight while sitting around all weekend binge-watching House of Cards on your smart TV.

Jeff Klaus

About Jeff Klaus

General Manager of Data Center Solutions at Intel. Internationally respected software executive with experience building data center software licensing, API management and software solution businesses. Jeff has extensive experience building software engineering, product development, marketing, licensing and deployment through a variety of industry verticals globally. Jeff has experience distributing solutions to the top 10 global hardware OEMs, leading global software solution providers and direct to the largest telco and Internet Portal Data Centers around the world. He has built global sales and distribution teams and has experience orchestrating solution selling through indirect solution partners in addition to direct GTM strategies. Jeff is a graduate of Boston College, and also holds an MBA from Boston University.