The Efficient Planet

The Internet Wears Shorts

How Google workers in summer clothes are solving data centers’ pollution problem.

Google employee Mike Barham swaps out a motherboard at the company’s data center in The Dalles, Oregon.

Photo by Connie Zhou for Google.

In September, the New York Times ran a provocative front-page story about the data centers that power the Internet. “A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness,” the story said. In fact, the data centers that store our YouTube videos, run our Google searches, and process our eBay bids use about 2 percent of all electricity in the nation. The Times reported that, in some data centers, up to 90 percent of this electricity is simply wasted.

The story was no doubt an eye-opener for the 51 percent of Americans who were under the impression that “the cloud” has something to do with the weather. Far from a meteorological phenomenon, the cloud is in fact a massive collection of warehouses jammed with rows and rows of power-sucking machines.

But once you’ve gotten past the fundamental realization that the cloud is a hulking, polluting, physical thing, there’s another story to tell. It’s the one about how some of the more forward-thinking Internet companies are coming up with wildly creative ways to cut down on all that waste. Facebook is building its latest data center at the edge of the Arctic Circle. An industry consortium is sponsoring a “server roundup” and handing out rodeo belt buckles to the Internet company that can take the largest number of energy-leeching comatose servers offline. And Google has saved huge amounts of energy by allowing its data center workers to wear shorts and T-shirts. 

Why shorts and T-shirts? Well, let’s back up. There are two main ways that server farms hog power. One is running the servers. The other is keeping them cool so they don’t overheat and crash. Amazingly, many data centers expend as much—or more—energy on cooling as on computation. The Uptime Institute, a private consortium that tracks data-center industry trends, estimates that a decade ago companies spent an average of 1.5 times as much energy cooling their servers as running them. These days that figure is closer to 80 or 90 percent of the energy spent on computing—a big improvement, though still ugly. But these statistics aren’t drawn only from the big Internet companies whose servers make up what we usually think of as the cloud. Rather, they’re weighed down by smaller and less-Internet-focused firms that run their own data centers, often using antiquated equipment and discredited practices.
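
To see what those ratios mean in practice, here is a back-of-the-envelope sketch in Python. The 1-megawatt server floor is hypothetical, and the calculation ignores overhead other than cooling (lighting, power conversion, and so on); it simply shows how the cooling ratio compounds a facility’s total draw.

```python
# Back-of-the-envelope sketch: how a cooling-to-compute ratio translates into
# total power draw. The 1 MW IT load is hypothetical, and overhead other than
# cooling (lighting, power conversion, etc.) is ignored.

def total_power_kw(it_load_kw, cooling_ratio):
    """Total draw when cooling consumes cooling_ratio times the IT load."""
    return it_load_kw * (1 + cooling_ratio)

it_load = 1_000  # a hypothetical 1-megawatt server floor

print(total_power_kw(it_load, 1.5))   # decade-old average: 2,500 kW
print(total_power_kw(it_load, 0.85))  # today's 80-90 percent: about 1,850 kW
print(total_power_kw(it_load, 0.12))  # Google's reported 12 percent: about 1,120 kW
```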

By contrast, Google’s state-of-the-art data centers use, on average, just 12 percent as much energy to cool their servers as they do to power them. How does Google do it? I spoke with Joe Kava, the company’s vice president of data centers, to find out. He says the company has improved the layout of its data centers by using precise testing to figure out the exact times and locations at which energy is being lost. The fundamental principle is to keep hot air separate from cold air. The more they mix, the more energy you waste. Data centers typically do this by creating “hot aisles” behind the servers and “cold aisles” in front. Kava says Google quickly realized that a lot of heat was escaping the hot aisles when technicians had to go into them to work on the machines. So it began customizing its servers to put all the plugs on the front. Now you can fix them all from the cold aisle.

At Google, though, the term “cold aisle” is something of a misnomer. “There’s a fallacy that data centers have to be like meat lockers—they have to be cold when you walk in,” Kava says. “And it’s absolutely not true. The servers all run perfectly fine at much warmer temperatures.” Instead of keeping its data centers at the traditional 60 to 65 degrees Fahrenheit, Google’s often run at a balmy 80 degrees. That’s why its technicians now work in shorts and T-shirts. Keeping the temperature a little higher has also allowed the company to switch from using giant, power-hungry chillers to an evaporative “free cooling” system that relies on outside air. On the few days when it’s too hot for free cooling, the company can switch to chillers temporarily—or simply shift its computing load to a different data center. Google’s data center in Belgium was among the first anywhere built without any chillers at all.
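
Kava’s description amounts to a simple control policy: use outside air when it’s cool enough, and otherwise either fire up the chillers or move the work somewhere cooler. Here is a toy sketch of that policy in Python, with an assumed 80-degree cutoff and made-up site names and temperatures; it is not Google’s actual control logic.

```python
# Toy sketch of the cooling decision described above (not Google's actual
# control system): prefer outside-air "free cooling," shift load to a cooler
# site when possible, and fall back to chillers otherwise. The threshold and
# the site temperatures are made up.

FREE_COOLING_MAX_F = 80  # assumed cutoff for evaporative/outside-air cooling

def cooling_plan(site_temps_f, home_site):
    """Return a rough cooling plan for home_site, given outside temperatures."""
    if site_temps_f[home_site] <= FREE_COOLING_MAX_F:
        return f"{home_site}: use free cooling"
    cooler = [s for s, t in site_temps_f.items() if t <= FREE_COOLING_MAX_F]
    if cooler:
        return f"{home_site}: shift computing load to {cooler[0]}"
    return f"{home_site}: run the chillers temporarily"

temps = {"site_a": 88, "site_b": 64, "site_c": 75}  # hypothetical readings, in F
print(cooling_plan(temps, "site_a"))  # -> site_a: shift computing load to site_b
```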

Other companies have begun to experiment with data centers in frigid climates. Facebook, for instance, is building its newest facility in Lulea, Sweden, about 70 miles south of the Arctic Circle. The temperature there has not risen above 86 degrees Fahrenheit for more than 24 consecutive hours since 1961, according to the Telegraph. There are limitations to that strategy—data centers work best when they’re located near the large population centers they serve, since distance adds network lag—and cold air brings its own challenges for server maintenance. But the location, on the site of an old paper mill, has the environmental advantage of being on a river that produces plenty of relatively clean hydroelectric power. Google’s center in chilly Hamina, Finland, also draws on hydroelectricity, and it’s cooled with water from the adjacent Baltic Sea.

Big companies like Google and Facebook have an advantage when it comes to data-center efficiency, thanks to deep pockets, economies of scale, and the fact that they often have thousands of servers doing essentially the same thing. Forbes’ Dan Woods argued persuasively that the Times piece’s biggest error was to conflate these big Internet companies’ server farms with the much smaller (and necessarily less efficient) data centers run by the IT departments of non-Internet companies. Much of what the Times called “waste” is actually essential redundancy for companies that can’t afford a total server crash.

Still, experts say less-cutting-edge firms can also clean up their acts significantly. When Google first began reporting its eye-popping power usage effectiveness numbers in 2008, Kava recalls, “people said, ‘Wow, that’s amazing, only Google could do that.’ So then we started publishing a whole series of white papers on how you could apply some very simple and cost-effective techniques to any data center.” You’d think Google might want to keep that information to itself, since efficient data centers can be a significant competitive advantage in the Internet industry. But sharing best practices has actually become quite common among data-center pros. Facebook won the Uptime Institute’s “audacious idea” award earlier this year for its Open Compute Project, which made both its server and data-center designs open-source for all to use and improve upon.
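
Power usage effectiveness, the metric behind those numbers, is simply total facility energy divided by the energy that actually reaches the IT equipment, so a score of 1.0 would mean zero overhead. A minimal sketch with made-up figures:

```python
# Power usage effectiveness (PUE): total facility energy divided by the energy
# delivered to the IT equipment. 1.0 means zero overhead. The sample figures
# below are made up for illustration, not Google's reported numbers.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

print(pue(1_120, 1_000))  # 1.12 -> 12 percent overhead on top of the IT load
print(pue(2_000, 1_000))  # 2.0  -> as much energy on overhead as on computing
```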

Meanwhile, another Uptime Institute competition is aimed squarely at the server side of the efficiency equation. Last year the institute held its first server roundup, a vaguely Wild West-themed (or is it zombie-themed?) drive to encourage companies to pull the plug on outdated and underutilized IT equipment. The prize went to AOL, which took an amazing 9,484 servers out of commission—about a quarter of its fleet—saving about $5 million in utility and maintenance costs and sparing some 20 tons of carbon emissions.

Given how much companies can save, it might seem strange that they’d need the enticement of a rodeo belt buckle to cull their old servers. But the Uptime Institute’s Matt Stansberry, who helps run the competition, tells me one of the industry’s biggest problems is a misalignment of incentives. At four out of every five companies surveyed by the institute, the IT department is not responsible for the power bill from its data centers. In other words, the people with the ability to save power have no direct incentive to do so. And they often have a big incentive not to, Stansberry explains. “As our founder Ken Brill likes to say, nobody ever gets promoted for going around unplugging servers. And if you unplug the wrong server, that’s a career-limiting move.”

One way for smaller companies to address that is to bring in outside consultants. Charles Connolly, CEO of a data-center efficiency consultancy called Noble Vision Group, says not every company can cool its centers with seawater or run them at 80 degrees like Google does. But they can still save huge amounts of energy just by improving the hot-aisle and cold-aisle containment in the facilities they already have. Stansberry of the Uptime Institute says another option for small companies is to outsource their data center operations to the cloud, because cloud companies have tremendous financial incentives to make their operations efficient. “Most of the people doing this at large scale today”—the Googles, Amazons, and eBays—“are doing it right,” he says. “And if they aren’t, they aren’t going to be doing it for long.”