
BLOG: Your data centre is too cool

Paul Venezia | Dec. 4, 2013
With modern gear capable of operating at warmer temps than in the past, a freezing data center is no longer necessary

One of the eternal concerns for any data center is cooling the mass of metal that makes a room a data center. Over the years we've seen a significant decrease in the power consumption and heat generation of a general-purpose server, as well as an order-of-magnitude decrease in those variables from a per-server-instance standpoint. Where once stood 2U servers with several 3.5-inch disks running a single server instance, you'll now find two 1U servers with 2.5-inch drives or no disk at all, running maybe 30 server instances.
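To make that per-instance math concrete, here is a minimal sketch using illustrative wattage figures (the numbers are assumptions, not measurements from the article), comparing watts per logical server instance before and after consolidation:

```python
# Illustrative consolidation math (hypothetical figures, not from the article):
# average power draw attributable to one logical server instance.

def watts_per_instance(server_watts: float, servers: int, instances: int) -> float:
    """Total hardware draw divided across the logical instances it hosts."""
    return (server_watts * servers) / instances

# Then: one 2U server with several 3.5-inch disks, hosting a single instance.
legacy = watts_per_instance(server_watts=450, servers=1, instances=1)

# Now: two 1U servers with 2.5-inch drives (or diskless), hosting ~30 instances.
modern = watts_per_instance(server_watts=300, servers=2, instances=30)

print(f"legacy: {legacy:.0f} W/instance, modern: {modern:.0f} W/instance")
# With these assumed figures, per-instance draw falls from 450 W to 20 W --
# the order-of-magnitude drop described above.
```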

But the fact remains that we're also seeing a proliferation of logical server instances. Comparing a data center of five years ago with the same infrastructure today should show fewer physical servers, but that's not guaranteed.

In the meantime, costs for power and cooling have not been stagnant. Power consumption is still, and likely always will be, a source of pain in the data center budget. I can recall a time in the engineering labs at Compaq when the monthly power bills for the data center-sized labs ran into the hundreds of thousands of dollars, and that was many moons ago.

Data center power draw comes from two main sources: the hardware (servers, storage, and networking) and the cooling systems. The larger and hotter the metal, the more power needed to cool it, to exhaust the hot air, and to maintain suitable humidity. There are many ways to combat the laws of physics and maintain reasonable intake temperatures, including water-cooled racks and in-row cooling units that bring cold air where it's needed most: at the server inlet.
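To see how cooling compounds the hardware draw, here is a rough sketch treating cooling as an overhead fraction on top of the IT load; the load and overhead factors are assumptions chosen only to illustrate the relationship, not figures from the article:

```python
# Rough facility-power estimate (assumed numbers): total draw is the IT load
# plus the power spent on chillers, air handlers, and humidity control needed
# to remove the heat that load produces.

def facility_power_kw(it_load_kw: float, cooling_overhead: float) -> float:
    """Total facility power in kW for a given IT load.

    cooling_overhead is the cooling/air-handling power expressed as a
    fraction of the IT load (an assumed parameter, not a measured value).
    """
    return it_load_kw * (1.0 + cooling_overhead)

it_load = 200.0  # kW of servers, storage, and networking (assumed)

# A contained hot/cold-aisle or in-row design might spend less on cooling than
# room-level air conditioning; both overhead factors here are assumptions.
print(facility_power_kw(it_load, cooling_overhead=0.8))  # 360.0 kW, room-level AC
print(facility_power_kw(it_load, cooling_overhead=0.4))  # 280.0 kW, contained/in-row
```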

These methods aren't for general use, however. They generally require careful hot- and cold-aisle designs, and while they can reduce overall power and cooling bills, they can also cost more up front. Perhaps surprisingly, these designs are also very effective in smaller builds, where a build of, say, eight racks can be cooled by only two units.

These units can be water-cooled, with a chilled-water unit mounted on the roof, or behave like normal air conditioners, using plenum space for exhaust and intake. Generally speaking, the water-cooled units will be a better long-term bet: building construction considerations and potential plenum blockages can make the air units a challenge to run. Your mileage may vary depending on rack density and the actual equipment present, but in-row cooling definitely has a place in the data center.

And of course, we have the massive AC units bolted to the walls of the room or mounted on the roof, pumping out 68-degree Fahrenheit (20-degree Celsius) air nonstop, either through dedicated ducting or through a raised floor, while pulling in hot air from the room. This is the traditional method. However, the larger issue today may not be what kind of cooling system to use, but at what temperature the data center should operate.

 
