Green memories accelerate ROI for data centers

Sylvie Kadivar, director, DRAM strategic marketing, Samsung Semiconductor, Inc. | Feb. 15, 2011
Memory components in today's servers consume hundreds of thousands of kWh. By adopting more energy-efficient components in optimized server architectures, such as lower-voltage DRAM and advanced solid-state drives (SSDs), data centers can drastically reduce power consumption and the associated energy costs.

FRAMINGHAM, 15 FEBRUARY 2011 - This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

The U.S. Environmental Protection Agency's most recent report on server and data center energy efficiency (2007) shows U.S. data center power consumption surging from 61 billion kWh in 2006 to about 100 billion kWh in 2011. Data center power requirements are expected to increase by as much as 20% per year for the next couple of years.
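
As a quick sanity check on those figures, the implied growth rate can be worked out directly. The sketch below uses only the numbers cited above; everything else is straightforward arithmetic.

```python
# Back-of-the-envelope check on the EPA projection cited above.
# Figures from the article: 61 billion kWh in 2006, ~100 billion kWh in 2011.
consumption_2006 = 61e9   # kWh
consumption_2011 = 100e9  # kWh (projected)
years = 2011 - 2006

# Implied compound annual growth rate over the five-year span
cagr = (consumption_2011 / consumption_2006) ** (1 / years) - 1
print(f"Implied annual growth 2006-2011: {cagr:.1%}")  # roughly 10%

# The article's forward-looking 20%/year estimate compounds much faster
projected = consumption_2011 * (1 + 0.20) ** 2
print(f"Two more years at 20%/year: {projected / 1e9:.0f} billion kWh")
```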

Consequently, data center power costs have skyrocketed because, for every watt of power consumed, between 2 and 2.5W is needed to cool it. IDC analysis shows that server energy expense accounted for 75% of total hardware cost in 2009. The firm also found that the expense of powering and cooling servers has jumped 31% in the last five years.
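
To see how that cooling overhead compounds the bill, consider a rough per-server cost sketch. The 2W-per-watt cooling multiplier is the low end of the range cited above; the 500W server draw and $0.10/kWh electricity rate are illustrative assumptions, not figures from this article.

```python
# Rough cost model using the cooling overhead cited in the article
# (2 to 2.5 W of cooling per watt of IT load). The server draw and
# electricity rate below are illustrative assumptions only.
it_power_w = 500          # assumed average draw of one server, in watts
cooling_per_it_watt = 2.0 # low end of the article's 2-2.5 W range
electricity_rate = 0.10   # assumed $/kWh

total_power_w = it_power_w * (1 + cooling_per_it_watt)
annual_kwh = total_power_w * 24 * 365 / 1000
annual_cost = annual_kwh * electricity_rate

print(f"Facility draw per server: {total_power_w:.0f} W")
print(f"Annual energy per server: {annual_kwh:,.0f} kWh")
print(f"Annual energy cost per server: ${annual_cost:,.2f}")
```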

Various studies show that memory has become one of the biggest power consumers in the server space, which adds substantially to energy costs, including cooling. In addition, virtualization is adding to the memory challenge as it grows in popularity as a way to drive down data center costs. The move to virtualization translates into more memory per server, increasing the use of DRAM and raising the total density per storage system.

To confront the power dilemma, regulatory agencies such as the EPA, along with environmentally focused technology solutions providers, have set aggressive targets.

Green memory and optimized servers

To meet the challenge, buyers are tapping more energy-efficient servers that can save hundreds of thousands of dollars per year. Recognizing the need for greater energy efficiency, server manufacturers have been optimizing their architectures, including the use of power-saving "green" memory alternatives.

To better understand server power, consider a 48GB server as an example. In that configuration, second-generation DDR2 memory accounts for about 26% of total power consumption, compared to 20% for the CPU.
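
Translating those shares into watts makes the opportunity concrete. In the sketch below, the 26% and 20% shares come from the breakdown above, while the 400W total server draw and the 30% reduction from lower-voltage DRAM are purely illustrative assumptions.

```python
# Translating the article's power breakdown into watts for a single
# 48 GB server. The 26% (memory) and 20% (CPU) shares come from the
# article; the 400 W total draw and the 30% memory-power reduction
# for lower-voltage DRAM are illustrative assumptions only.
total_server_w = 400.0      # assumed total server draw
memory_share   = 0.26       # from the article (DDR2 in a 48 GB server)
cpu_share      = 0.20       # from the article

memory_w = total_server_w * memory_share   # ~104 W
cpu_w    = total_server_w * cpu_share      # ~80 W

assumed_memory_reduction = 0.30            # hypothetical savings from lower-voltage DRAM
memory_savings_w = memory_w * assumed_memory_reduction

print(f"Memory draw: {memory_w:.0f} W, CPU draw: {cpu_w:.0f} W")
print(f"Hypothetical memory savings: {memory_savings_w:.0f} W per server")
```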

While most server designers are exploiting multi-core CPUs to improve processing per watt, many are also migrating to new-generation memories to further slash system power consumption. To that end, Hewlett-Packard, Dell, IBM, Fujitsu and other OEMs are working closely with memory suppliers to tap the benefits of green memory architectures and technologies, at ultra-low voltages and high densities, to deliver a new class of servers with dramatically lower power consumption. In so doing, they are decreasing the corresponding cooling costs, lowering total cost of ownership (TCO) while accelerating return on investment (ROI).
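
A simple payback calculation shows how such savings can accelerate ROI. The cooling multiplier below is the low end of the 2-2.5W figure cited earlier; the per-server memory savings, electricity rate, fleet size and DIMM price premium are hypothetical placeholders used only to illustrate the arithmetic.

```python
# Simple TCO/ROI sketch tying the earlier figures together. The 2x
# cooling multiplier comes from the article; the watt savings,
# electricity rate, fleet size and price premium are hypothetical.
memory_savings_w   = 30      # assumed per-server savings from green memory
cooling_multiplier = 2.0     # low end of the article's 2-2.5 W per watt
electricity_rate   = 0.10    # assumed $/kWh
servers            = 1000    # assumed fleet size
premium_per_server = 50.0    # assumed extra cost of low-voltage DIMMs

facility_savings_w = memory_savings_w * (1 + cooling_multiplier)
annual_kwh_saved   = facility_savings_w * 24 * 365 / 1000 * servers
annual_savings     = annual_kwh_saved * electricity_rate
payback_years      = (premium_per_server * servers) / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_years:.1f} years")
```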
