

Achieving Data Centre Efficiency

Soeren Juul Schroeder, DCIM Expert & Vice President, FNT Software | June 3, 2015
Energy management tools and methods for green IT compliance.

Innovative toolsets can be used to great advantage for gathering this type of usage data, automating the process of discovering and categorising all IT assets. The operator should therefore be offered an intuitive interface for capturing all data points associated with each asset. These can be physical attributes (e.g. size, weight, location) as well as operational data (such as expected and actual power consumption, cooling needs, etc.).
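As a rough sketch of what such a captured record could look like in code, the following Python dataclass bundles physical and operational data points for one asset. The field names and units are illustrative assumptions for this article, not any particular DCIM product's schema.

```python
from dataclasses import dataclass

# Illustrative sketch of an asset record a DCIM discovery tool might capture.
# Field names and units are assumptions, not a specific product's schema.
@dataclass
class ITAsset:
    name: str
    location: str               # e.g. "DC1 / Row 4 / Rack 12 / U20"
    size_u: int                 # physical height in rack units
    weight_kg: float
    expected_power_w: float     # planned or nameplate power draw
    actual_power_w: float       # measured power draw
    cooling_need_btu_hr: float  # cooling requirement

server = ITAsset(
    name="app-srv-017",
    location="DC1 / Row 4 / Rack 12 / U20",
    size_u=2,
    weight_kg=24.5,
    expected_power_w=750.0,
    actual_power_w=420.0,
    cooling_need_btu_hr=1433.0,
)
```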

To determine how long the capturing process for CPU utilisation needs to run, it is necessary to apply knowledge of the type of operations the servers support. For classic manufacturing, for instance (repeating the same work cycles day in and day out), it might be appropriate to focus on a 24-hour cycle, while other lines of business might need longer cycles. If in doubt, run longer cycles.
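A minimal sketch of such a capture loop is shown below, assuming the third-party psutil library for reading CPU utilisation; the 24-hour cycle and 60-second sampling interval are illustrative choices, not recommendations.

```python
import time
import psutil  # third-party: pip install psutil

# Sample average CPU utilisation at a fixed interval over one capture cycle.
# The 24-hour cycle and 60-second interval are illustrative assumptions.
CYCLE_SECONDS = 24 * 60 * 60
SAMPLE_INTERVAL = 60

def capture_cpu_cycle(cycle_seconds=CYCLE_SECONDS, interval=SAMPLE_INTERVAL):
    samples = []
    end = time.time() + cycle_seconds
    while time.time() < end:
        # cpu_percent blocks for `interval` seconds and returns the
        # average utilisation (in percent) over that window
        samples.append(psutil.cpu_percent(interval=interval))
    return samples

# For a quick test, shorten the cycle:
# print(capture_cpu_cycle(cycle_seconds=300, interval=5))
```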

What will likely be found here is wide variation among the IT servers in operation: some – from a power-performance point of view – will be performing well (medium to high average CPU utilisation), some less efficiently (medium to low average CPU utilisation), and others not well at all (very low average CPU utilisation).
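One way to express this bucketing in code is sketched below; the 40% and 10% thresholds are assumptions for illustration, and real cut-offs would depend on the data centre's workload profile.

```python
# Hypothetical thresholds for bucketing servers by average CPU utilisation;
# actual cut-offs would depend on the workload profile of the data centre.
def classify_by_avg_cpu(avg_utilisation_pct: float) -> str:
    if avg_utilisation_pct >= 40.0:
        return "well-performing"    # medium to high average utilisation
    if avg_utilisation_pct >= 10.0:
        return "less-efficient"     # medium to low average utilisation
    return "poorly-performing"      # very low average utilisation

fleet = {"app-srv-017": 55.2, "db-srv-003": 18.7, "legacy-srv-091": 2.1}
for name, avg in fleet.items():
    print(f"{name}: {classify_by_avg_cpu(avg)}")
```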

Another categorisation that needs to happen is based on power usage over time. While some servers will show a clear differentiation between power usage during normal business hours versus off hours, others will be harder to categorise due to more random usage. Servers typically used as hosts for virtualisation will likely demonstrate a more static usage of power.
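The sketch below shows how the two patterns could be told apart from hourly power readings; the 08:00–18:00 business window and the 1.2x ratio used to flag a business-hours pattern are illustrative assumptions.

```python
from statistics import mean

# samples: list of (hour_of_day, power_watts) readings over the capture cycle.
# The 08:00-18:00 business window and 1.2x ratio are illustrative assumptions.
def usage_profile(samples, business_start=8, business_end=18):
    business = [w for h, w in samples if business_start <= h < business_end]
    off_hours = [w for h, w in samples if not (business_start <= h < business_end)]
    b_avg, o_avg = mean(business), mean(off_hours)
    # A noticeable gap between the two averages suggests the server tracks
    # business hours; a small gap suggests static usage (e.g. a virtualisation host).
    return "business-hours pattern" if b_avg > 1.2 * o_avg else "static pattern"

readings = [(h, 400.0 if 8 <= h < 18 else 180.0) for h in range(24)]
print(usage_profile(readings))  # -> business-hours pattern
```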

For both categories, both peak and average consumption data need to be examined to avoid wrong conclusions later in the process. For instance, a server dedicated to monthly financial batch jobs might typically show up as under-performing if only average CPU consumption is consulted, while its corresponding peak value will tell a different story.
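A small sketch of such a peak-aware sanity check, continuing the batch-job example; the 10% average and 70% peak thresholds are assumptions for illustration only.

```python
# Avoid misclassifying a server by average alone: the batch-job server from
# the example above looks idle on average but busy at peak. The 10% average
# and 70% peak thresholds here are illustrative assumptions.
def needs_peak_review(avg_pct: float, peak_pct: float, peak_threshold=70.0) -> bool:
    return avg_pct < 10.0 and peak_pct >= peak_threshold

# A server running monthly financial batch jobs:
print(needs_peak_review(avg_pct=3.5, peak_pct=96.0))  # True: don't judge it on average alone
```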

Step 2: Evaluate findings and explore potential strategies
With discovery and categorisation complete, it's time to investigate the results. While action plans can quickly be established for the group of low-performing servers, which are typically decommissioned after further analysis, it is more difficult for the two remaining groups.

While it is tempting to instantly strive to virtualise the servers in the mid-to-low performing group, it might not be the best overall choice when trying to balance efficiency motives with honouring performance requirements. The challenge is to define reasonable power capping policies for each category of servers, or for individual servers in extreme cases. These policies have to be aggressive enough to provide a noticeable bottom-line saving after deployment, while also ensuring enough slack to mitigate foreseeable performance risk.
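As a sketch of what such policies might look like, the snippet below maps the Step 1 categories to cap levels. The cap percentages and the 10% headroom rule are illustrative assumptions, not vendor settings.

```python
# A minimal sketch of per-category power capping policies, using the
# categories from Step 1. Cap percentages (of nameplate power) and the
# 10% headroom rule are illustrative assumptions, not vendor settings.
POLICIES = {
    "well-performing": {"cap_pct_of_nameplate": 90},  # little to gain; keep slack
    "less-efficient":  {"cap_pct_of_nameplate": 70},  # aggressive, but with headroom
    # poorly-performing servers are candidates for decommissioning instead
}

def power_cap_watts(nameplate_w: float, category: str, observed_peak_w: float) -> float:
    cap = nameplate_w * POLICIES[category]["cap_pct_of_nameplate"] / 100
    # Never cap below the observed peak plus 10% slack, to mitigate performance risk.
    return max(cap, observed_peak_w * 1.10)

print(power_cap_watts(nameplate_w=750, category="less-efficient", observed_peak_w=430))
```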

