Virtualising a data centre can have consequences for the physical infrastructure, such as higher power density in racks, according to Benedict Soh, IT business VP, Schneider Electric Singapore. He shared with Computerworld Singapore some recommendations on how the data centre's physical infrastructure can be optimised to reduce energy costs and maximise server efficiency.
1) What are the data centre pain points encountered by your clients due to virtualised servers?
There are four primary pain points that businesses can encounter when undergoing virtualisation:
a) The rise of high density - Higher power density is likely to result from virtualisation, at least in some racks. Areas of high density can pose cooling challenges that, if left unaddressed, could threaten the reliability of the overall data centre.
b) Reduced IT load can affect PUE - After virtualisation, the data centre's power usage effectiveness (PUE) is likely to worsen, even though the initial physical server consolidation lowers overall energy use. If the power and cooling infrastructure is not right-sized to the new, lower overall load, physical infrastructure efficiency, as measured by PUE, will degrade.
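As a rough illustration of why PUE can worsen even as total energy falls (the figures below are hypothetical, not from the interview): PUE is total facility power divided by IT equipment power, so if infrastructure overhead stays largely fixed while the IT load shrinks, the ratio rises.

```python
# Hypothetical figures only: PUE = total facility power / IT equipment power.

# Before virtualisation: 1,000 kW of IT load; the power and cooling
# infrastructure sized for it draws an extra 800 kW of overhead.
it_before = 1000.0
facility_before = it_before + 800.0
pue_before = facility_before / it_before   # 1800 / 1000 = 1.8

# After server consolidation the IT load drops to 600 kW, but the
# now-oversized infrastructure still carries much of its fixed
# overhead - say 660 kW.
it_after = 600.0
facility_after = it_after + 660.0
pue_after = facility_after / it_after      # 1260 / 600 = 2.1

# Total facility power fell (1,800 kW -> 1,260 kW), yet PUE worsened
# (1.8 -> 2.1) because the overhead was not right-sized to the new load.
print(f"PUE before: {pue_before:.1f}, after: {pue_after:.1f}")
```

In this sketch the data centre saves energy overall, yet looks less "efficient" by the PUE metric - which is exactly why Soh stresses right-sizing the infrastructure to the reduced load.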
c) Dynamic IT loads - Virtualised IT loads, particularly in a highly virtualised, cloud data centre, can vary in both time and location. In order to ensure availability in such a system, it's critical that rack-level power and cooling health be considered before changes are made.
d) Lower redundancy requirements are possible - A highly virtualised data centre designed and operated with a high level of IT fault-tolerance may reduce the necessity for redundancy in the physical infrastructure. This effect could have a significantly positive impact on data centre planning and capital costs.
However, businesses can effectively address these challenges with a whole-system approach. The shift towards virtualisation, with its new challenges for physical infrastructure, re-emphasises the need for integrated solutions using a holistic approach - that is, consider everything together, and make it work as a system. Each part of the system must communicate and interoperate with the others. Demand and capacities must be monitored, co-ordinated, and managed by a central system, in real time, at the rack level, to ensure efficient use of resources and to warn of scarce or unusable ones.
2) So, how can data centre managers overcome the abovementioned power and cooling challenges?
There are three key solutions that businesses can implement in order to overcome their power and cooling obstacles.
The rise of high density
While virtualisation may reduce overall power consumption in the room, virtualised servers tend to be installed and grouped in ways that create localised high-density areas that can lead to "hot spots". Grouping or clustering these bulked-up, virtualised servers can result in significantly higher power densities that could then cause cooling problems. As such, businesses will require a cooling system that can adapt to changing power densities in both location and amount.