Q: Virtualisation has obviously brought tremendous benefits for businesses. Yet being able to maximise implementation and keep costs down seem to be two key concerns. What advice do you have to help businesses overcome these concerns?
My primary piece of advice is to think carefully about management and data protection tools. Selecting tools built for virtualisation increases the leverage you have over the underlying platform and increases the value of that major investment over time.
Secondly, strive for complete visibility into your virtual estate. Virtualisation allows rapid service provisioning and deprovisioning, and while this agility has benefits, it can create a management nightmare for those who cannot control it. For example, 'zombie' virtual machines that no longer serve a purpose will continue to consume resources if not carefully administered. This can delay the provisioning of new services or even cause outages for existing ones. Visibility is crucial.
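The zombie-VM problem above can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's API: it assumes an inventory report has already been exported from the hypervisor's management layer, with each record carrying the VM name, the timestamp of its last recorded I/O activity, and an owner field. VMs that have been idle beyond a threshold, or that have no registered owner, are flagged for review.

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; in practice these would be exported
# from the hypervisor's management or reporting tools.
vms = [
    {"name": "web-01", "last_io": datetime(2024, 5, 1), "owner": "appteam"},
    {"name": "test-legacy", "last_io": datetime(2023, 11, 2), "owner": None},
    {"name": "db-02", "last_io": datetime(2024, 4, 28), "owner": "dba"},
]

def find_zombies(vms, now, idle_days=90):
    """Flag VMs with no I/O inside the idle window, or with no owner."""
    cutoff = now - timedelta(days=idle_days)
    return [vm["name"] for vm in vms
            if vm["last_io"] < cutoff or vm["owner"] is None]

print(find_zombies(vms, now=datetime(2024, 5, 15)))  # ['test-legacy']
```

The thresholds and fields are illustrative; the point is that reclaiming zombies requires exactly the kind of continuous visibility described above.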
Q: While data backups are essential, efficient storage and retrieval still remain elusive for many organisations. There appears to be an availability gap between what's kept in storage and what can be retrieved quickly to help businesses run smoothly. Why is there such an "availability gap" and what can organisations do to close this gap?
First of all, stop thinking about backup; start thinking about recovery. If you work out your recovery point and recovery time objectives (RPOs and RTOs), then your backup strategy will naturally follow. The availability gap appears when those objectives are shackled by backup software that is not up to the challenge of virtualisation. The gap is widened by the expectation of 'always on' services and the pressure on IT to be a strategic asset to the business. There are a number of ways to bridge this gap: using the right tools, identifying tiers of data with different RPOs and RTOs, and using solutions appropriate to each tier, including hardware-based, software-based and cloud-based options.
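The tiering idea above can be sketched in a few lines. This is a hypothetical model, not a real tool: it assumes each data tier has been assigned a target RPO in hours, and compares that target against the backup interval the current schedule actually delivers. Any tier whose interval exceeds its RPO is an availability gap, because the worst-case data loss is larger than the business has agreed to accept.

```python
# Hypothetical data tiers: target RPO in hours vs. the backup interval
# the current schedule actually delivers.
tiers = {
    "tier-1 transactional": {"rpo_hours": 1, "backup_interval_hours": 24},
    "tier-2 departmental": {"rpo_hours": 24, "backup_interval_hours": 24},
    "tier-3 archive": {"rpo_hours": 168, "backup_interval_hours": 168},
}

def availability_gaps(tiers):
    """Return each tier whose backup interval exceeds its RPO target,
    with the size of the shortfall in hours."""
    return {name: cfg["backup_interval_hours"] - cfg["rpo_hours"]
            for name, cfg in tiers.items()
            if cfg["backup_interval_hours"] > cfg["rpo_hours"]}

print(availability_gaps(tiers))  # {'tier-1 transactional': 23}
```

Here the nightly backup comfortably meets the departmental and archive tiers but misses the transactional tier's one-hour RPO by 23 hours, which is precisely where a different solution, such as replication or snapshot-based protection, would be justified.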
Q: As business today is conducted at Internet speed, many organisations are struggling to keep legacy storage systems up to speed, despite having better hardware and software to manage data sprawl. What needs to be fixed first before they can gain better compatibility with legacy systems and improved data accessibility?
It is tempting to encourage readers to rush out and buy newer, faster hardware, but I would urge a more considered approach. If you follow my advice about having complete visibility over your IT infrastructure, then you are in a good position to leverage that data to make the right decisions about how to get the best out of existing infrastructure and where to make future investments. For example, Veeam customers use reporting to identify where they can increase virtual workload consolidation rates. This allows them to delay IT spend and truly sweat existing assets.