BLOG: 5 best practices for effective data recovery

Ang Chye Hin | Aug. 26, 2011
Organisations need best practices to optimise the backup process, avoid data loss and corruption, minimise the impact on bandwidth and storage, and enforce business-relevant policy to recover data in case of disaster.

Businesses today are seeing an explosive rate of data growth, yet many are unsure where this sudden growth is coming from and may not fully comprehend how the data is being used.

What is certain is that they are constantly challenged to manage and back up huge pools of information. Without a clear understanding of what to keep and what to discard, companies more often than not preserve everything as a precaution, without realising the downside.

Storing massive amounts of duplicate or irrelevant data within the backup footprint severely impacts network performance and drives up the costs of backup management and secondary storage.

The bottom line is that IT environments are constantly changing, and new regulations keep introducing stricter data-protection and disaster-recovery requirements. It is therefore prudent for each organisation to ask whether its current recovery solution is as effective as when it was first deployed.

In order to take control of this growing problem and ensure service levels, IT must re-evaluate traditional backup methods and their ability to recover data reliably in case of disaster. In addition, IT should seek opportunities to improve efficiency and reliability, reduce the amount of transmitted and stored data, and streamline management.

To optimise resources and reduce costs, IT needs to capture, manage and preserve information according to its business relevance and to contain the excessive growth of the organisation's backup volume. IT also needs to access and retrieve information easily, in strict compliance with regulatory mandates.

Traditional backup practice typically involves running full backups together with differential or incremental backups and log backups. This method is frequently inefficient and unreliable: it stores and transmits large volumes of duplicate data and overburdens IT administrative resources.

It also makes recovery a highly convoluted and error-prone process that can deliver poor results. The entire restore is dangerously dependent on the integrity of every backup in the chain: corruption of a single incremental backup file can cause the whole recovery to fail, and no organisation can afford for that to happen.
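
This fragility is easy to see in a sketch. The following is a minimal, hypothetical Python illustration, not any vendor's method: the manifest of SHA-256 digests, the file layout and the apply_archive step are all assumptions for illustration. A restore must replay the full backup plus every incremental in order, and a single corrupt link aborts the entire recovery.

import hashlib
from pathlib import Path


def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def restore_chain(full: Path, incrementals: list[Path], manifest: dict[str, str]) -> None:
    """Verify, then replay, a full backup plus its ordered incrementals.

    `manifest` maps each archive name to the SHA-256 digest recorded at
    backup time. A single mismatch anywhere in the chain aborts the whole
    restore: exactly the fragility described above.
    """
    chain = [full] + incrementals  # incrementals must be applied in order
    for archive in chain:
        if sha256(archive) != manifest[archive.name]:
            raise RuntimeError(f"corrupt backup file: {archive.name}; recovery aborted")
    for archive in chain:
        apply_archive(archive)


def apply_archive(archive: Path) -> None:
    # Placeholder for the product-specific extract/apply step.
    ...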

Organisations need best practices to optimise the backup process, avoid data loss and corruption, minimise the impact on bandwidth and storage, and enforce business-relevant policy easily and globally, so that data can be recovered reliably in case of disaster.

Optimise backup processes

An effective backup solution should know what, and what not, to back up. An ideal solution should intelligently identify, capture and preserve only unique and valuable information, while eliminating the costly backup and storage of duplicate, irrelevant or outdated data.
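
One common way to achieve this is content-addressed deduplication. The sketch below is a minimal illustration under assumed details (fixed 4 MiB chunks and an in-memory digest index; real products vary): each chunk is hashed, and only chunks the backup store has not already seen are stored or transmitted.

import hashlib
from pathlib import Path

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size chunks; an illustrative choice


def backup_file(path: Path, seen: set[str], store: dict[str, bytes]) -> int:
    """Back up one file, sending only chunks the store has not seen before.

    `seen` holds the digests of chunks already in the backup store, and
    `store` maps digest to chunk bytes (standing in for secondary storage).
    Returns the number of new chunks actually transmitted: duplicate data
    adds nothing to bandwidth or storage.
    """
    new_chunks = 0
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(CHUNK_SIZE), b""):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen:
                seen.add(digest)
                store[digest] = chunk
                new_chunks += 1
    return new_chunks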

Prevent data loss and corruption
