International Data Corporation (IDC) forecasts worldwide expenditure on public IT cloud services to reach more than $107 billion in 2017[i]. Following the cloud technology adoption boom in Asia, it is no surprise that companies are now shifting their focus to preventing data loss, securing their intellectual property and investing in data backup - only 1% of regional CIOs polled said they would not include cloud services in their budget for 2015[ii].
As organisations look to invest in the cloud for data backup, they often overlook a key component of the big picture - the network. Poor network performance can lead to high disaster recovery costs and missed recovery point objectives (RPO), whilst making access to stored data even more challenging for remote workers. A constrained network further threatens the success of cloud storage projects, and can drive up costs as organisations try to compensate for limited replication throughput and poor connectivity by buying more wide area network (WAN) capacity or upgrading servers.
At the highest level, cloud computing involves the predictable delivery of hosted services via a shared WAN, such as the Internet. All cloud computing initiatives have one thing in common - data is centralized, while users are distributed. Hence, it is critical for IT managers and business owners to keep a close eye on the predictability of such hosted services, as a lack of predictability has the potential to destabilize any cloud initiative and jeopardize the entire cloud storage investment.
Increasing bandwidth not always a solution
Three network elements that impact replication throughput and storage initiatives are bandwidth, latency caused by distance, and poor network quality caused by packet loss. The relationship between the three is complicated, with some having a greater impact than others in any given network environment. Adding more bandwidth, for example, will not always make a difference to cloud storage projects if there is too much latency due to extremely long distances. Similarly, having access to all the bandwidth available will not matter if packets are being dropped or delivered out of order due to congestion, as is often the case in Multiprotocol Label Switching (MPLS) and Internet connections. Simply deploying additional bandwidth to a system's WAN links will not solve these network challenges. Instead, such an investment may result in extra costs, given the price of WAN bandwidth.
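To see why latency and loss can dominate bandwidth, consider the widely cited Mathis et al. approximation, which bounds a single TCP flow's steady-state throughput at roughly MSS / (RTT x sqrt(loss rate)). The figures below are illustrative assumptions, not measurements from the article:

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: a single TCP flow's steady-state
    throughput is roughly (MSS / RTT) * (1 / sqrt(loss_rate))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# Illustrative numbers: 1460-byte segments, a 200 ms round trip
# (e.g. an intercontinental replication link) and 0.1% packet loss.
rate = tcp_throughput_bps(1460, 0.200, 0.001)
print(f"{rate / 1e6:.1f} Mbit/s per flow")  # roughly 1.8 Mbit/s
```

Under these assumed conditions a single replication flow tops out below 2 Mbit/s whether the circuit is 10 Mbit/s or 1 Gbit/s, which is why simply buying more bandwidth often fails to improve replication throughput.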
WAN optimisation is the unsung hero
To ensure optimal data delivery, organisations must establish a network able to cope with the increased flow of traffic that cloud storage initiatives bring. Failing to do so will result in an environment plagued by issues that compromise both performance and business benefits. The way to achieve this is to optimise the WAN: WAN optimisation can reduce traffic across the network by more than 90 percent and provide the scalability needed to support current and emerging applications.
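As a rough, self-contained illustration of the data-reduction idea behind WAN optimisation (sending only bytes the far side has not already seen), ordinary compression of repetitive traffic shows how large the savings can be. This is a sketch using standard zlib compression as a stand-in for the byte-level deduplication real products perform, and the sample traffic is hypothetical:

```python
import zlib

# Hypothetical repetitive traffic: the same 29-byte request sent 1,000
# times, standing in for the redundant patterns WAN optimisers remove.
traffic = b"GET /api/v1/status HTTP/1.1\r\n" * 1000

compressed = zlib.compress(traffic)
reduction = 1 - len(compressed) / len(traffic)
print(f"bytes on the wire cut by {reduction:.1%}")  # well over 90%
```

Production WAN optimisation appliances combine this kind of redundancy elimination with caching and protocol acceleration; compression here is only an analogy for the traffic-reduction claim above.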