
How to use cloud storage as primary storage

Barry Phillips, Chief Marketing Officer, Panzura | April 11, 2016
It takes a combination of caching, global deduplication, security, and global file locking.

The goal is to keep as much active data in the cache as possible. This can be accomplished by provisioning enough storage to hold the active data, and/or by using an efficient caching algorithm. We typically find that customers underestimate the amount of local cache needed, even when they plan for growth. They often add more users than forecast, or put more types of data in the cache than originally planned, because data in the cache does not need separate backup, DR, or archive systems.

A caching algorithm uses machine learning to decide what data needs to be cached locally and what data can “recede” into the cloud. Techniques in the caching algorithm can predict which data to keep in cache based on when and how the data was written. The goal is to anticipate what will be needed next, based on the data currently being accessed, and then “pre-fetch” data that is not yet cached.
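The caching behavior described above can be sketched in a few lines. This is a minimal illustration, not Panzura's algorithm: it uses plain least-recently-used eviction in place of machine learning, and a simple "blocks written just after this one" heuristic for pre-fetching. All class and parameter names are mine.

```python
from collections import OrderedDict

class PrefetchingCache:
    """Sketch of a local cache: LRU eviction lets cold data "recede" to
    the cloud, and a write-order heuristic pre-fetches likely-next blocks."""

    def __init__(self, capacity, backing_store, write_log):
        self.capacity = capacity    # max blocks held locally
        self.store = backing_store  # dict: block_id -> data ("the cloud")
        self.write_log = write_log  # block ids in the order they were written
        self.cache = OrderedDict()  # block_id -> data, in LRU order

    def _admit(self, block_id):
        self.cache[block_id] = self.store[block_id]
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict LRU: it recedes to the cloud

    def read(self, block_id, prefetch=2):
        hit = block_id in self.cache
        self._admit(block_id)
        # Heuristic: blocks written just after this one are likely needed next.
        idx = self.write_log.index(block_id)
        for neighbor in self.write_log[idx + 1: idx + 1 + prefetch]:
            if neighbor not in self.cache:
                self._admit(neighbor)
        return hit
```

Reading block 0 is a miss, but because blocks 1 and 2 were written right after it, they are pre-fetched and subsequent reads of them hit the cache.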

Caching does not have to be all or nothing at the file level if global deduplication is used. A global dedup table in the cache lets the caching algorithm leverage blocks that are common across different files, so when a file that is not fully in cache is accessed, only its missing blocks are pulled down. This dramatically reduces the time to access a file that is not fully cached locally.
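The block-level idea can be shown with content hashing. In this sketch (function names and the tiny block size are mine for illustration; real systems use much larger blocks), the dedup table maps a block's hash to its data, so only blocks whose hashes are absent need to come down from the cloud:

```python
import hashlib

def fingerprint(block: bytes) -> str:
    """Content hash used as the dedup-table key."""
    return hashlib.sha256(block).hexdigest()

def missing_blocks(file_bytes: bytes, dedup_table: dict, block_size: int = 4):
    """Return only the blocks of a file NOT already in the local dedup
    table -- the only blocks that must be fetched from the cloud."""
    blocks = [file_bytes[i:i + block_size]
              for i in range(0, len(file_bytes), block_size)]
    return [b for b in blocks if fingerprint(b) not in dedup_table]
```

If a new file shares most of its blocks with files already cached, only the genuinely new blocks are fetched, regardless of which file they originally arrived with.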

Global dedup is especially useful when transferring a file from one local cache to another, assuming both caches are connected to the same cloud storage. Since each local cache has a dedup table, it knows which blocks of an incoming file it is missing, and only those blocks are actually sent across the wide area network between the two caches. Electronic Arts cut the transfer times of 10GB-to-50GB game build files from over 10 hours to just minutes because only the new blocks of each file were actually transferred.
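A cache-to-cache transfer along these lines can be simulated end to end. This is a hypothetical two-phase exchange of my own construction, not Panzura's protocol: the sender ships a manifest of block hashes, the receiver compares it against its dedup table, and only the missing bytes cross the wire.

```python
import hashlib

def manifest(file_bytes: bytes, block_size: int = 4):
    """Split a file into blocks and return (hash list, block list)."""
    blocks = [file_bytes[i:i + block_size]
              for i in range(0, len(file_bytes), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks], blocks

def transfer(file_bytes: bytes, receiver_table: dict, block_size: int = 4):
    """Simulate a dedup-aware transfer into a receiving cache.
    Returns (rebuilt file, number of blocks that crossed the WAN)."""
    hashes, blocks = manifest(file_bytes, block_size)
    missing = {h for h in hashes if h not in receiver_table}
    for h, b in zip(hashes, blocks):
        if h in missing:
            receiver_table[h] = b  # only these blocks cross the WAN
    # The receiver reassembles the file entirely from its dedup table.
    rebuilt = b"".join(receiver_table[h] for h in hashes)
    return rebuilt, len(missing)
```

With an old game build already in the receiving cache, transferring a new build that differs in one block sends exactly one block, which is the mechanism behind the 10-hours-to-minutes result.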

While caching and dedup are a tremendous help, they do not fully solve the latency issue. They eliminate or significantly reduce data transfer time, but they do not address “application chattiness.” People often talk about chattiness and latency, but do not fully appreciate how the combination of the two can have a far bigger performance impact than data transfer itself. This can be illustrated with a time-and-motion study of a chatty application opening a small 1.5MB file across the country -- from New York to California.

CAD, like other technical applications, performs a large number of file operations sequentially when a file is opened. In the case of AutoCAD, the most widely used CAD program, nearly 16,000 file operations occur when a file is opened. This is the “chattiness” of the application. If the authoritative copy of the file (the one holding the file lock) is 86 milliseconds away (the round-trip latency between California and New York), the open takes 16,000 * 86ms -- roughly 23 minutes. The actual data transfer for a 1.5MB file is a tiny fraction of that time.
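The arithmetic is worth seeing side by side. Using the article's figures, and assuming (my assumptions) that every operation waits one full round trip and that the WAN link runs at an illustrative 10 Mbps:

```python
# Back-of-envelope from the article's numbers.
operations = 16_000    # sequential file operations on an AutoCAD open
rtt_s = 0.086          # 86 ms round trip, New York <-> California

# Latency-bound cost: every operation waits one round trip (assumption).
chatty_open_s = operations * rtt_s

# Bandwidth-bound cost of moving the file itself (10 Mbps is illustrative).
file_bytes = 1.5e6
link_bps = 10e6
transfer_s = file_bytes * 8 / link_bps

print(f"{chatty_open_s / 60:.1f} minutes of latency "
      f"vs {transfer_s:.1f} seconds of transfer")
```

The open is dominated almost entirely by round trips, not by moving the 1.5MB of data, which is why caching and dedup alone cannot fix it.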


