* Is data rolled up and transmitted to other internal or external entities? We all know that data sharing is pervasive across the Internet and that there are many different opt-in/opt-out programs. It's really important to understand whether the cloud provider shares data with anyone, how they share it, when they share it, why they share it, and where it is transmitted.
Beyond security and privacy, your cloud provider's activities will intersect with many of your organization's day-to-day operations. Understanding this intersection will help you determine whether the way the provider handles your data and serves it to your constituents supports or disrupts your operations.
* What is the database and storage architecture redundancy model? Redundancy, in particular, is important because it determines how the provider handles infrastructure failure without interrupting business continuity.
* What is the backup frequency? We've all heard this mantra since computers were introduced: back up, back up, back up. And it is extremely important to understand the frequency with which cloud providers do backups. Obviously, the more frequent the backups, the less data you stand to lose, and the easier it will be for your provider to restore service to a specific point in time if there is any failure.
* What is the recovery time from failure? It is inevitable that your provider will have an issue at some point. It is imperative that you understand how long it will take your cloud provider to recover your data. Is it minutes, hours, days, or weeks? Failures will happen, but you need to know how long recovery will take when you're relying on a service provider.
* How can we access or download data from the service? Asking this question helps you to understand the different philosophies of service providers and get better insight into how those steps align or conflict with your operational processes.
* Which analytical tools are available to view our data? The service provider may hold a wealth of your data in their service, and you might not want to export all of it and run it through third-party analytics tools just to make sense of it. It's far more beneficial if the provider offers analytics itself, so that you can aggregate and model the data in place.
* If there is data corruption, what is the maximum data loss that we can expect? This question ties directly into the redundancy and recovery questions noted earlier, and the answers should be closely aligned. How long will it take to recover from a data failure, and how will that recovery process actually affect data quality?
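The backup-frequency and maximum-data-loss questions above can be reasoned about together: in the worst case, a failure occurs just before the next backup, so everything written since the last backup is at risk. A minimal sketch of that arithmetic follows; the function name and parameters are illustrative, not tied to any provider's terminology.

```python
from datetime import timedelta

def worst_case_data_loss(backup_interval: timedelta,
                         replication_lag: timedelta = timedelta(0)) -> timedelta:
    """Estimate the worst-case data loss (often called the recovery
    point objective, or RPO): if a failure strikes just before the
    next backup runs, everything written since the previous backup,
    plus any replication lag, is at risk."""
    return backup_interval + replication_lag

# Example: daily backups with a 5-minute replication lag puts just
# over a day's worth of data at risk in the worst case.
loss = worst_case_data_loss(timedelta(hours=24), timedelta(minutes=5))
print(loss)  # 1 day, 0:05:00
```

Plugging your provider's stated backup interval into a check like this gives you a concrete figure to compare against what your operations can actually tolerate losing.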