Self-driving cars? Get ready for self-driving data

Lance Smith | June 16, 2015
Once the stuff of science fiction, self-driving cars will soon allow passengers to specify a destination and let the car pick the best route based on factors such as time, traffic, freeways and fuel consumption. This kind of automation for an enterprise's most precious commodity — data — is also coming soon to a data center near you.

Intelligent data mobility delivered through data virtualization will allow IT professionals to specify service-level objectives (SLOs) such as performance, reliability, high availability, archiving and cost, and then let software automatically move data to the right storage in real time. Let's examine the problem of data immobility and how data placement through data virtualization will finally solve the common mismatch of compute and storage, resource sprawl and the cost of overprovisioning.
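To make the idea concrete, here is a minimal sketch of SLO-driven placement: given a declared objective and a catalogue of storage tiers, software picks the cheapest tier that still meets the objective and can re-evaluate that choice as objectives or tiers change. The class names, fields and numbers below are illustrative assumptions, not any vendor's actual API.

```python
# Illustrative sketch of SLO-driven data placement; names, fields and
# numbers are assumptions for this example, not a real product's API.
from dataclasses import dataclass

@dataclass
class SLO:
    max_latency_ms: float    # performance objective
    min_copies: int          # reliability / high-availability objective
    max_cost_per_gb: float   # cost ceiling

@dataclass
class StorageTier:
    name: str
    latency_ms: float
    copies: int
    cost_per_gb: float

def choose_tier(slo, tiers):
    """Pick the cheapest tier that still satisfies every objective in the SLO."""
    eligible = [t for t in tiers
                if t.latency_ms <= slo.max_latency_ms
                and t.copies >= slo.min_copies
                and t.cost_per_gb <= slo.max_cost_per_gb]
    if not eligible:
        raise ValueError("no tier satisfies this SLO")
    return min(eligible, key=lambda t: t.cost_per_gb)

tiers = [
    StorageTier("nvme-flash", 0.2, 2, 0.50),
    StorageTier("sas-hdd", 8.0, 2, 0.08),
    StorageTier("object-archive", 200.0, 3, 0.01),
]

# A latency-sensitive dataset lands on flash; relax the objective and the
# same placement logic moves the data to cheaper storage automatically.
print(choose_tier(SLO(1.0, 2, 1.00), tiers).name)    # nvme-flash
print(choose_tier(SLO(50.0, 2, 1.00), tiers).name)   # sas-hdd
```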

Data silo snowball rolls ever faster

The value of data and the demands on it change over time, but today most data stays where it is first written. This is because applications are not storage-aware: when deployed, each application is configured against the specific storage available to it, whether DAS, SAN, NAS or tape; fast (low latency) or slow; shared or protected. This one-to-one relationship creates data silos that make moving data to faster or more cost-effective storage a complex and time-consuming task as business needs change. Worse, moving it usually means interrupting services.
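In a data virtualization model, that one-to-one binding is replaced by a level of indirection: the application addresses a stable logical name, and the virtualization layer resolves it to whichever backing store currently holds the data, so the data can move without the application being reconfigured or taken offline. The sketch below is a simplified assumption of how such a mapping might work, not a description of any particular product.

```python
# Simplified sketch of the indirection a data virtualization layer provides;
# all identifiers below are made up for illustration.
class DataVirtualizer:
    def __init__(self):
        self._placement = {}   # logical name -> current backing store

    def place(self, logical_name, backend):
        self._placement[logical_name] = backend

    def resolve(self, logical_name):
        # Applications always look up the logical name, never a device path.
        return self._placement[logical_name]

    def migrate(self, logical_name, new_backend):
        # The bytes move between backends, but the name the application
        # uses never changes, so there is no reconfiguration or downtime
        # from the application's point of view.
        self._placement[logical_name] = new_backend

v = DataVirtualizer()
v.place("orders-db", "san-array-01")
print(v.resolve("orders-db"))       # san-array-01
v.migrate("orders-db", "nvme-pool-02")
print(v.resolve("orders-db"))       # nvme-pool-02, application path unchanged
```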

Data silos have long been a problem, but the challenges facing modern businesses are making this a critical issue. To stay competitive, companies must now support multiple devices per employee, run real-time business analytics over all kinds of data, support development methodologies that accelerate time to market, and respond to quickly evolving compliance and regulatory requirements. This pressure is causing data needs to change faster than ever before, but companies today do not have the visibility or the time to size, procure and migrate data frequently enough to make the best use of their resources at all times. The scale of the challenge is growing quickly, and IT is in a race to keep up.

The data silo tax: Overprovisioning and migration migraines

As a result of the sprint to stay ahead of exponential data growth, enterprises today put extraordinary effort into initially allocating the right storage for each application and typically overprovision to defer migrating data as long as possible. When migration can't be deferred any longer, realigning data to resources is a manual process that consists primarily of putting smaller containers into bigger ones. This approach alleviates the symptoms without addressing the core problem: as business needs evolve, data can't easily be moved to more suitable storage.

Given that IT professionals are in high demand, it can be difficult for companies to even hire enough experts to keep their critical computing infrastructure running smoothly. Even when enough staff can be hired for the IT team, the labor and infrastructure costs of data silos are immense. IDC recently reported that 60% of all large IT project spending is on migrations — just moving data to new storage. In addition, research from Schneider Electric found that actual capacity load at initial deployment of new storage resources is just 20% and peaks at a mere 60%. This means that 40% or more of data center infrastructure is commonly wasted, simply to defer the need to migrate data later on.
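Taken at face value, those utilization figures imply a large amount of capacity that is paid for but never used. As a rough back-of-the-envelope illustration (the capacity and unit cost below are assumed for the example, not drawn from the cited research):

```python
# Back-of-the-envelope view of the utilization figures cited above.
# The provisioned capacity and cost per TB are illustrative assumptions.
provisioned_tb = 100      # capacity bought at initial deployment
peak_load = 0.60          # utilization peaks at roughly 60%

idle_at_peak_tb = provisioned_tb * (1 - peak_load)
cost_per_tb = 500         # assumed all-in cost per TB (hardware, power, space)

print(f"Idle even at peak: {idle_at_peak_tb:.0f} TB "
      f"({1 - peak_load:.0%} of provisioned capacity)")
print(f"Capital tied up in headroom: ${idle_at_peak_tb * cost_per_tb:,.0f}")
```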

 
