
What hyperscale storage really means

Rob Whiteley | March 3, 2016
Commodity-based and software-defined, hyperscale infrastructure picks up where hyperconvergence leaves off

Our experience with customers increasingly confirms what we've been noting for a while: hyperconverged is an answer, not the answer, when exploring modern storage architectures. To be sure, the industry is seeing a big pendulum swing toward hyperconverged because of its simplicity. But if your data is growing exponentially and your compute needs are not, you have an impedance mismatch that hyperconvergence is poorly suited to handle.

Hyperscale or hyperconverged?

Hyperconverged can be a simpler, more cost-effective approach. However, what our customers discover with Hedvig is that we support a feature that makes hyperscale appropriate for almost all workloads: client-side caching. Hedvig can take advantage of local SSD and PCIe devices in your compute tier to build a write-through cache. This significantly improves read performance and, more importantly, solves the data locality challenge. Storage is still decoupled and runs in its own dedicated hyperscale tier, but applications, VMs, and containers benefit from data cached locally at the compute tier. This also solves the problem of how to grow your caching tier, but that's a topic for another article.
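The write-through pattern described above can be sketched in a few lines. This is a minimal illustration only, not Hedvig's actual implementation: the class name and dict-based stores are assumptions standing in for local flash and the remote hyperscale tier.

```python
class WriteThroughCache:
    """Sketch of a client-side write-through cache (illustrative only).

    `backing_store` stands in for the remote hyperscale storage tier;
    `self.cache` stands in for local SSD/PCIe flash on the compute node.
    """

    def __init__(self, backing_store):
        self.backing_store = backing_store
        self.cache = {}

    def write(self, key, value):
        # Write-through: every write lands on the storage tier,
        # and a copy is kept locally for fast subsequent reads.
        self.backing_store[key] = value
        self.cache[key] = value

    def read(self, key):
        # Serve from the local cache when possible (data locality),
        # falling back to the remote tier and populating on a miss.
        if key in self.cache:
            return self.cache[key]
        value = self.backing_store[key]
        self.cache[key] = value
        return value
```

Because every write is persisted to the storage tier before being cached, losing a compute node loses no data, only locality, which is what keeps the compute and storage tiers independently scalable.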

As an example of this benefit, one customer chose Hedvig's hyperscale approach for VDI, a workload traditionally reserved for hyperconverged solutions as discussed above. In this instance, the customer had "power users" that required 16 vCPUs and 32GB of memory to be dedicated to each hosted desktop. As a result, the company was forced to deploy a large number of hyperconverged nodes to support the processing and memory requirements, while unnecessarily increasing storage capacity in lockstep.

With the Hedvig platform, the customer was able to create dedicated nodes to run the Citrix XenDesktop farm on beefy blade servers with adequate CPU and RAM. The data was kept on a separate hyperscale Hedvig cluster on rack-mount servers, with data cached back on the XenDesktop servers in local SSDs. The result? A dramatically less expensive solution (60 percent less). More significantly, it also provided a more flexible environment where the company could ride Moore's Law and buy the most powerful servers needed to upgrade desktop performance without having to upgrade storage servers.

Based on our experience, there are some easy rules of thumb to determine which architecture is right for you.

  • Choose hyperscale when... your organization has 5,000 employees or more, more than 500 terabytes of data, more than 500 applications, or more than 1,000 VMs.
  • Choose hyperconverged when... you're below these watermark numbers, have five or fewer staff managing your virtual infrastructure, or you're in a remote or branch office.
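The thresholds above can be encoded as a simple helper. The numbers come straight from the bullets; the function name, signature, and the ordering of the checks are illustrative assumptions.

```python
def suggest_architecture(employees, data_tb, apps, vms,
                         virt_staff, remote_or_branch=False):
    """Rule-of-thumb architecture picker based on the watermarks above.

    Illustrative only: treats a small virtualization team or a
    remote/branch office as pointing to hyperconverged regardless
    of scale, then applies the hyperscale watermarks.
    """
    # Small teams and remote/branch offices favor hyperconverged.
    if remote_or_branch or virt_staff <= 5:
        return "hyperconverged"
    # Crossing any watermark favors hyperscale.
    if employees >= 5000 or data_tb > 500 or apps > 500 or vms > 1000:
        return "hyperscale"
    return "hyperconverged"
```

For example, a 10,000-employee firm with 800TB of data and a 20-person virtualization team would land on hyperscale, while the same firm's branch offices would still get hyperconverged.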

The good news is that it doesn't have to be an either/or decision. You can start in a hyperconverged environment, then switch to hyperscale, or you can mix and match the two. Our philosophy is that your applications dictate which one you should use. And as your application needs will change over time, so should your deployment.

 
