
The pros and cons of hyper-converged solutions

Rob Enderle | March 21, 2016
Though hyper-converged solutions are currently very popular, columnist Rob Enderle writes that despite how flexible and powerful they can be, there are issues.

Hyper-converged solutions (in essence, software-defined systems with tightly integrated storage, networking and compute resources) are all the rage at the moment. However, while I was giving a talk on trends through the end of the decade, someone representing an enterprise-class company explained why they didn't work for him. His team had run extensive trials with a broad number of vendors, and the results made it clear that they needed to approach their problems with a more traditional solution. I'm a fan of the hyper-converged concept, but we so often focus almost exclusively on the benefits of this approach that we forget it also has some shortcomings.

Let’s refresh this week.

Benefits

The clear benefits of having a highly integrated and tested system are short deployment times and relatively high reliability. This is because each component is put through a massive series of tests to ensure it works with every other component, and, done right, you end up with an enterprise-class appliance. Basically, it is almost a data center in a box (we'll get to the "almost" part in a minute).

Where a traditional solution based on buying components could take months to configure and install, a hyper-converged system can be implemented in weeks and sometimes even in days. And because so much work is put into the interoperation of the components, and because the management systems that surround the solution are designed specifically around it, much of the complexity that typically makes managing and assuring a data center difficult is eliminated. This offers a huge advantage for those providing a broad range of relatively generic services, either for their own companies or as a service provider for others.

When I've spoken to service providers who have implemented a good hyper-converged solution, it is almost like talking to a religious fanatic. They just gush about how flexible and surprisingly powerful the result is.

But there are issues.

Hyper-converged issues

The big one is performance, and that is what the audience member was explaining to me. His Hadoop deployment required minimal latency and massive performance, and no hyper-converged solution he tested met his needs. He lived on the cutting edge of Intel technology, and when he tried to get a hyper-converged solution built on that hardware he couldn't find one. This is because part of creating a hyper-converged solution is massive interoperability testing, which can take months to complete after a new processor and chipset are announced.

The reason these systems deploy so quickly is that this testing is done before the system is certified for sale, but you can always buy a server on the cutting edge and do the testing yourself. And because Intel has done a stunning job tuning its platform for Hadoop, the result is a massive performance improvement tied to its newest products. So if you are willing to trade off interoperation for performance, and you need the absolute highest performance, then, as my audience member pointed out, hyper-converged solutions, at least for this use, are not for you.

