
Debunking the myths about scale-up architectures

Ferhat Hatay, Fujitsu Oracle Center of Excellence | Feb. 10, 2015
Given the rapid pace of server design innovation, earlier concerns about scale-up servers no longer hold water.

This vendor-written tech primer has been edited to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

When growing capacity and processing power in the data center, the architectural trade-offs between scale-up and scale-out servers continue to be debated. Both approaches are valid: scale-out adds multiple smaller servers running in a distributed computing model, while scale-up adds fewer, more powerful servers capable of running larger workloads.

Today, much of the buzz is around scale-out architectures, popularized by companies like Facebook and Google, because they are commonly viewed as more cost-effective and "infinitely" scalable.

But given the rapid pace of server design innovation, earlier concerns about scale-up servers no longer hold water. Newer scalable system designs blend features from both approaches, blurring the distinction between the two. Today's scale-up architectures bring scalability, capacity and reliability together with the economics of the scale-out model. The scale-up model should now be considered for emerging applications like Big Data and Deep Analytics, given its inherent advantages: a globally addressable flat memory space for In-Memory Computing, scalability with low overhead, and easier management.
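The flat-memory advantage can be illustrated with a minimal sketch (assumptions: a toy dataset and a simulated four-node cluster, not any particular product). On a scale-up box the whole dataset lives in one address space, so an aggregation is a single pass; on a scale-out cluster the same data must first be partitioned across nodes and the partial results merged.

```python
# Minimal sketch: one address space vs. partition-and-merge.
data = list(range(1_000_000))  # stand-in for an in-memory dataset

# Scale-up style: the full dataset is directly addressable, one pass.
total_scale_up = sum(data)

# Scale-out style: shard across (simulated) nodes, compute locally, merge.
NODES = 4  # assumed cluster size for illustration
shards = [data[i::NODES] for i in range(NODES)]  # partition step
partials = [sum(shard) for shard in shards]      # per-node work
total_scale_out = sum(partials)                  # merge step

assert total_scale_up == total_scale_out
```

Both paths produce the same answer; the difference is the extra partition and merge machinery the scale-out path must carry, which is exactly the overhead the flat-memory model avoids.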

Let's take a look at the facts behind common scale-up server myths:

Myth #1: Scale-up is prohibitively expensive. The higher cost of larger systems used to be a valid argument, because special memory, I/O and other components -- while offering key benefits and higher value to the customer -- drove up the cost. Not anymore. Modern scale-up systems are designed to use low-cost, commodity components as much as possible, debunking the "too expensive" argument. Moreover, a few larger systems carry less overhead and are easier to manage than hundreds or even thousands of smaller servers. This is a big win for scale-up, since IT departments are looking at overall operating expenses, not just initial acquisition costs.

Myth #2: Scale-out leads to higher reliability. Many IT administrators worry about systems going down and interrupting business operations. The redundancy across multiple systems in a scale-out model holds appeal because the failure of a single server is easily tolerated. Yet the challenge with sprawling, distributed systems has always been mapping workloads and applications across many machines, and the complexity and cost that mapping introduces.
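One concrete source of that mapping complexity can be sketched as follows (a hypothetical example: the node names, key format and naive modulo-hash routing are illustrative, not a recommended design). In a scale-out cluster every request must first be routed to the right node, and the routing itself must be kept consistent as nodes fail or are added.

```python
# Sketch: naive hash routing in a scale-out cluster, and what a node
# failure does to it. (Real systems use consistent hashing to soften
# this, at the cost of yet more machinery.)
import hashlib

SERVERS = ["node-a", "node-b", "node-c"]  # assumed cluster members

def route(key: str, servers: list[str]) -> str:
    """Map a key to a server by hashing it modulo the server count."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

keys = [f"user-{i}" for i in range(1000)]
before = {k: route(k, SERVERS) for k in keys}
after = {k: route(k, SERVERS[:-1]) for k in keys}  # node-c fails
moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys remapped after losing one node")
```

With naive modulo hashing, losing one of three nodes remaps roughly two thirds of all keys -- data that must be reshuffled or re-fetched, which is the kind of operational cost a single-system scale-up design sidesteps entirely.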

Newer scale-up servers build high reliability into every level of the architecture, from processor to component to complete system, for continuous business operations. These systems constantly monitor themselves and can take proactive measures to ensure uninterrupted operation, such as dynamically degrading, off-lining, or replacing failed or failing components on the fly. Many of these newer servers also employ physical as well as software-based partitioning, which provides levels of isolation to improve availability.
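The proactive off-lining described above can be sketched in a few lines (a toy model: the component names, error counts and threshold policy are invented for illustration and do not reflect any vendor's implementation). The monitor simply takes a component out of service once its correctable-error count crosses a threshold, so the system keeps running on the remaining healthy parts.

```python
# Toy sketch of threshold-based proactive off-lining.
ERROR_THRESHOLD = 3  # assumed policy: off-line after 3 correctable errors

# Hypothetical components with observed correctable-error counts.
components = {"dimm-0": 0, "dimm-1": 5, "cpu-2": 1}

online = {name for name, errors in components.items()
          if errors < ERROR_THRESHOLD}
offlined = set(components) - online

print(f"online: {sorted(online)}, off-lined: {sorted(offlined)}")
```

Here the failing memory module is removed from service before it causes an outage, while the rest of the system continues uninterrupted.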

