
Debunking the myths about scale-up architectures

Ferhat Hatay, Fujitsu Oracle Center of Excellence | Feb. 10, 2015
Given the rapid pace of server design innovation, earlier concerns about scale-up servers no longer hold water.

Myth #3: Scale-up offers limited scalability. The notion that a single scale-up system is limited to the resources within its physical "box" reflects conventional thinking. The capacity of these systems has grown tremendously over the years. Today's servers can offer up to hundreds of times the compute density of previous generations, along with far more memory and I/O capacity, all packed into a compact footprint (as small as 1U) that consumes significantly less energy.

These compact yet powerful servers incorporate sophisticated reliability features borrowed from mainframe computing. At the other end of the spectrum, some scale-up systems can grow to more than 1,000 processor cores in a single system.

Innovations in system interconnect technologies have broken through earlier architectural limits, enabling flexible growth across physical system boundaries with modular "building blocks." Dynamic scalability combines the best of both worlds: the large transaction and analytics processing power of scale-up servers with the capacity growth and economic benefits of scale-out servers. Dynamic scaling is also a way to bridge to the new world of cloud computing while protecting investments in existing applications.

Unique Benefits of Scale-Up

Scale-up architectures also offer unique advantages of their own. One of the biggest is large memory and compute capacity, which makes in-memory computing possible: large databases can reside entirely in memory, boosting analytics performance and speeding up transaction processing. Because disk accesses are virtually eliminated, database query times can be shortened by orders of magnitude, enabling real-time analytics for greater business productivity and converting wait time to work time.
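
As a concrete illustration, the short Python sketch below uses the standard library's sqlite3 module to run the same aggregate query against an in-memory database and an on-disk file; the table name, row count, and file path are invented for the example. The point is simply that the in-memory copy never touches disk, which is the effect the in-memory computing argument depends on.

    import os
    import sqlite3
    import tempfile
    import time

    def build_and_query(db_path):
        """Create a small, made-up 'sales' table and time one aggregate query."""
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
        conn.executemany(
            "INSERT INTO sales VALUES (?, ?)",
            (("apac", float(i % 100)) for i in range(200_000)),
        )
        conn.commit()
        start = time.perf_counter()
        conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    # ":memory:" keeps the whole database in RAM; the temporary file stands in
    # for an on-disk store. On a laptop the gap will be modest because the OS
    # caches the file; the article's claim concerns datasets far larger than
    # that cache, where avoiding disk I/O matters most.
    disk_path = os.path.join(tempfile.mkdtemp(), "sales.db")
    print(f"in-memory query time: {build_and_query(':memory:'):.4f}s")
    print(f"on-disk query time:   {build_and_query(disk_path):.4f}s")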

Scale-up servers that rely on an internal system interconnect rather than an external network also deliver faster processing, thanks to reduced software overhead and lower latency when data moves between processors and memory across the entire system.

Is it feasible and economical to support both scale-out and scale-up workloads on the same system or class of systems? At the end of the day, it comes down to how many nodes you deploy (scale-out) and how large each node is (scale-up).
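
As a back-of-the-envelope sketch (in Python, with hypothetical core counts rather than figures from any particular product), the same aggregate capacity can be reached with many small nodes or a few large ones; what changes is how much of that capacity sits inside each node.

    # Hypothetical core counts, not figures from the article or any product.
    TOTAL_CORES_NEEDED = 1024

    configurations = {
        "scale-out (small nodes)": 16,    # cores per node
        "hybrid (medium nodes)": 128,
        "scale-up (large node)": 1024,
    }

    for name, cores_per_node in configurations.items():
        nodes = -(-TOTAL_CORES_NEEDED // cores_per_node)  # ceiling division
        print(f"{name:26s} -> {nodes:3d} node(s) x {cores_per_node} cores each")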

For newer workloads such as Big Data and deep analytics, the scale-up model is a compelling option. Given the significant innovations in server design over the past few years, earlier concerns about the cost and scalability of scale-up no longer hold. With the unique advantages that newer scale-up systems offer, businesses are realizing that a single scale-up server can handle Big Data and other large workloads as well as, or better than, a collection of small scale-out servers in terms of performance, cost, power, and server density.

 
