
Big data showdown: Cassandra vs. HBase

Rick Grehan | April 3, 2014
Bigtable-inspired open source projects take different routes to the highly scalable, highly flexible, distributed, wide-column data store

In this brave new world of big data, a database technology called "Bigtable" would seem to be worth considering — particularly if that technology is the creation of engineers at Google, a company that should know a thing or two about managing large quantities of data. If you believe that, two Apache database projects — Cassandra and HBase — have you covered.

Bigtable was originally described in a 2006 Google research paper. Interestingly, that paper doesn't describe Bigtable as a database, but as a "sparse, distributed, persistent multidimensional sorted map" designed to store petabytes of data and run on commodity hardware. Rows are uniquely indexed, and Bigtable uses the row keys to partition data for distribution around the cluster. Columns can be defined within rows on the fly, making Bigtable for the most part schema-less.
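To make the "multidimensional sorted map" idea concrete, here is a minimal sketch in Java of the shape of that map. It is not from the paper itself; the class and method names are invented for illustration. Values are addressed by row key, column name, and timestamp, with the newest version of a cell read first.

    import java.util.TreeMap;

    // Toy model of Bigtable's data model: a sorted map keyed by
    // (row key, column name, timestamp) -> value. Real Bigtable stores
    // byte arrays and splits row ranges across tablet servers; this
    // sketch only illustrates the shape of the map.
    public class BigtableModel {
        // row key -> column name -> (timestamp, newest first) -> value
        private final TreeMap<String, TreeMap<String, TreeMap<Long, String>>> cells =
                new TreeMap<>();

        public void put(String rowKey, String column, long timestamp, String value) {
            cells.computeIfAbsent(rowKey, r -> new TreeMap<>())
                 .computeIfAbsent(column, c -> new TreeMap<>((a, b) -> Long.compare(b, a)))
                 .put(timestamp, value);
        }

        // Most recent value for a cell, or null if the cell doesn't exist.
        public String get(String rowKey, String column) {
            TreeMap<String, TreeMap<Long, String>> row = cells.get(rowKey);
            if (row == null || !row.containsKey(column)) return null;
            return row.get(column).firstEntry().getValue();
        }
    }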

Cassandra and HBase have both borrowed much from the original Bigtable definition. Cassandra descends from both Bigtable and Amazon's Dynamo, while HBase describes itself as an "open source Bigtable implementation." As such, the two share many characteristics, but there are also important differences.

Born for big data
Both Cassandra and HBase are NoSQL databases, a term for which you can find numerous definitions. Generally, it means you cannot manipulate the database with SQL. However, Cassandra has implemented CQL (Cassandra Query Language), whose syntax is clearly modeled on SQL.
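In practice, a basic CQL query is nearly indistinguishable from its SQL counterpart. Here's a minimal sketch using the DataStax Java driver (4.x API); the demo.users table, its columns, and the localhost node are assumptions for illustration:

    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.Row;

    // Sketch: CQL reads like SQL but runs against Cassandra's data model.
    // Assumes a node listening on localhost:9042 and an existing
    // demo.users table with a bigint user_id column and a text email column.
    public class CqlExample {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                Row row = session.execute(
                        "SELECT user_id, email FROM demo.users WHERE user_id = 42").one();
                if (row != null) {
                    System.out.println(row.getLong("user_id") + " " + row.getString("email"));
                }
            }
        }
    }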

Both are designed to manage extremely large data sets. HBase documentation proclaims that an HBase database should have hundreds of millions or — even better — billions of rows. Anything less, and you're advised to stick with an RDBMS.

Both are distributed databases, not only in how data is stored, but also in how the data can be accessed. Clients can connect to any node in the cluster and access any data.

Both claim near-linear scalability. Need to manage twice the data? Then double the number of nodes in your cluster.

Both safeguard against data loss from cluster node failure via replication. A row written to the database is primarily the responsibility of a single cluster node (the row-to-node mapping being determined by whatever partitioning scheme you've employed). But the data is mirrored to other cluster members called replica nodes (the user-configurable replication factor specifies how many). If the primary node fails, its data can still be fetched from one of the replica nodes.
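In Cassandra, the replication factor is set per keyspace when the keyspace is created; HBase, by contrast, inherits replication from the underlying HDFS layer (the dfs.replication setting). Here's a minimal sketch of the Cassandra side; the demo keyspace name is an assumption, and SimpleStrategy is the single-data-center placement strategy:

    import com.datastax.oss.driver.api.core.CqlSession;

    // Sketch: with a replication factor of 3, every row is stored on
    // three nodes, so a read can be served by a replica even when the
    // node primarily responsible for the row is down.
    public class ReplicationExample {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                session.execute(
                        "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                      + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
            }
        }
    }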

Both are referred to as column-oriented databases, which can be confusing because it sounds like a relational database that's been conceptually rotated 90 degrees. What's even more confusing is that data is still arranged primarily by row, and a table's primary key is the row key. Unlike a relational database, however, no two rows in a column-oriented database need have the same columns. As mentioned above, you can add columns to a row on the fly, after the table has been created. In fact, you can add a great many columns to a row: the exact upper limit is hard to pin down, but you're unlikely to hit it even with tens of thousands of columns.
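To see what on-the-fly columns look like in practice, here is a minimal sketch against the HBase Java client; the metrics table, its cf column family, and the per-date qualifiers are assumptions for illustration. Only the column family must be declared when the table is created; each qualifier springs into existence the moment it is written:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: HBase columns are created implicitly on write. The table's
    // schema fixes only the column family ("cf"); the qualifiers here
    // ("2014-04-03", "2014-04-04") are invented per row at write time.
    public class DynamicColumns {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("metrics"))) {
                Put put = new Put(Bytes.toBytes("row-1"));
                // Each addColumn call defines a brand-new column for this row.
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("2014-04-03"), Bytes.toBytes("17"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("2014-04-04"), Bytes.toBytes("23"));
                table.put(put);
            }
        }
    }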
