
BLOG: Big Data driving middleware adoption

Damien Wong | March 5, 2013
As organisations increase the volume, velocity, and variability of the data they collect, process, and manage, traditional database designs are hard pressed to keep up.

Industry research firm IDC defines the digital universe as a measure of all the digital data created, replicated and consumed in a single year, and estimates that it is on track to grow 50-fold between 2010 and 2020.[i] Organisations today are not only concerned about how to store tremendous amounts of data; they are also struggling to analyse it. They need to harness the increased volume, variety and velocity of data if they are going to succeed.

Businesses run on information, but big data introduces data sets so large and complex that storing them for easy retrieval is cumbersome. For the foreseeable future, organisations will continue to rely on infrastructure specifically designed for big data, so that applications can run reliably and scale seamlessly with the pace at which data is generated and transferred.

According to a recent survey by The Linux Foundation, organisations are continuing to expand their reliance on Linux to support mission-critical applications and the rise of big data.[ii]

In-memory data grids

IT departments struggling to design and implement solutions capable of managing exponential data growth while meeting strict requirements for application scale and performance are turning to in-memory data grids (IMDGs). According to Gartner, 40 percent of large enterprises will deploy one or more IMDGs by 2014.[iii]

As organisations increase the volume, velocity, and variability of the data they collect, process, and manage, traditional database designs are hard pressed to keep up. This presents challenges in scalability, infrastructure design, security, and timely access to data for critical decisions, as well as higher costs driven by the added complexity.

In-memory data grids address these challenges by providing a reliable, transactionally consistent, distributed in-memory data store. They can be used to incrementally extend the performance and scalability of established applications, or to build new high-performance, high-scale applications from scratch. The approach rests on a few ideas: keep data in main memory for fast access, distribute it across nodes to scale, and either work alongside a master data repository or maintain duplicate remote nodes to provide resilience and persistence.
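To make these ideas concrete, the following minimal Java sketch (purely illustrative, not any vendor's API; the class and method names are invented for this example) shows a key-value store whose entries are hashed to a primary node held in memory, replicated to a backup node for resilience, and written through to a master data repository for persistence.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of the ideas behind an in-memory data grid: keys are
// partitioned across nodes by hash, each entry is replicated to a
// backup node for resilience, and writes are pushed through to a
// slower master data repository for persistence. All names here are
// illustrative; this is not any vendor's API.
public class MiniDataGrid {

    // One grid node: an in-memory partition of the key space.
    static class Node {
        final Map<String, String> store = new ConcurrentHashMap<>();
    }

    private final List<Node> nodes = new ArrayList<>();
    private final Map<String, String> masterRepository; // stand-in for a database

    MiniDataGrid(int nodeCount, Map<String, String> masterRepository) {
        for (int i = 0; i < nodeCount; i++) {
            nodes.add(new Node());
        }
        this.masterRepository = masterRepository;
    }

    // Pick the primary owner of a key by hashing it onto a node.
    private int ownerIndex(String key) {
        return Math.floorMod(key.hashCode(), nodes.size());
    }

    // Write to the primary owner, a backup replica, and the master store.
    public void put(String key, String value) {
        int primary = ownerIndex(key);
        int backup = (primary + 1) % nodes.size(); // simple next-node replication
        nodes.get(primary).store.put(key, value);
        nodes.get(backup).store.put(key, value);
        masterRepository.put(key, value);          // write-through for persistence
    }

    // Read from main memory; fall back to the master repository on a miss.
    public String get(String key) {
        String value = nodes.get(ownerIndex(key)).store.get(key);
        return value != null ? value : masterRepository.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> database = new ConcurrentHashMap<>();
        MiniDataGrid grid = new MiniDataGrid(4, database);

        grid.put("customer:42", "Alice");
        System.out.println(grid.get("customer:42")); // served from memory: Alice
    }
}

Production data grids layer much more on top, such as transactions, eviction and dynamic rebalancing when nodes join or leave, but the division of labour between fast in-memory partitions and a durable master store is the same.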

Ultimately, these benefits empower the business to better meet customer service expectations, maintain application performance through peak demand cycles, respond to business opportunities in real time, and make better use of developer resources, freeing application developers to concentrate on building applications rather than contending for database resources.

Damien Wong is general manager, ASEAN, Red Hat.


[i] http://www.emc.com/leadership/digital-universe/iview/executive-summary-a-universe-of.htm

[ii] http://www.linuxfoundation.org/publications/linux-foundation/linux-adoption-trends-end-user-report-2012

[iii] Massimo Pezzini, Yefim V. Natis, Mark Driver, Eric Knipp, "Predicts 2012: Cloud and In-Memory Drive Innovation in Application Platforms," Gartner, Dec. 6, 2011, available from Red Hat, accessed Jan. 29, 2013. https://engage.redhat.com/forms/gartner-java-ee?sc_cid=70160000000UIMcAAO&offer_id=70160000000UKszAAG
