Conquering Big Data with stream computing

Sim Ahmed | April 11, 2012
There is big data, and then there is mind-bogglingly enormous data; the latter is the scale at which Mahmoud Mahmoud has been focusing his research for the last three years. And he says his work will be a "paradigm shift" in the way businesses use big data in the future.

The AUT University computer scientist has been teaching on and off for the better part of a decade and is currently finishing his doctorate. He originally came to New Zealand in 1994 from Kuwait, where he was raised and educated.

Mahmoud started his career as a graphic designer, but followed a childhood passion for computers to his current position.

"I have always been a computer geek, even as a little child I remember while the other kids were doing their reports on ancient Egypt using colouring pencils and paper, I did mine using a word processor on my computer," recalls Mahmoud.

Since 2009, Mahmoud and his team at AUT's Institute of Radio Astronomy and Space Research (IRASR) have been working on ways to glean useful information from the enormous quantity of data that is produced by mega-science projects like the Square Kilometre Array (SKA).

IRASR was part of the joint bid with Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) to build the SKA radio telescope in the Australia-New Zealand region.

Mahmoud's research led to a paper published in late 2011 on the use of stream-computing to analyse the data as it is produced, instead of storing it to be mined later. He explains that stream-computing is much like putting your finger in the air to gauge which way the wind is blowing: quick and relatively effective.

"With stream-computing, rather than storing the data we store the queries we want to apply to it. We probe the data with questions using the queries, and are given real-time answers as it comes by," says Mahmoud.

"The idea is you don't need to wait until there is downtime to process the information. You can immediately elicit out of the stream the relevant information without needing to store it.

"When you consider that 99 percent of the data collected is likely to be nothing but noise, this saves a lot of time, and money wasted on storage."

Mahmoud says stream-computing could be valuable for businesses looking to leverage the large amounts of data created by various online sources to make better business decisions.

"Businesses are in the age of information overload. You have stock prices, market data, Twitter, Facebook, SMS, blogs -- all the information just coming out of your ears and being wasted," says Mahmoud.

"Each one of those points of information can lead to better forecasting and decision making when harnessed correctly, but it's also important it is collected in a reasonable amount of time to give businesses agility and a competitive edge."

