An example of its use is in the financial sector, where banks and other financial institutions are constantly monitoring market data for the latest trends. Mahmoud says stream computing would enable those businesses to make better decisions on the fly, without needing to wait several hours for the information to be compiled and analysed.
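The contrast between batch and stream processing can be illustrated with a minimal sketch (not from Mahmoud's system; the `sliding_mean` generator and the sample tick prices are hypothetical): rather than collecting all the data before computing a result, each incoming value updates the answer immediately.

```python
from collections import deque

def sliding_mean(stream, window=3):
    """Yield the mean of the most recent `window` values as each new value arrives."""
    buf = deque(maxlen=window)  # old values fall out automatically
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Simulated tick prices arriving one at a time (hypothetical data).
ticks = [100.0, 101.0, 103.0, 102.0]
for avg in sliding_mean(ticks):
    print(avg)  # a result is available as soon as each tick arrives
```

Because the generator yields after every tick, a downstream consumer acts on fresh data continuously instead of waiting for a batch job to finish.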
He says projects like SKA will help speed up research and development in stream-computing by bringing in business interest, but only if the infrastructure is interoperable with what is currently used in enterprise.
Mahmoud's research uses IBM's InfoSphere Streams technology as its parallelisation middleware to manage CPU usage and to hold queries. He says other stream-computing infrastructure, like that used by CERN for its Large Hadron Collider research, uses highly customised components which would be difficult to replicate for business use. "At CERN they use a protocol called White Rabbit to query their data. This is a very comprehensive system, but it's not interoperable with other protocols," says Mahmoud. "They manufacture everything right down to layer one; it needs special hardware and routers which couldn't be used by most modern businesses."
He says further research could help realise the "Holy Grail" of cloud computing, which is contextual search and answers.
"At the end of the day people should be able to use language specific to their domain or expertise, whether you are in the medical field or financial field or whatever, to ask questions to this stream, and it will give you a contextually aware answer.
"This is the Holy Grail for cloud computing, and while we are not there yet our research is heading in that direction."