While companies such as Splunk have long offered search engines for machine data, Sumo Logic takes that technology a step further, the company claimed.
"The trouble with search is that you need to know what you are searching for. If you don't know everything about your data, you can't by definition, search for it. Machine learning became a fundamental part of how we uncover interesting patterns and anomalies in data," explained Sumo Logic chief marketing officer Sanjay Sarathy, in an interview.
For instance, the company, which processes about 5 petabytes of customer data each day, can recognize similar queries across different users, and suggest possible queries and dashboards that others with similar setups have found useful.
"Crowd-sourcing intelligence around different infrastructure items is something you can only do as a native cloud service," Sarathy said.
With Sumo Logic, an e-commerce company could ensure that each transaction conducted on its site takes no longer than three seconds to complete. If a transaction takes longer, an administrator can pinpoint where in the transaction flow the holdup is occurring.
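To illustrate the idea (this is a minimal sketch, not Sumo Logic's actual query language or API), the check amounts to computing per-stage durations from timestamps parsed out of transaction logs and flagging the stages that blow the latency budget. The stage names and the even split of the three-second budget are assumptions for the example:

```python
from datetime import datetime

def find_slow_stages(stage_times, budget_seconds=3.0):
    """Given {stage_name: start_timestamp}, return the total transaction
    time and the stages that exceed an even share of the budget."""
    # Order stages by when they started, then diff consecutive timestamps.
    stages = sorted(stage_times.items(), key=lambda kv: kv[1])
    durations = {}
    for (name, start), (_, end) in zip(stages, stages[1:]):
        durations[name] = (end - start).total_seconds()
    total = sum(durations.values())
    per_stage_budget = budget_seconds / max(len(durations), 1)
    slow = [name for name, d in durations.items() if d > per_stage_budget]
    return total, slow
```

For example, if the app tier took 2.5 of a 3.2-second transaction, `find_slow_stages` would single it out as the holdup.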
One existing Sumo Logic customer, fashion retailer Tobi, plans to use the new capabilities to better understand how its customers interact with its website.
One-upping IBM on the name game is DataRPM, which crowned its own big data-crunching natural language query engine Sherlock (named after Sherlock Holmes who, after all, employed Watson to execute his menial tasks).
Sherlock is unique in that it can automatically create models of large data sets. Having a model of a data set can help users pull together information more quickly, because the model describes what the data is about, explained DataRPM CEO Sundeep Sanghavi.
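A rough sketch of what such automatic modeling involves (the heuristics and field names here are illustrative assumptions, not DataRPM's actual algorithm): inspect sample values from each column and classify it as a numeric measure that can be aggregated or a categorical dimension that can group results.

```python
def infer_column_model(name, values):
    """Classify a column as a 'measure' (numeric, aggregatable) or a
    'dimension' (categorical, groupable) from a sample of its values."""
    if all(isinstance(v, (int, float)) for v in values):
        kind = "measure"
    else:
        kind = "dimension"
    return {"column": name, "kind": kind, "distinct": len(set(values))}
```

With a model like this in hand, a query engine knows in advance which columns it can sum and which it can group by, which is what speeds up pulling information together.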
DataRPM can analyze a staggeringly wide array of structured, semi-structured and unstructured data sources. "We'll connect to anything and everything," Sanghavi said.
The service can then look for ways that different data sets could be combined to provide more insight.
"We believe that data warehousing is where data goes to die. Big data is not just about size, but also about how many different sources of data you are processing, and how fast you can process that data," Sanghavi said, in an interview.
For instance, Sherlock can pull together different sources of data and respond with a visualization to a query such as "What was our revenue for last year, based on geography?" The system can even suggest other possible queries as well.
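The aggregation behind a query like that can be sketched in a few lines (purely illustrative; the record layout is a hypothetical flat structure, not Sherlock's internal representation): filter the records to the requested year, then sum revenue per geography.

```python
from collections import defaultdict

def revenue_by_geography(records, year):
    """Sum the 'revenue' field per 'geography' for records in a given year."""
    totals = defaultdict(float)
    for r in records:
        if r["year"] == year:
            totals[r["geography"]] += r["revenue"]
    return dict(totals)
```

The result is exactly the kind of grouped table that lends itself to a map or bar-chart visualization.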
Sherlock has a few advantages over Watson, Sanghavi claimed. The training period is not as long, and the software can be run on-premises, rather than as a cloud service from IBM, for those shops that want to keep their computations in-house. "We're far more affordable than Watson," Sanghavi said.