And Silverstone, while he also agrees that "nothing can prevent anything from being hacked," said he thinks Gartner's Hype Cycle conclusion is dead wrong. Machine learning, rather than being overhyped, "is severely, significantly under-hyped," he said.
“When you have enough data and you understand why the data show certain trends, you can improve prediction to much better than 90% – perhaps to more than 99%.”
That, he said, means it is possible not just to ask the machine, “Will I be attacked next week?” but, “Will I be attacked next Tuesday from China at 3 p.m.?” and even “What and when is the likely next hack, from where and by whom?”
“That is possible today with very high accuracy,” he said, “and I believe that much more complicated algorithms are not only possible but are actually being used. We can do stuff today that two years ago I wouldn’t have believed.”
Crosby agrees that machine learning is a “powerful tool,” acknowledging its effectiveness in cases like Google search and the recommendation engines of companies like Amazon and Netflix.
But he also noted that Google's attempt to identify flu epidemics "turned out to be woefully inaccurate."
He argued that while machine learning is very good at finding similarities between things, “it’s not so good at finding anomalies. In fact, any discussion of anomalous behavior presumes that it is possible to describe normal behavior,” which he said is very difficult.
“This gives malicious actors plenty of opportunity to ‘hide in plain sight’ and even an opportunity to train the system that malicious activity is normal,” he said.
But Elgen said that while those difficulties exist, it doesn't mean machine learning has no added value. Every system is "game-able," he said, "but the question we should be asking in addition is what is the extent to which this problem is worse without [machine learning]?"
Jou is even more emphatic. “Machine learning has proven that it can define what is normal and then define anomalies,” he said.
He agreed that it will not replace humans, but said it takes what humans do – recognize patterns, and then anomalies within those patterns – and automates it. "Machine learning is really just taking data sets, finding the patterns and defining what is normal versus what is 'weird'," he said.
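In its simplest form, the approach Jou describes – learn what "normal" looks like from data, then flag what deviates from it – can be sketched in a few lines. The following is an illustrative toy example, not anything from the article: the metric, data, and three-standard-deviation threshold are all invented for demonstration.

```python
# Toy anomaly detector: summarize "normal" as the mean and spread of a
# metric, then flag values that deviate too far from that baseline.
from statistics import mean, stdev

def fit_normal(history):
    """Learn a baseline of normal behavior from historical observations."""
    return mean(history), stdev(history)

def is_anomaly(value, mu, sigma, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mu) > threshold * sigma

# Hypothetical data: logins per hour over a typical period
history = [42, 40, 45, 38, 41, 44, 39, 43]
mu, sigma = fit_normal(history)

print(is_anomaly(41, mu, sigma))   # -> False: an ordinary-looking hour
print(is_anomaly(500, mu, sigma))  # -> True: a suspicious spike
```

Real systems use far richer models than a single mean and standard deviation, but the shape of the problem is the same: any definition of "anomalous" presumes a definition of "normal," which is exactly the difficulty Crosby raises.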
He said the technique that attackers use to try to fool a system into thinking that their activity is normal, called "model poisoning," can be countered by "using multiple models per data source.
“That means an adversary using this approach would have to have full knowledge of all the models being used to detect dangerous behaviors, and would need to simultaneously poison all models and data sources,” he said.