"Some of the acceleration and uptake of deep learning wasn't due to those research breakthroughs but to the ready availability of software that lets you do this stuff," says Owen, a former senior engineer at Google.
"For example, about two years now Google released a deep learning package called TensorFlow, and that sort of thing is really what has pushed adoption and usage of deep learning forward in the mainstream by leaps and bounds.
"That's really what's driven the explosion in the last five years. It's the translation of those ideas into free software."
Deep learning often requires special hardware, but this has also become more accessible. More challenging is the knowledge and experience needed to use the various tools and techniques.
Deep learning remains largely uncharted territory, and even experienced machine learning scientists have to learn on the job when they arrive in the field. This has led to a talent war breaking out amongst the biggest tech companies on the planet.
Deep learning limitations
AI has received mixed press coverage of late, and the recent controversy over DeepMind Health's access to NHS patient records has raised privacy concerns. Deep learning raises unique challenges because, as its models grow in complexity, the results become harder to interpret.
"They're very complicated models that have a huge number of numbers, and it's not clear what they mean, so it's hard to understand why a result is connected to a certain input.
"That can become a problem if we need that kind of transparency in order to spot that the logic of the model is not one we wish to accept. I think the problem is these tools may let us all too easily take latent biases hidden in our data and further enshrine them by building predictive models that suggest future action."
A team of researchers at MIT may have found a solution. By analysing the activity of different neurons in a network, they could understand which individual neurons were responsible for making certain decisions. The discovery could provide a method to uncover algorithmic bias and explain specific actions derived from deep learning algorithms.
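The idea of inspecting individual neurons can be sketched in a few lines. The following toy example (not the MIT team's actual method; all weights and inputs are invented for illustration) computes the activation of each hidden unit in a tiny layer for a given input and ranks the units by how strongly they fired, which is the basic ingredient of this style of interpretability:

```python
# Toy sketch: rank hidden units by activation to see which "neurons"
# respond most strongly to a given input. Weights and inputs are
# made up for illustration; this is not a real trained model.

def relu(x):
    return max(0.0, x)

def hidden_activations(inputs, weights, biases):
    """ReLU activation of each hidden unit for one input vector."""
    acts = []
    for w_row, b in zip(weights, biases):
        pre = sum(w * x for w, x in zip(w_row, inputs)) + b
        acts.append(relu(pre))
    return acts

# One hidden layer: three units, two input features.
weights = [[0.9, -0.2],   # unit 0: mostly driven by feature 0
           [0.1,  0.8],   # unit 1: mostly driven by feature 1
           [-0.5, -0.5]]  # unit 2: suppressed by both features
biases = [0.0, 0.0, 0.1]

x = [1.0, 0.2]
acts = hidden_activations(x, weights, biases)
ranked = sorted(range(len(acts)), key=lambda i: acts[i], reverse=True)
print("activations:", acts)
print("most active unit:", ranked[0])
```

For this input, unit 0 dominates, so an analyst could attribute the layer's response chiefly to feature 0. Real interpretability work does this at scale across many inputs and layers, but the principle is the same.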
Although deep learning began as an attempt to statistically model how neurons work, Owen is keen to emphasise that it still doesn't reproduce the thinking and learning of the human brain.
"I do caution people to take from all this that someone we've figured out how to make machines think. It's a powerful cross of techniques but it's more statistical models, there's no actual fundamental breakthrough in understanding the human brain here."
Neither does the growth in deep learning render other machine learning algorithms obsolete. Deep learning needs huge datasets and computing power to function effectively, and in many cases, simpler algorithms such as support vector machines will suffice.
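The point that simple models often suffice can be illustrated with a classic perceptron, a stand-in here for simple linear classifiers such as linear SVMs (the data and numbers below are invented for illustration). On a small, linearly separable dataset it reaches perfect training accuracy with no deep network or large dataset involved:

```python
# Sketch: a simple linear classifier handles a small, linearly
# separable problem perfectly -- no deep learning required.
# Dataset is a made-up two-class toy problem.

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron: learn weights w and bias b for labels +/-1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge the decision boundary
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# Points above the line x1 + x2 = 1 are labelled +1, below it -1.
data = [((0.0, 0.0), -1), ((0.2, 0.3), -1),
        ((1.0, 1.0), 1), ((0.9, 0.8), 1)]
w, b = train_perceptron(data)
correct = sum(
    (1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) == y
    for (x1, x2), y in data
)
print(f"{correct}/{len(data)} training points classified correctly")
```

A deep network would add nothing here beyond extra compute and tuning effort, which is the trade-off the paragraph above describes.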