Easier, faster: The next steps for deep learning

Serdar Yegulalp | June 8, 2017
Rapidly advancing software frameworks, dedicated silicon, Spark integrations, and higher-level APIs aim to put deep learning within reach.

Google has made some noises about optimizing TensorFlow to work well on mobile devices. A startup named Brodmann17 is also looking at ways to deliver deep learning applications on smartphone-grade hardware using “5% of the resources (compute, memory, and training data)” of other solutions.

The company’s approach, according to CEO and co-founder Adi Pinhas, is to take existing, standard neural network modules and use them to build a much smaller model. Pinhas said the smaller models need “less than 10% of the data for the training, compared to other popular deep learning architectures,” yet take about the same amount of time to train. The end result is a slight trade-off of accuracy for speed: faster prediction times, along with lower power consumption and a smaller memory footprint.
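Brodmann17 has not published its architecture, but the general idea of assembling a compact model from standard, off-the-shelf building blocks can be sketched in a few lines of Keras. Everything below (the layer choices, input size, and class count) is an illustrative assumption, not the company's design:

```python
# Purely illustrative: a compact image classifier built from standard Keras
# layers. This is not Brodmann17's architecture; the shapes and sizes are
# arbitrary assumptions.
from keras.models import Sequential
from keras.layers import (SeparableConv2D, MaxPooling2D,
                          GlobalAveragePooling2D, Dense)

model = Sequential([
    # Depthwise-separable convolutions use far fewer parameters than
    # standard convolutions of the same width.
    SeparableConv2D(32, (3, 3), activation='relu', input_shape=(96, 96, 3)),
    MaxPooling2D((2, 2)),
    SeparableConv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    SeparableConv2D(128, (3, 3), activation='relu'),
    GlobalAveragePooling2D(),         # avoids a large fully connected layer
    Dense(10, activation='softmax'),  # assume 10 output classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # on the order of tens of thousands of parameters
```

Separable convolutions and global average pooling are common ways to shrink a network’s parameter count, which is why they appear in this sketch; whether Brodmann17 uses anything similar has not been disclosed.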

Don’t expect to see any of this delivered as an open source offering, at least not at first. Brodmann17’s business model is to provide an API for cloud solutions and an SDK for local computing. Still, Pinhas said, “We hope to widen our offering in the future,” so the commercial-only offering may well be just the initial step.

 

Sparking a new fire

Earlier this year, InfoWorld contributor James Kobielus predicted the rise of native support for Spark among deep learning frameworks. Yahoo has already brought TensorFlow to Spark, as described above, but Spark’s main commercial provider, Databricks, is now offering its own open source package to integrate deep learning frameworks with Spark.

Deep Learning Pipelines, as the project is called, approaches the integration of deep learning and Spark from the perspective of Spark’s own ML Pipelines. Spark workflows can call into libraries like TensorFlow and Keras (and, presumably, CNTK as well now). Models for those frameworks can be trained at scale the same way Spark scales its other workloads, and by way of Spark’s own metaphors for handling both data and deep learning models.
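One pattern this enables is transfer learning expressed as an ordinary Spark ML pipeline: a pretrained network serves as a featurizer, and a standard Spark classifier is trained on top of its output. Here is a minimal sketch, assuming the sparkdl package’s DeepImageFeaturizer and Spark DataFrames (train_df and test_df) that already hold an image column and a numeric label column:

```python
# Sketch of Deep Learning Pipelines (sparkdl) inside a standard Spark ML
# pipeline. DataFrame names, column names, and parameter values are
# illustrative assumptions.
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from sparkdl import DeepImageFeaturizer

# A pretrained network (InceptionV3) acts as a feature extractor; a plain
# Spark ML classifier is then trained on top of those features.
featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features",
                                 modelName="InceptionV3")
classifier = LogisticRegression(maxIter=20, regParam=0.05,
                                featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[featurizer, classifier])
model = pipeline.fit(train_df)          # fits like any other Spark ML pipeline
predictions = model.transform(test_df)  # scoring follows the same DataFrame metaphor
```

The appeal is that the deep learning step slots in as just another pipeline stage, so the rest of the workflow stays ordinary Spark ML.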

Many data wranglers already know and work with Spark. To put deep learning in their hands, Databricks is letting them start where they already are, rather than having to figure out TensorFlow separately.

 

Deep learning for all?

A common thread through many of these announcements and initiatives is how they are meant to, as Databricks put it in its own press release, “democratize artificial intelligence and data science.” Microsoft’s own line about CNTK 2.0 is that it is “part of Microsoft’s broader initiative to make AI technology accessible to everyone, everywhere.”

The inherent complexity of deep learning isn’t the only hurdle to be overcome. The entire workflow for deep learning remains an ad-hoc creation. There is a vacuum to be filled, and the commercial outfits behind all of the platforms, frameworks, and clouds are vying to fill it with something that resembles an end-to-end solution. 

 
