
Google’s machine-learning cloud pipeline explained

Serdar Yegulalp | May 22, 2017
You’ll be dependent on TensorFlow to get the full advantage, but you’ll gain a true end-to-end engine for machine learning.


But are you willing to lock yourself into TensorFlow?

There’s one possible downside to Google’s vision: the performance boost provided by TPUs applies only if you use the right kind of machine-learning framework with them. And that means Google’s own TensorFlow.

It’s not that TensorFlow is a bad framework; in fact, it’s quite good. But it’s only one framework among many, each suited to different needs and use cases. Because TPUs support only TensorFlow, you have to use it, regardless of fit, if you want to squeeze maximum performance out of Google’s ML cloud. Another framework might be more convenient for a particular job, but it won’t train or serve predictions as quickly, because it will be consigned to running only on GPUs.

None of this rules out the possibility that Google could introduce other hardware, such as customer-reprogrammable FPGAs, to give frameworks not directly sponsored by Google an edge as well.

But for most people, the inconvenience of TPU acceleration being limited to one framework will be far outweighed by the convenience of a managed, cloud-based, everything-in-one-place pipeline for machine-learning work. So, like it or not, prepare to use TensorFlow.
