
Why Google A.I. is the last user interface

Mike Elgan | Oct. 10, 2016
There's no question that A.I. is the next UI. The question is: Whose A.I.?

That omnipresent A.I. will do it all.

Unlike Google Now (or, for that matter, Siri, Alexa and Cortana), Assistant should be able to "figure things out," even with a cryptic or vague request. The ability to do this should improve over time as users rank responses.

Assistant should also make minor decisions. So if you're in your car and say: "Play 'Gold' from Kiiara," that song will play through your car's sound system. But if you say the same thing at home, it will play through either your Home device or your Chromecast, depending on which you tend to prefer.

Another Assistant skill is that it pays attention to (or, depending on your views on privacy, "spies on") your conversations and interjects helpful information and links, such as restaurant recommendations. If you're doing this in Google's Allo, both parties see the recommendations.

Assistant gets even more contextual with a Pixel phone. A long press on the home button while you're looking at a photo, for example, returns personalized search results based on the content of the photo. (It's basically the Google Now On Tap feature, extended to Assistant and the new phones.)

We'll always have other user interfaces -- virtual, mixed and augmented reality, for example. But these are for experience. For information, the interface will be conversation. You'll be able to stop worrying about devices, apps, platforms and all the rest. You just talk, and Assistant makes it happen. That's the vision, anyway.

When A.I. chooses bots for you

Google is planning to open up Assistant to developers via a platform called Actions on Google.

Developers won't build "apps" or even "bots," according to Google's lingo, but "Actions." These "Actions" can be either Direct Actions or Conversation Actions.

Direct Actions are simple query-response events. So an airline might build Direct Actions so that a question about when a flight lands returns the estimated landing time.
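To make the idea concrete, here is a minimal sketch of what a Direct Action amounts to: a single request mapped to a single answer. The handler name, request shape, and flight data are all hypothetical illustrations, not Google's actual API.

```python
# Hypothetical sketch of a Direct Action: one query in, one answer out.
# The function name and request format are assumptions for illustration.

def handle_flight_status(request: dict) -> str:
    """Answer a one-shot question like 'When does flight UA123 land?'"""
    flight = request.get("flight_number", "unknown")
    # A real integration would query the airline's systems; this is mock data.
    eta = {"UA123": "6:45 PM"}.get(flight, "unknown")
    return f"Flight {flight} is estimated to land at {eta}."

print(handle_flight_status({"flight_number": "UA123"}))
```

The key property is statelessness: the action needs nothing beyond the single request to produce its answer.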

Conversation Actions are harder to build but easier to use. They involve back-and-forth. So an airline's Conversation Action might enable you to ask: "Which weekend in October has the cheapest price for flying to Vegas?" In response, the Action might initially gather more information, asking "Do you mind a stop-over?" for example. With Conversation Actions, the interaction involves the "bot" not only giving answers and doing things for you (such as booking those tickets), but also asking you questions to get more information.
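The difference from a Direct Action is state: the bot carries information across turns and may reply with a question instead of an answer. A minimal sketch of that flow, using the article's flight example, might look like the following. Everything here (the function, the state dictionary, the fares) is an illustrative assumption, not the Actions on Google API.

```python
# Hypothetical sketch of a Conversation Action: a multi-turn dialogue
# where the bot asks a clarifying question before answering.
# All names and data structures are assumptions for illustration.

def cheapest_weekend_turn(state: dict, utterance: str) -> tuple[dict, str]:
    """Process one dialogue turn; the bot may answer or ask a follow-up."""
    if "stopover_ok" not in state:
        state["pending_query"] = utterance
        return state, "Do you mind a stop-over?"
    # Missing detail gathered, so the bot can answer (fares are mock data).
    fares = ({"Oct 7-9": 180, "Oct 14-16": 150} if state["stopover_ok"]
             else {"Oct 7-9": 220})
    best = min(fares, key=fares.get)
    return state, f"The cheapest weekend is {best}, at ${fares[best]}."

state, reply = cheapest_weekend_turn({}, "Cheapest weekend in October to Vegas?")
print(reply)  # the bot asks a clarifying question first
state["stopover_ok"] = True
state, reply = cheapest_weekend_turn(state, "A stop-over is fine.")
print(reply)
```

The state dictionary is what makes this a conversation rather than a lookup: the bot remembers the original question while it collects the details it still needs.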

Google last month bought a two-year-old Silicon Valley startup whose technology will be offered to developers for building Assistant Conversation Actions. (Awkwardly, the startup has its own app called "Assistant," which reportedly has some 20 million users.)

A smattering of companies have already jumped on board to offer third-party additions to Assistant. These include news sources like CNN, CNBC, The Huffington Post, ABC News Radio, CBS Sports, CBS Radio News and others; music services such as TuneIn, Pandora, iHeartRadio and Spotify; food-related brands like Food Network, Vivino and OpenTable; and home automation companies like SmartThings. Intriguingly, crowdsourced information services Quora and Jelly have also signed up.
