Android virtual assistant showdown: Google Now vs. Cortana vs. Hound

Derek Walter | Oct. 2, 2015
We asked the top three digital helpers 100 questions to gauge how useful they are with everyday tasks and more complicated search queries.

Image: Google Now vs. Cortana vs. Hound (Credit: Derek Walter)

Google isn’t the only one with a voice-activated digital assistant that can serve at your beck and call.

Two other competitors are vying for your attention, hoping that superior voice search and a broader range of tasks will pull you away from Google.

First there’s Microsoft’s Cortana, which brings its Halo-inspired name and deep ties to Bing and Microsoft products. Then there’s the scrappy Hound, the product of years of behind-the-scenes voice recognition work by SoundHound, a company primarily known for its song identification app.

Can either of them really surpass Google Now? One of Android’s best features is the ability to yell “OK Google” at your phone and get a snappy response. On the surface, going with a search engine other than Google seems to defeat the purpose of using Android in the first place.

But Android is built for openness, so there’s no reason not to put another service to the test. In all fairness, both challengers are in beta (Hound requires an invite) and don’t have the same home-court advantage as Google.

But when it comes to competition, there are no excuses. So read on to find out if you need to squeeze aside Google and make some room on your home screen.

The methodology

I created a list of 100 questions that represent a broad spectrum of the kind of queries an ideal digital assistant would answer. Some of the items are voice commands that Google already performs, and I wanted to see how Hound and Cortana would handle them. 

Others were more aspirational, queries the apps would probably fail, because I wanted to really stretch what these voice search apps were capable of. A few of these were complete flops; others surprised me.

Two sets of questions were contextual: I started with one question, like “Where’s the best place to get sushi around here?” and then followed it up with “How late is it open?” These were meant to measure how well each service remembers your past questions, for example, knowing what a pronoun like “it” refers to.

I asked each question by voice, with the hope of getting the right information and a spoken answer. With that in mind, I rated each answer on the following scale:

  • 3 - Perfection! A spoken answer with details, directly tied to the question.
  • 2 - Good, with the task performed or question answered. Not as detailed or no voice feedback.
  • 1 - The app just performed a web search.
  • 0 - No comprehension of the question or an unrelated result. In short, fail.

As one measure, I tallied up the scores to see how many points each assistant collected overall.
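
To make the tally concrete, here’s a minimal Python sketch of that calculation. The ratings shown are hypothetical stand-ins, not the actual results from the 100-question test:

  # Each answer is scored 0-3 using the scale above.
  # These ratings are made up for illustration; the real test covered 100 questions.
  scores = {
      "Google Now": [3, 2, 1, 0, 3],
      "Cortana": [2, 2, 0, 1, 3],
      "Hound": [3, 3, 1, 0, 2],
  }

  for assistant, ratings in scores.items():
      print(f"{assistant}: {sum(ratings)} points across {len(ratings)} questions")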

 
