Virtual assistants: They're made out of people!

Mike Elgan | Feb. 2, 2016
Why Siri, Cortana and Alexa are only semi-fictional characters.

It turns out that people say all kinds of crazy, obscene and inappropriate things to virtual assistants. The people who craft answers need to decide how to deal with these questions. In the case of Cortana, the writers make a point of responding with an answer that's neither encouraging nor funny. (By making it funny, they would encourage users to be abusive just for laughs.)
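To make that policy concrete, here is a minimal sketch of how abusive queries might be routed to deliberately flat, human-written replies. The classifier, category names and response strings below are hypothetical illustrations, not anything from Microsoft's actual Cortana pipeline.

# Hypothetical sketch: route abusive queries to neutral, pre-written responses
# so that abuse is never rewarded with a funny or encouraging reaction.
# The word list, categories and responses are illustrative only.

ABUSIVE_WORDS = {"stupid", "idiot", "hate"}

NEUTRAL_RESPONSES = [
    "Moving on.",
    "That's not something I'm going to respond to.",
]

CURATED_RESPONSES = {
    "greeting": "Hello! How can I help?",
    "unknown": "I'm not sure about that yet.",
}


def classify(query: str) -> str:
    """Toy stand-in for a real intent/abuse classifier."""
    words = set(query.lower().split())
    if words & ABUSIVE_WORDS:
        return "abusive"
    if "hello" in words or "hi" in words:
        return "greeting"
    return "unknown"


def respond(query: str) -> str:
    intent = classify(query)
    if intent == "abusive":
        # Deliberately bland: neither encouraging nor funny.
        return NEUTRAL_RESPONSES[len(query) % len(NEUTRAL_RESPONSES)]
    return CURATED_RESPONSES[intent]


print(respond("You're stupid"))   # flat, non-rewarding reply
print(respond("Hi there"))        # normal curated reply

The design choice the writers describe is visible in the "abusive" branch: the system recognizes the abuse, but the reply it hands back is intentionally unrewarding.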

You might ask yourself: What difference does it make if users are verbally abusive to a virtual assistant? After all, no humans get their feelings hurt. No harm done, right?

Part of the craft of virtual assistant character development is to create a trusting, respectful relationship between human and assistant. In a nutshell, if the assistant takes abuse, you won't respect it. If you don't respect it, you won't like it. And if you don't like it, you won't use it.

In other words, a virtual assistant must exhibit ethics, at some level, or else people will feel uncomfortable using it.

They also have to exhibit a human-like "multidimensional intelligence," according to Intel's director of intelligent digital assistance and voice, Pilar Manchon. The reason is that people form an impression of the virtual assistant within minutes and base their usage on that perception.

When we interact with a virtual agent, we're compelled to behave in a specifically social way because we're social animals. It's just how we're wired. In order for users to feel comfortable with a virtual assistant, the assistant must exhibit what Manchon describes as multidimensional intelligence, which includes social intelligence, emotional intelligence and more. Not doing so would make the agent unlikable in the same way and for the same reason that a real person without these traits is unlikable.

Those are just some of the issues that virtual assistant writing teams have to grapple with. The entire process of providing responses requires a group of humans to consider what the best response would be to any question users might pose.

Yes, virtual assistants involve monster computers with artificial intelligence crunching away on understanding the user and grabbing the right data. But the response comes in the form of words carefully crafted by people.
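A rough sketch of that division of labor follows, assuming a hypothetical intent model and a library of human-written templates (the names are illustrative, not any vendor's actual system): the software figures out what you mean and fetches the data, but every sentence you read or hear was authored by a person.

# Hypothetical sketch of the division of labor: software handles understanding
# and data retrieval, but the surface text comes from human-written templates.
# All names, intents and data here are illustrative.

HUMAN_WRITTEN_TEMPLATES = {
    "weather.today": "It looks like {condition} today, with a high of {high} degrees.",
    "fallback": "I'm not sure about that yet, but I'm learning.",
}


def understand(query: str) -> str:
    """Stand-in for the AI/NLU layer that maps free-form text to an intent."""
    return "weather.today" if "weather" in query.lower() else "fallback"


def fetch_data(intent: str) -> dict:
    """Stand-in for the back-end lookup (search index, knowledge graph, etc.)."""
    if intent == "weather.today":
        return {"condition": "sunny", "high": 24}
    return {}


def respond(query: str) -> str:
    intent = understand(query)
    template = HUMAN_WRITTEN_TEMPLATES[intent]
    # The machinery picks the template and fills in the blanks; the sentence
    # itself was written, reviewed and polished by a human.
    return template.format(**fetch_data(intent))


print(respond("What's the weather like?"))
print(respond("Tell me a secret."))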

It's not just a human response; it's better than a human response. At least under the circumstances.

With real people, you can't overcome the vagaries and perils of someone having a bad day, guessing, lying, expressing subtle bigotry and more.

While virtual assistant responses are far from perfect, they are always headed in one direction -- up. The people who craft these responses deal with complaints, errors and problems every day, and they constantly chip away at the rough edges. Over time, the responses become more helpful, more accurate, more carefully crafted and less offensive or off-putting, while remaining decidedly human.

