The TV is the new tablet: How gesture-based computing is evolving

Colin Neagle | Nov. 15, 2012
Few people watch television alone today, even when they're by themselves. Most are gravitating toward the multi-screen experience, in which viewers keep a smartphone, tablet or laptop close by so they can access the Web while they watch TV. But as televisions become smarter and gesture-based computing evolves, viewers may be able to mount and control everything they need on the living room wall.

Prentice says "the role of the TV is changing inside the home," and believes that before long it will become simply "the largest screen in the house." Because of that, he can easily see why consumers would be interested in next-generation navigation capabilities.

"Losing the remote control is so common," Prentice says. "It's interesting to see this generation of super-smart televisions and they all come with a remote that was built in the 1990s, which just seems kind of crazy."

Many modern televisions are equipped with media applications that are accessible with these outdated remote control devices. Wala suggests adapting the same mobile apps found on Apple's App Store or Google's Play store for the television and making them accessible through gesture control. The app stores could even make themselves available in this form factor, enabling users to purchase and download new apps directly on the TV, Wala says. Once the television reaches this level of access to content, gesture navigation will feel as intuitive as touchscreen interaction did on the iPhone, he says.

"You want to be able to point and click on what you want to interact with," Wala says. "You want to see things from your perspective and interact with them in a way that you would on your touchscreen-type device, and Kinect can't do that and Wii Motion can't do that."

Kinect and Wii Motion are more suited for capturing broader gestures, such as full-body motion or swiping from one page on the screen to the next, Wala says. Tarsier doesn't necessarily compete with them, but aims to improve pinpoint navigation for more control over the specific content on the TV.

Wala says Tarsier aims to license the technology to OEMs that can integrate it into televisions before shipping them. He may be onto something; Prentice says the market for gesture-based navigation is still entirely up for grabs.

"To be honest I think the market is wide open. Kinect opened things up and gave people a low-cost way to get in there and experiment," Prentice says. "We don't yet have a sort of dominant player in that space."

More broadly, Prentice believes touchless, gesture-based computing holds significant advantages over touchscreen technology in certain use cases. In healthcare, for example, tablets can be an operational godsend, but when multiple doctors and nurses handle them between patient examinations, they can become "germ magnets," Prentice says.

"A touchscreen you don't have to touch, which sounds like a bit of an oxymoron but you understand what I mean, makes a heck of a lot of sense," Prentice says.

Still, the path to mainstream gesture-based computing has its obstacles. Prentice cites the problem of distinguishing gestures meant to control the device from natural motion when users are simply moving around near it. Companies like Tarsier, Leap Motion and Microsoft are tackling those challenges, racing to the forefront of the next user navigation revolution. Once these issues are ironed out, though, gesture-based computing will follow the same path as other breakthrough interfaces, and artifacts like the mouse and keyboard will be relegated to history.

