
5 ways Microsoft will enable your PC to see, sense, and understand

Mark Hachman | April 23, 2014
Two years ago, Microsoft's Kinect for Windows opened the PC's eyes. Now Microsoft researchers are teaching it to understand what it sees.

For too long, PCs sat mute, dumbly waiting for users to type on their keyboards or insert a disk. Then they became connected, reaching out to others when users commanded. At Microsoft's Silicon Valley Techfair last week, company researchers showed how they're taking the PC in a new direction, combining machine vision with a new independence, so the PC can recognize and interpret what it sees and present that information in a useful context.

Unlike Google or other Silicon Valley companies, Microsoft has traditionally operated more like a public university than a private company, hosting research showcases like this one once or twice a year. Yes, it holds some of its research close to its vest, especially work that later emerges in products, like its Cortana digital assistant. But many more projects are released to the public, both to show off the company's technical expertise and to demonstrate potential directions for the company.

In all, Microsoft researchers showed off about 18 projects last week. We selected five, with four of those incorporating Kinect in some way. And no, we don't think Microsoft hit a home run with each. After all, future successes are often built upon past failures.

Your webcam: the next Kinect

If you've been attentively reading our coverage of Microsoft's Build, this presentation by Vivek Pradeep shouldn't seem all that new: Microsoft Kinect for Windows executive Michael Mott exclusively revealed to PCWorld that Microsoft is actively working to use conventional webcams as depth cameras, like its Kinect.

In the video, Pradeep and his colleague show off what they call MonoFusion. Conceptually, it's pretty simple to explain: Using an unmodified webcam, the two researchers pan the camera over the scene. Behind the scenes, the Microsoft software interprets what it sees from a depth perspective, creating 3D models of the objects in a Kinect-like fashion. The software then applies a color map, or texture, to the objects, essentially transforming video of a collection of stuffed animals into models of the animals themselves.

What Microsoft created, Pradeep noted, is a simple and powerful SDK for taking that imagery and exporting the 3D models into a game or augmented-reality application. It opens up all sorts of possibilities.
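Microsoft hasn't published the MonoFusion SDK's interface, but the pipeline Pradeep describes (pan a webcam, infer depth, fuse a textured 3D model, export it for a game or AR engine) maps onto a few simple stages. Below is a minimal Python sketch of that dataflow. Every name and data structure here is hypothetical and purely illustrative, not Microsoft's actual API, and the monocular depth step is faked with a constant value so the sketch runs end to end.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list = field(default_factory=list)  # (x, y, z) points
    colors: list = field(default_factory=list)    # per-vertex (r, g, b) texture

def estimate_depth(frame):
    # Stand-in for monocular depth estimation. A real system would infer
    # depth from motion parallax as the camera pans; a constant value
    # keeps this sketch runnable.
    return [[1.0] * len(row) for row in frame]

def fuse(mesh, frame, depth):
    # Back-project each pixel to a 3D point and keep its color as texture,
    # accumulating points into the model in a Kinect Fusion-like fashion.
    for y, row in enumerate(frame):
        for x, rgb in enumerate(row):
            mesh.vertices.append((float(x), float(y), depth[y][x]))
            mesh.colors.append(rgb)

def export_obj(mesh):
    # Serialize the vertices in a Wavefront OBJ-like format that a game
    # engine or augmented-reality application could import.
    return "\n".join("v %.2f %.2f %.2f" % v for v in mesh.vertices)

model = Mesh()
frames = [[[(128, 128, 128)] * 4] * 4]  # one fake 4x4 gray "webcam frame"
for frame in frames:                    # each iteration = one pan step
    fuse(model, frame, estimate_depth(frame))

print(export_obj(model))  # 16 textured vertices, ready for export
```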

Floating displays with gesture recognition

About a year ago, Microsoft researcher Jinha Lee unveiled a spectacular 3D desktop that used a combination of polarized glass and some intelligent software to create the illusion of a desktop with depth. Now Microsoft researcher Tim Large has developed a second approach: a physical "floating display" that provides a somewhat similar illusion.

 
