
Peering into the sci-fi future of PC displays

Brad Chacos | April 1, 2013
Moore's law may keep us supplied with octa-core smartphone processors and PCs packed with billions of transistors, but not every area of technology keeps the pedal to the proverbial metal as enthusiastically as chip makers do. Desktop displays in particular--the portals through which we glimpse the output of those hulking CPUs--are stuck in neutral while the technology in the rest of your PC tears ahead at breakneck speed.

Microsoft Research is already working hard to tackle these issues before they become widespread problems. Its Towards Large-Display Experiences project has come up with one possible solution: using a stylus for fine work and touchscreen gestures for basic controls, paired with user recognition tied to now-ubiquitous smartphones. It's an interesting response to a potentially huge (pun intended) problem.
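Microsoft hasn't published the prototype's code, but the routing idea is simple enough to sketch. Here's a minimal, hypothetical Python dispatcher--every name in it is invented for illustration, not taken from the project--that sends stylus events to fine-grained editing, touch to coarse controls, and a recognized phone to per-user personalization:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event model -- the Microsoft Research prototype is not a
# public API, so every name here is invented purely for illustration.
@dataclass
class InputEvent:
    source: str                     # "stylus", "touch", or "phone"
    x: float = 0.0                  # normalized screen coordinates
    y: float = 0.0
    user_id: Optional[str] = None   # set when a paired phone is recognized

def route(event: InputEvent) -> str:
    """Send each input type to the interaction layer suited to its precision."""
    if event.source == "stylus":
        return "fine-grained work: inking, precise selection"
    if event.source == "touch":
        return "basic controls: pan, zoom, move windows"
    if event.source == "phone":
        # A nearby smartphone identifies who is standing at the wall,
        # letting the display pull up that user's documents and settings.
        return f"personalize session for {event.user_id}"
    return "ignore"

print(route(InputEvent("stylus", 0.4, 0.7)))
print(route(InputEvent("phone", user_id="alice")))
```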

Pushing all those pixels is another problem altogether, but fear not: Microsoft Research is hard at work on that, too. A project called Foveated Rendering aims to reduce processing requirements by taking advantage of the fact that the human eye doesn't see as much detail at its periphery. Basically, the display tracks your peepers and makes sure the area you're staring at is rendered in full glory, while dropping the resolution at the farther reaches of the screen.
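The core mechanism is easy to sketch in code. The fragment below illustrates the general idea rather than Microsoft's implementation; the layer radii and scale factors are made-up values chosen only to show the shape of the technique:

```python
import math

# Illustrative only: the layer radii and scale factors below are invented,
# not the values used in Microsoft Research's foveated-rendering work.
LAYERS = [
    (0.10, 1.00),   # within 10% of the gaze point: full resolution
    (0.30, 0.50),   # middle ring: half resolution per axis
    (1.50, 0.25),   # periphery: quarter resolution per axis
]

def resolution_scale(px, py, gaze_x, gaze_y):
    """Pick a sampling scale for a pixel, given the tracked gaze point.
    All coordinates are normalized to the 0..1 screen square."""
    eccentricity = math.hypot(px - gaze_x, py - gaze_y)
    for radius, scale in LAYERS:
        if eccentricity <= radius:
            return scale
    return LAYERS[-1][1]    # anything farther out gets the coarsest layer

# A pixel far from where you're looking is shaded at quarter resolution:
print(resolution_scale(0.9, 0.9, 0.2, 0.2))   # -> 0.25
```

The arithmetic explains the savings: a half-resolution layer shades only a quarter as many pixels, and a quarter-resolution layer one-sixteenth as many, so when most of the screen falls in the outer layers, the total shading work collapses to a small fraction of the full-resolution cost.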

During tests, users couldn't tell the difference between a full-fledged image and one using foveated rendering--yet the less-detailed image required only one-sixth as much computing power to create.

Thinking even bigger

Wall-size displays? Pfffft. That's small potatoes for the team behind LightSpace, yet another Microsoft Research project that wants to kill monitors as we know them, and turn every object in your office into a potential PC display.

LightSpace relies on a series of cameras and projectors to "create a highly interactive space where any surface, and even the space between surfaces, is fully interactive." In a nutshell, the cameras track your movement around the room and observe your interactions with the images projected by the setup, which can be cast on any surface--walls, chairs, desks, you name it. At a basic level, the camera tracking allows you to manipulate the images using familiar multi-touch gestures, but it also supports more-exotic commands such as dragging the image from one object to another, or "picking up" the image and handing it to another person.
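As a rough illustration of the tracking side--a toy sketch, not LightSpace's actual code, with invented room geometry--the system boils down to asking, every frame, which registered surface the tracked hand is touching:

```python
# Toy sketch: depth cameras yield a 3D position for the user's hand each
# frame, and the system checks which registered surface that hand is on,
# so a projected image can follow the hand from desk to wall.

SURFACES = {
    # name: (min corner, max corner) of a bounding box in room meters
    "desk": ((0.0, 0.0, 0.7), (1.5, 0.8, 0.8)),
    "wall": ((0.0, 2.9, 0.0), (3.0, 3.0, 2.5)),
}

def surface_under(hand, tolerance=0.05):
    """Return the surface within `tolerance` meters of the hand, if any."""
    for name, (lo, hi) in SURFACES.items():
        if all(l - tolerance <= c <= h + tolerance
               for c, l, h in zip(hand, lo, hi)):
            return name
    return None   # mid-air: the user is "carrying" the image

# Dragging an image off the desk, through the air, and onto the wall:
for hand in [(0.5, 0.4, 0.75), (0.8, 1.5, 1.2), (1.0, 2.95, 1.5)]:
    print(surface_under(hand) or "in transit")
```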

It's nifty stuff--the project's demo video is well worth watching. It's also complicated stuff: LightSpace must be calibrated to the room it's used in, which for now keeps it out of everyday use. But as cheap sensors like the Kinect grow more powerful, and perhaps become capable of mapping a room in 3D on the fly, a system like LightSpace could take off.
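The calibration itself is well-understood math. For any flat surface, the mapping between a projector's image plane and that surface is a homography, which can be estimated from a few marker correspondences; the hard part is doing it for every surface in an arbitrary room. Here's a minimal sketch of the estimation step using the standard direct linear transform (the marker coordinates are invented):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (standard direct linear transform; needs at least 4 correspondences)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]              # normalize so H[2, 2] == 1

# Invented example: where four projected corner markers landed on a desk,
# as seen by the calibration camera.
proj = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=float)
desk = np.array([[12, 8], [1890, 40], [1850, 1060], [30, 1030]], dtype=float)
H = find_homography(proj, desk)

# Map the projector's center pixel into desk coordinates:
p = H @ np.array([960.0, 540.0, 1.0])
print(p[:2] / p[2])
```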

Talking heads in 3D

While others are busy trying to revolutionize displays entirely, USC's Institute for Creative Technologies is trying to make those tedious videoconferences just a tad less so with its 3D Video Teleconferencing System.

If you've ever had the (dis)pleasure of sitting through a videoconference, you're well aware that those calling in spend most of their meeting time staring into their laptop screens rather than the webcam itself, making even the most extroverted alpha males seem like disconnected introverts.

 
