Patrick O'Connor, another system designer for Microsoft, explained that the chip can resolve players less than 1 meter away as well as players more than 2 meters away, with a 70-degree field of view. Microsoft also has to take into account a whole range of environments, from players who like to play games in the dark to those playing in broad daylight.
The Kinect on the Xbox One actually "illuminates" the room with its own modulated light. What the camera's 512 x 424 sensor "sees" is the reflected light, which will be out of phase depending on the way it reflects from objects. The Xbox One then adds or subtracts those values to weed out the ambient light and determine the player's location, the player's motion, and the "active" image. Each Xbox One can distinguish up to six different players, O'Connor said, but exactly "who" is playing at any one time takes some figuring.
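The add-and-subtract trick described above is the standard four-phase demodulation used by time-of-flight cameras: the sensor samples the reflected modulated light at four phase offsets, the differences cancel the ambient term, and the sum isolates it. The sketch below illustrates the idea for a single pixel; the modulation frequency and signal levels are hypothetical illustration values, not Microsoft's actual figures.

```python
import math

C = 3.0e8        # speed of light, m/s
F_MOD = 16.0e6   # hypothetical modulation frequency for illustration

def simulate_samples(distance_m, amplitude=100.0, ambient=40.0):
    """Four correlation samples a ToF pixel records at 0/90/180/270 degrees."""
    phi = 4 * math.pi * F_MOD * distance_m / C  # round-trip phase shift
    return [ambient + amplitude * math.cos(phi - k * math.pi / 2)
            for k in range(4)]

def demodulate(a0, a1, a2, a3):
    """Recover phase (hence distance) and ambient light from the samples."""
    phase = math.atan2(a1 - a3, a0 - a2)  # differences cancel ambient light
    ambient = (a0 + a1 + a2 + a3) / 4     # the sum isolates the ambient term
    distance = C * phase / (4 * math.pi * F_MOD)
    return distance, ambient

samples = simulate_samples(2.0)
distance, ambient = demodulate(*samples)
print(round(distance, 3), round(ambient, 1))  # -> 2.0 40.0
```

With a single modulation frequency the distance is only unambiguous up to C / (2 * F_MOD); real sensors typically combine several frequencies to extend that range.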
What the Xbox One can do, however, is determine, on a per-pixel basis, how best to "see" the object. Since the camera can't actually lengthen its exposure or widen its aperture, it tries different shutter times and picks the best one. Combining the Kinect's own illumination with intelligent "reading" of the image eliminates common problems, such as sidelighting that would otherwise confuse the camera, O'Connor said.
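One simple way to realize the per-pixel shutter selection described above is to keep, for each pixel, the longest exposure that did not saturate, then normalize by shutter time so readings from different exposures are comparable. This is a minimal sketch of that idea; the saturation level and function names are assumptions, not details from the article.

```python
SATURATION = 4095  # assumed 12-bit sensor ceiling, for illustration only

def best_exposure(readings, shutter_times):
    """For one pixel, keep the longest non-saturated exposure,
    normalized so values from different shutter times are comparable.
    readings[i] is the raw value captured with shutter_times[i]."""
    best = None
    for value, t in sorted(zip(readings, shutter_times), key=lambda p: p[1]):
        if value < SATURATION:
            best = value / t  # normalize to a per-unit-time intensity
    if best is None:
        # every exposure clipped: fall back to the shortest shutter
        best = SATURATION / min(shutter_times)
    return best

# Three captures of one pixel; the 16-unit shutter clipped, so the
# 8-unit shutter wins and normalizes to the same intensity as the 1-unit one.
print(best_exposure([300, 2400, 4095], [1, 8, 16]))  # -> 300.0
```

A real pipeline would run this per pixel across whole frames and weigh noise as well as saturation, but the selection logic is the same.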