It's been an interesting year or so following the development of Microsoft's HoloLens. First there was the skeletal prototype unveiled at its January 2015 launch, with an umbilical cord to a PC slung around your neck. Then came the first stand-alone units at Build later that year, where I was able to build and deploy applications to a USB-connected device.
Now Microsoft has started shipping the first developer hardware, and it ran the press through an abbreviated form of the training it's giving the first tranche of developers at Build 2016. Building a new application, we were able to explore some of the newly announced collaborative features with user avatars and shared experiences.
The shipping hardware is a big improvement over last year's prototypes. It's lighter, easier to adjust, and much more comfortable, and we're finally allowed to take photographs. There are also improvements in the field of view, with more vertical space for the images. It's still a letterbox view of the virtual world, but it is at least a wider one. Images are also sharper, with text and line art clearly visible.
We started with a new built-in calibration app. It first let us adjust the device so that we could see the display area clearly, then tested standard gestures with each eye in turn, ensuring that overlays lined up well and that the device's cameras had a clear view of our gestures. One note about the developer hardware: there's no longer any need to measure and set your inter-pupillary distance in the device settings; it's all handled automatically.
The development tutorial Microsoft ran at Build 2016 was again based on Unity's 3D modeling tools, using scripts to add functions from what Microsoft calls its HoloToolkit. Much of the toolkit is available through Microsoft's online developer resources and can be used with the HoloLens emulator.
Starting with an existing animated 3D object in Unity, we were able to use scripts to place it anywhere on the stage with a simple gesture, using gaze to position a cursor. Once the scripts were in place, the app was exported from Unity into Visual Studio, where it was compiled and wirelessly deployed to the HoloLens.
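The gaze-cursor technique behind that placement step is, at its core, a ray cast: a ray starts at the wearer's head, follows the gaze direction, and the object lands wherever the ray meets a surface. HoloToolkit's Unity scripts (in C#) handle this for you; the sketch below is an engine-agnostic illustration of the underlying ray-plane math, with a flat floor standing in for the scanned room mesh, and the function name is our own rather than any toolkit API.

```python
# Engine-agnostic sketch of gaze-based placement: intersect the gaze ray
# with a horizontal floor plane at y = floor_y. In a real HoloLens app,
# HoloToolkit ray-casts against the room's spatial-mapping mesh instead.

def gaze_hit_on_floor(head_pos, gaze_dir, floor_y=0.0):
    """Return the point where the gaze ray crosses the floor plane,
    or None if the wearer is looking level or upward."""
    hx, hy, hz = head_pos
    dx, dy, dz = gaze_dir
    if dy >= 0:                    # gaze not descending: no floor hit
        return None
    t = (floor_y - hy) / dy        # ray parameter at the plane
    return (hx + t * dx, floor_y, hz + t * dz)

# Head 1.6 m above the floor, looking forward and 45 degrees down:
print(gaze_hit_on_floor((0.0, 1.6, 0.0), (0.0, -1.0, 1.0)))
# -> (0.0, 0.0, 1.6): the cursor lands 1.6 m in front of the wearer
```

The placed object is then anchored at the returned point, so it stays put in the room as the wearer moves.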
This last step was a big change from 2015, removing the need to tether the device while working with it. Now code can be compiled and deployed while you're wearing the device; all you need is its IP address. There's a version of the Windows Start menu, with Cortana and a few basic applications ready to use out of the box; however, I didn't have time to explore these.