Microsoft previews HoloLens headset

Microsoft has unveiled a prototype of HoloLens, its most intriguing product in years, as part of a pre-launch of Windows 10.

HoloLens gives a new dimension to the ‘holographic method’ invented by Hungarian-British physicist Dennis Gabor, for which he was awarded the Nobel Prize in Physics in 1971.

It is a head-mounted holographic display with a built-in computer. The headset is bigger and more substantial than Google Glass, but far less boxy than Facebook’s Oculus Rift.

HoloLens. (Image courtesy Microsoft.)

The device, which Microsoft plans to make accessible to developers this spring, seems similar to the goggles made by Google-backed augmented reality startup Magic Leap.

HoloLens has a depth camera with a field of vision that spans 120 by 120 degrees. The camera can sense what the user’s hands are doing even when they are stretched out.

Sensors send terabytes of data every second to an onboard central processing unit, a graphics processing unit and a holographic processing unit.

The device enables the creation of realistic three-dimensional images, or holograms, by tricking the user’s brain into seeing light as matter. To create such images, light particles bounce around millions of times in the device’s light engine. The photons then enter the goggles’ two lenses, where they rebound between layers of colored glass before they reach the back of the user’s eye.

“Holographic computing enabled by Windows 10 is here. HoloLens is real; it will be available during Windows 10’s lifetime,” Alex Kipman, Technical Fellow at Microsoft, said while unveiling the device at the Windows 10 pre-launch event at Microsoft headquarters in Redmond, Washington.

Images are displayed in a rectangular frame that appears to float in front of the user. Users can see reality beyond its borders with their peripheral vision, which helps them retain a sense of the real world around them. This is an advantage over the Oculus Rift, whose design leaves the wearer effectively blind to their surroundings while using the headset. With HoloLens, users can walk around, see what surrounds them and move their heads while focusing in on the central display.

“You can look around the holograms. There is no lag. They look like digital objects and not photo-realistic but are incredible,” Dieter Bohn, one of a few who were walked through a live demo, told The Verge.

Luckily, there are no fancy control switches; the device is controlled entirely by the user’s eyes, hands and voice. The headset monitors the position of the user’s eyes and displays a cursor wherever the user is looking. Voice commands are also integrated, and users can “click” onscreen controls by holding a fist in front of their face, raising a finger vertically and then rolling it back down.
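
In rough code terms, the interaction model amounts to something like the sketch below. The class and function names are illustrative stand-ins, not Microsoft’s actual APIs: the gaze ray positions a cursor, and an “air tap” gesture or a voice command acts as the click.

    # Illustrative sketch of the gaze-plus-gesture model described above.
    # All names here are invented for the example; none are real HoloLens APIs.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Hologram:
        name: str

        def activate(self) -> None:
            # The equivalent of clicking this object.
            print(f"{self.name} activated")

    @dataclass
    class Scene:
        holograms: list

        def raycast(self, gaze_direction) -> Optional[Hologram]:
            # Stand-in for a real ray cast: return the first hologram hit, if any.
            return self.holograms[0] if self.holograms else None

    def handle_input(scene: Scene, gaze_direction, gesture: str, voice: Optional[str]) -> None:
        # Park the cursor on whatever the gaze ray hits, then treat an
        # "air tap" (finger raised, then rolled back down) or the spoken
        # word "select" as a click on that object.
        target = scene.raycast(gaze_direction)
        if target is None:
            return
        if gesture == "air_tap" or voice == "select":
            target.activate()

    # Example: looking at a pinned video player and air-tapping it.
    handle_input(Scene([Hologram("video player")]),
                 gaze_direction=(0.0, 0.0, 1.0), gesture="air_tap", voice=None)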

“When you think about technology today, it is behind the glass screen,” said Microsoft’s Kipman. “HoloLens unlocks the screen.”

In a demo video shown at the event, users were taught how to fix a drain pipe that appeared in front of them using a combination of the headset, augmented reality and Skype.

Microsoft has also teamed up with NASA to let scientists explore what Curiosity sees on Mars. The demo transformed a room into the Martian surface: users walked around the rocky terrain, stepped right up to the Curiosity rover and examined the planet up close.

“This is the next PC,” said a Microsoft spokesman, indicating that Microsoft has significant plans for the device.

While consumers and developers will eventually decide whether HoloLens will become the virtual platform of choice, the device has demonstrated that Microsoft has finally decided to get into the game.

Jojo Puthuparampil

Jojo Puthuparampil is a freelance writer who covers business and technology.

  • Rene

    We all have to be mindful of marketer speech with its odd use of fancy terms. It leads to bizarre sentences like this: “light particles bounce around millions of times in the device’s light engine.”

    From talking to the people who were given access to try the HoloLens, it is primarily an augmented reality and mixed reality device. It clearly can do immersive experiences – the Mars simulation was mostly immersive. However, being able to gently re-enter first reality makes it versatile and easier on the stomach (no simulator sickness).

    The HoloLens does not do peripheral vision yet, and it is not certain if it will in the first production version. Then again, neither does the Rift.

    How it works is this:

    The large clear visor in the HoloLens headset is a three-layer holographic mirror/lens. Printing a holographic lens vastly lowers the cost compared to grinding exotic physical lenses. Each layer is tuned to interact with a specific wavelength: red, green and blue respectively. The printed holograms form a mirror/lens whose virtual shape and focal point are very different from the visor’s physical placement. They are designed to reflect three laser projectors, which paint the virtual images into your eyes. Eye-tracking cameras watch the user’s gaze and focus to adjust the focus of the virtual image.

    Add to that imager ultra-high-precision laser ring gyros, accelerometers, true-vertical sensing, a system that lights up the space in front of the headset to determine precisely where hands and upper limbs are located, gesture-recognition software and spatialized sound arrays, and you have a fine product shipping this year, especially with partners making the software to use it to its full potential.
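
    To make that data flow concrete, here is a toy sketch of one display frame as I picture it. All the names are mine, invented for illustration; nothing here is a Microsoft API. Track the head, render from that pose, set the virtual focus from the eye tracker, then hand one color field to each wavelength-tuned lens layer.

        # Toy sketch of one frame of the display path described above.
        # Every name is invented for illustration; this is not a Microsoft API.

        from dataclasses import dataclass

        @dataclass
        class HeadPose:
            position: tuple      # (x, y, z) from the gyros and accelerometers
            orientation: tuple   # quaternion from the same sensor fusion

        @dataclass
        class EyeState:
            focus_distance_m: float   # where the eye-tracking cameras say the user is focusing

        def split_channels(rgb_frame: dict) -> list:
            # One color field per holographic lens layer (red, green, blue).
            return [("red", rgb_frame["r"]), ("green", rgb_frame["g"]), ("blue", rgb_frame["b"])]

        def project_to_layer(layer: str, field) -> None:
            # Stand-in for driving the laser projector aimed at that lens layer.
            print(f"projecting {layer} field")

        def render_frame(render, pose: HeadPose, eyes: EyeState) -> None:
            # Render from the tracked head pose, adjust for the eye tracker's
            # focus estimate, then send each color field to its tuned layer.
            rgb_frame = render(pose, eyes.focus_distance_m)
            for layer, field in split_channels(rgb_frame):
                project_to_layer(layer, field)

        # Example with a dummy renderer that returns flat color fields.
        dummy_render = lambda pose, focus: {"r": 0, "g": 0, "b": 0}
        render_frame(dummy_render, HeadPose((0, 0, 0), (0, 0, 0, 1)), EyeState(2.0))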

    The APIs to access the HoloLens environment are built into Windows 10. Yes, any app could be part of the virtual space. Existing apps’ windows can be placed onto flat desktops, and those can then be placed in the virtual space. The level of integration appears to permit creating a huge virtual task space that sits literally in front of you or is pinned to real walls, tables, the floor, your refrigerator, a picture frame, whatever. The system knows the real geo-space around you. It understands other geo-spaces and where things should be pinned. Or, if you want floaters that follow you, that can be done too. Or, using the new APIs, an app can create an entire geo-space.
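
    A sketch of what pinning an existing app window to a real surface might look like from the app side; again, the names are invented for illustration, not the actual Windows 10 API.

        # Invented sketch of pinning a flat app window to a real-world surface.
        # None of these classes are the real Windows 10 holographic APIs.

        from dataclasses import dataclass

        @dataclass
        class Surface:
            label: str        # e.g. "kitchen wall", recognized by the headset's room scan
            position: tuple   # where that surface sits in the room's geo-space

        @dataclass
        class AppWindow:
            title: str

        @dataclass
        class HolographicSpace:
            pins: list

            def pin(self, window: AppWindow, surface: Surface) -> None:
                # Anchor a flat window to a real surface; the system keeps it
                # there as you walk around, because it knows the room's geometry.
                self.pins.append((window.title, surface.label))
                print(f"'{window.title}' pinned to {surface.label}")

        # Example: put a weather app on the refrigerator door.
        space = HolographicSpace(pins=[])
        space.pin(AppWindow("Weather"), Surface("refrigerator door", (1.2, 1.5, 0.3)))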

    That is the primary scenario for HoloLens – it is an enabling platform. It is a major departure from the very old desktop metaphor into a fully spatialized task space. It also permits old school apps to coexist in that space.

    Meetings are a natural fit for Windows Holographic (yes, that’s the name for this task space). You create a meeting with a holographic Skype or Xoom app. You gesture where the people’s screens should appear at your desk, and it’s done.

    There seems to be a huge lineup of games running on Windows 10, many from Xbox, that access the virtual task space. There might be user activities that have social frameworks attached to them.

    As for generic virtual worlds, I believe that to fully participate in the virtual space created by HoloLens, an SL-style viewer would have to use the new APIs to place its rendered objects in that space. It is at that level of abstraction that it would work best.
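
    Roughly, I picture the viewer doing something like this (the register call is hypothetical, just to show the level of abstraction I mean): hand each rendered object to the holographic space instead of drawing it into a flat window.

        # Hypothetical sketch of an SL-style viewer handing its objects to the
        # holographic task space instead of drawing them in its own flat window.
        # register_hologram is invented; it is not a real Windows 10 call.

        from dataclasses import dataclass

        @dataclass
        class SceneObject:
            name: str
            mesh: str                # whatever the viewer has already loaded and rendered
            region_position: tuple   # position in the virtual world's own coordinates

        def register_hologram(obj: SceneObject, room_position: tuple) -> None:
            # Stand-in for the new API call the viewer would make.
            print(f"{obj.name} placed at {room_position} in the user's room")

        def place_world_in_room(objects: list, world_to_room) -> None:
            # Map each virtual-world object into the user's real room and hand it
            # to the holographic space, so it coexists with other apps' holograms.
            for obj in objects:
                register_hologram(obj, world_to_room(obj.region_position))

        # Example: drop one object from a region onto the user's coffee table.
        place_world_in_room(
            [SceneObject("wooden chair", "chair.mesh", (128.0, 128.0, 22.0))],
            world_to_room=lambda p: (0.5, 0.0, 0.4),
        )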

    And so, the question is who, if anyone, is doing that work?

  • There is an interesting discussion on the Inworldz forum here: http://inworldz.com/forums/viewtopic.php?f=4&t=19761&start=30
    It would be interesting to know what the hypergrid people think too.
    Rene summarizes a bit how this thing works, and why it is different from the Oculus.
    The discussion there covers the technical aspect, but also the social and use/adoption aspects, compared to the Facebook Oculus Rift.
    My stance is that this headset looks better than the Oculus, especially without the costly lenses that limit the field of vision and the heavy screen in an offset position. I also trust Microsoft more than Facebook on the social aspect, except that they have no experience with user-created content, only with ready-made video games. I hope they update soon 🙂

  • Rene

    I’d rather keep the discussion here as it is not related specifically to InWorldz.
    For any SL-style virtual world, aka OpenSim, the biggest problems I see with any of this tech are low frame rates in content-rich scenes and slow texture loading. Both of these artifacts can create a nausea-inducing problem, though less so with augmented or mixed reality systems.
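
    The usual mitigation is to keep a fixed frame budget and stream textures asynchronously, drawing a low-resolution placeholder until the full texture arrives. A rough sketch of that idea follows; the names are invented and this is not code from any particular viewer.

        # Rough sketch: never block the frame on a texture fetch; draw a
        # low-resolution placeholder and swap in the full texture when it arrives.
        # Names are invented; this is not code from any particular viewer.

        import time

        FRAME_BUDGET_S = 1.0 / 60.0   # roughly 16.7 ms per frame to keep motion comfortable

        class TextureCache:
            def __init__(self):
                self.loaded = {}      # texture id -> full-resolution texture
                self.pending = set()  # ids still streaming from the asset server

            def get(self, tex_id: str):
                # Return the full texture if it has arrived, else a placeholder,
                # and queue the download so a later frame can swap it in.
                if tex_id in self.loaded:
                    return self.loaded[tex_id]
                self.pending.add(tex_id)
                return "low_res_placeholder"

        def draw_frame(visible_textures, cache: TextureCache) -> float:
            start = time.monotonic()
            for tex_id in visible_textures:
                _ = cache.get(tex_id)  # never stalls the frame on a network fetch
            elapsed = time.monotonic() - start
            if elapsed > FRAME_BUDGET_S:
                print("over budget: reduce scene detail next frame")
            return elapsed

        # Example frame with three textures that have not streamed in yet.
        draw_frame(["brick", "grass", "water"], TextureCache())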