Remember the HoloLens? It was Microsoft's augmented reality (AR) headset. Well, it looks like the company is taking a second swing at it. Microsoft revealed the latest version of its reality-altering headset on February 26 at the 2019 Mobile World Congress in Barcelona, Spain. The headset has been overhauled to reflect Microsoft's new strategy for tackling AR.
A brief history of the HoloLens
The original HoloLens was unveiled in 2015 and was pitched, somewhat overpromisingly, as being able to virtually transform users' living rooms into a football stadium. This wasn't the first time Microsoft had overstated the level of immersion of its tech; we all remember the life-changing promises made for the Xbox's Kinect. It may have been this overpromise that led to the headset's eventual failure in the consumer market, and the company later announced that it would instead focus on enterprise use cases. It's a story not unlike Google Glass, although Microsoft at least made it clear that the HoloLens was a development kit and not a market-ready consumer model.
The HoloLens 2
HoloLens 2 is a different animal entirely. It's no longer about the consumer market or transforming your living room into a virtually augmented palace of endless opportunity. It's now about business. About productivity. Microsoft is fixing its sights on your office space, aiming to replace rows of screens and monitors with AR headsets that users can interact with simultaneously. The nightmare this could cause for responsive web design threatens to keep me up at night. How do I establish breakpoints? Do we need to map against a Z-axis and scale a canvas? Will the AR "pixels" scale, or just introduce more theoretical pixels into the hologram? Rendering with frames in web and software-based applications, as the guidance quoted below specifies, will also introduce a lot of new challenges, especially in terms of computing power for office workers.
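One way to think about whether AR "pixels" scale is in angular terms: a headset's sharpness is often described as pixels per degree of field of view, and a virtual panel's effective size in device pixels follows from its angular size. The sketch below illustrates that framing; the function names and the figures used in the comments are my own illustrative assumptions, not anything from a spec.

```python
import math

def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Angular pixel density: how many display pixels span one degree of
    the user's horizontal field of view. A rough proxy for sharpness."""
    return h_pixels / h_fov_deg

def apparent_device_px(panel_width_m: float, distance_m: float,
                       ppd: float) -> float:
    """Approximate how many headset pixels a virtual panel of a given
    physical width occupies at a given viewing distance, via its
    angular size (2 * atan(width / (2 * distance)))."""
    angular_width_deg = math.degrees(
        2 * math.atan(panel_width_m / (2 * distance_m)))
    return angular_width_deg * ppd
```

For example, a headset reported at roughly 47 pixels per degree (a figure often cited for HoloLens 2, though not confirmed here) would render a 0.5 m wide virtual panel at 2 m as only around 670 device pixels across — which is why "breakpoints" in AR may end up being angular rather than absolute.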
"For any given frame, your app must render using the view transform, projection transform, and viewport resolution provided by the system. Additionally, your application must never assume that any rendering or view parameter remains fixed from frame-to-frame. Engines like Unity handle all these transforms for you in their own camera objects so that the physical movement of your users and the state of the system is always respected. If your application allows for virtual movement of the user through the world (e.g. using the thumbstick on a gamepad), you can add a parent rig object above the camera that moves it around. This causes the camera to reflect the user's virtual and physical motion. If your application modifies the view transform, projection transform, or viewport dimension provided by the system, it must inform the system by calling the appropriate override API." — Microsoft's Mixed Reality Rendering Guide
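The "parent rig" pattern in the quoted guidance can be sketched in a few lines: virtual locomotion moves the rig, while the system-tracked head pose is read fresh every frame and composed on top, never overwritten. This is a minimal, translation-only sketch with hypothetical names; a real engine would compose full 4x4 transforms, rotation included.

```python
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

@dataclass
class CameraRig:
    """Parent 'rig' transform sitting above the tracked camera.
    Translation-only for brevity."""
    offset: Vec3 = (0.0, 0.0, 0.0)

    def move(self, delta: Vec3) -> None:
        # Virtual movement (e.g. a gamepad thumbstick) moves the rig,
        # not the camera itself.
        self.offset = add(self.offset, delta)

    def camera_world_position(self, tracked_head_pos: Vec3) -> Vec3:
        # Per frame: compose the rig offset with the head pose the
        # system just provided -- never a cached value.
        return add(self.offset, tracked_head_pos)
```

Because the final camera position is recomputed from the system-provided pose each frame, the user's physical motion is always respected, exactly as the guide demands.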
Keep on scrollin'