The New XR SDK Pipeline with HoloLens 2: Part 2

In the first part of this series, I provided a detailed walkthrough of setting up a project using the new Unity XR SDK pipeline for HoloLens 2 development and integrating it with the HoloLens 2 toolchain.


In this post, I will continue building on that project by showing you how to set up a HoloLens 2 scene in Unity using the Mixed Reality Toolkit. Finally, I will show you how to set up and configure one of the MRTK's built-in example projects.

Configuring a scene for the HoloLens 2

Use the project from the previous post in which you configured the project settings, player settings, build settings, and imported the MRTK to use with the new XR SDK pipeline.

Setting up a new scene for the HoloLens only takes a few steps.


1. From the Mixed Reality Toolkit item in the toolbar, select Add to Scene and Configure.


2. This will add the needed MRTK components to your current scene. Verify in the Hierarchy window that your new scene includes the following game objects: Directional Light, MixedRealityToolkit, and MixedRealityPlayspace. Select the MixedRealityToolkit game object.


3. In the Inspector pane for the MixedRealityToolkit game object, there is a dropdown of various configuration profiles. The naming of these profiles is confusing, and it is extremely important that you switch from the default DefaultMixedRealityToolkitConfigurationProfile to DefaultXRSDKConfigurationProfile. Without making this change, even basic head tracking will not work.

4. Next, click on the Clone button and choose a pertinent name for your application’s configuration (if you can’t think of one, then the old standby MyConfigurationProfile will work in a pinch – you can go back and change it later).


5. The MRTK configuration files are set up in daisy-chain fashion, with config files referencing other config files, all of which can be copied and customized. Go ahead and clone the DefaultXRSDKCameraProfile and rename it to something you can remember (MyCameraProfile will work in a pinch).

6. Save all your changes with Ctrl+S.
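If you ever need to make the same profile switch from code rather than through the Inspector, the MRTK exposes it through its ActiveProfile property. Here is a minimal editor sketch, assuming MRTK 2.5.4/2.6.0-preview with the XR SDK profiles imported; the menu path and logging are my own placeholders:

    // Editor-only sketch: assign the XR SDK configuration profile from script.
    using Microsoft.MixedReality.Toolkit;
    using UnityEditor;
    using UnityEngine;

    public static class ProfileSwitcher
    {
        [MenuItem("Tools/Use XRSDK Configuration Profile")]
        public static void UseXrSdkProfile()
        {
            // Find the default XR SDK profile asset shipped with the MRTK.
            string[] guids = AssetDatabase.FindAssets(
                "DefaultXRSDKConfigurationProfile t:MixedRealityToolkitConfigurationProfile");
            if (guids.Length == 0 || MixedRealityToolkit.Instance == null)
            {
                Debug.LogError("Profile asset or MixedRealityToolkit instance not found.");
                return;
            }

            string path = AssetDatabase.GUIDToAssetPath(guids[0]);
            var profile = AssetDatabase.LoadAssetAtPath<MixedRealityToolkitConfigurationProfile>(path);

            // Equivalent to picking the profile in the Inspector dropdown.
            MixedRealityToolkit.Instance.ActiveProfile = profile;
            EditorUtility.SetDirty(MixedRealityToolkit.Instance);
        }
    }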

Opening an MRTK example project

Being able to test out a working HoloLens 2 application can be instructive. If you followed along with the previous post, you should already have the example scenes imported into your project.


If you missed this step, you can open up the Mixed Reality Feature Tool now and import the Mixed Reality Toolkit Examples.


1. After importing, the MRTK examples are still compressed in a package. In the Project pane, navigate to the Packages folder. Then right-click on Packages > Mixed Reality Toolkit Examples and click on View in Package Manager in the context menu.


2. In Package Manager, select the Mixed Reality Toolkit Examples package. This will list all of the compressed MRTK demos to the right.


3. Click on the Import into Project button next to the Demos – HandTracking sample to decompress it.


4. There are a few ways to open your scene; I will demonstrate one of them. Type Ctrl+O on your keyboard (this is equivalent to selecting File | Open Scene on the toolbar). A file explorer window will open up. Navigate to the Assets folder for your Unity project. You will find the HandInteractionExample scene under a folder called Samples. Select it.
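If you expect to reopen the sample scene often, the same step can be scripted. A hedged editor sketch follows; the asset path is an assumption, so check where Package Manager actually placed the sample under your Assets/Samples folder:

    // Editor-only sketch: open the imported hand interaction sample scene.
    using UnityEditor;
    using UnityEditor.SceneManagement;

    public static class SampleSceneOpener
    {
        [MenuItem("Tools/Open Hand Interaction Sample")]
        public static void Open()
        {
            // Path is a guess based on Package Manager's sample import convention --
            // verify the folder and scene names in your own project.
            EditorSceneManager.OpenScene(
                "Assets/Samples/Mixed Reality Toolkit Examples/" +
                "2.6.0-preview.20210220.4/Demos - HandTracking/" +
                "Scenes/HandInteractionExamples.unity");
        }
    }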


The interaction sample is one of the nicest ways to try out the hand tracking capabilities of the HoloLens 2. It still needs to be configured to work with the new XR SDK pipeline, however.

Configuring an MRTK demo scene

Before deploying this scene to a HoloLens 2, you must first configure the scene to use the XR SDK pipeline.


1. Select the MixedRealityToolkit game object in the Hierarchy pane. In the Inspector pane, switch from the default configuration profile to the one you created earlier when you were creating your own scene.

2. Ctrl+S to save your changes.

Preparing the MRTK demo scene for deployment


1. Open up the Build Settings window either by typing Ctrl+Shift+B or by selecting File | Build Settings from the project toolbar.

2. Click on the Add Open Scenes button to add the example scene.

3. Ctrl+S to save your build settings.


4. One of the nicest features of the Mixed Reality Toolkit, going all the way back to the original HoloLens Toolkit from which it developed, is the build feature. Building a project for HoloLens has several involved steps: building a Visual Studio project for UWP, compiling the UWP project into a Windows Store assembly, and finally deploying the appx to either a HoloLens 2 device or an emulator. The MRTK build window lets you do all of this from inside the Unity IDE.

From the Mixed Reality Toolkit menu on the toolbar, select Utilities | Build Window. From here, you can build and deploy your application. Alternatively, you can build your appx file and deploy it from the device portal, which is what I usually do.
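If you prefer scripting the first of those steps yourself (say, for a build server), Unity's standard build pipeline can generate the UWP solution directly. A minimal sketch; the output folder and menu path are arbitrary choices of mine:

    // Editor-only sketch: generate the UWP Visual Studio solution for the scenes
    // currently enabled in Build Settings.
    using System.Linq;
    using UnityEditor;

    public static class UwpBuilder
    {
        [MenuItem("Tools/Build UWP Solution")]
        public static void Build()
        {
            var options = new BuildPlayerOptions
            {
                scenes = EditorBuildSettings.scenes
                    .Where(s => s.enabled)
                    .Select(s => s.path)
                    .ToArray(),
                target = BuildTarget.WSAPlayer,   // Universal Windows Platform
                locationPathName = "Builds/UWP",  // where the solution is generated
                options = BuildOptions.None
            };
            BuildPipeline.BuildPlayer(options);
            // The generated solution is then compiled to an .appx (Master | ARM64)
            // in Visual Studio or MSBuild, and deployed from there or through the
            // device portal.
        }
    }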

Summary

This post completes the walkthrough showing you how to set up and then build a HoloLens 2 application in Unity using the new XR SDK pipeline. It is specifically intended for developers who have developed for the HoloLens before and may have missed a few tool cycles, but should be complete enough to also help developers new to spatial computing get quickly up to speed on the praxis of HoloLens 2 development in Unity.

The New XR SDK Pipeline with HoloLens 2: A Walkthrough

The HoloLens 2 toolchain is under continuous development. In addition, the development of the Mixed Reality Toolkit (MRTK), the developer kit for HoloLens, must be synced with the continuous development of the Unity Game Engine. For these reasons, it is necessary for developers to be careful about using the correct versions of each tool in order to avoid complications.


As of February 2021, I’m doing HoloLens development with Visual Studio 2019 v16.5.5, Unity Hub 2.4.2, Unity 2019.4.20f1, Mixed Reality Feature Tool 1.0.2102 beta, MRTK 2.5.4 + MRTK 2.6.0-preview, and Windows 10 Pro 10.0.18363.

MRTK, MR Feature Tool beta, and the new Unity XR SDK pipeline

The most accurate documentation on getting started with HoloLens 2 development, at this time, is found in the MRTK GitHub repository and on Microsoft’s MR Feature Tool landing page. There are nevertheless some gaps between these two docs that it would be helpful to fill in.

The new Unity XR SDK pipeline is a universal plugin framework that replaces something now known as the legacy Unity XR pipeline. Because of all the moving parts, many people have trouble getting the new pipeline working correctly, especially if they have not kept up with the changes for a while or if they are new to the HoloLens 2.

You will want to download Unity Hub if you don’t have it already. Unity Hub will help you manage multiple versions of Unity on your development machine. It is fairly common to switch between different versions of Unity when you are working in VR and mixed reality, as you go back to older versions for maintenance and pursue newer versions for the latest features. As a rule, never upgrade the Unity version of an existing project if things are working well enough.

Create a new Unity project

Use the Unity Hub Installs tab to get the latest version of Unity 2019.4, which you will need in order to successfully work with the latest MRTK. Later versions of Unity will not currently work for developing HoloLens 2 applications with the MRTK 2.5.4. Versions earlier than Unity 2018.4 also will not work well.

Some of the documentation mentions using Unity 2020.2 with OpenXR. This is something totally different. Just ignore it.


Start by creating a new Unity 2019.4 project.


When you do this from Unity Hub, you can use the pull down menu to select from any of the versions installed on your development computer.


When your Unity app has been created, open the Unity Package Manager from the Window menu.


In Package Manager, select the Windows XR Plugin in the left panel. Then click the Install button in the lower left corner of the main panel.


This will also install the following components automatically: XR Interaction Subsystems, XR Legacy Input Helpers, and XR Plugin Management.
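If you like to script your project setup, the same install can be started through the Package Manager API. This is a sketch; I am assuming the package id com.unity.xr.windowsmr, so verify it against what Package Manager displays:

    // Editor-only sketch: install the Windows XR Plugin programmatically.
    using UnityEditor;
    using UnityEditor.PackageManager;
    using UnityEditor.PackageManager.Requests;
    using UnityEngine;

    public static class XrPluginInstaller
    {
        static AddRequest request;

        [MenuItem("Tools/Install Windows XR Plugin")]
        public static void Install()
        {
            request = Client.Add("com.unity.xr.windowsmr"); // assumed package id
            EditorApplication.update += Progress;
        }

        static void Progress()
        {
            if (!request.IsCompleted) return;
            Debug.Log(request.Status == StatusCode.Success
                ? $"Installed {request.Result.displayName}"
                : $"Install failed: {request.Error.message}");
            EditorApplication.update -= Progress;
        }
    }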

*** Notice that the component called Windows Mixed Reality is not installed. Don’t touch it. This is a leftover from the legacy Unity XR pipeline and will eventually be removed from Unity. You will use a different method to get the MRTK into your project. ***

Configure project settings for HoloLens 2

You should now edit your project settings. Open the project settings panel by selecting Edit | Project Settings in the menu bar.


1. Select XR Plug-in Management in the left-hand pane and then, in the main window, check off Windows Mixed Reality to select it as your plug-in provider. This lets the new XR SDK pipeline work with your MRTK libraries (which you will import in the next section). A quick runtime sanity check for this setting appears after the last step below.


2. Under XR Plug-in Management, select the Windows Mixed Reality entry in the left-hand pane. Make sure Depth Buffer 16 Bit is selected as your Depth Buffer Format and that Shared Depth Buffer is checked off. This will improve the performance of your spatial computing application.


3. Select Quality in the left-hand pane. To improve performance, click the down arrow under the Windows logo to set your default quality setting to Low for your spatial computing application.


4. While you are configuring your project settings, you might as well also import TextMesh Pro. Select TextMesh Pro in the left-hand pane and click on the Import TMP Essentials button in the main window. TMP will be useful for drawing text objects in your spatial computing application.


5. Select Player in the left-hand pane to edit your player settings. Change the Package name entry to something relevant for your project. (The package name is how the HoloLens identifies your application. If you are quickly prototyping and deploying projects and forget to change the package name, you will get an obscure message saying your Template3D package is already installed. This is just the default package name on all new Unity projects.)
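Before moving on, here is the runtime sanity check promised in step 1: a minimal sketch, assuming the XR Plug-in Management package installed above. Attach it to any game object and watch the console in Play mode or on the device:

    // Runtime sketch: confirm that an XR SDK loader actually initialized.
    using UnityEngine;
    using UnityEngine.XR.Management;

    public class XrLoaderCheck : MonoBehaviour
    {
        void Start()
        {
            var manager = XRGeneralSettings.Instance != null
                ? XRGeneralSettings.Instance.Manager
                : null;

            if (manager == null || manager.activeLoader == null)
            {
                Debug.LogWarning("No active XR loader -- revisit XR Plug-in Management.");
                return;
            }

            // With Windows Mixed Reality checked off, this should name the WMR loader.
            Debug.Log($"Active XR loader: {manager.activeLoader.name}");
        }
    }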

You are now ready to import the Mixed Reality Toolkit.

Retrieve MRTK components with MR Feature Tool

The HoloLens team has created a new tool called the Mixed Reality Feature Tool for Unity to help you acquire and deploy the correct version of the MRTK to your project.


1. After downloading the feature tool, you can go into settings and check off the Include preview releases box in order to get the 2.6.0-preview.20210220.4 build of MRTK. Alternatively, you can use MRTK version 2.5.4 if you are uncomfortable with using a preview build.


2. Follow the wizard steps to select and download the MRTK features you are interested in. At a minimum, I’d recommend selecting the Toolkit Foundation, Toolkit Extensions, Toolkit Tools, Toolkit Standard Assets, and Toolkit Examples.


3. On the Import features screen, select the path to your Unity project. Validate that everything is correct before importing the selected features into your Unity HoloLens 2 project.

4. Click on the Import button.

Configure build settings for HoloLens 2

As the MRTK libraries are added to your project, your Unity project may hang while it is being refreshed if you still have it open (this is fine).


1. When the refresh is done, the Mixed Reality Toolkit item will appear in your Unity menu bar.


2. At the same time, an MRTK configuration window will also pop up in the Unity IDE. Accept the suggested project setting changes by selecting Apply.


3. Click on File | Build Settings… to open the Build Settings window. Select Universal Windows Platform in the left pane.


4. Upon selecting Universal Windows Platform as your build target, you will be prompted to switch platforms. Click on the Switch Platform button to confirm that you will be building an application for the UWP platform. This will initiate a series of updates to the project that may freeze the IDE as the project is refreshed.


5. After your switch to the Universal Windows Platform, MRTK may prompt you to make additional changes to your project. Apply the recommended changes.


6. For a release deployment, you will want the following build settings (a scripted sketch of these settings appears at the end of this section):

    • Target Device should be HoloLens.
    • Architecture is ARM64. This is the processor used in the HoloLens 2.
    • Build Type is D3D Project. Any other build type will project a standard 2D UWP window into the HoloLens.
    • Build configuration is Master for a release build, rather than Release, as odd as that seems. This is the most lightweight build you can deploy to your device, and consequently the best performing.
    • Compression Method should be LZ4HC for release builds. This takes a lot longer to compile, but is the most efficient in a deployed application.

You can close the Build Settings window.
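And here is the scripted sketch of those settings promised above, handy if you apply them to new projects often. Treat the property names, especially wsaArchitecture, as assumptions to verify against your Unity version; the package name is the one from the player settings section earlier:

    // Editor-only sketch: apply the HoloLens 2 release build settings from code.
    using UnityEditor;

    public static class ReleaseBuildSettings
    {
        [MenuItem("Tools/Apply HoloLens 2 Release Settings")]
        public static void Apply()
        {
            EditorUserBuildSettings.SwitchActiveBuildTarget(
                BuildTargetGroup.WSA, BuildTarget.WSAPlayer);               // UWP platform
            EditorUserBuildSettings.wsaSubtarget = WSASubtarget.HoloLens;   // Target Device
            EditorUserBuildSettings.wsaArchitecture = "ARM64";              // HoloLens 2 processor
            EditorUserBuildSettings.wsaUWPBuildType = WSAUWPBuildType.D3D;  // immersive, not a 2D window
            PlayerSettings.WSA.packageName = "MyHoloApp";                   // your package name here

            // The Master/Release choice is applied when the generated Visual Studio
            // solution is compiled; LZ4HC corresponds to BuildOptions.CompressWithLz4HC
            // in scripted builds.
        }
    }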

Summary

This walkthrough is intended to get you through the initial steps for integrating the new Unity XR SDK pipeline with the Mixed Reality Toolkit. For completeness, I will walk you through setting up and deploying a HoloLens scene in the next post.

Tech means never having to say you’re sorry


There have been a series of amazing turns of events in the mixed reality world lately. The big headliners for me are:

1) Magic Leap laid off about a thousand employees due to diminishing funds but then was able to get a lifeline of $350 million, which will save the jobs of the remaining 300-400 engineers. The creative teams and sales teams appear to have been gutted in the first round of layoffs, unfortunately.

2) Microsoft announced general availability of the HoloLens 2 on the Microsoft Store starting in July, with availability in more countries starting in the fall.

3) The Unity XR SDK is getting closer to shipping, or has already shipped but is only working well with some platforms for now? Obviously some things need to be ironed out, but this appears to be the future of cross-platform AR development.

4) Spatial.io, the cross-platform XR collaboration platform, has made its product free.

Along with these there have been a series of refreshingly honest video interviews with some of the central people in the current evolution of mixed reality that help to frame our understanding of what has been going on at Microsoft and at Magic Leap over the past five years.

The XR Talk podcast is always great (thanks Roland for introducing me to it). This meandering interview with Graeme Devine, post-Leap, is particularly fascinating. There’s a great story of how he delivered the blade Orcrist (or was it Glamdring?) to Neal Stephenson in order to tempt him to come work with Magic Leap.

This week also saw the hosting of the MR Dev Days conference on AltspaceVR, which was a fascinating and wonderfully international experience. Big thanks to Jesse McCulloch and everyone else responsible for throwing it together. The highlight of the show was a very frank conversation between Rene Schulte and Alex Kipman, which I can’t recommend enough.


During the fireside chat (and the keynote the previous day) Kipman acknowledged the drawn-out distribution of the HoloLens 2 and thanked developers for their patience. He also discussed the bucket problem (I think that’s what it’s called?) in which losing a bucket of credibility requires a lot of buckets to regain that same level of credibility (pretty sure I messed up that metaphor).

We’ve seen a lot of that in the MR world this month. The financial problems at Magic Leap will have put a lot of people off. The fact that all the laid-off employees have refrained from criticizing the company in the aftermath has been surprising and probably speaks well for the company culture.

Meanwhile in the HoloLens world, public and private message boards indicate a lot of frustration with the product team. Original messaging suggested the devices would be out in early 2019, but over a year later, individual devs still have problems getting devices.

Looked at objectively, it’s pretty clear that if the HoloLens team could have gotten more devices to indie devs they would have, and that the delays were not intentional. But knowing that doesn’t necessarily make the bad feelings go away, given the difference between knowing and feeling, and this in turn may have a depressing effect on any excitement around the wider release in July and then in the fall.

For what it’s worth, I think an apology goes a long way, and a heartfelt, personalized acknowledgment of the people who might feel slighted will go the greatest way. The difficulty here is that in a corporate culture like Microsoft’s, acknowledgement of mistakes is as alien to the normal way of doing things as – well, to be honest – as it is in the Trump administration. The culture of the Trump administration, after all, comes out of common practice in the modern corporation.

This isn’t always the case, though, and I have two pieces of evidence. Microsoft is very good at giving out tchotchkes, and even sent out an impressive gift pack for their online Build conference. The two best things I ever got from Microsoft, though, were personalized notes.

The first is a note from the Ben Lower / Heather Mitchell days of the Kinect program. Somebody wrote this out by hand, providing both an acknowledgment of who I am and what I had done (and to be honest, I was surprised they even knew who I was at the time):


The next is a card from the early days of the HoloLens program (Venessa Arnauld / Aileen Mcgraw).


These are my two most prized possessions from Microsoft over the past decade. Friends and associates have similar mementos they memorialize at home. The lesson from these two examples, for me, is that in tech you don’t always have to say you are sorry. It is often good enough, and probably more meaningful, to acknowledge the legitimate concerns, understandable feelings, and obvious humanity of the people who make you successful.

That’s a lot of personalized messages, but also the sort of thing that can easily repair broken or damaged relationships with your developer community.

Microsoft’s convergence of chatbots and mixed reality

One of the biggest trends in mixed reality this year is the arrival of chatbots on platforms like HoloLens. Speech commands are a common input for many XR devices. Adding conversational AI to extend these native speech recognition capabilities is a natural next step toward a future in which personalized virtual assistants backed by powerful AI accompany us in hologram form. They may be relegated to providing us with shopping suggestions, but perhaps, instead, they’ll become powerful custom tools that help make us sharper, give honest feedback, and assist in achieving our personal goals.

If you have followed the development of sci-fi artificial intelligence in television and movies over the years, the move from voice to full holograms will seem natural. In early sci-fi, such as HAL from the movie 2001: A Space Odyssey or the computer from the original Star Trek, computer intelligence was generally represented as a disembodied voice. In more recent incarnations, such as Star Trek: Voyager and Blade Runner 2049, these voices are finally personified by the full holograms of the Emergency Medical Hologram and Joi.

In a similar way, Cortana, Alexa, and Siri are slowly moving from our smartphones, Echos, and Invoke devices to our holographic headsets. These are still early days, but the technology is already in place and the future incarnation of our virtual assistants is relatively clear.

The rise of the chatbot

For Microsoft’s personal digital assistant Cortana, who started her life as a hologram in the Halo video games for Xbox, the move to holographic headsets is a bit of a homecoming. It seems natural, then, that when Microsoft HoloLens was first released in 2016, Cortana was already built into the onboard holographic operating system.

Then, in a 2017 article on the Windows Apps Team blog, Building the Terminator Vision HUD in HoloLens, Microsoft showed people how to integrate Azure Cognitive Services into their holographic head-mounted display in order to provide smart object recognition and even translation services as a Terminator-like HUD overlay.

The only thing left to do to get to a smart virtual assistant was to tie together the HoloLens’s built-in Cortana speech capabilities with some AI to create an interactive experience. Not surprisingly, Microsoft was able to fill this gap with the Bot Framework.

Virtual assistants and Microsoft Bot Framework

Microsoft Bot Framework combines AI backed by Azure Cognitive Services with natural-language capabilities. It includes a set of open source SDKs and tools that enable developers to build, test, and connect bots that interact naturally with users. With the Microsoft Bot Framework, it is easy to create a bot that can speak, listen, understand, and even learn from your users over time with Azure Cognitive Services. This chatbot technology is sometimes referred to as conversational AI.

There are several chatbot tools available. I am most familiar with the Bot Framework, so I will be talking about that. Right now, chatbots built with the Bot Framework can be adapted for speech interactions or for text interactions like the UPS virtual assistant example above. They are relatively easy to build and customize using prepared templates and web-based dialogs.

One of my favorite ways to build a chatbot is by using QnA Maker, which lets you simply point to an online FAQ page or upload product documentation to use as the knowledge base for your bot service. QnA Maker then walks you through applying a chatbot personality to your knowledge base and deploying it, usually with no custom coding. What I love about this is that you can get a sophisticated chatbot rolled out in about half a day.

Using the Microsoft Bot Framework, you also have the ability to take full control of the creation process to customize your bot in code. Bot apps can be created in C#, JavaScript, Python or Java. You can extend the capabilities of the Bot Framework with middleware that you either create yourself or bring into your code from third parties. There are even advanced capabilities available for managing complex conversation flows with branches and loops.
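To give a flavor of that code path, here is a minimal echo bot built on the Bot Framework SDK v4 ActivityHandler class: a sketch rather than a production bot, with the Microsoft.Bot.Builder NuGet package assumed:

    // Sketch: the smallest useful Bot Framework v4 bot -- echoes each message back.
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Bot.Builder;
    using Microsoft.Bot.Schema;

    public class EchoBot : ActivityHandler
    {
        // Called once per message the user sends to the bot.
        protected override async Task OnMessageActivityAsync(
            ITurnContext<IMessageActivity> turnContext,
            CancellationToken cancellationToken)
        {
            var reply = MessageFactory.Text($"You said: {turnContext.Activity.Text}");
            await turnContext.SendActivityAsync(reply, cancellationToken);
        }
    }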

Ethical chatbots

Having introduced the idea above of building a Terminator HUD using Cognitive Services, it’s important to also raise awareness about fostering an environment of ethical AI and ethical thinking around AI. To borrow from the book The Future Computed, AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable. As we build all forms of chatbots and virtual assistants, we should always consider what we intend our intelligent systems to do, as well as concern ourselves with what they might do unintentionally.

The ultimate convergence of AI and mixed reality

Today, chatbots are geared toward integrating skills for commerce like finding directions, locating restaurants, and providing help with a company’s products through virtual assistants. One of the chief research goals driving better chatbots is to personalize the chatbot experience. Achieving a high level of personalization will require extending current chatbots with more AI capabilities. Fortunately, this isn’t a far-future thing. As shown in the Terminator HUD tutorial above, adding Cognitive Services to your chatbots and devices is easy to do.

Because holographic headsets have many external sensors, AI will also be useful for analyzing all this visual and location data and turning it into useful information through the chatbot and Cognitive Services. For instance, cameras can be used to help translate street signs if you are in a foreign city or to identify products when you are shopping and provide helpful reviews.
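As a hedged sketch of what such a call involves, here is the shape of a request to the Azure Computer Vision analyze endpoint (REST API v3.2); the resource URL and key are placeholders to swap for your own:

    // Sketch: send a camera frame to Computer Vision and get back scene JSON.
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    public static class VisionClient
    {
        static readonly HttpClient http = new HttpClient();

        public static async Task<string> AnalyzeAsync(byte[] jpegFrame)
        {
            http.DefaultRequestHeaders.Clear();
            http.DefaultRequestHeaders.Add(
                "Ocp-Apim-Subscription-Key", "<your-key>");           // placeholder
            var content = new ByteArrayContent(jpegFrame);
            content.Headers.ContentType =
                new MediaTypeHeaderValue("application/octet-stream");

            var response = await http.PostAsync(
                "https://<your-resource>.cognitiveservices.azure.com/" + // placeholder
                "vision/v3.2/analyze?visualFeatures=Description,Objects",
                content);
            return await response.Content.ReadAsStringAsync();        // JSON description
        }
    }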

Finally, AI will be needed to create realistic 3D model representations of your chatbot and overcome the uncanny valley that is currently holding back VR, AR, and MR. When all three elements are in place to augment your chatbot — personalization, computer vision, and humanized 3D modeling — we’ll be that much closer to what we’ve always hoped for — personalized AI that looks out for us as individuals.

Here is some additional reading on the convergence of chatbots and MR you will find helpful:

The Fork in Mixed Reality


Yogi Berra gnomically said, “When you come to a fork in the road, take it.” On the evening of Friday, February 1st, 2019, at approximately 9 PM EST, that’s exactly what happened to Mixed Reality.

The Mixed Reality Toolkit open source project, which grew out of the earlier HoloLens Toolkit on GitHub, was forked into the Microsoft MRTK and a cross-platform XRTK (read the announcement). While the MRTK will continue to target primarily Microsoft headsets like the HoloLens and WMR, XRTK will feature a common framework for HoloLens, Magic Leap, VR headsets, and mobile AR – as well as HoloLens 2 and any other MR devices that eventually come on the market.

So why did this happen? The short of it is that open source projects can sometimes serve multiple divergent interests and sometimes they cannot. Microsoft was visionary in engineering and releasing the original HoloLens MR Headset. They made an equally profound and positive step back in 2016 by choosing to open source the developer SDK/Framework/Toolkit (your choice) that allows developers to build Unity apps for the HoloLens. This was the original HoloLens Toolkit (HLTK).

While the HLTK started as a primarily Microsoft engineering effort, members of the community quickly jumped in and began contributing more and more code to the point that the Microsoft contributions became a minority of overall contributions. This, it should be noted, goes against the common trend of a single company paying their own engineers to keep an open source project going. The HLTK was an open source success story.

In this regard, it is worth calling out two developers in particular, Stephen Hodgson and Simon Jackson, for the massive amounts of code and thought leadership they have contributed to the MR community. Unsung heroes barely captures what they have done.

In 2017 Microsoft started helping to build occluded WinMR (virtually the same as VR) devices with several hardware vendors and it made sense to create something that supported more than just the HoloLens. This is how the MRTK came to be. It served the same purpose as the HLTK, to accelerate development with Unity scripts and components, but now with a larger perspective about who should be served.

In turn, this gave birth to something that is generally known as MRTK vNext, an ambitious project to support not just Microsoft devices but also platforms from other vendors. And what’s even more amazing, this was again driven by the community rather than by Microsoft itself. Microsoft was truly embracing the open source mindset and not just paying lip service to it as many naysayers were claiming.

But as Magic Leap, the other major MR headset vendor, finally released their product in fall 2018, things began to change. Unlike Microsoft, Magic Leap developed their SDK in-house and threw massive resources at it. Meanwhile, Microsoft finally started throwing their engineers at the MRTK again after taking a long hiatus. This may have been in response to the Magic Leap announcement, or equally could have been because the team was setting the stage for a HoloLens 2 announcement in early 2019.

And this was the genesis of the MR fork in the road: for Microsoft, it did not make sense to devote engineering dollars toward creating a platform that supported their competitors’ devices. In turn, it probably didn’t make sense for engineers from Google, Magic Leap, Apple, Amazon, Facebook, etc. to devote their time toward a project that was widely seen as a vehicle for Microsoft HMDs.

And so a philosophical split needed to occur. It was necessary to fork MRTK vNext. The new XRTK (which is also pronounced “Mixed Reality Toolkit”) is a cross-platform framework for HoloLens as well as Magic Leap (Lumin SDK support is in fact already working in XRTK and is getting even more love over the weekend even as I write).

But XRTK will also be a platform that supports developing for Oculus Rift, Oculus Go, HTC Vive, Apple ARKit, Google ARCore, the new HoloLens 2 which may or may not be announced at MWC 2019, and whatever comes next in the Mixed Reality Continuum.

So does this mean it is time to stick a fork in the Microsoft MRTK? Absolutely not. Microsoft’s MRTK will continue to do what people have long expected of it, supporting both HoloLens and Occluded WinMR devices (that is such a wicked mouthful — I hope someone will eventually give it a decent name like “Windows Surface Kinect for Azure Core DotNet Silverlight Services” or something similarly delightful).

In the meantime, while Microsoft is paying its engineers to work on the MRTK, XRTK needs fresh developers to help contribute. If you work for a player in the MR/VR/AR/XR space, please consider contributing to the project.

Or to word it in even stronger terms, if you give half a fork about the future of mixed reality, go check out  XRTK  and start making a difference today.

10 Questions with Jasper Brekelmans

This is the first in a series of interviews intended to help people get to know the movers & shakers as well as the drones & technicians (sometimes the same person is all four) who are making mixed reality … um … a reality.  I’ve borrowed the format from Vox but added some new questions.


Though not widely known outside of certain circles, when you ask experienced HoloLens developers who they most admire, Jasper’s name usually comes up. Jasper is the creator of the Brekel Toolset, an affordable tool for doing motion capture with the Kinect sensor. He also works with HoloLens, Oculus, and the Vive and his innovative projects have been featured on RoadToVR and other venues. His work on collaboration between multiple Vive headsets was mind-blowing—but then again, so was his HoloLens motion capture demo with a live dancer, his HoloLens integration with Autodesk MotionBuilder, and his recent release of the OpenVR Recorder.

Without further ado, here are Jasper’s answers to 10 questions:

 

What movie has left the most lasting impression on you?
“Spring, Summer, Fall, Winter… and Spring”, “A Clockwork Orange”, “The Evil Dead”, “The Wrestler”, “The Straight Story”, “Hidden Figures”… too many to choose 🙂

What is the earliest video game you remember playing?
Pac-Man (arcade) and Donkey Kong (handheld).

Who is the person who has most influenced the way you think?
A work mentor and some close personal friends.

When was the last time you changed your mind about something?
Probably on a weekly basis on something or other.

What’s a programming skill people assume you have but that you are terrible at?
Heavily math-based algorithms and/or coding for mobile platforms.

What inspires you to learn?
The goal of having new possibilities with freshly learned skills.

What do you need to believe in order to get through the day?
That what I do matters to others.

What’s a view that you hold but can’t defend?
That humanity will be better off once next generations have grown up with true AR glasses/lenses technology, have played with virtual galaxies and value virtual objects similarly to physical objects for certain purposes.

What will the future killer Mixed Reality app do?
Empower users in their daily lives without them realizing it, while at the same time letting new users realize what they miss instantly.

What book have you recommended the most?
Ready Player One.

Pokémon Go as An Illustration of AR Belief Circles


Recent rumors circulating around Pokémon Go suggest that they will delay their next major update until next year. It was previously believed that they would be including additional game elements, creatures, and levels beyond level 40 sometime in December.

A large gap between releases like this would seem to leave the door open for copycat games to move into the opening that Niantic is providing. And maybe this wouldn’t be such a bad thing. While World of Warcraft is the most successful MMORPG, for instance, it certainly wasn’t the first. Dark Age of Camelot, EverQuest, Asheron’s Call, and Ultima Online all preceded it. What WoW did was perhaps to collect the best features of all these games while also riding the right graphics card cycle to success.

A similar student-becomes-the-master trope can play out for other franchise owners, since the only things that seem to be required to get a game similar to Pokémon going are a pre-existing storyline (like WoW had) and 3D assets either available or easily created to go into the game. With Azure and AWS cloud computing easily available, even infrastructure isn’t the challenge it was when the early MMORPGs were starting. Possible franchise holders that could make the leap into geographically-aware augmented reality games include Disney, WoW itself, Yu-Gi-Oh!, Magic: The Gathering, and Star Wars.

Imagine going to the park one day and asking someone else, face down staring at their phone, if they know where the Bulbasaur showing up on their nearby tracker is, and having them not know what you are talking about because they are looking for Captain Hook or a Jawa on theirs?

This sort of experience is exemplary of what Vernor Vinge calls belief circles in his book about augmented reality, Rainbows End. Belief circles describe groups of people who share a collaborative AR experience. Because they also share a common real-life world with others, their belief circles may conflict with other people’s belief circles. What’s even more peculiar is that members of different belief circles do not have access to each other’s augmented worlds – a strange twist on the problem of other minds. So while a person in H.P. Lovecraft’s belief circle can encounter someone in Terry Pratchett’s Discworld belief circle at a Starbucks, it isn’t at all clear how they will ultimately interact with one another. Starbucks itself may provide virtual assets that can be incorporated into either belief circle in order to attract customers from different worlds and backgrounds – basically multi-tier marketing of the future. Will different things be emphasized in the store based on our self-selected belief circles? Will our drinks have different names and ingredients? How will trademark and copyright laws impact the ability to incorporate franchises into the multi-aspect branding of coffee houses, restaurants, and mall stores?

But most of all, how will people talk to each other? One of the great pleasures of playing Pokémon Go today is encountering and chatting with people I otherwise wouldn’t meet and having a common set of interests that trump our political and social differences. Belief circles in the AR future of five to ten years may simply encourage the opposite trend of community Balkanization into interest zones. Will high-concept belief circles based on art, literature, and genre fiction simply devolve into Democrat and Republican belief circles at some point?

HoloLens Occlusion vs Field of View


[Note: this post is entirely my own opinion and purely conjectural.]

Best current guesses are that the HoloLens field of view is somewhere between 32 degrees and 40 degrees diagonal. Is this a problem?

We’d definitely all like to be able to work with a larger field of view. That’s how we’ve come to imagine augmented reality working. It’s how we’ve been told it should work, from Vernor Vinge’s Rainbows End to Ridley Scott’s Prometheus to the Iron Man trilogy – in fact, going back as far as Star Wars in the late ’70s. We want and expect a 160-180 degree FOV.

So is the HoloLens’ field of view (FOV) a problem? Yes it is. But keep in mind that the current FOV is an artifact of the waveguide technology being used.

What’s often lost in the discussions about the HoloLens field of view – in fact the question never asked by the hundreds of online journalists who have covered it – is what sort of trade-off was made so that we have the current FOV.

A common internet rumor – likely inspired by a video by tech evangelist Bruce Harris taken a few months ago – is that it has to do with cost of production and consistency in production. The argument is borrowed from chip manufacturing and, while there might be some truth in it, it is mostly a red herring. An amazingly comprehensive blog post by Oliver Kreylos in August of last year went over the evidence as well as related patents and argued persuasively that while more expensive waveguide materials could improve the FOV marginally, the cost would be prohibitive and the approach ultimately nonsensical. At the end of the day, the FOV of the HoloLens developer unit is a physical limitation, not a manufacturing limitation or a power limitation.


But don’t other AR headset manufacturers promise a much larger FOV? Yes. The Meta 2 has a 90 degree field of view. The way the technology works, however, involves two LED screens that are viewed through plastic positioned at 45 degrees to the screens (technically known as a beam splitter, informally known as a piece of glass), which reflects the image into the user’s eyes at approximately half the original brightness while also letting in the real world in front of the user (though half of that light is also scattered). This is basically the same technique used to create ghostly images in the Haunted Mansion at Disneyland.


The downside of this increased FOV is that you are losing a lot of brightness through the beam splitter. You are also losing light over the distance it takes the light to pass through the plastic and reach your eyes. The result is a see-through “hologram”.


But is this what we want? See-through holograms? The visual design team for Iron Man decided that this is indeed what they wanted for their movies. The translucent holograms provide a cool ghostly effect, even in a dark room.


The Princess Leia hologram from the original Star Wars, on the other hand, is mostly opaque. That visual design team went in a different direction. Why?


My best guess is that it has to do with the use of color. While the Iron Man hologram has a very limited color palette, the Princess Leia hologram uses a broad range of facial tones to capture her expression – and also so that, dramatically, Luke Skywalker can remark on how beautiful she is (which obviously gets messed up by Return of the Jedi). Making her transparent would simply wash out the colors and destroy much of the emotional content of the scene.


The idea that opacity is a pre-requisite for color holograms is confirmed in the Star Wars chess scene on the Millennium Falcon. Again, there is just enough transparency to indicate that the chess pieces are holograms and not real objects (digital rather than physical).


So what kind of holograms does the HoloLens provide, transparent or near-opaque? This is something that is hard to describe unless you actually see it for yourself, but the HoloLens “holograms” will occlude physical objects when they are placed in front of them. I’ve had the opportunity to experience this several times over the last year. This is possible because these digital images use a very large color palette and, more importantly, are extremely intense. In fact, because the HoloLens display technology is currently additive, this occlusion effect actually works best with bright colors. As areas of the screen become darker, they actually appear more transparent.

Bigger field of view = more transparent, duller holograms. Smaller field of view = more opaque, brighter holograms.
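To put rough numbers behind that rule of thumb (the 50% figures come from the beam splitter description above; the waveguide transmission T is my own simplifying assumption):

    beam splitter:       L_eye = 0.5 × L_world + 0.5 × L_display
    additive waveguide:  L_eye = T × L_world + L_display    (T assumed close to 1)

A hologram reads as opaque wherever L_display far exceeds the world light behind it, and since L_display can never be negative, a black pixel adds nothing, which is exactly why dark areas of a HoloLens hologram look transparent.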

I believe Microsoft made the bet that, in order to start designing the AR experiences of the future, we actually want to work with colorful, opaque holograms. The trade-off the technology seems to make in order to achieve this is a more limited field of view in the HoloLens development kits.

At the end of the day, we really want both, though. Fortunately we are currently only working with the Development Kit and not with a consumer device. This is the unit developers and designers will use to experiment and discover what we can do with HoloLens. With all the new attention and money being spent on waveguide displays, we can optimistically expect to see AR headsets with much larger fields of view in the future. Ideally, they’ll also keep the high light intensity and broad color palette that we are coming to expect from the current generation of HoloLens technology.

HoloLens Hardware Specs


Microsoft is releasing an avalanche of information about HoloLens this week. Within that heap of gold is, finally, clearer information on the actual hardware in the HoloLens headset.

I’ve updated my earlier post on How HoloLens Sensors Work to reflect the updated spec list. Here’s what I got wrong:

1. Definitely no internal eye tracking camera. I originally thought this is what the “gaze” gesture was. Then I thought it might be used for calibration of interpupillary distance. I was wrong on both counts.

2. There aren’t four depth sensors. Only one. I had originally thought these cameras would be used for spatial mapping. Instead just the one depth camera is, and it maps a 75 degree cone out in front of the headset, with a range of 0.8 m to 3.1 m.

3. The four cameras I saw are probably just grayscale cameras – and it’s these cameras, along with cool algorithms, that are being used to do inside-out position tracking along with the IMU.

Here are the final sensor specs:

  • 1 IMU
  • 4 environment understanding cameras
  • 1 depth camera
  • 1 2MP photo / HD video camera
  • Mixed reality capture
  • 4 microphones
  • 1 ambient light sensor

The mixed reality capture is basically a stream that combines digital objects with the video stream coming through the HD video camera. It is different from the on-stage rigs we’ve seen which can calculate the mixed-reality scene from multiple points of view. The mixed reality capture is from the user’s point of view only. The mixed-reality capture can be used for streaming to additional devices like your phone or TV.

Here are the final display specs:

  • See-through holographic lenses (waveguides)
  • 2 HD 16:9 light engines
  • Automatic pupillary distance calibration
  • Holographic Resolution: 2.3M total light points
  • Holographic Density: >2.5k radiants (light points per radian)
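As a rough unit conversion of those published numbers (my own back-of-envelope math, not an official figure, and assuming a “light point” is something like a pixel):

    2,500 light points per radian × (π / 180) ≈ 44 light points per degree
    ~30° horizontal FOV × 44 ≈ 1,300 light points across

which is in the neighborhood of the horizontal pixel count of an HD light engine.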

I’ll try to explain “light points” in a later post – if I can ever figure it out.