A User’s Guide to the terms VR, AR, MR and XR – with a tangent about pork


Virtual reality, augmented reality, mixed reality and XR (or xR) are terms floating around that seem to describe the same things – but maybe not – and sometimes people get very angry if you use the terms incorrectly (or at least they say they do).

The difficulty is that these terms come from different sources and for different reasons, yet the mind naturally seeks to find order and logic in the world it confronts. A great English historical example is the way Anglo-Saxon words for animals have complementary Norman words for the cooked versions of those beasts: cow and beef (boeuf), pig and pork (porc), sheep and mutton (mouton). It is how the mind deals with a superfluity of words – we try to find a reason to keep them all.


So as an experiment and a public service, here’s a guide to using these terms in a consistent way. My premise is that these terms are a part of natural language and describe real things, rather than marketing terms meant either to boost products or to advance personal agendas (such as the desire to be the person who coined a new term). Those constraints actually make it pretty easy to fit all these phrases into a common framework, one that uses grammar to enforce semantic distinctions:

1) Virtual reality is a noun for a 3D simulated reality that you move through by moving your body. A sense of space is an essential component of VR. VR includes 360 videos as well as immersive 3D games on devices like the Oculus Rift, HTC Vive and Microsoft Immersive headsets.

2) Augmented reality is a noun for an experience that combines digital objects and the real world, typically by overlaying digital content on top of a video of the real world (e.g. Pokémon Go) or on top of a transparent display (e.g. HoloLens, Meta, Magic Leap, Daqri).

3) Mixed reality is an adjective that modifies nouns in order to describe both virtual and augmented reality experiences. For instance:

a. A mixed reality headset enables virtual reality to be experienced.

b. The Magic Leap device will let us have mixed reality experiences.

4) xR is an umbrella term for the nouns virtual reality and augmented reality. You use xR generically when you are talking about broad trends or ambiguously when you are talking in a way that includes both VR and AR (for instance, I went to an event about xR where different MR experiences were on display). xR may, optionally, also cover AI and ML (aren’t they the same thing?).

This isn’t necessarily how anyone has consistently used these terms in 2017, but I feel like there is a trend towards these usages. I’m going to try to use them in this way in 2018 and see how it goes.

A Guide to Online HoloLens Tutorials

There are lots of great video tutorials and advanced HoloLens materials online that even people who work with HoloLens aren’t always aware of. I’d like to fix that in this post.

1. The Fundamentals


If you are still working through the basics with the HoloLens, then I highly recommend the course that Dennis Vroegop and I did for LinkedIn Learning: App Development for Microsoft HoloLens. We approached it with the goal of providing developers with everything we wish we had known when we started working with HoloLens in early 2016. The course was filmed in a studio at the Lynda.com campus in Carpinteria, California, so the overall quality is considerably higher than most other courses you’ll find.

 

2. The Mixed Reality Toolkit (HoloToolkit)


Once you understand the fundamentals of working with the HoloLens, the thing to learn is the ins-and-outs of the Mixed Reality Toolkit, which is the open source SDK for working with the HoloLens APIs. Stephen Hodgson, a Mixed Reality developer at Valorem, is one of the maintainers of (and probably biggest developer on) the MRTK. He does live streams on Saturdays to address people’s questions about the toolkit. His first two hour-long streamcasts cover the MRTK Input module:

#1 Input 1

#2 Input 2

The next three deal with Sharing Services:

#3 Sharing 1

#4 Sharing 2

#5 Sharing 3

These courses provide the deepest dive you’re ever likely to get about developing for HoloLens.
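To give a flavor of what the input streams cover, here is a minimal sketch of a HoloToolkit-style click handler. I’m assuming the 2017-era HoloToolkit.Unity.InputModule API here; the toolkit gets reorganized regularly, so treat the type names as a snapshot rather than gospel.

```csharp
// A minimal sketch assuming the 2017-era HoloToolkit.Unity.InputModule API.
// Attach to any GameObject with a collider to react to a gaze-plus-air-tap.
using HoloToolkit.Unity.InputModule;
using UnityEngine;

public class TapToToggle : MonoBehaviour, IInputClickHandler
{
    private Renderer cachedRenderer;

    private void Awake()
    {
        cachedRenderer = GetComponent<Renderer>();
    }

    // Called by the toolkit's input system when the user air taps while gazing at this object.
    public void OnInputClicked(InputClickedEventData eventData)
    {
        cachedRenderer.enabled = !cachedRenderer.enabled;
    }
}
```

Drop it on anything with a collider in a scene that has the toolkit’s input manager set up, and an air tap will toggle the object’s visibility.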

 

3. HoloLens Game Tutorial


Sometimes it is helpful to have a project to work through from start to finish. Chad Carter provides this with a multipart series on game development for Mixed Reality. So far there are five lessons … but the course is ongoing and well worth keeping up with.

#1 Setup

#2 Core Game Logic

#3 The Game Controller

#4 Motion Controllers

#5 Keeping Score

 

4. Scale and Rotation System


Jason Odom’s tutorial series deals with using Unity effectively for HoloLens. It brings home the realization that most of 3D development revolves around moving, resizing, hiding and revealing objects. It’s written for an older version of the toolkit, so some things will have changed since then. By the way, Jason’s theme song for this series is an earworm. Consider yourself warned.

#1 Setup

#2 Scale and Rotate Manager

#3 Scale and Rotate Class

#4 Scale and Rotate Class part 2 

#5 Scale and Rotate Class part 3

#6 Scale and Rotate More Manager Changes

#7 Scale and Rotate Temporary Insanity

#8 Scale and Rotate Q & A
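If you just want the flavor of the kind of manipulation the series builds toward, here is a bare-bones sketch (my own, not Jason’s code) that spins a hologram around its own axis while easing it toward a target scale:

```csharp
// A generic sketch of scale-and-rotate manipulation in Unity (not Jason's actual code).
using UnityEngine;

public class ScaleAndRotate : MonoBehaviour
{
    public float DegreesPerSecond = 45f; // rotation speed around the local Y axis
    public float TargetScale = 1.5f;     // uniform scale to ease toward
    public float ScaleSpeed = 0.5f;      // scale units per second

    private void Update()
    {
        // Spin the object around its own up axis.
        transform.Rotate(Vector3.up, DegreesPerSecond * Time.deltaTime, Space.Self);

        // Ease the uniform scale toward the target value.
        float current = transform.localScale.x;
        float next = Mathf.MoveTowards(current, TargetScale, ScaleSpeed * Time.deltaTime);
        transform.localScale = Vector3.one * next;
    }
}
```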

 

5. HoloLens Academy

There’s also, of course, Microsoft’s official tutorial series, known as the HoloLens Academy. It’s thorough, and if you work through the lessons you’ll gain a broad understanding of the capabilities of the HoloLens device. One thing to keep in mind is that the tutorials are not always synced up with the latest MRTK, so don’t get frustrated when you encounter a divergence between what the tutorials tell you to do and what you find in the MRTK, which is updated at a much more rapid rate than the tutorials are.

 

6. Summing up

You’re probably now wondering if watching all these videos will make you a HoloLens expert. First of all, expertise isn’t something that you arrive at overnight. It takes time and effort.

Second of all – yeah. Pretty much. HoloLens development is a very niche field and it hasn’t been around for very long. It has plenty of quirks but all these videos will address those quirks either directly or obliquely. If you follow all these videos, you’ll know most of everything I know about the HoloLens, which is kinda a lot.

So have fun, future expert!

Projecting Augmented Reality Worlds


In my last post, I discussed the incredible work being done with augmented reality by Magic Leap. This week I want to talk about implementing augmented reality with projection rather than with glasses.

To be more accurate, many varieties of AR experience are projection based; the technical differences come down to which surface is being projected onto. Google Glass projects onto a surface centimeters from the eye. Magic Leap is reported to project directly onto the retina (virtual retinal display technology).

AR experiences being developed at Microsoft Research, which I had the pleasure of visiting this past week during the MVP Summit, are projected onto pre-existing rooms without the need to rearrange the room itself. Using fairly common projection mapping techniques combined with very cool technology such as the Kinect and Kinect v2, the room is scanned and appropriate distortions are created to make projected objects look “correct” to the observer.

An important thing to bear in mind as you look through the AR examples below is that they are not built using esoteric research technology. These experiences are all built using consumer-grade projectors, Kinect sensors and Unity 3D. If you are focused and have a sufficiently strong desire to create magic, these experiences are within your reach.

The most recent work created by this group (led by Andy Wilson and Hrvoje Benko) is a special version of RoomAlive they created for Halloween called The Other Resident. Just to prove I was actually there, here are some pictures of the lab, along with the Kinect MVPs, who were amazed that we were being allowed to film anything, given that most of the MVP Summit involves NDA content we are not allowed to repeat or comment on.


 

IllumiRoom is a precursor to the more recent RoomAlive project. The basic concept is to extend the visual experience of the gaming display or television with surrounding content that responds dynamically to what is seen onscreen. If you think it looks cool in the video, please know that it is even cooler in person. And if you like it and want it in your living room, then comment on this thread or on the YouTube video itself to let them know it is definitely a minimum viable product for the Xbox One, as the big catz say.

The RoomAlive experience is the crown jewel at the moment, however. RoomAlive uses multiple projectors and Kinect sensors to scan a room and then use it as a projection surface for interactive, procedural games: in other words, augmented reality.

A fascinating aspect of the RoomAlive experience is how it handles appearance-preserving, point-of-view-dependent visualizations: the way objects need to be distorted in order to appear correct to the observer. In the Halloween experience at the top, you’ll notice that the animation of the old crone looks like she is positioned in front of the chair she is sitting on, even though the projection surface actually extends partially in front of the chair back and, at the same time, several feet behind it for her shoulders and head. In the RoomAlive video just above, you’ll see the view-dependent distortion occurring as the running soldier changes planes at about 2:32.
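For the technically curious, here is a rough sketch of the two-pass idea as I understand it – my reconstruction, not MSR’s actual code. First you render what the viewer should see from their tracked head position; then, for each projector pixel, you find the room surface it lands on and look up the colour that surface point has in the viewer’s image. In Unity terms, assuming the scanned room mesh has colliders on it, the per-pixel lookup might look something like this:

```csharp
// A conceptual sketch of view-dependent projection mapping (my reading of the
// technique, not MSR's code). Assumes the scanned room geometry has colliders.
using UnityEngine;

public class ViewDependentProjection : MonoBehaviour
{
    public Camera ViewerCamera;     // placed at the tracked head position (e.g. from Kinect skeleton data)
    public Camera ProjectorCamera;  // calibrated to match the physical projector's optics

    // Returns the normalized coordinate in the viewer's image that a given projector pixel should display.
    public Vector2 ViewerUvForProjectorPixel(Vector2 projectorPixel)
    {
        // Where does this projector pixel land in the scanned room?
        Ray ray = ProjectorCamera.ScreenPointToRay(projectorPixel);
        RaycastHit hit;
        if (!Physics.Raycast(ray, out hit))
        {
            return Vector2.zero; // the pixel misses the room geometry entirely
        }

        // Where does that room point appear from the viewer's point of view?
        Vector3 viewerScreen = ViewerCamera.WorldToScreenPoint(hit.point);
        return new Vector2(viewerScreen.x / ViewerCamera.pixelWidth,
                           viewerScreen.y / ViewerCamera.pixelHeight);
    }
}
```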

 

You would think that these appearance-preserving, point-of-view-dependent techniques would fall apart any time you have more than one person in the room. To address this problem, Hrvoje and Andy worked on another project that plays with perception and physical interactions to integrate two overlapping experiences in a wizard-battle scenario called Mano-a-Mano or, more technically, Dyadic Projected Spatial Augmented Reality. The globe visualization at 2:46 is particularly impressive.

My head is actually still spinning following these demos and I’m still in a bit of a fugue state. I’ve had the opportunity to see lots of cool 3D modeling, scanning, virtual experiences, and augmented reality experiences over the past several years and felt like I was on top of it, but what MSR is doing took me by surprise, especially when it was laid out sequentially as it was for us. A tenth of the work they have been doing over the past two years could easily be the seed of an idea for any number of tech startups.

In the middle of the demos, I leaned over to one of the other MVPs and whispered in his ear that I felt like Steve Jobs at Xerox PARC seeing the graphical user interface and mouse for the first time. He just stroked his beard and nodded. It was a magic moment.

Why Magic Leap is Important


This past weekend a neighbor invited our entire subdivision to celebrate an Indian holiday called Diwali – the Festival of Lights – with them. Like many traditions that immigrant families carry to the New World in their luggage, it had become an amalgamation of old and new. The hosts and other Indians from the neighborhood wore traditional South Asian formalwear. I was painfully underdressed in an old oxford, chinos and flip-flops. Others came in the formalwear of their native countries. Some just put on jackets and ties. We organized this Diwali as a pot-luck and had an interesting mix of biryanis, spaghetti, enchiladas, pancakes with syrup, borscht, tomato korma, Vietnamese spring rolls and puri.

The most important part of the celebration was the lighting of fireworks. For about two solid hours, children ran through a smoky cul-de-sac waving sparklers while firecrackers went off around them. Towards the end of this celebration, one of our hosts pulled out her iPhone in order to FaceTime with her father in India and show him the children playing in the background just as they would have back home, forming a line of continuity between continents using a 1,500-year-old ritual and an international cellular system. Diwali is called the Festival of Lights, according to Wikipedia, because it celebrates the spiritual victory of light over darkness and ignorance.

When I got home I did some quick calculations. Getting to that Apple moment our host had with her father – we no longer have Hallmark moments but only Apple moments today – took approximately seven years. That is the amount of time it takes for a technology to go from seeming fantastic and impractical – because we don’t believe it can be done and can’t imagine how we would use it in everyday life if it could – to being unexceptional.


Video conferencing has been a staple of science fiction for ages, from 2001: A Space Odyssey to Star Trek. It was only in 2010, however, that Apple announced the FaceTime app, making it generally available to anyone who could afford an iPhone. I’m basing the seven years from fantasy to facticity, though, on the length of time since the initial release of the iPhone in 2007.

Magic Leap, the digital reality technology that has just received half a billion dollars of funding from companies like Google, is important because it points the way to what can happen in the next seven years. I will paint a picture for you of what a world with this kind of digital reality technology will look like and it’s perfectly okay if you feel it is too out there. In fact, if you end up thinking what I’m describing is plausible, then I haven’t done a good enough job of portraying that future.

Magic Leap is creating a wearable product which may or may not be called Dragonstone glasses and which may or may not be a combination of light field technology – like that used in the Lytro camera – and depth detection – like the Kinect sensor. They are very secretive about what they are doing exactly. When Magic Leap CEO Rony Abovitz talks about his product, however, he speaks in code about what it is and what it isn’t.

In an interview with David Lidsky, Abovitz let slip that Dragonstone is “not holography, it’s not stereoscopic 3-D. You don’t need a giant robot to hold it over your head, you don’t need to be at home to use it. It’s not made from off-the-shelf parts. It’s not a cellphone in a View-Master.” At first reading, this seems like a quick swipe at Oculus Rift, the non-mobile, stereoscopic virtual reality solution built from consumer parts by Oculus VR and, secondarily, Samsung Gear VR, the mobile add-on to Samsung’s Galaxy Note 4 that turns it into a virtual reality device with stereoscopic audio. Dig a little deeper, however, and it’s apparent that his grand sweep of dismissal takes in a long list of digital reality plays over the years.

Let’s start with holography. Actually, let’s start with a very specific hologram.


The 1977 holographic chess game from Star Wars is the precursor to both virtual and augmented reality as we think of them – for convenience, I am including them all under the “digital reality” rubric. No child saw this and didn’t want it. From George Lucas’s imaginative leap, we already see an essential aspect of the digital experience we crave that differentiates it from the actual technology we have. Actual holography involves a frame that we view the virtual image through. In Lucas’s vision, however, the holograms take up space and have a location.


What’s intriguing about the Star Wars scene is that, as a piece of film magic, the technology behind the chess game wasn’t particularly innovative. It’s pretty much the same stop-motion technique Ray Harryhausen and others had been using since the ’50s: superimposing an animated scene over a live one. The difference comes in how George Lucas incorporates it into the story. Whereas all the earlier films that mixed live and animated sequences sought to create the illusion that the monsters were real, in the holographic chess scene it is clear that they are not – for instance, because they are semi-transparent. Because the elements of the chess game are explicitly not real within the movie narrative – unlike Wookiees, Hutts and tauntauns – they are suddenly much more interesting. They are something we can potentially recreate.


The difference between virtual reality and augmented reality is similarly one of context. Which is which depends on how we, as the observer, are related to the digital experience. In the case of augmented reality, the context is the real world into which digital objects are inserted. An example of this occurs in Empire Strikes Back [1980], where the binoculars on Hoth provide additional information presented as an overlay on the real world.

The popular conception of virtual reality, as opposed to the technical accomplishment, probably dates to the publication of William Gibson’s Neuromancer in 1984. Gibson’s “cyberspace” is a fully digital immersive world. Unlike augmented reality where the context is our reality, in cyberspace the context is a digital space into which we, as observers and participants, are superimposed.


To schematize the difference: in augmented reality, reality is the background and digital content is in the foreground; in virtual reality, the background that we perceive is digital while the foreground is a combination of digital and actual objects. I find this to be a clean way of distinguishing the two, and preferable to the tendency to distinguish them by degree of immersion. To the extent that contemporary VR is built around improving the video game experience, POV games have as a goal the creation of increasingly realistic worlds – but what is more realistic than the real world? On the other side, augmented reality, when done right, has the potential to be incredibly immersive.


We can subdivide augmented reality even further. We’ll actually need to in order to elucidate why AR in Magic Leap is different from AR in Google Glass. Overlaying digital content on top of reality can take several forms and tends to fall along two axes. An AR experience is either POV or non-POV. It can also be either informational or interactive.


Augmented reality in the POV-informational quadrant is often called Terminator Vision, after the 1984 sci-fi film in which an Austrian bodybuilder gets augmented vision. I’m not sure why a computer, the Terminator, would need a display to present data to itself, but in terms of the narrative it does wonders for the audience. It gives a completely false sense of what it must be like to think like a computer.


Experiences in the non-POV-informational quadrant are typically called heads-up displays, or HUDs. They have their source in military applications but are probably best known from first-person shooters, where the viewpoint is tied to objects like windshields or gun-sights rather than to the point of view of the player. They also don’t take up the entire view, and consequently we can look away from them – unlike Terminator Vision. Google Glass is actually an example of a HUD – though it is sometimes mistaken for Terminator Vision – since the display only fills the right corner of the visual field.


Non-POV interactive experiences can be either magic-mirror experiences or hand-held games and advertisements involving fiducials. This is a common way of creating augmented reality experiences for the iPad and smartphones. The device camera is pointed toward a fiducial, such as a picture in a catalog, and a 3D model is layered over the video returned by the camera. Interestingly, Qualcomm, one of the backers in Magic Leap’s recent round of funding, is also a leader in developing tools for this type of AR experience.
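As a rough illustration of what these SDKs do for you – with MarkerDetector as a purely hypothetical stand-in for the actual tracking library – the render loop amounts to pinning a virtual model to the marker’s estimated pose every frame:

```csharp
// A minimal conceptual sketch of fiducial-based AR. MarkerDetector is a
// hypothetical stand-in for whatever SDK actually estimates the printed
// marker's pose from the camera image each frame.
using UnityEngine;

public abstract class MarkerDetector : MonoBehaviour
{
    // Implemented by a real tracking library; returns true while the marker is visible.
    public abstract bool TryGetMarkerPose(out Vector3 position, out Quaternion rotation);
}

public class FiducialOverlay : MonoBehaviour
{
    public Transform VirtualModel;   // the 3D model layered over the camera video
    public MarkerDetector Detector;  // hypothetical pose-estimation component

    private void Update()
    {
        Vector3 position;
        Quaternion rotation;

        if (Detector.TryGetMarkerPose(out position, out rotation))
        {
            // Pin the model to the marker so it appears glued to the catalog page.
            VirtualModel.gameObject.SetActive(true);
            VirtualModel.position = position;
            VirtualModel.rotation = rotation;
        }
        else
        {
            VirtualModel.gameObject.SetActive(false); // hide the model when the marker is lost
        }
    }
}
```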


POV interactive, the final quadrant, is where Magic Leap falls. I don’t need to describe it because its exemplar is the sort of experience that Rony Abovitz says Dragonstone is not – the hologram from Star Wars. The difference is that where Abovitz is referring to the sort of holography we can do in actual reality, Magic Leap’s technology is the kind of holography that, so far, we have only been able to do in the movies.

If you examine the two images I’ve included from Star Wars IV, you’ll notice that the holograms are seen not from a single point of view but from multiple points of view. This is a feature of persistent augmented reality. The digital AR objects virtually exist in a real-world location and exist that way for multiple people. Even though Luke and Ben have different photons shooting at their eyes displaying the image of Leia from different perspectives, they are nevertheless looking at the same virtual Princess.

This kind of persistence, and the sort of additional technology required to make it work, helps to explain part of the reason Google is interested in it. Google, as we know, already has its own augmented reality play. Where Google brings something new to a POV interactive AR experience is in its expertise in geolocation, without which persistent AR entities would be much harder to create.
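Just to make the idea concrete, here is an entirely speculative sketch of the minimal data a persistent, geolocated AR entity might carry – a globally shared identity and pose that every viewer’s device resolves into its own rendering:

```csharp
// A hypothetical sketch of a persistent, shared AR entity: everyone who looks
// at this location sees the same virtual object, rendered from their own viewpoint.
using System;

[Serializable]
public class PersistentHologram
{
    public Guid Id;              // stable identity shared by all viewers
    public double Latitude;      // where the entity "lives" in the real world
    public double Longitude;
    public double AltitudeMeters;
    public float HeadingDegrees; // orientation relative to true north
    public string ModelUri;      // the content to render at that location
}
```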

This sort of AR experience does not necessarily imply the use of glasses. We don’t know what sort of pseudo-technology is used in the Star Wars universe, but there are indications that it is some sort of projection. In Vernor Vinge’s sci-fi novel Rainbows End [2006], persistent augmented reality is projected on microscopic filaments that people experience without wearables.

Because Magic Leap is creating the experience inside a wearable close-range display, i.e. glasses, additional tricks are required. In addition to geolocation – which is only a guess at this point – it will also require some sort of depth sensor to determine whether real-world objects are located between the viewer and the virtual object’s location. If they are, then the occlusion of the virtual entity has to be simulated in the visualization – basically, a chunk has to be cut out of the image.
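The masking step itself is conceptually simple. Here is a bare-bones sketch, assuming you already have per-pixel depth for the real world (from the depth sensor) and for the virtual object (from the renderer):

```csharp
// A rough sketch of the occlusion test described above: a hologram pixel is
// drawn only if nothing real sits between it and the viewer.
public static class OcclusionMask
{
    // virtualDepth / realDepth: distance from the viewer, in meters, per pixel.
    // Returns true for each pixel where the hologram should be visible.
    public static bool[] Compute(float[] virtualDepth, float[] realDepth)
    {
        var visible = new bool[virtualDepth.Length];
        for (int i = 0; i < virtualDepth.Length; i++)
        {
            // Cut the pixel out of the image if a real object is closer to the eye.
            visible[i] = virtualDepth[i] <= realDepth[i];
        }
        return visible;
    }
}
```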


If I have described the Magic Leap technology correctly – and there’s a good chance I have not, given the secretiveness around it – then what we are looking at seven years out is a world in which everything we see is constantly being photoshopped in real time. At a basic level, this fulfills the Magic Leap promise to re-enchant the world with digital entities, and it also makes sense of their promotional materials.

There are also some interesting side effects. For one, an augmented world would effectively turn everything and everyone into a potential billboard. Given Google’s participation, this seems even more likely. As with the web, advertisements will pay for the content that populates an augmented reality world. And as with the web and mobile devices, the same geolocation that makes targeted content possible may also be used to track our behavior.


There are additional social consequences. Many strange aspects of online behavior may make their way into our world. Pseudo-anonymity, which can encourage bad behavior in good people, may become a larger aspect of our world. Instead of appearing as themselves, people may prefer enhanced versions of themselves, or even avatars.


In seven years, it may become normal to sit across a conference table from a giant rabbit and Master Chief discussing business strategies. Constant self-reinvention, which is a hallmark of the online experience, may become even more prevalent. In turn, reputation systems may also become more common as a way to curb the problems associated with anonymity. Liking someone I pass in the street may become much more literal.


There is also, however, the cool stuff. Technology, despite all the frequent articles to the contrary, has the power to bring people together. Imagine one day being able to share an indigenous festival with loved ones who live thousands of miles away. My eleven-year-old daughter has grown up with friends from around the world whom she has met online. Technology allows her not only to chat with them over text, but also to speak with them while she is doing chores or walking around the house. Yet she has never met any of them. In seven years, we may live in a world where physical distance no longer implies emotional distance and where sitting around chatting face-to-face with someone you have never actually met face-to-face does not seem at all strange.

For me, Magic Leap points to a future where physical limitations are no longer limitations in reality.