Why are the best Augmented Reality Experiences inside of Virtual Reality Experiences?

[Image: Elite: Dangerous cockpit screenshot]

I’ve been playing the Kickstarted space simulation game Elite: Dangerous for the past several weeks with the Oculus Rift DK2. Totally work related, of course.

Basically I’ve had the DK2 since Christmas and had been looking for a really good game to go with my device (rather than the other way around). After shelling out $350 for the goggles, $60 more for a game didn’t seem like such a big deal.

In fact, playing Elite: Dangerous with the Oculus and an Xbox One gamepad has been one of the best gaming experiences I have ever had in my life – and I’m someone who played E.T. on the Atari 2600 when it first came out, so I know what I’m talking about, yo. It is a fully realized Virtual Reality environment that allows me to fly through a full simulation of the galaxy based on current astronomical data. When I am in the simulation, I objectively know that I am playing a game. However, all of my peripheral awareness and background reactions seem to treat the simulation as if it were real. My sense of space changes and my awareness expands into the virtual space of the simulation. While I don’t mistake the VR experience for reality, I nevertheless experience a strong suspension of disbelief when I am inside of it.

[Image: Elite: Dangerous cockpit with holographic menus]

One of the things I’ve found fascinating about this Virtual Reality simulation is that it is full of Augmented Reality objects. For instance, the two menu bars at the top of the screencap above, to the top left and the top right, are full holograms. When I move my head around, parallax effects show that their positions are anchored to the cockpit rather than to my personal perspective. If the VR goggles allowed it, I could even lean forward and look at the backside of those menus. Interestingly, when the game is played in normal 3D first-person mode rather than in VR with the Oculus, those menus are rendered as head-up displays and are anchored to my point of view as I use the mouse to look around the cockpit – in much the same way that Google Glass anchored menus to the viewer instead of the viewed.
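The difference between the two behaviors really comes down to what the menu object is parented to. Here is a purely illustrative sketch – TypeScript with three.js as a stand-in engine (Elite: Dangerous obviously isn’t built on it), and the object names are mine, not the game’s:

```typescript
import * as THREE from 'three';

// A stand-in menu panel: a simple translucent quad.
const menuPanel = new THREE.Mesh(
  new THREE.PlaneGeometry(0.4, 0.15),
  new THREE.MeshBasicMaterial({ color: 0x33ccff, transparent: true, opacity: 0.6 })
);

const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 100);

// Hologram-style anchoring (Elite in VR): parent the menu to the cockpit.
// The panel keeps a fixed spot in the world, so moving your head produces
// parallax, and in principle you could lean around and see its back side.
const cockpit = new THREE.Group();
cockpit.add(menuPanel);
menuPanel.position.set(-0.5, 0.4, -1.0); // a fixed spot above the dashboard

// HUD-style anchoring (flat-screen Elite, or Google Glass): parent the menu
// to the camera instead, so it rides along with the viewer's point of view
// no matter where they look.
// camera.add(menuPanel);
// menuPanel.position.set(-0.3, 0.25, -1.0); // same offset, now head-locked
```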

The navigation objects on the dashboard in front of me are also AR holograms. Their locations are anchored to the cockpit rather than to me, and when I move around I can see them at different angles. At the same time, they exhibit a combination of glow and transparency that isn’t common to real-world objects and that we have come to recognize, from sci-fi movies, as the inherent characteristics of holograms.

I realized at about the 60-hour mark of my gameplay / research that one of the current opportunities, as well as problems, with AR devices like the Magic Leap and HoloLens is that not many people know how to develop UX for them. This was actually one of the points of a panel discussion concerning HoloLens at the recent BUILD conference. The field is wide open. At the same time, UX research is clearly already being done inside VR experiences like Elite: Dangerous. The hologram-based control panel at the front of the cockpit is a working example of how to design navigation tools using augmented reality.

[Image: Elite: Dangerous gaze-activated cockpit menu]

One of the remarkable features of the HoloLens is the use of gaze as an input vector for human-computer interactions. Elite: Dangerous, however, has already implemented it. When the player looks at certain areas of the cockpit, complex menus like the one shown in the screencap above pop into existence. When one removes one’s direct gaze, the menu vanishes. If this were a usability test for gaze-based UI, Elite: Dangerous would already have collected hours of excellent data from thousands of players to verify whether this is an effective new interaction (in my experience, it totally is, btw). This is also exactly the sort of testing that we know will need to be done over the next few years in order to firm up and conventionalize AR interactions. By happenstance, VR designers are already doing this for AR before AR is even really on the market.
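The mechanics behind that interaction are simple enough to prototype. Below is a minimal, hypothetical version – again TypeScript with three.js as my stand-in, and hotZone and menu are invented names, not anything from the game: each frame, cast a ray straight out of the center of the headset’s view and show the detailed panel only while the ray lands on that region of the cockpit.

```typescript
import * as THREE from 'three';

// Reuse one raycaster; (0, 0) in normalized device coordinates is the
// center of the view, i.e. "straight ahead" of the headset.
const raycaster = new THREE.Raycaster();
const center = new THREE.Vector2(0, 0);

function updateGazeMenu(
  camera: THREE.Camera,
  hotZone: THREE.Object3D, // invisible collider over, say, the left console
  menu: THREE.Object3D     // the complex panel that pops into existence
): void {
  raycaster.setFromCamera(center, camera);
  const isLookingAtIt = raycaster.intersectObject(hotZone, true).length > 0;
  menu.visible = isLookingAtIt; // appears under direct gaze, vanishes otherwise
}
```

Called once per rendered frame, this is the whole pattern; a short dwell time or debounce would make it less jumpy, but at bottom it is just a gaze raycast toggling visibility.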

[Image: Sword Art Online]

The other place where augmented reality interaction design research is being carried out is Japanese anime. The image above is from a series called Sword Art Online. When I think of VR movies, I think of The Matrix. When I put my children into my Oculus, however, they immediately connected it to SAO. SAO is about a group of beta testers for a new MMORPG, played through virtual reality goggles, who become trapped inside the game due to the evil machinations of one of its developers. While the setting of the VR world is medieval, players still interact with in-game AR control panels.

[Image: Sword Art Online menu interface]

Consider why this makes sense when we ask the hologram versus head-up display question. If the menu is anchored to our POV, it becomes difficult to actually touch menu items. They will move around and jitter as the player looks around. In this case, a hologram anchored to the world rather than to the player makes a lot more sense. The player can process the consistent position of the menu and anticipate where she needs to place her fingers in order to interact with it. Sword Art Online effectively provides what Bill Buxton describes as a UX sketch for interactions of this sort.

On an intellectual level, consider how many overlapping interaction metaphors are involved in the above sketch. We have a 1) GUI-based menu system transposed to 2) touch (no right clicking) interactions, then expressed as 3) an augmented reality experience placed inside of 4) a virtual reality experience (and communicated inside a cartoon).

Why is all of this possible? Why are the best augmented reality experiences inside of virtual reality experiences and cartoons? I think it has to do with cost of execution. Illustrating an augmented reality experience in an anime is not really any more difficult than illustrating a field of grass or a cute yellow gerbil-like character. The labor costs are the same. The difficulty is only in the conceptualization.

Similarly, throwing a hologram into a virtual reality experience is not going to be any more difficult than throwing a tree or a statue into the VR world. You just add some shaders to create the right transparency-glowy-pulsing effect and you have a hologram. No additional work has to be done to marry the stereoscopic convergence of hologram objects with the focal position of real-world locations, as is required for really good AR. In the VR world, these two things – the hologram world and the VR world – are collapsed into one thing.
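To make the “just add some shaders” point concrete, here is roughly what the transparency-glowy-pulsing recipe looks like – a sketch in TypeScript with three.js as a stand-in, with colors and intensities that are my guesses rather than anything taken from an actual title:

```typescript
import * as THREE from 'three';

// A material that reads as "hologram": partially transparent, additively
// blended so it glows over whatever is behind it, with an emissive tint.
const hologramMaterial = new THREE.MeshStandardMaterial({
  color: 0x66e0ff,
  emissive: 0x2299ff,
  transparent: true,
  opacity: 0.45,
  blending: THREE.AdditiveBlending,
  depthWrite: false, // let geometry behind the hologram remain visible
});

// Pulse the glow over time (elapsed is seconds since startup), called
// once per frame from the render loop.
function pulseHologram(elapsed: number): void {
  hologramMaterial.emissiveIntensity = 1.0 + 0.3 * Math.sin(elapsed * 2.0);
}
```

Apply that material to any mesh in the scene and it becomes a “hologram”; no registration against the real world is required, because the world it sits in is already synthetic.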

There has been a tendency to see virtual reality and mixed reality as opposed technologies. What I have learned from playing with both, however, is that they are actually complementary technologies. While we wait for AR devices to be released by Microsoft, Magic Leap, and others, it makes sense to jump into VR as a way to start understanding how humans will interact with digital objects and how we must design for these interactions. Additionally, because of the simplification involved in creating AR for VR rather than AR for reality, it is likely that VR will continue to hold a place in the design workflow for prototyping our AR experiences even years from now, when AR becomes not only a reality but an integral thread in the fabric of reality.

On The Gaze as an input device for HoloLens

[Image: Microsoft HoloLens hardware anatomy, from Build]

The Kinect sensor and other NUI devices have introduced an array of newish interaction patterns between humans and computers: tap, touch, speech, finger tracking, body gestures. The HoloLens provides us with a new method of interaction that hasn’t been covered extensively from a UX perspective before: The Gaze. The HoloLens tracks where the wearer is looking in order to help users of the AR device target and activate menus.

Questions immediately arise as to the role this will play in surveillance culture, and even more in the surveillance of surveillance culture. While sensors track our gaze, will they similarly inform us about the gaze of others? Will we one day receive notifications that someone is checking us out? Quis custodiet ipsos custodes? To the eternal question of who watches the watchers, we finally have an answer. HoloLens does.

[Image: Jacques Lacan’s diagram of the gaze]

Even though The Gaze has not been analyzed deeply from a UX perspective, it has been the object of profound study from a phenomenological and a structuralist point of view. In this post I want to introduce you to five philosophical treatments of The Gaze covering the psychological, the social, the cinematic, the ethical and the romantic. To start, the diagram above is not from an HCI book as one might reasonably assume but rather from a monograph by the psychoanalyst Jacques Lacan.

[Image: Hans Holbein, The Ambassadors]

A distinction is often drawn between Lacan’s early studies of The Gaze and his later conclusions about it. The early work relates it to a “mirror stage” of self-awareness and concerns the gaze when directed to ourselves rather than to others:

“This event can take place … from the age of six months on; its repetition has often given me pause to reflect upon the striking spectacle of a nursling in front of a mirror who has not yet mastered walking or even standing, but who … overcomes, in a flutter of jubilant activity, the constraints of his prop in order to adopt a slightly leaning-forward position and take in an instantaneous view of the image in order to fix it in his mind.”

This notion flowered in the later work The Split Between the Eye and the Gaze into a theory of narcissism in which the subject sees himself/herself as an objet petit a (a technical term for an object of desire) through the distancing effect of the gaze. Through this distancing, the subject also becomes alienated from itself. What is probably essential for us in this work – as students of emerging technologies – is the notion that the human gaze is emotionally distancing. This observation was later taken up in post-colonial theory as the “Imperial Gaze” and in feminist theory as “objectification.”

[Image: eye tracking]

Michel Foucault achieved fame as a champion of the constructivist interpretation of truth, but it is often forgotten that he was also an historian of science. A major theme in his work is the way in which the gaze of the other affects and shapes us – in particular the “scientific gaze.” Being watched causes us discomfort and we change our behavior – sometimes even our perception of who we are – in response to it. The grand image Foucault raises to encapsulate this notion is Jeremy Bentham’s Panopticon, a circular prison designed so that every inmate can be observed at any moment without ever knowing whether he is actually being watched.

[Image: The Household of Philip IV, or Las Meninas, by Juan Bautista Martínez del Mazo (c. 1612/15–1667) after Diego Velázquez (1599–1660), Kingston Lacy, Dorset]

Where Lacan concentrates on the self-gaze and Foucault on the way the gaze makes us feel, Slavoj Zizek is concerned with the appearance of The Gaze when we gaze back at it. He writes in an essay called “Why Does the Phallus Appear?” from the collection Enjoy Your Symptom:

[Image: Shane Walsh (Jon Bernthal), The Walking Dead, Season 2, Episode 12. Photo credit: Gene Page/AMC]

“Let us take the ‘phantom of the opera,’ undoubtedly mass culture’s most renowned specter, which has kept the popular imagination occupied from Gaston Leroux’s novel at the turn of the century through a series of movie and television versions up to the recent triumphant musical: in what consists, on a closer look, the repulsive horror of his face? The features which define it are four:

“1) the eyes: ‘his eyes are so deep that you can hardly see the fixed pupils. All you can see is two big black holes, as in a dead man’s skull.’ To a connoisseur of Alfred Hitchcock, this image instantly recalls The Birds, namely the corpse with the pecked-out eyes upon which Mitch’s mother (Jessica Tandy) stumbles in a lonely farmhouse, its sight causing her to emit a silent scream. When, occasionally, we do catch the sparkle of these eyes, they seem like two candles lit deep within the head, perceivable only in the dark: these two lights somehow at odds with the head’s surface, like lanterns burning at night in a lonely, abandoned house, are responsible for the uncanny effect of the ‘living dead.’”

Obviously whatever Zizek says about the phantom of the opera applies equally well to The Walking Dead. What ultimately distinguishes vampires, zombies, demons and ghosts lies in the way they gaze at us.

While Zizek finds in the eyes a locus for inhumanity, the ethicist Emmanuel Levinas believes this is where our humanity resides. These two notions actually complement each other, since what Zizek identifies as disturbing is the inability to project a human mind behind a vacant stare. As Levinas says, in a difficult and metaphysical way, in his masterpiece Totality and Infinity:

“The presentation of the face, expression, does not disclose an inward world previously closed, adding thus a new region to comprehend or to take over. On the contrary, it calls to me above and beyond the given that speech already puts in common among us…. The third party looks at me in the eyes of the Other – language is justice. It is not that there first would be the face, and then the being it manifests or expresses would concern himself with justice; the epiphany of the face qua face opens humanity…. Like a shunt every social relation leads back to the presentation of the other to the same without the intermediary of any image or sign, solely by the expression of the face.”

The face and the gaze of the other imply a demand upon us. For Levinas, unlike Foucault, this demand isn’t simply a demand to behave according to norms but more broadly posits an existential command. The face of the other asks us implicitly to do the right thing: it demands justice.

[Image: Renaissance optics diagram]

The final aspect of the gaze to be discussed – though probably the first aspect to occur to the reader – is the gaze of love, i.e. love at first sight. This was a particular interest of the scholar Ioan P. Couliano. In his book Eros and Magic in the Renaissance, Couliano examines old medical theories about falling in love and cures for infatuation and obsession. He relates this to Renaissance theories about pneuma [spiritus, phantasma], which was believed to be a pervasive fluid that allowed objects to be sensed through apparently empty air and transmitted to the brain and the heart. In this regard, Couliano raises a question that would only make sense to a true Renaissance man: “How does a woman, who is so big, penetrate the eyes, which are so small?” He quotes the 13th-century physician Bernard of Gordon:

[Image: Leopold von Sacher-Masoch with Fannie]

“The illness called ‘hereos’ is melancholy anguish caused by love for a woman. The ‘cause’ of this affliction lies in the corruption of the faculty to evaluate, due to a figure and a face that have made a very strong impression. When a man is in love with a woman, he thinks exaggeratedly of her figure, her face, her behavior, believing her to be the most beautiful, the most worthy of respect, the most extraordinary with the best build in body and soul, that there can be. This is why he desires her passionately, forgetting all sense of proportion and common sense, and thinks that, if he could satisfy his desire, he would be happy. To so great an extent is his judgment distorted that he constantly thinks of the woman’s figure and abandons all his activities so that, if someone speaks to him, he hardly hears him.”

And here is Couliano’s gloss of Bernard’s text:

[Image: Diego Velázquez, The Rokeby Venus]

“If we closely examine Bernard of Gordon’s long description of ‘amor hereos,’ we observe that it deals with a phantasmic infection finding expression in the subject’s melancholic wasting away, except for the eyes. Why are the eyes excepted? Because the very image of the woman has entered the spirit through the eyes and, through the optic nerve, has been transmitted to the sensory spirit that forms common sense…. If the eyes do not partake of the organism’s general decay, it is because the spirit uses those corporeal apertures to try to reestablish contact with the object that was converted into the obsessing phantasm: the woman.”

 

[As an apology and a warning, I want to draw your attention to the use of ocular vocabulary such as “perspective,” “point of view,” “in this regard,” etc. Ocular phrases are so pervasive in the English language that it is nearly impossible to avoid them and it would be more awkward to try to do so than it is to use them without comment. If you intend to speak about visual imagery, take my advice and pun proudly and without apology – for you will see that you have no real choice in the matter.]

HoloCoding resources from Build 2015


The HoloLens team had stayed extremely quiet over the past 100 days in order to have a greater impact at Build. Alex Kipman was the cleanup batter in the first-day keynote at Build, with an amazing overview of realistic HoloLens scenarios. This was followed by HoloLens demos as well as private tutorials on using HoloLens with a version of Unity 3D. Finally, there were sessions on HoloLens and a pre-recorded session on using the Kinect with the brand new RoomAlive Toolkit.


Here are some useful links:

 

There were two things I found particularly interesting in Alex Kipman’s first-day keynote presentation.

[Image: Microsoft HoloLens hardware anatomy, from Build]

The first was the ability of the onstage video to actually capture what was being shown through the HoloLens, but from a different perspective. The third-person view of what the person wearing the HoloLens was seeing even worked when the camera moved around the room. Was this just brilliant After Effects work perfectly synced to the action onstage? Or were we seeing a HoloLens-enabled camera at work? If the latter, this might be even more impressive than the HoloLens itself.

[Image: Mission: Impossible trailer]

Second, when demonstrating the ability to pin movies to the wall using HoloLens gestures, why was the new Mission Impossible trailer used for the example? Wouldn’t something from, say, The Matrix have been much more appropriate?

Perhaps it was just a licensing issue, but I like to think there was a subtle nod to the inexplicable and indirect role Tom Cruise has played in the advancement of Microsoft’s holo-technologies. Minority Report, and the image of Cruise wearing biking gloves with his arms raised in the air, conductor-like, was the single most powerful image invoked when Microsoft first introduced the Kinect sensor. As most people know by now, Alex Kipman was the man responsible not only for carrying the Kinect (née Project Natal) to success but also, now, for guiding the development of the HoloLens. Perhaps showing Tom Cruise onstage at Build was a subtle nod to this implicit relationship.