Immersion and the Star Wars Galactic Starcruiser


In the second week of March, I took my family to the Galactic Starcruiser at Disneyworld in Orlando, Florida, informally known as the Star Wars Hotel. The Starcruiser is a two-night immersive Star Wars experience with integrated storylines, themed meals, costumes, rides and games. For those familiar with the Disneyworld vacation experience, it should be pointed out that even though it is adjacent to the Star Wars themed Galaxy’s Edge area in Hollywood Studios, it isn’t a resort hotel. Instead, it can best be thought of as a ride in itself.


The design is that of a cruise ship, with a dining hall and helm in the “front”, an engine room in the “back”, and a space bar off of the main muster area. The NPCs and the staff never break character, but work hard to maintain the illusion that we are all on a real space cruise. Besides humans, the “crew” is also staffed with aliens and robots – two essential aspects of Star Wars theming.


In line with the cruise experience, you even take a one-day excursion to a nearby alien planet. I’ve had trouble writing about this experience because it felt highly personal, lighting up areas of my child brain that were set aside for space travel fantasies. At the same time, it is also very nerdy, and the intersection of the highly nerdy and the highly personal is dangerous territory. Nevertheless, it being May 4th today, I felt I could no longer put it off.

How do you do Immersion?

“Immersion” is the touchstone for what people and tech companies are calling the Metaverse. Part of this is a carryover from VR pitching, where it was key to explaining why being inside a virtual reality experience was different and better than simply playing a 3D video game with a flat screen and a controller.


But the term “immersion” hides as much as it reveals. How can “immersion” be a distinguishing feature of virtual reality when it is already a built-in aspect of real reality? What makes for effective immersion? What are the benefits of immersion? Why would anyone pay to be immersed in someone else’s reality? Is immersion a way of telling a story, or is storyline a component of an immersive experience?


A Russian doll aspect of the Starcruiser is the “Simulation Room” which, in the storyline of the ship, is an augmented area that recreates the climate of the planet the ship is headed toward. The room is equipped with an open roof, which happens to perfectly simulate the weather in central Florida. The room also happens to be where the Saja scholars provide instruction on Jedi history and philosophy.

Space Shrimp (finding the familiar in the unfamiliar)

I’m the sort of person who finds it hard to ever be present in the moment. I’m either anticipating and planning for the next day, the next week, the next few years, or I am reliving events from the past which I wish had gone better (or wish had never happened at all).


For two and a half days on this trip, I was fully captivated by the imaginary world I was living through. There wasn’t a moment after about the first hour when I was thinking about anything but the mission I was on and the details of the world I was in. I didn’t feel tempted to check my phone or to find out what was happening in the outside world.


An immersive experience, it seems to me, is one that can make you forget about the world in this way, by replacing it with a more captivating world and not letting go of you. I’ve been going over in my head the details of the Star Wars experience that make this work, and I think the blue shrimp we had for dinner one night is the perfect metaphor for how Disney accomplishes immersion.


To create immersion, there can be nothing that references the outside world. The immersive experience must be self-contained, and everyone in it, from cabin boy to captain, must reference only things inside the world of the Starcruiser. Fortunately, Star Wars is a pre-designed universe. This helps in providing the various details that are self-referential and remind us of the world of the movies rather than the world of the world.


A great example of this is the industrial overhead shower spout and the frosted glass sliding shower door in our cabin. They are small details, but they harken back to the design aesthetic of the Star Wars movies, which contain, surprisingly, a lot of blue-tinted frosted glass.


This extends to the food. All the food is themed, in a deconstructionist tour de force, to appear twisted and alien. We drank blue milk and ate bantha steaks. We feasted on green milk and salads made from the vegetation found on the planet Felucia.


And here there is a difficulty. Humans have a built-in sense of disgust toward strange foods that at some point protected our ancestors from accidentally poisoning themselves. And so each item of food had to indicate, through its appearance or the name given on the menu, what it was an analog of in the real world. I often found myself unable to enjoy a dish until I could identify what it was meant to be (the lobster bisque was especially difficult to identify).


What I took from this was that for immersion to work, things have to be self-referential but cannot be totally unfamiliar. As strange as each dish looked, it had to be, like the blue shrimp, analogous to something people knew from the real world outside the ship. Without these analogical connections, the food tends to create aversion and anxiety instead of the intended sense of immersion.


One takeaway is that as odd as the food sometimes looked, the food analogs were always meals familiar to Americans. Things common to other parts of the world, like chicken feet or durian fruit or balut, would not go over well even though they taste good (to many people).


A second takeaway is that the galactic food has to be really, really good. In modern American cuisine, it is typical to provide the story behind the food, explaining each ingredient’s purpose, where it comes from and how to use it in the dish (is it a salad or a garnish?). The galactic food can’t provide these value-add story points and has only fictitious ones.


In the case of the food served on the Starcruiser, then, each dish has to stand on its own merits, without the usual restaurant storytelling elements that contribute to the overall sense that you are eating something expensive and worthy of that expense. Instead, each dish requires us to taste, smell, and feel the food in our mouths and decide if we like it or not. I don’t think I’ve ever had to do that before.

World building (decrepit futurism)

The world of Star Wars is one of decrepit futurism. It is a world of wonders in decline.


There are other kinds of futurism, like the streamlined retro-futurism of the 30s and 50s or contemporary Afro-futurism. The decrepit futurism of Star Wars takes a utopian society and dirties it up, both aesthetically and morally. The original Star Wars starts off with the dissolution of the Senate, marking a political decline. George Lucas doubles down on this in the prequels, making it also a spiritual decline in which the Jedi are corrupted by a malignant influence and end up bringing about the fall of their own order. The story of the sequels (the period in which the galactic space voyage takes place) is about the difficulty, and maybe the impossibility, of restoring the universe to its once great heights.


As beautiful and polished as all the surfaces are on the Starcruiser, the ship is over 200 years old and has undergone massive renovations. Despite this, the engines continue to give trouble (which you get to help fix). Meanwhile, the political situation in the galaxy in general, and on the destination planet in particular, is fraught, demanding that voyagers choose which set of storylines they will pursue. Will they help the Resistance or be complicit with the First Order? Or will they opt out of this choice and instead be a Han Solo-like rogue pursuing profit amid the disorder?


The metaphysics of the Star Wars universe is essentially fallibilist and flawed – which in turn opens the way for moral growth and discovery.

The decrepit futurism of Star Wars has always seemed to me to be one of the things that makes it work best, because it artfully dodges the question of why things aren’t better in a technologically advanced society. Decrepit futurism says that things once were better (our preconceptions of what the future and progress entail are preserved) but have fallen from that state of grace through a Sith corruption. In falling short, the future comes down to the level where the rest of us live.


It’s also probably why Luke, in the last trilogy, never gets to be the sort of teacher we hoped he would be to Rey. The only notion we have of the greatness and wisdom of a true Jedi master comes from the glimpses we get of Yoda in The Empire Strikes Back, and he is only able to achieve that level of wisdom by losing everything. Greatness in Star Wars is always something implied but never seen.

Storytelling (narrative as an organizing principle)

Much is made of storytelling and narrative in the world of immersive experiences. Some people talk as if immersion is simply a medium for storytelling – but I think it is the other way around. Immersion is created out of world building and design that distract us from our real lives; storytelling is only the third piece of immersion.


But one thing I discovered on the Galactic Starcruiser is that the stories in an immersive experience don’t have to be all that great – they don’t have to have the depth of a Dostoevsky novel. Instead they can be at the level of a typical MMORPG. They can be as simple as go into the basement and kill rats to get more information. Hack a wall terminal to get a new mission. Follow the McGuffin to advance the storyline.


Narrative in an immersive experience is not magic. It’s just a way of organizing time and actions for people, much the way mathematical formulas organize the relationship between numbers or physics theorems organize the interactions of physical bodies. Narratives help us keep the thread while lots of other things are going on around us.

The main difficulty of a live theater narrative, like the one on the Starcruiser, is that the multiple storylines have to work well together, and work even if people are not always paying attention or are following multiple plots at the same time. Additionally, at some point, all of the storylines must converge. In this case, keeping things simple is probably the only way to go.


Crafting a narrative for immersive experiences, it seems to me, is a craft rather than an art. It doesn’t have to provide any revelations or tell us truths about ourselves. It just has to get people from one place in time to another.

The real art, of course, is that exercised by the actors, who must tell these stories over and over and improvise when guests throw them a curve ball, all while keeping within the general outline of the overarching narrative. Being able to do this for three days at a time is a special gift.

Westworld vs the Metaverse (what is immersion)

Using the Galactic Starcruiser as the exemplar of an immersive experience, I wanted to go back to the question of how immersion in VR is different from immersion in reality. To put it another way, what is the difference between Westworld and the Metaverse?


There seems to be something people are after when they get excited about the Metaverse, and I think it is, at bottom, the ability to simulate a fantasy. Back when robots were all the rage (about the time Star Wars was originally made in the 70s), Michael Crichton captured this desire for fantasy in his film Westworld. The circle of reference is complete when one realizes that Crichton based his robots on the animatronics at Disneyland and Disneyworld.


So what’s the difference between Westworld and the Metaverse? One of the complaints about the Metaverse (and more specifically VR) is that the lack of haptic feedback diminishes the experience. The real world, of course, is full of haptic feedback. More than this, it is also full of flavors and smells, which you cannot currently get from the Metaverse. It can also be full of people who can improvise around your personal choices so that the experience never glitches. This provides a more open-world type of experience, whereas the Metaverse as it currently stands will have a lot of experiences on rails.

From all this, it seems as if the Metaverse aspires to be Westworld (or even the Galactic Starcruiser) but inevitably falls short sensuously and dynamically.


The outstanding thing about the Metaverse, though, is that it can be mass-produced – precisely because it is digital and not real. The Starcruiser is prohibitively expensive dinner theater, which I was able to pull off through some dumb luck with cryptocurrencies. It’s wonderful, and if you can afford it I highly encourage you to go on that voyage into your childhood.


The Metaverse, on the other hand, is Westworld-style immersion for the masses. The barrier to entry for VR is relatively low compared to a real immersive experience. Now all we have to do is get the world building, design, and storylines right.

I’m going on a cruise


In mid-March, I’ll be headed to the planet Batuu with my family on the galactic starcruiser Halcyon.

In other worlds, I’ll be staying at the new Disneyworld Star Wars hotel in Orlando, Florida. We’ve wanted to go since the concept for the hotel was first announced several years ago. In part, this is because my wife and I are both of an age that the original Star Wars trilogy had a huge impact on how we saw things and approached even fundamental matters like ethics, religion, and purpose. We never ascribed these cultural substructures to Star Wars directly – but that is what lines the bottom of Plato’s aviary for a lot of people.


A second reason I am personally fascinated by the idea of the Star Wars hotel is that I’ve been working on technological installations for over a decade, starting with large touch screens and now in VR and Spatial Computing (MR, AR, whatever you want to call it). The world I work in is the opposite of the prescription that form should follow function. In the world of using technology to augment reality, form is playful and function is whatever you happen to find in it – which we also call “experiences”.

The Star Wars hotel is the ultimate “experience”, recreating through sensory tricks something many people have imagined in their mind’s eye since childhood. This is the ultimate goal of all the “V” and “A” and “Meta” realities. But as most insiders know, the best fake reality always has a bit of real reality in it to heighten the effect.

We had assumed that we’d never be able to book passage for at least the first year but then something funny happened. Three months before the maiden voyage of the Halcyon, people began canceling their reservations. Whether this was because of the COVID upsurge at the end of 2021 or because people were getting cold feet, I don’t know. In any case, openings started appearing on the cruise calendar and I was able to book a cabin.

So now we spend our evenings practicing Sabacc, printing greebles, and exploring the Star Wars fashion universe. We also watch the nine-film saga. We tell each other intricate back stories about our characters on the cruise, which may never come up. We debate whether we should support the Resistance or the fascist First Order. We buy sensible walking shoes on Zappos that are in line with the Star Wars aesthetic. We eat lots of preparatory salad because the galactic buffet is supposed to be extensive.

I feel fortunate to be in a position to do this with my family. It is supposed to be an interactive theatrical experience backed by lots of tech. Our cabin will have a large window with digital stars going by outside (for just this alone, I would have wanted to go).

As we pack for the trip, I’ll blog (a moribund medium, I know) about the preparation we are doing to get ready for our first cruise. Stay tuned.

Introduction to Critical Code Theory

Ever since I read an interesting but mostly inaccurate article on Plato and Object-Oriented Programming over ten years ago on Code Project, I’ve wanted to do a series of blog posts on the relationship between philosophy (once my vocation) and coding (my current passion). Partly this was out of laziness: since there aren’t many people who are familiar with both the phenomenological tradition in philosophy and the practice of software programming, I thought it would be an easy way to say some obvious things about philosophy that might impress coders who didn’t know anything about it, on the one hand, and obvious things about how software is made that might impress philosophers, sociologists and the lit crit crowd, on the other. But that never really happened, except in the occasional blog post here, due to the laziness I referred to above.

So I’ve finally started a Substack devoted to discussing code and critical theory as a separate project to be pursued diligently, while this blog will be devoted more to the straightforward discussion of spatial computing and how to code for it. In effect, I am separating theory from practice.

If the notion of a new discipline of Critical Code Theory sounds interesting to you (or if you want to give input as to what Critical Code Theory ought to be and what it should cover), then I invite you to subscribe to my Substack at https://criticalcodetheory.substack.com/ .

I promise to be diligent about providing regular content through my Substack newsletter. I also promise that in reading about critical theory applied to software code, you will gain a basic linguistic competence in the use of philosophical language in other critical disciplines and a greater appreciation of the history and intricacies of these theoretical projects.

Please sign up for the newsletter; please tell your friends if you like what you read; and please let me know in the comments on the Critical Code Theory Substack if there are things you don’t like, or things you feel I get wrong about either the code or the theory. In other words, I invite you to be critical.

Simulations and Simulacra

In a 2010 piece for The New Yorker called Painkiller Deathstreak, the novelist Nicholson Baker reported on his efforts to enter the world of console video games with forays into triple-A titles such as Call of Duty: World at War, Halo 3: ODST, God of War III, Uncharted 2: Among Thieves, and Red Dead Redemption.


“[T]he games can be beautiful. The ‘maps’ or ‘levels’—that is, the three-dimensional physical spaces in which your character moves and acts—are sometimes wonders of explorable specificity. You’ll see an edge-shined, light-bloomed, magic-hour gilded glow on a row of half-wrecked buildings and you’ll want to stop for a few minutes just to take it in. But be careful—that’s when you can get shot by a sniper.”

In his journey through worlds rendered with what was considered high-end graphics a decade ago, Baker discovered both the frustrations of playing war games against 13-year-olds (who would now be old enough to be stationed in Afghanistan) and the peace to be found in virtual environments like Red Dead Redemption’s Western simulator.


“But after an exhausting day of shooting and skinning and looting and dying comes the real greatness of this game: you stand outside, off the trail, near Hanging Rock, utterly alone, in the cool, insect-chirping enormity of the scrublands, feeling remorse for your many crimes, with a gigantic predawn moon silvering the cacti and a bounty of several hundred dollars on your head. A map says there’s treasure to be found nearby, and that will happen in time, but the best treasure of all is early sunrise. Red Dead Redemption has some of the finest dawns and dusks in all of moving pictures.”

I was reminded of this essay yesterday when YouTube’s algorithms served up a video of Red Dead Redemption 2 (the sequel to the game Baker wrote about) being rendered in 8K on an Nvidia RTX 3090 graphics card with raytracing turned on.

The encroachment of simulations upon the real world, to the point that they not only look as good as the real world (real?) but in some aspects even better, has, interestingly, driven the development of the sorts of AI algorithms that serve these videos up to us on our computers. Simulations require mathematical calculations that cannot be done as accurately or as quickly on standard CPUs. This is why hardcore gamers pay upwards of a thousand dollars for bleeding-edge graphics cards that are specially designed to perform floating-point calculations.


These types of calculations are also required for working with the large data sets used in machine learning. The algorithms that steer our online interests, after all, are just simulations themselves, designed to replicate aspects of the real world in order to make predictions about what sorts of videos (based on a predictive model of human behavior honed to our particular tastes) are most likely to increase our dwell time on YouTube.


Simulations, models and algorithms are at this point all interchangeable terms. The best computer chess programs may or may not understand how chess players think (this is a question for the philosophers). What cannot be denied is that they adequately simulate a master chess player who can beat all the other chess players in the world. Other programs model the stock market: we tune them back into the past to see how accurate they are as simulations, then point them at the future in order to find out what will happen tomorrow – at which point we call them algorithms. Like memory, presence and anticipation for us meatware beings, simulation, model and algorithm make up the false consciousness of AIs.


Simulacra and Simulation, Jean Baudrillard’s 1981 treatise on virtual reality, opens with an analysis of the Jorge Luis Borges short story On Exactitude in Science, about imperial cartographers who strive after precision by creating ever larger maps, until the maps eventually achieve a one-to-one scale, becoming exact even as they overtake their intended purpose.


“The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory – precession of simulacra – it is the map that engenders the territory and if we were to revive the fable today, it would be the territory whose shreds are slowly rotting across the map. It is the real, and not the map, whose vestiges subsist here and there, in the deserts which are no longer those of the Empire, but our own. The desert of the real itself.”

I was thinking of Baudrillard and Borges this morning when, by coincidence, YouTube served up a video of comparative map sizes in video games. Even as rendering verisimilitude has been one way to gauge the increasing realism of video games, the size of game worlds has been another. A large world provides depth and variety – a simulation of the depth and happenstance we expect in reality – that increases the immersiveness of the game.


Space exploration games like No Man’s Sky and Elite Dangerous attempt to simulate all of known space as your playing ground, while Microsoft’s Flight Simulator uses data from Bing Maps to let you fly over the entire earth. In each case, the increased size is achieved by surrendering detail. But this setback is temporary, and over time we will be able to match the extent of these simulations with detail as well, until the difference between the real and the model of the real is negligible.


One of the key difficulties with VR adoption (and to some extent an argument for the superiority of AR) is the falling anxiety everyone experiences as they move around in virtual reality. The suspicion that there are hidden objects in the world that the VR experience does not reveal to us prevents us from being fully immersed in the game – except in the case of the highly popular horror genre of VR games, in which inspiring anxiety is a mark of success.

As we continue both to increase the detail of our simulations of the real world – to the point of simulating the living room sofa and the kitchen cabinet – and to expand the coverage of our simulations across the world, so that no surveillable surface escapes the increasing exactness of our model, we will eventually overcome VR anxiety. At that point, we will be able to walk around in our VR goggles without ever being afraid of tripping over objects, because there will be a one-to-one correspondence between what we see and what we feel. AR and VR will be indistinguishable at that exacting point, and we will at last be able to tread upon the sands of the desert of the real.

The New XR SDK Pipeline with HoloLens 2: Part 2

In the first part of this series, I provided a detailed walkthrough of setting up a project using the new Unity XR SDK pipeline for HoloLens 2 development and integrating it with the HoloLens 2 toolchain.


In this post, I will continue building on that project by showing you how to set up a HoloLens 2 scene in Unity using the Mixed Reality Toolkit. Finally, I will show you how to set up and configure one of the MRTK’s built-in example projects.

Configuring a scene for the HoloLens 2

Use the project from the previous post in which you configured the project settings, player settings, build settings, and imported the MRTK to use with the new XR SDK pipeline.

Setting up a new scene for the HoloLens only takes a few steps.


1. From the Mixed Reality Toolkit item in the toolbar, select Add to Scene and Configure.


2. This will add the needed MRTK components to your current scene. Verify in the Hierarchy window that your new scene includes the following game objects: Directional Light, MixedRealityToolkit, and MixedRealityPlayspace. Select the MixedRealityToolkit game object.


3. In the Inspector pane for the MixedRealityToolkit game object, there is a dropdown of various configuration profiles. The naming of these profiles is confusing. It is extremely important that you switch from the default DefaultMixedRealityToolkitConfigurationProfile to DefaultXRSDKConfigurationProfile. Without making this change, even basic head tracking will not work.

4. Next, click on the Clone button and choose a pertinent name for your application’s configuration (if you can’t think of one, then the old standby MyConfigurationProfile will work in a pinch – you can go back and change it later).


5. The MRTK configuration files are set up in a daisy-chain fashion, with config files referencing other config files, all of which can be copied and customized. Go ahead and clone the DefaultXRSDKCameraProfile. Rename it to something you can remember (MyCameraProfile will work in a pinch).

6. Save all your changes with Ctrl+S.
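
Since pointing at the wrong profile is the most common cause of broken head tracking, it can be worth logging which profile is actually active when the app starts. Below is a minimal C# sketch of such a check – the script name and log message are my own invention, and it assumes the MRTK 2.x API discussed in this post:

    using Microsoft.MixedReality.Toolkit;
    using UnityEngine;

    // Hypothetical sanity check: attach to any game object in the scene.
    public class ProfileSanityCheck : MonoBehaviour
    {
        void Start()
        {
            // MixedRealityToolkit.Instance is the singleton added by
            // "Add to Scene and Configure"; ActiveProfile is the profile
            // chosen in the Inspector dropdown.
            var profile = MixedRealityToolkit.Instance.ActiveProfile;
            Debug.Log($"Active MRTK profile: {profile.name}");
        }
    }

If the log reports a clone of DefaultMixedRealityToolkitConfigurationProfile rather than your XR SDK profile, revisit step 3 above.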

Opening an MRTK example project

Being able to test out a working HoloLens 2 application can be instructive. If you followed along with the previous post, you should already have the example scenes imported into your project.


If you missed this step, you can open up the Mixed Reality Feature Tool now and import the Mixed Reality Toolkit Examples.


1. After importing, the MRTK examples are still compressed in a package. In the Project pane, navigate to the Packages folder. Then right-click on Mixed Reality Toolkit Examples and select View in Package Manager from the context menu.


2. In Package Manager, select the Mixed Reality Toolkit Examples package. This will list all of the compressed MRTK demos to the right.


3. Click on the Import into Project button next to the Demos – HandTracking sample to decompress it.


4. There are a few ways to open your scene. I will demonstrate one of them. Type Ctrl+O on your keyboard (this is equivalent to selecting File | Open Scene on the toolbar). A file explorer window will open up. Navigate to the Assets folder for your Unity project. You will find the HandInteractionExample scene under a folder called Samples. Select it.
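
If you prefer scripting the editor over clicking through dialogs, the scene can also be opened with a small editor script. This is purely an illustrative sketch – the menu entry and the scene path are hypothetical, and the real path depends on where the examples unpack in your project:

    using UnityEditor;
    using UnityEditor.SceneManagement;

    // Editor scripts like this must live in a folder named Editor under Assets.
    public static class OpenHandInteractionScene
    {
        // Adds a custom entry to the Unity menu bar.
        [MenuItem("Tools/Open Hand Interaction Example")]
        public static void Open()
        {
            // Hypothetical path - check your own Samples folder for the real one.
            EditorSceneManager.OpenScene("Assets/Samples/HandInteractionExample.unity");
        }
    }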


The interaction sample is one of the nicest ways to try out the hand tracking capabilities of the HoloLens 2. It still needs to be configured to work with the new XR SDK pipeline, however.

Configuring an MRTK demo scene

Before deploying this scene to a HoloLens 2, you must first configure the scene to use the XR SDK pipeline.


1. Select the MixedRealityToolkit game object in the Hierarchy pane. In the Inspector pane, switch from the default configuration profile to the one you created earlier when you were creating your own scene.

2. Ctrl+S to save your changes.

Preparing the MRTK demo scene for deployment


1. Open up the Build Settings window either by typing Ctrl+Shift+B or by selecting File | Build Settings from the project toolbar.

2. Click on the Add Open Scenes button to add the example scene.

3. Ctrl+S to save your build settings.


4. One of the nicest features of the Mixed Reality Toolkit, going all the way back to the original HoloLens Toolkit it developed out of, is the build feature. Building a project for HoloLens has several involved steps, which include building a Visual Studio project for UWP, compiling the UWP project into a Windows Store assembly, and finally deploying the appx to either a HoloLens 2 device or an emulator. The MRTK build window lets you do all of this from inside the Unity IDE.

From the Mixed Reality Toolkit menu on the toolbar, select Utilities | Build Window. From here, you can build and deploy your application. Alternatively, you can build your appx file and deploy it from the device portal, which is what I usually do.
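
If you want to automate the Unity half of that process, the stock BuildPipeline API will generate the same UWP Visual Studio solution the build window produces. Here is a minimal sketch under my own naming – it covers only the Unity-to-UWP step, not compiling or deploying the appx:

    using System.Linq;
    using UnityEditor;

    public static class HoloLensBuild
    {
        [MenuItem("Tools/Build UWP Solution")]
        public static void Build()
        {
            var options = new BuildPlayerOptions
            {
                // The scenes added in the Build Settings window.
                scenes = EditorBuildSettings.scenes
                    .Where(s => s.enabled)
                    .Select(s => s.path)
                    .ToArray(),
                // Output folder for the generated Visual Studio solution.
                locationPathName = "Builds/UWP",
                target = BuildTarget.WSAPlayer,
                // LZ4HC compression, per the release recommendation in part one.
                options = BuildOptions.CompressWithLz4HC
            };
            BuildPipeline.BuildPlayer(options);
        }
    }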

Summary

This post completes the walkthrough showing you how to set up and then build a HoloLens 2 application in Unity using the new XR SDK pipeline. It is specifically intended for developers who have developed for the HoloLens before and may have missed a few tool cycles, but it should be complete enough to also help developers new to spatial computing get quickly up to speed on the praxis of HoloLens 2 development in Unity.

The New XR SDK Pipeline with HoloLens 2: A Walkthrough

The HoloLens 2 toolchain is under continuous development. In addition, the development of the Mixed Reality Toolkit (MRTK), the developer kit for HoloLens, must be synced with the continuous development of the Unity Game Engine. For these reasons, it is necessary for developers to be careful about using the correct versions of each tool in order to avoid complications.


As of February 2021, I’m doing HoloLens development with Visual Studio 2019 v16.5.5, Unity Hub 2.4.2, Unity 2019.4.20f, Mixed Reality Feature Tool 1.0.2102 beta, MRTK 2.5.4 + MRTK 2.6.0-preview, and Windows 10 Pro 10.0.18363.

MRTK, MR Feature Tool beta, and the new Unity XR SDK pipeline

The most accurate documentation on getting started with HoloLens 2 development, at this time, is found in the MRTK GitHub repository and on Microsoft’s MR Feature Tool landing page. There are nevertheless some gaps between these two docs that it would be helpful to fill in.

The new Unity XR SDK pipeline is a universal plugin framework that replaces what is now known as the legacy Unity XR pipeline. Because of all the moving parts, many people have trouble getting the new pipeline working correctly, especially if they have not kept up with the changes for a while or are new to the HoloLens 2.

You will want to download Unity Hub if you don’t have it already.  Unity Hub will help you manage multiple versions of Unity on your development machine. It is fairly common to switch between different versions of Unity if you are working in VR and Mixed Reality as you go back and forth between older versions for maintenance and pursue newer versions for the latest features. As a rule, never upgrade the Unity version of an existing project if things are working well enough.

Create a new Unity project

Use the Unity Hub Installs tab to get the latest version of Unity 2019.4, which you will need in order to successfully work with the latest MRTK. Later versions of Unity will not currently work for developing HoloLens 2 applications with MRTK 2.5.4. Versions earlier than Unity 2018.4 also will not work well.

Some of the documentation mentions using Unity 2020.2 with OpenXR. This is something totally different. Just ignore it.


Start by creating a new Unity 2019.4 project.


When you do this from Unity Hub, you can use the pull down menu to select from any of the versions installed on your development computer.


When your Unity app has been created, open the Unity Package Manager from the Window menu.


In Package Manager, select the Windows XR Plugin in the left panel. Then click the Install button in the lower left corner of the main panel.


This will also install the following components automatically: XR Interaction Subsystems, XR Legacy Input Helpers, and XR Plugin Management.

*** Notice that the component called Windows Mixed Reality is not installed. Don’t touch it. This is a leftover from the legacy Unity XR pipeline and will eventually be removed from Unity. You will use a different method to get the MRTK into your project. ***
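
For reference, everything Package Manager installs is recorded as a dependency in your project’s Packages/manifest.json file, and it is sometimes quicker to inspect or edit that file directly. The entries below are only a sketch – the version numbers are illustrative, so check what Package Manager actually wrote for your Unity 2019.4 install:

    {
      "dependencies": {
        "com.unity.xr.windowsmr": "2.6.0",
        "com.unity.xr.management": "3.2.17",
        "com.unity.xr.legacyinputhelpers": "2.1.4"
      }
    }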

Configure project settings for HoloLens 2

You should now edit your project settings. Open the project settings panel by selecting Edit | Project Settings in the menu bar.


1. Select XR Plug-in Management in the left-hand pane and then, in the main window, check off Windows Mixed Reality to select it as your plug-in provider. This lets the new XR SDK pipeline work with your MRTK libraries (which you will import in the next section).


2. Under XR Plug-in Management, select the Windows Mixed Reality entry in the left-hand pane. Make sure Depth Buffer 16 Bit is selected as your Depth Buffer Format and that Shared Depth Buffer is checked off. This will improve the performance of your spatial computing application.


3. Select Quality in the left-hand pane. To improve performance, click the down arrow under the Windows logo to set your default quality setting to Low for your spatial computing application.


4. While you are configuring your project settings, you might as well also import TextMesh Pro. Select TextMesh Pro in the left-hand pane and click on the Import TMP Essentials button in the main window. TMP will be useful for drawing text objects in your spatial computing application.


5. Select Player in the left-hand pane to edit your player settings. Change the Package name entry to something relevant for your project. (The package name is how the HoloLens identifies your application. If you are quickly prototyping and deploying projects and forget to change the package name, you will get an obscure message saying your Template3D package is already installed – Template3D is just the default package name on all new Unity projects.)
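
Since forgetting the package name is such a common tripwire, you can also set it from an editor script at the start of every prototype. A minimal sketch – “MySpatialApp” is a placeholder, not a convention:

    using UnityEditor;

    public static class SetPackageIdentity
    {
        [MenuItem("Tools/Set HoloLens Package Name")]
        public static void Apply()
        {
            // Placeholder name - substitute your own.
            PlayerSettings.productName = "MySpatialApp";

            // The UWP package name is how the HoloLens identifies the app;
            // leaving the default causes the Template3D collision described above.
            PlayerSettings.WSA.packageName = "MySpatialApp";
        }
    }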

You are now ready to import the Mixed Reality Toolkit.

Retrieve MRTK components with MR Feature Tool

The HoloLens team has created a new tool called the Mixed Reality Feature Tool for Unity to help you acquire and deploy the correct version of the MRTK to your project.


1. After downloading the Feature Tool, you can go into its settings and check the Include preview releases box in order to get the 2.6.0-preview.20210220.4 build of the MRTK. Alternatively, you can use MRTK version 2.5.4 if you are uncomfortable using a preview build.


2. Follow the wizard steps to select and download the MRTK features you are interested in. At a minimum, I’d recommend selecting the Toolkit Foundation, Toolkit Extensions, Toolkit Tools, Toolkit Standard Assets, and Toolkit Examples.


3. On the Import features screen, select the path to your Unity project. Validate that everything is correct before importing the selected features into your Unity HoloLens 2 project.

4. Click on the Import button.

Configure build settings for HoloLens 2

As the MRTK libraries are added to your project, your Unity project may hang for a while as it is refreshed if you still have it open (which is perfectly okay).


1. When the refresh is done, the Mixed Reality Toolkit item will appear in your Unity menu bar.


2. At the same time, an MRTK configuration window will also pop up in the Unity IDE. Accept the suggested project setting changes by selecting Apply.


3. Click on File | Build Settings… to open the Build Settings window. Select Universal Windows Platform in the left pane.


4. Upon selecting Universal Windows Platform as your build target, you will be prompted to switch platforms. Click on the Switch Platform button to confirm that you will be building an application for the UWP platform. This will initiate a series of updates to the project that may freeze the IDE as the project is refreshed.


5. After your switch to the Universal Windows Platform, the MRTK may prompt you to make additional changes to your project. Apply the recommended changes.


6. For a release deployment, you will want the following build settings:

    • Target Device should be HoloLens.
    • Architecture is ARM64. This is the processor used in the HoloLens 2.
    • Build Type is D3D Project. Any other build type will project a standard 2D UWP window into the HoloLens.
    • Build configuration is Master for a release build, rather than Release, as odd as that seems. This is the most lightweight build you can deploy to your device, and consequently the best performing.
    • Compression Method should be LZ4HC for release builds. This takes a lot longer to compile, but is the most efficient in a deployed application.

You can close the Build Settings window.
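
Most of these settings can also be applied from an editor script, which helps keep release builds reproducible across a team. This is a sketch against the Unity 2019.4 scripting API (the Master configuration and the compression method are still chosen in the Build Settings window, as described above):

    using UnityEditor;

    public static class ReleaseBuildSettings
    {
        [MenuItem("Tools/Apply HoloLens Release Settings")]
        public static void Apply()
        {
            // Target UWP with the HoloLens as the device.
            EditorUserBuildSettings.SwitchActiveBuildTarget(
                BuildTargetGroup.WSA, BuildTarget.WSAPlayer);
            EditorUserBuildSettings.wsaSubtarget = WSASubtarget.HoloLens;

            // ARM64 is the processor used in the HoloLens 2.
            EditorUserBuildSettings.wsaArchitecture = "ARM64";

            // D3D renders immersively; any other build type projects a flat 2D window.
            EditorUserBuildSettings.wsaUWPBuildType = WSAUWPBuildType.D3D;
        }
    }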

Summary

This walkthrough is intended to get you through the initial steps for integrating the new Unity XR SDK pipeline with the Mixed Reality Toolkit. For completeness, I will walk you through setting up and deploying a HoloLens scene in the next post.

When GameStop Killed the Xbox One Kinect


If you look up the Xbox One Kinect (informally known as the Kinect 2) on the GameStop website, you’ll read in the product description that “[t]he best Xbox One experience is with Kinect.”

Over the course of the Xbox One’s life, there were approximately 38 games that supported Kinect body tracking, out of 2,682 games for the Xbox One. None of them were triple-A games. While Microsoft initially planned to require that the Kinect be always on, by the time of the Xbox One’s release in November 2013 this requirement had been removed. By the summer of 2014, Microsoft unbundled the Kinect from its game console, allowing people to purchase the Xbox One at a lower price point that was more competitive with the PlayStation 4. The final blow came in late 2015, when Microsoft removed Kinect support for navigating the Xbox dashboard.

Before going into some theories on what happened to the Kinect, I wanted to give my “they’re all dirty” metaphor for the recent rise and fall of the GameStop stock price. The weak GameStop business was being shorted by hedge funds. Small investors gathered on Reddit decided to fight this by pumping money into GameStop stock in order to inflate the price artificially. They typically used the app Robinhood, which doesn’t charge trading fees, to do this. In the end, the hedge funds appear to have hedged their bets, because even as they lost money on their shorts, they made money by fulfilling the trades coming through Robinhood from these Reddit investors.

Isn’t this the plot of Mel Brooks’ The Producers?  While the purpose of the stock market is supposed to be efficiently moving investor money into the hands of companies in order to create value, short-selling is a speculative financial instrument to allow people to bet that certain companies will fail.  Like Leo Bloom, hedge funds like Melvin Capital and Citadel recognized that sometimes you can make more money with a failed venture than with a successful one.

In order to improve the odds of failure, Leo Bloom and Max Bialystock stack the deck by finding the worst script, the worst director and the worst cast for their Broadway show. Similarly, in order to improve the odds of driving down the price of GameStop stock, Citadel let people know that they were shorting the stock. Who would invest in a company that Wall Street big guns were trying to destroy?

The problem for The Producers is that the worst play, Springtime for Hitler, the worst director (who turned it into a Busby Berkeley-style musical), and the worst cast (drug-addled hippies) come together to create something that people can enjoy ironically. The play is so bad, it is good.


The worst director and worst cast in the GameStop saga are the Robinhood app and the Reddit community /wallstreetbets. Robinhood allows (and encourages) inexperienced investors to bet against Wall Street professionals, which is about as successful as betting against the house in Las Vegas. /wallstreetbets, in turn, allows users to try out betting systems. The latest one depends on treating the stock market ironically, assuming that investment is primarily about manipulating markets rather than finding good companies to invest in. The only difference between /wallstreetbets and the hedge funds is that one is made up of market outsiders and the other of insiders. Late capitalism. Post-truth investment.

There was a time when GameStop wasn’t just a carcass being fought over by carrion feeders looking for a quick meal. In 2013, GameStop was a quickly growing company that made its money reselling second-hand console game disks.

In the lead-up to the release of the Xbox One, it turns out that Microsoft was attempting to kill this aftermarket. Even into the middle of 2013, Microsoft was considering dropping the optical drive from its hardware altogether and making the purchase of games completely cloud-based, like Steam.


It is clear from the confusion around the May 2013 Xbox One reveal that this idea had lingering ramifications for the strategy around connectivity. Two requirements for a digital-only game distribution system are a need for all consoles to be online, at least part of the time, and a complex digital licensing verification system. It turned out that the aftermarket in video games, brokered through third parties like GameStop, was a much bigger deal than Microsoft realized, and its inability to explain how people would be able to exchange and sell used games inspired one of the great marketing trolls of all time, when Sony created a commercial demonstrating how to exchange PlayStation games.

Today any teenager can explain to you the market forces that are destroying GameStop’s business model. There is no need for a company to provide an aftermarket for video games when no one uses disks anymore. Everything is digital in 2021 and everything is online. Almost like an act of revenge for 2013, Microsoft is even strong-arming its Xbox Live Gold subscribers to upgrade to Xbox Game Pass by raising prices for the former. Xbox Game Pass gives users access to a broad range of games, including the top games from the past two to three years, without having to buy those games individually.

Microsoft was ahead of its time in 2013. But what made it want to get rid of disks? One theory is that without a disk drive, Microsoft would have been able to drop the launch price of its console by $50. As it turned out, the Xbox One, with a disk drive and a bundled Kinect, launched at an initial price of $499. The Sony PlayStation 4 launched at a $399 price point.


This one-hundred-dollar difference turned out to be nearly fatal for the Xbox One, and Microsoft was forced to unbundle the Kinect 2 by the middle of 2014, finally making its console competitive on price with the PlayStation. Microsoft was even able to undercut the PlayStation shortly after by selling an unbundled Xbox One for $349. This suggests that without an optical drive, the Xbox might have launched for only $50 more than the PlayStation 4, or even at the same price, while including a key differentiator in the Kinect.

Why did Microsoft insist on bundling the Kinect with the Xbox One in the first place? The problem for Microsoft was that in order to make the Kinect successful, it needed triple-A game companies to create games that used it. But this entailed extra design and development costs for game companies, and there was no way they would take on this additional cost without a guaranteed user base that owned Kinect devices. There was a virtuous circle – or perhaps a vicious one – in which game makers needed players with Kinects before they would create games for the Kinect, while console buyers needed to be shown games that highlighted the Kinect before they would buy a console that required them to buy a Kinect. In the end, neither of these things happened.

There was an underlying reason that Microsoft wanted to get Kinects into consumer living rooms. While the Kinect’s primary feature is its body tracking, which could be used as a controller for playing games and navigating screens, its secondary feature is a directional microphone plugged into Microsoft’s cutting-edge speech recognition. It could have become an essential interface between consumers and the commercial internet, with Microsoft as the broker for these transactions and interactions.


As usual, Microsoft was ahead of its time, and even as it quickly killed the Kinect in 2014, Amazon was releasing its own natural language devices built around Alexa, which soon expanded into a tool not only for accessing data on the internet, but also for integrating with services and controlling home devices.

But alas, GameStop created an aftermarket for game disks, that prevented Microsoft from getting rid of its Xbox One optical drive, that caused the Xbox One to lose on price to the PlayStation 4, that caused the Xbox to drop the Kinect, that caused Microsoft to cede the living room device market to Amazon.

TL;DR 2/n

“So, in the next century there will be no more books. It takes too long to read, when success comes from gaining time. What will be called a book will be a printed object whose “message” (its information content) and name and title will first have been broadcast by the media, a film, a newspaper interview, a television program, and a cassette recording. It will be an object from whose sales the publisher (who will also have produced the film the interview, the program, etc.) will obtain a certain profit margin, because people will think that they must “have” it (and therefore buy it) so as not to be taken for idiots or to break (my goodness) the social bond! The book will be distributed at a premium, yielding a financial profit for the publisher and a symbolic one for the reader.” – Jean-François Lyotard, The Differend: Phrases in Dispute, 1983


Unmasking, Optics, and Surveillance 1/n


How do you deal with people who refuse to wear masks?

According to FedScoop, tracking down rioters from the January 6th Capitol invasion will be easy for three reasons:

  1. rioters typically didn’t wear masks
  2. rioters photographed, videoed, and streamed their insurrection
  3. surveillance software is extremely good at analyzing photographs and videos for facial matches
  4. (as an aside, facial recognition software is better with white faces than with minority faces. the overwhelming majority of the rioters were white – and men.)

One way to make sense of this is to realize that masking has taken on mythic overtones in America’s culture wars, and the Trump supporters who came to attend rallies in the capital, before they became rioters in the Capitol, are anti-mask. So when they became a mob and invaded the home of the legislative branch of government, they simply didn’t have masks on them.

On the other hand, the rioters seemed anxious to be seen, livestreaming what they perceived as a revolution as it was occurring. Had there been no COVID, the rioters would likely have done the same thing – and, if anything, there was more masking than there would otherwise have been because of the pandemic.

There are, then, two plausible reasons rioters didn’t wear masks. First, the rioting was a surprise to most of them, and most hadn’t known that they would end up breaking the law. Second, they didn’t see themselves as breaking the law, but thought they were on the same side as the police, the president, and other lawful authorities.

At some point, not wearing a COVID mask overlaps with not wearing a criminal’s mask – the first from the belief that COVID is not real, the second from the belief that breaking into the Capitol is not a crime. But surely, deep inside, these people suspect that both the disease and the crime are real.

This inherent conflict between wanting to hide our true selves and wanting to reveal ourselves online is at the heart of the societal changes driven by social media like Twitter and Facebook. We know that these companies make their money by surveilling our online behavior and selling our information. Yet we see this as a fair trade because they give us the ability to be heard and to connect with other people who think like us.

The structural artifact created is that unwanted surveillance is inextricable from the opportunity for identitarian expression.

For the Capitol rioters, being surveilled is the natural corollary of being seen.

Due to the bad optics of the storming of the U.S. Capitol, some Trump supporters are now disavowing the rioters and attempting to unmask them as Antifa agents pretending to be militia / 3 percenters / boogaloo bois / proud bois / white supremacists.

In this final turn, the ideology critique tradition that runs through Nietzsche, Freud, Marx, critical theory and eventually critical race theory reaches an apex of sorts – unmasking as a tactic for erasing one’s tracks, even when everything has been caught on film.

In 1983 David Copperfield made the Statue of Liberty disappear on live television. It was similar to many other disappearing tricks he had performed over the years, but the scale and the fact that it was being filmed made it seem all the more inexplicable. According to some debunkers, however, the fact that it was filmed, and that we all have a bias toward believing what we see with our own eyes, made it actually easier for Copperfield to create his illusion.

As a software developer working with virtual reality, computer vision and artificial intelligence, and also as a former philosophy student, I find the intersection of these three themes – unmasking, optics and surveillance – a rich mine. In the next few days I want to take each of these concepts apart philosophically and historically, in isolation and in relation to each other, and destrukt them to see what falls out. I want to address Kant’s distinction between the private and public spheres in What Is Enlightenment? while also covering the role of the unmasking motif in Scooby-Doo, natürlich. I want to dig into why magicians never reveal their tricks and why politicians never admit they are wrong. Along the way, if I am feeling particularly self-destructive, I want to touch on Critical Race Theory, cancel culture, right-wing safe spaces, the politics of personal destruction, nuclear options and redemption through art vs. salvation through politics.

Patrick Leahy Cannot Preside Over a Presidential Impeachment

I’m not a lawyer, much less a Constitutional scholar, so I have little weight to throw toward resolving the question of who should preside over the second impeachment trial of President Donald Trump. This is the second time of late that I’ve opined on matters on which I am fairly unqualified to opine. I’m even starting to worry that I’m becoming a bit of a habitual self-investigator rather than merely an easily distracted autodidact.

At the same time, I have been trained as a post-grad philosophy student to deal with some fairly difficult texts, many of which contradict each other, all dealing with extremely abstruse ideas and involving dense argumentation. Which is to say, I really find it difficult to resist.

It was recently reported that Senator Patrick Leahy will be presiding over the upcoming impeachment trial of Donald Trump rather than Supreme Court Chief Justice Roberts. The reasons for this are twofold.

First, Justice Roberts appears to have demurred when approached by Senator Chuck Schumer concerning the matter. 

Second, Article I, Section 3, Clause 6 states that “When the President of the United States is tried, the Chief Justice shall preside.” In other cases, such as impeachment of a Vice President or other civil officers, the President Pro Tempore of the Senate presides over impeachments. This case seems to fall somewhere in-between as Trump is no longer a sitting President of the United States.

The complication here is that how we read Article I, Section 3, Clause 6 on this matter is tied to our interpretation of Article II, Section 4, which says this about the President of the United States: “The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.”

A minority of Constitutional experts who have weighed in on the matter interpret this section to mean that an ex-President of the United States cannot be impeached and tried, since the plain text of the Constitution says only Presidents, i.e. sitting Presidents, can be impeached and tried.

Against this argument opposing late impeachment, Brian C. Kalt, the foremost expert on late impeachments, makes it clear in a 2002 law journal article, The Constitutional Case for the Impeachability of Former Federal Officials, that this is not a correct interpretation of the Constitution’s plain text, which is much more ambiguous.

The plain-text arguments tend to take the form that if non-sitting Presidents were impeachable, then the Constitution would have said “The President, Vice President or other civil officers [or former Presidents, former Vice Presidents or other former civil officers]…” Because it doesn’t, they are not.

An even less tenable argument being thrown around is that Donald Trump is now a private citizen, and if the Constitution wanted to allow the impeachment and trial of private citizens like you or me, it would have said so. This is a fairly weak argument, though, since impeaching a private citizen for high crimes committed while in civil office is clearly different from trying a private citizen who has never held federal office (or even trying a former official for offenses committed out of office, for that matter).

The right way to look at Article II, Section 4 is that it serves to limit Congressional power regarding who can be impeached and tried, but sets no rules regarding the timing of the impeachment and trial. This interpretation brings it in line with precedent, both in English Common Law and the contemporary understanding of impeachment as articulated in the state constitutions, as well as structural arguments for late impeachment (Presidents should be discouraged from doing impeachable things late in their presidencies).

But if the timing of the impeachment trial is not constrained when the Constitution says “President of the United States” in the context of impeachment, then this would seem to apply to Article I, Section 3, Clause 6, also. If presidential impeachment trials in the Senate must be presided over by the Chief Justice of the Supreme Court, then this would be true whether an incumbent President or a former President is being tried.

Moreover, the Chief Justice does not appear to have a say in the matter. The power to try an impeached President is vested in the Senate, not the Supreme Court. The Senate makes its own rules about how it interprets the Constitution with regard to impeachment powers.

But I’m not a Constitutional expert and I’m not a lawyer. At the very least, though, it strikes this layman as odd that the Senate should choose to interpret “President” as including ex-Presidents in one part of the Constitution while deciding that it excludes ex-Presidents in another.

And if I’m noticing that as a layman, it is not only probable but certain that the Republican defenders of President Trump in the Senate, and the dependable if flexible conservatives at the Wall Street Journal, National Review, and other publications, will notice it as well, arguing that while it may be the case that Donald Trump committed convictable acts, the process is so flawed that he must be exonerated.

Patrick Leahy cannot be allowed to preside over President Donald Trump’s second impeachment trial. Chief Justice Roberts needs to do his job.