Pokémon Go as an Illustration of AR Belief Circles

[image: venn]

Recent rumors circulating around Pokémon Go suggest that Niantic will delay the game’s next major update until next year. It was previously believed that additional game elements, creatures and levels beyond level 40 would be arriving sometime in December.

A large gap between releases like this would seem to leave the door open for copycat games to move into the opening Niantic is providing. And maybe this wouldn’t be such a bad thing. While World of Warcraft is the most successful MMORPG, for instance, it certainly wasn’t the first. Dark Age of Camelot, EverQuest, Asheron’s Call and Ultima Online all preceded it. What WoW did, perhaps, was collect the best features of all these games while also riding the right graphics-card cycle to success.

A similar student-becomes-the-master trope could play out for other franchise owners, since the only things that seem to be required to get a Pokémon-like game going are a pre-existing storyline (like WoW had) and 3D assets either available or easily created for the game. With Azure and AWS cloud computing readily available, even infrastructure isn’t the challenge it was when the early MMORPGs were starting out. Possible franchise holders that could make the leap into geographically-aware augmented reality games include Disney, WoW itself, Yu-Gi-Oh!, Magic: The Gathering, and Star Wars.

Imagine going to the park one day and asking someone else staring face-down at their phone whether they know where the Bulbasaur showing up on the nearby tracker is, only to find they have no idea what you are talking about because they are looking for Captain Hook or a Jawa on their own nearby tracker.

This sort of experience is exemplary of what Vernor Vinge calls belief circles in his novel about augmented reality, Rainbows End. Belief circles describe groups of people who share a collaborative AR experience. Because they also share a common real-life world with others, their belief circles may come into conflict with other people’s. Stranger still, members of different belief circles do not have access to each other’s augmented worlds – a peculiar twist on the problem of other minds. So while a person in H.P. Lovecraft’s belief circle can encounter someone in Terry Pratchett’s Discworld belief circle at a Starbucks, it isn’t at all clear how they will ultimately interact with one another. Starbucks itself may provide virtual assets that can be incorporated into either belief circle in order to attract customers from different worlds and backgrounds – basically the multi-tier marketing of the future. Will different things be emphasized in the store based on our self-selected belief circles? Will our drinks have different names and ingredients? How will trademark and copyright laws impact the ability to incorporate franchises into the multi-aspect branding of coffee houses, restaurants and other mall stores?

But most of all, how will people talk to each other? One of the great pleasures of playing Pokémon Go today is encountering and chatting with people I otherwise wouldn’t meet, and having a common set of interests that trumps our political and social differences. Belief circles in the AR future of five to ten years from now may simply encourage the opposite trend: the Balkanization of communities into interest zones. Will high-concept belief circles based on art, literature and genre fiction simply devolve into Democrat and Republican belief circles at some point?

Pokémonography

Pokémon Go is the first big augmented reality hit. It also challenges our understanding of what augmented reality means. While it has AR modes for catching as well as battling pokémon, it feels like an augmentation of reality even when these modes are disabled.

[image: krabby]

Pokémon Go is, in large part, a game overlaid on top of a maps application. Maps apps, in turn, are an augmentation overlaid on top of our physical world, tracking our position inside a digital representation of streets and roads. More than anything else, it is the fully realized cartography of Jorge Luis Borges’s story On Exactitude in Science, prominently cited in Baudrillard’s monograph Simulacra and Simulation.

Pokémon Go’s digital world is also the world’s largest game world. Games like Fallout 4 and Grand Theft Auto V boast worlds that encompass 40 and 50 square miles, respectively. Pokémon Go’s world, on the other hand, is co-extensive with the mapped world (or the known world, as we once called it). It has a scale of one kilometer to one kilometer.

[image: dragonair]

Pokémon Go is an augmented reality game even when we have AR turned off. Players share the same geographic space we do but live, simultaneously, in a different world revealed to them through their portable devices. It makes the real world more interesting – so much so that the sedentary will take up exercise while the normally cautious will risk sunburn and heatstroke in post-climate-change summers around the world in order to participate in an alternative reality. In other words, it shapes behavior by creating new goals. It creates new fiscal economies in the process.

[image: electabuzz]

Which is all a way of saying that Pokémon Go does what marketing has always wanted to do. It generates a desire for things that, up to a month ago, did not exist.

[image: magikarp]

A desire for things which, literally, do not exist today.

What more fitting moniker to describe a desire for something that does not exist than Pokémonography? Here are some pics from my personal collection.

[images: porygon, snorlax, wartortle, raichu, dratini, lapras, dragonite, gyarados]

HoloLens Surface Reconstruction XEF File Format

[Update 4/23 – this turns out to be just a re-appropriation of an extension name. Kinect Studio doesn’t recognize the HoloLens XEF format and vice versa.]

The HoloLens documentation reveals interesting connections with the Kinect sensor. As most people by now know, the man behind the HoloLens, Alex Kipman, was also behind the Kinect v1 and Kinect v2 sensors.

[image: kinect_studio_xef]

One of the more interesting features of the Kinect was its ability to perform a scan and then play that scan back later like a 3D movie. The Kinect v2 even came with a recording and playback tool for this called Kinect Studio. Kinect Studio v2 serialized recordings in a file format known as the eXtended Event File format. This basically recorded depth point information over time – along with audio and color video if specified.

Now, a few years later, we have HoloLens. Just as the Kinect included a depth camera, the HoloLens also has a depth camera that it uses to perform spatial mapping of the area in front of the user. These spatial maps are turned into surface meshes that are then combined with code so that, in the final visualization, 2D apps appear to be pinned to fixed positions in the world while 3D objects and characters seem to be aware of physical objects in the room and interact with them appropriately.

[image: xef]

Deep in the documentation on the HoloLens emulator is fascinating information about the ability to play back previously scanned rooms. If you have a physical headset, it turns out you can also record surface reconstructions using the Windows Device Portal.

The serialization format, it turns out, is the same one that is used in Kinect Studio v2: *.xef .

An interesting fact about the XEF format is that Microsoft never released any documentation describing what it looks like. When I open up a saved .xef file in Notepad++, this is what I see:

[image: xef_np]
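If you’d like to poke at those header bytes yourself rather than squint at Notepad++, a few lines of Python will produce a cleaner hex dump. This is just a convenience sketch using the standard library – the file name is hypothetical, and nothing here assumes any knowledge of the actual XEF layout:

# Dump the first 256 bytes of an .xef capture as hex plus printable ASCII.
# "recording.xef" is a hypothetical path; substitute your own file.
with open("recording.xef", "rb") as f:
    header = f.read(256)

for offset in range(0, len(header), 16):
    chunk = header[offset:offset + 16]
    hex_part = " ".join(f"{b:02x}" for b in chunk)
    ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
    print(f"{offset:08x}  {hex_part:<47}  {ascii_part}")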

Microsoft also never released a library to deserialize depth data from the xef format, which forced many people trying to make recordings to come up with their own, idiosyncratic formats for saving depth information.
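To illustrate the kind of workaround this forced, here is a sketch of the sort of home-grown recording format a Kinect developer might have invented: a magic string and frame dimensions up front, then timestamped raw depth frames. Every detail here – the magic string, the field layout, the helper names – is invented for the example, which is exactly the problem: everyone’s invention was different.

import struct
import time

# A hypothetical roll-your-own depth recording format.
MAGIC = b"DEPTHREC"          # 8-byte magic string, invented for this sketch
WIDTH, HEIGHT = 512, 424     # Kinect v2 depth frame dimensions

def write_recording(path, frames):
    """frames: an iterable of raw uint16 depth frames, each WIDTH*HEIGHT*2 bytes."""
    with open(path, "wb") as f:
        f.write(MAGIC)
        f.write(struct.pack("<II", WIDTH, HEIGHT))
        for frame in frames:
            f.write(struct.pack("<d", time.time()))  # per-frame timestamp
            f.write(frame)

def read_recording(path):
    """Yield (timestamp, frame_bytes) pairs from a recording."""
    with open(path, "rb") as f:
        assert f.read(8) == MAGIC
        width, height = struct.unpack("<II", f.read(8))
        frame_size = width * height * 2
        while True:
            stamp = f.read(8)
            if len(stamp) < 8:
                break
            (timestamp,) = struct.unpack("<d", stamp)
            yield timestamp, f.read(frame_size)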

Hopefully, now that the same format is being used across devices, Microsoft will finally be able to release a library for general use – and if not that, then at least a spec of how XEF is structured.

Rethinking VR/AR Launch Dates

The HTC Vive, Oculus Rift and Microsoft HoloLens all opened for pre-orders in 2016 with plans to ship in early April (or late March, in the case of the Oculus). All have run into fulfillment problems, creating general confusion for their most ardent fans.

I won’t try to go into all the details of what each company originally promised and then what each company has done to explain their delays. I honestly barely understand it. Oculus says there were component shortages and is contacting people through email to update them. Oculus also refunded some shipping costs for some purchasers as compensation. HTC had issues with their day one ordering process and is using its blog for updates. Microsoft hasn’t acknowledged a problem but is using its developer forum to clarify the shipping timeline.

Maybe it’s time to acknowledge that spinning up production for expensive devices in relatively small batches is really, really hard. Early promises from 2015 followed by CES in January 2016 and then GDC in March probably created an artificial timeline that was difficult to hit.

On top of this, internal corporate pressure has probably also driven each product group to hype to the point that it is difficult to meet production goals. HTC probably has the most experience with international production lines for high tech gear and even they stumbled a bit.

Maybe it’s also time to stop blaming each of these companies as they reach out for the future. All that’s happened is that some early adopters aren’t getting to be as early as they want to be (including me, admittedly).

As William Gibson said, “The future is already here — it’s just not very evenly distributed.”

HoloLens Occlusion vs Field of View

[image: prometheusmovie6812]

[Note: this post is entirely my own opinion and purely conjectural.]

Best current guesses are that the HoloLens field of view is somewhere between 32 degrees and 40 degrees diagonal. Is this a problem?

We’d definitely all like to be able to work with a larger field of view. That’s how we’ve come to imagine augmented reality working. It’s how we’ve been told it should work, from Vernor Vinge’s Rainbows End to Ridley Scott’s Prometheus to the Iron Man trilogy – in fact, going back as far as Star Wars in the late ’70s. We want and expect a 160-180 degree FOV.

So is the HoloLens’ field of view (FOV) a problem? Yes it is. But keep in mind that the current FOV is an artifact of the waveguide technology being used.

What’s often lost in the discussions about the HoloLens field of view – in fact the question never asked by the hundreds of online journalists who have covered it – is what sort of trade-off was made so that we have the current FOV.

A common internet rumor – likely inspired by a video from tech evangelist Bruce Harris taken a few months ago – is that it has to do with the cost and consistency of production. The argument is borrowed from chip manufacturing and, while there might be some truth in it, it is mostly a red herring. An amazingly comprehensive blog post by Oliver Kreylos in August of last year went over the evidence as well as the related patents and argued persuasively that while more expensive waveguide materials might improve the FOV marginally, the cost would be prohibitive and the trade-off ultimately nonsensical. At the end of the day, the FOV of the HoloLens developer unit is a physical limitation, not a manufacturing limitation or a power limitation.

[image: haunted_mansion]

But don’t other AR headset manufacturers promise a much larger FOV? Yes. The Meta 2 (shown below) has a 90 degree field of view. The way the technology works, however, involves two LED screens viewed through plastic positioned at 45 degrees to the screens – technically known as a beam splitter, informally known as a piece of glass – that reflects the image into the user’s eyes at approximately half its original brightness while also letting in the real world in front of the user (though half of that light is scattered as well). This is basically the same technique used to create ghostly images in the Haunted Mansion at Disneyland.

[image: brain]

The downside of this increased FOV is that you are losing a lot of brightness through the beam splitter. You are also losing light over the distance it takes the light to pass through the plastic and reach your eyes. The result is a see-through “hologram”.

[image: Iron-Man-AR]

But is this what we want? See-through holograms? The visual design team for Iron Man decided that this is indeed what they wanted for their movies. The translucent holograms provide a cool ghostly effect, even in a dark room.

[image: leia]

The Princess Leia hologram from the original Star Wars, on the other hand, is mostly opaque. That visual design team went in a different direction. Why?

[image: leia2]

My best guess is that it has to do with the use of color. While the Iron Man hologram has a very limited color palette, the Princess Leia hologram uses a broad range of facial tones to capture her expression – and also so that, dramatically, Luke Skywalker can remark on how beautiful she is (which obviously gets messed up by Return of the Jedi). Making her transparent would simply wash out the colors and destroy much of the emotional content of the scene.

[image: star_wars_chess]

The idea that opacity is a prerequisite for color holograms is confirmed in the Star Wars chess scene on the Millennium Falcon. Again, there is just enough transparency to indicate that the chess pieces are holograms and not real objects (digital rather than physical).

[image: dude]

So what kind of holograms does the HoloLens provide, transparent or near-opaque? This is something that is hard to describe unless you actually see it for yourself, but HoloLens “holograms” will occlude physical objects when they are placed in front of them. I’ve had the opportunity to experience this several times over the last year. This is possible because these digital images use a very large color palette and, more importantly, are extremely intense. In fact, because the HoloLens display technology is additive, this occlusion effect actually works best with bright colors. As areas of the screen become darker, they appear more transparent.

Bigger field of view = more transparent, duller holograms. Smaller field of view = more opaque, brighter holograms.
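A quick way to convince yourself of this trade-off is to simulate an additive display. The eye receives roughly the real-world light plus whatever the waveguide injects, so a dark pixel adds almost nothing and disappears into the background. The snippet below is a crude model of the optics just described, not of how the HoloLens actually renders:

import numpy as np

def additive_display(background, hologram):
    """Crude model of an additive see-through display: perceived light is
    the real world plus the display's contribution, clamped at maximum."""
    combined = background.astype(int) + hologram.astype(int)
    return np.clip(combined, 0, 255).astype(np.uint8)

# A mid-gray room, one bright cyan pixel and one dark navy pixel.
room = np.full((1, 2, 3), 120, dtype=np.uint8)
holo = np.array([[[0, 255, 255],   # bright: reads as a solid hologram
                  [0, 0, 40]]],    # dark: nearly invisible over the gray room
                dtype=np.uint8)
print(additive_display(room, holo))
# [[[120 255 255]    <- vivid, occluding cyan
#   [120 120 160]]]  <- barely distinguishable from the room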

I believe Microsoft made the bet that, in order to start designing the AR experiences of the future, we actually want to work with colorful, opaque holograms. The trade-off the technology seems to make in order to achieve this is a more limited field of view in the HoloLens development kits.

At the end of the day, we really want both, though. Fortunately we are currently only working with the Development Kit and not with a consumer device. This is the unit developers and designers will use to experiment and discover what we can do with HoloLens. With all the new attention and money being spent on waveguide displays, we can optimistically expect to see AR headsets with much larger fields of view in the future. Ideally, they’ll also keep the high light intensity and broad color palette that we are coming to expect from the current generation of HoloLens technology.

HoloLens Hardware Specs

[image: visual_studio]

Microsoft is releasing an avalanche of information about HoloLens this week. Within that heap of gold is, finally, clearer information on the actual hardware in the HoloLens headset.

I’ve updated my earlier post on How HoloLens Sensors Work to reflect the updated spec list. Here’s what I got wrong:

1. Definitely no internal eye-tracking camera. I originally thought this was how the “gaze” gesture worked. Then I thought it might be used for calibration of interpupillary distance. I was wrong on both counts.

2. There aren’t four depth sensors, only one. I had originally thought these cameras would be used for spatial mapping. Instead, just the one depth camera is, and it maps a 75-degree cone out in front of the headset with a range of 0.8 m to 3.1 m.

3. The four cameras I saw are probably just grayscale cameras – and it’s these cameras, along with cool algorithms and the IMU, that are being used to do inside-out position tracking.

Here are the final sensor specs:

  • 1 IMU
  • 4 environment understanding cameras
  • 1 depth camera
  • 1 2MP photo / HD video camera
  • Mixed reality capture
  • 4 microphones
  • 1 ambient light sensor

The mixed reality capture is basically a stream that combines digital objects with the video coming through the HD video camera. It is different from the on-stage rigs we’ve seen, which can calculate the mixed-reality scene from multiple points of view. The mixed reality capture is from the user’s point of view only and can be streamed to additional devices like your phone or TV.
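Conceptually, that capture is just per-pixel compositing of the rendered holograms over the camera frames. Here is a minimal sketch of the idea – my illustration only, not Microsoft’s actual pipeline:

import numpy as np

def composite(video_frame, holo_rgb, holo_alpha):
    """Alpha-over compositing: draw the rendered digital objects on top of
    the camera frame, weighted by the hologram layer's alpha mask."""
    alpha = holo_alpha[..., None].astype(float) / 255.0
    out = holo_rgb.astype(float) * alpha + video_frame.astype(float) * (1 - alpha)
    return out.astype(np.uint8)

# A 2x2 camera frame, with a hologram layer covering the right column.
video = np.full((2, 2, 3), 100, dtype=np.uint8)
holo = np.zeros((2, 2, 3), dtype=np.uint8); holo[:, 1] = (0, 200, 255)
mask = np.zeros((2, 2), dtype=np.uint8); mask[:, 1] = 255
print(composite(video, holo, mask))  # left column: video; right: hologram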

Here are the final display specs:

  • See-through holographic lenses (waveguides)
  • 2 HD 16:9 light engines
  • Automatic pupillary distance calibration
  • Holographic Resolution: 2.3M total light points
  • Holographic Density: >2.5k radiants (light points per radian)

I’ll try to explain “light points” in a later post – if I can ever figure it out.
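In the meantime, here is a purely speculative back-of-envelope check that the published numbers at least hang together. If we assume the 2.3M total is split across two eyes and that each eye’s grid has a 16:9 aspect ratio (both assumptions are mine, not Microsoft’s), the radiant figure implies a diagonal FOV in the high 30s of degrees:

import math

total_points = 2.3e6   # "2.3M total light points" from the spec
radiants = 2.5e3       # ">2.5k light points per radian" from the spec

per_eye = total_points / 2            # assumption: total spans both eyes
height = math.sqrt(per_eye * 9 / 16)  # assumption: 16:9 grid per eye
width = per_eye / height
diagonal = math.hypot(width, height)

fov_diagonal = diagonal / radiants    # points divided by points-per-radian
print(f"per-eye grid  ~ {width:.0f} x {height:.0f}")
print(f"diagonal FOV  ~ {math.degrees(fov_diagonal):.0f} degrees")

That works out to roughly 1430 x 804 light points per eye and a diagonal FOV of about 38 degrees – comfortably inside the 32 to 40 degree range of the best current guesses mentioned above.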

The Timelessness of Emerging Technology

In the previous post I wrote about “experience.” Here I’d like to talk about “emerging.”

[image: flash-gordon]

There are two quotations everyone who loves technology or suffers under the burden of technology should be familiar with. The first is Arthur C. Clarke’s quote about technology and magic. The other is William Gibson’s quote about the dissemination of technology. It is the latter quote I am initially concerned with here.

The future has arrived – it’s just not evenly distributed yet.

The statement may be intended to describe the sort of life I live. I try to keep on top of new technologies and figure out what will be hitting markets in a year, in two years, in five years. I do this out of the sheer joy of it but also – perhaps to justify this occupation – in order to adjust my career and my skills to get ready for these shifts. As I write this, I have about fifteen depth sensors of different makes, including five versions of the Kinect, sitting in boxes under a table to my left. I have three webcams pointing at me and an IR reader fifteen inches from my face attached to an Oculus Rift. I’m also surrounded by mysterious wires, soldering equipment and various IoT gadgets that constantly get misplaced because they are so small. Finally, I have about five moleskine notebooks and an indefinite number of sticky pads on which I jot down ideas and appointments. As emerging technology is disseminated from wherever it is this stuff comes from – probably somewhere in Shenzhen, China – I make a point of getting it a little before other people do.

[image: astra-gnome]

But the Gibson quote could just as easily describe the way technology first catches on in certain communities – hipsters in New York and San Francisco – then goes mainstream in the first world and then finally hits the after markets globally. Emerging markets are often the last to receive emerging technologies, interestingly. When they do, however, they are transformative and spread rapidly.

[image: robot]

Or it could simply describe the anxiety gadget lovers face when they think someone else has gotten the newest device before they have – the fear that the future has arrived for others but not for them. In this way, social media has become a boon to the distressed broadcast industry by providing a reason to see a new TV episode live, before it is ruined by your Facebook friends. It is about making the future a personal experience, rather than a communal one, for a moment, so that your opinions and observations and humorous quips about it – for that oh-so-brief period of time – are truly yours and original. No one likes to discover that they are the fifth person responding to a Facebook post, the tenth on Instagram, or the 100th on Twitter to make the same droll comment.

[image: shape]

The quote captures the fact that the future is unevenly distributed in space. What it leaves out is that the future is also unevenly distributed in time. The future I see from 2016 is different from the one I imagined in 2001, and different from the horizon my parents saw in 1960, which in turn is different from the one H. G. Wells gazed at in 1933. Our experience of the future is always tuned to our expectations at any point in time.

[image: lastbattlefield]

This is a trope in science fiction, and the original Star Trek pretty much perfected it. The future was less about futurism in that show than about projecting the issues of the ’60s into the future in order to instruct a ’60s audience about diversity, equality, and the anti-war movement. In that show, the future horizon was a mirror that tried to show us what we really looked like through exaggerations, inversions and a bit of makeup.

[image: kitchen]

I’m grateful to Josh Rothman for introducing me to the term retrofuturism in The New Yorker, which perfectly describes this phenomenon. Retrofuturism – whose exemplars include Star Trek as well as the steampunk movement – is the quasi-historical examination of the future through the past.

[image: flight]

The exploration of retrofuturism highlights the observation that the future is always relative to the present — not just in the sense that the tortoise will always eventually catch up to the hare, but in the sense that the future is always shaped by our present experiences, aspirations and desires. There are many predictions about how the world will be in ten or twenty years that do not fit into the experience of “the future.” Global warming and scarce resources, for instance, aren’t a part of this sense. They are simply things that will happen. Incrementally faster computers and cheaper electronics, also, are not what we think of as the future. They are just change.

[image: modern_mechanix]

What counts as the future must involve a big and unexpected change (though the fact that we have a discipline called futurism and make a sport of future predictions crimps this unexpectedness) rather than a gradual advancement and it must improve our lives (or at least appear to; dystopic perspectives and knee-jerk neo-Ludditism crimps this somewhat, too).

[image: balloons]

Most of all, though, the future must include a sense that we are escaping the present. Hannah Arendt captured this escapist element in her 1958 book The Human Condition, in which she remarks on a then-common view of the Sputnik launch a year earlier as the first “step toward escape from men’s imprisonment to the earth.” The human condition is to be earth-bound and time-bound. Our aspiration is to be liberated from this, and we pin our hopes on technology. Arendt goes on to say:

… science has realized and affirmed what men anticipated in dreams that were neither wild nor idle … buried in the highly non-respectable literature of science fiction (to which, unfortunately, nobody yet has paid the attention it deserves as a vehicle of mass sentiments and mass desires).

As a desire to escape from the present, our experience of the future is the natural correlate to a certain form of nostalgia that idealizes the past. This conservative nostalgia is not simply a love of tradition but a totemic belief that the customs of the past, followed ritualistically, will help us to escape the things that frighten us or make us feel trapped in the present. A time machine once entered, after all, goes in both directions.

[image: flying-car]

This sense of the future, then, is relative to the present not just in time and space but also emotionally. It is tied to a desire for freedom and in this way even has a political dimension – if only as a tonic to despair and cynicism concerning direct political action. Our love-hate relationship with modern consumerism means we feel enslaved to it yet are also waiting for it to save us with gadgets and toys, in what the philosopher Peter Sloterdijk calls a state of “enlightened false consciousness”.

[image: living_room]

And so we arrive at the second most important quote for people coping with emerging technologies, also known as Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

What counts as emerging changes over time. It changes based on your location. It changes based on our hopes and our fears. We always recognize it, though, because it feels like magic. It has the power to change our perspective, letting us see a bigger world where we are not trapped in the way things are but instead feel that anything can happen and everything will work out.

[image: city]

Everything will work out not in the end, which is too far away, but just a little bit more in the future, just around the corner. I prepare for it by surrounding myself with gadgets and screens and diodes and notebooks, like a survivalist waiting for civilization not to crumble, but to emerge.

Kinect developer preview for Windows 10


[image: unicorn]

With kind permission from the product group, I am reprinting this from an email that went out to the Kinect developers mailing list.

Please also check out Mike Taulty’s blog for a walkthrough of what’s provided. The short version, though, is that you can now access the Windows.Devices.Perception APIs to work with the Kinect v2 in a UWP app, as well as use your Kinect v2 for facial recognition as a replacement for your password. Please bear in mind that this is a public preview rather than a final release.

Snip >>

We are happy to announce our public preview of Kinect support for Windows 10.


This preview adds support for using Kinect with the built-in Windows.Devices.Perception APIs, and it also provides beta support for using Kinect with Windows Hello.


Getting started is easy. First, make sure you already have a working Kinect for Windows V2 attached to your Windows 10 PC. The Kinect Configuration Verifier can make sure everything is functioning okay. Also, make sure your Kinect has a good view of your face – we recommend centering it as close to the top or bottom of your monitor as possible, and at least 0.5 meters from your face.


Then follow these steps to install the Kinect developer preview for Windows 10:


1. The first step is to opt in to driver flighting. You can follow the instructions here to set up your registry by hand, or you can use the following text to create a .reg file to right-click and import the settings:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\DriverFlighting\Partner]

"TargetRing"="Drivers"


2. Next, you can use Device Manager to update to the preview version of the Kinect driver and runtime:


   1. Open Device Manager (Windows key + X, then M).

   2. Expand “Kinect sensor devices”.

   3. Right-click on “WDF KinectSensor Interface 0”.

   4. Click “Update Driver Software…”.

   5. Click “Search automatically for updated driver software”.

   6. Allow it to download and install the new driver.

   7. Reboot.


Once you have the preview version of the Kinect for Windows V2 driver (version 2.1.1511.11000 or higher), you can start developing sensor apps for Windows 10 using Visual Studio 2015 with the Windows developer tools. You can also set up Windows Hello to log in using your Kinect:


1. Go to “Settings->Accounts->Sign-in options”.

2. Add a PIN if you haven’t already.

3. In the Windows Hello section, click “Set up” and follow the instructions to enable Windows Hello!


That’s it! You can send us your feedback at k4w@microsoft.com.