Category Archives: Tlon

Older but not wiser

In late December I tried making some infrastructure changes to my blog, which is hosted on Microsoft Azure, and managed to hose the whole thing. Because I’m a devotee of doing things the long way, I spent the next two months learning about Docker containers and command line tools, only to discover that Docker wasn’t my problem at all. The real culprits were the way I’d configured my Linux VM and a button I’d pressed without reading the warnings as closely as they warranted.

Long story short, I finally just blew away that VM and slowly reconstructed my blog from post fragments and backups I found on various machines around the house.

I still need to go through and reconstruct the WordPress categories. For now, though, I’ll pause to reflect on the folly of my technical ways.

My Problem with More Personal Computing as a Branding Attempt

We all know that Microsoft has a long history of problematic branding. For every “Silverlight” that comes along, we get many more confusing monikers like “Microsoft Office Professional Plus 2007.” As the old saw goes, if Microsoft had invented the iPod, they would have called it the “Microsoft I-pod Pro 2005 Human Ear Professional Edition.”

While “More Personal Computing” breaks the trend of long academic nomenclature, it is still a bit wordy. It’s also a pun. For anyone who hasn’t figured out the joke, MPC can mean either [More] Personal Computing — for people who still haven’t gotten enough personal computing, apparently — or [More Personal] Computing — for those who like their technology to be intimate and a wee bit creepy.

But the best gloss on MPC, IMO, comes from this 1993 episode of The Simpsons. Enjoy:

MIXED REALITY ESSENTIALS: A CONCISE COURSE

On Saturday, October 29th, Dennis Vroegop and I will be running a Mixed Reality Workshop as part of the DEVintersection conference in Las Vegas. Dennis is both a promoter of and a trainer in mixed reality; he has made frequent appearances on European TV talking about this emerging technology and has consulted on and led several high-profile mixed reality projects. I’ve worked as a developer on several commercial mixed reality experiences while also studying and writing about the various implications and scenarios for using mixed reality in entertainment and productivity apps.

Our workshop will cover the fundamentals of building for mixed reality during the first half of the day. For the rest of the day, we will work with you to build a mixed reality application of your choice, so come with ideas of what you’d like to make. And if you aren’t sure what you want to create in mixed reality, we’ll help you with that, too.

Here’s an outline of what we plan to cover in the workshop:

  1. Hardware: an overview of the leading mixed reality devices and how they work.
  2. Tools: an introduction to the toolchain used for mixed reality development emphasizing Unity and Visual Studio.
  3. Hello Unity: hands-on development of an MR app using gestures and voice commands.
  4. SDK: we’ll go over the libraries used in MR development, what they provide and how to use them.
  5. Raycasting: covering some things you never have to worry about in 2D programming (see the code sketch after this outline).
  6. Spatial Mapping and Spatial Understanding: how MR devices recognize the world around them.
  7. World Anchors: fixing virtual objects in the real world.

Break for lunch

  8. Dennis and I will help you realize your mixed reality project. At the end of the workshop, we’ll do a show-and-tell to share what you’ve built and go over next steps if you want to publish your work.
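
To give a taste of item 5, here is the kind of gaze raycasting we’ll walk through in Unity. This is a minimal sketch of my own, not official SDK code; the class and field names are placeholders:

    using UnityEngine;

    // A minimal sketch of gaze-based raycasting on HoloLens.
    // The class name and fields are my own placeholders, not SDK names.
    public class GazeCursor : MonoBehaviour
    {
        public GameObject cursor;           // a small quad or sphere marking the gaze point
        public float maxGazeDistance = 10f;

        void Update()
        {
            // On HoloLens the main camera tracks the user's head, so a ray
            // from the camera along its forward vector approximates the gaze.
            var head = Camera.main.transform;
            RaycastHit hit;

            if (Physics.Raycast(head.position, head.forward, out hit, maxGazeDistance))
            {
                // Snap the cursor onto whatever surface the ray hit,
                // including surfaces produced by spatial mapping (item 6).
                cursor.transform.position = hit.point;
                cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
            }
            else
            {
                // Nothing hit: float the cursor at a fixed distance ahead.
                cursor.transform.position = head.position + head.forward * maxGazeDistance;
            }
        }
    }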

We are extremely excited to be doing this workshop at DEVintersection. Mixed reality is forecast to be a multi-billion dollar industry by 2020. This is your opportunity to get in on the ground floor with some real hands-on experience.

(Be sure to use the promo code ASHLEY for a discount on your registration.)

Pokémon Go as an Illustration of AR Belief Circles

[image: Venn diagram]

Recent rumors circulating around Pokémon Go suggest that Niantic will delay the next major update until next year. It was previously believed that additional game elements, creatures and levels beyond level 40 would be arriving sometime in December.

A large gap between releases like this would seem to leave the door open for copycat games to move into the opening Niantic is providing them. And maybe this wouldn’t be such a bad thing. While World of Warcraft is the most successful MMORPG, for instance, it certainly wasn’t the first. Dark Age of Camelot, EverQuest, Asheron’s Call and Ultima Online all preceded it. What WoW did was perhaps to collect the best features of all these games while also riding the right graphics-card cycle to success.

A similar student-becomes-the-master trope could play out for other franchise owners, since the only things that seem to be required to get a Pokémon-like game going are a pre-existing storyline (like WoW had) and 3D assets either available or easily created for the game. With Azure and AWS cloud computing readily available, even infrastructure isn’t the challenge it was when the early MMORPGs were starting out. Possible franchise holders that could make the leap into geographically-aware augmented reality games include Disney, WoW itself, Yu-Gi-Oh!, Magic: The Gathering, and Star Wars.

Imagine going to the park one day and asking someone staring face-down at their phone whether they know where the Bulbasaur showing up on the Nearby tracker is, only to have them not know what you’re talking about because they’re looking for Captain Hook or a Jawa on theirs.

This sort of experience is exemplary of what Vernor Vinge calls belief circles in his book about augmented reality, Rainbows End. Belief circles describe groups of people who share a collaborative AR experience. Because they also share a common real-life world with others, their belief circles may conflict with other people’s belief circles. Stranger still, members of different belief circles do not have access to each other’s augmented worlds, a peculiar twist on the problem of other minds. So while a person in H.P. Lovecraft’s belief circle can encounter someone in Terry Pratchett’s Discworld belief circle at a Starbucks, it isn’t at all clear how they will ultimately interact with one another.

Starbucks itself may provide virtual assets that can be incorporated into either belief circle in order to attract customers from different worlds and backgrounds: basically the multi-tier marketing of the future. Will different things be emphasized in the store based on our self-selected belief circles? Will our drinks have different names and ingredients? How will trademark and copyright laws affect the ability to incorporate franchises into the multi-aspect branding of coffee houses, restaurants and other mall stores?
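
To make the idea concrete, here is a toy model of belief circles, entirely my own illustration rather than anything from Vinge or a real AR platform: virtual entities are anchored to real-world coordinates and tagged with a franchise, and each player’s device renders only the entities belonging to circles that player subscribes to.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A toy model of belief circles; my own illustration, not a real AR API.
    // Every virtual entity is anchored to a real-world location and tagged
    // with the belief circle it belongs to. Each player perceives only the
    // entities from circles they subscribe to.
    class BeliefCircleDemo
    {
        record Entity(string Name, string Circle, double Lat, double Lon);

        static void Main()
        {
            // One shared, geo-anchored world (all three entities in the same park).
            var world = new List<Entity>
            {
                new("Bulbasaur",    "Pokémon Go", 33.7490, -84.3880),
                new("Captain Hook", "Disney",     33.7490, -84.3880),
                new("Jawa",         "Star Wars",  33.7491, -84.3881),
            };

            // Two players standing in the same park, in different circles.
            var alice = new[] { "Pokémon Go" };
            var bob   = new[] { "Disney", "Star Wars" };

            Console.WriteLine("Alice sees: " + string.Join(", ",
                world.Where(e => alice.Contains(e.Circle)).Select(e => e.Name)));
            Console.WriteLine("Bob sees: " + string.Join(", ",
                world.Where(e => bob.Contains(e.Circle)).Select(e => e.Name)));
        }
    }

The shared world is a single data set; the Balkanization happens entirely in the per-user filter.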

But most of all, how will people talk to each other? One of the great pleasures of playing Pokémon Go today is encountering and chatting with people I otherwise wouldn’t meet, and having a common set of interests that trumps our political and social differences. Belief circles in the AR future of five to ten years may instead encourage the opposite trend: the Balkanization of communities into interest zones. Will high-concept belief circles based on art, literature and genre fiction simply devolve into Democrat and Republican belief circles at some point?

Pokémonography

Pokémon Go is the first big augmented reality hit. It also challenges our understanding of what augmented reality means. While it has AR modes for catching as well as battling pokémon, it feels like an augmentation of reality even when these modes are disabled.

[image: Krabby]

Pokémon Go is in large part a game overlaid on top of a maps application. Maps apps, in turn, are an augmentation overlaid on top of our physical world, one that tracks our position inside a digital representation of streets and roads. More than anything else, it is the fully successful realization of the cartography described in Jorge Luis Borges’s story On Exactitude in Science and prominently cited in Baudrillard’s monograph Simulacra and Simulation.

Pokémon Go’s digital world is also the world’s largest game world. Games like Fallout 4 and Grand Theft Auto V boast worlds that encompass 40 and 50 square miles, respectively. Pokémon Go’s world, on the other hand, is co-extensive with the mapped world (or the known world, as we once called it). It has a scale of one kilometer to one kilometer.

[image: Dragonair]

Pokémon Go is an augmented reality game even when we have AR turned off. Players share the same geographic space we do but live, simultaneously, in a different world revealed to them through their portable devices. It makes the real world more interesting, so much so that the sedentary will exercise, and the normally cautious will risk sunburn and heatstroke in post-climate-change summers around the world, in order to participate in an alternative reality. In other words, it shapes behavior by creating new goals. It creates new fiscal economies in the process.

[image: Electabuzz]

Which is all a way of saying that Pokémon Go does what marketing has always wanted to do. It generates a desire for things that, up to a month ago, did not exist.

[image: Magikarp]

A desire for things which, literally, do not exist today.

What more fitting moniker to describe a desire for something that does not exist than Pokémonography? Here are some pics from my personal collection.

[images: Porygon, Snorlax, Wartortle, Raichu, Dratini, Lapras, Dragonite, Gyarados]

HoloLens Surface Reconstruction XEF File Format

[Update 4/23: this turns out to be just a re-appropriation of an extension name. Kinect Studio doesn’t recognize the HoloLens XEF format and vice versa.]

The HoloLens documentation reveals interesting connections with the Kinect sensor. As most people by now know, the man behind the HoloLens, Alex Kipman, was also behind the Kinect v1 and Kinect v2 sensors.

[screenshot: Kinect Studio]

One of the more interesting features of the Kinect was its ability to perform a scan and then play it back later like a 3D movie. The Kinect v2 even came with a recording and playback tool for this called Kinect Studio. Kinect Studio v2 serialized recordings in a file format known as the eXtended Event File (XEF) format, which basically recorded depth point information over time, along with audio and color video if specified.

Now, a few years later, we have HoloLens. Just as the Kinect included a depth camera, the HoloLens has a depth camera that it uses to perform spatial mapping of the area in front of the user. These spatial maps are turned into simulations that are then combined with code so that, in the final visualization, 2D apps appear to be pinned to globally fixed positions while 3D objects and characters seem to be aware of physical objects in the room and interact with them appropriately.
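
For a sense of what “pinned to globally fixed positions” means in code, here is a minimal Unity sketch using the WorldAnchor component. Treat the details as illustrative; the namespace moved between Unity versions:

    using UnityEngine;
    using UnityEngine.XR.WSA;   // earlier Unity versions used UnityEngine.VR.WSA

    // Minimal sketch: locking a hologram to a position in the real world.
    // Once anchored, the device's spatial tracking keeps the object fixed
    // relative to the room rather than to the user.
    public class PinHologram : MonoBehaviour
    {
        void Start()
        {
            // Adding a WorldAnchor freezes this GameObject in world space.
            gameObject.AddComponent<WorldAnchor>();
        }
    }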


Deep in the HoloLens emulator documentation is fascinating information about the ability to play back previously scanned rooms in the emulator. If you have a physical headset, it turns out you can also record surface reconstructions using the Windows Device Portal.

The serialization format, it turns out, is the same one used in Kinect Studio v2: *.xef.

An interesting fact about the XEF format is that Microsoft never released any documentation of its internal structure. When I open up a saved XEF file in Notepad++, this is what it looks like:

[screenshot: an .xef file opened in Notepad++]

Microsoft also never released a library to deserialize depth data from the XEF format, which forced many people trying to make recordings to come up with their own idiosyncratic formats for saving depth information.
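
In the absence of a spec, all you can really do is poke at the bytes yourself. Here is a minimal sketch (recording.xef is a hypothetical path) that hex-dumps the first 64 bytes of a file, roughly the view Notepad++ gives you, so you can hunt for magic numbers and repeating record structures:

    using System;
    using System.IO;
    using System.Linq;

    // Minimal sketch for eyeballing an undocumented binary format.
    class XefPeek
    {
        static void Main()
        {
            byte[] bytes = File.ReadAllBytes("recording.xef");  // hypothetical path

            // Hex + ASCII dump of the first 64 bytes, 16 per row.
            for (int offset = 0; offset < 64 && offset < bytes.Length; offset += 16)
            {
                byte[] row = bytes.Skip(offset).Take(16).ToArray();
                string hex = BitConverter.ToString(row).Replace("-", " ");
                string ascii = new string(row.Select(
                    b => b >= 32 && b < 127 ? (char)b : '.').ToArray());
                Console.WriteLine($"{offset:X4}  {hex,-47}  {ascii}");
            }
        }
    }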

Hopefully, now that the same format is being used across devices, Microsoft will finally release a library for general use, and if not that, then at least a spec of how XEF files are structured.

Rethinking VR/AR Launch Dates

The HTC Vive, Oculus Rift and Microsoft HoloLens all opened for pre-orders in 2016 with plans to ship in early April (or late March, in the case of the Oculus). All have run into fulfillment problems, creating general confusion for their most ardent fans.

I won’t try to go into all the details of what each company originally promised and what each has since done to explain the delays. I honestly barely understand it. Oculus says there were component shortages and is contacting people through email with updates; it has also refunded shipping costs for some purchasers as compensation. HTC had issues with its day-one ordering process and is using its blog for updates. Microsoft hasn’t acknowledged a problem but is using its developer forum to clarify the shipping timeline.

Maybe it’s time to acknowledge that spinning up production for expensive devices in relatively small batches is really, really hard. Early promises from 2015, followed by CES in January 2016 and then GDC in March, probably created an artificial timeline that was difficult to hit.

On top of this, internal corporate pressure has probably driven each product group to hype its device to the point that production goals are difficult to meet. HTC probably has the most experience with international production lines for high-tech gear, and even they stumbled a bit.

Maybe it’s also time to stop blaming each of these companies as they reach for the future. All that’s happened is that some early adopters aren’t getting to be as early as they want to be (including me, admittedly).

As William Gibson said, “The future is already here — it’s just not very evenly distributed.”

HoloLens Occlusion vs Field of View

[image: still from Prometheus]

[Note: this post is entirely my own opinion and purely conjectural.]

Best current guesses are that the HoloLens field of view (FOV) is somewhere between 32 and 40 degrees diagonal. Is this a problem?

We’d definitely all like to be able to work with a larger field of view. That’s how we’ve come to imagine augmented reality working. It’s how we’ve been told it should work, from Vernor Vinge’s Rainbows End to Ridley Scott’s Prometheus to the Iron Man trilogy, and in fact going back as far as Star Wars in the late ’70s. We want and expect a 160-180 degree FOV.

So is the HoloLens FOV a problem? Yes, it is. But keep in mind that the current FOV is an artifact of the waveguide technology being used.

What’s often lost in the discussions about the HoloLens field of view – in fact the question never asked by the hundreds of online journalists who have covered it – is what sort of trade-off was made so that we have the current FOV.

A common internet rumor, likely inspired by a video of tech evangelist Bruce Harris taken a few months ago, is that it has to do with the cost and consistency of production. The argument is borrowed from chip manufacturing and, while there might be some truth to it, it is mostly a red herring. An amazingly comprehensive blog post by Oliver Kreylos in August of last year went over the evidence as well as the related patents and argued persuasively that while more expensive waveguide materials could improve the FOV marginally, the price difference would be prohibitive and the trade-off ultimately nonsensical. At the end of the day, the FOV of the HoloLens developer unit is a physical limitation, not a manufacturing limitation or a power limitation.

[image: the Haunted Mansion ghost effect]

But don’t other AR headset manufacturers promise a much larger FOV? Yes. The Meta 2 (shown below) has a 90 degree field of view. The way the technology works, however, involves two LED screens viewed through plastic positioned at 45 degrees to the screens (technically known as a beam splitter, informally known as a piece of glass) that reflects the image into the user’s eyes at approximately half its original brightness while also letting through the light from the real world in front of the user (though half of that light is scattered as well). This is basically the same technique used to create ghostly images in the Haunted Mansion at Disneyland.

[image: the Meta 2 brain demo]

The downside of this increased FOV is that you are losing a lot of brightness through the beam splitter. You are also losing light over the distance it has to travel through the plastic to reach your eyes. The result is a see-through “hologram.”

[image: Iron Man AR interface]

But is this what we want? See-through holograms? The visual design team for Iron Man decided that this is indeed what they wanted for their movies. The translucent holograms provide a cool, ghostly effect, even in a dark room.

[image: Princess Leia hologram]

The Princess Leia hologram from the original Star Wars, on the other hand, is mostly opaque. That visual design team went in a different direction. Why?

[image: Princess Leia hologram, detail]

My best guess is that it has to do with the use of color. While the Iron Man hologram has a very limited color palette, the Princess Leia hologram uses a broad range of facial tones to capture her expression, and also so that, dramatically, Luke Skywalker can remark on how beautiful she is (which obviously gets messed up by Return of the Jedi). Making her transparent would simply wash out the colors and destroy much of the emotional content of the scene.

[image: the Star Wars holochess scene]

The idea that opacity is a prerequisite for color holograms is confirmed by the Star Wars chess scene on the Millennium Falcon. Again, there is just enough transparency to indicate that the chess pieces are holograms and not real objects (digital rather than physical).


So what kind of holograms does the HoloLens provide, transparent or near-opaque? This is something that is hard to describe unless you actually see it for yourself, but HoloLens “holograms” will occlude physical objects when placed in front of them. I’ve had the opportunity to experience this several times over the last year. This is possible because these digital images use a very large color palette and, more importantly, are extremely intense. In fact, because the HoloLens display technology is currently additive, this occlusion effect actually works best with bright colors. As areas of the screen become darker, they appear more transparent.
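
Here is a toy model of why that happens. This is my own simplification, not Microsoft’s actual rendering pipeline: an additive display can only add light to what is already arriving from the real world, so a black pixel contributes nothing (and reads as fully transparent) while an intense pixel can swamp whatever is behind it (and reads as opaque).

    using System;

    // Toy model of an additive see-through display. My own simplification,
    // not Microsoft's actual pipeline: the eye receives the light from the
    // real scene PLUS the light the display emits. The display can never
    // subtract light, so it cannot render true black.
    class AdditiveDisplay
    {
        // All values are linear light intensities in [0, 1].
        static double Perceived(double worldLight, double hologramLight) =>
            Math.Min(1.0, worldLight + hologramLight);

        static void Main()
        {
            double brightWall = 0.6;

            // A black hologram pixel adds nothing: the wall shows through unchanged.
            Console.WriteLine(Perceived(brightWall, 0.0));   // 0.6, fully transparent

            // A dim pixel barely shifts the result: mostly transparent.
            Console.WriteLine(Perceived(brightWall, 0.1));   // 0.7

            // An intense pixel dominates what's behind it: reads as opaque.
            Console.WriteLine(Perceived(brightWall, 1.0));   // 1.0, occludes the wall
        }
    }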

Bigger field of view = more transparent, duller holograms. Smaller field of view = more opaque, brighter holograms.

I believe Microsoft made the bet that, in order to start designing the AR experiences of the future, we actually want to work with colorful, opaque holograms. The trade-off the technology seems to make in order to achieve this is a more limited field of view in the HoloLens development kits.

At the end of the day, we really want both, though. Fortunately we are currently only working with the Development Kit and not with a consumer device. This is the unit developers and designers will use to experiment and discover what we can do with HoloLens. With all the new attention and money being spent on waveguide displays, we can optimistically expect to see AR headsets with much larger fields of view in the future. Ideally, they’ll also keep the high light intensity and broad color palette that we are coming to expect from the current generation of HoloLens technology.

HoloLens Hardware Specs

[screenshot: Visual Studio]

Microsoft is releasing an avalanche of information about HoloLens this week. Within that heap of gold is, finally, clearer information on the actual hardware in the HoloLens headset.

I’ve updated my earlier post on How HoloLens Sensors Work to reflect the updated spec list. Here’s what I got wrong:

1. Definitely no internal eye-tracking camera. I originally thought this was what the “gaze” gesture used. Then I thought it might be used for calibration of interpupillary distance. I was wrong on both counts.

2. There aren’t four depth sensors, only one. I had originally thought these cameras would be used for spatial mapping. Instead, just the one depth camera is, and it maps a 75-degree cone out in front of the headset, with a range of 0.8 m to 3.1 m.

3. The four cameras I saw are probably just grayscale cameras, and it’s these cameras, together with the IMU and some cool algorithms, that are used to do inside-out position tracking.

Here are the final sensor specs:

  • 1 IMU
  • 4 environment understanding cameras
  • 1 depth camera
  • 1 2MP photo / HD video camera
  • Mixed reality capture
  • 4 microphones
  • 1 ambient light sensor

The mixed reality capture is basically a stream that combines digital objects with the video stream coming through the HD video camera. It is different from the on-stage rigs we’ve seen, which can calculate the mixed-reality scene from multiple points of view; the mixed reality capture is from the user’s point of view only. It can be used for streaming to additional devices like your phone or TV.

Here are the final display specs:

  • See-through holographic lenses (waveguides)
  • 2 HD 16:9 light engines
  • Automatic pupillary distance calibration
  • Holographic Resolution: 2.3M total light points
  • Holographic Density: >2.5k radiants (light points per radian)

I’ll try to explain “light points” in a later post – if I can ever figure it out.