Category Archives: Augmented Reality

Magic Leap Store publishes first paid app


The Magic Leap store (aka Magic Leap “World”) has published its first paid app, for $9.99. The app was created by Insomniac but published by Magic Leap itself, so in some sense it is a trial run for the store. “Seedling” already made its first appearance at the L.E.A.P. conference in Los Angeles in October.

$9.99 is also an interesting price, perhaps signaling a target for apps on Magic Leap devices. Back in the day, $0.99 was the target price for apps in Apple’s App Store. When Microsoft came out with Windows Phone, they marketed the idea that apps should sell for more than that on their platform (more towards $1.49 or $1.99). On Steam, the magic price point for games seems to be $20 to $60.


For the HoloLens, which uses the online Windows Store as its distribution channel, the most frequent price point seems to be free. This makes sense since, even with a purported 50K HoloLens devices currently in the world, the total market is still too small to support a reasonably priced game. Trimble initially went the other way with their SketchUp Viewer, which lists for about $1.5K, apparently trying to recoup their investment with a high price tag. Their subsequent HoloLens offering, part of a collaboration service, is free.

In order to buy Seedling, I had to go into my online Magic Leap Creator account and add a payment method. This is an interesting aspect of all current VR and AR devices: entering data is rarely – and entering financial data is never – done through the actual device. We still live in a world where one must switch to either a phone or a computer in order to establish the credentials that will be used through the device.

This is ultimately a pre-NUI UX problem involving the difficulty of doing data entry without a keyboard and mouse (though we have finally grown accustomed to doing this on our smartphones, thanks to years of using web apps on tiny screens). This will be an ongoing problem for developing apps for the enterprise-y market, where the exchange of data is pretty key.

Who knows, maybe solving this UX dilemma for the enterprise will end up being the killer app we’ve all been waiting for. I wonder how much someone would charge for it?

Thank you Techorama Netherlands


At the beginning of October I was invited to deliver two sessions at Techorama Netherlands: one on Cognitive Services Custom Vision and one about the HoloLens and the Magic Leap One. This is one of the best-organized conferences I’ve been to, and the hosts and attendees were amazing. I can’t say enough good things about it.

The lineup was also great, with Scott Guthrie, Laurent Bugnion, Giorgio Sardo, Shawn Wildermuth, Pete Brown and Jeff Prosise, among others. It is what is known as a first-tier tech conference. What was especially impressive is that this was the first time Techorama Netherlands had been held.

I want to also thank my friend Dennis Vroegop for hosting me and showing me around on my first trip to the Netherlands. He and Jasper Brekelmans took a weekday off to give me the full Amsterdam experience. It was also great to have beers with Roland Smeenk, Alexander Meijers and Joost van Schaik. I’m not sure why there is so much mixed reality talent in the Netherlands but there you go.

Magic Leap One vs HoloLens v1 Comparison


I’m currently sitting in my room at the L.A. Grand Hotel waiting for the L.E.A.P. conference to start. I’ve been holding off on this comparison post because I had promised Dennis Vroegop I would give it first as a talk at the Techorama Netherlands conference – which I did last week. I will do a feature comparison based on publicly available information, then highlight features unique to the Magic Leap, and then distinguish subtle but important differences that only become apparent from spending months with these devices at the developer level. Finally I want to point out design improvements in the Magic Leap that are so good for Mixed Reality that I predict they will be incorporated into the next version of HoloLens.

Keep in mind that this is a comparison of two different generations of devices. The Magic Leap One is coming out two years after the HoloLens and would be expected to be better. At the same time, the HoloLens v2 is being released sometime in 2019 and can be expected to be better still.

1. Field of View

In raw numbers, the field of view of the Magic Leap One is substantially larger than the HoloLens’s. The HoloLens field of view is estimated at about 29-30 degrees wide by 17 degrees high; the Magic Leap One’s is 40 degrees wide by 30 degrees high. There is a corresponding difference in resolution, with the HoloLens offering 1268 by 720 per eye and the Magic Leap One providing 1280 by 960 per eye.
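To make “larger” concrete, treat each field of view as a flat rectangle (a simplification, since these are angular measures):

$$\mathrm{diag}_{HL} \approx \sqrt{30^2 + 17^2} \approx 34.5^\circ \qquad \mathrm{diag}_{ML1} = \sqrt{40^2 + 30^2} = 50^\circ$$

$$\frac{\mathrm{area}_{ML1}}{\mathrm{area}_{HL}} = \frac{40 \times 30}{30 \times 17} \approx 2.35$$

In other words, the diagonal is roughly 45% larger and the area is more than double.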

The Magic Leap One uses the same waveguide display technology as the HoloLens, however, so how did they pump up the FOV? First, the ML1 has a more powerful battery than the HoloLens, and Microsoft has often claimed that FOV is largely dependent on the power of the projection. This is probably offset, though, by the fact that the ML1 uses more power to project in two planes instead of the HoloLens’s one (six waveguide layers compared to four in the HoloLens).

Another trick is that the waveguides in the Magic Leap are closer to the wearer’s eyes than they are in the HoloLens. As a consequence, you can wear glasses underneath the HoloLens while you cannot do so comfortably under the Magic Leap device.

In addition to this, Jasper Brekelmans and Dennis Vroegop suggested over coffees along the Amstel River (in a conversation about David Copperfield) that because one’s peripheral vision is closed off in the ML1, the perceived FOV may be even larger than the actual one. The theory is that, due to the widespread use of glasses, we have become used to ignoring our peripheral vision and are consequently comfortable with this tunneling of our sight.

Blocking off the peripheral field of view might cause issues in certain industrial settings, but the general effect is that what you can see, as a proportion of your overall FOV, is much larger in the ML1 than in the HoloLens. Put another way, the empty areas of your FOV, as a proportion of your available FOV, are much smaller than they are in the HoloLens.

On top of this, the FOV of the ML1 is much taller relative to its width than the HoloLens’s, which may do a better job of accommodating vertical saccadic movements of the eyes.


Because of the narrower gap between the device and the wearer’s eyes, the Magic Leap can’t accommodate glasses the way the HoloLens can. To compensate, Magic Leap is developing relationships with online eyeglass manufacturers to provide prescription inserts that can be placed in front of the waveguides and magnetically locked into place. There’s some controversy over whether this is a good or a bad thing. Some developers have expressed concern that this will make demoing the Magic Leap at events more difficult than demoing the HoloLens, since those with poor vision will either not be able to participate or else we will be forced to carry a large suitcase of prescription inserts to every event.

On the other hand, when I think of what MR will be like in the future, I tend to think of them resembling real glasses (and not electronic contacts, which simply scare me). When they reach the size and ubiquity of modern glasses, it will make sense for each person to have their own personalized device with their appropriate prescription. Magic Leap is on the right track in this case. It’s just in the intervening period that we have to figure out how to share our limited, expensive devices with others.

Feature                            HoloLens v1           Magic Leap One
Price                              $3,000 – $5,000       $2,300
OS                                 Windows               Android variant
Field of View                      ~30 x 17 degrees      40 x 30 degrees
Resolution                         1268 x 720 per eye    1280 x 960 per eye
Depth Sensor                       Time of Flight        Time of Flight
Display Type                       Waveguide             Waveguide
Hand Gestures Recognized           2                     9
Underlying comic book technology   Light Engines         Light Fields
Controller                         Clicker               6DOF controller
Hand Tracking                      Limited               Fingers (3 joints each)
Processing Unit                    Above nose            Light Pack
Audio                              Spatial Sound         Spatial Sound

2. Hardware Specs (It’s all about the battery)

HoloLens v1:

  • CPU: Intel Atom x5-Z8100, 1.04 GHz (Intel Airmont, 14 nm), 4 logical processors, 64-bit capable
  • GPU: Intel (8086h)
  • RAM: 2 GB
  • Storage: 64 GB

Magic Leap One:

  • CPU: NVIDIA Tegra X2 SoC, 2 Denver 2.0 64-bit cores + 4 ARM Cortex-A57 64-bit cores (2 A57s and 1 Denver accessible to applications)
  • GPU: NVIDIA Pascal, 256 CUDA cores; graphics APIs: OpenGL 4.5, Vulkan, OpenGL ES 3.3+
  • RAM: 8 GB
  • Storage: 128 GB

The Magic Leap One is overall a much beefier machine than the current HoloLens. While both the HoloLens and the Magic Leap One advertise a 3 hour battery life, this can mean vastly different things. In order to drive all of its extra hardware, the Magic Leap One needs a much beefier battery. The ML1 is powered by a twin-cell battery with 36.77 Wh, running at 3.83 V. The HoloLens has a 16.5 Wh battery.
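A back-of-the-envelope division of capacity by the advertised three hours shows how different the power budgets really are:

$$P_{ML1} \approx \frac{36.77\ \mathrm{Wh}}{3\ \mathrm{h}} \approx 12\ \mathrm{W} \qquad P_{HL} \approx \frac{16.5\ \mathrm{Wh}}{3\ \mathrm{h}} \approx 5.5\ \mathrm{W}$$

Roughly twice the average power to play with, which is where the denser meshing and beefier rendering described below come from.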

For overall performance, the larger battery means the world meshes (i.e. surface reconstruction, world mapping) are much denser and more frequently updated on the Magic Leap than on the HoloLens. The Time-of-Flight depth camera can fire off more frequently and for longer periods.

The larger battery and beefier specs also translate to much better 3D performance. The HoloLens is able to run 30,000 polygons at 60 fps. Beyond that, the fps begins to drop. The Magic Leap runs upwards of 1 million polygons at 60 fps.

On the downside, that more powerful battery rig needs a fan to cool it whereas the HoloLens is passively cooled. In laboratory and medical scenarios where a sterile environment must be maintained, active cooling with a fan could be a problem.

3. The HoloLens and Tracking

The HoloLens uses four monochrome cameras (“environment aware sensors”), an accelerometer, magnetometer and gyroscope in a sensor fusion configuration, and a custom HPU (Holographic Processing Unit) to perform head tracking. The Magic Leap One has a similar setup minus the HPU.
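To get an intuition for what “sensor fusion” means here, the toy sketch below blends a fast-but-drifting gyro integration with a slow-but-stable absolute reference. This is purely illustrative Unity code, not anything from either SDK – the real pipelines fuse camera-based tracking with the IMU in far more sophisticated ways (that’s largely what the HPU is for):

```csharp
using UnityEngine;

// Toy complementary filter: the gyro is trusted frame-to-frame (low latency,
// but it drifts), while an absolute attitude reference slowly pulls the
// estimate back (no drift, but noisy/laggy).
public class ToyOrientationFusion : MonoBehaviour
{
    [Range(0f, 1f)]
    public float gyroWeight = 0.98f;

    private Quaternion estimate = Quaternion.identity;

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Integrate angular velocity (rad/s) over the frame. Euler-angle
        // integration is only approximate, but fine for small per-frame angles.
        Vector3 omega = Input.gyro.rotationRateUnbiased;
        Quaternion gyroEstimate = estimate *
            Quaternion.Euler(omega * Mathf.Rad2Deg * Time.deltaTime);

        // Absolute reference. On a phone this is itself already fused by the
        // OS; a headset would derive it from gravity, magnetometer and cameras.
        Quaternion reference = Input.gyro.attitude;

        // Blend: mostly gyro, with a slow correction toward the reference.
        estimate = Quaternion.Slerp(reference, gyroEstimate, gyroWeight);
        transform.rotation = estimate;
    }
}
```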

The HoloLens tracking is still somewhat better than the ML1’s. It loses tracking less frequently and digital content is less jittery when seen up close or while the wearer is in motion.

Overall, though, tracking performance is fairly close between the two devices.

4. Magic Leap Extras

The ML1 has a couple of features that are simply outside the box. One is eye tracking: inward-facing cameras track the wearer’s eye movements using invisible IR flashes.

The tracking is not continuous and is captured at a much lower resolution than the displays. While it shouldn’t be used for direct user interactions, it is great for providing context for other interactions. It would be great if someone would write a keyboard that uses eye tracking to select keys. In the meantime, I wrote a heat vision demo that uses eye tracking to burn the walls of my house — I think of it as “Superman with a Migraine”. It also tracks eye blinks.
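Stripped to its essentials, the demo is just the pattern below: read a fixation point, raycast into the world mesh, leave a mark. The eye-tracking calls are hypothetical wrappers (on the Magic Leap you would back them with the Lumin SDK’s eye-tracking API), and the prefab and layer names are placeholders:

```csharp
using UnityEngine;

// Sketch of a gaze-driven "heat vision" effect.
public class HeatVision : MonoBehaviour
{
    public GameObject scorchDecalPrefab; // placeholder "burn mark" prefab
    public LayerMask worldMeshLayer;     // layer holding the meshed walls

    void Update()
    {
        if (IsBlinking()) return; // blinking ceases fire

        // Cast a ray from the headset through the 3D fixation point.
        Vector3 origin = Camera.main.transform.position;
        Vector3 direction = (GetFixationPoint() - origin).normalized;

        RaycastHit hit;
        if (Physics.Raycast(origin, direction, out hit, 10f, worldMeshLayer))
        {
            // Leave a burn mark on the world mesh where the user is looking.
            Instantiate(scorchDecalPrefab,
                        hit.point + hit.normal * 0.01f,
                        Quaternion.LookRotation(-hit.normal));
        }
    }

    // Hypothetical wrappers: implement these with the device SDK.
    Vector3 GetFixationPoint() { return Vector3.zero; }
    bool IsBlinking() { return false; }
}
```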

The other cool extra in the Magic Leap is two planes of focus. Most VR devices have a single plane of focus at infinity. The HoloLens has a single plane of focus set at two meters.

In the Magic Leap One, when you look at near objects, objects further away (on the outer plane) seem to go out of focus, and when you look at far objects, near ones blur instead. I would guess that the close plane is around a meter and the outer one about 3 meters, but I’m not really sure. In Lumin OS 0.91, there is also a sporadic green shift in the near plane (which I expect will be fixed soon).

5. The Tether


The Magic Leap One is made up of two parts: the Light Pack and the Light Wear. They are connected by a cable. The Light Wear contains all the sensors, projectors and displays, while the Light Pack, worn at the hip, contains all the computer bits and the battery.

This is an engineering choice that allows for a much larger power source. Without the tether solution, a large battery would not be possible. Without the large battery, the ML1’s enhanced depth sensing, improved graphics processing and larger field of view would not be possible.

In addition, this design makes the Magic Leap a much more comfortable fit on the head. The weight distribution is better than on the HoloLens, it is lighter, and it doesn’t require extra straps.

The tether solution is actually so effective that I would be surprised if the HoloLens v2 does not follow a similar design. The original one-piece “tetherless” solution Microsoft came up with for the HoloLens was visionary, but severely limiting.

6. Developing

If you have ever developed in Unity for Android (or really any other device), then you know how to develop for Magic Leap in Unity. You press a button and your app compiles to an .mpk image (Android uses the “.apk” file extension). If your device is attached, you can deploy directly by clicking “Build and Run”.

Magic Leap apps can also be built with the Unreal Engine.

HoloLens apps run on a Unity player sandboxed in a UWP app. The development cycle consequently involves exporting your HoloLens app as a Visual Studio project targeting UWP and then building and deploying in UWP. In general (and it may just be me) this has been tedious.
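A minimal editor build script makes the contrast visible. The scene path and output locations below are hypothetical, and it assumes a Unity install with the relevant target support (Lumin for Magic Leap, WSAPlayer for UWP/HoloLens):

```csharp
using UnityEditor;

// Sketch: the same BuildPipeline call handles both devices; only the target
// differs. Lumin yields an .mpk you can deploy directly, while WSAPlayer
// exports a Visual Studio UWP solution that still has to be built and
// deployed separately (the extra step lamented above).
public static class BuildScript
{
    static readonly string[] scenes = { "Assets/Scenes/Main.unity" }; // hypothetical

    [MenuItem("Build/Magic Leap (.mpk)")]
    public static void BuildMagicLeap()
    {
        BuildPipeline.BuildPlayer(scenes, "Builds/MyApp.mpk",
            BuildTarget.Lumin, BuildOptions.None);
    }

    [MenuItem("Build/HoloLens (UWP solution)")]
    public static void BuildHoloLens()
    {
        BuildPipeline.BuildPlayer(scenes, "Builds/UWP",
            BuildTarget.WSAPlayer, BuildOptions.None);
    }
}
```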

It became even worse when the immersive WinMR devices (or occluded WinMR – basically Microsoft VR) came out last year and the basic toolkit used for HoloLens development, known first as the HoloToolkit and then as the Mixed Reality Toolkit, was expanded to support both kinds of devices. Because of some issues with Unity, building for WinMR required certain versions of Unity and above, while developing for HoloLens required certain versions of Unity and below. This state of affairs went on for several months, to the point that finding the correct Windows SDK paired with the right MRTK version paired with the correct Unity version became a closely kept alchemical formula passed from developer to developer.

This experience may not be the same for everyone but it left me a bit traumatized. By contrast, Magic Leap development is simply a pleasure. I can build and see the results very quickly in my device. I can wear the device for hours at a time. I typically only stop development when the ML battery runs down and I have to let it recharge. I don’t have a Magic Leap Hub, which would allow me to charge while I dev, but I intend to get one.

The Magic Leap toolkit is still not quite as capable as the open source Mixed Reality Toolkit managed by Stephen Hodgson and others.

The Magic Leap also has a simulator, rather than an emulator, for developing without a device. This actually makes sense: the HoloLens emulator runs the HoloLens OS in a virtual machine, and doing the same for the Magic Leap might be tricky given its much beefier specs.

7. Interactions


The Magic Leap supports robust hand and gesture tracking as well as a 6DOF controller. The DOF in 6DOF stands for degrees of freedom: we know not only the direction the controller is pointing in (three rotational degrees) but also its position in space (three translational degrees).
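Consuming that pose takes only a few lines in Unity. The sketch below uses the Lumin SDK’s MLInput names as I recall them – treat the exact API as an assumption – but the same pattern applies to any tracked controller that reports position plus orientation:

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // Lumin SDK Unity package (assumed)

// Drive a virtual pointer from the controller's 6DOF pose.
public class ControllerPointer : MonoBehaviour
{
    private MLInputController controller;

    void Start()
    {
        MLInput.Start();                        // begin receiving input
        controller = MLInput.GetController(0);  // first connected controller
    }

    void Update()
    {
        // 6DOF = 3 translational + 3 rotational degrees of freedom.
        transform.position = controller.Position;    // where it is
        transform.rotation = controller.Orientation; // where it points
    }

    void OnDestroy()
    {
        MLInput.Stop();
    }
}
```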


I love the controller. I love it so much it made me finally admit to myself that I hate the HoloLens tap gesture. No one ever gets it right. It’s awkward. It’s uncomfortable and makes me feel like I’m performing a kung fu move.


By contrast, a controller just makes sense. The UX for MR, I believe, should always support three layers of interactions. Mixed reality UX should support hand gestures for ease of use. It should fall back to the controller for precision movements. It should finally fall back on the delta pad on the controller for accessibility.

For all of my antipathy toward the HoloLens tap, however, I have to say I miss the HoloLens bloom gesture (escape), which I keep trying to use in Magic Leap to no avail. Instead, in Magic Leap holding the controller’s Home button for three seconds is the escape gesture, which I don’t really like. It also bothers me that hand gestures aren’t supported in the core desktop (the Icon grid) – but this is still the Creator’s Edition (translation: dev edition) after all.

[Late edit thanks to SH: it should also be pointed out that the Lumin OS (the desktop layer) currently doesn’t support hand gestures, which I find baffling. For now, you can’t get past the login and other initial screens without a paired phone or a controller.]

Summing Up

So is the Magic Leap One better than the HoloLens v1? Oh yes. By leaps and bounds.

1. The development workflow is much more straightforward and pleasant.

2. The increased battery size and beefier hardware make it possible to do things, performance-wise, that the HoloLens tended to stop us from doing. Phone and tablet level experiences are doable now.

3. The Magic Leap One has a much better interaction model than the HoloLens does. How did anyone ever do MR without a controller? (Actually, everyone used an Xbox controller in the end in order to get any sort of real work done, but we don’t talk about that much.)

Is it time to jump back into Mixed Reality development?

If you spent $3K to $5K for a HoloLens, then you owe it to yourself to spend $2,300 for a Magic Leap. It’s the device you originally wanted. The HoloLens was a brilliant device back in 2016 and really the first of its kind, but it had limitations. Many of the projects you were never able to realize in HoloLens (in the small dev community that developed around HoloLens, we all know what these are) are now doable with the improved Magic Leap specs. Additionally, your enterprise stories are much easier to sell with the controller. Instead of spending 5 minutes of your precious pitch time explaining how tap works, you can now just let your potential investors and clients go straight into the demo with a controller they basically already know how to use.

Is there a future in spatial computing?

Now there is. There was a brief pause between 2016 and the middle of 2018, but we currently have two great devices available with another shoe dropping soon. Microsoft will be coming out with a HoloLens v2 sometime in the first half of 2019, which I would predict will implement the tethered design Magic Leap is using. This will be an improvement over the current Magic Leap, which in turn will be driven to improve its own tech.

Microsoft has an advantage because it started this journey back in the Kinect days and has the resources of Microsoft Research to draw on. Magic Leap has an advantage because, well, they aren’t Microsoft and don’t face the internal political problems a large tech giant does (though no doubt they have their own). More importantly, they have their own U.S.-based production lines (as well as production lines in Mexico) and are less reliant on China, which hopefully means they are capable of much quicker turnarounds and initial SKU production.

When do we get smaller devices that wear like glasses?

I have no idea, but try to think in terms of 3, 5, 10 years. We always overestimate what can be done in 3 years but always underestimate how much things will change in 10. Somewhere in the middle, we will intersect with our MR futures.

Your comments, corrections and criticisms are welcome in the comments below. I’ll try to keep up with them and incorporate what you say into the main article as appropriate.

My VRLA Shopping List


I meant to finish this earlier in the week. I spent the past weekend in Los Angeles at the VRLA conference in order to hear Jasper Brekelmans speak about the state of the art in depth sensors and visual effects. One of the great things about VRLA is all the vendor booths you can visit that are directly related to VR and AR technology. Nary a data analytics platform pitch or dev ops consulting services shill in sight.

Walking around with Jasper, we started compiling a list of how we would spend our Bitcoin and Ethereum fortunes once they recover some of their value. What follows is my must-have shopping list if I had so much money I didn’t need anything:

1. Red Frog HoloLens mod


First off is this modified HoloLens by Red Frog Digital. The fabrication allows the HoloLens to balance much better on the user’s head: it applies no pressure to the bridge of the nose, distributing the weight across the head instead. The nicest thing about it is that it always provides a perfect fit and can be properly aligned with the user’s eyes in about 5 seconds. They designed this for their Zombie Maze location-based experience and are targeting it at large, permanent exhibits and rides.

2. Cleanbox … the future of wholesome fun


If you’ve ever spent a lot of time doing AR and VR demos at an event, you know there are three practical problems you have to work around:

  • seating devices properly on users’ heads
  • cleaning devices between use
  • recharging devices

Cleanbox Technology provides a solution for venue-based AR/VR device cleaning. Place your head-mounted display in the box, close the lid, and it instantly gets blasted with UV rays and air. I’d personally be happy just to have nice wooden boxes for all of my gear – I have a tendency to leave them lying on the floor or scattered across my computer desk – even without the UV lights.

3. VR Hamster Ball


The guy demoing this never seemed to let anyone try it, so I’m not sure if he was actually playing a hamster sim or not. I just know I want one as a 360 running-in-place controller … and as a private nap space, obviously.

4. Haptic Vest and Gauntlets


Bhaptics was demoing their TactSuit, which provides haptic feedback along the chest, back, arms and face. I’m going to need it to go with my giant hamster ball. They are currently selling dev units.

5. Birdly


A tilt table with an attached fan and a user control in the form of flapping wings is what you need for a really immersive VR experience. Fortunately, this is exactly what Birdly provides.

6. 5K Head-mounted Display


I got to try out the Vive Pro, which has an astounding 2K resolution. But I would rather put my unearned money down for a VRHero 5K VR headset with 170 degree FOV. They seem to be targeting industrial use cases rather than games, though, since their demo was of a truck simulation (you stood in the road as trucks zoomed by).

7. A globe display


Do I need a giant spherical display? No, I do not need it. But it would look really cool in my office as a conversation piece. It could also make a really great companion app for a VR or AR experience.

8. 360 Camera Rig with Red Epic Cameras


Five 6K Red Dragon Epic Cameras in a 360 video rig may seem like overkill, but with a starting price of around $250K, before tripod, lenses and a computer powerful enough to process your videos – this could make the killer raffle item at any hi-tech conference.

9. XSens Mocap Suit

According to Jasper, the XSens full-body lycra motion capture suit with real-time kinematics is one of the best available. I think I was quoted a price of something like $7K(?) to $13K(?). Combined with my hamster ball, it would make me unstoppable in PvP competitive Minecraft.

10. AntVR AR Head-mounted display


AntVR will be launching a Kickstarter campaign for their $500 augmented reality HMD in the next few weeks. I’d been reading about it for a while and was very excited to get a chance to try it out. It uses a Pepper’s ghost strategy for displaying AR, has decent tracking, works with Steam, and at $500 hits a very attractive price point.

11. Qualcomm Snapdragon 845


The new Qualcomm Snapdragon 845 Chip has built-in SLAM – meaning 6DOF inside-out tracking is now a trivial chip-based solution – unlike just two years ago when nobody outside of robotics had even heard of SLAM algorithms. This is a really big deal.

Lenovo is using this chip in its new (untethered) Mirage Solo VR device – which looks surprisingly like the Windows Occluded MR headset they built with Microsoft tracking tech. At the keynote, the Lenovo rep stumbled and said that they will support “at least” 6 degrees of freedom, which has now become an inside joke among VR and AR developers. It’s also spoiled me, because I am no longer satisfied with only 6DOF. I need 7DOF at least but what I really want is to take my DOF up to 11.

12. Kinect 4


This wasn’t actually at VRLA, and I’m not ultimately sure what it is (maybe a competitor for the Google computer vision kit?) but Kinect for Azure was announced at the /build conference in Seattle and should be coming out sometime in 2019. As a former Kinect MVP and a Kinect book author, this announcement mellows me out like a glass of Hennessy in a suddenly quiet dance club.

While I’m waiting for bitcoin to rebound, I’ll just leave this list up on Amazon for, like, in case anyone wants to fulfill it for me or something. In the off chance that that actually comes through, I can guarantee you a really awesome unboxing video.

Hitchhiking the Backroads to Augmented Reality

To date, Microsoft has been resistant to sharing information about the HoloLens technology. Instead, they have relied on shock-and-awe demos to impress people with the overall experience rather than getting mired in the nitty-gritty of the software and hardware engineering. Even something as simple as the field of view is never described in mundane numbers but rather in circumlocutions about TV screens at X distance from the viewer. It definitely builds up mystery around the product.

Given the lack of concrete information, lots of people have attempted to fill in the gaps with varying degrees of success which, in their own way, make it difficult to navigate the technological true true. In an effort to simplify the research one typically has to do on one’s own in order to understand HoloLens and AR, I’ve made a sort of map for those interested in making their way. Here are some of the best resources I’ve found.

1. You should start with the Oculus blog, which is obviously about the Oculus and not about HoloLens. Nevertheless, the core technology that makes the Oculus Rift work is also in the HoloLens in some form. Moreover, the Oculus blog is a wonderful example of sharing and successfully explaining complicated concepts to the layman. Master these posts about how the Rift works and you are halfway to understanding how HoloLens works:

2. Next, you should really read Oliver Kreylos’s (Doc OK) brilliant posts about the HoloLens field of view and waveguide display technology. Many disagreements around HoloLens would evaporate if people would simply invest half an hour into reading OK’s insights:

3. If you’ve gone through these, then you are ready for Dr. Michael J. Gourlay’s YouTube discussion of surface reconstruction, occlusion, tracking and mapping. Sadly, the audio drops out at key moments and the video drops out for the entire Q&A, but there’s lots of gold for everyone in this mine. Also check out his audio interview at Georgia Tech:

4. There have been lots of first-impression blog posts concerning the HoloLens, but Jasper Brekelmans provides far and away the best of these by following a clear just-the-facts-ma’am approach:

5. HoloLens isn’t only about learning new technology but also discovering a new design language. Mike Alger’s video provides a great introduction into the problems as well as some solutions for AR/VR interface and usability design:

6. Oculus, Leap Motion and others who have been designing VR experiences provide additional useful tips about what they have discovered along the way in articles like the now famous “Swayze Effect” (yes, that Swayze):

7. Finally, here are some video parodies and inspirational videos of VR and AR from the tv show Community and others:

I know I’ve left a lot of good material out, but these have been some of the highlights for me over the past year while hitchhiking on the backroads leading to Augmented Reality. Drop them in your mental knapsack, stick out your thumb and wait for the future to pick you up.

Ceci n’est pas une pipe bombe

[Photo: the homemade clock Ahmed Mohamed brought to school]

This is the picture of the homemade clock Ahmed Mohamed brought to his Irving, Texas high school. Apparently no one ever mistook it for a bomb, but they did suspect that it was made to look like a bomb and so they dragged the hapless boy off in handcuffs and suspended him for three days.

This is a strange case of perception versus reality in which the virtual bomb was never mistaken for a real bomb. Instead, what was identified was that it was only virtually a bomb and, as with all things virtual, therefore required some sort of explanation.

The common sympathetic explanation is that this isn’t a picture of a virtual bomb at all but rather a picture of a homemade clock. Ahmed recounts that he made the clock, in maker fashion, in order to show an engineering teacher because he had done robotics in middle school and wanted to get into a similar program in high school. Homemade clocks, of course, don’t require an explanation since they aren’t virtually anything other than themselves.

[Image: René Magritte’s “The Treachery of Images”]

It turns out, however, that the picture at the top does not show a homemade maker clock. Various engineering types have examined the images and determined that it is in fact a disassembled clock from the ’80s.

The telling detail is the DC power cord, which doesn’t typically show up in homemade projects; anyone working on Arduino-style projects pretty much always uses AA batteries. The clock components have also been traced back to their original source, so the evidence seems pretty solid.


The photo at the top shows neither a virtual bomb nor a homemade clock but, in fact, a virtual homemade clock. That is, it was made to look like a homemade clock but was mistakenly believed to be something made to look like a homemade bomb.

[As a disclaimer about intentions, which is necessary because getting on the wrong side of this gets people in trouble, I don’t know Ahmed’s intentions and while I’m a fan of free speech I can’t say I actually believe in free speech having worked in marketing and I think Ahmed Mohamed looks absolutely adorable in his NASA t-shirt and I have no desire to be placed in company with those other assholes who have shown that this is not a real homemade clock but rather a reassembled ’80s clock and therefore question Ahmed’s motives whereas I refuse to try to get into a high schooler’s head, having two of my own and knowing what a scary place that can be … something, something, something … and while I can’t wholeheartedly support every tweet made by Richard Dawkins and have at times even felt in mild disagreement with things he and others have tweeted on twitter I will say that I find his book The Selfish Gene a really good read … etc, etc, … and for good measure fuck you FoxNews.]

 


The salient thing for me is that we all implicitly know that a real bomb isn’t supposed to look like a bomb. The authorities at Ahmed’s high school knew that immediately. Bombs are supposed to look like shoes or harmless tourist knickknacks. If you think it looks like a bomb, it obviously isn’t. So what does it mean to look like a bomb (to be virtually a bomb) but not be an actual bomb?


I covered similar territory once before in a virtual exhibit called les fruits dangereux and at the time concluded that virtual objects, like post-modern novels, involve bricolage and the combining of disparate elements in unexpected ways: for instance, combining phones, electrical tape and fruit, or combining clock parts and pencil cases. Disrupting categorical thinking at a very basic level makes people – especially authority people – suspicious and unhappy.

Which gets us back to racism, which is apparently what happened to Ahmed Mohamed, who was led out of school in handcuffs in front of his peers – and we’re talking high school! He wasn’t asking to be called “McLovin.” It’s pretty cruel stuff. The fear of racial mixing (socially or biologically) always raises its head and comes from the same desire to categorize people and things into bento box compartments. The great fear is that we start to acknowledge that we live in a continuum of types rather than distinct categories of people, races and objects. In the modern age, mass production makes all consumer objects uniform in a way that artisanal objects never were, while census forms do the same for people.

Virtual reality will start by copying real world objects in a safe way. As with digital design, it will start with skeuomorphism to make people feel safe and comfortable. As people become comfortable, bricolage will take hold simply because, in a digital world rather than a commoditized/commodified world, mashups are easy. Irony and a bit of subversiveness will lead to bricolage with purpose as we find people’s fantasies lead them to combine digital elements in new and unexpected ways.

We can all predict augmented and virtual ways to press a digital button or flick through a digital menu projected in front of us in order to get a virtual weather forecast. Those are the sorts of experiences that just make people bored with augmented reality vision statements.


The true promise of virtual reality and augmented reality is that they will break down our racial, social and commodity thinking. Mixed-reality has the potential to drastically change our social reality. How do social experiences change when the color of a person’s avatar tells you nothing real about them, when our social affordances no longer provide clues or shortcuts to understanding other people? In a virtual world, accents and the shoes people wear no longer tell us anything about their educational background or social status. Instead of a hierarchical system of discrete social values, we’ll live in a digital continuum.

That’s the sort of augmented reality future I’m looking forward to.

The important point in the Ahmed Mohamed case, of course, is that you shouldn’t arrest a teenager for not making a bomb.

Why Augmented Reality is harder than Virtual Reality


At first blush, it seems like augmented reality should be easier than virtual reality. Whereas virtual reality involves the generation of full stereoscopic digital environments as well as interactive objects to place in those environments, augmented reality is simply adding digital content to our view of the real world. Virtual reality would seem to be doing more heavy lifting.


In actual fact, both technologies are creating illusions to fool the human eye and the human brain. In this effort, virtual reality has an easier task because it can shut out points of reference that would otherwise belie the illusion. Augmented reality experiences, by contrast, must contend with real world visual cues that draw attention to the false nature of the mixed reality content being added to a user’s field of view.

In this post, I will cover some of the additional challenges that make augmented reality much more difficult to get right. In the process, I hope to also provide clues as to why augmented reality HMDs like HoloLens and Magic Leap are taking much longer to bring to market than VR devices like the Oculus Rift, HTC Vive and Sony Project Morpheus.


But first, it is necessary to distinguish between two different kinds of augmented reality experience. One is informatics-based and is supported by most smart phones with cameras. The ideal example of this type of AR is the Terminator-vision from James Cameron’s 1984 film “The Terminator.” It is relatively easy to do and is the most common kind of AR people encounter today.


The second, and more interesting, kind of AR requires inserting illusory 3D digital objects (rather than informatics) into the world. The battle chess game from 1977’s “Star Wars” epitomizes this second category of augmented reality experience. This is extremely difficult to do.

The Microsoft HoloLens and Magic Leap (as well as any possible HMDs Apple and others might be working on) are attempts to bring both the easy type and the hard type of AR experience to consumers.

Here are a few things that make this difficult to get right. We’ll put aside stereoscopy which has already been solved effectively in all the VR devices we will see coming out in early 2016.


1. Occlusion

The human brain constantly picks up clues from the world (shading, relative size, perspective) in order to determine the relative positions of objects. Occlusion is one of the trickier ones to solve, and it is so obvious an effect that it’s hard to realize it is a visual cue at all. When one body is in our line of sight and is positioned in front of another body, that other body is partially hidden from our view.

In the case where a real world object is in front of a digital object, we can clip the digital object with an outline of the object in front to prevent bleed through. When we try to create the illusion that a digital object is positioned in front of a real world object, however, we encounter a problem inherent to AR.

In a typical AR HMD we see the real world through a transparent screen upon which digital content is either projected or, alternatively, illuminated as with LED displays. An obvious characteristic of this is that digital objects on a transparent display are themselves semi-transparent. Getting around this issue would seem to require being able to make certain portions of the transparent display more opaque than others as needed in order to make sure our AR objects look substantial and not ghostly.
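One practical consequence for developers: on an additive display a black pixel emits no light and therefore reads as fully transparent, so a hologram can only ever add light on top of the world. This is why the standard Unity camera setup for see-through HMDs clears to solid black instead of a skybox. A minimal sketch:

```csharp
using UnityEngine;

// Camera setup for additive (see-through) displays: black means
// "draw nothing", so the real world shows through untouched.
public class AdditiveDisplaySetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;
    }
}
```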

[Still: a deep-focus shot from “Citizen Kane”]

2. Accommodation

It turns out that stereoscopy is not the only way our eyes recognize distance. The image above is from a scene in Orson Welles’s “Citizen Kane” in which a technique called “deep focus” is used extensively. Deep focus maintains clarity in the frame whether the actors and props are in the foreground, background or middle ground. Nothing is out of focus. The technique is startling both because it is counter to the way movies are generally shot and because it is counter to how our eyes work.

If you cover one eye and use the other to look at one of your fingers, then move the finger toward and away from you, you should notice yourself refocusing on the finger as it moves while other objects around the finger become blurry. The lens of the eye actually becomes more rounded when objects are close, in order to refract light more strongly onto the retina. For further away objects, the lens flattens out because less refraction is needed. As we become older, this ability to bow the lens lessens and we lose some of our ability to focus on near objects – for instance when we read. In AR, we are attempting to make a digital object that is really only centimeters from our eyes appear to be much further away.

Depending on how the light from the display passes through the eye, we may end up with the digital object appearing clear while the real world objects supposedly next to it and at the same distance appear blurred.


3. Vergence-Accommodation Mismatch

The accommodation problem is one aspect of yet another VR/AR difficulty. The term vergence describes the convergence and divergence of the two eyes from one another as objects move closer or further away. An interesting aspect of stereoscopy – which is used in both virtual reality and augmented reality to create the illusion of depth – is that the distance at which the two eyes converge on an object is generally different from the focal distance from the eyes to the display screen(s). This consequently sends two mismatched signals to the brain concerning how far away the digital object is supposed to be. Is it the focal length or the vergence length? Among other causes, vergence-accommodation mismatch is believed to be a contributing factor to VR sickness. Should the accommodation problem above be resolved for a given AR device, it is safe to assume that the vergence-accommodation mismatch will also be solved.
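To put rough numbers on vergence: with an interpupillary distance (IPD) of about 64 mm, the two eyes’ lines of sight converge on an object at distance d at an angle of

$$\theta = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right)$$

which works out to about 7.3 degrees for an object rendered at 0.5 m and about 1.8 degrees at 2 m. Vergence follows the rendered distance while accommodation stays locked to the display’s fixed focal distance; that disagreement between the two signals is the mismatch.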

4. Tetherless Battery Life

Smart phones have changed our lives, among other reasons, because they are tetherless devices. While the current slate of VR devices all leverage powerful computers to which they are attached, since VR experiences are all currently somewhat stationary (the HTC Vive being the odd bird), AR needs to be portable. This naturally puts a strain on the battery, which needs to be relatively light since it will be attached to the head-mounted display, but also long-lived as it will be powering occasionally intensive graphics, especially for games.

5. Tetherless GPU

Another strain on the system is the capability of the GPU. Virtual reality devices can have fairly intense graphics requirements, since they expect the user to purchase a reasonably powerful and somewhat expensive graphics card. AR devices can be expected to have similar graphics requirements to VR with much less to work with, since the GPU needs to be onboard. We can probably expect that a streamlined graphics pipeline, dedicated to and optimized for AR experiences, will help offset lower GPU capabilities.

6. Applications

Not even talking about killer apps here. Just apps. Microsoft has released videos of several impressive demos, including Minecraft for HoloLens. Magic Leap up to this point has only shown heavily produced, post-production illustrative videos. The truth is that everyone is still trying to get their heads around designing for AR. There aren’t really any guidelines for how to do it or even which interactions will work. Other than the most trivial experiences (e.g. weather and clock widgets projected on a wall), this will take a while as we develop best practices while also learning from our mistakes.

Conclusion

With the exception of the vergence-accommodation mismatch (#3), these are all problems that VR does not have to deal with. Is it any wonder, then, that while we are being led to believe that consumer models of the Oculus Rift, HTC Vive and Sony Project Morpheus will come to market in the first quarter of 2016, news about HoloLens and Magic Leap has been much more muted? There is simply much more to get right before a general rollout. One can hope, however, that dev units will start going out soon from the major AR players in order to mitigate challenge #6 while further tuning continues, if needed, on challenges #1-#5.

HoloLens App Development with Unity

A few months ago I wrote a speculative piece about how HoloLens might work with XAML frameworks based on the sample applications Microsoft had been showing.

Even though Microsoft has still released scant information about integration with 3D platforms, I believe I can provide a fairly accurate walkthrough of how HoloLens development will occur for Unity3D. In fact, assuming I am correct, you can begin developing games and applications today and be in a position to release a HoloLens experience shortly after the hardware becomes available.

To be clear, though, this is just speculative and I have no insider information about the final product that I can talk about. This is just what makes sense based on publicly available information regarding HoloLens.

Unity3D integration with third party tools such as Kinect and Oculus Rift occurs through plugins. The Kinect 2 plugin can be somewhat complex as it introduces components that are unique to the Kinect’s capabilities.

The eventual HoloLens plugin, on the other hand, will likely be relatively simple, since it will almost certainly be based on a pre-existing component called the FPSController (in Unity 5.1, which is currently the latest version).

To prepare for HoloLens, you should start by building your experience with Unity 5.1 and the FPSController component. Here’s a quick rundown of how to do this.

Start by installing the totally free Unity 5.1 tools: http://unity3d.com/get-unity/download?ref=personal


Next, create a new project and select 3D for the project type.


Click the button for adding asset packages and select Characters. This will give you access to the FPSController. Click done and continue. The IDE will now open with a practically empty project.


At this point, a good Unity3D tutorial will typically show you how to create an environment. We’re going to take a shortcut, however, and just get a free one from the Asset Store. Hit Ctrl+9 to open the Asset Store from inside your IDE. You may need to sign in with your Unity account. Select the 3D Models | Environments menu option on the right and pick a pre-built environment to download. There are plenty of great free ones to choose from. For this walkthrough, I’m going to use the Japanese Otaku City by Zenrin Co, Ltd.


After downloading is complete, you will be presented with an import dialog box. By default, all assets are selected. Click on Import.


Now that the environment you selected has been imported, go to the Scenes folder in your project window and select a sample scene from the downloaded environment. This will open up the city or dungeon or forest or whatever environment you chose. It will also make all the different assets and components associated with the scene show up in your Hierarchy window. At this point, we want to add the first-person shooter controller into the scene. You do this by selecting the FPSController from the project window under Assets/Standard Assets/Characters/FirstPersonCharacter/Prefabs and dragging the FPSController into your Hierarchy pane.


This puts a visual representation of the FPS controller into your scene. Select the controller with your mouse and hit “F” to zoom in on it. You can see from the visual representation that the FPS controller is basically a collision field that can be moved with a keyboard or gamepad and that additionally has a directional camera component and a sound component attached. The direction the camera faces ultimately becomes the view that players see when you start the game.

[Screenshot: the Decrepit Dungeon scene, with the design view on top and the first-person view below]

Here is another scene that uses the Decrepit Dungeon environment package by Prodigious Creations and the FPS controller. The top pane shows a design view while the bottom pane shows the gamer’s first-person view.


You can even start walking through the scene inside the IDE by simply selecting the blue play button at the top center of the IDE.

The way I imagine the HoloLens integration will work is that another version of the FPS controller will be provided that replaces mouse input with gyroscope/magnetometer input as the player rotates her head. Additionally, the single camera view will be replaced with a two-camera rig that sends two different, side-by-side feeds back to the HoloLens device. Finally, you should be able to see how all of this works directly in the IDE.
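Here is a speculative sketch of what such a component might look like. Every name in it is invented for illustration – none of this comes from a Microsoft SDK:

```csharp
using UnityEngine;

// Speculative head-tracked stereo rig: gyro-driven rotation plus two
// cameras rendering side-by-side halves of the display.
public class SpeculativeStereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f; // average interpupillary distance (m)

    void Start()
    {
        Input.gyro.enabled = true;
        CreateEyeCamera("LeftEye", new Rect(0f, 0f, 0.5f, 1f), -eyeSeparation / 2f);
        CreateEyeCamera("RightEye", new Rect(0.5f, 0f, 0.5f, 1f), eyeSeparation / 2f);
    }

    void CreateEyeCamera(string name, Rect viewport, float xOffset)
    {
        var go = new GameObject(name);
        go.transform.SetParent(transform, false);
        go.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        go.AddComponent<Camera>().rect = viewport; // render to one half of the screen
    }

    void Update()
    {
        // Replace mouse-look with the device's fused attitude. The remap below
        // is one common gyro-to-Unity conversion; the exact mapping depends on
        // device orientation.
        Quaternion q = Input.gyro.attitude;
        transform.localRotation = new Quaternion(q.x, q.y, -q.z, -q.w);
    }
}
```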


There is very good evidence that the HoloLens plugin will work something like I have outlined and will be approximately this easy. The training sessions at the Holographic Academy during /Build pretty much demonstrated this sort of toolchain. Moreover, this is how Unity3D currently integrates with virtual reality devices like Gear VR and Oculus Rift. In fact, the screen cap of the Unity IDE above is from an Oculus game I’ve been working on.

So what are you waiting for? You pretty much have everything you already need to start building complex HoloLens experiences. The integration itself, when it is ready, should be fairly trivial and much of the difficult programming will be taken care of for you.

I’m looking forward to seeing all the amazing experiences people are building for the HoloLens launch day. Together, we’ll change the future of personal computing!

Marshall McLuhan and Understanding Digital Reality


While slumming on the internet looking for new content about digital media I came across this promising article entitled Virtual Reality, Augmented Reality and Application Development. I was feeling hopeful about it until I came across this peculiar statement:

“Of the two technologies, augmented reality has so far been seen as the more viable choice…”

What a strange thing to write. Would we ever ask whether the keyboard or the mouse is the more viable choice? The knife or the fork? Paper or plastic? It should be clear by now that this is a false choice and not a case of having your cake or eating it, too. We all know that the cake is a lie.

But this corporate blog post was admittedly not unique in creating a false choice between virtual reality and augmented reality. I’ve come across this before and it occurred to me that this might be an instance of a category mistake. A category mistake is itself a category of philosophical error identified by the philosopher Gilbert Ryle to tackle the sticky problem of Cartesian dualism. He pointed out that even though it is generally accepted in the modern world that mind is not truly a separate substance from body but is in fact a formation that emerges in some fashion out of the structure of our brains, we nevertheless continue to divide the things of the world, almost as if by accident, into two categories: mental stuff and material stuff.


There are certainly cases of competing technologies where one eventually dies off. The most commonly cited example is Betamax versus VHS. Of course, they both ultimately died off and it is meaningless today to claim that either one really succeeded. There are many, many more examples of technological duels in which neither party ultimately falls or concedes defeat. PC versus Mac. IE vs Chrome. NHibernate vs EF. Etc.

The rare case is when one technology completely dominates a product category. The few cases where this has happened, however, have so captured our imaginations that we forget it is the exception and not the rule. This is the case with category busters like the iPhone and the iPad – brands that are so powerful it has taken years for competitors to even come up with viable alternatives.

What this highlights is that, typically, technology is not a zero sum game. The norm in technology is that competition is good and leads to improvements across the board. Competition can grow an entire product category. The underlying lie, however, is perhaps that each competitor tells itself that they are in a fight to the death and that they are the next iPhone. This is rarely the case. The lie beneath that lie is that each competitor is hoping to be bought out by another larger company for billions of dollars and has to look competitive up until that happens. A case of having your cake and eating it, too.


There is, however, a category in which one set of products regularly displace another set of products. This happens in the fashion world.

Each season, each year, we change out our cuts, our colors and accessories. We put away last year’s fashions and wouldn’t be caught dead in them. We don’t understand how these fashion changes occur or what rules they obey but the fashion houses all seem to conform to these unwritten rules of the season and bring us similar new things at the proper time.

This is the category mistake that people make when they ask things such as which is more viable: augmented reality or virtual reality? Such questions belong to the category of fashion (which is in season: earth tones or pastels?) and not to technology. In the few unusual cases where this does happen, then the category mistake is clearly in the opposite direction. The iPhone and iPad are not technologies: they are fashion statements.

Virtual reality and augmented reality are not fashion statements. They aren’t even technologies in the way we commonly talk about technology today – they are not software platforms (though they require SDKs), they are not hardware (though they are useless without hardware), they are not development tools (you need 3D modeling tools and game engines for this). In fact, they have more in common with books, radio, movies and television than they do to software. They are new media.


A medium, etymologically speaking, is the thing in the middle. It is a conduit from a source to a receiver – from one world to another. A medium lets us see or hear things we would otherwise not have access to. Books allow us to hear the words of people long dead. Radio transmits words over vast distances. Movies and television let us see things that other people want us to see and we pay for the right to see those things. Augmented reality and virtual reality, similarly, are conduits for new content. They allow us to see and hear things in ways we haven’t experienced content before.

The moment we cross over from talking about technology and realize we are talking about media, we automatically invoke the spirit of Marshall McLuhan, the author of Understanding Media: The Extensions of Man. McLuhan thought deeply about the function of media in culture and many of his ideas and aphorisms, such as “the medium is the message,” have become mainstays of contemporary discourse. Other concepts that were central to McLuhan’s thought still elude us and continue to be debated. Among these are his two media categories: hot and cold.


McLuhan claimed that any medium is either hot or cool. Cool mostly means what we think it means metaphorically; for instance, James Dean is cool in exactly the way McLuhan meant. Hot media, in turn, are in most ways what you would think: kinetic, with a tendency to overwhelm the senses. To illustrate what he meant by hot and cool, McLuhan often provides contrasting examples. Movies are a hot medium. Television is a cool medium. Jazz is a hot medium. The twist is a cool medium. Cool media leave gaps that the observer must fill in; they are highly participatory. Hot media are a wall of sensation that does not require any filling in: McLuhan characterizes them as “high definition.”

I think it is pretty clear, between virtual reality and augmented reality, which falls into the category of a cool medium and which a hot one.

To help you come to your own conclusions about how to categorize augmented reality glasses and virtual reality goggles, though, I’ll provide a few clues from Understanding Media:

“In terms of the theme of media hot and cold, backward countries are cool, and we are hot. The ‘city slicker’ is hot, and the rustic is cool. But in terms of the reversal of procedures and values in the electric age, the past mechanical time was hot, and we of the TV age are cool. The waltz was a hot, fast mechanical dance suited to the industrial time in its moods of pomp and circumstance.”

 

“Any hot medium allows of less participation than a cool one, as a lecture makes for less participation than a seminar, and a book for less than dialogue. With print many earlier forms were excluded from life and art, and many were given strange new intensity. But our own time is crowded with examples of the principle that the hot form excludes, and the cool one includes.”

 

“The principle that distinguishes hot and cold media is perfectly embodied in the folk wisdom: ‘Men seldom make passes at girls who wear glasses.’ Glasses intensify the outward-going vision, and fill in the feminine image exceedingly, Marion the Librarian notwithstanding. Dark glasses, on the other hand, create the inscrutable and inaccessible image that invites a great deal of participation and completion.”

 
