HoloLens 2 Announcement Quick Recap

The basic sensors look the same between HL1 and HL2, which is nice: still 4 monochrome cameras for SLAM and 1 TOF camera for spatial mapping. For me personally, knowing the size of the battery and the refresh rate of the TOF now that it has more consistent power would be really huge. I'm also curious about burn-in from the battery-bun; doesn't it get hot? And the Snapdragon 850 is basically an overclocked 845, so that's going to run a bit hotter too. Also, why MEMS? It isn't cheaper, lighter or much smaller than LCoS, so I think it must be the reason for the FOV improvement (right?), but Karl Guttag complains this will make for blurrier digital images.

The piano-playing stage demo was cool, but there was noticeable awkwardness in the way the hand gestures were performed (exaggerated movements) — which is actually really familiar from working with Magic Leap's hand tracking. Either that needs to get better by the summer or there's going to be some serious disappointment down the road.

Has anyone seen any performance comparisons between the NVIDIA Tegra X2 (Magic Leap's SoC) and the Qualcomm Snapdragon 850 (HoloLens 2's SoC)? I know the 845 (basically the same as the 850) was regarded as better than the X1, but most assumed the X2 would leapfrog it. I haven't found anything conclusive, though.

The "doubling" of FOV between HL1 and HL2 may have been miscommunicated and meant something different from what most people took it to mean. It turns out the FOV of the HL2 is actually very close to the FOV of the Magic Leap One; both are noticeably bigger than the original HoloLens but still significantly smaller than what everyone says they want.

My friend and VR visionary Jasper Brekelmans calculated the HL2 FOV in degrees to be 43.57 x 29.04, with a diagonal of 52.36. The Magic Leap One is 40 x 30, with a diagonal of 50 (thank you, Magic Leap, for supporting whole numbers).

From now on, whenever someone talks about field of view in AR/VR/MR/XR, we'll all have to ask whether it is being measured in degrees or steradians. Oh well.
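For the curious, the two measures are easy to juggle. Here's a quick back-of-the-envelope sketch in Python (my own, not Jasper's actual math): the quoted diagonals fall out of treating the horizontal and vertical angles as the two legs of a right triangle, and the rectangular-pyramid formula converts the same numbers into steradians.

```python
# Back-of-the-envelope FOV arithmetic. The diagonal uses the flat Pythagorean
# approximation that the quoted 52.36 and 50 degree figures appear to assume.
import math

def diagonal_fov_deg(h_deg, v_deg):
    """Diagonal FOV, treating the angular extents as planar legs of a triangle."""
    return math.hypot(h_deg, v_deg)

def solid_angle_sr(h_deg, v_deg):
    """Solid angle of a rectangular pyramid with full apex angles h and v."""
    a, b = math.radians(h_deg) / 2, math.radians(v_deg) / 2
    return 4 * math.asin(math.sin(a) * math.sin(b))

for name, (h, v) in {"HoloLens 2": (43.57, 29.04), "Magic Leap One": (40.0, 30.0)}.items():
    print(f"{name}: {diagonal_fov_deg(h, v):.2f} deg diagonal, "
          f"{solid_angle_sr(h, v):.3f} steradians")
```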

Internet bad boy Robert Scoble turns out to have one of the most interesting takes on the HoloLens 2 announcement. I hope his rehabilitation continues to go well. On the other hand, it was a really bad week for tech journalists in general.

Unity also made an announcement about HoloLens 2, and even though they have been working on their own in-house core tools for XR development, way down deep in the fine print they say you will need to use the Mixed Reality Toolkit v2 to develop for HoloLens – which is very eeeeenteeresting.

The coolest thing for me, outside of the device itself, was the Azure Spatial Anchors announcement. No one is really paying attention to Azure Spatial Anchors yet, but this is a game changer. It means implementing Vernor Vinge's belief circles. It means anyone can build their own Pokémon Go app. And it works on ARKit, ARCore and HoloLens, so the future of XR is cross-platform.
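To make that concrete, here is a rough sketch of the workflow a cross-platform anchor service enables. None of the names below come from the actual Azure Spatial Anchors SDK (which ships natively for Unity, iOS and Android); they are placeholders for the flow: create an anchor on one device, get back a service-issued ID, hand that ID to a device on a different platform, and resolve it back into a pose.

```python
# Purely illustrative pseudo-client -- not the Azure Spatial Anchors API.
# The point is the shape of the flow: anchors become platform-neutral IDs.
from dataclasses import dataclass
import uuid

@dataclass
class Pose:
    position: tuple   # (x, y, z) in the resolving device's coordinate frame
    rotation: tuple   # quaternion (x, y, z, w)

class HypotheticalAnchorService:
    """Stand-in for the cloud service that stores anchors by ID."""
    def __init__(self):
        self._anchors = {}

    def create_anchor(self, pose: Pose, sensor_blob: bytes) -> str:
        anchor_id = str(uuid.uuid4())      # service-issued, platform-neutral ID
        self._anchors[anchor_id] = (pose, sensor_blob)
        return anchor_id

    def resolve_anchor(self, anchor_id: str) -> Pose:
        # The real service relocalizes against the querying device's own
        # sensor data; this toy version just hands back the stored pose.
        pose, _ = self._anchors[anchor_id]
        return pose

# Device A (say, an ARKit phone) places an anchor and shares the ID out-of-band.
service = HypotheticalAnchorService()
shared_id = service.create_anchor(Pose((0, 0, -1.5), (0, 0, 0, 1)), b"feature-map")

# Device B (say, a HoloLens) resolves the same ID into its own coordinate frame.
print(service.resolve_anchor(shared_id))
```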

Mike Taulty, about whom I can't say enough great things, has as usual dived in first and written up a tour of the new service.

Crap, my boss just came back from lunch. Gotta work now.

The Fork in Mixed Reality


Yogi Berra gnomically said, "When you come to a fork in the road, take it." On the evening of Friday, February 1st, 2019, at approximately 9 PM EST, that's exactly what happened to Mixed Reality.

The Mixed Reality Toolkit open source project, which grew out of the earlier HoloLens Toolkit on GitHub, was forked into the Microsoft MRTK and a cross-platform XRTK (read the announcement). While the MRTK will continue to target primarily Microsoft headsets like the HoloLens and WMR devices, the XRTK will offer a common framework for HoloLens, Magic Leap, VR headsets and mobile AR – as well as HoloLens 2 and any other MR devices that eventually come on the market.

So why did this happen? The short of it is that open source projects can sometimes serve multiple divergent interests and sometimes they cannot. Microsoft was visionary in engineering and releasing the original HoloLens MR Headset. They made an equally profound and positive step back in 2016 by choosing to open source the developer SDK/Framework/Toolkit (your choice) that allows developers to build Unity apps for the HoloLens. This was the original HoloLens Toolkit (HLTK).

While the HLTK started as a primarily Microsoft engineering effort, members of the community quickly jumped in and began contributing more and more code to the point that the Microsoft contributions became a minority of overall contributions. This, it should be noted, goes against the common trend of a single company paying their own engineers to keep an open source project going. The HLTK was an open source success story.

In this regard, it is worth calling out two developers in particular, Stephen Hodgson and Simon Jackson, for the massive amounts of code and thought leadership they have contributed to the MR community. Unsung heroes barely captures what they have done.

In 2017 Microsoft started helping several hardware vendors build occluded WinMR (virtually the same as VR) devices, and it made sense to create something that supported more than just the HoloLens. This is how the MRTK came to be. It served the same purpose as the HLTK, accelerating development with Unity scripts and components, but with a larger perspective about who should be served.

In turn, this gave birth to something that is generally known as MRTK vNext, an ambitious project to support not just Microsoft devices but also platforms from other vendors. And what’s even more amazing, this was again driven by the community rather than by Microsoft itself. Microsoft was truly embracing the open source mindset and not just paying lip service to it as many naysayers were claiming.

But when Magic Leap, the other major MR headset vendor, finally released their product in fall 2018, things began to change. Unlike Microsoft, Magic Leap developed their SDK in-house and threw massive resources at it. Meanwhile, Microsoft finally started throwing its own engineers at the MRTK again after a long hiatus. This may have been in response to the Magic Leap release, or it could equally have been because the team was setting the stage for a HoloLens 2 announcement in early 2019.

And this was the genesis of the MR fork in the road: For Microsoft, it did not make sense to devote engineering dollars toward creating a platform that supported their competitors' devices. In turn, it probably didn't make sense for engineers from Google, Magic Leap, Apple, Amazon, Facebook, etc. to devote their time toward a project that was widely seen as a vehicle for Microsoft HMDs.

And so a philosophical split needed to occur. It was necessary to fork MRTK vNext. The new XRTK (which is also pronounced "Mixed Reality Toolkit") is a cross-platform framework for HoloLens as well as Magic Leap (Lumin SDK support is in fact already working in the XRTK and is getting even more love this weekend as I write).

But the XRTK will also be a platform that supports developing for the Oculus Rift, Oculus Go, HTC Vive, Apple ARKit, Google ARCore, the new HoloLens 2 (which may or may not be announced at MWC 2019), and whatever comes next in the Mixed Reality Continuum.

So does this mean it is time to stick a fork in the Microsoft MRTK? Absolutely not. Microsoft’s MRTK will continue to do what people have long expected of it, supporting both HoloLens and Occluded WinMR devices (that is such a wicked mouthful — I hope someone will eventually give it a decent name like “Windows Surface Kinect for Azure Core DotNet Silverlight Services” or something similarly delightful).

In the meantime, while Microsoft is paying its engineers to work on the MRTK, XRTK needs fresh developers to help contribute. If you work for a player in the MR/VR/AR/XR space, please consider contributing to the project.

Or to word it in even stronger terms, if you give half a fork about the future of mixed reality, go check out XRTK and start making a difference today.

Why you should watch “2047: Virtual Revolution”

In the wake of Apple's successes over the past decade, agencies and technical schools like the Savannah College of Art and Design have been pumping out web designers, a career ecosystem that supports them, and a particularly disciplined, minimalist, flat aesthetic that can be traced back to Steve Jobs. One of the peculiarities of the rise of 3D games and VR/AR/MR/XR platforms is that these children of the Jobs revolution have little interest in working with depth. The standards of excellence in the web-based design world — to say nothing of the print-based design world it grew out of — are too different. To work in 3D feels too much like slumming.


But design is still king in 3D as well as on the web. When the graduates of SCAD and similar schools did not embrace 3D design, film FX designers like Ash Thorp and Greg Borenstein jumped into the space they left vacant. For a period after the release of Steven Spielberg's Minority Report in 2002, there was a competition among FX artists to outdo the UIs created for that film. From about 2010 on, however, that competitive trend mellowed out, and the goal of fantasy UX in sci-fi has shifted to working out the ergonomics of near-future tech in a way that makes it feel natural instead of theatrical. By carefully watching the development of CGI artifacts in the 21st century – as an archeologist might sift through pottery shards from the ancient past – we can see a consensus forming around what the future is supposed to look like. Think of it as pragmatic futurism.


The 2016 French film 2047: Virtual Revolution, written and directed by Guy-Roger Duvert and starring Mike Dopud, is a great moment in the development of cinematic language in the way it takes certain sci-fi visual clichés for granted. Even more interesting is that what appears to be a relatively low-budget film is able to pull off the CGI it does, indicating a general drop in price for these effects. What used to be hard and expensive is now within reach and part of the movie-making vernacular.


The story is about a future society that spends all its time playing online RPGs while the corporations that run these games have taken over the remnants of the world left behind. What I found most interesting, though, was the radial interface the players use inside their RPGs.


It bears a passing similarity to the Magic Leap OS navigation menu.


It also gives a nod to popular gaming genres like Gundam battle suits …


Fantasy UX (FUX) from films like Blade Runner and The Matrix …


And World of Warcraft.

Play the movie in the background while you are working if you don't have time to get into the plot about a dystopian future in which people are willingly enslaved to VR … blah, blah, blah. But look up from time to time to see how the FX designers who will one day shape our futures are playing with the grammar of the VR/AR/MR/XR visual language.

The effects for 2047 appear to have been done by an FX and VR company in France called Backlight. The movie is currently free to Amazon Prime members.

For some more innovative FUX work, please take a look at 2013's Ender's Game or the Minority Report TV series from 2015.

Things of Note 01-02-2019

Mike Taulty has a great series on Project Prague: https://mtaulty.com/category/projectprague/

Christopher Diggins has listed out some undocumented APIs in the 3ds Max .NET SDK that nobody knows about: https://cdiggins.github.io/blog/undocumented-3dsmax-dotnet-assemblies.html. Très cool.

Magic Leap received approximately 6,000 submissions for its Independent Creator program. I think I'm associated with at least 20 of those. Considering that cash is Magic Leap's biggest asset, this is a great way to use its muscle. Building up an app ecosystem for mixed reality is a great thing and will help other vendors like Microsoft and Apple down the road. And who knows, maybe one of those 6,000 proposals is the killer MR app.

There are rumors of a HoloLens v2 announcement in January, but don’t hold your breath. There are also rumors of a K4A announcement in January, which seems more likely. In the past, we’ve seen one tech announcement (Windows MR [really Windows VR]) substitute for silence regarding HoloLens, so this may be another instance of that.

Many people are asking what’s happening with the Microsoft MRTK vNext branch. It’s still out there but … who knows?

The Lumin SDK 0.19 started rolling out a couple of weeks ago and the Lumin OS is now on version 0.94.0. The HoloLens always got dinged for iterating too slowly, while Magic Leap gets dinged for changing too quickly. What's a bleeding-edge technology company to do?

I’ve gotten through 4 of the endings of the Black Mirror movie Bandersnatch on Netflix. It makes you think, but not very hard, which is about the right pace for most of us. And Spotify has a playlist for the movie. You should listen to the Bandersnatch playlist on shuffle play, obviously.

Magic Leap Store publishes first paid app


The Magic Leap store (aka Magic Leap "World") has published its first paid app, priced at $9.99. It was created by Insomniac but published by Magic Leap itself, so in some sense it is a trial run for the store. "Seedling" made its first appearance at the L.E.A.P. conference in Los Angeles in October.

$9.99 is also an interesting price, perhaps signaling a target for apps on Magic Leap devices. Back in the day, $0.99 was the target price for apps in Apple's App Store. When Microsoft came out with Windows Phone, they marketed the idea that apps should sell for more than that on their platform (more toward $1.49 or $1.99). On Steam, the magic price point for games seems to be $20 to $60.


For the HoloLens, which uses the online Windows Store as its distribution channel, the most frequent price point seems to be free. This makes sense since, even with a purported 50K HoloLens devices currently in the world, the total market is still too small to support a reasonably priced game. Trimble initially went the other way with their SketchUp Viewer, which lists for about $1.5K, apparently trying to recoup their investment with a high price tag. Their subsequent HoloLens offering, part of a collaboration service, is free.

In order to buy Seedling, I had to go into my online Magic Leap creator account and add a payment method. This is an interesting aspect of all current VR and AR devices: entering data is rarely – and entering financial data is never – done through the actual device. We still live in a world where one must switch to either a phone or a computer in order to establish the credentials that will be used through the device.

This is ultimately a pre-NUI UX problem involving the difficulty of doing data entry without a keyboard and mouse (though we are finally getting comfortable with doing this on our smartphones, thanks to the rising comfort level with using web apps on tiny screens). This will be an ongoing problem for developing apps for the enterprise-y market, where the exchange of data is pretty key.

Who knows, maybe solving this UX dilemma for the enterprise will end up being the killer app we’ve all been waiting for. I wonder how much someone would charge for it?

Thank you Techorama Netherlands


At the beginning of October I was invited to deliver two sessions at Techorama Netherlands: one on Cognitive Services Custom Vision and one about the HoloLens and the Magic Leap One. This is one of the best-organized conferences I've been to, and the hosts and attendees were amazing. I can't say enough good things about it.

The lineup was also great, with Scott Guthrie, Laurent Bugnion, Giorgio Sardo, Shawn Wildermuth, Pete Brown, Jeff Prosise and others. It is what is known as a first-tier tech conference. What was especially impressive is that this was the first time Techorama Netherlands had been convened.

I want to also thank my friend Dennis Vroegop for hosting me and showing me around on my first trip to the Netherlands. He and Jasper Brekelmans took a weekday off to give me the full Amsterdam experience. It was also great to have beers with Roland Smeenk, Alexander Meijers and Joost van Schaik. I’m not sure why there is so much mixed reality talent in the Netherlands but there you go.

Magic Leap One vs HoloLens v1 Comparison


I’m currently sitting in my room at the L.A. Grand Hotel waiting for the L.E.A.P. conference to start. I’ve been holding off on this comparison post because I had promised Dennis Vroegop I would give it first as a talk at the Techorama Netherlands conference – which I did last week. I will do a feature comparison based on publicly available information, then highlight features unique to the Magic Leap, and then distinguish subtle but important differences that only become apparent from spending months with these devices at the developer level. Finally I want to point out design improvements in the Magic Leap that are so good for Mixed Reality that I predict they will be incorporated into the next version of HoloLens.

Keep in mind that this is a comparison of two different generations of devices. The Magic Leap One is coming out two years after the HoloLens and would be expected to be better. At the same time, the HoloLens v2 is being released some time in 2019 and can be expected to be better still.

1. Field of View

In raw numbers, the field of view of the Magic Leap One is approximately 25% better than the HoloLens. The HoloLens field of view is estimated to be about 29-30 degrees wide and 17 degrees high. The Magic Leap One is 40 degrees wide by 30 degrees high. There is a corresponding difference in resolution, with the HoloLens offering 1268 by 720 per eye and the Magic Leap One providing 1280 by 960 per eye.

The Magic Leap One uses the same waveguide display technology that the HoloLens does, however, so how did they pump up the FOV? First, the ML1 has a more powerful battery than the HoloLens, and Microsoft has often claimed that FOV is largely dependent on the power of the projection. This is probably offset, though, by the fact that the ML1 uses more power to project onto two focal planes instead of only one like the HoloLens (with 6 waveguide layers compared to 4 in the HoloLens).

Another trick is that the waveguides in the Magic Leap are closer to the wearer’s eyes than they are in the HoloLens. As a consequence, you can wear glasses underneath the HoloLens while you cannot do so comfortably under the Magic Leap device.

In addition to this, Jasper Brekelmans and Dennis Vroegop suggested over coffees along the Amstel River (in a conversation about David Copperfield) that because one’s peripheral vision is closed off in the ML1, the perceived FOV may be even larger than the actual. The theory behind this is that, due to the widespread use of glasses, we have become used to not paying attention to our peripheral vision so much and consequently are comfortable with this tunneling of our vision.

Blocking off the peripheral field of view might cause issues in certain industrial settings, but the general effect is that what you can see as a proportion of your overall view is much larger in the ML1 than it is in the HoloLens. To put it another way, the empty areas of your view, as a proportion of your available FOV, are much smaller than they are in the HoloLens.

On top of this, the FOV of the ML1 has a much taller aspect ratio than the HoloLens's, which may end up doing a better job of accommodating vertical saccadic movements of the eyes.


Because of the narrower gap between the device and the wearer's eyes, the Magic Leap can't accommodate glasses as the HoloLens can. To compensate, Magic Leap is developing relationships with online eyeglass manufacturers to provide prescription inserts that can be placed in front of the waveguides and magnetically lock into place. There's some controversy over whether this is a good or a bad thing. Some developers have expressed concern that it will make demoing the Magic Leap at events harder than demoing the HoloLens, since those with poor vision either won't be able to participate or we will be forced to carry a large suitcase of prescription inserts to every event.

On the other hand, when I think of what MR devices will be like in the future, I tend to imagine them resembling real glasses (and not electronic contacts, which simply scare me). When they reach the size and ubiquity of modern glasses, it will make sense for each person to have their own personalized device with their appropriate prescription. Magic Leap is on the right track in this case. It's just in the intervening period that we have to figure out how to share our limited, expensive devices with others.

                                  | HoloLens v1         | Magic Leap One
Price                             | $3,000 – $5,000     | $2,300
OS                                | Windows             | Android variant
Field of View                     | ~30 deg x 17 deg    | 40 deg x 30 deg
Resolution                        | 1268 x 720 per eye  | 1280 x 960 per eye
Depth Sensor                      | Time of Flight      | Time of Flight
Display Type                      | Waveguide           | Waveguide
Hand Gestures Recognized          | 2                   | 9
Underlying comic book technology  | Light Engines       | Light Fields
Controller                        | Clicker             | 6 DOF controller
Hand Tracking                     | Limited             | Fingers (3 joints each)
Processing Unit                   | Above the nose      | Light Pack
Audio                             | Spatial Sound       | Spatial Sound

2. Hardware Specs (It’s all about the battery)

HoloLens v1: Intel Atom x5-Z8100 (Intel Airmont, 14 nm), 1.04 GHz, 4 logical processors, 64-bit capable; Intel GPU (vendor ID 8086h); 2 GB RAM; 64 GB storage.

Magic Leap One: NVIDIA Tegra X2 SOC with 2 Denver 2.0 64-bit cores + 4 ARM Cortex-A57 64-bit cores (2 A57s and 1 Denver accessible to applications); NVIDIA Pascal GPU with 256 CUDA cores (graphics APIs: OpenGL 4.5, Vulkan, OpenGL ES 3.3+); 8 GB RAM; 128 GB storage.

The Magic Leap One is overall a much beefier machine than the current HoloLens. While both the HoloLens and the Magic Leap One advertise a 3-hour battery life, that can mean vastly different things. In order to drive all of its extra hardware, the Magic Leap One needs a much bigger battery: the ML1 is powered by a twin-cell, 36.77 Wh battery running at 3.83 V, while the HoloLens has a 16.5 Wh battery.

For overall performance, the larger battery means the world meshes (i.e. surface reconstruction, world mapping) are much denser and more frequently updated on the Magic Leap than on the HoloLens. The Time-of-Flight depth camera can fire off more frequently and for longer periods.

The larger battery and beefier specs also translate to much better 3D performance. The HoloLens is able to run 30,000 polygons at 60 fps. Beyond that, the fps begins to drop. The Magic Leap runs upwards of 1 million polygons at 60 fps.

On the downside, that more powerful battery rig needs a fan to cool it whereas the HoloLens is passively cooled. In laboratory and medical scenarios where a sterile environment must be maintained, active cooling with a fan could be a problem.

3. The HoloLens and Tracking

The HoloLens uses 4 monochrome cameras ("environment-aware sensors"), an accelerometer, magnetometer and gyroscope in a sensor-fusion configuration, and a custom HPU to perform head tracking. The Magic Leap One has a similar setup, minus the HPU.
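As an aside, "sensor fusion" sounds more exotic than it is. The HPU's actual pipeline is far more sophisticated, but the basic idea can be shown with a toy complementary filter: the gyro is smooth but drifts, the accelerometer is noisy but anchored to gravity, so you blend them.

```python
# Toy complementary filter -- only an illustration of the sensor-fusion idea,
# nothing like the HoloLens HPU's real tracking pipeline.
import math

def fuse_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Blend an integrated gyro rate (rad/s) with a gravity-based estimate."""
    ax, ay, az = accel
    pitch_gyro = pitch_prev + gyro_rate * dt            # smooth, but drifts
    pitch_accel = math.atan2(-ax, math.hypot(ay, az))   # noisy, but drift-free
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# One 10 ms step: small rotation reported by the gyro, noisy accel sample.
print(fuse_pitch(0.0, gyro_rate=0.01, accel=(0.02, 0.0, 9.81), dt=0.01))
```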

The HoloLens tracking is still somewhat better than the ML1’s. It loses tracking less frequently and digital content is less jittery when seen up close or while the wearer is in motion.

Overall, though, tracking performance is fairly close between the two devices.

4. Magic Leap Extras

The ML1 has a couple of features that are simply outside of the box. One is eye tracking: inward-facing cameras track the wearer's eye movements using invisible IR flashes.

The tracking is not continuous and is captured at a much lower resolution than the displays. While it shouldn't be used for direct user interactions, it is great for providing context for other interactions. It would be great if someone would write a keyboard that uses eye tracking to select keys. In the meantime, I wrote this heat-vision demo that uses eye tracking to burn the walls of my house — I think of it as "Superman with a Migraine". Note the eye-blink tracking.
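For anyone wondering what a demo like that involves under the hood, the core of it is just a raycast from the gaze origin along the gaze direction into the world. A minimal sketch, assuming an ideal flat wall instead of the real spatial-mapping mesh:

```python
# Gaze ray vs. wall plane -- a stand-in for raycasting the eye-tracking gaze
# against the spatial-mapping mesh, assuming the wall is a perfect plane.
import numpy as np

def gaze_hit_on_plane(origin, direction, plane_point, plane_normal):
    """Return the point where the gaze ray meets the plane, or None."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = d.dot(n)
    if abs(denom) < 1e-6:            # gaze runs parallel to the wall
        return None
    t = (np.asarray(plane_point, dtype=float) - o).dot(n) / denom
    return None if t < 0 else o + t * d

# Eyes at the origin, looking straight down the -Z axis at a wall 2 m away.
print(gaze_hit_on_plane((0, 0, 0), (0, 0, -1), (0, 0, -2), (0, 0, 1)))
```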

The other cool extra in the Magic Leap is two planes of focus. Most VR devices have a single plane of focus at infinity. The HoloLens has a single plane of focus set at two meters.

In the Magic Leap One, when you look at near objects, objects further away (on the outer plane) seem to go out of focus, and when you look at distant objects, the nearer ones do. I would guess that the close plane is around a meter and the outer one about 3 meters, but I'm not really sure. In Lumin OS 0.91, there is also a sporadic green shift in the near plane (which I expect will be fixed soon).

5. The Tether


The Magic Leap One is made up of two parts, the Light Pack and the Light Wear, connected by a cable. The Light Wear, worn on the head, contains all the sensors, projectors and displays, while the Light Pack, worn at the hip, contains all the computer bits and the battery.

This is an engineering choice that allows for a much larger power source. Without the tether solution, a large battery would not be possible. Without the large battery, the ML1’s enhanced depth sensing, improved graphics processing and larger field of view would not be possible.

In addition, this design makes the Magic Leap a much more comfortable fit on the head. The weight distribution is better than on the HoloLens, it is lighter, and it doesn’t require extra straps.

The tether solution is actually so effective that I would be surprised if the HoloLens v2 does not follow a similar design. The original one-piece “tetherless” solution Microsoft came up with for the HoloLens was visionary, but severely limiting.

6. Developing

If you have ever developed in Unity for Android (or really any other device) then you know how to develop for the Magic Leap in Unity. You press a button and your app compiles to an .mpk package (Android uses the ".apk" file extension). If your device is attached, you can deploy directly by clicking "Build and Run".

Magic Leap apps can also be built with the Unreal Engine.

HoloLens apps run on a Unity player sandboxed in a UWP app. The development cycle consequently involves exporting your HoloLens app as a Visual Studio project targeting UWP and then building and deploying that UWP project. In general (and it may just be me) this has been tedious.

It became even worse when the immersive (or occluded – basically Microsoft VR) WinMR devices came out last year and the basic toolset used for HoloLens development, known first as the HoloLens Toolkit and then as the Mixed Reality Toolkit, was expanded to support both kinds of device. Because of some issues with Unity, building for WinMR required certain versions of Unity and above, while developing for HoloLens required certain versions of Unity and below. This state of affairs went on for several months, to the point that finding the correct Windows SDK paired with the right MRTK version paired with the correct Unity version became a closely kept alchemical formula passed from developer to developer.

This experience may not be the same for everyone but it left me a bit traumatized. By contrast, Magic Leap development is simply a pleasure. I can build and see the results very quickly in my device. I can wear the device for hours at a time. I typically only stop development when the ML battery runs down and I have to let it recharge. I don’t have a Magic Leap Hub, which would allow me to charge while I dev, but I intend to get one.

The Magic Leap toolkit is still not quite as capable as the open source Mixed Reality Toolkit managed by Stephen Hodgson and others.

The Magic Leap also has a simulator, rather than an emulator, for developing without a device. This actually makes sense, since the HoloLens emulator runs the HoloLens OS in a virtual machine, which might be tricky given the much larger specs of the Magic Leap.

7. Interactions


The Magic Leap supports robust hand and gesture tracking as well as a 6DOF controller. The DOF in 6DOF stands for degrees of freedom: we know not only the direction the controller is pointing in (3DOF, rotation) but also its position in space.
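In code terms the difference is just which fields the runtime can actually fill in. A rough sketch (the type names are mine, not the Lumin SDK's):

```python
# Illustrative types only -- not Lumin SDK classes. A 3DOF controller reports
# only its orientation; a 6DOF controller adds a tracked position in space.
from dataclasses import dataclass
from typing import Tuple

Quaternion = Tuple[float, float, float, float]   # (x, y, z, w)
Vector3 = Tuple[float, float, float]

@dataclass
class Pose3DoF:
    rotation: Quaternion          # which way the controller is pointing

@dataclass
class Pose6DoF(Pose3DoF):
    position: Vector3             # and where it actually is in space

laser_pointer_style = Pose3DoF(rotation=(0, 0, 0, 1))
ml1_control = Pose6DoF(rotation=(0, 0, 0, 1), position=(0.2, -0.3, 0.5))
```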


I love the controller. I love it so much it made me finally admit to myself that I hate the HoloLens tap gesture. No one ever gets it right. It’s awkward. It’s uncomfortable and makes me feel like I’m performing a kung fu move.


By contrast, a controller just makes sense. The UX for MR, I believe, should always support three layers of interaction: hand gestures for ease of use, the controller for precision movements, and finally the delta pad on the controller for accessibility.
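A made-up sketch of how I think about that fallback chain (none of this is a real toolkit API):

```python
# Hypothetical input-mode selection -- just the three-layer idea from the
# paragraph above, not code from any actual MR toolkit.
from enum import Enum, auto

class InputMode(Enum):
    HAND_GESTURES = auto()   # default: easiest to pick up
    CONTROLLER = auto()      # precise 6DOF pointing and selection
    DELTA_PAD = auto()       # accessibility fallback on the controller

def pick_input_mode(needs_precision: bool, limited_mobility: bool) -> InputMode:
    if limited_mobility:
        return InputMode.DELTA_PAD
    if needs_precision:
        return InputMode.CONTROLLER
    return InputMode.HAND_GESTURES

print(pick_input_mode(needs_precision=True, limited_mobility=False))
```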

For all of my antipathy toward the HoloLens tap, however, I have to say I miss the HoloLens bloom gesture (escape), which I keep trying to use on the Magic Leap to no avail. Instead, on the Magic Leap, holding the controller's Home button for three seconds is the escape gesture, which I don't really like. It also bothers me that hand gestures aren't supported in the core desktop (the icon grid) – but this is still the Creator's Edition (translation: dev edition) after all.

[Late edit thanks to SH: it should also be pointed out that the Lumin OS (the desktop layer) currently doesn’t support hand gestures, which I find baffling. For now, you can’t get past the login and other initial screens without a paired phone or a controller.]

Summing Up

So is the Magic Leap One better than the HoloLens v1? Oh yes. By leaps and bounds.

1. The development workflow is much more straightforward and pleasant.

2. The increased battery size and beefier hardware make it possible to do things, performance-wise, that the HoloLens tended to stop us from doing. Phone- and tablet-level experiences are doable now.

3. The Magic Leap One has a much better interaction model than the HoloLens does. How did anyone ever do MR without a controller? (Actually, everyone used an Xbox controller in the end in order to get any sort of real work done, but we don't talk about that much.)

Is it time to jump back into Mixed Reality development?

If you spent $3.2K to $5K for a HoloLens, then you owe it to yourself to spend $2,300 for a Magic Leap. It’s the device you originally wanted. The HoloLens was a brilliant device back in 2016 and really the first of its kind, but it had limitations. Many of the projects you were never able to realize in HoloLens (in the small dev community that developed around HoloLens, we all know what these are) are now doable with the improved Magic Leap specs. Additionally, your enterprise stories are much easier to sell with the controller. Instead of spending 5 minutes of your precious pitch time explaining how tap works, you can now just let your potential investors and clients go straight into the demo with a controller they basically already know how to use.

Is there a future in spatial computing?

Now there is. There was a brief pause between 2016 and the middle of 2018, but we currently have two great devices available with another shoe dropping soon. Microsoft will be coming out with a HoloLens v2 sometime in the first half of 2019, which I predict will implement the tethered design Magic Leap is using. This will be an improvement over the current Magic Leap, which in turn will be driven to improve its own tech.

Microsoft has an advantage because it started this journey back in the Kinect days and has the resources of Microsoft Research to draw on. Magic Leap has an advantage because, well, they aren’t Microsoft and don’t face the internal political problems a large tech giant does (though no doubt they have their own). More importantly, they have their own U.S.-based production lines (as well as production lines in Mexico) and are less reliant on China, which hopefully means they are capable of much quicker turn-arounds and initial SKU production.

When do we get smaller devices that wear like glasses?

I have no idea, but try to think in terms of 3, 5, 10 years. We always overestimate what can be done in 3 years but always underestimate how much things will change in 10. Somewhere in the middle, we will intersect with our MR futures.

Your comments, corrections and criticisms are welcome in the comments below. I’ll try to keep up with them and incorporate what you say into the main article as appropriate.

Thank you Atlanta Code Camp 2018

I was invited to speak at Atlanta Code Camp on September 15th. I spoke about mixed reality and was fortunately able to run some demos. When I got out of standard business app dev a few years ago and began specializing in VR and MR, one of the unfortunate side effects was that I saw a lot less of the community (mixed reality tends to be more of a global community, for whatever reason, that convenes online rather than in person). Coming in for this event was one of my great chances to meet up with old friends.

In the speakers room, we talked about military apocalypse preparedness, the difficulty of getting jobs on the dark net, the advantages of trio programming (a better version of pair programming), and being paid in bitcoin.

Mixed Reality As An Art Form

First a homework assignment, and then a request: please watch a few of the video essays on cinema at http://kogonada.com/.

I’d like to put together something like this for AR and VR. The basic idea is to take something small in [AR | Cinema | Sci Fi television] and then develop the topic into a conversation. But I need your help with ideas.

For instance, consider the flick gesture used in sci-fi movies, but also in Hawaii Five-0 and in the Avengers movies, to send a piece of digital media from one screen to another. Where did it come from? How did it develop as a gesture? Has it changed over time? Where is it used in real-world apps? Does it make sense as good UX, or is it pure sci-fi stuff?

Alternatively, what is the relationship between John Carpenter's They Live (long overdue for a big-budget remake), The Matrix and spatial computing?

Alternatively, what does Baudrillard's "desert of the real" tell us about VR?

Alternatively, what sort of human-computer interactions do we need before head-mounted displays can begin to compete with smartphones as portable computers and begin to fulfill their financial promise?

So I need help with ideas to explore, in essay style, along with suggestions for media that can be used to explore them.

Once we have enough, and if there is enough interest, I’ll work with you on producing these essays as video essays and we’ll put together an online book about MR, with each of you as an essay author and providing narration.

It's at minimum a six-month project, but something I feel the world needs and that we are uniquely positioned to create. I'm a little tired of both writing books and recording technical videos when good design thinking around MR is what we're most lacking right now.

So, my dear readers, have you got any ideas that you have been keeping to yourselves about the design and philosophy behind MR but needed help expressing?

My VRLA Shopping List


I meant to finish this earlier in the week. I spent the past weekend in Los Angeles at the VRLA conference in order to hear Jasper Brekelmans speak about the state of the art in depth sensors and visual effects. One of the great things about VRLA is all the vendor booths you can visit that are directly related to VR and AR technology. Nary a data analytics platform pitch or dev ops consulting services shill in sight.

Walking around with Jasper, we started compiling a list of how we would spend our Bitcoin and Ethereum fortunes once they recover some of their value. What follows is my must-have shopping list if I had so much money I didn’t need anything:

1. Red Frog HoloLens mod


First off is this modified HoloLens by Red Frog Digital. The fabrication allows the HoloLens to balance much better on a user's head. It applies no pressure to the bridge of the nose, instead distributing the weight across the user's head. The nicest thing about it is that it always provides a perfect fit and can be properly aligned with the user's eyes in about 5 seconds. They designed this for their Zombie Maze location-based experience and are targeting it at large, permanent exhibits and rides.

2. Cleanbox … the future of wholesome fun


If you’ve ever spent a lot of time doing AR and VR demos at an event, you know there are three practical problems you have to work around:

  • seating devices properly on users’ heads
  • cleaning devices between use
  • recharging devices

Cleanbox Technology provides a solution for venue-based AR/VR device cleaning. Place your head-mounted display in the box, close the lid, and it instantly gets blasted with UV rays and air. I’d personally be happy just to have nice wooden boxes for all of my gear – I have a tendency to leave them lying on the floor or scattered across my computer desk – even without the UV lights.

3. VR Hamster Ball


The guy demoing this never seemed to let anyone try it, so I’m not sure if he was actually playing a hamster sim or not. I just know I want one as a 360 running-in-place controller … and as a private nap space, obviously.

4. Haptic Vest and Gauntlets


Bhaptics was demoing their TactSuit, which provides haptic feedback along the chest, back, arms and face. I'm going to need it to go with my giant hamster ball. They are currently selling dev units.

5. Birdly


A tilt table with an attached fan and a user control in the form of flapping wings is what you need for a really immersive VR experience. Fortunately, this is exactly what Birdly provides.

6. 5K Head-mounted Display


I got to try out the Vive Pro, which has an astounding 2K resolution. But I would rather put my unearned money down for a VRHero 5K VR headset with 170 degree FOV. They seem to be targeting industrial use cases rather than games, though, since their demo was of a truck simulation (you stood in the road as trucks zoomed by).

7. A globe display


Do I need a giant spherical display? No, I do not. But it would look really cool in my office as a conversation piece. It could also make a really great companion display for a VR or AR experience.

8. 360 Camera Rig with Red Epic Cameras


Five 6K Red Epic Dragon cameras in a 360 video rig may seem like overkill, but with a starting price of around $250K – before the tripod, lenses and a computer powerful enough to process your videos – this could be the killer raffle item at any hi-tech conference.

9. XSens Mocap Suit

According to Jasper, the Xsens full-body lycra motion capture suit with real-time kinematics is one of the best available. I think I was quoted a price of something like $7K(?) to $13K(?). Combined with my hamster ball, it would make me unstoppable in competitive PvP Minecraft.

10. AntVR AR Head-mounted display


AntVR will be launching a Kickstarter campaign for their $500 augmented reality HMD in the next few weeks. I'd been reading about it for a while and was very excited to get a chance to try it out. It uses a Pepper's ghost strategy for displaying AR, has decent tracking, works with Steam, and is very good for its $500 price point.

11. Qualcomm Snapdragon 845


The new Qualcomm Snapdragon 845 chip has built-in SLAM – meaning 6DOF inside-out tracking is now a trivial chip-based solution – unlike just two years ago, when nobody outside of robotics had even heard of SLAM algorithms. This is a really big deal.

Lenovo is using this chip in its new (untethered) Mirage Solo VR device – which looks surprisingly like the Windows Occluded MR headset they built with Microsoft tracking tech. At the keynote, the Lenovo rep stumbled and said that they will support "at least" 6 degrees of freedom, which has now become an inside joke among VR and AR developers. It's also spoiled me: I am no longer satisfied with only 6DOF. I need at least 7DOF, but what I really want is to take my DOF up to 11.

12. Kinect 4


This wasn't actually at VRLA, and I'm not ultimately sure what it is (maybe a competitor for the Google computer vision kit?), but Kinect for Azure was announced at the /build conference in Seattle and should be coming out sometime in 2019. As a former Kinect MVP and Kinect book author, I find this announcement mellows me out like a glass of Hennessy in a suddenly quiet dance club.

While I'm waiting for Bitcoin to rebound, I'll just leave this list up on Amazon, like, in case anyone wants to fulfill it for me or something. On the off chance that actually comes through, I can guarantee you a really awesome unboxing video.