My VRLA Shopping List

I meant to finish this earlier in the week. I spent the past weekend in Los Angeles at the VRLA conference in order to hear Jasper Brekelmans speak about the state of the art in depth sensors and visual effects. One of the great things about VRLA is all the vendor booths you can visit that are directly related to VR and AR technology. Nary a data analytics platform pitch or dev ops consulting services shill in sight.

Walking around with Jasper, we started compiling a list of how we would spend our Bitcoin and Ethereum fortunes once they recover some of their value. What follows is my must-have shopping list if I had so much money I didn’t need anything:

1. Red Frog HoloLens mod

First off is this modified HoloLens by Red Frog Digital. The fabrication allows the HoloLens to balance much better on a user’s head. It also applies no pressure to the bridge of the nose, but instead distributes it across the user’s head. The nicest thing about it is that it always provides a perfect fit, and can be properly aligned with the user’s eyes in about 5 seconds. They designed this for their Zombie Maze location-based experience and are targeting it for large, permanent exhibits / rides.

2. Cleanbox … the future of wholesome fun

If you’ve ever spent a lot of time doing AR and VR demos at an event, you know there are three practical problems you have to work around:

  • seating devices properly on users’ heads
  • cleaning devices between use
  • recharging devices

Cleanbox Technology provides a solution for venue-based AR/VR device cleaning. Place your head-mounted display in the box, close the lid, and it instantly gets blasted with UV rays and air. I’d personally be happy just to have nice wooden boxes for all of my gear – I have a tendency to leave them lying on the floor or scattered across my computer desk – even without the UV lights.

3. VR Hamster Ball

The guy demoing this never seemed to let anyone try it, so I’m not sure if he was actually playing a hamster sim or not. I just know I want one as a 360 running-in-place controller … and as a private nap space, obviously.

4. Haptic Vest and Gauntlets

Bhaptics was demoing their TactSuit, which provides haptic feedback along the chest, back, arms and face. I’m going to need it to go with my giant hamster ball. They are currently selling dev units.

5. Birdly

A tilt table with an attached fan and a user control in the form of flapping wings is what you need for a really immersive VR experience. Fortunately, this is exactly what Birdly provides.

6. 5K Head-mounted Display

I got to try out the Vive Pro, which has an astounding 2K resolution. But I would rather put my unearned money down for a VRHero 5K VR headset with 170 degree FOV. They seem to be targeting industrial use cases rather than games, though, since their demo was of a truck simulation (you stood in the road as trucks zoomed by).

7. A globe display

Do I need a giant spherical display? No, I do not need it. But it would look really cool in my office as a conversation piece. It could also make a really great companion app for a VR or AR experience.

8. 360 Camera Rig with Red Epic Cameras

Five 6K Red Dragon Epic Cameras in a 360 video rig may seem like overkill, but with a starting price of around $250K, before tripod, lenses and a computer powerful enough to process your videos – this could make the killer raffle item at any hi-tech conference.

9. XSens Mocap Suit

According to Jasper, the XSens full-body lycra motion capture suit with real-time kinematics is one of the best available. I think I was quoted a price of somewhere between $7K and $13K. Combined with my hamster ball, it would make me unstoppable in PvP competitive Minecraft.

10. AntVR AR Head-mounted display

AntVR will be launching a Kickstarter campaign for their $500 augmented reality HMD in the next few weeks. I’d been reading about it for a while and was very excited to get a chance to try it out. It uses a Pepper’s ghost strategy for displaying AR, has decent tracking, works with Steam, and offers a lot at its $500 price point.

11. Qualcomm Snapdragon 845

The new Qualcomm Snapdragon 845 Chip has built-in SLAM – meaning 6DOF inside-out tracking is now a trivial chip-based solution – unlike just two years ago when nobody outside of robotics had even heard of SLAM algorithms. This is a really big deal.

Lenovo is using this chip in its new (untethered) Mirage Solo VR device – which looks surprisingly like the Windows Occluded MR headset they built with Microsoft tracking tech. At the keynote, the Lenovo rep stumbled and said that they will support “at least” 6 degrees of freedom, which has now become an inside joke among VR and AR developers. It’s also spoiled me, because I am no longer satisfied with only 6DOF. I need 7DOF at least but what I really want is to take my DOF up to 11.

12. Kinect 4

This wasn’t actually at VRLA, and I’m not ultimately sure what it is (maybe a competitor for the Google computer vision kit?) but Kinect for Azure was announced at the /build conference in Seattle and should be coming out sometime in 2019. As a former Kinect MVP and a Kinect book author, this announcement mellows me out like a glass of Hennessy in a suddenly quiet dance club.

While I’m waiting for bitcoin to rebound, I’ll just leave this list up on Amazon, you know, in case anyone wants to fulfill it for me or something. In the off chance that actually comes through, I can guarantee you a really awesome unboxing video.

Is Ethical AI Ethical?

Is ethical AI ethical? This is not meant to be a self-referential question so much as a question about semantics. Do we know what we mean when we talk about ethics? And, as a corollary, can we practice ethical AI if we aren’t sure what we mean when we talk about ethics? (Whether we speak correctly about artificial intelligence is a matter we can examine later.)

Is it possible that ethics is one of those concepts we all think we understand well, and agree is important to understand in order to lead good lives, but don’t really have a clear grasp of?

If this is the case, how do we go about practicing ethical AI? What is the purpose of it? How do we judge whether our ethical standards regarding AI are sufficient or effective? What would effective AI ethics look like? And is the question of ethical AI one of those problems we need to develop AI in order to solve?

Toward an ethical AI, here are some passages I consider important:

“Everyone will readily agree that it is of the highest importance to know whether or not we are duped by morality.” — Emmanuel Levinas, Totality and Infinity

“What is happiness?

“To crush your enemies, to see them driven before you, and to hear the lamentations of their women!” – Conan the Barbarian

“Imagine that the natural sciences were to suffer the effects of a catastrophe. A series of environmental disasters are blamed by the general public on scientists. Widespread riots occur, laboratories are burnt down, physicists are lynched, books and instruments are destroyed. Finally a Know-Nothing political movement takes power and successfully abolishes science teaching in schools and universities, imprisoning and executing the remaining scientists. Later still there is a reaction against this destructive movement and enlightened people seek to revive science, although they have largely forgotten what it was….

“The hypothesis which I wish to advance is that in the actual world which we inhabit the language of morality is in the same state of grave disorder as the language of natural science in the imaginary world which I described. What we possess, if this view is true, are the fragments of a conceptual scheme, parts of which now lack those contexts from which their significance derived.”  — Alasdair MacIntyre, After Virtue

“A great-souled person, because he holds few things in high honor, is not someone who takes small risks or is passionately devoted to taking risks, but he is someone who takes great risks, and when he does take a risk he is without regard for his life, on the ground that it is not on just any terms that life is worth living.” – Aristotle, Nicomachean Ethics

“In the name of God, the Merciful, the Compassionate.

“Someone asked the eminent shaykh Abu ‘Ali b. Sina (may God the Exalted have mercy on him) the meaning of the Sufi saying, He who knows the secret of destiny is an atheist.  In reply he stated that this matter contains the utmost obscurity, and is one of those matters which may be set down only in enigmatic form and taught only in a hidden manner, on account of the corrupting effects its open declaration would have on the general public.  The basic principle concerning it is found in a Tradition of the Prophet (God bless and safeguard him): Destiny is the secret of God; do not declare the secret of God.  In another Tradition, when a man questioned the Prince of the Believers, ‘Ali (may God be pleased with him), he replied, Destiny is a deep sea; do not sail out on it.  Being asked again he replied, It is a stony path; do not walk on it.  Being asked once more he said, It is a hard ascent; do not undertake it.

“The shaykh said: Know that the secret of destiny is based upon certain premisses, such as 1) the world order, 2) the report that there is Reward and Punishment, and 3) the affirmation of the resurrection of souls.” — Avicenna, On the Secret of Destiny

“The greatest recent event—that “God is dead,” that the belief in the Christian God has ceased to be believable—is even now beginning to cast its first shadows over Europe. For the few, at least, whose eyes, whose suspicion in their eyes, is strong and sensitive enough for this spectacle, some sun seems to have set just now…” – F. Nietzsche, The Gay Science (1887)

HoloLens and MR Device Dev Advisory Mar-2018

I’m currently doing HoloLens development with VS 2017 v15.6.4, Unity 2017.3.1p3, MRTK 2017.1.2, and W10 17128 (Insider Build).

Unity 2017.3.1p3, a patch release, includes the 2017.3.1p1 fix for hologram instability:

(993880) – XR: Fixed stabilization plane not getting set correctly via the SetFocusPointForFrame() API, resulting in poor hologram stabilization and color separation on HoloLens.

There continues to be uncertainty about whether this fixes all the stabilization problems or not – though it’s definitely better than it has been over the past several months.
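
For context, the stabilization plane is typically fed from a per-frame script. Here is a minimal sketch (assuming the UnityEngine.XR.WSA namespace and a hypothetical focusTarget transform you want to keep stable):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;

// Sketch: feed the stabilization plane a focus point every frame.
// "focusTarget" is a hypothetical transform the user is looking at.
public class FocusPlaneSetter : MonoBehaviour
{
    public Transform focusTarget;

    void LateUpdate()
    {
        if (focusTarget == null) return;

        // The plane normal should face the user for best stabilization.
        var normal = -Camera.main.transform.forward;
        HolographicSettings.SetFocusPointForFrame(focusTarget.position, normal);
    }
}
```

If the 993880 fix works as advertised, calls like this should once again produce stable holograms.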

UnityWebRequest continues to always return true for the isNetworkError property. Use isError or isHttpError instead. The older WWW class probably shouldn’t be used anymore. There are reports that media downloads aren’t working with UnityWebRequest while other file types are.
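
Concretely, the workaround looks something like this (a sketch against the 2017-era UnityWebRequest API; the logging is illustrative):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class RequestErrorCheck : MonoBehaviour
{
    IEnumerator Get(string url)
    {
        using (var request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            // isNetworkError currently always reports true, so ignore it
            // and test isError / isHttpError instead.
            if (request.isError || request.isHttpError)
            {
                Debug.LogError("Request failed: " + request.error);
            }
            else
            {
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```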

So if you have something working now, and have work-arounds in place, you probably shouldn’t upgrade. I know of HoloLens developers who are still very happy working in the older Unity 5.6.3.

April is shaping up to be very interesting. According to Unity, they will be releasing Unity 2018.1.0 then. For UWP/HoloLens developers, this means the addition of the .NET Standard 2.0 API compatibility level. .NET Standard 2.0 can be thought of as the set of APIs commonly supported by both .NET Core 2.0 (what UWP uses) and .NET Framework 4.6.1 (used for Windows apps and in the IDE). By supporting this, Unity 2018.1.0 should provide us with the ability to write much more common MonoBehaviour script code that works in both the IDE and on the HoloLens without using precompiler directives.
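
For comparison, here is the kind of platform-straddling code .NET Standard 2.0 should help eliminate. The helper below is purely hypothetical; the point is the #if WINDOWS_UWP dance that today separates editor code from device code:

```csharp
using System.Threading.Tasks;

public static class FileHelper
{
    // Hypothetical helper: read a text file both in the editor and on the device.
    public static async Task<string> ReadAllTextAsync(string path)
    {
#if WINDOWS_UWP
        // .NET Core / UWP path, used when running on the HoloLens.
        var file = await Windows.Storage.StorageFile.GetFileFromPathAsync(path);
        return await Windows.Storage.FileIO.ReadTextAsync(file);
#else
        // .NET Framework path, used inside the Unity editor.
        return System.IO.File.ReadAllText(path);
#endif
    }
}
```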

Of course, this is only useful if the HoloLens actually supports .NET Core 2.0, which is why the announcement of the RS4 Technical Preview is such a big deal. This is the first major firmware update for the HoloLens since release, and it brings with it all the changes to the Windows platform since the Anniversary Update (build 10.0.14393), which was also known as RS1 and which supports .NET Core 1.0.

Redstone 4 (build 10.0.17133), also known as the Spring Creators Update, is supposed to drop for PCs in mid-April. Which coincidentally is also when Unity 2018.1.0 is supposed to drop. So it would not be out of the question to expect a version of RS4 for HoloLens to drop at around the same time.

What sets RS4 for HoloLens apart from RS4 for Windows? For one thing, on the HoloLens we will have access to a new feature called Research Mode, providing access to low level sensor data such as the ToF depth camera and potentially the 4 mono cameras and the microphones. This in turn can be used to try out new algorithms beyond what the HoloLens already currently uses for data analysis.

On the UI front, the MR Design Labs interface tools have finally been integrated into the dev branch of the Mixed Reality Toolkit. Fingers crossed that this will make its way into the main branch in April also.

Finally, Magic Leap’s mixed reality headset, dubbed the Magic Leap One, had its debut at GDC this month. They also opened their creator portal to all developers, with links to documentation, the Lumin SDK, a special version of Unity 2018 to develop ML apps and a simulator to test gesture and controller interactions.

In the interest of full disclosure, I’ve been developing for the Magic Leap for a while under NDAs and inside a locked room ensorcelled by eldritch spells. It’s a great device and finally creates some good competition for the HoloLens team at Microsoft.

The first reaction among people working with the HoloLens and occluded MR devices may be to get defensive and tribal. Please resist this instinct.

A second, well-funded device like the Magic Leap One means that much more marketing money from both Microsoft and Magic Leap spent on raising the profile of Mixed Reality (or Spatial Computing, as ML is calling it). It means healthy competition between the two device makers that will encourage both companies to improve their tech in efforts to grow and hold large swaths of the AR market. It also means a new device to which most of your spatial development skills will easily transfer. In other words, this is a good thing, my MR homies. Embrace it.

And from the development side, there are lots of things to like about Magic Leap. Lumin is Linux/Mono based, which means a higher level of compatibility between the platform and pre-existing Unity assets from the Asset Store. It also supports development in Unreal and, notably, development on a Mac, potentially offering a way for crossover between the design, gaming and enterprise dev worlds. This in turn raises interest in high-end AR and will make people take a second look at HoloLens and the occluded MR devices.

It doesn’t take a weatherman to know it’s going to be a great summer for Mixed Reality / Spatial Computing developers.

I’m leaving Facebook because blah blah ethics

I’ve deleted my Facebook account (which apparently takes anywhere from 24 hrs to 45 days to accomplish) over their involvement with Cambridge Analytica. I’m not saying you should too, but you should think about it. For me it came out of distaste at colluding with a large company whose sole purpose is to collect personal data and monetize it. The data is being collected for current but primarily future AI applications – I think there’s little doubt about that. Also, Facebook has known about its privacy and trust problems for a long time, was warned about them internally, and chose to do nothing until it got caught.

At an ethical level (rather than just moral squeamishness on my part) I’ve been advocating for greater ethical scrutiny of matters touching on AI, so it would have been hypocritical to let this pass. That made it an easy decision on my part (though still painful) but is somewhat unique to my situation and past statements and doesn’t necessarily apply to any of you.

https://www.theatlantic.com/technology/archive/2018/03/facebook-cambridge-analytica/555866/ 

But at least it’s a good moment to pause, consider and exercise our ability to think ethically. What is the case for staying with Facebook? What is the case for leaving Facebook?

Here’s my first go at an ethical case:

For staying:

Facebook helps me connect with family and old friends in a convenient way. I might lose these relationships if I delete Facebook.

As a Microsoft MVP, I’m judged on my reach and influence. Deleting Facebook amputates my reach and consequently my appeal to Microsoft as an MVP.

Facebook is used to organize people against repressive regimes in liberation movements. This is a clear good.

For leaving:

Cambridge Analytica was not an isolated instance. Facebook was a poor steward of our personal information.

Facebook was warned about its policies and didn’t act until 2015 to review its internal practices.

This is Facebook’s core business and not an aberration. They are collecting data for AI and had a strong interest in seeing how it could be analyzed and used.

Participating in Facebook gives them access to my information as well as that of my friends.

Their policies can change at any time.

Participating in Facebook encourages family and friends to stay on Facebook.

The loose relationships formed on Facebook encourage and facilitate the spread of false and/or skewed information.

Facebook served as a medium for swaying the 2016 U.S. elections through bots and Russian active measures.

Analysis:

The harm outweighs the gains. The potential for future harm also outweighs the potential for future gains. Also, I am co-implicated in present and future harm promulgated by Facebook through continued participation. QED: it is ethically necessary to sever relations with Facebook by deleting my account.

Meta-analysis:

But just because something is ethically wrong, does that make it wrong wrong?

Alasdair MacIntyre proposed in his 1981 book After Virtue that we post-moderns have lost the sense for moral language and that worldviews involving character, moral compasses and virtue are unintelligible to us, for better or worse.

Malcolm Gladwell makes a related point in a talk at New Yorker Con about how we justify right and wrong to ourselves these days in terms of “harm” (and why it doesn’t work).

In a past life, I used to teach ethics to undergraduates and gave them the general message that everyone learns from Ethics 101 in college: ethics is stopping and thinking about not doing something bad just before you are about to do it anyways.

Along those lines, here are some arguments I’ve been hearing against acting on our ethical judgment against Facebook:

1. It will not really harm Facebook. If a few hundred thousand North Americans leave Facebook this week, it will be made up for by a few hundred thousand Indians signing up for Facebook next week.

2. Facebook is modifying its policies as of right now to fix these past errors.

3. Facebook has just severed relations with Cambridge Analytica, who are the real culprits anyways.

4. Since 2015, you’ve had the ability to go into your Facebook settings and change the default privacy permissions to better secure your private information.

Which are legitimate points. Here are some counter-points (but again this is just me doing me while you need to do you. I’m not judging your choices even if I sound judgy at the moment):

1. Leaving Facebook isn’t primarily about imposing practical consequences on Facebook (though if it did that would be gravy) but rather about imposing consequences upon myself. What sort of person am I if I do not leave Facebook knowing what they have done?

2. It isn’t even primarily about what bad deeds Facebook has committed in the past but rather about what those actions and policies say about who Facebook is today. Facebook is, to use a term the author Charles Stross coins in his talk Dude, You Broke the Future, a slow AI. It is a large corporation with an internal culture that leads it to repeatedly follow the same algorithms. There is no consciousness guiding it, just algorithms. And the algorithms collect widgets, in economic terms – which in this case means your personal data. It can do nothing else. Eventually it wants to turn your personal data into cash because of the algorithm.

3. In the moral languages of the past, we would call this its character. Facebook has a bad character (or even Facebook is a bad character). Having good character means having principles you do not break and lines you do not cross.

4. I want to wear a white hat. To that purpose, even if I can’t stop a bad character, I don’t have to help it or be complicit in its work.

Switching gears, somewhere between World War I and World War II, we probably lost the proper language to discuss ethics, values, morals, what have you. There are artifacts from that time and before, however, that we can dig up in order to understand how we used to think. Here’s one of my favorites, from the e. e. cummings poem i sing of Olaf glad and big:

Olaf(upon what were once knees)
does almost ceaselessly repeat
“there is some shit I will not eat”

This is what I would say to Mark Zuckerberg, if he would listen, and even if he won’t.

The AI Ethics Challenge

A few years ago, CNNs were understood by only a handful of PhDs. Today, companies like Facebook, Google and Microsoft are snapping up AI majors from universities around the world and putting them toward efforts to consumerize AI for the masses. At the moment, tools like Microsoft’s Cognitive Services, Google Cloud Vision and WinML are placing this power in the hands of line-of-business software developers.

But with great power comes great responsibility. While being a developer even a few years ago really meant being a puzzle-solver who knew their way around a compiler (and occasionally did some documentation), today with our new-found powers it requires that we also be ethicists (who occasionally do documentation). We must think through the purpose of our software and the potential misuses of it the way, once upon a time, we anticipated ways to test our software. In a better, future world we would have ethics harnesses for our software, methodologies for ethics-driven-development, continuous automated ethics integration and so on.

Yet we don’t live in a perfect world and we rarely think about ethics in AI beyond the specter of a robot revolution. In truth, the  Singularity and the Skynet takeover (or the Cylon takeover) are straw robots that distract us from real problems. They are raised, dismissed as Sci Fi fantasies, and we go on believing that AI is there to help us order pizzas and write faster Excel macros. Where’s the harm in that?

So let’s start a conversation about AI and ethics; and beyond that, ML and ethics, Mixed Reality and ethics, software consulting and ethics. Because through a historical idiosyncrasy it has fallen primarily on frontline software developers to start this dialog, and we should not shirk the responsibility. It is what we owe to future generations.

I propose to do this in two steps:

1. I will challenge other technical bloggers to address ethical issues in their field.  This will provide a groundwork for talking about ethics in technology, which as a rule we do not normally do on our blogs. They, in turn, will tag five additional bloggers, and so on.

2. For one week, I will add “and ethicist” to my LinkedIn profile description and challenge each of the people I tag to do the same. I understand that not everyone will be able to do this but it will serve to draw attention to the fact that “thinking ethically” today is not to be separated from our identity as “coders”, “developers” or even “hackers”. Ethics going forward is inherent in what we do.

Here are the first seven names in this ethics challenge:

I want to thank Rick Barraza and Joe Darko, in particular, for forcing me to think through the question of AI and ethics at the recent MVP Summit in Redmond. These are great times to be a software developer and these are also dark times to be a software developer. But many of us believe we have a role in making the world a better place and this starts with conversation, collegiality and a certain amount of audacity.

Windows Insider Builds and Unity Licenses

On one of my dev PCs, I’m on the Windows 10 Insider Builds inner ring, which updates my computer with intermediate builds of Windows. This gives me access to the latest features for testing. Unfortunately, it also means that my Unity Plus license goes into an inconsistent state every few weeks after an Insider Build changes my computer profile enough that Unity no longer recognizes it as the same computer.

In the past I’ve tried going into my account management to revoke the assigned seat and then assign it again to myself. Sometimes this works but often it doesn’t. Somehow during the Windows update process, something happens to the original license so that it cannot be disabled correctly.

I recently found out that there is a different route to deactivating seats that works even after a Windows 10 update. Back on the account landing page, you need to navigate to My Seats instead of going through the Organizations menu.

This leads you to a page that lets you remove all license activations. Once you’ve clicked on “Remove all my activations”, you can then successfully use your license key to reactivate your Unity IDE.

Reminder to pre-buy your Black Panther tickets

Tickets are actually already selling out for the new Black Panther movie, in part because of buyouts of complete theaters for children, like these Ron Clark Academy students in Atlanta:

For those who don’t know (bad nerds), Black Panther is a superhero in the Marvel Universe who is also the king of the African nation of Wakanda. Wakanda is secretly the most technologically advanced country in the world and the sole source of vibranium (a magic metal – Captain America’s shield is made of it), but projects an image of being just another African nation in order to avoid interference. In the Marvel Universe it had an all-out war with the Sub-Mariner’s Atlantean army a few years ago, and currently auteur Ta-Nehisi Coates is taking a turn at writing the series and problematizing it (which I don’t totally like, but tastes and all that).

The history of the series is basically the usual Marvel thing – Marvel takes advantage of racial trends and exploits them (as with Luke Cage, Iron Fist and Shang-Chi) and ends up creating something kind of miraculous. In this case, a kingdom of black people who are more advanced than anybody else, culturally and technologically.

If I can talk race and gender a little (feel free to squirm): according to a friend, it does for black people what she assumes Wonder Woman did for white women. You get to see yourself in an ideal way without any cultural or political baggage. How do you create a movie hero without any cultural baggage or identity politics attached? You create a fictional country like Themyscira or Wakanda and make your characters come from there – that way they don’t become walking political arguments but instead just _are_.

So after seeing Wonder Woman, my wife asked me if that’s what it’s like for men to see movies, and I think, yeah, pretty much. I’m not Thor, but he’s an ideal projection of myself when I watch the movie, and he gets to drink and carouse and hang out with his buddies, and women admire him, and no one ever negs him for it. And my wife said she’s learned to watch and enjoy those kinds of movies, but Wonder Woman showed her what that experience could really be like.

At the risk of overselling — Black Panther is going to do that for race, according to a friend who got to go to the Hollywood premiere. No white guilt, no resentment, no countries getting called sh* holes, just gorgeous, powerful black people and a reprieve from our crazy mixed up world for a while. Plus, again according to the friend, it’s also another fun Marvel movie.

And here’s the catch for lovers of VR and AR – obviously there’s going to be lots of great Cinema4D faux-holograms used to show how advanced Wakanda is. Not only did Marvel movies pioneer this, but holograms are the chief way movies and TV shows depict “advanced” societies (e.g. Black Mirror, Electric Dreams).

But more importantly, when we talk about “virtual” immersive experiences I think we implicitly know it means more than just having objects in a 3D space. The world is a given and stuck thing, while virtual reality frees us from that and lets us see it differently. The killer AR/VR app is going to do that at a very deep level. I think Black Panther is going to provide an ideal/target/goal for what we want to achieve with all of our headgear. An artificial experience that alters the way we see reality – if only for a few hours plus the afterglow period. Great virtual reality needs to alter our real reality – and make it better.

HoloLens and MR Device Dev Advisory Jan-2018

I’ve come to accept that doing HoloLens and MR Device development means working with constant issues. As long as I can stay on top of what these issues are, I feel less like pulling out my hair. And I’m not just a member of the hair club for men – I also want to help you avoid hair loss with monthly updates.

I’m currently doing HoloLens development with VS 2017 v15.3.3, Unity 2017.2.0f3, MRTK 2017.1.2, and W10 17074 (Insider Build).

This month saw the release of Unity 2017.3.0f3, which introduced a fix but also some new bugs. The fix addresses the HoloLens stabilization issues introduced in December’s Unity 2017.2.1 release, which caused holograms to be jittery. In Unity 2017.3.0f3, a new player setting in the editor called shared depth buffer fixes this. Just expand the Windows Mixed Reality node under XR Settings and check Enable Depth Buffer Sharing. On the other hand, this seems to conflict with the stabilization logic in the MRTK, so you may see some jumpiness (but no jitteriness!), and you’ll want to remove that older logic from your code.

2017.3.0f3 also introduced problems with WWW (Unity’s older internet communication class). Basically, it no longer works when running in UWP (though it does on other platforms and in the editor), so if your code – or any assets you rely on for internet communication – depends on WWW, you’ll have issues.
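
If those problems push you off WWW, the UnityWebRequest equivalent of a simple WWW text fetch looks roughly like this (a sketch; the old WWW version is shown in the comment for comparison):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class DownloadExample : MonoBehaviour
{
    // Old WWW style:
    //   var www = new WWW(url);
    //   yield return www;
    //   Debug.Log(www.text);
    IEnumerator Download(string url)
    {
        using (var request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.isHttpError)
            {
                Debug.LogError(request.error);
            }
            else
            {
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```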

If you are using older stable builds of the MR Toolkit (up to 2017.1.2), you’ll start getting warnings and errors in 2017.2.0f3 and above about outdated APIs. Unity introduces API changes with each monthly release. The UnityEngine.VR.WSA namespace, for instance, is now UnityEngine.XR.WSA (sometimes the Unity editor will automatically fix this for you when you migrate a project, but often it doesn’t). In a couple of cases (like in the HandGuidance class) you’ll notice that the InteractionManager APIs have changed.

UnityEngine.XR.WSA.Input.InteractionManager.SourceLost += InteractionManager_SourceLost;
UnityEngine.XR.WSA.Input.InteractionManager.SourceUpdated += InteractionManager_SourceUpdated;
UnityEngine.XR.WSA.Input.InteractionManager.SourceReleased += InteractionManager_SourceReleased;

becomes:

UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceLost += InteractionManager_SourceLost;
UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceUpdated += InteractionManager_SourceUpdated;
UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceReleased += InteractionManager_SourceReleased;

The signatures of the event handlers also change. Just follow Visual Studio IntelliSense’s guidance on how to fix them.
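
For reference, the updated handler signatures look like this (a sketch against the 2017.2-era UnityEngine.XR.WSA.Input APIs):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;

public class HandTracker : MonoBehaviour
{
    // Handlers now receive strongly typed InteractionSource*EventArgs.
    void InteractionManager_SourceLost(InteractionSourceLostEventArgs args)
    {
        Debug.Log("Lost source " + args.state.source.id);
    }

    void InteractionManager_SourceUpdated(InteractionSourceUpdatedEventArgs args)
    {
        Vector3 position;
        if (args.state.sourcePose.TryGetPosition(out position))
        {
            // track the hand/controller position
        }
    }
}
```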

Is there a mixed-reality dress code?

Not to derail us, but how should MR devs dress?

My feeling is we shouldn’t be wearing the standard enterprise / consultant software dev uniform of a golf shirt and khaki pants with dog walker shoes. That isn’t really who we are. ORMs are not the highlight of our day and our job doesn’t end when the code compiles. We actually care how it works and even if everything works we care if it is easy for the user to understand our app. We even occasionally open up Photoshop and Cinema4D.

silicon-valley

We aren’t web devs, either. A hoodie, jeans, and Converse aren’t appropriate. We don’t chase after the latest JavaScript framework every six weeks. We worry pathologically about memory allocation and performance. Our world isn’t obsessively flat; it’s obsessively three-dimensional. Our uniform should reflect this, too.

GivenchyVR_10

This is the hard part, but here’s a first sketch of the general style (subdued expensive) for men (because I have no clue about women’s fashion): faded black polo shirt buttoned to the top, slightly linty black velveteen jacket, black jeans, Hermès pocket square, leather dress shoes. It signals concern with UI, but not excessive concern. Comfort is also important (UX), as is the quality of the materials (the underlying code and software architecture).

Finally, MR/VR/AR/XR development is premium work and deserves premium rates. The clothes we wear should reflect those rates, signaling that if what we are paid doesn’t support our clothing habit (real or imagined), we will walk away (the ability to walk away from a contract being the biggest determinant of pricing).

sid

Black, of course, suggests the underlying 70’s punk mentality that drives innovation. MR devs are definitely not grunge rockers. The pocket handkerchief suggests flair.

[This post was excerpted from a discussion on the Microsoft MVP Mixed Reality list.]

Virtual Nostalgia

crait_bikes

One of the pleasures of revisiting a film franchise is the sense that one is coming back to a familiar setting with familiar people – such is the feeling of returning to the Star Wars universe.

When I went to see The Last Jedi on December 16 (3D + IMAX) I underwent an odd version of this experience. As the heroes descended on the world of Crait, a red planet dusted in white, I had the sense that I had been there before. This was because I had been playing Star Wars Battlefront II over the previous week; in the multiplayer game, the planet Crait had just been introduced as a new location for battles, and I’d been struggling against Storm Troopers (or as a Storm Trooper) through the trenches and tunnels of Crait for many repetitive hours. Not only that, but the models used to build the game’s battle world appeared to be based on the same visual assets used for the movie.

And so, when I saw the way the light reflected off of the red mud on the walls of the Crait trenches, I had an “aha” moment of recognition. My spatial memory told me I had been here before.

trench

We might say that this was a case of déjà vu, since I had never been to Crait in reality – but only in a video game. But then one must recall that the “vu” experience of the déjà vu also never happened – the CGI world on the screen is not a place that exists in any reality. I had experienced a virtual nostalgia for a space that didn’t exist – a sense of returning home when there is no home to return to.

We aren’t quite in the territory of Blade Runner manufactured memories, yet, but we are a step closer. Games and technology that give us a sense of place and affect that peculiar and primeval faculty of the brain (the ability to remember places that made our hunter-gatherer ancestors so effective and that was later exploited to form the Ars Memoriae) will have unexpected side effects.

I think this is a new type of experience, and one that marks an inflection point in mankind’s progress – if I may be allowed to be a bit grandiose. For while in all previous generations, mimetic technologies such as writing, encyclopedias, computers, and the internet have all tended to diminish our natural memories, this new age of virtual reality and 3D spaces has, for the first time, started to provide us with a superfluity of unexpected and artificial memories.

Authentically Virtual