HoloLens and MR Device Dev Advisory Mar-2018

I’m currently doing HoloLens development with VS 2017 v15.6.4, Unity 2017.3.1p3, MRTK 2017.1.2, and W10 17128 (Insider Build).

Unity 2017.3.1p3, a patch release, includes the 2017.3.1p1 fix for hologram instability:

(993880) – XR: Fixed stabilization plane not getting set correctly via the SetFocusPointForFrame() API, resulting in poor hologram stabilization and color separation on HoloLens.

There continues to be uncertainty about whether this fixes all the stabilization problems or not – though it’s definitely better than it has been over the past several months.

UnityWebRequest still always returns true for the isNetworkError property. Use isError or isHttpError instead. The older WWW class probably shouldn’t be used anymore. There are also reports that media downloads aren’t working with UnityWebRequest while downloads of other file types are.
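Here’s a minimal sketch of the work-around, assuming a standard Unity coroutine (the class and method names are illustrative, not from any particular project):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class DownloadExample : MonoBehaviour
{
    // Coroutine that fetches text over HTTP, avoiding the currently
    // broken isNetworkError property.
    IEnumerator GetText(string url)
    {
        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            // Work-around: isNetworkError reports true even on success,
            // so test isError and isHttpError instead.
            if (request.isError || request.isHttpError)
            {
                Debug.LogError(request.error);
            }
            else
            {
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```

Once Unity fixes isNetworkError, the conditional can go back to checking it in the usual way.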

So if you have something working now, and have work-arounds in place, you probably shouldn’t upgrade. I know of HoloLens developers who are still very happy working in the older Unity 5.6.3.

April is shaping up to be very interesting. According to Unity, they will be releasing Unity 2018.1.0 then. For UWP/HoloLens developers, this means the addition of the .NET Standard 2.0 API compatibility level. .NET Standard 2.0 can be thought of as the set of APIs commonly supported by both .NET Core 2.0 (what UWP uses) and .NET Framework 4.6.1 (used for Windows apps and in the IDE). By supporting this, Unity 2018.1.0 should provide us with the ability to write much more common MonoBehaviour script code that works in both the IDE and on the HoloLens without using preprocessor directives.
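To make that concrete, here is the kind of dual-path code that .NET Standard 2.0 should render unnecessary. This is an illustrative sketch only – WINDOWS_UWP is Unity’s scripting define for UWP builds, but the class and member names are invented for the example:

```csharp
// Before .NET Standard 2.0: the Unity editor (.NET Framework) and the
// HoloLens player (.NET Core) expose different APIs, so shared
// MonoBehaviour scripts have to branch at compile time.
public static class SavePath
{
#if WINDOWS_UWP
    // UWP/.NET Core path, used on the HoloLens
    public static string Root =>
        Windows.Storage.ApplicationData.Current.LocalFolder.Path;
#else
    // .NET Framework path, used in the Unity editor
    public static string Root =>
        System.IO.Path.GetTempPath();
#endif
}
```

With the .NET Standard 2.0 profile, APIs like System.IO.File exist on both sides, so a single unconditional code path can serve both the editor and the device.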

Of course, this is only useful if the HoloLens actually supports .NET Core 2.0, which is why the announcement of the RS4 Technical Preview is such a big deal. This is the first major firmware update for the HoloLens since release, and brings with it all the changes to the Windows platform since the Anniversary Update (build 10.0.14393), which was also known as RS1 and which supports .NET Core 1.0.

Redstone 4 (build 10.0.17133), also known as the Spring Creators Update, is supposed to drop for PCs in mid-April. Which coincidentally is also when Unity 2018.1.0 is supposed to drop. So it would not be out of the question to expect a version of RS4 for HoloLens to drop at around the same time.

What sets RS4 for HoloLens apart from RS4 for Windows? For one thing, on the HoloLens we will have access to a new feature called Research Mode, providing access to low-level sensor data such as the ToF depth camera and potentially the 4 mono cameras and the microphones. This in turn can be used to try out new algorithms beyond those the HoloLens currently uses for data analysis.

On the UI front, the MR Design Labs interface tools have finally been integrated into the dev branch of the Mixed Reality Toolkit. Fingers crossed that this will make its way into the main branch in April also.

Finally, Magic Leap’s mixed reality headset, dubbed the Magic Leap One, had its debut at GDC this month. They also opened their creator portal to all developers, with links to documentation, the Lumin SDK, a special version of Unity 2018 for developing ML apps, and a simulator to test gesture and controller interactions.

In the interest of full disclosure, I’ve been developing for the Magic Leap for a while under NDAs and inside a locked room ensorcelled by eldritch spells. It’s a great device and finally creates some good competition for the HoloLens team at Microsoft.

The first reaction among people working with the HoloLens and occluded MR devices may be to get defensive and tribal. Please resist this instinct.

A second, well-funded device like the Magic Leap One means that many more marketing dollars, from both Microsoft and Magic Leap, spent on raising the profile of Mixed Reality (or Spatial Computing, as ML is calling it). It means healthy competition between the two device makers that will encourage both companies to improve their tech in efforts to grow and hold large swaths of the AR market. It also means a new device to which most of your spatial development skills will easily transfer. In other words, this is a good thing, my MR homies. Embrace it.

And from the development side, there are lots of things to like about Magic Leap. Lumin is Linux/Mono based, which means a higher level of compatibility between the platform and pre-existing Unity assets from the Asset store. It also supports development in Unreal. Lastly, it also supports development on a Mac, potentially offering a way for crossover between the design, gaming and enterprise dev worlds. This in turn raises interest in high-end AR and will make people take a second look at HoloLens and the occluded MR devices.

It doesn’t take a weatherman to know it’s going to be a great summer for Mixed Reality / Spatial Computing developers.

I’m leaving Facebook because blah blah ethics


I’ve deleted my Facebook account (which apparently takes anywhere from 24 hours to 45 days to accomplish) over their involvement with Cambridge Analytica. I’m not saying you should too, but you should think about it. For me it came out of distaste at colluding with a large company whose sole purpose is to collect personal data and monetize it. The data is being collected for current but primarily future AI applications – I think there’s little doubt about that. Facebook has also known about its privacy and trust problems for a long time, was warned internally about them, and chose to do nothing until it got caught.

At an ethical level (rather than just moral squeamishness on my part) I’ve been advocating for greater ethical scrutiny of matters touching on AI, so it would have been hypocritical to let this pass. That made it an easy decision on my part (though still painful) but is somewhat unique to my situation and past statements and doesn’t necessarily apply to any of you.

https://www.theatlantic.com/technology/archive/2018/03/facebook-cambridge-analytica/555866/ 

But at least it’s a good moment to pause, consider and exercise our ability to think ethically. What is the case for staying with Facebook? What is the case for leaving Facebook?

Here’s my first go at an ethical case:

For staying:

Facebook helps me connect with family and old friends in a convenient way. I might lose these relationships if I delete Facebook.

As a Microsoft MVP, I’m judged on my reach and influence. Deleting Facebook amputates my reach and consequently my appeal to Microsoft as an MVP.

Facebook is used to organize people against repressive regimes in liberation movements. This is a clear good.

For leaving:

Cambridge Analytica was not an isolated instance. Facebook was a poor steward of our personal information.

Facebook was warned about its policies and didn’t act until 2015 to review its internal practices.

This is Facebook’s core business and not an aberration. They are collecting data for AI and had a strong interest in seeing how it could be analyzed and used.

Participating in Facebook gives them access to my information as well as that of my friends.

Their policies can change at any time.

Participating in Facebook encourages family and friends to stay on Facebook.

The loose relationships formed on Facebook encourage and facilitate the spread of false and/or skewed information.

Facebook served as a medium for swaying the 2016 U.S. elections through bots and Russian active measures.

Analysis:

The harm outweighs the gains. The potential for future harm also outweighs the potential for future gains. And through continued participation, I am co-implicated in present and future harm promulgated by Facebook. QED: it is ethically necessary to sever relations with Facebook by deleting my account.

Meta-analysis:

But just because something is ethically wrong, does that make it wrong wrong?

Alasdair MacIntyre proposed in his 1981 book After Virtue that we post-moderns have lost the sense for moral language and that worldviews involving character, moral compasses and virtue are unintelligible to us, for better or worse.

Malcolm Gladwell makes a related point in a talk at New Yorker Con about how we justify right and wrong to ourselves these days in terms of “harm” (and why it doesn’t work).

In a past life, I used to teach ethics to undergraduates and gave them the general message that everyone learns from Ethics 101 in college: ethics is stopping and thinking about not doing something bad just before you are about to do it anyways.

Along those lines, here are some arguments I’ve been hearing against acting on our ethical judgment against Facebook:

1. It will not really harm Facebook. If a few hundred thousand North Americans leave Facebook this week, it will be made up for by a few hundred thousand Indians signing up for Facebook next week.

2. Facebook is modifying its policies as of right now to fix these past errors.

3. Facebook has just severed relations with Cambridge Analytica, who are the real culprits anyways.

4. Since 2015, you’ve had the ability to go into your Facebook settings and change the default privacy permissions to better secure your private information.

These are legitimate points. Here are some counter-points (but again, this is just me doing me while you need to do you. I’m not judging your choices even if I sound judgy at the moment):

1. Leaving Facebook isn’t primarily about imposing practical consequences on Facebook (though if it did that would be gravy) but rather about imposing consequences upon myself. What sort of person am I if I do not leave Facebook knowing what they have done?

2. It isn’t even primarily about what bad deeds Facebook has committed in the past but rather about what those actions and policies say about who Facebook is today. Facebook is, to use a term the author Charles Stross coined in his talk Dude, You Broke the Future, a slow AI. It is a large corporation whose internal culture leads it to repeatedly follow the same algorithms. There is no consciousness guiding it, just algorithms. And the algorithms collect widgets – which in this case, in economic terms, are your personal data. It can do nothing else. Eventually it wants to turn your personal data into cash because of the algorithm.

3. In the moral languages of the past, we would call this its character. Facebook has a bad character (or even Facebook is a bad character). Having good character means having principles you do not break and lines you do not cross.

4. I want to wear a white hat. To that purpose, even if I can’t stop a bad character, I don’t have to help it or be complicit in its work.

Switching gears, somewhere between World War I and World War II, we probably lost the proper language to discuss ethics, values, morals, what have you. There are artifacts from that time and before, however, that we can dig up in order to understand how we used to think. Here’s one of my favorites, from the e. e. cummings poem i sing of Olaf glad and big:

Olaf(upon what were once knees)
does almost ceaselessly repeat
“there is some shit I will not eat”

This is what I would say to Mark Zuckerberg, if he would listen, and even if he won’t.

The AI Ethics Challenge

A few years ago, CNNs were understood by only a handful of PhDs. Today, companies like Facebook, Google and Microsoft are snapping up AI majors from universities around the world and putting them toward efforts to consumerize AI for the masses. At the moment, tools like Microsoft’s Cognitive Services, Google Cloud Vision and WinML are placing this power in the hands of line-of-business software developers.

But with great power comes great responsibility. While being a developer even a few years ago really meant being a puzzle-solver who knew their way around a compiler (and occasionally did some documentation), today with our new-found powers it requires that we also be ethicists (who occasionally do documentation). We must think through the purpose of our software and the potential misuses of it the way, once upon a time, we anticipated ways to test our software. In a better, future world we would have ethics harnesses for our software, methodologies for ethics-driven-development, continuous automated ethics integration and so on.

Yet we don’t live in a perfect world and we rarely think about ethics in AI beyond the specter of a robot revolution. In truth, the Singularity and the Skynet takeover (or the Cylon takeover) are straw robots that distract us from real problems. They are raised, dismissed as Sci Fi fantasies, and we go on believing that AI is there to help us order pizzas and write faster Excel macros. Where’s the harm in that?

So let’s start a conversation about AI and ethics; and beyond that, ML and ethics, Mixed Reality and ethics, software consulting and ethics. Because through a historical idiosyncrasy it has fallen primarily on frontline software developers to start this dialog, and we should not shirk the responsibility. It is what we owe to future generations.

I propose to do this in two steps:

1. I will challenge other technical bloggers to address ethical issues in their field. This will provide a groundwork for talking about ethics in technology, which as a rule we do not normally do on our blogs. They, in turn, will tag five additional bloggers, and so on.

2. For one week, I will add “and ethicist” to my LinkedIn profile description and challenge each of the people I tag to do the same. I understand that not everyone will be able to do this but it will serve to draw attention to the fact that “thinking ethically” today is not to be separated from our identity as “coders”, “developers” or even “hackers”. Ethics going forward is inherent in what we do.

Here are the first seven names in this ethics challenge:

I want to thank Rick Barraza and Joe Darko, in particular, for forcing me to think through the question of AI and ethics at the recent MVP Summit in Redmond. These are great times to be a software developer and these are also dark times to be a software developer. But many of us believe we have a role in making the world a better place and this starts with conversation, collegiality and a certain amount of audacity.