The AI Ethics Challenge

A few years ago, convolutional neural networks (CNNs) were understood by only a handful of PhDs. Today, companies like Facebook, Google and Microsoft are snapping up AI majors from universities around the world and putting them toward efforts to consumerize AI for the masses. At the moment, tools like Microsoft’s Cognitive Services, Google Cloud Vision and WinML are placing this power in the hands of line-of-business software developers.

But with great power comes great responsibility. Being a developer even a few years ago really meant being a puzzle-solver who knew their way around a compiler (and occasionally did some documentation); today, our new-found powers require that we also be ethicists (who occasionally do documentation). We must think through the purpose of our software, and its potential misuses, the way that, once upon a time, we anticipated ways to test our software. In a better, future world we would have ethics harnesses for our software, methodologies for ethics-driven development, continuous automated ethics integration and so on.

Yet we don’t live in a perfect world, and we rarely think about ethics in AI beyond the specter of a robot revolution. In truth, the Singularity and the Skynet takeover (or the Cylon takeover) are straw robots that distract us from real problems. They are raised, dismissed as sci-fi fantasies, and we go on believing that AI is there to help us order pizzas and write faster Excel macros. Where’s the harm in that?

So let’s start a conversation about AI and ethics; and beyond that, ML and ethics, mixed reality and ethics, software consulting and ethics. Through a historical idiosyncrasy, it has fallen primarily on frontline software developers to start this dialog, and we should not shirk the responsibility. It is what we owe to future generations.

I propose to do this in two steps:

1. I will challenge five technical bloggers to address ethical issues in their field. This will provide a groundwork for talking about ethics in technology, which we do not normally do on our blogs. They, in turn, will tag five additional bloggers, and so on.

2. For one week, I will add “and ethicist” to my LinkedIn profile description and challenge each of the people I tag to do the same. I understand that not everyone will be able to do this, but it will serve to draw attention to the fact that “thinking ethically” today is not to be separated from our identity as “coders”, “developers” or even “hackers”. Going forward, ethics is inherent in what we do.

Here are the first five (or six in this case) names in this ethics challenge:

I want to thank Rick Barraza and Joe Darko, in particular, for forcing me to think through the question of AI and ethics at the recent MVP Summit in Redmond. These are great times to be a software developer and these are also dark times to be a software developer. But many of us believe we have a role in making the world a better place and this starts with conversation, collegiality and a certain amount of audacity.

Windows Insider Builds and Unity Licenses

On one of my dev PCs, I’m on the Windows 10 Insider Builds inner ring, which updates my computer with intermediate builds of Windows. This gives me access to the latest features for testing. Unfortunately, it also means that my Unity Plus license goes into an inconsistent state every few weeks after an Insider Build changes my computer profile enough that Unity no longer recognizes it as the same computer.


In the past I’ve tried going into my account management, revoking the assigned seat and then assigning it again to myself. Sometimes this works, but often it doesn’t. Somewhere during the Windows update process, something happens to the original license so that it cannot be deactivated correctly.



I recently found out that there is a different route to deactivating seats that will work even after a Windows 10 update. Back in the account landing page, you need to navigate to My Seats instead of going through the Organizations menu.





This leads you to a page that lets you remove all license activations. Once you’ve clicked on “Remove all my activations”, you can then successfully use your license key to reactivate your Unity IDE.

Reminder to pre-buy your Black Panther tickets


Tickets are already selling out for the new Black Panther movie, in part because of buyouts of complete theaters for children, like these Ron Clark Academy students in Atlanta:



For those who don’t know (bad nerds), Black Panther is a superhero in the Marvel Universe who is also the king of the African nation of Wakanda. Wakanda is secretly the most technologically advanced country in the world and the world’s sole source of vibranium (a magic metal; Captain America’s shield is made of it), but it projects an image of being just another African nation in order to avoid interference. In the Marvel Universe, it fought an all-out war with the Sub-Mariner’s Atlantean army a few years ago, and currently auteur Ta-Nehisi Coates is taking a turn at writing the series and problematizing it (which I don’t totally like, but tastes and all that).


The history of the series is basically the usual Marvel thing: Marvel takes advantage of racial trends and exploits them (as with Luke Cage, Iron Fist and Shang-Chi) and ends up creating something kind of miraculous. In this case, a kingdom of black people who are more advanced than anybody else, culturally and technologically.


If I can talk race and gender a little (feel free to squirm): according to a friend, it does for black people what she assumes Wonder Woman did for white women. You get to see yourself in an ideal way without any cultural or political baggage. How do you create a movie hero without any cultural baggage or identity politics attached? You create a fictional country like Themyscira or Wakanda and make your characters come from there; that way they don’t become walking political arguments but instead just _are_.


So after seeing Wonder Woman, my wife asked me if that’s what it’s like for men to see movies, and I think, yeah, pretty much. I’m not Thor, but he’s an ideal projection of myself when I watch the movie, and he gets to drink and carouse and hang out with his buddies, and women admire him, and no one ever negs him for it. And my wife said she’s learned to watch and enjoy those kinds of movies, but Wonder Woman showed her what that experience could really be like.


At the risk of overselling — Black Panther is going to do that for race, according to a friend who got to go to the Hollywood premiere. No white guilt, no resentment, no countries getting called sh* holes, just gorgeous, powerful black people and a reprieve from our crazy mixed up world for a while. Plus, again according to the friend, it’s also another fun Marvel movie.


And here’s the catch for lovers of VR and AR: obviously there are going to be lots of great Cinema 4D faux-holograms used to show how advanced Wakanda is. Not only did Marvel movies pioneer this, but holograms are the chief way movies and TV depict “advanced” societies (e.g. Black Mirror, Electric Dreams).


But more importantly, when we talk about “virtual” immersive experiences, I think we implicitly know it means more than just having objects in a 3D space. The world is a given, fixed thing, while virtual reality frees us from that and lets us see it differently. The killer AR/VR app is going to do that at a very deep level. I think Black Panther is going to provide an ideal (a target, a goal) for what we want to achieve with all of our headgear: an artificial experience that alters the way we see reality, if only for a few hours plus the afterglow period. Great virtual reality needs to alter our real reality and make it better.

HoloLens and MR Device Dev Advisory Jan-2018

I’ve come to accept that doing HoloLens and MR Device development means working with constant issues. As long as I can stay on top of what these issues are, I feel less like pulling out my hair. And I’m not just a member of the hair club for men – I also want to help you avoid hair loss with monthly updates.

I’m currently doing HoloLens development with VS 2017 v5.3.3, Unity 2017.2.0f3, MRTK 2017.1.2, and W10 17074 (Insider Build).


This month saw the release of Unity 2017.3.0f3, which introduced a fix but also some new bugs. The fix addresses the HoloLens stabilization issues introduced in December’s Unity 2017.2.1 release, which caused holograms to be jittery. In Unity 2017.3.0f3, a new player setting in the editor called shared depth buffer fixes this: just expand the Windows Mixed Reality node under XR Settings and check off Enable Depth Buffer Sharing. On the other hand, this seems to conflict with the stabilization logic in the MRTK, so you may see some jumpiness (but no jitteriness!), and you’ll want to remove that older stabilization logic from your code.

2017.3.0f3 also introduced problems with WWW (Unity’s older internet communication class). Basically, it no longer works when running in UWP (though it does on other platforms and in the editor), so if your code, or any assets you depend on for internet communication, uses WWW, you’ll have issues.
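If you need to work around this, Unity’s documented replacement for WWW is UnityWebRequest. A minimal sketch of what that swap might look like follows (the URL is a placeholder and error handling is kept to the bare minimum):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class FetchExample : MonoBehaviour
{
    IEnumerator Start()
    {
        // Old way, broken under UWP in 2017.3.0f3:
        // WWW www = new WWW("http://example.com/data.json");
        // yield return www;

        // UnityWebRequest works across platforms, including UWP.
        using (UnityWebRequest request = UnityWebRequest.Get("http://example.com/data.json"))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
                Debug.LogError(request.error);
            else
                Debug.Log(request.downloadHandler.text);
        }
    }
}
```

Note that SendWebRequest itself only arrived in Unity 2017.2 (it was Send before that), so this sketch assumes you’re on 2017.2 or later.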

If you are using older stable builds of the MR Toolkit (up to 2017.1.2), you’ll start getting warnings and errors in 2017.2.0f3 and above about outdated APIs. Unity introduces API changes with each monthly release. The UnityEngine.VR.WSA namespace, for instance, is now UnityEngine.XR.WSA (sometimes the Unity editor will automatically fix this for you when you migrate a project, but often it doesn’t). In a couple of cases (like in the HandGuidance class) you’ll notice that the InteractionManager APIs have changed.

// Old event names:
UnityEngine.XR.WSA.Input.InteractionManager.SourceLost += InteractionManager_SourceLost;
UnityEngine.XR.WSA.Input.InteractionManager.SourceUpdated += InteractionManager_SourceUpdated;
UnityEngine.XR.WSA.Input.InteractionManager.SourceReleased += InteractionManager_SourceReleased;

// New event names:
UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceLost += InteractionManager_SourceLost;
UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceUpdated += InteractionManager_SourceUpdated;
UnityEngine.XR.WSA.Input.InteractionManager.InteractionSourceReleased += InteractionManager_SourceReleased;

The signatures of the event handlers also change. Just follow Visual Studio IntelliSense’s suggestions on how to fix them.
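For reference, the signature change looks roughly like this; the handler name matches the snippet above, and the event-args types come from the UnityEngine.XR.WSA.Input namespace:

```csharp
// Old signature (pre-2017.2): the handler received the source state directly.
// private void InteractionManager_SourceLost(InteractionSourceState state) { ... }

// New signature (2017.2+): the handler receives an event-args struct
// that wraps the state.
private void InteractionManager_SourceLost(InteractionSourceLostEventArgs args)
{
    InteractionSourceKind kind = args.state.source.kind;
    // ... clean up any tracking or feedback tied to this source ...
}
```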

Is there a mixed-reality dress code?

Not to derail us, but how should MR devs dress?


My feeling is we shouldn’t be wearing the standard enterprise / consultant software dev uniform of a golf shirt and khaki pants with dog walker shoes. That isn’t really who we are. ORMs are not the highlight of our day and our job doesn’t end when the code compiles. We actually care how it works and even if everything works we care if it is easy for the user to understand our app. We even occasionally open up Photoshop and Cinema4D.


We aren’t web devs, either. Hoodie, jeans and Converse aren’t appropriate. We don’t chase after the latest JavaScript framework every six weeks. We worry pathologically about memory allocation and performance. Our world isn’t obsessively flat; it’s obsessively three-dimensional. Our uniform should reflect this, too.


This is the hard part, but here’s the start of a suggestion for the general style (subdued, expensive) for men (because I have no clue about women’s fashion): faded black polo shirt buttoned to the top, slightly linty black velveteen jacket, black jeans, Hermès pocket square, leather dress shoes. It signals concern with UI, but not excessive concern. Comfort is also important (UX), as is the quality of the materials (the underlying code and software architecture).

Finally, MR/VR/AR/XR development is premium work and deserves premium rates. The clothes we wear should reflect this, indicating that if what we are paid doesn’t support our clothing habit (real or imagined), we will walk away (the ability to walk away from a contract being the biggest determiner of pricing).


Black, of course, suggests the underlying ’70s punk mentality that drives innovation. MR devs are definitely not grunge rockers. The pocket square suggests flair.

        [This post was excerpted from a discussion on the Microsoft MVP Mixed Reality list.]

        Virtual Nostalgia


        One of the pleasures of revisiting a film franchise is the sense that one is coming back to a familiar setting with familiar people – such is the feeling of returning to the Star Wars universe.

When I went to see The Last Jedi on December 16 (3D + IMAX), I underwent an odd version of this experience. As the heroes descended on the world of Crait, a red planet dusted in white, I had the sense that I had been there before. This was because I had been playing Star Wars Battlefront II over the previous week; in the multiplayer game, the planet Crait had just been introduced as a new location for battles, and I’d been struggling against Stormtroopers (or as a Stormtrooper) through the trenches and tunnels of Crait for many repetitive hours. Not only that, but the 3D models used to build the game’s battle world appeared to be based on the same visual assets used for the movie.

        And so, when I saw the way the light reflected off of the red mud on the walls of the Crait trenches, I had an “aha” moment of recognition. My spatial memory told me I had been here before.


        We might say that this was a case of déjà vu, since I had never been to Crait in reality – but only in a video game. But then one must recall that the “vu” experience of the déjà vu also never happened – the CGI world on the screen is not a place that exists in any reality. I had experienced a virtual nostalgia for a space that didn’t exist – a sense of returning home when there is no home to return to.

        We aren’t quite in the territory of Blade Runner manufactured memories, yet, but we are a step closer. Games and technology that give us a sense of place and affect that peculiar and primeval faculty of the brain (the ability to remember places that made our hunter-gatherer ancestors so effective and that was later exploited to form the Ars Memoriae) will have unexpected side effects.

        I think this is a new type of experience and one that marks an inflection point in mankind’s progress – if I may be allowed to be a bit grandiose. For while in all previous generations, mimetic technologies such as writing, encyclopedias, computers, and the internet, have all tended to diminish our natural memories, this new age of virtual reality and 3D spaces has, for the first time, started to provide us with a superfluity of unexpected and artificial memories.

        A User’s Guide to the terms VR, AR, MR and XR – with a tangent about pork


        Virtual reality, augmented reality, mixed reality and XR (or xR) are terms floating around that seem to describe the same things – but maybe not – and sometimes people get very angry if you use the terms incorrectly (or at least they say they do).

        The difficulty is that these terms come from different sources and for different reasons, yet the mind naturally seeks to find order and logic in the world it confronts. A great English historical example is the way Anglo-Saxon words for animals have complementary Norman words for the cooked versions of those beasts: cow and beef (boeuf), pig and pork (porc), sheep and mutton (mouton). It is how the mind deals with a superfluity of words – we try to find a reason to keep them all.


So, as an experiment and a public service, here’s a guide to using these terms in a consistent way. My premise is that these terms are part of natural language and describe real things, rather than being marketing terms meant to boost products or personal agendas (such as the desire to be the person who coined a new term). Those constraints actually make it pretty easy to fit all these phrases into a common framework that uses grammar to enforce semantic distinctions:

        1) Virtual reality is a noun for a 3D simulated reality that you move through by moving your body. A sense of space is an essential component of VR. VR includes 360 videos as well as immersive 3D games on devices like the Oculus Rift, HTC Vive and Microsoft Immersive headsets.

        2) Augmented reality is a noun for an experience that combines digital objects and the real world, typically by overlaying digital content on top of a video of a real world (e.g. Pokémon Go) or by overlaying digital content on top of a transparent display (e.g. HoloLens, Meta, Magic Leap, Daqri).

        3) Mixed reality is an adjective that modifies nouns in order to describe both virtual and augmented reality experiences. For instance:

        a. A mixed reality headset enables virtual reality to be experienced.

        b. The Magic Leap device will let us have mixed reality experiences.

4) xR is an umbrella term for the nouns virtual reality and augmented reality. You use xR generically when you are talking about broad trends, or ambiguously when you are talking in a way that includes both VR and AR (for instance, “I went to an event about xR where different MR experiences were on display”). xR may, optionally, also cover AI and ML (aren’t they the same thing?).

        This isn’t necessarily how anyone has consistently used these terms in 2017, but I feel like there is a trend towards these usages. I’m going to try to use them in this way in 2018 and see how it goes.

        A Guide to Online HoloLens Tutorials

        There are lots of great video tutorials and advanced HoloLens materials online that even people who work with HoloLens aren’t always aware of. I’d like to fix that in this post.

        1. The Fundamentals


If you are still working through the basics with the HoloLens, then I highly recommend the course that Dennis Vroegop and I did for LinkedIn Learning: App Development for Microsoft HoloLens. We approached it with the goal of giving developers everything we wish we had known when we started working with HoloLens in early 2016. The course was filmed in a studio at the campus in Carpinteria, California, so the production quality is considerably higher than that of most other courses you’ll find.


        2. The Mixed Reality Toolkit (HoloToolkit)


Once you understand the fundamentals of working with the HoloLens, the next thing to learn is the ins and outs of the Mixed Reality Toolkit, the open source SDK for working with the HoloLens APIs. Stephen Hodgson, a mixed reality developer at Valorem, is one of the maintainers of (and probably the biggest contributor to) the MRTK. He does live streams on Saturdays to answer people’s questions about the toolkit. His first two hour-long streamcasts cover the MRTK Input module:

        #1 Input 1

        #2 Input 2

        The next three deal with Sharing Services:

        #3 Sharing 1

        #4 Sharing 2

        #5 Sharing 3

        These courses provide the deepest dive you’re ever likely to get about developing for HoloLens.


        3. HoloLens Game Tutorial


        Sometimes it is helpful to have a project to work through from start to finish. Chad Carter provides this with a multipart series on game development for Mixed Reality. So far there are five lessons … but the course is ongoing and well worth keeping up with.

        #1 Setup

        #2 Core Game Logic

        #3 The Game Controller

        #4 Motion Controllers

        #5 Keeping Score


        4. Scale and Rotation System


Jason Odom’s tutorial series deals with using Unity effectively for HoloLens. It brings home the realization that most 3D development revolves around moving, resizing, hiding and revealing objects. It’s written for an older version of the toolkit, so some things will have changed since then. By the way, Jason’s theme song for this series is an earworm. Consider yourself warned.

        #1 Setup

        #2 Scale and Rotate Manager

        #3 Scale and Rotate Class

        #4 Scale and Rotate Class part 2 

        #5 Scale and Rotate Class part 3

        #6 Scale and Rotate More Manager Changes

        #7 Scale and Rotate Temporary Insanity

        #8 Scale and Rotate Q & A


        5. HoloLens Academy

There’s also, of course, Microsoft’s official tutorial series, known as the HoloLens Academy. It’s thorough, and if you follow the lessons you’ll gain a broad understanding of the capabilities of the HoloLens device. One thing to keep in mind is that the tutorials are not always synced up with the latest MRTK, so don’t get frustrated when you encounter a divergence between what the tutorials tell you to do and what you find in the MRTK, which is updated at a much more rapid rate than the tutorials are.


        6. Summing up

        You’re probably now wondering if watching all these videos will make you a HoloLens expert. First of all, expertise isn’t something that you arrive at overnight. It takes time and effort.

Second of all – yeah, pretty much. HoloLens development is a very niche field and it hasn’t been around for very long. It has plenty of quirks, but these videos address those quirks either directly or obliquely. If you follow all of them, you’ll know most of what I know about the HoloLens, which is kind of a lot.

        So have fun, future expert!

        10 Questions with Phoenix Perry


Certain people are bellwethers for creative technology, and you want to check in on what they are up to every three to six months to find out where the zeitgeist of the coding world is headed. I’m thinking of people like Kyle McDonald, James George and Phoenix Perry: folks who, per Jean Cocteau’s maxim, manage to stay on the avant-garde even after everyone else has caught up to what had been the avant-garde half a year earlier.

Phoenix is currently teaching physical computing in London. She has spoken and led workshops at most of the leading conferences devoted to emerging technology. You can (and should) keep up with her adventures on her website and on Twitter.

        What movie has left the most lasting impression on you?

        What is the earliest video game you remember playing?

        Who is the person who has most influenced the way you think?
The women of Code Liberation. Over the life of the organization, they have radically shifted how I think and who I am. Mentoring younger women in tech has changed who I am. The conversations we have are inspired and open up my mind to a deeper, more compassionate way to live.

        When was the last time you changed your mind about something?
        This week. The river is different every time you step into it.

        What’s a programming skill people assume you have but that you are terrible at?
I think it’s more a skill level. People assume I’m some super expert, but the truth is I’m constantly relearning my skill set because it’s so broad. For example, every single time I look at JavaScript, it’s brand new all over again. I’ll delve deeply into one area, and the other spaces will move forward, and I’m a novice all over again.

        What inspires you to learn?
        Humility at how little I know.

        What do you need to believe in order to get through the day?
        I need to believe the people around me value my work and contributions.

        What’s a view that you hold but can’t defend?
I hate opera. I have no real reason why, other than that it sounds so annoying to my ears.

What will the future killer Mixed Reality app do?

        One that allows me to interact with one experience fluidly across contexts.

        What book have you recommended the most?
        Memories, Dreams and Reflections by Carl Jung

        Farewell, Keyword Manager


With the Mixed Reality Toolkit 2017.1.2, the Keyword Manager was finally retired, after being marked “obsolete” for the past several toolkit iterations.

As the toolkit matures, many key components are being refactored to make them more flexible and architecturally sound. The downside to this, and the source of much frustration, is that these refactors tend to upend what developers are used to doing. The Keyword Manager is a great example. It was one of the best Unity-style interfaces for HoloLens / MR development because it encapsulated a lot of complex code in an easy-to-use, drag-and-drop visual component.

The challenge for those working on the toolkit (Stephen Hodgson, Neeraj Wadhwa, and all the others) is to refactor without breaking too badly the interface abstractions we’ve all gotten used to. For the KeywordManager refactor, this was accomplished by splitting the original component into two parts: a SpeechInputSource and a SpeechInputHandler.


The SpeechInputSource lets you determine whether speech recognition starts up automatically, as well as the recognition confidence level (you want this higher if you are using ambiguous or short phrases like “Start” and “Stop”). The Persistent Keywords field lets you keep the same speech recognition phrases across different scenes in your app. Most important, though, is the Keywords list, which lets you add the phrases you want recognized in your app.


The SpeechInputHandler is the component that lets you determine what happens when a phrase is recognized (the response). You click on the plus icon to add a response, select the phrase to be handled in the Keyword field, and then drag and drop game objects into your response and select the script and method to be called.

The one thing you need to remember to do is to check off the Is Global Listener field if you want behavior similar to the old KeywordManager’s. This will listen for all speech commands all the time. If Is Global Listener is not selected, then only the SpeechInputHandler the user is gazing at will receive commands. This is really useful if you have multiple copies of the same object and you only want to apply commands to a particular instance at a time.
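To make the wiring concrete, here’s a hypothetical target script; a public method like the one below is the kind of thing you’d select in a SpeechInputHandler response after dragging the game object in (the class name and keyword are made up for illustration):

```csharp
using UnityEngine;

// Hypothetical example: a script whose public method gets wired up as a
// SpeechInputHandler response for a keyword like "make it red".
public class ColorChanger : MonoBehaviour
{
    public void TurnRed()
    {
        // Tint the material of whichever object this script is attached to.
        GetComponent<Renderer>().material.color = Color.red;
    }
}
```

With Is Global Listener unchecked, only the gazed-at instance with this component would respond to the command, which is exactly the per-instance behavior described above.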

        Authentically Virtual