10 Questions with Jasper Brekelmans

This is the first in a series of interviews intended to help people get to know the movers & shakers as well as the drones & technicians (sometimes the same person is all four) who are making mixed reality … um … a reality.  I’ve borrowed the format from Vox but added some new questions.


Jasper is the creator of the Brekel Toolset, an affordable set of tools for doing motion capture with the Kinect sensor. He also works with HoloLens, Oculus, and the Vive, and his innovative projects have been featured on RoadToVR and other venues.

Without further ado, here are Jasper’s answers to 10 questions:


What movie has left the most lasting impression on you?
“Spring, Summer, Fall, Winter… and Spring”, “A Clockwork Orange”, “The Evil Dead”, “The Wrestler”, “The Straight Story”, “Hidden Figures”… too many to choose 🙂

What is the earliest video game you remember playing?
Pac-Man (arcade) and Donkey Kong (handheld).

Who is the person who has most influenced the way you think?
A work mentor and some close personal friends.

When was the last time you changed your mind about something?
Probably on a weekly basis on something or other.

What’s a programming skill people assume you have but that you are terrible at?
Heavily math-based algorithms and/or coding for mobile platforms.

What inspires you to learn?
The goal of having new possibilities with freshly learned skills.

What do you need to believe in order to get through the day?
That what I do matters to others.

What’s a view that you hold but can’t defend?
That humanity will be better off once next generations have grown up with true AR glasses/lenses technology, have played with virtual galaxies and value virtual objects similarly to physical objects for certain purposes.

What will the future killer Mixed Reality app do?
Empower users in their daily lives without them realizing it, while at the same time letting new users instantly realize what they’re missing.

What book have you recommended the most?
Ready Player One.

HoloLens fix – Visual Studio 2017 build error

I’ve been using the latest Unity 5.6.1f1 to build HoloLens apps for Visual Studio 2017. After exporting to VS17, though, I run into the following error when trying to compile my app.

1>------ Build started: Project: Assembly-CSharp-firstpass, Configuration: Debug x86 ------
1>CSC : warning CS8021: No value for RuntimeMetadataVersion found. No assembly containing System.Object was found nor was a value for RuntimeMetadataVersion specified through options.
1>  Running SerializationWeaver...
1>  System.Exception: project.lock.json file at D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\project.lock.json does not exist!
1>     at usw.Program.CheckLockJsonFile(String lockJsonFile)
1>     at usw.Program.RunProgram(ConversionOptions options)
1>     at usw.Program.Main(String[] args)
1>D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\Assembly-CSharp-firstpass.csproj(184,5): error MSB3073: The command ""D:\Documents\Unity\testnuget\WindowsStoreApp\Unity\Tools\SerializationWeaver\SerializationWeaver.exe" "D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\bin\x86\Debug\Unprocessed\Assembly-CSharp-firstpass.dll" "-pdb" "-verbose" "-unity-engine=D:\Documents\Unity\testnuget\WindowsStoreApp\testnuget\Unprocessed\UnityEngine.dll" "D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\obj\x86\Debug\x86\Debug" "-lock=D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\project.lock.json" "@D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\SerializationWeaverArgs.txt" "-additionalAssemblyPath=D:\Documents\Unity\testnuget\WindowsStoreApp\testnuget\Unprocessed" "-unity-networking=D:\Documents\Unity\testnuget\WindowsStoreApp\testnuget\Unprocessed\UnityEngine.Networking.dll"" exited with code 1.
2>------ Build started: Project: Assembly-CSharp, Configuration: Debug x86 ------
2>CSC : error CS0006: Metadata file 'D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\bin\x86\Debug\Assembly-CSharp-firstpass.dll' could not be found
3>------ Build started: Project: testnuget, Configuration: Debug x86 ------
3>CSC : error CS0006: Metadata file 'D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp-firstpass\bin\x86\Debug\Assembly-CSharp-firstpass.dll' could not be found
3>CSC : error CS0006: Metadata file 'D:\Documents\Unity\testnuget\WindowsStoreApp\GeneratedProjects\UWP\Assembly-CSharp\bin\x86\Debug\Assembly-CSharp.dll' could not be found

And the VS error window looks like this:


It’s actually a dumb problem, but since I’ve been struggling with it for days, I’m hoping blogging about it will save you a few hours of head banging.

The best clue to what’s going on is the reference to the missing project.lock.json file. This is a NuGet file, and the HoloLens documentation mentions that a HoloLens app built with Unity requires some NuGet packages in order to work.


In Visual Studio I went to Tools | Options | NuGet Package Manager and discovered that NuGet was configured incorrectly in my shiny new VS install. I’m not totally sure why. Because NuGet wasn’t allowed to download missing packages automatically, my HoloLens app was missing required files.


Both “Allow NuGet to download missing packages” and “Automatically check for missing packages” should have been selected.
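For what it’s worth, those two checkboxes map to the packageRestore section of NuGet’s config file (on my machine, %AppData%\NuGet\NuGet.Config), so you can also make the fix there, or commit it with a project. A minimal sketch of the relevant settings:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageRestore>
    <!-- "Allow NuGet to download missing packages" -->
    <add key="enabled" value="True" />
    <!-- "Automatically check for missing packages during build" -->
    <add key="automatic" value="True" />
  </packageRestore>
</configuration>
```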

After that, HoloLens builds have been working for me and I have been able to start deploying apps again.

Playing with the HoloLens Field of View

I was working on a HoloLens project when I noticed, as I do two or three times during every HoloLens project, that I didn’t know what the Field of View property of the default camera does in a HoloLens app. I always see it out of the corner of my eye when I have the Unity IDE open. The HoloToolkit camera configuration tool automatically sets it to 16, and I was never sure why. (h/t Jesse McCulloch, who pointed me to an HTK thread that provides more background on how the 16 value came about.)


So I finally decided to test this out for myself. In a regular Unity app, increasing the angular field of view lets the camera see more of the scene, but in turn makes everything in it smaller. The concept comes from regular camera lenses and is related to the notion of a camera’s focal length, as demonstrated in the fit-inducing (but highly illustrative) animated gif below.
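As a point of reference (this is standard pinhole-camera math, nothing HoloLens-specific), for a sensor of height h and focal length f the vertical field of view is:

```latex
\mathrm{FOV}_v = 2\arctan\!\left(\frac{h}{2f}\right)
```

which is why a longer focal length means a narrower field of view: zooming in magnifies the scene while showing less of it.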


I built a quick app with the default Ethan character and placed a 3D Text element over him that checks the camera’s Field of View property on every update.

public class updateFOV : MonoBehaviour {
    private TextMesh _mesh;

    void Awake() {
        _mesh = GetComponent<TextMesh>();
    }

    // Update is called once per frame
    void Update() {
        _mesh.text = System.Math.Round(Camera.main.fieldOfView, 2).ToString();
    }
}

Then I added a Keyword Manager from the HoloToolkit to handle changing the angular FOV of the camera dynamically.

public void IncreaseFOV() {
    Camera.main.fieldOfView = Camera.main.fieldOfView + 1;
}

public void DecreaseFOV() {
    Camera.main.fieldOfView = Camera.main.fieldOfView - 1;
}

public void ResetFOV() {
    Camera.main.ResetFieldOfView();
}

When I ran the app in my HoloLens, the FOV reader started showing “17.82” instead of “16”. This must be the vertical FOV of the HoloLens – something else I’ve often wondered about. Assuming a 16:9 aspect ratio, this gives a horizontal FOV of “31.68”, which is really close to what Oliver Kreylos guessed way back in 2015.
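A quick aside on the arithmetic: multiplying 17.82 by 16/9 is a linear shortcut. Strictly, the conversion between vertical and horizontal FOV goes through tangents, though at angles this small the two methods nearly agree:

```latex
\mathrm{FOV}_h = 2\arctan\!\left(\frac{16}{9}\,\tan\frac{\mathrm{FOV}_v}{2}\right)
              = 2\arctan\!\left(1.778 \times \tan(8.91^\circ)\right) \approx 31.2^\circ
```

which lands within about half a degree of the “31.68” linear estimate.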

The next step was to increase the Field of View using my voice commands. There were two possible outcomes: either the Unity app would somehow override the natural FOV of the HoloLens and actually distort my view, making the Ethan model smaller as the FOV increased, or the app would just ignore whatever I did to the Main Camera’s FieldOfView property.


The second thing happened. As I increased the Field Of View property from “17.82” to “27.82”, there was no change in the way the character was projected. HoloLens ignores that setting.

Something strange did happen, though, after I called the ResetFieldOfView method on the Main Camera and tried to take a picture. After resetting, the FOV Reader began retrieving the true value of the FOV again. When I tried to take a picture of this, though, the FOV jumped up to “34.11”, then dropped back to “17.82”.


This, I would assume, is the vertical FOV of the locatable camera (RGB camera) on the front of the HoloLens when taking a normal picture. Assuming again a 16:9 aspect ratio, this would provide a “60.64” horizontal angular FOV. According to the documentation, though, the horizontal FOV should be “67” degrees, which is close but not quite right.

“34.11” is also close to double “17.82” so maybe it has something to do with unsplitting the render sent to each eye? Except that double would actually be “35.64” plus I don’t really know how the stereoscopic rendering pipeline works so – who knows.

In any case, I at least answered the original question that was bothering me – fiddling with that slider next to the Camera’s Field of View property doesn’t really do anything. I need to just ignore it.

How to find Great HoloLens Developers


Microsoft HoloLens is an amazing and novel device that is at the forward edge of a major transformation in the way we do computing, both professionally and as consumers. It has immediate applications in the enterprise, especially in fields that work heavily with 3D models such as architecture and design. It has strong coolness potential for companies working in the tradeshow space and for art installations.

Given its likelihood to eventually overtake the smartphone market in the next five to ten years, it should also be given attention in R&D departments of large corporations and the emerging experiences groups of well-heeled marketing firms.

the problem

Because it is a new technology, there is no established market for HoloLens developers. There is no certification process. There are no boards to license people. How do you find out who is good?

There are two legitimacy issues currently affecting the HoloLens world. One is unknown companies popping up with flashy websites, publishing other people’s work as their own, and exaggerating their capabilities. The internet is a great leveler in this case, and it is hard to distinguish between what is real and what is fake.

Another is established consulting companies that have decent IT reputations but no HoloLens experience moving into the market in the hopes that a funded project will pay for their employees to learn on the job. I’ve cleaned up after several of these failed projects in the past year.

helpful rules

How do you avoid bad engagements like these? Here are some guidelines:

1. Make sure the companies you are looking to work with can show original work. If their websites are full of stock Microsoft images and their videos show work belonging to other people without proper attribution, run like the wind.

2. Find someone with HoloLens experience to vet these companies for you. Go to the main HoloLens discussion board at https://forums.hololens.com/ and see who is answering questions. These aren’t the only people who know about HoloLens development, but they do demonstrate their experience on a daily basis for the good of the mixed reality community.

3. See who is writing apps for the HoloLens Challenge. This contest happens every three weeks and challenges developers to build creative apps to specification in a short time span. Anyone who does well in the challenge is going to do a great job for you. Plus, you can actually see what they are capable of. They are effectively posting their CVs online.

4. Look for contributors to open source HoloLens projects like this and this and this.

5. Look for companies and individuals associated with the HoloLens Agency Readiness Program or the Microsoft MVP Emerging Experiences group. These are two of the longest running groups of developers and designers working with HoloLens and go back to 2015. These people have been thinking about mixed reality for a long time.

naming names

There are several areas in which you will want HoloLens expertise.

A. You need help conceptualizing and implementing a large project.

B. You need help creating a quick proof of concept to demonstrate how the HoloLens can help your company.

C. You need individuals to augment or train your internal developers for a project.

The best people for each of these areas are well known in the relatively small world of HoloLens developers. Unfortunately, because HoloLens is still niche work, they tend not to be well known outside of that insular world, with a few exceptions.

So how do I know who’s good and who’s great in Mixed Reality? Fair question.

I’ve been working on HoloLens interaction design and development since the HoloLens device started shipping in April of 2016 and have been writing about it since 2015. I have close relationships with many of the big players in this world as well as the indie devs who are shaping HoloLens experiences today and pushing the envelope for tomorrow. I’ve been working with emerging experiences for the past half decade starting with the original Microsoft Surface Table, to the Kinect v1 and v2 (here’s my book), to VR and the HoloLens. I’ve taught workshops on HoloLens development and am currently working on a Lynda.com course on mixed reality.

The lists below are a bit subjective, and lean towards the organizations and people I can personally vouch for. (If you think someone significant has been left off the following lists, please let me know in the comments.)

big projects

Interknowlogy and Interknowlogy Europe

Object Theory

Razorfish Emerging Experiences


Holoforge Interactive


small to mid-sized projects

360 World (Hungary)

OCTO Technology (France)

Stimulant (US)

8Ninths (US)

You Are Here (US)

Truth Labs (US)

Kazendi (UK)

Nexinov (Australia / Shanghai)

Thought Experiments (US)

Studio Studio (US)

Wavelength (US)

awesome hololens / mixed reality devs

Bronwen Zande (Australia)

Nick Young (New Zealand)

Bruno Capuano (Spain / Canada – Toronto)

Kenny Wang (Canada – Toronto)

Alex Drenea (Canada – Toronto)

Vangos Pterneas (Greece / US – New York)

Nate Turley (US – New York)

Milos Paripovic (US – New York)

Dwight Goins (US – Florida)

Stephen Hodgson (US – Florida)

Jason Odom (US – Alabama)

Jesse McCulloch (US – Oregon)

Michael Hoffman (US – Oregon)

Dwayne Lamb (US – Washington)

Dong Yoon Park (US – Washington)

Cameron Vetter (US – Wisconsin)

Stephen Chiou (US – Pennsylvania)

Michelle Ma (US – Pennsylvania)

Chad Carter (US – North Carolina)

Clemente Giorio (Italy)

Matteo Valoriani (Italy)

Dennis Vroegop (Netherlands)

Jasper Brekelmans (Netherlands)

Joost Van Schaik (Netherlands)

Gian Paolo Santopaolo (Switzerland)

Rene Schulte (Germany)

Vincent Guigui (France)

Johanna Rowe Calvi (France)

Nicolas Calvi (France)

Fabrice Barbin (France)

Andras Velvart (Hungary)

Tamas Deme (Hungary)

Jessica Engstrom (Sweden)

Jimmy Engstrom (Sweden)

HoloLens and the Arts

There are roughly three classifications of experiences we can build in Mixed Reality: 

The first is the enterprise experience, which can unfairly be encapsulated as people looking at engines.

The second is the gaming experience, which can unfairly be encapsulated as squirrels playing with nuts (I’m looking at you, Conker).

And then there is art, which no one is currently doing – but they/we should be. HoloLens is the greatest media shift to happen in a long while and the potential for creating both unique entertainment and transcendent experiences is profound.

Although we typically don’t think in this way regarding the HoloLens, we can. Here are three (highly recommended) sources of inspiration for anyone interested in the Arts and Mixed Reality’s bigger potential:


https://medium.com/volumetric-filmmaking James George and the people behind the RGBD depthkit are taking volumetric filmmaking head-on with a new online journal about storytelling in virtual spaces. If you know these guys already, then it’s a no-brainer, but if you don’t, here’s a primer: https://vimeo.com/42852185


Yayoi Kusama is finally getting a big showing of her Infinity Mirror art at the Hirshhorn Museum – which has already increased membership at the Hirshhorn 20x. The effects she is producing have an obvious relationship to what we do with light – and really what we have been doing in a more or less straight line from the Surface table to the Kinect to projection mapping and now this. It’s playing with light in a way that defies what we otherwise know about the world around us. What she does with mirrors we should be able to recreate in our HoloLenses.


Kate Soper’s 90-minute musical performance Ipsa Dixit is probably going to be the most difficult sell because it is the high end of high art. Alex Ross, in his New Yorker review of Ipsa Dixit, starts off by saying that the term “genius” is overused these days and should be retired, _but_ in the case of Ipsa Dixit … If you enjoy live performance, you know that there are still things that happen in the theater that cannot be reproduced in film and television, _but_ we can come a lot closer with mixed reality. We control 360 sound as well as 3D images the viewer can walk around. We can make either private experiences or shared experiences, and either take advantage of the space the viewer occupies or occlude it. Works like Ipsa Dixit only come along once in a blue moon, and they are difficult to see in the right way. With mixed reality, we have a medium that is able to capture the essence of genius performances like this and allow a much larger audience to experience them.

Between casual gaming and social media, the main influence of technology over the past 20 years has been to create a generation of people with extremely short attention spans. Where tl;dr started off as an ironic comment on our collective inability to concentrate, it has now become an excuse for shallow thinking and the normalization of aspergersy behavior. But it doesn’t have to be that way. Mixed reality has the potential to change all that, finally, and give us an opportunity to have a more human and thoughtful relationship to our tech.

Older but not wiser

In late December I tried making some infrastructure changes to my blog, which is hosted on Microsoft Azure, and managed to hose the whole thing. Because I’m a devotee of doing things the long way, I spent the next two months learning about Docker containers and command line tools only to discover that Docker wasn’t my problem at all. There was something wrong with the way I’d configured my Linux VM, and something to do with a button I’d pressed without reading the warnings as closely as they warranted.

Long story short, I finally just blew away that VM and slowly reconstructed my blog from post fragments and backups I found on various machines around the house.

I still need to go through and reconstruct the WordPress categories. For now, though, I will take a moment to pause and reflect on the folly of my technical ways.

The Great AI Awakening


This is a crazy long but nicely comprehensive article by the New York Times on the current state of AI: The Great AI Awakening.

While lately I’ve been buried in 3D interfaces, I’m always faintly aware that 1D interfaces (Cortana Skills, Speech as a service, etc.) are another fruit of our recent machine learning breakthroughs (or, more accurately, refocus), and that the future success of holographic displays ultimately involves making them work with our 1D interfaces to create personal assistants. This article helps connect the dots between these, at first, apparently different technologies.

It also nicely complements Memo Akten’s Medium posts on Deep Learning and Art, which Microsoft resident genius Rick Barraza pointed me to a while back:

Part 1: The Dawn of Deep Learning

Part 2: Algorithmic Decision Making, Machine Bias, Creativity and Diversity

There’s also a nice throwaway reference in the Times article to the relationship between VR and Machine Learning, which is a little less obscure if you already know Baudrillard’s Simulacra and Simulation, which in turn depends on Jorge Luis Borges’s very short story On Exactitude in Science.

If you really haven’t the time though, which I suspect may be the case, here are some quick excerpts starting with Google’s AI efforts:

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.


When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.


The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.

My problem with More Personal Computing as a Branding Attempt

We all know that Microsoft has had a long history with problematic branding. For every “Silverlight” that comes along we get many more confusing monikers like “Microsoft Office Professional Plus 2007.” As the old saw goes, if Microsoft invented the iPod, they would have called it the “Microsoft I-pod Pro 2005 Human Ear Professional Edition.”

While “More Personal Computing” breaks the trend of long academic nomenclature, it is still a bit wordy. It’s also a pun. For anyone who hasn’t figured out the joke, MPC can mean either [More] Personal Computing — for people who still haven’t gotten enough personal computing, apparently — or [More Personal] Computing — for those who like their technology to be intimate and a wee bit creepy.

But the best gloss on MPC, IMO, comes from this 1993 episode of The Simpsons. Enjoy:

In Atlanta? Test Out Your HoloLens App at the Microsoft Innovation Center


While the HoloLens, Microsoft’s mixed-reality device, is a bit pricey at the moment, you can still get in on HoloLens development.

Microsoft provides a HoloLens emulator that lets you build apps on your desktop without a device. You’ll need Windows 10 Pro and around 4 gigs of RAM to run the emulator. The dev tools are just Visual Studio and Unity.

If you live in the Atlanta area, you can also try your app out on a real HoloLens at the Microsoft Innovation Center in downtown Atlanta. The MIC is housed in the historic Flatiron building, and you can request time with a dev edition HoloLens through their contact page.

This is what Microsoft did for Windows Phone when it first came out, and basically provides a way to try before you buy.

So what are you waiting for? Download the tools, build an app by following the online tutorials, and schedule some time to see what your app looks like in mixed reality.

Authentically Virtual