All posts by James Ashley

Playing with the HoloLens Field of View

I was working on a HoloLens project when I noticed, as I do about two or three times during every HoloLens project, that I didn’t know what the Field of View property of the default camera does in a HoloLens app. I always see it out of the corner of my eye when I have the Unity IDE open. The HoloToolkit camera configuration tool automatically sets it to 16, and I wasn’t sure why. (h/t to Jesse McCulloch, who pointed me to an HTK thread that provides more background on how the 16 value came about.)

fov_question

So I finally decided to test this out for myself. In a regular Unity app, increasing the angular field of view lets the camera see more of the scene, but in turn makes everything look smaller. The concept comes from regular camera lenses and is related to the notion of a camera’s focal length, as demonstrated in the fit-inducing (but highly illustrative) animated gif below.

hololens_fov

I built a quick app with the default Ethan character and placed a 3D Text element over him that checks the camera’s Field of View property on every update.

public class updateFOV : MonoBehaviour {
    private TextMesh _mesh;

    void Awake()
    {
        _mesh = GetComponent<TextMesh>();
    }

    // Update is called once per frame
    void Update () {
        // Show the camera's current vertical FOV, rounded to two decimal places
        _mesh.text = System.Math.Round(Camera.main.fieldOfView, 2).ToString();
    }
}

Then I added a Keyword Manager from the HoloToolkit to handle changing the angular FOV of the camera dynamically.

public void IncreaseFOV()
{
    Camera.main.fieldOfView = Camera.main.fieldOfView + 1;
}

public void DecreaseFOV()
{
    Camera.main.fieldOfView = Camera.main.fieldOfView - 1;
}

public void ResetFOV()
{
    Camera.main.ResetFieldOfView();
}
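Incidentally, if you’d rather not take the HoloToolkit dependency, Unity’s built-in KeywordRecognizer (from UnityEngine.Windows.Speech) can wire up the same voice commands. A rough sketch – the keyword strings here are my own choices, not what I actually used:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

public class FovVoiceCommands : MonoBehaviour
{
    private KeywordRecognizer _recognizer;

    void Start()
    {
        // Listen for three hard-coded phrases
        _recognizer = new KeywordRecognizer(new[] { "increase", "decrease", "reset" });
        _recognizer.OnPhraseRecognized += OnPhraseRecognized;
        _recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        switch (args.text)
        {
            case "increase": Camera.main.fieldOfView += 1f; break;
            case "decrease": Camera.main.fieldOfView -= 1f; break;
            case "reset":    Camera.main.ResetFieldOfView(); break;
        }
    }

    void OnDestroy()
    {
        // Stop and release the recognizer when this object goes away
        if (_recognizer != null)
        {
            if (_recognizer.IsRunning) _recognizer.Stop();
            _recognizer.Dispose();
        }
    }
}
```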

When I ran the app in my HoloLens, the FOV reader started showing “17.82” instead of “16”. This must be the vertical FOV of the HoloLens – something else I’ve often wondered about. Assuming a 16:9 aspect ratio, this gives a horizontal FOV of “31.68”, which is really close to what Oliver Kreylos guessed way back in 2015.

The next step was to increase the Field of View using my voice commands. There were two possible outcomes: either the Unity app would somehow override the natural FOV of the HoloLens and actually distort my view, making the Ethan model smaller as the FOV increased, or the app would just ignore whatever I did to the Main Camera’s FieldOfView property.

2782

The second thing happened. As I increased the Field of View property from “17.82” to “27.82”, there was no change in the way the character was projected. The HoloLens ignores that setting.

Something strange did happen, though, after I called the ResetFieldOfView method on the Main Camera and tried to take a picture. After resetting, the FOV Reader began retrieving the true value of the FOV again. When I tried to take a picture of this, though, the FOV jumped up to “34.11”, then dropped back to “17.82”.

3411

This, I would assume, is the vertical FOV of the locatable camera (RGB camera) on the front of the HoloLens when taking a normal picture. Assuming again a 16:9 aspect ratio, this would provide a “60.64” horizontal angular FOV. According to the documentation, though, the horizontal FOV should be “67” degrees, which is close but not quite right.

“34.11” is also close to double “17.82”, so maybe it has something to do with unsplitting the render sent to each eye? Except that double would actually be “35.64”, and I don’t really know how the stereoscopic rendering pipeline works, so – who knows.
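A side note on the arithmetic: the horizontal numbers above come from simple linear scaling (vertical degrees × aspect ratio). The exact conversion scales the tangent of the half-angle instead, and gives somewhat smaller values, so treat all of these figures as ballpark. A quick sketch in plain C#, with the values hard-coded from my tests:

```csharp
using System;

class FovConversion
{
    // Naive conversion: just scale the angle by the aspect ratio
    static double LinearHorizontal(double verticalDeg, double aspect) =>
        verticalDeg * aspect;

    // Exact conversion: scale the tangent of the half-angle, not the angle itself
    static double ExactHorizontal(double verticalDeg, double aspect)
    {
        double halfRad = verticalDeg * Math.PI / 360.0;
        return 2.0 * Math.Atan(Math.Tan(halfRad) * aspect) * 180.0 / Math.PI;
    }

    static void Main()
    {
        double aspect = 16.0 / 9.0;

        // Display FOV: linear ≈ 31.68, exact ≈ 31.1
        Console.WriteLine($"{LinearHorizontal(17.82, aspect):F2} vs {ExactHorizontal(17.82, aspect):F2}");

        // Locatable camera FOV: linear ≈ 60.64, exact ≈ 57.2
        Console.WriteLine($"{LinearHorizontal(34.11, aspect):F2} vs {ExactHorizontal(34.11, aspect):F2}");
    }
}
```

Note that the exact conversion moves the locatable camera number further from the documented “67” degrees, not closer, so the discrepancy isn’t just rounding.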

In any case, I at least answered the original question that was bothering me – fiddling with that slider next to the Camera’s Field of View property doesn’t really do anything. I need to just ignore it.

How to find Great HoloLens Developers

nowwhat

Microsoft HoloLens is an amazing and novel device that is at the forward edge of a major transformation in the way we do computing, both professionally and as consumers. It has immediate applications in the enterprise, especially in fields that work heavily with 3D models such as architecture and design. It has strong coolness potential for companies working in the tradeshow space and for art installations.

Given that it may well overtake the smartphone market in the next five to ten years, it should also get attention from the R&D departments of large corporations and the emerging experiences groups of well-heeled marketing firms.

the problem

Because it is a new technology, there is no established market for HoloLens developers. There is no certification process. There are no boards to license people. How do you find out who is good?

There are two legitimacy issues currently affecting the HoloLens world. One is unknown companies popping up flashy websites, publishing other people’s work as their own, and exaggerating their capabilities. The internet is a great leveler in this case, and it is hard to distinguish what is real from what is fake.

Another is established consulting companies that have decent IT reputations but no HoloLens experience moving into the market in the hopes that a funded project will pay for their employees to learn on the job. I’ve cleaned up after several of these failed projects in the past year.

helpful rules

How do you avoid bad engagements like these? Here are some guidelines:

1. Make sure the companies you are looking to work with can show original work. If their websites are full of stock Microsoft images and their videos show work belonging to other people without proper attribution, run like the wind.

2. Find someone with HoloLens experience to vet these companies for you. Go to the main HoloLens discussion board at https://forums.hololens.com/ and see who is answering questions. These aren’t the only people who know about HoloLens development, but they do demonstrate their experience on a daily basis for the good of the mixed reality community.

3. See who is writing apps for the HoloLens Challenge. This contest happens every three weeks and challenges developers to build creative apps to specification in a short time span. Anyone who does well in the challenge is going to do a great job for you. Plus, you can actually see what they are capable of. They are effectively posting their CVs online.

4. Look for contributors to open source HoloLens projects like this and this and this.

5. Look for companies and individuals associated with the HoloLens Agency Readiness Program or the Microsoft MVP Emerging Experiences group. These are two of the longest running groups of developers and designers working with HoloLens and go back to 2015. These people have been thinking about mixed reality for a long time.

naming names

There are several areas in which you will want HoloLens expertise.

A. You need help conceptualizing and implementing a large project.

B. You need help creating a quick proof of concept to demonstrate how the HoloLens can help your company.

C. You need individuals to augment or train your internal developers for a project.

The best people for each of these areas are well known in the relatively small world of HoloLens developers. Unfortunately, because HoloLens is still niche work, they tend not to be well known, with a few exceptions, outside of that insular world.

So how do I know who’s the good and the great in Mixed Reality? Fair question.

I’ve been working on HoloLens interaction design and development since the device started shipping in April of 2016 and have been writing about it since 2015. I have close relationships with many of the big players in this world as well as the indie devs who are shaping HoloLens experiences today and pushing the envelope for tomorrow. I’ve been working with emerging experiences for the past half decade, from the original Microsoft Surface table through the Kinect v1 and v2 (here’s my book) to VR and the HoloLens. I’ve taught workshops on HoloLens development and am currently working on a Lynda.com course on mixed reality.

The lists below are a bit subjective, and lean towards the organizations and people I can personally vouch for. (If you think someone significant has been left off the following lists, please let me know in the comments.)

big projects

Interknowlogy and Interknowlogy Europe

Object Theory

Razorfish Emerging Experiences

Valorem

Holoforge Interactive

Taqtile

small to mid-sized projects

360 World (Hungary)

OCTO Technology (France)

Stimulant (US)

8Ninths (US)

You Are Here (US)

Truth Labs (US)

Kazendi (UK)

Nexinov (Australia / Shanghai)

Thought Experiments (US)

Studio Studio (US)

Wavelength (US)

awesome hololens / mixed reality devs

Dennis Vroegop (Netherlands)

Jasper Brekelmans (Netherlands)

Gian Paolo Santopaolo (Switzerland)

Clemente Giorio (Italy)

Matteo Valoriani (Italy)

Rene Schulte (Germany)

Vincent Guigui (France)

Johanna Rowe Calvi (France)

Nicolas Calvi (France)

Fabrice Barbin (France)

Andras Velvart (Hungary)

Tamas Deme (Hungary)

Jessica Engstrom (Sweden)

Jimmy Engstrom (Sweden)

Bronwen Zande (Australia)

Nick Young (New Zealand)

Bruno Capuano (Spain / Canada – Toronto)

Kenny Wang (Canada – Toronto)

Alex Drenea (Canada – Toronto)

Vangos Pterneas (Greece / US – New York)

Nate Turley (US – New York)

Milos Paripovic (US – New York)

Dwight Goins (US – Florida)

Stephen Hodgson (US – Florida)

Jason Odom (US – Alabama)

Jesse McCulloch (US – Oregon)

Michael Hoffman (US – Oregon)

Dwayne Lamb (US – Washington)

Dong Yoon Park (US – Washington)

Stephen Chiou (US – Pennsylvania)

Michelle Ma (US – Pennsylvania)

Chad Carter (US – North Carolina)

HoloLens and the Arts

There are roughly three classifications of experiences we can build in Mixed Reality: 

The first is the enterprise experience, which can unfairly be encapsulated as people looking at engines.

The second is the gaming experience, which can unfairly be encapsulated as squirrels playing with nuts (I’m looking at you, Conker).

And then there is art, which no one is currently doing – but they/we should be. HoloLens is the greatest media shift to happen in a long while and the potential for creating both unique entertainment and transcendent experiences is profound.

Although we typically don’t think of the HoloLens this way, we can. Here are three (highly recommended) sources of inspiration for anyone interested in the Arts and Mixed Reality’s bigger potential:

golan

https://medium.com/volumetric-filmmaking James George and the people behind the RGBD depthkit are taking volumetric filmmaking head-on with a new online journal about storytelling in virtual spaces. If you know these guys already, it’s a no-brainer, but if you don’t, here’s a primer: https://vimeo.com/42852185

kusama

Yayoi Kusama is finally getting a big showing of her Infinity Mirror art at the Hirshhorn Museum – which has already increased membership at the Hirshhorn 20x. The effects she is producing have an obvious relationship to what we do with light – really, what we have been doing in a more or less straight line from the Surface table to the Kinect to projection mapping and now this. It’s playing with light in a way that defies what we otherwise know about the world around us. What she does with mirrors we should be able to recreate in our HoloLenses.

ipsadixit2

Kate Soper’s 90-minute musical performance Ipsa Dixit is probably going to be the most difficult sell because it is the high end of high art. Alex Ross, in his New Yorker review of Ipsa Dixit, starts off by saying that the term genius is overused these days and should be retired, _but_ in the case of Ipsa Dixit … If you enjoy live performance, you know that there are still things that happen in the theater that cannot be reproduced in film or television, _but_ we can come a lot closer with mixed reality. We control 360 sound as well as 3D images the viewer can walk around. We can make either private or shared experiences, and take advantage of the space the viewer occupies or occlude it. Works like Ipsa Dixit come along only once in a blue moon, and they are difficult to see in the right way. With mixed reality, we have a medium that can capture the essence of genius performances like this and allow a much larger audience to experience them.

Between casual gaming and social media, the main influence of technology over the past 20 years has been to create a generation of people with extremely short attention spans. Where tl;dr started off as an ironic comment on our collective inability to concentrate, it has now become an excuse for shallow thinking and the normalization of antisocial behavior. But it doesn’t have to be that way. Mixed reality has the potential to change all that, finally, and give us an opportunity to have a more human and thoughtful relationship with our tech.

Older but not wiser

In late December I tried making some infrastructure changes to my blog, which is hosted on Microsoft Azure, and managed to hose the whole thing. Because I’m a devotee of doing things the long way, I spent the next two months learning about Docker containers and command line tools only to discover that Docker wasn’t my problem at all. There was something wrong with the way I’d configured my Linux VM and something to do with a button I’d pressed without looking at the warnings as closely as they warranted.

Long story short, I finally just blew away that VM and slowly reconstructed my blog from post fragments and backups I found on various machines around the house.

I still need to go through and reconstruct the WordPress categories. For now, though, I will take a moment to pause and reflect on the folly of my technical ways.

The Great AI Awakening

bathers_and_whale

This is a crazy long but nicely comprehensive article by the New York Times on the current state of AI: The Great AI Awakening.

While lately I’ve been buried in 3D interfaces, I’m always faintly aware that 1D interfaces (Cortana Skills, speech as a service, etc.) are another fruit of our recent machine learning breakthroughs (or, more accurately, refocus), and that the future success of holographic displays ultimately involves making them work with our 1D interfaces to create personal assistants. This article helps connect the dots between these, at first glance, very different technologies.

It also nicely complements Memo Akten’s Medium posts on Deep Learning and Art, which Microsoft resident genius Rick Barraza pointed me to a while back:

Part 1: The Dawn of Deep Learning

Part 2: Algorithmic Decision Making, Machine Bias, Creativity and Diversity

There’s also a nice throwaway reference in the Times article to the relationship between VR and machine learning, which is a little less obscure if you already know Baudrillard’s Simulacra and Simulation, which in turn depends on Jorge Luis Borges’s very short story “On Exactitude in Science.”

If you really haven’t the time though, which I suspect may be the case, here are some quick excerpts starting with Google’s AI efforts:

Google’s decision to reorganize itself around A.I. was the first major manifestation of what has become an industrywide machine-learning delirium. Over the past four years, six companies in particular — Google, Facebook, Apple, Amazon, Microsoft and the Chinese firm Baidu — have touched off an arms race for A.I. talent, particularly within universities. Corporate promises of resources and freedom have thinned out top academic departments. It has become widely known in Silicon Valley that Mark Zuckerberg, chief executive of Facebook, personally oversees, with phone calls and video-chat blandishments, his company’s overtures to the most desirable graduate students. Starting salaries of seven figures are not unheard-of. Attendance at the field’s most important academic conference has nearly quadrupled. What is at stake is not just one more piecemeal innovation but control over what very well could represent an entirely new computational platform: pervasive, ambient artificial intelligence.

 

When he has an opportunity to make careful distinctions, Pichai differentiates between the current applications of A.I. and the ultimate goal of “artificial general intelligence.” Artificial general intelligence will not involve dutiful adherence to explicit instructions, but instead will demonstrate a facility with the implicit, the interpretive. It will be a general tool, designed for general purposes in a general context. Pichai believes his company’s future depends on something like this. Imagine if you could tell Google Maps, “I’d like to go to the airport, but I need to stop off on the way to buy a present for my nephew.” A more generally intelligent version of that service — a ubiquitous assistant, of the sort that Scarlett Johansson memorably disembodied three years ago in the Spike Jonze film “Her”— would know all sorts of things that, say, a close friend or an earnest intern might know: your nephew’s age, and how much you ordinarily like to spend on gifts for children, and where to find an open store. But a truly intelligent Maps could also conceivably know all sorts of things a close friend wouldn’t, like what has only recently come into fashion among preschoolers in your nephew’s school — or more important, what its users actually want. If an intelligent machine were able to discern some intricate if murky regularity in data about what we have done in the past, it might be able to extrapolate about our subsequent desires, even if we don’t entirely know them ourselves.

 

The new wave of A.I.-enhanced assistants — Apple’s Siri, Facebook’s M, Amazon’s Echo — are all creatures of machine learning, built with similar intentions. The corporate dreams for machine learning, however, aren’t exhausted by the goal of consumer clairvoyance. A medical-imaging subsidiary of Samsung announced this year that its new ultrasound devices could detect breast cancer. Management consultants are falling all over themselves to prep executives for the widening industrial applications of computers that program themselves. DeepMind, a 2014 Google acquisition, defeated the reigning human grandmaster of the ancient board game Go, despite predictions that such an achievement would take another 10 years.

My problem with More Personal Computing as a Branding Attempt

We all know that Microsoft has had a long history with problematic branding. For every “Silverlight” that comes along we get many more confusing monikers like “Microsoft Office Professional Plus 2007.” As the old saw goes, if Microsoft invented the iPod, they would have called it the “Microsoft I-pod Pro 2005 Human Ear Professional Edition.”

While “More Personal Computing” breaks the trend of long academic nomenclature, it is still a bit wordy. It’s also a pun. For anyone who hasn’t figured out the joke, MPC can mean either [More] Personal Computing — for people who still haven’t gotten enough personal computing, apparently — or [More Personal] Computing — for those who like their technology to be intimate and a wee bit creepy.

But the best gloss on MPC, IMO, comes from this 1993 episode of The Simpsons. Enjoy:

In Atlanta? Test Out Your HoloLens App at the Microsoft Innovation Center

alex

While the HoloLens, Microsoft’s mixed reality device, is still a bit pricey at the moment, you can get in on HoloLens development right now.

Microsoft provides a HoloLens emulator that lets you build apps on your desktop without a device. You’ll need Windows Pro and around 4 gigs of RAM to run the emulator. The dev tools are just Visual Studio and Unity.

If you live in the Atlanta area, you can also try your app out on a real HoloLens at the Microsoft Innovation Center in downtown Atlanta. The MIC is housed in the historic FlatIron building, and you can request time with a dev edition HoloLens on their contact page.

This is what Microsoft did for Windows Phone when it first came out, and basically provides a way to try before you buy.

So what are you waiting for? Download the tools, build an app by following the online tutorials, and schedule some time to see what your app looks like in mixed reality.

HoloLens and The Plight of Men in Tech

The Art && Code conference on VR and AR, led by Golan Levin, has just ended at Carnegie Mellon. It’s an amazing conference that reminds me of the days of MIX in terms of creativity and seriousness. I have always secretly felt that there’s a strong correlation between gender parity at a conference and the quality of the conference – something MIX had to some degree and Art && Code has strongly.

rinseandrepeat

From Rinse and Repeat by Robert Yang

I don’t mean to be political or controversial, though. I just find this more enjoyable than //build or Ignite, which are valuable in other ways.

jchlaura

Brain by Laura Juo-Hsin Chen

A serious concern that bubbled up to my consciousness at Art && Code, however, is the plight of men in tech. Something has happened to us, I think: in our gender isolation, we’ve become dull and less playful. Our very notions of what is fun in tech are limited and diminished, and we don’t even realize it. We all go about doing our best Steve Wozniak-pretending-to-be-Steve-Jobs impressions and talk about passion in a way that makes it sound like a wet rag we found in an alley, perpetuating the nerd-on-nerd self-violence that is choking the nascent mixed reality industry. And I think this is unfair.

memory Place

Memory Space by Sarah Rothberg

Part of the reason for this is probably the high cost of the tech we work with. When the gear is expensive, we feel a responsibility to appear more serious. We worry that appearing to have fun doing what we do requires justification. A great example is Rene Schulte having to justify the wig caps in his company’s HoloLens demo, when the only explanation needed should have been that the HoloLens is gangsta, and our only legitimate response should have been that he needed more bling.

Nail Art Museum by Jeremy Bailey

My favorite talk at the conference, by Robert Yang, was about political games. The more I think about it, the more profound it seems. He explores gender roles and race relations with a first-person adventure that takes place in a gay spa. Mostly due to misunderstandings about what he is trying to do, his games are banned from Twitch and he gets a lot of hate mail. His oeuvre has been accused of having found rock bottom in the uncanny valley (though this is more a description of his aesthetic, I feel, than of the success of his experiments).

I talked to him for a bit in the buffet line, and he said he has considered putting a contextual, researchy explanation in front of his games but thinks it would ruin the integrity of his bathhouse game. I agreed strongly.

And maybe sex games are still a step too far for us – but we should at least be able to be more playful, more fun, and more willing to create apps that are exploratory rather than just cookie-cutter commercial translations of existing web apps.

Fun with Hyperbolic Space by Vi Hart

I think the cause of our predicament is the homogeneity of the people currently doing More Personal Computing development. The symptom and effect is the small number of women in the field. HoloLens development is, if anything, even narrower in this way than Windows development in general, despite being more inherently innovative and transformative.

But what’s the point of transformation if we create a NUI world that has all the problems institutionalized in our current one?

As developers we have a limited ability to affect this — mostly because of lack of time and the other usual but legit reasons.

 

Shining 360 by Claire Hentschker

Where we can have an influence, though, is in creating diversity in our own thinking. I can make this even easier – we create diversity by having more fun in what we do. If what we do looks more fun and has more joy in it, it will attract more diverse people who will want to use it for artistic and even subversive purposes and, in turn, make what we do even more fun. It’s a virtuous circle that gives back a hundredfold.

And while secretly my criterion for when we’ve succeeded with diversity is when we get a Robert Yang gay sex game on the HoloLens, a more modest one – just getting a non-zero number of women developers into the HoloLens dev community and making sure they have a voice – would also be great.

For what it’s worth, VR has the same institutional problem. But this is a fresh start and we can do better.

My modest proposal to accomplish this starts with a plea to Microsoft and other MR vendors like Meta and Magic Leap. You have plans – or have already implemented plans – to provide devices and developers to major enterprises like NASA in order to build mixed reality experiences. Why not divert a portion of these resources to some of the new media artists linked in this post, so we can see the true potential of Holographic Computing to change, challenge and improve our social world rather than simply find new ways to channel capital?

I understand that to some extent it’s a matter of which comes first, the chicken or the portrait of the chicken. I would simply plead with you, the mighty corporate curators of mixed reality, to this time choose the portrait of the chicken first. It’s the truer vision.

Mixed Reality Essentials: A Concise Course

On Saturday, October 29th, Dennis Vroegop and I will be running a Mixed Reality Workshop as part of the DEVintersection conference in Las Vegas. Dennis is both a promoter and trainer in Mixed Reality and has made frequent appearances on European TV talking about this emerging technology as well as consulting on and leading several high-profile mixed reality projects. I’ve worked as a developer on several commercial mixed reality experiences while also studying and writing about the various implications and scenarios for using mixed reality in entertainment and productivity apps.

Our workshop will cover the fundamentals of building for mixed reality through the first half of the day. Through the rest of the day, we will work with you to build your own mixed reality application of your choice—so come with ideas of what you’d like to make. And if you aren’t sure what you want to create in mixed reality, we’ll help you with that, too.

Here’s an outline of what we plan to cover in the workshop:

  1. Hardware: an overview of the leading mixed reality devices and how they work.
  2. Tools: an introduction to the toolchain used for mixed reality development, emphasizing Unity and Visual Studio.
  3. Hello Unity: hands-on development of an MR app using gestures and voice commands.
  4. SDK: we’ll go over the libraries used in MR development, what they provide and how to use them.
  5. Raycasting: covering some things you never have to worry about in 2D programming.
  6. Spatial Mapping and Spatial Understanding: how MR devices recognize the world around them.
  7. World Anchors: fixing virtual objects in the real world.

Break for lunch

  8. Dennis and I will help you realize your mixed reality project. At the end of the workshop, we’ll do a show-and-tell to share what you’ve built and go over next steps if you want to publish your work.

We are extremely excited to be doing this workshop at DEVintersection. Mixed Reality is forecast to be a multi-billion-dollar industry by 2020. This is your opportunity to get in on the ground floor with some real hands-on experience.

(Be sure to use the promo code ASHLEY for a discount on your registration.)