Playing with the HoloLens Field of View

I was working on a HoloLens project when I noticed, as I do two or three times during every HoloLens project, that I didn’t know what the Field of View property of the default camera does in a HoloLens app. I always see it out of the corner of my eye when I have the Unity IDE open. The HoloToolkit camera configuration tool automatically sets it to 16, and I was never sure why. (h/t to Jesse McCulloch, who pointed me to an HTK thread that provides more background on how the 16 value came about.)

[image: fov_question]

So I finally decided to test this out for myself. In a regular Unity app, increasing the number of degrees in the angular field of view increases the amount the camera can see but, in turn, makes everything look smaller. The concept comes from regular camera lenses and is related to the notion of a camera’s focal length, as demonstrated in the fit-inducing (but highly illustrative) animated gif below.

[image: hololens_fov]

I built a quick app with the default Ethan character and placed a 3D Text element over him that checks the camera’s Field of View property on every update.

using UnityEngine;

public class updateFOV : MonoBehaviour
{
    private TextMesh _mesh;

    void Awake()
    {
        // Cache the TextMesh component so we aren't looking it up every frame.
        _mesh = GetComponent<TextMesh>();
    }

    // Update is called once per frame
    void Update()
    {
        // Show the main camera's current vertical FOV, rounded to two decimals.
        _mesh.text = System.Math.Round(Camera.main.fieldOfView, 2).ToString();
    }
}

Then I added a Keyword Manager from the HoloToolkit to handle changing the angular FOV of the camera dynamically.

public void IncreaseFOV()
{
    Camera.main.fieldOfView = Camera.main.fieldOfView + 1;
}

public void DecreaseFOV()
{
    Camera.main.fieldOfView = Camera.main.fieldOfView - 1;
}

public void ResetFOV()
{
    Camera.main.ResetFieldOfView();
}
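
For reference, here is how the same wiring looks in code with Unity’s own KeywordRecognizer (which, as far as I know, is what the HoloToolkit’s Keyword Manager wraps). This is just a sketch of my own – the spoken phrases are placeholders I made up, not anything the toolkit prescribes:

using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class FOVVoiceCommands : MonoBehaviour
{
    private KeywordRecognizer _recognizer;
    private Dictionary<string, System.Action> _keywords;

    void Start()
    {
        // Map spoken phrases (placeholders of my choosing) to the FOV handlers.
        _keywords = new Dictionary<string, System.Action>
        {
            { "increase field", IncreaseFOV },
            { "decrease field", DecreaseFOV },
            { "reset field", ResetFOV }
        };

        _recognizer = new KeywordRecognizer(_keywords.Keys.ToArray());
        _recognizer.OnPhraseRecognized += args =>
        {
            System.Action action;
            if (_keywords.TryGetValue(args.text, out action))
            {
                action();
            }
        };
        _recognizer.Start();
    }

    public void IncreaseFOV() { Camera.main.fieldOfView += 1; }
    public void DecreaseFOV() { Camera.main.fieldOfView -= 1; }
    public void ResetFOV() { Camera.main.ResetFieldOfView(); }
}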

When I ran the app in my HoloLens, the FOV reader started showing “17.82” instead of “16”. This must be the vertical FOV of the HoloLens – something else I’ve often wondered about. Assuming a 16:9 aspect ratio, this gives a horizontal FOV of “31.68”, which is really close to what Oliver Kreylos guessed way back in 2015.
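
As an aside, scaling the vertical FOV linearly by the aspect ratio (17.82 × 16/9 = 31.68) is a small-angle approximation; the exact conversion goes through tangents. A quick sketch of the math (a helper of my own, not part of Unity):

using System;

static class FovMath
{
    // Exact conversion: hFOV = 2 * atan(aspect * tan(vFOV / 2)).
    public static double HorizontalFov(double verticalFovDegrees, double aspect)
    {
        double vRad = verticalFovDegrees * Math.PI / 180.0;
        double hRad = 2.0 * Math.Atan(aspect * Math.Tan(vRad / 2.0));
        return hRad * 180.0 / Math.PI;
    }
}

HorizontalFov(17.82, 16.0 / 9.0) comes out to about 31.1 degrees, so at angles this small the linear shortcut lands close to the exact value.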

The next step was to increase the Field of View using my voice commands. There were two possible outcomes: either the Unity app would somehow override the natural FOV of the HoloLens and actually distort my view, making the Ethan model smaller as the FOV increased, or the app would just ignore whatever I did to the Main Camera’s FieldOfView property.

[image: the FOV reader showing 27.82]

The second thing happened. As I increased the Field of View property from “17.82” to “27.82”, there was no change in the way the character was projected. The HoloLens ignores that setting.

Something strange did happen, though, after I called the ResetFieldOfView method on the Main Camera and tried to take a picture. After resetting, the FOV reader began retrieving the true value of the FOV again. When I tried to take a picture of this, the FOV jumped up to “34.11”, then dropped back to “17.82”.

[image: the FOV reader showing 34.11]

This, I would assume, is the vertical FOV of the locatable camera (the RGB camera) on the front of the HoloLens when taking a normal picture. Assuming again a 16:9 aspect ratio, the same linear scaling gives a “60.64” horizontal angular FOV. According to the documentation, though, the horizontal FOV should be “67” degrees, which is close but not quite right.

“34.11” is also close to double “17.82”, so maybe it has something to do with unsplitting the render sent to each eye? Except that double would actually be “35.64”. Plus, I don’t really know how the stereoscopic rendering pipeline works, so – who knows.

In any case, I at least answered the original question that was bothering me – fiddling with that slider next to the Camera’s Field of View property doesn’t really do anything. I need to just ignore it.

How to Find Great HoloLens Developers


Microsoft HoloLens is an amazing and novel device that is at the forward edge of a major transformation in the way we do computing, both professionally and as consumers. It has immediate applications in the enterprise, especially in fields that work heavily with 3D models such as architecture and design. It has strong coolness potential for companies working in the tradeshow space and for art installations.

Given its likelihood to eventually overtake the smartphone market in the next five to ten years, it should also be given attention in R&D departments of large corporations and the emerging experiences groups of well-heeled marketing firms.

the problem

Because it is a new technology, there is no established market for HoloLens developers. There is no certification process. There are no boards to license people. How do you find out who is good?

There are two legitimacy issues currently affecting the HoloLens world. One is unknown companies popping up with flashy websites, publishing other people’s work as their own and exaggerating their capabilities. The internet is a great leveler in this case, and it is hard to distinguish between what is real and what is fake.

Another is established consulting companies that have decent IT reputations but no HoloLens experience moving into the market in the hopes that a funded project will pay for their employees to learn on the job. I’ve cleaned up after several of these failed projects in the past year.

helpful rules

How do you avoid bad engagements like these? Here are some guidelines:

1. Make sure the companies you are looking to work with can show original work. If their websites are full of stock Microsoft images and their videos show work belonging to other people without proper attribution, run like the wind.

2. Find someone with HoloLens experience to vet these companies for you. Go to the main HoloLens discussion board at https://forums.hololens.com/ and see who is answering questions. These aren’t the only people who know about HoloLens development, but they do demonstrate their experience on a daily basis for the good of the mixed reality community.

3. See who is writing apps for the HoloLens Challenge. This contest happens every three weeks and challenges developers to build creative apps to specification in a short time span. Anyone who does well in the challenge is going to do a great job for you. Plus, you can actually see what they are capable of. They are effectively posting their CVs online.

4. Look for contributors to open source HoloLens projects like this and this and this.

5. Look for companies and individuals associated with the HoloLens Agency Readiness Program or the Microsoft MVP Emerging Experiences group. These are two of the longest running groups of developers and designers working with HoloLens and go back to 2015. These people have been thinking about mixed reality for a long time.

naming names

There are several areas in which you will want HoloLens expertise.

A. You need help conceptualizing and implementing a large project.

B. You need help creating a quick proof of concept to demonstrate how the HoloLens can help your company.

C. You need individuals to augment or train your internal developers for a project.

The best people for each of these areas are well known in the relatively small world of HoloLens developers. Unfortunately, because HoloLens is still niche work, they tend, with a few exceptions, not to be known outside of that insular world.

So how do I know who’s good and who’s great in Mixed Reality? Fair question.

I’ve been working on HoloLens interaction design and development since the device started shipping in April of 2016 and have been writing about it since 2015. I have close relationships with many of the big players in this world as well as the indie devs who are shaping HoloLens experiences today and pushing the envelope for tomorrow. I’ve been working with emerging experiences for the past half decade, from the original Microsoft Surface table, to the Kinect v1 and v2 (here’s my book), to VR and the HoloLens. I’ve taught workshops on HoloLens development and am currently working on a Lynda.com course on mixed reality.

The lists below are a bit subjective, and lean towards the organizations and people I can personally vouch for. (If you think someone significant has been left off the following lists, please let me know in the comments.)

big projects

Interknowlogy and Interknowlogy Europe

Object Theory

Razorfish Emerging Experiences

Valorem

Holoforge Interactive

Taqtile

small to mid-sized projects

360 World (Hungary)

OCTO Technology (France)

Made In Holo (Germany)

Stimulant (US)

8Ninths (US)

You Are Here (US)

Truth Labs (US)

Kazendi (UK)

Nexinov (Australia / Shanghai)

Thought Experiments (US)

Studio Studio (US)

Wavelength (US)

awesome hololens / mixed reality devs

Bronwen Zande (Australia)

Nick Young (New Zealand)

Bruno Capuano (Spain / Canada – Toronto)

Kenny Wang (Canada – Toronto)

Alex Drenea (Canada – Toronto)

Vangos Pterneas (Greece / US – New York)

Nate Turley (US – New York)

Milos Paripovic (US – New York)

Dwight Goins (US – Florida)

Stephen Hodgson (US – Florida)

Jason Odom (US – Alabama)

Jesse McCulloch (US – Oregon)

Michael Hoffman (US – Oregon)

Dwayne Lamb (US – Washington)

Dong Yoon Park (US – Washington)

Cameron Vetter (US – Wisconsin)

Stephen Chiou (US – Pennsylvania)

Michelle Ma (US – Pennsylvania)

Chad Carter (US – North Carolina)

Clemente Giorio (Italy)

Matteo Valoriani (Italy)

Dennis Vroegop (Netherlands)

Jasper Brekelmans (Netherlands)

Joost Van Schaik (Netherlands)

Gian Paolo Santopaolo (Switzerland)

Rene Schulte (Germany)

Vincent Guigui (France)

Johanna Rowe Calvi (France)

Nicolas Calvi (France)

Fabrice Barbin (France)

Andras Velvart (Hungary)

Tamas Deme (Hungary)

Jessica Engstrom (Sweden)

Jimmy Engstrom (Sweden)

HoloLens and the Arts

There are roughly three classifications of experiences we can build in Mixed Reality: 

The first is the enterprise experience, which can unfairly be encapsulated as people looking at engines.

The second is the gaming experience, which can unfairly be encapsulated as squirrels playing with nuts (I’m looking at you, Conker).

And then there is art, which no one is currently doing – but we should be. HoloLens is the greatest media shift to happen in a long while, and the potential for creating both unique entertainment and transcendent experiences is profound.

Although we typically don’t think about the HoloLens in this way, we can. Here are three (highly recommended) sources of inspiration for anyone interested in the Arts and Mixed Reality’s bigger potential:


https://medium.com/volumetric-filmmaking James George and the people behind the RGBD depthkit are taking volumetric filmmaking head-on with a new online journal about storytelling in virtual spaces. If you know these guys already, then it’s a no-brainer, but if you don’t, here’s a primer: https://vimeo.com/42852185


Yayoi Kusama is finally getting a big showing of her Infinity Mirrors art at the Hirshhorn Museum – which has already increased membership at the Hirshhorn 20x. The effects she produces have an obvious relationship to what we do with light – really what we have been doing in a more or less straight line from the Surface table to the Kinect to projection mapping and now this. It’s playing with light in a way that defies what we otherwise know about the world around us. What she does with mirrors we should be able to recreate in our HoloLenses.


Kate Soper’s 90-minute musical performance Ipsa Dixit is probably going to be the most difficult sell because it is the high end of high art. Alex Ross, in his New Yorker review of Ipsa Dixit, starts off by saying that the term genius is overused these days and should be retired, _but_ in the case of Ipsa Dixit … If you enjoy live performance, you know that there are still things that happen in the theater that cannot be reproduced in film and television, _but_ we can come a lot closer with mixed reality. We control 360 sound as well as 3D images the viewer can walk around. We can make either private experiences or shared experiences, and take advantage of the space the viewer occupies or occlude it. Works like Ipsa Dixit only come along once in a blue moon, and they are difficult to see in the right way. With mixed reality, we have a medium that is able to capture the essence of genius performances like this and allow a much larger audience to experience them.

Between casual gaming and social media, the main influence of technology over the past 20 years has been to create a generation of people with extremely short attention spans. Where tl;dr started off as an ironic comment on our collective inability to concentrate, it has now become an excuse for shallow thinking and the normalization of antisocial behavior. But it doesn’t have to be that way. Mixed reality has the potential to change all that, finally, and give us an opportunity to have a more human and thoughtful relationship with our tech.

In Atlanta? Test Out Your HoloLens App at the Microsoft Innovation Center


While the HoloLens, Microsoft’s mixed reality device, is a bit pricey at the moment, you can still get in on HoloLens development.

Microsoft provides a HoloLens emulator that lets you build apps on your desktop without a device. You’ll need Windows Pro and around 4 gigs of RAM to run the emulator. The dev tools are just Visual Studio and Unity.

If you live in the Atlanta area, you can also try your app out on a real HoloLens at the Microsoft Innovation Center in downtown Atlanta. The MIC is housed in the historic Flatiron Building, and you can request time with a dev edition HoloLens through their contact page.

This is what Microsoft did for Windows Phone when it first came out, and it basically provides a way to try before you buy.

So what are you waiting for? Download the tools, build an app by following the online tutorials, and schedule some time to see what your app looks like in mixed reality.

HoloLens and The Plight of Men in Tech

The Golan Levin–led Art && Code conference on VR and AR has just ended at Carnegie Mellon. It’s an amazing conference that reminds me of the days of MIX in terms of creativity and seriousness. I have always secretly felt that there’s a strong correlation between gender parity at conferences and the quality of the conference — something MIX had to some degree and Art && Code has strongly.


From Rinse and Repeat by Robert Yang

I don’t mean to be political or controversial, though. I just find this more enjoyable than //build or Ignite, which are valuable in other ways.


Brain by Laura Juo-Hsin Chen

A serious concern that bubbled up to my consciousness at Art && Code, however, is the plight of men in tech. Something has happened to us, I think: in our gender isolation, we’ve become dull and less playful. Our very notions of what is fun in tech are limited and diminished, and we don’t even realize it. We all go about doing our best Steve Wozniak pretending to be Steve Jobs impressions, talking about passion in a way that makes it sound like a wet rag we found in an alley, perpetuating the nerd-on-nerd self-violence that is choking the nascent mixed reality industry. And I think this is unfair.


Memory Space by Sarah Rothberg

Part of the reason for this is probably the high cost of the tech we work with. When the gear is expensive, we feel a responsibility to appear more serious. We worry that appearing to have fun doing what we do requires justification. A great example is Rene Schulte feeling he had to justify the wig caps in his company’s HoloLens demo, when the only explanation needed should have been that the HoloLens is gangsta, and our only legitimate response should have been that he needed more bling.

Nail Art Museum by Jeremy Bailey

My favorite talk at the conference was about political games by Robert Yang. The more I think about it, the more profound it seems. He explores gender roles and race relations with a first-person adventure that takes place in a gay spa. Mostly due to misunderstandings about what he is trying to do, his games are banned from Twitch and he gets a lot of hate mail. His oeuvre of games has been accused of having found rock bottom in the uncanny valley (though this is more a description of his aesthetic, I feel, than of the success of his experiments).

I talked to him for a bit in the buffet line. He said he’s considered putting a contextual, researchy explanation in front of his games, but he also thinks it would ruin the integrity of his bathhouse game, and I agreed strongly.

And maybe sex games are still a step too far for us — but we should at least be able to be more playful and fun, and willing to create apps that are exploratory rather than just cookie-cutter commercial translations of existing web apps.

Fun with Hyperbolic Space by Vi Hart

I think the cause of our predicament is the homogeneity of the people currently doing More Personal Computing development. The symptom and effect is the small number of women in this field. HoloLens is, if anything, even narrower in this way than Windows development in general, despite being more inherently innovative and transformative.

But what’s the point of transformation if we create a NUI world that has all the problems institutionalized in our current one?

As developers we have a limited ability to affect this — mostly because of lack of time and the other usual but legit reasons.


Shining 360 by Claire Hentschker

Where we can have an influence, though, is by creating diversity in our own thinking. I can make this even easier — we create diversity by having more fun in what we do. If what we do looks more fun and has more joy, it will attract more diverse people who will want to use it for artistic and even subversive purposes and, in turn, make what we do even more fun. It’s a virtuous circle that gives back a hundredfold.

And while secretly my criterion for when we’ve succeeded with diversity is when we get a Robert Yang gay sex game on the HoloLens, a more modest one — just getting a non-zero number of women developers into the HoloLens dev community and making sure they have a voice — would also be great.

For what it’s worth, VR has the same institutional problem. But this is a fresh start and we can do better.

My modest proposal to accomplish this starts with a plea to Microsoft and other MR vendors like Meta and Magic Leap. You have plans, or have already implemented plans, to provide devices and developers to large enterprises like NASA in order to build mixed reality experiences. Why not divert a portion of these resources to some of the new media artists linked in this post so we can see the true potential of Holographic Computing to change, challenge and improve our social world rather than simply find new ways to channel capital?

I understand that to some extent it’s a matter of which comes first, the chicken or the portrait of the chicken. I would simply plead with you, the mighty corporate curators of mixed reality, to this time choose the portrait of the chicken first. It’s the truer vision.

MIXED REALITY ESSENTIALS: A CONCISE COURSE

On Saturday, October 29th, Dennis Vroegop and I will be running a Mixed Reality Workshop as part of the DEVintersection conference in Las Vegas. Dennis is both a promoter and trainer in Mixed Reality and has made frequent appearances on European TV talking about this emerging technology as well as consulting on and leading several high-profile mixed reality projects. I’ve worked as a developer on several commercial mixed reality experiences while also studying and writing about the various implications and scenarios for using mixed reality in entertainment and productivity apps.

Our workshop will cover the fundamentals of building for mixed reality during the first half of the day. For the rest of the day, we will work with you to build a mixed reality application of your choice—so come with ideas of what you’d like to make. And if you aren’t sure what you want to create in mixed reality, we’ll help you with that, too.

Here’s an outline of what we plan to cover in the workshop:

  1. Hardware: an overview of the leading mixed reality devices and how they work.
  2. Tools: an introduction to the toolchain used for mixed reality development, emphasizing Unity and Visual Studio.
  3. Hello Unity: hands-on development of an MR app using gestures and voice commands.
  4. SDK: we’ll go over the libraries used in MR development, what they provide and how to use them.
  5. Raycasting: covering some things you never have to worry about in 2D programming.
  6. Spatial Mapping and Spatial Understanding: how MR devices recognize the world around them.
  7. World Anchors: fixing virtual objects in the real world.

Break for lunch

  8. Dennis and I will help you realize your mixed reality project. At the end of the workshop, we’ll do a show and tell to share what you’ve built and go over next steps if you want to publish your work.

We are extremely excited to be doing this workshop at DEVintersection. Mixed Reality is forecast to be a multi-billion dollar industry by 2020. This is your opportunity to get in on the ground floor with some real hands-on experience.

(Be sure to use the promo code ASHLEY for a discount on your registration.)

Pokémon Go as an Illustration of AR Belief Circles

[image: venn]

Recent rumors circulating around Pokémon Go suggest that Niantic will delay the next major update until next year. It was previously believed that the update would include additional game elements, creatures and levels beyond level 40 sometime in December.

A large gap between releases like this would seem to leave the door open for copycat games to move into the opening Niantic is providing them. And maybe this wouldn’t be such a bad thing. While World of Warcraft is the most successful MMORPG, for instance, it certainly wasn’t the first. Dark Age of Camelot, Everquest, Asheron’s Call and Ultima Online all preceded it. What WoW did was perhaps to collect the best features of all these games while also riding the right graphics card cycle to success.

A similar student-becomes-the-master trope could play out for other franchise owners, since the only things that seem to be required to get a Pokémon-like game going are a pre-existing storyline (like WoW had) and 3D assets either available or easily created to go into the game. With Azure and AWS cloud computing easily available, even infrastructure isn’t the challenge it was when the early MMORPGs were starting. Possible franchise holders that could make the leap into geographically-aware augmented reality games include Disney, WoW itself, Yu-Gi-Oh!, Magic: The Gathering, and Star Wars.

Imagine going to the park one day and asking someone – face down, staring at their phone – if they know where the Bulbasaur showing up on the nearby is, and having them not know what you are talking about because they are looking for Captain Hook or a Jawa on their nearby.

This sort of experience is exemplary of what Vernor Vinge calls belief circles in his book about augmented reality, Rainbows End. Belief circles describe groups of people who share a collaborative AR experience. Because they also share a common real-life world with others, their belief circles may conflict with other people’s belief circles. What’s even more peculiar is that members of different belief circles do not have access to each other’s augmented worlds – a peculiar twist on the problem of other minds. So while a person in H.P. Lovecraft’s belief circle can encounter someone in Terry Pratchett’s Discworld belief circle at a Starbucks, it isn’t at all clear how they will ultimately interact with one another. Starbucks itself may provide virtual assets that can be incorporated into either belief circle in order to attract customers from different worlds and backgrounds – basically multi-tier marketing of the future. Will different things be emphasized in the store based on our self-selected belief circles? Will our drinks have different names and ingredients? How will trademark and copyright laws affect the ability to incorporate franchises into the multi-aspect branding of coffee houses, restaurants and other mall stores?

But most of all, how will people talk to each other? One of the great pleasures of playing Pokémon today is encountering and chatting with people I otherwise wouldn’t meet, having a common set of interests that trumps our political and social differences. Belief circles in the AR future of five to ten years may instead encourage the opposite trend: the Balkanization of communities into interest zones. Will high-concept belief circles based on art, literature and genre fiction simply devolve into Democrat and Republican belief circles at some point?

Sketching Holographic Experiences

[sketch: augmented reality D&D]

These experience sketches are an initial concept exploration for a pen and paper role playing game like Dungeons & Dragons augmented by mixed reality devices. The first inklings for this were worked out on the HoloLens forums, and I want to thank everyone who was kind enough to offer their creative suggestions there.

I’ve always felt that the very game mechanics that make D&D playable are also one of the major barriers to getting a game going. The D&D mechanics were eventually translated into a variety of video games that made progressing through an adventure much easier. Instead of players spending half an hour or more working out all the math for a battle, a computer can do it in a fraction of the time and throw in nice particle effects to boot.

What gets lost in the process is the storytelling element as well as the social aspect of playing. So how do we reintroduce the dungeon master and the socialization elements of role playing without having to deal with all the bookkeeping?

[sketch 2]

With augmented reality, we can do much of the math and bookkeeping in the background, allowing players to spend more time talking to each other, making bad puns and getting into their characters. Instead of physical playing pieces, I think we could use flat player bases imprinted with QR codes that identify the characters. 3D animated models can be virtually projected onto the bases. Players can then move their bases on the underlying (virtual) hex maps, and the holograms will stay correctly oriented on their bases, as in the sketch below.
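
As a rough sketch of how that anchoring might work in Unity – assuming some vision component (hypothetical here) that recognizes a base’s QR code and reports its tabletop pose – the character hologram simply follows the base’s reported pose:

using UnityEngine;

public class CharacterBase : MonoBehaviour
{
    // The animated character hologram that rides on this base.
    public Transform characterModel;

    // Called by whatever (hypothetical) vision component recognizes this
    // base's QR code and computes its pose on the table.
    public void OnBasePoseUpdated(Vector3 position, Quaternion rotation)
    {
        transform.position = position;
        transform.rotation = rotation;

        // Keep the character upright on the base, facing wherever the base
        // points, even as the player slides it across the virtual hex map.
        characterModel.position = position;
        Vector3 forward = Vector3.ProjectOnPlane(rotation * Vector3.forward, Vector3.up);
        if (forward.sqrMagnitude > 0.0001f)
        {
            characterModel.rotation = Quaternion.LookRotation(forward, Vector3.up);
        }
    }
}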

[sketch 3]

All viewpoints are calibrated for the different players so everyone sees the same things happening – just from different POVs. The experience can also be enhanced with voice commands, so when your magic user says “magic missile,” everyone gets to see the appropriate particle effects on the game table shooting from the magic user character’s hands at a target.

[sketch 4]

I feel that the dice should be physical objects. The feel of the various dice and the sound they make are an essential component of pen and paper role playing. So instead of virtual dice, I want to use computer vision to read the outcomes, present digital visualizations of successful and unsuccessful rolls over the physical dice, and then perform the relevant calculations automatically in the background.

[sketch 5]

The player character’s stats and relevant info should hover over him on the game table. As rolls are made, the health points and stats should update automatically.

[sketch 6]

While some of the holograms on the table are shared between all players, some are only for the dungeon master. As players move from area to area, opening the map up through a visual fog of war, the dungeon master will be able to see secret content like the location of traps and the stats for NPCs. It may also be cool to enable remote DMs who appear virtually to host a game. The thought here is that a good DM is hard to find and in high demand, and it would be interesting to use AR technology to invite celebrity DMs or paid DMs to help get a regular game going.

[sketch 7]

When I started planning out how a holographic D&D game would work, there was still some confusion over the visible range of holograms with HoloLens. I was concerned that digital D&D pieces would fade or blur at close ranges – but this turns out not to be true. The main concern seems to be that looking at a hologram less than a meter away for extended periods of time will trigger vergence-accommodation mismatch for some people. In a typical D&D game, however, this shouldn’t be a problem, since players can lean forward to move their pieces and then recline again to talk through the game.

[sketch 8]

AR can also be used to help with calorie control for that other important aspect of D&D – snack foods and sodas.

Please add your suggestions, criticisms and observations in the comments section. The next step for me is creating some gameplay prototypes in Unity. I’ll post these as they become ready.

And just in case it’s been a long time and you don’t remember what’s so fun about role playing games, here’s an episode of the web series TableTop with Wil Wheaton, Chris Hardwick and Sam Witwer playing the Dragon Age role playing game.

Understanding HoloLens Spatial Mapping and Hologram Ranges

There seems to be some confusion over what the HoloLens’s depth sensor does, on the one hand, and what the HoloLens’s waveguides do, on the other. Ultimately the HoloLens HPU fuses these two streams into a single image, but understanding the component pieces separately is essential for creating good holographic UX.


From what I can gather (though I could be wrong), HoloLens uses a single depth camera, similar to the Kinect’s, to perform spatial mapping of the surfaces around a HoloLens user. If you are coming from Kinect development, this is the same thing as surface reconstruction with that device. Different surfaces in the room are scanned by the depth camera. Multiple passes of the scan are stitched together and merged over time (even as you walk through the room) to create a 3D mesh of the many planes in the room.


These surfaces can then be used to provide collision detection information for 3D models (holograms) in your HoloLens application. The range of the spatial mapping depth camera is 0.85 m to 3.1 m. This means that if a user wants to include a surface that is closer than 0.85 m in their HoloLens experience, they will need to lean back.
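
To make this concrete, here is a minimal sketch of using that mesh for placement – raycasting the user’s gaze against the physics layer the mapping meshes live on. Layer 31 is an assumption on my part (the HoloToolkit default); use whatever layer your setup assigns:

using UnityEngine;

public class TapToPlaceOnSurface : MonoBehaviour
{
    // Physics layer the spatial mapping meshes are assigned to.
    // 31 is an assumption (the HoloToolkit default).
    private const int SpatialMappingLayer = 31;

    void Update()
    {
        // Cast a ray from the user's head along their gaze direction,
        // out to the depth camera's 3.1 m maximum mapping range.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, 3.1f, 1 << SpatialMappingLayer))
        {
            // Snap this hologram onto the real-world surface being gazed at.
            transform.position = hit.point;
            transform.up = hit.normal;
        }
    }
}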

The functioning of the depth camera shouldn’t be confused with the functioning of the HoloLens’s four “environment aware” cameras. These cameras are used to help nail down the orientation of the HoloLens headset in what is known as inside-out positional tracking. You can read more about that in How HoloLens Sensors Work. It is probably the case that the depth camera is also used for finger tracking, while the four environment aware cameras are devoted to positional tracking.

The depth camera spatial mapping in effect creates a background context for the virtual objects created by your application. These holograms are the foreground elements.

Another way to make the distinction, based on technical functionality rather than on the user experience, is to think of the depth camera’s surface reconstruction data as input and holograms as output. The camera is a highly evolved version of the keyboard, while the waveguide displays are a highly evolved version of the CRT monitor.

It has been misreported that the minimum distance for virtual objects in HoloLens is also 0.85 m. This is not so.

The minimum range for hologram placement is more like 10 centimeters. The optimal range for hologram placement, however, is 1.25 m to 5 m. In UWP, the ideal range for placing a 2D app in 3D holographic space is 2 m.

Microsoft also discusses another range they call the comfort zone for holograms. This is the range where vergence-accommodation mismatch doesn’t occur (one of the causes of VR sickness for some people). The comfort zone extends from 1 m to infinity.

Range Name            Minimum Distance    Maximum Distance
Depth Camera          0.85 m (2.8 ft)     3.1 m (10 ft)
Hologram Placement    0.1 m (4 in)        infinity
Optimal Zone          1.25 m (4 ft)       5 m (16 ft)
Comfort Zone          1.0 m (3 ft)        infinity
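
One way to put these numbers to work is a small helper (a sketch of my own, not anything in the SDK) that clamps hologram placement into the optimal zone from the table above:

using UnityEngine;

public static class HologramRanges
{
    // Optimal zone boundaries from the table above.
    public const float OptimalMin = 1.25f;
    public const float OptimalMax = 5.0f;

    // Clamp a proposed hologram position along the ray from the user's
    // head so the hologram lands inside the optimal zone.
    public static Vector3 ClampToOptimalZone(Vector3 head, Vector3 proposed)
    {
        Vector3 offset = proposed - head;
        float distance = Mathf.Clamp(offset.magnitude, OptimalMin, OptimalMax);
        return head + offset.normalized * distance;
    }
}

Calling HologramRanges.ClampToOptimalZone(Camera.main.transform.position, target) before placing a hologram keeps it inside the recommended band.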

The most interesting zone right now is, of course, that short range inside of 1 meter. The 1 meter minimum comfort distance basically prevents any direct interactions between the user’s arms and holograms. The current documentation even says:

Assume that a user will not be able to touch your holograms and consider ways of gracefully clipping or fading out your content when it gets too close so as not to jar the user into an unexpected experience.
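
A minimal sketch of that fading advice: the 1.0 m and 0.85 m breakpoints below are my own choices based on the ranges above, and the material is assumed to use a shader with an alpha channel.

using UnityEngine;

public class FadeWhenTooClose : MonoBehaviour
{
    private Material _material;

    void Awake()
    {
        _material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Fully opaque at 1.0 m (the comfort zone minimum), fully faded
        // out by 0.85 m; both breakpoints chosen from the ranges above.
        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        float alpha = Mathf.Clamp01((distance - 0.85f) / (1.0f - 0.85f));

        Color color = _material.color;
        color.a = alpha;
        _material.color = color;
    }
}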

When a user sees a hologram, they will naturally want to get a closer look and inspect the details. And when a human being looks at something up close, they typically want to reach out and touch it.

[image: sistine_chapel]

Tactile reassurance is hardwired into our brains and is a major tool human beings use to interact with the world. Punting on this interaction, as the HoloLens documentation does, is an understandable way to sidestep this basic psychological need in the early days of designing for augmented reality.

We can pretend for now that AR interactions are going to be analogs to mouse click (air-tap) and touch-less display interactions. Eventually, though, that 1 meter range will become a major UX problem for everyone.

[Updated 4/6 after realizing I probably have the functioning of the environment aware cameras and the depth camera reversed.]

[Updated 4/21 – nope. Had it right the first time.]