Category Archives: Tlon

Rethinking VR/AR Launch Dates

The HTC Vive, Oculus Rift and Microsoft HoloLens all opened for pre-orders in 2016 with plans to ship in early April (or late March in the case of the Oculus). All have run into fulfillment problems, creating general confusion for their most ardent fans.

I won't try to go into all the details of what each company originally promised and then what each company has done to explain their delays. I honestly barely understand it. Oculus says there were component shortages and is contacting people through email to update them; it has also refunded shipping costs for some purchasers as compensation. HTC had issues with their day-one ordering process and is using its blog for updates. Microsoft hasn't acknowledged a problem but is using its developer forum to clarify the shipping timeline.

Maybe it’s time to acknowledge that spinning up production for expensive devices in relatively small batches is really, really hard. Early promises from 2015 followed by CES in January 2016 and then GDC in March probably created an artificial timeline that was difficult to hit.

On top of this, internal corporate pressure has probably also driven each product group to hype to the point that it is difficult to meet production goals. HTC probably has the most experience with international production lines for high tech gear and even they stumbled a bit.

Maybe it’s also time to stop blaming each of these companies as they reach out for the future. All that’s happened is that some early adopters aren’t getting to be as early as they want to be (including me, admittedly).

As William Gibson said, “The future is already here — it’s just not very evenly distributed.”

HoloLens Occlusion vs Field of View

[Note: this post is entirely my own opinion and purely conjectural.]

Best current guesses are that the HoloLens field of view is somewhere between 32 degrees and 40 degrees diagonal. Is this a problem?

We'd definitely all like to be able to work with a larger field of view. That's how we've come to imagine augmented reality working. It's how we've been told it should work, from Vernor Vinge's Rainbows End to Ridley Scott's Prometheus to the Iron Man trilogy – in fact going back as far as Star Wars in the late '70s. We want and expect a 160-180 degree FOV.

So is the HoloLens’ field of view (FOV) a problem? Yes it is. But keep in mind that the current FOV is an artifact of the waveguide technology being used.

What’s often lost in the discussions about the HoloLens field of view – in fact the question never asked by the hundreds of online journalists who have covered it – is what sort of trade-off was made so that we have the current FOV.

A common internet rumor – likely inspired by a video from tech evangelist Bruce Harris recorded a few months ago – is that it has to do with cost of production and consistency in production. The argument is borrowed from chip manufacturing and, while there might be some truth in it, it is mostly a red herring. An amazingly comprehensive blog post by Oliver Kreylos in August of last year went over the evidence as well as related patents and argued persuasively that while increasing the price of the waveguide material could improve the FOV marginally, the price difference was prohibitively expensive and ultimately nonsensical. At the end of the day, the FOV of the HoloLens developer unit is a physical limitation, not a manufacturing limitation or a power limitation.
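
To make the physics intuition concrete, here is a back-of-the-envelope sketch – my own, with illustrative refractive indices rather than actual HoloLens numbers – of the basic constraint: light stays trapped in a waveguide only by total internal reflection, and raising the index of the glass (at considerable cost) widens the usable angular range only modestly:

```python
import math

def tir_critical_angle_deg(n):
    """Critical angle for total internal reflection at a glass/air
    boundary: rays hitting the surface at angles (measured from the
    normal) greater than this stay trapped inside the waveguide."""
    return math.degrees(math.asin(1.0 / n))

# Illustrative indices only: ~1.5 for ordinary glass, higher values
# for the expensive high-index glasses discussed in Kreylos's post.
for n in (1.5, 1.7, 2.0):
    theta_c = tir_critical_angle_deg(n)
    # Guided rays live between the critical angle and grazing (90 deg),
    # so this range bounds the field of view the guide can carry.
    print(f"n = {n}: critical angle {theta_c:.1f} deg, "
          f"usable internal range {90 - theta_c:.1f} deg")
```

Going from n = 1.5 all the way to n = 2.0 only grows the internal range from about 48 to 60 degrees, which is the shape of Kreylos's argument: you pay a great deal for a marginal FOV improvement.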

But don't other AR headset manufacturers promise a much larger FOV? Yes. The Meta 2, for instance, has a 90 degree field of view. The way the technology works, however, involves two LED screens viewed through plastic positioned at 45 degrees to the screens (technically known as a beam splitter, informally known as a piece of glass), which reflects the image into the user's eyes at approximately half its original brightness while also letting in the real world in front of the user (though half of that light is also scattered). This is basically the same technique used to create ghostly images in the Haunted Mansion at Disneyland.

The downside of this increased FOV is that you are losing a lot of brightness through the beam splitter. You also lose light over the distance it travels through the plastic to reach your eyes. The result is a see-through "hologram".

But is this what we want? See-through holograms? The visual design team for Iron Man decided that this is indeed what they wanted for their movies. The translucent holograms provide a cool ghostly effect, even in a dark room.

The Princess Leia hologram from the original Star Wars, on the other hand, is mostly opaque. That visual design team went in a different direction. Why?

My best guess is that it has to do with the use of color. While the Iron Man hologram has a very limited color palette, the Princess Leia hologram uses a broad range of facial tones to capture her expression – and also so that, dramatically, Luke Skywalker can remark on how beautiful she is (which obviously gets messed up by Return of the Jedi). Making her transparent would simply wash out the colors and destroy much of the emotional content of the scene.

The idea that opacity is a prerequisite for color holograms is reinforced by the Star Wars chess scene on the Millennium Falcon. Again, there is just enough transparency to indicate that the chess pieces are holograms and not real objects (digital rather than physical).

So what kind of holograms does the HoloLens provide, transparent or near-opaque? This is something that is hard to describe unless you actually see it for yourself, but HoloLens "holograms" will occlude physical objects when the holograms are placed in front of them. I've had the opportunity to experience this several times over the last year. This is possible because these digital images use a very large color palette and, more importantly, are extremely intense. In fact, because the HoloLens display technology is currently additive, this occlusion effect actually works best with bright colors. As areas of the screen become darker, they actually appear more transparent.

Bigger field of view = more transparent, duller holograms. Smaller field of view = more opaque, brighter holograms.
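
Here is a minimal sketch of that trade-off, with made-up single-pixel values: an additive waveguide adds the hologram's light to the world's light, so bright pixels read as solid while dark pixels simply vanish, whereas a half-silvered beam splitter averages the two and everything comes out ghostly:

```python
import numpy as np

# One "pixel" each, RGB in 0..1 (illustrative values only).
world  = np.array([0.6, 0.6, 0.6])   # mid-gray wall behind the display
bright = np.array([1.0, 0.2, 0.2])   # intense red hologram pixel
dark   = np.array([0.1, 0.0, 0.0])   # dark hologram pixel

def additive(world, holo):
    """Waveguide-style additive display: hologram light is added
    on top of the light already arriving from the world."""
    return np.clip(world + holo, 0.0, 1.0)

def beam_splitter(world, holo, r=0.5):
    """Meta 2-style splitter: roughly half the hologram's light is
    reflected to the eye while half the world's light passes through."""
    return np.clip((1 - r) * world + r * holo, 0.0, 1.0)

print(additive(world, bright))       # saturates: reads as opaque
print(additive(world, dark))         # ~= world alone: reads as transparent
print(beam_splitter(world, bright))  # washed out: the ghostly Iron Man look
```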

I believe Microsoft made the bet that, in order to start designing the AR experiences of the future, we actually want to work with colorful, opaque holograms. The trade-off the technology seems to make in order to achieve this is a more limited field of view in the HoloLens development kits.

At the end of the day, we really want both, though. Fortunately we are currently only working with the Development Kit and not with a consumer device. This is the unit developers and designers will use to experiment and discover what we can do with HoloLens. With all the new attention and money being spent on waveguide displays, we can optimistically expect to see AR headsets with much larger fields of view in the future. Ideally, they’ll also keep the high light intensity and broad color palette that we are coming to expect from the current generation of HoloLens technology.

HoloLens Hardware Specs

Microsoft is releasing an avalanche of information about HoloLens this week. Within that heap of gold is, finally, clearer information on the actual hardware in the HoloLens headset.

I’ve updated my earlier post on How HoloLens Sensors Work to reflect the updated spec list. Here’s what I got wrong:

1. Definitely no internal eye-tracking camera. I originally thought this was what the "gaze" gesture was. Then I thought it might be used for calibration of interpupillary distance. I was wrong on both counts.

2. There aren't four depth sensors, only one. I had originally thought those cameras would be used for spatial mapping. Instead, the single depth camera handles that, mapping a 75 degree cone out in front of the headset with a range of 0.8 m to 3.1 m (see the sketch after this list).

3. The four cameras I saw are probably just grayscale cameras – and it's these cameras, along with cool algorithms, that are used to do inside-out position tracking together with the IMU.
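
For a sense of scale, here is a quick bit of arithmetic – mine, derived from the published numbers – showing how wide that 75 degree spatial-mapping cone is at the depth camera's near and far limits:

```python
import math

fov = math.radians(75)      # the depth camera's mapping cone
for d in (0.8, 3.1):        # near and far range limits, in meters
    width = 2 * d * math.tan(fov / 2)
    print(f"at {d} m the mapped area is ~{width:.1f} m across")
```

So the headset maps a patch roughly 1.2 m across at arm's length and almost 5 m across at the far end of the range.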

Here are the final sensor specs:

  • 1 IMU
  • 4 environment understanding cameras
  • 1 depth camera
  • 1 2MP photo / HD video camera
  • Mixed reality capture
  • 4 microphones
  • 1 ambient light sensor

The mixed reality capture is basically a stream that combines digital objects with the video coming through the HD video camera. It is different from the on-stage rigs we've seen, which can calculate the mixed-reality scene from multiple points of view; the mixed reality capture is from the user's point of view only. It can be used for streaming to additional devices like your phone or TV.
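
Conceptually, the capture is just a per-pixel composite of the rendered hologram layer over the camera frame. Here is a toy sketch of that idea – my own illustration, not the actual HoloLens pipeline:

```python
import numpy as np

def mixed_reality_capture(camera_frame, holo_rgb, holo_alpha):
    """Composite the rendered hologram layer over the HD camera frame
    (all arrays are float images in 0..1; holo_alpha is the per-pixel
    coverage of the rendered digital objects)."""
    a = holo_alpha[..., None]            # broadcast alpha across RGB
    return holo_rgb * a + camera_frame * (1.0 - a)

# Toy 2x2 frame: a hologram covers only the left column.
camera = np.full((2, 2, 3), 0.5)
holo = np.zeros((2, 2, 3)); holo[:, 0] = [1.0, 0.3, 0.0]
alpha = np.array([[1.0, 0.0], [1.0, 0.0]])
print(mixed_reality_capture(camera, holo, alpha))
```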

Here are the final display specs:

  • See-through holographic lenses (waveguides)
  • 2 HD 16:9 light engines
  • Automatic pupillary distance calibration
  • Holographic Resolution: 2.3M total light points
  • Holographic Density: >2.5k radiants (light points per radian)

I’ll try to explain “light points” in a later post – if I can ever figure it out.
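
In the meantime, here is my own back-of-the-envelope guess at what the numbers imply, assuming "light points" are roughly pixels, the 2.3M figure is split across both eyes, and each light engine is 16:9 – all assumptions, none of them confirmed by Microsoft. Interestingly, the result lands right inside the 32-40 degree diagonal estimate from the post above:

```python
import math

total_light_points = 2.3e6         # per the spec sheet
per_eye = total_light_points / 2   # assumption: split across two eyes
density = 2.5e3                    # the ">2.5k light points per radian" figure

# Assumption: a 16:9 panel per eye (the "HD 16:9 light engines" line).
h = math.sqrt(per_eye * 9 / 16)
w = h * 16 / 9
print(f"implied panel: {w:.0f} x {h:.0f} light points per eye")

# Angular size ~= pixel count / angular density (small-angle estimate).
fov_h = math.degrees(w / density)
fov_v = math.degrees(h / density)
print(f"implied FOV: {fov_h:.0f} x {fov_v:.0f} degrees, "
      f"~{math.hypot(fov_h, fov_v):.0f} degrees diagonal")
```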

The Timelessness of Emerging Technology

In the previous post I wrote about “experience.” Here I’d like to talk about the “emerging.”

There are two quotations everyone who loves technology or suffers under the burden of technology should be familiar with. The first is Arthur C. Clarke’s quote about technology and magic. The other is William Gibson’s quote about the dissemination of technology. It is the latter quote I am initially concerned with here.

The future has arrived – it’s just not evenly distributed yet.

The statement may be intended to describe the sort of life I live. I try to keep on top of new technologies and figure out what will be hitting markets in a year, in two years, in five years. I do this out of the sheer joy of it but also – perhaps to justify this occupation – in order to adjust my career and my skills to get ready for these shifts. As I write this, I have about fifteen depth sensors of different makes, including five versions of the Kinect, sitting in boxes under a table to my left. I have three webcams pointing at me and an IR reader fifteen inches from my face attached to an Oculus Rift. I'm also surrounded by mysterious wires, soldering equipment and various IoT gadgets that constantly get misplaced because they are so small. Finally, I have about five Moleskine notebooks and an indefinite number of sticky pads on which I jot down ideas and appointments. As emerging technology is disseminated from wherever it is this stuff comes from – probably somewhere in Shenzhen, China – I make a point of getting it a little before other people do.

But the Gibson quote could just as easily describe the way technology first catches on in certain communities – hipsters in New York and San Francisco – then goes mainstream in the first world, and then finally hits the aftermarkets globally. Emerging markets are often the last to receive emerging technologies, interestingly. When they do arrive, however, they are transformative and spread rapidly.

Or it could simply describe the anxiety gadget lovers face when they think someone else has gotten the newest device before they have, the fear that the future has arrived for others but not for them. In this way, social media has become a boon to the distressed broadcast industry by providing a reason to see a new TV episode live before it is ruined by your Facebook friends. It is about making the future a personal experience, rather than a communal one, for a moment so that your opinions and observations and humorous quips about it – for that oh-so-brief period of time – are truly yours and original. No one likes to discover that they are the fifth person responding to a Facebook post, the tenth on Instagram, or the 100th on Twitter to make the same droll comment.

The quote captures the fact that the future is unevenly distributed in space. What it leaves out is that the future is also unevenly distributed in time. The future I see from 2016 is different from the one I imagined in 2001, and different from the horizon my parents saw in 1960, which in turn is different from the one H. G. Wells gazed at in 1933. Our experience of the future is always tuned to our expectations at any point in time.

This is a trope in science fiction, and the original Star Trek pretty much perfected it. The future was less about futurism in that show than about projecting the issues of the '60s into the future in order to instruct a '60s audience about diversity, equality, and the anti-war movement. In that show the future horizon was a mirror that tried to show us what we really looked like through exaggerations, inversions and a bit of makeup.

I’m grateful to Josh Rothman for introducing me to the term retrofuturism in The New Yorker, which perfectly describes this phenomenon. Retrofuturism – whose exemplars include Star Trek as well as the steampunk movement – is the quasi-historical examination of the future through the past.

The exploration of retrofuturism highlights the observation that the future is always relative to the present — not just in the sense that the tortoise will always eventually catch up to the hare, but in the sense that the future is always shaped by our present experiences, aspirations and desires. There are many predictions about how the world will be in ten or twenty years that do not fit into the experience of “the future.” Global warming and scarce resources, for instance, aren’t a part of this sense. They are simply things that will happen. Incrementally faster computers and cheaper electronics, also, are not what we think of as the future. They are just change.

What counts as the future must involve a big and unexpected change (though the fact that we have a discipline called futurism and make a sport of future predictions crimps this unexpectedness) rather than a gradual advancement, and it must improve our lives (or at least appear to; dystopic perspectives and knee-jerk neo-Ludditism crimp this somewhat, too).

Most of all, though, the future must include a sense that we are escaping the present. Hannah Arendt captured this escapist element in her 1958 book The Human Condition, in which she remarks on a common perspective regarding the Sputnik launch the year before as the first "step toward escape from men's imprisonment to the earth." The human condition is to be earth-bound and time-bound. Our aspiration is to be liberated from this, and we pin our hopes on technology. Arendt goes on to say:

… science has realized and affirmed what men anticipated in dreams that were neither wild nor idle … buried in the highly non-respectable literature of science fiction (to which, unfortunately, nobody yet has paid the attention it deserves as a vehicle of mass sentiments and mass desires).

As a desire to escape from the present, our experience of the future is the natural correlate to a certain form of nostalgia that idealizes the past. This conservative nostalgia is not simply a love of tradition but a totemic belief that the customs of the past, followed ritualistically, will help us to escape the things that frighten us or make us feel trapped in the present. A time machine once entered, after all, goes in both directions.

This sense of the future, then, is relative to the present not just in time and space but also emotionally. It is tied to a desire for freedom and in this way even has a political dimension – if only as a tonic to despair and cynicism concerning direct political action. Our love-hate relationship with modern consumerism means we feel enslaved to it yet are also waiting for it to save us with gadgets and toys, in what the philosopher Peter Sloterdijk calls a state of "enlightened false consciousness".

And so we arrive at the second most important quote for people coping with emerging technologies, also known as Clarke’s third law:

Any sufficiently advanced technology is indistinguishable from magic.

What counts as emerging changes over time. It changes based on your location. It changes based on our hopes and our fears. We always recognize it, though, because it feels like magic. It has the power to change our perspective and see a bigger world where we are not trapped in the way things are but instead feel like anything can happen and everything will work out.

Everything will work out not in the end, which is too far away, but just a little bit more in the future, just around the corner. I prepare for it by surrounding myself with gadgets and screens and diodes and notebooks, like a survivalist waiting for civilization not to crumble, but to emerge.

Kinect developer preview for Windows 10

With kind permission from the Product Group, I am reprinting this from an email that went out to the Kinect developers mailing list.

Please also check out Mike Taulty's blog for a walkthrough of what's provided. The short version, though, is that you can now access the perception APIs to work with Kinect v2 in a UWP app, as well as use your Kinect v2 for face recognition as a replacement for your password. Please bear in mind that this is the public preview rather than a final release.

Snip >>

We are happy to announce our public preview of Kinect support for Windows 10.

This preview adds support for using Kinect with the built-in Windows.Devices.Perception APIs, and it also provides beta support for using Kinect with Windows Hello.

Getting started is easy. First, make sure you already have a working Kinect for Windows V2 attached to your Windows 10 PC. The Kinect Configuration Verifier can make sure everything is functioning okay. Also, make sure your Kinect has a good view of your face – we recommend centering it as close to the top or bottom of your monitor as possible, and at least 0.5 meters from your face.

Then follow these steps to install the Kinect developer preview for Windows 10:

1. The first step is to opt in to driver flighting. You can follow the instructions here to set up your registry by hand, or you can use the following text to create a .reg file to right-click and import the settings:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\DriverFlighting\Partner]
"TargetRing"="Drivers"

2. Next, you can use Device Manager to update to the preview version of the Kinect driver and runtime:

   1. Open Device Manager (Windows key + x, then m).
   2. Expand "Kinect sensor devices".
   3. Right-click on "WDF KinectSensor Interface 0".
   4. Click "Update Driver Software…".
   5. Click "Search automatically for updated driver software".
   6. Allow it to download and install the new driver.
   7. Reboot.

Once you have the preview version of the Kinect for Windows V2 driver (version 2.1.1511.11000 or higher), you can start developing sensor apps for Windows 10 using Visual Studio 2015 with Windows developer tools. You can also set up Windows Hello to log in using your Kinect.

1. Go to “Settings->Accounts->Sign-in options”.

2. Add a PIN if you haven’t already.

3. In the Windows Hello section, click “Set up” and follow the instructions to enable Windows Hello!

That’s it! You can send us your feedback at k4w@microsoft.com.

The Knowledge Bubble

Coding is hard and learning to code is perhaps even harder.

The current software community is in a quandary these days about how to learn … much less why we must learn. It is now acknowledged that a software developer must constantly retool himself (as an actor must constantly rebrand herself) in order to remain relevant. There is a lingering threat of sorts as we look around and realize that developers are continually getting younger and younger while the gray hairs are ne’er to be seen.

Let me raise a few problems with why we must constantly learn and relearn how to code before addressing the best way to relearn to code. Let me be "that guy" for a moment. First, the old way of coding (from six months ago) was probably perfectly fine. Nothing is gained by finding a new framework or new platform or, God forbid, a new paradigm for your product. Worse, bad code is introduced while trying to implement code that is only half-understood, and time is lost as developers spend their time learning it. Even worse, the platform you are switching to probably isn't mature (oh, you're going to break AngularJS in the next major release because you found a better way to do things?) and you'll be spending the next two years trying to fix those problems.

Second, you’re creating a maintenance nightmare because there are no best practices for the latest and greatest code you are implementing (code examples you find on the Internet written by marketing teams to show how easy their code is don’t count) and, worse and worser, no one wants to get stuck doing maintenance while you are off doing the next latest and greatest thing six months from now. Everybody wants to be doing the latest and greatest, it turns out.

Third, management is being left behind. The people in software management who are supposed to be monitoring the code and guiding the development practices are hanging on for dear life, trying to give the impression that they understand what is going on, but they do not. And the reason they do not is because they've been managers for the past six cycles while best practices and coding standards have been flipped on their heads multiple times. You, the developers, are able to steamroll right over them with shibboleths like "decoupling" and "agility". Awesome, right? Wrong. Managers actually have experience and you don't – but in the constantly changing world of software development, we are able to get away with "new models" for making software that no one has heard of, that sound like BS, and that everyone will subscribe to just because it is the latest thing.

Fourth, when everyone is a neophyte there are no longer any checks and balances. Everyone is constantly self-promoting and suffering from imposter syndrome. They become paranoid that they will get caught out – which has a destructive impact on the culture – and the only way out of it is to double down on even newer technologies, frameworks and practices that no one has ever heard of, so that no one can contradict you when you start spouting them.

Fifth, this state of affairs is not sustainable. It is the equivalent of the housing and credit bubble of 2008 except instead of money or real estate it concerns knowledge. Let’s call it a knowledge bubble. The signs of a knowledge bubble are 1) the knowledge people possess is severely over-valued 2) there are no regulatory systems in place (independent experts who aren’t consultants or marketing shills) to distinguish properly valued knowledge from BS 3) the people with experience in these matters, having lived through past situations that are similar, are de-valued, depreciated and not listened to. This is why they always seem to hate everyone.

Sixth, the problem that the knowledge industry in coding is trying to solve has not changed for twenty-plus years. We are still trying to gather data, entered using a keyboard, and store it in a database. Most efficiencies introduced over the past twenty years have come primarily from improved hardware speeds, improved storage and lower prices for both. The supposed improvements in moving data from point A to point B and storing it in database C over the past twenty years due to frameworks and languages have been minimal – in other words, these supposed improvements have simply inflated the knowledge bubble. Unless we as individuals are doing truly innovative work with machine learning or augmented reality or NUI input, we are probably just moving data from point A to point B and wasting everyone's time searching for more difficult and obscure ways to do it.

So why do we do it? A lot of this is due to the race for higher salaries. In olden days – which we laugh at – coders were rewarded and admired for writing lines of code. The more you wrote, the more kung fu you obviously knew. Over time, it became apparent that this was foolish, but the problem of determining who had the best kung fu was still extant, so we came up with code mastery. Unfortunately, there's only so much code you can master – unless we constantly create new code to master! Which is what we all collectively did. We blame the problems of the last project on faulty frameworks and faulty processes, go hunting for new ones, and embrace the first ones we find uncritically because, well, it's something new to master. This, in turn, provides us with more ammunition to come back to our gray-haired and two-years-behind bosses (who are no longer coders but also not trained managers) and ask for more titles and more money. (Viceroy of Software Development sounds really good, doesn't it? Whatever it means, it's going to look great on my business card.)

On the other hand, constantly learning also keeps us fresh. All work and no play makes Jack a dull boy, after all. There have been studies that demonstrate that an active mental life will keep us younger, put off the symptoms of Alzheimer’s and dementia, and generally allow us to live longer, happier lives. Bubbles aren’t all bad.

So on to the other problem. What is the best way to learn? I personally prefer books. Online books, like Safari Books Online, are alright, but I really like the kind I can hold in my hands. I'm certainly a fan of videos like the ones Pluralsight provides, but they don't totally do it for me.

I actually did an audition video for Pluralsight a while back about virtual reality which didn’t quite get me the job. That’s alright since Giani Rosa Gallini ended up doing a much better job of it than I could have. What I noticed when I finished the audition video was that my voice didn’t sound the way I thought it did. It was much higher and more nasally than I expected. I’m not sure I would have enjoyed listening to me for three hours. I’ve actually noticed the same thing with most of the Pluralsight courses – people who know the material are not necessarily the best people to present the material. After all, in movies and theater we don’t require that actors write their own lines. We have a different role, called the writer, that performs that duty.

Not that the voice acting on Pluralsight videos is bad. I'm actually very fond of listening to Iris Classon's voice – she has a lilting, non-specific European accent that is extremely cool – as well as Andras Velvart's charming Hungarian drawl. In general, though, meh to the Pluralsight voice actors. I think part of the problem is that the general roundness of American vowels gets further exaggerated when software engineers attempt to talk folksy in order to create a connection with the audience. It's a strange false Americanism I find jarring. On the other hand, the common-man approach can work amazingly well when it is authentic, as Jamie King does in his YouTube videos on C++ pointers and references – it sounds like Seth Rogen is teaching you pointer arithmetic.

Wouldn’t it be fun, though, to introduce some heavy weight voice acting as a premium Pluralsight experience? The current videos wouldn’t have to change at all. Just leave them be. Instead, we would have someone write up a typescript of the course and then hand it over to a voice actor to dub over the audio track. Voila! Instantly improved learning experience.

Wouldn't you like to learn AngularJS Unit Testing in-depth, Using ngMock with Sir Ian McKellen? Or how about C# Fundamentals with Patrick Stewart? MongoDB Administration with Helen Mirren. Finally, what about Ethical Hacking voiced by Angelina Jolie?

It doesn’t even have to be big movie actors, either. You can learn from Application Building Patterns with Backbone.js narrated by Steve Downes, the voice behind Master Chief, or Scrum Master Skills narrated by H. Jon Benjamin, the voice of Archer.

Finally, the voice actors could even do their work in character for an additional platinum experience for diamond members – would you prefer being taught AngularJS Unit Testing by Sir Ian McKellen or by Magneto?

For a small additional charge, you could even be taught by Gandalf the Grey.

Think of the sweet promotion you’d get with that on your resume.

Virtual Reality Device Showdown at CES 2016

Virtual Reality had its own section at CES this year in the Las Vegas Convention Center South Hall. Oculus had a booth downstairs near my company's booth, while the OSVR (Open Source Virtual Reality) device was being demonstrated upstairs in the Razer booth. Project Morpheus (now PlayStation VR) was being demoed in the large Sony section of North Hall. The HTC Vive Pre didn't have a booth but instead opted for an outdoor tent up the street from North Hall as well as a private ballroom in the Wynn Hotel to show off the device.

It would be convenient to be able to tell you which VR head mounted display is best, but the truth is that they all have their strengths. I’ll try to summarize these pros and cons first and then go into details about the demo experiences further down.

  • HTC Vive Pre and Oculus Rift have nearly identical specs
  • Pro: Vive currently has the best peripherals (Steam controllers + Lighthouse position tracking), though this can always change
  • Pro: Oculus is first out of the gate with price and availability of the three major players
  • Con: Oculus and Vive require expensive latest gen gaming computers to run in addition to the headsets ($900 US +)
  • Pro: PlayStation VR works with a reasonably priced PlayStation
  • Pro: PlayStation Move controllers work really well
  • Pro: PlayStation has excellent relationships with major gaming companies
  • Con: PlayStation VR has lower specs than Oculus Rift or HTC Vive Pre
  • Con: PlayStation VR has an indeterminate release date (maybe summer?)
  • Pro: OSVR is available now
  • Pro: OSVR costs only $299 US, making it the least expensive VR device
  • Con: OSVR has the lowest specs and is a bit DIY
  • Pro: OSVR is a bit DIY

You’ll also probably want to look at the numbers:

|                | Oculus Rift   | HTC Vive Pre     | PlayStation VR   | OSVR        | Oculus DK2      |
|----------------|---------------|------------------|------------------|-------------|-----------------|
| Resolution     | 2160 x 1200   | 2160 x 1200      | 1920 x 1080      | 1920 x 1080 | 1920 x 1080     |
| Res per eye    | 1080 x 1200   | 1080 x 1200      | 960 x 1080       | 960 x 1080  | 960 x 1080      |
| FPS            | 90 Hz         | 90 Hz            | 120 Hz           | 60 Hz       | 60 / 75 Hz      |
| Horizontal FOV | 110 degrees   | 110 degrees      | 100 degrees      | 100 degrees | 100 degrees     |
| Headline Game  | Eve: Valkyrie | Elite: Dangerous | The London Heist |             | Titans of Space |
| Price          | $600          | ?                | ?                | $299        | $350 / sold out |

Let's talk about Oculus first because they started the current VR movement and really deserve to be first. Everything follows from that amazing initial Kickstarter campaign. The Oculus installation was an imposing black fortress in the middle of the hall, with lines winding around it full of people anxious to get a seven-minute demo of the final Oculus Rift. This was the demo everyone at CES was trying to get into. I managed to get into line half an hour early one morning because I was working another booth. Like at most shows, all the Oculus helpers were exhausted and frazzled but very nice. After some hectic moments of being handed off from person to person, I was finally led into a comfortable room on the second floor of Fortress Oculus and got a chance to see the latest device. I've had the DK2 for months and was pleased to see all the improvements that have been made to the gear. It was comfortable on my head and easy to configure, especially compared to the developer kit model, which I need a coin to adjust. I was placed in a fixed-back chair and an Xbox controller was put into my hand (which I think means the Oculus Rift is exclusively a PC device until the Oculus Touch is released), and I was given the choice of eight or so games, including a hockey game in which I could block the puck and some pretty strange-looking games. I was told to choose carefully, as the game I chose would be the only game I would be allowed to play. I chose the space game, Eve: Valkyrie, and until my ship exploded I flew 360 degrees through the void, fighting off an alien armada while trying to protect the carriers in my space fleet.

What can one say? It was amazing. I felt fully immersed in the game and completely forgot about the rest of the world, the marketing people around me, the black fortress, the need to get back to my own booth, etc. If you are willing to pay $700 – $800 for your phone, then paying $600 for the Oculus Rift shouldn't be such a big deal. And then you need to spend another $900 or more for a PC that will run the Rift for you, but at least you'll have an awesome gaming machine.

Or you could just wait for the HTC Vive Pre, which has identical specs, feels just as nice, and even has its own space game at launch called Elite: Dangerous. While the Oculus booth was targeted largely at fans, the Vive was shown in two different places to different audiences. A traveling HTC Vive bus pulled out tents and set up on the corner opposite Convention Hall North. This was for fans to try out the system, and it involved an hour wait for outdoor demos while demos inside the bus required signing up. I went down the street to the Wynn Hotel, where press demos run by the marketing team were being organized in one of the hotel ballrooms. No engineers to talk to, sadly.

Whereas Oculus's major announcement was about pricing, availability and the opening of pre-orders, HTC's announcement was about a technology breakthrough that didn't really seem like much of one. A color camera was placed on the front of the HMD that outlines real-world objects around the player in order, among other things, to help the player avoid bumping into things when using the Vive Pre with the Lighthouse peripherals to walk around a VR experience.

The Lighthouse experience is cool, but the experience I most enjoyed was playing Elite: Dangerous with two mounted joysticks. This is a game I had played on the DK2 until it stopped working there following my upgrade to Windows 10 (which, as a Microsoft MVP, I'm pretty much required to do), so I was pretty surprised to see the game in the HTC press room and even more surprised when I spent an hour chatting away happily with one of ED's marketing people.

So this is a big tangent, but here's what I think happened and why ED's Oculus support became rocky a few months ago. Oculus appears to have started courting Eve: Valkyrie a while back, even though Elite: Dangerous was the more mature game. Someone must have decided that you don't need two space games for one device launch, and so ED drifted over to the HTC Vive camp. And suddenly, support for the DK2 went on the back burner at ED while Oculus made breaking changes in their SDK release, and many people who had gotten ED to play with the Rift or gotten the Rift to play with ED were sorely disappointed. At this point, you can make Elite: Horizons (the upgrade from ED) work in VR with the Oculus, but it is tricky and not documented. You have to download SteamVR, even if you didn't buy Elite: Horizons from Steam, and jury-rig your monitor settings to get everything running well in the Oculus direct mode. Needless to say, it's clear that Elite's games are going to run much more nicely if you buy Steam's Vive and run them through Steam.

As for comparing the Oculus Rift and HTC Vive Pre, it's hard to say. They have the same specs. They both will need powerful computers to play on, so the cost of ownership goes beyond simply buying the HMD. Oculus has the Touch controllers, but we don't really know when they will be ready. HTC Vive has the Lighthouse peripherals that allow you to walk around and the specialized Steam controllers, but we don't know how much they will cost.

For the moment, then, the best way to choose between the two VR devices comes down to which space flying game you think you would like more. Elite: Dangerous is mainly a community exploration game with combat elements. Eve: Valkyrie is a space combat game with exploration elements. Beyond that, Palmer Luckey did get the ball rolling on this whole VR thing, so all other things being equal, mutatis mutandis, you should probably reward him with your gold. Personally, though, I really love Elite: Horizons and being able to walk around in VR.

But then again, one could always wait for PlayStation VR (the head-mounted display formerly known as Project Morpheus). The PlayStation VR demo was hidden at the back of the PlayStation demos, which in turn were at the back of the Sony booth, which was at the far corner of the Las Vegas Convention Center North Hall. In other words, it was hard to find and a hike to get to. Once you got to it, though, it became clear that this was, in the scheme of things, a small play for the extremely diversified Sony. There wasn't really enough room for the four demos Sony was showing and the lines were extremely compressed.

Which is odd because, for me at least, the PlayStation VR was the only thing I wanted to see. It’s by far the prettiest of the four big VR systems. While the resolution is slightly lower than that of the Oculus Rift or HTC Vive Pre, the frame rate is higher. Additionally, you don’t need to purchase a $900 computer to play it. You just need a PlayStation 4. The PlayStation Move controllers, as a bonus, finally make sense as VR controllers.

Best of all, there’s a good chance that PlayStation will end up having the best VR games (including Eve: Valkyrie) because those relationships already exist. Oculus and HTC Vive will likely clean up on the indie-game market since their dev and deployment story is likely going to be much simpler than Sony’s.

I waited forty minutes to play the newest The London Heist demo. In it, I rode shotgun in a truck next to a London thug as motorcycles and vans with machine-gun-wielding riders passed by and shot at me. I shot back, but strangely the most fascinating part for me was opening the glove compartment with the Move controllers and fiddling with the radio controls.

Prepare for another digression, or just skip ahead if you like. While I was using PlayStation Move controllers (those two lit-up things that look like neon ice-cream cones) in the Sony booth to change the radio station in my virtual van, BMW had a tent outside the convention center where they demoed a radio tuner in one of their cars that responded to hand gestures. You spun a finger clockwise to scan through the radio channels. Two fingers pressed forward would pause a track. A wave would dismiss. Having worked with Kinect gestures for the past five years, I was extremely impressed with how good and intuitive these gestures were. They can even be re-programmed, by the way, to perform other functions. One night, I watched my boss close his eyes and perform these gestures from memory in order to lock them into his motor memory. They were that good, so if you have a lot of money, go buy all four VR sets as well as a BMW Series 7 so you can try out the radio.

But I digress. The London Heist is a fantastic game and the PlayStation VR is pretty great. I only wish I had a better idea of when it is being released and how much it will cost.

Another great thing about the Sony PlayStation VR area was that it was out in the open, unlike the VR demos from other companies. You could watch (for about 40 minutes, actually) as other people went through their moves. Eventually, we'll start seeing a lot of these shots contrasting what people think they are doing in VR with what they are really doing. It starts off comically, but over time becomes very interesting as you realize the extent to which we are all constantly living out experiences in our imaginations and having imaginary conversations that no one around us is aware of – the rich interior life that a VR system is particularly suited to reveal to us.

I found the OSVR demo almost by accident while walking around the outside of the Razer booth. There was a single small room with a glass window in the side where I could spy a demo going on. I had to wait for Tom's Hardware to go through first, and also someone from Gizmodo, but after a while they finally invited me in and I got to talk to honest-to-goodness engineers instead of marketing people! OSVR demoed a 3D cut scene rather than an actual game, and there was a little choppiness, which may have been due to IR contamination from the overhead lights. I don't really know. But for $299 it was pretty good and, if you aren't already the proud owner of an Oculus DK2, which has the same specs, it may be the way to go. It also has upgradeable parts, which is pretty interesting. If you are a hobbyist who wants to get a better understanding of how VR devices work – or if you simply want a relatively inexpensive way to get into VR – then this might be a great solution.

You could also go even cheaper, down to $99, and get a Samsung Gear VR (or one of a dozen or so similar devices) if you already have a $700 phone to fit into it. Definitely demo a full VR head-mounted display first, though, to make sure the more limited Gear VR-style experience is what you really want.

I also wanted to make quick mention of AntVR, which is an indie VR solution and Kickstarter that uses fiducial markers instead of IR emitters/receivers for position tracking. It's a full walking VR system that looked pretty cool.

If walking around with VR goggles seems a bit risky to you, you could also try a harness rig like Omni’s. Ignoring the fact that it looks like a baby’s jumporee, the Omni now comes with custom shoes so running inside it is easier. With practice, it looks like you can go pretty fast in one of these things and maybe even burn some serious calories. There were lots of discussions about where you would put something like this. It should work with any sort of VR setup: the demo systems were using Oculus DK2. While watching the demo I kept wanting to eat baby carrots for some reason.

According to various forecasters, virtual reality is going to be as important a cultural touchstone for children growing up today as the Atari 2600 was for my generation.

To quickly summarize (or at least generalize) the benefits of each of the four main VR systems coming to market this year:

1. Oculus Rift – first developed and first to release a full package

2. HTC Vive Pre – best controllers and position tracking

3. PlayStation VR – best games

4. OSVR – best value

You might be a HoloLens developer if

You can currently sign up to be selected to receive a HoloLens dev kit sometime in the first quarter of 2016. The advertised price is $3000, and there's been lots of kerfuffle over this online, both pro and con. On the one hand, a high price tag for the dev kit ensures that only those who are really serious about this amazing technology will be jumping in. On the other hand, there's the justifiable concern that only well-heeled consulting companies will be able to get their hands on the hardware at this entry price, keeping it out of the hands of indie developers who may (or may not) be able to do the most innovative and exciting things with it.

I feel that both perspectives have an element of truth behind them. Even with the release of the Kinect a few years ago (which had a much, much lower barrier to entry) there were similar conversations concerning price and accessibility. All this comes down to a question of who will do the most with the HoloLens and have the most to offer. In the long run, after all, it isn't the hardware that will be expensive but the amount of time garage hackers as well as industry engineers are going to invest in organizing, designing and building experiences. At the end of the day (again, from my experience with the Kinect), 80 percent of these would-be bleeding-edge technologists will end up throwing up their hands, while the truly devoted, it will turn out, never even blinked at the initial price tag.

Concerning the price tag, however, I feel like we are underestimating. For anyone currently planning out AR experiences, is only one HoloLens really going to be enough? I can currently start building HoloLens apps using Unity 3D and have a pretty good idea of how they will work out when (if) I eventually get a device in my hands. There will be tweaking, obviously, and lots of experiential, UX, and performance revelations to take into account, but I can pretty much start now. What I can't do right now – or even easily imagine – is how to collaborate and share experiences between two HoloLenses. And for me, this social aspect is the most fascinating and largely unexplored aspect of augmented reality.

Virtual reality will have its own forms of sociality that largely revolve around using avatars for interrelations. In essence, virtual reality is always a private experience that we shim social interactions into.

Augmented reality, on the other hand, is essentially a social technology that, for now, we are treating as a private one. Perhaps this is because we currently take VR experiences as the template for our AR experiences. But this is misguided. An inherently and essentially social technology like HoloLens should have social awareness as a key aspect of every application written for it.
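
To make that concrete: the smallest possible shared-HoloLens experience is just two devices agreeing on a common spatial anchor and exchanging hologram poses relative to it. Here is a purely conceptual sketch – my own invention, not a HoloLens API, and the peer address and message format are hypothetical:

```python
import json
import socket

PEER = ("192.168.1.42", 9000)  # hypothetical address of the second headset

def broadcast_hologram_pose(sock, anchor_id, position, rotation):
    """Send one hologram's pose, expressed relative to a spatial
    anchor that both devices have independently located."""
    msg = json.dumps({
        "anchor": anchor_id,   # shared reference point in the room
        "pos": position,       # meters, anchor-relative
        "rot": rotation,       # quaternion (x, y, z, w)
    })
    sock.sendto(msg.encode(), PEER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
broadcast_hologram_pose(sock, "kitchen-table", [0.0, 0.1, 0.5],
                        [0.0, 0.0, 0.0, 1.0])
```

Everything hard about social AR – agreeing on the anchor, keeping state consistent, handling latency – lives outside this sketch, which is exactly why it deserves to be designed for from the start.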

Can you build a social experience with just one HoloLens? This leaves me wondering whether the price tag for the HoloLens Development Edition is really just $3000 as advertised, or actually $6000.

Finally, what does it take to be the sort of person who doesn’t blink at coughing up 3K – 6K for an early HoloLens?

You might be a HoloLens developer if:

  1. Your most prized possession is a notebook in which you are constantly jotting down your ideas for AR experiences.
  2. You are spending all your free time trying to become better with Unity, Unreal and C++.
  3. You are online until 3 in the morning comparing Microsoft and Magic Leap patents.
  4. You’ve narrowed all your career choices down to what gives you skills useful for HoloLens and what takes away from that.
  5. You’ve subscribed to Clemente Giorio’s HoloLens Developers group and Gian Paolo Santapaolo’s HoloLens Developers Worldwide group on Facebook.
  6. You know the nuanced distinctions between various waveguide displays.
  7. You don’t get “structured light” technology and “light field” technology confused.
  8. You practice imaginary gestures with your hands to see what “feels right”.
  9. You watch the Total Recall remake to laugh at what they get wrong about AR.
  10. You are still watching the TV version of Minority Report to try to see what they are getting right about AR.

Please add your own “You might be a HoloLens developer if” suggestions in the comments. 🙂