The Imaginative Universal

Studies in Virtual Phenomenology -- by @jamesashley, Kinect MVP and author

The HoloCoder’s Resume


In an ideal world, the resume is an advertisement for our capabilities and the interview process is an audit of those claims. Many factors have contributed to complicating what should be a simple process.

 


The first is the rise of professional IT recruiters and the automation of the resume process. Recruiters bring a lot to the game, offering a wider selection of IT job candidates to hiring companies, on the one hand, and providing a wider selection of jobs to job hunters, on the other. Automation requires standardization, however, and this has led to an overuse of key search terms when matching candidates to positions. The process begins with job specs from the hiring company -- which, parenthetically, often have little to do with the actual job itself, highlighting the frequent disconnect between IT departments and HR departments. A naive job hunter would try to describe her actual experience, which typically will not match the job spec as written by HR. At this point the recruiter helps the job hunter modify the details of her resume to match the interface provided by the hiring company by injecting and prioritizing key buzzwords in the resume. “I’m sorry, but Lolita, Inc will never hire you unless you have synesthesia listed in your job history. You do have experience with synesthesia, don’t you?”

 


All of this gerrymandering is required in order to get to the next step, the job interview. Unfortunately, the people doing the job interview have little confidence in the resume as a vehicle for accurately describing a candidate’s actual abilities. First of all, they know that recruiters have already gone over it to eliminate useful information and replace it with keywords instead. Next, the interviewers typically haven’t actually seen the HR job specs and do not understand what kind of role they are hiring for. Finally, none of the interviewers have any particular training in doing job interviews or any particular skill in ascertaining what a candidate knows. In short, the interviewer doesn’t know what he’s looking for and wouldn’t know how to get it if he did.


A savvy interviewer will probably realize that he is looking for the sort of generalist that Joel Spolsky describes as “smart and gets things done,” but how do you interview for that? The tools the interviewer is provided with are not generic but instead highly specific technology skills. At some point, this impedance mismatch between technology-specific interview questions on the one hand and a desire to hire generalists on the other (technology, after all, simply changes too quickly to look for only one skillset) led to an increased reliance on behavioral questions and eventually Google-style language games. Neither of these, it turns out, particularly helps in hiring good candidates.


Once we historically severed any attempt to match interview questions to actual skills, the IT interview process was allowed to become a free-floating hermeneutic exercise. Abstruse but non-specific questions involving principles and design patterns have taken over the process. This has led to two strange outcomes. On the one hand, job applicants are now required to be fluent in technical information they will never actually use in their jobs. Literary awareness of ten-year-old blog posts by Martin Fowler is more important than actually knowing how to get things done. And if the job interviewer exhibits any self-awareness when he turns down a candidate for not being clear on the justified uses of the CQRS pattern (there are none), it will not be because the candidate didn’t know something important for the job but rather because the candidate was unwilling to play the software architecture language game, and anyone unwilling to play the game is likely going to be a poor cultural fit.

The other consequence of an increased reliance on abstruse and non-essential IT knowledge has been the rise of the Architect across the industry. The IT industry has created a class of software developers who cannot actually develop software but instead specialize in telling other people what is wrong with their code. The architect is a specialization that probably indicates a deviant phase in the software industry – but at the same time it is a natural outcome of our IT job spec – resume – interview process. The skills of a modern software architect – knowledge of abstruse information and jargon, often combined with an inability to get things done – are what we currently look for in our IT hiring rituals.


This distinction between the ritual of IT hiring and the actual goals of IT hiring becomes most apparent when we look for specific as opposed to generalist skills. We hire generalists to be on staff over a long period. We hire specialists to perform difficult but real tasks that can eventually be handed over to our generalists – when we need to get something specific done.

Which gets us to the point of this post. What are the skills we should look for when hiring a HoloLens developer? And what are the skills a HoloLens developer should be highlighting on her resume?

At this point in time, when there is still no SDK generally available for the HoloLens and all HoloLens coders are working for Microsoft under various NDAs, it is hard to say. Fortunately, important clues have been provided by the recent announcement of the first consulting agency dedicated to the HoloLens, co-founded by someone who has been working on HoloLens applications for Microsoft over the past year. The company Object Theory was just started by Michael Hoffman and Raven Zachary, and they have put up a website to advertise this new venture.

Among the tasks involved in creating this sort of extremely specialized website is explaining what capabilities you offer. First, they offer experience since Hoffman has worked on several of the demos that Microsoft has been exhibiting at conferences and in promotional videos. But is this enough of a differentiator? What skills do they have to offer to a company looking to build a HoloLens application?

This is part of the fascination of their “Work” page. It cannot describe any actual work since the company just started and hasn’t technically done any technical work. Instead, it provides a list of capabilities that look amazingly like resume keywords – but different from any keywords you may have come across:

 

          • Entirely new Natural User Interfaces (NUI)
          • Surface reconstruction and object persistence
          • 3D Spatial HRTF audio
          • Mesh reduction, culling and optimization
          • Baked shadows and ambient occlusion
          • UV mapping
          • Optimized render shaders
          • Efficient WiFi connectivity to back-end services
          • Unity and C#
          • Windows 10 APIs

These, in fact, are probably the sorts of skills you should be putting on your resume – or learning about in order to put on your resume – if getting a job programming HoloLens is your goal.

The flip side of this coin is that the list can also be turned into a great set of interview questions for someone thinking of hiring for HoloLens development, for instance:

Explain the concept of NUI to me.

Tell me about your experience with surface reconstruction and object persistence.

What is 3D spatial HRTF audio and why is it important for engineering HoloLens apps?

What are mesh reduction, mesh culling and mesh optimization?

Do you know anything about baked shadows and ambient occlusion?

Describe how you would go about performing UV mapping.

What are optimized render shaders and when would you need them?

How does the HoloLens communicate with external services such as a database?

What are the advantages and disadvantages of developing in Unity vs C#?

Describe the Windows 10 APIs that are used in HoloLens application development.

 

Then again, maybe these questions are a bit too abstruse?

HoloLens App Development with Unity3D

A few months ago I wrote a speculative piece about how HoloLens might work with XAML frameworks based on the sample applications Microsoft had been showing.

Even though Microsoft has still released scant information about integration with 3D platforms, I believe I can provide a fairly accurate walkthrough of how HoloLens development will occur for Unity3D. In fact, assuming I am correct, you can begin developing games and applications today and be in a position to release a HoloLens experience shortly after the hardware becomes available.

To be clear, though, this is just speculative and I have no insider information about the final product that I can talk about. This is just what makes sense based on publicly available information regarding HoloLens.

Unity3D integration with third party tools such as Kinect and Oculus Rift occurs through plugins. The Kinect 2 plugin can be somewhat complex as it introduces components that are unique to the Kinect’s capabilities.

The eventual HoloLens plugin, on the other hand, will likely be relatively simple since it will almost certainly be based on a pre-existing component called the FPSController (in Unity 5.1 which is currently the latest).

To prepare for HoloLens, you should start by building your experience with Unity 5.1 and the FPSController component. Here’s a quick rundown of how to do this.

Start by installing the totally free Unity 5.1 tools: http://unity3d.com/get-unity/download?ref=personal


Next, create a new project and select 3D for the project type.


Click the button for adding asset packages and select Characters. This will give you access to the FPSController. Click done and continue. The IDE will now open with a practically empty project.


At this point, a good Unity3D tutorial will typically show you how to create an environment. We’re going to take a shortcut, however, and just get a free one from the Asset Store. Hit Ctrl+9 to open the Asset Store from inside your IDE. You may need to sign in with your Unity account. Select the 3D Models | Environments menu option on the right and pick a pre-built environment to download. There are plenty of great free ones to choose from. For this walkthrough, I’m going to use the Japanese Otaku City by Zenrin Co, Ltd.


After downloading is complete, you will be presented with an import dialog box. By default, all assets are selected. Click on Import.


Now that the environment you selected has been imported, go to the Scenes folder in your project window and select a sample scene from the downloaded environment. This will open up the city or dungeon or forest or whatever environment you chose. It will also make all the different assets and components associated with the scene show up in your Hierarchy window. At this point, we want to add the first-person shooter controller into the scene. You do this by selecting the FPSController from the project window under Assets/Standard Assets/Characters/FirstPersonCharacter/Prefabs and dragging the FPSController into your Hierarchy pane.
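If you prefer to script setup steps like this, the same drag-and-drop operation can be done from editor code. This is only a sketch: the asset path below assumes the Standard Assets layout described above, and the menu item name is my own invention, but AssetDatabase and PrefabUtility are standard UnityEditor APIs.

```csharp
using UnityEngine;
using UnityEditor;

// Editor-only helper that adds the FPSController prefab to the open scene,
// with the same result as dragging it into the Hierarchy pane by hand.
public static class SceneSetup
{
    [MenuItem("Tools/Add FPSController")]
    public static void AddFpsController()
    {
        var prefab = AssetDatabase.LoadAssetAtPath<GameObject>(
            "Assets/Standard Assets/Characters/FirstPersonCharacter/Prefabs/FPSController.prefab");
        if (prefab != null)
            PrefabUtility.InstantiatePrefab(prefab); // places an instance in the scene
        else
            Debug.LogWarning("FPSController prefab not found at the expected path.");
    }
}
```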


This puts a visual representation of the FPS controller into your scene. Select the controller with your mouse and hit “F” to zoom in on it. You can see from the visual representation that the FPS controller is basically a collision field that can be moved with a keyboard or gamepad, with a directional camera component and a sound component attached. The direction the camera faces ultimately becomes the view that players see when you start the game.
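To make the description above concrete, here is a stripped-down sketch of what the FPSController does internally: a CharacterController capsule moved by keyboard input, with mouse-driven camera rotation. The class and field names are my own; the real Standard Assets script adds head bob, footstep audio, jumping, and smoothing on top of this.

```csharp
using UnityEngine;

// Minimal first-person movement: the capsule yaws with the mouse and walks
// with the keyboard; only the child camera pitches up and down.
[RequireComponent(typeof(CharacterController))]
public class MinimalFirstPersonController : MonoBehaviour
{
    public Camera playerCamera;      // the child camera that becomes the player's view
    public float moveSpeed = 5f;
    public float lookSensitivity = 2f;

    private CharacterController controller;
    private float pitch;             // accumulated up/down look angle

    void Start()
    {
        controller = GetComponent<CharacterController>();
    }

    void Update()
    {
        // Mouse look: yaw rotates the whole capsule, pitch only tilts the camera.
        transform.Rotate(0f, Input.GetAxis("Mouse X") * lookSensitivity, 0f);
        pitch = Mathf.Clamp(pitch - Input.GetAxis("Mouse Y") * lookSensitivity, -80f, 80f);
        playerCamera.transform.localEulerAngles = new Vector3(pitch, 0f, 0f);

        // Keyboard movement relative to where the capsule is facing;
        // SimpleMove applies gravity and collision for us.
        Vector3 move = transform.forward * Input.GetAxis("Vertical")
                     + transform.right * Input.GetAxis("Horizontal");
        controller.SimpleMove(move * moveSpeed);
    }
}
```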


Here is another scene that uses the Decrepit Dungeon environment package by Prodigious Creations and the FPS controller. The top pane shows a design view while the bottom pane shows the gamer’s first-person view.


You can even start walking through the scene inside the IDE by simply selecting the blue play button at the top center of the IDE.

The way I imagine the HoloLens integration to work is that another version of FPS controller will be provided that replaces mouse controller input with gyroscope/magnetometer input as the player rotates her head. Additionally, the single camera view will be replaced with a two camera rig that sends two different, side-by-side feeds back to the HoloLens device. Finally, you should be able to see how all of this works directly in the IDE like so:
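In the same speculative spirit, a two-camera rig like the one just described could be sketched as follows. Everything here is an assumption rather than a Microsoft API: the names, the eye offsets, and the use of the device gyroscope as a stand-in for real head tracking are all my own invention. Each eye's camera renders into one half of the screen via its viewport rect, producing the side-by-side feeds.

```csharp
using UnityEngine;

// Speculative stereo camera rig: two offset cameras, each rendering
// to half the screen, parented to a transform that rotates with the head.
public class SimpleStereoRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f; // assumed interpupillary distance in meters

    void Start()
    {
        CreateEyeCamera("LeftEye", -eyeSeparation / 2f, new Rect(0f, 0f, 0.5f, 1f));
        CreateEyeCamera("RightEye", eyeSeparation / 2f, new Rect(0.5f, 0f, 0.5f, 1f));
    }

    void CreateEyeCamera(string name, float xOffset, Rect viewport)
    {
        var go = new GameObject(name);
        go.transform.SetParent(transform, false);
        go.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        var cam = go.AddComponent<Camera>();
        cam.rect = viewport; // render this eye into its half of the screen
    }

    void Update()
    {
        // Stand-in for head tracking: on a real device this rotation would come
        // from the gyroscope/magnetometer rather than from a mouse or gamepad.
        if (Input.gyro.enabled)
            transform.localRotation = Input.gyro.attitude;
    }
}
```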


There is very good evidence that the HoloLens plugin will work something like I have outlined and will be approximately this easy. The training sessions at the Holographic Academy during /Build pretty much demonstrated this sort of toolchain. Moreover, this is how Unity3D currently integrates with virtual reality devices like Gear VR and Oculus Rift. In fact, the screen cap of the Unity IDE above is from an Oculus game I’ve been working on.

So what are you waiting for? You already have pretty much everything you need to start building complex HoloLens experiences. The integration itself, when it is ready, should be fairly trivial, and much of the difficult programming will be taken care of for you.

I’m looking forward to seeing all the amazing experiences people are building for the HoloLens launch day. Together, we’ll change the future of personal computing!

Marshall McLuhan and Understanding Digital Reality


While slumming on the internet looking for new content about digital media I came across this promising article entitled Virtual Reality, Augmented Reality and Application Development. I was feeling hopeful about it until I came across this peculiar statement:

“Of the two technologies, augmented reality has so far been seen as the more viable choice…”

What a strange thing to write. Would we ever ask whether the keyboard or the mouse is the more viable choice? The knife or the fork? Paper or plastic? It should be clear by now that this is a false choice and not a case of having your cake or eating it, too. We all know that the cake is a lie.

But this corporate blog post was admittedly not unique in creating a false choice between virtual reality and augmented reality. I’ve come across this before, and it occurred to me that this might be an instance of a category mistake. A category mistake is itself a category of philosophical error identified by the philosopher Gilbert Ryle to tackle the sticky problem of Cartesian dualism. He pointed out that even though it is generally accepted in the modern world that mind is not truly a separate substance from body but is in fact a formation that emerges in some fashion out of the structure of our brains, we nevertheless continue to divide the things of the world, almost as if by accident, into two categories: mental stuff and material stuff.


There are certainly cases of competing technologies where one eventually dies off. The most commonly cited example is Betamax versus VHS. Of course, they both ultimately died off, and it is meaningless today to claim that either one really succeeded. There are many, many more examples of apparent technological duels in which neither party ultimately falls or concedes defeat. PC versus Mac. IE vs Chrome. NHibernate vs EF. Etc.

The rare case is when one technology completely dominates a product category. The few cases where this has happened, however, have so captured our imaginations that we forget it is the exception and not the rule. This is the case with category busters like the iPhone and the iPad – brands that are so powerful it has taken years for competitors to even come up with viable alternatives.

What this highlights is that, typically, technology is not a zero-sum game. The norm in technology is that competition is good and leads to improvements across the board. Competition can grow an entire product category. The underlying lie, however, is perhaps that each competitor tells itself that it is in a fight to the death and that it is the next iPhone. This is rarely the case. The lie beneath that lie is that each competitor is hoping to be bought out by another, larger company for billions of dollars and has to look competitive up until that happens. A case of having your cake and eating it, too.


There is, however, a category in which one set of products regularly displace another set of products. This happens in the fashion world.

Each season, each year, we change out our cuts, our colors and accessories. We put away last year’s fashions and wouldn’t be caught dead in them. We don’t understand how these fashion changes occur or what rules they obey but the fashion houses all seem to conform to these unwritten rules of the season and bring us similar new things at the proper time.

This is the category mistake that people make when they ask things such as which is more viable: augmented reality or virtual reality? Such questions belong to the category of fashion (which is in season: earth tones or pastels?) and not to technology. In the few unusual cases where this does happen, the category mistake is clearly in the opposite direction. The iPhone and iPad are not technologies: they are fashion statements.

Virtual reality and augmented reality are not fashion statements. They aren’t even technologies in the way we commonly talk about technology today – they are not software platforms (though they require SDKs), they are not hardware (though they are useless without hardware), they are not development tools (you need 3D modeling tools and game engines for this). In fact, they have more in common with books, radio, movies and television than they do with software. They are new media.


A medium, etymologically speaking, is the thing in the middle. It is a conduit from a source to a receiver – from one world to another. A medium lets us see or hear things we would otherwise not have access to. Books allow us to hear the words of people long dead. Radio transmits words over vast distances. Movies and television let us see things that other people want us to see and we pay for the right to see those things. Augmented reality and virtual reality, similarly, are conduits for new content. They allow us to see and hear things in ways we haven’t experienced content before.

The moment we cross over from talking about technology and realize we are talking about media, we automatically invoke the spirit of Marshall McLuhan, the author of Understanding Media: The Extensions of Man. McLuhan thought deeply about the function of media in culture and many of his ideas and aphorisms, such as “the medium is the message,” have become mainstays of contemporary discourse. Other concepts that were central to McLuhan’s thought still elude us and continue to be debated. Among these are his two media categories: hot and cold.


McLuhan claimed that any medium is either hot or cold, warm or cool. Cool mostly means what we think it means metaphorically; for instance, James Dean is cool in exactly the way McLuhan meant. Hot media, in turn, are in most ways what you would think they are: kinetic, with a tendency to overwhelm the senses. To illustrate what he meant by hot and cold, McLuhan often provides contrasting examples. Movies are a hot medium. Television is a cold medium. Jazz is a hot medium. The twist is a cool medium. Cool media leave gaps that the observer must fill in; they are highly participatory. Hot media are a wall of sensation that does not require any filling in: McLuhan characterizes them as “high definition.”

I think it is pretty clear, between virtual reality and augmented reality, which falls into the category of a cool medium and which a hot one.

To help you come to your own conclusions about how to categorize augmented reality glasses and virtual reality goggles, though, I’ll provide a few clues from Understanding Media:

“In terms of the theme of media hot and cold, backward countries are cool, and we are hot. The ‘city slicker’ is hot, and the rustic is cool. But in terms of the reversal of procedures and values in the electric age, the past mechanical time was hot, and we of the TV age are cool. The waltz was a hot, fast mechanical dance suited to the industrial time in its moods of pomp and circumstance.”

 

“Any hot medium allows of less participation than a cool one, as a lecture makes for less participation than a seminar, and a book for less than dialogue. With print many earlier forms were excluded from life and art, and many were given strange new intensity. But our own time is crowded with examples of the principle that the hot form excludes, and the cool one includes.”

 

“The principle that distinguishes hot and cold media is perfectly embodied in the folk wisdom: ‘Men seldom make passes at girls who wear glasses.’ Glasses intensify the outward-going vision, and fill in the feminine image exceedingly, Marion the Librarian notwithstanding. Dark glasses, on the other hand, create the inscrutable and inaccessible image that invites a great deal of participation and completion.”

 
