Congrats to NimbleVR

I had the opportunity to meet Rob Wang, Chris Twigg and Kenrick Kin of 3Gear several years ago when I was in San Francisco demoing retail experiences with the Microsoft Kinect and Surface Table at the 2011 Oracle OpenWorld conference. I had been following their work on stereoscopic finger and hand tracking with dual Kinects, sent them what was basically a fan letter, and they were kind enough to invite me to their headquarters.

At the time, 3Gear was sharing office space with several other companies in a large warehouse. Their finger tracking technology blew me away, and I came away with the impression that these were some of the smartest people I had ever met working in computer vision and with the Kinect. After all, they’re basically all PhDs with backgrounds at companies like Industrial Light & Magic and Pixar.

I’ve written about them several times on this blog and nominated them for the Kinect v2 preview program. I was extremely excited when Chris agreed to present at the ReMIX conference some friends and I organized in Atlanta a few years ago for designers and developers. Here is a video of Chris’s amazing talk.

Bringing ‘Minority Report’ to your Desk: Gestural Control Using the Microsoft Kinect – Chris Twigg from ReMIX South on Vimeo.

Since then, 3Gear has worked on the problem of finger and hand tracking on various commercial devices in multiple configurations. In October of 2014 the guys at 3Gear launched a Kickstarter campaign for a sensor they had developed called Nimble Sense. Nimble Sense is a depth sensor built from commodity components that is intended to be mounted on the front of an Oculus Rift headset. It addresses the difficult problem of providing a good input device for a VR system which, by its nature, prevents you from seeing your own hands.

The solution, of course, is to represent the interaction controller – in this case the user’s hands – in the virtual world itself. Leap Motion, which produces another cool finger tracking device, is also working on a solution for this. The advantage the 3Gear people have is that they have been working on this exact problem for years, with particular expertise in gesture tracking – rather than merely finger tracking – as well as in visualization.

After exceeding their original goal in pledges, 3Gear abruptly cancelled their Kickstarter campaign on December 11th, and the official 3Gear.com website I had been going to for news updates about the company was replaced.

This is actually all good news. Nimble VR, a rebranding of 3Gear for the Nimble Sense project, has been purchased by Oculus (which in turn, you’ll recall, was purchased by Facebook several months ago for around $2 billion).

For me this is a Cinderella story. 3Gear / Nimble VR is an extremely small team of extremely smart people who passed on much more lucrative job opportunities in order to pursue their dreams. And now they’ve achieved their much-deserved big payday.

Congratulations Rob, Chris and Kenrick!

Projecting Augmented Reality Worlds

[Photo: WP_20141105_11_05_56_Raw]

In my last post, I discussed the incredible work being done with augmented reality by Magic Leap. This week I want to talk about implementing augmented reality with projection rather than with glasses.

To be more accurate, many varieties of AR experience are projection based; the technical differences depend on which surface is being projected upon. Google Glass projects onto a surface centimeters from the eye. Magic Leap is reported to project directly onto the retina (virtual retinal display technology).

AR experiences being developed at Microsoft Research, which I had the pleasure of visiting this past week during the MVP Summit, are projected onto pre-existing rooms without the need to rearrange the room itself. Using fairly common projection mapping techniques combined with very cool technology such as the Kinect and Kinect v2, the room is scanned and appropriate distortions are created to make projected objects look “correct” to the observer.

An important thing to bear in mind as you look through the AR examples below is that they are not built using esoteric research technology. These experiences are all built using consumer-grade projectors, Kinect sensors and Unity 3D. If you are focused and have a sufficiently strong desire to create magic, these experiences are within your reach.

The most recent work from this group (led by Andy Wilson and Hrvoje Benko) is a special version of RoomAlive built for Halloween called The Other Resident. Just to prove I was actually there, here are some pictures of the lab, along with the Kinect MVPs amazed that we were allowed to film anything given that most of the MVP Summit involves NDA content we are not permitted to repeat or comment on.

[Photos: WP_20141105_004, WP_20141105_016, WP_20141105_013]

IllumiRoom is a precursor to the more recent RoomAlive project. The basic concept is to surround the gaming display or television with additional projected content that responds dynamically to what is shown onscreen. If you think it looks cool in the video, please know that it is even cooler in person. And if you like it and want it in your living room, then comment on this thread or on the YouTube video itself to let them know it is definitely a viable product for the Xbox One, as the big catz say.

The RoomAlive experience is the crown jewel at the moment, however. RoomAlive uses multiple projectors and Kinect sensors to scan a room and then uses it as a projection surface for interactive, procedural games: in other words, augmented reality.

A fascinating aspect of the RoomAlive experience is how it handles appearance-preserving, point-of-view-dependent visualizations: the way objects need to be distorted in order to appear correct to the observer. In the Halloween experience at the top, you’ll notice that the animation of the old crone looks like she is positioned in front of the chair she is sitting on, even though the projection surface actually extends partially in front of the chair back and, for the shoulders and head, several feet behind it. In the RoomAlive video just above, you’ll see the view-dependent distortion occurring as the running soldier changes planes at about 2:32.
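The core trick is a two-pass render: draw the virtual content from the tracked viewer’s eye into a texture, then re-render the scanned room mesh from each calibrated projector, sampling that texture at whatever pixel each surface point lands on in the viewer’s image. Here is a minimal sketch of the sampling step – my own reconstruction using System.Numerics, not the RoomAlive team’s code:

```csharp
// Hedged sketch of view-dependent projection mapping (not the RoomAlive source).
// Given a point on the scanned room mesh, find the texture coordinate in the
// image rendered from the viewer's eye. Drawing the mesh from the projector's
// calibrated pose with these UVs makes the content look undistorted to the
// tracked viewer, however the surface happens to bend.
using System.Numerics;

static class ViewDependent
{
    public static Vector2 ViewerUv(Vector3 surfacePoint, Matrix4x4 viewerViewProj)
    {
        // Project the room-surface point into the viewer's clip space.
        Vector4 clip = Vector4.Transform(new Vector4(surfacePoint, 1f), viewerViewProj);

        // Perspective divide, then remap [-1, 1] NDC to [0, 1] texture space
        // (any texture-space Y flip depends on your graphics API's convention).
        return new Vector2(clip.X / clip.W * 0.5f + 0.5f,
                           clip.Y / clip.W * 0.5f + 0.5f);
    }
}
```

Run that per vertex (or per pixel in a shader) for each projector, recomputed every frame as the Kinect tracks the viewer’s head.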


You would think that these appearance-preserving PDV techniques would fall apart anytime you have more than one person in the room. To address this problem, Hrvoje and Andy worked on another project that plays with perception and physical interactions to integrate two overlapping experiences in a Wizard Battle scenario called Mano-a-Mano or, more technically, Dyadic Projected Spatial Augmented Reality. The globe visualization at 2:46 is particularly impressive.

My head is actually still spinning following these demos and I’m still in a bit of a fugue state. I’ve had the opportunity to see lots of cool 3D modeling, scanning, virtual experiences, and augmented reality experiences over the past several years and felt like I was on top of it, but what MSR is doing took me by surprise, especially when it was laid out sequentially as it was for us. A tenth of the work they have been doing over the past two years could easily be the seed of an idea for any number of tech startups.

In the middle of the demos, I leaned over to one of the other MVPs and whispered in his ear that I felt like Steve Jobs at Xerox PARC seeing the graphical user interface and mouse for the first time. He just stroked his beard and nodded. It was a magic moment.

Kinect SDK 2.0 Live!

[Photo: WP_20141022_009]

Today the Kinect SDK 2.0 – the development kit for the new, improved Kinect version 2 – went live.  You can download it immediately.

Kinect for Windows v2 is now out of its beta and pre-release phase.

Additionally, the Windows Store will now accept apps developed for Kinect. If you have a Kinect for Windows v2 sensor and are running Windows 8, you will be able to use it to run apps you’ve downloaded from the Windows Store.

And if you don’t have a Kinect for Windows v2? In that case, you can take the Kinect sensor from your Xbox One and – with a $50 adapter that Microsoft just released – turn it into a sensor you can use with your Windows 8 computer.

You basically now have a choice of purchasing a Kinect for Windows v2 kit for $200, or a separate Kinect for Xbox One for $150 and an adapter for $50.

Alternatively, if you already have the sensor that came with your Xbox One, Microsoft has effectively lowered the entry bar to $50 so you can start trying the new Kinect:

1. Buy the Kinect v2 adapter.

2. Download the SDK to your 64-bit Windows 8 machine.

3. Detach the Kinect from your Xbox One and plug it into your computer. (A quick way to verify the sensor is recognized is sketched below.)
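Once the adapter is connected, a few lines against the SDK will confirm the sensor comes up. This is a minimal console sketch of my own, not one of the official samples:

```csharp
// Minimal sanity check for a newly connected Kinect v2 – a hedged sketch.
// Requires a reference to Microsoft.Kinect.dll from the Kinect SDK 2.0.
using System;
using Microsoft.Kinect;

class SensorCheck
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.IsAvailableChanged += (s, e) =>
            Console.WriteLine("Kinect available: " + e.IsAvailable);

        sensor.Open();  // firmware may auto-update on first connect, so be patient
        Console.WriteLine("Waiting for the sensor... press Enter to quit.");
        Console.ReadLine();
        sensor.Close();
    }
}
```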

Code Camp, MVP, etc.

[Image: Final-Design-For-Kinect-For-Windows-v2-Revealed_title]

It has been a busy two weeks. On the first of the month I was renewed for the Microsoft MVP program. I started out as a Client App Dev MVP many years ago and am currently an MVP in the Kinect for Windows program. I’m very grateful to the Kinect for Windows team for re-upping me again this year. It’s a magnificent program and the team is incredibly supportive and helpful. It’s also an honor to be associated with the other K4W MVPs who are all amazing in their own right and, to be honest, somewhat intimidating. But they politely laugh at my jokes in group calls and rarely call me out when I say something stupid. For all this, I am very grateful.

I’m often asked how one gets into the MVP program. There are, of course, midnight rituals and secret nominations, as with any similar association of people. In general, however, the MVP award is given out for participating in community activities like message boards (yes, you should be answering questions on the MSDN forums and passing your knowledge on to others!) as well as Code Camps like the one I attended this past Saturday.

My talk at the 2014 Code Camp Atlanta was on the Kinect for Windows v2. It was appropriately called “Handwaving with the Kinect for Windows v2” since the projector in the room didn’t work for the first twenty minutes or so of the presentation. I was delighted to find out that I actually knew enough to talk through the features of the new Kinect without notes, slides, or a way to show my Kinect demos and still remain relatively entertaining and informative.

Once the nameless but wonderful tech guy finished installing a second projector in the room as I was going through my patter, I was able to start navigating through my slides using hand gestures and this gesture mapper tool I built last year: http://channel9.msdn.com/coding4fun/kinect/Kinect-PowerPoint-Mapper-A-fresh-look-at-Kinecting-to-PowerPoint
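The mapper boils down to watching a hand joint and forwarding a keystroke to PowerPoint when it sees a swipe. Here is a minimal sketch of that idea – not the actual tool from the link above – assuming the SDK 2.0 body stream and WinForms SendKeys:

```csharp
// Hedged sketch: map a right-hand swipe to the Right-arrow key.
// Not the Kinect-PowerPoint-Mapper code itself – just the core idea.
using System.Windows.Forms;   // SendKeys, Application
using Microsoft.Kinect;

class SwipeToSlide
{
    static float? startX;               // hand X where the swipe began
    const float SwipeDistance = 0.4f;   // meters of lateral travel to count

    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        var reader = sensor.BodyFrameSource.OpenReader();
        var bodies = new Body[sensor.BodyFrameSource.BodyCount];

        reader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);
                foreach (var body in bodies)
                {
                    if (!body.IsTracked) continue;
                    var hand = body.Joints[JointType.HandRight].Position;
                    if (startX == null) startX = hand.X;
                    if (hand.X - startX > SwipeDistance)
                    {
                        SendKeys.SendWait("{RIGHT}");   // advance the slide
                        startX = null;                  // re-arm for the next swipe
                    }
                }
            }
        };

        sensor.Open();
        Application.Run();   // message loop keeps the process (and SendKeys) alive
    }
}
```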

Anyway, I wanted to express my appreciation to the early morning attendees who sat through my hand-waving exercise, and I hope it got you interested enough to download the SDK and start trying your hand at Kinect development.

MSR Mountain View and Kinect

Just before the start of the weekend, Mary Jo Foley broke the story that the Mountain View lab of Microsoft Research was being closed.  Ideally, most of the researchers will be redistributed to other locations and not be casualties of the most recent round of layoffs.

The Kinect sensor is one of the great examples of Microsoft Research working successfully with a product team to bring something to market. Researchers from around the world worked on Project Natal (the code-name for Kinect). An extremely important contribution to the machine learning required to make skeleton tracking work on the Kinect was made in Mountain View.

Machine learning works best when you are dealing with lots of data.  In the case of skeleton tracking, millions of images had been gathered.  But how do you find the hardware to process that many images?

Fortunately, the Mountain View group specialized in distributed computing. One researcher in particular, Mihai Budiu, worked on a project that he believed would help the Project Natal team clear one of its biggest hurdles. The project was called DryadLINQ, and it could be used to coordinate parallel processing over a large server cluster. The problem it helped solve was recognizing body parts for people of various sizes and shapes – a preliminary step to generating the skeleton view.
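I don’t have the team’s actual training pipeline, but the hook of DryadLINQ was that you wrote ordinary LINQ and the runtime partitioned the query across the cluster. Here is a hypothetical sketch of that programming model – every type and method is an invented placeholder, with an in-memory stand-in for the cluster table:

```csharp
// Hypothetical sketch of the DryadLINQ idea: plain LINQ that a distributed
// runtime can partition across a server cluster. Nothing here is the actual
// Project Natal pipeline; TrainingImage and LabelPixels are placeholders.
using System;
using System.Linq;

class TrainingImage { public ushort[] Depth; }
class LabeledPixel { public string BodyPart; }

static class Sketch
{
    // Placeholder per-image classifier (the real one was machine-learned).
    static LabeledPixel[] LabelPixels(TrainingImage img) =>
        new[] { new LabeledPixel { BodyPart = "head" } };

    static void Main()
    {
        // In DryadLINQ this source would be a partitioned table on the cluster;
        // here it is just ten empty images in memory.
        var images = Enumerable.Range(0, 10)
                               .Select(_ => new TrainingImage())
                               .AsQueryable();

        var histogram = images
            .SelectMany(img => LabelPixels(img))   // classify every depth pixel
            .GroupBy(p => p.BodyPart)              // bucket by predicted body part
            .Select(g => new { Part = g.Key, Count = g.LongCount() });

        foreach (var row in histogram)
            Console.WriteLine(row.Part + ": " + row.Count);
    }
}
```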

The research lab at Mountain View was an essential part of the Kinect story.  It will be missed.

Playing with Toasters

[Photo: WP_20140811_001]

Every parent at some point faces the dilemma of what to tell her children.  There’s a general and possibly mistaken notion that if you provide education about S-E-X in schools, you will encourage young ‘uns to turn words into deeds.  Along the same lines, we can’t resist telling our children not to put forks in the toaster, even though we know that a child told not to do something will likely do it within five minutes.  No more dangerous words were ever spoken than “don’t touch that!”

On a recent conference call, someone asked if it would be dangerous to take an Xbox One Kinect and plug it into your computer. Although I waited more than five minutes, I eventually had to give in to my impulse to find out.

I have several versions of the Kinect. I have both of the older models: the Kinect for Xbox 360 and the Kinect for Windows v1. I also have a Kinect for Xbox One, a Kinect for Windows v2 developer preview and a Kinect for Windows v2 consumer unit (shown above).

The common opinion is that most of the differences between versions of the Kinect v2 are purely cosmetic. The Kinect for Windows has a “Kinect” logo where the Kinect for Xbox One has a metallic “XBOX” logo. The preview K4Wv2 hardware is generally assumed to be a Kinect for Xbox One with razzmatazz stickers all over it. There is a chance, however, that the Kinect for Windows hardware lacks the IR blaster included with the Xbox One’s Kinect. The blaster lets the Kinect change channels on your TV by “blasting” an IR signal across the room; the TV’s IR receiver picks up the reflection.

| | Kinect for Xbox One | K4Wv2 Preview | Kinect for Windows v2 |
|---|---|---|---|
| SDK Color Sample | yes | yes | yes |
| SDK Audio Sample | yes | yes | yes |
| SDK Coord Map | yes | yes | yes |
| Xbox Fitness | yes | yes | no |
| Xbox Commands | yes | yes | no |
| Xbox IR Blaster | yes | yes | no |

This was slightly scary, of course.  I didn’t want to brick a $150 device.  Then again, I reasoned it was being done for science – or at least for a blog post – so needs must.

I began by running the preview hardware against the latest SDK 2.0 preview. I plugged the preview hardware into the new power/USB adapter that comes with the final hardware. I then ran the color camera sample WPF project that comes with the SDK 2.0 preview. It took about 30 to 60 seconds for the Kinect to be recognized while the firmware was automatically updated. The sample then ran correctly. I did the same with the audio sample and the coordinate mapper sample, both of which ran correctly.
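For reference, the heart of that color sample boils down to a handful of SDK calls. This is a hedged sketch of the gist, not the shipped WPF sample:

```csharp
// Hedged sketch of a minimal color-camera app on the Kinect SDK 2.0.
// The shipped WPF sample adds bitmap locking and UI; this is just the gist.
using Microsoft.Kinect;

class ColorGrab
{
    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        var reader = sensor.ColorFrameSource.OpenReader();

        // Describe frames as 32-bit BGRA and size a buffer to match.
        var desc = sensor.ColorFrameSource.CreateFrameDescription(ColorImageFormat.Bgra);
        var pixels = new byte[desc.Width * desc.Height * desc.BytesPerPixel];

        reader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);
                // pixels now holds one 1920x1080 BGRA frame, ready to display
            }
        };

        sensor.Open();
        System.Console.ReadLine();   // keep the process alive while frames arrive
    }
}
```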

Next, I tried the same thing with the Kinect for Xbox One. I plugged it into the Kinect for Windows v2 adapter and waited for it to be recognized. I was, of course, concerned that even if I succeeded in getting the device to run, I might hose the Kinect for use back on my Xbox. As things turned out, though, after a brief wait, the Kinect for Xbox One ran fine on a PC with applications built on the SDK 2.0.

I then plugged my Kinect for Xbox One back into my Xbox One. The only application I have that responds to the player’s body is the fitness app. I fired that up and it recognized depth just fine. I also tried speech commands such as “Xbox Go Home” and “Xbox Watch TV”. I tested the IR blaster by shouting out “Xbox Watch PBS”. Apparently my Kinect for Xbox One was not damaged.

I then performed the same actions using the Kinect for Windows preview hardware and, I think, confirmed the notion that it is simply a Kinect for Xbox.  Everything I could do with the Xbox device could also be done using the Kinect for Windows preview hardware.

Finally, I plugged in the Kinect for Windows final hardware and nothing happened. The IR emitters never lit up. Either the hardware is just different enough, or there is no Xbox-compatible firmware installed on it.

There was no smoke and no one was harmed in the making of this blog post.

Kinect v2 Final Hardware

[Photo: WP_20140725_001]

The final Kinect hardware arrived at my front door this morning.  I’ve been playing with preview hardware for the past half year – and working on a book on programming it as various versions of the SDK were dropped on a private list – but this did not dampen my excitement over seeing the final product.

[Photo: WP_20140725_002]

The sensor itself looks pretty much the same as the preview hardware – and as far as I know the internal components are identical. The cosmetic differences include an embossed metal “Kinect” on the top of the sensor and the absence of the razzmatazz stickers – which I believe were simply meant to cover up Xbox One branding.

[Photo: WP_20140725_003]

Besides allowing you to admire my beige shag carpet, the photo above illustrates the major difference between the preview hardware and the final hardware. It’s all about the adapters. At the top of the picture are the older USB and power adapters, while below them are the new, sleek, lightweight versions of the same. I’ve been carrying around that heavy Xbox power adapter for months, from hotel room to hotel room, in order to spend my evenings away from home working on Kinect code. Naturally, I was often stopped by the TSA, and I am happy that will not be happening anymore.

Dawn Shines on Manhattan 24-Hour Hackathon

[Photo: WP_20140621_008]

This weekend it has been my privilege to attend a 24-hour Kinect Hackathon in Manhattan sponsored by the NUI Central meetup and the Kinect team. It’s been great seeing Ben, Carmine and Lori from the Kinect team once again, as well as meeting lots of new people. My fellow MVP András Velvárt from Hungary is also here. Deb and Ken from the meetup group did an amazing job organizing the event and keeping the coffee flowing.

Judging is happening now.  I was supposed to walk around and help people throughout the night with their code but for the most part I’ve simply been in a constant state of amazement over what these developers and designers have been able to come up with.  In many cases, these are java and web developers working with WPF and Windows Store Apps for the first time.

Here are some cool things I’ve seen. Several teams are working with experimental near-field technology to do up-close gesture detection along the lines of Leap Motion and Intel’s Perceptual Computing. One old friend is here working on improving his algorithms for contactless heart rate detection. There are several finger detection apps doing everything from making a mechanical, Arduino-controlled hand open and close in response to the user’s hand opening and closing, to a contactless touch typing application. There’s an awesome Kinect-Oculus Rift mashup that allows the player to see his own virtual body – controlled by the Kinect – and even injects bystanders detected by the Kinect for Windows v2 into the virtual experience. There’s also a great app that raises awareness of the problem of abandoned explosives worldwide: it uses the Kinect to map out the plane of the floor and then tracks people as they step carefully over an invisible minefield.

Field research: I also gathered some good material about developers’ pain points in using the Kinect.  I simply went around and asked what devs encountering the Kinect for the first time would like to see in a programming book. 

There’s also apparently a picture going around showing me sprawled on the floor and drooling down the side of my face.  Please delete this immediately if you encounter it.

It’s been a long sleepless night for many people but also a testament to the ingenuity and stamina of these brilliant developers.

3D Movies with Kinect for Windows v2

[Image: 3d]

To build 3D movies with Kinect, you first have to import all of your depth data into a point cloud. A point cloud is basically what it sounds like: a cloud of points in 3D space. Because the Kinect v2 provides roughly three times the depth data of the Kinect v1, the resulting cloud is much denser.
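In the SDK 2.0 the heavy lifting is done by the coordinate mapper, which turns a raw depth frame into camera-space points – effectively a point cloud. A hedged sketch of that step, not my full recording app:

```csharp
// Hedged sketch: turn one Kinect v2 depth frame into a point cloud.
using Microsoft.Kinect;

class PointCloudGrab
{
    static void Main()
    {
        var sensor = KinectSensor.GetDefault();
        var reader = sensor.DepthFrameSource.OpenReader();
        var desc = sensor.DepthFrameSource.FrameDescription;   // 512 x 424
        var depth = new ushort[desc.Width * desc.Height];
        var cloud = new CameraSpacePoint[depth.Length];        // X/Y/Z in meters

        reader.FrameArrived += (s, e) =>
        {
            using (var frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.CopyFrameDataToArray(depth);
                // One 3D point per depth pixel; invalid pixels come back as infinities.
                sensor.CoordinateMapper.MapDepthFrameToCameraSpace(depth, cloud);
            }
        };

        sensor.Open();
        System.Console.ReadLine();
    }
}
```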

[Image: pc]

The next step in building up a 3D movie is to color in the points of the cloud. Kinect v1 used an SD camera for color images. For many people this resolution was too low, so they came up with various ways to sync the footage from a DSLR camera with the depth data. This required precise alignment to make sure the color images lined up with, and scaled to, the depth pixels. The alignment also tended to be done in post-production rather than in realtime. One of the most impressive tools created for this purpose is the RGBD Toolkit, which was used to make the movie Clouds by James George and Jonathan Minard. The images in this post, however, come from an application I wrote over Memorial Day weekend.

[Image: 3dcolor3cropped]

Unlike its predecessor, Kinect for Windows v2 is equipped with an HD video camera.  The Kinect for Windows v2 SDK also has facilities to map this color data to the depth positions in realtime, allowing me to record in 3D and view that recording at the same time.  I can even rotate and scale the 3D video live.
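Concretely, the mapper can go the other direction too: for every depth pixel it hands back the matching coordinate in the 1920x1080 color frame. Here is a hedged sketch of that coloring step, assuming a depth array and a BGRA color buffer filled as in the earlier sketches:

```csharp
// Hedged sketch: color each point of the cloud by sampling its color-frame twin.
// depth: ushort[512 * 424] from a DepthFrame; colorPixels: BGRA bytes from a
// 1920x1080 ColorFrame (see the earlier sketches). Returns BGRA per 3D point.
using Microsoft.Kinect;

static class CloudColorizer
{
    public static byte[] Colorize(KinectSensor sensor, ushort[] depth, byte[] colorPixels)
    {
        // For every depth pixel, find its coordinate in the color image.
        var coords = new ColorSpacePoint[depth.Length];
        sensor.CoordinateMapper.MapDepthFrameToColorSpace(depth, coords);

        var cloudColors = new byte[depth.Length * 4];   // BGRA per point
        for (int i = 0; i < depth.Length; i++)
        {
            int cx = (int)coords[i].X;   // invalid mappings cast far out of range
            int cy = (int)coords[i].Y;
            if (cx < 0 || cx >= 1920 || cy < 0 || cy >= 1080) continue;   // off-frame
            System.Array.Copy(colorPixels, (cy * 1920 + cx) * 4, cloudColors, i * 4, 4);
        }
        return cloudColors;
    }
}
```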

[Image: trinity-matrix-opening]

You’ll also notice some distortion in these images.  I actually ran this 3D video capture on a standard laptop computer.  One of the nicest features of the Kinect v2 is that it takes advantage of the GPU for calculations.  If I don’t like the quality of the images I’m getting, I can always switch to a more powerful machine.

[Image: 3dcolor1cropped]

The next step, of course, is to use multiple Kinects to record 3D video.  While I can rotate the current images, there are shadows and distortions which become more evident when the image is rotated to orientations not covered by a single camera.  Two cameras, on the other hand, might allow me to do a live “bullet time” effect. 

I don’t really know what this would be used for – for now it’s just a toy I’m fiddling with – but I think it would at least be an interesting way to tape my daughter’s next high school musical. On the farther end of the spectrum, it might be an amazing way to do a video chat or to take the corporate video presentation to the next level.

A Guide to Kinect related sessions at //build 2014

[Image: lauren]

Build is over, and there were some cool announcements as well as sessions related to Kinect for Windows v2. I’ve added links below to the Kinect sessions, along with some additional sessions I found interesting.

The second of these links concerns using Kinect v2 for Windows Store apps (they only run on Win8 Pro, not WinRT – but still pretty cool).

Kinect 101: Introduction to Kinect for Windows: http://channel9.msdn.com/Events/Build/2014/2-514

Bringing Kinect into Your Windows Store App:
http://channel9.msdn.com/Events/Build/2014/2-532


Since Kinect was initially designed for the Xbox, I found these Xbox One sessions pretty enlightening:

Understanding the Xbox One Game Platform Built on Windows: http://channel9.msdn.com/Events/Build/2014/2-651

Leveraging Windows Features to Build Xbox One App Experiences: http://channel9.msdn.com/Events/Build/2014/3-648


Here’s a session on how to develop the newly announced “universal apps” – which isn’t directly tied to Kinect development, but may be one day:

Building Windows, Windows Phone, and Xbox One Apps with HTML/JS/CSS & C++:  http://channel9.msdn.com/Events/Build/2014/2-649


Two C++ sessions, just kuz:

Modern C++: What You Need to Know: http://channel9.msdn.com/Events/Build/2014/2-661

Native Code Performance on Modern CPUs: A Changing Landscape: http://channel9.msdn.com/Events/Build/2014/4-587


Finally, here’s an all-too-short Channel 9 panel discussion with my friend Rick Barraza from Microsoft and some dudes from Obscura and Stimulant talking about design and dropping some great one-liners I plan to steal – and so can you (note the excellent use of the $12K 46-inch massive multi-touch Perceptive Pixel device in the background):

[Image: build]

Experience at the Intersection of Design and Development: http://channel9.msdn.com/Events/Build/2014/9-003