
Dawn Shines on Manhattan 24-Hour Hackathon


This past weekend it was my privilege to attend a 24-hour Kinect Hackathon in Manhattan sponsored by the NUI Central meetup and the Kinect team.  It was great seeing Ben, Carmine and Lori from the Kinect team once again, as well as meeting lots of new people.  My fellow MVP András Velvárt from Hungary was also here.  Deb and Ken from the meetup group did an amazing job organizing the event and keeping the coffee flowing.

Judging is happening now.  I was supposed to walk around and help people throughout the night with their code, but for the most part I’ve simply been in a constant state of amazement over what these developers and designers have been able to come up with.  In many cases, these are Java and web developers working with WPF and Windows Store Apps for the first time.

Here are some cool things I’ve seen.  Several teams are working with experimental near field technology to do up-close gesture detection along the lines of Leap Motion and Intel’s Perceptual Computing.  One old friend is here working on improving his algorithms for doing contactless heart rate detection.  There are several finger detection apps doing everything from making a mechanical Arduino-controlled hand open and close in response to the user’s hand opening and closing to a contactless touch typing application.  There’s an awesome Kinect–Oculus Rift mashup that allows the player to see his own virtual body – controlled by the Kinect – and even injects bystanders detected by the Kinect for Windows v2 into the virtual experience.  There’s a great app that brings awareness to the problem of abandoned explosives worldwide, which uses Kinect to map out the plane of the floor and then track people as they step carefully over an invisible minefield.

Field research: I also gathered some good material about developers’ pain points in using the Kinect.  I simply went around and asked what devs encountering the Kinect for the first time would like to see in a programming book. 

There’s also apparently a picture going around showing me sprawled on the floor and drooling down the side of my face.  Please delete this immediately if you encounter it.

It’s been a long sleepless night for many people but also a testament to the ingenuity and stamina of these brilliant developers.

3D Movies with Kinect for Windows v2


To build 3D movies with Kinect, you first have to import all of your depth data into a point cloud.  A point cloud is basically what it sounds like: a cloud of points in 3D space.  Because the Kinect v2 provides roughly three times the depth data of the Kinect v1, the point cloud it produces is much denser.
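The depth-to-point-cloud step is essentially a pinhole-camera back-projection of each depth pixel.  The SDK (in C#) handles this mapping for you; as a language-neutral illustration, here is a minimal Python sketch in which the resolution, focal length and principal point constants are assumptions standing in for the sensor’s real calibrated values:

```python
# Hypothetical intrinsics roughly in the range of the v2 depth camera
# (512 x 424 pixels); the real values come from the sensor's calibration,
# not from these constants.
WIDTH, HEIGHT = 512, 424
FX = FY = 365.0                     # focal length in pixels (assumed)
CX, CY = WIDTH / 2.0, HEIGHT / 2.0  # principal point (assumed at center)

def depth_to_point(x, y, depth_mm):
    """Back-project one depth pixel (x, y) into a 3D point in meters."""
    z = depth_mm / 1000.0
    return ((x - CX) * z / FX, (y - CY) * z / FY, z)

def depth_frame_to_cloud(depth_frame):
    """Turn a flat, row-major depth frame (millimeters) into a point cloud,
    skipping invalid (zero) readings."""
    cloud = []
    for i, d in enumerate(depth_frame):
        if d:
            cloud.append(depth_to_point(i % WIDTH, i // WIDTH, d))
    return cloud
```

Run over a full frame, this is what fills in the “cloud of points in 3D space” – roughly 217,000 candidate points per frame at the v2 resolution.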


The next step in building up a 3D movie is to color in the pixels of the point cloud.  Kinect v1 used an SD camera for color images.  For many people, this resolution was too low, so they came up with various ways to sync the data from a DSLR camera with the depth data.  This required precision alignment to make sure the color images lined up with, and scaled correctly to, the depth pixels.  This alignment also tended to be done in post-production rather than in realtime.  One of the most impressive tools created for this purpose is called the RGBD Toolkit, which was used to make the movie Clouds by James George and Jonathan Minard.  The images in this post, however, come from an application I wrote over Memorial Day weekend.


Unlike its predecessor, Kinect for Windows v2 is equipped with an HD video camera.  The Kinect for Windows v2 SDK also has facilities to map this color data to the depth positions in realtime, allowing me to record in 3D and view that recording at the same time.  I can even rotate and scale the 3D video live.
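That realtime mapping amounts to projecting each 3D depth point into the color camera’s image, using the calibration between the two lenses, and sampling the color found there.  A Python sketch of the idea – the offset and color intrinsics below are made-up numbers standing in for the calibration the SDK knows about the actual hardware:

```python
# Hypothetical calibration between the depth and color cameras: a small
# physical offset between the lenses plus the color camera's intrinsics.
# All of these numbers are illustrative assumptions, not SDK values.
COLOR_W, COLOR_H = 1920, 1080
COLOR_FX = COLOR_FY = 1050.0
COLOR_CX, COLOR_CY = COLOR_W / 2.0, COLOR_H / 2.0
DEPTH_TO_COLOR_OFFSET = (-0.052, 0.0, 0.0)  # meters, assumed

def point_to_color_pixel(point):
    """Project a 3D point (meters, depth-camera space) into the color image.
    Returns (u, v) pixel coordinates, or None if off-screen or behind the lens."""
    x = point[0] + DEPTH_TO_COLOR_OFFSET[0]
    y = point[1] + DEPTH_TO_COLOR_OFFSET[1]
    z = point[2] + DEPTH_TO_COLOR_OFFSET[2]
    if z <= 0:
        return None
    u = int(round(COLOR_FX * x / z + COLOR_CX))
    v = int(round(COLOR_FY * y / z + COLOR_CY))
    if 0 <= u < COLOR_W and 0 <= v < COLOR_H:
        return (u, v)
    return None
```

Doing this per point, per frame is exactly the kind of embarrassingly parallel arithmetic that benefits from running on the GPU.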


You’ll also notice some distortion in these images.  I actually ran this 3D video capture on a standard laptop computer.  One of the nicest features of the Kinect v2 is that it takes advantage of the GPU for calculations.  If I don’t like the quality of the images I’m getting, I can always switch to a more powerful machine.


The next step, of course, is to use multiple Kinects to record 3D video.  While I can rotate the current images, there are shadows and distortions which become more evident when the image is rotated to orientations not covered by a single camera.  Two cameras, on the other hand, might allow me to do a live “bullet time” effect. 
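Merging two sensors’ recordings would mean registering the second point cloud into the first sensor’s coordinate frame with a rigid transform found by calibrating the cameras against each other.  A sketch of that registration step, assuming the rotation and translation between the two cameras have already been measured:

```python
import math

def make_yaw_rotation(degrees):
    """Row-major 3x3 rotation matrix about the vertical (y) axis."""
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def transform_point(rotation, translation, p):
    """Apply a rigid transform (R * p + t) to one point."""
    return tuple(
        sum(rotation[r][k] * p[k] for k in range(3)) + translation[r]
        for r in range(3)
    )

def merge_clouds(cloud_a, cloud_b, rotation, translation):
    """Bring cloud_b (second sensor's frame) into cloud_a's frame and concatenate."""
    return cloud_a + [transform_point(rotation, translation, p) for p in cloud_b]
```

With two cameras at, say, 90 degrees to each other, each one fills in the shadows the other can’t see – which is what would make the rotated “bullet time” views hold up.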

I don’t really know what this would be used for – for now it’s just a toy I’m fiddling with – but I think it would at least be an interesting way to tape my daughter’s next high school musical.  On the farther end of the spectrum, it might be an amazing way to do a video chat or to take the corporate video presentation to the next level.

Razzle Dazzle

Kinect for XBox One

People continue to ask what the difference is between the Kinect for XBox One and the Kinect for Windows v2.  I had to wait to unveil the Thanksgiving miracle to my children, but now I have some pictures to illustrate the differences.

side by side

On the sensors distributed through the developer preview program (thank you Microsoft!) there is a sticker along the top covering up the XBox embossing on the left.  There is an additional sticker covering up the XBox logo on the front of the device.  The power/data cables that come off the two sensors look a bit like tails.  Like the body of the sensors, the tails are also identical.  These sensors plug directly into the XBox One.  To plug them into a PC, you need an additional adapter that draws power from a power cord and sends data to a USB 3.0 cable, passing both of these through the special plugs shown in the picture below.


So what’s with those stickers?  It’s a pattern called razzle dazzle (and sometimes razzmatazz).  In World War I, it was used by the British navy as a form of camouflage for warships.  Its purpose is to confuse rather than conceal – to obfuscate rather than occlude.

war razzle dazzle

Microsoft has been using it not only for the Kinect for Windows devices but also in developer units of the XBox One and controllers that went out six months ago. 

This is a technique of obfuscation popular with auto manufacturers who need to test their vehicles but do not want competitors or media to know exactly what they are working on.  At the same time, automakers do use this peculiar pattern to let their competitors and the media know that they are, in fact, working on something.

car razzle dazzle

What we are here calling razzle dazzle was, in a simpler age, called the occult.  Umberto Eco demonstrates in his fascinating exploration of the occult, Foucault’s Pendulum, that the nature of hidden knowledge is to make sure other people know you have hidden knowledge.  In other words, having a secret is no good if people don’t know you have it.  Dr. Strangelove expressed it best in Stanley Kubrick’s classic film:

Of course, the whole point of a Doomsday Machine is lost if you keep it a secret!

A secret, however, loses its power if it is ever revealed.  This has always been the difficulty of maintaining mystery series like The X-Files and Lost.  An audience is put off if all you ever do is constantly tease them without telling them what’s really going on. 


By the same token, the reveal is always a bit of a letdown.  Capturing bigfoot and finding out that it is some sort of hairy hominid would be terribly disappointing.  Catching the Loch Ness Monster – even discovering that it is in fact a plesiosaur that survived the extinction of the dinosaurs – would be deflating compared to the sweetness of having it exist as a pure potential we don’t even believe in.

This letdown even applies to the future and new technologies.  New technologies are like bigfoot in the way they disappoint when we finally get our hands on them.  The initial excitement is always short-lived and is followed by a peculiar depression.  Such was the case in an infamous blog post by Scott Hanselman called Leap Motion Amazing, Revolutionary, Useless – but known informally as his Dis-kinect post – which is an odd and ambivalent blend of snarky and sympathetic.  Or perhaps snarky and sympathetic is simply our constant stance regarding the always impending future.


The classic bad reveal – the one that traumatized millions of idealistic would-be Jedi – is the quasi-scientific explanation of midichlorians  in The Phantom Menace.   The offences are many – not least because the mystery of the force is simply shifted to magic bacteria that pervade the universe and live inside sentient beings – an explanation that explains nothing but does allow the force to be quantified in a midichlorian count. 

The midichlorian plot device highlights an important point.  Explanations, revelations and unmaskings do not always make things easier to understand, especially when it’s something like the force that, in some sense, is already understood intuitively.  Every child already knows that by being good, one ultimately gets what one wants and gets along with others.  This is essentially the lesson of that ancient Jedi religion – by following the tenets of the Jedi, one is able to move distant objects with one’s will, influence people, and be one with the universe.  An over-analysis of this premise of childhood virtue destroys rather than enlightens.

the force razzle dazzle

The force, like virtue itself, is a kind of razzle dazzle – by obfuscating it also brings something into existence – it creates a secret.  In attempts to explain the potential of the Kinect sensor, people often resort to images of Tom Cruise at the Desk of the Future or Picard on the holodeck.  The true emotional connection, however, is with that earlier (and adolescent) fantasy awakened by A New Hope of moving things by simply wanting them to move, or changing someone’s mind with a wave of the hand and a few words – these are not the droids you are looking for.  Ben Kenobi’s trick in turn has its primordial source in the infant’s crying and waving of the arms as a way to magically make food appear. 

It’s not coincidental, after all, that Kinect sensors have always had both a depth sensor to track hand movements as well as a virtual microphone array to detect speech.

Kinect for Windows v2 First Look


I’ve had a little less than a week to play with the new Kinect for Windows v2 so far, thanks to the developer preview program and the Kinect MVP program.  The original unboxing video is on Vimeo.  So far it is everything Kinect developers and designers have been hoping for – full HD through the color camera and a much improved depth camera as well as USB 3.0 data throughput. 

Additionally, much of the processing is now occurring on the GPU rather than the onboard chip or your computer’s CPU.  While amazing things were possible with the first Kinect for Windows sensor, most developers found themselves pushing the performance envelope at times and wishing they could get just a little more resolution or just a little more data speed.  Now they will have both.


At this point the programming model has changed a bit between Kinect for Windows v1 and Kinect for Windows v2.  While knowing the original SDK will definitely give you a leg up, a bit of work will still need to be done to port Kinect v1 apps to the new Kinect v2 SDK when it is eventually released.

What I find actually confusing is the naming.  With the first round of devices that came out in 2010-11, we had the Kinect for XBox and Kinect for Windows.  It makes sense that the follow up to Kinect for XBox is the “Kinect for XBox One”.  But the follow up to Kinect for Windows is “Kinect for Windows v2” so we end up with the Kinect for XBox One as the correlate to K4W2. Furthermore,  by “Windows” we mean Windows 8 (now 8.1) so to be truly accurate, we really should be calling the newest Windows sensor K4W8.1v2.  For convenience, I’ll just be calling it the “new Kinect” for a while.


What’s different between the new Kinect for XBox One and the Kinect for Windows v2?  It turns out not a lot.  The Kinect for XBox One has a special USB 3.0 adapter that draws both power and data from the XBox One.  Because it is a non-standard connector, it can’t be plugged straight into a PC (unlike the original Kinect, which had a standard USB 2.0 plug).

To make the new Kinect work with a PC, then, requires a special breakout board.  This board serves as an adapter with three ports – one for the Kinect, one for a power source, and one for a standard USB 3.0 cable.

We can also probably expect the firmware on the two versions of the new Kinect sensor to diverge over time, as occurred with the original Kinect.


Skeleton detection is greatly improved with the new Kinect.  Not only are more joints now detected, but many of the jitters developers became used to working around are now gone.  The new SDK recognizes up to six skeletons rather than just two.  Finally, because of the improved time-of-flight depth camera, which replaces the PrimeSense technology used in the previous hardware, the accuracy of the skeleton detection is much better and includes excellent hand detection.  Grip recognition as well as Lasso recognition (two fingers used to draw) are now available out of the box – even in this early alpha version of the SDK.
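The jitter workarounds v1 developers reached for were often as simple as an exponential smoothing filter applied to each reported joint position.  A Python sketch of that idea – the alpha value here is a tuning choice for illustration, not anything taken from the SDK:

```python
class JointSmoother:
    """Exponential smoothing of one tracked joint's position, the kind of
    hand-rolled filter many Kinect v1 apps used to reduce skeleton jitter.
    alpha near 1.0 trusts the newest reading; near 0.0 favors the history
    (smoother, but laggier)."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self._last = None

    def update(self, position):
        """Feed in the latest (x, y, z) reading; get back the smoothed one."""
        if self._last is None:
            self._last = position
        else:
            self._last = tuple(
                self.alpha * new + (1.0 - self.alpha) * old
                for new, old in zip(position, self._last)
            )
        return self._last
```

The trade-off baked into alpha – smoothness versus responsiveness – is precisely what the improved v2 tracking makes much less painful to tune.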


I won’t hesitate to say – even this early in the game – that the new hardware is amazing and is leaps and bounds better than the original sensor.  The big question, though, is whether it will take off the way the original hardware did.

If you recall, when Microsoft released the first Kinect sensor they didn’t have immediate plans to use it for anything other than a game controller – no SDK, no motor controller, not a single luxury.  Instead, creative developers, artists, researchers and hackers figured out ways to read the raw USB data and started manipulating it to create amazingly original applications that took advantage of the depth sensor – and they posted them to the Internet.

Will this happen the second time around?  Microsoft is endeavoring to do better this time by getting an SDK out much earlier.  As I mentioned above, the alpha SDK for Kinect v2 is already available to people in the developer preview program.  The trick will be in attracting the types of creative people that were drawn to the Kinect three years ago – the kind of creative technologists Microsoft has always had trouble attracting toward other products like Windows Phone and Windows tablets.

My colleagues and I at Razorfish Emerging Experiences are currently working on combining the new Kinect with other technologies such as Oculus Rift, Google Glass, Unity 3D, Cinder, Leap Motion and 4K video.  Like a modern day scrying device (or simply a mad scientist’s experiment) we hope that by simply mixing all these gadgets together we’ll get a glimpse at what the future looks like and, perhaps, even help to create that future.