3Gear Systems Kinect Handtracking API Unwrapping

I've spent this last week setting up the rig for the beta hand-tracking API recently published by 3Gear Systems. There's a bit of hardware required to position the two Kinects correctly so they face down at a 45-degree angle. The Kinect mounts from Amazon arrived within a day and were $6 each, with free shipping thanks to the Prime membership I never remember to cancel. The aluminum parts from 80/20 were a bit more expensive but came to just over $100 with shipping. We already have lots of Kinects around the Razorfish Emerging Experiences Lab, so that wasn't a problem.

80/20 surprisingly doesn't offer much instruction on how to put the parts of the aluminum frame together, so it took me about half an hour of trial and error to figure it out. Then I found this PDF, deep-linked on the 3Gear website, showing what the frame should end up looking like, and had to adjust the frame to get the dimensions right.

I wanted to use the Kinect for Windows SDK and, after some initial mistakes, realized that I needed to hook up our Kinect for Windows sensors rather than our Kinect for Xbox sensors to do that. When using OpenNI rather than K4W (the 3Gear SDK supports either), you can use either the Xbox Kinect or the Xtion sensor.

My next problem was that although the machine we were building on has two USB controllers, one of them wasn't working, so I took a trip to Fry's and got a new PCI-E USB controller, which also ended up not working. On the way home I tracked down a USB controller from a brand I recognized, US Robotics, and tried again the next day. Success at last!

Next I started going through the setup and calibration steps here. It's quite a bit of command-line voodoo and requires very careful attention to the installation instructions; for instance, you need to install the C++ redistributable and Java SE.

After getting all the right software installed, I began the calibration process. A paper printout of the checkerboard pattern worked fine. It turns out that the software for adjusting the angle of the Kinect sensor doesn't work when the sensor is mounted on its side facing down, so I had to adjust it manually, click-click-click. That's always a bit of a scary sound.
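
In case the checkerboard step seems mysterious: it's the standard computer-vision calibration trick. Detect the board's inner corners in each camera's image, match them against the known geometry of the printed squares, and solve for where each camera sits relative to the board (and therefore relative to the other camera). Here's a minimal sketch of that underlying technique in OpenCV. To be clear, this is my own illustration of the general method, with the board dimensions and square size assumed; it isn't 3Gear's calibration code.

```cpp
// Sketch of checkerboard-based extrinsic calibration with OpenCV.
// Not 3Gear's calibration code -- just the standard technique their
// checkerboard step is built on. Board size and intrinsics are assumed.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("checkerboard_view.png", cv::IMREAD_GRAYSCALE);
    cv::Size pattern(8, 6);                  // inner corners (assumed board)
    std::vector<cv::Point2f> corners;

    // Find the inner corners of the printed checkerboard in the image.
    if (!cv::findChessboardCorners(gray, pattern, corners)) return 1;

    // Refine the corner locations to sub-pixel accuracy.
    cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
        cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

    // Corresponding 3D points on the physical board (25 mm squares, assumed).
    std::vector<cv::Point3f> object;
    for (int y = 0; y < pattern.height; ++y)
        for (int x = 0; x < pattern.width; ++x)
            object.emplace_back(x * 25.0f, y * 25.0f, 0.0f);

    // With the camera's intrinsics known, solvePnP recovers the camera's
    // pose relative to the board. Do this for both Kinects and you know
    // where they sit relative to each other.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 525, 0, 320, 0, 525, 240, 0, 0, 1);
    cv::Mat rvec, tvec;
    cv::solvePnP(object, corners, K, cv::noArray(), rvec, tvec);
    return 0;
}
```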

Pretty soon I was up and running with a point cloud visualization of my hands. The performance is extremely good, and the rush of watching everything work is incredible.
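
The point cloud itself is simple to produce once you have a depth frame: every depth pixel gets back-projected through the camera intrinsics into a 3D point. A rough sketch of that projection, using typical (assumed) Kinect intrinsic values:

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

// Back-project a 640x480 Kinect depth frame (millimeters) into a point
// cloud. fx/fy/cx/cy are typical Kinect intrinsics, assumed for the sketch.
std::vector<Point3> depthToPointCloud(const uint16_t* depth,
                                      int width = 640, int height = 480,
                                      float fx = 525.0f, float fy = 525.0f,
                                      float cx = 319.5f, float cy = 239.5f) {
    std::vector<Point3> cloud;
    cloud.reserve(width * height);
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depth[v * width + u];
            if (d == 0) continue;            // 0 means "no reading" on the Kinect
            float z = d * 0.001f;            // millimeters -> meters
            cloud.push_back({ (u - cx) * z / fx,
                              (v - cy) * z / fy,
                              z });
        }
    }
    return cloud;
}
```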

Of the basic samples, the rotation_trainer program is probably the coolest. It lets you rotate a 3D model around the Y-axis as well as the X-axis. Even this little sample opens up a lot of cool possibilities for HCI design.
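
I haven't looked at rotation_trainer's source, but the core interaction is easy to imagine: while the hand is pinched, map hand movement to model rotation. Something like this hypothetical sketch, where the HandState fields are my stand-in for whatever the tracker actually reports:

```cpp
// Hypothetical sketch of a rotation_trainer-style interaction: while the
// user pinches, horizontal hand motion spins the model around Y and
// vertical motion tilts it around X. Not 3Gear's actual sample code.
struct HandState { bool pinching; float x, y; };  // assumed tracker output, meters

struct ModelRotation {
    float yawDeg = 0.0f, pitchDeg = 0.0f;
    bool dragging = false;
    float lastX = 0.0f, lastY = 0.0f;

    void update(const HandState& hand, float degreesPerMeter = 360.0f) {
        if (hand.pinching) {
            if (dragging) {
                // Accumulate rotation from the hand's motion since last frame.
                yawDeg   += (hand.x - lastX) * degreesPerMeter;
                pitchDeg += (hand.y - lastY) * degreesPerMeter;
            }
            lastX = hand.x;
            lastY = hand.y;
        }
        dragging = hand.pinching;   // releasing the pinch ends the drag
    }
};
```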

From there my colleagues and I moved on to the C++ samples.  According to Chris Twigg from 3Gear, this 3D chess game (with 3D physics) was written by one of their summer interns.  If an intern can do this in a month … you get the picture.

I'm fortunate to get to do a lot of R&D in my job at Razorfish, as do my colleagues. We've got home automation parts, Arduino bits, electronic textiles, endless Kinects, 3D walls, transparent screens, video walls, and all manner of high-tech toys around our lab. Despite all that, playing with the 3Gear software has been the first time in a long while that we've had that great sense of "gee-whiz, we didn't know that this was really possible."

Thanks, 3Gear, for making our week!

Two Years of Kinect

As we approach the second anniversary of the release of the Kinect sensor, it seems appropriate to take inventory of how far we have come. Over the past two months, I have had the privilege of being introduced to several Kinect-based tools and demos that exemplify the potential of the Kinect and provide an indication of where the technology is headed.

One of my favorites is a startup in San Francisco called 3Gear Systems. 3Gear have conquered the problem of precise finger detection by using dual Kinects. Whereas the original Kinect was very much a full-body sensor intended for bodies up to twelve feet away from the camera, 3Gear have made the Kinect into a more intimate device. The user can pick up digital objects in 3D space, move them, rotate them, and even draw freehand with her finger. The accuracy is amazing. The founders, Robert Wang, Chris Twigg, and Kenrick Kin, have just released a beta of their finger-precise gesture-detection SDK for developers to try out, along with instructions on purchasing and assembling a rig to take advantage of their software. Here's a video demonstrating their setup and the amazing things you will be able to do with it.
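
What makes the pick-up-and-move interaction feasible is that finger-precise tracking reduces "grab" to a small state machine: thumb and index tip close together means grab, apart means release, and the object rides along with the pinch point in between. A hypothetical sketch of that idea; the FingerFrame type is my stand-in, not 3Gear's actual API:

```cpp
// Hypothetical grab-and-move interaction on top of finger-precise
// tracking. The FingerFrame fields are assumed, not 3Gear's real API.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

struct FingerFrame { Vec3 thumbTip, indexTip; };   // assumed tracker output, meters

struct GrabInteraction {
    bool holding = false;
    Vec3 objectPos{0, 0, 0};
    Vec3 grabOffset{0, 0, 0};

    void update(const FingerFrame& f, float pinchOn = 0.02f, float pinchOff = 0.04f) {
        Vec3 pinch = { (f.thumbTip.x + f.indexTip.x) * 0.5f,
                       (f.thumbTip.y + f.indexTip.y) * 0.5f,
                       (f.thumbTip.z + f.indexTip.z) * 0.5f };
        float gap = dist(f.thumbTip, f.indexTip);

        if (!holding && gap < pinchOn) {            // fingers closed: grab
            holding = true;
            grabOffset = { objectPos.x - pinch.x, objectPos.y - pinch.y,
                           objectPos.z - pinch.z };
        } else if (holding && gap > pinchOff) {     // fingers open: release
            holding = false;
        }
        if (holding)                                // object rides the pinch point
            objectPos = { pinch.x + grabOffset.x, pinch.y + grabOffset.y,
                          pinch.z + grabOffset.z };
    }
};
```

Note the hysteresis: the grab threshold is smaller than the release threshold, which keeps the object from flickering in and out of the hand when the fingers hover near the boundary.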

Mastering the technology is only half the story, however. Oblong Industries has for several years been designing the correct gestures to use in a post-touch world. This TED Talk by John Underkoffler, Oblong’s Chief Scientist, demonstrates their g-speak technology using gloves to enable precision gesturing. Lately they’ve taken off the gloves in order to accomplish similar interactions using Kinect and Xtion sensors. The difficulty, of course, is that gestural languages can have accents just as spoken languages do. Different people perform the same gesture in different ways. On top of this, interaction gestures should feel intuitive or, at least, be easy for users to discover and master. Oblong’s extensive experience with gestural interfaces has aided them greatly in overcoming these types of hurdles and identifying the sorts of gestures that work broadly.

The advent of the Kinect is also having a large impact on independent filmmakers. While increasingly powerful software has allowed indies to do things in post-production that, five years ago, were solely the province of companies like ILM, the Kinect is finally opening up the possibility of doing motion capture on the cheap. Few have done more than Jasper Brekelmans to make this possible. His Kinect Pro Face software, currently sold for $99 USD, allows live streaming of Kinect face-tracking data straight into 3D modeling software. This data can then be mapped to 3D models to allow for real-time digital puppetry.
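
For a sense of how little data is involved: the Kinect face-tracking SDK boils a face down to a handful of animation units, small floating-point values for things like jaw drop and brow position. Puppetry is then mostly a matter of remapping those values each frame onto a rig's blendshape weights. A sketch of that idea; the blendshape names belong to a hypothetical rig, and the AU indices are my recollection of the Kinect SDK's convention, so treat them as assumptions:

```cpp
// Sketch of mapping face-tracking animation units (AUs) onto a rig's
// blendshape weights. AU indices follow my recollection of the Kinect
// Face Tracking SDK's convention; the blendshape names are hypothetical.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

std::map<std::string, float> auToBlendshapes(const std::vector<float>& au) {
    auto clamp01 = [](float v) { return std::max(0.0f, std::min(1.0f, v)); };
    std::map<std::string, float> weights;
    if (au.size() < 6) return weights;       // expects the SDK's six AUs
    weights["JawOpen"]   = clamp01(au[1]);   // AU1: jaw lower
    weights["Smile"]     = clamp01(au[2]);   // AU2: lip stretcher
    weights["BrowsDown"] = clamp01(au[3]);   // AU3: brow lowerer
    weights["BrowsUp"]   = clamp01(au[5]);   // AU5: outer brow raiser
    return weights;                          // feed to the 3D rig each frame
}
```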

Kinect Pro Face is just one approach to translating and storing the data streams coming out of the Kinect device. Another approach is being spearheaded by my friend Joshua Blake at InfoStrat. His company's PointStreamer software treats the video, depth, and audio feeds like the output of any other camera, compressing the data for subsequent playback. PointStreamer's preferred playback mode is through point clouds, which project color data onto the 3D space generated from the depth data. These point-cloud playbacks can then be rotated in space, scrubbed in time, and generally manipulated in any way we like. This alpha-stage technology demonstrates the possibility of one day recording everything in pseudo-3D.
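
Scrubbing in time falls out almost for free once the recording is just a sequence of timestamped depth and color frames: playback picks the frame nearest the requested time and re-projects it as a point cloud, exactly as in the earlier depth-to-cloud sketch. A rough sketch of the lookup, with the frame format assumed (PointStreamer's actual container and compression aren't public):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One recorded frame: a timestamp plus raw depth and registered color.
// The layout is assumed for the sketch; PointStreamer's real format
// and compression scheme are not public.
struct RecordedFrame {
    double t;                        // seconds since start of recording
    std::vector<uint16_t> depth;     // 640x480 depth, millimeters
    std::vector<uint8_t> color;      // 640x480 RGB, registered to depth
};

// Scrubbing: binary-search for the frame closest to the requested time.
// Assumes rec is non-empty and sorted by timestamp.
const RecordedFrame& frameAt(const std::vector<RecordedFrame>& rec, double t) {
    auto it = std::lower_bound(rec.begin(), rec.end(), t,
        [](const RecordedFrame& f, double time) { return f.t < time; });
    if (it == rec.end()) return rec.back();
    if (it != rec.begin() && (t - std::prev(it)->t) < (it->t - t)) --it;
    return *it;                      // re-project this frame as a point cloud
}
```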