The Imaginative Universal

Studies in Virtual Phenomenology -- @jamesashley

Quick Guide to moving from the Kinect SDK beta 2 to v1

by James Ashley, 16 March 2012 15:42

If you had been working with the beta 2 of the Kinect SDK prior to February 1st, you may have felt dismay at the number of API changes that were introduced in v1.

After porting several Kinect applications from the beta 2 to v1, however, I finally started to see a pattern to the changes.  For the most part, it is simply a matter of replacing one set of boilerplate code with another; the unique portions of your code can largely be left alone.

In this post, I want to demonstrate five simple code transformations that will ease your way from the beta 2 to the Kinect SDK v1.  I’ll do it boilerplate fragment by boilerplate fragment.

1. Namespaces have been shifted around.  Microsoft.Research.Kinect.Nui is now just Microsoft.Kinect.  Fortunately Visual Studio makes resolving namespaces relatively easy, so we can just move on.
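In practice this is just a one-line change at the top of each file:

```csharp
// beta 2
using Microsoft.Research.Kinect.Nui;

// v1
using Microsoft.Kinect;
```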

2. The Runtime type, the controller object for working with data streams from the Kinect, is now called a KinectSensor type.  Grabbing an instance of it has also changed.  You used to just new up an instance like this:

Runtime nui = new Runtime();

Now you instead grab an instance of KinectSensor from a static collection containing all the Kinect sensors attached to your PC.

KinectSensor sensor = KinectSensor.KinectSensors[0];
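If you want to be a little more defensive than the one-liner above — the collection may be empty, or a sensor may be plugged in but not yet ready — you can pick the first sensor that reports itself as connected.  A sketch, assuming LINQ is available:

```csharp
using System.Linq;
using Microsoft.Kinect;

// grab the first sensor the SDK reports as connected, or null if none
KinectSensor sensor = KinectSensor.KinectSensors
    .FirstOrDefault(s => s.Status == KinectStatus.Connected);

if (sensor == null)
{
    // no usable Kinect attached -- bail out or prompt the user
    return;
}
```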

3. Initializing a KinectSensor object to start reading the color stream, depth stream or skeleton stream has also changed.  In the beta 2, the initialization procedure just didn’t look very .NET-y.  In v1, this has been cleaned up dramatically.  The beta 2 code for initializing a depth and skeleton stream looked like this:

_nui.SkeletonFrameReady += 
    new EventHandler<SkeletonFrameReadyEventArgs>(
        _nui_SkeletonFrameReady
        );
_nui.DepthFrameReady += 
    new EventHandler<ImageFrameReadyEventArgs>(
        _nui_DepthFrameReady
        );
_nui.Initialize(RuntimeOptions.UseDepth, RuntimeOptions.UseSkeletalTracking);
_nui.DepthStream.Open(ImageStreamType.Depth
    , 2
    , ImageResolution.Resolution320x240
    , ImageType.DepthAndPlayerIndex);

In v1, this boilerplate code has been altered so the Initialize method goes away, roughly replaced by a Start method.  The Open methods on the streams, in turn, have been replaced by Enable.  The DepthAndPlayerIndex data is made available simply by having the skeleton stream enabled.  Also note that the event argument types for the depth and color streams are now different.  Here is the same code in v1:

sensor.SkeletonFrameReady += 
    new EventHandler<SkeletonFrameReadyEventArgs>(
        sensor_SkeletonFrameReady
        );
sensor.DepthFrameReady += 
    new EventHandler<DepthImageFrameReadyEventArgs>(
        sensor_DepthFrameReady
        );
sensor.SkeletonStream.Enable();
sensor.DepthStream.Enable(
    DepthImageFormat.Resolution320x240Fps30
    );
sensor.Start();
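The Start method also has a matching Stop.  It is worth calling it when your application shuts down — typically from the window's Closing handler — so the sensor is released cleanly.  A minimal sketch:

```csharp
// typically wired to the window's Closing event
void CleanUpSensor()
{
    if (sensor != null)
    {
        sensor.Stop();
    }
}
```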

4. Transform Smoothing: it used to be really easy to smooth out the skeleton stream in beta 2.  You simply turned it on.

nui.SkeletonStream.TransformSmooth = true;

In v1, you have to create a new TransformSmoothParameters object and pass it to the skeleton stream’s Enable method.  Unlike the beta 2, you also have to initialize the values yourself, since they all default to zero.

sensor.SkeletonStream.Enable(
    new TransformSmoothParameters() 
    {   Correction = 0.5f
    , JitterRadius = 0.05f
    , MaxDeviationRadius = 0.04f
    , Smoothing = 0.5f });
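For reference, TransformSmoothParameters exposes five fields in all (the code above leaves Prediction at zero).  The values below are only illustrative starting points, not recommendations from the SDK:

```csharp
var smoothing = new TransformSmoothParameters()
{
    Smoothing = 0.5f,           // how aggressively raw joint data is filtered
    Correction = 0.5f,          // how quickly the filter corrects toward raw data
    Prediction = 0.5f,          // how far ahead joint positions are predicted
    JitterRadius = 0.05f,       // radius (in meters) used to clamp jitter
    MaxDeviationRadius = 0.04f  // cap on how far filtered values may deviate
};
sensor.SkeletonStream.Enable(smoothing);
```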

5. Stream event handling: handling the ready events from the depth stream, the video stream and the skeleton stream also used to be much easier.  Here’s how you handled the DepthFrameReady event in beta 2 (skeleton and video followed the same pattern):

void _nui_DepthFrameReady(object sender
    , ImageFrameReadyEventArgs e)
{
    var frame = e.ImageFrame;
    var planarImage = frame.Image;
    var bits = planarImage.Bits;
    // your code goes here
}

For performance reasons, the newer v1 code looks very different, and the underlying C++ API leaks through a bit.  In v1, we are required to open the image frame and check to make sure something was returned.  Additionally, we create our own array of bytes (for the depth stream this has become an array of shorts) and populate it from the frame object.  The PlanarImage type, which you may have gotten cozy with in beta 2, has disappeared altogether.  Also note the using keyword to dispose of the ImageFrame object. The transliteration of the code above now looks like this:

void sensor_DepthFrameReady(object sender
    , DepthImageFrameReadyEventArgs e)
{
    using (var depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            var bits =
                new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(bits);
            // your code goes here
        }
    }
}
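The color stream follows the same open-check-copy pattern, except the pixel data stays a byte array.  A sketch, assuming the ColorFrameReady event has been wired up and sensor.ColorStream.Enable() has been called:

```csharp
void sensor_ColorFrameReady(object sender
    , ColorImageFrameReadyEventArgs e)
{
    using (var colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            var bits =
                new byte[colorFrame.PixelDataLength];
            colorFrame.CopyPixelDataTo(bits);
            // your code goes here
        }
    }
}
```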

 

I have noticed that many sites and libraries that were using the Kinect SDK beta 2 still have not been ported to Kinect SDK v1.  I certainly understand the hesitation given how much the API seems to have changed.

If you follow these five simple translation rules, however, you’ll be able to convert approximately 80% of your code very quickly.

Tags: Kinect