
Kinect and WPF: Skeleton tracking using the official Microsoft SDK

By Vangos Pterneas, 3 Dec 2013

Introduction

It's official! Microsoft released the Kinect SDK we have all been waiting for! The Kinect SDK offers natural user interaction and audio APIs, and it can also be used with the existing Microsoft Speech API. Today we'll see how to create a WPF application that performs skeleton tracking.

Not a surprise, this official SDK provides an API similar to OpenNI's. That's pretty cool for me (and anyone following my blog), because there isn't much that needs to be learned from scratch.

Kinect skeleton tracking

Step 0

Uninstall any previous Kinect drivers (such as PrimeSensor, CL NUI, OpenKinect, etc.).

Step 1

Download and install the official Kinect SDK. System requirements:

  • Kinect for Windows or Kinect for XBOX sensor
  • Computer with a dual-core, 2.66-GHz or faster processor
  • Windows 7–compatible graphics card that supports DirectX® 9.0c capabilities
  • 2 GB of RAM

Important: Remember to restart your PC after the installation!

Step 2

Launch Visual Studio and create a new WPF application.

Step 3

Add a reference to the Microsoft.Research.Kinect assembly, found under the .NET tab. Do not forget to include its namespace in your xaml.cs file. I only included the Nui namespace, as we do not currently need the audio capabilities.

using Microsoft.Research.Kinect.Nui;

Step 4

It's time to create the user interface: an image displaying the raw camera output, and a canvas displaying the users' joints:

<Grid>
    <Image Name="img" Width="640" Height="480" />
    <Canvas Name="canvas" Width="640" Height="480" />
</Grid>

Step 5

Now for the most interesting part! Let's see how we obtain the raw camera image and how we perform user skeleton tracking. Open your xaml.cs file and start typing.

The Kinect API offers a Runtime object which will accomplish the mission:

Runtime _nui = new Runtime();

After that, we have to initialize the Runtime object and then open the video stream:

_nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | 
    RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);

_nui.VideoStream.Open(ImageStreamType.Video, 2, 
    ImageResolution.Resolution640x480, ImageType.Color);

Finally, we need to define the proper event handlers for the camera image display and the skeleton recognition. Pretty simple:

_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(
    Nui_VideoFrameReady);

_nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(
    Nui_SkeletonFrameReady);

Here follows the implementation of each method. They are self-explanatory and quite similar to what my Nui.Vision library does.

void Nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // Display the raw 32-bit BGR camera frame in the Image element.
    var image = e.ImageFrame.Image;
    img.Source = BitmapSource.Create(image.Width, image.Height, 96, 96, 
        PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
}

void Nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    // Redraw the joints of every actively tracked user on each new frame.
    canvas.Children.Clear();

    foreach (SkeletonData user in e.SkeletonFrame.Skeletons)
    {
        if (user.TrackingState == SkeletonTrackingState.Tracked)
        {
            foreach (Joint joint in user.Joints)
            {
                DrawPoint(joint, Colors.Blue);
            }
        }
    }
}
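The DrawPoint helper is one of the pieces omitted from this post. A minimal sketch might look like the following; the linear mapping from skeleton space to the canvas is a simplification I'm assuming here (the downloadable sample does the conversion more carefully), and it requires the System.Windows.Controls, System.Windows.Media and System.Windows.Shapes namespaces:

```csharp
// Hypothetical DrawPoint helper: draws a small ellipse on the canvas at
// the joint's position. Joint positions arrive in skeleton space (metres,
// roughly -1..+1 on the X and Y axes at typical play distances), so they
// are mapped linearly onto the 640x480 canvas. Note that screen Y grows
// downwards, hence the inverted Y mapping.
void DrawPoint(Joint joint, Color color)
{
    double x = (joint.Position.X + 1) / 2 * canvas.Width;
    double y = (1 - joint.Position.Y) / 2 * canvas.Height;

    var ellipse = new Ellipse
    {
        Width = 10,
        Height = 10,
        Fill = new SolidColorBrush(color)
    };

    // Centre the ellipse on the computed pixel position.
    Canvas.SetLeft(ellipse, x - ellipse.Width / 2);
    Canvas.SetTop(ellipse, y - ellipse.Height / 2);
    canvas.Children.Add(ellipse);
}
```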

Done! Build and run your project. Download demo with source code.

Attention: I have omitted some lines of code from this blog post in order to make it clearer. I suggest you download the sample project and have a look at it. You'll find the conversion of the X, Y and Z-axis values from centimetres to pixels quite interesting. In my example, I actually used the basic idea from the Coding4Fun Kinect Toolkit.
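The idea behind that conversion can be sketched as a small clamped linear map, in the spirit of the Coding4Fun Kinect Toolkit's ScaleTo extension. The method name, signature and input range below are my own assumptions for illustration, not the toolkit's actual API:

```csharp
// Maps a skeleton-space coordinate (metres, roughly -inputRange..+inputRange
// for X and Y at typical play distances) onto the pixel range [0, max].
// Values outside the assumed input range are clamped to the canvas edges.
static double ScaleTo(float position, double max, float inputRange = 1.0f)
{
    double scaled = (position + inputRange) / (2 * inputRange) * max;
    return Math.Max(0, Math.Min(max, scaled));
}

// Example: a joint at X = 0 (the centre of the sensor's field of view)
// maps to the middle of a 640-pixel-wide canvas:
// double px = ScaleTo(0f, 640);   // 320
```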

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Vangos Pterneas
Product Manager LightBuzz
United Kingdom
I'm a Software Engineer and Entrepreneur, passionate about motion technology and the way it can affect people’s lives.
 
I have been a Kinect enthusiast since the release of the very first unofficial hacks and have already published some innovative commercial Kinect applications. These applications include complex home automation systems, 3D body scanning programs and motion-enabled product browsers for businesses.
 
I worked as a Windows developer and consultant for Microsoft Innovation Center and I'm now running my own company, LightBuzz Software. LightBuzz has been awarded the first place in Microsoft’s worldwide innovation competition, held in New York, for effectively combining Kinect and smartphone functionality.
 
When I am not coding, I love writing books, speaking and blogging about my favorite technological aspects.

Article Copyright 2011 by Vangos Pterneas