
Kinect for Windows Version 2: Body Tracking

11 Apr 2014 · CPOL · 3 min read

NOTE: This is preliminary software and/or hardware and APIs are preliminary and subject to change.

In my previous blog post, I showed you how to display the color, depth and infrared streams of Kinect version 2 by transforming the raw binary data into Windows bitmaps.

This time, we’ll dive into the most essential part of Kinect: Body tracking.

The initial version of Kinect allowed us to track up to 20 body joints. The second version allows up to 25 joints. The new joints include the hand tips and thumbs! Moreover, due to the enhanced depth sensor, the tracking accuracy has been significantly improved. Experienced users will notice less jittering and much better stability. Once again, I would like to remind you of my video, which demonstrates the new body tracking capabilities:

Watch the video on YouTube

Next, we are going to implement body tracking and display all of the new joints on-screen. We’ll extend the project we created previously. You can download the source code here.

Extending the Project

In the previous blog post, we created a project with an <Image> element for displaying the streams. We now need to add a <Canvas> control for drawing the body. Here is the updated XAML code:

XML
<Grid>
    <Image Name="camera" />
    <Canvas Name="canvas" />
</Grid>

We also added a reference to the Microsoft.Kinect namespace and initialized the sensor:

C#
// Kinect namespace
using Microsoft.Kinect;

// ...

// Kinect sensor and Kinect stream reader objects
KinectSensor _sensor;
MultiSourceFrameReader _reader;
IList<Body> _bodies;

// Kinect sensor initialization
_sensor = KinectSensor.Default;

if (_sensor != null)
{
    _sensor.Open();
}
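When the application closes, it is also good practice to release these objects. Here is a minimal sketch, assuming you wire a handler to the window's Closing event (the handler name is just an example, not part of the original project):

C#
private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
    // Dispose the frame reader.
    if (_reader != null)
    {
        _reader.Dispose();
        _reader = null;
    }

    // Close the Kinect sensor.
    if (_sensor != null)
    {
        _sensor.Close();
        _sensor = null;
    }
}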

We also added a list of bodies, where all of the body/skeleton-related data will be saved. If you have developed for Kinect version 1, you will notice that the Skeleton class has been replaced by the Body class.

Remember the MultiSourceFrameReader? This class gives us access to every stream, including the body stream! We simply need to let the sensor know that we need body tracking functionality by adding an additional parameter when initializing the reader:

C#
_reader = _sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Color |
                                             FrameSourceTypes.Depth |
                                             FrameSourceTypes.Infrared |
                                             FrameSourceTypes.Body);
_reader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;

The Reader_MultiSourceFrameArrived method will be called whenever a new frame is available. Let’s specify what will happen in terms of the body data:

  1. Get a reference to the body frame
  2. Check whether the body frame is null – this is crucial
  3. Initialize the _bodies list
  4. Call the GetAndRefreshBodyData method, so as to copy the body data into the list
  5. Loop through the list of bodies and do awesome stuff!

Always remember to check for null values. Kinect provides you with approximately 30 frames per second – anything could be null or missing! Here is the code so far:

C#
void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    var reference = e.FrameReference.AcquireFrame();

    // Color
    // ...

    // Depth
    // ...

    // Infrared
    // ...

    // Body
    using (var frame = reference.BodyFrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            _bodies = new Body[frame.BodyFrameSource.BodyCount];

            frame.GetAndRefreshBodyData(_bodies);

            foreach (var body in _bodies)
            {
                if (body != null)
                {
                    // Do something with the body...
                }
            }
        }
    }
}

This is it! We now have access to the bodies Kinect identifies. The next step is to display the skeleton information on-screen. Each body consists of 25 joints. The sensor provides us with the position (X, Y, Z) and the rotation information for each one of them. Moreover, Kinect lets us know whether each joint is tracked, inferred (hypothesized) or not tracked. It's good practice to check whether a body is tracked before performing any critical functions. The following code illustrates how we can access the different body joints:

C#
if (body != null)
{
    if (body.IsTracked)
    {
        Joint head = body.Joints[JointType.Head];
        
        float x = head.Position.X;
        float y = head.Position.Y;
        float z = head.Position.Z;

        // Draw the joints...
    }
}
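Since Joints is a dictionary keyed by JointType, you can also loop over every joint of a tracked body and inspect its state. The following is a minimal sketch (not part of the attached project) that writes each tracked or inferred joint to the debug output; it assumes a using System.Diagnostics; directive:

C#
foreach (Joint joint in body.Joints.Values)
{
    if (joint.TrackingState == TrackingState.NotTracked)
    {
        continue; // No position data for this joint.
    }

    // Inferred joints have a hypothesized position; treat them as less reliable.
    bool isInferred = joint.TrackingState == TrackingState.Inferred;

    Debug.WriteLine("{0}: X={1:N2} Y={2:N2} Z={3:N2} (inferred: {4})",
        joint.JointType, joint.Position.X, joint.Position.Y, joint.Position.Z, isInferred);
}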

The joints supported by Kinect 2 are the following:

  • SpineBase
  • SpineMid
  • Neck
  • Head
  • ShoulderLeft
  • ElbowLeft
  • WristLeft
  • HandLeft
  • ShoulderRight
  • ElbowRight
  • WristRight
  • HandRight
  • HipLeft
  • KneeLeft
  • AnkleLeft
  • FootLeft
  • HipRight
  • KneeRight
  • AnkleRight
  • FootRight
  • SpineShoulder
  • HandTipLeft
  • ThumbLeft
  • HandTipRight
  • ThumbRight

The neck, hand tips and thumbs are new joints added in the second version of Kinect.

Knowing the coordinates of every joint, we can now draw some objects using XAML and C#. However, Kinect provides positions in real-world units (meters), so we need to map them to screen pixels. In the attached project, I have made this mapping for you, so the only methods you need to call are DrawPoint and DrawLine. Here is DrawPoint:

C#
public static void DrawPoint(this Canvas canvas, Joint joint)
{
    // 1) Check whether the joint is tracked.
    if (joint.TrackingState == TrackingState.NotTracked) return;

    // 2) Map the real-world coordinates to screen pixels.
    joint = joint.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);

    // 3) Create a WPF ellipse.
    Ellipse ellipse = new Ellipse
    {
        Width = 20,
        Height = 20,
        Fill = new SolidColorBrush(Colors.LightBlue)
    };

    // 4) Position the ellipse according to the joint's coordinates.
    Canvas.SetLeft(ellipse, joint.Position.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, joint.Position.Y - ellipse.Height / 2);

    // 5) Add the ellipse to the canvas.
    canvas.Children.Add(ellipse);
}
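For reference, here is one way the frame handler could call it (a minimal sketch reusing the canvas element and the _bodies loop from the snippets above; the attached project may structure this slightly differently):

C#
// Clear the canvas once per frame, before looping over the bodies,
// otherwise old ellipses accumulate.
canvas.Children.Clear();

foreach (var body in _bodies)
{
    if (body != null && body.IsTracked)
    {
        // Draw every joint of the tracked body.
        foreach (Joint joint in body.Joints.Values)
        {
            canvas.DrawPoint(joint);
        }
    }
}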

Similarly, you can draw lines using the Line object. Download the sample project and see for yourself.
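If you want a starting point before downloading, a DrawLine sketched along the same lines could look like this (the version in the attached project may differ slightly; it reuses the same ScaleTo helper):

C#
public static void DrawLine(this Canvas canvas, Joint first, Joint second)
{
    // 1) Skip the line if either joint is not tracked.
    if (first.TrackingState == TrackingState.NotTracked ||
        second.TrackingState == TrackingState.NotTracked) return;

    // 2) Map the real-world coordinates of both joints to screen pixels.
    first = first.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);
    second = second.ScaleTo(canvas.ActualWidth, canvas.ActualHeight);

    // 3) Create a WPF line connecting the two joints.
    Line line = new Line
    {
        X1 = first.Position.X,
        Y1 = first.Position.Y,
        X2 = second.Position.X,
        Y2 = second.Position.Y,
        StrokeThickness = 5,
        Stroke = new SolidColorBrush(Colors.LightBlue)
    };

    // 4) Add the line to the canvas.
    canvas.Children.Add(line);
}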

Here is the end result you saw in the video:

[Image: Kinect 2 body stream]

Notice that the body joints are not perfectly aligned to the background image. Why? Because the color, infrared and depth sensors are not located at exactly the same spot, so each has a slightly different point of view. You can use the coordinate mapper of the SDK to align them if necessary.
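As a rough illustration of that alignment (not part of the attached project), you could map each joint's camera-space position to color-space pixels with the CoordinateMapper and scale the result to your canvas; the 1920x1080 color resolution of Kinect v2 is assumed here:

C#
// Map a joint's 3D camera-space position to 2D color-space pixels.
CoordinateMapper mapper = _sensor.CoordinateMapper;
ColorSpacePoint colorPoint = mapper.MapCameraPointToColorSpace(joint.Position);

// The mapper returns infinity for points it cannot map, so check first.
if (!float.IsInfinity(colorPoint.X) && !float.IsInfinity(colorPoint.Y))
{
    // colorPoint is expressed in 1920x1080 color-frame pixels;
    // scale it to the canvas size and use it to position the ellipse,
    // instead of the ScaleTo call inside DrawPoint.
    double x = colorPoint.X * canvas.ActualWidth / 1920.0;
    double y = colorPoint.Y * canvas.ActualHeight / 1080.0;
}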

Body tracking works much like it did with the previous sensor. In the next blog post, we are going to see something totally new: facial expressions and hand states.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)



Comments and Discussions

 
Question: Body Tracking in C++ - How to query
Member 11090468, 17-Sep-14
First of all, thanks for your tutorials and code sharing.
I am new(-ish) to C++ and have no experience with Kinect (I normally code in Matlab, but there is no viable link with Kinect at the moment). I am trying to use snippets of your code adapted to C++. So far, I managed to get it working up to the
C#
if (body.IsTracked)
    {

Then I simply output the TrackingId to the console for testing. This works fine.
C++
UINT64   trackingID;
hr = m_bodies[bodyId]->get_TrackingId(&trackingID);
std::cout << "BodyId: " << bodyId << std::endl;

I haven't been able to find the C++ equivalent of the Joint data type used in your next bit:
C#
Joint head = body.Joints[JointType.Head];
        
        float x = head.Position.X;
        float y = head.Position.Y;
        float z = head.Position.Z;

I also don't know how to implement the drawPoint in C++.
Any help or suggestions would be greatly appreciated.
Cheers,
ADNewbie

