Posted 18 Jun 2011



Kinect – Getting Started – Become The Incredible Hulk

Getting Started with Kinect - Create Project, Control the Camera Angle and use Skeleton Tracking


On 16 June 2011, Microsoft released the Kinect for Windows SDK Beta. I had to download it and give it a try right away, and it's amazing!

In this article, I'll show you how to get started with the Kinect SDK. From there, we'll move on to controlling the camera angle, and finally I'll show how to use skeleton tracking, with a nice example of how to become The Incredible Hulk.


The Kinect for Windows SDK Beta is a programming toolkit for application developers. It gives the academic and enthusiast communities easy access to the capabilities of the Microsoft Kinect device connected to a computer running the Windows 7 operating system.

The Kinect for Windows SDK beta includes drivers, rich APIs for raw sensor streams and human motion tracking, installation documents, and resource materials. It provides Kinect capabilities to developers who build applications with C++, C#, or Visual Basic by using Microsoft Visual Studio 2010.

Step 1 – Prepare Your Environment

In order to work with the Kinect .NET SDK, you need to meet the following requirements:

Supported Operating Systems and Architectures

  • Windows 7 (x86 or x64)

Hardware Requirements

  • Computer with a dual-core, 2.66-GHz or faster processor
  • Windows 7–compatible graphics card that supports Microsoft® DirectX® 9.0c capabilities
  • 2 GB of RAM
  • Kinect for Xbox 360® sensor—retail edition, which includes special USB/power cabling

Software Requirements

  • Microsoft Visual Studio 2010 (any edition, including Express)
  • .NET Framework 4.0

Step 2: Create New WPF Project

Add a reference to Microsoft.Research.Kinect.Nui (located under C:\Program Files (x86)\Microsoft Research KinectSDK) and make sure your project targets the x86 platform, because this Beta SDK includes only x86 libraries.


An application must initialize the Kinect sensor by calling Runtime.Initialize before calling any other methods on the Runtime object. Runtime.Initialize initializes the internal frame-capture engine, which starts a thread that retrieves data from the Kinect sensor and signals the application when a frame is ready. It also initializes the subsystems that collect and process the sensor data. The Initialize method throws InvalidOperationException if it fails to find a Kinect sensor, so the call to Runtime.Initialize appears in a try/catch block.

Create a Window Loaded event handler and call InitializeNui from it:

private void InitializeNui()
{
    try
    {
        //Declares _kinectNui as a Runtime object, 
        //which represents the Kinect sensor instance.
        _kinectNui = new Runtime();

        //An application must initialize the Kinect sensor by calling 
        //Runtime.Initialize before calling any other methods on the Runtime object. 
        _kinectNui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |
                        RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);

        //To stream color images:
        //  •	The options must include UseColor.
        //  •	Valid image resolutions are Resolution1280x1024 and Resolution640x480.
        //  •	Valid image types are Color, ColorYuv, and ColorYuvRaw.
        _kinectNui.VideoStream.Open(ImageStreamType.Video, 2,
                        ImageResolution.Resolution640x480, ImageType.ColorYuv);

        //To stream depth and player index data:
        //  •	The options must include UseDepthAndPlayerIndex.
        //  •	Valid resolutions for depth and player index data are 
        //	Resolution320x240 and Resolution80x60.
        //  •	The only valid image type is DepthAndPlayerIndex.
        _kinectNui.DepthStream.Open(ImageStreamType.Depth, 2,
                        ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);

        lastTime = DateTime.Now;

        //Set up the event handlers that the runtime calls 
        //when a video or depth frame is ready.
        _kinectNui.VideoFrameReady +=
            new EventHandler<ImageFrameReadyEventArgs>(NuiVideoFrameReady);
        _kinectNui.DepthFrameReady +=
            new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
}

Step 3: Show Video

Both the video and depth streams return a PlanarImage; we just need to create a new bitmap from it and display it in the UI.

Video Frame Ready Event Handler

void NuiVideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage Image = e.ImageFrame.Image;

    image.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);

    imageCmyk32.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Cmyk32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);
}

Depth Frame Ready Event Handler

Depth is different because the frame you get back is 16-bit and needs to be converted to 32-bit; I've used the same conversion method as the SDK sample.

void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var Image = e.ImageFrame.Image;
    var convertedDepthFrame = convertDepthFrame(Image.Bits);

    depth.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32,
        null, convertedDepthFrame, Image.Width * 4);

    //Count frames so CalculateFps has something to measure.
    ++totalFrames;
    CalculateFps();
}


//Byte offsets of the color channels within a 32-bit Bgr32 pixel.
const int RED_IDX = 2;
const int GREEN_IDX = 1;
const int BLUE_IDX = 0;

// Converts a 16-bit grayscale depth frame which includes player 
// indexes into a 32-bit frame
// that displays different players in different colors
byte[] convertDepthFrame(byte[] depthFrame16)
{
    //4 output bytes (Bgr32) for every 2 input bytes (one depth pixel)
    byte[] depthFrame32 = new byte[depthFrame16.Length * 2];

    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length &&
            i32 < depthFrame32.Length; i16 += 2, i32 += 4)
    {
        //bits 0-2 of the low byte hold the player index (0 = no player)
        int player = depthFrame16[i16] & 0x07;
        //bits 3-15 hold the distance in millimeters
        int realDepth = (depthFrame16[i16 + 1] << 5) | (depthFrame16[i16] >> 3);
        // transform 13-bit depth information into an 8-bit intensity appropriate
        // for display (we disregard information in most significant bit)
        byte intensity = (byte)(255 - (255 * realDepth / 0x0fff));

        depthFrame32[i32 + RED_IDX] = intensity;
        depthFrame32[i32 + BLUE_IDX] = intensity;
        depthFrame32[i32 + GREEN_IDX] = intensity;
    }
    return depthFrame32;
}

void CalculateFps()
{
    var cur = DateTime.Now;
    if (cur.Subtract(lastTime) > TimeSpan.FromSeconds(1))
    {
        int frameDiff = totalFrames - lastFrames;
        lastFrames = totalFrames;
        lastTime = cur;
        frameRate.Text = frameDiff.ToString() + " fps";
    }
}


Step 4: Control Camera Angle

Now, I'll show how easy it is to control the Kinect camera angle (tilt the camera up or down).

There are minimum and maximum angles you can set, but as you can see from the last picture (right), you can also move the Kinect sensor manually, and the angle will update automatically.


Grab the camera object from the Kinect runtime after initialization:

private Camera _cam;

_cam = _kinectNui.NuiCamera;
txtCameraName.Text = _cam.UniqueDeviceName; 

Here is the Camera definition:

namespace Microsoft.Research.Kinect.Nui
{
    public class Camera
    {
        public static readonly int ElevationMaximum;
        public static readonly int ElevationMinimum;

        public int ElevationAngle { get; set; }
        public string UniqueDeviceName { get; }

        public void GetColorPixelCoordinatesFromDepthPixel(
            ImageResolution colorResolution, ImageViewArea viewArea,
            int depthX, int depthY, short depthValue, out int colorX,
            out int colorY);
    }
}

Step 5: Up and Down

Now you can control the camera angle as follows:
To increase the camera angle, all you need to do is increase ElevationAngle. There are minimum and maximum angles for the camera (Camera.ElevationMinimum and Camera.ElevationMaximum), so don't push it too far, or you'll get an ArgumentOutOfRangeException.

private void BtnCameraUpClick(object sender, RoutedEventArgs e)
{
    try
    {
        _cam.ElevationAngle = _cam.ElevationAngle + 5;
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
    catch (ArgumentOutOfRangeException outOfRangeException)
    {
        //Elevation angle must be between Elevation Minimum/Maximum
        MessageBox.Show(outOfRangeException.Message);
    }
}

And down:

private void BtnCameraDownClick(object sender, RoutedEventArgs e)
{
    try
    {
        _cam.ElevationAngle = _cam.ElevationAngle - 5;
    }
    catch (InvalidOperationException ex)
    {
        MessageBox.Show(ex.Message);
    }
    catch (ArgumentOutOfRangeException outOfRangeException)
    {
        //Elevation angle must be between Elevation Minimum/Maximum
        MessageBox.Show(outOfRangeException.Message);
    }
}
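Instead of catching ArgumentOutOfRangeException, you can also clamp the requested angle into the supported range before assigning it. This is a small sketch of mine, not part of the SDK; the bounds are passed in as parameters so the helper works without a sensor attached (in the Beta SDK they come from Camera.ElevationMinimum and Camera.ElevationMaximum):

```csharp
using System;

static class ElevationHelper
{
    //Keep the requested angle within [min, max] so the
    //ElevationAngle setter never throws.
    public static int ClampElevation(int requested, int min, int max)
    {
        return Math.Max(min, Math.Min(max, requested));
    }
}
```

Usage would then look like `_cam.ElevationAngle = ElevationHelper.ClampElevation(_cam.ElevationAngle + 5, Camera.ElevationMinimum, Camera.ElevationMaximum);`.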

Background: Become The Incredible Hulk using Skeleton Tracking

One of the big strengths of the Kinect for Windows SDK is its ability to detect the joint skeleton of a human standing in front of the sensor; the recognition system is very fast and requires no training to use.

The NUI Skeleton API provides information about the location of up to two players standing in front of the Kinect sensor array, with detailed position and orientation information.

The data is provided to application code as a set of points, called skeleton positions, that compose a skeleton, as shown in the picture below. This skeleton represents a user’s current position and pose.

Applications that use skeleton data must indicate this at NUI initialization and must enable skeleton tracking.

The Vitruvian Man has 20 points that are called Joints in Kinect SDK.


Step 6: Register To SkeletonFrameReady

Make sure you initialize the runtime with UseSkeletalTracking; otherwise, skeleton tracking will not work.

_kinectNui.Initialize(RuntimeOptions.UseColor |
	RuntimeOptions.UseSkeletalTracking);
_kinectNui.SkeletonFrameReady +=
	new EventHandler<SkeletonFrameReadyEventArgs>(SkeletonFrameReady);

The Kinect NUI cannot track more than two skeletons at a time. The check

if (SkeletonTrackingState.Tracked != data.TrackingState) continue;   

skips skeletons that are not tracked; untracked skeletons only give their position, without the joints. A skeleton is also only tracked when the full body fits in the frame.

Debugging isn't a simple task when developing for Kinect: you have to get up each time you want to test something.

Skeleton joints are identified by the JointID enum, which defines each joint's reference position:

namespace Microsoft.Research.Kinect.Nui
{
    //The 20 skeleton joints (plus Count) defined by the Beta SDK
    public enum JointID
    {
        HipCenter = 0,
        Spine = 1,
        ShoulderCenter = 2,
        Head = 3,
        ShoulderLeft = 4,
        ElbowLeft = 5,
        WristLeft = 6,
        HandLeft = 7,
        ShoulderRight = 8,
        ElbowRight = 9,
        WristRight = 10,
        HandRight = 11,
        HipLeft = 12,
        KneeLeft = 13,
        AnkleLeft = 14,
        FootLeft = 15,
        HipRight = 16,
        KneeRight = 17,
        AnkleRight = 18,
        FootRight = 19,
        Count = 20
    }
}

Step 7: Get Joint Position

The joint position is defined in skeleton space, and we need to translate it to the size and position of our display.

Depth Image Space

Image frames of the depth map are 640x480, 320×240, or 80x60 pixels in size, with each pixel representing the distance, in millimeters, to the nearest object at that particular x and y coordinate. A pixel value of 0 indicates that the sensor did not find any objects within its range at that location. The x and y coordinates of the image frame do not represent physical units in the room, but rather pixels on the depth imaging sensor. The interpretation of the x and y coordinates depends on specifics of the optics and imaging sensor. For discussion purposes, this projected space is referred to as the depth image space.
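To make the pixel layout concrete, here is a minimal sketch (mine, not an SDK call) that unpacks one depth pixel when UseDepthAndPlayerIndex is enabled, matching the bit operations in convertDepthFrame above: the low three bits carry the player index and the remaining thirteen bits carry the distance in millimeters.

```csharp
using System;

static class DepthPixel
{
    //Unpack one 16-bit depth-and-player-index pixel into its parts.
    public static (int Player, int DepthMm) Unpack(byte lowByte, byte highByte)
    {
        int player = lowByte & 0x07;                    //bits 0-2: player index (0 = none)
        int depthMm = (highByte << 5) | (lowByte >> 3); //bits 3-15: distance in mm
        return (player, depthMm);
    }
}
```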

Skeleton Space

Player skeleton positions are expressed in x, y, and z coordinates. Unlike depth image space coordinates, these three coordinates are expressed in meters. The x, y, and z axes are the body axes of the depth sensor. This is a right-handed coordinate system that places the sensor array at the origin point with the positive z axis extending in the direction in which the sensor array points. The positive y axis extends upward, and the positive x axis extends to the left (with respect to the sensor array). For discussion purposes, this expression of coordinates is referred to as the skeleton space.
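Because skeleton-space coordinates are in meters with the sensor at the origin, the straight-line distance of a joint from the sensor is simply the Euclidean norm of its (x, y, z) position. A hypothetical helper, not an SDK method:

```csharp
using System;

static class SkeletonSpace
{
    //Distance (in meters) from the sensor to a point in skeleton space.
    public static double DistanceFromSensor(double x, double y, double z)
    {
        return Math.Sqrt(x * x + y * y + z * z);
    }
}
```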


private Point getDisplayPosition(Joint joint)
{
    float depthX, depthY;
    _kinectNui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out depthX, out depthY);
    depthX = Math.Max(0, Math.Min(depthX * 320, 320));  //convert to 320, 240 space
    depthY = Math.Max(0, Math.Min(depthY * 240, 240));  //convert to 320, 240 space

    int colorX, colorY;
    ImageViewArea iv = new ImageViewArea();
    // only ImageResolution.Resolution640x480 is supported at this point
    _kinectNui.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
        ImageResolution.Resolution640x480, iv, (int)depthX, (int)depthY,
        (short)0, out colorX, out colorY);

    // map back to the display size (the -30 offset centers the overlay image)
    return new Point((int)(imageContainer.Width * colorX / 640.0) - 30,
        (int)(imageContainer.Height * colorY / 480.0) - 30);
}

Step 8: Place Image Based On Joint Type

Each skeleton also carries a Position of type Vector4 (x, y, z, w): the first three components give the position of the skeleton's center of mass in camera space, and the last (w) gives the quality level, ranging from 0 to 1.

This value is the only positional value available for passive players.

void SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    foreach (SkeletonData data in e.SkeletonFrame.Skeletons)
    {
        //TrackingState defines whether a skeleton is tracked or not. 
        //Untracked skeletons only give their position. 
        if (SkeletonTrackingState.Tracked != data.TrackingState) continue;

        //Each joint has a Position property that is defined by a Vector4: (x, y, z, w). 
        //The first three attributes define the position in camera space. 
        //The last attribute (w) gives the quality level (between 0 and 1).
        foreach (Joint joint in data.Joints)
        {
            if (joint.Position.W < 0.6f) return; // Quality check
            switch (joint.ID)
            {
                case JointID.Head:
                    var headPos = getDisplayPosition(joint);
                    Canvas.SetLeft(imgHead, headPos.X);
                    Canvas.SetTop(imgHead, headPos.Y);
                    break;
                case JointID.HandRight:
                    var rhp = getDisplayPosition(joint);
                    Canvas.SetLeft(imgRightHand, rhp.X);
                    Canvas.SetTop(imgRightHand, rhp.Y);
                    break;
                case JointID.HandLeft:
                    var lhp = getDisplayPosition(joint);
                    Canvas.SetLeft(imgLeftHand, lhp.X);
                    Canvas.SetTop(imgLeftHand, lhp.Y);
                    break;
            }
        }
    }
}




History

  • 18th June, 2011: Initial post


This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL)


About the Author

Shai Raiten
Architect Sela
Israel
Shai Raiten is a VS ALM MVP, currently working for Sela Group as a senior ALM consultant and trainer specializing in Microsoft technologies, especially Team System and .NET. He consults for various enterprises in Israel, planning and analyzing load and performance problems using Team System, building Team System customizations, and adapting ALM processes for enterprises. Shai is known as one of the top Team System experts in Israel, and he conducts lectures and workshops for developers, QA, and enterprises that want to specialize in Team System.
