
Kinect and WPF: Getting the raw and depth image using OpenNI

3 Dec 2013
Kinect and WPF: Getting the raw and depth image using OpenNI.

It's time for the first Kinect tutorial! In the upcoming blog posts, we'll see how to use the managed API provided by OpenNI and NITE to build our own Natural User Interface applications. OpenNI and NITE are two great libraries, offered by PrimeSense, which give us access to lots of cool stuff such as body tracking, gesture recognition and much more. Both of them provide .NET wrappers that can be used directly from C# applications! Learn how to install these libraries by reading my previous blog post.

OpenNI comes with some interesting demos (SimpleRead.net, SimpleUser.net and UserTracker.net specifically) built using the managed OpenNI.net.dll library. Unfortunately, these demos target .NET 2.0 in order to remain fully compatible with the Mono platform. So, I decided to create new samples (or modify some of the existing ones) so that they run on .NET 4.0 and Windows Presentation Foundation (WPF).

[Image: Kinect OpenNI UserTracker.net sample]

The Power of WPF

WPF offers great advantages over WinForms in terms of user experience. Furthermore, WPF's System.Windows.Media namespace is far more powerful than WinForms' System.Drawing: WPF uses WriteableBitmap and ImageSource instead of Bitmap and BitmapData. As a result, I had to rewrite much of the original code to make it work with WPF.
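To give an idea of what that looks like in practice, here is a small sketch (the helper name is mine, not part of the demo) that wraps a raw 24-bit pixel buffer coming from the sensor in a WPF ImageSource:

// Sketch: wrap a raw 24-bit RGB pixel buffer in a WPF ImageSource.
// Requires: using System.Windows.Media; using System.Windows.Media.Imaging;
private static ImageSource CreateImageSource(byte[] pixels, int width, int height)
{
    int stride = width * 3; // 3 bytes per pixel for Rgb24
    return BitmapSource.Create(width, height, 96, 96,
                               PixelFormats.Rgb24, null, pixels, stride);
}

A WriteableBitmap can be used instead when you want to keep one bitmap around and just overwrite its pixels on every frame.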

Requirements

To follow along, you'll need a Kinect sensor, the OpenNI and NITE libraries properly installed (see my previous blog post for installation instructions), and Visual Studio with .NET 4.0.

Accessing Kinect's Raw and Depth Image

The Kinect device comes with two cameras: a raw (RGB) one and a depth one, each with a 640x480 resolution. In the raw image, a pixel's color corresponds to the actual RGB value of that point in the real scene; in the depth image, a pixel's color corresponds to that point's distance from the sensor. OpenNI lets us access both camera sources. Here is the raw image result:

[Image: Kinect OpenNI raw image]

And here is the corresponding depth image result:

[Image: Kinect OpenNI depth image]
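Under the hood, OpenNI exposes these two streams through an ImageGenerator and a DepthGenerator node. The following rough sketch shows how the frames can be read with the OpenNI.net wrapper; class and method names follow the OpenNI 1.x managed API and may differ slightly between versions:

// Rough sketch using the OpenNI.net managed wrapper (OpenNI namespace).
Context context = new Context("SamplesConfig.xml");

ImageGenerator imageGenerator =
    context.FindExistingNode(NodeType.Image) as ImageGenerator;
DepthGenerator depthGenerator =
    context.FindExistingNode(NodeType.Depth) as DepthGenerator;

// Block until new frames are available, then grab their metadata.
context.WaitAndUpdateAll();

ImageMetaData imageMD = imageGenerator.GetMetaData();
DepthMetaData depthMD = depthGenerator.GetMetaData();

// imageMD and depthMD give access to the raw pixel and depth maps,
// which can then be copied into a WriteableBitmap or BitmapSource.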

Wrapping Them All Together

Have a look at the demo project I created. Download it and read the following lines to find out how things work.

Step 0

Ensure that OpenNI is properly installed in your Windows operating system.

Step 1

Open Visual Studio and create a new WPF application. I named it "KinectWPF".

Step 2

Add a reference to OpenNI.net.dll, which is found under C:\Program Files\OpenNI\Bin.

Step 3

Add SamplesConfig.xml to your project as an existing item. SamplesConfig.xml is found under C:\Program Files\OpenNI\Data and provides all the necessary information about the sensor (available cameras, resolution, PrimeSense license key). Replace the default XML file with something like the one I provided in my "how-to" post.
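For reference, the production-node part of the file looks roughly like this (the license key below is only a placeholder; use the one from my installation post):

<OpenNI>
  <Licenses>
    <!-- Placeholder: put the PrimeSense license key from the installation post here. -->
    <License vendor="PrimeSense" key="INSERT-YOUR-KEY-HERE"/>
  </Licenses>
  <ProductionNodes>
    <Node type="Image">
      <Configuration>
        <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      </Configuration>
    </Node>
    <Node type="Depth">
      <Configuration>
        <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      </Configuration>
    </Node>
  </ProductionNodes>
</OpenNI>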

Step 4

Download my NuiSensor class and add it to your project. The NuiSensor class uses OpenNI.net.dll internally in order to acquire the camera images. You will use the following properties:

public ImageSource RawImageSource

and:

public ImageSource DepthImageSource

Step 5

Navigate to MainWindow.xaml and add two Image controls. In the code-behind file, first create a new instance of NuiSensor, providing the path to SamplesConfig.xml:

NuiSensor _sensor = new NuiSensor("SamplesConfig.xml");

Then, add an event handler for the CompositionTarget.Rendering event. CompositionTarget.Rendering is raised every time WPF is about to draw a frame (typically around 60 times per second, matching the display refresh rate). In the handler, you simply read the proper NuiSensor properties and you are done:

imgRaw.Source = _sensor.RawImageSource;
imgDepth.Source = _sensor.DepthImageSource;
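Putting it all together, the code-behind might look roughly like this (a sketch; it assumes MainWindow.xaml declares two Image controls named imgRaw and imgDepth):

// MainWindow.xaml.cs - sketch of the complete wiring.
using System;
using System.Windows;
using System.Windows.Media;

public partial class MainWindow : Window
{
    private readonly NuiSensor _sensor = new NuiSensor("SamplesConfig.xml");

    public MainWindow()
    {
        InitializeComponent();

        // Rendering fires once per frame, so the images stay in sync with the UI.
        CompositionTarget.Rendering += CompositionTarget_Rendering;
    }

    private void CompositionTarget_Rendering(object sender, EventArgs e)
    {
        // Pull the latest camera frames from the sensor wrapper.
        imgRaw.Source = _sensor.RawImageSource;
        imgDepth.Source = _sensor.DepthImageSource;
    }
}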

Download the demo. As you can see, I have also added a "toggle image visibility" button in order to reduce the window size.

Wish you happy Kinect programming :-). 

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
