It's time for the first Kinect tutorial! In the upcoming blog posts, we'll see how to use the managed API provided by OpenNI and NITE in order to build our own Natural User Interface applications. OpenNI and NITE are two great libraries, offered by PrimeSense, which give us access to lots of cool stuff such as body tracking, gesture recognition and much more. Both of them provide .NET wrappers that can be used directly from C# applications! You can learn how to install these libraries by reading my previous blog post.
OpenNI comes with some interesting demos (SimpleRead.net, SimpleUser.net and UserTracker.net specifically) built using the managed OpenNI.net.dll library. Unfortunately, these demos target .NET 2.0 in order to remain fully compatible with the Mono platform. So, I decided to create new samples (or modify some of the existing ones) so that they run on .NET 4.0 and Windows Presentation Foundation (WPF).
The Power of WPF
WPF offers great advantages over WinForms in terms of user experience. Furthermore, WPF's System.Windows.Media namespace is far more powerful than WinForms' System.Drawing: WPF uses ImageSource instead of BitmapData. As a result, I had to rewrite much of the initial code to make it WPF-compliant.
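To give a concrete idea of the difference, here is a minimal sketch of how a raw pixel buffer can be wrapped in a WPF ImageSource via BitmapSource.Create; the buffer here is an empty placeholder, not actual camera data:

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;

int width = 640, height = 480;
byte[] pixels = new byte[width * height * 3]; // RGB24: 3 bytes per pixel
int stride = width * 3;                       // bytes per scanline

// In WinForms you would wrap the same bytes in a System.Drawing.Bitmap;
// in WPF they become an ImageSource that any Image control can display.
ImageSource source = BitmapSource.Create(
    width, height,      // pixel dimensions
    96, 96,             // DPI
    PixelFormats.Rgb24, // pixel format
    null,               // no palette needed for RGB
    pixels, stride);
```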
Accessing Kinect's Raw and Depth Image
The Kinect device comes with two cameras: a raw (RGB) camera and a depth camera, each at 640x480 resolution. In the raw image, each pixel's color corresponds to the RGB value of the real scene; in the depth image, each color corresponds to a different distance from the sensor. OpenNI lets us access both camera sources. Here is the raw image result:
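Under the hood, OpenNI exposes these streams through an ImageGenerator and a DepthGenerator. The sketch below is loosely based on the SimpleRead.net sample that ships with OpenNI; exact member names may differ slightly between OpenNI versions:

```csharp
using OpenNI;

// Initialize OpenNI from the XML configuration file.
Context context = new Context("SamplesConfig.xml");

// Locate the generators declared in the configuration.
ImageGenerator image = (ImageGenerator)context.FindExistingNode(NodeType.Image);
DepthGenerator depth = (DepthGenerator)context.FindExistingNode(NodeType.Depth);

// Block until new frames arrive, then grab their metadata.
context.WaitAndUpdateAll();
ImageMetaData imageMD = image.GetMetaData();
DepthMetaData depthMD = depth.GetMetaData();

// The depth map stores, per pixel, the distance from the sensor
// (in millimeters), while the image map stores RGB values.
```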
And here is the corresponding depth image result:
Wrapping Them All Together
Have a look at the demo project I created. Download it and read the following lines to find out how things work.
Ensure that OpenNI is properly installed in your Windows operating system.
Open Visual Studio and create a new WPF application.
Add a reference to OpenNI.net.dll, which is found under C:\Program Files\OpenNI\Bin.
Add an existing item and load SamplesConfig.xml into your project. SamplesConfig.xml is found under C:\Program Files\OpenNI\Data and provides all the necessary information about the sensor (available cameras, resolution, PrimeSense key). Replace the default XML file with something like the one I provided in my "how-to" post.
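For reference, the configuration typically looks roughly like the fragment below; the exact schema depends on your OpenNI version, and the license key shown is a placeholder, not a real key:

```xml
<OpenNI>
  <Licenses>
    <!-- Placeholder: use the PrimeSense key that ships with your installation. -->
    <License vendor="PrimeSense" key="YOUR-PRIMESENSE-KEY"/>
  </Licenses>
  <ProductionNodes>
    <Node type="Image">
      <Configuration>
        <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      </Configuration>
    </Node>
    <Node type="Depth">
      <Configuration>
        <MapOutputMode xRes="640" yRes="480" FPS="30"/>
      </Configuration>
    </Node>
  </ProductionNodes>
</OpenNI>
```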
Download my NuiSensor class and add it to your project.
The NuiSensor class uses OpenNI.net.dll internally in order to acquire the camera images. It exposes the following properties:
public ImageSource RawImageSource
public ImageSource DepthImageSource
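For reference, here is a stripped-down sketch of what such a class might look like internally. This is not the actual NuiSensor source; apart from the two public properties above, all names and details are my own assumptions, and the OpenNI member names are from memory:

```csharp
using System.Windows.Media;
using System.Windows.Media.Imaging;
using OpenNI;

public class NuiSensor
{
    private readonly Context _context;
    private readonly ImageGenerator _image;
    private readonly DepthGenerator _depth;

    public NuiSensor(string configXmlPath)
    {
        // The XML file tells OpenNI which production nodes to create.
        _context = new Context(configXmlPath);
        _image = (ImageGenerator)_context.FindExistingNode(NodeType.Image);
        _depth = (DepthGenerator)_context.FindExistingNode(NodeType.Depth);
    }

    public ImageSource RawImageSource
    {
        get
        {
            // Grab the latest RGB frame without blocking the UI thread.
            _context.WaitNoneUpdateAll();
            ImageMetaData md = _image.GetMetaData();

            // Wrap the unmanaged 24-bit RGB buffer in a WPF bitmap.
            return BitmapSource.Create(md.XRes, md.YRes, 96, 96,
                PixelFormats.Rgb24, null,
                md.ImageMapPtr, md.DataSize, md.XRes * 3);
        }
    }

    public ImageSource DepthImageSource
    {
        get
        {
            // Similar idea, after mapping the 16-bit depth values
            // (millimeters) to a displayable grayscale range.
            // Omitted here for brevity.
            return null;
        }
    }
}
```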
Navigate to MainWindow.xaml and add two Image controls. In the code-behind file, first create a new instance of NuiSensor, providing the SamplesConfig.xml path:
NuiSensor _sensor = new NuiSensor("SamplesConfig.xml");
Then, add an event handler for CompositionTarget.Rendering. This event is raised every time a frame needs to be redrawn (typically 60 times per second). In the handler, simply assign the proper NuiSensor properties and you are done:
imgRaw.Source = _sensor.RawImageSource;
imgDepth.Source = _sensor.DepthImageSource;
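Putting the pieces together, the code-behind might look like this; the window and control names are illustrative, matching the two Image controls added earlier:

```csharp
using System;
using System.Windows;
using System.Windows.Media;

public partial class MainWindow : Window
{
    // The path is relative to the build output folder,
    // so copy SamplesConfig.xml there.
    private readonly NuiSensor _sensor = new NuiSensor("SamplesConfig.xml");

    public MainWindow()
    {
        InitializeComponent();

        // Raised once per rendered frame (typically 60 times per second).
        CompositionTarget.Rendering += CompositionTarget_Rendering;
    }

    private void CompositionTarget_Rendering(object sender, EventArgs e)
    {
        // Pull the latest camera frames into the two Image controls.
        imgRaw.Source = _sensor.RawImageSource;
        imgDepth.Source = _sensor.DepthImageSource;
    }
}
```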
Download the demo. As you can see, I have also added a "toggle image visibility" button in order to reduce the window size.
Wish you happy Kinect programming :-).
I have been a student at Athens University of Economics and Business, Department of Informatics, since September 2007.
I mainly develop and design .NET applications in C#, ASP.NET and Silverlight, but I have also worked on J2EE, LAMP and C++.
Currently, I am a member of the Microsoft Student Partners team and I work as a freelancer for several employers, including the Institute for the Management of Information Systems and Vodafone Corporation. As a Student Partner, I have given many talks on Microsoft technologies (ASP.NET, Silverlight, Windows Phone etc.) to undergraduate and postgraduate students and developers.