Virtual Wall with Kinect Toolbox

24 Jun 2014

Introduction

In this short article, I am going to show how to create a Virtual Wall using the Kinect sensor and the Kinect Toolbox. The Kinect Toolbox is a framework for developing with the Kinect for Windows SDK (1.7).

The Virtual Wall is a simple but effective algorithm: we define a spatial reference (a known distance) and use it to remove the image background. It is typically used to separate specific parts of the user's body from the rest. In this example, we are going to separate the user's hands from the rest of the body. To do that, we define a masking function over the depth map:

DM'(d) = d, if d < t
DM'(d) = 0, otherwise

where DM is the depth map, d is a pixel's value in the map, and t is the chosen threshold that defines the position of the Virtual Wall relative to the Kinect depth camera.
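As a standalone illustration (not part of the Toolbox code), the masking function can be applied to an array of raw depth values like this; the array contents and the 1000 mm threshold are made-up example values:

```csharp
using System;
using System.Linq;

class VirtualWallMask
{
    // Keep depths in front of the wall (d < t); zero out everything behind it.
    public static int[] Apply(int[] depths, int t) =>
        depths.Select(d => d < t ? d : 0).ToArray();

    static void Main()
    {
        int[] depths = { 800, 1200, 950, 3000 };
        int[] masked = Apply(depths, 1000); // wall at 1000 mm
        Console.WriteLine(string.Join(",", masked)); // 800,0,950,0
    }
}
```

The real code below does the same comparison per pixel while converting the depth frame to a displayable image.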

Pre-Requisites

  • Visual Studio 2012
  • .NET 4.5
  • Kinect.Toolbox 1.3
  • Kinect for Windows SDK (1.7)

Code

In the Kinect.Toolbox we have the class DepthStreamManager, which processes each depth frame coming from the Kinect camera. Two methods in this class matter here. First, the method Update, which receives a DepthImageFrame as a parameter. Second, the method ConvertDepthFrame, which sets the value of every pixel of the output image. In that method we have the variable realDepth, which holds the distance between the camera and the user. What we are going to do is create a variable that will be our threshold. If the depth pixel value is less than our threshold, we draw it on the screen; otherwise we do not.
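Before looking at the method, it helps to see how a packed SDK 1.x depth sample is decoded. The snippet below is a minimal sketch (the helper names PlayerIndex and RealDepth are mine, for illustration only): the low 3 bits carry the player index and the remaining bits carry the depth, which is why the code below uses `& 0x07` and `>> 3`.

```csharp
using System;

class DepthUnpackDemo
{
    // Hypothetical helper names; the Toolbox does this inline.
    public static int PlayerIndex(short packed) => packed & 0x07;
    public static int RealDepth(short packed) => packed >> 3;

    static void Main()
    {
        // Pack a sample by hand: depth 1200, player index 1.
        short packed = (short)((1200 << 3) | 1);
        Console.WriteLine(PlayerIndex(packed)); // 1
        Console.WriteLine(RealDepth(packed));   // 1200
    }
}
```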

Here is the original Kinect.Toolbox code:
void ConvertDepthFrame(short[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        // The low 3 bits hold the player index; the rest hold the depth value.
        int user = depthFrame16[i16] & 0x07;
        int realDepth = depthFrame16[i16] >> 3;

        // Map the depth to a grayscale intensity (closer pixels are brighter).
        byte intensity = (byte)(255 - (255 * realDepth / 0x1fff));

        // Opaque black pixel by default (BGRA layout).
        depthFrame32[i32] = 0;
        depthFrame32[i32 + 1] = 0;
        depthFrame32[i32 + 2] = 0;
        depthFrame32[i32 + 3] = 255;

        // Tint the pixel according to the player index.
        switch (user)
        {
            case 0: // no player
                depthFrame32[i32] = (byte)(intensity / 8);
                depthFrame32[i32 + 1] = (byte)(intensity / 8);
                depthFrame32[i32 + 2] = (byte)(intensity / 8);
                break;
            case 1:
                depthFrame32[i32] = intensity;
                break;
            case 2:
                depthFrame32[i32 + 1] = intensity;
                break;
            case 3:
                depthFrame32[i32 + 2] = intensity;
                break;
            case 4:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 1] = intensity;
                break;
            case 5:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
            case 6:
                depthFrame32[i32 + 1] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
            case 7:
                depthFrame32[i32] = intensity;
                depthFrame32[i32 + 1] = intensity;
                depthFrame32[i32 + 2] = intensity;
                break;
        }
    }
}

And here is the same method, now with our threshold:

void ConvertDepthFrame(short[] depthFrame16)
{
    for (int i16 = 0, i32 = 0; i16 < depthFrame16.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        int user = depthFrame16[i16] & 0x07;
        int realDepth = depthFrame16[i16] >> 3;

        byte intensity = (byte)(255 - (255 * realDepth / 0x1fff));

        // Opaque black by default: pixels behind the Virtual Wall stay black.
        depthFrame32[i32] = 0;
        depthFrame32[i32 + 1] = 0;
        depthFrame32[i32 + 2] = 0;
        depthFrame32[i32 + 3] = 255;

        // Only pixels in front of the Virtual Wall are drawn.
        if (realDepth < this.GThreshold)
        {
            switch (user)
            {
                case 0: // no player
                    depthFrame32[i32] = (byte)(intensity / 8);
                    depthFrame32[i32 + 1] = (byte)(intensity / 8);
                    depthFrame32[i32 + 2] = (byte)(intensity / 8);
                    break;
                case 1:
                    depthFrame32[i32] = intensity;
                    break;
                case 2:
                    depthFrame32[i32 + 1] = intensity;
                    break;
                case 3:
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 4:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 1] = intensity;
                    break;
                case 5:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 6:
                    depthFrame32[i32 + 1] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
                case 7:
                    depthFrame32[i32] = intensity;
                    depthFrame32[i32 + 1] = intensity;
                    depthFrame32[i32 + 2] = intensity;
                    break;
            }
        }
    }
}
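The modified manager can be wired up roughly as follows. This is a sketch under two assumptions: GThreshold is a public int property you add to DepthStreamManager along with the change above (its units are the raw depth values, millimeters in the SDK 1.x format), and the sensor setup follows the standard Kinect for Windows SDK 1.7 event pattern. It needs a connected sensor, so it is not runnable as-is.

```csharp
using Microsoft.Kinect;
using Kinect.Toolbox;

class VirtualWallApp
{
    readonly DepthStreamManager depthManager = new DepthStreamManager();

    void Start()
    {
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.DepthFrameReady += OnDepthFrameReady;
        depthManager.GThreshold = 1000; // wall roughly 1 m from the camera
        sensor.Start();
    }

    void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame frame = e.OpenDepthImageFrame())
        {
            if (frame != null)
                depthManager.Update(frame); // calls ConvertDepthFrame internally
        }
    }
}
```

Tune GThreshold to where you want the wall: values slightly beyond the user's outstretched hands keep the hands visible and black out the torso.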

With this code change we are able to keep only the body parts we want. Here is the result:

Original Depth Image

Depth Image With Virtual Wall

Conclusions

In this article, we used the Kinect.Toolbox to create a Virtual Wall with the Kinect depth camera. Personally, I have used this method to build a sign recognition application, but it can be applied in many other areas. This is my first CodeProject article; I hope it helps someone.

 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Master Degree student in Computer Science
Federal University of São Carlos - UFSCar

Article Copyright 2014 by Diego Gonçalves Dias