
EMGU Multiple Face Recognition using PCA and Parallel Optimisation

26 May 2014 · CPOL · 26 min read
Using EMGU to perform Principal Component Analysis (PCA), multiple face recognition is achieved. Using the .NET Parallel toolbox, real-time analysis and optimisation is introduced in a user-friendly application.

Introduction

This article is designed to be the first of several explaining the use of the EMGU image processing wrapper. For more information on the EMGU wrapper, please visit the EMGU website. If you are new to this wrapper, see the Creating Your First EMGU Image Processing Project article. When you first open the solution, you may see three warnings for references that cannot be found: expand the References folder within the Solution Explorer, delete the three entries with yellow warning icons, and add fresh references to the assemblies located within the Lib folder. If you have used this wrapper before, please feel free to browse other examples on the EMGU Code Reference page.

Face recognition has always been a popular subject in image processing, and this article builds upon the work by Sergio Andrés Gutiérrez Rojas and his original article here[^]. The reason face recognition is so popular is not only its real-world applications but also the common use of principal component analysis (PCA). PCA is an ideal method for recognising statistical patterns in data, and part of its appeal is that a user can apply the method easily and see whether it is working without needing to know too much about how the process works.

This article looks into PCA and its application in more detail, while also discussing the use of parallel processing and its future in image analysis. The source code makes some key improvements over the original, both in usability and in the way it trains, and introduces a parallel architecture for multiple face recognition.

Updates

The newest version, V2.4.9, uses the updated CascadeClassifier class for acquiring the face position within a frame, and the new FaceRecognizer class that allows Eigen, Fisher and Local Binary Pattern Histogram (LBPH) recognisers to be applied. There is a bug in the FaceRecognizer class that affects the recognition of unknown individuals; this has been addressed.
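The detection call itself is not reproduced later in the article, so here is a minimal sketch of how the CascadeClassifier is typically driven in EMGU 2.4.x. The cascade file name and the detection parameters are examples only and are not necessarily those used in the project's source.

C#
// Illustrative sketch only: the cascade file and parameter values are assumptions
CascadeClassifier Face = new CascadeClassifier(Application.StartupPath + "\\haarcascade_frontalface_default.xml");

Image<Gray, byte> gray_frame = currentFrame.Convert<Gray, byte>();
gray_frame._EqualizeHist();

//Returns one rectangle per face found in the frame
Rectangle[] facesDetected = Face.DetectMultiScale(gray_frame, 1.1, 10, new Size(20, 20), Size.Empty);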

Source Code Requirements

The program is designed to use a web camera, so one is essential. While the program should execute on single-core machines, be aware that on such machines performance may be better using the sequential frame-processing method; see the "Improving the Detection Performance" section for more details. The x86 source will also run on x64 machines; however, the x64 source is only for x64 architectures.

Face_Recognition/Main1.jpg

How the EMGU FaceRecognizer Works

The new FaceRecognizer is a common base class that allows the Eigen, Fisher, and LBPH classifiers to be used interchangeably; it combines the method calls the classifiers share. The constructor for each classifier type is as follows:

  • FaceRecognizer recognizer = new EigenFaceRecognizer(num_components, threshold);
  • FaceRecognizer recognizer = new FisherFaceRecognizer(num_components, threshold);
  • FaceRecognizer recognizer = new LBPHFaceRecognizer(radius, neighbors, grid_x, grid_y, threshold);

Each constructor is described below. Note that the threshold for classification behaves differently with the Eigen recogniser than it does for the Fisher and LBPH classifiers.
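Because all three classifiers derive from the same FaceRecognizer base, the rest of the code can be written against the shared calls. A minimal sketch follows; it is not taken from the project's source, and the training_images, training_labels and test_face variables are assumed to exist elsewhere.

C#
//Any of the three constructors can be swapped in here
FaceRecognizer recognizer = new EigenFaceRecognizer(80, double.PositiveInfinity);
//FaceRecognizer recognizer = new FisherFaceRecognizer(0, 3500);
//FaceRecognizer recognizer = new LBPHFaceRecognizer(1, 8, 8, 8, 100);

//The same Train/Predict calls work for all three recognisers
recognizer.Train(training_images, training_labels);                    //Image<Gray, byte>[] and int[]
FaceRecognizer.PredictionResult result = recognizer.Predict(test_face);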

The Eigen Classifier

The Eigen recogniser takes two variables. The first is the number of components kept for the Principal Component Analysis. There is no rule for how many components should be kept for good reconstruction; it depends on your input data, so experiment with the number. The OpenCV documentation suggests that keeping 80 components should almost always be sufficient. The second variable is designed to be a prediction threshold; this variable contains the bug, as any value above it is considered an unknown. For the Fisher and LBPH recognisers this is how unknowns are classified; with the Eigen recogniser, however, we must use the returned distance to provide our own test for unknowns, since the larger the value returned, the closer the match.

To allow us to set a threshold rule later, we set the threshold value to positive infinity, allowing all faces to be recognized:

C#
FaceRecognizer recognizer = new EigenFaceRecognizer(80, double.PositiveInfinity);

We then examine the Eigen distance returned after recognition: if it is above the threshold we set in the form, the face is recognised; if not, it is an unknown.

C#
public string Recognise(Image<Gray, Byte> Input_image, int Eigen_Thresh = -1)
{
    if (_IsTrained)
    {
        FaceRecognizer.PredictionResult ER = recognizer.Predict(Input_image);

.....
        //Only apply the post-prediction threshold rule when using an Eigen recogniser,
        //since the Fisher and LBPH thresholds set in the constructor already work correctly
        switch (Recognizer_Type)
        {
            case ("EMGU.CV.EigenFaceRecognizer"):
                    if (Eigen_Distance > Eigen_threshold) return Eigen_label;
                    else return "Unknown";
            case ("EMGU.CV.LBPHFaceRecognizer"):
            case ("EMGU.CV.FisherFaceRecognizer"):
            default:
                    return Eigen_label; //the threshold set in training controls unknowns
        }
    }
......
}

The Fisher Classifier

The Fisher recogniser takes two variables, as with the Eigen constructor. The first is the number of components kept for the Linear Discriminant Analysis with the Fisherfaces criterion. It is useful to keep all components, which means setting this to the number of your training inputs. If you leave it at the default (0), set it to a value less than 0, or set it greater than the number of your training inputs, it will be set to the correct number (your training inputs - 1) automatically. The second variable is the threshold value for unknowns: if the resultant Eigen distance is above this value, the Predict() method will return a -1 value indicating an unknown. This method works, and the threshold is set to a default of 3500; change this to constrain how accurate you want the results to be. If you change the value in the constructor, the recogniser will need retraining.

C#
FaceRecognizer recognizer = new FisherFaceRecognizer(0, 3500);//4000

As with the Eigen recogniser, you can introduce your own rule, as demonstrated below. This is not activated in this version (2.4.9) of the code, although the example is provided in case you wish to add additional form controls to play with the threshold settings. For this rule to work, the threshold in the constructor must be set to double.PositiveInfinity.

C#
//NOTE: This is not within V2.4.9 of the code....
public string Recognise(Image<Gray, Byte> Input_image, int Eigen_Thresh = -1)
{
    if (_IsTrained)
    {
        FaceRecognizer.PredictionResult ER = recognizer.Predict(Input_image);

.....
            //Only apply the post-prediction threshold rule when using an Eigen recogniser,
            //since the Fisher and LBPH thresholds set in the constructor already work correctly
            switch (Recognizer_Type)
            {
                case ("EMGU.CV.EigenFaceRecognizer"):
                        if (Eigen_Distance > Eigen_threshold) return Eigen_label;
                        else return "Unknown";
                case ("EMGU.CV.FisherFaceRecognizer"):
                        //Note how the Eigen Distance
                        //must be below the threshold, unlike the Eigen case above
                        if (Eigen_Distance < Fisher_threshold) return Eigen_label;
                        else return "Unknown";
                case ("EMGU.CV.LBPHFaceRecognizer"):
                default:
                        return Eigen_label; //the threshold set in training controls unknowns
            }
        }
......
    }

The Local Binary Pattern Histogram (LBPH) Classifier

The LBPH recogniser, unlike the other two, takes five variables:

  • radius – The radius used for building the Circular Local Binary Pattern.
  • neighbors – The number of sample points to build a Circular Local Binary Pattern from. A value suggested by the OpenCV documentation is 8 sample points. Keep in mind: the more sample points you include, the higher the computational cost.
  • grid_x – The number of cells in the horizontal direction, 8 is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector.
  • grid_y – The number of cells in the vertical direction, 8 is a common value used in publications. The more cells, the finer the grid, the higher the dimensionality of the resulting feature vector.
  • threshold – The threshold applied in the prediction. If the distance to the nearest neighbour is larger than the threshold, this method returns -1.

The final variable, the threshold, works as it does in the Fisher method: if the Eigen distance calculated is above this value, the Predict() method will return a -1 value indicating an unknown. This method works, and the threshold is set to a default of 100; change this to constrain how accurate you want the results to be. If you change the value in the constructor, the recogniser will need retraining.

C#
FaceRecognizer recognizer = new LBPHFaceRecognizer(1, 8, 8, 8, 100);//50

As with the Eigen recogniser, you can introduce your own rule, as demonstrated below. This is not activated in this version (2.4.9) of the code, although the example is provided in case you wish to add additional form controls to play with the threshold settings. For this rule to work, the threshold in the constructor must be set to double.PositiveInfinity.

C#
//NOTE: This is not within V2.4.9 of the code....
public string Recognise(Image<Gray, Byte> Input_image, int Eigen_Thresh = -1)
{
    if (_IsTrained)
    {
        FaceRecognizer.PredictionResult ER = recognizer.Predict(Input_image);

.....
            //Only apply the post-prediction threshold rule when using an Eigen recogniser,
            //since the Fisher and LBPH thresholds set in the constructor already work correctly
            switch (Recognizer_Type)
            {
                case ("EMGU.CV.EigenFaceRecognizer"):
                        if (Eigen_Distance > Eigen_threshold) return Eigen_label;
                        else return "Unknown";
                case ("EMGU.CV.LBPHFaceRecognizer"):
                        //Note how the Eigen Distance must be
                        //below the threshold, unlike the Eigen case above
                        if (Eigen_Distance < LBPH_threshold) return Eigen_label;
                        else return "Unknown";
                case ("EMGU.CV.FisherFaceRecognizer"):
                default:
                        return Eigen_label; //the threshold set in training controls unknowns
            }
    }
......
}

Principal Component Analysis (PCA)

Since this article was originally written around the EigenObjectRecognizer class applied to face recognition, the focus will remain on the new EigenFaceRecognizer, which applies PCA. The common FaceRecognizer base also makes it easy to apply the FisherFaceRecognizer and the LBPHFaceRecognizer alongside it.

The FisherFaceRecognizer applies Linear Discriminant Analysis (LDA), derived by R.A. Fisher. LDA finds a subspace representation of a set of face images, and the resulting basis vectors defining that space are known as Fisherfaces. This can yield preferable results to PCA-based analysis, favouring classification rather than representation. See this ScholarPedia article for more information.

The LBPHFaceRecognizer uses local binary patterns (LBP) to create a feature vector for use in a support vector machine or some other machine-learning classifier. LBP unifies the traditionally divergent statistical and structural models of texture analysis. LBP is very robust in real-world applications because of the way it deals with monotonic grey-scale changes caused by variations in illumination. See this ScholarPedia article for more information.

The EigenFaceRecognizer class applies PCA to each image; the result is an array of Eigenvalues that a neural network can be trained to recognise. PCA is a commonly used method of object recognition as its results, when used properly, can be fairly accurate and resilient to noise. The way PCA is applied can vary at different stages, so what is demonstrated here is one clear method that can be followed. It is up to individuals to experiment in finding the best method for producing accurate results from PCA.

To perform PCA, several steps are undertaken:

  • Stage 1: Subtract the Mean of the data from each variable (our adjusted data).
  • Stage 2: Calculate and form a covariance Matrix.
  • Stage 3: Calculate Eigenvectors and Eigenvalues from the covariance Matrix.
  • Stage 4: Choose a Feature Vector (a fancy name for a matrix of vectors).
  • Stage 5: Multiply the transposed Feature Vectors by the transposed adjusted data.

STAGE 1: Mean Subtraction

This stage is fairly simple and makes the calculation of our covariance matrix a little simpler later. It is not the subtraction of the overall mean from each of our values: for covariance, we need at least two dimensions of data. It is in fact the subtraction of the mean of each row from each element in that row.

(Alternatively, the mean of each column could be subtracted from each element in that column; however, this would change the way we calculate the covariance matrix.)
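As a concrete illustration (this sketch is not part of the project's source), the row {2, 4, 6} has a mean of 4 and becomes {-2, 0, 2} after the subtraction:

C#
//Illustrative only: subtract the mean of each row from every element in that row
double[,] data = { { 2, 4, 6 }, { 1, 3, 5 } };
int rows = data.GetLength(0), cols = data.GetLength(1);
double[,] adjusted = new double[rows, cols];

for (int r = 0; r < rows; r++)
{
    double mean = 0;
    for (int c = 0; c < cols; c++) mean += data[r, c];
    mean /= cols;                                     //mean of this row

    for (int c = 0; c < cols; c++)
        adjusted[r, c] = data[r, c] - mean;           //e.g. {2,4,6} becomes {-2,0,2}
}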

Stage 2: Covariance Matrix

The basic Covariance equation for two dimensional data is:

cov(X, Y) = Σ (xᵢ − x̄)(yᵢ − ȳ) / (n − 1)

This is similar to the formula for variance; however, the change in x is measured with respect to the change in y, rather than solely the change of x with respect to x. In this equation, x represents the pixel value, x̄ is the mean of all x values, and n is the total number of values.

The covariance matrix formed from the image data represents how much the dimensions vary from the mean with respect to each other. The definition of a covariance matrix is:

C = (c_ij), where c_ij = cov(Dim_i, Dim_j) for an n x n matrix C

Now the easiest way to explain this is by an example, the easiest of which is a 3 x 3 matrix:

C = | cov(x,x)  cov(x,y)  cov(x,z) |
    | cov(y,x)  cov(y,y)  cov(y,z) |
    | cov(z,x)  cov(z,y)  cov(z,z) |

With larger matrices this becomes more complicated, and the use of computational algorithms becomes essential.
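Continuing the illustrative sketch from Stage 1 (again, not part of the project's source), the covariance matrix can be computed directly from the mean-adjusted rows:

C#
//cov(i, j) between row i and row j of the mean-adjusted data
double[,] covariance = new double[rows, rows];
for (int i = 0; i < rows; i++)
{
    for (int j = 0; j < rows; j++)
    {
        double sum = 0;
        for (int c = 0; c < cols; c++)
            sum += adjusted[i, c] * adjusted[j, c];
        covariance[i, j] = sum / (cols - 1);          //sample covariance
    }
}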

Stage 3: Eigenvectors and Eigenvalues

Eigenvalues arise from matrix multiplication, but as a special case. An Eigenvector is a vector which, when multiplied by the covariance matrix, produces a scaled copy of itself; the scale factor is the Eigenvalue. This makes the covariance matrix the equivalent of a transformation matrix. It is easiest to show with an example:

Face_Recognition/EQ4.jpg

Eigenvectors can be scaled, so ½× or 2× the vector will still produce the same kind of result. A vector describes a direction, and all scaling does is change its length, not that direction.

Face_Recognition/EQ5.jpg

Eigenvectors are usually scaled to have a length of 1:

Face_Recognition/EQ6.jpg

Thankfully, finding these special Eigenvectors is done for you and will not be explained here; however, there are several tutorials available on the web that explain the computation.

The Eigenvalue is closely related to the Eigenvector used: it is the amount by which the original vector was scaled. In the example, the Eigenvalue is 4.
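As a standard worked illustration (not necessarily the article's original figure): multiplying the matrix with rows (2, 3) and (2, 1) by the vector (3, 2) gives (12, 8), which is exactly 4 x (3, 2). The vector (3, 2) is therefore an Eigenvector of that matrix and 4 is its Eigenvalue; scaling the vector to (6, 4) still gives a result of 4 times the vector, namely (24, 16).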

Stage 4: Feature Vectors

Now, usually the results for Eigenvalues and Eigenvectors are not as clean as in the example above. In most cases, the results provided are scaled to a length of 1. So here are some example values calculated using Matlab:

Face_Recognition/EQ7.jpg

Once the Eigenvectors are found from the covariance matrix, the next step is to order them by Eigenvalue, highest to lowest. This gives the components in order of significance. Here, the data can be compressed and the weaker vectors removed, producing a lossy compression method in which the data lost is deemed insignificant.

Face_Recognition/EQ8.jpg
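A small sketch of this ordering step (illustrative only, not the project's code; the eigen-decomposition itself is assumed to have been done elsewhere):

C#
//Needs: using System; using System.Collections.Generic; using System.Linq;
var eigenPairs = new List<Tuple<double, double[]>>();   //(eigenvalue, eigenvector)
//... fill eigenPairs from your eigen-decomposition ...

eigenPairs.Sort((a, b) => b.Item1.CompareTo(a.Item1));  //largest eigenvalue first
int k = 80;                                             //e.g. the 80 components the EigenFaceRecognizer keeps
double[][] featureVector = eigenPairs.Take(k).Select(p => p.Item2).ToArray();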

Stage 5: Transposition

The final stage in PCA is to take the transpose of the feature vector matrix and multiply it on the left of the transposed, mean-adjusted data set (the adjusted data set from Stage 1, where the mean was subtracted from the data).
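Written symbolically (a restatement of the sentence above, not a formula taken from the original article): FinalData = FeatureVector^T x AdjustedData^T, so each column of FinalData is one mean-adjusted sample re-expressed in terms of the chosen principal components.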

The EigenFaceRecognizer class performs all of this and then feeds the transposed data as a training set into a neural network. When it is passed an image to recognise, it performs PCA and compares the generated Eigenvalues and Eigenvectors to those from the training set; the neural network then produces a match if one has been found, or a negative match if not. There is a little more to it than this, but the use of neural networks is a complex subject and is not the object of this article.

The Source Code

Training the 'FaceRecognizer'

Training of the FaceRecognizer is consistent between all three recogniser types. In the code, this is done at the start of the program, using the EigenFaceRecognizer as the default. In this version, v2.4.9, the FaceRecognizer type can be selected through the 'Recogniser Type' menu of the main form; any training performed after training data has been added or created uses this selection. From v2.4, the ability to save the recogniser for export to other applications has been available. It is important to note, however, that you will need the original training data to add additional faces.

Face_Recognition/Training.jpg

The training form allows a face to be recognised and added individually; as the program is designed to run from a web cam, the faces are acquired in the same way. A feature to acquire 10 successful face detections and add them all, or individual ones, to the training data has been included. This increases the collection of training data, and the number of images acquired can be adjusted in the Variables region of Training Form.cs: increase or decrease num_faces_to_aquire to any preferred value.

C#
#region Variables            
....
    //For acquiring 10 images in a row
    List<Image<Gray, byte>> resultImages = new List<Image<Gray, byte>>();
    int results_list_pos = 0;

    int num_faces_to_aquire = 10;

    bool RECORD = false;
....
#endregion

A Classifier_Train class is included. It has two constructors: the default takes the standard folder path of Application.StartupPath + "\\TrainedFaces", which is also the default save location of the training data. If you wish to have different sets of training data, the second constructor takes a string containing the training folder. The program only makes use of the default constructor; the other is included to allow for development. The class's sole purpose is to make the Form code more readable. To alter the default path, the following functions must be corrected:

C#
//Forms
private bool save_training_data(Image face_data) //Training_Form.cs*
private void Delete_Data_BTN_Click(object sender, EventArgs e) //Training_Form.cs*

//Classes
public Classifier_Train() //Classifier_Train.cs
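For orientation, here is a hypothetical usage sketch; the custom folder path is an example only, and the member names follow those used elsewhere in this article.

C#
//Default constructor: trains from Application.StartupPath + "\\TrainedFaces"
Classifier_Train Eigen_Recog = new Classifier_Train();

//Alternative constructor: train from a different folder (example path only)
//Classifier_Train Eigen_Recog = new Classifier_Train(Application.StartupPath + "\\My_Other_Training_Set");

if (Eigen_Recog.IsTrained)
{
    string name = Eigen_Recog.Recognise(result);   //result: a 100x100 grey, equalised face image
}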

Storing of the Training Data

The training data's default location is the TrainedFaces folder within the application path. It holds a single XML file that contains tags for the name of the person and a file name for each training image. The XML file has the following structure:

XML
<Faces_For_Training>
    <FACE>
        <NAME>NAME</NAME>
        <FILE>face_NAME_2057798247.jpg</FILE>
    </FACE>
</Faces_For_Training>

This structure can easily be changed to work with extra data or another layout. The following functions must be adjusted to accommodate the extra information, with extra variables added where required.

C#
//Forms
private bool save_training_data(Image face_data) //Training_Form.cs*
//Classes
private bool LoadTrainingData(string Folder_loacation) //Classifier_Train.cs

Each image is saved using a random number so that unique file identifiers can be generated. This prevents images from being overwritten and allows several images for one individual to be acquired and stored with no problems.

C#
Random rand = new Random();
bool file_create = true;
string facename = "face_" + NAME_PERSON.Text + "_" +    rand.Next().ToString() + ".jpg";
while (file_create)
{
    if (!File.Exists(Application.StartupPath + "/TrainedFaces/" + facename))
    {
        file_create = false;
    }
    else
    {
    facename = "face_" + NAME_PERSON.Text + "_" + rand.Next().ToString() + ".jpg";
    }
}

The Training form allows data to be added to the training set; it has been noted that this process can be slow. While a quicker method would be to load and write all the data at open and close respectively, this has not been included. If such an approach were taken, memory management would have to be carefully considered so that the number of training images does not cause memory problems.

A JPEG encoder is used to store the images; however, this could be changed to a lossless encoder to prevent any data loss. See the following functions within the Training_Form.cs* file:

C#
//Saving The Data
private bool save_training_data(Image face_data)

private ImageCodecInfo GetEncoder(ImageFormat format)
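As a rough sketch of the lossless alternative (not included in the project; face_data and facename follow the names used in save_training_data, and the file extension built into facename would also need changing from .jpg):

C#
//Save the face image without JPEG compression losses
face_data.Save(Application.StartupPath + "/TrainedFaces/" + facename,
               System.Drawing.Imaging.ImageFormat.Png);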

Saving and Loading the Trained EigenObjectRecognizer

To save and load the different FaceRecognizers, simply use the 'File > Recogniser >' save/load options. The benefit is that the recogniser no longer requires training every time the program is loaded; it can simply read in the previously trained data. v2.4 keeps compatibility with v2.3 by trying to read the TrainedFaces folder and retrain the recogniser upon startup.

Saving has changed from the previous version, as the FaceRecognizer class includes its own save/load methods. The save method only saves the data relevant to the recogniser type, and there is no built-in way of determining the recogniser type from the saved data. To enable this, three file extensions are used; these extensions are examined when loading the data to determine the recogniser type.

  • EFR - EigenFaceRecognizer
  • FFR - FisherFaceRecognizer
  • LBPH - LBPHFaceRecognizer

It is important to note that the recogniser no longer has a string array to contain the names; instead, it produces an integer which can then be used to classify an individual. As such, a list is used in training and prediction to store each individual's name, and the integer returned by the FaceRecognizer class is used to read the name from that position in the list. Similarly, saving the recogniser does not save the list of names, so an additional 'Labels.xml' file is saved with the recogniser to store this data.

C#
recognizer.Save(filename);

//save label data as this isn't saved with the network
string direct = Path.GetDirectoryName(filename);
FileStream Label_Data = File.OpenWrite(direct + "/Labels.xml");
using (XmlWriter writer = XmlWriter.Create(Label_Data))
{
    writer.WriteStartDocument();
    writer.WriteStartElement("Labels_For_Recognizer_sequential");
    for (int i = 0; i < Names_List.Count; i++)
    {
        writer.WriteStartElement("LABEL");
        writer.WriteElementString("POS", i.ToString());
        writer.WriteElementString("NAME", Names_List[i]);
        writer.WriteEndElement();
    }

    writer.WriteEndElement();
    writer.WriteEndDocument();
}
Label_Data.Close();

Loading the FaceRecognizer is achieved using the built-in Load method. The file extension of the recogniser file is used to determine the constructor required for the FaceRecognizer; in your own code this will not be needed, as you will probably only use one recogniser type. In addition to the recogniser, the 'Labels.xml' file is loaded and the list containing the string representation of the recogniser's integer outputs is populated. Do not worry too much though: by default, your data within the TrainedFaces folder will still be loaded when you restart the program.

C#
public void Load_Eigen_Recogniser(string filename)
{
    //Lets get the recogniser type from the file extension
    string ext = Path.GetExtension(filename);
    switch (ext)
    {
        case (".LBPH"):
            Recognizer_Type = "EMGU.CV.LBPHFaceRecognizer";
            recognizer = new LBPHFaceRecognizer(1, 8, 8, 8, 100);//50
            break;
        case (".FFR"):
            Recognizer_Type = "EMGU.CV.FisherFaceRecognizer";
            recognizer = new FisherFaceRecognizer(0, 3500);//4000
            break;
        case (".EFR"):
            Recognizer_Type = "EMGU.CV.EigenFaceRecognizer";
            recognizer = new EigenFaceRecognizer(80, double.PositiveInfinity);
            break;
    }

    //introduce error checking
    recognizer.Load(filename);

    //Now load the labels
    string direct = Path.GetDirectoryName(filename);
    Names_List.Clear();
    if (File.Exists(direct + "/Labels.xml"))
    {
        FileStream filestream = File.OpenRead(direct + "/Labels.xml");
        long filelength = filestream.Length;
        byte[] xmlBytes = new byte[filelength];
        filestream.Read(xmlBytes, 0, (int)filelength);
        filestream.Close();

        MemoryStream xmlStream = new MemoryStream(xmlBytes);

        using (XmlReader xmlreader = XmlTextReader.Create(xmlStream))
        {
            while (xmlreader.Read())
            {
                if (xmlreader.IsStartElement())
                {
                    switch (xmlreader.Name)
                    {
                        case "NAME":
                            if (xmlreader.Read())
                            {
                                Names_List.Add(xmlreader.Value.Trim());
                            }
                            break;
                    }
                }
            }
        }
        ContTrain = NumLabels;
    }
    _IsTrained = true;
}

If you adjust the file extensions, loading of the data will break unless you also correct the loading portion of the code.

Improving FaceRecognizer Accuracy

You will notice that if you run the program without alterations, train it on yourself, and then introduce another, untrained face, it will be recognised as you. The methods to improve the accuracy of the FaceRecognizer have been made more stringent. The threshold used to control unknown faces, either in the constructor or, in the case of the EigenFaceRecognizer, applied to the calculated distance, can be adjusted to give better accuracy. For the EigenFaceRecognizer, this can be done on the form by changing the value within the 'Unknown Threshold' textbox. A default of 2000 is used, but increasing this to 5000, for example, will make a false match less likely. Set it too high, however, and you may never achieve a match. The Eigen distance is displayed in the right-hand panel with the face to allow calibration.

The FisherFaceRecognizer and the LBPHFaceRecognizer have their thresholds set in the constructor. Additional controls can be added to the form to allow post-training calibration; this is discussed in the 'How the EMGU FaceRecognizer Works' section.

Histogram equalization is also used to improve accuracy; this produces a more uniform image that is more resilient to changes in lighting. Alternative methods could also be used to produce unique training sets: you could, for example, take just the eye and mouth features and concatenate the data, though further experimentation would be required. In version 2.4.9, the face data is cropped more tightly around its detected position in order to remove background noise that could have affected results in previous versions.

C#
result = currentFrame.Copy(face_found.rect).Convert<Gray, 
       byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
result._EqualizeHist();

You could also try larger training images: the following code, present in both forms, resizes the face to a 100x100 image, and increasing this could increase accuracy. Be warned, however, that both occurrences must be changed, and the larger this image, the longer the training and recognition time required.

C#
result = currentFrame.Copy(face_found.rect).Convert<Gray, 
  byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC); //Training Form.cs* 
                                                                 //& Main Form.cs*

When using a small data set, such as recognising only yourself, you should offset your training data. To aid in the classification of unknowns and reduce false recognition, you can also provide your recogniser with some training data of individuals to be recognised by the name 'Unknown'. Provided with the code is 'Group_Photo_Unknowns.pdf', containing an image of five people. Print this on an A4 piece of paper and train your recogniser on the four detectable people using the name 'Unknown'. While these people will be recognised as an individual called 'Unknown', they will offset your training data of yourself by providing four sets of PCA features that should belong to an unknown individual. This is a common practice and will reinforce your data set, reducing the false-positive potential, as individuals who are unknown have a greater chance of matching features with the people within the photo.

Detecting Unknown Faces

As discussed in the comments, this was previously achieved by editing the EMGU 2.3.0 source to allow access to the Eigen distance variable. EMGU v2.4.2 made this accessible by introducing a PredictionResult structure storing three variables: the recognised face label, the Eigen distance, and the index. V2.4.2 allowed you to set an Eigen threshold so that if the Eigen distance was not greater than this, a null result would be returned; this contained the same bug that has been addressed in this version of the code. The Classifier_Train.cs class applies its own Eigen threshold internally only when the EigenFaceRecognizer is used; the FisherFaceRecognizer and the LBPHFaceRecognizer allow the threshold to be set correctly in the constructor.

The main form includes a threshold box for calibrating the threshold value; the appropriate value changes depending on the size of your training data. To aid in calibration, the Eigen distance is printed next to the name when a face is recognised. The Recognise method checks whether the Eigen distance is greater than the set threshold; if not, an "Unknown" label is returned instead of the matched person's name from the database.

C#
public string Recognise(Image<Gray, Byte>  Input_image, int Eigen_Thresh = -1)
{
    if (_IsTrained)
    {
        FaceRecognizer.PredictionResult ER = recognizer.Predict(Input_image);

        if (ER.Label == -1)
        {
            Eigen_label = "Unknown";
            Eigen_Distance = 0;
            return Eigen_label;
        }
        else
        {
            Eigen_label = Names_List[ER.Label];
            Eigen_Distance = (float)ER.Distance;
            if (Eigen_Thresh > -1) Eigen_threshold = Eigen_Thresh;

            //Only apply the post-prediction threshold rule when using an Eigen recogniser,
            //since the Fisher and LBPH thresholds set in the constructor already work correctly
            switch (Recognizer_Type)
            {
                case ("EMGU.CV.EigenFaceRecognizer"):
                        if (Eigen_Distance > Eigen_threshold) return Eigen_label;
                        else return "Unknown";
                case ("EMGU.CV.LBPHFaceRecognizer"):
                case ("EMGU.CV.FisherFaceRecognizer"):
                default:
                        return Eigen_label; //the threshold set in training controls unknowns
            }
        }
    }
    else return "";
}

Improving the Detection Performance

Rather than focus on improving performance for slower processors, this program is designed to increase performance on modern machines. Real-time image processing is often desired, although impractical, and is closely linked to the accuracy of an image-processing algorithm: faster algorithms process less data, and their ability to separate true from false results suffers accordingly. In video acquisition, 30 frames per second is deemed standard; it is faster than what our eyes can cope with and thus provides smooth movement. In real-world applications, this is too slow for a computer to be accurate.

Modern high-speed cameras can acquire 640 x 480 images at 300 fps, putting a standard web camera to shame. These high-end cameras use specific frame grabbers that deal with image acquisition. It is unlikely that the standard user will encounter such speeds, maybe 60 fps at best, but what is important is the way in which these cameras perform image pre-processing at real-time speeds. The frame grabbers have FPGA (Field-Programmable Gate Array) chips integrated onto the card. These deal with producing the images but can also perform processes such as histogram equalization and edge detection at the same time. To understand how this is achieved, it is important to point out what an FPGA chip is and what its architecture allows.

An FPGA is, in simple terms, a chip whose logic can be configured to perform a specific operation, in much the same way that the dedicated processors in a smart phone are tailored to the jobs they run. On a computer, you could configure one device to run a word processor, another to run a browser and another to run games; obviously, complexity and practicality prevent this. An FPGA can be designed with an extremely parallel architecture, so while performing edge detection, you could also be performing object recognition.

While FPGA use is beyond the scope of this article, a parallel architecture for image processing can still be produced. Many users of Visual Studio will have come across threaded applications before, where the processing of data is spread across the cores of your computer. This used to be complex and require a large amount of experience; however, Microsoft has invested a lot of time in parallel computing. Here is a link to the home page: http://msdn.microsoft.com/en-us/concurrency/default. The Visual Studio team has produced a set of classes that will parallelise almost any loop you use. The performance increase depends on your machine and the number of physical cores: an i7 that presents 8 logical cores only has 4 physical cores, and performance increases of around x3.7 are seen on average.

Do not jump straight in and make everything parallel. Its use can be hit and miss, and performance must be examined; careless use can increase execution time and can easily eat up all the memory on your computer. It is also dependent on what other applications you are running.

Remember two things:

  1. Think about how the computer works: if you are only doing a small amount of processing, the computer must share out all the resources, tell each processor what to process, gather the results, deal with them, and repeat until your loop is exhausted. Sometimes allowing one processor to deal with the information is quicker. A stopwatch is your friend here: time both instances and see what happens (see the timing sketch after the code below). Also remember that your machine is not everyone else's; you may have 8 cores, but your end user may still be stuck with just 1.
  2. A few simple rules: each processor runs without looking at what the other processors are doing. Do not use parallel loops within parallel loops, as this will hinder performance. Do not set up a task in which the output of one iteration depends on another, or it will not work; similarly, if the results being recorded depend on the order of the iterations, errors will occur. Non-dependent operations, such as those below, are your friend. Any parallel loop can be buggy at times, so a try/catch statement is useful to keep.
C#
//variables
int Count = 0;
Image<Gray, Byte> Image_Blank_Copy = My_Image.CopyBlank();
...

Parallel.For(0, My_Image.Height, y =>
{
    for (int x = 0; x < My_Image.Width; x++)
    {
        //each pixel write is independent of every other iteration
        Image_Blank_Copy.Data[y, x, 0] += 10;
    }

    //a plain shared counter is NOT independent between iterations;
    //if one is needed, use Interlocked so iterations do not trample each other
    Interlocked.Increment(ref Count);
});
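As mentioned in point 1, timing both approaches is the only reliable way to decide. A minimal sketch follows (not from the project's source; ProcessRow is a hypothetical stand-in for your own per-row work, and System.Diagnostics is needed for Stopwatch):

C#
Stopwatch timer = Stopwatch.StartNew();
for (int y = 0; y < My_Image.Height; y++) ProcessRow(My_Image, y);    //sequential version
timer.Stop();
long sequential_ms = timer.ElapsedMilliseconds;

timer.Restart();
Parallel.For(0, My_Image.Height, y => ProcessRow(My_Image, y));       //parallel version
timer.Stop();
long parallel_ms = timer.ElapsedMilliseconds;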

Also, a word of warning: setting an ROI on an image and then using that image within a loop is considerably slower than simply copying that area to a new image and processing it. For example:

C#
//Bad and Slow
My_Image.ROI = new Rectangle(0,0,100,100);
Parallel.For(0, My_Image.Width, (int i) =>
{
    for (int j = 0; j < My_Image.Height; j++)
    {
        //Do something
    }
});

//Good and Fast
My_Image.ROI = new Rectangle(0, 0, 100, 100);

using (Image<Bgr, Byte> temporary_image = My_Image.Copy())
{
    Parallel.For(0, temporary_image.Width, (int i) =>
    {
        for (int j = 0; j < temporary_image.Height; j++)
        {
            //Do something
        }
    });
}

To access Parallel.For, Parallel.ForEach, Task and ThreadPool, you will need to add the following using statements:

C#
using System.Threading;
using System.Threading.Tasks;

In the source code provided, a parallel loop is used so that each face is recognised using a separate thread: for each face detected, the information is passed to the recogniser to be classified independently. While the benefit is not really noticeable with one face, if there are several people within a room, each one can be recognised independently. This is very useful if you are using a large amount of training data, as the more possibilities the recogniser has for an output, the longer it takes to make an accurate classification. A try/catch statement is used to filter out errors that can occur sporadically; this does not affect performance or accuracy.

C#
Parallel.For(0, facesDetected.Length, i =>   //one parallel iteration per detected face
{
    try
    {
        facesDetected[i].X += (int)(facesDetected[i].Height * 0.15);
        facesDetected[i].Y += (int)(facesDetected[i].Width * 0.22);
        facesDetected[i].Height -= (int)(facesDetected[i].Height * 0.3);
        facesDetected[i].Width -= (int)(facesDetected[i].Width * 0.35);
        result = currentFrame.Copy(facesDetected[i]).Convert<Gray, 
          Byte> ().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
        result._EqualizeHist();
        //draw a rectangle around the detected face on the colour frame
        currentFrame.Draw(facesDetected[i], new Bgr(Color.Red), 2);
        if (Eigen_Recog.IsTrained)
        {
            string name = Eigen_Recog.Recognise(result);
            int match_value = (int)Eigen_Recog.Get_Eigen_Distance;
            //Draw the label for each face detected and recognised
            currentFrame.Draw(name + " ", ref font,
              new Point(facesDetected[i].X - 2, facesDetected[i].Y - 2),
              new Bgr(Color.LightGreen));
            ADD_Face_Found(result, name, match_value);
        }
    }
    catch
    {
        //do nothing as parallel loop buggy
        //No action as the error is useless, it is simply an error in
        //no data being there to process and this occurs sporadically
    }
});

The performance increase doesn't end there. In the main program, a display of the last five faces detected is shown in the right-hand panel. These controls are created and shown programmatically and in parallel. As this is done in a parallel loop, each variable must be independent of actions within the loop. The important functions are shown below; you will notice that the location of each component is controlled by the variables faces_panel_X and faces_panel_Y, and every operation on these variables is independent and works from their current values.

C#
void Clear_Faces_Found()
void ADD_Face_Found(Image<Gray, Byte> img_found, string name_person)
{
    ...
    PI.Location = new Point(faces_panel_X, faces_panel_Y);
    ...
    LB.Location = new Point(faces_panel_X, faces_panel_Y + 80);
    ...
      
    faces_count++;
    if (faces_count == 2)
    {
        faces_panel_X = 0;
        faces_panel_Y += 100;
        faces_count = 0;
    }
    else faces_panel_X += 85;
    ...
}

You can control the number of faces shown by adjusting the point at which the control panel is cleared. As there is a PictureBox and a Label per face, you must multiply the number of faces by two; in this case, 10/2 = 5 faces and names are shown.

C#
if (Faces_Found_Panel.Controls.Count > 10)

Changing Between Parallel and Sequential Execution v2.4

To aid in comparison, a menu option is now provided to switch between parallel and sequential processing without the need to exit the program. For further details, see 'Changing Between Parallel and Sequential Execution v2.3' below.

Changing Between Parallel and Sequential Execution v2.3

As users may want to investigate the performance increase, both the parallel and the sequential facial-recognition processing functions are included; the default is the parallel method. Within the Main Form.cs* code, you will see two functions:

C#
//Process Frame
void FrameGrabber_Standard(object sender, EventArgs e)  //This is the Sequential
void FrameGrabber_Parrellel(object sender, EventArgs e) //and this the Parallel 

Which of these is used is controlled by the camera start and stop functions, again within Main Form.cs*.

C#
//Camera Start Stop
public void initialise_capture()
{
    grabber = new Capture();
    grabber.QueryFrame();
    //Initialize the FrameGrabber event
    Application.Idle += new EventHandler(FrameGrabber_Parrellel);
}
private void stop_capture()
{
    Application.Idle -= new EventHandler(FrameGrabber_Parrellel);
    if(grabber!= null)
    {
        grabber.Dispose();
    }
}

You must change the following two lines of code and redirect them to the sequential function.

C#
Application.Idle += new EventHandler(FrameGrabber_Parrellel); //initialise Capture
Application.Idle -= new EventHandler(FrameGrabber_Parrellel); //Stop Capture

//becomes

Application.Idle += new EventHandler(FrameGrabber_Standard);  //initialise Capture
Application.Idle -= new EventHandler(FrameGrabber_Standard);  //Stop Capture

The parallelisation of image-processing code is important, and the buck does not stop with threading: it is easy to implement, and with practice it can be implemented well. There is also a newcomer to EMGU which utilises CUDA graphics processing. This topic is more advanced and will not be covered, as not everyone has a CUDA-enabled graphics card, but in essence it allows a higher level of parallelisation: rather than 4 or 8 cores, you can work with hundreds, and it is easy to imagine the improvements that can be made in execution time. The PedestrianDetection example that ships with EMGU shows how this can be implemented.

Conclusion

This article, while explaining the principles of PCA, introduces the important subject of decreasing execution time through parallelisation. While the software and the article demonstrate only a small example of its implementation, its importance must be noted. With the increasing affordability of multi-core processors and CUDA-based graphics cards, image processing in real time is more accessible. Advanced microelectronics are no longer required to speed up simpler image-processing systems, and the decrease in development time allows image processing to be used by more individuals.

If you feel improvements or corrections can be made to this article, please post a comment and they will be addressed.

Previous Versions

SourceForge

Acknowledgements

Many thanks to all of the 2006 research group of the Mineralogy and Crystallography department at the University of Arizona whose group photo was used for demonstration purposes. http://www.geo.arizona.edu/xtal/group/group2006.htm

If you do not wish your image to be used, please post a comment or contact me and I’ll remove any references.

Thank you to Sergio Andrés Gutiérrez Rojas whose code inspired this article. I hope you continue with your good work in Image processing.

History

  1. Direct links provided via an external file-hosting website, kiwi6.com. Apologies if these links fail; CodeProject does not allow such large files to be hosted. To prevent downtime, SourceForge links are also provided. All files are now held on SourceForge.
  2. Direct links set to open a new window, as hotlinking now re-directs traffic. A new host will be looked for. The x64 version is now hosted on a private website; still looking for a host for the larger x86 version. The new version runs both x86/x64 code.
  3. Version 2.4 uploaded, allowing unrecognised faces to be detected and including saving/loading of a recogniser variable. Updates to the code are included in the article.
  4. Version 2.4.9 updated to the new FaceRecognizer class, allowing Eigen, Fisher and LBPH recognisers to be applied. The face detection using Haar cascades was updated with the new method calls. The save/load methods were adjusted to suit the new FaceRecognizer class, and the article was updated significantly because of this.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Engineer Manchester Metropolitan University
United Kingdom
I have a large amount of experience in image processing, having undertaken it as a major part of my PhD. As an assistant lecturer, I have helped teach the subject matter at Masters level and programming skills to undergraduates. I have a good grasp of the abilities of various image processing methods and expertise in their practical application.

What makes my experience unique is the ability to design a high-specification machine vision system from first principles up, using my extensive knowledge of electronics to produce the custom electronics often required to help provide the best images possible. Such well-designed systems assist image-processing algorithms and increase their abilities. Additional expertise developed from my education focuses on software engineering, electronic engineering, and practical skills developed in mechanical engineering. My PhD research has produced industry-quality designs that surpass the ability of products currently available to end users, with in-depth experience of image processing, microcontrollers (PICs) and FPGA devices. I am also experienced with the new range of development boards that are replacing traditional microcontroller programming methods.
