
AI: Neural Network for Beginners (Part 3 of 3)

29 Jan 2007 · CPOL · 11 min read
AI: An introduction to neural networks (multi-layer networks trained by a Microbial GA).

Introduction

This article is part 3 of a series of three articles that I am going to post. The proposed article content will be as follows:

  1. Part 1: An introduction to Perceptron networks (single-layer neural networks).
  2. Part 2: About multi-layer neural networks, and the back propagation training method used to solve a non-linear classification problem, such as the logic of an XOR logic gate. This is something a Perceptron can't do, and it is explained further within this article.
  3. Part 3: About how to use a genetic algorithm (GA) to train a multi-layer neural network to solve a logic problem. If you have never come across genetic algorithms, perhaps my other article located here may be a good place to start to learn the basics.

Summary

This article will show how to use a Microbial Genetic Algorithm to train a multi-layer neural network to solve the XOR logic problem.

A Brief Recap (From Parts 1 and 2)

Before we commence with the nitty-gritty of this new article, which deals with multi-layer neural networks, let's just revisit a few key concepts. If you haven't read Part 1 or Part 2, perhaps you should start there.

Part 1: Perceptron Configuration (Single Layer Network)

The inputs (x1,x2,x3..xm) and connection weights (w1,w2,w3..wm) in figure 4 are typically real values, both positive (+) and negative (-). If the feature of some xi tends to cause the perceptron to fire, the weight wi will be positive; if the feature xi inhibits the perceptron, the weight wi will be negative.

The perceptron itself consists of the weights, the summation processor, an activation function, and an adjustable threshold processor (referred to as the bias hereafter).

For convenience, the normal practice is to treat the bias as just another input. The following diagram illustrates the revised configuration:

Image 1

The bias can be thought of as the propensity (a tendency towards a particular way of behaving) of the perceptron to fire irrespective of its inputs. The perceptron configuration network shown in Figure 5 fires if the weighted sum > 0 or, if you prefer a more mathematical explanation, when the following condition holds:

Image 2
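To make the "bias as just another input" idea concrete, here is a minimal sketch of a perceptron deciding whether to fire. This is my own illustrative code, not taken from this article's download; the names Fire, inputs, and weights are mine:

C#
// Illustrative only: the bias is treated as an extra input that is
// permanently clamped to 1.0 and has its own adjustable weight.
static int Fire(double[] inputs, double[] weights)
{
    // inputs[0] is the bias input (always 1.0), weights[0] its weight
    double sum = 0.0;
    for (int i = 0; i < inputs.Length; i++)
        sum += inputs[i] * weights[i];

    // step activation: the perceptron fires if the weighted sum > 0
    return sum > 0.0 ? 1 : 0;
}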

Part 2: Multi-Layer Configuration

The multi-layer network that will solve the XOR problem will look similar to a single layer network. We are still dealing with inputs / weights / outputs. What is new is the addition of the hidden layer.

Image 3

As already explained above, there is one input layer, one hidden layer, and one output layer.

It is by using the inputs and weights that we are able to work out the activation for a given node. This is easily achieved for the hidden layer as it has direct links to the actual input layer.

The output layer, however, knows nothing about the input layer as it is not directly connected to it. So to work out the activation for an output node, we need to make use of the output from the hidden layer nodes, which are used as inputs to the output layer nodes.

This entire process described above can be thought of as a pass forward from one layer to the next.

This still works like it did with a single layer network; the activation for any given node is still worked out as follows:

Image 4

where wi is the weight(i) and Ii is the input(i) value. You see, it's the same old stuff; no demons, smoke, or magic here. It's stuff we've already covered.
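As an illustration of this pass forward (again, my own sketch; the article's NeuralNetwork class is structured differently), the activation of every node in one layer can be computed from the previous layer's outputs like so:

C#
// Illustrative sketch of a pass forward through one layer: each node's
// activation is the sigmoid of the weighted sum of the layer's inputs.
static double[] ForwardLayer(double[] layerInputs, double[,] weights)
{
    int nodeCount = weights.GetLength(0);
    double[] activations = new double[nodeCount];
    for (int n = 0; n < nodeCount; n++)
    {
        double sum = 0.0;
        for (int i = 0; i < layerInputs.Length; i++)
            sum += weights[n, i] * layerInputs[i];   // sum of wi * Ii
        activations[n] = 1.0 / (1.0 + Math.Exp(-sum)); // sigmoid
    }
    return activations;
}

For the XOR network, something like this would be called twice: once to take the inputs to the hidden layer, and once more to take the hidden layer's outputs to the output node.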

So that's how the network looks. Now I guess you want to know how to go about training it.

Learning

There are essentially two types of learning that may be applied to a neural network, which are "Reinforcement" and "Supervised".

Reinforcement

In Reinforcement learning, during training, a set of inputs is presented to the neural network. Suppose the output is 0.75 when the target was expecting 1.0: the error (1.0 - 0.75 = 0.25) is all that is fed back for training ("you were wrong by 0.25"). What if there are two outputs? Then the errors are summed to give a single number (typically the sum of squared errors), e.g., "your total error on all outputs is 1.76". Note that this only tells you how wrong you were, not in which direction you were wrong. Using this method, we may never converge on a result; it is rather like playing "hunt the needle" when all you are told is how warm or cold you are.

Using a genetic algorithm to train a multi-layer neural network offers a Reinforcement-type training arrangement, where the mutation is responsible for "jiggling the weights a bit". This is what this article is all about.

Supervised

In Supervised learning, the neural network is given more information: not just "how wrong" it was, but "in which direction it was wrong". It is like "hunt the needle", but this time you are told "North a bit" or "West a bit". So you get, and use, far more information in Supervised learning, and this is the normal form of neural network learning.

This training method is normally conducted using a Back Propagation training method, which I covered in Part 2, so if this is your first article of these three parts, and the back propagation method is of particular interest, then you should look there.

So Now the New Stuff

From this point on, anything that is being discussed relates directly to this article's code.

What is the problem we are trying to solve? Well, it's the same as it was for Part 2: the simple XOR logic problem. In fact, this article's content is really just an incremental build on knowledge that was covered in Part 1 and Part 2, so let's march on.

For the benefit of those that may have only read this one article, the XOR logic problem looks like the following truth table:

Image 5
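In plain text, XOR outputs 1 only when exactly one of its two inputs is 1:

Input 1    Input 2    Output
   0          0          0
   0          1          1
   1          0          1
   1          1          0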

Remember that with a single-layer network (a perceptron), we can't actually achieve the XOR functionality, as it is not linearly separable. But with a multi-layer network, this is achievable.

So with this in mind, how are we going to achieve this? Well, we are going to use a Genetic Algorithm (GA from this point on) to breed a population of neural networks that will hopefully evolve to provide a solution to the XOR logic problem; that's the basic idea anyway.

So what does this all look like?

Image 6

As can be seen from the figure above, what we are going to do is have a GA which will actually contain a population of neural networks. The idea is that the GA will jiggle the weights of the neural networks within the population, in the hope that this jiggling will push the population towards a solution to the XOR problem.

So How Does This Translate Into an Algorithm?

The basic operation of the Microbial GA training is as follows:

  • Pick two genotypes at random
  • Compare scores (fitness) to come up with a winner and a loser
  • Go along the genotype; at each locus (point):

    • With some probability, copy from winner to loser (overwrite)
    • With some probability, mutate that locus of the loser

    Only the loser gets changed, which gives a version of Elitism for free; this ensures the best in breed always remains in the population.

That's it. That is the complete algorithm.
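To make that concrete, here is a minimal sketch of a single Microbial GA tournament, operating directly on weight arrays. This is illustrative only: the names Tournament, RECOMBINATION_RATE, and MUTATION_RATE are mine, and the article's GA_Trainer_XOR organises this differently:

C#
// One Microbial GA tournament. Lower error = fitter, so the genotype
// with the larger error is the loser and is the only one modified.
static void Tournament(double[][] genotypes,
                       Func<double[], double> error, Random rnd)
{
    const double RECOMBINATION_RATE = 0.5; // chance of copying a locus
    const double MUTATION_RATE = 0.1;      // chance of jiggling a locus

    // pick two different genotypes at random
    int a = rnd.Next(genotypes.Length);
    int b = rnd.Next(genotypes.Length);
    while (b == a) b = rnd.Next(genotypes.Length);

    // compare fitness to come up with a winner and a loser
    int winner = error(genotypes[a]) <= error(genotypes[b]) ? a : b;
    int loser = winner == a ? b : a;

    // go along the genotype; at each locus, maybe copy, maybe mutate
    for (int locus = 0; locus < genotypes[loser].Length; locus++)
    {
        if (rnd.NextDouble() < RECOMBINATION_RATE)
            genotypes[loser][locus] = genotypes[winner][locus];
        if (rnd.NextDouble() < MUTATION_RATE)
            genotypes[loser][locus] += rnd.NextDouble() - 0.5; // jiggle
    }
}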

But there are some essential issues to be aware of when playing with GAs:

  1. The genotype will be different for a different problem domain
  2. The fitness function will be different for a different problem domain

These two items must be developed again whenever a new problem is specified. For example, if we wanted to find a person's favourite pizza toppings, the genotype and fitness function would be different from those used for this article's problem domain.

These two essential elements of a GA (for this article's problem domain) are specified below.

1. The Genotype

For this article, the problem domain states that we have a population of neural networks, so I created a single-dimension array of NeuralNetwork objects. This can be seen in the constructor code within the GA_Trainer_XOR object:

C#
//ANN's
private NeuralNetwork[] networks;

public GA_Trainer_XOR()
{
    networks = new NeuralNetwork[POPULATION];
    //create new ANN objects, random weights applied at start
    for (int i = 0; i <= networks.GetUpperBound(0); i++)
    {
       networks[i] = new NeuralNetwork(2, 2, 1);
       networks[i].Change += 
         new NeuralNetwork.ChangeHandler(GA_Trainer_NN_Change);
    }
}
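The (2, 2, 1) arguments give each network two input nodes, two hidden nodes, and one output node: the smallest topology that fits the two-input, one-output XOR problem.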

2. The Fitness Function

Remembering the problem domain description, the following truth table is what we are trying to achieve:

Image 7

So how can we tell how fit (how close) the neural network is to this? It is fairly simple really. What we do is present the entire set of inputs to the neural network, one at a time, and keep an accumulated error value, which is worked out as follows:

Within the NeuralNetwork class, there is a getError(..) method like this:

C#
public double getError(double[] targets)
{
    //storage for error
    double error = 0.0;
    //this calculation is based on something I read about weight space in
    //Artificial Intelligence - A Modern Approach, 2nd edition, Prentice Hall,
    //2003. Stuart Russell, Peter Norvig. Pg 741
    error = Math.Sqrt(Math.Pow((targets[0] - outputs[0]), 2));
    return error;
}
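Note that Math.Sqrt(Math.Pow(x, 2)) is simply the absolute value |x|, so for a single output node the per-pattern error is just the absolute difference between the target and the network's output; with more output nodes, the square-and-root form would generalise to the root of the summed squared errors described in the book.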

Then in the NN_Trainer_XOR class, there is an evaluate(..) method that accepts an int value representing the member of the population to fetch and evaluate (get the fitness for). The overall error (the fitness) is then returned to the GA training method, to decide which neural network should be the winner and which should be the loser.

C#
private double evaluate(int popMember)
{
    double error = 0.0;
    //loop through the entire training set
    for (int i = 0; i <= train_set.GetUpperBound(0); i++)
    {
        //forward these new values through network
        //forward weights through ANN
        forwardWeights(popMember, getTrainSet(i));
        double[] targetValues = getTargetValues(getTrainSet(i));
        error += networks[popMember].getError(targetValues);
    }
    //if the error term is < the acceptableNNError value, we have found
    //a good configuration of weights for the NeuralNetwork, so tell the
    //GA to stop looking
    if (error < acceptableNNError)
    {
        bestConfiguration = popMember;
        foundGoodANN = true;
    }
    //return error
    return error;
}

So how do we know when we have a trained neural network? In this article's code, what I have done is provide a fixed limit value within the NN_Trainer_XOR class that, when reached, indicates that the training has yielded a best configured neural network.

If, however, the entire training loop is done and there is still no well-configured neural network, I simply return the value of the winner (of the last training epoch) as the overall best configured neural network.

This is shown in the code snippet below; this should be read in conjunction with the evaluate(..) method shown above:

C#
//check to see if there was a best configuration found, may not have done
//enough training to find a good NeuralNetwork configuration, so will simply
//have to return the WINNER
if (bestConfiguration == -1)
{
    bestConfiguration = WINNER;
}
//return the best Neural network
return networks[bestConfiguration];

So Finally the Code

Well, the code for this article looks like the following class diagram (it's Visual Studio 2005, C#, .NET v2.0):

Image 8

The main classes that people should take the time to look at are:

  • GA_Trainer_XOR: Trains a neural network to solve the XOR problem using a Microbial GA.
  • TrainerEventArgs: Training event args, for use with a GUI.
  • NeuralNetwork: A configurable neural network.
  • NeuralNetworkEventArgs: Training event args, for use with a GUI.
  • SigmoidActivationFunction: A static method to provide the sigmoid activation function.

The rest make up the GUI, which I constructed simply to show how it all fits together.

Note: The demo project contains all code, so I won't list it here. Also note that most of these classes are quite similar to those included with the Part 2 article code. I wanted to keep the code similar so people who have already looked at Part 2 would recognize the common pattern.

Code Demos

The demo application attached has three main areas which are described below:

Live Results Tab

Image 9

It can be seen that this has very nearly solved the XOR problem; it did, however, take nearly 45,000 iterations (epochs) of the training loop. Remember that we also have to present the entire training set to the network, and do this twice per tournament: once to find a winner and once to find a loser. That is quite a lot of work, I am sure you would all agree, and it is why neural networks are not normally trained by GAs; this article is really about how to apply a GA to a problem domain. The fact that the GA training took 45,000 epochs to yield an acceptable result does not mean that GAs are useless. Far from it: GAs have their place and can be used for many problems, such as:

  • Sudoku solver (the popular game)
  • Backpack (knapsack) problem (trying to optimize the use of a backpack of limited size, to get as many items in as will fit)
  • Favourite pizza toppings problem (try and find out what someone's favourite pizza is)

To name but a few. Basically, if you can come up with the genotype and a fitness function, you should be able to get a GA to work out a solution. GAs have also been used to grow entire syntax trees of grammar, in order to predict which grammar is more optimal. There is more research being done in this area as I write this article; in fact, there is a nice article on this topic (Gene Expression Programming) by Andrew Kirillov, right here at CodeProject, if anyone wants to read further.

Training Results Tab

Viewing the target/outputs together:

Image 10

Viewing the errors:

Image 11

Trained Results Tab

Viewing the target/outputs together:

Image 12

It is also possible to view the neural network's final configuration using the "View Neural Network Config" button.

Image 13

What Do You Think?

That is it. I would just like to ask: if you liked the article, please vote for it.

Points of Interest

I think AI is fairly interesting; that's why I am taking the time to publish these articles. I hope someone else finds it interesting too, and that it might help further someone's knowledge, as it has my own.

Anyone who wants to look further into AI, and finds the content of this article a bit basic, should check out Andrew Kirillov's articles at Andrew Kirillov CP articles, as his are more advanced, and very good.

History

  • v1.1: 27/12/06: Modified the GA_Trainer_XOR class to have a random number seed of 5.
  • v1.0: 11/12/06: Initial article.

Bibliography

  • Rich, E. and Knight, K., Artificial Intelligence, 2nd edition, McGraw-Hill Inc.
  • Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach, 2nd edition, Prentice Hall, 2003.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

