
AI : Neural Network for beginners (Part 2 of 3)

AI : An Introduction to Neural Networks (Multi-layer networks / Back Propagation)

Introduction

This article is part 2 of a series of 3 articles that I am going to post. The proposed content of the series is as follows:

  1. Part 1 : An introduction to Perceptron networks (single layer neural networks).
  2. Part 2 : This one, which is about multi-layer neural networks, and the back propagation training method used to solve a non-linear classification problem such as the logic of an XOR gate. This is something that a Perceptron can't do, as is explained further within this article.
  3. Part 3 : Will show how to use a genetic algorithm (GA) to train a multi-layer neural network to solve some logic problem.

Summary

This article will show how to use a multi-layer neural network to solve the XOR logic problem.

A Brief Recap (From part 1 of 3)

Before we commence with the nitty-gritty of this new article, which deals with multi-layer Neural Networks, let's just revisit a few key concepts. If you haven't read Part 1, perhaps you should start there.

Perceptron Configuration (Single layer network)

The inputs (x1,x2,x3..xm) and connection weights (w1,w2,w3..wm) shown below are typically real values, both positive (+) and negative (-).

The perceptron itself consists of weights, the summation processor, an activation function, and an adjustable threshold processor (called the bias hereafter).

For convenience, the normal practice is to treat the bias as just another input. The following diagram illustrates the revised configuration.

Image 1

The bias can be thought of as the propensity (a tendency towards a particular way of behaving) of the perceptron to fire irrespective of its inputs. The perceptron network shown above fires if the weighted sum > 0, or, if you are into maths-type explanations:

Image 2
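For those who want it spelled out, the firing rule (which is presumably what Image 2 shows, given the description above) can be written as follows, treating the bias as just another input $x_0 = 1$ with its own weight $w_0$:

$$ \text{output} = \begin{cases} 1 & \text{if } \sum_{i=0}^{m} w_i x_i > 0 \\ 0 & \text{otherwise} \end{cases} $$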

So that's the basic operation of a perceptron. But we now want to build more layers of these, so let's carry on to the new stuff.

So Now The New Stuff (More layers)

From this point on, anything that is being discussed relates directly to this article's code.

In the summary at the top, the problem we are trying to solve was how to use a multi-layer neural network to solve the XOR logic problem. So how is this done? Well, it's really an incremental build on what Part 1 already discussed. So let's march on.

What does the XOR logic problem look like? Well, it looks like the following truth table:

Image 3
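In case the image doesn't render, the XOR truth table is the standard one:

X1   X2   Output
0    0    0
0    1    1
1    0    1
1    1    0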

Remember that with a single layer (perceptron) we can't actually achieve the XOR functionality, as XOR is not linearly separable. But with a multi-layer network, this is achievable.

What Does The New Network Look Like

The new network that will solve the XOR problem will look similar to a single layer network. We are still dealing with inputs / weights / outputs. What is new is the addition of the hidden layer.

Image 4

As already explained above, there is one input layer, one hidden layer and one output layer.

It is by using the inputs and weights that we are able to work out the activation for a given node. This is easily achieved for the hidden layer as it has direct links to the actual input layer.

The output layer, however, knows nothing about the input layer as it is not directly connected to it. So to work out the activation for an output node we need to make use of the output from the hidden layer nodes, which are used as inputs to the output layer nodes.

This entire process described above can be thought of as a pass forward from one layer to the next.

This still works like it did with a single layer network; the activation for any given node is still worked out as follows:

Image 5

where wi is the weight for connection i, and Ii is the value of input i.
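As a formula (a reconstruction of what Image 5 presumably shows, given the definitions just stated), the activation $a$ of a node with $n$ incoming connections is:

$$ a = \sum_{i=1}^{n} w_i I_i $$

The node's output is then the transfer (activation) function, described further below, applied to $a$.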

You see, it's the same old stuff; no demons, smoke or magic here. It's stuff we've already covered.
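To make the pass forward concrete, here is a minimal sketch of it in C#. This is not the article's actual NeuralNetwork class; the names (ForwardPassSketch, inputToHidden, hiddenToOutput) are made up for illustration, and the biases are omitted for brevity (as described above, a bias can be folded in as just another, always-on input).

C#
using System;

// A minimal sketch of a forward pass through one hidden layer.
// Hypothetical names only - NOT the article's actual NeuralNetwork class.
public static class ForwardPassSketch
{
    private static double Sigmoid(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-x));
    }

    public static double[] ForwardPass(double[] inputs,
        double[,] inputToHidden, double[,] hiddenToOutput)
    {
        int numHidden = inputToHidden.GetLength(1);
        int numOutputs = hiddenToOutput.GetLength(1);

        // Hidden layer: each node sums its weighted inputs (the activation),
        // then squashes the result with the sigmoid.
        double[] hidden = new double[numHidden];
        for (int h = 0; h < numHidden; h++)
        {
            double sum = 0.0;
            for (int i = 0; i < inputs.Length; i++)
                sum += inputToHidden[i, h] * inputs[i];
            hidden[h] = Sigmoid(sum);
        }

        // Output layer: it knows nothing about the input layer;
        // the hidden layer's outputs act as its inputs.
        double[] outputs = new double[numOutputs];
        for (int o = 0; o < numOutputs; o++)
        {
            double sum = 0.0;
            for (int h = 0; h < numHidden; h++)
                sum += hiddenToOutput[h, o] * hidden[h];
            outputs[o] = Sigmoid(sum);
        }
        return outputs;
    }
}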

So that's how the network looks/works. So now I guess you want to know how to go about training it.

Types Of Learning

There are essentially 2 types of learning that may be applied to a Neural Network: "Reinforcement" and "Supervised".

Reinforcement

In Reinforcement learning, during training, a set of inputs is presented to the Neural Network, and, say, the output is 0.75 when the target was expecting 1.0.

The error (1.0 - 0.75) is used for training ('wrong by 0.25').

What if there are 2 outputs? Then the total error is summed to give a single number (typically the sum of squared errors), e.g. "your total error on all outputs is 1.76".
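For target values $t_i$ and actual outputs $o_i$, that single number is typically:

$$ E = \sum_{i} (t_i - o_i)^2 $$

(sometimes halved, purely for convenience when differentiating later).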

Note that this just tells you how wrong you were, not in which direction you were wrong.

Using this method we may never get a result, or it could be a case of 'Hunt the needle'.

NOTE : Part 3 of this series will be using a GA to train a Neural Network, which is Reinforcement learning. The GA simply does what a GA does, running through all the normal GA phases to select weights for the Neural Network. There is no back propagation of values; the Neural Network is judged to be just good or just bad. As one can imagine, this process takes a lot more steps to get to the same result.

Supervised

In Supervised Learning the Neural Network is given more information.
Not just 'how wrong' it was, but 'in which direction it was wrong'. It's like 'Hunt the needle', but where you are told 'North a bit', 'West a bit'.

So you get, and use, far more information in Supervised Learning, and this is the normal form of Neural Network learning algorithm. Back Propagation (what this article uses) is Supervised Learning.

Learning Algorithm

In brief, to train a multi-layer Neural Network, the following steps are carried out:

  • Start off with random weights (and biases) in the Neural Network
  • Try one or more members of the training set, and see how far the output(s) are from the target output(s)
  • Jiggle the weights a bit, aiming to improve the outputs
  • Now try with a new lot of the training set, or repeat again,
    jiggling the weights each time
  • Keep repeating until you get quite accurate outputs

This is what this article submission uses to solve the XOR problem. This is also called "Back Propagation" (normally abbreviated to BP or BackProp).
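As a sketch, the outer loop might look like the following. The names here are illustrative only, not the article's exact API: ForwardPass stands in for presenting one training pair to the network (as sketched earlier, but using the network's stored weights), and train_network is the back propagation method listed later in this article.

C#
// A hypothetical outer training loop (illustrative names only).
// inputs/targets would hold the 4 XOR cases for this article's problem.
private void TrainXor(double[][] inputs, double[][] targets, int maxEpochs)
{
    for (int epoch = 0; epoch < maxEpochs; epoch++)
    {
        double totalError = 0.0;
        for (int p = 0; p < inputs.Length; p++)
        {
            // Pass forward, then jiggle the weights via back propagation.
            double[] outputs = ForwardPass(inputs[p]);
            train_network(targets[p]);

            // Accumulate the sum of squared errors for this epoch.
            for (int i = 0; i < outputs.Length; i++)
            {
                double error = targets[p][i] - outputs[i];
                totalError += error * error;
            }
        }
        // Keep repeating until the outputs are "quite accurate";
        // the 0.01 threshold here is an arbitrary choice.
        if (totalError < 0.01)
            break;
    }
}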

Backprop allows you to use this error at output, to adjust the weights arriving at the output layer, but then also allows you to calculate the effective error 1 layer back, and use this to adjust the weights arriving there, and so on, back-propagating errors through any number of layers.

The trick is the use of a sigmoid as the non-linear transfer function (which was covered in Part 1). The sigmoid is used as it offers the ability to apply differentiation techniques.

Image 6
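For reference, the standard sigmoid (which is presumably what Image 6 plots) is:

$$ \sigma(x) = \frac{1}{1 + e^{-x}} $$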

Because this is nicely differentiable – it so happens that

Image 7
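That is, the sigmoid's derivative can be written in terms of the sigmoid's own output:

$$ \frac{d\sigma}{dx} = \sigma(x)\,(1 - \sigma(x)) $$

Applying the chain rule to the output error then gives the delta for an output node with output $o$ and target $t$: $\delta = o(1 - o)(t - o)$.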

which, in the context of this article's code, can be written as

delta_outputs[i] = outputs[i] * (1.0 - outputs[i]) * (targets[i] - outputs[i])

It is by using this calculation that the weight changes can be applied back through the network.

Things To Watch Out For

Valleys: Using the rolled ball metaphor, there may well be valleys like this, with steep sides and a gently sloping floor. Gradient descent tends to waste time swooshing up and down each side of the valley (think ball!)

Image 8

So what can we do about this? Well, we add a momentum term that tends to cancel out the back and forth movements and emphasize any consistent direction; this will then go down such valleys with gentle bottom-slopes much more successfully (faster).

Image 9
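A weight update with a momentum term might look like the following sketch. None of these names come from the article's code; previousChange and alpha (the momentum coefficient) are invented for illustration.

C#
// A sketch of a momentum-augmented weight update (hypothetical names).
// previousChange[j, i] remembers the weight change applied on the last pass.
private void UpdateWeightWithMomentum(double[,] weights,
    double[,] previousChange, int j, int i,
    double gradient, double learningRate, double alpha)
{
    // New change = the usual gradient step, plus a fraction (alpha,
    // typically around 0.9) of the previous change. Oscillations across
    // the valley cancel out; a consistent downhill direction is reinforced.
    double change = (learningRate * gradient) + (alpha * previousChange[j, i]);
    weights[j, i] += change;
    previousChange[j, i] = change; // remember for the next pass
}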

Starting The Training

This is probably best demonstrated with a code snippet from the article's actual code:

C#
/// <summary>
/// The main training. The expected target values are passed in to this
/// method as parameters, and the <see cref="NeuralNetwork">NeuralNetwork</see>
/// is then updated with small weight changes, for this training iteration.
/// This is also where momentum would be applied, to ensure that the
/// NeuralNetwork is nurtured into proceeding in the correct direction; we are
/// trying to avoid valleys. If you don't know what valleys means, read the
/// article's associated text. (Note: the updates below are plain gradient
/// descent steps; a momentum term, as sketched earlier, would also add a
/// fraction of the previous pass's weight change.)
/// </summary>
/// <param name="target">A double[] array containing the target value(s)</param>
private void train_network(double[] target)
{
    // Delta (error gradient) values computed for this pass
    double[] delta_hidden = new double[nn.NumberOfHidden + 1];
    double[] delta_outputs = new double[nn.NumberOfOutputs];

    // Get the delta value for the output layer:
    // output * (1 - output) * (target - output)
    for (int i = 0; i < nn.NumberOfOutputs; i++)
    {
        delta_outputs[i] =
        nn.Outputs[i] * (1.0 - nn.Outputs[i]) * (target[i] - nn.Outputs[i]);
    }
    // Get the delta value for the hidden layer, by back-propagating the
    // output deltas through the hidden-to-output weights
    for (int i = 0; i < nn.NumberOfHidden + 1; i++)
    {
        double error = 0.0;
        for (int j = 0; j < nn.NumberOfOutputs; j++)
        {
            error += nn.HiddenToOutputWeights[i, j] * delta_outputs[j];
        }
        delta_hidden[i] = nn.Hidden[i] * (1.0 - nn.Hidden[i]) * error;
    }
    // Now update the weights between hidden & output layer:
    // learning rate * output delta * the hidden activation feeding the weight
    for (int i = 0; i < nn.NumberOfOutputs; i++)
    {
        for (int j = 0; j < nn.NumberOfHidden + 1; j++)
        {
            nn.HiddenToOutputWeights[j, i] += nn.LearningRate * delta_outputs[i] * nn.Hidden[j];
        }
    }
    // Now update the weights between input & hidden layer:
    // learning rate * hidden delta * the input feeding the weight
    for (int i = 0; i < nn.NumberOfHidden; i++)
    {
        for (int j = 0; j < nn.NumberOfInputs + 1; j++)
        {
            nn.InputToHiddenWeights[j, i] += nn.LearningRate * delta_hidden[i] * nn.Inputs[j];
        }
    }
}

So Finally The Code

Well, the code for this article looks like the following class diagram (It's Visual Studio 2005 C#, .NET v2.0)

Image 10

The main classes that people should take the time to look at are:

  • NN_Trainer_XOR : Trains a Neural Network to solve the XOR problem
  • TrainerEventArgs : Training event args, for use with a GUI
  • NeuralNetwork : A configurable Neural Network
  • NeuralNetworkEventArgs : Neural Network event args, for use with a GUI
  • SigmoidActivationFunction : A static method to provide the sigmoid activation function (a rough sketch of such a method follows this list)
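As promised, a static sigmoid activation method boils down to something like this (a sketch only; the actual SigmoidActivationFunction class in the download may differ in detail):

C#
// A minimal sketch of a static sigmoid activation method.
public static class SigmoidActivationFunction
{
    public static double Sigmoid(double x)
    {
        return 1.0 / (1.0 + System.Math.Exp(-x));
    }
}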

The rest are a GUI I constructed simply to show how it all fits together.

NOTE : the demo project contains all code, so I won't list it here.

Code Demos

The DEMO application attached has 3 main areas which are described below:

LIVE RESULTS Tab

Image 11

It can be seen that this has very nearly solved the XOR problem (you will probably never get it 100% accurate).

TRAINING RESULTS Tab

Viewing the training phase target/outputs together

Image 12

Viewing the training phase errors

Image 13

TRAINED RESULTS Tab

Viewing the trained target/outputs together

Image 14

Viewing the trained errors

Image 15

It is also possible to view the Neural Network's final configuration using the "View Neural Network Config" button. If people are interested in what weights the Neural Network ended up with, this is the place to look.

Image 16

What Do You Think?

That's it. I would just like to ask, if you liked the article, please vote for it.

Points of Interest

I think AI is fairly interesting; that's why I am taking the time to publish these articles. I hope someone else finds it interesting, and that it might help further someone's knowledge, as it has my own.

Anyone who wants to look further into AI-type stuff, and finds the content of this article a bit basic, should check out Andrew Kirillov's CP articles, as his are more advanced, and very good. In fact, anything Andrew seems to do is very good.

History

  • v1.0 24/11/06


License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


