Introduction
Artificial Neural Networks (ANNs) are a computational tool modeled on biological neural networks. Their strength is the ability to solve problems that are very hard to solve with traditional computing methods (i.e., step-by-step algorithms). This article briefly explains Artificial Neural Networks and their applications, and describes how to implement a simple ANN for image recognition.
Background
This section tries to make the idea clear even to readers who are simply curious about the topic.
About Artificial Neural Networks (ANNs)
Artificial Neural Networks (ANNs) take a different approach to problem solving than traditional computing methods. Conventional computers use an algorithmic approach: if the specific steps the computer needs to follow are not known, the computer cannot solve the problem. In other words, traditional computing methods can only solve problems that we already understand and know how to solve. ANNs are, in some sense, much more powerful, because they can solve problems that we do not know exactly how to solve. That is why their usage has lately been spreading over a wide range of areas, including virus detection, robot control, intrusion detection systems, pattern recognition (images, fingerprints, noise, etc.), and so on.
ANNs have the ability to adapt, learn, generalize, cluster, or organize data. There are many ANN structures, including the Perceptron, Adaline, Madaline, Kohonen networks, BackPropagation, and many others. BackPropagation is probably the most commonly used, as it is both simple to implement and effective. In this article, we will deal with BackPropagation ANNs.
A BackPropagation ANN consists of one or more layers, each of which is linked to the next. The first layer is the "input layer", which receives the initial input (e.g., the pixels of a letter), and the last is the "output layer", which usually holds the input's identifier (e.g., the name of the input letter). The layers in between are called "hidden layer(s)"; each one propagates the previous layer's outputs forward to the next layer and propagates the following layer's error backward to the previous layer. These two operations are the heart of training a BackPropagation ANN, which follows a few simple steps.
A typical BackPropagation ANN is depicted below. The black nodes (on the extreme left) are the initial inputs. Training such a network involves two phases. In the first phase, the inputs are propagated forward to compute an output for each output node. Each of these outputs is then subtracted from its desired output, yielding an error for each output node. In the second phase, each output error is passed backward and the weights are adjusted. These two phases are repeated until the sum of the squared output errors reaches an acceptable value.
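In symbols, this is the classic delta rule (a standard formulation; the article does not spell it out explicitly). The quantity driven down during training is the sum of squared output errors,

E = 1/2 * Σ (Target_k - output_k)^2    (summed over the output nodes k)

and each weight is nudged along the gradient of E,

Δw = LearningRate * Error * Output

which is exactly the update the implementation below applies to every Weights[i].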
Implementation
The network layers in the figure above are implemented as arrays of structs; the nodes are defined as follows:
[Serializable]
struct PreInput
{
    // Raw input value (e.g., a pixel) and the weights to the input layer.
    public double Value;
    public double[] Weights;
};

[Serializable]
struct Input
{
    public double InputSum;   // weighted sum of the pre-input values
    public double Output;     // activation: F(InputSum)
    public double Error;      // error propagated back from the hidden layer
    public double[] Weights;  // weights to the hidden layer
};

[Serializable]
struct Hidden
{
    public double InputSum;   // weighted sum of the input layer's outputs
    public double Output;     // activation: F(InputSum)
    public double Error;      // error propagated back from the output layer
    public double[] Weights;  // weights to the output layer
};

[Serializable]
struct Output<T> where T : IComparable<T>
{
    public double InputSum;   // weighted sum of the hidden layer's outputs
    public double output;     // activation (lowercase, since "Output" would clash with the struct name)
    public double Error;      // delta: (Target - output) * F'(InputSum)
    public double Target;     // desired output (1.0 for the correct node, else 0.0)
    public T Value;           // the identifier this node represents (e.g., a letter's name)
};
The layers in the figure are declared as follows (for a three-layer network; the pre-input array simply holds the raw pattern):
private PreInput[] PreInputLayer;
private Input[] InputLayer;
private Hidden[] HiddenLayer;
private Output<string>[] OutputLayer;
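The article's listing does not show how these arrays are allocated or how the weights get their initial values. A minimal sketch might look like the following (the InitializeNetwork name and the small random initial weights are assumptions; the original constructor is not shown here):

// Hypothetical setup; the *Num fields are assumed to be set elsewhere.
private void InitializeNetwork()
{
    Random rand = new Random();
    PreInputLayer = new PreInput[PreInputNum];
    InputLayer = new Input[InputNum];
    HiddenLayer = new Hidden[HiddenNum];
    OutputLayer = new Output<string>[OutputNum];
    // OutputLayer[i].Value (each node's identifier) is assumed to be
    // assigned from the training set elsewhere.

    for (int i = 0; i < PreInputNum; i++)
    {
        PreInputLayer[i].Weights = new double[InputNum];
        for (int j = 0; j < InputNum; j++)
            PreInputLayer[i].Weights[j] = 0.01 + 0.01 * rand.NextDouble();
    }
    for (int i = 0; i < InputNum; i++)
    {
        InputLayer[i].Weights = new double[HiddenNum];
        for (int j = 0; j < HiddenNum; j++)
            InputLayer[i].Weights[j] = 0.01 + 0.01 * rand.NextDouble();
    }
    for (int i = 0; i < HiddenNum; i++)
    {
        HiddenLayer[i].Weights = new double[OutputNum];
        for (int j = 0; j < OutputNum; j++)
            HiddenLayer[i].Weights[j] = 0.01 + 0.01 * rand.NextDouble();
    }
}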
Training the network can be summarized as follows:
- Apply input to the network.
- Calculate the output.
- Compare the resulting output with the desired output for the given input; the difference is the error.
- Modify the weights for all neurons using the error.
- Repeat the process until the error reaches an acceptable value (e.g., error < 1%), which means the NN has been trained successfully, or until a maximum iteration count is reached, which means training has failed.
In pseudocode, this looks like:
void TrainNetwork(TrainingSet, MaxError)
{
    while (CurrentError > MaxError)
    {
        foreach (Pattern in TrainingSet)
        {
            ForwardPropagate(Pattern);
            BackPropagate();
        }
    }
}
This is implemented as follows:
public bool Train()
{
    double currentError = 0;
    int currentIteration = 0;
    NeuralEventArgs Args = new NeuralEventArgs();

    do
    {
        // One epoch: a forward/backward pass over every pattern,
        // accumulating the total error.
        currentError = 0;
        foreach (KeyValuePair<T, double[]> p in TrainingSet)
        {
            NeuralNet.ForwardPropagate(p.Value, p.Key);
            NeuralNet.BackPropagate();
            currentError += NeuralNet.GetError();
        }

        currentIteration++;

        // Report progress every 5 iterations; the handler may set
        // Args.Stop to abort training.
        if (IterationChanged != null && currentIteration % 5 == 0)
        {
            Args.CurrentError = currentError;
            Args.CurrentIteration = currentIteration;
            IterationChanged(this, Args);
        }
    } while (currentError > maximumError &&
             currentIteration < maximumIteration && !Args.Stop);

    // Final notification with the closing error/iteration values.
    if (IterationChanged != null)
    {
        Args.CurrentError = currentError;
        Args.CurrentIteration = currentIteration;
        IterationChanged(this, Args);
    }

    // Return false if we ran out of iterations or were stopped.
    if (currentIteration >= maximumIteration || Args.Stop)
        return false;

    return true;
}
The ForwardPropagate(..) and BackPropagate() methods for a three-layer network are shown below:
private void ForwardPropagate(double[] pattern, T output)
{
    int i, j;
    double total;

    // Apply the pattern to the pre-input layer.
    for (i = 0; i < PreInputNum; i++)
    {
        PreInputLayer[i].Value = pattern[i];
    }

    // Pre-input -> input layer.
    for (i = 0; i < InputNum; i++)
    {
        total = 0.0;
        for (j = 0; j < PreInputNum; j++)
        {
            total += PreInputLayer[j].Value * PreInputLayer[j].Weights[i];
        }
        InputLayer[i].InputSum = total;
        InputLayer[i].Output = F(total);
    }

    // Input -> hidden layer.
    for (i = 0; i < HiddenNum; i++)
    {
        total = 0.0;
        for (j = 0; j < InputNum; j++)
        {
            total += InputLayer[j].Output * InputLayer[j].Weights[i];
        }
        HiddenLayer[i].InputSum = total;
        HiddenLayer[i].Output = F(total);
    }

    // Hidden -> output layer, plus the output error terms.
    for (i = 0; i < OutputNum; i++)
    {
        total = 0.0;
        for (j = 0; j < HiddenNum; j++)
        {
            total += HiddenLayer[j].Output * HiddenLayer[j].Weights[i];
        }
        OutputLayer[i].InputSum = total;
        OutputLayer[i].output = F(total);

        // Target is 1.0 for the node matching the desired output, else 0.0.
        OutputLayer[i].Target =
            OutputLayer[i].Value.CompareTo(output) == 0 ? 1.0 : 0.0;

        // Delta rule: (target - actual) * F'(InputSum),
        // where F'(x) = F(x) * (1 - F(x)) for the sigmoid.
        OutputLayer[i].Error = (OutputLayer[i].Target - OutputLayer[i].output) *
                               (OutputLayer[i].output) * (1 - OutputLayer[i].output);
    }
}
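The activation function F(..) is not shown in the article's listing. Given the derivative term output * (1 - output) used in the error calculation above, it is presumably the standard logistic sigmoid, reconstructed here:

// The logistic sigmoid, consistent with the derivative term
// Output * (1 - Output) used in the error formulas above.
private double F(double x)
{
    return 1.0 / (1.0 + Math.Exp(-x));
}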
private void BackPropagate()
{
    int i, j;
    double total;

    // Propagate the output errors back to the hidden layer.
    for (i = 0; i < HiddenNum; i++)
    {
        total = 0.0;
        for (j = 0; j < OutputNum; j++)
        {
            total += HiddenLayer[i].Weights[j] * OutputLayer[j].Error;
        }
        HiddenLayer[i].Error = total;
    }

    // Propagate the hidden errors back to the input layer.
    for (i = 0; i < InputNum; i++)
    {
        total = 0.0;
        for (j = 0; j < HiddenNum; j++)
        {
            total += InputLayer[i].Weights[j] * HiddenLayer[j].Error;
        }
        InputLayer[i].Error = total;
    }

    // Update the pre-input -> input weights.
    for (i = 0; i < InputNum; i++)
    {
        for (j = 0; j < PreInputNum; j++)
        {
            PreInputLayer[j].Weights[i] +=
                LearningRate * InputLayer[i].Error * PreInputLayer[j].Value;
        }
    }

    // Update the input -> hidden weights.
    for (i = 0; i < HiddenNum; i++)
    {
        for (j = 0; j < InputNum; j++)
        {
            InputLayer[j].Weights[i] +=
                LearningRate * HiddenLayer[i].Error * InputLayer[j].Output;
        }
    }

    // Update the hidden -> output weights.
    for (i = 0; i < OutputNum; i++)
    {
        for (j = 0; j < HiddenNum; j++)
        {
            HiddenLayer[j].Weights[i] +=
                LearningRate * OutputLayer[i].Error * HiddenLayer[j].Output;
        }
    }
}
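Once trained, recognition is a single forward pass: present the unknown pattern and pick the output node with the highest activation. The article's actual recognition code is not shown in this excerpt; a minimal sketch (the Recognize name is an assumption) would be:

// Hypothetical recognition sketch: forward-propagate the pattern,
// then return the Value of the strongest output node.
public T Recognize(double[] pattern)
{
    // The target argument is irrelevant here; the Target/Error
    // fields that ForwardPropagate sets are simply ignored.
    ForwardPropagate(pattern, default(T));

    int best = 0;
    for (int i = 1; i < OutputNum; i++)
    {
        if (OutputLayer[i].output > OutputLayer[best].output)
            best = i;
    }
    return OutputLayer[best].Value;
}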
Testing the App
The program trains the network using bitmap images located in a folder. This folder must follow the format below:
- There must be one (input) folder that contains the input images [*.bmp].
- Each image's file name is the target (or output) value for the network (the pixel values of the image are, of course, the inputs).
Since testing the classes requires training the network first, a folder in this format must exist. The "PATTERNS" and "ICONS" folders [depicted below] in the Debug folder fit this format, as shown in the loading sketch that follows.
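A sketch of how such a folder could be turned into a training set is shown below. The LoadPatterns helper name, the 16x16 down-sampling size, and the brightness thresholding are all assumptions; the article's actual loading code is not shown in this excerpt.

using System.Collections.Generic;
using System.Drawing;   // requires a reference to System.Drawing
using System.IO;

// Hypothetical loader: each *.bmp file name (without extension) becomes
// the target value; its pixels become the input vector.
static Dictionary<string, double[]> LoadPatterns(string folder)
{
    Dictionary<string, double[]> trainingSet = new Dictionary<string, double[]>();
    foreach (string file in Directory.GetFiles(folder, "*.bmp"))
    {
        using (Bitmap bmp = new Bitmap(file))
        {
            // Down-sample every image to an assumed 16x16 grid and
            // threshold to 1.0 (dark) / 0.0 (light).
            double[] pattern = new double[16 * 16];
            for (int y = 0; y < 16; y++)
            {
                for (int x = 0; x < 16; x++)
                {
                    Color c = bmp.GetPixel(x * bmp.Width / 16, y * bmp.Height / 16);
                    pattern[y * 16 + x] = c.GetBrightness() < 0.5 ? 1.0 : 0.0;
                }
            }
            // The file name (without extension) is the target value.
            trainingSet[Path.GetFileNameWithoutExtension(file)] = pattern;
        }
    }
    return trainingSet;
}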
History
- 30th September, 2007: Simplified the app
- 24th June, 2007: Initial Release