
Comparing Neural Networks in Neuroph, Encog and JOONE

Highlights the differences in how you create an XOR network in Neuroph, Encog and JOONE

Introduction

In this article, I will show you how to create a feedforward neural network that recognizes the XOR operator. There are many examples on the Internet that show how to do this. However, I am going to do it three times, using three different neural network frameworks. This lets you get a feel for the three major Java open source neural network frameworks and how each one approaches the problem, which may help you decide which framework is best suited to your project.

First, let me explain the XOR operation that I will be teaching the neural network to recognize. The XOR operation is essentially the “Hello World” of neural networks. Neural networks accept input and produce output, processing data through layers of neurons. A feedforward neural network has an input layer, zero or more hidden layers, and an output layer. The input layer accepts an array of floating point numbers, and the output layer produces an array of floating point numbers. These two arrays do not need to be the same size; the number of neurons in the input and output layers determines the size of each.
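
To make the idea of layers of neurons concrete, here is a small, framework-free sketch of computing one feedforward layer by hand. It is purely illustrative and not taken from any of the three libraries; the weights, biases, and tanh activation are made-up values chosen to resemble the networks built later in this article.

Java
// Illustrative only: one layer of a feedforward network computed by hand.
// Each neuron takes a weighted sum of the inputs, adds a bias, and passes
// the result through an activation function (tanh here).
public class FeedforwardLayerSketch {
	public static double[] computeLayer(double[] input, double[][] weights, double[] bias) {
		double[] output = new double[weights.length];
		for (int neuron = 0; neuron < weights.length; neuron++) {
			double sum = bias[neuron];
			for (int i = 0; i < input.length; i++) {
				sum += weights[neuron][i] * input[i];
			}
			output[neuron] = Math.tanh(sum);
		}
		return output;
	}

	public static void main(String[] args) {
		// Hypothetical weights for a 2-input, 3-neuron hidden layer.
		double[][] hiddenWeights = { { 0.5, -0.4 }, { 0.9, 0.2 }, { -0.7, 0.6 } };
		double[] hiddenBias = { 0.1, -0.2, 0.05 };
		double[] hidden = computeLayer(new double[] { 1.0, 0.0 }, hiddenWeights, hiddenBias);
		System.out.println(java.util.Arrays.toString(hidden));
	}
}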

You can then create training data. For the XOR operator, you would have four training items. The training items map the input to the ideal output. For XOR, this training data would be as follows:

[0,0] -> [0]
[1,0] -> [1]
[0,1] -> [1]
[1,1] -> [0]

These values correspond to the mathematical XOR operator. The neural network will train until the output from the neural network is close to the ideal output. The training is broken up into iterations. Each iteration will get the neural network output closer to the ideal output. The difference between the actual output from the neural network and the ideal output is the error rate of the neural network. There are different ways to train a neural network, but they all usually involve going through iterations, until the error decreases to an acceptable level.
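
To make the notion of an error rate concrete, the sketch below computes a mean squared error between the network's actual output and the ideal output. This is just one common way to measure error; each framework has its own internal error calculation, so treat this as a conceptual illustration only.

Java
// Illustrative only: mean squared error between actual and ideal outputs.
public static double meanSquaredError(double[][] actual, double[][] ideal) {
	double sum = 0;
	int count = 0;
	for (int row = 0; row < actual.length; row++) {
		for (int col = 0; col < actual[row].length; col++) {
			double diff = ideal[row][col] - actual[row][col];
			sum += diff * diff;
			count++;
		}
	}
	return sum / count;
}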

This article will show how to implement the XOR operator in three different Java open source neural network frameworks. The three frameworks examined in this article are:

  • Encog
  • Neuroph
  • JOONE

Creating the XOR Operator in Encog

Encog is licensed under the GNU Lesser General Public License (LGPL). Encog supports a variety of training techniques. In this article, we will make use of a training technique called Levenberg Marquardt, or LMA. LMA is one of the most advanced training techniques available for neural networks. It does not work well with all training sets, but it can learn the XOR operator in a fraction of the time other training techniques require. LMA requires that the neural network have a single output neuron, which is fine for the XOR operator. Also, as the size of the training set increases, the effectiveness of LMA decreases. For larger training sets, you might want to consider Resilient Propagation or Scaled Conjugate Gradient.

First, we set up the training data. The XOR data is stored in two arrays.

Java
public static double XOR_INPUT[][] = {
	{ 0.0, 0.0 },
	{ 1.0, 0.0 },
	{ 0.0, 1.0 },
	{ 1.0, 1.0 } };

public static double XOR_IDEAL[][] = {
	{ 0.0 },
	{ 1.0 },
	{ 1.0 },
	{ 0.0 } };

Next, we create a BasicNetwork object. This object will accept the layers that make up the neural network.

Java
BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(new ActivationTANH(),true,2));
network.addLayer(new BasicLayer(new ActivationTANH(),true,3));
network.addLayer(new BasicLayer(new ActivationTANH(),true,1));

As you can see, three layers were added. The first is the input layer. It has two neurons and uses a TANH activation function. The true value means that the layer has a bias neuron.

The second layer is a hidden layer with three neurons. These neurons help the network recognize the signals it is sent. The output layer has a single output neuron.

Encog can support a variety of neural network architectures. These are defined as logic objects. This neural network is a feedforward neural network, so we will use the FeedforwardLogic class.

Java
network.setLogic(new FeedforwardLogic());
network.getStructure().finalizeStructure();
network.reset();

Finally, the neural network structure must be created, and the neural network reset. The process of resetting the neural network randomizes the weights.

We now create a training set. The training set is used by Encog to train for the XOR operator.

Java
NeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);

All Encog training is done through the Train interface. Here we use the LevenbergMarquardtTraining class.

Java
final Train train = new LevenbergMarquardtTraining(network, trainingSet);

Levenberg Marquardt training sometimes fails to converge from the random starting weights. This is especially true for a network as small as this one. We will use a reset strategy that re-randomizes the weights if there is less than a 1% improvement over 5 iterations.

Java
train.addStrategy(new RequiredImprovementStrategy(5));
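
Conceptually, a strategy like this watches the error over a window of iterations and re-randomizes the weights when progress stalls. The following is a plain-Java illustration of that idea using the train and network objects from this example; it is not Encog's actual implementation, only a sketch of the 1% threshold and five-iteration window described above.

Java
// Illustrative only: reset the network when the error improves by less than 1%
// over 5 consecutive iterations (this logic would sit inside the training loop).
double bestError = Double.MAX_VALUE;
int stalledIterations = 0;

// ... after each call to train.iteration():
double error = train.getError();
if (error < bestError * 0.99) {        // at least a 1% improvement
	bestError = error;
	stalledIterations = 0;
} else if (++stalledIterations >= 5) { // stalled for 5 iterations
	network.reset();                   // re-randomize the weights
	bestError = Double.MAX_VALUE;
	stalledIterations = 0;
}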

We are now ready to train through the epochs, or iterations. We begin with epoch 1.

Java
int epoch = 1;

We will loop through training iterations until the error falls below 1%.

Java
do {
  train.iteration();
  System.out.println("Epoch #" + epoch + " Error:" + train.getError());
  epoch++;
} while(train.getError() > 0.01);

We will now test the output of the neural network.

Java
System.out.println("Neural Network Results:");
for(NeuralDataPair pair: trainingSet ) {
  final NeuralData output = network.compute(pair.getInput());
  System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
   + ", actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
}

The same training data is used to evaluate the neural network.

The output from the Encog XOR example is shown here:

Epoch #1 Error:0.5455122488506147
Epoch #2 Error:0.5052657193615671
Epoch #3 Error:0.4807114538448516
Epoch #4 Error:0.43616509724573044
Epoch #5 Error:0.20566912505617155
Epoch #6 Error:0.17638897570315684
Epoch #7 Error:0.028373668231531972
Epoch #8 Error:0.026258952179473653
Epoch #9 Error:0.023244078646272336
Epoch #10 Error:0.021221881866343273
Epoch #11 Error:0.019887606329834745
Epoch #12 Error:0.018885747329580156
Epoch #13 Error:0.018047468600671735
Epoch #14 Error:0.017288643933811593
Epoch #15 Error:0.016572218727452955
Epoch #16 Error:0.01595505187120417
Epoch #17 Error:0.010300974511088516
Epoch #18 Error:0.008141364550377145
Neural Network Results:
0.0,0.0, actual=0.020564759593838522,ideal=0.0
1.0,0.0, actual=0.9607289095427742,ideal=1.0
0.0,1.0, actual=0.8966620525621667,ideal=1.0
1.0,1.0, actual=0.06032304565618274,ideal=0.0

As you can see, it took Encog 18 iterations with Levenberg Marquardt training. Not all training techniques will use as few iterations as Levenberg Marquardt. We will see this in the next section.

Creating the XOR Operator in Neuroph

Neuroph is another neural network framework. It is licensed under the Apache License, and it is currently in discussions to be merged into the Apache Machine Learning project. For Neuroph, we will use the automatic variant of the backpropagation algorithm that it provides. It is not nearly as advanced as Levenberg Marquardt, and as a result it will take considerably more training iterations.

We begin by creating a training set.

Java
TrainingSet trainingSet = new TrainingSet(2, 1);
trainingSet.addElement(new SupervisedTrainingElement
		(new double[]{0, 0}, new double[]{0}));
trainingSet.addElement(new SupervisedTrainingElement
		(new double[]{0, 1}, new double[]{1}));
trainingSet.addElement(new SupervisedTrainingElement
		(new double[]{1, 0}, new double[]{1}));
trainingSet.addElement(new SupervisedTrainingElement
		(new double[]{1, 1}, new double[]{0}));

Next, we create the neural network. Neuroph does this in a single line.

Java
MultiLayerPerceptron network =
	new MultiLayerPerceptron(TransferFunctionType.TANH, 2, 3, 1);

This creates the same neural network that was created with Encog: a TANH activation function, two input neurons, three hidden neurons, and one output neuron.

For training, we will use Dynamic Backpropagation.

Java
DynamicBackPropagation train = new DynamicBackPropagation();
train.setNeuralNetwork(network);
network.setLearningRule(train);

We now loop through training iterations until the error falls below 1%.

Java
int epoch = 1;
do
{
	train.doOneLearningIteration(trainingSet);
	System.out.println("Epoch " + epoch + ", error=" + train.getTotalNetworkError());
	epoch++;
} while(train.getTotalNetworkError() > 0.01);

Once we are done, we display the trained network’s results.

Java
System.out.println("Neural Network Results:");
for(TrainingElement element : trainingSet.trainingElements()) {
	network.setInput(element.getInput());
	network.calculate();
	Vector<Double> output = network.getOutput();
	SupervisedTrainingElement ste = (SupervisedTrainingElement)element;

	System.out.println(element.getInput().get(0) + ","
		+ element.getInput().get(1)
		+ ", actual=" + output.get(0) + ",ideal=" + ste.getDesiredOutput().get(0));
}

Now the network is done training. The output was as follows:

Epoch 1, error=0.8077190599388583
Epoch 2, error=0.6743673707323136
Epoch 3, error=0.6059014056204383
Epoch 4, error=0.5701909436997877
Epoch 5, error=0.5508436432846441
Epoch 6, error=0.5399519500630455
Epoch 7, error=0.5336030007276679
Epoch 8, error=0.5297843855391863
Epoch 9, error=0.5274201294309249
Epoch 10, error=0.5259144756152534
...
Epoch 611, error=0.010100940088620529
Epoch 612, error=0.010028500485140078
Epoch 613, error=0.009956945579398092
Neural Network Results:
0.0,0.0, actual=0.014203702898690052,ideal=0.0
0.0,1.0, actual=0.8971874832025113,ideal=1.0
1.0,0.0, actual=0.909728369858769,ideal=1.0
1.0,1.0, actual=0.01578509009382128,ideal=0.0

As you can see, it took 613 iterations to get the error below 1%. Clearly, backpropagation is not nearly as robust as Levenberg Marquardt. Automatic backpropagation is currently the most advanced training offered by Neuroph. This is one of the areas where Encog really excels: Encog offers advanced training methods such as Resilient Propagation, Simulated Annealing, Genetic Training, Levenberg Marquardt, and Scaled Conjugate Gradient.
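
Switching training methods in Encog is typically a one-line change. As a hedged sketch, assuming Encog's ResilientPropagation trainer takes the network and training set in its constructor just as LevenbergMarquardtTraining did above:

Java
// Sketch: swap the LMA trainer for Resilient Propagation (constructor assumed
// to mirror LevenbergMarquardtTraining).
final Train train = new ResilientPropagation(network, trainingSet);
// The training loop shown earlier stays exactly the same.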

However, the automatic additions that Neuroph makes to backpropagation do give it an advantage over a neural network framework that only uses standard backpropagation.
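
The core idea behind this kind of “dynamic” backpropagation is straightforward: increase the learning rate while the error keeps falling, and cut it back when the error rises. The following is only a generic plain-Java illustration of that heuristic, not Neuroph's actual implementation; the adjustment factors are hypothetical values.

Java
// Illustrative only: adjust the learning rate based on whether the error improved.
double learningRate = 0.7;
double previousError = Double.MAX_VALUE;

// ... inside the training loop, after each iteration:
double error = train.getTotalNetworkError();
if (error < previousError) {
	learningRate *= 1.05;   // error fell: allow bigger steps (factor is hypothetical)
} else {
	learningRate *= 0.5;    // error rose: back off (factor is hypothetical)
}
previousError = error;
// A framework would then use the adjusted rate for the next weight updates.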

Creating the XOR Operator in JOONE

JOONE is the oldest neural network framework that we are examining. Encog and Neuroph were both started in 2008, while JOONE dates back to 2001. JOONE is no longer actively supported, though a number of systems were implemented with it. JOONE makes use of backpropagation. It has classes that claim to support resilient propagation, but I was unable to get them to work. JOONE is also known for being “buggy”, and because it is no longer an active project, this can make things difficult.

The two frameworks that we have examined so far have fairly simple APIs that make it easy for the programmer to get a neural network created. Not so with JOONE. JOONE’s API is a beast. It does not really hide much, and it is very difficult to get working. Of all three examples, this one took me the most time. It is also the longest.

As with Encog, we define arrays to hold the XOR input and ideal output.

Java
public static double XOR_INPUT[][] = { { 0.0, 0.0 }, { 1.0, 0.0 },
{ 0.0, 1.0 }, { 1.0, 1.0 } };
public static double XOR_IDEAL[][] = { { 0.0 }, { 1.0 }, { 1.0 }, { 0.0 } };

We will create a linear layer for the input.

Java
LinearLayer	input = new LinearLayer();
SigmoidLayer	hidden = new SigmoidLayer();
SigmoidLayer	output = new SigmoidLayer();

The hidden and output layers are both sigmoid. Next the layers are labeled.

Java
input.setLayerName("input");
hidden.setLayerName("hidden");
output.setLayerName("output");

We set the neuron counts on each of the layers.

Java
input.setRows(2);
hidden.setRows(3);
output.setRows(1);

Synapses must be created to link the layers together.

Java
FullSynapse synapse_IH = new FullSynapse();	/* input -> hidden connection */
FullSynapse synapse_HO = new FullSynapse();	/* hidden -> output connection */

We should label the synapses.

Java
synapse_IH.setName("IH");
synapse_HO.setName("HO");

Now the synapses are actually connected to the layers.

Java
input.addOutputSynapse(synapse_IH);
hidden.addInputSynapse(synapse_IH);

The other ends of the synapses are connected.

Java
hidden.addOutputSynapse(synapse_HO);
output.addInputSynapse(synapse_HO);

Here is one feature I really dislike about JOONE. In both Neuroph and Encog, you create a neural network and then train it. With JOONE, you must physically change the network to switch between training it and using it. Here we are setting up to train. We will use a memory synapse to feed the arrays of XOR into the network.

Java
MemoryInputSynapse  inputStream = new MemoryInputSynapse();

We must specify the array to use and which columns to use.

Java
inputStream.setInputArray(XOR_INPUT);
inputStream.setAdvancedColumnSelector("1,2");

We add the input synapse to the input layer.

Java
input.addInputSynapse(inputStream);

We create a teaching synapse to train the network.

Java
TeachingSynapse trainer = new TeachingSynapse();
MemoryInputSynapse samples = new MemoryInputSynapse();

We must also specify the ideal output.

Java
samples.setInputArray(XOR_IDEAL);
samples.setAdvancedColumnSelector("1");
trainer.setDesired(samples);

We connect the trainer.

Java
output.addOutputSynapse(trainer);

Now we create a neural network to hold this entire structure.

Java
this.nnet = new NeuralNet();

The layers are now added to the network.

Java
nnet.addLayer(input, NeuralNet.INPUT_LAYER);
nnet.addLayer(hidden, NeuralNet.HIDDEN_LAYER);
nnet.addLayer(output, NeuralNet.OUTPUT_LAYER);

We obtain the network's monitor, which controls the training process.

Java
this.monitor = nnet.getMonitor();
monitor.setTrainingPatterns(4);	// # of rows (patterns) contained in the input file
monitor.setTotCicles(1000);	// How many times the net must be trained on the input patterns
monitor.setLearningRate(0.7);
monitor.setMomentum(0.3);
monitor.setLearning(true);	// The net must be trained
monitor.setSingleThreadMode(true);  // Set to false for multi-thread mode
/* The application registers itself as monitor's listener so it can receive
the notifications of termination from the net. */
monitor.addNeuralNetListener(this);
}
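
One piece the listing above does not show is where the progress output below comes from. After nnet.go() is called to start training, JOONE reports progress through the NeuralNetListener registered on the monitor. The sketch below is modeled on JOONE's own XOR sample as I recall it; treat the Monitor calls (getCurrentCicle(), getGlobalError()) and the exact callback set as assumptions to verify against your JOONE version.

Java
// Sketch of the listener callbacks (this class implements NeuralNetListener).
// Method and Monitor call names follow JOONE's sample code; verify them.
public void cicleTerminated(NeuralNetEvent e) {
	Monitor mon = (Monitor) e.getSource();
	// Print the cycle counter and the current global error.
	System.out.println("Epoch: " + mon.getCurrentCicle() + " Error:" + mon.getGlobalError());
}

public void netStopped(NeuralNetEvent e) {
	System.out.println("Training finished");
}

public void netStarted(NeuralNetEvent e) { }
public void errorChanged(NeuralNetEvent e) { }
public void netStoppedError(NeuralNetEvent e, String error) { }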

The following is the output from the JOONE example.

...
Epoch: 4991 Error:0.01818486701740271
Epoch: 4992 Error:0.018182516392244434
Epoch: 4993 Error:0.01818016665209608
Epoch: 4994 Error:0.018177817796408376
Epoch: 4995 Error:0.018175469824632386
Epoch: 4996 Error:0.01817312273621985
Epoch: 4997 Error:0.01817077653062278
Epoch: 4998 Error:0.018168431207293823
Epoch: 4999 Error:0.018166086765686006

As you can see, it took almost 5000 iterations, and JOONE has not yet trained as well as Neuroph. This is because it is using regular backpropagation. The automatic propagation used by Neuroph varies the learning rate and threshold to optimize the learning process. The difference is abundantly clear here.
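
For completeness: the example above only ever trains the network, and this is where JOONE's need to physically reconfigure the network shows up. Based on JOONE's sample code as I remember it, switching the trained network to recall mode looks roughly like the sketch below; the MemoryOutputSynapse class and its getNextPattern() method are assumptions you should verify against your JOONE version.

Java
// Rough sketch: run the trained network in recall mode (no learning).
MemoryOutputSynapse memOut = new MemoryOutputSynapse();
output.addOutputSynapse(memOut);       // read results from the output layer

monitor.setTrainingPatterns(4);        // the four XOR patterns
monitor.setTotCicles(1);               // one pass over the data
monitor.setLearning(false);            // recall only, no weight updates
nnet.go();

// One output pattern per input row; getNextPattern() is assumed from samples.
for (int i = 0; i < 4; i++) {
	double[] pattern = memOut.getNextPattern();
	System.out.println("Output for pattern " + i + ": " + pattern[0]);
}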

Conclusions

In this article, I showed you three different approaches to the same problem. In my opinion, Encog and Neuroph are easy choices over JOONE. JOONE is no longer really supported, and although JOONE can execute an iteration much faster than Neuroph, Neuroph's automatic training makes it the superior choice. JOONE is also very hard to use.

Encog has many more features than Neuroph and supports very advanced training methods. It took Encog 18 iterations, Neuroph 613, and JOONE over 5000. I find the internal code of Neuroph easier to follow than Encog's, though I find the APIs of the two to be comparable. Encog is built for speed, which makes its internals more complex. Encog can even take advantage of your GPU for additional training speed.

In my next article, I will benchmark all three and show timings in various situations.

History

  • 3rd June, 2010: Initial post 

License

This article, along with any associated source code and files, is licensed under The GNU Lesser General Public License (LGPLv3)


Written By
Rutgers University, United States

Hello, I am a student at Rutgers University. I am in computer science and am learning about machine learning and AI.
