Neural Dot Net Pt 3 The Adaline Network

A neural network library in C#.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
	<HEAD>
		<TITLE>Neural Dot Net Pt 3 The Adaline Network</TITLE>
	</HEAD>
	<BODY BGCOLOR="#ffffff">
		<H2 align="center"><U>Contents</U></H2>
		<OL>
			<LI>
				<H2>Introduction</H2>
			</LI>
			<LI>
				<H2>The Basic Classes</H2>
			</LI>
			<LI>
				<H2>The Adaline Network</H2>
			</LI>
			<LI>
				<H2>The Neural Net Tester</H2>
			</LI>
			<LI>
				<H2>The Adaline Word Network</H2>
			</LI>
			<LI>
				<H2>Changes To The Basic Classes</H2>
			</LI>
			<LI>
				<H2>The Back Propagation Network</H2>
			</LI>
			<LI>
				<H2>The Back Propagation Word Network</H2>
			</LI>
			<LI>
				<H2>The Self Organizing Network</H2>
			</LI>
			<LI>
				<H2>The Self Organizing Word Network</H2>
			</LI>
		</OL>
		<H2 align="center"><U><FONT size="7">Neural .Net pt 1</FONT></U></H2>
		<H2 align="center"><U><FONT size="7">Introduction</FONT></U></H2>
		<P>
		I don't know about everyone else, but I've always had a fascination with neural 
		nets and artificial intelligence in general; a fascination that I think was 
		brought about by the fact that I knew absolutely nothing about it, and it has 
		always been one of those things that one day I would get round to looking at to 
		see how they worked. This set of articles is the result of that fascination, one 
		that has led me on to some quite surprising discoveries and into some things 
		that just plain didn't make sense, like the idea of a single neuron neural 
		net. Please, call it a function, wrap it in a class, but don't call it a 
		single neuron neural net. It goes against everything that I think a neural 
		net is supposed to be, and don't these people ever read the books they write? I 
		can honestly say that I have read books on neural networks and got to the end 
		of the book none the wiser, because the theory got in the way. There's a lot of 
		theory, some of it interesting, a lot of it up in the stratosphere where only 
		serious mathematicians dare to venture. But some books on neural networks seem 
		to use theory to bludgeon any chance you have of actually learning anything 
		from the book right out of your head. There is also the wrong book at the 
		wrong time perspective to take into account; sometimes you just aren't in the 
		right state of mind to get the best out of a particular book. My biggest 
		example of this is the James Joyce book Ulysses. I have tried to read that book 
		three times in my life. The first time I got a few chapters in and stopped 
		because it bored me to tears. The second time I read it all the way through and 
		found it quite enjoyable. The third time I got a few chapters in and thought he 
		was spouting a load of crap. So is Ulysses a good book? Is it worth reading? I 
		would have to say that it is a difficult book, and if you're not 
		personally in the right place to get the most out of it then you won't. And this 
		applies to any book on neural networks, as none of them are easy. The trick is 
		not to be put off by the initial confusion and uncertainty but to see it as a 
		challenge that is going to take some time and study before it starts 
		falling into place.
		<P>
		Finally, in desperation, I turned to the code; being a programmer I could surely 
		understand the code. Being only qualified in very basic maths, the bit of 
		the page that read squiggle x over different squiggle z times 
		two actually meant take this value at point i in the array from 
		that value at point n in the other array and times it by two. It made sense. I 
		didn't know why they were doing this, but it was a start, and in my time I 
		have had to maintain code that didn't make anywhere near as much sense as that 
		when I first looked at it.
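		<P>
		For illustration, that squiggle-laden line translates into something like the 
		following, where the array names are made up purely for the example:
		</P>
		<PRE>
/// hypothetical arrays standing in for the symbols in the formula
double dResult = ( dArrayOne[ i ] - dArrayTwo[ n ] ) * 2.0;
</PRE>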
		<P>
		It should be noted here that the original idea for this project was sequential 
		releases, each containing two working networks. This has been abandoned, with the 
		release containing all six networks that were originally envisaged. This 
		document will occasionally make references to this original intention. Also, 
		some of the class diagrams have marginal differences between the code and the 
		diagram. This is due to the tandem development of the document and the code. 
		Where the changes to the code made a difference to how the networks run, I have 
		attempted to keep the diagrams up to date.
		<P>
		The project was developed with Developer Studio 2002 and Developer Studio 2003, 
		using the .Net runtimes version 1.0 and 1.1. As the project was started on 
		version 1.0 of the .Net framework, I see no reason why it should not compile and 
		run on that framework, although users of Developer Studio 2002 will have to 
		rebuild the project files.
		<P>The networks themselves are set up to work at their best using the default 
			values provided by the program, with options provided so that they can be 
			played around with to see what happens. In many cases changing the default 
			options will obviously change the way the network behaves and can even prevent 
			some networks from working altogether. As no attempt is made to save the 
			optional changes to the networks, should you break a network, everything should 
			work again if you restart the program.
			<H2><U>What Is A Neural Net?</U></H2>
		<P>
		Don't you just hate those innocent little questions? I might as well just ask 
		"why?". I suppose the traditional idea is that a neural net is like a miniature 
		computer brain, designed so that the individual cells can all 
		communicate with each other and come together to solve a bigger problem. But 
		just think about that from a coding perspective. How do you write a program 
		that is so completely generic that it is capable of solving every problem you 
		can throw at it, with all the individual little pieces or neurons, all 
		identical, coming together to do whatever it is you may decide to throw at them? 
		Sounds really hard to me. What tends to happen, though, is that you get two different 
		types of networks: those that concentrate on trying to be biologically accurate 
		and mimic the brain's neurons or nodes, and those that concentrate on the 
		specific task in hand. I suspect that on balance most of them tend to be a 
		compromise between accuracy and functionality. I mean, it's fine to have 
		something that perfectly models the brain's functionality, but if you can't get 
		it to do anything then it's only of use for research purposes, and these days 
		people tend to want results more than they want research. Which means pure 
		research remains in the hands of the lucky few.
		<P>
		In chapter one of his book "The Essence of Neural Networks", Robert Callan gives 
		a brief set of rules that comprise a neural network. I give these here for 
		two reasons: one, they are probably the most precise definition I have seen for 
		a neural network, and two, I can understand what they mean.
		<P><B>1. A Set Of Simple Processing Units </B>
		<P>
		A neural network is made up of neurons or nodes. These are meant to be simple 
		processing units, though if you've seen the maths behind some of them you may 
		well be wondering, simple to whom? But in essence the nodes ( I'm going to stick 
		with calling them nodes to save confusion, as they are called neurons, nodes 
		and who knows what else in the literature, and it's all done interchangeably; 
		so from now on they are nodes and nothing else, and anything else 
		is referring to something completely different ) are, from a 
		programmer's perspective, a class that carries out a certain task or objective. 
		As with any class, that task or objective is defined within the code, which is 
		defined by just what it is you want the class to do in the first place. For our 
		purposes, and within the accompanying code, a Neuron will be a collection of 
		nodes, comprising in its simplest form four nodes: two for the input, one for 
		the bias node and one for the node that does the network's work, i.e. in the example code 
		provided in part three the Neuron will contain an Adaline node.
		<P><B>2. A Pattern Of Connectivity </B>
		<P>
		This is the way that the network is built and the way that the data flows 
		through the network, which is also how networks get their names. For example, 
		the Adaline Network that we will be dealing with first contains two input 
		nodes, a bias node and an Adaline node. The Adaline node is the node that will 
		do all the work, by calling the run and the learn functions. There is no set 
		limit to the number of nodes that you can have within each neuron and no 
		restriction on the way that the data flows. The data originates in the 
		input nodes, but once the networks get larger the data will be going 
		through one node and can be passed forwards or backwards to another node for 
		that node to process as it sees fit.
		<P><B>3. A Rule For Propagating Signals Through The Network </B>
		<P>
		This is merely common sense. Whatever type of network it is that we are working 
		with, there are certain results that we want to achieve, and these are 
		only going to be achieved by processing the data that the network is dealing 
		with in a specific way. That way can be to pass the data forward to an 
		output node, or back through the network for further processing, or 
		alternatively even forward through the network for further processing. Either 
		way, as with any other computer program, there are a set number of steps 
		that we want to perform and usually only one or two ways that we can go 
		about getting the correct result at the end.
		<P><B>4. A Rule For Combining Input Signals </B>
		<P>
		This is basically the action that we are going to carry out on the data coming 
		into the neural network. At this point it doesn't really matter that we know 
		what the answer will be, just that we know what we want to do with the 
		information in the first place. This could be a mathematical function or the 
		comparison of strings or objects.
		<P><B>5. A Rule For Calculating An Output Signal </B>
		<P>
		This isn't necessarily the final output of the program, but the output of that 
		section of the code. If you think of it in terms of a function, then the output 
		value of a network node is the return value of the function. This is normally a 
		numerical value, but there is absolutely no reason why it needs to be; for 
		example, the Adaline network could quite easily return a Boolean true or false, 
		and that would in itself have no bearing on whether the node worked correctly or 
		not.
		<P><B>6. A Learning Rule To Adapt The Weights </B>
		<P>A weight is a value given to the connection or link that helps in the learning 
			process. This is updated on the fly by the learn function, and naturally there 
			should be a rule behind the way that this is done. Thinking about it though, 
			seeing as the final goal of the network is to learn to generate the correct 
			answers to the training data that is given to it, it seems that a perfectly 
			good rule for updating the weights would be to just randomly assign a value 
			until something works. In theory it should just take the network longer to work 
			than it would were an explicit rule programmed in.
			<H2><U>How Does The Network Learn?</U></H2>
		<P>The simple answer to this question is by trial and error, but as usual nothing is 
			that simple. In order to look at this question, I'm going to talk about the 
			Adaline network that you will see in section three of this series of articles. 
			The following is a section from the output of the standard run of the Adaline 
			1 program.
		</P>
		<PRE>
Iteration number 172 produced 6 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 173 produced 5 Good values out of 250 

Learn called at number 6 Pattern value = 1 Neuron value = -1 

Iteration number 174 produced 6 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 175 produced 5 Good values out of 250 

Learn called at number 6 Pattern value = 1 Neuron value = -1 

Iteration number 176 produced 6 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 177 produced 5 Good values out of 250 

Learn called at number 6 Pattern value = 1 Neuron value = -1 

Iteration number 178 produced 6 Good values out of 250 

Learn called at number 7 Pattern value = 1 Neuron value = -1 

Iteration number 179 produced 7 Good values out of 250 

Learn called at number 6 Pattern value = 1 Neuron value = -1 

Iteration number 180 produced 6 Good values out of 250 

Learn called at number 32 Pattern value = 1 Neuron value = -1 

Iteration number 181 produced 32 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 182 produced 5 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 183 produced 5 Good values out of 250 

Learn called at number 32 Pattern value = 1 Neuron value = -1 

Iteration number 184 produced 32 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 185 produced 5 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 186 produced 5 Good values out of 250 

Learn called at number 32 Pattern value = 1 Neuron value = -1 

Iteration number 187 produced 32 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 188 produced 5 Good values out of 250 

Learn called at number 5 Pattern value = 1 Neuron value = -1 

Iteration number 189 produced 5 Good values out of 250 

Iteration number 190 produced 250 Good values out of 250 

</PRE>
		<P>As you can see from this example, the network took 190 attempts before it 
			got the answers all right. The Adaline program basically compares two values 
			that are within the range between -1 and 1. These values are randomly generated 
			into a file that can be created by the demonstration program that 
			comes with the third article in the series. Basically, what the program works 
			out is whether the first number given to it is less than the second number; if 
			it is, then the program will output a 1, and if it isn't, the output will be 
			-1. The technical way of stating this would be that the network sums up the 
			inputs and the weights of each node and then runs the summation through the 
			transfer function, which returns the output for the node. The training data, 
			when generated, also includes the right answer to the problem, and the network 
			tests itself to see if it got the answer right or not. In this case the network 
			got, for the most part, about 6 correct answers with each run, yet on run 190 
			it got everything right. The program is written so that it keeps on going 
			through the data until it gets them all correct, which it did on attempt 190. So 
			what is going on here? Well, when the program calls the run function in the 
			Adaline 1 program, the run function is basically
		</P>
		<PRE>
/// sum the weighted input values from each link coming into this node
for( int i=0; i&lt;nCount; i++ )
{
	dTotal += ( ( BasicLink )this.InputLinks[ i ] ).WeightedInputValue( nID );
}

/// pass the summation through the transfer function to get the node's output
this.NodeValues[ nID ] = TransferFunction( dTotal ); 
</PRE>
		<P>This code cycles through the links to the node, gets the weighted input value 
			for each, and adds it to the running total in the dTotal variable. For now, all 
			that you need to know is that the nID value is equal to the NodeValue constant 
			stored in the Values class ( == 0 ) and that it is getting the first value 
			in the input node that is being referred to in the Input Links array by i. The 
			important bit of the weighted input value looks like this
		</P>
		<PRE>
dReturn = bnInputNode.GetValue( nID ) * ( ( double )arrayLinkValues[ Values.Weight ] );		
</PRE>
		<P>which multiplies the value in the node by the weight value of the link. The 
			weight value of the link is actually the first value in the link, which is set 
			in the Adaline Link constructor to
		</P>
		<PRE>
arrayLinkValues[ Values.Weight ] = Values.Random( -1, 1 );		
</PRE>
		<P>which, as you can see, is a random number between -1 and 1, which means that the 
			network's first stab at getting the correct answer is nothing more than a guess. 
			But as you can see in the loop above, the run function cycles through and uses 
			the weight value for its calculations, adding the totals to dTotal. The dTotal 
			variable is then passed to the transfer function, which is another piece of 
			simple code.
		</P>
		<PRE>
/// threshold transfer function: -1 for negative input, 1 otherwise
if( dValue &lt; 0 )
	return -1.0;

return 1.0;
</PRE>
		<P>which returns -1 if the value is less than 0, and 1 otherwise. So presume for a 
			moment that we have a value in dTotal and that, according to the training set, 
			the answer is -1 but the network returns a 1. The answer is wrong, and the 
			program would print out one of the lines above, saying that it had got a 
			certain number right up until this point but now had to call learn, because 
			this one was wrong. The learn function uses the delta rule, or Widrow-Hoff 
			rule, which in programming terms is this
		</P>
		<PRE>
/// the node's error is its current output value times -2.0
NodeErrors[ Values.NodeError ] = ( ( double )NodeValues[ Values.NodeValue ] )*-2.0;
BasicLink link;
int nCount = InputLinks.Count;
double dDelta;

for( int i=0; i&lt;nCount; i++ )
{
	link = ( BasicLink )InputLinks[ i ];
	/// delta rule: learning rate * input value * error
	dDelta = ( ( double )NodeValues[ Values.LearningRate ] ) 
		* ( ( double )link.InputValue( Values.NodeValue ) ) 
		* ( ( double )NodeErrors[ Values.NodeError ] );
	link.UpdateWeight( dDelta );
}		
</PRE>
		<P>First of all, the node's error value is set equal to the node value * -2.0, and 
			then the code cycles through each input link to the current node and updates 
			the weight value for the link by the dDelta value. dDelta is the result of 
			multiplying the node's learning rate, which is set in the creation of the 
			Adaline node to 0.45 ( though you can feel free to change this value to see how 
			it affects the learning of the program ), by the link's input value, with the 
			result then multiplied by the error value that was set at the start of the 
			function. It should probably be mentioned here that this is a simple network, 
			to enable people to learn and understand how it all works; things can get a lot 
			more complicated than this, you're just being eased into it gently. And it 
			should also be noted that although the example is simple, what you have here is 
			the basis of a decision. As long as you know what the desired output is, this 
			program can be modified to calculate it for you, and once it has been trained 
			it can search through huge amounts of data, checking for things that don't fall 
			within its required parameters.
		</P>
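		<P>To make the arithmetic concrete, here is a minimal, self-contained sketch of a 
			single delta rule update using the 0.45 learning rate from the Adaline node; 
			the variable names and input value are made up for the illustration and are 
			not the library's own.
		</P>
		<PRE>
/// a sketch of one Widrow-Hoff update step, assuming a node that has
/// just produced the wrong output for the current pattern
double dLearningRate = 0.45;  /// the Adaline node's default learning rate
double dInputValue = 0.8;     /// hypothetical value held by one input node
double dNodeValue = 1.0;      /// the ( wrong ) output the node just produced

/// the node error is the output value times -2.0, as in the learn function
double dNodeError = dNodeValue * -2.0;                     /// = -2.0

/// delta rule: learning rate * input value * error
double dDelta = dLearningRate * dInputValue * dNodeError;  /// = -0.72

/// the link's weight would then be nudged by calling UpdateWeight( dDelta )
</PRE>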
		<H2><U>Finally</U></H2>
		<P>The above is by no means all that I have to say on neural networks, but hopefully 
			it will give the complete beginner an understanding of what the basics of a 
			neural network are and, more importantly, how they go about learning to do what 
			they do. In part two I'll be describing the basic classes that are behind all 
			the upcoming programs, and in part three we shall return to the Adaline network 
			and get up close and personal. One last point that should be understood is that 
			all code is officially experimental and changing; the only thing that you can be 
			sure of is that the code that shipped with each release will work. As such, I 
			haven't made my mind up about backward compatibility yet. I will try to ensure 
			that all previous programs will work with the latest versions of the code and 
			the library. Keeping the neural net tester program will help enforce this, but I 
			already have some ideas that will require a complete rewrite of all previous 
			code once I get to the point where I want to try it.
		</P>
		<H2 align="center"><U><FONT size="7">Neural .Net pt 2</FONT></U></H2>
		<H2 align="center"><U><FONT size="7">The Basic Classes</FONT></U></H2>
		<P>Like any serious program these days, there is going to be a class hierarchy in 
			the background hiding all the fiddly stuff so that the main classes can just 
			get on with doing the things they are designed to do, and this collection of 
			neural network classes is no different. All these classes are based on the 
			class library provided in the book Object Oriented Neural Networks in 
			C++, by Joey Rogers. These classes have been completely rewritten in 
			CSharp, although the basic functionality of the originals has been retained for 
			the base classes. The most noticeable change in the code is through the 
			enforcement of stronger type checking in CSharp, so whereas the line array[ i 
			]-&gt;Function is perfectly valid in C++, this is incorrect syntax in CSharp and 
			should be written as ( ( Type )array[ i ] ).Function. I should also point out 
			that there have been changes to the class structure in the translation from C++ 
			to CSharp; some classes have had only minor changes, while some have been 
			dropped altogether. Further changes are in the pipeline, but for now it is 
			better to keep things as simple as possible. The following is meant only as an 
			overview of the class structure, with the details of what is going on in the 
			classes being covered in part three, where we will look at an actual 
			implementation of these classes.
		</P>
		<P><IMG SRC="Neural_Dot_Net/Basic.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
			<H2><U>The Values Class</U></H2>
		<P>The Values class is a representation of what were global variables in the 
			original framework. The Values class holds four integer members that are used 
			for accessing the arrays at specific points in the code. It is declared 
			as a static member of the Basic class, which holds its one and only instance, 
			and it can be accessed through the inheritance chain by all other classes and 
			even indirectly in the implementation code. The only function that the 
			Values class implements is the Random function, which is a static function so 
			that it can be accessed easily from anywhere within the code. This function was 
			moved from the Basic Node class, as it's a singular function by nature and 
			should be off to the side somewhere rather than being accessed through one of 
			the main base classes.
		</P>
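		<P>For illustration, a static helper along the following lines would satisfy the 
			way Values.Random( -1, 1 ) is used in the Adaline Link constructor; this is 
			only a sketch of a plausible implementation, not the library's actual code.
		</P>
		<PRE>
/// a minimal sketch of a static Random helper, assuming a shared
/// System.Random instance; the real Values class may well differ
private static System.Random randomGenerator = new System.Random();

public static double Random( double dLow, double dHigh )
{
	/// NextDouble() returns a value in [ 0, 1 ); scale it onto the range
	return dLow + ( randomGenerator.NextDouble() * ( dHigh - dLow ) );
}
</PRE>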
		<H2><U>The Basic Class</U></H2>
		<P>The Basic class serves as the base class for most of the basic classes and 
			simply contains a way to access the values and the identifier. In theory there 
			is no reason why the Basic and the Values class couldn't be a single class. The 
			class is abstract, as there should be no reason for anyone to want to create an 
			object of this class.
		</P>
		<H2><U>The Basic Link Class</U></H2>
		<P>
			<IMG SRC="Neural_Dot_Net/BasicLinkWorks.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The Basic Link class is the class that provides the glue between the nodes, and 
			contains the link value that is so important in the calculations made later on. 
			The class contains an array of link values, which are the weights for the links. 
			There was certainly some confusion about this when I was learning about this 
			stuff, but whenever anyone talks of weights in neural nets, this is where you 
			will find them. The array for the most part will contain a single value, but as 
			it is using an array list it can contain as many as are required. You can even 
			keep track of all the previous weights that were used for this link, if you so 
			wish. The Basic Link class is declared abstract so that the implementer of a 
			network is forced to inherit from this class. This is due to the fact that I 
			wished to make the basic classes all uninstantiable, although I gave up on 
			having this idea as a rule with the Basic Node class, as I felt that the 
			requirement to derive from it was pointless unless you were going to add to the 
			functionality of the class, and the Basic Node class as it is, is perfectly 
			capable of acting as an input class for the Adaline and other networks. The 
			link class's main purpose, as well as providing the actual link, is to implement 
			the weight functionality, which consists of getting and setting the weights and 
			the error values associated with the links, and updating the weight values with 
			the new value. When the run function is called and the code gets the weight, if 
			the answer is incorrect then part of the learning process means that the weight 
			for each link is updated with the update function, in an attempt to get the 
			correct answer at the next epoch or loop through the data.
		</P>
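		<P>As a small illustration of the point about keeping previous weights, a derived 
			link could override Update Weight to archive the outgoing weight before 
			replacing it; this is only a sketch of the idea, not code from the library, and 
			it assumes the derived class can reach the link values array.
		</P>
		<PRE>
/// hypothetical derived link that keeps a history of its weights
public override void UpdateWeight( double dNewValue )
{
	double dOldWeight = ( double )arrayLinkValues[ Values.Weight ];

	/// store the outgoing weight at the end of the array so that
	/// earlier weights remain available for inspection
	arrayLinkValues.Add( dOldWeight );

	/// the current weight always stays at Values.Weight ( index 0 )
	arrayLinkValues[ Values.Weight ] = dOldWeight + dNewValue;
}
</PRE>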
		<H3><U>IMPORTANT BITS</U></H3>
		<P>
		The important functions within the link classes are
		</P>
		<PRE>
public virtual double GetLinkValue( int nID )
{

	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Link value for array number " + Identifier.ToString() + " link id no " + ID.ToString() + " value = " + ( ( double )arrayLinkValues[ nID ] ).ToString(), ClassName );
	}

	if( arrayLinkValues.Count == 0 || nID &gt;= arrayLinkValues.Count )
		return 0.0;

	return ( double )arrayLinkValues[ nID ];
}

public virtual void SetLinkValue( double dNewValue, int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Setting the link value at " + Identifier.ToString() + " from " + ( ( double )arrayLinkValues[ nID ] ).ToString() + " to " + dNewValue.ToString() + " link id no " + ID.ToString(), ClassName );
	}

	if( arrayLinkValues.Count == 0 || nID &gt;= arrayLinkValues.Count )
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Errors ) == true )
		{
			log.Log( DebugLevelSet.Warning, "Error the id " + Identifier.ToString() + " is greater than the number of link values or the link values array is empty, link id no " + ID.ToString(), ClassName );
		}

		return;
	}

	arrayLinkValues[ nID ] = dNewValue;
}

</PRE>
		<P>
		The Get Link Value and Set Link Value functions are for accessing the link's 
		value; each function prints a message to the log marking its progress, and 
		checks that the accessor value into the array, represented by the integer nID, 
		is within the bounds of the array that it is trying to access.
		<P>The link class also has access to all the values that it shares between the 
			input and the output nodes.
		</P>
		<PRE>
public virtual double InputValue( int nID )
{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
		{
			log.Log( DebugLevelSet.Progress, "Getting the Input Value at " + Identifier.ToString() + " link id no " + ID.ToString() + " value = " + InputNode.GetValue( nID ).ToString(), ClassName );
		}

		return InputNode.GetValue( nID );
}

/// get the value of the output node at the given id
public virtual double OutputValue( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Output Value at " + Identifier.ToString() + " link id no " + ID.ToString() + " value = " + OutputNode.GetValue( nID ).ToString(), ClassName );
	}

	return OutputNode.GetValue( nID );
}

/// get the value of the input error at the given id
public virtual double InputError( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Input Error at " + Identifier.ToString()  + " link id no " + ID.ToString() + " value = " + ( ( double )InputNode.GetError( nID ) ).ToString(), ClassName );
	}

	return ( double )InputNode.GetError( nID );
}

/// get the value of the output error at the given id
public virtual double OutputError( int nID )
{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
		{
			log.Log( DebugLevelSet.Progress, "Getting the Output Error at " + Identifier.ToString() + " link id no " + ID.ToString() + " value = " + OutputNode.GetError( nID ).ToString(), ClassName );
		}

		return OutputNode.GetError( nID );
}
</PRE>
		<P>As you can see from the four accessor functions above, the link can get the data 
			from both the input and the output node that it maintains the link 
			between. No error checking is done within the link class on these functions, as 
			that would duplicate the error handling contained within the node classes 
			themselves.
		</P>
		<P>However, apart from providing the glue between the input and the output nodes, 
			the link's main function is to control the weights between them.
		</P>
		<PRE>
/// get the weighted input value at the given id
public virtual double WeightedInputValue( int nID )
{
	double dReturn = 0.0;  
	if(	Values.Weight &lt;  arrayLinkValues.Count )
		dReturn = bnInputNode.GetValue( nID ) * ( ( double )arrayLinkValues[ Values.Weight ] );
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Warning ) == true )
		{
			log.Log( DebugLevelSet.Warning, "Warning the Values weight value is greater than the link values count returning 0.0, link id no " + ID.ToString(), ClassName );
		}
	}

	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Weighted Input Value at " + Identifier.ToString() + " link id no " + ID.ToString() + " unweighted value = " + bnInputNode.GetValue( nID ).ToString() + " weighted value = " + dReturn.ToString(), ClassName );
	}

	return dReturn;
}

/// get the weighted output value at the given id
public virtual double WeightedOutputValue( int nID )
{
	double dReturn = 0.0;  
	if( Values.Weight &lt;  arrayLinkValues.Count )
		dReturn = bnOutputNode.GetValue( nID ) * ( ( double )arrayLinkValues[ Values.Weight ] );
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Warning ) == true )
		{
			log.Log( DebugLevelSet.Warning, "Warning the Values Weight value is greater that the link values array count returning 0.0, link id no " + ID.ToString(), ClassName  );
		}
	}

	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Weighted Output value at " + Identifier.ToString() + " link id no " + ID.ToString() + " unweighted value = " + bnOutputNode.GetValue( nID ).ToString() + " weighted value = " + dReturn.ToString(), ClassName );
	}

	return dReturn;
}

/// get the weighted output error at the given id
public virtual double WeightedOutputError( int nID )
{
	double dReturn = 0.0;  
	if(	Values.Weight &lt;  arrayLinkValues.Count )
		dReturn = bnOutputNode.GetError( nID ) * ( ( double )arrayLinkValues[ Values.Weight ] );
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Warning ) == true )
		{
			log.Log( DebugLevelSet.Warning, "Warning the Values Weight value is greater that the link values array count returning 0.0, link id no " + Identifier.ToString(), ClassName );
		}
	}

	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Weighted Output Error value at " + Identifier.ToString() + " link id no " + Identifier.ToString() + " unweighted value = " + bnOutputNode.GetError( nID ).ToString() + " weighted value = " + dReturn.ToString(), ClassName );
	}

	return dReturn;
}

/// get the weighted input error at the given id
public virtual double WeightedInputError( int nID )
{
	double dReturn = 0.0;  
	if(	Values.Weight &lt;  arrayLinkValues.Count )
		dReturn = bnInputNode.GetError( nID ) * ( ( double )arrayLinkValues[ Values.Weight ] );
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.Warning ) == true )
		{
			log.Log( DebugLevelSet.Warning, "Warning the Values Weight value is greater than the link values array count returning 0.0, link id no " + Identifier.ToString(), ClassName );
		}
	}

	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Weighted Input Error value at " + Identifier.ToString() + " link id no " + Identifier.ToString() + " unweighted value = " + bnInputNode.GetError( nID ).ToString() + " weighted value = " + dReturn.ToString(), ClassName );
	}

	return dReturn;
}

/// update the weight for this basic link
public virtual void UpdateWeight( double dNewValue )
{
	if( Values.Weight &lt; arrayLinkValues.Count )
	{
		double dTemp = ( double )arrayLinkValues[ Values.Weight ];
		dTemp += dNewValue;

		if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
		{
			log.Log( DebugLevelSet.Progress, "Updating the Weight for this basic link id no " + Identifier.ToString() + " from = " + ( ( double )arrayLinkValues[ Values.Weight ]).ToString() + " to = " + dTemp.ToString(), ClassName );
		}

		arrayLinkValues[ Values.Weight ] = dTemp;

	}
}		
</PRE>
		<P>
		There are four main accessor functions that the link controls with regard to 
		the weights, these allowing the weighted values for the input 
		value, the output value, the input error and the output error all 
		to be returned as their original values multiplied by the weight value 
		of the current link. The final Update Weight function is called when the learn 
		function is activated and the weight value for the link needs to be 
		updated by the output node.
		<P></P>
		<H2><U>The Basic Node Class</U></H2>
		<P>
		The Basic Node class is the workhorse of the application and is the only basic 
		class that is not abstract at this point. This is due to the fact that it is 
		used to provide input classes for the Adaline network, at least. In the 
		original C++ class library there was another class that derived from this, but 
		it didn't really add any functionality to the Basic Node class, so I removed it 
		and allowed this class to be instantiable.
		<P>The Basic Node class keeps arrays of the values, where array index 0 is usually 
			the node value and array index 1 is usually the learning rate, although it 
			should be noted that this is usually for derived classes only, and that when 
			the Basic Node is being used as an input node it will only contain one value, 
			in the values array at index 0. It also contains the errors that have been 
			generated on this node, at Node Errors array index 0. The node also contains 
			array lists for both the input and the output links of the node, which 
			will need to be lists, as we will see later when we start to look at nodes that 
			have many input connections. The Basic Node class is the class responsible for 
			implementing the connection functions that create the links between the input 
			nodes and the output nodes, which take the form of: connect this node to the 
			node passed to the function, using the link passed as the second parameter to 
			the function.
		</P>
		</P>
		<H3><U>IMPORTANT BITS</U></H3>
		<P>The three most important functions for the basic node class are:
		</P>
		<PRE>
public virtual void Run( int nMode )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Run called in basic node", ClassName );
	}
}

public virtual void Learn( int nMode )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Learn Called in basic node", ClassName );
	}
}

public virtual void Epoch( int nMode )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Epoch called in basic node", ClassName );
	}

}
</PRE>
		<P>
		These functions are for implementing the main functionality of the nodes. The 
		Run function is the function that will implement the network's algorithm for 
		that node. The Learn function will update the weight value in the link, and the 
		Epoch function in theory controls the entire run through the network, although 
		this is more a concept for the Basic Network class or the Basic Neuron class. 
		All three functions are designed to be implemented in derived classes and are not 
		used in the Basic Node class, which makes the Basic Node class ideal for use as 
		an input class, as the fact that these functions are not implemented 
		means that the user won't become confused and break things by inadvertently 
		calling any of them on an input node.
		<P>
		The Basic Node class also implements some functions for the getting and setting 
		of the values in the four arrays that it holds.
		</P>
		<PRE>
private ArrayList arrayNodeValues; /// double values
private ArrayList arrayNodeErrors; /// double values
private ArrayList arrayInputLinks; /// Basic links 
private ArrayList arrayOutputLinks; /// Basic links		
</PRE>
		<P>
		These arrays hold all the information that the Basic Node requires. The Node 
		Values array holds the value(s) for the node, and the Node Errors array holds 
		the error value(s) for the node. Each node also keeps track of the input and 
		output links to the node, although when used as an input node this class 
		will only have output links.
		<P>
		The connections between the nodes are controlled by the following:
		</P>
		<PRE>
/// create a link to a node
public void CreateLink( BasicNode bnToNode, BasicLink blLink )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Creating a link between node " + bnToNode.Identifier.ToString() + " and link " + blLink.Identifier.ToString(), ClassName );
	}

	arrayOutputLinks.Add( blLink );
	bnToNode.InputLinks.Add( blLink );
	blLink.InputNode = this;
	blLink.OutputNode = bnToNode;
}

/// disconnect a connection from the arrays, returns true on success
public bool Disconnect( BasicNode bnFromNode, BasicNode bnToNode )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Disconnecting the connection between " + bnFromNode.ID.ToString() + " to " + bnToNode.ID.ToString(), ClassName );
	}

	ArrayList tempList = bnFromNode.OutputLinks;
	bool bFound = false;

	/// see if it is possible to find the connection to the to node
	int n=0;
	for( ; n&lt;tempList.Count; n++ )
	{
		if( ( ( BasicLink )tempList[ n ] ).OutputNode.Equals( bnToNode ) == true )
		{
			bFound = true;
			break;
		}
	}

	if( bFound == true )
	{
		/// from the current node remove the input then remove the node
		( ( BasicLink )tempList[ n ] ).OutputNode.arrayInputLinks.Remove( tempList[ n ] );
		tempList.RemoveAt( n );
		return true;
	}
	else
		return false;
}
</PRE>
		<P>The Create Link function is called on the node and sets up the link to the 
			passed-in node through the passed-in link.
		</P>
		<H2><U>The Basic Network Class</U></H2>
		<P>
		The Basic Network class is a container class that holds the information for the 
		whole network. This version of the network class has no inherent structure, in 
		that it simply holds arrays of all the nodes and the links, and is designed to 
		make the holding of a number of nodes and links easier than having a large 
		collection held within the program. For networks such as the Adaline network, 
		this class is able to provide constructors that will build the whole network; 
		for larger networks it has the functionality to add nodes and links. There are 
		some thoughts at the moment to develop a network class that uses the Basic 
		Neuron class, but this has been put off for now in order to concentrate on the 
		actual neural network programming rather than getting bogged down in 
		implementation details.
		<P>
		The class is an abstract class and needs to be derived from in order to use it. 
		It does, however, provide the basic accessing functionality that would be 
		required by any class that wants to derive from this class. It also has a fully 
		implemented save function, and contains the blanked-out code for loading the 
		file. A true implementation of load is not possible at this level, as it would 
		require an instance of the Basic Link class, which cannot be instantiated as it 
		is an abstract class. The code is provided as an example, and there is a working 
		example in the Adaline network code.
		<P>The class also declares an abstract Create Network function, which is used to 
			build the network; an example of this will be given in the Adaline network.
		</P>
		<H3><U>IMPORTANT BITS</U></H3>
		<P></P>
		<PRE>
public virtual void AddNode( BasicNode node )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Adding a new neuron to basic network" );
	}

	arrayNodes.Add( node );
}

public virtual void AddLink( BasicLink link )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Adding a new link to the basic network" );
	}

	arrayLinks.Add( link );
}

public virtual void RemoveNodeAt( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Removing a node from basic network at " + nID.ToString() );
	}

	if( nID &lt; arrayNodes.Count )
	{
		arrayNodes.RemoveAt( nID );
	}
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning attempt to remove a node from basic network " + nID.ToString() + " when there are only " + arrayNodes.Count.ToString() + " in the array " );
		}
	}
}

public virtual void RemoveLinkAt( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Removing a link from the basic network at " + nID.ToString() );
	}

	/// note: this must operate on the links array, not the nodes array
	if( nID &lt; arrayLinks.Count )
	{
		arrayLinks.RemoveAt( nID );
	}
	}
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning attempt to remove a node from basic network " + nID.ToString() + " when there are only " + arrayLinks.Count.ToString() + " in the array " );
		}
	}
}

public BasicNode GetNodeAt( int nID )
{
	if( nID &lt; arrayNodes.Count )
		return ( BasicNode )arrayNodes[ nID ];
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning attempt to get a node from basic network " + nID.ToString() + " when there are only " + arrayNodes.Count.ToString() + " in the array " );
		}
	}

	return null;
}

public BasicLink GetLinkAt( int nID )
{
	if( nID &lt; arrayLinks.Count )
		return ( BasicLink )arrayLinks[ nID ];
	else
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning attempt to get a link from basic network " + nID.ToString() + " when there are only " + arrayLinks.Count.ToString() + " in the array " );
		}
	}

	return null;
}
</PRE>
		<P></P>
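		<P>To show how these pieces fit together, here is a hypothetical sketch of a class 
			deriving from the Basic Network class and implementing the abstract Create 
			Network function. The class name, constructor signatures and node choices here 
			are assumed for the illustration; the Adaline network code in part three is the 
			real example.
		</P>
		<PRE>
/// an invented derived network that wires two input nodes to a single
/// Adaline node, purely to illustrate the add and link functions above
public class SketchNetwork : BasicNetwork
{
	public SketchNetwork( Logger log ) : base( log ) /// assumed constructor
	{
		CreateNetwork();
	}

	protected override void CreateNetwork()
	{
		/// store the nodes and links in the network's arrays
		AddNode( new BasicNode( log ) );         /// input node one
		AddNode( new BasicNode( log ) );         /// input node two
		AddNode( new AdalineNode( log, 0.45 ) ); /// the working node
		AddLink( new AdalineLink( log ) );
		AddLink( new AdalineLink( log ) );

		/// join each input node to the working node through a link
		GetNodeAt( 0 ).CreateLink( GetNodeAt( 2 ), GetLinkAt( 0 ) );
		GetNodeAt( 1 ).CreateLink( GetNodeAt( 2 ), GetLinkAt( 1 ) );
	}
}
</PRE>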
		<H2><U>The Bias Node Class</U></H2>
		<P>The Bias Node is a simple child of the Basic Node class that has its value set 
			to 1.0 by default. Although this class is present in the implementation of the 
			Adaline network later, it is not actually used by any of the code, as the 
			Adaline relies on the values provided by the learning rate, the node value and 
			the error.
		</P>
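		<P>Since the Bias Node does little more than present a constant value, a sketch of 
			it can be as small as the following; the detail of how the value is stored is 
			assumed here, and the actual class may differ.
		</P>
		<PRE>
/// a minimal sketch of a bias node: a Basic Node whose value
/// defaults to 1.0 so that it contributes a constant input
public class BiasNode : BasicNode
{
	public BiasNode( Logger log ) : base( log )
	{
		/// the bias value is fixed at 1.0 by default
		SetValue( Values.NodeValue, 1.0 );
	}
}
</PRE>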
		<H2><U>The Basic Neuron Class</U></H2>
		<P><IMG SRC="Neural_Dot_Net/BasicNeuronComponent.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		The Basic Neuron class is not a part of the original framework, but I thought 
		that it made more sense to have a class that encapsulated the idea of a neuron 
		as an object. I think this makes the design conceptually easier, in that it 
		separates things into more clearly defined layers: you now 
		have a network that contains neurons, and these neurons contain nodes that do 
		the work, rather than having a network that has a whole bunch of nodes with no 
		clearly defined relationship unless you understand the program already.
		<P>The Basic Neuron contains the essentials for building a neuron, these being a 
			couple of input nodes and a bias node, as well as an ArrayList for holding the 
			links to the nodes. This class is also declared as abstract, to force the 
			implementer to inherit from it. This is a bit more reasonable than the abstract 
			declaration of the link class, as the Neuron class for a specific network 
			will normally want to add some functionality to this class, whereas I suspect 
			most implementations of Basic Link derived classes will simply be 
			calling the base class functionality.
		</P>
		<H2><U>The Pattern Class</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Pattern.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The Pattern class is used for loading the information that is presented to the 
			network. This class holds the values in its own arrays and presents them to 
			the network a pair at a time.
		</P>
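		<P>The way a pattern is consumed can be seen in the training loop in part three; 
			as a preview, the shape of the access is along these lines, where the patterns 
			array is assumed to have already been filled by the training file loader.
		</P>
		<PRE>
/// read the two input values and the stored correct answer from a
/// pattern; InSet holds the inputs, OutputValue the expected output
Pattern pattern = ( Pattern )patterns[ i ];
double dInputOne = ( double )pattern.InSet[ 0 ];
double dInputTwo = ( double )pattern.InSet[ 1 ];
double dExpected = pattern.OutputValue( 0 );
</PRE>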
		<H2><U>Finally</U></H2>
		<P>
		Right then, that is the introduction to the basic classes. Now let's move on and 
		look at how the Adaline 1 program actually works, and as part of that look into 
		what these classes do in action.
		</P>
		<H2 align="center"><U><FONT size="7">Neural .Net pt 3</FONT></U></H2>
		<H2 align="center"><U><FONT size="7">The Adaline Network</FONT></U></H2>
		<P>
		Finally we are going to get something working. We've done the theory, you've 
		been formally introduced to the parents, now let's get down to the juicy bits. 
		Welcome to the world of the Adaline neural network, running code 
		demonstrations, and seeing how things really work when the code is put to the 
		test. As mentioned earlier, the Adaline network is a rather simple 
		classification network, in that it takes a set of data with predefined answers 
		and, by learning the inputted data, it can get the correct answers whenever the 
		data is run.
		<P>The Adaline network is an example of a feed forward network, so named because 
			all the data flows in one direction, from the input nodes to the output nodes. 
			It should be noted that although the network contains a Bias node, this is not 
			used within the example code. The network is based on the two Adaline examples 
			in Joey Rogers' book Object Oriented Neural Networks in C++; although the 
			AdalineNeuron class is entirely mine, it has the same effective functionality 
			as the network classes provided in those demonstrations.
			<H2><U>The Adaline Network</U></H2>
		<P>
		There are three extra classes that make up the Adaline network used in the 
		example program, these being the AdalineNeuron, which inherits from 
		BasicNeuron, the AdalineNode, which inherits from the BasicNode class, and the 
		AdalinePattern class, which inherits from the Pattern class. These three classes 
		are located in the Adaline.cs file.
		</P>
		<IMG SRC="Neural_Dot_Net/3e5b39da.jpg" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		<P></P>
		<IMG SRC="Neural_Dot_Net/Adaline.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<H2><U>The Adaline Neuron Class</U></H2>
		<P>
		The Adaline Neuron class contains the class members for creating the network. It 
		is at this point, when overriding the Basic Neuron class ( see Neural .Net part 
		2, The Basic Classes, for a description of this class ), that you can specify the 
		requirements that are particular to your network. The idea behind the Neuron 
		classes is that they encapsulate a part of the network, in order to make life a 
		little easier later on when dealing with networks that have more than one 
		neuron.
		<P>
		What we are creating with the Adaline Network example is a single 
		neuron neural network. Hopefully, once you've looked at the class, you will see 
		why a single neuron neural network is not written as a single class or function.
		<P>
		The Adaline Neuron class only adds one more member to the neuron class, because 
		the basic setup of two input values, a bias node and the standard three links 
		is all that is required. We shall look at examples in a later article where 
		the nodes take inputs from more than two input nodes, and examples using 
		more linkages, but for now we'll stick to the basics. The node added 
		to the Adaline Neuron class is an object of the Adaline Node class, and it 
		is this that will be doing all the work for us during the running of the 
		program.
		<P>The useful work done by the Adaline Neuron class is that it builds the 
			network for us from the constructor, by calling BuildLinks. This just made more 
			sense to me than manually building the network every time that you wanted to 
			run it, and means that any time an Adaline network is used, then once it has 
			been made to work, all future networks will build themselves. It does this by 
			using the Basic Node CreateLink function, which is described in Neural .Net 
			part 2, The Basic Classes, but which in essence creates the links between the 
			input nodes and the Adaline node using the link classes.
		</P>
		<H2><U>The Adaline Pattern Class</U></H2>
		<P>The Adaline Pattern class inherits from the pattern class in order to present 
			the training and the running data to the network.
		</P>
		<H2><U>The Adaline Node Class</U></H2>
		<P>The Adaline Node class inherits from the Basic Node class and implements the 
			run, learn and transfer functions for the network.
		</P>
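		<P>Pulling together the run, transfer and learn fragments quoted in part one, the 
			heart of the Adaline Node class looks roughly like the following. This is a 
			condensed sketch assembled from those fragments rather than a verbatim listing; 
			the logging code is left out, and the exact signatures and the way the base 
			class exposes its arrays are assumed.
		</P>
		<PRE>
public class AdalineNode : BasicNode
{
	public AdalineNode( Logger log, double dLearningRate ) : base( log )
	{
		/// the learning rate is stored alongside the node value
		NodeValues.Add( dLearningRate );
	}

	public override void Run( int nID )
	{
		double dTotal = 0.0;
		int nCount = InputLinks.Count;

		/// sum the weighted inputs and threshold the result
		for( int i=0; i&lt;nCount; i++ )
		{
			dTotal += ( ( BasicLink )InputLinks[ i ] ).WeightedInputValue( nID );
		}

		NodeValues[ nID ] = TransferFunction( dTotal );
	}

	protected double TransferFunction( double dValue )
	{
		if( dValue &lt; 0 )
			return -1.0;

		return 1.0;
	}

	public virtual void Learn()
	{
		/// Widrow-Hoff: error = output * -2.0, delta = rate * input * error
		NodeErrors[ Values.NodeError ] = ( ( double )NodeValues[ Values.NodeValue ] ) * -2.0;

		for( int i=0; i&lt;InputLinks.Count; i++ )
		{
			BasicLink link = ( BasicLink )InputLinks[ i ];
			double dDelta = ( ( double )NodeValues[ Values.LearningRate ] )
				* ( ( double )link.InputValue( Values.NodeValue ) )
				* ( ( double )NodeErrors[ Values.NodeError ] );
			link.UpdateWeight( dDelta );
		}
	}
}
</PRE>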
		<H2><U>The Adaline One Function</U></H2>
		<P>Now we come to how the Adaline network does its required task. This is 
			performed by the DoAdalineOne function in Form1.cs, and the important code is 
			listed below.
		</P>
		<PRE> 
FileInfo info = new FileInfo( "Neural Network Tester.xml" ); 
if( info.Exists == true ) 
{ 
	info.Delete();
} 

log = new Logger( "Neural Network Tester.xml", "NeuralNetworkTester", true ); 
ArrayList patterns = LoadAdalineTrainingFile(); 

/// create the Adaline network 
AdalineNeuron neuron = new AdalineNeuron( log, new BasicNode( log ), new BasicNode( log ), new BiasNode( log ), new AdalineNode( log, 0.45 ) ); 

/// train the Adaline network 
int nIteration = 0; 
int nGood = 0; 
while( nGood &lt; nNumberOfItemsInAdalineTrainingFile )
{
	 nGood = 0; 
	 for( int i=0; i&lt;nNumberOfItemsInAdalineTrainingFile; i++ ) 
	 { 
			neuron.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 0 ] ); 
			neuron.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 1 ] ); 

			neuron.Node.Run( neuron.Node.Values.NodeValue ); 

			/// if the output value generated by run is not the same as the output value 
			/// in the training file then it is an error 
			if( ( ( Pattern )patterns[ i ] ).OutputValue( 0 ) != neuron.Node.GetValue( neuron.Node.Values.NodeValue ) ) 
			{ 
				/// run the learn function 
				log.Log( DebugLevelSet.Errors, "Learn called at  number " + i.ToString() + " Pattern value = " + ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() +
				" Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ), "Form1" ); 
				netWorkText.AppendText( "Learn called at  number " + i.ToString() + " Pattern value = " + ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() + 
				" Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ) + "\n" ); 

				neuron.Node.Learn(); 
				break; 
			} 
			else 
				nGood++; 
	} 


	log.Log( DebugLevelSet.Progress, "Iteration number " + nIteration.ToString() + " produced " +
			nGood.ToString() + " Good values out of 250 ", "Form1" ); 

	netWorkText.AppendText( "Iteration number " + nIteration.ToString() + " produced " + nGood.ToString() + 
			" Good values out of 250 \n" ); 

	nIteration++; 
} 

FileStream xmlstream = new FileStream( "adalinenetworkone.xml", FileMode.Create, FileAccess.Write, FileShare.ReadWrite, 8, true ); 
XmlWriter xmlWriter = new XmlTextWriter( xmlstream, System.Text.Encoding.UTF8 );
xmlWriter.WriteStartDocument(); 

neuron.Save( xmlWriter ); 
xmlWriter.WriteEndDocument(); 
xmlWriter.Close(); 

/// now load the file
FileStream readStream = new FileStream( "adalinenetworkone.xml", FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 8, true ); 

XmlReader xmlReader = new XmlTextReader( readStream ); 

/// create the adaline network 
AdalineNeuron neuron2 = new AdalineNeuron( log, new BasicNode( log ), new BasicNode( log ), new BiasNode( log ), new AdalineNode( log, 0.45 ) ); 
neuron2.Load( xmlReader );

xmlReader.Close(); 

/// rerun the adaline should get 250 out of 250
for( int i=0; i&lt;nNumberOfItemsInAdalineTrainingFile; i++ )
{
	neuron2.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 0 ] );
	neuron2.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 1 ] );
	neuron2.Node.Run( neuron.Node.Values.NodeValue );
	netWorkText.AppendText( "Pattern " + i.ToString() + " Input = ( " + ( ( Pattern )patterns[ i ] ).InSet[ 0 ].ToString() + "," + ( ( Pattern )patterns[ i ] ).InSet[ 1 ].ToString() +
			" ) Adaline = " + neuron2.Node.GetValue( neuron2.Node.Values.NodeValue ) + " Actual = "  + ( ( Pattern )patterns[ i ] ).OutputValue( neuron2.Node.Values.NodeValue ) + "\n" );
}

log.Close();
thread.Suspend();
</PRE>
		<P>
		The Adaline network program is kicked off from the Train menu by clicking on 
		the "Adaline 1" option. This kicks off a thread that runs the code above; the 
		reason for the thread is so that the user has some control over the 
		interface while the program is running, as although the Adaline program is 
		fairly quick to train, it is not unreasonable to suspect that this won't 
		always be the case.
		<P>
		The code starts off by checking to see if the Neural Network Tester.xml file 
		exists; if it does, it deletes it, and then creates a new Logger 
		that will write to the file. The Logger class can be found in the SharpUtils 
		DLL and writes to either an XML file, the registry or both. Once the log is 
		created, that log object will be passed through to all the base classes, with 
		its logging parameters set by the DebugLevel variable that is created in the 
		constructor of the main form as,
		</P>
		<PRE>
debugLevel = new DebugLevel( DebugLevelSet.All );		
		</PRE>
		<P>
		The debug level set has a number of options available to it, which are stored in 
		an enumeration in the DebugLevel.cs file in the SharpUtils project. These 
		levels are
		</P>
		<PRE>
public enum DebugLevelSet{ All, WarningsAndErrors, Errors, Warning, Note, Important, HighLight, Progress };
</PRE>
		<P>
		The first three levels apply only to the registry, whereas the rest are there to 
		give greater flexibility for controlling the amount of information that the code 
		outputs while it is running. It should be noted that I have set a lot of the 
		levels to progress and the debug level to all. This will generate the maximum 
		amount of information, which can become a problem in its own right if the 
		network is refusing to learn, as it was when I originally tried running it. 
		I left the program running one day while I went out, and it generated a 2 GB 
		log file. For general running, the debug level should be set to Warning, 
		WarningsAndErrors or Errors, with the other options only used for debugging.
		<P>
		When using the DebugLevels, you are only required to set the level once. You 
		will notice in the other files that the level is set to 
		DebugLevelSet.currentLevel, so that when you want to change the debug level you 
		don't have to start plowing through every single file that uses it to change 
		it.
		</P>
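		<P>For example, to cut the output down for a normal run, the single line in the 
			main form's constructor becomes:
		</P>
		<PRE>
/// quieter logging for general running, per the recommendation above
debugLevel = new DebugLevel( DebugLevelSet.WarningsAndErrors );
</PRE>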
		<P>
		Once the logger is created, the code calls the 
		LoadAdalineTrainingFile function, which is standard file loading code that 
		takes the variables in the file and stores them in a pattern array, before 
		moving on to create the AdalineNeuron.
		</P>
		<PRE>
AdalineNeuron neuron = new AdalineNeuron( log, new BasicNode( log ), new BasicNode( log ), new BiasNode( log ), new AdalineNode( log, 0.45 ) );		
</PRE>
		<P>
		The Adaline Neuron class inherits from the BasicNeuron class and its 
		constructor takes a number of parameters. The first is the log, followed by 
		the first input node, which is of the Basic Node type as its job is simply to 
		feed information to the network and not to transform it in any way. The third 
		parameter is another basic node, the second input node ( see picture 
		above ), which also does nothing more than pass data to the network. The fourth 
		parameter is the bias node, which by default has a value of one and works as 
		another input parameter. The final parameter passed to the constructor is the 
		Adaline Node, which itself takes two parameters: the log object 
		and the learning rate for the node.
		<P>
		The Adaline Neuron class builds the network that is pictured&nbsp;at the start 
		of this section by calling the&nbsp;BuildLinks Function that establishes the 
		links between the separate nodes,
		<P></P>
		<PRE>
this.InputNodeOne.CreateLink( ( BasicNode )this.Node, ( BasicLink )this.Links[ 0 ] );
this.InputNodeTwo.CreateLink( ( BasicNode )this.Node, ( BasicLink )this.Links[ 1 ] );
this.BiasNode.CreateLink( ( BasicNode )this.Node, ( BasicLink )this.Links[ 2 ] );		
</PRE>
		<P>
		The BuildLinks function takes each node that is specific to the Adaline neuron 
		and joins them by calling the Basic Node function CreateLink. This is called 
		on the Basic Node that you want to create the link from, and takes the node 
		that you are creating the link to ( in this case the Adaline Node cast back 
		to a Basic Node ) and the Link that will control the connection and contain 
		the weight value for the link.
		<P></P>
		<PRE>
arrayOutputLinks.Add( blLink );
bnToNode.InputLinks.Add( blLink );
blLink.InputNode = this;
blLink.OutputNode = bnToNode;		
</PRE>
		<P>First the link is added to the node's array of output links and then it is added 
			to the input links array of the node that the link is going to. Once this is 
			done the information for the link itself is updated so that the link knows 
			which is the output node and which is the input node.
		</P>
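		<P>
		Putting the call sites and the body together, the CreateLink function inside 
		the BasicNode class presumably looks something along these lines ( a sketch 
		reconstructed from the two listings above, not the library's verbatim code ):
		</P>
		<PRE>
public void CreateLink( BasicNode bnToNode, BasicLink blLink )
{
	arrayOutputLinks.Add( blLink );    /// this node records the link as outgoing
	bnToNode.InputLinks.Add( blLink ); /// the target node records it as incoming
	blLink.InputNode = this;           /// the link knows where it comes from
	blLink.OutputNode = bnToNode;      /// and where it goes to
}
		</PRE>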
		<H2><U>Training</U></H2>
		<P>
		Training is the process by which the network learns the task appointed to it. 
		It should be noted, though, that the network learns how to do a complete task 
		and not just how to get all the answers right in the example file. We are not 
		talking Pavlovian conditioning here, where you get the same response to the 
		same stimulus every time; we are teaching the program the ability to get the 
		answers right every time regardless of the data set presented to it. I'll 
		explain later how to run a trained network so that it can prove that it has 
		learnt the task, by generating a new set of data and running it against the 
		saved network.
		<P><IMG SRC="Neural_Dot_Net/Inside The Adaline Run Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The code for the training loop is
		</P>
		<PRE>
while( nGood &lt; nNumberOfItemsInAdalineTrainingFile )
{
	nGood = 0;
	for( int i=0; i&lt;nNumberOfItemsInAdalineTrainingFile; i++ )
	{
		neuron.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 0 ] );
		neuron.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( double )( ( Pattern )patterns[ i ] ).InSet[ 1 ] );
		neuron.Node.Run( neuron.Node.Values.NodeValue );

		/// if the output value generated by run is not the same as the output value
		/// in the training file then it is an error
		if( ( ( Pattern )patterns[ i ] ).OutputValue( 0 ) != neuron.Node.GetValue( neuron.Node.Values.NodeValue ) )
		{
			/// run the learn function
			log.Log( DebugLevelSet.Errors, "Learn called at number " + i.ToString() + " Pattern value = " +  ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() + " Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ), "Form1" );
			netWorkText.AppendText( "Learn called at number " + i.ToString() + " Pattern value = " +  ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() + " Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ) + "\n" );
			neuron.Node.Learn();
			break;
		}
		else
			nGood++;
	}

	log.Log( DebugLevelSet.Progress, "Iteration number " + nIteration.ToString() + " produced " + nGood.ToString() + " Good values out of 250 ", "Form1" );
	netWorkText.AppendText( "Iteration number " + nIteration.ToString() + " produced " + nGood.ToString() + " Good values out of 250 \n" );
	nIteration++;
}
</PRE>
		<P><IMG SRC="Neural_Dot_Net/Inside Adaline Transition Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>Basically we run the code round in a loop until every item in the training set 
			is classified correctly; the size of the set is given by the 
			nNumberOfItemsInAdalineTrainingFile integer, which is declared as a static 
			variable of the class and is set to 250. First of all the code sets 
			the input values in the input nodes to the values that were loaded into 
			the patterns array when we loaded the training file. Then we call run on the 
			Adaline Node itself, which is the only node in this network to have the run 
			function defined. I go through the details of the run and the learn functions 
			in part one when looking at how neural networks learn, so I won't repeat it all 
			here. Suffice to say that once the run function has executed, the output 
			value of the Adaline node is checked to see if it is equal to the answer 
			defined in the training file, and if it isn't then the learn function is called 
			and the weights for the node are updated, as is also described in the Neural Dot 
			Net 1 Introduction article.
		</P>
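		<P>
		For reference, here is a minimal sketch of what the run and learn steps amount 
		to, assuming the standard Adaline ( Widrow-Hoff ) rule described in the 
		introduction article; the names are illustrative and not the library's actual 
		signatures:
		</P>
		<PRE>
/// Run: a weighted sum of the inputs pushed through a threshold transfer function.
double Run( double[] arrayInputs, double[] arrayWeights )
{
	double dTotal = 0.0;
	for( int i=0; i&lt;arrayInputs.Length; i++ )
		dTotal += arrayInputs[ i ] * arrayWeights[ i ];
	return dTotal &lt; 0.0 ? -1.0 : 1.0; /// the transfer function
}

/// Learn: nudge each weight by learning rate * input * error, where the
/// error is the desired output minus the actual output.
void Learn( double[] arrayInputs, double[] arrayWeights, double dDesired,
	double dActual, double dLearningRate )
{
	double dError = dDesired - dActual;
	for( int i=0; i&lt;arrayInputs.Length; i++ )
		arrayWeights[ i ] += dLearningRate * arrayInputs[ i ] * dError;
}
		</PRE>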
		<H2><U>Saving And Loading</U></H2>
		<P>
		Next we come to the saving and loading part of the Adaline One demonstration. I 
		took the decision right at the start that all the saving and loading in this 
		program was going to use xml rather than some made-up file format. I 
		did this because xml finally seems to be accepted as a standard 
		way of doing things, which only makes me wonder why it has taken so long. All 
		the logging for this program is done in xml and the neural network is saved 
		as xml too.
		<P>
		The strategy taken is that each class implements its own save functionality, 
		which means that a) I avoid having to write huge, extremely complicated saving 
		functions for every network type that I write, and b) because of the inheritance 
		hierarchy most of the save functionality is already implemented by the time 
		that I move to a new network.
		<P>
		This is what the saved Adaline network looks like:
		<P>
		<P>
		</P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;AdalineNeuron&gt;
	&lt;BasicNeuron&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;0&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.960490693785479&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;/BasicNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;1&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.541840747251101&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;/BasicNode&gt;
		&lt;BiasNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;2&lt;/Identifier&gt;
				&lt;NodeValue&gt;1&lt;/NodeValue&gt;
				&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;/BasicNode&gt;
		&lt;/BiasNode&gt;
	&lt;/BasicNeuron&gt;
	&lt;AdalineNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;3&lt;/Identifier&gt;
			&lt;NodeValue&gt;-1&lt;/NodeValue&gt;
			&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
			&lt;NodeError&gt;-2&lt;/NodeError&gt;
		&lt;/BasicNode&gt;
	&lt;/AdalineNode&gt;
	&lt;AdalineLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;4&lt;/Identifier&gt;
			&lt;LinkValue&gt;-5.14391275287787&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineLink&gt;
	&lt;AdalineLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;5&lt;/Identifier&gt;
			&lt;LinkValue&gt;4.19182542813562&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineLink&gt;
	&lt;AdalineLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;6&lt;/Identifier&gt;
			&lt;LinkValue&gt;-0.450444623246065&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;2&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineLink&gt;
&lt;/AdalineNeuron&gt;	
</PRE>
		<P>
		I've printed this out in its entirety as it not only shows the saved values for 
		a fully trained network but the xml format is also a convenient way of 
		displaying the class hierarchy in an easy-to-understand fashion. The Save and 
		Load functions are included in each of the network classes and, as a result of 
		the hierarchy, they only implement the saving and loading of their own variables 
		before calling the base class save. The standard format for a save function is: 
		write an element string identifying the current class, save any local class 
		variables, and then call base before writing the end of the element string.
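		<P>
		As a minimal sketch of that pattern, assuming the XmlWriter used in the 
		listings later on ( the element name is illustrative, not the library's 
		verbatim code ):
		</P>
		<PRE>
public virtual void Save( XmlWriter xmlWriter )
{
	xmlWriter.WriteStartElement( "AdalineNode" ); /// element identifying this class
	/// ... write any variables local to this class here ...
	base.Save( xmlWriter );                       /// the base class saves its own data
	xmlWriter.WriteEndElement();                  /// close off this class's element
}
		</PRE>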
		<P>
		The layout of the xml file follows this pattern because, even though the nodes 
		contain a record of their links, it was more confusing to try to load 
		the links along with the nodes, and seeing as the links are themselves stored as 
		separate objects it would have been an unwanted duplication of the code.
		<P>Once the file is saved and loaded again it is then run again using the same data 
			file and it should get full marks every time. There is a much surer test though 
			built into the test application provided with the downloads for this program.
		</P>
		<H2><U>Testing</U></H2>
		<P>
		The testing portions of the code are located under the Run menu of the Neural 
		Net Tester program. The test for this program is the "Load And Run Adaline 1" 
		menu option. This will load a file that resembles the one above. I say 
		resembles as the link values won't be exactly the same for any two training runs.
		<P>
		The menu option will rerun the file generation for the Adaline network, which 
		is seeded from a time value so that the numbers are fairly sure to 
		be different each time. The list will display a message saying that the 
		file is being generated and then will probably proceed in a blur of activity as 
		the program runs the newly generated file through the run function. Note that 
		no learning functions are called this time through; the file is processed 
		entirely with the data taken from the loaded network.
		<P>
		The display will show at the end a list of all the input data and the 
		conclusion the Adaline network reached about each item. Next to this will be 
		the answer that was generated with the test data in the pattern. So far in my 
		testing the function has performed with one hundred percent 
		accuracy.
		<P>The quick guide is
		</P>
		<UL>
			Menu :- Generate/Generate Adaline One&nbsp;Training File :- Generates the file 
			that is used for the adaline Load and run menu option</UL>
		<UL>
			Menu :- Run/Load And Run Adaline&nbsp;1:- Loads the Adaline saved network from 
			the disk and then runs it against the adaline file.</UL>
		<UL>
			Menu :- Train/Train Adaline&nbsp;1 :- Trains the network from scratch using the 
			current lin2var.trn adaline training file and then saves it to disk.</UL>
		<UL>
			Menu :- Options Adaline&nbsp;1 Options :- Brings up a dialog that allows you to 
			set certain parameters for the running of the adaline network.
		</UL>
		<H2><U>Options</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/AdalineOneOptions.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		There are three options that can be set through the options dialog box for the 
		Adaline One network. The first is the number of items in the file that is used 
		for testing and training. This can be raised or lowered ( as long as you make 
		sure the file contains enough data items ).
		<P>The second is the learning rate for the network, which is used whenever Learn 
			is called, and the third, the "Use Bias" option, adds the bias value to the 
			result of the run function in the transfer function.
		</P>
		<H2><U>Understanding The Output</U></H2>
		<H3><U></U>Training</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 248 Input Value 0 = 
				0.580523407357057 Input Value 1 = 0.856571342263637 Output Value 0 = 1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 249 Input Value 0 = 
				0.144502778139199 Input Value 1 = 0.420550713045779 Output Value 0 = 1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 250 Input Value 0 = 
				0.708482148921342 Input Value 1 = 0.984530083827921 Output Value 0 = 1</FONT>
		<P>
		The above is part of a training run for the Adaline One network and shows the 
		first stage of the output for the program. The Pattern ID value is the 
		identifier number of the pattern and has no bearing on the program's final 
		calculations. The input values are the values that are being loaded into the 
		patterns array at the time, because this section of the output is generated 
		while the program is loading the patterns array in preparation for training, 
		called from the LoadAdalineTrainingFile function. This means that the Input 
		Value at 0, the Input Value at 1 and the Output Value at 0 should be exactly 
		as they are in the training file the Adaline One network uses ( lin2var.trn ).
		<P>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Learn called at number 0 Pattern value = 
				1 Neuron value = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Iteration number 8 produced 0 Good values 
				out of 250 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Learn called at number 2 Pattern value = 
				-1 Neuron value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Iteration number 9 produced 2 Good values 
				out of 250 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Learn called at number 0 Pattern value = 
				1 Neuron value = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Iteration number 10 produced 0 Good 
				values out of 250</FONT>
		<P>
		The next section of output comes from within the main training 
		loop and indicates where the Learn function had to be called because the 
		output returned from the network differed from that provided in the training 
		file. The values output by the program indicate how far through the training 
		array the code had got before learn was called. In the case of the first 
		line above, the network immediately returned a wrong answer on the very first 
		item to be fed into it. The next line shows how many times the network has 
		cycled through the training data; remember this is not always a complete run, 
		as the Adaline network aborts the current iteration when it detects an 
		incorrect answer and moves on to the next iteration.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif"><PRE>Pattern 223 Input = ( 0.955437682548276,0.938066999864796 ) Adaline = -1 Actual = -1
</PRE>
			</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif"><PRE>Pattern 224 Input = ( 0.519417053330418,0.795464988236998 ) Adaline = 1 Actual = 1
</PRE>
			</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif"><PRE>Pattern 225 Input = ( 0.0833964241125604,0.664746477578183 ) Adaline = 1 Actual = 1
</PRE>
			</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif"><PRE>Pattern 226 Input = ( 0.647375794894703,0.228725848360325 ) Adaline = -1 Actual = -1</PRE>
			</FONT>
		<P>
		The final part of the training for the Adaline One network is to run the 
		pattern array that was used to train the network through a saved version of the 
		trained network, loaded into a completely new Adaline network, and then 
		see what results it gives. As it was trained on this data we expect perfection. 
		The output shows the values entered into the network in brackets, followed by 
		the answer returned by the Adaline network and the required output. The answer 
		should be -1 if the input value on the left is higher than the input value on 
		the right, and 1 if the input value on the right is greater than the 
		input value on the left.
		<P></P>
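		<P>
		As a minimal illustration of that rule, this is roughly how one entry of such 
		a training file can be produced; the names are illustrative rather than taken 
		from the library:
		</P>
		<PRE>
/// two random inputs and the answer the Adaline is expected to learn
Random rand = new Random( unchecked( ( int )DateTime.Now.Ticks ) );
double dFirst = rand.NextDouble();
double dSecond = rand.NextDouble();
int nExpected = dFirst &gt; dSecond ? -1 : 1; /// -1 if the left value is higher
		</PRE>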
		<H3>Running</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Adaline File Generated</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 251 Input Value 0 = 
				0.421976388628584 Input Value 1 = 0.00332644209420608 Output Value 0 = -1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 252 Input Value 0 = 
				0.985955759410726 Input Value 1 = 0.567305812876348 Output Value 0 = -1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 253 Input Value 0 = 
				0.549935130192868 Input Value 1 = 0.131285183658491 Output Value 0 = -1</FONT>
		<P>
		When running the Adaline One network, it starts by generating a completely new 
		file of values to test, to ensure that the values the network is run 
		against are different from those it was trained with. The network starts by 
		loading the pattern array, producing output identical to that shown for 
		training above.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 0 Input = ( 
				0.421976388628584,0.00332644209420608 ) Adaline = -1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 1 Input = ( 
				0.985955759410726,0.567305812876348 ) Adaline = -1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 2 Input = ( 
				0.549935130192868,0.131285183658491 ) Adaline = -1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 3 Input = ( 
				0.712635237124113,0.695264554440633 ) Adaline = -1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 4 Input = ( 
				0.276614607906255,0.259243925222775 ) Adaline = -1 Actual = -1</FONT>
		<P>
		Once the network has loaded the patterns array it then runs the patterns 
		against the loaded network and outputs the details as in the training code 
		above.
		<P>
			<H2><U>Finally</U></H2>
		<P>
		As I've stated before, the Adaline network, well, neuron, is a simple neural 
		network and as such has its limitations, the most obvious being the fact that 
		the answers are input along with the questions. Its main purpose is simple 
		classification where the desired output is known and there is a high volume of 
		data to be processed. Most of the early neural networks we will come across are 
		geared toward simple solutions applied to large volumes of data, with the 
		classification of patterns within the data being the task of the network, though 
		this classification will get more complicated even within the next few projects 
		that we look at.
		<P>
		The next system we will look at is the Back Propagation network, which will 
		expand on the things that we have learnt here. This will be in the article 
		Neural Dot Net 5 The Back Propagation Network, as Neural Dot Net 4 deals with 
		the testing program provided with the download.
		<P>
		<P>
		<P></P>
		<H2 align="center"><FONT size="7"><U>Neural .Net pt 4</U></FONT></H2>
		<H2 align="center"><FONT size="7"><U>Neural Net Tester</U></FONT></H2>
		<P>
		The neural net tester is the application that comes with the neural net library 
		and serves to demonstrate that all the networks implemented in the library do 
		exactly what I say they will do, in a provable and repeatable manner. This 
		article is a general view of the testing application; there are a few ideas 
		floating about for additions, and at the time of writing there is no guarantee 
		that components won't be rewritten, completely removed or added. So it is more 
		efficient to have a separate piece dealing with the testing suite so I don't 
		have to rewrite every article whenever I make a change.
		<P>
		Also, each network article will independently describe how to run the tests for 
		the specified network. The reason for this is that it keeps all the information 
		as local as possible, as mistakes will be made if people start getting confused 
		about where the information on how to test the program is.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/NeuralNetTesterPackages.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The testing application utilizes the Internet Explorer control for displaying 
			html pages, which is used by the LogViewer component. All my packages use the 
			SharpUtils DLL, which in this case primarily contains the classes used for 
			logging.
		</P>
		<H2><U>The Neural Network Tester Program</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/3e65d686.jpg" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		This is the main testing program for all of the neural network tests that are 
		provided with this package. These tests vary from straight CSharp 
		implementations of the C++ code in Joey Rogers' book "Object Oriented Neural 
		Networks In C++", to experimental work which I mostly made up as I went along, 
		to the implementation of algorithms as described in Simon Haykin's "Neural 
		Networks A Comprehensive Foundation".
		<P>The main program allows the running of the test networks, and all tests are run 
			on separate threads so that the main application can retain some responsiveness 
			while a neural network test is running. The reasoning behind this is that 
			sometimes the running of the networks can take hours, and at times, if they are 
			training, they may never satisfactorily complete their training code. There are 
			various reasons why this can happen, and all are noted in the parts that 
			deal with the relevant networks. The threading code is quite simply,
		</P>
		<PRE>
threadStart = new ThreadStart( LoadAndRunAdalineWord );
thread = new Thread( threadStart );

thread.Start();
</PRE>
		<P>
		The ThreadStart and Thread objects are members of the form class, and they work 
		by passing the name of the thread function to the ThreadStart object and then 
		passing the ThreadStart object to the Thread object. Once Start is called on 
		the thread, the function that was passed into the ThreadStart object begins 
		executing. All of the tests in this program use this mechanism.
		<P>The Network Output Screen shown above is a simple rich edit box that displays 
			any messages output by the program, with each program being written so that it 
			gives a continuous run down of its current progress. The picture above shows 
			the start of the output for the training of the adaline one network.
		</P>
		<H2><U>The Menus</U></H2>
		<P><STRONG>&nbsp;File.</STRONG>
		&nbsp; This is the standard idea of a file menu in that it allows you to exit 
		the program or to stop the currently executing thread.
		<P><STRONG>Generate. </STRONG>
		The Generate menu allows you to generate the training files for each individual 
		program where that option is available.
		<P><STRONG>Run.</STRONG>
		&nbsp; The Run menu allows the running of the networks using previously saved 
		networks that are distributed with the release.
		<P><STRONG>Train.</STRONG>
		&nbsp; The Train menu allows you to see how each different network is trained. 
		Some of these can take a long time and in this section I have tried to make 
		them train within a reasonable time. This in some cases has meant sacrificing 
		accuracy for training speed and is not a method recommended for any networks 
		that are likely to be used in production, but then the idea is that you would 
		train the network before putting it into production and load the previously 
		trained network for the production code.
		<P><STRONG>Options. </STRONG>The options menu is simply there for research: by 
			altering things such as the learning rates and the tolerance for errors ( 
			BackPropagation network ), it is possible to see how to improve the training 
			times and the accuracy of the training.
		</P>
		<H2><U>The Log Viewer Component</U></H2>
		<P>
		<P>
		As with the files for the networks all logging is written to an xml file. This 
		is usually "Neural Network Tester.xml" for any networks that are being run 
		using the train menu and to a file called "Load and Run ... " for files 
		generated by programs run from the "Run" menu. The xml log file to be viewed is 
		then selected via the browse menu and the code then parses the file and writes 
		out html files for each class that has written errors to the file before 
		displaying them as tables in separate tab pages.
		<P>If the progress option is turned on, there are going to be sometimes ridiculous 
			amounts of data written to the log file in a very short time ( the standard 
			size for a "Load and run BackPropagation 2" is over a gig ) and it can take the 
			code some time to process the generated xml file and turn it into the html 
			pages. So unless you are developing, it's a good idea to only have this set 
			to warnings and errors. The option for the logging code is set in the main form 
			constructor
		</P>
		<PRE>
debugLevel = new DebugLevel( DebugLevelSet.WarningsAndErrors );
</PRE>
		<P>The Debug Level can be tested at any point during the code by calling the 
			TestDebugLevel function like so,
		</P>
		<PRE>
if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
{
	log.Log( DebugLevelSet.Progress, "Returning original bias of " + NodeValues[ 0 ].ToString(), ClassName );
}
</PRE>
		<P>Once the debugging level has been set then any other time that a class wishes to 
			use the debugging facility it sets up the DebugLevel like so,
		</P>
		<PRE>
debugLevel = new DebugLevel( DebugLevel.currentLevel );
</PRE>
		<P>
		This uses the DebugLevel's current level to set the debug level for the class. It 
		is possible to set the debug level so that classes have different debug levels; 
		although I rarely do this in practice, it is nice to know it's there in case I 
		need it.
		<P>The LogViewer can also get information from the system event logs if an Event 
			Log Name is specified in the box provided. By Event Log Name I do not mean 
			any of the Windows system event logs, as the Logging class within the 
			SharpUtils DLL has the ability to write to custom event logs, to an 
			xml file, or to both.
		</P>
		<H2><U>Design Decisions</U></H2>
		<P>
		The reason that the LogViewer is implemented with the list of files on the 
		right hand side is due to the way that CSharp and the editor implement splitter 
		bars. The original idea was to place the file list in the standard position on 
		the left, but to get the splitter to work you have to declare the item on the 
		left first, dock it with a fill style, dock the item on the right to the 
		right, and then add the splitter and dock it to the right as well. The 
		only problem with this is that the item on the right is then fairly static: 
		if you expand the application to full screen, the left hand side of the 
		splitter expands and the right hand side stays the same size. This meant that 
		the file list was going about two thirds of the way across the screen while 
		the view that contained the html page was taking up a third of the screen. This 
		was far too irritating to be allowed to continue, so I switched the two the 
		other way round.
		<P>
		The obvious question here is surely you should be able to make the view on 
		the right dock as fill and then set the one on the left as docked to the left. 
		You would think that this would work, but every time I've tried it the splitter 
		control just docks itself to the left edge of the parent and is completely 
		useless when the application is run. I assume that in later versions of 
		Developer Studio .NET this situation will be improved, with either a user 
		interface that does what you want or at the very least someone giving a clear 
		explanation of what the rules are, but until that happens we'll just have to 
		work around it.
		<P>
		<P>
		As with the main part of the program, the LogViewer uses a thread to do its 
		processing. This is done so that application responsiveness is maintained and 
		the whole system doesn't freeze while the LogViewer goes through its main 
		loop, which, as you can appreciate when processing files that can be over a 
		gigabyte, can take some time. The program keeps an eye on the 
		thread through the use of a timer that checks to see if the thread is still 
		active. There are other ways of doing this, but all the ones I tried froze the 
		system, i.e.
		<P>
		Using a loop like
		<P>
		<P>
		</P>
		<PRE>
while( thread.IsAlive == true );
</PRE>
		<P>
		has exactly the same effect as not using a thread. There are other ways of 
		checking the thread, such as checking the thread state, but these had the same 
		freezing effect when placed in a loop. So I use a timer that checks if the 
		thread is alive every so often, and when the thread is finished it goes on to 
		build the list of available html files on the right hand side bar of the 
		program, along the lines of the sketch below.
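		<P>
		Here is a minimal sketch of that timer arrangement, assuming a Windows Forms 
		Timer on the form; the handler and the helper that fills the side bar are 
		illustrative names, not the program's actual code:
		</P>
		<PRE>
private System.Windows.Forms.Timer timer;

private void StartWatching()
{
	timer = new System.Windows.Forms.Timer();
	timer.Interval = 500; /// poll the worker thread twice a second
	timer.Tick += new EventHandler( OnTimerTick );
	timer.Start();
}

private void OnTimerTick( object sender, EventArgs e )
{
	if( thread.IsAlive == false )
	{
		timer.Stop();
		BuildHtmlFileList(); /// hypothetical helper that fills the side bar
	}
}
		</PRE>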
		<P>
		<P>
		The design for the LogViewer originally allowed it to be more generic than it 
		is turning out to be. Originally the LogViewer could check the registry to read 
		the event logs and place the contents of the event logs as html files along with 
		the files generated from the xml. This facility has largely been marginalised 
		by the fact that the Neural Network Tester doesn't use the registry; maybe when 
		I rewrite it for the fourth time I'll put it back in. So, as always with these 
		things, it is becoming more application specific than I originally would have 
		wanted.
		<P>
		<P>
		One final note: the LogViewer indiscriminately deletes all html 
		files from the working directory before it creates the new ones, so you should 
		avoid placing any html documents in this folder.
		<P>
		<P></P>
		<H2 align="center"><FONT size="7"><U>Neural .Net pt 5</U></FONT></H2>
		<H2 align="center"><FONT size="7"><U>The Adaline Word Network</U></FONT></H2>
		<P>The Adaline Word Network is an experimental network of my own. It came about 
			when I was wondering if a network could be made to understand words. Not as in 
			being able to give a dictionary definition, well not yet anyway, but as separate 
			items of data. Of course the main problem with this was that the networks all 
			function through numbers, so I had to come up with a way of getting words to be 
			represented by unique numbers. The idea I came up with was that each character 
			of the word is represented on the computer as an ASCII value, so all I had to do 
			was use the value for each letter. But then there was the problem that certain 
			words would amount to the same value, which required a way of changing the 
			letter values enough that no two words could arrive at the same value. The 
			way I did this was to multiply each letter in the word by the value of its 
			position in the word, so the first letter would be its character value 
			multiplied by one and so on. I still think it's possible that two words will 
			come up with the same value, but it gives me a starting point to try some 
			experiments and see where it ends up.
		</P>
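		<P>
		As a quick worked example of the scheme, take the word "cat": 'c' is ASCII 99, 
		'a' is 97 and 't' is 116, so the word's value is 99*1 + 97*2 + 116*3 = 641 
		which, after the decimal shift described in the Fun And Games section below, 
		becomes the 0.0641 that appears against "cat" in the output later in this 
		article.
		</P>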
		<H2><U>The Adaline Word Network</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/AdalineWordClassDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<H2><U>The Adaline Word Node Class</U></H2>
		<P>The AdalineWordNode class inherits from the AdalineNode class and overrides the 
			TransferFunction, changing the test so that it checks whether the total value 
			generated by the run function is less than 0.5. Other than this ( and the 
			saving and loading code ) the AdalineWordNode uses the code from the 
			AdalineNode class.
		</P>
		<PRE>
if( dValue &lt; 0.5 )
	return -1.0;
return 1.0;		
</PRE>
		<H2><U>The Adaline Word Link Class</U></H2>
		<P>The AdalineWordLink class inherits from the AdalineLink class and apart from the 
			saving and loading code the class makes one change and that is to set the 
			starting weight for the link to a random value between 0 and 1 instead of 
			between -1 and 1. ( See Fun And Games section for an explanation )
		</P>
		<PRE>
arrayLinkValues[ Values.Weight ] = Values.Random( 0, 1 );		
</PRE>
		<H2><U>The Adaline Word Neuron Class</U></H2>
		<P>The AdalineWordNeuron class inherits directly from the Basic Neuron class and 
			the only&nbsp;changes are to allow it&nbsp;to use the Adaline Word&nbsp;Link 
			and Adaline Word Node classes.
		</P>
		<H2><U>The Adaline Word Pattern Class</U></H2>
		<P>The AdalineWordPattern class inherits directly from the Pattern class and slightly 
			changes the way in which the class works. This is necessary as the pattern 
			array now holds words and not values. These words need to be converted to 
			values, and this is done through the GetInSetAt function, which contains the 
			code,
			<PRE>
double dValue = 0;
string strTemp = arrayInSet[ nID ].ToString();
for( int i=0; i&lt;strTemp.Length; i++ )
{
	dValue += strTemp[ i ] * ( i+1 );
}

/// move decimal place
dValue = dValue / 10000;
</PRE>
		<P>This gives me a double value for the word which will be mostly unique.
		</P>
		<H2><U>The OnDoAdaline2 Function</U></H2>
		<P>
		As they both use the same algorithm, the OnDoAdaline2 function is very similar 
		to the function that creates the first Adaline network.
		<P></P>
		<PRE>
FileInfo info = new FileInfo( "Neural Network Tester.xml" );
if( info.Exists == true )
{
	info.Delete();
}

log = new Logger( "Neural Network Tester.xml", "NeuralNetworkTester", true );
ArrayList patterns = LoadAdaline2TrainingFile();

/// create the adaline network
AdalineWordNeuron neuron = new AdalineWordNeuron( log, new BasicNode( log ), new BasicNode( log ), new BiasNode( log ), new AdalineWordNode( log, dLearningRateOfAdalineTwo ) );

/// train the adaline network
int nIteration = 0;
int nGood = 0;

while( nGood &lt; nNumberOfItemsInAdalineWordFile )
{
	nGood = 0;
	for( int i=0; i&lt;nNumberOfItemsInAdalineWordFile; i++ )
	{
		netWorkText.AppendText( "Setting the Node Data to, Pattern " + i.ToString() + " word 1 = " + ( ( AdalineWordPattern )patterns[ i ] ).InputValue( 0 ) +
			" value = " + ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 0 ).ToString() + " word 2 = " + ( ( AdalineWordPattern )patterns[ i ] ).InputValue( 1 ) +
			" value = " + ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 1 ).ToString() + " output value = " + ( ( AdalineWordPattern )patterns[ i ] ).OutSet[ 0 ].ToString() + "\n" );
		neuron.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 0 ) );
		neuron.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 1 ) );

		neuron.Node.Run( neuron.Node.Values.NodeValue );

		/// if the output value generated by run is not the same as the output value
		/// in the training file then it is an error
		if( ( ( Pattern )patterns[ i ] ).OutputValue( 0 ) != neuron.Node.GetValue( neuron.Node.Values.NodeValue ) )
		{
			/// run the learn function
			log.Log( DebugLevelSet.Errors, "Learn called at number " + i.ToString() + " Pattern value = " +  ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() + " Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ), "Form1" );
			netWorkText.AppendText( "Learn called at number " + i.ToString() + " Pattern value = " +  ( ( Pattern )patterns[ i ] ).OutputValue( 0 ).ToString() + " Neuron value = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ) + "\n" );
			neuron.Node.Learn();
			break;
		}
		else
			nGood++;
	}


	log.Log( DebugLevelSet.Progress, "Iteration number " + nIteration.ToString() + " produced " + nGood.ToString() + " Good values out of " + nNumberOfItemsInAdalineWordFile.ToString(), "Form1" );
	netWorkText.AppendText( "Iteration number " + nIteration.ToString() + " produced " + nGood.ToString() + " Good values out of " + nNumberOfItemsInAdalineWordFile.ToString() + "\n" );
	nIteration++;
}

/// run the training sample
for( int i=0; i&lt;nNumberOfItemsInAdalineWordFile; i++ )
{
	neuron.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 0 ) );
	neuron.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 1 ) );

	neuron.Node.Run( neuron.Node.Values.NodeValue );

	netWorkText.AppendText( "Pattern " + i.ToString() + " Input = ( " + ( string )( ( AdalineWordPattern )patterns[ i ] ).InSet[ 0 ]  + "," + ( string )( ( AdalineWordPattern )patterns[ i ] ).InSet[ 1 ] +
		" ) Adaline = " + neuron.Node.GetValue( neuron.Node.Values.NodeValue ) + " Actual = "  + ( ( AdalineWordPattern )patterns[ i ] ).OutSet[ 0 ].ToString() + "\n" );
}

FileStream xmlstream = new FileStream( "adalinewordnetwork.xml", FileMode.Create, FileAccess.Write, FileShare.ReadWrite, 8, true );
XmlWriter xmlWriter = new XmlTextWriter( xmlstream, System.Text.Encoding.UTF8 );
xmlWriter.WriteStartDocument();

neuron.Save( xmlWriter );

xmlWriter.WriteEndDocument();
xmlWriter.Close();

/// now load the file
FileStream readStream = new FileStream( "adalinewordnetwork.xml", FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 8, true );
XmlReader xmlReader = new XmlTextReader( readStream );

/// create the adaline network
AdalineWordNeuron neuron2 = new AdalineWordNeuron( log, new BasicNode( log ), new BasicNode( log ), new BiasNode( log ), new AdalineWordNode( log ) );
neuron2.Load( xmlReader );
xmlReader.Close();

for( int i=0; i&lt;nNumberOfItemsInAdalineWordFile; i++ )
{
	neuron2.InputNodeOne.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 0 ) );
	neuron2.InputNodeTwo.SetValue( neuron.Node.Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( 1 ) );

	neuron2.Node.Run( neuron.Node.Values.NodeValue );

	netWorkText.AppendText( "Pattern " + i.ToString() + " Input = ( " + ( string )( ( AdalineWordPattern )patterns[ i ] ).InSet[ 0 ]  + "," + ( string )( ( AdalineWordPattern )patterns[ i ] ).InSet[ 1 ] +
		" ) Adaline = " + neuron2.Node.GetValue( neuron2.Node.Values.NodeValue ) + " Actual = "  + ( ( AdalineWordPattern )patterns[ i ] ).OutSet[ 0 ].ToString() + "\n" );
}
</PRE>
		<P>
		As you can see, the code here is very similar to the code that trains the 
		first Adaline network. The code loops through the number of words in the 
		adaline word file and trains the network by calling learn whenever run does 
		not get the answer correct.
		<P>The next section runs the training variables through the code again to make sure 
			that it has learned its task properly. I added this because originally the 
			code was running until it got everything correct but then getting 
			everything wrong when I loaded the file and ran it again; this turned out 
			to be because the load code was not loading the link values correctly. 
			Finally the code saves the network, loads it into a new 
			neuron, and runs the data through the new neuron, outputting its responses 
			to the display window.
		</P>
		<H2><U>Training</U></H2>
		<P>
		As mentioned before, the Adaline Word network relies heavily on the Adaline 
		network, so the picture that depicts the training for the Adaline network is 
		valid here
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Inside The Adaline Run Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		This shows the way in which the run function processes its data by going 
		through the input data and multiplying it by the weight value.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Inside Adaline Transition Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The above shows the transfer function for the adaline word network; apart from 
			the comparison being for less than 0.5 there is no difference.
		</P>
		<H2><U>Saving And Loading</U></H2>
		<P>As with the rest of the Neural Network Library the Adaline Word Network is saved 
			as an xml file to the disk so that once trained it can be used and loaded at 
			will.
		</P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;AdalineWordNeuron&gt;
	&lt;BasicNeuron&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;0&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.2175&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;/BasicNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;1&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.114&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;/BasicNode&gt;
		&lt;BiasNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;2&lt;/Identifier&gt;
				&lt;NodeValue&gt;1&lt;/NodeValue&gt;
				&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;/BasicNode&gt;
		&lt;/BiasNode&gt;
	&lt;/BasicNeuron&gt;
	&lt;AdalineWordNode&gt;
		&lt;AdalineNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;3&lt;/Identifier&gt;
				&lt;NodeValue&gt;-1&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.223333&lt;/NodeValue&gt;
				&lt;NodeError&gt;-2&lt;/NodeError&gt;
			&lt;/BasicNode&gt;
		&lt;/AdalineNode&gt;
	&lt;/AdalineWordNode&gt;
	&lt;AdalineWordLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;4&lt;/Identifier&gt;
			&lt;LinkValue&gt;-3.70488132032844&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineWordLink&gt;
	&lt;AdalineWordLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;5&lt;/Identifier&gt;
			&lt;LinkValue&gt;5.06800087718808&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineWordLink&gt;
	&lt;AdalineWordLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;6&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.184749753453698&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;2&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineWordLink&gt;
&lt;/AdalineWordNeuron&gt;		
</PRE>
		<H2><U>Testing</U></H2>
		<P>
		For the purposes of testing, the Adaline Word class appears as the Adaline 2 
		network on the menus. Its option on the Train menu is the Train Adaline 2 
		option, which will run the code listed above. On the Generate menu 
		there is an option to generate an adaline working file that is saved as 
		AdalineWordWorkingFile.wrk. This generate operation reads all the words from 
		the adaline word file, which is just a text file that contains the words that 
		the adaline will use.
		<P>
		The adalinewordfile.dat file that contains the words the adaline sample 
		uses to generate a file can be added to through the options for the 
		Adaline 2 program in the Options menu, although there is nothing to prevent 
		anyone from just opening adalinewordfile.dat and editing it in notepad, as 
		it is a simple text file.
		<P>
		The generate operation will then read all the words from the file and create 
		the AdalineWordWorkingFile.wrk by randomly selecting two words from the file 
		and calculating the desired output before writing all the information to the 
		file ( a sketch of the idea is shown below ). This information is in exactly 
		the same format as the adword.trn file that is used to train the adaline word 
		network, so if you fancy changing the training data to see what happens, 
		simply cut and paste the contents between the files.
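		<P>
		A hypothetical sketch of that selection and calculation step, assuming the 
		word list is held in an ArrayList called words and that WordValue stands in 
		for the letter-position encoding shown in the GetInSetAt listing earlier:
		</P>
		<PRE>
Random rand = new Random( unchecked( ( int )DateTime.Now.Ticks ) );
string strFirst = ( string )words[ rand.Next( words.Count ) ];
string strSecond = ( string )words[ rand.Next( words.Count ) ];
double dFirst = WordValue( strFirst );   /// hypothetical encoding helper
double dSecond = WordValue( strSecond );
/// mirror the rule the network is trained on: -1 if the first word
/// encodes higher, 1 otherwise
int nExpected = dFirst &gt; dSecond ? -1 : 1;
		</PRE>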
		<P>The quick guide is
		</P>
		<UL>
			Menu :- Generate/Generate Adaline Two Working File :- Generates the file that 
			is used for the adaline Load and run menu option</UL>
		<UL>
			Menu :- Run/Load And Run Adaline Two :- Loads the Adaline Word saved network 
			from the disk and then runs it against the adaline word working file.</UL>
		<UL>
			Menu :- Train/Train Adaline 2 :- Trains the network from scratch using the 
			current adword.trn adaline word training file and then saves it to disk.</UL>
		<UL>
			Menu :- Options Adaline 2 Options :- Brings up a dialog that allows you to set 
			certain parameters for the running of the adaline word network, as well as 
			containing a facility to allow you to add words to the adaline word file. ( 
			AdalineWordFile.dat )</UL>
		<P>
		</P>
		<H2><U>Options</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/AdalineTwoOptions.jpg" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		The main options that can be set for the Adaline Two network are the number of 
		items in the file, which is set by default at 100, and the learning rate. There 
		is also provision to allow testing using a bias value, which is a value of 1 
		added at the transfer function, and an option to add more words to the 
		file that the adaline two network uses.
		<P>
			<IMG SRC="Neural_Dot_Net/Addwordtoadalinetwofile.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P></P>
		<H2><U>Understanding The Output</U></H2>
		<H3><U></U>Training</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 1 Input Value 0.1587 = metal 
				Input Value 0.0616 = red Output Value metal = -1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 2 Input Value 0.2215 = 
				rabbit Input Value 0.114 = slow Output Value rabbit = -1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 3 Input Value 0.0641 = cat 
				Input Value 0.1594 = steel Output Value steel = 1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 4 Input Value 0.1074 = wood 
				Input Value 0.1611 = white Output Value white = 1</FONT>
		<P>
		As with the previous Adaline network, the Adaline Word network begins by loading 
		the pattern array with the values to be put to the network, the only 
		difference being that the values in this case are words. The output shows 
		the word that is being put to the network and its corresponding calculated 
		value.
		<P>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Iteration number 7 produced 2 Good values 
				out of 100</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting the Node Data to, Pattern 0 word 
				1 = metal value = 0.1587 word 2 = red value = 0.0616 output value = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting the Node Data to, Pattern 1 word 
				1 = rabbit value = 0.2215 word 2 = slow value = 0.114 output value = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting the Node Data to, Pattern 2 word 
				1 = cat value = 0.0641 word 2 = steel value = 0.1594 output value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Learn called at number 2 Pattern value = 
				1 Neuron value = -1</FONT>
		<P>
		The above shows a portion of the training output for the Adaline Word network as 
		it finishes one iteration and begins another. As with the previous Adaline 
		network example, on discovering an error the learn function is called and a new 
		iteration is started.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Iteration number 250 produced 100 Good 
				values out of 100</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 0 Input = ( metal,red ) Adaline = 
				-1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 1 Input = ( rabbit,slow ) Adaline 
				= -1 Actual = -1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 2 Input = ( cat,steel ) Adaline = 
				1 Actual = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 3 Input = ( wood,white ) Adaline 
				= 1 Actual = 1</FONT>
		<P>Once the network has successfully trained with the examples it then saves the 
			network, loads it into a completely new network object, and performs a 
			test with the same data to prove that everything has worked correctly.
		</P>
		<H3>Running</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif"></FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Generating Adaline Word File... Please 
				Wait</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Adaline File Generated</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 101 Input Value 0.2348 = 
				yellow Input Value 0.0617 = and Output Value yellow = -1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern ID = 102 Input Value 0.1091 = 
				lion Input Value 0.1536 = black Output Value black = 1</FONT>
		<P>
		When you load and run the Adaline Word network, the code generates a new file to 
		ensure that the running data is different from the training data; once the data 
		is loaded it then runs it against the network. Note that for the adaline word 
		network the data being loaded into the pattern array is not output to the 
		screen, so the only output is the final results of the run.
		<P>
		</P>
		<H2><U>Fun And Games</U></H2>
		<P>
		Implementation of this network has not been easy. From the start the first 
		problem was how to define a way that a word could create a unique number, and 
		then how to crowbar it into the learning algorithm. The original 
		idea was to give each letter a value based on its ASCII value 
		multiplied by its place in the word. This gives me a good starting point, but I 
		end up with a number that is something like 631, which is a bit outside the 
		test range of between -1 and 1. So the idea from there was just to stick a "0." 
		before the number, which can be made to work, sort of, apart from the minor 
		technical detail that if a value came out at 1100, once it had the "0." stuck 
		in front of it, it would be considered smaller by the code than the value 
		0.631. This certainly wasn't the desired result and was generally too 
		confusing.
		<P>
		A solution to this was to divide the resulting number by 10,000, which means 
		that the value 631 resolves to 0.0631 and the value 1100 resolves 
		to 0.11. This preserves the integrity of the initial values, so that the 
		value that was originally the highest is still the highest and 
		the values remain linearly separable, which is of great importance 
		when dealing with an adaline network as it only works correctly on linearly 
		separable values.
		<P>
		Next, seeing as the transfer thresholds of -1 and 1 were out because the 
		numbers would never be negative, the transfer function was changed so that it 
		tests against values between 0 and 1. The only problem with this was 
		trying to set the learning rate to a value sufficient to distinguish 
		between two numbers where the difference could be as low as 0.0001. This 
		presents a problem in the training of the network, although not in the running 
		of it. Once the network has learnt what it is supposed to do it works 
		fine, but during training it can get stuck on numbers that are very close 
		together. The quickest solution is to generate a new training file by 
		generating a test file and simply cutting and pasting the contents of the file 
		into the training file ( adalineword.trn ), though I have to admit that this 
		sometimes just takes lots of patience. Unless you are really interested in 
		watching numbers move across the screen, the best way to test this program is 
		to use the Load And Run Adaline Two option from the Run menu.
		<P>
		<P>
		<P>
		<P>
		</P>
		<H2 align="center"><FONT size="7"><U>Neural .Net pt 6</U></FONT></H2>
		<H2 align="center"><FONT size="7"><U>Changes To The Basic Classes</U></FONT></H2>
		<P>
		As I mentioned earlier, things were unlikely to remain the same way for long, 
		and with the finishing of the code and articles for the first release showing 
		the Adaline networks I took the opportunity to fix a couple of points in the 
		library design that were bugging me. The following is the new class diagram for 
		the basic classes.
		<P>
		I should point out here that the first release was never actually released. The 
		original plan was for four releases, each containing two neural network samples 
		of the same kind of network. On reflection I felt that the first release was 
		lacking somewhat in dramatic impact and was too basic to really give an 
		impression of what neural networks and the accompanying library were capable 
		of. This I feel has been fixed by withholding any releases until what was 
		originally planned as the second release; this, and the inclusion of the 
		BackPropagation Word network, gives a much stronger picture of the 
		capabilities of both neural networks and the accompanying library.
		<P ALIGN="center">
		<P><IMG SRC="Neural_Dot_Net/changedBasicClassDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		As you can see, the Values class has been removed; it is now a structure that 
		contains constant values and as such is accessible from anywhere in the code, 
		without having to go through any other classes to get to it as you had to in 
		the initial release.
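		<P>
		Purely as an illustration of the shape this takes ( the member names are 
		assumptions based on how the code indexes its value arrays, not a copy of the 
		real structure ):
		</P>
		<PRE>
public struct Values
{
	public const int NodeValue = 0; /// index into a node's values array
	public const int NodeError = 0; /// index into a node's errors array
	public const int Weight = 0;    /// index into a link's values array
}
		</PRE>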
		<P>
		The BiasNode class has also been changed and renamed, as its implementation was 
		just plain nasty in the first version of the code. It has been changed so that 
		it is no longer a separate node but is implemented as a part of the Basic Node 
		class. All the save and load code has been changed to reflect the changes in 
		the base classes, and the examples from release one now work with the new 
		classes.
		<P>The most obvious changes are to the saving and loading of the xml data files. 
			Here is the file for the adaline network.
		</P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8" ?&gt;
&lt;AdalineNeuron&gt;
	&lt;BasicNeuron&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;0&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.102577325004422&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;Bias&gt;
				&lt;BiasValue&gt;1&lt;/BiasValue&gt;
			&lt;/Bias&gt;
		&lt;/BasicNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;1&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.378625259911001&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;Bias&gt;
				&lt;BiasValue&gt;1&lt;/BiasValue&gt;
			&lt;/Bias&gt;
		&lt;/BasicNode&gt;
	&lt;/BasicNeuron&gt;
	&lt;AdalineNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;2&lt;/Identifier&gt;
			&lt;NodeValue&gt;1&lt;/NodeValue&gt;
			&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
			&lt;NodeError&gt;-2&lt;/NodeError&gt;
			&lt;Bias&gt;
				&lt;BiasValue&gt;1&lt;/BiasValue&gt;
			&lt;/Bias&gt;
		&lt;/BasicNode&gt;
	&lt;/AdalineNode&gt;
	&lt;AdalineLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;3&lt;/Identifier&gt;
			&lt;LinkValue&gt;-0.652531512432047&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;2&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineLink&gt;
	&lt;AdalineLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;4&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.656796696482597&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;2&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/AdalineLink&gt;
&lt;/AdalineNeuron&gt;		
</PRE>
		<P>
		As you can see from the above, the big change concerns the bias, which is now 
		present in every node item in the file. Although the implementation of the 
		bias still follows the implementation in release one, the way it will be done 
		in future is by using the BasicNode UseBias function, which means that in the 
		transfer function any node can automatically check whether it is 
		supposed to use the bias or not, without the need for the crowbar techniques 
		I used to get the bias value to the transfer function in the first release of 
		the code.
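		<P>
		In other words, a transfer function written against the new classes can be 
		expected to look roughly like this sketch ( the names and the exact form of 
		the UseBias check are assumptions; the threshold shown is the original Adaline 
		one ):
		</P>
		<PRE>
protected virtual double TransferFunction( double dValue )
{
	if( UseBias == true )    /// the check described above, exact signature assumed
		dValue += BiasValue; /// the bias defaults to a value of one
	if( dValue &lt; 0 )
		return -1.0;
	return 1.0;
}
		</PRE>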
		<P>
		Note also that the adaline word save file looks similar to the file above. 
		Check the debug directory for the real adalinewordnetwork.xml file.
		<P>When it comes to using the code these changes are barely noticeable. The bias 
			changes will only be felt when developing with the library to create a new 
			network and implementing the transfer function, and with the Values class 
			being changed into a struct it will just be a lot simpler to use.
		</P>
		<P>
		<P>
		</P>
		<H2 align="center"><FONT size="7"><U>Neural .Net pt 7</U></FONT></H2>
		<H2 align="center"><FONT size="7"><U>The&nbsp;Backpropagation Network</U></FONT></H2>
		<P>The BackPropagation network gets its name from the way that the learning is 
			done: learning starts at the Learn function in the output nodes and 
			proceeds backwards through the nodes, updating the weights on the links as 
			it goes. This example is based on chapter 5 of Joey Rogers' Object Oriented 
			Neural Networks in C++ book and has been expanded with the provision of the 
			facility to generate test data and then run the data through the trained 
			network.
		</P>
		<H2><U>The BackPropagation Network</U></H2>
		<P>
		There are five new classes to introduce with the BackPropagation program, most 
		of which inherit directly from classes that have been seen previously. 
		Structurally the network is slightly more complicated than in previous 
		examples&nbsp;because we&nbsp;are&nbsp;now starting to look at 
		networks that contain layers of nodes. These layers&nbsp;include what are 
		called "hidden" nodes, which are part of the actual network but are not 
		directly accessed by code from the running program, lying instead between the 
		input and the&nbsp;output nodes.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/BackPropagationOneDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		<P></P>
		<IMG SRC="Neural_Dot_Net/Backpropagationclasses.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<H2><U>The Back Propagation Network Class</U></H2>
		<P>
		The Back Propagation Network class is used for building and controlling the 
		network and inherits from the Basic Network class. It expands on the 
		basic network class in that it must be capable of providing layers for the 
		network and must know where those layers are. I say this because the nodes are 
		still stored in a single array list; the layers are a conceptual construct in 
		that they only exist where the code says they are. There is no attempt to build 
		a code hierarchy that models the diagram of the BackPropagation network shown 
		above.
		<P>It should also be noted that although this example only has one output node the 
			code contains the facility to deal with more than one output node. This is 
			noticeable in the code that gets and sets the output errors and values.
		</P>
		<PRE>	
public virtual double OutputError( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Getting the Output Error at " + nID.ToString() + " from the Backpropagation network", ClassName );
	}

	if( this.Nodes.Count &lt;= ( nID + nFirstOutputNode ) )
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning the array count is less than the index you are using to access it, returning 0.0 ", ClassName );
		}

		return 0.0;
	}

	return this.GetNodeAt( nID + nFirstOutputNode ).GetError( Values.NodeError );
}

public virtual void SetOutputError( int nID, double dNewValue )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Setting the output error for output node " + nID.ToString() + " from " + this.GetNodeAt( nID + nFirstOutputNode ).GetError( Values.NodeError ).ToString() + " to " + this.GetNodeAt( nID + nFirstOutputNode ).GetError( Values.NodeError ), ClassName );
	}

	if( this.Nodes.Count &lt;= ( nID + nFirstOutputNode ) )
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning the array count is less that the index you are using to access it, quitting set node error in BackPropagation network", ClassName );
		}

		return;
	}

	this.GetNodeAt( nID + nFirstOutputNode ).SetError( Values.NodeError, dNewValue );
}

public virtual void SetOutputError( Pattern pattern )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Set output error with pattern called for BackPropagation network", ClassName );
	}

	for( int i=0; i&lt;( ( int )this.Layers[ nNumLayers-1 ] ); i++ )
	{
		this.GetNodeAt( i + nFirstOutputNode ).SetError( Values.NodeError, pattern.OutputValue( i ) );
	}
}

public virtual double GetOutputValue( int nID )
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Get output value called for BackPropagation network", ClassName );
	}

	if( this.Nodes.Count &lt;= ( nID + nFirstOutputNode ) )
	{
		if( debugLevel.TestDebugLevel( DebugLevelSet.WarningsAndErrors ) == true )
		{
			log.Log( DebugLevelSet.WarningsAndErrors, "Warning the array count is less than the index you are using to access it, returning 0.0", ClassName );
		}

		return 0.0;
	}

	return this.GetNodeAt( nID + nFirstOutputNode ).GetValue( Values.NodeValue );
}

</PRE>
		<P>
		As you can see from the code that the whenever the code accesses the output 
		values for the network it calls GetNodeAt which takes the ID passed to the 
		value which will be zero for the first node and adds that to the 
		nFirstOutputNode value which is the network classes way of keeping track of the 
		number in the ArrayList that the first output node starts at.
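		<P>In the 2-2-1 network used for the XOR example below, the nodes sit in the 
			array in the order input ( 0, 1 ), middle ( 2, 3 ), output ( 4 ), so 
			nFirstOutputNode is 4 and GetOutputValue( 0 ) reads node 4; this matches the 
			FirstOutputNode entry in the saved file shown later.
		</P>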
		<P>The Back Propagation Network is built by the CreateNetwork function.
		</P>
		<PRE>
protected override void CreateNetwork()
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Create Network called for the BackPropagation network ", ClassName );
	}

	/// work out the number of nodes and links
	for( int i=0; i&lt;arrayLayers.Count; i++ )
	{
		nNumberOfNodes += ( int )arrayLayers[ i ];
	}

	/// number of links: each node has a link to every node in the preceding layer
	for( int i=1; i&lt;arrayLayers.Count; i++ )
	{
		nNumberOfLinks += ( ( int )arrayLayers[ i-1 ] * ( int )arrayLayers[ i ] );
	}

	/// fill out the node arrays
	nFirstMiddleNode = 0;
	for( int i=0; i&lt;nNumLayers; i++ )
	{
		/// input layer
		if( i==0 )
		{
			for( int n=0; n&lt;( int )arrayLayers[ i ]; n++ )
			{
				this.AddNode( new BasicNode( log ) );
			}
		}
		/// output layer
		else if( i+1 == nNumLayers ) /// numlayers not 0 based
		{
			nFirstOutputNode = this.Nodes.Count;

			for( int n=0; n&lt;( int )arrayLayers[ i ]; n++ )
			{
				this.AddNode( new BackPropagationOutputNode( log, this.LearningRate, this.dMomentumTerm ) );
			}
		}
		/// middle layer (s)
		else
		{
			nFirstMiddleNode = this.Nodes.Count;

			for( int n=0; n&lt;( int )arrayLayers[ i ]; n++ )
			{
				this.AddNode( new BackPropagationMiddleNode( log, this.LearningRate, this.dMomentumTerm ) );
			}
		}
	}

	/// create the links			
	for( int i=0; i&lt;nNumberOfLinks; i++ )
	{
		this.AddLink( new BackPropagationLink( log ) );
	}

	/// now do all the connections
	int nLayerOne = 0;
	int nLayerTwo = nFirstMiddleNode;
	int nLinkNumber = 0;

	for( int i=0; i&lt;nNumLayers-1; i++ )
	{
		/// outer layer ( starts with input layer )
		for( int n=0; n&lt;( int )arrayLayers[ i ]; n++ )
		{
			/// next inner layer to link to the outer layer
			for( int k=0; k&lt;( ( int )arrayLayers[ i + 1 ] ); k++ )
			{
				( ( BasicNode )this.Nodes[ nLayerOne + n ] ).CreateLink( ( BasicNode )this.Nodes[ nLayerTwo + k ], ( BasicLink )this.Links[ nLinkNumber ] );
				nLinkNumber++;
			}
		}

		nLayerOne = nLayerTwo;
		nLayerTwo += ( int )Layers[ i + 1 ];
	}
}
</PRE>
		<P>The Create Network function starts by calculating the numbers of nodes and links 
			that are required for the creation of the network. There are a couple of ways 
			that you can add layers to the Back Propagation Network class the first and the 
			one used by this code is to build a three layer network using the provided 
			constructor which then takes the number of nodes for each layer. The second is 
			to use the constructor that just takes the number of layers, you can then add 
			the layers by&nbsp;
		</P>
		<PRE>
AddLayer = numberOfNodes;
</PRE>
		<P>
		passing the number of nodes that are to be in that specific layer.
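		<P>As a rough sketch of the two approaches (the constructor arguments shown here 
			are an assumption for illustration rather than a quote from the library), 
			building the 2-2-1 XOR network might look like:
		</P>
		<PRE>
/// three layer constructor: 2 input, 2 middle and 1 output node
/// ( argument list assumed for illustration )
BackPropagationNetwork bpNetwork = new BackPropagationNetwork( log, 2, 2, 1 );

/// or create the network with just the number of layers
/// and then add the layers one at a time
BackPropagationNetwork bpNetwork2 = new BackPropagationNetwork( log, 3 );
bpNetwork2.AddLayer = 2; /// input layer
bpNetwork2.AddLayer = 2; /// middle layer
bpNetwork2.AddLayer = 1; /// output layer
</PRE>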
		<P>The create Network function then creates the nodes depending on where they are 
			in the layer structure, i.e. anything in the first layer is an input node, 
			anything in the last layer is an output node and anything in between is a 
			middle node. With the creation of the nodes the code then creates the correct 
			number of links to join the nodes together before cycling through the nodes and 
			creating the links between the separate layers.
		</P>
		<H2><U>The Back Propagation Output Node Class</U></H2>
		<P>
		The Back Propagation Output Node class inherits from the Adaline Node class so 
		that it can use the run function provided by that class. The biggest change 
		in the Back Propagation Output Node class is to the Transfer 
		function, which is now,
		<P></P>
		<PRE>
protected override double TransferFunction( double dValue )
{
	return 1.0/( 1+Math.Exp( -dValue ) ); /// sigmoid
}
</PRE>
		<P>
		which uses a sigmoid function to guarantee that the output returned by the 
		function is within the range of 0 to 1.
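		<P>As a quick standalone illustration of why the sigmoid keeps the output in 
			range (the demo values here are chosen for illustration and are not taken 
			from the library):
		</P>
		<PRE>
using System;

class SigmoidDemo
{
	static double Sigmoid( double dValue )
	{
		return 1.0/( 1+Math.Exp( -dValue ) );
	}

	static void Main()
	{
		/// large negative inputs approach 0, large positive inputs approach 1
		Console.WriteLine( Sigmoid( -10.0 ) );	/// 0.0000454...
		Console.WriteLine( Sigmoid( 0.0 ) );	/// 0.5
		Console.WriteLine( Sigmoid( 10.0 ) );	/// 0.9999546...
	}
}
</PRE>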
		<P>The back Propagation Output Node Class also has another new function called the 
			compute error function
		</P>
		<PRE>
public virtual double ComputeError()
{
	return ( ( double )this.NodeValues[ Values.NodeValue ] ) * ( 1.0-( double )this.NodeValues[ Values.NodeValue ] ) 
			* ( ( ( double )this.NodeErrors[ Values.NodeError ] ) - ( ( double )this.NodeValues[ Values.NodeValue ] ) );
}		
</PRE>
		<P>
		which computes the error for the output node as the current node value, 
		multiplied by one minus the current node value, multiplied by the desired 
		output (held in the node error at this point) minus the current node value. 
		This function is called from the Back Propagation Output Node Learn function, 
		which sets the node error value to the returned result.
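		<P>As a worked example, take the second training run shown in the output section 
			below: the network output is 0.430815783338131 and the desired output is 1, 
			so the error comes out at 0.4308 * ( 1 - 0.4308 ) * ( 1 - 0.4308 ), which is 
			approximately 0.1396, the output error of 0.13957167905037 reported in the 
			log.
		</P>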
		<P>The Back Propagation Output Node Learn function looks like,
		</P>
		<PRE>
public override void Learn()
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Learn called for BackPropagation node ", ClassName );
	}

	double dDelta = 0.0;
	this.NodeErrors[ Values.NodeError ] = ComputeError();

	for( int i=0;i&lt;this.InputLinks.Count; i++ )
	{
		dDelta = ( ( double )this.NodeValues[ Values.LearningRate ] ) * ( ( double )this.NodeErrors[ Values.NodeError ] ) * ( ( BasicLink )this.InputLinks[ i ] ).InputValue( Values.NodeValue );
		( ( BackPropagationLink )this.InputLinks[ i ] ).UpdateWeight( dDelta );
	}
}
</PRE>
		<P>which starts off by getting the error value for the current node and then 
			calculates the new weight value for the node as the learning rate multiplied by 
			the node error value multiplied by the input value.
		</P>
		<H2><U>The Back Propagation Middle Node Class</U></H2>
		<P>
		The Back Propagation Middle Node class inherits from the Back Propagation 
		Output Node class and overrides the compute error function
		<P></P>
		<PRE>
public override double ComputeError()
{
	double dTotal = 0.0;
	for( int i=0; i&lt;this.OutputLinks.Count; i++ )
	{
		dTotal += ( ( BackPropagationLink )this.OutputLinks[ i ] ).WeightedOutputError( Values.NodeError );
	}

	return ( double )this.NodeValues[ Values.NodeValue ] * ( 1.0-( ( double )this.NodeValues[ Values.NodeValue ] ) ) * dTotal;
}		
</PRE>
		<P>
		The difference between the two versions of the compute error function is that a 
		middle node has no desired output to compare itself against; instead the Back 
		Propagation Middle Node totals up the weighted errors of the nodes that it 
		outputs to, and then returns the current node value, multiplied by one minus 
		the current node value, multiplied by that total weighted error value.
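		<P>The WeightedOutputError function isn't listed here; as a minimal sketch of 
			what it presumably does (an assumption based on the surrounding code rather 
			than the release source), it weights the error of the node that the link 
			feeds into by the strength of the connection:
		</P>
		<PRE>
public double WeightedOutputError( int nID )
{
	/// assumed sketch: scale the downstream node's error
	/// by this link's weight value
	return ( ( double )this.arrayLinkValues[ Values.Weight ] ) * this.OutputNode.GetError( nID );
}
</PRE>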
		<P>
		</P>
		<H2><U>The Back Propagation Link Class</U></H2>
		<P>The Back Propagation Link class is an extension of the Basic Link class and is 
			provided to enable the use of the delta and the momentum values that are used 
			by the Back Propagation Network. It's main difference to the Basic Link class 
			comes with Update Weight function.
		</P>
		<PRE>
public override void UpdateWeight( double dNewValue )
{
	/// get the current momentum
	double dMomentum = this.OutputNode.GetValue( Values.Momentum );
	/// update the weight with the current change and a percentage of the last change
	this.arrayLinkValues[ Values.Weight ] = ( double )this.arrayLinkValues[ Values.Weight ] + dNewValue + ( dMomentum * ( double )this.arrayLinkValues[ Values.Delta ] );
	/// store the new value as passed 
	this.arrayLinkValues[ Values.Delta ] = dNewValue;
}		
</PRE>
		<P>This gets the momentum from the node that is set when the Back Propagation 
			network is created. It then calculates the new weight by adding the current 
			weight to the new value passed in and then adding a proportion of the previous 
			value for the weight. In this example the momentum is being set 0.9 so the 
			value will 0.9 times whatever value is stored in the delta value which as you 
			can see is stored in the delta in the next line of code, hence giving a 
			percentage of the previous update value.
		</P>
		<H2><U>Training</U></H2>
		<P>
		Training for the Back Propagation Network is slightly more complicated than it 
		was for the Adaline Network that we have seen earlier. The reason for this is 
		that the Back Propagation Network does the weight adjustments to the whole 
		network in one go: it starts at the output node and propagates the training 
		backwards through the middle nodes.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Inside The Adaline Run Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The training loop for the BackPropagation Network looks like,
		</P>
		<PRE>
while( nGood &lt; 4 )
{
	nGood = 0;
	dTotalError = 0.0;

	for( int i=0; i&lt;4; i++ )
	{
		/// set the input values
		bpNetwork.SetValue( ( ( Pattern )patterns[ i ] ) );

		/// run the network
		bpNetwork.Run();

		/// set the desired output
		bpNetwork.SetOutputError( ( ( Pattern )patterns[ i ] ) );

		/// run learn anyway
		bpNetwork.Learn();

		/// note that the absolute value is taken of the network output only,
		/// so dTest can be negative when the desired output is 1
		dTest = ( Math.Abs( bpNetwork.GetOutputValue( 0 ) ) - ( ( Pattern )patterns[ i ] ).OutputValue( 0 ) );
		netWorkText.AppendText( "Network output value = " + ( double )bpNetwork.GetOutputValue( 0 ) + " Pattern output value = " +  ( ( Pattern )patterns[ i ] ).OutputValue( 0 ) + "\n" );
		netWorkText.AppendText( "Absolute output value = " + dTest.ToString() + "\n" );

		if( dTest  &lt; dTolerance )
		{
			if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
			{
				log.Log( DebugLevelSet.Progress, "Test falls within tolerance levels", ClassName );
			}

			netWorkText.AppendText( "Test = " + dTest.ToString() + " Tolerance = " + dTolerance.ToString() + " Test falls within tolerance levels\n" );

			nGood++;
		}

		dTotalError += Math.Abs( bpNetwork.OutputError( 0 ) );

		if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
		{
			log.Log( DebugLevelSet.Progress, "Back Propagation training run " + i.ToString() + " completed, total error = " + dTotalError.ToString(), ClassName );
		}

		netWorkText.AppendText( "Pattern " + ( ( Pattern )patterns[ i ] ).InputValue( 0 ).ToString() + "," + ( ( Pattern )patterns[ i ] ).InputValue( 1 ).ToString() +
			" run through the network " + " output error at 0 = " + bpNetwork.OutputError( 0 ).ToString() + 
			" total error = " + dTotalError.ToString() + "\n" );
	}
}
</PRE>
		<P>
		the training loop only tries to get four good results because we are trying to 
		solve the XOR problem here, which means that of the four inputs 0 &amp; 0, 0 
		&amp; 1, 1 &amp; 0, and 1 &amp; 1, only the pairs of numbers that contain a 
		single value of 1 should give an output value of 1. &nbsp;
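		<P>For reference, the truth table that the network is learning is:
		</P>
		<PRE>
Input One   Input Two   Desired Output
    0           0             0
    0           1             1
    1           0             1
    1           1             0
</PRE>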
		<P>
		The Back Propagation Network figures out the answer to this problem by running 
		through each epoch of the four patterns, or value pairs, indicated above, 
		calling the network run function for each pair. The essential part of the Back 
		Propagation Network run function is
		<P>
		</P>
		<PRE>
for( int i=nFirstMiddleNode; i&lt;this.Nodes.Count; i++ )
{
	( ( AdalineNode )this.Nodes[ i ] ).Run( Values.NodeValue );
}		
</PRE>
		<P>
		The run function starts at the first middle node and then calls the Adaline 
		node run function on every node up to Nodes.Count, which includes calling 
		the run function on the output node.
		<P>
		The output error for the output nodes in the network is then set to the desired 
		output that is stored in the pattern value, before the learn function is called 
		for every single run through the loop. The learn function,
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Learning For the Backpropagation Network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<PRE>
for( int i=this.Nodes.Count-1; i&gt;=nFirstMiddleNode; i-- )
{
	( ( AdalineNode )this.Nodes[ i ] ).Learn();
}		
</PRE>
		<P>cycles backwards through the output nodes to the first middle node in the 
			network, calling the learn function on each node. Each Lean is called 
			differently for the Back Propagation Output Node and the Back Propagation 
			Middle node as they have different ways of computing the error values for the 
			nodes as shown in the class descriptions above. .
		</P>
		<H2><U>Saving And Loading</U></H2>
		<P>The Back Propagation network uses the same xml loading and saving techniques 
			used through out the library. Here is an example of a saved xml file.
		</P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8" ?&gt;
&lt;BackPropagationNetwork&gt;
	&lt;NumberOfLayers&gt;3&lt;/NumberOfLayers&gt;
	&lt;FirstMiddleNode&gt;2&lt;/FirstMiddleNode&gt;
	&lt;FirstOutputNode&gt;4&lt;/FirstOutputNode&gt;
	&lt;Momentum&gt;0.9&lt;/Momentum&gt;
	&lt;Layers&gt;
		&lt;Layer0&gt;2&lt;/Layer0&gt;
		&lt;Layer1&gt;2&lt;/Layer1&gt;
		&lt;Layer2&gt;1&lt;/Layer2&gt;
	&lt;/Layers&gt;
	&lt;BasicNode&gt;
		&lt;Identifier&gt;0&lt;/Identifier&gt;
		&lt;NodeValue&gt;1&lt;/NodeValue&gt;
		&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;Bias&gt;
			&lt;BiasValue&gt;1&lt;/BiasValue&gt;
		&lt;/Bias&gt;
	&lt;/BasicNode&gt;
	&lt;BasicNode&gt;
		&lt;Identifier&gt;1&lt;/Identifier&gt;
		&lt;NodeValue&gt;1&lt;/NodeValue&gt;
		&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;Bias&gt;
			&lt;BiasValue&gt;1&lt;/BiasValue&gt;
		&lt;/Bias&gt;
	&lt;/BasicNode&gt;
	&lt;BackPropagationMiddleNode&gt;
		&lt;BackPropagationOutputNode&gt;
			&lt;AdalineNode&gt;
				&lt;BasicNode&gt;
					&lt;Identifier&gt;2&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.306374302171129&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
					&lt;NodeError&gt;-0.0212641438895606&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;
			&lt;/AdalineNode&gt;
		&lt;/BackPropagationOutputNode&gt;
	&lt;/BackPropagationMiddleNode&gt;
	&lt;BackPropagationMiddleNode&gt;
		&lt;BackPropagationOutputNode&gt;
			&lt;AdalineNode&gt;
				&lt;BasicNode&gt;
					&lt;Identifier&gt;3&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.0793336908508232&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
					&lt;NodeError&gt;-0.00849829845549166&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;
			&lt;/AdalineNode&gt;
		&lt;/BackPropagationOutputNode&gt;
	&lt;/BackPropagationMiddleNode&gt;
	&lt;BackPropagationOutputNode&gt;
		&lt;AdalineNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;4&lt;/Identifier&gt;
				&lt;NodeValue&gt;0.39830515925165&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
				&lt;NodeError&gt;-0.0954570813319005&lt;/NodeError&gt;
				&lt;Bias&gt;
					&lt;BiasValue&gt;1&lt;/BiasValue&gt;
				&lt;/Bias&gt;
			&lt;/BasicNode&gt;
		&lt;/AdalineNode&gt;
	&lt;/BackPropagationOutputNode&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;5&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.156366589563507&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00956886475030225&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;2&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;6&lt;/Identifier&gt;
			&lt;LinkValue&gt;1.28844804384242&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00382423430497125&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;7&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.657049900059346&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00956886475030225&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;2&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;8&lt;/Identifier&gt;
			&lt;LinkValue&gt;1.16778235004883&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00382423430497125&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;3&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;9&lt;/Identifier&gt;
			&lt;LinkValue&gt;1.04824365230734&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.0131605185061592&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;2&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;4&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;10&lt;/Identifier&gt;
			&lt;LinkValue&gt;1.21888832568501&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00340783316095809&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;3&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;4&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
&lt;/BackPropagationNetwork&gt;		
</PRE>
		<P>As you can see above the layers section stores each layer in the xml file with 
			the number of nodes that are to be in that layer of the network. The Back 
			Propagation network also stores the array positions of the first middle node 
			and the array position of the first output node, as well as the momentum for 
			the network which is also stored individually by each node at array position 
			two of the node value array. this is as well as the learning rate which is 
			stored at position one in the node&nbsp;value array. The Back Propagation links 
			also are slightly different in that they now store the delta value in position 
			one of the link value array.
		</P>
		<H2><U>Testing</U></H2>
		<P>
		The testing portions of the code are located under the Run menu of the Neural 
		Net Tester program. The test for this program is the "Load And Run Back 
		Propagation 1" menu option. This will load a file that resembles the one 
		above. I say resembles, as the link values won't be exactly the same for any 
		two training runs.
		<P>
		The menu option will load and run the backpropagation working file 
		(BackPropagationOneWorkingFile.wrk) and generate the log file&nbsp;Neural 
		Network Tester Load And Run BackPropagation One Network.xml, which can be 
		viewed using the LogViewer that is part of the Neural Net Tester program.
		<P>
		The display will show at the end a list of all the input data and the 
		conclusion the&nbsp;backpropagation&nbsp;network reached about each item, next 
		to the answer that was generated by the test data in the pattern. So far in my 
		testing the network has performed with one hundred percent 
		accuracy.&nbsp;&nbsp;
		<P>The quick guide is
		</P>
		<UL>
			Menu :- Generate/Generate BackPropagation working&nbsp;File :- Generates the 
			file that is used for the adaline Load and run menu option</UL>
		<UL>
			Menu :- Run/Load And Run&nbsp;BackPropagation&nbsp;1:- Loads the saved 
			BackPropagation network from the disk and then runs it against the&nbsp;working 
			file.</UL>
		<UL>
			Menu :- Train/Train&nbsp;BackPropagation&nbsp;1 :- Trains the network from 
			scratch using the&nbsp;hardcoded XOR data&nbsp;and then saves it to disk.</UL>
		<UL>
			Menu :- Options&nbsp;BackPropagation&nbsp;1 Options :- Brings up a dialog that 
			allows you to set certain parameters for the running of the network.
		</UL>
		<H2><U>Options</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/backpropagationoneoptions.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>The above is the options dialog for the Back Propagation One network and 
			contains the five options you can set. The first being the Number of Tests 
			which is the number of items that are read from and generated into the testing 
			file for the network which in the case of this network is the 
			BackPropagationOneWorkingFile.wrk. The second is the tolerance level that is 
			acceptable to the program. This should always be a value that the code is able 
			to distinguish i.e. if this was set to 0.6 then the acceptable values would 
			overlap making any answers returned from the network meaning less. The third 
			and fourth are the momentum and the learning rate which are both used in the 
			calculations that determine the weight values for each link to the nodes and 
			the final option is a simple check box to specify if you want to use the 
			inbuilt bias which always has a value of one in the calculations.
		</P>
		<H2><U>Understanding The Output</U></H2>
		<H3><U></U>Training</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Network out put value = 0.397890715954771 
				Pattern out put value = 0</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.397890715954771</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.397890715954771 Tolerance = 0.4 
				Test falls within tolerance levels</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 0,0 run through the network 
				output error at 0 = -0.0953241486740618 total error = 0.0953241486740618</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Network out put value = 0.430815783338131 
				Pattern out put value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 
				-0.569184216661869</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = -0.569184216661869 Tolerance = 0.4 
				Test falls within tolerance levels</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern 0,1 run through the network 
				output error at 0 = 0.13957167905037 total error = 0.234895827724432</FONT>
		<P>
		The above shows two runs through the training loop for the Back Propagation 
		Network. Unlike the Adaline network, the Back Propagation network calls Learn 
		automatically each time through the loop, so there is no output to say that 
		learn has been called. The first line of the output is the value that the 
		network has arrived at, alongside the pattern output value that we want the 
		network to arrive at. The second line shows the test value, which is the 
		absolute value of the network's output minus the pattern's output value (note 
		that this can come out negative when the desired output is 1). This test value 
		is then compared against the tolerance level, which in this case has been set 
		to 0.4; if it is less than the tolerance value then it is counted as a 
		successful test. The final line shows the error values returned by the network 
		and the pattern that was run through the network to begin with.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Saving the BackPropagation Network</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Loading the BackPropagation Network</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 0, 0 Pattern Output 
				Value = 0 Net work Output Value = 0.284630680477795</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 0, 1 Pattern Output 
				Value = 1 Net work Output Value = 0.651005374420711</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 1, 0 Pattern Output 
				Value = 1 Net work Output Value = 0.66443836682106</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 1, 1 Pattern Output 
				Value = 0 Net work Output Value = 0.395627748169065</FONT>
		<P>Once the network has successfully trained against the test data, in this case 
			the four acceptable values for the XOR test then the network is saved and 
			reloaded into a new Back Propagation network before the values are again 
			entered into the network to see how it performs. The final four lines above 
			show the output of a run for the XOR test with each indicating which pattern 
			was entered for the test and the expected output value along with the final 
			value which is the value that the network has arrived at. As you can see all 
			the values fall within the tolerance levels with all the values where a 0 is 
			expected to be returned being less than 0.4 and all the values where a 1 is 
			expected being greater than 0.6.
		</P>
		<H3>Running</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Generating Backpropagation working 
				File... Please Wait</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Backpropagation Working File Generated</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern 0 to, Input One = 1 Input 
				Two = 1 Output Value = 0</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern 1 to, Input One = 0 Input 
				Two = 0 Output Value = 0</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern 2 to, Input One = 1 Input 
				Two = 0 Output Value = 1</FONT>
		<P>
		As with the Adaline network samples, the Back Propagation network generates a 
		file of data to run against the loaded network when load and run is selected. 
		This file is filled with pairs of values that are either 0 or 1, along with a 
		third value which is the value that the network should arrive at.
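		<P>A minimal sketch of how such a file could be generated (the file name comes 
			from the options dialog, but the record format and the variable names here 
			are assumptions for illustration; assumes using System and System.IO):
		</P>
		<PRE>
int nNumberOfTests = 25;	/// the Number of Tests option from the dialog
Random rand = new Random();

using( StreamWriter writer = new StreamWriter( "BackPropagationOneWorkingFile.wrk" ) )
{
	for( int i=0; i&lt;nNumberOfTests; i++ )
	{
		int nOne = rand.Next( 2 );	/// first input, 0 or 1
		int nTwo = rand.Next( 2 );	/// second input, 0 or 1
		int nOut = nOne ^ nTwo;		/// the XOR of the two inputs

		writer.WriteLine( nOne + "," + nTwo + "," + nOut );
	}
}
</PRE>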
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 0, 1 Pattern Output 
				Value = 1 Net work Output Value = 0.651005374420711 Output value is within 
				tolerance level of 0.4 of 1 </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Pattern Values = 1, 1 Pattern Output 
				Value = 0 Net work Output Value = 0.395627748169065 Output value is within 
				tolerance level of 0.4 of 0</FONT>
		<P>
		Once the pattern array has been loaded the data is run and the results are 
		printed to the screen. The results show the original values entered and the 
		required output, followed by the value that the network arrived at and finally 
		indicates if the value is within the tolerance level of the correct answer.
		<P></P>
		<H2><U>Fun And Games</U></H2>
		<P>
		As with the other network examples, the main aim is to produce a network that 
		can be repeatedly trained and saved to disk for demonstration purposes, and for 
		this reason the main parameter that was played with during testing is the 
		tolerance parameter. For the purposes of the example it is set to a value of 
		0.4, which gives it quite a large margin, something you may not want were you 
		using the network in a production environment. There is the option to change 
		the tolerance parameter within the options dialog, but it should be remembered 
		that the smaller the tolerance gets, the longer the network training will take. 
		Still, this shouldn't be too much of a problem, as once trained the network can 
		simply be reloaded as fully trained.
		<P>
		<P>
		<P>
		</P>
		<H2 align="center"><U><FONT size="7">Neural .Net pt 8</FONT></U></H2>
		<H2 align="center"><U><FONT size="7">The&nbsp;Backpropagation Word Network</FONT></U></H2>
		<P>The Back Propagation word network is a continuation of the idea developed in the 
			Adaline Word network and is set up so that it can learn how to tell the 
			difference between words in a given text. The exact words are not really 
			important to the testing of the network neither is the text that the network 
			trains or runs against. I am currently using the words "and" and "the" in the 
			demonstration program provided here. The text I am using is The Origin Of the 
			Species by Charles Darwin. The aim of the program is to extend the idea of the 
			XOR sample shown in the previous demonstration. This is done by giving the 
			network two words to learn. The network must then learn to differentiate not 
			only between the two words but between the two words and all the other words in 
			the given text. I do this by training the network with a prepared file that 
			contains the first chapter of the Origin Of the Species and then when the 
			program has been trained it runs against the full text version of the file 
			extracting all the sentences in the book that contain the words "and" and "the" 
			but not both or the same word twice.
		</P>
		<H2><U>The BackPropagation Word Network</U></H2>
		<P>
		There is surprisingly little change made to get from the standard 
		BackPropagation Network to the BackPropagation Word Network, with the only 
		required change being for the pattern code. Everything else is as it was in the 
		previous code.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/BackPropagationWordDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/BackPropagationWordPattern.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		</P>
		<H2><U>The BackPropagation Word Pattern Class</U></H2>
		<P>The BackPropagation Word Pattern Class inherits from the Adaline Word Pattern 
			Class and overrides the constructors and the class name and data functions. It 
			would technically have been possible to extend the Adaline Word Pattern class 
			to include the extra constructor which is used for the demonstration code but I 
			thought it would be better to keep it separate for now.
		</P>
		<H2><U>Training</U></H2>
		<P>
		Training for the BackPropagation Word Network is almost identical to the 
		BackPropagation Network sample, apart from the fact that we are dealing with a 
		pattern class that holds words instead of numbers.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Inside The Adaline Run Function.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>Once again the run function is using the original Adaline run function code that 
			was used earlier and is called by the Back Propagation Network class by the 
			code.
		</P>
		<PRE>
for( int i=nFirstMiddleNode; i&lt;this.Nodes.Count; i++ )
{
	( ( AdalineNode )this.Nodes[ i ] ).Run( Values.NodeValue );
}		
</PRE>
		<P>
		which cycles through the nodes starting at the first middle node and calls the 
		Adaline Node Class Run function.
		<P>The code for the BackPropagation Word Network training loop is.
		</P>
		<PRE>
while( nGood &lt; patterns.Count )
{
	nGood = 0;
	for( int i=0; i&lt;patterns.Count; i++ )
	{
		/// load the pattern's word values into the network's input nodes
		for( int n=0; n&lt;20; n++ )
		{
			( ( BasicNode )bpNetwork.Nodes[ n ] ).SetValue( Values.NodeValue, ( ( BackPropagationWordPattern )patterns[ i ] ).GetInSetAt( n ) );
		}

		bpNetwork.Run();

		/// set the desired output
		bpNetwork.SetOutputError( ( ( BackPropagationWordPattern )patterns[ i ] ) );

		/// run learn anyway
		bpNetwork.Learn();

		/// first output value is output value 0
		dTemp = bpNetwork.GetOutputValue( 0 );
		dCalculatedTemp = bpNetwork.GetCalculatedOutputValue( 0, ( ( BackPropagationWordPattern )patterns[ i ] ).OutputValue( 0 ) );
		strTemp.Remove( 0, strTemp.Length );

		netWorkText.AppendText( "Absolute output value = " + dCalculatedTemp.ToString() + "\n" );
		if( dCalculatedTemp &lt; dBackPropagationWordTolerance )
		{
			strTemp.Append( "Output value is within tolerance level of " + dBackPropagationWordTolerance.ToString() + " of 0 " );
			nGood++;
		}
		else if( dCalculatedTemp &gt; ( 1 - dBackPropagationWordTolerance ) )
		{
			strTemp.Append( "Output value is within tolerance level of " + dBackPropagationWordTolerance.ToString() + " of 1 " );
			nGood++;
		}	
		else
		{
			strTemp.Append( "Output value is outside tolerance levels " );
		}

		strTemp.Append( " " );

		for( int n=0; n&lt;( ( BackPropagationWordPattern )patterns[ i ] ).InputSize(); n++ )
		{
			strTemp.Append( ( ( BackPropagationWordPattern )patterns[ i ] ).InputValue( n ) + " " );
		}

		strTemp.Append( "\n Output Value = " + ( ( BackPropagationWordPattern )patterns[ i ] ).OutputValue( Values.NodeValue ) );

		netWorkText.AppendText( "Test = " + dCalculatedTemp.ToString() + " Tolerance = " + dBackPropagationWordTolerance.ToString() + " " + strTemp.ToString() + "\n" );
	}

	netWorkText.AppendText( "\nNetwork iteration " + nIteration.ToString() + " produced " + nGood.ToString() + " good results out of " + patterns.Count.ToString() + "\n" );
	nIteration++;
}
</PRE>
		<P>which starts with the same basic premise as the previous networks in that we 
			loop through all the values stored in the Pattern class object until we get an 
			iteration that returns a 100% good count. The first difference that happens now 
			that we are graduating to doing whole sentences. Well, at least whole sentences 
			that are no more than 20 words long. The values from the pattern are loaded 
			into the Node values with the code
		</P>
		<PRE>
for( int n=0; n&lt;20; n++ )
{
	( ( BasicNode )bpNetwork2.Nodes[ n ] ).SetValue( Values.NodeValue,( ( BackPropagationWordPattern )bpPatterns[ i ] ).GetInSetAt( n ) );</PRE>
		<PRE>
}		
</PRE>
		<P>
		which cycles through the twenty available input nodes on the network and sets 
		each node value to the value stored in the pattern. Remember that the 
		GetInSetAt function returns the linearly separable, numerical value for the 
		word and not the actual word itself. The Run, SetOutputError and Learn 
		functions are then called exactly as they are in the BackPropagation example 
		previously.
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/Learning For the Backpropagation Network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>the rest of the code within the loop is merely confirming if the answer is 
			correct or not by seeing if the final value returned from the output node of 
			the network is the same value as the one that was entered into the pattern 
			output value.
		</P>
		<H2><U>Saving And Loading</U></H2>
		<P>The saving and loading for the BackPropagation Word Network is the same as in 
			the previous example, although it is considerably longer and due to the length 
			I'll only show a portion of the saved file here.
		</P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;BackPropagationNetwork&gt;
	&lt;NumberOfLayers&gt;3&lt;/NumberOfLayers&gt;
	&lt;FirstMiddleNode&gt;20&lt;/FirstMiddleNode&gt;
	&lt;FirstOutputNode&gt;40&lt;/FirstOutputNode&gt;
	&lt;Momentum&gt;0.9&lt;/Momentum&gt;
	&lt;Layers&gt;
		&lt;Layer0&gt;20&lt;/Layer0&gt;
		&lt;Layer1&gt;20&lt;/Layer1&gt;
		&lt;Layer2&gt;1&lt;/Layer2&gt;
	&lt;/Layers&gt;
	&lt;BasicNode&gt;
		&lt;Identifier&gt;0&lt;/Identifier&gt;
		&lt;NodeValue&gt;0.1074&lt;/NodeValue&gt;
		&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;Bias&gt;
			&lt;BiasValue&gt;1&lt;/BiasValue&gt;
		&lt;/Bias&gt;
	&lt;/BasicNode&gt;
	Through to 
	&lt;BasicNode&gt;
		&lt;Identifier&gt;19&lt;/Identifier&gt;
		&lt;NodeValue&gt;0.2406&lt;/NodeValue&gt;
		&lt;NodeError&gt;0&lt;/NodeError&gt;
		&lt;Bias&gt;
			&lt;BiasValue&gt;1&lt;/BiasValue&gt;
		&lt;/Bias&gt;
	&lt;/BasicNode&gt;
	&lt;BackPropagationMiddleNode&gt;
		&lt;BackPropagationOutputNode&gt;
			&lt;AdalineNode&gt;
				&lt;BasicNode&gt;
					&lt;Identifier&gt;20&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.029124473721815&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
					&lt;NodeError&gt;-0.00154046436077283&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;
			&lt;/AdalineNode&gt;
		&lt;/BackPropagationOutputNode&gt;
	&lt;/BackPropagationMiddleNode&gt;

	Through to
	
	&lt;BackPropagationMiddleNode&gt;
		&lt;BackPropagationOutputNode&gt;
			&lt;AdalineNode&gt;
				&lt;BasicNode&gt;
					&lt;Identifier&gt;39&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.0668105589475829&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
					&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
					&lt;NodeError&gt;-0.00126610484536625&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;
			&lt;/AdalineNode&gt;
		&lt;/BackPropagationOutputNode&gt;
	&lt;/BackPropagationMiddleNode&gt;
	&lt;BackPropagationOutputNode&gt;
		&lt;AdalineNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;40&lt;/Identifier&gt;
				&lt;NodeValue&gt;0.312017133076152&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.45&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.9&lt;/NodeValue&gt;
				&lt;NodeError&gt;-0.0669783596518058&lt;/NodeError&gt;
				&lt;Bias&gt;
					&lt;BiasValue&gt;1&lt;/BiasValue&gt;
				&lt;/Bias&gt;
			&lt;/BasicNode&gt;
		&lt;/AdalineNode&gt;
	&lt;/BackPropagationOutputNode&gt;
	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;41&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.76319456394861&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-7.44506425561508E-05&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;20&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;

	Through To

	&lt;BackPropagationLink&gt;
		&lt;BasicLink&gt;
			&lt;Identifier&gt;460&lt;/Identifier&gt;
			&lt;LinkValue&gt;0.303193748484268&lt;/LinkValue&gt;
			&lt;LinkValue&gt;-0.00201368774057822&lt;/LinkValue&gt;
			&lt;InputNodeID&gt;39&lt;/InputNodeID&gt;
			&lt;OutputNodeID&gt;40&lt;/OutputNodeID&gt;
		&lt;/BasicLink&gt;
	&lt;/BackPropagationLink&gt;
&lt;/BackPropagationNetwork&gt;		
</PRE>
		<H2><U>Testing</U></H2>
		<P>
		The testing portions of the code are located under the Run menu of the Neural 
		Net Tester program. The test for this program is the "Load And Run Back 
		Propagation 2" menu option. This will load a file that resembles the one 
		above. I say resembles, as the link values won't be exactly the same for any 
		two training runs.
		<P>
		The menu option will load and run the&nbsp;"origins-of-species.txt" file and 
		generate the log file&nbsp;&nbsp;Neural Network Tester Load And Run 
		BackPropagation OneLoad And Run BackPropagation 2.xml which can be viewed using 
		the LogViewer that is part of the neural net tester program.
		<P>
		The display will show&nbsp;all the sentences in the text of the book that match 
		the two words entered through the options dialog, or the default words, which 
		are "and" and "the"; upon finishing,&nbsp;the network will print out&nbsp;that 
		the test has finished.&nbsp;&nbsp;&nbsp;
		<P>The quick guide is
		</P>
		<UL>
			<LI>
				Menu :- Run/Load And Run&nbsp;BackPropagation&nbsp;2:- Loads the saved 
				BackPropagation network from the disk and then runs it against the&nbsp;full 
				text of the Origins of the Species.</LI>
		</UL>
		Menu :- Train/Train&nbsp;BackPropagation&nbsp;2 :- Trains the network from 
		scratch using the&nbsp;sample file which contains the introduction and the 
		first chapter of the Origins of the Species.
		<UL>
			Menu :- Options&nbsp;BackPropagation&nbsp;2 Options :- Brings up a dialog that 
			allows you to set certain parameters for the running of the network.
		</UL>
		<H2><U>Options</U></H2>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/BackPropagationWordOptions.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		The above is the options dialog for the BackPropagation Word, or BackPropagation 
		2, network. Starting with the Tolerance: this is the amount of difference we 
		will accept between the value returned by the network and a positive value of 
		one. The momentum is an additional drive parameter that the network uses to try 
		to prevent itself getting stuck in what is called a local minimum before it is 
		able to achieve a correct answer. Without it, the weight changes get ever 
		smaller as the error shrinks, and before the network achieved equilibrium 
		(equilibrium being defined as getting all its outputs to match the required 
		outputs passed into the pattern output parameters) the changes could get so 
		small that the network would never reach the correct solution, or at least take 
		so long about it that people would just give up.
		<P>
		The Learning Rate is the same value as we have seen before, and so is the Bias. 
		The new optional variables on the list are the two words that can be entered 
		for the network to search the text for. It must be noted that if you change 
		these values you will have to retrain the network, as the network is trained to 
		find the specific words and is not designed to be capable of just finding any 
		random words that are entered without training. There is also the point that 
		you should choose words that are in the text that you are training the network 
		against; for example, it would be pointless to look for the character Marvin 
		the paranoid android in a copy of The Origin Of The Species. In fact I'd be 
		surprised if the name Marvin showed up in the entire book; you'd be far better 
		off searching through the text of a Douglas Adams book for that one. Which, 
		finally, is why the last two optional parameters are included, as these are the 
		names of the training file and the run file that are located in the same 
		directory as the executable. There are no restrictions on which files you 
		use; although the samples provided are texts from The Origin Of The Species, 
		you could replace these with anything you wished.
		<P>This is not to say that you shouldn't train the network against the provided 
			sample text and then run it against a different text in the run section of the 
			program it is merely pointing out that the context should be borne in mind.
		</P>
		<H2><U>Understanding The Output</U></H2>
		<H3><U></U>Training<U></U></H3>
		<P><U></U><FONT SIZE="2" FACE="Microsoft Sans Serif"></FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, With plants which are 
				temporarily propagated by cuttings, buds, &amp;c </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, the importance of 
				crossing is immense; for the cultivator may here disregard the extreme 
				variability both of hybrids and of mongrels, and the sterility of hybrids; but 
				plants not propagated by seed are of little importance to us, for their 
				endurance is only temporary </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, Over all these causes 
				of Change, the accumulative action of Selection, whether applied methodically 
				and quickly, or unconsciously and slowly but more efficiently, seems to have 
				been the predominant Power</FONT>
		<P>
		The Back Propagation Word Network begins by setting the patterns for the 
		network to be trained with. These are in the form of complete sentences, which 
		the program will then break up into individual words, using no more than the 
		first twenty words of each sentence, one for each input node.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.110635431553925</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.110635431553925 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 0 1859 THE ORIGIN OF SPECIES 
				by Charles Darwin 1859 INTRODUCTION INTRODUCTION WHEN on board H </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 0</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 
				0.00769773879206108</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.00769773879206108 Tolerance = 
				0.4 Output value is within tolerance level of 0.4 of 0 HM </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 0</FONT>
		<P>
		The network then trains using the pattern array, running each sentence through 
		the twenty input nodes and producing an absolute value that should indicate 
		whether the sentence contains "and" or "the", but not both, with a 1 being a 
		positive indicator. The absolute value is tested against the tolerance levels 
		to see if it fits and the final output value of the test is indicated. As with 
		the previous Back Propagation Network, training is done on every turn through 
		the loop, so a specific call to learn is not indicated.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Training Done - Reloading Network and 
				Testing against the training set</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Building Network</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Loading Network</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Testing Network</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Loading Back Propagation Training File 
				please wait ... </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, 1859 THE ORIGIN OF 
				SPECIES by Charles Darwin 1859 INTRODUCTION INTRODUCTION WHEN on board H</FONT>
		<P>
		Upon successful training the network then saves the trained network and loads 
		it again into a new network object. It then reloads the training file (not 
		strictly necessary) and begins to test whether the network learned its task 
		correctly.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.835661935490439</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.835661935490439 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 1 The greater or less force of 
				inheritance and reversion, determine whether variations shall endure </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.83858866554289</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.83858866554289 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 1 Variability is governed by 
				many unknown laws, of which correlated growth is probably the most important </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.929080538571364</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.929080538571364 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 1 Some, perhaps a great, 
				effect may be attributed to the increased use or disuse of parts </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test Finished</FONT>
		<P>The results of the test are output to the screen when the test is positive, that 
			is we have found a sentence that contains the words "and" and "the" but not 
			both.
		</P>
		<H3>
			Running</H3>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, Beagle as naturalist, 
				I was much struck with certain facts in the distribution of the organic beings 
				inhabiting South America, and in the geological relations of the present to the 
				past inhabitants of that continent </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, These facts, as will 
				be seen in the latter chapters of this volume, seemed to throw some light on 
				the origin of species- that mystery of mysteries, as it has been called by one 
				of our greatest philosophers </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Setting pattern to, On my return home, it 
				occurred to me, in 1837, that something might perhaps be made out on this 
				question by patiently accumulating and reflecting on all sorts of facts which 
				could possibly have any bearing on it</FONT>
		<P>
		As in the training section, the Back Propagation Word Network first loads the 
		data into the patterns array and then runs it, printing out exactly the same 
		kind of data as above.
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.915116784210617</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.915116784210617 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 1 As several of the reasons 
				which have led me to this belief are in some degree applicable in other cases, 
				I will here briefly give them </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 1</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Absolute output value = 0.660441295449701</FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Test = 0.660441295449701 Tolerance = 0.4 
				Output value is within tolerance level of 0.4 of 1 Hence it must be assumed not 
				only that half-civilised man succeeded in thoroughly domesticating several 
				species, but that he intentionally or by chance picked out extraordinarily 
				abnormal species; and further, that these very species have since all become 
				extinct or unknown </FONT>
		<P><FONT SIZE="2" FACE="Microsoft Sans Serif">Output Value = 1</FONT>
		<P></P>
		<H2><U>Fun And Games</U></H2>
		<P>
		The BackPropagation Word Network is the first example of a more complicated 
		network, in that it has a large number of input nodes, twenty to be exact, and 
		this is where the first compromise between the program and strict accuracy is 
		made. Some of the sentences in the text contain many more than twenty words, 
		but for the sake of the example only the first twenty words of any sentence 
		are counted. Of course, there are also many sentences that are shorter than 
		twenty words, but the network copes fine with an absence of data in some of 
		the nodes. The final compromise is in how the sentences are found: in the 
		sample code the network simply looks for a full stop. There are a number of 
		places where a full stop can occur in an English sentence and unfortunately 
		not all of them are at the end of the sentence, which means that in some cases 
		the given sentence for a test can be one letter long. This doesn't have a 
		detrimental effect on the performance of the network so I've left the code as 
		it is. I have yet to notice any sentences that have fallen through the gaps 
		because of these compromises, but in theory it is possible that the network 
		will not perform with one hundred percent accuracy.
		<P>Because of the way the linear separability is achieved between the words the 
			words are case sensitive, which is to say that the word "The" is not the same 
			as the word "the" as far as the network is concerned.
		</P>
		<H2 align="center">
			<META name="Originator" content="Microsoft Visual Studio .NET 7.1">
			<U><FONT size="7">Neural .Net pt 9</FONT></U></H2>
		<H2 align="center"><U></U></H2>
		<H2 align="center"><FONT size="7"><U>The Self Organizing Network</U></FONT>
		</H2>
		<P>
		So far, all the previous examples have involved a training technique that 
		involves holding the network's hand, by giving it the answer that we require 
		as part of the question. With the Self Organizing Network we arrive at a 
		neural network programming technique that allows the network to work things 
		out for itself. For this reason, although the Adaline Pattern is used to 
		present the data to the network, the patterns do not have the output value 
		filled out, which from a programming point of view makes life easy, as the 
		code to generate the Self Organizing Network's training and working files is 
		almost identical to the code to generate the files for the Adaline One 
		network. In fact it's a direct cut and paste with a few minor changes.
		<P>
		As with the other new networks, this network is based on the examples in Joey 
		Rogers' "Object Oriented Neural Networks in C++".
		<P><font size="5"><b><U>The Self Organizing Network</U></b></font>
		<P>
		One feature of the Self Organizing Network that is immediately obvious from the 
		diagram below is that this time we are dealing with what is essentially a flat 
		network, in that the network has only one active layer, and this layer is 
		arranged in a plate fashion. No matter how many nodes are added to the network 
		layer it will just expand. Note also that every input is connected to every 
		node. The idea behind this network is that the nodes are in competition with 
		each other for the correct answer.
		<P>
		This is called competitive learning, with the winner being determined by the 
		link values; once a winner is decided, the winner and its surrounding nodes 
		have their link values modified.
		<P>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/SelfOrganizingNetworkDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		<P>
		<P>
		</P>
		<IMG SRC="Neural_Dot_Net/SelfOrganizingNetworkOneClasses.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		</P>
		<H2><U>The Self Organizing Network Node</U>
		</H2>
		<P>
		The Self Organizing Network Node inherits directly from the BasicNode class and 
		defines a new pair of learn and run functions, as well as doing away with the 
		transfer functions that we have been using previously. The reason the transfer 
		functions are removed is that we no longer have the correct answer passed in 
		with the pattern, so we can't use the node and the pattern to see if we have 
		got it right or not.
		<P>
		The run function looks like.
		<P>
		<P></P>
		<PRE>
public override void Run(int nMode)
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Run function called for ", ClassName );
	}

	double dTotal = 0.0;
	for( int i=0; i&lt;this.InputLinks.Count; i++ )
	{
	/// square the difference between the input node's value and the link weight
		dTotal += Math.Pow( ( ( BasicLink )this.InputLinks[ i ] ).InputNode.GetValue( nMode ) - ( ( BasicLink )this.InputLinks[ i ] ).InputValue( Values.Weight ), 2 );
	}

	/// store the square root of all the values
	this.NodeValues[ Values.NodeValue ] = Math.Sqrt( dTotal );
}
</PRE>
		<P>
		It begins by declaring a variable dTotal which, as with the other networks, is 
		used to accumulate the values from all the input links, which in a Self 
		Organizing Network means all the inputs. For each input link it takes the value 
		of the input node and subtracts the link's weight from it.
		<P>
		This weight value is originally set to a random value by the Self Organizing 
		Network Link class. The difference is raised to the power of 2 and added to 
		dTotal. Once the node has cycled through all the inputs, the value for the node 
		is saved as the square root of the total, i.e. the Euclidean distance between 
		the input vector and the node's weight vector.
		<P>
		The learn function looks like
		<P></P>
		<PRE>
public override void Learn(int nMode) 
{ 
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true ) 
	{
		log.Log( DebugLevelSet.Progress, "learn called for ",  ClassName ); 
	}
	
	double dDelta = 0.0; 

	for( int i=0; i&lt;this.InputLinks.Count; i++ ) 
	{
		dDelta = ( ( double )this.NodeValues[ Values.LearningRate ] ) + ( ( BasicLink )this.InputLinks[ i ] ).InputValue( Values.Weight );
		( ( BasicLink )this.InputLinks[ i ] ).SetLinkValue( dDelta, Values.Weight );
	}
}		
</PRE>
		<P>
		<P>It begins by taking the total value of the input links and adding them to the 
			current learning rate for the node. The new weight value is then set to the 
			calculated value.
		</P>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>The Self Organizing Network Link</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		<P>The Self organizing Network link is derived from the basic link class and apart 
			from the saving and loading functionality adds only the setting of the links 
			weight value to a value between 0 and 1 in the constructor
		</P>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>The Self Organizing Network</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		<P>The real specialization of the Self Organizing network is done in this class 
			which is derived from the Basic Network class but has adds quite a bit of 
			functionality to make things work properly. It starts with a whole list of 
			class members,
		</P>
		<P></P>
		<PRE>
private int nHorizontalSize;
private int nVerticalSize;
private double dInitialLearningRate;
private double dFinalLearningRate;
private int nInitialNeighborhoodSize;
private int nNeighborhoodDecrement;
private int nNeighborhoodSize;
private int nNumberOfIterations;
private int nIterations;
private int nWinningHorizontalPos;
private int nWinningVerticalPos;
private int nNumberOfNodes;
private SelfOrganizingNetworkNode[][] arrayKohonenLayer = null;
</PRE>
		<P>
		<P>The two starting variables nHorizontalSize and nVerticalSize are the variables 
			that hold the size of the Kohonen layer of the network which is named after the 
			inventor of the this type of network and includes all nodes that are not input 
			nodes.
		</P>
		<P>
		<P>The variables dInitialLearningRate and dFinalLearningRate should be immediately 
			familiar by now as the initial Learning Rate is the Learning rate at the start 
			of the running of a training session and the Final Learning rate is the 
			learning rate at the end of the session. The learning rate is adjusted in the 
			epoch function of the Self Organizing Network class below.
		</P>
		<P>
		<P>The next three variables are the neighborhood variables. These variables 
			represent the nodes that are in the neighborhood of the winning node that is 
			updated via the learn function along with the nodes in the winning nodes 
			neighborhood. This is done in the Self Organizing Network class' learn function 
			below. The nNeighborhoodDecrement variable is the odd one out in that it is 
			updated in the epoch function if the number of iterations plus one divided by 
			the Neighborhood Decrement remainder is equal to 0.&nbsp;The 
			nInitialNeighborhoodSize is the starting size of the neighborhood which 
			defaults to five. The nNeighborhoodSize is the networks way of keeping track of 
			the current size of the neighborhood.
		</P>
		<P>
		<P>The nNumberOfIterations variable is the number of iterations for the network to 
			make during a run and is used in the epoch functions calculations.
		</P>
		<P>
		<P>The nIterations variable is also used by the epoch function and keeps track of 
			the number of times that epoch has been called which is once every time the 
			main loop is executed.
		</P>
		<P>
		<P>The variables nWinningHorizontalPos and nWinningVerticalPos are the horizontal 
			and the vertical positions of the winning node which is decided in the run 
			function on the basis of which node provides the smallest node value.
		</P>
		<P>
		<P>The&nbsp;nNumberOfNodes variable is&nbsp;the number of input nodes that are to 
			used for the network and&nbsp;is used in the CreateNetwork function below.
		</P>
		<P>
		<P>The first and probably the most important function in the Self Organizing 
			Network class is the Create Network function that builds the network,
		</P>
		<P></P>
		<PRE>
protected override void CreateNetwork()
{	
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Create Network called for the Self Organizing Network ", ClassName );
	}

	/// create the array
	this.arrayKohonenLayer = new SelfOrganizingNetworkNode[ this.HorizontalSize ][]; 
	for( int i=0; i&lt;this.HorizontalSize; i++ )
	{
		this.arrayKohonenLayer[ i ] = new SelfOrganizingNetworkNode[ this.VerticalSize ];
	}

	/// create the input nodes ( In the basic network nodes array )
	for( int i=0; i&lt;this.nNumberOfNodes; i++ )
	{
		this.Nodes.Add( new BasicNode( log ) );
	}

	int nLinks = 0;

	/// loop through the horizontal
	for( int i=0; i&lt;this.HorizontalSize; i++ )
	{
		/// loop through the vertical
		for( int n=0; n&lt;this.VerticalSize; n++ )
		{
			this.arrayKohonenLayer[ i ][ n ] = new SelfOrganizingNetworkNode( log, LearningRate );

			/// connect each input node to each node in the k layer
			for( int k=0; k&lt;this.nNumberOfNodes; k++ )
			{
				this.Links.Add( new SelfOrganizingNetworkLink( log ) );
				( ( BasicNode )this.Nodes[ k ] ).CreateLink( ( BasicNode )this.arrayKohonenLayer[ i ][ n ], ( BasicLink )this.Links[ nLinks ] );
				nLinks++;
			}
		}
	}
}		
</PRE>
		<P>
		<P>The Create Network function begins by allocating the Self Organizing Network 
			Nodes for the Kohonen Layer array, it does this by allocating a two dimensional 
			array&nbsp;using the horizontal and the vertical sizes of the&nbsp;arrays. It 
			then allocates the input nodes the number of which is passed as the number of 
			nodes variable to the constructor. This builds the basic framework for the 
			network which then&nbsp;needs to be filled in. This is done through the use of 
			three loops the first loop runs through the horizontal size&nbsp;of 
			the&nbsp;Kohonen Layer array while the second runs through the vertical size of 
			the array for each of the elements of the horizontal array. A new Self 
			Organizing Network Node is then created at each element of the array. The final 
			loop then creates new links that join up this newly created node to each 
			of&nbsp;the input nodes.&nbsp;&nbsp;
		</P>
		<P>
		<P>Once the network is built with the Create Network function we can start to 
			concentrate on how it works. The main function in the running of the network is 
			the Run function,
		</P>
		<P></P>
		<PRE>
public void Run()
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Run called for Self Organizing network ", ClassName );
	}

	int nHoriz = 0;
	int nVert = 0;
	double dMinimum = 99999.0;
	double dNodeValue = 0.0;

	for( nHoriz=0; nHoriz&lt;this.HorizontalSize; nHoriz++ )
	{
		for( nVert=0; nVert&lt;this.VerticalSize; nVert++ )
		{
			( ( SelfOrganizingNetworkNode )this.arrayKohonenLayer[ nHoriz ][ nVert ] ).Run( Values.NodeValue );
			dNodeValue = ( ( SelfOrganizingNetworkNode )this.arrayKohonenLayer[ nHoriz ][ nVert ] ).GetValue( Values.NodeValue );

			if( dNodeValue &lt; dMinimum )
			{
				dMinimum = dNodeValue;
				this.WinningHorizontalPos = nHoriz;
				this.WinningVerticalPos = nVert;
			}
		}
	}
}
</PRE>
		<P>
		The Run function cycles through the Kohonen Layer array, both horizontally and 
		vertically, and calls Run on each node in the layer. The function keeps track 
		of the lowest value returned so far in the dMinimum variable, and after calling 
		Run on each node it compares that node's final value with the minimum. If the 
		node's value is smaller than the current minimum, the node's position is stored 
		as the winning node position. ( The 99999.0 starting value for dMinimum works 
		because the node values stay far below it, although double.MaxValue would be 
		the more robust sentinel. )
		<P>
		The Learn function for the Self Organizing Network is different from the learn 
		functions in the previous networks in that it only applies to the winning node 
		and its surrounding nodes.
		<P>
		</P>
		<PRE>
public void Learn()
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Learn called for Self Organizing network ", ClassName );
	}

	int nHoriz = 0;
	int nVert = 0;

	/// work out the neighborhood boundary
	int nHorizStart = this.WinningHorizontalPos - this.NeighborhoodSize;
	int nHorizStop = this.WinningHorizontalPos + this.NeighborhoodSize;
	int nVertStart = this.WinningVerticalPos - this.NeighborhoodSize;
	int nVertStop = this.WinningVerticalPos + this.NeighborhoodSize;

	/// make sure the boundary is within the kohonen layer
	if( nHorizStart &lt; 0 )
		nHorizStart = 0;
	if( nHorizStop &gt;= this.HorizontalSize )
		nHorizStop = this.HorizontalSize;
	if( nVertStart &lt; 0 )
		nVertStart = 0;
	if( nVertStop &gt;= this.VerticalSize )
		nVertStop = this.VerticalSize;

	/// update the neighbors of the winning node
	for( nHoriz=nHorizStart; nHoriz&lt;nHorizStop; nHoriz++ )
	{
		for( nVert=nVertStart; nVert&lt;nVertStop; nVert++ )
		{
			( ( SelfOrganizingNetworkNode )this.arrayKohonenLayer[ nHoriz ][ nVert ] ).SetValue( Values.LearningRate, this.LearningRate );
			( ( SelfOrganizingNetworkNode )this.arrayKohonenLayer[ nHoriz ][ nVert ] ).Learn( Values.NodeValue );
		}
	}
}		
</PRE>
		<P>
		The Learn function begins by calculating the boundaries of the area to learn on 
		the vertical and the horizontal axes, clamping them so that they stay within 
		the Kohonen layer. Once the area has been calculated, each node's Learning Rate 
		is set to the network's current Learning Rate and Learn is called on each 
		individual node in the affected area.
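		<P>
		As a worked example, take iteration 435 from the training output below, which 
		produces a winning node at 0 horizontal and 6 vertical. With a neighborhood 
		size of five, the raw boundaries come out at -5 to 5 horizontally and 1 to 11 
		vertically; the clamping pulls these back to 0 to 5 and 1 to 10, and because 
		the loops use a strictly-less-than test against the stop values, Learn is 
		called on the block of nodes from 0 to 4 horizontally and 1 to 9 vertically.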
		<P>
		The Self Organizing Network class also implements the Epoch function which is 
		called once every iteration through the network.
		<P></P>
		<PRE>
public override void Epoch()
{
	if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
	{
		log.Log( DebugLevelSet.Progress, "Epoch called for the Self organizing network ", ClassName );
	}

	Iterations++;

	double fTemp = ( ( InitialLearningRate - ( ( double )Iterations/( double )NumberOfIterations ) ) );
	fTemp = ( ( double )fTemp * ( double )( InitialLearningRate - FinalLearningRate ) );

	if( fTemp &lt; FinalLearningRate )
		fTemp = FinalLearningRate;

	LearningRate = fTemp;

	if( ( Iterations + 1 )%NeighborhoodDecrement == 0 &amp;&amp; NeighborhoodSize &gt; 0 )
	{
		NeighborhoodSize--;
		if( NeighborhoodSize &lt; FinalNeighborhoodSize )
			NeighborhoodSize = FinalNeighborhoodSize;
	}
}		
</PRE>
		<P>
		The Epoch function calculates the current Learning Rate for the network and 
		controls the neighborhood size. For each of these calculations it checks that 
		the new value has not fallen below the final value specified for the learning 
		rate or the neighborhood size respectively, and if it has, the value is clamped 
		to that final value.
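		<P>
		To put some numbers on that, take the defaults from the saved file below: an 
		initial learning rate of 0.5, a final rate of 0.01 and 500 iterations. At 
		iteration 1 the calculation gives ( 0.5 - 1/500 ) x ( 0.5 - 0.01 ), which is 
		roughly 0.244, and the result falls a little further on every iteration until 
		at iteration 250 it reaches ( 0.5 - 250/500 ) x 0.49 = 0. That is below the 
		final rate, so from the halfway point onwards the learning rate is clamped at 
		0.01.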
		<P>
		<P><font size="5"><b><U>Training</U> </b></font>
		<P>
		There are three main loops involved in training the Self Organizing Network: 
		the first deals with the actual training, the second deals with a special test 
		file that has three hundred entries that are all the same, and the third is the 
		final test and output to check that the network has been trained.
		<P>
			<IMG SRC="Neural_Dot_Net/Inside Self Organizing network run.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		</P>
		<PRE>
/// train the self organizing network
int nIteration = 0;

for( nIteration=0; nIteration&lt;nNumberOfSonOneIterations; nIteration++ )
{
	for( int i=0; i&lt;nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
	{
		son.SetValue( ( Pattern )patterns[ i ] );
		son.Run();
		netWorkText.AppendText( "." );
	}

	son.Learn();

	son.Epoch();

	log.Log( DebugLevelSet.Progress, "Iteration number " + nIteration.ToString() + " produced a winning node at  " + son.WinningHorizontalPos + " Horizontal and " + son.WinningVerticalPos + " vertical, winning node value = " + son.GetWinningNodeValue( son.WinningHorizontalPos, son.WinningVerticalPos ) + "\n", ClassName );
	netWorkText.AppendText( "\nIteration number " + nIteration.ToString() + " produced a winning node at  " + son.WinningHorizontalPos + " Horizontal and " + son.WinningVerticalPos + " vertical, winning node value = " + son.GetWinningNodeValue( son.WinningHorizontalPos, son.WinningVerticalPos ) + "\n" );
}

netWorkText.AppendText( "Saving the network" );

FileStream xmlstream = new FileStream( "selforganizingnetworkone.xml", FileMode.Create, FileAccess.Write, FileShare.ReadWrite, 8, true );
XmlWriter xmlWriter = new XmlTextWriter( xmlstream, System.Text.Encoding.UTF8 );
xmlWriter.WriteStartDocument();

son.Save( xmlWriter );

xmlWriter.WriteEndDocument();
xmlWriter.Close();

/// now load the file
FileStream readStream = new FileStream( "selforganizingnetworkone.xml", FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 8, true );
XmlReader xmlReader = new XmlTextReader( readStream );

SelfOrganizingNetwork sonTest = new SelfOrganizingNetwork( log );
sonTest.Load( xmlReader );
xmlReader.Close();

netWorkText.AppendText( "Testing against the test file the following out put should be identical for the test\n" );

ArrayList testPatterns = this.LoadSelfOrganizingNetworkFile( "SelfOrganizingNetworkOneTest.tst" );

for( int i=0; i&lt;nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
{
	sonTest.SetValue( ( Pattern )testPatterns[ i ] );
	sonTest.Run();
	netWorkText.AppendText( "Run called at " + i.ToString() + " Network Values are :- Composite Value = " + sonTest.GetPosition( Values.Composite ) + ", Horizontal Value = " + sonTest.GetPosition( Values.Row ) + ", Vertical Value = " + sonTest.GetPosition( Values.Column ) + ", Current Winning Horizontal Position = " + sonTest.WinningHorizontalPos + ", Current Winning Vertical Position " + sonTest.WinningVerticalPos + ", Inputs = " + ( ( BasicNode )sonTest.Nodes[ 0 ] ).NodeValues[ Values.NodeValue ].ToString() + "," + ( ( BasicNode )sonTest.Nodes[ 1 ] ).NodeValues[ Values.NodeValue ].ToString() + ", Winning Node Value = " + sonTest.GetWinningNodeValue( sonTest.WinningHorizontalPos, sonTest.WinningVerticalPos ) + "\n" );
}

testPatterns.Clear();
StringBuilder strDataDisplay = new StringBuilder( "" );

ArrayList arrayOutput = new ArrayList();
SelfOrganizingNetworkData data;
netWorkText.AppendText( "Completed the test ... now reprocessing the orginal data through the loaded network\n " ); 

for( int i=0; i&lt;nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
{
	sonTest.SetValue( ( Pattern )patterns[ i ] );
	sonTest.Run();

	strDataDisplay.Remove( 0, strDataDisplay.Length );
	strDataDisplay.Append( "Run Called at " + i.ToString() + " Network Values are :- Composite Value = " + sonTest.GetPosition( Values.Composite ) + ", Horizontal Value = " + sonTest.GetPosition( Values.Row ) + ", Vertical Value = " + sonTest.GetPosition( Values.Column ) + ", Current Winning Horizontal Position = " + sonTest.WinningHorizontalPos + ", Current Winning Vertical Position " + sonTest.WinningVerticalPos + ", Inputs = " + ( ( BasicNode )sonTest.Nodes[ 0 ] ).NodeValues[ Values.NodeValue ].ToString() + "," + ( ( BasicNode )sonTest.Nodes[ 1 ] ).NodeValues[ Values.NodeValue ].ToString() + ", Winning Node Value = " + sonTest.GetWinningNodeValue( sonTest.WinningHorizontalPos, sonTest.WinningVerticalPos ) + "\n" );

	netWorkText.AppendText( strDataDisplay.ToString() ); 

	data = new SelfOrganizingNetworkData();
	data.CompositeValue = ( int )sonTest.GetPosition( Values.Composite );
	data.Data = strDataDisplay.ToString();

	arrayOutput.Add( data );
}		
</PRE>
		<P>
		The first loop in the above code deals with the actual training. This is done 
		by running a loop that cycles through the specified number of iterations; the 
		default for this is 500. Within the loop is another loop that presents the data 
		loaded into the pattern array from an external file. In the example this file 
		is generated at run time, although you can turn this option off if you wish to 
		repeatedly test with the same file.
		<P>
		The internal loop presents each pattern individually to the network and calls 
		Run for the network. The network Run function described above will then find 
		the node that has the minimum value and declare this to be the winning node. 
		When the internal loop has finished, the Self Organizing Network Learn function 
		is called to calculate the size of the current neighborhood and update the 
		winning node and its surrounding nodes by calling Learn on the individual nodes 
		affected.
		<P>
		Finally, the first loop calls the Self Organizing Network Epoch function, which 
		updates the learning rate for the network and, if appropriate, the network's 
		neighborhood size.
		<P>
		The second loop of the three is a testing loop. It works on the theory that 
		there has to be a way to check that the Self Organizing Network is doing its 
		job, and as Self Organizing Networks are mainly tools for categorization this 
		isn't an easy task, so the second loop is added to provide some kind of 
		verification. It does this by loading a test file in which all the entries are 
		the same and running them against the freshly loaded network. If the printed 
		output from the network is identical in all three hundred cases, we know that 
		the network is performing consistently. This doesn't prove that it is right, 
		just that when given the same set of values it will produce the same answer.
		<P>
		The third and final loop is the main test loop, where the original input is 
		tested against the network that was saved to disk at the end of the training 
		session. The main difference between this code and the previous testing loops 
		is that I sort the data for the output into categories using the Self 
		Organizing Network data struct,
		<P></P>
		<PRE>
public struct SelfOrganizingNetworkData
{
	private int nCompositeValue;
	private string strData;

	public int CompositeValue 
	{
		get
		{
			return nCompositeValue;
		}
		set
		{
			nCompositeValue = value;
		}
	}

	public string Data
	{
		get
		{
			return strData;
		}
		set
		{
			strData = value;
		}
	}
}		
</PRE>
		<P>
		As you can see this is just a simple structure that is used to store the 
		composite value of each response from the network and a string representing the 
		data that it holds. These are then placed into an array that is sorted by the 
		following code,
		<P></P>
		<PRE>
SelfOrganizingNetworkData dataTest;

bool bDataValid = false;

for( int i=0; i&lt;arrayOutput.Count; i++ )
{
	/// first value is always valid
	if( i==0 )
	{
		nItemCount = 0;
		data = ( SelfOrganizingNetworkData )arrayOutput[ i ];
		bDataValid = true;
	}
	else
	{
		bool bFound = false;
		data = ( SelfOrganizingNetworkData )arrayOutput[ i ];

		for( int n=0; n&lt;i; n++ )
		{
			dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
			if( dataTest.CompositeValue == data.CompositeValue )
			{
				bFound = true;
				n=i;
			}
		}

		if( bFound == false )
		{
			nItemCount = 0;
			data = ( SelfOrganizingNetworkData )arrayOutput[ i ];
			bDataValid = true;
		}
	}

	if( bDataValid == true )
	{
		for( int n=0; n&lt;arrayOutput.Count; n++ )
		{
			dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
			if( dataTest.CompositeValue == data.CompositeValue )
			{
				nItemCount++;
			}
		}

		netWorkText.AppendText( "\n\nThere are " + nItemCount.ToString() + " items out of " + arrayOutput.Count.ToString() + " That have the Composite Value " + data.CompositeValue.ToString() + "\n" );

		for( int n=0; n&lt;arrayOutput.Count; n++ )
		{
			dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
			if( dataTest.CompositeValue == data.CompositeValue )
			{
				netWorkText.AppendText( dataTest.Data + "\n" );
			}
		}

		bDataValid = false;
	}
}
</PRE>
		<IMG SRC="Neural_Dot_Net/learn For the self organizing network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		which simply groups the data according to its composite value and prints the 
		groups out to the screen in complete blocks. ( The composite value appears to 
		encode the winning position as the horizontal position multiplied by ten plus 
		the vertical position, so a win at 2 horizontal, 0 vertical gives the composite 
		value 20. )
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Saving And Loading</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		Saving and loading for the Self Organizing Network uses the same xml format as 
		the rest of the program and the file looks like
		<P></P>
		<PRE>
&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;SelfOrganizingNetwork&gt;
	&lt;HorizontalSize&gt;10&lt;/HorizontalSize&gt;
	&lt;VerticalSize&gt;10&lt;/VerticalSize&gt;
	&lt;InitialLearningRate&gt;0.5&lt;/InitialLearningRate&gt;
	&lt;LearningRate&gt;0.01&lt;/LearningRate&gt;
	&lt;FinalLearningRate&gt;0.01&lt;/FinalLearningRate&gt;
	&lt;InitialNeighborhoodSize&gt;5&lt;/InitialNeighborhoodSize&gt;
	&lt;FinalNeighborhoodSize&gt;5&lt;/FinalNeighborhoodSize&gt;
	&lt;NeighborhoodDecrement&gt;100&lt;/NeighborhoodDecrement&gt;
	&lt;NeighborhoodSize&gt;5&lt;/NeighborhoodSize&gt;
	&lt;NumberOfIterations&gt;500&lt;/NumberOfIterations&gt;
	&lt;Iterations&gt;500&lt;/Iterations&gt;
	&lt;WinningHorizontalPosition&gt;0&lt;/WinningHorizontalPosition&gt;
	&lt;WinningVerticalPosition&gt;9&lt;/WinningVerticalPosition&gt;
	&lt;NumberOfNodes&gt;2&lt;/NumberOfNodes&gt;
	&lt;InputLayer&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;0&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.374602794355994&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;Bias&gt;
				&lt;BiasValue&gt;1&lt;/BiasValue&gt;
			&lt;/Bias&gt;
		&lt;/BasicNode&gt;
		&lt;BasicNode&gt;
			&lt;Identifier&gt;1&lt;/Identifier&gt;
			&lt;NodeValue&gt;0.955952847821616&lt;/NodeValue&gt;
			&lt;NodeError&gt;0&lt;/NodeError&gt;
			&lt;Bias&gt;
				&lt;BiasValue&gt;1&lt;/BiasValue&gt;
			&lt;/Bias&gt;
		&lt;/BasicNode&gt;
	&lt;/InputLayer&gt;
	&lt;KohonenLayer&gt;
		&lt;SelfOrganizingNetworkNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;2&lt;/Identifier&gt;
				&lt;NodeValue&gt;11.8339429844748&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.01&lt;/NodeValue&gt;
				&lt;NodeError&gt;0&lt;/NodeError&gt;
				&lt;Bias&gt;
					&lt;BiasValue&gt;1&lt;/BiasValue&gt;
				&lt;/Bias&gt;
			&lt;/BasicNode&gt;
		&lt;/SelfOrganizingNetworkNode&gt;

		Through to ....

		&lt;SelfOrganizingNetworkNode&gt;
			&lt;BasicNode&gt;
				&lt;Identifier&gt;299&lt;/Identifier&gt;
				&lt;NodeValue&gt;12.0534121931137&lt;/NodeValue&gt;
				&lt;NodeValue&gt;0.01&lt;/NodeValue&gt;
				&lt;NodeError&gt;0&lt;/NodeError&gt;
				&lt;Bias&gt;
					&lt;BiasValue&gt;1&lt;/BiasValue&gt;
				&lt;/Bias&gt;
			&lt;/BasicNode&gt;
		&lt;/SelfOrganizingNetworkNode&gt;
		&lt;SelfOrganizingNetworkLink&gt;
			&lt;BasicLink&gt;
				&lt;Identifier&gt;3&lt;/Identifier&gt;
				&lt;LinkValue&gt;9.2275126633717&lt;/LinkValue&gt;
				&lt;InputNodeID&gt;0&lt;/InputNodeID&gt;
				&lt;OutputNodeID&gt;2&lt;/OutputNodeID&gt;
			&lt;/BasicLink&gt;
		&lt;/SelfOrganizingNetworkLink&gt;
		
		through to .....

		&lt;SelfOrganizingNetworkLink&gt;
			&lt;BasicLink&gt;
				&lt;Identifier&gt;301&lt;/Identifier&gt;
				&lt;LinkValue&gt;9.47900234600894&lt;/LinkValue&gt;
				&lt;InputNodeID&gt;1&lt;/InputNodeID&gt;
				&lt;OutputNodeID&gt;299&lt;/OutputNodeID&gt;
			&lt;/BasicLink&gt;
		&lt;/SelfOrganizingNetworkLink&gt;
	&lt;/KohonenLayer&gt;
&lt;/SelfOrganizingNetwork&gt;
</PRE>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Testing</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		The Testing portions of the code are located under the run menu for the Neural 
		Net Tester program. The test for this program is the "Load And Run Self 
		Organizing Network 1" option. This will load a file that resembles the one 
		above. I say resembles, as the linkage values won't be exactly the same any two 
		times running.
		<P>
		The menu option will load and run the "SelfOrganizingNetworkOne.wrk" file 
		and generate the log "Load And Run Self Organizing Network One.xml", which can 
		be viewed using the LogViewer that is part of the Neural Net Tester program.
		<P>
		The display will show an output similar to that found when running the Adaline 
		networks and is described in Understanding The Output below.
		<P>The quick guide is
			<UL>
				<LI>
				Menu :- Run/Load And Run Self Organizing Network 1 :- Loads the saved Self 
				Organizing network from the disk and then runs it against a newly 
				generated working file "SelfOrganizingNetwork.wrk".
				<LI>
				Menu :- Generate/Generate Self Organizing Network One Training File :- 
				Generates the training file for the network.
				<LI>
				Menu :- Generate/Generate Self Organizing Network One Working File :- Generates 
				the working file for the run menu.
				<LI>
				Menu :- Generate/Generate Self Organizing Network One Test File :- Generates a 
				test file for proving that the network has learnt correctly. ( used during the 
				training option. )&nbsp;
				<LI>
				Menu :- Train/Self Organizing Network 1 :- Trains the network from 
				scratch using the sample file, which by default is generated first 
				( "SelfOrganizingNetwork.trn" ).
				<LI>
					Menu :- Options/Self Organizing Network 1&nbsp;Options :- Brings up a dialog 
					that allows you to set certain parameters for the running of the network.</LI>
			</UL>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Options</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
			<IMG SRC="Neural_Dot_Net/selfOrganizingnetworkoneoptions.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		The above is the dialog for the Self Organizing Network One options, which 
		contains all the available, changeable options that you can apply before 
		running the program. It starts off with the Number Of Items, which is the 
		number of items contained in the files that the network uses, these being the 
		training file "SelfOrganizingNetwork.trn", the working file 
		"SelfOrganizingNetwork.wrk" and the testing file "SelfOrganizingNetwork.tst"
		<P>
		The Initial Learning Rate is the starting learning rate when the network is run 
		and the Final Learning Rate is the learning rate that the network will end up 
		at. This is adjusted incrementally throughout the running of the program ( it 
		is done in the Epoch function )
		<P>
		The Neighborhood Size is the size of the area of nodes whose weights you want 
		to adjust when Learn is called. The idea is that this size is reduced as the 
		program progresses, although in this current example it remains at a constant 
		five by default.
		<P>
		The Neighborhood Decrement is the interval between changes to the neighborhood 
		size, measured in iterations; so with the default of one hundred, the 
		neighborhood size will be decremented by one every hundred iterations.
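		<P>
		For example, with the decrement at its default of one hundred and a five 
		hundred iteration run, the test ( Iterations + 1 ) % NeighborhoodDecrement == 0 
		fires at iterations 99, 199, 299, 399 and 499, giving five reductions in all. 
		The word version of the network later on uses a final neighborhood size of one, 
		and its saved file duly shows the size ending the run at one.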
		<P>
		The Number Of Iterations is the number of times that you wish the network to 
		run through the training loop.
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Understanding The Output</U><FONT size="5"><B></H2>
		</B></FONT>
		<P><font size="4"><b></P>
		</B></FONT>
		<H3>Training<FONT size="4"><B></H3>
		</B></FONT>
		<P>
		<P>
		Iteration number 434 produced a winning node at 7 Horizontal and 8 vertical, 
		winning node value = 11.5295197965397
		<P>
		............................................................................................................................................................................................................................................................................................................
		<P>
		Iteration number 435 produced a winning node at 0 Horizontal and 6 vertical, 
		winning node value = 11.5295669799536
		<P>
		............................................................................................................................................................................................................................................................................................................
		<P>
		Iteration number 436 produced a winning node at 2 Horizontal and 0 vertical, 
		winning node value = 11.5424655187099
		<P>
		<P>
		At first the program runs through the data, cycling through the specified 
		number of iterations, in this case 500. For each iteration it prints out the 
		iteration number and the winning node position and value; the dots are printed 
		out after each call to the Self Organizing Network Run function.
		<P>
		Pattern ID = 301 Input Value 0 = 0.450591519219145 Input Value 1 = 
		0.0319415726847675 Output Value 0 = 0
		<P>
		Pattern ID = 302 Input Value 0 = 0.450591519219145 Input Value 1 = 
		0.0319415726847675 Output Value 0 = 0
		<P>
		Once the training run is complete, the network loads the pattern data from the 
		test file "SelfOrganizingNetwork.tst". This file contains a number of items 
		that are all the same, to test whether the network gives the same answers when 
		the same values are input.
		<P>
		Run called at 2 Network Values are :- Composite Value = 90, Horizontal Value = 
		9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning 
		Vertical Position 0, Inputs = 0.450591519219145,0.0319415726847675, Winning 
		Node Value = 11.9952261309799
		<P>
		Run called at 3 Network Values are :- Composite Value = 90, Horizontal Value = 
		9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning 
		Vertical Position 0, Inputs = 0.450591519219145,0.0319415726847675, Winning 
		Node Value = 11.9952261309799
		<P>
		The results of the test are then output to the screen, the values output are 
		the composite value of the winning position, the horizontal and the vertical 
		winning positions, the input values and the winning node value.
		<P>
		Once the test has run, the network is loaded from the xml file and the results 
		of the test are output.
		<P>
		Run Called at 206 Network Values are :- Composite Value = 90, Horizontal Value 
		= 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current 
		Winning Vertical Position 0, Inputs = 0.603428498191493,0.184778551657115, 
		Winning Node Value = 11.7790820024851
		<P>
		Run Called at 207 Network Values are :- Composite Value = 20, Horizontal Value 
		= 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current 
		Winning Vertical Position 0, Inputs = 0.167407868973635,0.748757922439258, 
		Winning Node Value = 11.6911783767234
		<P>
		The output for the true test is the same as the output for the test case above. 
		When the test is finished the data is sorted and then output as,
		<P>
		There are 184 items out of 300 That have the Composite Value 20
		<P>
		Run Called at 0 Network Values are :- Composite Value = 20, Horizontal Value = 
		2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning 
		Vertical Position 0, Inputs = 0.634821365417364,0.617450682733883, Winning Node 
		Value = 11.4534014253365
		<P>
		<P>
		There are 116 items out of 300 That have the Composite Value 90
		<P>
		Run Called at 1 Network Values are :- Composite Value = 90, Horizontal Value = 
		9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning 
		Vertical Position 0, Inputs = 0.29477788149136,0.123872065043017, Winning Node 
		Value = 12.0416726692412
		<P>
		These results largely give a split where the value on the left is higher than 
		the value on the right. The network does, however, sometimes get confused when 
		the differences between the values are extremely small.
		<P><font size="4"><b></P>
		</B></FONT>
		<H3>Running<FONT size="4"><B></H3>
		</B></FONT>
		<P>
		<P>
		Generating Self Organizing Network File... Please Wait
		<P>
		Self Organizing Network File Generated
		<P>
		Pattern ID = 1 Input Value 0 = 0.634821365417364 Input Value 1 = 
		0.617450682733883 Output Value 0 = 0
		<P>
		Pattern ID = 2 Input Value 0 = 0.29477788149136 Input Value 1 = 
		0.123872065043017 Output Value 0 = 0
		<P>
		Pattern ID = 3 Input Value 0 = 0.141242747726498 Input Value 1 = 
		1.55989269426088 Output Value 0 = 0
		<P>
		<P>
		A run starts by generating a new testing file and then loading the data into 
		the pattern array. The pattern array is then run through the network and the 
		output printed to the screen.
		<P>
		Run Called at 109 Network Values are :- Composite Value = 20, Horizontal Value 
		= 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current 
		Winning Vertical Position 0, Inputs = 0.149710465292311,0.731060518757934, 
		Winning Node Value = 11.7162020269269
		<P>
		Run Called at 110 Network Values are :- Composite Value = 20, Horizontal Value 
		= 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current 
		Winning Vertical Position 0, Inputs = 0.312410572223557,0.295039889540076, 
		Winning Node Value = 11.9092871878856
		<P>
		<P>
		Once the data is run it is then sorted into groups based on the composite 
		values.
		<P>
		Run Called at 3 Network Values are :- Composite Value = 20, Horizontal Value = 
		2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning 
		Vertical Position 0, Inputs = 0.577263376944355,1.99591332347873, Winning Node 
		Value = 10.5484882984376
		<P>
		<P>
		Run Called at 4 Network Values are :- Composite Value = 20, Horizontal Value = 
		2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning 
		Vertical Position 0, Inputs = 2.41456327001311,2.43193395269659, Winning Node 
		Value = 8.91189833587868
		<P>
		And
		<P>
		<P>
		There are 116 items out of 300 That have the Composite Value 90
		<P>
		Run Called at 1 Network Values are :- Composite Value = 90, Horizontal Value = 
		9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning 
		Vertical Position 0, Inputs = 0.29477788149136,0.123872065043017, Winning Node 
		Value = 12.0416726692412
		<P>
		Run Called at 5 Network Values are :- Composite Value = 90, Horizontal Value = 
		9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning 
		Vertical Position 0, Inputs = 3.85058389923097,2.86795458191445, Winning Node 
		Value = 7.59617983634502
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Fun And Games</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		I must confess to a certain ambivalence about the Self Organizing Network, in 
		that an understanding of what you are after is still required, if not in quite 
		the same way as in the preceding networks, where the correct answers must be 
		known in advance. There comes a point where you should know what to expect from 
		the network, and if you don't, I get the impression that the analysis of the 
		results can take longer and be more involved than the actual running of the 
		network in the first place.
		<P>
		This is in no way intended to demean the value of the network; it is more a 
		pointer to something that should be borne in mind. The network can be useful to 
		run experimentally, just to see what comes out, but until it is run there can 
		be no guarantee that the answers it gives will be of any value.
		<P>One problem that did occur during the testing of the network was the fact that 
			originally there was no Final Learning Rate option which meant that the 
			Learning Rate was being continually adjusted and decreased until it was a large 
			negative number. Naturally this was skewing the networks results somewhat so I 
			included the Final Learning Rate option which is the lowest that the learning 
			rate is allowed to&nbsp;go, this fixed the problem and the network now comes to 
			much more reasonable conclusions about the data being input.&nbsp;
		</P>
		<H2 align="center"><FONT size="7"><U>Neural .Net pt 10</U></FONT></H2>
		<H2 align="center"><FONT size="7"><U>The Self Organizing Word Network</U></FONT><B><BR>
		</H2>
		</B>
		<P>
		As with the previous word versions of the networks, this one is merely an 
		extension of the code that has gone before, except perhaps more so. This 
		network could have been written without any extra classes being added at all; 
		in fact the Self Organizing Network Word Pattern class that was written for 
		this example is never used, as I reuse classes from the Back Propagation 
		network.
		<P>
		I won't go into detail about the classes here as they add nothing that hasn't 
		been said before. If anyone wishes to use them, they are provided in their own 
		file. However, for the sake of completeness I will go through the details of 
		how the network is used here, so that if anyone wants to look it up there isn't 
		a sudden break with the way the rest of the code is documented.
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>The Self Organizing Word Network</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		The Self Organizing Word Network basically implements the same functionality as 
		the Self Organizing Network.
		<P></P>
		<IMG SRC="Neural_Dot_Net/SelfOrganizingNetworkDiagram.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P></P>
		<IMG SRC="Neural_Dot_Net/selforganizingnetworkword.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<PRE></PRE>
		<P>
		For details on implementation see the base class descriptions above.
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Training</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		The training for the Self Organizing Word Network consists of two loops, one 
		for the training and one for the test. The additional loop that was used for 
		the Self Organizing Network has been omitted from the word version as I didn't 
		feel it was required.
		<P></P>
		<IMG SRC="Neural_Dot_Net/Inside Self Organizing network run.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
			<PRE>
for( nIteration=0; nIteration&lt;nNumberOfSonTwoIterations; nIteration++ )
{
	for( int i=0; i&lt;patterns.Count; i++ )
	{
		for( int n=0; n&lt;20; n++ )
		{
			/// Note because I use the LoadBackPropagationWord training file function the pattern should be cast to an adaline word pattern which 
			/// contains the required function to get the values from the words.
			( ( BasicNode )soNetwork.Nodes[ n ] ).SetValue( Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( n ) );	
		}

		soNetwork.Run();
		netWorkText.AppendText( "." );
	}

	soNetwork.Learn();

	soNetwork.Epoch();

	log.Log( DebugLevelSet.Progress, "Iteration number " + nIteration.ToString() + " produced a winning node at  " + soNetwork.WinningHorizontalPos + " Horizontal and " + soNetwork.WinningVerticalPos + " vertical, winning node value = " + soNetwork.GetWinningNodeValue( soNetwork.WinningHorizontalPos, soNetwork.WinningVerticalPos ) + "\n", ClassName );
	netWorkText.AppendText( "\nIteration number " + nIteration.ToString() + " produced a winning node at  " + soNetwork.WinningHorizontalPos + " Horizontal and " + soNetwork.WinningVerticalPos + " vertical, winning node value = " + soNetwork.GetWinningNodeValue( soNetwork.WinningHorizontalPos, soNetwork.WinningVerticalPos ) + "\n" );
}

netWorkText.AppendText( "Saving the network\n" );

FileStream xmlstream = new FileStream( "selforganizingnetworktwo.xml", FileMode.Create, FileAccess.Write, FileShare.ReadWrite, 8, true );
XmlWriter xmlWriter = new XmlTextWriter( xmlstream, System.Text.Encoding.UTF8 );
xmlWriter.WriteStartDocument();

soNetwork.Save( xmlWriter );

xmlWriter.WriteEndDocument();
xmlWriter.Close();

/// now load the file
FileStream readStream = new FileStream( "selforganizingnetworktwo.xml", FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 8, true );
XmlReader xmlReader = new XmlTextReader( readStream );

netWorkText.AppendText( "Loading the network\n" );
SelfOrganizingNetworkWordNetwork sonTest = new SelfOrganizingNetworkWordNetwork( log );

sonTest.Load( xmlReader );
xmlReader.Close();

StringBuilder strDataDisplay = new StringBuilder( "" );
ArrayList arrayOutput = new ArrayList();

SelfOrganizingNetworkData data;
netWorkText.AppendText( "Completed the test ... now reprocessing the orginal data through the loaded network\n " ); 

for( int i=0; i&lt;patterns.Count; i++ )
{
	for( int n=0; n&lt;20; n++ )
	{
		/// Note because I use the LoadBackPropagationWord training file function the pattern should be cast to an adaline word pattern which 
		/// contains the required function to get the values from the words.
		( ( BasicNode )sonTest.Nodes[ n ] ).SetValue( Values.NodeValue, ( ( AdalineWordPattern )patterns[ i ] ).GetInSetAt( n ) );	
	}

	sonTest.Run();

	strDataDisplay.Remove( 0, strDataDisplay.Length );
	strDataDisplay.Append( "Run Called at " + i.ToString() + " Network Values are :- Composite Value = " + sonTest.GetPosition( Values.Composite ) + ", Horizontal Value = " + sonTest.GetPosition( Values.Row ) + ", Vertical Value = " + sonTest.GetPosition( Values.Column ) + ", Current Winning Horizontal Position = " + sonTest.WinningHorizontalPos + ", Current Winning Vertical Position " + sonTest.WinningVerticalPos + ", Inputs = " + ( ( BasicNode )sonTest.Nodes[ 0 ] ).NodeValues[ Values.NodeValue ].ToString() + "," + ( ( BasicNode )sonTest.Nodes[ 1 ] ).NodeValues[ Values.NodeValue ].ToString() + ", Winning Node Value = " + sonTest.GetWinningNodeValue( sonTest.WinningHorizontalPos, sonTest.WinningVerticalPos ) + "\n" );

	strDataDisplay.Append( " String Data :- " );

	for( int n=0; n&lt;( ( AdalineWordPattern )patterns[ i ] ).InputSize(); n++ )
	{
		strDataDisplay.Append( ( ( AdalineWordPattern )patterns[ i ] ).InputValue( n ) + " " );
	}

	netWorkText.AppendText( strDataDisplay.ToString() ); 

	data = new SelfOrganizingNetworkData();
	data.CompositeValue = ( int )sonTest.GetPosition( Values.Composite );
	data.Data = strDataDisplay.ToString();

	arrayOutput.Add( data );
}
</PRE>
		<P>
		The first loop performs the training of the network for the required number of 
		iterations, which in this example is five hundred. The code then loads the 
		pattern's values for the words into the NodeValue section of each node. Note 
		that I use the Adaline Word pattern for this, as well as reusing some of the 
		earlier functions for loading the file. This is done in this demo to reduce the 
		amount of cut and paste code, although if you were developing an application 
		using the Self Organizing Network Word files then you would need to write this 
		yourself.
		<P>
		The Learn and Epoch functions are then called at the end of each iteration. The 
		Learn function updates the nodes surrounding the winning node, over an area 
		controlled by the neighborhood size variable.
		<P>
		The code then does what should by now be the standard saving and reloading of 
		the network, and then reruns it through an identical loop without calling the 
		Learn or the Epoch functions, with the results of the output using the same 
		formatting code as the Self Organizing Network.
		<P>
			<IMG SRC="Neural_Dot_Net/learn For the self organizing network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
			<PRE></PRE>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Saving And Loading</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		Saving and Loading is done using the same xml format as everywhere else and 
		looks like,
		<P>
			<PRE>
&lt;?xml version="1.0" encoding="utf-8"?&gt;
	&lt;SelfOrganizingNetworkWordNetwork&gt;
		&lt;SelfOrganizingNetwork&gt;
			&lt;HorizontalSize&gt;10&lt;/HorizontalSize&gt;
			&lt;VerticalSize&gt;10&lt;/VerticalSize&gt;
			&lt;InitialLearningRate&gt;0.5&lt;/InitialLearningRate&gt;
			&lt;LearningRate&gt;0.01&lt;/LearningRate&gt;
			&lt;FinalLearningRate&gt;0.01&lt;/FinalLearningRate&gt;
			&lt;InitialNeighborhoodSize&gt;5&lt;/InitialNeighborhoodSize&gt;
			&lt;FinalNeighborhoodSize&gt;1&lt;/FinalNeighborhoodSize&gt;
			&lt;NeighborhoodDecrement&gt;100&lt;/NeighborhoodDecrement&gt;
			&lt;NeighborhoodSize&gt;1&lt;/NeighborhoodSize&gt;
			&lt;NumberOfIterations&gt;500&lt;/NumberOfIterations&gt;
			&lt;Iterations&gt;500&lt;/Iterations&gt;
			&lt;WinningHorizontalPosition&gt;7&lt;/WinningHorizontalPosition&gt;
			&lt;WinningVerticalPosition&gt;0&lt;/WinningVerticalPosition&gt;
			&lt;NumberOfNodes&gt;20&lt;/NumberOfNodes&gt;
			&lt;InputLayer&gt;
				&lt;BasicNode&gt;
					&lt;Identifier&gt;0&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.1074&lt;/NodeValue&gt;
					&lt;NodeError&gt;0&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;

				through to ...

				&lt;BasicNode&gt;
					&lt;Identifier&gt;19&lt;/Identifier&gt;
					&lt;NodeValue&gt;0.2406&lt;/NodeValue&gt;
					&lt;NodeError&gt;0&lt;/NodeError&gt;
					&lt;Bias&gt;
						&lt;BiasValue&gt;1&lt;/BiasValue&gt;
					&lt;/Bias&gt;
				&lt;/BasicNode&gt;
			&lt;/InputLayer&gt;
			&lt;KohonenLayer&gt;
				&lt;SelfOrganizingNetworkNode&gt;
					&lt;BasicNode&gt;
						&lt;Identifier&gt;20&lt;/Identifier&gt;
						&lt;NodeValue&gt;36.7925330599203&lt;/NodeValue&gt;
						&lt;NodeValue&gt;0.01&lt;/NodeValue&gt;
						&lt;NodeError&gt;0&lt;/NodeError&gt;
						&lt;Bias&gt;
							&lt;BiasValue&gt;1&lt;/BiasValue&gt;
						&lt;/Bias&gt;
					&lt;/BasicNode&gt;
				&lt;/SelfOrganizingNetworkNode&gt;
				
				through to ....
				
				&lt;SelfOrganizingNetworkNode&gt;
					&lt;BasicNode&gt;
						&lt;Identifier&gt;2099&lt;/Identifier&gt;
						&lt;NodeValue&gt;36.793609556095&lt;/NodeValue&gt;
						&lt;NodeValue&gt;0.01&lt;/NodeValue&gt;
						&lt;NodeError&gt;0&lt;/NodeError&gt;
						&lt;Bias&gt;
							&lt;BiasValue&gt;1&lt;/BiasValue&gt;
						&lt;/Bias&gt;
					&lt;/BasicNode&gt;
				&lt;/SelfOrganizingNetworkNode&gt;
				&lt;SelfOrganizingNetworkLink&gt;
					&lt;BasicLink&gt;
						&lt;Identifier&gt;23&lt;/Identifier&gt;
						&lt;LinkValue&gt;8.38539366591963&lt;/LinkValue&gt;
						&lt;InputNodeID&gt;2&lt;/InputNodeID&gt;
						&lt;OutputNodeID&gt;20&lt;/OutputNodeID&gt;
					&lt;/BasicLink&gt;
				&lt;/SelfOrganizingNetworkLink&gt;
				
				through to ....

				&lt;SelfOrganizingNetworkLink&gt;
					&lt;BasicLink&gt;
						&lt;Identifier&gt;2119&lt;/Identifier&gt;
						&lt;LinkValue&gt;8.72511583748183&lt;/LinkValue&gt;
						&lt;InputNodeID&gt;19&lt;/InputNodeID&gt;
						&lt;OutputNodeID&gt;2099&lt;/OutputNodeID&gt;
					&lt;/BasicLink&gt;
				&lt;/SelfOrganizingNetworkLink&gt;
			&lt;/KohonenLayer&gt;
		&lt;/SelfOrganizingNetwork&gt;
	&lt;/SelfOrganizingNetworkWordNetwork&gt;
</PRE>
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Testing</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		The Testing portions of the code are located under the run menu for the Neural 
		Net Tester program. The test for this program is the "Load And Run&nbsp;Self 
		Organizing Network 2" menu option. This will load the file that resembles the 
		one above. I say resembles, as the linkage values won't be exactly the same any 
		two times running.
		<P>
		The menu option will load and run the "SelfOrganizingNetworkOne.wrk" file 
		and generate the log "Load And Run Self Organizing Network One.xml", which can 
		be viewed using the LogViewer that is part of the Neural Net Tester program.
		<P>
		The display will show an output similar to that found when running the Adaline 
		networks and is described in Understanding The Output above.
		<P>The quick guide is
			<UL>
				<LI>
				Menu :- Run/Load And Run Self Organizing Network 2:- Loads the saved Self 
				Organizing&nbsp;network from the disk and then runs it against 
				the&nbsp;"origin-of-the-species.txt" file
				<LI>
				Menu :- Train/Self&nbsp;Organizing Network&nbsp;2&nbsp;:- Trains the network 
				from scratch using the&nbsp;sample file which by default 
				is&nbsp;"originpart.txt"
				<LI>
					Menu :- Options/Self Organizing Network&nbsp;2&nbsp;Options :- Brings up a 
					dialog that allows you to set certain parameters for the running of the 
					network.</LI>
			</UL>
			<H2><U>Options</U><B><FONT size="5"></H2>
		</B></FONT>
		<P>
			<IMG SRC="Neural_Dot_Net/SelfOrganizingNetworkTwoOptions.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
		As with the previous Self Organizing Network, the Learning Rate is reduced as 
		the program progresses, which is why the first two options are the starting or 
		Initial Learning Rate and the Final Learning Rate. The Neighborhood, which is 
		the range of nodes updated when Learn is called, is reduced as the program 
		progresses, just like the learning rate.
		<P>
		The Neighborhood Decrement is the number of iterations to perform before a 
		reduction in the neighborhood size, and the Number Of Iterations is the number 
		of times that you want the program to run through the training loop.
		<P><font size="5"><b></P>
		</B></FONT>
		<H2><U>Fun And Games</U><FONT size="5"><B></H2>
		</B></FONT>
		<P>
		The main problem with the output of this code ( well, my main problem anyway ) 
		is just what exactly does it mean? The code is meant as an experiment, just to 
		see what turns up, although actually interpreting the answers then becomes a 
		problem in itself. That raises another, perhaps the most important, question: 
		do the answers given by the network tell us anything about the book and the 
		words in the book, or do they merely tell us something about the nature of the 
		words used in a mathematical sense? I personally can't decide on this one, 
		although I have a deep suspicion that the answer will be more to do with the 
		nature of the numbers used, which when you get down to it are chosen in a 
		perfectly arbitrary fashion. But then again, what if the technique can be used 
		to tell us something about the book itself and the nature of how the book is 
		written? And if it works on this book, what about other books? It could 
		possibly be quite fascinating; it could equally possibly be a complete waste of 
		time. The problem brings me back, once again, to interpreting the data in a 
		meaningful way.
		<P>
		At present I don't have any answers to this at all. It's something that 
		requires more research, and possibly a mathematician having a go at it, before 
		a sensible answer is found.
		<P>
		<P>
		<P>
		<P>
			<H2 align="center"><U>References</U></H2>
		<P>
		Tom Archer ( 2001 ) Inside C#, Microsoft Press
		<P>
		Jeffrey Richter ( 2002 ) Applied Microsoft .NET Framework Programming, 
		Microsoft Press
		<P>
		Charles Petzold ( 2002 ) Programming Microsoft Windows With C#, Microsoft 
		Press
		<P>
		Robinson et al ( 2001 ) Professional C#, Wrox
		<P>
		William R. Stanek ( 1997 ) Web Publishing Unleashed Professional Reference 
		Edition, Sams.net
		<P>
		Robert Callan ( 1999 ) The Essence Of Neural Networks, Prentice Hall
		<P>
		Timothy Masters ( 1993 ) Practical Neural Network Recipes In C++, Morgan 
		Kaufmann ( Academic Press )
		<P>
		Melanie Mitchell ( 1999 ) An Introduction To Genetic Algorithms, MIT Press
		<P>
		Joey Rogers ( 1997 ) Object-Oriented Neural Networks in C++, Academic Press
		<P>
		Simon Haykin ( 1999 ) Neural Networks: A Comprehensive Foundation, Prentice 
		Hall
		<P>
		Bernd Oestereich ( 2002 ) Developing Software With UML: Object-Oriented 
		Analysis And Design In Practice, Addison Wesley
		<P>
		R Beale &amp; T Jackson ( 1990 ) Neural Computing: An Introduction, Institute 
		Of Physics Publishing
		<P>
			<H2 align="center"><U>Thanks</U></H2>
		<P>Special thanks go to anyone involved in TortoiseCVS for version control&nbsp; <A HREF="http://www.tortoisecvs.org/" TARGET="_top">
				http://www.tortoisecvs.org/<PRE></PRE>
			</A>
		&nbsp;
		<P>All UML diagrams were generated using Metamill version 2.2 <A HREF="http://www.metamill.com" TARGET="_top">
				http://www.metamill.com</A>
		<P>
		<P>
		<P>
			<H2 align="center"><FONT size="7"><U>Neural .Net Appendix One</U></FONT></H2>
			<H2 align="center"><FONT size="7"><U>Supporting Documentation</U></FONT></H2>
		<P>
			<H2><U>Testing A Given Network</U></H2>
		<P><IMG SRC="Neural_Dot_Net/Testing a Given Network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U>Training A Given Network</U></H2>
		<P>
		<P><IMG SRC="Neural_Dot_Net/Training a given Network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U>Generating A Given File</U></H2>
		<P>
			<IMG SRC="Neural_Dot_Net/Generating a given File.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U></U></H2>
			<H2><U>Setting Options For A Given Network</U></H2>
		<P>
		<P>
			<IMG SRC="Neural_Dot_Net/Setting Options For a Given Network.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG>
		<P>&nbsp; <U></U>
			<H2><U>Train Network Showing Component Interaction</U><IMG SRC="Train Network Interaction.png" ALIGN="bottom" BORDER="0" TARGET="_top"></IMG></H2>
		<P></P>
	</BODY>
</HTML>
