
Neural Dot Net Pt 9 The Self Organizing Network

30 Nov 2003
A neural network library in C#

Introduction

So far, all the previous examples have used a training technique that holds the network's hand by giving it the required answer as part of the question. With the Self Organizing Network, we arrive at a neural network programming technique that allows the network to work things out for itself. For this reason, although the Adaline Pattern is used to present the data to the network, the patterns do not have their output values filled in. From a programming point of view this makes life easy, as the code to generate the Self Organizing Network's training and working files is almost identical to the code that generates the files for the Adaline One network; in fact, it's a direct cut and paste with a few minor changes.

As with the other new networks, this network is based on the examples in Joey Rogers' "Object-Oriented Neural Networks in C++".

The Self Organizing Network

One feature of the Self Organizing Network that is immediately obvious from the diagram below is that we are dealing with what is essentially a flat network: it has only one active layer, and this layer is arranged as a plate. No matter how many nodes are added to the network layer, it simply expands. Note also that every input is connected to every node. The idea behind this network is that the nodes compete with each other for the correct answer.

This is called competitive learning, with the winner being determined by the link values; once a winner is decided, the winner and its surrounding nodes have their link values modified.

The Self Organizing Network Node

The Self Organizing Network Node inherits directly from the BasicNode class and defines a new pair of Learn and Run functions, as well as doing away with the transfer functions we have been using previously. The transfer functions are removed because the correct answer is no longer passed in with the pattern, so the node can't compare its output against the pattern to see whether it got the answer right.

The Run function looks like:

public override void Run(int nMode)
{
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, "Run function called for ", 
             ClassName );
    }

    double dTotal = 0.0;
    for( int i=0; i<this.InputLinks.Count; i++ )
    {
        /// square the difference between the input node's value and the link weight
        dTotal += Math.Pow( ( ( BasicLink )this.InputLinks[ i ] )
           .InputNode.GetValue( nMode ) 
           - ( ( BasicLink )this.InputLinks[ i ] )
           .InputValue( Values.Weight ), 2 );
    }

    /// store the square root of the sum of the squared differences
    this.NodeValues[ Values.NodeValue ] = Math.Sqrt( dTotal );
}

It begins by declaring a variable dTotal which, as with the other networks, is used to accumulate the values from all the input links, which, this being a Self Organizing Network, means all the inputs. For each input link, it takes the value of the input node and subtracts the link's weight from it. (This weight is originally set to a random value by the Self Organizing Network Link class.) The difference is then squared and added to dTotal. Once the node has cycled through all the inputs, the node's value is saved as the square root of the total, i.e. the Euclidean distance between the input vector and the node's weight vector.
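Stripped of the library classes, the calculation Run performs reduces to an ordinary Euclidean distance. The following is a minimal sketch (the class and method names are mine, not part of the library):

```csharp
using System;

public static class DistanceDemo
{
    // Euclidean distance between an input vector and a node's weight
    // vector - the value Run stores as the node value. The smaller the
    // distance, the closer the node's weights are to the input.
    public static double NodeValue(double[] inputs, double[] weights)
    {
        double dTotal = 0.0;
        for (int i = 0; i < inputs.Length; i++)
        {
            dTotal += Math.Pow(inputs[i] - weights[i], 2);
        }
        return Math.Sqrt(dTotal);
    }

    public static void Main()
    {
        // two inputs, as in the example network
        double[] inputs  = { 0.45, 0.03 };
        double[] weights = { 0.40, 0.10 };
        Console.WriteLine(NodeValue(inputs, weights));
    }
}
```

The node whose weight vector lies closest to the input by this measure will later be declared the winner.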

The Learn function looks like:

public override void Learn(int nMode) 
{ 
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, "learn called for ",  ClassName ); 
    }
    
    double dDelta = 0.0; 

    for( int i=0; i<this.InputLinks.Count; i++ ) 
    {
        dDelta = ( ( double )this.NodeValues[ Values.LearningRate ] ) 
         + ( ( BasicLink )this.InputLinks[ i ] ).InputValue( Values.Weight ); 
        ( ( BasicLink )this.InputLinks[ i ] ).SetLinkValue( dDelta, Values.Weight );
    }
}

For each input link, it adds the node's current learning rate to the link's existing weight, and then stores the result back as the link's new weight value.
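Isolated from the link classes, the update the Learn function applies looks like the sketch below (hypothetical helper names, not part of the library):

```csharp
using System;

public static class LearnDemo
{
    // The update Learn applies to each input link: the node's current
    // learning rate is added to the existing weight and stored back.
    public static double[] UpdateWeights(double[] weights, double learningRate)
    {
        double[] updated = new double[weights.Length];
        for (int i = 0; i < weights.Length; i++)
        {
            updated[i] = learningRate + weights[i];
        }
        return updated;
    }

    public static void Main()
    {
        double[] weights = { 0.2, 0.7 };
        double[] after = UpdateWeights(weights, 0.5);
        Console.WriteLine(after[0] + ", " + after[1]); // 0.7, 1.2
    }
}
```

Note that the textbook Kohonen update instead moves each weight toward the input (weight += rate * (input - weight)); the sketch above reproduces what this article's Learn function actually does.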

The Self Organizing Network Link

The Self Organizing Network Link is derived from the BasicLink class and, apart from the saving and loading functionality, adds only the setting of the link's weight to a random value between 0 and 1 in the constructor.

The Self Organizing Network

The real specialization of the Self Organizing network is done in this class which is derived from the BasicNetwork class but adds quite a bit of functionality to make things work properly. It starts with a whole list of class members:

private int nHorizontalSize; 
private int nVerticalSize; 
private double dInitialLearningRate; 
private double dFinalLearningRate; 
private int    nInitialNeighborhoodSize; 
private int nNeighborhoodDecrement; 
private int nNeighborhoodSize; 
private int nNumberOfIterations; 
private int nIterations; 
private int nWinningHorizontalPos; 
private int nWinningVerticalPos; 
private int nNumberOfNodes; 
private SelfOrganizingNetworkNode[][] arrayKohonenLayer = null;

The first two variables, nHorizontalSize and nVerticalSize, hold the size of the Kohonen layer of the network. This layer, named after the inventor of this type of network, contains all the nodes that are not input nodes.

The variables dInitialLearningRate and dFinalLearningRate should be immediately familiar by now: the initial learning rate is the learning rate at the start of a training session, and the final learning rate is the learning rate at the end of it. The learning rate is adjusted in the Epoch function of the Self Organizing Network class below.

The next three variables are the neighborhood variables. The neighborhood is the set of nodes surrounding the winning node; the winner and its neighbors are the nodes that are updated in the Self Organizing Network class' Learn function below. The nInitialNeighborhoodSize variable is the starting size of the neighborhood, which defaults to five, and nNeighborhoodSize is the network's way of keeping track of the current size of the neighborhood. The nNeighborhoodDecrement variable is the odd one out, in that it is used in the Epoch function: the neighborhood size is reduced whenever the number of iterations plus one, taken modulo the Neighborhood Decrement, equals zero.
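The shrink condition tested in Epoch can be sketched on its own; this is a hypothetical helper extracted purely for illustration:

```csharp
using System;

public static class DecrementDemo
{
    // Returns true when the neighborhood should shrink: once every
    // nNeighborhoodDecrement iterations, and only while the
    // neighborhood still has some size left.
    public static bool ShouldShrink(int nIterations, int nNeighborhoodDecrement,
                                    int nNeighborhoodSize)
    {
        return (nIterations + 1) % nNeighborhoodDecrement == 0
               && nNeighborhoodSize > 0;
    }

    public static void Main()
    {
        // with the default decrement of 100, iteration 99 triggers a shrink
        Console.WriteLine(ShouldShrink(99, 100, 5));  // True
        Console.WriteLine(ShouldShrink(100, 100, 5)); // False
    }
}
```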

The nNumberOfIterations variable is the number of iterations for the network to make during a run and is used in the epoch function's calculations.

The nIterations variable is also used by the epoch function and keeps track of the number of times that epoch has been called which is once every time the main loop is executed.

The variables nWinningHorizontalPos and nWinningVerticalPos are the horizontal and the vertical positions of the winning node which is decided in the Run function on the basis of which node provides the smallest node value.

The nNumberOfNodes variable is the number of input nodes that are to be used for the network and is used in the CreateNetwork function below.

The first and probably the most important function in the Self Organizing Network class is the CreateNetwork function that builds the network:

protected override void CreateNetwork()
{    
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, 
           "Create Network called for the Self Organizing Network ", ClassName );
    }

    /// create the array
    this.arrayKohonenLayer = 
        new SelfOrganizingNetworkNode[ this.HorizontalSize ][]; 
    for( int i=0; i<this.HorizontalSize; i++ )
    {
        this.arrayKohonenLayer[ i ] = 
              new SelfOrganizingNetworkNode[ this.VerticalSize ];
    }

    /// create the input nodes ( In the basic network nodes array )
    for( int i=0; i<this.nNumberOfNodes; i++ )
    {
        this.Nodes.Add( new BasicNode( log ) );
    }

    int nLinks = 0;

    /// loop through the horizontal
    for( int i=0; i<this.HorizontalSize; i++ )
    {
        /// loop through the vertical
        for( int n=0; n<this.VerticalSize; n++ )
        {
            this.arrayKohonenLayer[ i ][ n ] 
                = new SelfOrganizingNetworkNode( log, LearningRate );

            /// connect each input node to each node in the k layer
            for( int k=0; k<this.nNumberOfNodes; k++ )
            {
                this.Links.Add( new SelfOrganizingNetworkLink( log ) );
                ( ( BasicNode )this.Nodes[ k ] ).CreateLink( 
                  ( BasicNode )this.arrayKohonenLayer[ i ][ n ], 
                          ( BasicLink )this.Links[ nLinks ] );
                nLinks++;
            }
        }
    }
}

The CreateNetwork function begins by allocating the Self Organizing Network Nodes for the Kohonen layer array. It does this by allocating a two dimensional array using the horizontal and vertical sizes. It then allocates the input nodes, the number of which is passed to the constructor as the number of nodes variable. This builds the basic framework for the network, which then needs to be filled in. This is done with three loops: the first runs through the horizontal size of the Kohonen layer array, while the second runs through the vertical size for each element of the horizontal array, creating a new Self Organizing Network Node at each element. The innermost loop then creates the links that join the newly created node to each of the input nodes.
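Because every input connects to every Kohonen node, the sizes fall out of simple multiplication. A back-of-the-envelope sketch, assuming the 10 by 10, two-input defaults used elsewhere in the article:

```csharp
using System;

public static class SizeDemo
{
    // CreateNetwork allocates horizontal * vertical Kohonen nodes, and
    // since each input node links to each Kohonen node, it creates
    // horizontal * vertical * inputs links.
    public static int KohonenNodes(int h, int v) => h * v;
    public static int LinkCount(int h, int v, int inputs) => h * v * inputs;

    public static void Main()
    {
        Console.WriteLine(KohonenNodes(10, 10));  // 100
        Console.WriteLine(LinkCount(10, 10, 2));  // 200
    }
}
```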

Once the network is built with the CreateNetwork function, we can start to concentrate on how it works. The main function in the running of the network is the Run function:

public void Run()
{
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, 
                 "Run called for Self Organizing network ", ClassName );
    }

    int nHoriz = 0;
    int nVert = 0;
    double dMinimum = 99999.0;
    double dNodeValue = 0.0;

    for( nHoriz=0; nHoriz<this.HorizontalSize; nHoriz++ )
    {
        for( nVert=0; nVert<this.VerticalSize; nVert++ )
        {
            ( ( SelfOrganizingNetworkNode )
              this.arrayKohonenLayer[ nHoriz ][ nVert ] )
              .Run( Values.NodeValue );
            dNodeValue = ( ( SelfOrganizingNetworkNode )
              this.arrayKohonenLayer[ nHoriz ][ nVert ] )
              .GetValue( Values.NodeValue );

            if( dNodeValue < dMinimum )
            {
                dMinimum = dNodeValue;
                this.WinningHorizontalPos = nHoriz;
                this.WinningVerticalPos = nVert;
            }
        }
    }
}

The Run function cycles through the Kohonen layer array, both horizontally and vertically, calling Run on each node in the layer. The function keeps track of the lowest value returned so far in the dMinimum variable; after calling Run on each node, it compares that node's value with the minimum, and if the node's value is smaller, the node's position is stored as the winning position.
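The winner search is a straightforward argmin over the layer. A self-contained sketch (the method and parameter names are mine, not the library's):

```csharp
using System;

public static class WinnerDemo
{
    // Scan the whole layer and remember the position of the smallest
    // node value - i.e. the node whose weights lie closest to the input.
    public static (int h, int v) FindWinner(double[][] nodeValues)
    {
        double dMinimum = double.MaxValue;
        int nWinH = 0, nWinV = 0;
        for (int h = 0; h < nodeValues.Length; h++)
        {
            for (int v = 0; v < nodeValues[h].Length; v++)
            {
                if (nodeValues[h][v] < dMinimum)
                {
                    dMinimum = nodeValues[h][v];
                    nWinH = h;
                    nWinV = v;
                }
            }
        }
        return (nWinH, nWinV);
    }

    public static void Main()
    {
        double[][] values =
        {
            new[] { 3.0, 2.5 },
            new[] { 0.9, 4.1 }
        };
        Console.WriteLine(FindWinner(values)); // (1, 0)
    }
}
```

One small design note: the sketch seeds the minimum with double.MaxValue rather than the magic number 99999.0 that the article's code uses, which avoids any risk of a genuine node value exceeding the sentinel.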

The Learn function for the Self Organizing Network differs from the Learn functions in the previous networks in that it only applies to the winning node and its surrounding nodes.

public void Learn()
{
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, 
           "Learn called for Self Organizing network ", ClassName );
    }

    int nHoriz = 0;
    int nVert = 0;

    /// work out the neighborhood boundary
    int nHorizStart = this.WinningHorizontalPos - this.NeighborhoodSize;
    int nHorizStop = this.WinningHorizontalPos + this.NeighborhoodSize;
    int nVertStart = this.WinningVerticalPos - this.NeighborhoodSize;
    int nVertStop = this.WinningVerticalPos + this.NeighborhoodSize;

    /// make sure the boundary is within the kohonen layer
    if( nHorizStart < 0 )
        nHorizStart = 0;
    if( nHorizStop >= this.HorizontalSize )
        nHorizStop = this.HorizontalSize;
    if( nVertStart < 0 )
        nVertStart = 0;
    if( nVertStop >= this.VerticalSize )
        nVertStop = this.VerticalSize;

    /// update the neighbors of the winning node
    for( nHoriz=nHorizStart; nHoriz<nHorizStop; nHoriz++ )
    {
        for( nVert=nVertStart; nVert<nVertStop; nVert++ )
        {
            ( ( SelfOrganizingNetworkNode )
              this.arrayKohonenLayer[ nHoriz ][ nVert ] ).SetValue( 
                         Values.LearningRate, this.LearningRate );
            ( ( SelfOrganizingNetworkNode )
              this.arrayKohonenLayer[ nHoriz ][ nVert ] ).Learn( 
                          Values.NodeValue );
        }
    }
}

The Learn function begins by calculating the boundaries of the area to update on the vertical and horizontal axes. Once the area has been calculated, each node's learning rate is set to the network's current learning rate, and Learn is called on each individual node in the affected area.
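The boundary arithmetic from the start of Learn can be isolated as follows; this is a hypothetical helper for one axis, matching the clipping the code performs:

```csharp
using System;

public static class NeighborhoodDemo
{
    // The boundary calculation from Learn on one axis: a band from
    // winner - size up to (but not including, given the loop's < test)
    // winner + size, clipped so it stays inside the layer.
    public static (int start, int stop) Clamp(int winner, int size, int layerSize)
    {
        int start = winner - size;
        int stop = winner + size;
        if (start < 0)
            start = 0;
        if (stop >= layerSize)
            stop = layerSize;
        return (start, stop);
    }

    public static void Main()
    {
        // winner at 9 on a 10-wide layer with neighborhood size 5:
        // the upper bound is clipped to the edge of the layer
        Console.WriteLine(Clamp(9, 5, 10)); // (4, 10)
        // winner at 0: the lower bound is clipped instead
        Console.WriteLine(Clamp(0, 5, 10)); // (0, 5)
    }
}
```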

The Self Organizing Network class also implements the Epoch function which is called once every iteration through the network.

public override void Epoch()
{
    if( debugLevel.TestDebugLevel( DebugLevelSet.Progress ) == true )
    {
        log.Log( DebugLevelSet.Progress, 
              "Epoch called for the Self organizing network ", ClassName );
    }

    Iterations++;

    double fTemp = ( ( InitialLearningRate 
              - ( ( double )Iterations/( double )NumberOfIterations ) ) );
    fTemp = ( ( double )fTemp 
              * ( double )( InitialLearningRate - FinalLearningRate ) );

    if( fTemp < FinalLearningRate )
        fTemp = FinalLearningRate;

    LearningRate = fTemp;

    if( ( Iterations + 1 )%NeighborhoodDecrement 
             == 0 && NeighborhoodSize > 0 )
    {
        NeighborhoodSize--;
        if( NeighborhoodSize < FinalNeighborhoodSize )
            NeighborhoodSize = FinalNeighborhoodSize;
    }
}

The Epoch function calculates the current learning rate for the network and controls the neighborhood size. For each of these calculations, it checks that the new value has not dropped below the specified final value (the final learning rate or the final neighborhood size respectively), and if it has, the value is clamped to that final value.
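The learning-rate decay can be extracted into a pure function for inspection; the sketch below reproduces the arithmetic from Epoch verbatim, with hypothetical names:

```csharp
using System;

public static class EpochDemo
{
    // The learning-rate calculation from Epoch: the rate shrinks as the
    // iteration count approaches the total number of iterations, and is
    // clamped so it never drops below the final learning rate.
    public static double NextLearningRate(int iterations, int numberOfIterations,
        double initialRate, double finalRate)
    {
        double fTemp = initialRate
            - ((double)iterations / (double)numberOfIterations);
        fTemp = fTemp * (initialRate - finalRate);

        if (fTemp < finalRate)
            fTemp = finalRate;

        return fTemp;
    }

    public static void Main()
    {
        // defaults from the options dialog: 0.5 initial, 0.01 final, 500 iterations
        Console.WriteLine(NextLearningRate(1, 500, 0.5, 0.01));
        Console.WriteLine(NextLearningRate(500, 500, 0.5, 0.01)); // clamped to 0.01
    }
}
```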

Training

There are three main loops involved in training the Self Organizing Network. The first loop deals with the actual training; the second deals with a special test file containing three hundred identical entries; and the third is the final test and output, to check that the network has been trained.

/// train the self organizing network
int nIteration = 0;

for( nIteration=0; nIteration<nNumberOfSonOneIterations; nIteration++ )
{
    for( int i=0; i<nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
    {
        son.SetValue( ( Pattern )patterns[ i ] );
        son.Run();
        netWorkText.AppendText( "." );
    }

    son.Learn();

    son.Epoch();

    log.Log( DebugLevelSet.Progress, "Iteration number " 
           + nIteration.ToString() + " produced a winning node at  " 
           + son.WinningHorizontalPos + " Horizontal and " 
           + son.WinningVerticalPos + " vertical, winning node value = " 
           + son.GetWinningNodeValue( son.WinningHorizontalPos, 
           son.WinningVerticalPos ) + "\n", ClassName );
    netWorkText.AppendText( "\nIteration number " 
           + nIteration.ToString() + " produced a winning node at  " 
           + son.WinningHorizontalPos + " Horizontal and " 
           + son.WinningVerticalPos + " vertical, winning node value = "
           + son.GetWinningNodeValue( son.WinningHorizontalPos, 
           son.WinningVerticalPos ) + "\n" );
}

netWorkText.AppendText( "Saving the network" );

FileStream xmlstream = new FileStream( "selforganizingnetworkone.xml",
     FileMode.Create, FileAccess.Write, FileShare.ReadWrite, 8, true );
XmlWriter xmlWriter = new XmlTextWriter( xmlstream, System.Text.Encoding.UTF8 );
xmlWriter.WriteStartDocument();

son.Save( xmlWriter );

xmlWriter.WriteEndDocument();
xmlWriter.Close();

/// now load the file
FileStream readStream = new FileStream( "selforganizingnetworkone.xml",
       FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 8, true );
XmlReader xmlReader = new XmlTextReader( readStream );

SelfOrganizingNetwork sonTest = new SelfOrganizingNetwork( log );
sonTest.Load( xmlReader );
xmlReader.Close();

netWorkText.AppendText( "Testing against the test file the following "
                      + "output should be identical for the test\n" );

ArrayList testPatterns = 
  this.LoadSelfOrganizingNetworkFile( "SelfOrganizingNetworkOneTest.tst" );

for( int i=0; i<nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
{
    sonTest.SetValue( ( Pattern )testPatterns[ i ] );
    sonTest.Run();
    netWorkText.AppendText( "Run called at " + i.ToString() 
      + " Network Values are :- Composite Value = " 
      + sonTest.GetPosition( Values.Composite ) 
      + ", Horizontal Value = " 
      + sonTest.GetPosition( Values.Row ) + ", Vertical Value = "
      + sonTest.GetPosition( Values.Column ) 
      + ", Current Winning Horizontal Position = " 
      + sonTest.WinningHorizontalPos 
      + ", Current Winning Vertical Position " 
      + sonTest.WinningVerticalPos + ", Inputs = " 
      + (( BasicNode)sonTest.Nodes[0]).NodeValues[Values.NodeValue].ToString() 
      + "," 
      + ((BasicNode)sonTest.Nodes[1]).NodeValues[Values.NodeValue].ToString() 
      + ", Winning Node Value = " 
      + sonTest.GetWinningNodeValue( sonTest.WinningHorizontalPos, 
      sonTest.WinningVerticalPos ) + "\n" );
}

testPatterns.Clear();
StringBuilder strDataDisplay = new StringBuilder( "" );

ArrayList arrayOutput = new ArrayList();
SelfOrganizingNetworkData data;
netWorkText.AppendText( "Completed the test ... "
         + "now reprocessing the original data through the loaded network\n " ); 

for( int i=0; i<nNumberOfItemsInSelfOrganizingNetworkTrainingFile; i++ )
{
    sonTest.SetValue( ( Pattern )patterns[ i ] );
    sonTest.Run();

    strDataDisplay.Remove( 0, strDataDisplay.Length );
    strDataDisplay.Append( "Run Called at " 
           + i.ToString() + " Network Values are :- Composite Value = " 
           + sonTest.GetPosition( Values.Composite ) 
           + ", Horizontal Value = " + sonTest.GetPosition( Values.Row ) 
           + ", Vertical Value = " + sonTest.GetPosition( Values.Column ) 

           + ", Current Winning Horizontal Position = " 
           + sonTest.WinningHorizontalPos 
           + ", Current Winning Vertical Position " 
           + sonTest.WinningVerticalPos + ", Inputs = " 
           + ( ( BasicNode )sonTest.Nodes[ 0 ] ).NodeValues
           [ Values.NodeValue ].ToString() 
           + "," 
           + ( ( BasicNode )sonTest.Nodes[ 1 ] ).NodeValues
           [ Values.NodeValue ].ToString() 
           + ", Winning Node Value = " 
           + sonTest.GetWinningNodeValue( sonTest.WinningHorizontalPos, 
           sonTest.WinningVerticalPos ) + "\n" );

    netWorkText.AppendText( strDataDisplay.ToString() ); 

    data = new SelfOrganizingNetworkData();
    data.CompositeValue = ( int )sonTest.GetPosition( Values.Composite );
    data.Data = strDataDisplay.ToString();

    arrayOutput.Add( data );
}

The first loop in the above code deals with the actual training. It runs for the specified number of iterations, which defaults to 500. Within it, another loop presents the data that was loaded into the pattern array from an external file. In the example, this file is generated at run time, although you can turn this option off if you wish to test repeatedly with the same file.

The internal loop presents each pattern individually to the network and calls Run for the network. The network Run function described above will then find the node value that has the minimum value and declare this to be the winning node. When the internal loop has finished, the Self Organizing Network Learn function is called to calculate the size of the current neighborhood and update the winning node and the surrounding nodes by calling Learn on the individual nodes affected.

Finally, the first loop calls the Self Organizing Network Epoch function which updates the learning rate for the network, and if appropriate, updates the network's neighborhood size.

The second of the three loops is a testing loop. It works on the theory that there has to be a way to check that the Self Organizing Network is doing its job, and as Self Organizing Networks are mainly tools for categorization, this isn't an easy task. So the second loop is added to provide some kind of verification. It does this by loading a test file in which every entry is identical and running it against the freshly loaded network. The pattern array is passed into the network and it is run like any other network. If the printed output from the network is identical in all three hundred cases, we know that the network is performing consistently. This doesn't prove that it is right, just that, given the same set of inputs, it will always produce the same answer.
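The check the second loop relies on amounts to comparing every answer against the first one. A minimal sketch of that idea, using a hypothetical helper over the composite values the network reports:

```csharp
using System;

public static class ConsistencyDemo
{
    // Feed the network the same input repeatedly and check every answer
    // matches the first. This proves consistency, not correctness.
    public static bool AllIdentical(int[] compositeValues)
    {
        for (int i = 1; i < compositeValues.Length; i++)
        {
            if (compositeValues[i] != compositeValues[0])
                return false;
        }
        return true;
    }

    public static void Main()
    {
        Console.WriteLine(AllIdentical(new[] { 90, 90, 90 })); // True
        Console.WriteLine(AllIdentical(new[] { 90, 20, 90 })); // False
    }
}
```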

The third and final loop is the main test loop where the original input is tested against the network that was saved to disk at the end of the training session. The main difference between this code and the previous testing loops is that I sort the data for the output into categories using the Self Organizing Network data struct:

public struct SelfOrganizingNetworkData
{
    private int nCompositeValue;
    private string strData;

    public int CompositeValue 
    {
        get
        {
            return nCompositeValue;
        }
        set
        {
            nCompositeValue = value;
        }
    }

    public string Data
    {
        get
        {
            return strData;
        }
        set
        {
            strData = value;
        }
    }
}

As you can see, this is just a simple structure that is used to store the composite value of each response from the network and a string representing the data that it holds. This is then dumped into an array that is sorted by the code:

SelfOrganizingNetworkData dataTest;

bool bDataValid = false;

for( int i=0; i<arrayOutput.Count; i++ )
{
    /// first value is always valid
    if( i==0 )
    {
        nItemCount = 0;
        data = ( SelfOrganizingNetworkData )arrayOutput[ i ];
        bDataValid = true;
    }
    else
    {
        bool bFound = false;
        data = ( SelfOrganizingNetworkData )arrayOutput[ i ];

        for( int n=0; n<i; n++ )
        {
            dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
            if( dataTest.CompositeValue == data.CompositeValue )
            {
                bFound = true;
                break;
            }
        }

        if( bFound == false )
        {
            nItemCount = 0;
            data = ( SelfOrganizingNetworkData )arrayOutput[ i ];
            bDataValid = true;
        }
    }

    if( bDataValid == true )
    {
        for( int n=0; n<arrayOutput.Count; n++ )
        {
            dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
            if( dataTest.CompositeValue == data.CompositeValue )
            {
                nItemCount++;
            }
        }

        netWorkText.AppendText( "\n\nThere are " 
                 + nItemCount.ToString() + " items out of " 
                 + arrayOutput.Count.ToString() 
                 + " That have the Composite Value " 
                 + data.CompositeValue.ToString() + "\n" );

        for( int n=0; n<arrayOutput.Count; n++ )
        {
            dataTest = ( SelfOrganizingNetworkData )arrayOutput[ n ];
            if( dataTest.CompositeValue == data.CompositeValue )
            {
                netWorkText.AppendText( dataTest.Data + "\n" );
            }
        }

        bDataValid = false;
    }
}

which simply groups the data according to its composite value and prints the groups out to the screen in complete blocks.
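The nested loops above rescan the array for every item, which is quadratic in the number of patterns. A sketch of the same grouping done in a single pass with a Dictionary, using hypothetical helper names:

```csharp
using System;
using System.Collections.Generic;

public static class GroupingDemo
{
    // Group output strings by composite value in one pass, instead of
    // rescanning the whole array for each item.
    public static Dictionary<int, List<string>> Group(
        IEnumerable<(int composite, string data)> items)
    {
        var groups = new Dictionary<int, List<string>>();
        foreach (var (composite, data) in items)
        {
            if (!groups.TryGetValue(composite, out var list))
            {
                list = new List<string>();
                groups[composite] = list;
            }
            list.Add(data);
        }
        return groups;
    }

    public static void Main()
    {
        var groups = Group(new[] { (20, "a"), (90, "b"), (20, "c") });
        Console.WriteLine(groups[20].Count); // 2
        Console.WriteLine(groups[90].Count); // 1
    }
}
```

For the three hundred items used here the quadratic cost is harmless, but the Dictionary version also prints each block exactly once without needing the bDataValid bookkeeping.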

Saving And Loading

Saving and loading for the Self Organizing Network uses the same XML format as the rest of the program and the file looks like:

<?xml version="1.0" encoding="utf-8"?>
<SelfOrganizingNetwork>
    <HorizontalSize>10</HorizontalSize>
    <VerticalSize>10</VerticalSize>
    <InitialLearningRate>0.5</InitialLearningRate>
    <LearningRate>0.01</LearningRate>
    <FinalLearningRate>0.01</FinalLearningRate>
    <InitialNeighborhoodSize>5</InitialNeighborhoodSize>
    <FinalNeighborhoodSize>5</FinalNeighborhoodSize>
    <NeighborhoodDecrement>100</NeighborhoodDecrement>
    <NeighborhoodSize>5</NeighborhoodSize>
    <NumberOfIterations>500</NumberOfIterations>
    <Iterations>500</Iterations>
    <WinningHorizontalPosition>0</WinningHorizontalPosition>
    <WinningVerticalPosition>9</WinningVerticalPosition>
    <NumberOfNodes>2</NumberOfNodes>
    <InputLayer>
        <BasicNode>
            <Identifier>0</Identifier>
            <NodeValue>0.374602794355994</NodeValue>
            <NodeError>0</NodeError>
            <Bias>
                <BiasValue>1</BiasValue>
            </Bias>
        </BasicNode>
        <BasicNode>
            <Identifier>1</Identifier>
            <NodeValue>0.955952847821616</NodeValue>
            <NodeError>0</NodeError>
            <Bias>
                <BiasValue>1</BiasValue>
            </Bias>
        </BasicNode>
    </InputLayer>
    <KohonenLayer>
        <SelfOrganizingNetworkNode>
            <BasicNode>
                <Identifier>2</Identifier>
                <NodeValue>11.8339429844748</NodeValue>
                <NodeValue>0.01</NodeValue>
                <NodeError>0</NodeError>
                <Bias>
                    <BiasValue>1</BiasValue>
                </Bias>
            </BasicNode>
        </SelfOrganizingNetworkNode>

        Through to ....

        <SelfOrganizingNetworkNode>
            <BasicNode>
                <Identifier>299</Identifier>
                <NodeValue>12.0534121931137</NodeValue>
                <NodeValue>0.01</NodeValue>
                <NodeError>0</NodeError>
                <Bias>
                    <BiasValue>1</BiasValue>
                </Bias>
            </BasicNode>
        </SelfOrganizingNetworkNode>
        <SelfOrganizingNetworkLink>
            <BasicLink>
                <Identifier>3</Identifier>
                <LinkValue>9.2275126633717</LinkValue>
                <InputNodeID>0</InputNodeID>
                <OutputNodeID>2</OutputNodeID>
            </BasicLink>
        </SelfOrganizingNetworkLink>
        
        through to .....

        <SelfOrganizingNetworkLink>
            <BasicLink>
                <Identifier>301</Identifier>
                <LinkValue>9.47900234600894</LinkValue>
                <InputNodeID>1</InputNodeID>
                <OutputNodeID>299</OutputNodeID>
            </BasicLink>
        </SelfOrganizingNetworkLink>
    </KohonenLayer>
</SelfOrganizingNetwork>

Testing

The testing portions of the code are located under the Run menu of the Neural Net Tester program. The test for this program is the "Load And Run Self Organizing Network 1" option. This will load a file that resembles the one above; I say 'resembles' as the link values won't be exactly the same on any two runs.

The menu option will load and run the SelfOrganizingNetworkOne.wrk file and generate the log Load And Run Self Organizing Network One.xml which can be viewed using the LogViewer that is part of the neural net tester program.

The display will show output similar to that found when running the Adaline networks, and is described in Understanding The Output below.

The quick guide is:

  • Menu :- Run/Load And Run Self Organizing Network 1:- Loads the saved Self Organizing network from the disk and then runs it against a newly generated training file SelfOrganizingNetwork.wrk.
  • Menu :- Generate/Generate Self Organizing Network One Training File :- Generates the training file for the network.
  • Menu :- Generate/Generate Self Organizing Network One Working File :- Generates the working file for the run menu.
  • Menu :- Generate/Generate Self Organizing Network One Test File :- Generates a test file for proving that the network has learnt correctly. (used during the training option.)
  • Menu :- Train/Self Organizing Network 1 :- Trains the network from scratch using the sample file which by default is generated first: SelfOrganizingNetwork.trn.
  • Menu :- Options/Self Organizing Network 1 Options :- Brings up a dialog that allows you to set certain parameters for the running of the network.

Options

Above is the dialog for the Self Organizing Network One options, which contains all the available, changeable options that you can apply before running the program. It starts off with the Number Of Items, which is the number of items contained in the files that the network uses: the training file SelfOrganizingNetwork.trn, the working file SelfOrganizingNetwork.wrk, and the testing file SelfOrganizingNetwork.tst.

The Initial Learning Rate is the starting learning rate when the network is run and the Final Learning Rate is the Learning Rate that the network will end up at. This is adjusted incrementally throughout the running of the program (it is done in the Epoch function).

The Neighborhood Size is the size of the area of nodes whose weights you want to adjust when Learn is called. The idea is that this size is reduced as the program progresses, although in the current example, the size remains at a constant five by default.

The Neighborhood Decrement is the interval, measured in iterations, between changes in the neighborhood size; with the default of one hundred, the neighborhood size is reduced by one every hundred iterations.

The Number of Iterations is the number of times that you wish the network to run through the training loop.

Understanding The Output

Training

Iteration number 434 produced a winning node at 7 Horizontal and 8 vertical, winning node value = 11.5295197965397

Iteration number 435 produced a winning node at 0 Horizontal and 6 vertical, winning node value = 11.5295669799536

Iteration number 436 produced a winning node at 2 Horizontal and 0 vertical, winning node value = 11.5424655187099

At first, the program runs through the data, cycling for the specified number of iterations, in this case 500. For each iteration, it prints the iteration number and the winning node's position and value; the dots are printed after each call to the Self Organizing Network Run function.

Pattern ID = 301 Input Value 0 = 0.450591519219145 Input Value 1 = 0.0319415726847675 Output Value 0 = 0

Pattern ID = 302 Input Value 0 = 0.450591519219145 Input Value 1 = 0.0319415726847675 Output Value 0 = 0

Once the training run is complete, the network loads the pattern data from the test file SelfOrganizingNetwork.tst. The entries in this file are all identical, to test whether the network gives the same answers when the same values are input.

Run called at 2 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 0.450591519219145,0.0319415726847675, Winning Node Value = 11.9952261309799

Run called at 3 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 0.450591519219145,0.0319415726847675, Winning Node Value = 11.9952261309799

The results of the test are then output to the screen. The values shown are the composite value of the winning position, the horizontal and vertical winning positions, the input values, and the winning node value.
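From the sample output, the composite value appears to pack the winning grid position into a single number: horizontal 9, vertical 0 gives 90, and horizontal 2, vertical 0 gives 20, which is consistent with the sketch below. The grid width and the formula are assumptions inferred from the output, not taken from the article's code.

```python
GRID_WIDTH = 10  # assumed width of the node layer

# Assumed packing of the winning (horizontal, vertical) position into
# the single "Composite Value" seen in the output.
def composite_value(horizontal, vertical):
    return horizontal * GRID_WIDTH + vertical
```
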

Once the test has run, the network is loaded from the XML file and the results of the test are output:

Run Called at 206 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 0.603428498191493,0.184778551657115, Winning Node Value = 11.7790820024851

Run Called at 207 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 0.167407868973635,0.748757922439258, Winning Node Value = 11.6911783767234

The output for the true test is the same as the output for the test case above. When the test is finished, the data is sorted and then output as:

There are 184 items out of 300 That have the Composite Value 20

Run Called at 0 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 0.634821365417364,0.617450682733883, Winning Node Value = 11.4534014253365

There are 116 items out of 300 That have the Composite Value 90

Run Called at 1 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 0.29477788149136,0.123872065043017, Winning Node Value = 12.0416726692412

These results largely split the data according to whether the value on the left is higher than the value on the right, though the network does sometimes get confused when the differences between the values are extremely small.

Running

Generating Self Organizing Network File... Please Wait

Self Organizing Network File Generated

Pattern ID = 1 Input Value 0 = 0.634821365417364 Input Value 1 = 0.617450682733883 Output Value 0 = 0

Pattern ID = 2 Input Value 0 = 0.29477788149136 Input Value 1 = 0.123872065043017 Output Value 0 = 0

Pattern ID = 3 Input Value 0 = 0.141242747726498 Input Value 1 = 1.55989269426088 Output Value 0 = 0

A run starts by generating a new testing file and loading the data into the pattern array. The patterns are then run through the network and the output is printed to the screen.

Run Called at 109 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 0.149710465292311,0.731060518757934, Winning Node Value = 11.7162020269269

Run Called at 110 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 0.312410572223557,0.295039889540076, Winning Node Value = 11.9092871878856

Once the data is run, it is then sorted into groups based on the composite values.

Run Called at 3 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 0.577263376944355,1.99591332347873, Winning Node Value = 10.5484882984376

Run Called at 4 Network Values are :- Composite Value = 20, Horizontal Value = 2, Vertical Value = 0, Current Winning Horizontal Position = 2, Current Winning Vertical Position 0, Inputs = 2.41456327001311,2.43193395269659, Winning Node Value = 8.91189833587868

And,

There are 116 items out of 300 That have the Composite Value 90

Run Called at 1 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 0.29477788149136,0.123872065043017, Winning Node Value = 12.0416726692412

Run Called at 5 Network Values are :- Composite Value = 90, Horizontal Value = 9, Vertical Value = 0, Current Winning Horizontal Position = 9, Current Winning Vertical Position 0, Inputs = 3.85058389923097,2.86795458191445, Winning Node Value = 7.59617983634502
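The sort-and-count step that produces lines such as "There are 116 items out of 300 That have the Composite Value 90" can be sketched as follows. This is an illustrative Python fragment with assumed names, not the article's implementation: results are grouped by composite value and each group is reported with its size.

```python
from collections import defaultdict

# Group run results by composite value and report the size of each
# group, mirroring the "There are N items out of M ..." output lines.
def summarise(results):
    """results is a list of (composite_value, details) tuples."""
    groups = defaultdict(list)
    for composite, details in results:
        groups[composite].append(details)
    total = len(results)
    for composite, items in sorted(groups.items()):
        print(f"There are {len(items)} items out of {total} "
              f"That have the Composite Value {composite}")
    return groups
```
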

Fun And Games

I must confess to a certain ambivalence towards the Self Organizing Network, in that an understanding of what you are after is still required, if not in the same way as the correct answers that must be known in advance for the preceding networks. At some point you should know what to expect from the network, and if you don't, I get the impression that analysing its results can take longer and be more involved than actually running the network in the first place.

This is in no way intended to demean the value of the network; it is more a pointer to something that should be borne in mind. The network can be useful to run experimentally to see what comes out, but until it has been run, there is no guarantee that the answers it gives will be of any value.

One problem that did occur during testing was that originally there was no Final Learning Rate option, which meant that the learning rate was continually decreased until it became a large negative number. Naturally, this skewed the network's results somewhat, so I included the Final Learning Rate option, which is the lowest value the learning rate is allowed to reach. This fixed the problem, and the network now comes to much more reasonable conclusions about the data being input.
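The fix described above amounts to clamping the decayed learning rate to a floor. A minimal sketch, with assumed names and values rather than the article's C# code:

```python
# Without a floor, repeatedly subtracting a decrement drives the
# learning rate negative; clamping to a final value keeps updates sane.
def decay_learning_rate(rate, decrement, final_rate):
    return max(final_rate, rate - decrement)

rate = 0.5  # assumed initial learning rate
for _ in range(1000):
    rate = decay_learning_rate(rate, 0.01, 0.05)
# rate is now held at the final rate of 0.05 instead of ending up
# as a large negative number
```
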

History

  • 3 July 2003 :- Initial release
  • 1 December 2003 :- Review and edit for CP conformance

References

  • Tom Archer (2001), Inside C#, Microsoft Press
  • Jeffrey Richter (2002), Applied Microsoft .NET Framework Programming, Microsoft Press
  • Charles Petzold (2002), Programming Microsoft Windows With C#, Microsoft Press
  • Robinson et al (2001), Professional C#, Wrox
  • William R. Stanek (1997), Web Publishing Unleashed Professional Reference Edition, Sams.net
  • Robert Callan (1999), The Essence Of Neural Networks, Prentice Hall
  • Timothy Masters (1993), Practical Neural Network Recipes In C++, Morgan Kaufmann (Academic Press)
  • Melanie Mitchell (1999), An Introduction To Genetic Algorithms, MIT Press
  • Joey Rogers (1997), Object-Oriented Neural Networks in C++, Academic Press
  • Simon Haykin (1999), Neural Networks: A Comprehensive Foundation, Prentice Hall
  • Bernd Oestereich (2002), Developing Software With UML: Object-Oriented Analysis And Design In Practice, Addison Wesley
  • R. Beale & T. Jackson (1990), Neural Computing: An Introduction, Institute Of Physics Publishing

Thanks

Special thanks go to anyone involved in TortoiseCVS for version control.

All UML diagrams were generated using Metamill version 2.2.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.


About the Author

pseudonym67

United Kingdom United Kingdom

Article Copyright 2003 by pseudonym67