As I mentioned earlier, things were unlikely to stay the same for long. With the code and articles for the first release, covering the Adaline networks, finished, I took the opportunity to fix a couple of points in the library design that had been bugging me. The following is the new class diagram for the basic classes.
I should point out here that the first release was never actually published. The original plan was for four releases, each containing two samples of the same kind of neural network. On reflection, I felt that the first release lacked dramatic impact and was too basic to give a real impression of what neural networks and the accompanying library are capable of. I feel this has been fixed by withholding everything until what was originally planned as the second release; the inclusion of the BackPropagation Word network gives a much stronger picture of the capabilities of both the neural networks and the library.
As you can see, the Values class has been removed. It is now a structure containing constant values and, as such, is accessible from anywhere in the code without your having to go through any other classes to reach it, as you did in the initial release. The BiasNode class has also been changed and renamed, as its implementation was just plain nasty in the first version of the code. It is no longer a separate node but is now implemented as part of the BasicNode class. All the save and load code has been updated to reflect the changes in the base classes, and the examples from release one now work with the new classes.
The most obvious changes are to the saving and loading of the XML data files. Here is the file for the Adaline network.
<?xml version="1.0" encoding="utf-8" ?>
As you can see from the above, the big change concerns the bias, which is now present in every node item in the file. Although the bias implementation still follows that of release one, in future it will be handled through the UseBias function. This means that, in the transfer function, any node can automatically check whether or not it is supposed to use the bias, without the crowbar techniques I used in the first release to get the bias value to the transfer function.
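To make the idea concrete, here is a minimal sketch of how a node might check the flag for itself inside its transfer function. Only the BasicNode and UseBias names come from the article; the fields, values, and threshold rule are my own assumptions, not the library's actual code.

```csharp
// Illustrative sketch only: the transfer function checks the bias flag
// itself instead of having the bias value "crowbarred" in from outside.
// Field names, default values, and the threshold rule are assumed here.
public class BasicNode
{
    private bool bUseBias = false;   // replaces the old separate BiasNode
    private double dBias = 1.0;      // bias value stored on the node itself

    public bool UseBias
    {
        get { return bUseBias; }
        set { bUseBias = value; }
    }

    public virtual double TransferFunction(double dSum)
    {
        // The node decides for itself whether the bias applies.
        if (UseBias)
            dSum += dBias;
        return dSum >= 0.0 ? 1.0 : -1.0;  // simple Adaline-style threshold
    }
}
```

With this arrangement, turning the bias on or off is just a matter of setting UseBias on the node; nothing else in the calling code changes.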
Note also that the Adaline Word save file looks similar to the file above; check the debug directory for the actual adalinewordnetwork.xml file.
When it comes to using the code, these changes are barely noticeable. The bias changes will only be felt when you are developing with the library to create a new network and implementing its transfer function, and with Values changed into a struct, it is simply a lot easier to use.
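For illustration, Values as a struct of constants might look like the sketch below. The Values name comes from the article, but the particular constants and their numbers are invented for this example and are not the library's real ones.

```csharp
// Illustrative sketch only: Values as a struct of constants that any
// code can reference directly, with no intermediate class required.
// The constant names and numbers here are assumed, not the library's own.
public struct Values
{
    public const double LearningRate = 0.45;   // assumed example constant
    public const int InputNodes = 2;           // assumed example constant
}
```

Because the members are constants on a struct, any part of the code can simply write `Values.LearningRate` without holding a reference to any other object.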
- 12 June 2003 :- Initial release.
- 4 November 2003 :- Review and edit for CP conformance.
Special thanks go to everyone involved with TortoiseCVS, which was used for version control.
All UML diagrams were generated using Metamill version 2.2.