
Convolutional Neural Network Workbench

By Filip D'haene, 22 Jan 2012
Download CNNWB.zip - 2.53 MB

Introduction

This article is about a Microsoft C# 4.0 WPF implementation of a framework that lets you create, train, and test convolutional neural networks against the MNIST dataset of handwritten digits or the CIFAR-10 dataset of 10 different natural objects. There is a magnificent article by Mike O'Neill on The Code Project about the same subject. Without his great article and C++ demo code, this project wouldn't exist. I also relied heavily on Dr. Yann LeCun's paper Gradient-Based Learning Applied to Document Recognition to understand more about the principles of convolutional neural networks and the reason why they are so successful in the area of machine vision. Mike O'Neill uses Patrice Simard's implementation, where the subsampling step is integrated into the structure of the convolutional layer itself. Dr. Yann LeCun uses a separate subsampling step in his LeNet-5, along with non-fully connected layers. The framework presented here supports all of these layer types, and also offers a Max-Pooling layer that you can use instead of plain Average-Pooling. The default squashing function is tanh(), and the training target value is set to 0.8 rather than 1.0 because it lies at the point of maximum second derivative of the non-linearity, so there is less saturation. The input images are all normalised to the range [-1, 1], and the input layer is a fixed 32x32 window.
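To make the saturation argument concrete, here is a minimal sketch (an illustration only, not code from the project) showing that the 0.8 target is reached at a finite input where the tanh gradient is still healthy, whereas a target of 1.0 can only be approached asymptotically:

```csharp
using System;

class SquashingDemo
{
    // derivative of the tanh squashing function: 1 - tanh(x)^2
    static double TanhDerivative(double x)
    {
        double t = Math.Tanh(x);
        return 1.0 - t * t;
    }

    static void Main()
    {
        // atanh(0.8) = 0.5 * ln(1.8 / 0.2) = ln 3 ≈ 1.0986: a finite input
        double x = 0.5 * Math.Log(1.8 / 0.2);

        Console.WriteLine(Math.Tanh(x));        // ≈ 0.8, the training target
        Console.WriteLine(TanhDerivative(x));   // ≈ 0.36, gradient far from zero

        // A target of 1.0 would require x → ∞, where the gradient vanishes
        Console.WriteLine(TanhDerivative(5.0)); // ≈ 0.00018, nearly saturated
    }
}
```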

The Code

The main goal of this project was to build an enhanced and extended version of Mike O'Neill's excellent C++ project, this time written in C# 4.0 and using WPF with a simple MVVM pattern for the GUI instead of Windows Forms. I've included and used the WPF TaskDialog Wrapper from Sean A. Hanley instead of the Windows API Code Pack because the former is more compact and fit my needs perfectly. The Extended WPF Toolkit is also used, and for unzipping the CIFAR-10 dataset I used the open-source SharpZipLib module from SharpDevelop. Visual Studio 2010 and Windows Vista SP2 are therefore the minimum requirements to use my application. I also made maximal use of the parallel functionality offered in C# 4.0: at any time, the user can choose how many logical cores are used in the parallel-optimised code paths, simply by adjusting the slider next to the View combobox.
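The core-count slider maps onto a standard Task Parallel Library mechanism; a minimal sketch of the idea (the names here are hypothetical, not the project's actual code):

```csharp
using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // Capping the number of logical cores a parallel loop may use,
        // the way a GUI slider value would be applied (illustrative only).
        var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };

        double[] sums = new double[8];
        Parallel.For(0, sums.Length, options, i =>
        {
            sums[i] = i * i; // stand-in for a per-pattern forward/backward pass
        });

        Console.WriteLine(sums[7]); // prints 49
    }
}
```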

Using the code

Here is example code that constructs a LeNet-5 network with my framework (see the InitializeDefaultNeuralNetwork() function in MainViewWindows.xaml.cs):

NeuralNetworks network = new NeuralNetworks("LeNet-5", 0.8D, LossFunctions.MeanSquareError, DataProviderSets.MNIST, 0.02D);
network.Layers.Add(new Layers(network, LayerTypes.Input, 1, 32, 32));
network.Layers.Add(new Layers(network, LayerTypes.Convolutional, ActivationFunctions.Tanh, 6, 28, 28, 5, 5));
network.Layers.Add(new Layers(network, LayerTypes.Subsampling, ActivationFunctions.AveragePoolingTanh, 6, 14, 14, 2, 2));

List<bool> mapCombinations = new List<bool>(16 * 6) 
{
 true, false,false,false,true, true, true, false,false,true, true, true, true, false,true, true,
 true, true, false,false,false,true, true, true, false,false,true, true, true, true, false,true,
 true, true, true, false,false,false,true, true, true, false,false,true, false,true, true, true,
 false,true, true, true, false,false,true, true, true, true, false,false,true, false,true, true,
 false,false,true, true, true, false,false,true, true, true, true, false,true, true, false,true,
 false,false,false,true, true, true, false,false,true, true, true, true, false,true, true, true
};

network.Layers.Add(new Layers(network, LayerTypes.Convolutional, ActivationFunctions.Tanh, 16, 10, 10, 5, 5, new Mappings(network, 2, mapCombinations)));
network.Layers.Add(new Layers(network, LayerTypes.Subsampling, ActivationFunctions.AveragePoolingTanh, 16, 5, 5, 2, 2));
network.Layers.Add(new Layers(network, LayerTypes.Convolutional, ActivationFunctions.Tanh, 120, 1, 1, 5, 5));
network.Layers.Add(new Layers(network, LayerTypes.FullyConnected, ActivationFunctions.Tanh, 10));
network.InitWeights();
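The map sizes passed to each layer follow from simple arithmetic: a valid 5x5 convolution shrinks each side by 4, and a 2x2 subsampling halves it. A quick sanity check of the sizes above (not part of the project's API):

```csharp
using System;

class LeNet5Sizes
{
    // valid convolution: out = in - kernel + 1
    static int Conv(int size, int kernel) { return size - kernel + 1; }
    // 2x2 subsampling halves each side
    static int Pool(int size) { return size / 2; }

    static void Main()
    {
        int s = 32;           // input layer:             32x32
        s = Conv(s, 5);       // convolutional, 6 maps:   28x28
        s = Pool(s);          // subsampling:             14x14
        s = Conv(s, 5);       // convolutional, 16 maps:  10x10
        s = Pool(s);          // subsampling:             5x5
        s = Conv(s, 5);       // convolutional, 120 maps: 1x1
        Console.WriteLine(s); // prints 1
    }
}
```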

Design View

[Screenshot: Design view (MNIST)]

[Screenshot: Design view (CIFAR-10)]

This is the Design view, where you can see how the network is defined and inspect the weights of all the layers. When you hover the mouse over a single weight, a tooltip shows the corresponding weight value. If you have changed the block size, you can refresh the weights graphic at any time so it is redrawn at the preferred size.

Training View

[Screenshot: Training view (MNIST)]

[Screenshot: Training view (CIFAR-10)]

This is the Training view, where you train the network. The 'Play' button opens the 'Select Training Parameters' dialog, where you define the basic training parameters. The 'Training Scheme Editor' button lets you fully define your own training schemes and save and load them as you want. At any time, training can be aborted by pressing the 'Stop' button, and pressing the star button resets all the weight values.

[Screenshot: Select Training Parameters dialog]

[Screenshot: Training Scheme Editor]

Testing View

[Screenshot: Testing view (MNIST)]

[Screenshot: Testing view (CIFAR-10)]

In the Testing view, you can test your network and get a graphical confusion matrix that shows all the misclassifications.
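A confusion matrix like the one displayed here is simple to accumulate: one counter per (target, predicted) pair, with the off-diagonal cells being the misses. A minimal sketch with hypothetical data (not the project's code):

```csharp
using System;

class ConfusionMatrixDemo
{
    static void Main()
    {
        const int classCount = 10;
        int[,] confusion = new int[classCount, classCount];

        // hypothetical { target, predicted } pairs from a test run
        int[][] results =
        {
            new[] { 3, 3 }, new[] { 5, 3 }, new[] { 3, 5 },
            new[] { 7, 7 }, new[] { 5, 3 }
        };
        foreach (int[] r in results)
            confusion[r[0], r[1]]++;   // off-diagonal cells are the misses

        Console.WriteLine(confusion[5, 3]); // prints 2: '5' misread as '3' twice
        Console.WriteLine(confusion[7, 7]); // prints 1: one correct '7'
    }
}
```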

Calculate View

[Screenshot: Calculate view (MNIST)]

[Screenshot: Calculate view (CIFAR-10)]

In the Calculate view, you can take a single digit or object with the desired properties, fire it through the network, and get a graphical view of all the output values in every layer.

Final Words

I would love to see a DirectCompute 5.0 integration for offloading the highly parallel task of training the neural network to a DirectX 11 compliant GPU if one is available. But I've never programmed with DirectX or any other shader-based language before, so if there's anyone out there with more experience in this area, any help is very welcome.

I made an attempt to use a simple MVVM structure in this WPF application. In the Model folder, you can find the files for the neural network class and also a DataProvider class that deals with loading and providing the necessary MNIST training and testing samples. There is also a NeuralNetworkDataSet class that is used by the project to load and save neural network definitions, weights, or both (full) from or to a file on disk. Then there is the View folder, which contains the four different PageViews in the project and a global PageView that acts as a container for the different views (Design, Training, Testing, and Calculate). In the ViewModel folder, you will find a PageViewModelBase class from which the corresponding four ViewModels are derived. All the rest is found in the MainViewWindows.xaml.cs class.

I hope there's someone out there who can actually use this code and improve on it: extend it with an unsupervised learning stage, for example (encoder/decoder construction), implement a better loss function (negative log likelihood instead of MSE), extend it to more test databases, make use of more advanced squashing functions, etc.

History

1.0.1.4:

- The performance of Design View is now better optimized for bigger networks.

- It's now possible to adjust the block size of the weight and output values graphic.

- In Design View you can refresh the weights graphic to the current block size.

1.0.1.3:

- Performance improvements in training networks

- Performance improvement in displaying Design View (still too slow for big networks)

- Minor GUI changes

1.0.1.2:

- Now all the fully connected layers are displayed in Calculate View.

- Changing the background color is working properly now.

1.0.1.1:

- Now you can easily reset the weights values in Training View.

- By using Max-Pooling with the CIFAR-10 dataset, the results are much better. I've also horizontally flipped each training pattern to double the size of the training set.

- Some minor fixes.

1.0.1.0:

- The CIFAR-10 Dataset of 10 natural objects in color is now fully supported.

- The weights in Design View are now correctly displayed. (still slow on big networks)

- The file format used to save and load weights, definitions, etc. is changed and incompatible with previous versions.

1.0.0.1:

- Now you can see all the weight and bias values in every layer.

- Renaming some items so they make more sense (KernelTypes.Sigmoid => ActivationFunctions.Tanh)

- As a last layer you can use LeCun's RBF layer with fixed weights.

- Now it is possible to use ActivationFunctions.AbsTanh to have a rectified convolutional layer.

1.0.0.0:

- Initial release

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Filip D'haene
Software Developer
Belgium
No Biography provided


Article Copyright 2010 by Filip D'haene
Everything else Copyright © CodeProject, 1999-2014