A Neural Network on GPU

This article describes the implementation of a neural network with CUDA.

Introduction

An Artificial Neural Network is an information processing method inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Neural Networks have been widely used to classify "analogue" signals, including handwriting, voice, and images. Neural Networks can also be used in computer games, giving a game the ability to adaptively learn from player behavior. This technique has been used in racing games, where computer-controlled opponent cars learn how to drive from human players.

Since a Neural Network requires a considerable number of vector and matrix operations to produce results, it maps naturally to a parallel programming model and to Graphics Processing Units (GPUs). Our goal is to unleash the power of the GPU to boost the performance of a Neural Network solving a handwriting recognition problem.

This project was originally our graphics architecture course project. We ported to the GPU the same Neural Network described by Mike O'Neill in his brilliant article "Neural Network for Recognition of Handwritten Digits".

About the Neural Network

A Neural Network consists of two basic kinds of elements: neurons and connections. Neurons connect with each other through connections to form a network; this is a simplified theoretical model of the human brain.

A Neural Network often has multiple layers; neurons of one layer connect to neurons of the next layer in some pattern, and every connection is assigned a weight value. At the beginning, input data are fed into the neurons of the first layer; by computing the weighted sum of all connected first-layer neurons, we obtain the value of a second-layer neuron, and so on, until we reach the last layer, which is the output. All of the computations involved in running a Neural Network therefore boil down to a series of dot products.
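As a minimal sketch (the function and parameter names here are illustrative, not the article's code; the scaled-tanh activation is the one used later in this article), computing a single neuron value looks like this:

C++
#include <math.h>

// Compute one neuron value: bias plus the weighted sum of its inputs,
// passed through the scaled tanh activation used throughout this article.
float computeNeuron(const float *inputs, const float *weights, int numInputs)
{
    float sum = weights[0];                 // weights[0] is the bias
    for (int i = 0; i < numInputs; ++i)
        sum += inputs[i] * weights[i + 1];  // weighted sum (a dot product)
    return 1.7159f * tanhf(0.66666667f * sum);
}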

The secret of a Neural Network lies entirely in its weight values: with the right weights, it performs well. At the beginning, however, we don't know those values, so we train the network with sample inputs and compare its outputs with the desired answers. A training algorithm takes the resulting errors and adjusts the network weights accordingly. Given enough patience, the Neural Network can be trained to achieve high accuracy.
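In practice this is typically done with backpropagation, which is also what Mike's CPU code uses for training. As a rough sketch (not the training code used here), each weight is nudged against the gradient of the error:

C++
// Gradient-descent weight update (sketch). 'gradient' would be obtained by
// backpropagating the output error through the network.
void updateWeights(float *weights, const float *gradient, int n, float learningRate)
{
    for (int i = 0; i < n; ++i)
        weights[i] -= learningRate * gradient[i];
}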

[Figure: IllustrationNeuralNet.gif]

The neural network we implemented is a five-layer convolutional neural network. This kind of network has been proven suitable for recognizing handwritten digits. For more theoretical details, please check out Mike's article and the references he lists.

The first three layers of our neural network consist of several feature maps, each of which is a down-sampled version of the previous layer. Our input is a 29*29 image of a digit, so the first layer has 29*29 = 841 neurons. The second layer is a convolutional layer with 6 feature maps. Each feature map is a 13*13 image sampled from the first layer, and each pixel/neuron in a feature map is computed by applying a 5*5 convolutional kernel to the input layer. So, there are 13*13*6 = 1014 neurons in this layer, (5*5+1 (bias))*6 = 156 weights, and 1014*(5*5+1) = 26364 connections linking to the first layer.

Layer 3 is also a convolutional layer, but with 50 smaller feature maps. Each feature map is 5*5 in size, and each pixel in these feature maps is computed by applying a 5*5 convolutional kernel to corresponding areas of all 6 feature maps of the previous layer. There are thus 5*5*50 = 1250 neurons in this layer, (5*5+1)*6*50 = 7800 weights, and 1250*26 = 32500 connections.

The fourth layer is a fully-connected layer with 100 neurons: each of them is connected to all 1250 neurons of the previous layer. It therefore has 100*(1250+1) = 125100 weights and 100*1251 = 125100 connections.

Layer 5 is the final output layer, also fully connected, with 10 neurons. Each of them is connected to all 100 neurons of the previous layer, giving 10*(100+1) = 1010 weights and 10*101 = 1010 connections.
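To summarize (all figures are taken from the layer descriptions above):

Layer                 Neurons   Weights   Connections
1 (input)             841       -         -
2 (convolutional)     1014      156       26364
3 (convolutional)     1250      7800      32500
4 (fully connected)   100       125100    125100
5 (output)            10        1010      1010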

As you can see, although structurally simple, this Neural Network is a huge data structure.

Previous GPU Implementation

The Fast Artificial Neural Network library (FANN) has a very simple GPU implementation of a Neural Network using GLSL. Each neuron is represented by a single color channel of a texture pixel. This network is very specific: neuron values range from 0 to 1 and have an accuracy of only 8 bits. The implementation takes advantage of the hardware-accelerated dot product to calculate neuron values, and both neurons and weights are stored in texture maps.

This implementation is straightforward and easy, but limited. First, our neural network requires 32-bit floating-point accuracy for each neuron. Since our network has five layers, accuracy lost at the first layer accumulates and can alter the final results, and because a handwriting recognition system must be sensitive enough to detect slight differences between inputs, representing a neuron with only 8 bits is unacceptable. Secondly, typical Neural Networks map neuron values to the range 0 to 1; our Neural Network, designed specifically for handwriting recognition, uses a special activation function that maps each neuron value to the range -1 to 1, so representing a neuron with a single color value, as the FANN library does, would cost us further accuracy. Finally, the FANN method computes neurons with dot products, which is suitable for fully-connected Neural Networks; our Neural Network, however, is only partially connected, so its computations cannot simply be expressed as dot products of large vectors.

Our Implementation

Because of all the inconveniences of GLSL mentioned above, we finally chose CUDA. The reason a Neural Network is well suited to the GPU is that training and execution are two separate processes: once the network is properly trained, no write access is required while using it, so there are no synchronization issues to address. Moreover, neurons on the same network layer are completely independent of each other, so their values can be computed in a highly parallel way.

In our code, the weights are stored in plain arrays, and these, together with the input, are copied to the device. For each network layer there is a CUDA function that computes the neuron values of that layer, since parallelism can only be achieved within one layer and the connection patterns differ between layers. The connections of the Neural Network are defined implicitly in the CUDA functions by the equations that compute the next layer's neurons; no explicit connection data structure exists in our code. This is one main difference between our code and Mike's CPU version.

[Figure: cuda.PNG]

For example, each neuron value of the second layer is a weighted sum of 25 neurons of the first layer plus one bias. The second layer is composed of 6 feature maps, each 13*13 in size. We assign a block ID to each feature map and a thread ID to each neuron of a feature map: every feature map is handled by one block, and each pixel in it is handled by one thread.

This is the CUDA function that computes the second network layer:

C++
__global__ void executeFirstLayer
    (float *Layer1_Neurons_GPU,float *Layer1_Weights_GPU,float *Layer2_Neurons_GPU)
{
    // One block per feature map (6 blocks), one thread per output pixel (13x13 threads).
    int blockID=blockIdx.x;
    int pixelX=threadIdx.x;
    int pixelY=threadIdx.y;

    // Offsets of a 5x5 window inside the 29x29 input image.
    int kernelTemplate[25] = {
        0,  1,  2,  3,  4,
        29, 30, 31, 32, 33,
        58, 59, 60, 61, 62,
        87, 88, 89, 90, 91,
        116,117,118,119,120 };

    // Each feature map owns 26 weights: 1 bias followed by 25 kernel weights.
    int weightBegin=blockID*26;

    // The 5x5 window slides over the input with a stride of 2.
    int windowX=pixelX*2;
    int windowY=pixelY*2;

    float result=0;

    // Bias term.
    result+=Layer1_Weights_GPU[weightBegin];

    ++weightBegin;

    // Convolve the 5x5 input window with this feature map's kernel weights.
    for(int i=0;i<25;++i)
    {
        result+=Layer1_Neurons_GPU
            [windowY*29+windowX+kernelTemplate[i]]*Layer1_Weights_GPU[weightBegin+i];
    }

    // Scaled tanh activation, as in Mike's CPU implementation.
    result=(1.7159*tanhf(0.66666667*result));

    // Write the neuron value into this block's 13x13 feature map.
    Layer2_Neurons_GPU[13*13*blockID+pixelY*13+pixelX]=result;
} 

All other layers are computed in the same way; the only difference is the equation used to calculate the neuron values.
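For reference, a host-side launch matching the mapping described above (one block per feature map, one 13*13 thread block per map) might look like the following. This is a sketch and may differ slightly from the host code in the download:

C++
// Hypothetical launch: 6 blocks (one per feature map), 13x13 threads per block.
dim3 threadsPerBlock(13, 13);
executeFirstLayer<<<6, threadsPerBlock>>>
    (Layer1_Neurons_GPU, Layer1_Weights_GPU, Layer2_Neurons_GPU);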

[Figure: program.PNG]

The main program first transfers all the input data to the GPU, then calls each CUDA function in order, and finally reads back the answer.
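A simplified sketch of that flow is shown below; error checking is omitted, and the buffer names for the last layer are illustrative rather than taken from the actual source package:

C++
// Sketch of the host-side flow (names for layers 3-5 are illustrative).
void runNetworkOnGPU(const float *inputImage,     // 29x29 input digit from the UI
                     const float *layer2Weights,  // 156 cached weights for layer 2
                     float *output)               // 10 output neuron values
{
    float *Layer1_Neurons_GPU, *Layer1_Weights_GPU, *Layer2_Neurons_GPU, *Layer5_Neurons_GPU;
    cudaMalloc((void**)&Layer1_Neurons_GPU, 29*29*sizeof(float));
    cudaMalloc((void**)&Layer1_Weights_GPU, 156*sizeof(float));
    cudaMalloc((void**)&Layer2_Neurons_GPU, 13*13*6*sizeof(float));
    cudaMalloc((void**)&Layer5_Neurons_GPU, 10*sizeof(float));

    // Copy the input image and the cached weights to the device.
    cudaMemcpy(Layer1_Neurons_GPU, inputImage, 29*29*sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(Layer1_Weights_GPU, layer2Weights, 156*sizeof(float), cudaMemcpyHostToDevice);

    // Call the per-layer kernels in order, starting with the launch shown above;
    // the kernels and buffers for layers 3-5 are analogous and omitted here.

    // Read back the 10 neuron values of the last layer.
    cudaMemcpy(output, Layer5_Neurons_GPU, 10*sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(Layer1_Neurons_GPU); cudaFree(Layer1_Weights_GPU);
    cudaFree(Layer2_Neurons_GPU); cudaFree(Layer5_Neurons_GPU);
}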

[Figure: recod.jpg]

The user interface is a separate C# program. The user draws a digit with the mouse on the input pad; the program then generates a 29*29 image and calls the Neural Network kernel program. The kernel, as described above, reads the input image and feeds it into our Neural Network. Results are returned through files and read back by the user interface.

Here is a screenshot. After drawing a digit, we get the 10 neuron values of the last network layer. The index of the maximum neuron value is the most likely digit. Candidates are shaded with different depths of red according to their probabilities.

On the right, the user interface will print out feature maps of the first three layers.
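Picking the digit from the 10 output values is a simple arg-max. The UI itself is written in C#, but the idea is just the following (assuming output holds the 10 last-layer neuron values):

C++
// The recognized digit is the index of the largest output neuron.
int recognizedDigit = 0;
for (int i = 1; i < 10; ++i)
    if (output[i] > output[recognizedDigit])
        recognizedDigit = i;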

Note that C# under Windows XP has a display resolution issue. We tested our program at 120 dpi; a 96 dpi setting can shift the input image around, which badly affects the accuracy.

No training code is included in our GPU implementation. We used Mike's code to train all the weights and cached them in files.

Result

Accuracy

Our Neural Network achieves 95% accuracy. The database we used to train the network is MNIST, which contains 60000 handwriting samples from different people. Dr. LeCun reports that this network converges after around 25 training epochs, and our tests confirm this number. We ended up with only around 1400 misrecognized samples out of the 60000 inputs.

Also note that there is a bug in Mike's code. This is the corrected code for initializing the second convolutional layer (the one with 50 feature maps):

C++
for ( fm=0; fm<50; ++fm)
{
  for ( ii=0; ii<5; ++ii )
  {
    for ( jj=0; jj<5; ++jj )
    {
      // iNumWeight = fm * 26;  // 26 is the number of weights per feature map
      iNumWeight = fm * 156;  // 156 is the number of weights per feature map
      NNNeuron& n = *( pLayer->m_Neurons[ jj + ii*5 + fm*25 ] );

      n.AddConnection( ULONG_MAX, iNumWeight++ );  // bias weight

      for ( kk=0; kk<25; ++kk )
      {
        // note: max val of index == 1013, corresponding to 1014 neurons in prev layer
        n.AddConnection(       2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
        n.AddConnection( 169 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
        n.AddConnection( 338 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
        n.AddConnection( 507 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
        n.AddConnection( 676 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
        n.AddConnection( 845 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
      }
    }
  }
} 

Please refer to this for the details about this bug.

Our GPU implementation is based on the corrected version; however, the difference in terms of accuracy is small.

Performance

[Figure: performance.PNG — execution time comparison of the GPU, EmuRelease, and CPU versions]

The major reason for computing a Neural Network on the GPU is performance, and the outcome is promising compared with the CPU implementation. As shown in the table above, we compared the execution time of the GPU version, the EmuRelease (emulation) version, and the CPU version on a single input sample. The GPU version is 270 times faster than the CPU version and 516.6 times faster than the EmuRelease version. To be fair, we also measured the I/O time of the GPU version; even with I/O included, our method is still 10 times faster. In practical use, moreover, the weight values only need to be loaded onto the device once.
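For readers who want to reproduce the timing, GPU execution time can be measured with CUDA events; this is a generic sketch, not the exact measurement code we used:

C++
// Measure GPU execution time with CUDA events (sketch).
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, 0);
// ... launch the layer kernels here ...
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);

float elapsedMs = 0.0f;
cudaEventElapsedTime(&elapsedMs, start, stop);  // elapsed time in milliseconds

cudaEventDestroy(start);
cudaEventDestroy(stop);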

History

  • 14th March, 2008: Initial post

License

This article, along with any associated source code and files, is licensed under The Creative Commons Attribution-ShareAlike 2.5 License


Written By
Two Ph.D. students, University of California, Davis, United States
