Posted 13 Mar 2008
# A Neural Network on GPU

CC (ASA 2.5) · 9 min read
This article describes the implementation of a neural network with CUDA.

## Introduction

An Artificial Neural Network is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Neural Networks have been widely used to classify "analog" signals, including handwriting, voice, and images. Neural networks can also be used in computer games, giving a game the ability to adaptively learn from player behavior. This technique has been used in racing games, where computer-controlled opponent cars learn how to drive from human players.

Since a Neural Network requires a considerable number of vector and matrix operations to produce its results, it maps naturally onto a parallel programming model running on Graphics Processing Units (GPUs). Our goal is to unleash the power of the GPU to boost the performance of a Neural Network solving handwriting recognition problems.

This project was originally our graphics-architecture course project. We ran the same Neural Network described by Mike O'Neill in his brilliant article "Neural Network for Recognition of Handwritten Digits" on the GPU.

## About the Neural Network

A Neural Network consists of two basic kinds of elements, neurons and connections. Neurons connect with each other through connections to form a network, a simplified model of the human brain.

A Neural Network often has multiple layers; neurons of one layer connect to neurons of the next layer in some pattern, and every connection is assigned a weight value. Input data are fed into the neurons of the first layer; by computing the weighted sum of all connected first-layer neurons, we obtain the value of a second-layer neuron, and so on, until we reach the last layer, which is the output. The computations involved in operating a Neural Network are essentially a series of dot products.
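In code, a single neuron's value is just such a dot product plus a bias, passed through an activation function. A minimal sketch (names are ours, not from the article's code; tanh stands in for whatever activation a given network uses):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One neuron's value: dot product of its connected inputs and weights,
// plus a bias, squashed by an activation function (tanh here).
double neuronValue(const std::vector<double>& inputs,
                   const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return std::tanh(sum);
}
```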

The secret of a Neural Network lies entirely in its weight values: the right values make it work. At the beginning, however, we don't know those values, so we train the network with sample inputs and compare its outputs against the desired answers. A training algorithm takes the errors as input and adjusts the network weights accordingly. Given enough training, the Neural Network can reach high accuracy.
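The core of any such training algorithm is nudging each weight against the error it contributed. A one-line sketch of that idea (an illustration only; Mike's article uses full back-propagation across all layers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One gradient-style update: move each weight opposite to (error x input),
// scaled by a learning rate. A sketch of the idea, not Mike's backprop code.
void updateWeights(std::vector<double>& weights,
                   const std::vector<double>& inputs,
                   double error, double learningRate) {
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] -= learningRate * error * inputs[i];
}
```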

The neural network we implemented is a five-layer network called a convolutional neural network, a kind of network proven to be suitable for recognizing handwritten digits. For more theoretical details, please check out Mike's article and the references he lists.

The first three layers of our neural network consist of several feature maps, each subsampled from the previous layer. Our input is a 29*29 image of a digit, so we have 29*29 = 841 neurons in the first layer. The second layer is a convolutional layer with 6 feature maps. Each feature map is a 13*13 image sampled from the first layer, and each pixel/neuron in a feature map is computed by applying a 5*5 convolutional kernel to the input layer. So there are 13*13*6 = 1014 neurons in this layer, (5*5+1 (bias node))*6 = 156 weights, and 1014*(5*5+1) = 26364 connections linking to the first layer.

Layer 3 is also a convolutional layer, but with 50 smaller feature maps. Each feature map is 5*5 in size, and each pixel in these feature maps is computed from a 5*5 convolutional kernel applied to corresponding areas of all 6 feature maps of the previous layer. There are thus 5*5*50 = 1250 neurons in this layer, (5*5+1)*6*50 = 7800 weights, and 1250*26 = 32500 connections.

The fourth layer is a fully-connected layer with 100 neurons. Since it is fully-connected, each of the 100 neurons in the layer is connected to all 1250 neurons in the previous layer. There are therefore 100*(1250+1) = 125100 weights and 100*1251 = 125100 connections.

Layer 5 is the final output layer. This layer is also a fully-connected layer with 10 units. Each of the 10 neurons in this layer is connected to all 100 neurons of the previous layer. There are 10 neurons in Layer 5, 10*(100+1) = 1010 weights and 10*101 = 1010 connections.
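The neuron, weight, and connection counts above follow from a few lines of arithmetic; as a quick sanity check (constant names are ours):

```cpp
#include <cassert>

// Sizes of the five-layer network, computed from the structure above.
constexpr int kLayer1Neurons = 29 * 29;                      // 841 input pixels
constexpr int kLayer2Neurons = 13 * 13 * 6;                  // 1014
constexpr int kLayer2Weights = (5 * 5 + 1) * 6;              // 156
constexpr int kLayer2Conns   = kLayer2Neurons * (5 * 5 + 1); // 26364
constexpr int kLayer3Neurons = 5 * 5 * 50;                   // 1250
constexpr int kLayer3Weights = (5 * 5 + 1) * 6 * 50;         // 7800
constexpr int kLayer3Conns   = kLayer3Neurons * 26;          // 32500
constexpr int kLayer4Weights = 100 * (kLayer3Neurons + 1);   // 125100
constexpr int kLayer5Weights = 10 * (100 + 1);               // 1010
```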

As you can see, although structurally simple, this Neural Network is a huge data structure.

## Previous GPU Implementation

The Fast Artificial Neural Network Library (FANN) has a very simple GPU implementation of a Neural Network using GLSL. Each neuron is represented by a single color channel of a texture pixel. This network is very restricted: neuron values range from 0 to 1 and have an accuracy of only 8 bits. The implementation takes advantage of the hardware-accelerated dot product to calculate neurons. Both neurons and weights are carried in texture maps.

This implementation is straightforward but limited. First, our neural network requires 32-bit floating-point accuracy for each neuron. Since our network has five layers, accuracy lost at the first layer accumulates and can alter the final results. And because a handwriting recognition system must be sensitive enough to detect slight differences between inputs, using only 8 bits to represent a neuron is unacceptable. Secondly, normal Neural Networks map neuron values to the range 0 to 1. Our Neural Network, however, is specifically designed for handwriting recognition and has a special activation function mapping each neuron value to the range -1 to 1. Therefore, if a neuron were represented by a single color value as in the FANN library, our neurons would lose even more accuracy. Finally, the FANN method uses a dot product to compute neurons, which is suitable for fully connected Neural Networks; our Neural Network is only partially connected, so its computations do not reduce to dot products of large, dense vectors.
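To see how coarse the 8-bit representation is: a single color channel holds only 256 levels, so storing a neuron in it rounds the value to the nearest multiple of 1/255, an error of up to about 0.002 per neuron, before any accumulation across layers. A sketch of that quantization (the helper is ours, for illustration):

```cpp
#include <cassert>
#include <cmath>

// Round a value in [0, 1] to the nearest of the 256 levels an 8-bit
// color channel can represent, then map it back to [0, 1].
double quantize8bit(double x) {
    return std::round(x * 255.0) / 255.0;
}
```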

## Our Implementation

Due to all the inconveniences of GLSL mentioned above, we chose CUDA. A Neural Network suits the GPU because training and execution are two separate processes: once properly trained, no write access is required while using the network, so there are no synchronization issues to address. Moreover, neurons on the same network layer are completely independent of each other, so their values can be computed in parallel.

In our code, the weights for the first layer are stored as an array, and these inputs are copied to the device. For each network layer there is a CUDA function that computes the neuron values of that layer, since parallelism can only be achieved within one layer and the connections differ between layers. The connections of the Neural Network are defined implicitly, inside the CUDA functions, by the equations that compute the next layer's neurons; no explicit connection data structure exists in our code. This is one main difference between our code and Mike's CPU version.

For example, each neuron value of the second level is a weighted sum of 25 neurons of the first level and one bias. The second neuron level is composed of 6 feature maps; each has a size of 13*13. We assign a `blockID` for each feature map and a `threadID` for each neuron on a feature map. Every feature map is handled by a block and each pixel on it is dealt with by a thread.

This is the CUDA function that computes the second network layer:

C++
```
__global__ void executeFirstLayer(float *Layer1_Neurons_GPU,
                                  float *Layer1_Weights_GPU,
                                  float *Layer2_Neurons_GPU)
{
    int blockID = blockIdx.x;    // one block per feature map
    int pixelX  = threadIdx.x;   // one thread per feature-map pixel
    int pixelY  = threadIdx.y;

    // offsets of the 5x5 convolution window within the 29x29 input
    int kernelTemplate[25] = {
        0,   1,   2,   3,   4,
        29,  30,  31,  32,  33,
        58,  59,  60,  61,  62,
        87,  88,  89,  90,  91,
        116, 117, 118, 119, 120 };

    int weightBegin = blockID * 26;   // 26 weights (1 bias + 25) per map
    int windowX = pixelX * 2;
    int windowY = pixelY * 2;

    float result = 0;

    result += Layer1_Weights_GPU[weightBegin];   // bias weight
    ++weightBegin;

    for (int i = 0; i < 25; ++i)
    {
        result += Layer1_Neurons_GPU[windowY * 29 + windowX + kernelTemplate[i]]
                * Layer1_Weights_GPU[weightBegin + i];
    }

    // scaled tanh activation, mapping the result into (-1.7159, 1.7159)
    result = 1.7159f * tanhf(0.66666667f * result);

    Layer2_Neurons_GPU[13 * 13 * blockID + pixelY * 13 + pixelX] = result;
}
```

All other layers are computed the same way; the only difference is the equation used to calculate the neurons.
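A CPU reference of the same layer-2 computation is handy for checking GPU results; the feature-map and pixel loops simply stand in for the CUDA block/thread grid, while the indexing and activation are identical. (This sketch is ours, not part of the original code.)

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// CPU reference for the layer-2 kernel above.
void computeLayer2CPU(const std::vector<float>& l1Neurons,  // 29*29
                      const std::vector<float>& l1Weights,  // 26 per map
                      std::vector<float>& l2Neurons) {      // 13*13*6
    static const int kernelTemplate[25] = {
        0,   1,   2,   3,   4,
        29,  30,  31,  32,  33,
        58,  59,  60,  61,  62,
        87,  88,  89,  90,  91,
        116, 117, 118, 119, 120 };

    for (int map = 0; map < 6; ++map)                     // blockIdx.x
        for (int pixelY = 0; pixelY < 13; ++pixelY)       // threadIdx.y
            for (int pixelX = 0; pixelX < 13; ++pixelX) { // threadIdx.x
                int w = map * 26;
                float result = l1Weights[w++];            // bias weight
                for (int i = 0; i < 25; ++i)
                    result += l1Neurons[pixelY * 2 * 29 + pixelX * 2
                                        + kernelTemplate[i]]
                            * l1Weights[w + i];
                l2Neurons[13 * 13 * map + pixelY * 13 + pixelX] =
                    1.7159f * std::tanh(0.66666667f * result);
            }
}
```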

The main program first transfers all the input data to GPU and then calls each CUDA function in order and finally gets the answer.

The user interface is a separate program written in C#. Users draw a digit with the mouse on the input pad; the program then generates a 29*29 image and calls the kernel Neural Network program. The kernel, as described above, reads the input image and feeds it into our Neural Network. Results are returned via files and read back by the user interface.

Here is a screenshot. After drawing a digit, we get all 10 neuron values of the last network layer. The index of the maximum neuron value is the most probable digit. We shade the candidates with different depths of red according to their probabilities.
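Picking the recognized digit from the output layer is a simple argmax over the ten values (an illustrative helper, not the article's code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// The recognized digit is the index of the largest output-layer neuron.
int predictedDigit(const std::vector<float>& outputLayer) {
    return static_cast<int>(std::max_element(outputLayer.begin(),
                                             outputLayer.end())
                            - outputLayer.begin());
}
```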

On the right, the user interface will print out feature maps of the first three layers.

Note that C# under Windows XP has a resolution issue. We tested our program under 120 DPI; a 96 DPI setting can shift the input image around, badly affecting accuracy.

No training is included in our GPU implementation; we used Mike's code to train the weights and cached them in files.

## Result

#### Accuracy

Our Neural Network can achieve 95% accuracy. The database we used to train the network, MNIST, contains 60,000 handwriting samples from different people. Dr. LeCun reports that this network converges after around 25 training epochs, a number confirmed by our tests. We had only around 1,400 misrecognized samples out of 60,000 inputs.

Also note that there is a bug in Mike's code. This is the corrected code for initializing the second layer:

C++
```
// fm, ii, jj, kk, and iNumWeight are ints declared earlier in Mike's function
for ( fm=0; fm<50; ++fm)
{
    for ( ii=0; ii<5; ++ii )
    {
        for ( jj=0; jj<5; ++jj )
        {
            // iNumWeight = fm * 26;  // the bug: 26 is NOT the weight count
            iNumWeight = fm * 156;    // 156 is the number of weights per feature map
            NNNeuron& n = *( pLayer->m_Neurons[ jj + ii*5 + fm*25 ] );

            n.AddConnection( ULONG_MAX, iNumWeight++ );  // bias weight

            for ( kk=0; kk<25; ++kk )
            {
                // note: max val of index == 1013,
                // corresponding to 1014 neurons in the previous layer
                n.AddConnection(       2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
                n.AddConnection( 169 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
                n.AddConnection( 338 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
                n.AddConnection( 507 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
                n.AddConnection( 676 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
                n.AddConnection( 845 + 2*jj + 26*ii + kernelTemplate2[kk], iNumWeight++ );
            }
        }
    }
}
```

Our GPU implementation is based on the corrected version; however, the fix makes little difference in terms of accuracy.

#### Performance

The major reason for computing a Neural Network on the GPU is performance, and the outcome is promising compared to the CPU implementation. The table above compares the execution times of the GPU version, the EmuRelease (emulation) version, and the CPU version on a single input sample. The GPU version is 270 times faster than the CPU version and 516.6 times faster than the EmuRelease version. To be more accurate, we also considered the GPU version's IO time; even with IO included, our method is 10 times faster. And in practical use, the weight values only need to be loaded onto the device once.

## History

• 14th March, 2008: Initial post

## About the Authors

Ph.D. student, University of California, Davis

Ph.D. student, University of California, Davis
