# Single Layer Perceptron as Linear Classifier

Posted 7 Nov 2010, CPOL

In this article, I will show you how to use a single-layer perceptron as a linear classifier of two classes.

## Introduction

The perceptron is the simplest type of feed-forward neural network. It was designed by Frank Rosenblatt as a dichotomic classifier of two classes, which means the problems the network can solve must be linearly separable. A basic perceptron consists of 3 layers:

• Sensor layer
• Associative layer
• Output neuron

The sensor layer provides a number of inputs (xn), each with a corresponding weight (wn), and there is one output. The weight w0 is sometimes called the bias, and x0 = +1 or -1 (in this case, x0 = -1).

For every input of the perceptron (including the bias), there is a corresponding weight. To calculate the output of the perceptron, every input is multiplied by its corresponding weight, the weighted sum of all inputs is computed, and the result is fed through a limiter function that evaluates the final output of the perceptron.

The output of the network is formed by the activation of the output neuron, which is a function of its total input:

$$y = F\left(\sum_{i=0}^{n} w_i x_i\right) \tag{1}$$

The activation function F can be linear, so that we have a linear network, or nonlinear. In this example, I decided to use the threshold (signum) function:

$$F(u) = \begin{cases} +1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases} \tag{2}$$

The output of the network in this case is either +1 or -1, depending on the input. If the total input (the weighted sum of all inputs) is non-negative, the pattern belongs to class +1, otherwise to class -1. Because of this behavior, we can use the perceptron for classification tasks.
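The thresholded output can be sketched in a few lines of C#; this helper and its name are mine, not taken from the article's code:

```csharp
// Minimal sketch of the perceptron output for two inputs with x0 = -1,
// so the bias term enters the total input as -w0.
static int Output(double w0, double w1, double w2, double x1, double x2)
{
    double u = w1 * x1 + w2 * x2 - w0;  // total input (weighted sum)
    return u < 0 ? -1 : 1;              // signum activation
}
```

For example, with w0 = 0 and w1 = w2 = 1, the input (1, 1) gives u = 2, so the pattern is classified as +1.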

Let's consider a perceptron with 2 inputs whose task is to separate the input patterns into 2 classes. In this case, the boundary between the classes is a straight line, given by the equation:

$$w_1 x_1 + w_2 x_2 + w_0 x_0 = 0 \tag{3}$$

When we set x0 = -1 and denote w0 = θ, we can rewrite equation (3) in the form:

$$w_1 x_1 + w_2 x_2 = \theta \tag{4}$$

Here I will describe the learning method for the perceptron. Learning is an iterative procedure that adjusts the weights: a learning sample is presented to the network, and for each weight, the new value is computed by adding a correction to the old value. The threshold is updated in the same way:

$$w_i^{new} = w_i^{old} + \alpha \, (d - y) \, \frac{x_i}{2} \tag{5}$$

where y is the output of the perceptron, d is the desired output, and α is the learning rate. The division by 2 is there because the two classes are coded as -1 and +1, so the error (d - y) is either -2, 0, or +2; with classes coded as 0 and 1, it would not be needed.
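A single update step can be traced numerically; the concrete values below are an illustration of mine, not taken from the article:

```csharp
// One application of update rule (5) to the weight w1 for a
// misclassified sample: desired d = +1, actual output y = -1.
double alpha = 0.5;                  // learning rate
double d = 1, y = -1;                // desired vs. actual output
double x1 = 2.0, w1 = 0.0;
w1 = w1 + alpha * (d - y) * x1 / 2;  // 0 + 0.5 * 2 * 2.0 / 2 = 1.0
```

After this step, the sample (with this input) contributes positively to the total input, pushing the output toward the desired class.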

## Using the Program

When you run the program, you will see an area where you can input samples. Clicking the left mouse button on this area adds a first-class sample (blue cross); clicking the right mouse button adds a second-class sample (red cross). Samples are added to the `samples` list. You can also set the learning rate and the number of iterations. When you have set all these values, click the Learn button to start learning.

## Using the Code

All samples are stored in the generic list `samples`, which holds only `Sample` objects.

```csharp
public class Sample
{
    double x1;
    double x2;
    double cls;

    public Sample(double x1, double x2, int cls)
    {
        this.x1 = x1;
        this.x2 = x2;
        this.cls = cls;
    }

    public double X1
    {
        get { return x1; }
        set { this.x1 = value; }
    }

    public double X2
    {
        get { return x2; }
        set { this.x2 = value; }
    }

    public double Class
    {
        get { return cls; }
        set { this.cls = value; }
    }
}
```

Before running the perceptron learning, it is important to set the learning rate and the number of iterations. The perceptron has one great property: if a solution exists, the perceptron will always find it. The problem occurs when a solution does not exist; in that case, the perceptron would search for it in an infinite loop, so to avoid this, it is better to set a maximum number of iterations.

The next step is to assign random values for weights (w0, w1 and w2).

```csharp
Random rnd = new Random();

w0 = rnd.NextDouble();
w1 = rnd.NextDouble();
w2 = rnd.NextDouble();
```

Once random values are assigned to the weights, we can loop through the samples, compute the output for each sample, and compare it with the desired output.

```csharp
double x1 = samples[i].X1;
double x2 = samples[i].X2;
int y;

if (((w1 * x1) + (w2 * x2) - w0) < 0)
{
    y = -1;
}
else
{
    y = 1;
}
```

I decided to set `x0=-1`, so the total input of the perceptron is u = w1*x1 + w2*x2 - w0, and the output y is its sign. When the perceptron output and the desired output don't match, we must compute new weights:

```csharp
if (y != samples[i].Class)
{
    error = true;

    // (samples[i].Class - y) is +/-2 here, hence the division by 2
    w0 = w0 + alpha * (samples[i].Class - y) * x0 / 2;
    w1 = w1 + alpha * (samples[i].Class - y) * x1 / 2;
    w2 = w2 + alpha * (samples[i].Class - y) * x2 / 2;
}
```

`y` is the output of the perceptron and `samples[i].Class` is the desired output. The last 2 steps (looping through the samples and computing new weights) must be repeated while the `error` flag is `true` and the current number of iterations (`iterations`) is less than `maxIterations`.

```csharp
int i;
int iterations = 0;
bool error = true;
double x0 = -1;   // bias input

maxIterations = int.Parse(txtIterations.Text);

Random rnd = new Random();

w0 = rnd.NextDouble();
w1 = rnd.NextDouble();
w2 = rnd.NextDouble();

alpha = (double)trackLearningRate.Value / 1000;

while (error && iterations < maxIterations)
{
    error = false;

    for (i = 0; i < samples.Count; i++)
    {
        double x1 = samples[i].X1;
        double x2 = samples[i].X2;
        int y;

        if (((w1 * x1) + (w2 * x2) - w0) < 0)
        {
            y = -1;
        }
        else
        {
            y = 1;
        }

        if (y != samples[i].Class)
        {
            error = true;

            w0 = w0 + alpha * (samples[i].Class - y) * x0 / 2;
            w1 = w1 + alpha * (samples[i].Class - y) * x1 / 2;
            w2 = w2 + alpha * (samples[i].Class - y) * x2 / 2;
        }
    }

    objGraphics.Clear(Color.White);
    DrawSeparationLine();
    iterations++;
}
```
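The same loop, detached from the UI, can be packed into a self-contained method. The class and method names below are mine and do not appear in the article's download; the update logic is the one shown above:

```csharp
using System;
using System.Collections.Generic;

static class PerceptronSketch
{
    // Returns (w0, w1, w2) after at most maxIterations passes over the samples.
    // Cls must be coded as -1 or +1, matching the article's convention.
    public static (double, double, double) TrainPerceptron(
        List<(double X1, double X2, int Cls)> samples,
        double alpha, int maxIterations, int seed = 1)
    {
        var rnd = new Random(seed);   // fixed seed for repeatability
        double x0 = -1;               // bias input
        double w0 = rnd.NextDouble();
        double w1 = rnd.NextDouble();
        double w2 = rnd.NextDouble();

        bool error = true;
        for (int it = 0; error && it < maxIterations; it++)
        {
            error = false;
            foreach (var s in samples)
            {
                int y = (w1 * s.X1 + w2 * s.X2 - w0) < 0 ? -1 : 1;
                if (y != s.Cls)
                {
                    error = true;
                    w0 += alpha * (s.Cls - y) * x0 / 2;
                    w1 += alpha * (s.Cls - y) * s.X1 / 2;
                    w2 += alpha * (s.Cls - y) * s.X2 / 2;
                }
            }
        }
        return (w0, w1, w2);
    }
}
```

On a linearly separable sample set, the loop exits as soon as one full pass over the samples produces no misclassification.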

The function `DrawSeparationLine` draws the separation line between the 2 classes.
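The article does not list `DrawSeparationLine` itself, but from equation (4), a point on the line can be found by solving w1\*x1 + w2\*x2 - w0 = 0 for x2. A sketch (my helper, assuming w2 != 0):

```csharp
// For a given x1, the matching x2 on the separation line
// w1*x1 + w2*x2 - w0 = 0 is (assuming w2 != 0):
static double LineX2(double w0, double w1, double w2, double x1)
{
    return (w0 - w1 * x1) / w2;
}
// Two such points, e.g. at the left and right edges of the
// drawing area, are enough to draw the whole line.
```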

## History

• 07 Nov 2010 - Original version posted

## About the Author

Architect, The Staffing Edge & Marwin Cassovia Soft, Slovakia
My name is Robert Kanasz and I have been working with ASP.NET, WinForms and C# for several years.
MCSD - Web Applications
MCSE - Data Platform
MCPD - ASP.NET Developer 3.5
- Web Developer 4
- Database Developer 2008
MCSA - SQL Server 2012
MCTS - .NET Framework 3.5, ASP.NET Applications
- SQL Server 2008, Database Development
- SQL Server 2008, Implementation and Maintenance
- .NET Framework 4, Data Access
- .NET Framework 4, Service Communication Applications
- .NET Framework 4, Web Applications
MS - Programming in HTML5 with JavaScript and CSS3 Specialist

Open source projects: DBScripter - Library for scripting SQL Server database objects
