**Overview**

Welcome back for the second installment in this series. This installment serves as an introduction to the world of convolution filters. It is also the first version of our program that offers one level of undo. We'll build on that later, but for now I thought it mandatory that you be able to undo your experiments without having to reload the image every time.

So what is a convolution filter? Essentially, it's a matrix, as follows:

```
0  0  0
0  1  0
0  0  0
```

Factor: 1, Offset: 0

The idea is that the pixel we are processing, and the eight that surround it, are each given a weight. The weighted values are summed, the sum is divided by a factor, and optionally an offset is added to the end value. The matrix above is called an identity matrix, because an image is not changed by passing through it. Usually the factor is the sum of all the values in the matrix, which ensures the end value will be in the range 0-255. Where this is not the case, for example in an embossing filter where the values add up to 0, an offset of 127 is common. I should also mention that convolution filters come in a variety of sizes (7x7 is not unheard of), and edge detection filters in particular are not symmetrical. Also, the bigger the filter, the more pixels we cannot process, because we cannot process pixels that do not have the full set of surrounding pixels our matrix requires. In our case, the outer edge of the image, to a depth of one pixel, will go unprocessed.
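The arithmetic for a single pixel can be sketched as follows. This is a minimal Python illustration of the scheme just described (the article's real implementation, in C#, comes later; the helper name here is made up):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    """Weighted sum of a 3x3 neighborhood, divided by the factor,
    plus the offset, clamped to the valid byte range 0-255."""
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

# The identity kernel leaves the centre pixel unchanged
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
pixels = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(convolve_pixel(pixels, identity, factor=1))  # 50
```

The clamp at the end is what keeps extreme kernels from wrapping values around the byte range.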

**A Framework**

First of all we need to establish a framework from which to write these filters, otherwise we'll find ourselves writing the same code over and again. As our filter now relies on surrounding values to get a result, we are going to need a source and a destination bitmap. I tend to create a copy of the bitmap coming in and use the copy as the source, as it is the one getting discarded in the end. To facilitate this, I define a matrix class as follows:

```csharp
public class ConvMatrix
{
    public int TopLeft = 0, TopMid = 0, TopRight = 0;
    public int MidLeft = 0, Pixel = 1, MidRight = 0;
    public int BottomLeft = 0, BottomMid = 0, BottomRight = 0;
    public int Factor = 1;
    public int Offset = 0;

    public void SetAll(int nVal)
    {
        TopLeft = TopMid = TopRight = MidLeft = Pixel = MidRight =
            BottomLeft = BottomMid = BottomRight = nVal;
    }
}
```

I'm sure you noticed that it is an identity matrix by default. I also define a method that sets all the elements of the matrix to the same value.

The pixel processing code is more complex than in our last article, because we need to access nine pixels across two bitmaps. I do this by defining constants for jumping one and two rows (we want to avoid calculations as much as possible in the main loop, so we define both rather than adding one to itself or multiplying by 2). As the initial offsets into the three color components are 0, 1, and 2, adding 3 and 6 to each of those gives us indices into three pixels across, and our row constants take care of the rows above and below. To make sure we don't have any values wrapping from the bottom of the range to the top, we accumulate each pixel value in an int, then clamp it to 0-255 before storing it. Here is the entire function:

```csharp
public static bool Conv3x3(Bitmap b, ConvMatrix m)
{
    // Avoid division by zero
    if (0 == m.Factor)
        return false;

    // Clone the bitmap to use as the source; the original becomes the destination
    Bitmap bSrc = (Bitmap)b.Clone();

    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    // Precompute the one- and two-row jumps
    int stride = bmData.Stride;
    int stride2 = stride * 2;
    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr SrcScan0 = bmSrc.Scan0;

    unsafe
    {
        byte* p = (byte*)(void*)Scan0;
        byte* pSrc = (byte*)(void*)SrcScan0;

        int nOffset = stride - b.Width * 3;
        int nWidth = b.Width - 2;
        int nHeight = b.Height - 2;
        int nPixel;

        for (int y = 0; y < nHeight; ++y)
        {
            for (int x = 0; x < nWidth; ++x)
            {
                // Red component
                nPixel = ((((pSrc[2] * m.TopLeft) +
                            (pSrc[5] * m.TopMid) +
                            (pSrc[8] * m.TopRight) +
                            (pSrc[2 + stride] * m.MidLeft) +
                            (pSrc[5 + stride] * m.Pixel) +
                            (pSrc[8 + stride] * m.MidRight) +
                            (pSrc[2 + stride2] * m.BottomLeft) +
                            (pSrc[5 + stride2] * m.BottomMid) +
                            (pSrc[8 + stride2] * m.BottomRight))
                            / m.Factor) + m.Offset);
                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[5 + stride] = (byte)nPixel;

                // Green component
                nPixel = ((((pSrc[1] * m.TopLeft) +
                            (pSrc[4] * m.TopMid) +
                            (pSrc[7] * m.TopRight) +
                            (pSrc[1 + stride] * m.MidLeft) +
                            (pSrc[4 + stride] * m.Pixel) +
                            (pSrc[7 + stride] * m.MidRight) +
                            (pSrc[1 + stride2] * m.BottomLeft) +
                            (pSrc[4 + stride2] * m.BottomMid) +
                            (pSrc[7 + stride2] * m.BottomRight))
                            / m.Factor) + m.Offset);
                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[4 + stride] = (byte)nPixel;

                // Blue component
                nPixel = ((((pSrc[0] * m.TopLeft) +
                            (pSrc[3] * m.TopMid) +
                            (pSrc[6] * m.TopRight) +
                            (pSrc[0 + stride] * m.MidLeft) +
                            (pSrc[3 + stride] * m.Pixel) +
                            (pSrc[6 + stride] * m.MidRight) +
                            (pSrc[0 + stride2] * m.BottomLeft) +
                            (pSrc[3 + stride2] * m.BottomMid) +
                            (pSrc[6 + stride2] * m.BottomRight))
                            / m.Factor) + m.Offset);
                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[3 + stride] = (byte)nPixel;

                p += 3;
                pSrc += 3;
            }

            // Skip the row padding, plus the two unprocessed border pixels (6 bytes)
            p += nOffset + 6;
            pSrc += nOffset + 6;
        }
    }

    b.UnlockBits(bmData);
    bSrc.UnlockBits(bmSrc);
    return true;
}
```

Not the sort of thing you want to have to write over and over, is it? Now we can use our ConvMatrix class to define filters and just pass them into this function, which does all the gruesome stuff for us.
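Before moving on, the index arithmetic inside that loop is worth making concrete. In a 24bpp bitmap the components are stored blue, green, red at three bytes per pixel, so one component lives at `y * stride + x * 3 + channel`. A small Python sketch (illustrative only; the helper name is made up):

```python
def component_index(x, y, channel, stride):
    """Byte offset of one color component in a 24bpp bitmap buffer."""
    return y * stride + x * 3 + channel

stride = 12  # e.g. a 4-pixel-wide image: 4 pixels * 3 bytes, no padding

# Red (channel 2) components of three adjacent pixels on one row -
# these are the 2, 5, 8 offsets used in the loop above
print([component_index(x, 0, 2, stride) for x in range(3)])  # [2, 5, 8]

# The same component one row down is exactly one stride away
print(component_index(1, 1, 2, stride) - component_index(1, 0, 2, stride))  # 12
```

Note that the stride may include padding bytes at the end of each row, which is why the loop cannot simply assume a row is `width * 3` bytes long.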

**Smoothing**

Given what I've told you about the mechanics of this filter, it is obvious how we create a smoothing effect: we ascribe a value to every position in the matrix, so that the weight of each pixel is spread over the surrounding area. The code looks like this:

```csharp
public static bool Smooth(Bitmap b, int nWeight /* default to 1 */)
{
    ConvMatrix m = new ConvMatrix();
    m.SetAll(1);
    m.Pixel = nWeight;
    m.Factor = nWeight + 8;
    return BitmapFilter.Conv3x3(b, m);
}
```
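A quick way to convince yourself the factor is right: the kernel holds eight 1s plus the centre weight, so dividing by nWeight + 8 leaves a flat area unchanged. Sketched in Python (the `convolve_pixel` helper is an illustration of the same arithmetic, not part of the article's code):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

def smooth_kernel(weight):
    """Eight 1s around the centre weight, as in the Smooth filter."""
    return [[1, 1, 1], [1, weight, 1], [1, 1, 1]]

flat = [[200] * 3 for _ in range(3)]
# A uniform area is unchanged: (8 + weight) * 200 / (weight + 8) == 200
print(convolve_pixel(flat, smooth_kernel(1), factor=1 + 8))    # 200
print(convolve_pixel(flat, smooth_kernel(10), factor=10 + 8))  # 200
```

Only areas where neighbouring pixels differ are altered, which is exactly the smoothing we want.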

As you can see, it's simple to write filters within the context of our framework. Most of these filters have at least one parameter; unfortunately C# does not have default parameter values, so I have put the defaults in a comment for you. The net result of applying this filter several times is as follows:

**Gaussian Blur**

A Gaussian blur softens an image by weighting the surrounding pixels according to an approximation of the Gaussian (bell curve) distribution, blending sharp color transitions into intermediary colors. The filter looks like this:

```
1  2  1
2  4  2
1  2  1
```

Factor: 16, Offset: 0

The middle value is the one you can alter in the filter provided. You can see that the default values make for a circular effect, with pixels given less weight the further they are from the centre. In fact, this sort of smoothing generates an image not unlike one seen through an out of focus lens.
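As an aside (an observation, not something the article's code relies on): the 1-2-1 kernel is the outer product of the binomial row [1, 2, 1] with itself, which is why the weights fall away from the centre in every direction and why the factor is 16. A quick Python check:

```python
row = [1, 2, 1]
# The Gaussian Blur kernel is the outer product of [1, 2, 1] with itself
kernel = [[a * b for b in row] for a in row]
print(kernel)                 # [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
print(sum(map(sum, kernel)))  # 16 - hence the factor of 16
```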

**Sharpen**

On the other end of the scale, a sharpen filter looks like this:

```
 0  -2   0
-2  11  -2
 0  -2   0
```

Factor: 3, Offset: 0

If you compare this to the Gaussian blur you'll note it's almost an exact opposite. It sharpens an image by enhancing the differences between pixels: the greater the difference between the pixels given a negative weight and the pixel being modified, the greater the change in the main pixel's value. The degree of sharpening can be adjusted by changing the centre weight. To show the effect better I have started with a blurred picture for this example.
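The same pixel arithmetic shows why this works: the weights sum to 3, so a flat region divided by the factor of 3 is untouched, while a pixel that differs from its neighbours is pushed further away from them. A Python illustration (the `convolve_pixel` helper is my own sketch of the arithmetic, not the article's code):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

sharpen = [[0, -2, 0], [-2, 11, -2], [0, -2, 0]]  # weights sum to 3

flat = [[100] * 3 for _ in range(3)]
print(convolve_pixel(flat, sharpen, 3))  # 100 - flat areas are untouched

# A vertical edge: darker column on the left
edge = [[50, 100, 100] for _ in range(3)]
print(convolve_pixel(edge, sharpen, 3))  # 133 - the bright side gets brighter
```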

**Mean Removal**

The Mean Removal filter is also a sharpen filter, it looks like this:

```
-1  -1  -1
-1   9  -1
-1  -1  -1
```

Factor: 1, Offset: 0

Unlike the previous filter, which only worked in the horizontal and vertical directions, this one spreads its influence diagonally as well, with the following result on the same source image. Once again, the central value is the one to change in order to alter the degree of the effect.
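You can see the diagonal sensitivity numerically. Take a neighbourhood that is flat except for one darker corner pixel: the sharpen kernel, with zero weights on the corners, ignores it, while mean removal reacts to it. A small Python illustration (using my own `convolve_pixel` sketch of the arithmetic):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

sharpen = [[0, -2, 0], [-2, 11, -2], [0, -2, 0]]          # factor 3
mean_removal = [[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]]  # factor 1

# Flat except for a darker top-left (diagonal) neighbour
corner = [[50, 100, 100], [100, 100, 100], [100, 100, 100]]
print(convolve_pixel(corner, sharpen, 3))       # 100 - sharpen ignores corners
print(convolve_pixel(corner, mean_removal, 1))  # 150 - mean removal reacts
```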

**Embossing**

Probably the most spectacular effect you can achieve with a convolution filter is embossing. Embossing is really just an edge detection filter; I'll cover another simple edge detection filter after this, and you'll notice it's quite similar. Edge detection generally works by offsetting a positive and a negative value across an axis, so that the greater the difference between the two pixels, the higher the value returned. With an emboss filter, because the filter values add up to 0 instead of 1, we use an offset of 127 to brighten the image; otherwise much of it would clamp to black.
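The role of the 127 offset is easy to verify with the pixel arithmetic: on a flat region a kernel whose weights sum to zero produces zero, so without the offset the result would clamp to black; with it, flat areas land on mid grey and only edges stand out. A Python sketch using the Laplacian emboss kernel this article implements (the `convolve_pixel` helper is my own illustration):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

emboss = [[-1, 0, -1], [0, 4, 0], [-1, 0, -1]]  # weights sum to 0

flat = [[100] * 3 for _ in range(3)]
print(convolve_pixel(flat, emboss, 1))              # 0   - clamps to black
print(convolve_pixel(flat, emboss, 1, offset=127))  # 127 - mid grey instead

edge = [[50, 100, 100] for _ in range(3)]
print(convolve_pixel(edge, emboss, 1, offset=127))  # 227 - the edge stands out
```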

The filter I have implemented looks like this:

Emboss Laplacian:

```
-1   0  -1
 0   4   0
-1   0  -1
```

Factor: 1, Offset: 127

and it looks like this:

As you might have noticed, this emboss works in both diagonal directions. I've also included a custom dialog where you can enter your own filters, you might like to try some of these for embossing:

Horz/Vertical:

```
 0  -1   0
-1   4  -1
 0  -1   0
```

Factor: 1, Offset: 127

All Directions:

```
-1  -1  -1
-1   8  -1
-1  -1  -1
```

Factor: 1, Offset: 127

Lossy:

```
 1  -2   1
-2   4  -2
-2   1  -2
```

Factor: 1, Offset: 127

Horizontal Only:

```
 0   0   0
-1   2  -1
 0   0   0
```

Factor: 1, Offset: 127

Vertical Only:

```
 0  -1   0
 0   0   0
 0   1   0
```

Factor: 1, Offset: 127

The horizontal-only and vertical-only filters differ for no other reason than to show two variations. You can also rotate these filters, by rotating the values around the central point. You'll notice, for example, that the filter I have used is the horz/vertical filter rotated by one position (45 degrees).
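Rotating "by one position" means walking the eight outer values one step around the centre. A Python sketch (the helper is hypothetical, just to make the idea concrete) showing that one such step turns the horz/vertical filter into the emboss Laplacian used above:

```python
def rotate_kernel(k):
    """Rotate the outer ring of a 3x3 kernel one step (45 degrees) clockwise."""
    return [[k[1][0], k[0][0], k[0][1]],
            [k[2][0], k[1][1], k[0][2]],
            [k[2][1], k[2][2], k[1][2]]]

horz_vertical = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
print(rotate_kernel(horz_vertical))
# [[-1, 0, -1], [0, 4, 0], [-1, 0, -1]] - the emboss Laplacian
```

Eight such rotations bring a kernel back to where it started.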

**Let's not get carried away**

Although this is kinda cool, you will notice if you run Photoshop that it offers a lot more functionality than the emboss I've shown you here. Photoshop creates its emboss using a more specifically written filter, and only part of that functionality can be simulated using convolution filters. I have spent some time writing a more flexible emboss filter; once we've covered bilinear filtering and the like, I may write an article on a more complete emboss filter down the track.

**Edge Detection**

Finally, just a simple edge detection filter, as a foretaste of the next article, which will explore a number of ways to detect edges. The filter looks like this:

```
 1   1   1
 0   0   0
-1  -1  -1
```

Factor: 1, Offset: 127

Like all edge detection filters, this one is not concerned with the value of the pixel being examined, but rather with the differences between the pixels surrounding it. As it stands, it will detect a horizontal edge, and, like the embossing filters, it can be rotated. As I said before, the embossing filters are essentially doing edge detection; this one just heightens the effect.
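The direction-sensitivity is easy to check numerically: the kernel subtracts the row below from the row above, so it fires on horizontal edges, stays at the 127 mid grey on flat regions, and ignores vertical edges entirely. A Python illustration (again using my own `convolve_pixel` sketch of the arithmetic):

```python
def convolve_pixel(neighborhood, kernel, factor, offset=0):
    total = sum(n * k for nrow, krow in zip(neighborhood, kernel)
                for n, k in zip(nrow, krow))
    return max(0, min(255, total // factor + offset))

edge_detect = [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]

flat = [[100] * 3 for _ in range(3)]
horizontal_edge = [[200] * 3, [100] * 3, [50] * 3]  # bright above, dark below
vertical_edge = [[50, 100, 200] for _ in range(3)]  # varies left to right

print(convolve_pixel(flat, edge_detect, 1, 127))             # 127 - no edge
print(convolve_pixel(horizontal_edge, edge_detect, 1, 127))  # 255 - strong response
print(convolve_pixel(vertical_edge, edge_detect, 1, 127))    # 127 - ignored
```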

**What's in store**

The next article will cover a variety of edge detection methods. I'd also encourage you to search the web for convolution filters. The comp.graphics.algorithms newsgroup tends to lean towards 3D graphics, but if you search a newsgroup archive such as Google Groups for 'convolution' you'll find plenty more ideas to try in the custom dialog.