Blurring and blending can change an ordinary image into an unusual one. Frequently the results are surprising and difficult to predict. I wanted to see how far I could get with a blur and blend utility using C# and WPF on a 32-bit machine while keeping the code as simple as possible. As performance is always a concern in image processing, the code is instrumented, allowing the operator to see how well it is doing. If you are interested in image processing you may enjoy this utility. In any case you may be interested in how I sidestepped some problems, or in the definitions of common blend modes. No attempt was made to categorize the blend modes or show precisely what they do. The UI is based on whimsy, not design principles. The MVVM pattern is followed, but not strictly. Hopefully any shortcomings are made up for with entertainment value.
The utility blends an image with a blurred copy of itself. The operator may specify the amount of blur in pixels (0-50) as well as the RGB channels to blur. 29 common blend modes are supported; Normal is the no-op blend. The utility is housed in a resizable window. As the window resizes, the output image is also resized. Zoom indicates the amount of compression or stretching of the output image in percent; 100 means the image is displayed at its actual size. Performance information is displayed in a textbox. The blur amount, the memory size of the C# garbage-collected heaps, and the time in milliseconds used by 5 critical methods are shown. This information may be scrolled to show any operation.
Image data consumes a huge amount of memory. A 10 MB image needs 40 MB of memory to store the alpha, red, green, and blue channels. The utility uses several copies of the image data to perform its work. These copies are allocated on the large object heap (LOH). During testing on a 32-bit machine I randomly encountered out-of-memory exceptions. Sometimes I could run all day without reproducing the problem; other times the utility was unusable from the start. Recompiling to 64 bits removed the exception but not the poor memory usage. GC memory was observed to randomly double. I never determined the cause of the problem. To get around it I used a fixed-size LohRgbArray class to hold all image data. Using its default size, any image up to 10 MB can be stored. Instances of LohRgbArray are allocated during startup and are never freed. The utility reuses them for any image that is loaded, regardless of its size. Unfortunately this constrains the maximum size of an image that can be loaded. However, the utility no longer runs out of memory on a 32-bit machine.
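The allocate-once, reuse-forever idea can be sketched as follows. This is not the actual LohRgbArray (which stores four channels and has its own API); the class name, capacity, and methods here are illustrative assumptions.

```csharp
// Illustrative sketch of a fixed-size, reusable pixel buffer.
// The real LohRgbArray differs; the names and API here are assumptions.
public class FixedPixelBuffer
{
    // Arrays over about 85,000 bytes land on the large object heap (LOH),
    // so allocating once at startup avoids repeated LOH churn.
    private readonly byte[] data;

    public int Capacity { get { return data.Length; } }
    public int Length { get; private set; }

    public FixedPixelBuffer(int capacity)
    {
        data = new byte[capacity];   // allocated once, never freed
    }

    // Reuse the same storage for any image that fits.
    public bool TryLoad(byte[] pixels)
    {
        if (pixels.Length > data.Length)
            return false;            // image exceeds the fixed capacity
        System.Array.Copy(pixels, data, pixels.Length);
        Length = pixels.Length;
        return true;
    }
}
```

The trade-off is exactly the one described above: any image larger than the preallocated capacity is simply rejected.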
Performing interactive image processing on a decent-sized image requires consuming a great deal of CPU in a short period of time. When a WPF slider is dragged, the image should update at least 10 times a second to avoid frustrating the operator. It's difficult to achieve this performance when complex imaging operations are performed, especially with the blur method provided in the utility. It's no slouch; you can try blurring an image with Paint.NET as a comparison. However, its CPU usage goes up as the square of the image size, and it's not pleasant to use on images above a certain size. Programs like Photoshop solve this problem by using OpenCL to run image processing code on the GPU. That is complex, difficult to do on every GPU, and takes the fun out of experimenting with algorithms. The approach taken here uses multiple CPU cores to perform the processing fast enough for images that fit on the screen. Your mileage will vary depending on your CPU. Tweaking the C# has been avoided: the optimizations I tried sped up one CPU but slowed down another.
To support running image processing code on multiple cores, the image is divided into groups of adjacent rows and columns. The number of groups equals the number of cores. An array of PicSeg objects stores the starting and ending row or column for each group. The PicSeg class is shown below...
public class PicSeg
{
    public int startRow;
    public int endRow;
    public int startColumn;
    public int endColumn;
}
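The article does not show how the PicSeg array is filled in, so here is one plausible way to split the rows of an image of a given height into numTasks contiguous groups; treat the method name and the exact distribution of the remainder as assumptions.

```csharp
// PicSeg repeated here so the sketch compiles stand-alone.
public class PicSeg
{
    public int startRow;
    public int endRow;
    public int startColumn;
    public int endColumn;
}

public static class Partition
{
    // Hypothetical row partitioner: divides rows [0, height) into
    // numTasks contiguous groups of nearly equal size.
    public static PicSeg[] SplitRows(int height, int numTasks)
    {
        PicSeg[] ps = new PicSeg[numTasks];
        int baseSize = height / numTasks;
        int remainder = height % numTasks;   // first groups get one extra row
        int row = 0;
        for (int i = 0; i < numTasks; ++i)
        {
            int size = baseSize + (i < remainder ? 1 : 0);
            ps[i] = new PicSeg { startRow = row, endRow = row + size - 1 };
            row += size;
        }
        return ps;
    }
}
```

Columns would be split the same way into startColumn/endColumn ranges.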
Once the image has been divided into strips of rows and columns, the C# Task class may be used to distribute the computation across all cores. For example, a Gaussian blur of both the rows and the columns across numTasks cores is coded as follows...
Task[] tasks = new Task[numTasks];
for (int i = 0; i < numTasks; ++i)
{
    PicSeg seg = ps[i];   // local copy for the closure
    tasks[i] = Task.Factory.StartNew(() =>
        BlurRows(seg.startRow, seg.endRow));
}
Task.WaitAll(tasks);

// The column pass mirrors the row pass.
for (int i = 0; i < numTasks; ++i)
{
    PicSeg seg = ps[i];
    tasks[i] = Task.Factory.StartNew(() =>
        BlurColumns(seg.startColumn, seg.endColumn));
}
Task.WaitAll(tasks);
Similar logic is used for all image processing operations, including computing the blend and setting the opacity. Both of these operations take little time compared to the blur, but using all cores shaves time off each one; at 10 updates a second, a few milliseconds here and there add up. Only the operations that are required are executed. For example, changing the opacity recomputes only the opacity, not the blur or the blend. The instrumentation is active only when all operations are performed, which you can force by changing Sigma: when the blur changes, everything must be recomputed. Note that Task.WaitAll blocks WPF. This hurts performance a little, but the simplicity is too appealing to pass up.
If you call up the Task Manager and look at performance while auto-repeating the Sigma slider, you will see that only 80% of the CPU is being used. This is disappointing, as the image processing is distributed to all cores; ideally 100% of the CPU should be consumed. The utility opens a console window for displaying the critical "Between Blurs" performance indicator. This is the time in ms between successive blurs. As all large CPU-consuming methods are carried out on separate threads, Between Blurs measures the time it takes for WPF to go through the message loop, plus anything else that is happening. To consume 100% CPU, Between Blurs must go to 0. Unfortunately, Between Blurs slowly increases as blurs are performed. The increase is due to displaying performance data in the scrollable textbox: the data is displayed as a string whose length grows each time a blur is done. To stop Between Blurs from growing, go to the Msg property in MyModel.cs and comment out the code in the setter.
The most well-known blend modes and a few uncommon ones are supported. Keep in mind that there is no official definition of blend modes; when in doubt, compare with Photoshop. The utility can be used as a framework to develop new blend modes. You are free to experiment with whatever blend mode you can dream up. Keep in mind that for a blend mode to become accepted, it should be smooth. If it has sudden jumps its use will be resisted. Some of the well-known blend modes are not smooth, but it's too late to correct them.
Choosing the right blend mode to apply depends on the image. Blend modes that appear unusable for one image are the perfect choice for another. If a mode appears too extreme, back off on the opacity. Some blend modes are not useful until blur is applied. For example taking the difference of an image with itself gives black. Start applying blur and you may be surprised.
The default photo for the utility is Liz.jpg. Below is Liz before and after Divide mode was applied. Lowering the opacity would bring back more skin tone.
Some simple shapes are used below to illustrate combinations of the blur channels in difference mode.
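As an illustration, here is a minimal per-channel sketch of two of the modes discussed above, Difference and Divide, plus an opacity mix. These are the commonly quoted formulas; the exact clamping and scaling in the utility may differ, so treat the details as assumptions.

```csharp
// Hypothetical per-channel blend helpers; channel values are 0-255.
public static class Blend
{
    // Difference: |base - blend|. An image differenced with itself is
    // black, which is why this mode only gets interesting once some
    // blur separates the two layers.
    public static int Difference(int b, int s)
    {
        return System.Math.Abs(b - s);
    }

    // Divide: base / blend, scaled back to the 0-255 range and clamped.
    public static int Divide(int b, int s)
    {
        if (s == 0) return 255;
        return System.Math.Min(255, b * 255 / s);
    }

    // Opacity mixes the blended result back toward the original,
    // which is how an extreme mode can be toned down.
    public static int WithOpacity(int original, int blended, double opacity)
    {
        return (int)(original + opacity * (blended - original));
    }
}
```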
One could easily make do with fewer blend modes. Although some may seem identical, an examination of the code will show all are unique. A high-quality blur algorithm is not necessary for the utility to be effective. The blur algorithm used here is exacting, but much too slow for large images; it could be replaced with a faster method. Breaking up the work over multiple cores speeds up the blur, but not enough to process large digital images comfortably. A possible workaround might be to resize a large image to fit on the screen. Work on the smaller resized image could proceed without annoying delays. If and when the image is saved, the operation could be performed again on the larger original without the need to be real time. My development environment is a 4-core Intel at 2.67 GHz. One development tool I was unable to locate was a good free memory profiler.
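The resize-first workaround could be prototyped with a simple nearest-neighbor downscale of the raw ARGB pixels; this sketch is illustrative only (in WPF, TransformedBitmap would normally do this job), and the method name and layout assumptions are mine.

```csharp
public static class Resizer
{
    // Illustrative nearest-neighbor downscale of a width*height ARGB
    // pixel array (4 bytes per pixel). Interactive edits could run on
    // a screen-sized copy, with the full-size original processed only
    // when the image is saved.
    public static byte[] Downscale(byte[] src, int width, int height,
                                   int newWidth, int newHeight)
    {
        byte[] dst = new byte[newWidth * newHeight * 4];
        for (int y = 0; y < newHeight; ++y)
        {
            int srcY = y * height / newHeight;   // nearest source row
            for (int x = 0; x < newWidth; ++x)
            {
                int srcX = x * width / newWidth; // nearest source column
                int s = (srcY * width + srcX) * 4;
                int d = (y * newWidth + x) * 4;
                for (int c = 0; c < 4; ++c)      // copy A, R, G, B
                    dst[d + c] = src[s + c];
            }
        }
        return dst;
    }
}
```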
- Initial release - June 2013.