Intrigued by the thought of blurring an image to sharpen it? The process is called Unsharp Masking. It is the most widely used image sharpening method. However, how it works is not always understood. This application breaks down Unsharp Masking into a series of displayed steps that may be adjusted and understood by the user. Several reusable methods are provided for calculating exact Gaussian blurs up to 100 pixels. The application is slanted towards large images and the sizeable blurs they require. It is an expensive consumer of CPU. No attempt has been made to speed up the process with fast blurring approximations. If you are looking for real time speed or sharpening small images, this is not the article for you. If you are interested in understanding Unsharp Masking and want examples of WPF techniques and problems, then read on.
- Provide a step by step tutorial illustrating Unsharp Masking
- Provide a nontrivial example of image manipulation in HSL colorspace
- Provide examples of WPF features such as Control Templates, background threading, multiple windows and value converters
- Document reusable classes for calculating Gaussian blurs
- Provide image sharpening quality equivalent to Photoshop
Unsharp Masking Background
Unsharp Masking works by detecting the edges in an image and locally increasing the contrast around each edge. For example, divide a rectangle in half with a straight line, forming two adjacent rectangles. Create a single image by filling each rectangle with a different color. The only edge in the image is where the two colors meet. Typically, two arbitrary colors have different Lightness values. In other words, one side of the edge is light and the other side is dark. Unsharp Masking identifies a small local area of the image which straddles the edge. This area contains a portion from both the light and dark sides. If the light portion is made lighter and the dark portion is made darker, the contrast around the edge is increased. This gives the appearance of a sharper image. Think of Unsharp Masking as a trick to bring out details in an image by using small, selective increases in contrast. Unsharp Masking does not change the focus of an image.
Edge Detection Using Blurs
Surprisingly, blurs provide a simple mechanism to identify local areas around image edges. Think of each point in a blurred image as the result of some kind of averaging performed over a small area of the unblurred image. Image edges are identified by subtracting the blurred image from the unblurred image. At most image points, this produces a value near zero as the average (blurred) value will not be greatly different from the value of the unblurred image. However, where the unblurred image changes values rapidly, such as at an edge, the subtraction produces a much greater value. Drop out a little noise from the subtraction and you have the edges identified.
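The idea is easy to see in a few lines. The sketch below is illustrative Python, not the application's C# code: a one-dimensional "image" with a single edge is blurred with a simple 3-tap average, and subtracting the blurred signal from the original leaves values near zero everywhere except at the edge.

```python
def box_blur(values):
    """Average each point with its neighbors; border values are extended."""
    n = len(values)
    out = []
    for i in range(n):
        left = values[max(i - 1, 0)]
        right = values[min(i + 1, n - 1)]
        out.append((left + values[i] + right) / 3.0)
    return out

signal = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]   # dark region | light region
blurred = box_blur(signal)
diff = [s - b for s, b in zip(signal, blurred)]
# diff is ~0 in the flat regions and largest at indices 2 and 3,
# exactly where the edge sits
```

A real implementation uses a Gaussian rather than a box average, but the principle of locating edges is the same.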
This application blurs only the Lightness channel of an image, that is, its grayscale values. Hue and Saturation, the color part of the image, are not blurred. This limits the sharpening process to changing grayscale values, so potential color shifts introduced by sharpening are avoided, and the blur completes more quickly because only a single channel is processed. A Gaussian blur is utilized. Computing Gaussians is CPU intensive. Other types of blurs, some computed much more quickly, may be used. Each type of blur produces different results which may or may not please the user. The classical approach is to use a Gaussian; it is the standard against which any blurring method should be compared.
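Extracting just the lightness values can be sketched with Python's standard colorsys module (illustrative only; the application uses its own HslSpace class, and colorsys works on HLS tuples rather than interleaved arrays):

```python
import colorsys

def lightness_channel(rgb_pixels):
    """rgb_pixels: list of (r, g, b) floats in 0..1 -> list of lightness values.
    rgb_to_hls returns (hue, lightness, saturation); only index 1 is kept."""
    return [colorsys.rgb_to_hls(r, g, b)[1] for r, g, b in rgb_pixels]

pixels = [(1.0, 0.0, 0.0), (0.5, 0.5, 0.5), (0.0, 0.0, 0.0)]
l_values = lightness_channel(pixels)
# pure red -> 0.5, mid gray -> 0.5, black -> 0.0
```

Note that a saturated color and a gray can share the same lightness; blurring lightness alone leaves the hue and saturation planes untouched.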
Sharpening the Image
After the difference between the unblurred and blurred image is completed, it is multiplied by a user specified factor controlling the amount of sharpening. This adjusted difference is then added back into the unblurred image to sharpen it. Sharpening is a manual process. The user is presented with controls to adjust what is sharpened and the degree of sharpening desired. Every picture is different. An automated process gives poor results.
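The whole sharpening step reduces to one formula: sharpened = original + amount × (original − blurred), clamped to the valid range. A minimal Python sketch (the function name and the flat lists are illustrative, not the application's code):

```python
def unsharp(original, blurred, amount):
    """Scale the (original - blurred) difference by 'amount', add it back,
    and clamp each result to the valid 0..1 range."""
    return [min(1.0, max(0.0, o + amount * (o - b)))
            for o, b in zip(original, blurred)]

original = [0.2, 0.2, 0.8, 0.8]
blurred  = [0.2, 0.35, 0.65, 0.8]
sharpened = unsharp(original, blurred, 1.0)
# the dark side of the edge gets darker (0.2 -> 0.05) and the light
# side gets lighter (0.8 -> 0.95), increasing the local contrast
```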
Using the Application
Start up the application to display the included image. The image shows a programmer whose job was outsourced to Thailand. The user will notice the included image needs sharpening. If you would rather work on another image, use the black File Open button to load your own large image. The black File Save button may be used at any time to save a jpeg image of the main window to disk. The remaining 6 glass buttons are used in the order they are displayed during the sharpening process. Note the TextBlock above the buttons. It displays status, warning and informational messages. Let's demonstrate the glass buttons. First, enlarge the main window to full screen. Use the settings shown in the pictures below.
The RGB button displays the original unsharpened image. It is used to make comparisons during the sharpening process. It also logs informational messages on image pixel dimensions and the display zoom factor.
Clicking the Lightness button displays the grayscale Lightness channel of the original unsharpened image. Again, it may be used to make comparisons during the sharpening process. On image load, the original image is converted to HSL colorspace. Only the L or Lightness channel from HSL is used to sharpen the image. This means the sharpening is done in grayscale, not color.
The Blur button brings up the Blur Port window. The Blur port allows the user to select the amount of blurring and a blur algorithm. When the Blur port opens, it displays either the original RGB image or the results and settings of the last completed blur. All Port windows allow the user to scroll and zoom a small preview image and resize the window. A small port window allows the blur to be previewed quickly. For now, do not resize or zoom the blur port. To scroll, left click and drag. To toggle to the underlying image, right click.
The Blur slider controls the amount of blur. It represents the standard deviation of the Gaussian blur in pixels. The user may click on the slider, or enter a text value and type tab, to change the blur value. The amount of blur to apply depends on the size of the features to be sharpened. Fine features, say hair in a portrait, may be sharpened with a small blur of a pixel or two, while an entire portrait of a face may sharpen better with two to three times the amount of blur used for hair alone. Ridiculously large blurs are useful for artistic and lighting effects. To get started, divide the largest dimension of the image by 1000. In general, every picture requires a different amount of blur. You can always reclick the Blur button to modify settings.
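The starting-point rule of thumb above is trivial arithmetic, shown here for concreteness (the function name is invented for illustration):

```python
def starting_blur(width, height):
    """The article's rule of thumb: start with a blur (standard deviation,
    in pixels) of the largest image dimension divided by 1000."""
    return max(width, height) / 1000.0

# e.g. a 4000 x 3000 pixel image suggests a starting blur of 4 pixels
```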
There are two choices for the blur algorithm, standard convolution and iterative convolution. The standard algorithm is the classical approach. This implementation uses pixels out to 3 times the standard deviation to compute each point. It is exact, but extremely CPU intensive, and its cost grows nonlinearly as the blur value increases. Choose the standard algorithm for blur values up to 10 pixels. Otherwise, choose iterative convolution for a very accurate blur which uses a constant amount of CPU as the blur increases. References are provided at the end of the article for the algorithms.
When the user is satisfied with the preview in the blur port, the OK button is clicked to close the port and blur the entire image in the main window. As a blur takes a long time to compute, it is done in the background on another thread. A progress bar is updated to show the user the blur is proceeding. The remaining port windows, Diff and Add, update quickly in the main thread. These port windows may be maximized to give the user a larger display without bogging down the UI.
The Diff button opens the Diff port to display and adjust the difference between the unblurred and blurred image. The difference is typically small and difficult to observe. The following is done to make the difference readily apparent...
- Stretch Difference Values - The difference is stretched across the entire range of grayscale values. Without this, the user would not be able to easily distinguish unique difference values from one another.
- Absolute Values - Difference values may be positive (lighten) and negative (darken). Negative values are made positive for display. Meaning all differences are displayed in grayscale. Two image regions, lightened or darkened by the same magnitude, are displayed as an identical gray value.
- Adjustable Display Brightness - Difference values are typically hard to discern. The Diff Port may appear to be completely black. A Display Brightness slider is provided to facilitate viewing of difference values (edges). Adjusting the brightness does not affect the sharpening process. High values of brightness cause the display to threshold. It is an interesting effect. If you like it, you may save the effect using the File Save button in the main window. The Diff Port preview will appear to be noisy. However the noise is very small and will be invisible in the sharpened image. The noise may be eliminated by setting the ThreshHold slider to a value of 2.
The Edge Enhancement options are useful to eliminate ringing in light or dark regions. For example, sharpening a skyline image of buildings frequently introduces small areas of sky that have been lightened too much around the edges of buildings. Rather than reduce the sharpening and hence the ringing, one may elect to limit sharpening to darkening edges. This may be a satisfactory method of sharpening a building while leaving the sky around the edges intact.
The ThreshHold slider allows the operator to eliminate sharpening of edges until the difference exceeds the threshold value. It is useful for dropping out Gaussian noise. It may be used to eliminate unwanted sharpening of features like light freckling in a portrait.
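The Diff-port adjustments described above (threshold, edge-direction limiting, absolute values, and display brightness) amount to a small per-pixel pipeline. The sketch below is illustrative Python; the function name, `mode` strings, and parameters are invented for this article and are not the application's actual controls or classes.

```python
def diff_display(diff, threshold=0.0, mode="both", brightness=1.0):
    """diff: (unblurred - blurred) values; positive lightens, negative darkens."""
    shown = []
    for d in diff:
        if abs(d) <= threshold:           # drop small differences (noise)
            d = 0.0
        if mode == "darken" and d > 0:    # keep only darkening edges
            d = 0.0
        if mode == "lighten" and d < 0:   # keep only lightening edges
            d = 0.0
        # display as a brightened grayscale magnitude, clamped at white
        shown.append(min(1.0, abs(d) * brightness))
    return shown

# a tiny difference drops below the threshold; larger edges survive
shown = diff_display([0.001, -0.3, 0.2], threshold=0.002)
```

Note that, as the article says, the brightness stretch only affects the display; the sharpening step still uses the signed, unstretched differences.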
Clicking the Add button brings up the Add Port window. The window contains a slider controlling the amount of sharpening to be applied. Be careful of over sharpening the image. Although the sharpening is implemented by modifying the Lightness channel to avoid color shifts, any color may appear as black or white if the Lightness value is shifted by a large amount. If you see any obvious Lightness shifts, you have sharpened too much. Note that modifying the resolution or size in pixels of an image should be done before it is sharpened.
Clicking the Sharpen button converts the grayscale sharpened image back to RGB. Comparisons against the original unsharpened image may be made by clicking the RGB and Sharpen buttons. Sharpening replaces the original L channel with the L channel created by the Add button. The HSL image is then converted back to RGB and displayed.
Using the Code
I find the classes I most frequently reuse are low level ones. The lowest level classes in this application are those used to compute a blur using either standard convolution or iterative convolution. Both methods blur float arrays containing values between 0.0 and 1.0 inclusive. Assuming one starts from an image in an RGB byte array, the application does the following to produce a float array for the blur to work on.
byte[] rgbValues;
int numColumns;

float[] hslValues = HslSpace.RgbToHsl(rgbValues);
float[] lValues = new float[hslValues.Length / 4];
int lIndex = 0;
for (int hslIndex = 0; hslIndex < hslValues.Length; hslIndex += 4)
    lValues[lIndex++] = hslValues[hslIndex + (int)HSL.Lightness];
The blurring methods take time to execute. Both the Standard and Iterative blur methods support a callback that is invoked each time the blur completes another 2.5% of its work. The application uses these callbacks to update a progress bar. To simplify this discussion, the callback is set to null, which means no callback will be invoked by the blur method as it performs its work.
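The callback pattern itself is simple and worth a sketch. This is illustrative Python, not the application's C# blur methods; the function and its internals are invented to show the reporting cadence only.

```python
def long_running_blur(values, progress_callback=None):
    """Process 'values', reporting progress roughly every 2.5% of the work.
    When progress_callback is None, no reporting is done at all."""
    n = len(values)
    step = max(1, n // 40)               # 40 reports == one every 2.5%
    out = []
    for i, v in enumerate(values):
        out.append(v)                    # stand-in for the real per-pixel work
        if progress_callback is not None and (i + 1) % step == 0:
            progress_callback((i + 1) / n)
    return out

reports = []
long_running_blur(list(range(80)), reports.append)
# reports now holds 40 fractions: 0.025, 0.05, ..., 1.0
```

In the WPF application the callback marshals the fraction to the UI thread to move the progress bar.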
The code for the blur methods is in module StandardBlur.cs and its iterative counterpart. An examination of the code will reveal some complexity. The complexity arises because image data abruptly ends at the 4 sides of the image. Both methods assume the value of a pixel on a side of an image continues indefinitely. Although this is making up data, it gives a reasonable blur at the sides of an image.
The standard blur is not an intelligent routine. Rather than working directly off a standard deviation, it is passed a Gaussian convolution kernel. Any decent book on graphics will have a discussion on convolution kernels. But all one has to know is that the kernel is a double[] array whose length and values depend on the standard deviation of the blur. The kernel for a specified standard deviation is calculated by the GaussKernel class in module gauss.cs. The calculation is accurate out to 3 times the standard deviation. The GaussKernel class is used as follows...

GaussKernel gk = new GaussKernel();
double sigma = 5.0;
gk.Sigma = sigma;
double[] kernel = gk.Kernel;
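For readers who want to see what such a kernel actually contains, here is the standard construction in illustrative Python: Gaussian weights sampled out to 3 standard deviations on each side of the center and normalized to sum to 1. The exact formula GaussKernel uses lives in gauss.cs; this sketch is only the textbook version.

```python
import math

def gauss_kernel(sigma):
    """Normalized Gaussian kernel sampled out to 3 standard deviations."""
    radius = int(math.ceil(3.0 * sigma))
    weights = [math.exp(-(x * x) / (2.0 * sigma * sigma))
               for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]   # normalize so the weights sum to 1

kernel = gauss_kernel(5.0)
# 31 taps: the center plus 15 taps per side, largest weight in the middle
```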
To recap, an RGB image having numColumns columns was converted to HSL. The L channel from the HSL data was extracted into the lValues array, and a kernel for a 5 pixel blur was calculated using a GaussKernel object. Using this information, a Standard blur is now calculated using a static class in module StandardBlur.cs. Note the progress callback is null...

float[] blurredValues = StandardBlur.Blur(lValues, numColumns, null, kernel);
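A one-dimensional sketch of what such a convolution does, including the border handling described earlier (border pixels are assumed to repeat indefinitely, implemented here as index clamping). This is illustrative Python, not the StandardBlur implementation, which convolves a 2-D image.

```python
def convolve_clamped(values, kernel):
    """Convolve 'values' with 'kernel', clamping out-of-range indices so
    the border values extend indefinitely past the ends."""
    n = len(values)
    radius = len(kernel) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), n - 1)   # clamp to the borders
            acc += w * values[j]
        out.append(acc)
    return out

flat = convolve_clamped([0.5] * 8, [0.25, 0.5, 0.25])
# a constant signal is unchanged, even at the clamped borders
impulse = convolve_clamped([0.0, 0.0, 1.0, 0.0, 0.0], [0.25, 0.5, 0.25])
# a single spike is spread out -> [0.0, 0.25, 0.5, 0.25, 0.0]
```

The nonlinear cost the article mentions is visible here: the inner loop runs once per kernel tap, and the kernel length grows with the standard deviation.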
The Iterative blur is done using van Vliet's algorithm, which he graciously describes on the internet (see references). Unlike the Standard blur, a kernel is not required. Iterative blurs are useful for large standard deviations. For a given image, the algorithm completes in a constant time regardless of the value of the standard deviation. An iterative blur is invoked as follows...

IterativeBlur.Sigma = sigma;
blurredValues = IterativeBlur.Blur(lValues, numColumns, null);
Converting Back to RGB
Up to this point, the L channel of an image has been blurred using either a Standard blur or an Iterative blur. Perhaps the user has performed other data modifications as well. Assuming the modified lightness values are in the array blurredValues and the original image in HSL is in the array hslValues, how does one get the changes in blurredValues back into RGB in order to save the modified image?

for (int i = 0; i < blurredValues.Length; i++)
    hslValues[i * 4 + (int)HSL.Lightness] = blurredValues[i];
rgbValues = HslSpace.HslToRgb(hslValues);
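The same round trip can be sketched per pixel with Python's standard colorsys module (illustrative only, not the article's HslSpace class; the helper name is invented):

```python
import colorsys

def with_lightness(rgb, new_l):
    """Convert an (r, g, b) pixel to HLS, replace its lightness, and
    convert back to RGB; hue and saturation are preserved."""
    h, _, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, new_l, s)

# darken a mid gray: only the lightness component changes
darker = with_lightness((0.5, 0.5, 0.5), 0.25)
```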
A number of WPF problems were encountered while producing the application. A clumsy workaround was employed to address one problem. However, the performance problems could not be fixed.
The port windows display the preview image in a ScrollViewer. The intention was to let the user zoom and scroll the preview in any desired manner. However, the zoom would not work properly with the scroll under all conditions. At the start, when the image is at 100% zoom, everything works. But if the operator scrolls an image at a zoom other than 100% and then changes the zoom again, the image jumps by a factor proportional to the amount of scroll. I could not find any example of a scrollable and zoomable image on the internet. I found several articles on how to go about it, but the advice did not work. I believe there is a WPF bug with zoomable, scrollable images in a ScrollViewer. After wasting much time, I worked around the problem by changing the zoom to 100% whenever the image was scrolled. If anyone has working code, as opposed to advice, I would be very interested in seeing it.
A WriteableBitmap is used for the preview image in the Port Windows. When the image is reduced to 71%, subsequent zoom reductions take a huge amount of time to complete. Experimenting, I found the zoom reductions would speed up by a factor of 3 if modifications to the WriteableBitmap were commented out. Even then, zooming was still too slow. As the slowdown always commenced at 71%, I believe this issue is a WPF performance problem.
The three Port Windows are implemented as separate window controls. WPF friendliness to multiple windows could be improved. This is especially true when working with XAML. I ended up loading and parsing the XAML files for the Port Windows myself. Not the best approach, but workable.
So why not use the WPF blur bitmap effect and take advantage of graphics hardware acceleration? The bitmap effects are designed to modify the display of images rendered on the screen. They work quickly but are not designed for saving images to disk. It can be done, but it is jamming a square peg into a round hole. However, the real problem with the blur bitmap effects, and most fast blurring approximations available, is that you do not know what you are getting. Fast approximations break down at some point. If you do not know what you have, you will not know how far you can push it.
- November 16, 2009 - Initial release