Posted 18 Oct 2007

# Fast Dyadic Image Scaling with Haar Transform


## Introduction

This is a fast dyadic image down sampling class based on the Haar transform. It extends the BaseFWT2D class from my other article, 2D Fast Wavelet Transform Library for Image Processing, for this specific purpose. It uses MMX optimization and is applicable wherever image processing calls for dyadic down sampling: by 2, 4, 8, 16, 32, ... pow(2, N) times. I use this code as a preprocessing step in my face detection process.

## Background

You need to be familiar with the Haar transform.

## Using the Code

I've arranged a console project that allocates an RGB array for a 640x480 image and runs the down sampling several times to gather statistics and report the average time. I used a precision time counter that I downloaded some time ago from The Code Project. On my 2.2GHz TravelMate under licensed Vista, down sampling this image to 80x60 (eight times smaller) takes 5-6 ms.

The classes in the project are:

• vec1D //1D vector wrapper
• vec2D //2D vector wrapper
• BaseFWT2D //abstract base class for 2D FWT
• Haar : public BaseFWT2D //Haar based down sampling
• ImageResize //provides RGB data down sampling

You can learn about vec1D and BaseFWT2D from my 2D Fast Wavelet Transform Library for Image Processing article and about vec2D from my other article 2D Vector Class Wrapper SSE Optimized for Math Operations.

The ImageResize class contains three objects of class Haar for down sampling the red, green and blue channels. First, you need to initialize the ImageResize object with a specific width, height and down sampling ratio:

• void init(unsigned int w, unsigned int h, float zoom = 0.125f);

The zoom is the image down sampling factor: the resulting image is down sampled 1/zoom times, so the default (0.125f) yields an image 8 times smaller. Only dyadic factors are supported: zoom equal to 1/2, 1/4, 1/8, ... 1/pow(2,N).

Then you can proceed with down sampling incoming images with either of the overloaded functions:

• int resize(const unsigned char* pBGR);
• int resize(const unsigned char* pR, const unsigned char* pG, const unsigned char* pB) const;

The first one takes an interleaved stream in which the first byte of each triplet holds the blue channel and the last byte the red. The second takes the three channels in separate buffers.

```cpp
//your bitmap data comes as an interleaved BGR byte stream:
//unsigned char* pBGR = new unsigned char[width * height * 3];

unsigned int width = 640;
unsigned int height = 480;
float zoom = 0.25f;

ImageResize resize;
resize.init(width, height, zoom);

//keep resizing incoming data after initialization
resize.resize(pBGR);
```

To access the down sampled image, the following functions are defined:

• char** getr() const;
• char** getg() const;
• char** getb() const;

Note that they provide 2D char pointers to the data, with values in the signed char range -128 ... 127.

```cpp
//print out the resized red channel, shifted back to the 0 ... 255 range
char** pr = resize.getr();
for (unsigned int y = 0; y < height * zoom; y++) {
    for (unsigned int x = 0; x < width * zoom; x++)
        wprintf(L" %d", pr[y][x] + 128);
    wprintf(L"\n");
}
```

You can also access a down sampled gray version of the RGB bitmap after a resize() call with:

• inline const vec2D* gety() const;

It returns a pointer of vec2D type to it. I've written an rgb2y(int r, int g, int b) function to convert a single RGB triplet to a gray pixel with SSE optimization; however, in the current version of the class I use simple floating point arithmetic and turn on the compiler's SSE optimization. It actually runs slightly faster than my hand-written SSE function (something to look into later).
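For illustration, a plain floating point conversion of that kind might look like the sketch below. The article does not show the body of rgb2y, so the standard BT.601 luma weights used here are my assumption:

```cpp
// Sketch of an rgb2y-style RGB-to-gray conversion (the article's actual rgb2y
// body is not shown; BT.601 luma weights assumed). Plain float arithmetic like
// this is what the compiler's SSE optimization can vectorize on its own.
inline int rgb2y(int r, int g, int b)
{
    return (int)(0.299f * r + 0.587f * g + 0.114f * b + 0.5f);  // round to nearest
}
```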

The Haar extension to BaseFWT2D is pretty simple. I've provided implementations for the virtual functions BaseFWT2D::transrows() and BaseFWT2D::transcols() (I have not implemented BaseFWT2D::synthrows() and BaseFWT2D::synthcols(), since this is a down sampling class, not an up sampling one yet). They are MMX optimized. The math behind the Haar low-pass step is to take two consecutive pixels and calculate their mean: this first halves the image along the horizontal direction, and then the same is done along the vertical. Column wise this is easy, but within a single row you have to separate the even and odd pixels and average them in parallel.

I do it this way:

```cpp
#include <mmintrin.h>   // MMX intrinsics
#include <xmmintrin.h>  // _mm_avg_pu8 (MMX extension introduced with SSE)

unsigned char* sour;    // points to the current image row

__m64 m00FF;
m00FF.m64_u64 = 0x00FF00FF00FF00FF;  // mask selecting the even-indexed bytes

__m64* msour = (__m64*)sour;

//even coeffs: mask out the odd bytes and pack two quadwords into one
__m64 even = _mm_packs_pu16(_mm_and_si64(*msour, m00FF),
                            _mm_and_si64(*(msour + 1), m00FF));
//odd coeffs: shift the odd bytes down into the low positions and pack
__m64 odd = _mm_packs_pu16(_mm_srli_pi16(*msour, 8),
                           _mm_srli_pi16(*(msour + 1), 8));
//averaging the even and odd pixels in parallel gives the low-pass coefficients
__m64 avg = _mm_avg_pu8(even, odd);

msour += 2;
```
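For reference, here is the same idea in plain scalar code (my own sketch for clarity, not the class's MMX path; the function name downsample2x is made up). Each output pixel is the rounded mean of a 2x2 block, which is exactly one horizontal pairwise average followed by one vertical:

```cpp
#include <vector>

// Scalar reference for one dyadic down sampling step (Haar low-pass):
// each output pixel is the rounded mean of the corresponding 2x2 input block.
std::vector<unsigned char> downsample2x(const std::vector<unsigned char>& src,
                                        int w, int h)
{
    std::vector<unsigned char> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = (unsigned char)((sum + 2) / 4);  // rounded mean
        }
    return dst;
}
```

Applying this function N times gives the 1/pow(2,N) dyadic factors the class supports.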

## Points of Interest

The Haar class could be modified with SSE2 integer intrinsics for even faster processing. I hope to implement that later and submit an update; otherwise, if someone is eager to add SSE2 support, please let me know. I bet it could do the same 640x480 to 80x60 down sampling in about 1-2 ms with SSE2.
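If someone does take up the SSE2 port, the row pass might look like the sketch below (my own illustration, not part of the article's source; the function name haar_lowpass_row_sse2 is made up). It mirrors the MMX version but processes 32 input bytes per iteration instead of 16:

```cpp
#include <emmintrin.h>  // SSE2 integer intrinsics

// Hypothetical SSE2 row pass (sketch, not the article's code):
// out[i] = rounded mean of in[2*i] and in[2*i + 1], the Haar low-pass.
void haar_lowpass_row_sse2(const unsigned char* in, unsigned char* out, int n)
{
    const __m128i m00FF = _mm_set1_epi16(0x00FF);  // mask for even-indexed bytes
    int i = 0;
    for (; i + 32 <= n; i += 32) {
        __m128i a = _mm_loadu_si128((const __m128i*)(in + i));
        __m128i b = _mm_loadu_si128((const __m128i*)(in + i + 16));
        // even-indexed pixels packed into 16 bytes
        __m128i even = _mm_packus_epi16(_mm_and_si128(a, m00FF),
                                        _mm_and_si128(b, m00FF));
        // odd-indexed pixels packed into 16 bytes
        __m128i odd = _mm_packus_epi16(_mm_srli_epi16(a, 8),
                                       _mm_srli_epi16(b, 8));
        // per-byte rounded average: (even + odd + 1) >> 1
        _mm_storeu_si128((__m128i*)(out + i / 2), _mm_avg_epu8(even, odd));
    }
    for (; i + 2 <= n; i += 2)  // scalar tail for leftover pixels
        out[i / 2] = (unsigned char)((in[i] + in[i + 1] + 1) >> 1);
}
```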

## History

• 18th October, 2007: Initial post

## About the Author

Engineer, Russian Federation. Highly skilled engineer with 14 years of experience in academia, R&D and commercial product development, supporting the full software life cycle from idea to implementation and further support. During my academic career I succeeded in the MIT Computers in Cardiology 2006 international challenge; as an R&D and software engineer I earned CodeProject MVP status and found algorithmic solutions that quickly resolved tough customer problems and met product requirements on tight deadlines. My key areas of expertise are object-oriented analysis and design (OOAD), OOP, machine learning, natural language processing, face recognition, computer vision and image processing, wavelet analysis, and digital signal processing in cardiology.

## Comments

Re: Face Detection (Chesnokov Yuriy, 27-Oct-07):

At least 15 fps; it does not depend on image size, since the image is downscaled to about an 80x60 picture. The pipeline: a 19x19 face window gives a 361-dimensional vector; motion detection + skin segmentation; a 2-vector SVM or a 361-2-1 ANN as a prefilter; PCA projection 361 -> 40; final ANN classification 40-20-10-1. With that scheme it provides 15-25 fps with SSE optimization. Since I use floats and Viola uses integers, their speed is close to mine. Do they provide integer approximations for the ANN? By the time I post the code you may test it on CMU, or yourself in real time. However, you should retrain the ANN and SVM on the CBCL database, as it provides a lot of face data; I cannot currently download its roughly 200 MB since I only have a GPRS connection. On my collected samples (~1700 faces, ~34000 non-faces), the PCA-projected ANN rates are: train set (893/17154): se 99.55, sp 100.00, pp 100.00, np 99.98, ac 99.98, er 0.000519; validation set (447/8578): se 96.42, sp 99.90, pp 97.95, np 99.81, ac 99.72, er 0.001824; test set (447/8578): se 97.32, sp 99.90, pp 97.97, np 99.86, ac 99.77, er 0.001746.