Inner Product Experiment: CPU, FPU vs. SSE*

20 May 2008, GPL3
This article demonstrates the inner product operation performed on shorts, ints, floats and doubles, with both the CPU/FPU and SSE, for comparison.

Introduction

The inner product (or dot product, scalar product) operation is a major one in the digital signal processing field. It is used everywhere: Fourier transforms (FFT, DCT), wavelet analysis, filtering operations and so on. With the advances of SSE technology you can parallelize this operation, performing multiplication and addition on several numbers at once. However, what precision should you choose for the calculations: integer, float, double? In this article I demonstrate the inner product operation on shorts, ints, floats and doubles, performed with both plain CPU/FPU code and SSE/SSE2/SSE3 optimized versions.

Background (optional)

You need an understanding of the inner product operation and of SSE technology. I like Wikipedia for having answers to every question; have a look at inner product there. In short, you take 2 std::vector arrays (floats, ints, shorts, doubles) of equal length, multiply them element-wise and sum up the entries of the resulting vector, producing one number. For SSE programming there is the nice article Introduction to SSE Programming at CodeProject.

Using the code

Just run the console application and provide as the first argument the length of the arrays for the inner product. It creates 2 vectors of the same length with random entries and computes their inner product, printing the results and processing times for chars, shorts/shorts SSE2, ints, floats/floats SSE/floats SSE3 and doubles/doubles SSE2.

```
>inner.exe 5000000
chars          processing time: 13 ms
-119
shorts         processing time: 17 ms
3241
shorts sse2    processing time: 6 ms
3241
ints           processing time: 16 ms
786540
floats         processing time: 30 ms
1339854
sse2 intrin    processing time: 13 ms
1339854
sse3 assembly  processing time: 12 ms
1339854
doubles        processing time: 22 ms
1107822
doubles sse2   processing time: 25 ms
1107822
```

These are the results for the inner product of 2 vectors of size 5000000. The line after each type shows the rounded result of the operation. I ran it on my AMD Turion 64 2.2 GHz processor. We can see that short precision with SSE2 is the fastest. The novel haddps instruction in SSE3 allows faster processing than SSE2. Double precision on the FPU outperformed FPU floats. However, SSE2 optimization of doubles decreases the speed of computation, whereas for floats we get more than a 2 times speedup with SSE compared to the FPU.

If you perform DSP on image data, use SSE2 with shorts (using integer filters) or floats. Do not optimize doubles with SSE2; that will decrease performance. If you are not going to use SSE optimization at all, use double precision: on the FPU it is faster than floats.

I did not try it on other processors, so if your results differ, let me know. The code performing the inner product operation is presented below.

On floats with FPU:

```
std::vector<float> v1;
std::vector<float> v2;
v1.resize(size);
v2.resize(size);
float* pv1 = &v1[0];
float* pv2 = &v2[0];

for (unsigned int i = 0; i < size; i++) {
    pv1[i] = float(rand() % 64 - 32);
    pv2[i] = float(rand() % 64 - 32);
}

float sum = 0;
for (unsigned int i = 0; i < size; i++)
    sum += pv1[i] * pv2[i];
wprintf(L" %d\n", (int)sum);
```

SSE2 optimized shorts:

```
short sse2_inner_s(const short* p1, const short* p2, unsigned int size)
{
    __m128i* mp1 = (__m128i *)p1;
    __m128i* mp2 = (__m128i *)p2;
    __m128i mres = _mm_set_epi16(0, 0, 0, 0, 0, 0, 0, 0);

    for (unsigned int i = 0; i < size / 8; i++) {
        // multiply 8 pairs of shorts and accumulate in 8 16-bit lanes
        mres = _mm_add_epi16(mres, _mm_mullo_epi16(*mp1, *mp2));
        mp1++;
        mp2++;
    }

    short res[8];
    __m128i* pmres = (__m128i *)res;
    _mm_storeu_si128(pmres, mres);

    return res[0]+res[1]+res[2]+res[3]+res[4]+res[5]+res[6]+res[7];
}
```
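
One caveat with the 16-bit accumulation above is that intermediate sums wrap on overflow. As an alternative sketch (my own, not code from the article), SSE2's _mm_madd_epi16 multiplies eight 16-bit pairs and adds adjacent products into four 32-bit lanes, so the accumulator keeps full precision:

```cpp
#include <emmintrin.h>   // SSE2 intrinsics

int sse2_inner_madd(const short* p1, const short* p2, unsigned int size)
{
    __m128i macc = _mm_setzero_si128();
    unsigned int i = 0;
    for (; i + 8 <= size; i += 8) {
        __m128i a = _mm_loadu_si128((const __m128i*)(p1 + i));
        __m128i b = _mm_loadu_si128((const __m128i*)(p2 + i));
        // eight 16x16 products, pairwise-summed into four 32-bit lanes
        macc = _mm_add_epi32(macc, _mm_madd_epi16(a, b));
    }

    int res[4];
    _mm_storeu_si128((__m128i*)res, macc);
    int sum = res[0] + res[1] + res[2] + res[3];

    for (; i < size; i++)            // scalar tail for size % 8 leftovers
        sum += p1[i] * p2[i];
    return sum;
}
```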

SSE optimized floats:

```
float sse_inner(const float* a, const float* b, unsigned int size)
{
    float fres = 0.0f;
    __declspec(align(16)) float ftmp[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    __m128 mres = _mm_setzero_ps();

    if ((size / 4) != 0) {
        const float* pa = a;
        const float* pb = b;
        for (unsigned int i = 0; i < size / 4; i++) {
            // multiply 4 pairs of floats and accumulate in 4 lanes
            mres = _mm_add_ps(mres, _mm_mul_ps(_mm_loadu_ps(pa), _mm_loadu_ps(pb)));
            pa += 4;
            pb += 4;
        }

        // horizontal sum: mres = a,b,c,d
        __m128 mv1 = _mm_movelh_ps(mres, mres);     //a,b,a,b
        __m128 mv2 = _mm_movehl_ps(mres, mres);     //c,d,c,d
        mres = _mm_add_ps(mv1, mv2);                //a+c,b+d,a+c,b+d

        _mm_store_ps(ftmp, mres);

        fres = ftmp[0] + ftmp[1];                   //a+c+b+d
    }

    if ((size % 4) != 0) {
        for (unsigned int i = size - size % 4; i < size; i++)
            fres += a[i] * b[i];
    }

    return fres;
}
```

SSE3 optimized floats:

```
float sse3_inner(const float* a, const float* b, unsigned int size)
{
    float z = 0.0f, fres = 0.0f;

    if ((size / 4) != 0) {
        const float* pa = a;
        const float* pb = b;
        __asm {
            movss   xmm0, dword ptr[z]      ; zero the accumulator
        }
        for (unsigned int i = 0; i < size / 4; i++) {
            __asm {
                mov     eax, dword ptr[pa]
                mov     ebx, dword ptr[pb]
                movups  xmm1, [eax]
                movups  xmm2, [ebx]
                mulps   xmm1, xmm2
                addps   xmm0, xmm1          ; accumulate 4 partial sums
            }
            pa += 4;
            pb += 4;
        }
        __asm {
            haddps  xmm0, xmm0              ; SSE3 horizontal add
            haddps  xmm0, xmm0              ; full sum in the low lane
            movss   dword ptr[fres], xmm0
        }
    }

    if ((size % 4) != 0) {
        for (unsigned int i = size - size % 4; i < size; i++)
            fres += a[i] * b[i];
    }

    return fres;
}
```

SSE2 optimized doubles:

```
double sse_inner_d(const double* a, const double* b, unsigned int size)
{
    double fres = 0.0;
    __declspec(align(16)) double ftmp[2] = { 0.0, 0.0 };
    __m128d mres = _mm_setzero_pd();

    if ((size / 2) != 0) {
        const double* pa = a;
        const double* pb = b;
        for (unsigned int i = 0; i < size / 2; i++) {
            // multiply 2 pairs of doubles and accumulate in 2 lanes
            mres = _mm_add_pd(mres, _mm_mul_pd(_mm_loadu_pd(pa), _mm_loadu_pd(pb)));
            pa += 2;
            pb += 2;
        }

        _mm_store_pd(ftmp, mres);

        fres = ftmp[0] + ftmp[1];
    }

    if ((size % 2) != 0) {
        for (unsigned int i = size - size % 2; i < size; i++)
            fres += a[i] * b[i];
    }

    return fres;
}
```


Engineer
Russian Federation