
Introduction to SSE Programming

10 Jul 2003, CPOL
This article describes how to program floating-point calculations using the Streaming SIMD Extensions.

Introduction

The Intel Streaming SIMD Extensions (SSE) technology enhances the performance of floating-point operations. Visual Studio .NET 2003 supports a set of SSE Intrinsics which allow the use of SSE instructions directly from C++ code, without writing the Assembly instructions. The MSDN SSE topics [2] may be confusing for programmers who are not familiar with SSE Assembly programming. However, reading the Intel Software manuals [1] together with MSDN makes it possible to understand the basics of SSE programming.

SIMD stands for single-instruction, multiple-data: a single instruction operates on several data elements at once. Consider the following programming task: compute the square root of each element in a long floating-point array. The algorithm for this task may be written as follows:

for each  f in array
    f = sqrt(f)
Let's be more specific:
for each  f in array
{
    load f to the floating-point register
    calculate the square root
    write the result from the register to memory
}
Processors with Intel SSE support have eight 128-bit registers, each of which may contain four single-precision floating-point numbers. SSE is a set of instructions that load floating-point numbers into these 128-bit registers, perform arithmetic and logical operations on them, and write the results back to memory. Using SSE technology, the algorithm may be written as:
for each  4 members in array
{
    load 4 members to the SSE register
    calculate 4 square roots in one operation
    write the result from the register to memory
}
A C++ programmer writing a program with SSE Intrinsics doesn't care about registers. He has a 128-bit __m128 type and a set of functions to perform the arithmetic and logical operations. It is up to the C++ compiler to decide which SSE registers to use and how to optimize the code. SSE technology pays off when the same operation is applied to each element of a long floating-point array.

SSE Programming Details

Include Files

All SSE Intrinsics and the __m128 data type are defined in the xmmintrin.h file:
#include <xmmintrin.h>
Since the SSE Intrinsics are compiler intrinsics and not functions, there are no lib files to link against.

Data Alignment

Each float array processed by SSE instructions should be aligned on a 16-byte boundary; the aligned SSE load and store instructions cause an access violation when used with misaligned data. A static array is declared using the __declspec(align(16)) keyword:
__declspec(align(16)) float m_fArray[ARRAY_SIZE];
A dynamic array should be allocated with the _aligned_malloc function:
m_fArray = (float*) _aligned_malloc(ARRAY_SIZE * sizeof(float), 16);
An array allocated by _aligned_malloc is released with the _aligned_free function:
_aligned_free(m_fArray);

__m128 Data Type

Variables of this type are used as operands of the SSE Intrinsics. They should not be accessed directly. Variables of type __m128 are automatically aligned on 16-byte boundaries.
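For example, an __m128 value is created and consumed only through the intrinsic functions. Below is a minimal sketch (the variable names are illustrative) which fills __m128 values with intrinsics, adds them, and copies the result back to an ordinary float array:

#include <xmmintrin.h>

void Example()
{
    __declspec(align(16)) float fOut[4];

    __m128 a = _mm_set_ps1(1.0f);                   // a = {1, 1, 1, 1}
    __m128 b = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  // b = {1, 2, 3, 4} (arguments are given in reverse order)
    __m128 c = _mm_add_ps(a, b);                    // c = {2, 3, 4, 5}

    _mm_store_ps(fOut, c);                          // copy the four floats back to memory
}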

Detection of SSE Support

SSE instructions may be used only if they are supported by the processor. The Visual C++ CPUID sample [4] shows how to detect support for SSE, MMX and other processor features. This is done with the cpuid Assembly instruction; see the details in that sample and in the Intel Software manuals [1].
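As a minimal sketch (assuming a 32-bit x86 build and a processor that supports the cpuid instruction; the CPUID sample [4] is more thorough), SSE support corresponds to bit 25 of the feature flags that cpuid returns in EDX for EAX = 1:

bool IsSSEAvailable()
{
    unsigned int uFeatures = 0;

    __asm
    {
        push ebx            // cpuid overwrites EBX; preserve it
        mov  eax, 1         // request the processor feature flags
        cpuid
        mov  uFeatures, edx // feature flags are returned in EDX
        pop  ebx
    }

    return (uFeatures & (1 << 25)) != 0;   // EDX bit 25 = SSE support
}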

SSETest Demo Project

The SSETest project is a dialog-based application which performs the following calculation with three float arrays:
fResult[i] = sqrt( fSource1[i]*fSource1[i] + fSource2[i]*fSource2[i] ) + 0.5

i = 0, 1, 2 ... ARRAY_SIZE-1
ARRAY_SIZE is defined as 30000. The source arrays are filled using the sin and cos functions. The Waterfall chart control written by Kris Jearakul [3] is used to show the source arrays and the result of the calculation. The calculation time (ms) is shown in the dialog. The calculation may be done in one of three ways:
  • C++ code;
  • C++ code with SSE Intrinsics;
  • Inline Assembly with SSE instructions.
C++ function:
void CSSETestDlg::ComputeArrayCPlusPlus(
          float* pArray1,                   // [in] first source array
          float* pArray2,                   // [in] second source array
          float* pResult,                   // [out] result array
          int nSize)                        // [in] size of all arrays
{

    int i;

    float* pSource1 = pArray1;
    float* pSource2 = pArray2;
    float* pDest = pResult;

    for ( i = 0; i < nSize; i++ )
    {
        *pDest = (float)sqrt((*pSource1) * (*pSource1) + (*pSource2)
                 * (*pSource2)) + 0.5f;

        pSource1++;
        pSource2++;
        pDest++;
    }
}
Now let's rewrite this function using the SSE Intrinsics. To find the required SSE Intrinsic I use the following approach:
  • Find the Assembly SSE instruction in the Intel Software manuals [1]. First I look the instruction up in Volume 1, Chapter 9, and then find its detailed description in Volume 2. This description also gives the name of the corresponding C++ intrinsic.
  • Search for the SSE Intrinsic name in the MSDN Library.
Some SSE Intrinsics are composite and cannot be found this way; they should be looked up directly in the MSDN Library (the descriptions are short but readable). The results of such a search are shown in the following table:

Required Function                                                     | Assembly Instruction | SSE Intrinsic
Assign a float value to all 4 components of a 128-bit value          | movss + shufps       | _mm_set_ps1 (composite)
Multiply the 4 float components of two 128-bit values                | mulps                | _mm_mul_ps
Add the 4 float components of two 128-bit values                     | addps                | _mm_add_ps
Compute the square root of the 4 float components of a 128-bit value | sqrtps               | _mm_sqrt_ps

C++ function with SSE Intrinsics:

void CSSETestDlg::ComputeArrayCPlusPlusSSE(
          float* pArray1,                   // [in] first source array
          float* pArray2,                   // [in] second source array
          float* pResult,                   // [out] result array
          int nSize)                        // [in] size of all arrays
{
    int nLoop = nSize / 4;

    __m128 m1, m2, m3, m4;

    __m128* pSrc1 = (__m128*) pArray1;
    __m128* pSrc2 = (__m128*) pArray2;
    __m128* pDest = (__m128*) pResult;


    __m128 m0_5 = _mm_set_ps1(0.5f);        // m0_5[0, 1, 2, 3] = 0.5

    for ( int i = 0; i < nLoop; i++ )
    {
        m1 = _mm_mul_ps(*pSrc1, *pSrc1);        // m1 = *pSrc1 * *pSrc1
        m2 = _mm_mul_ps(*pSrc2, *pSrc2);        // m2 = *pSrc2 * *pSrc2
        m3 = _mm_add_ps(m1, m2);                // m3 = m1 + m2
        m4 = _mm_sqrt_ps(m3);                   // m4 = sqrt(m3)
        *pDest = _mm_add_ps(m4, m0_5);          // *pDest = m4 + 0.5
        
        pSrc1++;
        pSrc2++;
        pDest++;
    }
}
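This loop processes the arrays in groups of four elements and assumes that nSize is a multiple of 4 (ARRAY_SIZE = 30000 satisfies this). For an arbitrary size, the remaining 0-3 elements can be handled by ordinary scalar code appended just before the closing brace of the function, for example:

    // process the 0-3 elements left over after the SSE loop
    for ( int j = nLoop * 4; j < nSize; j++ )
    {
        pResult[j] = (float)sqrt(pArray1[j] * pArray1[j] +
                     pArray2[j] * pArray2[j]) + 0.5f;
    }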
The function using inline Assembly is not shown here; anyone who is interested may read it in the demo project. Calculation times on my computer:
  • C++ code - 26 ms
  • C++ with SSE Intrinsics - 9 ms
  • Inline Assembly with SSE instructions - 9 ms
Execution time should be measured in the Release configuration, with compiler optimizations enabled.
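The demo dialogs display the time in milliseconds; one common way to time such a calculation on Windows (a sketch, not necessarily the method used in the demo projects) is the high-resolution performance counter:

#include <windows.h>

LARGE_INTEGER liFreq, liStart, liEnd;
QueryPerformanceFrequency(&liFreq);          // counter ticks per second

QueryPerformanceCounter(&liStart);
// ... the calculation being measured, e.g. ComputeArrayCPlusPlusSSE(...) ...
QueryPerformanceCounter(&liEnd);

double dTimeMs = 1000.0 * (liEnd.QuadPart - liStart.QuadPart) / liFreq.QuadPart;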

SSESample Demo Project

The SSESample project is a dialog-based application which performs the following calculation with a float array:
fResult[i] = sqrt(fSource[i]*2.8)

i = 0, 1, 2 ... ARRAY_SIZE-1
The program also calculates the minimum and maximum values in the result array. ARRAY_SIZE is defined as 100000. The result array is shown in the list box. The calculation time (ms) for each method is shown in the dialog:
  • C++ code - 6 ms on my computer;
  • C++ code with SSE Intrinsics - 3 ms;
  • Inline Assembly with SSE instructions - 2 ms.

The Assembly code performs better because of its intensive use of the SSE registers. However, C++ code with SSE Intrinsics usually performs as well as Assembly code or better, because it is difficult to write Assembly code that runs faster than the optimized code generated by the C++ compiler.

C++ function:

// Input: m_fInitialArray
// Output: m_fResultArray, m_fMin, m_fMax
void CSSESampleDlg::OnBnClickedButtonCplusplus()
{
    m_fMin = FLT_MAX;
    m_fMax = FLT_MIN;

    int i;

    for ( i = 0; i < ARRAY_SIZE; i++ )
    {
        m_fResultArray[i] = sqrt(m_fInitialArray[i]  * 2.8f);

        if ( m_fResultArray[i] < m_fMin )
            m_fMin = m_fResultArray[i];

        if ( m_fResultArray[i] > m_fMax )
            m_fMax = m_fResultArray[i];
    }
}
C++ function with SSE Intrinsics:
// Input: m_fInitialArray
// Output: m_fResultArray, m_fMin, m_fMax
void CSSESampleDlg::OnBnClickedButtonSseC()
{
    __m128 coeff = _mm_set_ps1(2.8f);      // coeff[0, 1, 2, 3] = 2.8
    __m128 tmp;

    __m128 min128 = _mm_set_ps1(FLT_MAX);  // min128[0, 1, 2, 3] = FLT_MAX
    __m128 max128 = _mm_set_ps1(FLT_MIN);  // max128[0, 1, 2, 3] = FLT_MIN

    __m128* pSource = (__m128*) m_fInitialArray;
    __m128* pDest = (__m128*) m_fResultArray;

    for ( int i = 0; i < ARRAY_SIZE/4; i++ )
    {
        tmp = _mm_mul_ps(*pSource, coeff);      // tmp = *pSource * coeff
        *pDest = _mm_sqrt_ps(tmp);              // *pDest = sqrt(tmp)

        min128 =  _mm_min_ps(*pDest, min128);
        max128 =  _mm_max_ps(*pDest, max128);

        pSource++;
        pDest++;
    }

    // extract minimum and maximum values from min128 and max128
    union u
    {
        __m128 m;
        float f[4];
    } x;

    x.m = min128;
    m_fMin = min(x.f[0], min(x.f[1], min(x.f[2], x.f[3])));

    x.m = max128;
    m_fMax = max(x.f[0], max(x.f[1], max(x.f[2], x.f[3])));
}
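Instead of the union, the four components of min128 and max128 can also be copied into a small aligned array with _mm_store_ps and then compared; a sketch of an equivalent extraction:

    __declspec(align(16)) float fTmp[4];

    _mm_store_ps(fTmp, min128);              // fTmp now holds the 4 partial minimums
    m_fMin = min( min(fTmp[0], fTmp[1]), min(fTmp[2], fTmp[3]) );

    _mm_store_ps(fTmp, max128);              // fTmp now holds the 4 partial maximums
    m_fMax = max( max(fTmp[0], fTmp[1]), max(fTmp[2], fTmp[3]) );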

Sources

  1. Intel Software manuals.
  2. MSDN, Streaming SIMD Extensions (SSE). http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vclang/html/vcrefstreamingsimdextensions.asp
  3. Waterfall chart control written by Kris Jearakul. http://www.codeguru.com/controls/Waterfall.shtml
  4. Microsoft Visual C++ CPUID sample. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vcsample/html/vcsamcpuiddeterminecpucapabilities.asp
  5. Matt Pietrek. Under The Hood. February 1998 issue of Microsoft Systems Journal. http://www.microsoft.com/msj/0298/hood0298.aspx

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

Alex Fr
Software Developer
Israel

Comments and Discussions

 
Re: how to use SSE under Linux? (PSuade, 20-Oct-03)
Hmmm... I'm using GCC 3.2.3 but don't have __m128. Are you sure you don't need to include a header file?
 
I strongly discourage you from using a union, because it turns off some optimizations (I tried a lot of tricks with the SSE built-ins).
 
For those who are interested in real SSE optimization in C++:
 
////////////////////////////////////////////////////////////////////////////////
 
#define always_inline inline __attribute__( ( always_inline ) )
 
////////////////////////////////////////////////////////////////////////////////
 
typedef float __v4sf __attribute__( ( mode( V4SF ), aligned( 16 ) ) );
 
////////////////////////////////////////////////////////////////////////////////
 
struct v4sf
{
__v4sf v;
 
///
always_inline
v4sf( ) { }
 
always_inline
v4sf( __v4sf _1 ) : v( _1 ) { }

always_inline
operator __v4sf( ) const { return v; }
};
 
always_inline
v4sf operator +( v4sf _1, v4sf _2 )
{ return __builtin_ia32_addps( _1.v, _2.v ); }
 
always_inline
v4sf operator +( __v4sf _1, v4sf _2 )
{ return __builtin_ia32_addps( _1, _2.v ); }
 
always_inline
v4sf operator +( v4sf _1, __v4sf _2 )
{ return __builtin_ia32_addps( _1.v, _2 ); }
 
always_inline
v4sf operator -( v4sf _1, v4sf _2 )
{ return __builtin_ia32_subps( _1.v, _2.v ); }
 
always_inline
v4sf operator -( __v4sf _1, v4sf _2 )
{ return __builtin_ia32_subps( _1, _2.v ); }
 
always_inline
v4sf operator -( v4sf _1, __v4sf _2 )
{ return __builtin_ia32_subps( _1.v, _2 ); }
 
always_inline
v4sf operator *( v4sf _1, v4sf _2 )
{ return __builtin_ia32_mulps( _1.v, _2.v ); }
 
always_inline
v4sf operator *( __v4sf _1, v4sf _2 )
{ return __builtin_ia32_mulps( _1, _2.v ); }
 
always_inline
v4sf operator *( v4sf _1, __v4sf _2 )
{ return __builtin_ia32_mulps( _1.v, _2 ); }
 
always_inline
v4sf operator /( v4sf _1, v4sf _2 )
{ return __builtin_ia32_divps( _1.v, _2.v ); }
 
always_inline
v4sf operator /( __v4sf _1, v4sf _2 )
{ return __builtin_ia32_divps( _1, _2.v ); }
 
always_inline
v4sf operator /( v4sf _1, __v4sf _2 )
{ return __builtin_ia32_divps( _1.v, _2 ); }
 
////////////////////////////////////////////////////////////////////////////////
 
Now using "struct v4sf" would help compiler to allocate SSE registers without putting v in stack. Using a union would prevent compiler from register optimizations and put v in stack even if a SSE register were more appropriate.
 
v4sf a,b,d;
void f()
{
d = a * ( d + b );
}
 
That gives us:
 

65: 0f 28 3d 20 00 00 00 movaps 0x20,%xmm7
6c: 0f 28 35 00 00 00 00 movaps 0x0,%xmm6
73: 0f 58 3d 10 00 00 00 addps 0x10,%xmm7
7a: 0f 59 f7 mulps %xmm7,%xmm6
7d: 0f 29 35 20 00 00 00 movaps %xmm6,0x20
 
Now if we replace it with:
 
struct v4sf
{
union { __v4sf v; float f[4]; }
 
...
 
that gives us (what ugly code!):
 
65: a1 00 00 00 00 mov 0x0,%eax
6a: 89 45 d8 mov %eax,0xffffffd8(%ebp)
6d: a1 04 00 00 00 mov 0x4,%eax
72: 89 45 dc mov %eax,0xffffffdc(%ebp)
75: a1 08 00 00 00 mov 0x8,%eax
7a: 89 45 e0 mov %eax,0xffffffe0(%ebp)
7d: a1 0c 00 00 00 mov 0xc,%eax
82: 89 45 e4 mov %eax,0xffffffe4(%ebp)
85: 0f 28 75 d8 movaps 0xffffffd8(%ebp),%xmm6
89: a1 20 00 00 00 mov 0x20,%eax
8e: 89 45 b8 mov %eax,0xffffffb8(%ebp)
91: a1 24 00 00 00 mov 0x24,%eax
96: 89 45 bc mov %eax,0xffffffbc(%ebp)
99: a1 28 00 00 00 mov 0x28,%eax
9e: 89 45 c0 mov %eax,0xffffffc0(%ebp)
a1: a1 2c 00 00 00 mov 0x2c,%eax
a6: 89 45 c4 mov %eax,0xffffffc4(%ebp)
a9: 0f 28 7d b8 movaps 0xffffffb8(%ebp),%xmm7
ad: a1 10 00 00 00 mov 0x10,%eax
b2: 89 45 a8 mov %eax,0xffffffa8(%ebp)
b5: a1 14 00 00 00 mov 0x14,%eax
ba: 89 45 ac mov %eax,0xffffffac(%ebp)
bd: a1 18 00 00 00 mov 0x18,%eax
c2: 89 45 b0 mov %eax,0xffffffb0(%ebp)
c5: a1 1c 00 00 00 mov 0x1c,%eax
ca: 89 45 b4 mov %eax,0xffffffb4(%ebp)
cd: 0f 58 7d a8 addps 0xffffffa8(%ebp),%xmm7
d1: 0f 29 7d c8 movaps %xmm7,0xffffffc8(%ebp)
d5: 0f 59 75 c8 mulps 0xffffffc8(%ebp),%xmm6
d9: 0f 29 75 e8 movaps %xmm6,0xffffffe8(%ebp)
dd: 8b 45 e8 mov 0xffffffe8(%ebp),%eax
e0: a3 20 00 00 00 mov %eax,0x20
e5: 8b 45 ec mov 0xffffffec(%ebp),%eax
e8: a3 24 00 00 00 mov %eax,0x24
ed: 8b 45 f0 mov 0xfffffff0(%ebp),%eax
f0: a3 28 00 00 00 mov %eax,0x28
f5: 8b 45 f4 mov 0xfffffff4(%ebp),%eax
f8: a3 2c 00 00 00 mov %eax,0x2c
 
So you shouldn't mix things like this.
 
Even:
struct v4sf
{
union { __v4sf v; float f[4] __attribute( ( aligned( 16 ) ) ); }
 
...
 
or :
 
struct v4sf
{
union { __v4sf v; float f[4]; } __attribute( ( aligned( 16 ) ) );
 
...
 
doesn't change anything.
 
To access the 4 floats, just create another class float4 with a conversion operator between v4sf and float4.
 
Oh yeah, the compiler flags were:
-march=athlon-xp
-fomit-frame-pointer
-mfpmath=sse
-O6

