Flexible Particle System - Code Optimization

Posted 6 Oct 2014

Several code changes that made my particle system faster: SIMD, random number generator, pointer aliasing, memory alignment.
After playing with the tools, we have some more options to improve the performance of the particle system. This time, we need to rewrite some parts of the code.

In total, the particle system runs almost twice as fast as initially! Read more to see what pieces of code were changed.

The Series

Plan for this Post


We are starting from the numbers reported at the end of the previous post (last results):

(Benchmark chart: Core i5 Sandy Bridge and Core i5 Ivy Bridge, time in milliseconds)

SIMD Preparation

Previously, I tried to force the compiler to use SSE2 or AVX instructions. As we saw, there was a nice performance boost (around 10% for AVX). But hey... SIMD should calculate things 4x or 8x faster... so why did we get only a small improvement?

In real life, it's not that simple:

  • SIMD instructions can process 4 or 8 elements at a time, but we still need to wait for the memory. See my summary of the talk "Native code performance on modern CPUs" for more information. In general, we can get at most around a 2.5x speedup with SSE2/4, and that assumes ideally 'vectorizable' code. Not all code is in such a perfect state.
  • Current CPUs are superscalar, which means the CPU can run several different instructions in parallel. Sometimes, hand-written SIMD code can even be slower than the original code generated by the compiler.
  • An additional small problem: SSE registers need memory chunks aligned to 128 bits (16-byte alignment). We need to take care of this when we allocate new memory, so not every variable or array is suitable for SSE code.

What can we do?

  • Since particles operate mostly on glm::vec4, there is a good chance of using the full power of SSE: we use 4 floats per vector, 16 bytes.
  • glm adds a very nice feature, glm::simdVec4, which basically adds SSE code to the common vector functions. So I simply changed glm::vec4 to glm::simdVec4.
  • The memory must be aligned, so I used _aligned_malloc and _aligned_free.

Some code examples:

// particles.h, in ParticleData class declaration
glm::simdVec4 *m_pos;
glm::simdVec4 *m_col;

// in particles.cpp, generate() method:
m_pos = (glm::simdVec4 *)_aligned_malloc(sizeof(glm::vec4)*maxSize, 16);
m_col = (glm::simdVec4 *)_aligned_malloc(sizeof(glm::vec4)*maxSize, 16);

// particles.cpp, destructor:
_aligned_free(m_pos);
_aligned_free(m_col);

The results after changes (Visual Studio):

(Benchmark chart: Sandy Bridge and Ivy Bridge results)
Wow: almost a 20% improvement! All from proper data structures (for the vectors) and memory alignment.

SSE and AVX Instructions

So far, we got a nice speed up... Now, let's write some SSE code for most critical loops. Will it run faster?

Euler update, SSE:

__m128 ga = globalA.Data;
__m128 *pa, *pb;
__m128 pc;
__m128 ldt = _mm_set_ps1(localDT);

size_t i;
for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_acc[i].x);
    *pa = _mm_add_ps(*pa, ga);
}

for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_vel[i].x);
    pb = (__m128*)(&p->m_acc[i].x);
    pc = _mm_mul_ps(*pb, ldt);
    *pa = _mm_add_ps(*pa, pc);
}

for (i = 0; i < endId; i++)
{
    pa = (__m128*)(&p->m_pos[i].x);
    pb = (__m128*)(&p->m_vel[i].x);
    pc = _mm_mul_ps(*pb, ldt);
    *pa = _mm_add_ps(*pa, pc);
}

Readability is much worse in this case.

The results:

(Benchmark chart: Sandy Bridge and Ivy Bridge results)
Not much, unfortunately. This is because glm::simdVec4 already uses SSE code internally, so there is no point in rewriting those loops by hand: we lose readability, and the performance gain is questionable.

Pointer Aliasing: __restrict Keyword

In my previous post, I got a very interesting comment from Matías N. Goldberg:

... In my experience, you get massive performance improvements by just telling the compiler the pointers do not alias to each other...

Matías suggests using the __restrict keyword to tell the compiler that the pointers do not alias. For instance:

glm::vec4 * __restrict acc = p->m_acc;
glm::vec4 * __restrict vel = p->m_vel;
glm::vec4 * __restrict pos = p->m_pos;

And then, instead of p->m_pos, just use pos pointer.

When I made this change in all the updater (and generator) code, I got the following results:

(Benchmark chart: Sandy Bridge and Ivy Bridge results)
This is not a massive improvement, but it's still worth testing.

Random Number Generator

So far, I have focused mostly on the updaters, but the generators could also be improved a bit. In this module, a random number generator is heavily used: right now, the standard C rand() function is called. For a particle system, we probably do not need anything more advanced (like a normal-distribution generator) - a uniform distribution is just fine... but maybe there are faster generators than the default one?

I've searched and found something here, here and here.

I've tried to use this generator:

static unsigned int mirand = 1;

float sfrand(void) {
    unsigned int a;
    mirand *= 16807;
    a = (mirand & 0x007fffff) | 0x40000000;
    return (*((float*)&a)) - 3.0f;
}
It has a uniform distribution and 23 bits of precision (C rand() gives only 15 or 16 bits, depending on the implementation).

The results:

(Benchmark chart: Sandy Bridge and Ivy Bridge results)
Wow! Now the total improvement is around 28% for Sandy Bridge, and almost the same for Ivy Bridge.

Wrap Up

Final Results

CPU    Count
Sandy  191000   384.869 (-21.3%) | 501.435 (-27.2%) | 394.217 (-24.2%)
Ivy    191000   473.518 (-20.3%) | 606.759 (-27.1%) | 484.880 (-24.3%)

(time in milliseconds for the three test effects; improvement vs. the initial version in parentheses)

Total (taking the times from before the tools optimization):

(Benchmark chart: total improvement, Sandy Bridge and Ivy Bridge)
We can 'reverse' those numbers and say that now the attractor effect runs almost two times faster! Not that bad!


  • Memory alignment and proper data structures are the key factors.
  • Write SIMD code only when needed; usually it is better to rely on the compiler and third-party libraries.
  • Describe your code better, for instance with the __restrict keyword; that way the compiler can generate better code.
  • The random number generator can make a difference.

What's Next

The renderer is very simple so far, and maybe there are some options to improve its code. For sure, we have to look at CPU-to-GPU memory transfers and better buffer usage.



This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Bartlomiej Filipek
Software Developer
Poland Poland
Software developer interested in creating great code and passionate about teaching.

I have around 10 years of professional experience in C++/Windows/Visual Studio programming. Plus other technologies like: OpenGL, game development, performance optimization.

If you like my articles please subscribe to my coding blog or just visit
