Vector of Objects vs Vector of Pointers

How memory access patterns can influence performance of your code

In one of my previous posts, I wrote about the performance differences between using vector<Obj> and vector<shared_ptr<Obj>>. Somehow, I knew that the post was a bit unfinished and needed some more investigation.

This time, I will try to explain the memory access patterns used in my code and why performance is lost in some parts.

Contents

  • Recap
  • The test code
  • The results
  • Memory access pattern
  • Conclusion

Recap

I've compared the following cases:

  • std::vector<Object> - Memory is allocated on the heap, and vector guarantees that the memory block is contiguous. Thus, iterating over it should be quite fast - see below for why.

  • std::vector<std::shared_ptr<Object>> - This simulates an array of references from C#: you have an array, but each element is allocated in a different place on the heap.
    • update - Although my code allocated hundreds of shared pointers, they almost looked like a single block of memory! Since I allocated the objects one after another, the memory allocator simply chose the next free memory cell, which was actually quite close to the previous one. See the sketch after this list.
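
As a quick illustration of that effect, here is a minimal sketch (hypothetical demo code, not part of the original benchmark) that prints the addresses of back-to-back allocations and the differences between them, in the same format as the tables later in this article:

#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

struct Obj { float data[18]; }; // hypothetical 72-byte payload

int main()
{
    std::vector<std::shared_ptr<Obj>> particles(10);
    for (auto &p : particles)
        p = std::make_shared<Obj>();

    // print each object's address and its distance from the previous one;
    // with back-to-back allocations, the differences usually come out small
    long long prev = 0;
    for (const auto &p : particles)
    {
        const auto addr = static_cast<long long>(
            reinterpret_cast<std::uintptr_t>(p.get()));
        std::printf("%lld\t%lld\n", addr, prev ? addr - prev : 0);
        prev = addr;
    }
}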

The Test Code

The full repository can be found here: github/fenbf/PointerAccessTest.

The test code:

  • creates the desired container of objects (a hypothetical sketch of the Particle class follows this list)
  • runs the generate method once
  • runs the update method N times
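
The Particle class itself is not reproduced in this article. The sketch below is purely hypothetical - its fields were chosen only to add up to the 72-byte object size used in the cache analysis later, and they do not come from the repository code:

struct Particle
{
    float pos[4];   // position
    float vel[4];   // velocity
    float acc[4];   // acceleration
    float col[4];   // color
    float rotation;
    float time;     // 18 floats total = 72 bytes, matching the analysis below

    void generate()
    {
        // hypothetical: set some initial state
        for (int i = 0; i < 4; ++i)
        {
            pos[i] = 0.0f; vel[i] = 1.0f; acc[i] = 0.5f; col[i] = 1.0f;
        }
        rotation = 0.0f;
        time = 0.0f;
    }

    void update(float deltaTime)
    {
        // hypothetical: simple Euler integration
        for (int i = 0; i < 4; ++i)
        {
            vel[i] += acc[i] * deltaTime;
            pos[i] += vel[i] * deltaTime;
        }
        time += deltaTime;
    }
};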

Vector of Pointers

// start measuring time for Creation
std::vector<std::shared_ptr<Particle>> particles(count);
for (auto &p : particles)
    p = std::make_shared<Particle>();
// end time measurement

// >>>>>>
// randomize:
for (size_t i = 0; i < count / 2; ++i)
{
    int a = rand() % count;
    int b = rand() % count;
    if (a != b)
        std::swap(particles[a], particles[b]);
}
// >>>>>>

for (auto &p : particles)
    p->generate();

// start measuring time for Update
for (size_t u = 0; u < updates; ++u)
{
    for (auto &p : particles)
        p->update(1.0f);
}
// end time measurement

Please notice that the above code is enhanced with a randomize section. This code creates fragmentation in memory. It tries to simulate a real-life scenario where objects are created and deleted at different moments. Instead of creating all the pointers at once, we could create X pointers, then delete some of them, allocate some other objects, and then allocate the rest... just to make the allocator place our objects away from each other. A sketch of such a scenario follows below.

I would like to thank Morten Bendiksen for this idea from his comments!
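
Such a scenario might look like the following hypothetical sketch (not code from the repository): allocate every second particle, let unrelated allocations take the gaps in between, and only then allocate the remaining particles:

// hypothetical fragmentation scenario
struct OtherObj { char data[40]; }; // some unrelated allocation

std::vector<std::shared_ptr<Particle>> particles(count);
std::vector<std::shared_ptr<OtherObj>> filler;

// first, fill only every second slot
for (size_t i = 1; i < count; i += 2)
    particles[i] = std::make_shared<Particle>();

// unrelated objects now occupy the neighbouring memory cells
for (size_t i = 0; i < count; i += 2)
    filler.push_back(std::make_shared<OtherObj>());

// the remaining particles end up somewhere else on the heap
for (size_t i = 0; i < count; i += 2)
    particles[i] = std::make_shared<Particle>();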

Before the randomize, we could get the following pointers' addresses:

address      difference
16738564     0
16712876     -25688
16712972     96
16768060     55088
16768156     96
16768252     96
16768348     96
16768444     96
16768540     96
16768636     96
16768732     96
16768828     96
16768924     96
16770404     1480
16770212     -192
16769444     -768
16769252     -192
16770020     768
16769540     -480
16769348     -192
16769636     288

The first table shows that the memory is not one single block, but the distances to the neighbouring particles are still usually not that huge. In my last post, such a pattern caused something like an 8% performance loss.

After randomize:

address      difference
14772484     0
14832644     60160
14846956     14312
14876972     30016
14802076     -74896
14802172     96
14809916     7744
14858572     48656
14875628     17056
14816612     -59016
14819756     3144
14822996     3240
14802844     -20152
14804612     1768
14803364     -1248
14813252     9888
14803556     -9696
14844268     40712
14906900     62632
14815460     -91440
14861164     45704

The second table, on the other hand, shows huge distances between neighbouring objects. They are very random, and the hardware prefetcher cannot cope with this pattern.

Vector of Objects

// start measuring time for Creation
std::vector<Particle> particles(count);
// end time measurement

for (auto &p : particles)
    p.generate();

// start measuring time for Update
for (size_t u = 0; u < updates; ++u)
{
    for (auto &p : particles)
        p.update(1.0f);
}
// end time measurement

This time, randomization of course makes no sense. :)

The Results

  • Core i5 2400, Sandy Bridge
  • Visual Studio 2013 for Desktop Express
  • Release mode, /fp:fast, /arch:SSE2, /O2

For 80k particles, the update code for the vector of pointers came out around 266% slower than for the vector of objects. The next section explains why.

Memory Access Pattern

The most important thing about memory is its latency.

Recently, there was a great picture explaining it:

Original link to the picture

In general, having data already in the cache is a great thing. When some variable needs to be loaded from main memory, we have to wait, and we definitely do not want to wait too long for the data. As a rough sense of scale (ballpark figures, not measurements from this article): an L1 cache hit costs only a few CPU cycles, while a fetch from main memory costs on the order of 100 nanoseconds - roughly two orders of magnitude slower.

So, why is it so important to care about iterating over a contiguous block of memory?

Let us look at our main loop:

for each particle p:
        p->update(DELTA_TIME);

The Contiguous Case

  1. Before we can update any fields of the first particle, it has to be fetched from main memory into the cache/registers. Our particle has a size of 72 bytes, so we need two cache line loads (a cache line is usually 64 bytes): the first load brings in 64 bytes, and a second load brings in another 64. Notice that only the first 8 bytes of the second load are used by the first particle; the remaining 56 bytes belong to the second particle.
  2. In the second step, we already have 56 bytes of the second particle, so we need just one more 64-byte load to get the rest. This time, we also get some data of the third particle.
  3. And the pattern repeats...

For 1000 particles, we need 1000 * 72 bytes = 72000 bytes, which means 72000 / 64 = 1125 cache line loads. In other words, each particle needs 1.125 cache line reads on average.
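
A quick sanity check of that arithmetic, as a tiny standalone sketch (the sizes are the figures quoted above):

#include <cstddef>
#include <cstdio>

int main()
{
    constexpr std::size_t particleSize = 72;   // bytes per particle
    constexpr std::size_t cacheLine    = 64;   // bytes per cache line
    constexpr std::size_t count        = 1000; // number of particles

    // contiguous storage: total bytes, rounded up to whole cache lines
    constexpr std::size_t loads = (count * particleSize + cacheLine - 1) / cacheLine;

    std::printf("%zu cache line loads (%.3f per particle)\n",
                loads, static_cast<double>(loads) / count); // prints: 1125 (1.125)
}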

But the CPU is quite smart and additionally uses a thing called the hardware prefetcher. The CPU detects that we operate on one huge memory block and prefetches some of the cache lines before we even ask. Thus, instead of waiting for the memory, the data is already in the cache!

What about the case with vector of pointers?

The Pointer Case

  1. Load the data for the first particle. Two cache line reads.
  2. Load the data for the second particle. Oops... this time, we cannot reuse the data from the second cache line read of the first step, because the second particle's data is located somewhere else in memory! So the second particle also needs two loads!
  3. The pattern repeats...

For 1000 particles, we need 2000 cache line reads on average! That is 78% more cache line reads than in the contiguous case! Additionally, the hardware prefetcher cannot figure out the pattern - it is random - so there will be a lot of cache misses and stalls.
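
The same back-of-the-envelope comparison, as a sketch (using the two totals computed above):

#include <cstdio>

int main()
{
    constexpr double contiguousLoads = 1125.0; // contiguous case, computed earlier
    constexpr double pointerLoads    = 2000.0; // 2 cache lines x 1000 particles

    // (2000 - 1125) / 1125 = 0.777..., i.e. about 78% more cache line reads
    std::printf("%.0f%% more cache line reads\n",
                (pointerLoads - contiguousLoads) / contiguousLoads * 100.0);
}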

In our experiment, the pointer code for 80k particles was around 266% slower than the contiguous case.

Conclusion

The test showed that memory access patterns are a crucial element of optimized code. Just by moving data around and carefully storing it in one place, we can get a huge performance speedup. Another thing to notice: sometimes we are simply not aware of the memory layout - we might assume that the language/compiler/runtime will take care of everything.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
