
Memory allocations caching

28 Jun 2004
How to optimize heap usage

Introduction

Not long ago, I posted an article describing a technique for optimizing frequent dynamic allocations of a constant size.

A few days ago, I was talking to a friend and this subject happened to come up. He told me: "In such cases, instead of deleting the unneeded blocks, I just store them in a linked list and then reuse them."

Impressive, isn't it? This method is even simpler than the one I invented, and its advantage is that the caching pool can grow dynamically without special effort or a performance penalty.

Overhead: theoretically, we need no overhead at all. While an allocated block is in use, we keep no references to it, and once it becomes unneeded we simply reuse its body as a linked-list node (assuming the minimum allocation size is at least the size of a pointer, 4 bytes on Win32). On the other hand, all blocks are still allocated from a heap, which probably adds a header of its own. That can be significant when the allocation size is small; in such a case, it would be a good idea to use my allocator (described in the earlier article) instead of the heap.
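To make the zero-overhead point concrete, here is a minimal sketch of the trick. The names FreeNode, CacheBlock and TakeCachedBlock are mine, for illustration only; the full class below does the same thing with the bookkeeping added:

struct FreeNode { FreeNode* m_pNext; };   // overlaid on a freed block's body

static FreeNode* g_pFreeList = NULL;      // head of the cache, initially empty

// Called instead of delete: the block's first bytes become the link.
void CacheBlock(PVOID pBlock)
{
    FreeNode* pNode = (FreeNode*) pBlock;
    pNode->m_pNext = g_pFreeList;
    g_pFreeList = pNode;
}

// Called instead of new: returns NULL when the cache is empty.
PVOID TakeCachedBlock()
{
    if (!g_pFreeList)
        return NULL;
    FreeNode* pNode = g_pFreeList;
    g_pFreeList = pNode->m_pNext;
    return pNode;
}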

Implementation

Now, let's look at it a bit more closely. We need a singly linked list (treated as a stack or a queue, it doesn't matter) which is initially empty. At run time, we allocate blocks in the regular way. When they are no longer needed, we push them onto our list instead of deleting them. On the next allocation request, we simply pop a block from the list.

OK, let's focus on some pitfalls:

  1. Suppose there is a spike at an early stage of the run; in other words, you suddenly need a great many blocks while the caching list has not collected many yet. You will have to allocate them rapidly, and that is exactly when it is most critical for your application to respond quickly to such a load. So our first conclusion is that some number of blocks should be PRE-cached at initialization.
  2. After a big spike subsides, our list will probably contain a lot of blocks, while most of the time we will need only a small part of them. So our second conclusion is that once the caching list grows large enough, we delete blocks in the regular way instead of collecting them.

Not too complex at all. Below is one possible implementation of this idea. It caches allocations up to some maximum count, which never changes; a clever application could collect statistics and adjust this limit dynamically. That's up to you. Good luck!

P.S.: Sorry if there are bugs, I haven't tested it.

#include <new>   // for std::nothrow, so a failed allocation returns NULL instead of throwing

class CAllocaterEx {
    ULONG m_nAllocationSize;   // size of each block handed out
    ULONG m_nPoolMax;          // maximum number of blocks to keep cached
    ULONG m_nCount;            // number of blocks currently in the cache

    // A freed block's own body is reused as the list node,
    // so the cache adds no per-block overhead.
    struct CNode {
        CNode* m_pNext;
    };
    CNode* m_pHead;            // head of the singly linked free list

    // Take one cached block off the free list (the list must not be empty).
    CNode* Pop()
    {
        ASSERT(m_pHead && m_nCount);
        CNode* pHead = m_pHead;
        m_pHead = pHead->m_pNext;
        m_nCount--;
        return pHead;
    }
    // Put a block onto the free list.
    void Push(CNode* pNode)
    {
        ASSERT(pNode);
        pNode->m_pNext = m_pHead;
        m_pHead = pNode;
        m_nCount++;
    }

public:
    CAllocaterEx(ULONG nAllocationSize = 0) :
        m_nAllocationSize(nAllocationSize),
        m_nPoolMax(0),
        m_nCount(0),
        m_pHead(NULL)
        {
        }
    ~CAllocaterEx()
    {
        Clear();
    }

    // Pre-cache nPoolSize blocks; this also sets the limit used by Free().
    bool Init(ULONG nPoolSize)
    {
        // ensure our allocation size is at least size of a pointer
        if (m_nAllocationSize < sizeof(ULONG_PTR))
            m_nAllocationSize = sizeof(ULONG_PTR);

        Clear();
        return EnsurePoolMin(m_nPoolMax = nPoolSize);
    }
    bool Init(ULONG nPoolSize, ULONG nAllocationSize)
    {
        m_nAllocationSize = nAllocationSize;
        return Init(nPoolSize);
    }

    // Grow the cache until it holds at least nPoolSize blocks.
    bool EnsurePoolMin(ULONG nPoolSize)
    {
        while (m_nCount < nPoolSize)
        {
            // nothrow new returns NULL on failure, which is what the check below expects
            CNode* pNode = (CNode*) new (std::nothrow) BYTE[m_nAllocationSize];
            if (!pNode)
                return false;
            Push(pNode);
        }
        return true;
    }
    // Shrink the cache until it holds at most nPoolSize blocks.
    void EnsurePoolMax(ULONG nPoolSize)
    {
        while (m_nCount > nPoolSize)
            delete[] (PBYTE) Pop();
    }

    void Clear()
    {
        EnsurePoolMax(0);
    }

    // Hand out a block from the cache, allocating a new one if the cache is empty.
    PVOID Alloc()
    {
        return EnsurePoolMin(1) ? (PVOID) Pop() : NULL;
    }
    // Return a block to the cache; anything beyond the limit goes back to the heap.
    void Free(PVOID pPtr)
    {
        Push((CNode*) pPtr);
        EnsurePoolMax(m_nPoolMax);
    }
};
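To show how the class is meant to be wired in, here is a hypothetical usage sketch. MyMessage and g_msgPool are invented names for illustration, and the Win32 types (ULONG, PVOID) are assumed to be available:

struct MyMessage            // example fixed-size payload, invented for illustration
{
    ULONG m_nId;
    ULONG m_nFlags;
};

CAllocaterEx g_msgPool(sizeof(MyMessage));

void Example()
{
    // Pre-cache 64 blocks; Free() will never keep more than 64 cached.
    if (!g_msgPool.Init(64))
        return;     // out of memory

    MyMessage* pMsg = (MyMessage*) g_msgPool.Alloc();
    if (!pMsg)
        return;

    pMsg->m_nId = 1;
    pMsg->m_nFlags = 0;
    // ... use the message ...

    g_msgPool.Free(pMsg);   // goes back to the cache, not to the heap
}

Note that the allocator only tracks blocks that have been returned via Free(); anything still held by callers when the object is destroyed will not be released by it, so hand everything back first. To change the cached-block limit at run time you would have to call Init() again (which empties the cache) or extend the class with a setter for m_nPoolMax.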



Written By
Software Developer (Senior)
Israel
My name is Vladislav Gelfer. I was born in Kiev (former Soviet Union); since 1993 I have lived in Israel.
In programming I am mostly interested in low-level work, OOP design, DSP, and multimedia.
Besides programming, I like physics, math, and digital photography.

Comments and Discussions

 
Re: Please not a dump factory for C++!!
valdok, 19-Jun-04 1:43
I disagree with you, and I think you're confusing something.

1. The standard C++ new and delete operators allocate and free memory via the Windows heap functions (HeapAlloc, HeapFree). Now, what do those functions do? The heap is implemented on top of more basic Windows functionality, so-called virtual memory (VirtualAlloc, VirtualFree), which has a page granularity of 4 KB on Intel processors. Those functions allocate memory pages for the process; the heap encapsulates them so that you can allocate blocks of arbitrary size, fetching pages on demand and managing them for you.

2. What I meant to optimize is only the heap implementation. At some point the virtual memory functions will still be used; that is unavoidable.

3. Now, why do I claim that my implementation is faster than the standard heap? Because my heap is designed specifically for blocks of a constant size. Hence there is no memory fragmentation, and I never have to search for a block of an appropriate size, whereas the standard heap does.

I suggest you read my previous article: http://www.codeproject.com/cpp/advanced_heap.asp