Memory allocations caching

28 Jun 2004 · 2 min read
How to optimize heap usage

Introduction

Not long ago, I posted an article that describes a technique for optimizing frequent dynamic allocations of a constant size.

A few days ago, I was talking to a friend and this subject happened to come up. He told me: "In such cases, instead of deleting the unneeded blocks, I just store them in a linked list and then reuse them."

Impressive, isn't it? This method is even simpler than what I had invented, and its advantage is that the caching pool can grow dynamically without special effort or performance penalties.

Overhead: Theoretically, we need no overhead at all: while an allocated block is in use, we keep no references to it, and once it is no longer needed we simply use its body as the linked-list node (assuming the minimal allocation size is at least the size of a pointer, 4 bytes on Win32). On the other hand, all blocks are still allocated from a heap, which probably prepends a header of its own. This can be significant when the allocation size is small; in such a case, it would be a good idea to use the allocator I described earlier instead of the heap.
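
To make this concrete, here is a minimal sketch of the intrusive free-list idea. The names FreeNode, CacheBlock, and ReuseBlock are mine, for illustration only; the full class appears below.

#include <cstddef>  // NULL

struct FreeNode {
    FreeNode* m_pNext;  // stored inside the body of an otherwise unused block
};

static FreeNode* g_pFreeList = NULL;    // head of the cache

void CacheBlock(void* pBlock)           // call instead of 'delete'
{
    // The block must be at least sizeof(FreeNode) bytes, i.e. one pointer.
    FreeNode* pNode = (FreeNode*) pBlock;
    pNode->m_pNext = g_pFreeList;
    g_pFreeList = pNode;
}

void* ReuseBlock()                      // returns NULL when the cache is empty
{
    if (!g_pFreeList)
        return NULL;
    FreeNode* pNode = g_pFreeList;
    g_pFreeList = pNode->m_pNext;
    return pNode;
}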

Implementation

Now, let's discuss it more closely. We need a singly linked list (whether it acts as a stack or a queue doesn't matter) which is initially empty. At runtime, we allocate blocks in the regular way. When they are no longer needed, we push them onto our linked list instead of deleting them. Upon the next allocation request, we simply return a block from the list.

OK, let's focus on some pitfalls:

  1. Suppose there's a spike at an early stage of the run: you suddenly need a great many blocks while the caching list hasn't collected many yet. You'll have to allocate them rapidly, and that is exactly when it is most critical for your application to respond quickly under such a load! So, our first conclusion is that some number of blocks should be PRE-cached upon initialization.
  2. After a huge spike subsides, our list will probably contain a lot of blocks, whereas most of the time we'll need only a small fraction of them. So, our second conclusion is that once the caching list grows large enough, we should delete blocks in the regular way instead of collecting them.

Well, not too complex at all. Below is one possible implementation of this idea. It caches allocations up to some maximum count, which doesn't change. However, a clever application could collect statistics and adjust this limit dynamically; that's up to you. Good luck!

P.S.: Sorry if there are bugs; I haven't tested it.

// Headers needed when this is used outside an MFC project (where ASSERT already exists).
#include <windows.h>    // ULONG, BYTE, PVOID, PBYTE, ULONG_PTR
#include <new>          // std::nothrow
#include <crtdbg.h>     // _ASSERTE

#ifndef ASSERT
#define ASSERT _ASSERTE
#endif

// Caches fixed-size heap blocks in an intrusive singly linked free list.
class CAllocaterEx {
    ULONG m_nAllocationSize;
    ULONG m_nPoolMax;
    ULONG m_nCount;

    // A cached (free) block's own body holds the pointer to the next one.
    struct CNode {
        CNode* m_pNext;
    };
    CNode* m_pHead;     // head of the free list

    // Takes one cached block off the free list.
    CNode* Pop()
    {
        ASSERT(m_pHead && m_nCount);
        CNode* pHead = m_pHead;
        m_pHead = pHead->m_pNext;
        m_nCount--;
        return pHead;
    }
    // Puts a block back onto the free list.
    void Push(CNode* pNode)
    {
        ASSERT(pNode);
        pNode->m_pNext = m_pHead;
        m_pHead = pNode;
        m_nCount++;
    }

public:
    CAllocaterEx(ULONG nAllocationSize = 0) :
        m_nAllocationSize(nAllocationSize),
        m_nPoolMax(0),
        m_nCount(0),
        m_pHead(NULL)
        {
        }
    ~CAllocaterEx()
    {
        Clear();
    }

    // Pre-caches nPoolSize blocks and remembers nPoolSize as the cache limit.
    bool Init(ULONG nPoolSize)
    {
        // ensure our allocation size is at least the size of a pointer
        if (m_nAllocationSize < sizeof(ULONG_PTR))
            m_nAllocationSize = sizeof(ULONG_PTR);

        Clear();
        return EnsurePoolMin(m_nPoolMax = nPoolSize);
    }
    bool Init(ULONG nPoolSize, ULONG nAllocationSize)
    {
        m_nAllocationSize = nAllocationSize;
        return Init(nPoolSize);
    }

    // Grows the cache until it holds at least nPoolSize blocks.
    bool EnsurePoolMin(ULONG nPoolSize)
    {
        while (m_nCount < nPoolSize)
        {
            // nothrow form, so an allocation failure is reported by the NULL check below
            CNode* pNode = (CNode*) new (std::nothrow) BYTE[m_nAllocationSize];
            if (!pNode)
                return false;
            Push(pNode);
        }
        return true;
    }
    // Shrinks the cache until it holds at most nPoolSize blocks.
    void EnsurePoolMax(ULONG nPoolSize)
    {
        while (m_nCount > nPoolSize)
            delete[] (PBYTE) Pop();
    }

    void Clear()
    {
        EnsurePoolMax(0);
    }

    // Returns a cached block, allocating a fresh one only if the cache is empty.
    PVOID Alloc()
    {
        return EnsurePoolMin(1) ? (PVOID) Pop() : NULL;
    }
    // Returns a block to the cache; trims the cache if it exceeds the limit.
    void Free(PVOID pPtr)
    {
        Push((CNode*) pPtr);
        EnsurePoolMax(m_nPoolMax);
    }
};
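
For illustration, here is how the class might be used; MyRecord, the global pool object, and the pool size of 64 are hypothetical, not part of the original code:

#include <cstdio>

struct MyRecord {
    int    m_nId;
    double m_fValue;
};

// One pool per fixed allocation size.
CAllocaterEx g_Pool(sizeof(MyRecord));

void Example()
{
    // Pre-cache 64 blocks; 64 also becomes the maximum number of cached blocks.
    if (!g_Pool.Init(64))
        return;

    MyRecord* pRec = (MyRecord*) g_Pool.Alloc();
    if (pRec)
    {
        pRec->m_nId    = 1;
        pRec->m_fValue = 3.14;
        printf("record %d = %f\n", pRec->m_nId, pRec->m_fValue);
        g_Pool.Free(pRec);  // goes back to the cache, not to the heap
    }
}

Note that Free() deletes a block for real only when the cache already holds the limit passed to Init(); otherwise the block simply waits in the list for the next Alloc().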

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.



Written By
Software Developer (Senior)
Israel
My name is Vladislav Gelfer. I was born in Kiev (former Soviet Union); since 1993 I have lived in Israel.
In programming, I'm interested mostly in low-level programming, OOP design, DSP, and multimedia.
Besides programming, I like physics, math, and digital photography.

Comments and Discussions

 
Re: Please not a dump factory for C++!! (ICBM, 9-Jul-04 10:51)
New and delete do not use the VirtualXXX functions at all. They use the HeapXXX functions, working from a single Windows heap that the C runtime obtains from Windows at initialisation. Undoubtedly, the NT (and later) heap managers must of course use the VirtualXXX functions internally, but Windows hides all of this from the programmer. It uses various techniques to increase performance, including look-aside buffers etc.
The page size is indeed used by the VirtualXXX functions, but as a C++ programmer you are completely isolated from this when using the standard new and delete functions.

I agree that using a fixed-size chunk allocator should, if written properly, enhance performance. However, there are numerous problems with writing code that does not 'free' this memory. In effect, you are cheating. Generally, you can allocate memory quickly, or free it quickly, but not both. Whether or not this is suitable for your application depends entirely on the application.

As an example, if your application copies a lot of small objects, using temporaries along the way (perhaps using a map, set, etc.), then you will likely find that your heap management routines use far more memory than they need to. Also, many real-world applications, especially data-driven applications, create a lot of small memory blocks. On a server with lots of memory, it might not be important to free those blocks. However, on a client machine with limited memory resources, freeing them is essential.

If you are looking for a fast heap manager, look at WinHeap (http://www.winheap.com) - there's a free download. It manages small blocks of memory (less than 256 bytes) using special memory pools, and on a single-processor machine it is about 30% faster than the Microsoft heap manager. On a multiprocessor machine, it can be an order of magnitude faster.

