#include "stdafx.h"
#include <iostream>
#include <vector>
#include "conio.h"
int main ()
{
	typedef struct
	{
		int iArr[10];
	}BIGSTRUCT;
 
	BIGSTRUCT st;
  std::vector<BIGSTRUCT> myvector;
  int ii = sizeof(BIGSTRUCT);
  std::cout << "max_size: " << (int) myvector.max_size() << '\n';
  _getch();
  return 0;
}
 
I am using the above code to get the maximum vector size I can allocate. If the size of iArr is 1, I get 1,073,741,823; but if I increase the size to 10, it becomes 107,374,182.
http://stackoverflow.com/questions/4321040/c-vectors-of-objects-and-pointers?rq=1
http://stackoverflow.com/questions/3813124/c-vector-max-size
I am confused on two points:
1) Where does this number come from? How is it calculated?

2) Now, if I am on a 32-bit machine with 2 GB of RAM, and I have a vector of, say, 170,000,000 members, what is the maximum size each member can have so that I do not run out of memory?
 
Thanks
Posted 21-Apr-13 9:33am
iDebD842

Solution 2

Now, if I am on a 32-bit machine with 2 GB of RAM, and I have a vector of, say, 170,000,000 members, what is the maximum size each member can have so that I do not run out of memory?
 
First, the physical amount of RAM has nothing to do with it: a modern OS will shuffle your memory contents around as needed; on Windows, for example, that's what the pagefile is for!
 
That said, the maximum amount does depend on both your application and your OS: in theory, you can address a total of 4 GB on a 32-bit machine. However, that address space covers every object of your entire application, so if you have a lot of other spacious objects lying around, the space left for your vector shrinks accordingly. Second, your OS reserves part of the address space for itself: on 32-bit Windows, a process gets only 2 GB of user address space by default (3 GB with the /3GB boot option and a large-address-aware executable; the details vary with the Windows version). On top of that, a std::vector needs its memory in one contiguous block, and fragmentation can make a large block hard to find. Your usable address space may therefore be considerably smaller than you think.
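To make that concrete, here is the back-of-the-envelope arithmetic for the numbers in the question, assuming (very optimistically) that the whole default 2 GB user address space were available as a single contiguous block:

#include <iostream>

int main()
{
    // Optimistic upper bound: all 2 GB of a 32-bit Windows process's
    // default user address space available as one contiguous block.
    const unsigned long long addressSpace = 2ULL * 1024 * 1024 * 1024;
    const unsigned long long elementCount = 170000000ULL;

    // Integer division: prints 12, i.e. at most three 4-byte ints per element.
    std::cout << "max bytes per element: "
              << addressSpace / elementCount << '\n';
    return 0;
}

In practice the executable, DLLs, thread stacks and heap fragmentation all eat into that, so the realistic figure is lower than those 12 bytes.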
 
A word of advice: you should never strive to use up the maximum amount of memory available! If nothing else, you would severely hamper every other application running on the same machine, because the OS would be forced to constantly page memory in and out!
 
If you need to store that much data in memory, then you should really try to redesign your algorithm to use less memory. The usual way of working with big data is to use a database: it lets you load just the data needed to work on one record at a time. Or you can simply read the data record by record, as sketched below.
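A minimal sketch of the record-by-record approach, assuming a hypothetical raw binary file data.bin containing packed BIGSTRUCT records:

#include <fstream>
#include <iostream>

struct BIGSTRUCT
{
    int iArr[10];
};

int main()
{
    // "data.bin" is a hypothetical file of raw, packed BIGSTRUCT records.
    std::ifstream in("data.bin", std::ios::binary);

    BIGSTRUCT rec;
    long count = 0;
    while (in.read(reinterpret_cast<char*>(&rec), sizeof rec))
    {
        // Process 'rec' here; only one record is in memory at a time.
        ++count;
    }
    std::cout << "records processed: " << count << '\n';
    return 0;
}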
Comments
nv3 at 22-Apr-13 17:10pm
Very good answer, Stefan. My 5. Except for the statement that each thread gets its own address space: each thread gets its own stack (which is part of the 2 GB or 3 GB total address space); other than that, all threads of a process share the same memory. That's in fact the whole point of threads.

Stefan_Lang at 23-Apr-13 2:41am
Thanks. I removed that advice.

Solution 1

The calculation is based on the formula (for your case):
max_size = (size_t)(-1) / sizeof(BIGSTRUCT);
For an array with 1 int (4 bytes) this yields 1,073,741,823.
For an array with 10 ints (40 bytes) this yields 107,374,182.
 
Unfortunately it is not based on your actual machine configuration.
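A quick way to check this against your own implementation (the exact value of max_size() is implementation-defined, but many STL implementations derive it from the largest value of size_t in just this way):

#include <iostream>
#include <limits>
#include <vector>

struct BIGSTRUCT
{
    int iArr[10];
};

int main()
{
    std::vector<BIGSTRUCT> v;

    // (size_t)(-1) is the largest value a size_t can hold.
    const std::size_t computed =
        std::numeric_limits<std::size_t>::max() / sizeof(BIGSTRUCT);

    std::cout << "computed:   " << computed     << '\n'; // 107,374,182 on a 32-bit build
    std::cout << "max_size(): " << v.max_size() << '\n';
    return 0;
}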

Solution 3

When using the STL, you can never pre-calculate the exact maximum size of a collection of items; the best you can do is approximate. What follows is an estimate I am making here and now, with a few unstated assumptions. If you ask two other people, you might get five or more different answers; if you ask me again tomorrow, I might have a different estimate.
 
For a std::vector<BIGSTRUCT>, the elements are stored by value in a single contiguous heap block, so the per-element cost is simply sizeof(BIGSTRUCT). The overhead lies elsewhere: the vector object itself, the memory manager's bookkeeping for that one block, and the slack between size() and capacity() left by the growth strategy (during a reallocation the old and the new block exist at the same time, so peak usage can briefly approach twice the payload).

If you store pointers instead (std::vector<BIGSTRUCT*>), with each struct allocated individually on the heap, then every element carries memory management overhead. On recent Windows machines:
For debug builds, this will be a minimum of sizeof(void*) + (sizeof(BIGSTRUCT) + 16), where the last item is rounded up to whatever the memory management "chunk size" is.
For release builds, this will be a minimum of sizeof(void*) + (sizeof(BIGSTRUCT) + 8), with the same rounding.

Both of the rounded values might be different (haven't checked lately).

In the pointer variant, the std::vector's array of pointers will be contiguous, but the structs on the heap will likely not be (it depends on how you create them).
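A small sketch contrasting the two layouts under the assumptions above (by-value elements are contiguous; individually allocated structs generally are not):

#include <iostream>
#include <vector>

struct BIGSTRUCT
{
    int iArr[10];
};

int main()
{
    // By value: one contiguous block; per-element cost is sizeof(BIGSTRUCT).
    std::vector<BIGSTRUCT> byValue(3);
    std::cout << "contiguous: "
              << (&byValue[1] == &byValue[0] + 1) << '\n'; // prints 1

    // By pointer: the pointer array is contiguous, but each struct is a
    // separate heap allocation with its own bookkeeping overhead.
    std::vector<BIGSTRUCT*> byPointer;
    for (int i = 0; i < 3; ++i)
        byPointer.push_back(new BIGSTRUCT);

    for (std::size_t i = 0; i < byPointer.size(); ++i)
        delete byPointer[i];
    return 0;
}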

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
