|
Sho nuff!
Jeremy Falcon
|
|
|
|
|
Raccoon city cult?
Just in case... resident evil joke
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
Nelek wrote: resident evil joke I was about to say... I don't get it.
Had a buddy back in the day who played Resident Evil (talking like PS1 days), but I never played it. Sim City on the other hand...
Jeremy Falcon
|
|
|
|
|
There's a film series (Milla Jovovich), at least five animated films, and a Netflix series or two out there as well
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
My Mice have a pad in the garage
I know because they keep stepping on the Mouse Trap
I guess they like Peanut Butter Crunchy style
10 this summer I need a Cat NOT sorry HoneyWitch
|
|
|
|
|
I'll make some statements I believe to be true and then you can respond to each statement with true/false & reasoning.
On Windows x64 (and maybe Linux too) a (normal) process
1. gets an address space which is 4GB in size when the process starts
2. OS modules take up a certain portion of that address space (1GB?)
3. The rest (3GB) is used for the stack & heap of the running process
4. As the app instantiates new objects (allocates memory for objects) on the heap, the amount of memory the process uses grows -- but it can never grow beyond the 3GB (4GB total), right?
5. ## This is a biggie ## A process can never eat up more memory than its address space allows (3GB, 4GB total), so a process can never really impact another process, because each one is limited to 4GB, right?
6. Extreme Example - If there is 128GB RAM in the hardware and we say the OS (and associated services) takes up 28GB (to make things easy) and there are two services running (2 x 4GB = 8GB), then this machine could never run out of memory, since it would have 92GB just sitting idle
7. Driving A Point Home - So when a developer notices that the app he wrote running on the Server keeps crashing with an "out of memory" error, then looks around and says, "Hey, wait a second, I think your service (which has been running on the Server since long before the aforementioned dev's app) is eating up memory and making mine die", then that developer doesn't understand process address space, right? Right? Right!
This is also why
A. you can solve memory problems created by lots of processes running by installing more memory (if the hardware is further expandable)
B. You cannot solve memory problems of a service or app that crashes due to low memory (since it is simply eating its own memory) by installing more memory (even if the hardware is further expandable).
Agree? Agree some? Disagree? Disagree entirely?
modified 20-Sep-24 11:35am.
|
|
|
|
|
The 4GB limit is a Win32 limit, and the default in Win32 is 4 GB with 2 GB granted to the OS, effectively limiting the application to 2 GB. There is an option to make this split 1 GB / 3 GB, but I don't remember how to enable it. Exchange Server and SQL Server versions designed to run on 32-bit Windows Server used this option.
For 64-bit Windows, the limits are listed at Memory Limits for Windows and Windows Server Releases - Win32 apps | Microsoft Learn. Note that 32-bit applications are still limited to 4GB simply because 2^32 is 4GB. 64-bit application limits vary by OS version, generally growing with each new version of Windows.
|
|
|
|
|
Yes, that makes sense. I'm stuck in a Win32 mindset.
That's all very good info. I appreciate you reading and replying.
Thanks for your time.
|
|
|
|
|
Once upon a time, Windows and x86 hardware memory management were more closely related; the hardware could be much more fully exploited. But Microsoft wanted Windows to be The OS for all sorts of processors, so it was ported to Alpha, MIPS, PowerPC, and Itanium, and I believe there were beta releases for other CPUs as well that never made it to market. MS didn't want to build their memory management on mechanisms available only on x86, so instead of exploiting the x86 MMS to its fullest extent, they switched to a single, flat memory space, managed by software in several areas where x86 hardware could have been employed.
x86 hardware allows a process to have a number of segments with different properties, mapped dynamically into the address space. A DLL would be a typical case of a code segment. The contents of a database could be a data segment, like a memory-mapped file. The sum of the data segment sizes used simultaneously by a process must fit into the 32-bit address space (without overlapping), but in principle, a process can replace one segment (code or data) with another at runtime, if it really requires more than 4 GB of either code or data. All of it can reside in RAM; only segment descriptors are updated. Code and data are separate address spaces. The stack is a third address space. So in principle, a process could address 4+4+4 GB of memory (but the chips of the day didn't have enough address pins to allow addressing 12 GB of physical RAM).
The 386 came with a mechanism for making calls to the OS that switched to the segments of the OS. So the OS also had, in principle, a 4+4+4 GB address space, which was like a different dimension from the user space. There was no way the user could address OS space, whether intentionally or unintentionally. The problem was that switching between user and OS mode (with 4-ring graded protection, in addition to privileged/unprivileged instructions) took far too much time. MS refused to use the mechanism.
Raymond Chen tells in one of his blog posts that in an MS/Intel meeting, the Intel guys asked what would be MS' highest priority for performance improvement, if only a single thing could be improved. The Intel guys thought the MS answer was a joke: Faster handling of the illegal instruction interrupt. It was dead serious: Developers had identified it as the fastest way to enter privileged mode, so they used it as the basic mechanism for performing calls to the OS. They also put OS code/data and user code/data into a single address space, so that no update of segment or page tables is required. All three segments - a single segment each for code, data, and stack - were laid on top of each other, offering new and exciting possibilities for data and stack to be interpreted as code, for jumps into data structures, and other fascinating adventures, with half of the address space left for the OS. 2 GB should be enough for everybody. (There was an option for doing a 1+3 split, and 3 GB should most definitely be enough!)
Certainly: Today, you can map files into your address space, and DLLs are more common than ever. It is not based on the segment hardware of x86, but uses the same software mechanism on all CPU architectures regardless of the hardware support available. Sometimes, such as with the system call mechanism, abandoning the hardware mechanism can actually be faster, yet I have somewhat mixed feelings: Shouldn't they have put some more effort into improving the hardware, rather than abandoning e.g. the protection of OS code by isolating it in a different address space, when the hardware was designed for it? Maybe MS was too quick in switching to a single, flat, shared address space. (Or maybe it has to do with employing chief designers with backgrounds from architectures that had nothing comparable to offer.)
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
Are you the developer with the app that is dying or the app that is consuming memory like Lays/Walkers?
I’ve given up trying to be calm. However, I am open to feeling slightly less agitated.
I’m begging you for the benefit of everyone, don’t be STUPID.
|
|
|
|
|
I'm the one with the app that has been running on server for a long time (many years).
No out of memory errors.
Other comes along and has out of memory errors then thinks it's someone else.
Kind of annoying. I'm mostly just curious, because my main point is that "you can't really affect another process entirely (doing so would be almost like a malicious process causing another app to fault)", because your app is stuck in its own address space.
Now if the server has 4GB of memory, then yes, maybe my app taking memory could cause you issues, since neither of us really has the correct amount of space.
|
|
|
|
|
raddevus wrote: gets an address space which is 4GB in size when the process starts I'm not sure about the actual limit, but I don't think it's a low cap like that per process... maybe per thread. I dunno. Gonna have to check out obermd's link myself.
raddevus wrote: OS modules take up certain portion of that address space (1GB?) Not from an application's perspective. That died out in the Win 3.1 days. These days just about every OS uses a ring architecture, where 0 is the highest access and is usually only used by the kernel. Applications have the least amount of access, via something called protected memory, which virtually maps addresses to make your app think it can access whatever it wants, but it can't.
You'll never accidentally overwrite the OS's memory these days, but you can have a memory leak and take the system down that way.
raddevus wrote: The rest (3GB) is used for the stack & heap of the running process
For an application, there are 3 areas of memory: static, stack, and heap. Stack and heap get all the attention, but static is just as important. Static memory is used for things like literals that are embedded in the application itself. For instance:
#include <stdio.h>

const char *howdy(void) {
    return "Howdy"; /* lives in static storage, safe to return */
}

int main(void)
{
    printf("%s\n", howdy());
    return 0;
}
This is perfectly valid C code. The string "Howdy" is stored in the executable itself and can even be found with a hex editor. Unlike stack memory, it's not going to get wiped out so easily, which means you can return a pointer to it directly from a function. If you copied that literal into a local variable on the stack and returned the variable instead, things would start breaking.
raddevus wrote: As the app instantiates new objects (allocates memory for objects) on the heap, the amount of memory the process uses grows -- but it can never grow beyond the 3GB (4GB total) anyways, right? Depends. The whole idea behind swap (*nix) or page files (Windows) is to get around that. The OS extends the memory by moving things in and out of disk. Slow as dirt and not used nearly as much these days. But, if swap/paging is disabled, then yes.
raddevus wrote: 5 ## This is a biggie ## A process can never eat up more memory than its address space allows 3GB (4GB total) so a process can never really impact another process anyways because each one is limited to 4GB anyways, right? Correct, in a protected memory model. So, stay away from Win 3.1. I don't think the exact limit is 3GB, again, but an app can only screw up its own memory. However, if it does have a memory leak, it could make it impossible for other apps to allocate memory, and you get a crash that way. But their memory won't be corrupted unless there's a bug in the other app somewhere.
raddevus wrote: 6. Extreme Example - If there is 128GB RAM in the hardware and we say OS (associated services) take up 28GB (to make things easy) and there are two services running (2 X 4GB = 8GB) then this machine could never run out of memory, since it would have 92GB just sitting idle Assuming there are no memory leaks/bugs, yeah.
raddevus wrote: Driving A Point Home - So when a developer notices that the app he wrote running on the Server keeps crashing with "out of memory" error, then looks around and says, "Hey, wait a second, I think your service (which has been running on the Server long before aforementioned dev's app) is eating up memory and making mine die", then that developer is an idiot who doesn't understand process address space, right? Right? Right!
It depends. Could be your app. Could be another app. You don't have to guess, though. Every OS on the planet will tell you which processes are using the most memory. If it's Linux, use top or install htop and find out. You can also write a script to monitor them.
raddevus wrote: You cannot solve memory problems of a service or app that crashes due to low memory (since it is simply eating it's own memory) by installing more memory (even if hardware is furhter expandable). Depends on the nature of the bug in the service.
Jeremy Falcon
|
|
|
|
|
I've overstated many things.
My point is that a single process cannot just consume all memory on a box, right?
It's limited to some address space size, I'm guessing.
But, maybe I'm wrong, maybe a process can consume all 128GB of ram??
That's what I'm really curious about.
I think it cannot, because if it could, it would be quite easy for a malicious process to crash other processes and the OS.
When a process gets memory from the heap (for newing up objects), it gets it from its own address space. When that memory is gone, you get "out of memory", but that doesn't mean you ate all the memory on the box.
|
|
|
|
|
raddevus wrote: My point is that a single process cannot just consume all memory on a box, right? I already mentioned this... It's only available memory not already claimed. It's first come, first served. So, assuming you're 64-bit and have less than 8TB of memory, then it can consume all the remaining available memory. Just like disks though, memory should never be "full" or else you'll run into problems.
raddevus wrote: It's limited to some address space size, I'm guessing. Obermd posted the limits. My reply was about how memory works, so between the two of these you should know the limits.
raddevus wrote: But, maybe I'm wrong, maybe a process can consume all 128GB of ram?? Again, the OS will use some. Some graphics cards may also use some. Your app will never have access to that. It's available memory it can consume. I took time to write that reply, please go through it.
raddevus wrote: I think it cannot becuase it would be quite easy for a malicious process to crash other processes and the OS if this were true. Nobody ever said otherwise. You asked if it could, we all said no, and I gave a detailed explanation of why not.
raddevus wrote: When a process gets memory from heap (for newing up objects) it gets it from its own address space. When that memory is gone then you get "out of memory" but doesn't mean you ate all the memory on the box. Again, I already talked about protected/virtual memory. Did you read my reply?
Jeremy Falcon
|
|
|
|
|
Oh, I read. I was just confused.
It often takes me a long while to understand things.
I'm a very slow developer. I've worked in IT for > 33 years & dev around 25 but I'm still honestly slow.
Just takes me a while to understand concepts.
That's it. Thanks for your help. I'll read it again.
|
|
|
|
|
Thanks for the honesty, buddy. We all have our thing. Like my biggest weakness is impatience (some might say being a douche).
Jeremy Falcon
|
|
|
|
|
1, 2, and 3 are true of 32-bit processes. I don't think they're true of 64-bit processes.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Yeah, I'm very curious whether a process can allocate memory beyond 3-4GB.
I saw a thing that made it look like it could:
384 GB or system commit limit, whichever is smaller. Windows 8.1 and Windows Server 2012 R2: 15.5 TB or system commit limit, whichever is smaller.
But that seems terribly dangerous. But what do I know.
I'm also interested in what "system commit limit" is, and whether it is still set to 3-4GB for 64-bit processes.
|
|
|
|
|
The system commit limit sounds like it might be the amount of dirty memory it can write to the swap, or maybe the amount it can use as swap? Not sure, but I'm more inclined to think it's the former rather than the latter, if either.
Edit: I'm surprised it isn't expressed in pages rather than GB, though.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Step away from the misbehaving application for a moment. Fire up a VM running the Windows OS you want, and write a bad, evil application. It does not have to be complex. Using a VM allows you to crash/hang it instead of your main box. Try malloc (comes off the heap) and static allocation (process space?). I'm rusty. Just play around.
That said, trust nothing Microsoft "documents." I fully confess that I am jaded. Trust but verify.
I have had my left leg in the embedded world and my right leg on the desktop. My two most common errors have been buffer overflows into malloc'd areas and stack overflows. Either way, it's an application issue.
Keep learning.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
I need to amend my earlier statement about 32-bit processes under Windows.
While it's not important, historically the process address space was divvied such that the lower 2GB was "user space" and the upper 2GB was "kernel space" (I may have that backward, but either way, it's half and half)
Some apps could be "3GB aware", sometimes run with a command-line switch like /3GB to enable it. In that case, the kernel was only mapped into 1GB of the address space. I'm not sure why all apps weren't this way, other than compatibility. An example of a 32-bit process that could be 3GB aware with a command-line switch is the old 32-bit versions of Image Line's FL Studio DAW software.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
For Linux and friends, start with man getrlimit and apropos oom . The whole OOM-killer environment is intriguing, to say the least. I believe it came from the *nix legacy of lots of users timesharing a machine with limited resources, particularly memory. A competitive, rather than cooperative, workload.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
raddevus wrote: So when a developer notices that the app he wrote running on
Presumably this is your real question and seems like it was already answered.
However, one somewhat related gotcha with C# (if relevant) is the Large Object Heap, which means an app can "run out" of memory long before physical/virtual memory is used.
|
|
|
|
|
jschell wrote: C# (if relevant) is the Large Object Heap. Which means an app can "run out" of memory long before physical/virtual memory is used.
Both of the Apps (Services) in question are indeed written in C# and the point you make is highly relevant and I really appreciate you mentioning it. Very helpful.
My question really hasn't been answered.
I am still very curious whether a 64-bit app can eat all of the memory on a large server (64GB RAM or something larger).
I'm guessing that it cannot, since
1. I believe that an app cannot allocate RAM beyond its address space.
2. This would be a huge security hole, since any malicious app could just eat all the RAM.
Someone mentioned a "leak", but even a leak is bound to the process's address space, and once the process has eaten/leaked all that RAM, it would throw the "out of memory" exception.
I'm just not sure whether the OS allows address spaces to be much larger than 4GB, and if it does, what the "default" value would be for each address space on a 64-bit Windows (Server) OS.
Very difficult thing to find.
I would write an app that eats all the RAM -- and I have done that -- but the box I have is limited to 8GB or 16GB, and you need more RAM, running the Windows Server OS, to really determine this.
|
|
|
|
|
raddevus wrote: I am still very curious if a 64 bit app can eat all of the memory on a large server (64GB RAM or something larger).
I'm guessing that it cannot since
1. I believe that any app cannot allocate RAM beyond its address space. If by 'address space' you mean the entire 2^64-byte space that a 64-bit process can cover with its addresses - 16 exbibytes, more than 16 million gigabytes - your assumption is right: A process cannot allocate that much space. And we will never see a computer with 16 exbibytes of RAM. Never ever.
In no general machine (excluding e.g. embedded processors) of the last 30-40 years has the address indicated by the program code been used directly as the physical RAM address. The virtual address in the program is translated to a different physical address in RAM through a set of hardware translation tables, managed by the OS, called the Memory Management System (MMS). Each process has its own set of MMS tables. The OS sets up the MMS tables for a tiny slice of the virtual address space. If the program presents a virtual address within this slice, the range covered by the MMS tables for that process, it is translated to a physical RAM address. If the virtual address is outside the range covered by the MMS tables, an interrupt is generated, and the OS will terminate the process. (Well, it might offer a mechanism for reporting the interrupt e.g. to a debugger that can inspect the process state before it is cleaned out.)
If by 'its address space' you mean just that slice of the total 64-bit virtual address space for which the OS has set up translation tables, then you are essentially right. The size of this slice can be a few hundred KiB, a few GiB, or many GiB - but the OS will not give you more than it is capable of handling.
When an app allocates RAM, the allocated space is, at the outset, within the address space translated by its MMS tables. If the malloc/new/... maps down to an OS request, the OS may say: 'There isn't enough unused space in the already mapped virtual address space, so I have to add another entry to the mapping tables, expanding the address space available to that process'. Before the OS does that, it will check that the process does not already control an excessive amount of address space. The limit is set by the OS to any value that it can handle.
In many systems, malloc/new/... starts out as a call to a runtime library in the process address space. As long as there is allocated, but unused, space available, the OS (and MMS) knows nothing about the allocation - it is nothing but making use of resources already allocated to the process. The library function has been given a range of valid addresses (the 'heap') to manage; allocation is left to that manager function. The OS may provide a function for the heap manager, so that if the heap overfills, the heap manager can ask the OS for an extension of the process user space, just like when the OS takes care of the entire allocation task: The OS will check that one process doesn't run away with a too big share of the resources.
2. This would be a huge security hole since any malicious app could just eat all RAM The MMS translation tables are managed by the OS; a user process doesn't have the privilege. So any extension of the address space, i.e. all new translation entries added to the tables, is done by the OS. If it accepts to give any process as much address space as is asked for, with no limits, no questions asked, no check against what the process already has, then You Asked For It, You Got It. Any decent OS will make checks, and put a cap on the virtual memory allocated to each process.
So the app can eat up all the (virtual) memory explicitly given to it by the OS that says 'Here, these addresses are yours, for now'. The OS takes the responsibility for mapping all those addresses to valid RAM. But no more. Any other address, not endorsed by the OS, causes a fatal interrupt.
An important point: When the app presents a virtual address to the MMS tables, the table entry may (and usually will) contain the physical RAM address. The table entry has a number of additional flags: One of them may indicate that the address is not a RAM address, but an address in a 'swap file' or 'page file'. This causes a (non-fatal) 'page fault' interrupt: An OS routine is triggered to first make sure that there is a free, unused RAM page. If there isn't, it will select one of the active pages to be written to the page file to free up a memory page; its table entry is changed from its RAM address to the file address it is written to, and the appropriate flag is set to show that this page is now on disk, not in RAM. Then the requested page is read from disk into the free RAM page, the disk address is changed to the RAM address in the MMS table entry, and the flag that said the page resides on disk is reset.
This entire operation takes so much time that the OS will let another process use the CPU while it waits for the disk access to complete. The requesting process is suspended. When the disk page has been read into RAM, the suspension is lifted. The process will retry the memory access, which will now succeed without generating an interrupt.
In the days when RAM was expensive, the total sum of virtual address space allocated to processes could be many times the RAM size. The OS might allocate virtual addresses up to the amount of disk space it could spare for the page file. In those days, you could bring a system to a near-halt by initiating a large number of processes requesting so much memory that almost all of it would be in a page file, and referencing data 'all over'. So, upon the first page fault, the OS lets a second process use the CPU, almost immediately causing another page fault, that is queued up for processing. A third process gets the CPU, causing yet another page fault, and so it goes on, the list of suspended processes growing, the disk being busy more or less 100% of the time, and any operation requesting disk data (that includes a lot of OS functions!) stalled in wait for disk capacity. This was referred to as 'thrashing', when the machine spends almost all its resources on managing waiting queues, with minimal resources available to complete tasks to clean up the traffic jam.
Today, RAM is so cheap that few active developers have experienced real thrashing, so bad that you had to restart the machine. Lots of people don't even know the meaning of the term 'thrashing'. Maybe OSes still come with thrashing prevention mechanisms: If a dangerous build up of queues is detected, maybe half of the processes are suspended, to let the other half complete its job first. The suspended ones will experience a grave delay, but that is better than none of them being able to complete. Maybe today's OSes have removed such processing, thinking them no longer needed.
Someone mentioned a "leak" but even a leak is bound to the processes address space & once the process has eaten / leaked all that RAM then it would throw the "out of memory" exception. This occurs within the valid address space of a single process. If the OS does not provide a function for extending the address space, no other process is affected by the leak. If the OS does honor extension requests without question, it may of course run out of paging file space. (Or, if you have disabled paging, which is possible at least under Windows, the RAM size may be the limit.) Any decent OS will refuse to allocate an unreasonably large address space.
I'm just not sure if the OS allows address spaces to be much larger than 4GB In a 64-bit environment, 4GiB is no 'natural' limit. It is set by the OS to any value it finds reasonable. I would say that if you expect a real need beyond 4GiB, then it isn't too much to ask that you indicate your needs to the OS at startup to have the limit raised for your process. The OS may have a default limit of 4GiB. That doesn't imply any real allocation of that size; it just puts a cap on the total memory that the process may request while running, whether by heap allocation, adding DLLs to its address space, or otherwise.
I would write an app that eats all the RAM -- and have done that, but the box I have is limited to 8GB or even 16GB and you need more RAM and running the Win Server OS to really determine this. Unless you turn paging off, when the OS runs out of free RAM, it will start throwing pages out to the paging disk before you manage to crash the system by allocating 'all' RAM to your process. In early, primitive OSes you might succeed in crashing the machine that way, but not with any modern, general OS. ("Modern" goes as far back as the 1970s.)
To study how memory is used, I'd recommend the SysInternals RAMMap program (RAMMap[^]). Even the standard Resource Monitor ('Memory' tab) gives you a lot of info, but RAMMap gives you more, and it has a function (in the 'Empty' tab) to do a cleanup, throwing out of RAM everything that doesn't need to be in RAM; it can be fetched back later if needed.
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|