|
I think OR is from the heraldic name ORO (also Italian for gold), very commonly used in cryptics.
Or
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
I had orifice, but I couldn't make "if" stick.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Do you think the clue was poorly written, Peter? I did think of using "may be" for "if!", but it didn't read well.
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S Thompson - RIP
|
|
|
|
|
Probably a bit clumsy, but no worse than many of mine.
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Edit: To be clear, I'm talking about user-facing machines rather than servers or embedded devices, and a hypothetical ideal. In practice CPUs need about 10% off the top to keep their scheduler working, for example, and there are a lot of details I'm glossing over in this post, so it would be a good idea to read the comments before replying. A lot of ground has been covered since.
When your CPU core(s) aren't performing tasks, they are idle hands.
When your RAM is not allocated, it's doing no useful work. (Still drawing power though!)
While your I/O was idle, it could have been preloading something for you.
I see people complain about resource utilization in modern applications, and I can't help but think of the above.
RAM does not work like non-volatile storage, where it's best to keep some free space available. Frankly, in an ideal world, your RAM allocation would always be 100%.
Assuming your machine is performing any work at all (and not just idling), ideally it would do so using the entire CPU, so the work completes quickly.
Assuming you're going to be using your machine in the near future, your I/O may be sitting idle, but ideally it would be preloading things you plan to use, so they launch faster.
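To make that last point concrete, here's a minimal sketch (my own illustration; the file path is made up): reading a file once while the machine is otherwise idle leaves its pages in the OS file cache, so a later "real" open of the same file comes out of RAM instead of off the disk.

#include <fstream>
#include <vector>

// Read the file once and discard the bytes; the useful side effect is that the
// OS keeps the pages in its file cache (standby memory on Windows), so the next
// reader gets them at RAM speed.
static void warm_cache(const char* path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return; // nothing to preload
    std::vector<char> buf(1 << 20); // read in 1 MiB chunks
    while (in.read(buf.data(), static_cast<std::streamsize>(buf.size())) || in.gcount() > 0) {
        // intentionally empty: we only want the reads to have happened
    }
}

int main() {
    warm_cache("C:\\example\\large-asset.bin"); // hypothetical path
}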
My point is this:
Utilization is a good thing, in many if not most cases.
What's that old saw? Idle hands are the devil's playground. Your computer is like that.
I like to see my CPU work hard when it works at all. I like to see my RAM utilization be *at least* half even at idle. I like to see my storage ticking away a bit in the background, doing its lazy writes.
This means my computer isn't wasting my time.
Just sayin'
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I recall reading a short essay years ago by a senior OS engineer (Microsoft or Apple, not sure) that said much the same. It makes good sense IMO.
Thanks for the reminder.
|
|
|
|
|
I think you didn't think that through to the end...
If any single piece of software took up all the resources, it would kill any real productivity...
Let us say VS takes all the memory just from opening a solution... Now I ask it to compile that solution... VS - by default, IIRC - will compile 8 projects in parallel, so it will try to fire up 8 instances of msbuild... But there is no free memory, so before each and every one of those 8 instances starts, the OS will have to swap... And the swap for the 4th instance of msbuild may take memory from the 1st instance, as it may be blocked on I/O and considered inactive... And memory swapping is very expensive...
I do agree that any app should utilize all the resources it needs, but it should also release them the moment it no longer needs them...
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
I did. I said in an ideal world RAM utilization would always be at 100%. That's a hypothetical. It's not intended to be real world, but rather illustrative of a point: RAM is always drawing power, even at idle. The most efficient way to use it is to allocate it for something, even if you do so ahead of time.
I did not say that it would or even should be utilized by one application.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
If I may interject: memory is always used at 100% by an app called "operating system". Parts that are not urgently needed are relinquished to other apps upon request.
In the scenario Peter described, how is VS going to know how much memory the MSBuild instances are going to need? Should they ask VS pretty please to release the memory? Is VS going to act as some kind of surrogate OS?
Memory hogging is not a disease of VS only; it's a virus that has spread to browsers and many others.
Mircea
|
|
|
|
|
I'm not necessarily endorsing this approach so much as observing it, since I haven't run any performance metrics on alternatives, but:
It seems to me that the OS effectively has the information it needs because of paging. It doesn't play out as each app knowing exactly how much memory is free, *but* there are plenty of ways to get an idea of general "memory pressure" in Windows, and paging allows an app to preallocate and let the OS manage which parts are actually in RAM at any given time.
Does it work? Well, I mean if it worked perfectly, people wouldn't keep needing new computers every 5 years, I suppose. (I'm half kidding here; there's a lot that goes into that.)
I don't know. But that seems to be how things operate now, say with VS, Chrome and Windows for example.
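As one concrete example of the kind of "memory pressure" signal I mean - a minimal sketch, assuming plain Win32; GlobalMemoryStatusEx is just the simplest one, and Windows also has event-driven memory resource notifications:

#include <windows.h>
#include <cstdio>

int main() {
    // Ask Windows for a coarse picture of physical memory pressure.
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (GlobalMemoryStatusEx(&status)) {
        // dwMemoryLoad is a rough percentage of physical RAM currently in use.
        std::printf("physical RAM in use: %lu%%\n", status.dwMemoryLoad);
        std::printf("available physical:  %llu MiB\n",
                    (unsigned long long)(status.ullAvailPhys / (1024 * 1024)));
    }
    return 0;
}

An app can poll something like this (or use the notification API) to decide whether preallocating more would be friendly or hostile to the rest of the system.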
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: I said in an ideal world RAM utilization would always be at 100%.
A perfectly balanced system has bottlenecks everywhere.
|
|
|
|
|
I think the "ideal world" you imagine ceased to exist with the invention of multitasking. To me, the ideal world is one in which the demands of the 200 processes running on a PC for memory, I/O bandwidth, and other resources are balanced by the operating system to achieve the best overall performance. Run one memory-hogging program on a PC and you get 100% memory utilization. Run two such programs and what you get is virtual memory page thrashing and a thousandfold decrease in performance. I remember early Java programs that were like this.
|
|
|
|
|
SeattleC++ wrote: Run two such programs and what you get is virtual memory page thrashing and a thousandfold decrease in performance.
If that really were the case, I would immediately throw that OS out of the window!
Of course you cannot expect 200% performance - 100% for each process. You must expect that the process (hence working set) switching takes some resources. But no program uses all the memory all the time; the reality is that even when you think your program is all over the place, there are plenty of untouched physical memory pages that can be used by another process. Any decent MMS hardware and OS can handle that quite well. If your program really did make use of 100%, then any 10% (or maybe even 1%) increase in the data structures of that single program would take long strides towards that "thousandfold decrease in performance".
If you keep insisting that your program actually makes use of 100% of the RAM: Take a look in Resource Monitor, the Memory tab: Is it really true that the color bar is all green, "In use", or orange, "Modified"? No dark or light blue? If you flush memory - my tool for doing that is Sysinternals RamMap; its Empty menu has commands for emptying standby lists and flushing modified pages - there is a definite chance that the color bar goes at least a little blue at the right end. Probably much more than you would expect! Let your program run, and see how long it takes before all that blue has turned green/orange. Probably much longer than you would expect!
I am of course assuming that you have a "reasonable" amount of RAM. In the old days of 16-bit minis, a memory card with a mebibyte of RAM cost around USD 10,000 (the Euro wasn't invented then); inflation would bring that to USD 50,000 today, so you didn't buy RAM that you didn't need. This one mini had an OS that would actually run (or maybe I should say 'crawl') with two 2 Ki pages of RAM available to user processes (the rest taken by the OS; "4 Ki should be enough for everybody!"). The only ones actually running on 4 Ki for paging were the OS developers doing stress tests to see if practice matched theory. It did, but that configuration failed to enter the Top500 list. Those OS developers claimed that any system doing physical paging more than 5% of the time is heavily starved for memory. I have never encountered any production system doing that much paging. But if you regularly run two processes side by side, each with an active working set that fills all of your RAM, you really do need to buy some more RAM!
If my memory is correct, the 16-bit minis we used for interactive program development around 1980 initially had 256 Ki of RAM, which was increased to 512 Ki a year after installation. Each machine (we had three of them) served 24 interactive terminals, running screen-oriented editors (although character oriented, 24 by 80, no graphics) and Pascal / Fortran compilers. That worked very well. It must be said that those machines had an MMS that was advanced for its time, and a very good interrupt handling system: the first user instruction in the interrupt handler was running 900 ns after the arrival of the interrupt signal. I guess both were essential for the machine's ability to handle lots of processes fighting for resources.
And then (and this is essential!):
As long as the 24 users were requesting RAM and CPU, the OS managed it very well. However, it had a file system design requiring a global lock to be set on the directory root before any disk operation! (Anyone familiar with the Python GIL?) Fetching a file now and then is OK, but when 24 students, at the call of the bell, type 'logout' and rise from their chairs, a lot of file system operations line up for the global lock. We (I was a TA) had to extend the break between lessons for the machines to be able to complete all the file system operations for the 24 user sessions.
That is the problem in lots of software: programmers reserve a resource much earlier than needed, and release it long after the work on it has been done, e.g. in a final 'cleanup' stage. They reserve much more (e.g. the entire file system) when they actually need only a small part of it (e.g. a single file). When the system runs slow as molasses, usually the bottleneck is not CPU saturation or paging, but lots of processes waiting for some resource to become available - a resource locked by some process that probably could have released it long ago, or did not yet need to reserve it.
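As a minimal sketch of that point (the names are illustrative, not from any real system): take the lock as late as possible, hold it only for the step that actually touches the shared resource, and release it before the unrelated work continues.

#include <mutex>
#include <string>
#include <vector>

std::mutex directory_lock;              // the shared resource everyone fights over
std::vector<std::string> directory;     // stand-in for a global directory structure

void close_session(const std::string& user) {
    std::string record = "closed: " + user;  // per-session work; no lock needed here
    {
        std::lock_guard<std::mutex> guard(directory_lock); // taken as late as possible
        directory.push_back(record);                       // the only step needing exclusivity
    }                                                      // released as early as possible
    // logging, statistics, etc. continue without holding the lock
}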
My memory goes back to the old batch oriented mainframes of the 1960s and 70s - in my student days, a few of them were still around - where you had to indicate in the job prologue which files and devices your program would refer to, how much memory it would require, how many seconds of processing time you expected, and how much output you expected. The OS would then pack as many programs side by side as would fit in physical memory, selecting those indicating much I/O but little CPU to run in parallel with those having high CPU requirements but little I/O, so that both classes of resources could be utilized at the same time. In our 'Algorithms' course, optimizing a job queue for maximum total utilization was one of the problems we were given to solve, and we were presented with the solutions in the OS-1100 (aka Exec-8) OS. (We did use punched cards on the Univac with its batch OS for 'Programming 101', but after that we never touched the beast; we went to interactive terminals.)
|
|
|
|
|
With 100% CPU utilization, you will find that you can barely move the mouse or press a key in Windows (or macOS, Linux).
Likewise, at least on Windows, if your RAM allocation goes above 90%, or the "available" RAM as shown in the taskbar drops below 1 GB, whichever comes first, your system will become significantly sluggish.
So no, your "ideal world" doesn't exist, and 100% utilization of any computer resource will lead to a system that is pretty much impossible to use.
|
|
|
|
|
If the firmware I have to develop ends up using 100% of the resources, it's a disaster, as it would be impossible to add functionality without changing the hardware or messing with features that already exist.
Also, a personal computer is a flexible tool: flexibility requires that resources be available at any given time.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
The shortest horror story: On Error Resume Next
|
|
|
|
|
Firmware has other considerations. I'm talking PCs primarily, user machines.
If those resources are queued up and preallocated, they are that much *more* ready to use than if you suddenly need gigs of RAM that aren't already waiting in the wings. This is precisely why modern apps and frameworks (like .NET) do it.
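A trivial sketch of what I mean by preallocating, in plain C++ rather than .NET (the sizes and names are just examples): claim the block once, up front, so the hot path never has to go back to the allocator.

#include <cstdint>
#include <vector>

struct FrameScratch {
    std::vector<std::uint8_t> buffer;
    explicit FrameScratch(std::size_t bytes) {
        buffer.reserve(bytes); // one allocation, up front, while we can afford it
    }
};

int main() {
    FrameScratch scratch(64u * 1024 * 1024); // claim 64 MiB of capacity once
    // per-frame work reuses scratch.buffer without touching the allocator again
    (void)scratch;
    return 0;
}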
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
Quote: I'm talking PCs primarily, user machines.
In this hypothetical ideal world where everything is at 100% utilisation on a user's PC, anything the user does (like moving the mouse 2mm to the left) will have to wait for the utilisation to drop before that action can be completed.
Even in this hypothetical world scenario, it still seems like a bad idea to have everything at 100% utilisation: users don't want a 15s latency each time they move the mouse.
(In the real world, of course, it's worse - CPUs and cores scale the power they draw with their load, so increasing the load to 100% makes them draw more power. In the real world, it makes sense to have as little CPU utilisation as possible, and to leave as much RAM as possible free for unpredictable overhead.)
|
|
|
|
|
To be clear I did not say the CPU should *stay* at 100%. I said when it's performing work, it should use it all.
And yes, realistically you want about 10% off the top for the scheduler to work effectively, if I'm being technical.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
honey the codewitch wrote: I like to see my CPU work hard when it works at all.
In the space I work in, which is different from yours, I like it when the CPU load is less than 50%. That gives me a buffer for when the new feature I added starts, for some reason, chewing up that additional headroom.
And for a database I want to see even less than that. Similar reason, but I expect more surprises with the db than with the application. It gets really scary when the database is running at a sustained utilization of 80%.
|
|
|
|
|
I probably should have been clearer that I am primarily talking about traditional user-facing machines like desktops and laptops here, rather than servers and embedded systems.
Utilization is important in those arenas too, but both how you achieve it and where you want it are going to be dramatically different.
I sure hope that when I'm searching a distributed partitioned view in SQL Server, all the logical "spindles" it's partitioned across are speeding right along together. I also expect a database server to be less CPU-heavy and more storage-heavy, meaning your primary utilization metric will be storage and I/O. That's how you know your queries are being properly parallelized, for example.
It's different considerations to be sure, even if utilization sits at the center of all of them.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
At one time unused physical RAM in Windows machines was used for disk cache, thereby keeping RAM utilization at 100% for all intents and purposes.
Software Zen: delete this;
|
|
|
|
|
That's actually in theory a good idea. I wonder why they stopped allocating all of it.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
At one time they did use all of it, minus a fraction kept handy as reserve. In today's world, with SSDs and much faster 'disk' interfaces, I don't know if this is still valuable or not.
The fact that the unallocated RAM was used for disk cache wasn't visible to the user or to applications.
Software Zen: delete this;
|
|
|
|
|
I don't even necessarily mean for disk cache, just used as something.
Check out my IoT graphics library here:
https://honeythecodewitch.com/gfx
And my IoT UI/User Experience library here:
https://honeythecodewitch.com/uix
|
|
|
|
|
I think they just changed it so it doesn't appear that way anymore.
It still predictively loads things into RAM, but the presentation is different, so it doesn't appear that the RAM is in use.
I think they changed that because people were like "WTF MSFT WHY USE ALL MY RAM?!"
If it got the prediction wrong, it takes almost nothing for the OS to chuck it and use the memory for whatever is actually needed instead.
|
|
|
|
|