|
The number of service host processes has been steadily going up since, IIRC, Windows 7. MS used to cram dozens of services into a single process to save a bit of RAM. Newer versions have gone the other direction for troubleshooting and security reasons. It's much easier to figure out which service is going bonkers, eating a CPU core, and needs a cluebat applied if you don't have 20 services in a single process. On the security front, running services in their own processes increases the isolation between them, which makes it harder for a malicious (or buggy and hackable) service to attack the others.
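In case anyone wants to see the grouping for themselves: `tasklist /svc` lists which services live in which process, and the sketch below (my own illustration using the standard service control manager API, not anything Microsoft ships) prints each running service together with the PID of the process hosting it.

```c
/* Minimal sketch: list running services and the PID of their host process.
 * Uses only documented SCM calls; build with advapi32.lib. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "advapi32.lib")

int main(void)
{
    SC_HANDLE scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_ENUMERATE_SERVICE);
    if (!scm) { printf("OpenSCManager failed: %lu\n", GetLastError()); return 1; }

    DWORD needed = 0, count = 0;
    /* First call only asks how large the buffer must be. */
    EnumServicesStatusExA(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32, SERVICE_ACTIVE,
                          NULL, 0, &needed, &count, NULL, NULL);

    BYTE *buf = (BYTE *)malloc(needed);
    if (buf && EnumServicesStatusExA(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
                                     SERVICE_ACTIVE, buf, needed, &needed,
                                     &count, NULL, NULL))
    {
        ENUM_SERVICE_STATUS_PROCESSA *svc = (ENUM_SERVICE_STATUS_PROCESSA *)buf;
        for (DWORD i = 0; i < count; i++)
            printf("PID %5lu  %s\n",
                   svc[i].ServiceStatusProcess.dwProcessId, svc[i].lpServiceName);
    }
    free(buf);
    CloseServiceHandle(scm);
    return 0;
}
```

Services sharing a host show up with the same PID, so you can see at a glance how finely (or coarsely) your Windows version splits them up.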
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
In the old days of virtual memory, when we had to pay for RAM chips (that is, more than small change) ...
There was the concept of "working set": The resources actually in use. If they are not in use, why should they occupy space in RAM?
Fortunately, both Windows and the x86/x64 architecture have roots back to those days. They can shuffle virtual memory pages in and out as they are needed. You can pretend to have a 16 GB machine on 8 GB of RAM, because you only make actual use of half of the data/code. All the memory segments have been assigned addresses in the virtual address space, but they are not actually brought into RAM until you reference them.
As long as your working set fits into RAM, the performance of your system is very little affected. I can assure you: you are not referencing more than two million different memory pages (each 4 KiB) all the time! Lots of the code performs functions you are not using (and if you start using them, it takes a handful of milliseconds to bring them in), or is initialization code that can be thrown out once it has run, or tables that you are not referencing (say, user messages in some strange language you do not master anyway), and so on.
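If anyone wants to see this demand paging in action, here is a minimal sketch of my own (standard Win32 calls, nothing from the post above): committing a big block of virtual address space barely touches physical RAM until the pages are actually referenced.

```c
/* Minimal sketch: committed-but-untouched memory costs almost no physical RAM.
 * Build with psapi.lib. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

static SIZE_T working_set_mb(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc);
    return pmc.WorkingSetSize / (1024 * 1024);
}

int main(void)
{
    const SIZE_T size = (SIZE_T)1 << 30;                 /* 1 GiB of address space */
    printf("working set before:                 %zu MB\n", working_set_mb());

    char *p = (char *)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p) { printf("VirtualAlloc failed\n"); return 1; }
    printf("after committing 1 GiB (untouched): %zu MB\n", working_set_mb());

    for (SIZE_T i = 0; i < size / 2; i += 4096)          /* touch half of the pages */
        p[i] = 1;
    printf("after touching 512 MB of it:        %zu MB\n", working_set_mb());

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```

Only the touched half ends up in the working set; the rest stays as a promise in the page tables.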
For a number of years, quite a few people have believed that virtual memory slows down execution even when you have so much RAM that you can load all the segments, not just your working set. They believe that they speed up the machine by turning off paging. Placebo works: they just know it gives better performance. If the difference can be measured at all, I guess it would be by fractions of a percent, not least because most of the mechanism is handled in hardware that is active even if you turn off the software that handles a possible page fault. Disabling a page fault handler that is never called anyway gives no speed-up.
The main effect of turning off paging is that your physical RAM size sets an absolute limit on the total size of the programs you can have in memory. When you hit the ceiling, it is hard.
If you really are banging your head on the ceiling at 8 GByte, it sounds as if paging is turned off on your machine. Turn it on, and you can go up to 16 GByte of virtual memory space. You can go even higher, but the default setting in Windows is to make virtual memory twice the physical memory. It will cost you 8 GByte of disk space on your primary disk, but that you can afford! (If it is a flash disk, the impact of a page fault will also be far smaller than with a magnetic disk.)
In the old days, when RAM was super-expensive, you could encounter machines with a very high virtual-to-physical memory ratio: one of the early OSes I worked with could in principle run with only two physical memory pages available for paging, after the resident parts of the OS had taken what they required. I never saw any machine starved quite that badly for RAM, but we did run one with 18 physical RAM pages, handling 20 interactive users. You have two million physical pages to yourself, not 18. Two million physical RAM pages should be a large enough working set for everybody.
OK, I'll make an exception for things like weather forecasting, FEM modelling and some other extreme simulation models. If you are working with such things, you know that data volumes can be extreme. You are indicating nothing of the sort.
|
|
|
|
|
Great post and very interesting to me.
I also run the same setup (Android Studio, Firefox, Android Emulator) on a virtual machine that only has 2.5GB of RAM given to it.
Just this morning I was thinking, "how is that even possible?"
It is quite slow, but none of the processes ever crash like they do on my real machine that has 8GB.
I think what you've explained here (virtual memory) is in effect what the VM is doing. It is quite a bit slower because this VM only has 2.5GB of RAM, but on my main machine (8GB and an SSD) I probably won't see nearly the same slowness, and I won't see the crashes.
I will try this out later today. Thanks for the great insight!
|
|
|
|
|
If a system is slowing to a crawl, it's probably because it's swapping faster than the drive can keep up with, not because the page file is turned off.
At my last job I briefly tried to work on a machine misconfigured by someone who'd drunk the "no page file is good" koolaid. When the RAM was maxed out (which was easy to do with my workload), applications would randomly crash when they page-faulted, rendering the system not obnoxiously slow but a chaotic crash-fest that initially had me thinking hardware failure or a corrupted OS install.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
It is highly unlikely that paging is "slowing a system to a crawl". Certainly not on a desktop system used for varied tasks, like software development. You have a real page fault, one that leads to a real, physical disk access, "every now and then". When your system comes to a crawl, have the Resource Monitor running on your system, and take a look at the disk load. The total data volume handled by the disk system is shown in the running graph at the top; below that, the queue length for each disk is shown. If you open the Storage section in the main display, you can see the "Active Time (%)" for each disk.
If the disk couldn't keep up with the paging requirements, the data volume would approach the transfer rate of your primary disk. A SATA-600 link can in principle carry 600 MByte/sec; you will never see a magnetic disk come anywhere near that, especially not on random accesses, while a flash disk doing a few hundred MByte/sec is nothing special. The data volume shown is for all disks combined, and if you have more than one disk controller, the sum could actually go even higher.
Then take a look at the "Active Time" of your paging disk (usually the C: disk). Is it busy close to 100% of the time? 50%? Even 10%, over an extended period of time? I doubt it very much! Peaks, yes, of course: when you start up a huge application, thousands of pages must be brought in; that is unavoidable. Those "startup" disk accesses would have to be done no matter how much RAM you've got.
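If you would rather read the same numbers programmatically than watch Resource Monitor, here is a minimal sketch of my own using the standard PDH performance-counter API (the counter paths are the usual English ones): it samples the disk's active time and the rate of hard page reads over one second.

```c
/* Minimal sketch: sample disk active time and hard page-ins for one second.
 * Build with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER diskTime, pagesIn;
    PDH_FMT_COUNTERVALUE active, pages;

    PdhOpenQueryA(NULL, 0, &query);
    /* "% Disk Time" is roughly the "Active Time" column in Resource Monitor. */
    PdhAddEnglishCounterA(query, "\\PhysicalDisk(_Total)\\% Disk Time", 0, &diskTime);
    /* Pages actually read from disk to satisfy hard page faults. */
    PdhAddEnglishCounterA(query, "\\Memory\\Pages Input/sec", 0, &pagesIn);

    PdhCollectQueryData(query);                    /* baseline sample         */
    Sleep(1000);
    PdhCollectQueryData(query);                    /* sample one second later */

    PdhGetFormattedCounterValue(diskTime, PDH_FMT_DOUBLE, NULL, &active);
    PdhGetFormattedCounterValue(pagesIn,  PDH_FMT_DOUBLE, NULL, &pages);
    printf("disk active: %.1f %%   pages read from disk: %.0f /s\n",
           active.doubleValue, pages.doubleValue);

    PdhCloseQuery(query);
    return 0;
}
```

Run it while the machine feels slow; if the paging disk really were the bottleneck, both numbers would be pegged high for long stretches, not just in short peaks.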
Also, when the disk driver initiates a physical disk operation, it does not block the entire system while the transfer is done: the CPU is released for other tasks, and the electronics of the physical disk interface transfer data directly between the disk and RAM. For paging, the thread causing the page fault is blocked, but no other threads.
No matter how slowly the system is crawling, I have never during this millennium (and we could add another ten years) traced it down to the paging disk being the bottleneck. Practically always, it comes down to resources (in RAM) being locked by one thread (or process), and the queue of other threads waiting for this resource builds up. Lots of developers never analyze their locking of resources but, "to be on the safe side", hold locks throughout lengthy operations, across disk accesses or network interactions, thereby blocking others out. This is the main cause of systems going into a crawl. Besides, you still see lots of programmers spinning in busy loops waiting for a resource to be released; that certainly doesn't improve the situation!
It certainly is possible to set up demos to "prove" that paging is a problem, such as processing matrices with a few billion floats, in operations addressing every element. If RAM isn't big enough to hold the entire matrix, plus the other matrix or vector that it is multiplied with, then you even have an easily reproducible case: it will always be slow as molasses.
You can set it up "to prove your point", like proving the Pope to be Catholic. A system coming to a crawl is something very different, and it does not have the same explanation. It is not "probably that it's swapping faster than the drive can keep up with".
This story about applications crashing on page faults: which OS was that on? I haven't met a single OS this millennium that allows a process to be started without allocating memory for all its segments. If a process then asks "give me more! I need it for my heap!", the OS says "Sorry, there isn't that much space left, not even in virtual memory". If the application ignores that error and goes on using an invalid memory address (usually zero), it causes a segment violation and usually crashes. (If it couldn't handle an out-of-memory condition, it probably couldn't handle a segmentation fault gracefully, either!)
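To make the distinction concrete, a tiny sketch of the failure mode described: the allocation itself fails cleanly, and the crash only happens if the program ignores that and dereferences the null result anyway.

```c
/* Minimal sketch: an out-of-memory condition is an error you can handle;
 * the segment violation only comes from ignoring it. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t huge = (size_t)1 << 46;          /* 64 TiB - far beyond any commit limit */
    char *p = malloc(huge);
    if (p == NULL) {
        /* The OS said "there isn't that much space left" - no crash involved. */
        fprintf(stderr, "allocation of %zu bytes refused\n", huge);
        return 1;
    }
    p[0] = 1;                               /* only reached if it really succeeded  */
    free(p);
    return 0;
}
```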
This has nothing to do with delays from the paging mechanism. In your case, a sufficiently large virtual memory and a paging file would probably have solved the problem. Secondary problems might have to do with the OS not doing a proper cleanup (I suppose it is well known that early versions of Unix had an "out of memory" error message that was never displayed - the routine to print it crashed because it called new() and there was no memory available...). Most such problems have been solved today, but if two applications share data structures, one of them crashing could easily lead to inconsistencies that cause the other one to crash as well. As long as an OS allows applications to share data, there is no way to prevent that!
(One of the main reasons why Unix early on got a reputation for being rock stable is that you had no way whatsoever to share data in memory, and no semaphores or other synchronization mechanisms that could deadlock - each process lived in its own closed container, not that different from the Docker containers we have today.)
|
|
|
|
|
Member 7989122 wrote: I can assure you: You are not referencing more than two million different memory pages (each 4 kibyte) all the time!
Yes, you are. That is the problem. Not exactly all the time, but in a minute you may need to reference 30 million pages. The OS will have to swap pages a lot.
You have a Java virtual machine running, .NET, an ARM emulator, an editor which holds in memory all the methods, variables and modules so it can prompt you with parameters or suggest properties. You have the debugger running. You have a browser, each page with its images, with the page rendered in memory to display it fast, running the JavaScript machine, with the DOM of the page, plus plugins, etc. etc.
Software is more bloated day by day, so every application/program/service pretends that it has 4 GB of RAM to itself. Your operating system is thus emulating a 200 GB machine and is continuously swapping pages.
Quote: If you really are banging your head into the ceiling at 8 GByte, it sounds as if paging is turned off on your machine.
No, I'm sure he hasn't turned off paging. No modern software can run without paging. I don't think a standard Windows 10 installation can even boot without paging.
The system is simply slow because the software uses a lot of memory and swaps pages a lot. He clicks on a window, and because the whole of physical RAM is in use, the OS dumps all the memory associated with the current window and loads the memory associated with the newly active window, and that takes a few milliseconds. In the end every gesture means swapping, so applications don't run, they creep.
modified 27-Aug-19 11:52am.
|
|
|
|
|
You may be making 30 million memory accesses per second to some memory page (that is one per 80-100 clock cycles), but those are essentially to pages already in RAM. (Actually, even 30 million sounds high: the great majority of memory references hit the cache and don't even go out to RAM.) If you really reference every corner of 120 GByte (30 million 4 KiB pages) every minute, then you are working with tasks such as weather forecasts or FEM, not general software development.
If you generated one page fault every 33 microseconds, there would be no way for the disk to serve the requests. Not even a flash disk can deliver 120 GByte/sec (30 million * 4 KiB) from the paging disk, so within a brief moment the threads not blocked on a page fault would have the CPU to themselves and run at top speed ... and that claimed 30 million accesses a second would drop to almost zero. In other words: I do not trust your figures at all.
objectvill wrote: I think that not even windows 10 can boot without paging.
At boot time, paging is certainly not activated! It needs an OS in place to set up the page tables properly; to begin with, all entries are empty. I don't know how far you stretch "boot time"; if it goes all the way up to the login dialog, the OS has certainly done that initialization. You can tell Windows not to use a paging file; then you will never page anything out to disk. I assume that read-only segments (e.g. code) are still read on demand from the .exe/.dll - the "memory mapped file" mechanism uses the page tables, and code access works very much the same way. So page tables are used; the hardware is there and cannot really be bypassed. You will have that logical-to-physical mapping in any case, whether you have a page file or not. You may call it "paging" because page tables are used; I do not, when the main purpose is to allow different applications to run in overlapping virtual address spaces mapped to distinct physical spaces, not to provide more virtual than physical memory.
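To make the "memory mapped file" point concrete, here is a minimal sketch of my own (standard Win32 calls; the file name is just an example): the mapping itself is cheap, and a page of the file is only read from disk when it is first touched.

```c
/* Minimal sketch: map a file into the address space; pages come in on demand. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("C:\\Windows\\System32\\kernel32.dll", GENERIC_READ,
                              FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    const unsigned char *view = mapping
        ? (const unsigned char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0)
        : NULL;
    if (!view) { if (mapping) CloseHandle(mapping); CloseHandle(file); return 1; }

    /* Setting up the mapping touched no file data; this first access is what
     * triggers the page-in of that one page. */
    printf("first two bytes: %02X %02X\n", view[0], view[1]);   /* the "MZ" header */

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```

Loading an .exe/.dll works essentially the same way, which is why code can still be brought in on demand even with the page file disabled.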
As I said before: take a look at the actual physical disk activity. In the old days, each disk had an LED indicating physical activity; today they do not, but the SATA controller may have a header for connecting an LED indicating activity on any of the disks it controls. The Resource Monitor can tell you for how large a percentage of the time the driver is waiting for the disk, but not even that necessarily indicates physical access: all (magnetic) disks today have fairly large RAM caches, so data may be served at RAM speeds.
You can add up the sizes of the JVM, .NET, the emulator, the editor..., and the sum may come to 120 GByte (i.e. 30 million pages), but they do not "have in memory all the methods and variables and modules". They are out on disk. When you are, say, prompted for parameters, and the code for that dialog happens not to have been used yet (since you started the application), there may be a 3 millisecond delay before it is in place in RAM. Especially for operations that require human intervention, taking three to four orders of magnitude longer than the page fault handling, the paging means nothing for the performance.
You may monitor page faults and see that in periods there are many thousands of them per second. Then you should know how the paging is handled: to sort pages in RAM into "recently referenced" and "not referenced for some time; candidate for yielding if RAM space runs out", the OS at regular intervals marks all pages in RAM as "not present in RAM" by resetting a hardware bit (per page) in the page tables. When such a page is referenced, an interrupt to the OS is raised. The OS looks at the physical RAM address in the page table: OK, this is a valid address, it is not zero, so all that needs to be done is to set the "page present" flag to prevent more such interrupts, and go on.
So you have zillions of page fault interrupts that only lead to a flag being set, with no paging operation. If the interrupt handler sees that the RAM address is zero, then it must start a more involved operation. I believe Windows has a third level: a set of RAM pages not used for a LONG time. When it does its regular sweeps resetting the present bit, if the bit was already reset (so the page hasn't been referenced since the previous sweep), the RAM address is zeroed in the page table and the page is moved to a pool of "standby" pages still physically in RAM, which is searched before any disk operation is initiated. If the faulting page is found there, its RAM address is inserted into the page table and the page is unlinked from the pool.
The OS also keeps a pool of "free" RAM pages: at boot, all pages are in this pool. As space is used, for code, data or stack, pages are moved from the free pool into the page tables. When a process terminates, the RAM pages it used for its stack and data segments are moved back to the free pool.
If RAM really is crowded, and the requested page is neither flagged as present (so no interrupt is generated), nor not-present-but-with-a-valid-RAM-address (so the interrupt handler just sets the present flag), nor found in the pool of standby pages (so only the memory address in the page table must be updated), it must be brought in from disk. This is what causes a real disk access - but that is only a small fraction of the page fault interrupts. If the free pool is not empty, one of its RAM pages is moved to the page table. If the free pool is empty, a page in the standby pool must yield: its RAM address is entered into the page table and its contents are overwritten. Its entry in the standby list is removed, so that next time it is referenced, causing a page fault interrupt, it will not be found and must be fetched from disk.
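As a toy model of that whole scheme (purely illustrative; this is not how Windows is implemented internally, just the state transitions described above), the little program below walks pages through "present", "mapped but present-bit cleared", "standby" and "on disk", and counts how many faults are soft (a flag fix-up or a standby re-link) versus hard (a real disk read).

```c
/* Toy model of the page states described above - NOT Windows code. */
#include <stdio.h>

enum state { PRESENT, MAPPED_NOT_PRESENT, STANDBY, ON_DISK };

static int hard_faults = 0, soft_faults = 0;

/* The regular sweep: demote pages that were not touched since the last sweep. */
static void sweep(enum state *page, int n)
{
    for (int i = 0; i < n; i++) {
        if (page[i] == PRESENT)                 page[i] = MAPPED_NOT_PRESENT;
        else if (page[i] == MAPPED_NOT_PRESENT) page[i] = STANDBY;
    }
}

/* A reference to page i, resolved the way the post describes. */
static void touch(enum state *page, int i)
{
    switch (page[i]) {
    case PRESENT:            break;                  /* no interrupt at all        */
    case MAPPED_NOT_PRESENT: soft_faults++; break;   /* just set the present flag  */
    case STANDBY:            soft_faults++; break;   /* re-link from the pool      */
    case ON_DISK:            hard_faults++; break;   /* the only real disk access  */
    }
    page[i] = PRESENT;
}

int main(void)
{
    enum state page[8];
    for (int i = 0; i < 8; i++) page[i] = ON_DISK;

    /* The working set is pages 0-3; first touches are hard faults (startup),
     * everything after that is resolved without the disk. */
    for (int round = 0; round < 3; round++) {
        for (int i = 0; i < 4; i++) touch(page, i);
        sweep(page, 8);
    }
    printf("hard faults: %d, soft faults: %d\n", hard_faults, soft_faults);
    return 0;
}
```

With this access pattern it reports 4 hard faults (the initial page-ins) against 8 soft ones, which is the point: counting fault interrupts tells you very little about disk traffic.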
So, you cannot count disk accesses by the number of page fault interrupts. You must look at the disk activity.
If you actually manage to generate 30 million page fault interrupts a second, you may still have zero disk accesses: the interrupt handler says "But I have got it here!". Take a look in the Memory section of the Resource Monitor: is the Physical Memory bar green all the way? (There is a grey part at the bottom; that is DMA buffers for IO equipment.) Is there never any dark blue "Standby", no light blue "Free"? Then you can go to the Disk section and observe how busy your paging disk is. If it is busy more than 20% of the time, then you may have to consider more RAM.
A few points worth noting: if your applications make heavy use of the file system on the same disk as your page file, a significant part of the disk activity may be other than paging. 20% disk activity may be quite normal with file-intensive applications running.
Second: a high number of "real" page faults is perfectly normal when you start a new application. The code and data segments are set up as an extension of the page file, and the page tables for all segments start out with the present flag reset and a zero RAM address. When your application starts up, referencing code and data, each first reference to a page is treated as a "real" page fault, and the page is brought in. For pages from data segments, space in the paging file is not allocated until such a page must yield because of RAM crowding and has been modified. Pages from code segments never take any space in the paging file. (For a debugger, the code segments being debugged are data segments!)
Third: code segments are shared. If you run five instances of your web browser, they address exactly the same RAM pages for the code. (For modified data, each has its own pages.)
Remember that only actively running code can generate page faults. If you have five instances of your web browser open, you are actively interacting with only one of them, and not with other applications. E.g. a media presentation or animation may run by itself in another window, but the RAM resources it requires are brought in place within milliseconds, and make up a tiny little fraction of the total functionality of the browser. Similarly with other systems: when you are not interacting with them, which is 99+ percent of the time, they will not generate page faults, only when you ask for an operation that you haven't requested for a long time.
It is true that modern software is extremely bloated. But lots of the bloat comes from tons of functions you never need, so they are never brought from the .exe/.dll file into memory. A lot of it is code run once (initialization stuff), or only as a result of explicit user operations taking several orders of magnitude more time than the page fault, and after two memory sweeps those pages are over in the standby pool, ready to be overwritten by pages from the active working set. So the bloat has far more effect on disk space requirements than on paging.
Now Java and the DOM are notorious for building large data structures. But even with those, 8 GByte is a lot. And even if the data structures are large, it is a very special application that references, say, every single node of the DOM tree all the time. A great deal of it goes first to "not present", then to "standby", and then possibly to the page file on a page-by-page basis. When a user presses the PageDown key, maybe one or two memory pages have been swapped out. If you have other processes running in the background, they may have had a more urgent need for RAM. So there may be a very slight delay bringing in that small branch of the DOM tree. Then it is in, and will stay present for as long as it is used.
I hear you saying that you have an extreme level of paging. I have heard such claims numerous times. When I have dived into it to see if the claims hold water, I have never found a paging disk bottleneck. The typical case is users who insist that they need more RAM, but when you walk up to them while they work, start Resource Monitor and show them that one third of the RAM they have got is "free", one third is "standby", and only one third is "in use". You can do that again and again, and every time they say: "But wait a second - if I start this and that and that and that, then the 'used' part goes up, see?" Yes, but not in your ordinary mode of working. I am not claiming that it is impossible to overload the RAM, I am just saying that you never do it in your ordinary work.
Those who regularly work with extremely memory-intensive tasks like FEM, weather or discrete simulation (an ARM emulator counts only if it emulates down to the gate level - but then you are a hardware developer, not a software developer) know what they are doing. Also, the software they use costs orders of magnitude more than 16 or 32 GByte of RAM. Those are not the ones coming around with vague suspicions about unbelievably high levels of paging.
|
|
|
|
|
I made a mistake by a factor of 1000, and not a single voice was raised...
30 million accesses a second makes 33 nanoseconds - not microseconds, as I wrote - between accesses, on average. I can assure you that no paging system can serve a hard page fault in 33 nanoseconds. That is like 100 clock cycles. Obviously it cannot be handled. But even referencing a new, swapped-out memory page every 100 clock cycles, on average, appears to be slightly on the high side.
|
|
|
|
|
I just upgraded my new-ish laptop to 32gb. Mostly so I can give my win7 vm up to 24gb RAM.
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
#realJSOP wrote: I just upgraded my new-ish laptop to 32gb.
I'm very jealous! My stupid cheap laptop only goes to 8GB.
|
|
|
|
|
I have two 8-year old laptops than can only go to 8gb (and they're both maxed out).
".45 ACP - because shooting twice is just silly" - JSOP, 2010 ----- You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010 ----- When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
|
|
|
|
|
My newish work laptop came with 32gb of ram; it's one of the few things I like about it. With a crapton of browser tabs and multiple copies of Visual Studio on my old 16gb machine, I could find myself swapping enough that, even with an SSD to read/write to, the system started lagging noticeably.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
8gb is fine for Win10. You must have some piece of software that is running in an infinite loop allocating memory for variables until you run out of space.
|
|
|
|
|
Fun fact of the day!
Each process on your PC has something called "Modified Page List Bytes", which is additional RAM that a process is using that is hidden from Task Manager.
This amount happily sits at around 8GB on my 32GB desktop machine. It cannot be picked up by Task Manager, and requires specialized RAM-based process monitors to detect (RAMMap, an in-depth dig into Windows Performance Counters, etc.).
Whilst such methods can tell you the total amount of invisible RAM being used, there is no way to tell what is using it, whether something is leaking it, or even how to easily free it up (although closing a process will free up its associated modified bytes).
Task Manager can show you at 30% total memory usage when in reality you could be sitting at 90%+ and about to start tossing "Out Of Memory" exceptions at any moment. Disable your pagefile to drastically speed up this process and experience it first hand.
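If you want to watch those lists without firing up RAMMap, here is a minimal sketch of my own that reads the relevant system-wide Memory counters through the standard PDH API (counter names are the usual English ones on Vista and later; the standby list is actually split across several priority counters, and this reads just the normal-priority one):

```c
/* Minimal sketch: read the modified/standby/free page list sizes.
 * Build with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

static void show(PDH_HCOUNTER c, const char *label)
{
    PDH_FMT_COUNTERVALUE v;
    PdhGetFormattedCounterValue(c, PDH_FMT_LARGE, NULL, &v);
    printf("%-28s %6lld MB\n", label, v.largeValue / (1024 * 1024));
}

int main(void)
{
    PDH_HQUERY q;
    PDH_HCOUNTER modified, standby, freeZero;

    PdhOpenQueryA(NULL, 0, &q);
    PdhAddEnglishCounterA(q, "\\Memory\\Modified Page List Bytes", 0, &modified);
    PdhAddEnglishCounterA(q, "\\Memory\\Standby Cache Normal Priority Bytes", 0, &standby);
    PdhAddEnglishCounterA(q, "\\Memory\\Free & Zero Page List Bytes", 0, &freeZero);
    PdhCollectQueryData(q);                 /* these are point-in-time values */

    show(modified, "Modified page list:");
    show(standby,  "Standby (normal priority):");
    show(freeZero, "Free & zero page list:");

    PdhCloseQuery(q);
    return 0;
}
```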
Have fun :p
-= Reelix =-
modified 26-Aug-19 13:46pm.
|
|
|
|
|
That's really great information and explains some additional things I've seen in the recent past, where things have crashed even though I have 0.5-1GB free (according to Task Manager).
Thanks for posting.
|
|
|
|
|
|
Quote Investigator is fantastic! I always enjoy their articles. Just last week I read one on the quote:
To Avoid Criticism,
Say Nothing,
Do Nothing,
Be Nothing
|
|
|
|
|
Even though it is svchost.exe, it doesn't mean that Windows 10 is at fault.
Exit Android Studio and you will see your RAM freed up.
From my experience, svchost is a generic service host; any service can run under it.
|
|
|
|
|
I use Android Studio + Emulator + Firefox + Thunderbird on an older AMD A10 on Ubuntu Linux and the whole thing seldom goes over 6GB of ram.
Sometimes I have IntelliJ open at the same time all well under 8GB.
I guess there's something in your Windows.
|
|
|
|
|
Check this out. I was just browsing through a folder and played an mp4 file; the video played and then, out of nowhere, I got this message*:
"Because you're accessing sensitive info, you need to verify your password."
https://i.stack.imgur.com/vbbGC.png[^]
What!?!
I couldn't even tell what app was causing this to appear. It seems to be the app that is playing the mp4 video. Crazy, Microsoft. Really crazy.
I did not enter my password, and the app played the video and everything else works the same. Seems so fake!
*I blotted my valid MS login email out.
|
|
|
|
|
It may be the video player: some of them try to access metadata online I think.
Which player were you using?
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: It may be the video player: some of them try to access metadata online I think.
Again, I believe you are correct.
OriginalGriff wrote: Which player were you using?
That was part of the craziness. It was the built-in Win10 one, I think?
Can't really tell. There is nothing in the Store app that lets me know what is running. I do see "Movies & TV", so I think that is what it is called.
The additional weird thing is that I've played the video a number of times since, to see if the popup would come back, and it didn't.
|
|
|
|
|
Hmmmm,
raddevus wrote: "Because you're accessing sensitive info, you need to verify your password."
Do you have OneDrive as your default save location?
raddevus wrote: *I blotted my valid MS login email out.
*I have X-ray vision.
Best Wishes,
-David Delaune
|
|
|
|
|
I don't have OneDrive as a default save location on my machine.
I wouldn't ever want that, really.
However, it is possible that the recent big update (1903, recently installed on my machine) may have changed something. That said, I checked, and the file is in my Downloads directory.
|
|
|
|
|
It is a real popup but I agree that they should make it clear which app is behind it.
Social Media - A platform that makes it easier for the crazies to find each other.
Everyone is born right handed. Only the strongest overcome it.
Fight for left-handed rights and hand equality.
|
|
|
|
|