|
|
Pretty sure it's not ATARI!
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
|
|
|
|
|
Didn't even see that one - cool!
But yes, nope
|
|
|
|
|
I'm not good with ornithology, but ... CROWN?
Sent from my Amstrad PC 1640
Never throw anything away, Griff
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Yep
|
|
|
|
|
I've had my i7 laptop with 8GB for at least 4 years now (started out on win 8.1).
It has worked fine. I could run
1. web browser
2. Android Studio
3. Android emulator
All at the same time and I never had a problem. Now I cannot run all three without all my RAM being gone.
I can now only (barely) run the
1. web browser
2. Android Studio
The only way I can use my computer for Android development now is to use an externally connected Android device to run/debug apps on. Very sad. This laptop cannot be upgraded to 16GB. I know. It's crazy and cheapo.
Actually, I can barely run the web browser when Android Studio is running. Not great.
When I open too many tabs the tabs just crash and burn.
Win 10 Ram Eater?
Anyways, I also noticed this in win 10 at work and now at home. Check out how many svchost.exe processes are running:
I can't even fit them all on one screen (in Task Manager); see the snapshot:
https://i.stack.imgur.com/bPaaD.png
What is going on? Has anyone else noticed this?
EDIT - Android Studio RAM
Android Studio eats > 1.0 GB
And it starts up two Java Processes
java.exe - 823 MB
java.exe - 333 MB
Oy!
Meanwhile, any browser eats up about 1 GB (my FF is at 723 MB).
That's 3 GB, and the rest of the 8GB is basically eaten up by random Win10 and other background processes. Such is modern life, I suppose.
|
|
|
|
|
IIRC the Android emulator is kinda greedy ... but I can run Chrome with 9 tabs open and VS2019 with a one-man-team-sized project in 4GB of RAM on an i5 Surface. I'm doing that right now!
I'd wonder about AS as well - I installed it once years ago and decided I didn't like it. Slow, and not that obvious IIRC. Maybe those two are the problem?
|
|
|
|
|
That is quite amazing - VS2019 in 4GB of RAM on the Surface.
Yes, it does seem to be Android Studio as the source of the problem. Very unfortunate.
Thanks.
|
|
|
|
|
I just started Visual Studio 2019 on my machine to see what it would look like.
Loaded up a basic-sized MVC project and it's at 300MB. Quite a difference compared to Android Studio and its desire for 2GB of RAM (AS @ 1.0GB and the two Java procs at 1 GB).
|
|
|
|
|
raddevus wrote: Check out how many svchost.exe processes are running:
I can't even fit them all on one screen (in task manager) see the snapshot :
https://i.stack.imgur.com/bPaaD.png
What is going on? Has anyone else noticed this? The old Windows security model was severely lacking... but process isolation is actually quite good. The reason browsers consume so much RAM is that they are also taking advantage of process isolation and job-object isolation. The operating system is now taking advantage of process isolation as well.
Plus... in the old service model when a service crashed a half-dozen other services crashed along with it. You can go back to the old behavior by changing a registry key but you will not gain much by doing that.
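The blast-radius argument above can be illustrated with a toy Python model (the service names and the grouping are made up for illustration; this simulates the idea, it doesn't touch real Windows services):

```python
# Toy model: services are grouped into host processes. When one service
# crashes, every service sharing its host process goes down with it.

def crash_survivors(hosting, crashed_service):
    """Return the services still running after `crashed_service` dies."""
    survivors = []
    for host in hosting:                 # each host is a list of services
        if crashed_service in host:
            continue                     # the whole host process is gone
        survivors.extend(host)
    return survivors

# Old model: many services crammed into one svchost.exe.
shared = [["dns", "dhcp", "eventlog", "time"]]
# New model: one service per host process.
isolated = [["dns"], ["dhcp"], ["eventlog"], ["time"]]

print(crash_survivors(shared, "dhcp"))    # [] - everything in the host died
print(crash_survivors(isolated, "dhcp"))  # ['dns', 'eventlog', 'time']
```

With shared hosting, one crash takes down four services; with per-service hosts, the damage stops at one.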
For a development box I'd recommend a minimum of 16GB RAM.
Best Wishes,
-David Delaune
|
|
|
|
|
That’s great information and very good points about what is happening.
And, you are, of course, correct, 16GB for development is a lot more realistic.
I just got away with it for so long I’ve gotten soft.
|
|
|
|
|
Windows has a lot of services you can disable. And if you are running W10, lock down the privacy settings and background apps...
Caveat Emptor.
"Progress doesn't come from early risers – progress is made by lazy men looking for easier ways to do things." Lazarus Long
|
|
|
|
|
abmv wrote: Windows has a lot of services you can disable.
Interesting. I will have to look into that. Thanks.
|
|
|
|
|
|
Thanks for the hints. I have seen numerous problems with the telemetry proc in the past. It was the reason I went from an HDD to an SSD.
|
|
|
|
|
This is representative of society today : waste resources because there are "plenty" of them instead of trying to optimize.
|
|
|
|
|
Ha!
While it makes me the source of derision more and more these days, I still have operational copies of TASM, NASM and MASM.
Also, an apple, a banana and an orange cost an awful lot less than the quantity of petrol needed to get me and the car somewhere.
|
|
|
|
|
Don't fold hands. Instead, do something. You have tons of service host processes? Find out which services run in which process. That will get you started with possible fixes.
|
|
|
|
|
This seems like an indecent number of svchost.exe processes.
I would investigate loaded dlls with ProcessExplorer to try and find what is really going on.
enum HumanBool { Yes, No, Maybe, Perhaps, Probably, ProbablyNot, MostLikely, MostUnlikely, HellYes, HellNo, Wtf }
|
|
|
|
|
The number of service host processes has been steadily going up since, IIRC, W7. MS used to cram dozens of services into a single process to save a bit of RAM. Newer versions have gone the other direction for troubleshooting and security reasons. It's much easier to figure out which service has gone bonkers, is making its host eat a CPU core, and needs a cluebat applied if you don't have 20 services in a single process. On the security front, running services in their own processes makes it harder for a malicious (or buggy and hackable) one to attack other services, by increasing the level of isolation between them.
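The troubleshooting point can be sketched in a few lines of Python (PIDs, CPU figures, and service names are invented for illustration): with per-process Task Manager numbers, a busy host maps straight to its one service, while a shared host only narrows it to a list of suspects.

```python
# Given per-process CPU usage and a map of which services live in which
# host process, the busiest host's service list is your suspect pool.

def suspects(cpu_by_pid, services_by_pid):
    """Return the services hosted by the process using the most CPU."""
    hot_pid = max(cpu_by_pid, key=cpu_by_pid.get)
    return services_by_pid[hot_pid]

# Old model: one svchost.exe hosting everything -> ambiguous culprit.
old = suspects({1001: 97.0},
               {1001: ["dns", "dhcp", "eventlog", "time"]})

# New model: one service per host -> the culprit is pinned down at once.
new = suspects({2001: 1.0, 2002: 96.0, 2003: 0.5, 2004: 0.2},
               {2001: ["dns"], 2002: ["dhcp"],
                2003: ["eventlog"], 2004: ["time"]})

print(old)  # ['dns', 'dhcp', 'eventlog', 'time'] - which one? No idea.
print(new)  # ['dhcp'] - there's your cluebat target.
```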
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
In the old days of virtual memory, when we had to pay for RAM chips (that is, more than small change) ...
There was the concept of "working set": The resources actually in use. If they are not in use, why should they occupy space in RAM?
Fortunately, both Windows and the x86/x64 architecture have roots back to those days. They can shuffle virtual memory pages in and out as they are needed. Your 8 GB machine can pretend to be a 16 GB machine, because you only make actual use of half of the data/code. All the memory segments have been assigned addresses in memory space, but they are not actually brought into RAM until you reference them.
As long as your working set fits into RAM, the performance of your system is affected very little. I can assure you: you are not referencing more than two million different memory pages (4 KiB each) all the time! Lots of the code performs functions you are not using (and if you start using them, it takes a handful of milliseconds to bring them in), or is initialization code that can be thrown out once run, or tables you are not referencing (say, user messages in some strange language you do not master anyway), and so on.
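That lazy bring-in can be sketched as a toy simulation (pure Python, no real OS calls; the page counts are invented): address space is assigned up front, but a page only occupies "RAM" on its first touch, and re-touching it costs nothing.

```python
# Minimal demand-paging sketch: virtual pages get addresses immediately,
# but a page only joins the resident set (RAM) when first referenced.

class AddressSpace:
    def __init__(self, num_pages):
        self.num_pages = num_pages   # virtual pages assigned, none resident
        self.resident = set()        # pages actually occupying RAM
        self.faults = 0

    def touch(self, page):
        if page not in self.resident:
            self.faults += 1         # page fault: bring the page into RAM
            self.resident.add(page)
        # already resident: no fault, effectively free

space = AddressSpace(num_pages=4_000_000)   # a "16 GB" virtual space
for page in [0, 1, 2, 1, 0, 2, 1]:          # a tiny working set, reused
    space.touch(page)

print(space.faults)         # 3 - one fault per page, on first touch only
print(len(space.resident))  # 3 - RAM holds just the working set
```

The 4-million-page address space costs nothing; only the three pages actually referenced ever occupy RAM.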
For a number of years, quite a few people have believed that virtual memory slows down execution even when you have so much RAM that you can load all the segments, not just your working set. They believe they speed up the machine by turning off paging. Placebo works: they just know it gives better performance. If it can be measured at all, I guess it would be by fractions of a percent - not least because most of it is handled in hardware that is active even if you turn off the software that handles a possible page fault. Disabling a page-fault handler that is never called anyway gives no speed-up.
The main effect of turning off paging is that your physical RAM size sets an absolute limit to the total size of programs you can have in memory. When you hit the ceiling, it is hard.
If you really are banging your head into the ceiling at 8 GB, it sounds as if paging is turned off on your machine. Turn it on, and you can go up to 16 GB of virtual memory space. You can go even higher, but the default setting in Windows is to make virtual memory twice the physical. It will cost you 8 GB of space on your primary disk, but that you can afford! (If it is a flash disk, the impact of a page fault will also be far less than with a magnetic disk.)
In the old days, when RAM was super-expensive, you could encounter machines with a very high virtual-to-physical memory ratio: one of the early OSes I worked with could in principle run with only two physical memory pages available for paging, after the resident parts of the OS had taken what they required. I never saw any machine starved that far for RAM, but we did run one with 18 physical RAM pages, handling 20 interactive users. You have two million physical pages to yourself, not 18. Two million physical RAM pages should be a large enough working set for everybody.
OK: I'll make an exception for things like weather forecasting, FEM modelling and some more extreme simulation models. If you are working with such things, you know that data volumes can be extreme. You are indicating nothing of this sort.
|
|
|
|
|
Great post and very interesting to me.
I also run the same setup (Android Studio, FireFox, Android Emulator) on a Virtual Machine that only has 2.5GB RAM given to it.
Just this morning I was thinking, "how is that even possible?"
It is quite slow but none of the processes ever crash like they do on my real machine that has 8GB.
I think what you've explained here (virtual memory) is in effect what the VM is doing. It is quite a bit slower because this VM only has 2.5GB of RAM, but on my main machine (8GB and an SSD) I probably won't see anywhere near the slowness, and I won't see the crashes.
I will try this out later today. Thanks for the great insight!
|
|
|
|
|
If a system's slowing to a crawl, it's probably swapping faster than the drive can keep up with, not that the page file is turned off.
At my last job I was briefly trying to work on a machine misconfigured by someone who'd drunk the "no page file good" Kool-Aid. When the RAM was maxed out - which was easy to do with my workload - applications would randomly crash when they page-faulted, rendering the system not obnoxiously slow but a chaotic crashfest that initially had me thinking hardware failure or a corrupted OS install.
|
|
|
|
|
It is highly unlikely that paging is "slowing a system to a crawl". Certainly not on a desktop system used for varied tasks, like software development. You have a real page fault, one that leads to a real, physical disk access, "every now and then". When your system comes to a crawl, have Resource Monitor running on your system, and take a look at the disk load. The total data volume handled by the disk system is shown in the running display at the top; below that, the queue length for each disk is shown. If you open the Storage section in the main display, you can see the "Active Time (%)" for each disk.
If the disk can't keep up with the paging requirements, the data volume would approach the transfer rate of your primary disk. A SATA-600 interface could in principle go up to 600 Mbyte/sec; you'll never see a magnetic disk that can sustain that, especially not on random accesses, while flash disks getting close to it is nothing special. The data volume shown is for all disks combined, and if you have more than one disk controller, the sum could actually go even higher.
Then take a look at the "Active Time" of your paging disk (usually the C: disk). Is it getting close to busy 100% of the time? 50%? Even 10%, over an extended period of time? I doubt it very much! Peaks: yes, of course! When you start up a huge application, thousands of pages must be brought in; that is unavoidable. Those "startup" disk accesses would have to be done no matter how much RAM you've got.
Also, when the disk driver initiates a physical disk operation, it will not block the entire system while the transfer is done: the CPU is released for other tasks, and the electronics of the physical disk interface transfer data directly between the disk and RAM. For paging, the thread causing the page fault will be blocked, but no other threads.
No matter how slowly the system is crawling, I have never during this millennium (and we could add another ten years) traced it down to the paging disk being the bottleneck. Practically always, it comes down to resources (in RAM) being locked by one thread (/process), while the queue of other threads waiting for that resource builds up. Lots of developers never analyze their locking of resources, but rather, "to be on the safe side", hold locks throughout lengthy operations, across disk accesses or network interactions, thereby blocking others out. This is the main cause of systems going into a crawl. Besides, you still see lots of programmers running busy-loops waiting for a resource to be released; that certainly doesn't improve the situation!
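A small Python sketch makes the point (the 0.05 s sleep is a made-up stand-in for a disk or network access): holding a lock across the slow operation serializes four otherwise independent threads, while locking only the brief shared-state update lets them overlap.

```python
import threading
import time

N_THREADS = 4
WORK = 0.05                       # stand-in for a slow disk/network access
lock = threading.Lock()

def coarse():
    with lock:                    # lock held ACROSS the slow operation:
        time.sleep(WORK)          # every other thread queues up behind it

def fine():
    time.sleep(WORK)              # slow operation done OUTSIDE the lock
    with lock:
        pass                      # only the brief shared update is locked

def run(worker):
    threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

t_coarse = run(coarse)            # ~N_THREADS * WORK: fully serialized
t_fine = run(fine)                # ~WORK: the sleeps overlap
print(f"lock across I/O: {t_coarse:.2f}s, lock after I/O: {t_fine:.2f}s")
```

The coarse version takes roughly four times as long with only four threads; with dozens of threads, that is exactly the "crawl" described above, and the disk is entirely innocent.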
It certainly is possible to set up demos to "prove" that paging is a problem, such as processing matrices of a few billion floats, in operations addressing every element. If RAM isn't big enough to hold the entire matrix, plus the other matrix or vector it is multiplied with, then you even have an easily reproducible case - it will always be slow as molasses.
You can set it up "to prove your point" - like proving the Pope to be Catholic. A system coming to a crawl is something very different, and does not have the same explanations. It is not "probably that it's swapping faster than the drive can keep up with".
This story about applications crashing on page faults - which OS was that on? I haven't met a single OS this millennium that allows a process to be started without allocating memory for all its segments. If a process then asks "give me more! I need it for my heap!", the OS says "sorry, there isn't that much space left, not even in virtual memory"; if the application ignores the error and goes on using an invalid memory address (usually zero), it causes a segment violation and usually crashes. (If it couldn't handle an out-of-memory condition, it probably couldn't handle a segmentation fault gracefully, either!)
This has nothing to do with delays from the paging mechanism. In your case, a sufficiently large virtual memory and a paging system would probably have solved the problem. Secondary problems might have to do with the OS not doing a proper cleanup (I suppose it is well known that early versions of Unix had an "out of memory" error message that was never displayed - the routine to print it out crashed because it called new() and there was no memory available...). Most such problems have been solved today, but if two applications share data structures, one of them crashing could easily lead to inconsistencies causing the other one to crash as well. As long as an OS allows applications to share data, there is no way to prevent that!
(One of the main reasons why Unix early got a reputation for being rock stable is that you had no way whatsoever to share data in memory, no semaphores or other synchronization mechanisms that could deadlock - each process lived in its closed container, not that different from the Docker containers we have today.)
|
|
|
|
|
Member 7989122 wrote: I can assure you: You are not referencing more than two million different memory pages (each 4 kibyte) all the time!
Yes, you are. That is the problem. Not at every instant, but within a minute you may reference 30 million pages. The OS will have to swap pages a lot.
You have a Java virtual machine running, .NET, an ARM emulator, an editor which holds in memory all the methods and variables and modules, so it can prompt you with parameters or suggest properties. You have the debugger running. You have a browser, each page with its images, rendered in memory to display it fast, running the JavaScript machine, with the DOM of the page, plus plugins, etc., etc.
Software is more bloated day by day, and every application/program/service assumes it has 4 GB of RAM to itself. Your operating system is thus emulating a 200 GB machine, and it is continuously swapping pages.
Quote: If you really are banging your head into the ceiling at 8 GByte, it sounds as if paging is turned off on your machine.
No, I'm sure he hasn't turned off paging. No modern software can run without paging. I don't think a standard Windows 10 installation can even boot without paging.
The system is simply slow because software uses a lot of memory and it swaps pages a lot. He clicks on a window; the OS dumps all the memory associated with the current window and loads the memory associated with the newly active one, because the whole of physical RAM is in use, and that takes a few milliseconds. In the end, every gesture means swapping, so applications don't run - they creep.
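That thrashing regime shows up clearly in a toy LRU page-replacement simulation (the page counts are made up): when a cyclically touched working set fits in physical RAM, only the warm-up faults; when it is even one page too big, every single reference faults.

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults for reference string `refs` with `frames`
    physical page frames, using LRU replacement."""
    ram = OrderedDict()                  # insertion order = recency order
    faults = 0
    for page in refs:
        if page in ram:
            ram.move_to_end(page)        # hit: mark as recently used
        else:
            faults += 1                  # miss: page fault
            if len(ram) >= frames:
                ram.popitem(last=False)  # evict the least recently used
            ram[page] = True
    return faults

cycle = list(range(5)) * 20   # working set of 5 pages, touched round-robin

print(lru_faults(cycle, frames=5))  # 5   - fits: faults only on warm-up
print(lru_faults(cycle, frames=4))  # 100 - one page short: EVERY access faults
```

Going from 5 frames to 4 doesn't make things 25% slower; it makes every reference a fault - which is exactly the cliff between "a bit tight" and "creeping".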
|
|
|
|
|