The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious or whatever) will result in your account being removed.
I have plenty of memory in my machine (32GB). I have never seen the machine use even half that - not even close. I wonder: do I really need Pagefile.sys and Swapfile.sys?
The reason I am asking is that I suspect my Macrium image files of the system drive are getting bloated by these files. They can grow quite large. Maybe Macrium leaves these files out of the images? They are quite clever - those fellas at Macrium.
Any expert advice out there?
Note: For the same reason I disabled Hibernation. It's a real Gigabyte hog.
An image backup should include the whole disk, which will include those files - but they will be zero length, and the content is counted as unused disk space, which isn't normally included in an image, I believe.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
You shouldn't disable the pagefile.
Everything in the kernel expects it to exist and has been built around that assumption, so if you remove it you will actually get worse performance, even if your RAM is more than enough.
You can probably make it a lot smaller though.
If you go to Control Panel -> System -> Advanced system settings -> Advanced tab -> Performance Settings -> Advanced tab -> Virtual memory -> Change, you'll get a recommended size that will quite probably work just fine for you.
I can't find it now, but I read it in a blog on the Microsoft homepage some years ago.
But as I understood it: the pagefile is built in at such a low level and is so heavily optimized that, when they added the possibility to disable it, they didn't actually remove or bypass any code in the kernel.
All the code that deals with the pagefile still runs whether the file exists or not, and he claimed that, because of how this is done, it is better performance-wise to have a small pagefile than none at all.
The other gotcha with disabling the page file is that when some rogue process (Chrome) goes berserk and eats all your RAM, if you have paging turned off, random applications will start crashing with OOM errors. I experienced that first hand about 5 years ago, when the admin for my project lab drank the no-pagefile kool-aid and turned it off on systems that didn't have enough RAM for all the applications we needed open to do our work. It took about a week to figure out what was wrong with our new systems, because the affected applications just terminated without displaying useful error messages.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
A pagefile wouldn't solve the problem of some process going berserk - it would just take a little longer.
I love automatic garbage collection. For a number of years I worked with C/C++ and its explicit malloc/free: for every single piece of C/C++ software I have worked on, a walkthrough of the memory use has revealed leaks. You fix them all before the release, but by the next release new leaks have in mysterious ways crept in. Always. Then you become too eager and free() memory that is still referenced by a pointer you overlooked... If I should point out one single benefit of switching from C/C++ to C#, the automatic garbage collection is the obvious winner.
If you truly need more virtual space than what is physically available, then you must of course have a pagefile to support it. For my home machine, I have never seen more than slightly over half of my physical RAM being in use for anything, even when I deliberately tried to start many "large" applications at the same time. I guess part of the reason is that none of my applications have excessively large data structures - executable code doesn't require pagefile space (except possibly while being debugged), and code pages that you don't reference are not brought into RAM at all, no matter how large the executable is.
My funniest out-of-memory story happened back in the DOS days, not to me but to one of my colleagues: this dBase application refused to start; we didn't have a clue why. After a long search for software causes, the PC was wheeled down to the hardware guys to verify that the hardware was OK. Down there, they plugged in probes and meters, and the application started with no problems. The PC was returned to my colleague, and the application wouldn't start. Back to the lab: the application started.
After several trips back and forth, it was realized that this application was so demanding that you had to keep the cover off the PC to give it enough space. Many old tower cabinets had covers that you had to slide backwards: This was deliberate, to make sure that you disconnected all cables (or: the power cable) before digging into the electronics. The HW guys of course had to plug the power cable back in to do the testing.
But they did not plug in the mouse: the dBase application's input was keyboard-only. So at boot-up, the mouse driver looked around for a mouse, found none, and terminated, freeing up a couple hundred bytes. When the PC was returned to be installed in the office, the mouse was plugged in (we had started using Windows 2.11), the mouse driver found hardware, and clung to its bytes of RAM. This was enough to make the total RAM requirements exceed the 512 kbyte available. (But 640 kbyte would have been sufficient for anybody...)