|
I just realized I am on vacation tomorrow. Some stupid paperwork at the Govt office.
cheers,
Super
------------------------------------------
Too much of good is bad, mix some evil in it
|
|
|
|
|
No problem - I'll set it for you.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
I have sent an email with a few CCC made. It's not very refined yet, but please feel free to discard it and set something tough, as I will be away for the long weekend.
cheers,
Super
------------------------------------------
Too much of good is bad, mix some evil in it
|
|
|
|
|
OK - I'll go with (at least one) of those for you.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
That's a bit easy for you, mush
"We can't stop here - this is bat country" - Hunter S Thompson - RIP
|
|
|
|
|
Took over an hour ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Nice one. I was going to do an anagram, but wasn't happy with the clues I could think of.
It was a choice between:
Two articles do not weigh much but they mix things up. (7)
and
Shaken like a ragman. (7)
|
|
|
|
|
As every word is an anagram of at least itself, perhaps simply:
Anagram (7)
|
|
|
|
|
I have plenty of memory in my machine (32GB). I have never seen the machine use even half that - not even close. I wonder: Do I really need the Pagefile.sys and the Swapfile.sys?
The reason I am asking is that I suspect my Macrium image files of the system drive are getting bloated by these files. They can grow quite large. Maybe Macrium leaves these files out of the images? They are quite clever - those fellas at Macrium.
Any expert advice out there?
Note: For the same reason I disabled Hibernation. It's a real Gigabyte hog.
|
|
|
|
|
Cp-Coder wrote: Do I really need the Pagefile.sys and the Swapfile.sys?
Yes, since it is more than just virtual memory.
Windows 7 includes a file caching mechanism called SuperFetch that caches the most frequently accessed application files in RAM so your applications will open more quickly.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
Just try it. You can safely disable paging, and turn it back on later if you want.
Backup software shouldn't be backing up the page file, but who knows, it might be.
|
|
|
|
|
Good to know. Thanks
|
|
|
|
|
An image backup should include the whole disk, which will include those files - but they will be zero length, and the content is counted as unused disk space, which isn't normally included in an image, I believe.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
An irrelevant distinction.
|
|
|
|
|
Yes, it seems there is not much to gain by disabling these files. At least not as far as system images are concerned.
|
|
|
|
|
You could turn them off. You're going to end up turning them back on.
According to Macrium Support, those files are added to the image, but they are zero length in the image even though they show a file size if you mount the image in Explorer.
|
|
|
|
|
Thanks! That makes sense. I have noticed that whether they are on or off does not seem to affect image size.
|
|
|
|
|
You shouldn't disable the pagefile.
Everything in the kernel expects it to exist and has been built around that assumption, so if you remove it you will actually get worse performance, even if your RAM is more than enough.
You can probably make it a lot smaller though.
If you go to Control Panel -> System -> Advanced system settings -> Advanced tab -> Performance Settings -> Advanced tab -> Virtual memory -> Change, you'll get a recommended size that will quite probably work just fine for you.
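If you want hard numbers before touching that dialog, here's a minimal sketch (my illustration, not from the thread) that asks Windows for the current physical RAM and the commit limit (RAM plus pagefile) via the Win32 GlobalMemoryStatusEx call; comparing the two gives a feel for how much pagefile you actually rely on:

/* Minimal sketch (assumes a Windows C toolchain, e.g. cl memstat.c):
 * prints physical RAM and the commit limit (RAM + pagefile) so you can
 * see how much headroom you actually have.                              */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof ms;                  /* must be set before the call */
    if (!GlobalMemoryStatusEx(&ms))
    {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Memory load     : %lu%%\n", ms.dwMemoryLoad);
    printf("Physical RAM    : %llu MB total, %llu MB free\n",
           ms.ullTotalPhys / (1024 * 1024), ms.ullAvailPhys / (1024 * 1024));
    printf("Commit (RAM+PF) : %llu MB total, %llu MB free\n",
           ms.ullTotalPageFile / (1024 * 1024), ms.ullAvailPageFile / (1024 * 1024));
    return 0;
}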
|
|
|
|
|
|
Jörgen Andersson wrote: You shouldn't disable the pagefile.
Everything in the kernel expects it to exist and has been built around that assumption, so if you remove it you will actually get worse performance.
I am struggling to understand which kind of OS kernel functions are sped up by the presence of a pagefile. It certainly isn't obvious to me.
Could you provide a couple of examples (including an explanation of how the presence of a page file speeds them up), or a link to some source that you base your statement on?
|
|
|
|
|
I can't find it now, but I read it in a blog on the Microsoft homepage some years ago.
But as I understood it: the pagefile is built in at such a low level and is so optimized that, when they added the possibility to disable it, they didn't actually remove or bypass any code in the kernel.
All the code that has to do with the pagefile is still used whether the file exists or not. And he claimed that, performance-wise, it is better to have a small pagefile than none, because of how this is done.
|
|
|
|
|
|
A pagefile wouldn't solve the problem of some process going berserk - it would just take a little longer.
I love automatic garbage collection. For a number of years, I worked with C/C++ and its explicit malloc/free: for every single piece of C/C++ software I have worked on, a walkthrough of the memory use has revealed leaks. You fix them all before the release, but for the next release, new leaks have in mysterious ways crept in. Always. Then you become too eager and free() memory that is still referenced by a pointer that you overlooked... If I had to point out one single benefit of switching from C/C++ to C#, the automatic garbage collection is the obvious winner.
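For anyone who hasn't fought this in person, a contrived C sketch (mine, not from the post) of both failure modes just described - the forgotten free() and the over-eager one:

/* Contrived illustration of the two classic malloc/free mistakes: a leak
 * (an allocation that is never freed) and an over-eager free that leaves
 * another pointer dangling.                                               */
#include <stdlib.h>
#include <string.h>

static char *duplicate(const char *s)     /* caller owns the returned block */
{
    char *copy = malloc(strlen(s) + 1);
    if (copy) strcpy(copy, s);
    return copy;
}

int main(void)
{
    char *a = duplicate("kept");
    char *b = a;                  /* second pointer to the same block       */

    duplicate("lost");            /* leak: the return value is never freed  */

    free(a);                      /* over-eager cleanup ...                 */
    /* strlen(b);   <- ... b now dangles; any use is undefined behaviour    */
    (void)b;

    return 0;
}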
If you truly need more virtual space than what is physically available, then you must of course have a pagefile to support it. For my home machine, I have never seen more than slightly over half of my physical RAM being in use for anything, even when I deliberately tried to start many "large" applications at the same time. I guess part of the reason is that none of my applications have excessively large data structures - executable code doesn't require pagefile space (except possibly while being debugged), and code pages that you don't reference are not brought into RAM at all, no matter how large the executable is.
My funniest out-of-memory story happened back in the DOS days, not to me but to one of my colleagues: this dBase application refused to start; we didn't have a clue why. After a long search for software causes, the PC was wheeled down to the hardware guys to verify that the hardware was OK. Down there, they plugged in probes and meters, and the application started with no problems. The PC was returned to my colleague; the application wouldn't start. Back to the lab: the application started.
After several trips back and forth, it was realized that this application was so demanding that you had to keep the cover off the PC to give it enough space. Many old tower cabinets had covers that you had to slide backwards: This was deliberate, to make sure that you disconnected all cables (or: the power cable) before digging into the electronics. The HW guys of course had to plug the power cable back in to do the testing.
But they did not plug in the mouse: the dBase application input was keyboard-only. So at boot-up, the mouse driver looked around for a mouse, found none, and terminated, freeing up a couple of hundred bytes. When the PC was returned to be installed in the office, the mouse was plugged in (we had started using Windows 2.11), the mouse driver found hardware, and clung to its bytes of RAM. This was enough to make the total RAM requirements exceed the 512 kbyte available. (But 640 kbyte would have been sufficient for anybody...)
|
|
|
|
|
Member 7989122 wrote: A pagefile wouldn't solve the problem of some process going berserk - it would just take a little longer.
No, but what it does do is replace catastrophic random failure with graceful degradation. If your system gradually starts running slower than normal, looking at Task Manager and seeing 25 of 16 GB of RAM used will make the problem clear, and then you can go to the per-app tab and see what the pig is. Failure to allocate RAM will cause whatever program attempts the first allocation past the point where the system is maxed out to fail; in cases where the primary offender's RAM footprint is slowly growing due to heap fragmentation, that will frequently be some other random application. The case I mentioned the first time around took so long to track down because it was completely random which of the half dozen applications we were running would terminate. As it was, it was only resolved as fast as it was because I noticed there wasn't a pagefile. The person who disabled it had no idea it would cause problems and was busy chasing the same red herrings that my coworker and I were, trying to figure out why the new machines were unstable.
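That hard wall is easy to demonstrate with a deliberately abusive sketch (mine, not from the post - run it in a throwaway VM, not on a box you care about): keep committing memory until the allocation that pushes past the commit limit fails. With a pagefile you get heavy thrashing as a warning first; without one, the failure just arrives.

/* Deliberately abusive sketch: commits 256 MB blocks until VirtualAlloc
 * fails, i.e. until the commit limit (RAM + pagefile) is exhausted.
 * Expect heavy paging if a pagefile exists - best run in a VM.           */
#include <windows.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    const size_t chunk = 256u * 1024 * 1024;
    size_t total = 0;

    for (;;)
    {
        void *p = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        if (p == NULL)
            break;                       /* the first allocation past the limit */
        memset(p, 0xAB, chunk);          /* touch the pages so they are really backed */
        total += chunk;
        printf("Committed %zu MB so far\n", total / (1024 * 1024));
    }
    printf("Allocation failed after %zu MB committed\n", total / (1024 * 1024));
    return 0;
}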
Your DOS story brings me back. I was only a kid at the time, but also the resident PC expert at home. Dealing with DOS games with different tight memory limits resulted in the PC having a half-dozen-ish boot menu options that controlled what type of additional memory was enabled (XMS, EMS, both, or neither), along with whether all the normal TSRs were loaded or just a subset to free more of the sub-640k RAM. I knew what was what since I set it all up, my younger brother had a cheat sheet for when a game he wanted to play needed a non-default setting, and everyone else in the house either just used the default or picked the direct-to-Win3.1 option.
Did you ever see history portrayed as an old man with a wise brow and pulseless heart, weighing all things in the balance of reason?
Is not rather the genius of history like an eternal, imploring maiden, full of fire, with a burning heart and flaming soul, humanly warm and humanly beautiful?
--Zachris Topelius
Training a telescope on one’s own belly button will only reveal lint. You like that? You go right on staring at it. I prefer looking at galaxies.
-- Sarah Hoyt
|
|
|
|
|
There was another memory leak story on The Old New Thing[^] blog: a web server gradually had its memory filled up by a leak. The server had to be up 24/7 without interruption, so while they were searching for a solution, they replaced the one server with a cluster and a load balancer. When any of the machines reached a critical limit, it was taken out of the cluster, rebooted, and then put back in. Once the leak was found, they went back to a single-server setup.
(I read the story in the book that collects a long series of the blog posts. Some of them are really entertaining. They are all available online, but this is several years back and I won't spend the time to search out this one post. Both the book and the blog are certainly recommended!)
|
|
|
|