The defrag "winners" have been posted at Donn Edwards' blog:
Freeware: JkDefrag
Shareware: Raxco PerfectDisk
See blog for more details and links.
Anyone here using PerfectDisk? Any comments?
|
I am completely mystified as to how he can compare the performance of defrag utilities fairly. I was surprised to see that aspect listed, and as far as I can see he doesn't give any testing methodology.
When catapults are outlawed, only outlaws will have catapults
|
John Cardinal wrote: I am completely mystified as to how he can compare the performance of defrag utilities fairly. I was surprised to see that aspect listed, and as far as I can see he doesn't give any testing methodology.
John raises a question that every programmer should ask, and it is most valid. I don't have full testing facilities, so no attempt was made to test whether program A did a defrag that resulted in faster hard drive performance than program B, but my testing method ended up being quite thorough anyway. Allow me to explain.
I'm a database programmer, using Access 97 and Microsoft SQL Server 2000. In a typical week I copy a complete SQL backup file (6GB "SAClinic.dbk") from the production server to my laptop, and save it in a compressed folder on drive D:. I then attempt to defragment it so that I can do a SQL Database Restore from the data file.
The SQL Database is stored as c:\sql\mssql\data\SAClinic_Data.mdf and a corresponding log file. Again, because these files are large, they get stored on my laptop drive as compressed files.
Compressed files of this size fragment easily, ending up in a gazillion fragments, and most of the defrag programs choked on the number of fragments and/or the remaining disk space. I mentioned such problems in each review in "The Great Defrag Shootout". I also used contig.exe to analyse files or folders and report the number of fragments in them.
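(As an aside, the fragment counts that contig reports come from the filesystem itself: Windows exposes any file's extent list through the FSCTL_GET_RETRIEVAL_POINTERS ioctl. Below is a rough sketch of a home-made fragment counter built on that call - illustrative only, not contig's actual source, with minimal error handling.)

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { fprintf(stderr, "open failed: %lu\n", GetLastError()); return 1; }

    STARTING_VCN_INPUT_BUFFER in = {0};   /* start at virtual cluster 0 */
    union { RETRIEVAL_POINTERS_BUFFER rpb; BYTE raw[64 * 1024]; } out; /* room for many extents */
    DWORD bytes, fragments = 0;
    LONGLONG prevEnd = -1;                /* LCN just past the previous run */

    for (;;) {
        BOOL ok = DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                                  &in, sizeof in, &out, sizeof out, &bytes, NULL);
        DWORD err = ok ? ERROR_SUCCESS : GetLastError();
        if (!ok && err != ERROR_MORE_DATA) {
            if (err != ERROR_HANDLE_EOF)  /* EOF = resident file, no extents */
                fprintf(stderr, "ioctl failed: %lu\n", err);
            break;
        }
        LONGLONG vcn = out.rpb.StartingVcn.QuadPart;
        for (DWORD i = 0; i < out.rpb.ExtentCount; i++) {
            LONGLONG lcn = out.rpb.Extents[i].Lcn.QuadPart;
            LONGLONG len = out.rpb.Extents[i].NextVcn.QuadPart - vcn;
            /* a new fragment starts wherever a run isn't contiguous
               on disk with the previous one (Lcn of -1 = sparse hole) */
            if (lcn != -1 && lcn != prevEnd) fragments++;
            prevEnd = (lcn == -1) ? -1 : lcn + len;
            vcn = out.rpb.Extents[i].NextVcn.QuadPart;
        }
        if (ok) break;                    /* got the whole extent list */
        in.StartingVcn = out.rpb.Extents[out.rpb.ExtentCount - 1].NextVcn;
    }
    printf("%s: %lu fragment(s)\n", argv[1], fragments);
    CloseHandle(h);
    return 0;
}

A file whose data is resident in the MFT has no extents at all, which is why the ERROR_HANDLE_EOF case counts as zero fragments.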
The SQL Database Restore operation doesn't work properly if the files are too fragmented, so I was at the mercy of this software (and the defrag program being tested) whenever I needed to perform this operation.
In addition, I download and edit audio books and podcasts, so my "Audio Books" folder would get fragmented over the course of a few days. Again, it was up to the defrag program being tested to fix up this mess.
Finally, I keep all the source code for my main programming project in a 4GB encrypted volume maintained by TrueCrypt, and so was able to determine if the defrag program being tested could recognise and defrag this volume as well.
I was quite shocked at how many commercial defrag utilities were unable to cope with the fragmented files on my hard drive. Diskeeper, the most expensive utility tested, failed miserably, and took over 20 minutes just to analyse the drive. Incidentally, it was another aspect of DK (its inability to deal with drives that get too full) that led me to look for a better defrag program in the first place.
So I wasn't comparing "performance" of defrag utilities in the conventional sense, but it was more a question of "usability" (was it easy to set up and use) and "capability" (was it able to do the job). I only gave a "thumbs up" to defrag utilities that managed to keep my laptop drive defragmented during the course of my normal working week.
During the course of the testing, some utilities were uninstalled and stayed that way; others were kept because they remained useful. My laptop now has 5 programs installed: the built-in Windows Disk Defragmenter, SysInternals contig (I still use it on those big SQL data files from time to time), SysInternals PageDefrag (boot-time defrag of system files), JkDefrag and PerfectDisk 8.
PerfectDisk is my "program of last resort" and I call on it to sort out fragmented metadata and other tricky data. It has never let me down. I use the JkDefrag screen saver to keep my drive neat and tidy. That's how these two ended up being the "winners". I kept them installed because they are the most useful of the lot.
I hope this clarifies the method I used. I don't claim to be a professional tester, just a user with some very demanding defrag requirements.
Donn Edwards
http://donnedwards.openaccess.co.za
|
Donn Edwards wrote: so no attempt was made to test whether program A did a defrag that resulted in faster hard drive performance than program B
Actually, what I was curious about was the performance mentioned for each utility while defragging, not how fast the drive is afterwards; assuming they all return the drive to the same state given the same starting point, that should be equal. Though now that you mention it, I guess it's probably not.
To compare defragmentation speed realistically you would have to start from the same point: same hardware, same fragmentation in the same places, same time after boot, etc. I guess you can see gross differences and draw some conclusions, but if it were at all close it would be unwise to declare which one defragmented faster without starting conditions that are nearly impossible to reproduce. As programmers we all know that there are many tricks to fool the user into thinking that an operation isn't taking as long as it actually is. Short of a stopwatch it's sometimes impossible to tell.
Ease of use and features, though, are definitely good to know about.
Personally I use the Windows Vista defrag utility; it never crossed my mind that anyone else makes defrag utilities, because I never thought there was anything wrong with the built-in one. Now I have a vague feeling there probably is, so thank you for that.
Cheers!
When catapults are outlawed, only outlaws will have catapults
|
Where "performance" was mentioned in the review it was usually because it was noticeable without the use of a stopwatch. For example, Vopt was busy moving files around within seconds of being run, while UltimateDefrag and Diskeeper were still analysing after 15 minutes.
I agree that each product got a different set of data to deal with, so in that sense it wasn't scientific. On the other hand, most of the files on my hard drive didn't change from one week to the next in such a way as to benefit one program over another.
I can't comment on the Vista WDD (Windows Disk Defragmenter) utility other than to say that it doesn't appear to have kept my wife's laptop in good condition. The WDD utility in Windows XP is well known to me, and doesn't meet my needs at all.
Depending on the size and variability of your data files, you may find that WDD is either perfectly adequate or completely ineffective. Defrag is weird that way.
Donn Edwards
http://donnedwards.openaccess.co.za
|
Check my blog. I'm just blown away by how good it is. I'm actually getting ready to post a head-to-head between PerfectDisk and Diskeeper in the next few days. I'll be listing it as an article here.
http://blog.code-frog.com[^]
My blog will be growing by at least one entry a day, maybe more. I have so much to write about, and PerfectDisk is just a perfect 10 in my book.
|
Don't forget the 20% discount with the coupon code mentioned in Donn Edwards' blog.
|
defrag? people still do that? with disks up in the .5TB range?
that takes, what, three days?
|
Chris Losinger wrote: defrag? people still do that?
I think it's better than using the computer's idle time to search for Extraterrestrial Intelligence[^]
For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.(John 3:16)
|
I search for evidence of ET in my fragmented files.
every night, i kneel at the foot of my bed and thank the Great Overseeing Politicians for protecting my freedoms by reducing their number, as if they were deer in a state park. -- Chris Losinger, Online Poker Players?
|
"The Truth Is Out There"
For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.(John 3:16)
|
How many can actually remember that? Loved that series.
If you truly believe you need to pick a mobile phone that "says something" about your personality, don't bother. You don't have a personality. A mental illness, maybe - but not a personality. - Charlie Brooker
My Blog - My Photos - ScrewTurn Wiki
|
A defrag on my heavily fragmented 500GB HD takes about 5-6 hours, assuming I walk away from the computer and come back when it's done.
Unfortunately, Microsoft never thought to invent a file system that doesn't fragment (*cough* ext3 *cough*). So long as that is true, defragging will always eventually be necessary.
The early bird who catches the worm works for someone who comes in late and owns the worm farm. -- Travis McGee
|
Patrick Sears wrote: a file system that doesn't fragment (*cough* ext3 *cough*)
A quick Google search suggests that isn't true[^].
What would a file system that doesn't fragment look like?
If I fill a 100 GB disk completely with 1,000,000 files of 100 KB each, then delete 1,000 random files and create a new 100 MB file, there's no way that file isn't going to be fragmented: the only free space left is a thousand 100 KB holes scattered across the disk.
What would be useful is a defrag utility that can defrag incrementally, and runs as a background task when the computer is idle. Wait, Vista already has that...
|
You're right; it would be more accurate to say ext3 doesn't fragment unless it has to, as in your example.
NTFS fragments as a matter of course. There seems to be a file size threshold over which NTFS will actually try to write the file as one piece; for most files, it doesn't care where it splatters them on the platter.
It'd be interesting to compare ext3 fragmentation to NTFS fragmentation for a same-sized drive with the same files.
Daniel Grunwald wrote: What would be useful is a defrag utility that can defrag incrementally, and runs as a background task when the computer is idle. Wait, Vista already has that...
It'll be interesting in 5 years to see how many hard drives have burned out prematurely because of that. It sounds good in theory but in practice it may not be so smart. That is of course purely speculation on my part; it's possible it won't add much additional stress on the hardware at all.
The early bird who catches the worm works for someone who comes in late and owns the worm farm. -- Travis McGee
|
The problem for the filesystem - whether NTFS or ext3 - is that the application doesn't declare up front how much data it's going to write. All the filesystem sees is the size of each block sent to WriteFile()/write(). Some applications send large blocks, others very small ones - Microsoft Word is particularly bad, writing 512 bytes per write command, which is why saving is so damn slow in Word. Presumably that's inherited from its DOS days (512 bytes being the size of a sector).
A naive application might therefore opt to call SetFilePointer/SetEndOfFile to indicate how large the file will be, but on NTFS that has the effect of causing the filesystem to write zeros to every allocated disk block in order to prevent the possibility that one application could read another's data - it's a security measure to prevent information disclosure. You can tell NTFS not to do this by indicating that it's a sparse file (using DeviceIoControl with the FSCTL_SET_SPARSE parameter) but then it doesn't allocate any space for the data either, putting you back at square one.
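To make those two options concrete, here is a minimal sketch of both calls (the file name and the 1 GB figure are made-up examples, and error handling is trimmed to the bone):

#include <windows.h>
#include <winioctl.h>

/* Option A: declare the final size up front. NTFS allocates the
   clusters immediately, and has to zero them before anyone can read
   them back - the security measure described above. */
static BOOL prealloc_dense(HANDLE h, LONGLONG size)
{
    LARGE_INTEGER li;
    li.QuadPart = size;
    return SetFilePointerEx(h, li, NULL, FILE_BEGIN) && SetEndOfFile(h);
}

/* Option B: mark the file sparse first. The declared size is then just
   bookkeeping - no clusters are reserved - so placement is once again
   decided write by write, the "square one" mentioned above. */
static BOOL prealloc_sparse(HANDLE h, LONGLONG size)
{
    DWORD bytes;
    LARGE_INTEGER li;
    li.QuadPart = size;
    return DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &bytes, NULL)
        && SetFilePointerEx(h, li, NULL, FILE_BEGIN)
        && SetEndOfFile(h);
}

int main(void)
{
    /* "bigfile.dat" is an arbitrary example name */
    HANDLE h = CreateFileA("bigfile.dat", GENERIC_READ | GENERIC_WRITE, 0,
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    prealloc_dense(h, 1LL << 30);   /* or prealloc_sparse(h, 1LL << 30) */
    CloseHandle(h);
    return 0;
}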
I'm sure the file system or operating system can do some extra buffering as well to coalesce these writes so that it has a better idea of how much space will be required, and therefore where to start placing the file, but I don't know how much. But at some point it has to actually start writing the file out (to prevent data loss) and at that point it has to decide where to write it, and update the (journalled) file system structures as well.
Growing existing files is also a problem - the existing clusters allocated to the file are retained, even if overwritten with different data, and new clusters may or may not be available adjacent to the old ones. If not, fragmentation occurs.
So it could be that greater amounts of fragmentation experienced on the Windows platform is more down to how the applications write their data than it is to the filesystem itself.
NTFS treats the data in a file as just another attribute of the file, like its filename, timestamps, read-only status and security descriptor. Those attributes can be either stored in a Master File Table record (resident) or elsewhere on the disk (non-resident). If the data can fit into the MFT record (which is 1KB in size) along with the other attributes, that's where NTFS will put it. NTFS does reserve a chunk of the disk for the MFT to expand into which won't otherwise be used unless the rest of the disk space is used up first, which will tend to keep the MFT in relatively few fragments.
I don't know how NTFS decides which blocks to use for a given write request that grows a file. This kind of algorithm is likely to change between Windows versions anyway. Is it better to find any block quickly, rather than perform an exhaustive search? Is it better to use a block near where the last read or write operation took place, to minimize head movement, or to locate it near other files that the same application has used?
Then, you have to ask what the benefit of a defragmented file is. A file with a lot of fragments wastes a little disk space, as the list of fragments then can't fit into the MFT record (this happens at around 200 fragments according to "Windows Internals, Fourth Edition") but it's not that significant. If a file is split into multiple fragments, a sequential read of the file will suffer some disk head movement that it wouldn't otherwise have had, and that's the slow part of disk accesses. However, random file accesses will always incur head movement, and therefore they don't benefit much from defragmentation, except that the movement might be constrained to be within a smaller area.
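One crude way to put a number on that last point is to time a cold sequential read of a big file before and after defragmenting it. A quick illustrative sketch (FILE_FLAG_NO_BUFFERING stops the cache from hiding the disk, but treat the numbers as rough at best):

#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    /* unbuffered I/O needs sector-aligned reads; VirtualAlloc gives a
       page-aligned 1 MB buffer, which more than satisfies that */
    HANDLE h = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;
    void *buf = VirtualAlloc(NULL, 1 << 20, MEM_COMMIT, PAGE_READWRITE);

    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    DWORD got;
    LONGLONG total = 0;
    while (ReadFile(h, buf, 1 << 20, &got, NULL) && got > 0)
        total += got;                 /* sequential read, 1 MB at a time */

    QueryPerformanceCounter(&t1);
    double secs = (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
    printf("%lld bytes in %.2f s = %.1f MB/s\n",
           total, secs, (double)total / (1048576.0 * secs));

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}

Run on the same file before and after a defrag pass, the difference (if any) is the sequential-read penalty being discussed; for randomly accessed files you would expect it to matter far less.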
DoEvents : Generating unexpected recursion since 1991
|
I don't defrag any more. When a hard drive gets to 75% fragmented, I throw it away and get a new one.
"Why don't you tie a kerosene-soaked rag around your ankles so the ants won't climb up and eat your candy ass..." - Dale Earnhardt, 1997 ----- "...the staggering layers of obscenity in your statement make it a work of art on so many levels." - Jason Jystad, 10/26/2001
|
Chris Losinger wrote: defrag? people still do that? with disks up in the .5TB range?
Given an infinite amount of free space, fragmentation wouldn't happen, but in the case of "The Great Defrag Shootout" I am limited to a laptop with a 60GB drive, so there isn't that much space to play with in the first place.
I find that on a file server the overall responsiveness of the server improves when the files are less fragmented, irrespective of the size of the drive. If a SQL database file gets fragmented as the database grows, the extra load on the server can make a difference between happy and unhappy users. There is plenty of time to defrag a server's drives after hours or on the weekend.
Donn Edwards
http://donnedwards.openaccess.co.za