|
You weren't rude at all, you were just "Aggressive" (that's the title of the thread)
Kochise
|
|
|
|
|
Well if I was unclear, let me know. But since a picture is worth a thousand words, I'll be posting a lovely little chart here...
Visual Studio Favorites - improve your development!
GUIgui - skin your apps without XP
|
|
|
|
|
I agree with Mustafa Demirhan ! It's the same for pocket digital cameras ! Why should we save our pictures in packed JPEG when you can save them in plain unpacked TIFF on a 128 MB cartridge or IBM MicroDisc ? On PC, you will soon have gigabytes of memory and terabytes of hard drive. Nobody would mind having an EXE of 145 MB just to display "Hello World !" on your screen. Nobody yet minds having Microsoft screwing up 140 MB of your main memory right after boot, and 2 GB on your hard drive, just to seat some useless assistants and drivers (drivers.cab - 75 MB - is loaded into memory for immediate recognition of hot-plugged USB devices), when you already have these drivers supplied with your hardware. Because of those who cannot install them by hand, everyone has to suffer from EXE/DATA size explosion.
I remember that good old time when I coded, in full assembly language, a little video player that supported resume, pause, forward, backward, zoom, supersampling, RGB and CMY filters, ... ONLY 4 KB !
Kochise
|
|
|
|
|
Kochise wrote:
[...]Nobody yet minds having Microsoft screwing up 140 MB of your main memory[...]
Then maybe they should. Putting up with something just because Microsoft is too lazy doesn't make it right, and doesn't mean the rest of us should do it too. I guess you don't mind having 140 MB of spam in your mailbox per day, either, right? After all, terabyte drives grow on trees, and bandwidth is free. It's just a little extra space, means nothing. What's a few extra gigs between friends, right?
Uncompressed TIFF file: 3,400K. Compressed JPG with high quality: 395K. 128,000K memory card / 395K = 324 pictures. Or only 37 if you use your uncompressed TIFF option. And that's with a 1.2 megapixel camera. But yes, raw images look better than uncompressed LOSSY jpg images - and we're not talking images here, we're talking NON LOSSY code! Not data, code!
I think both you and he are totally missing the entire point of this subject, but you are allowed to voice your opinion, even if you both are mistaken.
Visual Studio Favorites - improve your development!
GUIgui - skin your apps without XP
|
|
|
|
|
It was just a cynical answer. Have you ever understood why NO digital camera manufacturer has ever implemented the fabulous PNG format, absolutely LOSSLESS, that saves pictures in bigger files than JPEG, but with an alpha channel and no LOSS? Try http://www.libpng.org and http://www.libmng.org ! It ROCKS !
I'm an embedded coder, and totally ashamed to see, and to be asked to code, sh*t, "because if the consumer lacks memory, he will just have to buy some more", when I try to explain that it takes more time to optimize my code. Strange 'political' behaviors...
I mostly use QNX (http://www.qnx.com) because there is no waste of any kind, neither drivers nor assistants. But I still have to work on Windows, and wish I could at least delete 'drivers.cab' from my hard drive/memory. If I have to install some drivers, I can do it by myself; no need to lose half of my main memory just to hand-hold most users.
Yes, I'm fed up with this race for power, but on the other hand, when you code cleanly, like I try to do, you have the benefit of megabytes of memory and gigabytes of hard drive, which allows you to do almost whatever you want.
So let Microsoft screw up your memory; it is helpful for the few coders who still believe coding is an art that cannot only be learnt at school.
I have an RTC (dial-up) 56k modem connection, and I know perfectly well what a pain it is to download a 10 meg file. On Atari/Amiga computers, there were a lot of program packers, such as IcePacker, that prepended a file header containing depacker code, while the '.data' segment held the original, packed code. I have hardly found anything like this since. And on Windows it's almost useless anyway, since there is a whole bunch of files surrounding the main program file. Solution : "ZIP THEM ALL" !
Kochise
|
|
|
|
|
I do not get why anybody should use this. Executables are always the smallest part of a software package. And more importantly, the size of hard disks, at 120 GB and more, is so big that this is no longer an issue. E.g. if you saved 10 megs with this on a 120 GB disk, you would save 0.0000008% of the disk space. That is not far from nothing.
|
|
|
|
|
As the author mentioned in the article, the code was for download from the Internet, so, using your example, saving 10 MB is worth a hell of a lot. Taking a real-world example, even saving 50k is worth it to those on a 56k modem line.
bibamus, edamus, cras moriemur [eat, drink, for tomorrow we die]
|
|
|
|
|
You might want to think about things more before making such a blanket statement. Besides, your math is off by several orders of magnitude there.
First off, not everyone has 120gig disks - most users are still running 10 or 20 gig systems that are a few years old.
Second, you're forgetting about cluster size. A 31k file will take a 32k cluster under FAT32, and a 33k file will take two 32k clusters - wasting 31k! (Actually, it will take 3 clusters at this point - the index cluster, for anything larger than a single cluster, which points to the clusters in the file - so the total is 96k for a 33k file. Magnify that by, say, the 870 files in Office and you get 55 megs of waste).
Third, smaller programs generally load faster AND TAKE LESS MEMORY on the system; again, most users (not developers!!!) have 128meg older systems.
Fourth, exe's & dll's *are* the most important part - unless you're delivering a PDF file. What are you going to drive the data and bitmaps with? If you're thinking that binaries make up less of the install than say bitmaps or documentation, that totally depends on the product. On a nasty little port we just completed, the binaries are 47 megs total - and the help file is only 1.1 megs in size. And it's compressed, too, by the way.
Fifth, and this is important, smaller files generally compress smaller (within reason - if your program is mostly bitmaps, it's not going to compress much differently), which reduces download time over the dialup modems - which most users have.
See what I keep harping on? End user, end user, end user. Developers are not end users - we generally have really beefy boxen, and tend to forget the real world doesn't have uber machines that can play Quake at 1,000fps.
And nothing is always something.
Visual Studio Favorites - improve your development!
GUIgui - skin your apps without XP
|
|
|
|
|
Todd C. Wilson wrote:
You might want to think about things more before making such a blanket statement. Besides, your math is off by several orders of magnitude there.
I totally disagree with you. Alex has a very good point here.
Todd C. Wilson wrote:
First off, not everyone has 120gig disks - most users are still running 10 or 20 gig systems that are a few years old.
Second, you're forgetting about cluster size. A 31k file will take a 32k cluster under FAT32, and a 33k file will take two 32k clusters - wasting 31k! (Actually, it will take 3 clusters at this point - the index cluster, for anything larger than a single cluster, which points to the clusters in the file - so the total is 96k for a 33k file. Magnify that by, say, the 870 files in Office and you get 55 megs of waste).
Not at all!!! This optimization DOES NOT help you to reduce the waste of space caused by clustering. I can prove this easily with some basic math and probability. For example, if Office used this optimization, the waste caused by clustering would again be 55 megs!!!
Todd C. Wilson wrote:
Third, smaller programs generally load faster AND TAKE LESS MEMORY on the system; again, most users (not developers!!!) have 128meg older systems.
Not in this case. The optimization just removes some spare space. This does not eat from the memory. And about the speed: The bigger file may load even faster. You cannot make a statement like this. With this optimization, Windows has to do more operations for the memory allocation initially and this may cause the program to load slower. For example, in the Compiler Optimization, you have two options: Optimize for Speed and Optimize for Smaller Size. Try out both of these and you will see that the one optimized for speed is BIGGER than the one that is optimized for size. This means that, smaller files do not necessarily load faster!!
Todd C. Wilson wrote:
Fourth, exe's & dll's *are* the most important part - unless you're delivering a PDF file.
Nope. Exe's and Dll's are generally very small. Even with very big projects, the executables do not exceed 3-4 megabytes. For example, I have a 70,000 line project and the total size of the executables and dll's is about 3 MB.
Todd C. Wilson wrote:
Fifth, and this is important, smaller files generally compress smaller (within reason - if your program is mostly bitmaps, it's not going to compress much differently), which reduces download time over the dialup modems - which most users have.
Totally wrong!! Smaller files do not compress smaller. For example, take a jpg and bmp of the same photo and compress them. The results will be pretty much the same. The jpg won't even compress because it's already compressed. There are certain "Information Theory" rules, and they say that there is a certain limit on compression, and the LZW compression method generally gives outputs that are very close to this limit. So, in the end, when it comes to the distribution of the files on the Internet, you WILL NOT gain anything at all (because you will compress it and for both configurations you will get pretty much the same output).
Anyway, all of your points are absurd and WRONG. Before judging somebody for making blanket statements, look at your own statements: they are all ridiculous.
Mustafa Demirhan
http://www.macroangel.com
Sonork ID 100.9935:zoltrix
They say I'm lazy but it takes all my time
|
|
|
|
|
Mustafa Demirhan wrote:
Not in this case. The optimization just removes some spare space. This does not eat from the memory. And about the speed: The bigger file may load even faster. You cannot make a statement like this. With this optimization, Windows has to do more operations for the memory allocation initially and this may cause the program to load slower. For example, in the Compiler Optimization, you have two options: Optimize for Speed and Optimize for Smaller Size. Try out both of these and you will see that the one optimized for speed is BIGGER than the one that is optimized for size. This means that, smaller files do not necessarily load faster!!
Well, you're comparing two completely different things. These optimization settings do a lot more, like loop unrolling, creating inline functions automatically etc...
The aggressive optimize header just causes the compiler/linker to use smaller padding, merge sections etc...
The point is: If optimize for smaller size is already selected, why do programs still benefit from the aggressive optimizations?
And even today, smaller files DO load faster! Before loading them into memory, I have to get them onto my hard drive, either by downloading or copying from a CD, and both take more time for larger files.
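To make the mechanism concrete, this is roughly the kind of thing such a header does - a generic sketch only, not necessarily the exact switches from the article's header:

// Generic sketch of "aggressive" size settings for the VC++ linker.
// Not the article's header verbatim - just the general technique.
#pragma comment(linker, "/OPT:NOWIN98")         // 512-byte instead of 4 KB file alignment
#pragma comment(linker, "/MERGE:.rdata=.text")  // fold read-only data into the code section
#pragma comment(linker, "/IGNORE:4078")         // silence the warning about merged sections
// Merging writable sections (.data) as well saves a bit more but is riskier.

None of this changes what the code does at runtime; it only changes how the sections are laid out and padded in the PE file.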
Mustafa Demirhan wrote:
Todd C. Wilson wrote:
Fifth, and this is important, smaller files generally compress smaller (within reason - if your program is mostly bitmaps, it's not going to compress much differently), which reduces download time over the dialup modems - which most users have.
Totally wrong!! Smaller files do not compress smaller. For example, take a jpg and bmp of the same photo and compress them. The results will be pretty much the same. The jpg won't even compress because it's already compressed. There are certain "Information Theory" rules, and they say that there is a certain limit on compression, and the LZW compression method generally gives outputs that are very close to this limit. So, in the end, when it comes to the distribution of the files on the Internet, you WILL NOT gain anything at all (because you will compress it and for both configurations you will get pretty much the same output).
Well, this header does not compress files, it just removes unneeded data, and files with less data usually compress better.
Try compressing any file where you've added 1000 zeros (or add any other data) anywhere and then compress the same file without the zeros: The last one will be smaller!
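For anyone who wants to check this for themselves, here is a minimal sketch (it assumes zlib is available and uses an arbitrary 64 KB synthetic buffer rather than a real PE file) that compares the compressed size of a buffer with and without 40 KB of zero padding:

// Compares the deflate-compressed size of a buffer with and without padding.
// Sketch only - assumes zlib; the buffer is synthetic, not a real executable.
#include <cstdio>
#include <vector>
#include <zlib.h>

static unsigned long compressedSize(const std::vector<Bytef>& src)
{
    uLongf dstLen = compressBound(static_cast<uLong>(src.size()));
    std::vector<Bytef> dst(dstLen);
    compress(&dst[0], &dstLen, &src[0], static_cast<uLong>(src.size()));
    return dstLen;
}

int main()
{
    std::vector<Bytef> plain(64 * 1024);
    for (size_t i = 0; i < plain.size(); ++i)   // fill with varied, code-like bytes
        plain[i] = static_cast<Bytef>((static_cast<unsigned int>(i) * 2654435761u) >> 24);

    std::vector<Bytef> padded(plain);
    padded.insert(padded.end(), 40 * 1024, Bytef(0));   // add 40 KB of zero padding

    printf("plain : %lu bytes -> %lu compressed\n",
           (unsigned long)plain.size(), compressedSize(plain));
    printf("padded: %lu bytes -> %lu compressed\n",
           (unsigned long)padded.size(), compressedSize(padded));
    return 0;
}

The padding costs only a handful of bytes once zipped, but the full 40 KB whenever the file is stored or loaded uncompressed.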
---
Author of FileZilla FTP
http://sourceforge.net/projects/filezilla
|
|
|
|
|
Tim Kosse wrote:
Well, you're comparing two completely different things. This optimization settings do a lot more, like loop unrolling, creating inline functions automatically etc...
I just wanted to point out that with this optimization, the files may become smaller. But this does not mean that they load faster. The reason is the memory allocation: in this case, Windows has to do more operations for memory allocation.
Tim Kosse wrote:
Try compressing any file where you've added 1000 zeros (or add any other data) anywhere and then compress the same file without the zeros: The last one will be smaller!
Even if I add 40 KB of 0's, the difference will be about 50-60 bytes. So why should I bother to gain 50-60 bytes of data? Come on, who cares about that...
Also, even without compression: Suppose you have a huge project like MS Office. What's the total number of .exe and .dll components? In Office XP, it's about 300-400. So what will you gain in the end without compression? About 12-16 MB out of hundreds of MBs. What I am trying to say is that this is not a gain. If this could be considered a gain, MS would have put an option in their compilers.
One more thing: The author talks about cluster size, etc. This DOES NOT help in reducing cluster size. It has nothing to do with cluster size. The waste will again be the same. Also, the waste because of clustering in MS Office XP is not 55 megs. It is just 5 megs on my NTFS file system. Also, 80% of the components in Office XP are NOT dll's and exe's.
What I am trying to say is that the author IS NOT RIGHT. These optimizations may help to reduce the size of each file by about 40K; but THAT'S ALL. It does not help the program to load faster, or use less memory, or reduce the waste caused by clustering.
We are not in the late 80's. We are in the early 2000's and things have changed a lot. Even the poorest system now does not care about this benefit. This optimization *may be* useful in some very specific cases, but for a regular project it is not.
Mustafa Demirhan
http://www.macroangel.com
Sonork ID 100.9935:zoltrix
They say I'm lazy but it takes all my time
|
|
|
|
|
This has turned into quite an argument.
From what I've seen, the gain for large projects is not significant. A gain of 40KB in a 4MB binary does not make a huge difference (in the region of 1%). On the other hand, a 40KB saving in a 400KB project does make a fairly big difference (in the region of 10%).
For loading time, methinks that smaller files will load faster. Why? Well one needs to look at the process that takes place when a file loads. But the first, and slowest part, is getting the file from the disk into memory. Disk access is gazillions of times slower than memory access. Even if Windows has to do more memory allocation for an aggressively optimized file (and I cannot see why it should) that should still be offset by the saving in time of loading from the disk.
As for compiler optimization, it is my understanding that an optimizing compiler makes certain optimizations, such as loop unrolling, when compiling optimized for speed, which will increase the file size. A larger file will take longer to load. It's the execution that is optimized for speed, not the loading.
Finally, as for cluster size. These optimizations do not decrease cluster size. Changing the file system might do that, but not compiler optimizations. But that's not what the original author had to say. What he said was that by reducing the file size, you may also reduce the number of clusters used by the file on the disk, thereby reducing the amount of space used on the disk. So, if you have a number of files that are 60KB in size, and you reduce those by 30KB (for example), then the total disk space saving will be 64KB per file (two clusters)!! I don't think this is significant anyhow. For small applications, the disk space usage is small anyway. For large applications the saving is not that big to begin with. However, this may again reduce loading time for small apps. Fewer clusters means fewer times to reposition the disk read head.
Overall, I find these optimizations useful. And if memory usage is improved too (and I have not checked) then that's just an added bonus!
______________
Marc Heiligers
Not one shred of evidence supports the notion that life is serious.
|
|
|
|
|
Tim basically answered you - but to recap, you're confused about what is compressed, what is not, and where the savings come from. And I have a nice chart showing the before and after results of a larger-than-3-meg project. I really fail to see what you are trying to prove, other than that you haven't bothered to test it for yourself and run your own benchmarks.
Visual Studio Favorites - improve your development!
GUIgui - skin your apps without XP
|
|
|
|
|
Mustafa Demirhan wrote:
This optimization DOES NOT help you to reduce the waste of space caused by clustering. I can prove this easily with some basic math and probability.
OK, so prove it.
I am not sure the point is that the optimization will always reduce the waste of space caused by clustering. It can save space if the unoptimized exe/dll is just above a cluster boundary and the optimization brings it just below. Note that this will depend on the system of the end user.
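To make the boundary effect concrete, here is a tiny sketch (it assumes a 32 KB cluster, as on large FAT32 volumes; NTFS clusters are typically much smaller, so the effect is smaller there):

// Shows how crossing a cluster boundary, not the raw saving, decides the on-disk gain.
#include <cstdio>

unsigned long onDiskSize(unsigned long fileSize, unsigned long clusterSize = 32 * 1024)
{
    unsigned long clusters = (fileSize + clusterSize - 1) / clusterSize;  // round up
    return clusters * clusterSize;
}

int main()
{
    // 33 KB file -> 64 KB on disk; shave it to 31 KB -> 32 KB on disk (one cluster saved)
    printf("%lu -> %lu bytes on disk\n", 33UL * 1024, onDiskSize(33 * 1024));
    printf("%lu -> %lu bytes on disk\n", 31UL * 1024, onDiskSize(31 * 1024));

    // 100 KB file -> 128 KB on disk; shaving 2 KB off changes nothing here
    printf("%lu -> %lu bytes on disk\n", 100UL * 1024, onDiskSize(100 * 1024));
    printf("%lu -> %lu bytes on disk\n",  98UL * 1024, onDiskSize(98 * 1024));
    return 0;
}

Whether a given saving crosses a boundary is essentially luck, which is why the per-file gain ranges from nothing to a whole cluster.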
Mustafa Demirhan wrote:
With this optimization, Windows has to do more operations for the memory allocation initially and this may cause the program to load slower. For example, in the Compiler Optimization, you have two options: Optimize for Speed and Optimize for Smaller Size. Try out both of these and you will see that the one optimized for speed is BIGGER than the one that is optimized for size. This means that, smaller files do not necessarily load faster!!
I doubt that having Windows do a few more memory operations will slow down loading more than having to load some more KB from a slow hard disk (or even a fast one...).
Optimize for speed doesn't mean it will load faster, it means it will execute faster! And indeed, optimizing for speed may need to unroll loops, to replace computed stuff with tables, etc. - all techniques that eat space. That depends on the needs of the application.
An image processing application (or a ray tracer) is better optimized for speed; we don't care that it is large, and loading the file is probably much faster than the requested operation anyway.
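As a rough illustration of the size/speed trade-off (with MSVC, /O1 optimizes for size and /O2 for speed; the unrolling below is hand-written only to show the shape of what a speed build may generate):

// The unrolled loop executes fewer branches per element but takes more code bytes
// than the compact version a size-optimized build would keep.
int sumCompact(const int* p, int n)
{
    int s = 0;
    for (int i = 0; i < n; ++i)     // small code, one branch per element
        s += p[i];
    return s;
}

int sumUnrolled(const int* p, int n)
{
    int s = 0, i = 0;
    for (; i + 4 <= n; i += 4)      // bigger code, one branch per 4 elements
        s += p[i] + p[i + 1] + p[i + 2] + p[i + 3];
    for (; i < n; ++i)              // handle the leftover elements
        s += p[i];
    return s;
}

int main()
{
    int data[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    return sumCompact(data, 10) - sumUnrolled(data, 10);   // 0 - both give 55
}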
Mustafa Demirhan wrote:
Exe's and Dll's are generally very small. Even with very big projects, the executables do not exceed 3-4 megabytes.
Mmm, we are not in the same world... For me, loving to download and test small freeware, a 1 MB download is quite big (but can be worth it)... As stated in the article, this optimization is mostly worthwhile for small programs, and is spectacular for very small ones.
I have written small utilities, I confess they are bloated to almost 20KB because I played with the icon editor and added an icon with several resolutions, taking half of the end size...
Mustafa Demirhan wrote:
For example, take a jpg and bmp of the same photo and compress them. The results will be pretty much the same. The jpg won't even compress because it's already compressed.
Bad example, as you state, since the JPG file is already compressed, it won't be compressed more.
And you are comparing the compression of exe/dll with the compression of already-compressed files. The first compresses quite well (frequently around 50%), the second won't.
Don't compare different things.
Now, I don't believe in the first statement ("smaller files generally compress smaller"), because most compression algorithms take the frequently met patterns and replace them with smaller patterns. So with a big file, we can get more similar patterns.
Now, I have done a few tests, and the answer is that there doesn't seem to be a firm rule; the compression ratio is quite variable...
Final note: this discussion is interesting, giving the readers a choice. All in all, it is up to the users to choose to use or not these optimizations.
|
|
|
|
|
Have you ever heard of the demo scene? Our 3D demos fit in 64k: strings, procedural textures, compression algos, music, a B-splined camera, etc. - all that in a 64k .exe file.
And there are games which need 2 GB+ to install... if only the coders knew how a computer works, they would fit in 250 MB or less.
|
|
|
|
|
Real programmers do not use C++, perhaps C. They code their demos in assembler.
I did that myself some years ago. It is intellectual sport rather than serious programming. The time to shrink the executable below a certain limit exceeds the pure implementation time by a factor of 2-5.
|
|
|
|
|
http://mitglied.lycos.de/yoda2k/snippets/SmallVcExe.zip
^ Yoda shows you how to do it with just one line and one linker-switch setting; the link is a download of a sample from his site at http://mitglied.lycos.de/yoda2k/
|
|
|
|
|
You can also remove this line in yourprojectname.rc
//#include "l.fra\afxres.rc" // Standard components
|
|
|
|
|
I tried this header file with VC++ 6 SP3 and some of my Win32 (non-MFC) apps and it helped a little ... though one app went from 139k to 328k! Using the header (and _MERGE_RDATA_) only saved 2k after compression. I just use UPX on everything - it has never failed me, not once, since I started using it a couple of years ago (when it was still DJP).
-To
|
|
|
|
|
As far as I can see, you just don't get the point! It is obvious that you always save a few kilobytes in size using this kind of optimization, and it is the only way to reduce a small program's size. Try using UPX with 'hello world' programs: you will get a larger file than by linking everything differently, aligning data, etc. If you are making a BIG program, which is loaded into memory _once_, UPX is your solution - but only then! This is how to minimize a small exe, how to make it even smaller.
Peace!
|
|
|
|
|
It was my understanding that this particular optimization eliminated runs of empty space within the executable. If that's true, UPX will accomplish the same thing with less effort and less risk.
Don't get me wrong, I'm all for aggressive optimizations when they are appropriate... but if they don't buy me anything over UPX, why bother?
|
|
|
|
|
You might want to re-read the notes in the header file and look up what the optimization switches do in MSDN; they do not remove empty space, they reduce what is there, packing the linked image tighter. This takes up less space on disk *AND* in memory when the PE image is loaded.
When UPX loads a program, it pulls the *entire* image into RAM in order to decompress it; it does not decompress chunks (unless someone has made serious modifications to it very recently), so the OS cannot page the PE or resource data as needed.
You're making the assumption that what the optimizations do is the same as what UPX does. They are *not* the same - you're *compressing* a file in order to save space on the drive, whereas you're *optimizing* the compile & link process to move code closer together to save runtime space & gain some speed. You're also assuming that UPX is a risk-free tool, which is dead wrong. It has its own risks with programs that don't like / cannot be compressed, and with low-memory systems.
So your tradeoff is a smaller file on the hard drive, vs. a larger shipped & zipped install file (zipping a zip file) + longer load time (decompress) + bigger memory footprint (after decompress).
|
|
|
|
|
|
It's a compression loader for programs. Like PKzipLite, or Shrink-IT. Very similar to a self-extracting archive (zip/rar/arc). There is a code example on CodeProject that shows how to do things like this (putting a loader front end on a package of code). This can be helpful if you have a program with lots of text and bitmaps in it; it doesn't help much with small programs, which is the same problem you get into with SFX files in general.
|
|
|
|
|
/filealign is apparently an undocumented linker command line arg. So instead, you can use /opt:[no]win98, which does exactly the same thing. By default, /opt:win98 is on, which aligns sections to a 4K boundary. Setting /opt:nowin98 aligns to the old 512-byte boundary, resulting in a smaller exe size. Here's a Knowledge Base article:
http://support.microsoft.com/support/kb/articles/Q235/9/56.ASP
There is also some documentation in your readmevc.htm file that gets installed with VC.
Note that the documentation states that this only affects on-disk size, and not the process' working set size. Setting /opt:win98 also speeds up
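As a hedged usage sketch (this matches the KB article's switch; later linker versions may no longer accept it, so check your own toolset and readmevc.htm):

// Same effect as passing /opt:nowin98 on the linker command line,
// applied from any one source file in the project.
#pragma comment(linker, "/OPT:NOWIN98")   // 512-byte file alignment -> smaller exe on disk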
|
|
|
|
|