This post is less "solution seeking" and more "opinion seeking".
I have two large image files. Each pixel is stored either as a short integer or a float value (depending on the source). The images are huge - about 10000 px by 7000 px.
I want to display them in a set of windows of, say, 500 px by 500 px each (user resizable). One window will contain the whole image (resampled) and another will contain a 1:1 zoomed view. The area to zoom into will be chosen by clicking in the resized image view (marked with a red square).
That is what I intend to do. But I need help deciding how to actually implement it.
1. One idea I have is to resample the whole file at startup (may take 5-10 secs?) and save it as a bitmap or a binary file, then load it.
The second idea is to load patches dynamically. So initially I do something like a nearest-neighbour approximation, and then do bicubic resampling on small patches. This will reduce the wait time to see the image but may be much more complex. (A bit like what Google Earth does?)
Which one do you think is better? Resample at startup, for a uniform user experience later, or resample block by block in some way?
2. I can't decide how to implement the drawing surface. Plot pixel by pixel using GDI on a document? (I'm using MFC.) Or save as a bitmap and load it into a picture box? Or some OpenGL method? (I'm very lost here, but I'm willing to learn - links will be appreciated.)
3. Can I access one file at multiple points? If I could have one thread resampling one half of the file and another doing the other half, it would be really quick. But I'm not sure if it's possible at all. (Maybe I can copy the file beforehand and seek?)
Any help and opinions on this would be appreciated. I'm just trying to lay down a framework / plan of work.
Links / tutorials appreciated. I'm new to MFC.
I would choose option 2. (Even if you choose option 1, you would probably have to segment the whole process, because, performed as a single step, it would require a huge amount of memory.)
As for point (3), I believe you can, but you probably don't need to. I mean you should split the processing time among the threads (that is, each thread works on a different portion of the image); the read-from-file time probably doesn't affect the overall result.
If I were to do this, I would keep the images in memory, converted to RGB24 for display purposes (assuming these are color images). 210 MB each, you can afford that. No external file.
At the same time I am doing the conversion, I would average in blocks of, say, 10 by 10 pixels to yield the reduced images. (Don't use bicubic resampling, just average!)
Properly coded, I'd expect the whole process not to exceed 1 second per image.
Drawing a subwindow of the large image is done with SetDIBitsToDevice, passing the desired coordinates. In principle, MS did implement the blitting in an efficient way, even for big images.
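A rough sketch of what that draw call might look like in an MFC view (not compiled here; `m_pixels`, `m_width`, `m_height`, `m_srcX`, `m_srcY` are hypothetical members holding the converted RGB24 image and the selected corner). Two details are easy to get wrong: 24-bit DIB rows must be padded to a DWORD boundary, and a bottom-up DIB's y origin is the bottom row, so verify the coordinates against the SetDIBitsToDevice documentation:

```cpp
// Sketch only (Win32/MFC). Assumes the big image is kept in memory as a
// bottom-up RGB24 DIB whose row stride is already a multiple of 4 bytes.
void CZoomView::OnDraw(CDC* pDC)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = m_width;
    bmi.bmiHeader.biHeight      = m_height;   // positive: bottom-up rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;

    // Blit a 500x500 window of the big image. (m_srcX, m_srcY) is the
    // top-left corner in top-down screen coordinates, so the source y
    // is converted to the DIB's bottom-up origin.
    SetDIBitsToDevice(pDC->GetSafeHdc(),
                      0, 0, 500, 500,                    // destination rect
                      m_srcX, m_height - (m_srcY + 500), // source corner (bottom-up)
                      0, m_height,                       // scan lines supplied
                      m_pixels.data(), &bmi, DIB_RGB_COLORS);
}
```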
If you want to show a reduced image as soon as possible, I don't see a way to avoid a full scanning of the original image. [If you are generating the big images, you could precompute and save the reductions at the same time. Or precompute and save the reductions with a dedicated utility as you receive the files.]
Unless you resort to decimation (instead of averaging all pixels in a 10 x 10 block, you could average every other pixel, or every third... with a little loss of quality because of aliasing). But this won't give you a linear speed improvement, as the bottleneck will be memory accesses.
Last piece of advice: make sure to scan the image row by row (i.e. by increasing addresses) rather than column by column.
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)