See more: C++ MFC ImageProcessing
Hi,
 
This post is less "solution seeking" and more "opinion seeking".
 
I have 2 large image files. Each pixel is stored either as a short integer or a float value (depending on the source). The images are huge: about 10000 x 7000 px.
 
I want to display them in a set of windows, say 500 x 500 px each (user-resizable). One window will contain the whole image (resampled) and another will contain a 1:1 zoomed view. The area to zoom into will be chosen by clicking in the resized image view (marked with a red square).
 
That is what I intend to do. But I need help deciding how to actually implement it.
 
1. One idea I have is to resample the whole file at start (may take 5-10 secs?) and save it as a bitmap or a binary file, then load it.
The second idea is to load in patches dynamically: initially do something like a nearest-neighbour approximation, then do bicubic resampling on small patches. This will reduce the wait time to see the image but may be much more complex. (A bit like what Google Earth does?)
Which one do you think is better? Resample at start, but have a uniform user experience later. Or resample block by block in some way.
 
2. I can't decide how to implement the drawing surface. Plot pixel by pixel using GDI on a document (I'm using MFC)? Or save as a bitmap and load it into a picture box? Or some OpenGL method? (I'm very lost here, but I'm willing to learn; links would be appreciated.)
 
3. Can I access one file at multiple points? If I could have one thread resampling one half of the file and another doing the other half, it would be really quick. But I'm not sure if it's possible at all. (Maybe I can copy the file beforehand and seek?)
 
Any help and opinions on this would be appreciated. I'm just trying to lay down a framework / plan of work.
Links / tutorials appreciated. I'm new to MFC.
Posted 21-Sep-12 2:23am
Edited 21-Sep-12 2:24am

Solution 1

I would choose option 2 (even if you choose option 1, you would probably have to segment the whole process anyway, because, performed as a single step, it would require a huge amount of memory).
 
As for point (3), I believe you can, but you probably don't need to. I mean you should split the processing among the threads (that is, each thread works on a different portion of the image); the read-from-file time probably doesn't affect the overall result.
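For illustration, a minimal sketch of that split, assuming the source image is already in memory as one float buffer and the destination is an 8-bit preview. std::thread is used for brevity (in MFC code AfxBeginThread would be the usual route); all names and the float-to-byte scaling are illustrative:

#include <algorithm>
#include <cstddef>
#include <thread>

// Nearest-neighbour reduction of a float image to an 8-bit preview.
// 'factor' is the integer reduction factor (e.g. 20 for 10000 -> 500).
// Each call handles destination rows [rowBegin, rowEnd), so the work
// can be split across threads.
void ReduceRows(const float* src, int srcW,
                unsigned char* dst, int dstW,
                int factor, float scale,
                int rowBegin, int rowEnd)
{
    for (int y = rowBegin; y < rowEnd; ++y)
    {
        const float* srcRow = src + std::size_t(y) * factor * srcW;
        unsigned char* dstRow = dst + std::size_t(y) * dstW;
        for (int x = 0; x < dstW; ++x)
            dstRow[x] = static_cast<unsigned char>(
                std::min(srcRow[std::size_t(x) * factor] * scale, 255.0f));
    }
}

void ReduceInParallel(const float* src, int srcW,
                      unsigned char* dst, int dstW, int dstH,
                      int factor, float scale)
{
    const int mid = dstH / 2;
    std::thread worker(ReduceRows, src, srcW, dst, dstW,
                       factor, scale, 0, mid);                  // first half
    ReduceRows(src, srcW, dst, dstW, factor, scale, mid, dstH); // second half
    worker.join();
}

Both workers only read from the shared source buffer and write to disjoint destination rows, so no locking is needed.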
Comments
Shaunak De at 21-Sep-12 8:41am
Given I can consume up to 500 MB of RAM (smug), which do you think I should pick? And what about the drawing surface?
CPallini at 21-Sep-12 9:03am
With such an amount of memory you can go with option 1. Plotting pixel by pixel is not an option (too slow). You may check out my old article
http://www.codeproject.com/Articles/22271/Plain-C-Resampling-DLL
to get an idea (if you tweak it a bit, I guess you may be able to actually use it).
Shaunak De at 21-Sep-12 9:08am
Thanks a lot! Grazie!
CPallini at 21-Sep-12 9:09am
You are welcome.

Solution 2

If I were to do this, I would keep the images in memory, converted to RGB24 for display purposes (assuming these are color images). At 10000 x 7000 x 3 bytes, that is 210 MB each; you can afford that. No external file.
 
While doing the conversion, I would average in blocks of, say, 10 by 10 pixels to yield the reduced images. (Don't use bicubic resampling, just average!)
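A sketch of that reduction, assuming the source is a single float buffer whose width and height are multiples of the block size, and with an illustrative 'scale' factor mapping the averaged values into 0..255:

#include <algorithm>
#include <cstddef>
#include <vector>

const int BLOCK = 10;

// 10 x 10 block average of a float image into an 8-bit preview.
// The source is scanned strictly row by row (see the advice below),
// accumulating into one row of block sums at a time.
void ReduceByAveraging(const float* src, int srcW, int srcH,
                       unsigned char* dst, float scale)
{
    const int dstW = srcW / BLOCK;
    std::vector<float> acc(dstW);
    for (int by = 0; by < srcH / BLOCK; ++by)
    {
        std::fill(acc.begin(), acc.end(), 0.0f);
        for (int y = 0; y < BLOCK; ++y)   // 10 consecutive source rows
        {
            const float* row = src + (std::size_t(by) * BLOCK + y) * srcW;
            for (int x = 0; x < srcW; ++x)
                acc[x / BLOCK] += row[x];
        }
        unsigned char* out = dst + std::size_t(by) * dstW;
        for (int bx = 0; bx < dstW; ++bx)
            out[bx] = static_cast<unsigned char>(
                acc[bx] * scale / (BLOCK * BLOCK));
    }
}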
 
Properly coded, I'd expect the whole process not to exceed 1 second per image.
 
Drawing a subwindow of the large image is done with SetDIBitsToDevice, passing the desired coordinates. In principle, Microsoft implemented the blitting efficiently, even for big images.
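A minimal sketch of that call, assuming the image is held as top-down RGB24 rows, each padded to a multiple of 4 bytes as DIB scanlines require (the function name is illustrative). For the 1:1 zoom view, the xSrc/ySrc source-rectangle parameters select the clicked region, so no pixels need to be copied by hand:

#include <afxwin.h>   // MFC: CDC

// Blit an in-memory 24-bit image into a window DC, unscaled.
void DrawImageView(CDC& dc, const BYTE* pixels, int imgW, int imgH)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = imgW;
    bmi.bmiHeader.biHeight      = -imgH;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;

    SetDIBitsToDevice(dc.GetSafeHdc(),
                      0, 0,            // destination origin in the window
                      imgW, imgH,      // size of the copied area
                      0, 0,            // source origin (xSrc, ySrc)
                      0,               // first scan line in 'pixels'
                      imgH,            // number of scan lines
                      pixels, &bmi, DIB_RGB_COLORS);
}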
 
If you want to show a reduced image as soon as possible, I don't see a way to avoid a full scanning of the original image. [If you are generating the big images, you could precompute and save the reductions at the same time. Or precompute and save the reductions with a dedicated utility as you receive the files.]
 
Unless you resort to decimation (instead of averaging all pixels in a 10 x 10 block, you could average every other pixel, or every third... with a little loss of quality because of aliasing). But this won't give you a linear speed improvement, as the bottleneck will be memory accesses.
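For instance, a decimated variant of the averaging sketch above, sampling every other pixel of each block in both directions (25 samples instead of 100; names illustrative):

#include <cstddef>

// Sparse 10 x 10 block average: every other pixel in each direction.
// Less arithmetic, some aliasing; the speed-up stays sub-linear
// because memory reads dominate.
void ReduceBySparseAverage(const float* src, int srcW, int srcH,
                           unsigned char* dst, float scale)
{
    const int B = 10;
    const int dstW = srcW / B;
    for (int by = 0; by < srcH / B; ++by)
    {
        unsigned char* out = dst + std::size_t(by) * dstW;
        for (int bx = 0; bx < dstW; ++bx)
        {
            float sum = 0.0f;
            for (int y = 0; y < B; y += 2)
            {
                const float* row = src + (std::size_t(by) * B + y) * srcW
                                       + std::size_t(bx) * B;
                for (int x = 0; x < B; x += 2)
                    sum += row[x];
            }
            out[bx] = static_cast<unsigned char>(sum * scale / 25.0f);
        }
    }
}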
 
Last piece of advice: make sure to scan the image row by row (i.e. by increasing addresses) rather than column by column.
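The difference in a nutshell (both loops visit every pixel exactly once; only the traversal order changes):

#include <cstddef>

float SumRowMajor(const float* img, int w, int h)
{
    float s = 0.0f;
    for (int y = 0; y < h; ++y)        // row by row: addresses increase by
        for (int x = 0; x < w; ++x)    // one element at a time
            s += img[std::size_t(y) * w + x];
    return s;
}

float SumColumnMajor(const float* img, int w, int h)
{
    float s = 0.0f;
    for (int x = 0; x < w; ++x)        // column by column: every access jumps
        for (int y = 0; y < h; ++y)    // a whole row (40 KB for a 10000-px float row)
            s += img[std::size_t(y) * w + x];
    return s;
}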
Comments
Shaunak De at 26-Oct-12 5:49am
Thanks. I did that project following the guidelines above. But out of personal interest I want to implement your method. The images are not color, but complex in nature (real and imaginary parts). But sometimes it's possible to generate color imagery by combining various statistics. I will give this a shot.
YvesDaoust at 26-Oct-12 5:56am
You are welcome.
 
There are several options for mapping complex numbers to RGB (2D unbounded plane to unit cube). One is Phase -> Hue, Magnitude -> Lightness (rescaled), fully saturated colors. It is an interesting challenge to make such transforms fast.
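A sketch of that mapping for one sample, with an assumed normalization constant maxMag (e.g. a high percentile of the magnitudes) and saturation fixed at 1; all names are illustrative:

#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdint>

// Phase -> hue, magnitude -> lightness, saturation = 1 (HSL -> RGB).
void ComplexToRGB(std::complex<float> z, float maxMag,
                  std::uint8_t& r, std::uint8_t& g, std::uint8_t& b)
{
    const float pi = 3.14159265f;
    float h = (std::arg(z) + pi) / (2.0f * pi) * 360.0f;  // hue in (0, 360]
    float l = std::min(std::abs(z) / maxMag, 1.0f);       // lightness in [0, 1]

    float c = 1.0f - std::fabs(2.0f * l - 1.0f);          // chroma (S = 1)
    float x = c * (1.0f - std::fabs(std::fmod(h / 60.0f, 2.0f) - 1.0f));
    float m = l - c / 2.0f;

    float rf, gf, bf;
    if      (h <  60) { rf = c; gf = x; bf = 0; }
    else if (h < 120) { rf = x; gf = c; bf = 0; }
    else if (h < 180) { rf = 0; gf = c; bf = x; }
    else if (h < 240) { rf = 0; gf = x; bf = c; }
    else if (h < 300) { rf = x; gf = 0; bf = c; }
    else              { rf = c; gf = 0; bf = x; }

    r = static_cast<std::uint8_t>((rf + m) * 255.0f);
    g = static_cast<std::uint8_t>((gf + m) * 255.0f);
    b = static_cast<std::uint8_t>((bf + m) * 255.0f);
}

The trig call in std::arg is the expensive part; a fast variant could replace it with an octant-based atan2 approximation or a lookup table.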
Shaunak De at 26-Oct-12 6:03am
I will give it a shot. Color is not so important for my application, but it would still be cool. I will however try to come up with a scheme where HSV calculations can be avoided. I was thinking of mapping the magnitude to a blue-to-red/brown lookup table. [Generally low amplitude corresponds to water areas and high to rocks etc.] I also want to measure the standard deviation of the phase... which can be indicative of the type of surface. Maybe map that to the green channel as well.
Shaunak De at 26-Oct-12 6:32am
Can I ask why you suggest row by row?
YvesDaoust at 26-Oct-12 6:53am
As said, to work by increasing addresses. The column-by-column scheme stresses memory management too much; it lacks locality.
Shaunak De at 26-Oct-12 6:56am
Thanks. I understand. Basically to increment instead of some a + offset*b type calculations. :)
YvesDaoust at 26-Oct-12 7:01am
The issue is that when going column by column the addresses go back and forth, and the successive pixels you visit are far apart.
Shaunak De at 26-Oct-12 7:04am
OK, so that is going to take more time to fetch from memory? [Something like cache hits/misses?]
Please correct me if I am wrong, and forgive my amateurish questions.
YvesDaoust at 26-Oct-12 7:07am
Yes, much longer. Bad for the caches and for virtual memory. It increases the amount of space that the processor needs to keep close.

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
