So you wanna generate ascii art? Here is how to do that: First you have to read in the picture file and convert it if necessary so that you have the width and height of the picture and the color (usually RGB) of each pixel in a 2 dimensional array (the dimensions are the width and height of the picture). It might sound easy to "just read in a picture file" but it is very tricky even for the simplest formats like .bmp, .tiff or .tga if you want to handle every case. Why do you think there are programs out there that specialize in picture viewing? Because if you want to handle all picture formats without the help of some libraries, you will spend the rest of your life (at least a few years) writing libraries that handle picture formats before you can create your ascii generator prog. :-) For this simple reason I recommend you handle only a simple format that doesn't employ compression and other tricks, for example the windows BMP format. Even in BMP files there are 1, 2, 4, 8, 16, 24, and 32 bit color depth representations that totally differ in binary, and a BMP file might contain RLE compression (and a lot of other things if we supported various other headers: http://en.wikipedia.org/wiki/BMP_file_format). At first handle just BMP files without RLE (I never supported RLE compression in my progs) and restrict the color depth to 32 bits to make your work easier. So support just pure 32 bit bmp files. Search for bmp tutorials using google, I won't write the 1000th bmp tutorial. It's not just reading in bytes from a file and using them as color values: basically every picture file format contains a header at the beginning of the file and different kinds of other garbage here and there... So use google.
Let's say you successfully allocated your memory and read the color values of each pixel into memory, so that by specifying a row (Y) and column (X) value you can pick the color (RGB) of a pixel from your picture. To make ascii art you have to be able to convert every pixel into grayscale (the Red, Green, and Blue - aka RGB - components of it into intensity); there are a lot of methods to do that (http://en.wikipedia.org/wiki/Grayscale). The color of a pixel consists of 3 values (3 channels - at least in RGB color space) and you will convert it to a single value (1 channel) - intensity. This is already a lossy conversion. The R, G, and B values in bmp files are bytes, which means their range is [0..255]. Your resulting intensity is also a byte with range [0..255]. The linked wiki article mentions the method I always used to convert RGB to a single grayscale intensity value: INTENSITY = R*0.3 + G*0.6 + B*0.1
With byte magic in C it's something like the following:
unsigned char intensity = (unsigned char)(((int)R*75 + (int)G*155 + (int)B*25) / 255);
Let's say you converted each pixel of your picture into a byte (a grayscale intensity value). The next step is to associate an ascii character with each intensity value from 0 to 255. The problem is that you have only a limited number of ascii characters, much less than the 256 you would need, so first you have to downsample your "256 color" grayscale bitmap to use far fewer intensity values (this is a lossy conversion, like when you convert a 32 bit color bitmap into a 1 or 4 or 16 or whatever color bitmap). During downsampling you can achieve much better quality by applying dithering, but that goes far beyond what fits into this mini beginner tutorial, so I will use a much simpler method. Instead of 256 values in my grayscale image I want only 8, so I divide each intensity value by 32. After this each intensity value will be in the range [0..7].
After this I associate each intensity value with an ascii character.
In plain C this conversion looks like this:
char ascii_pixel = " .:ioVM@"[intensity_value];
Notes: You can use more than 8 intensity values since you have more ascii characters; I used 8 just for the sake of simplicity. You could use 16 levels by dividing the intensity values by 16, or 32 levels by dividing them by 8. How do you pick characters for your intensity values? Each ascii character itself is also a small image formed from black and white pixels. The ratio of the black and white pixels and the way they are distributed over the small image of the character define a perceived intensity when you look at it from a distance, like you do with ascii art. Don't bother inventing your own set of ascii characters; if you wanna use, for example, 16 ascii characters for your conversion, you should rather search with google to find a proven set of 16 ascii characters for the job.
Since the conversion is quite lossy you cannot fully reverse it. If you understood the above process then the reversing process should be straightforward. Note that the method I described is quite a simple one; you can involve dithering and different image processing algorithms (like edge detection) to make the conversion of different kinds of images better. If you are showing ascii art on the monitor in some character video mode then you can use colors with your ascii characters. Ascii art can even be used to play video in character mode! For other image to ascii art conversion methods you can check out this: http://en.wikipedia.org/wiki/ASCII_art
Other note: Converting a big (let's say 1024x768) picture with this method can lead to a huge file, so you might want to shrink your picture before conversion so that its width is around 80-100 pixels, because when you view a text file not many more characters fit on the screen even with huge monitors. There are a lot of image rescale methods (http://en.wikipedia.org/wiki/Image_scaling) but discussing them is unfortunately out of the scope of this tutorial. You can simulate rescaling by giving only small pictures as input to your program or by rescaling them with a good Photoshop-like program. :-)
Other useful references: