
IIF - Indexed Image File Format specification - Version 1.0

15 May 2007, CPOL
An article on a simple but ultra-fast image compression method (IIF)

Introduction

This article describes a new image file format specification called IIF - Indexed Image File Format. There are numerous image file formats available today, so one must wonder: why bother with another one when the existing formats work fine? The question here is not whether this new format will replace the others (it surely won't), but whether custom applications can read and write images in this format more easily than in others. It looks like they can...

If one wonders why, the answer is that formats such as PNG and TIFF -- to say nothing of JPEG and GIF -- may look cool, but one should just try to build one's own compressor or decompressor for them. I think I would give up. It would require third-party compression libraries (zlib, LZW, etc.), and parts of the format might be non-public (i.e. licensed).

So, faced with these questions, I thought of a cheap alternative -- it will not hold any records for compression rates, as JPEG does, but it should find its place somewhere between the PNG and TIFF compression methods. How do we achieve this? A good question...

Background

For a few years I have been researching different image file formats, and came to the following conclusion: all but BMP and TGA are too complex to be used directly by custom applications. They are often compressed to high compression rates (like JPEG) and must be decompressed before use. Also, a single file format can support more than one compression method (like TIFF), which makes direct use even harder.

So, I have developed a simple and effective image compression method that has three main goals:

  • to be as simple as possible, so that custom applications can use it with no problems
  • to be ultra-fast for both reading and writing images
  • to support only "lossless" image compression techniques (i.e. the restored image must be identical to the original)

Using the code

Said... done. I have come up with the following technique: split the image into small "tiles" (rectangular regions of a fixed size of 16x16 pixels) and find the total number of distinct colors inside each region. If this number of colors is less than some experimentally determined value (say 170), then these colors can be represented by indices into a color table built by enumerating them. We would need from 1 to 8 bits to enumerate all the colors inside a tile, and this is where we actually "compress" the original image. We perform this task over the whole original image, saving the color table and the indexed color bits for each 16x16-pixel tile.
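To make this first indexing scheme concrete, here is a minimal Python sketch of the per-tile color counting and index-width selection. The function names and the flat pixel-list representation are my own assumptions, not part of the IIF specification:

```python
# A sketch (my own naming, not the IIF spec) of the basic step: count
# the distinct colors of one 16x16 tile and pick an index width.

TILE = 16          # tile edge length, as in the article
MAX_COLORS = 170   # experimentally determined threshold from the article

def tile_palette(pixels, x0, y0, width):
    """Collect the distinct (R, G, B) colors of the tile whose top-left
    corner is (x0, y0). `pixels` is a flat, row-major list of tuples."""
    palette, seen = [], set()
    for dy in range(TILE):
        for dx in range(TILE):
            color = pixels[(y0 + dy) * width + (x0 + dx)]
            if color not in seen:
                seen.add(color)
                palette.append(color)
    return palette

def index_bits(color_count):
    """Smallest number of bits (1..8) that can index this many colors."""
    bits = 1
    while (1 << bits) < color_count:
        bits += 1
    return bits
```

A tile qualifies for this scheme only when `len(palette)` stays at or below `MAX_COLORS`; otherwise one of the fallback schemes described next applies.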

The second way of doing color indexing is a little more advanced. It involves splitting the RGB bytes into another triplet, called ABC, with the following equations (where R8 denotes the most significant bit of the R byte, and R1 the least significant):

A = R8R7R6G8G7B8B7B6
B = R5R4G6G5G4G3B5B4
C = R3R2R1G2G1B3B2B1

As one can see, the A channel holds the LIGHTS, the B channel holds the MID-TONES, and the C channel holds the SHADOWS. So, basically, we get a different number of color changes for different types of images, but eventually we achieve a better compression rate when indexing these derived colors than the original ones. The plan is to use this type of color indexing when the number of original colors inside a single tile (16x16 pixels) reaches 128 (a mid-range value). The limit for this type of color indexing is an index word of length 16.
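Assuming R8 denotes the most significant bit of the red byte, the ABC split above is a pure permutation of the 24 RGB bits and therefore perfectly reversible. A Python sketch (the function names are mine):

```python
# Sketch of the RGB -> ABC bit regrouping, assuming R8 is the most
# significant bit of the red byte and R1 the least significant.

def rgb_to_abc(r, g, b):
    """Regroup the 24 RGB bits into A (lights), B (mid-tones), C (shadows)."""
    a = ((r >> 5) << 5) | ((g >> 6) << 3) | (b >> 5)                      # R8R7R6 G8G7 B8B7B6
    bb = (((r >> 3) & 3) << 6) | (((g >> 2) & 15) << 2) | ((b >> 3) & 3)  # R5R4 G6G5G4G3 B5B4
    c = ((r & 7) << 5) | ((g & 3) << 3) | (b & 7)                         # R3R2R1 G2G1 B3B2B1
    return a, bb, c

def abc_to_rgb(a, bb, c):
    """Inverse regrouping: the split is lossless by construction."""
    r = (((a >> 5) & 7) << 5) | (((bb >> 6) & 3) << 3) | ((c >> 5) & 7)
    g = (((a >> 3) & 3) << 6) | (((bb >> 2) & 15) << 2) | ((c >> 3) & 3)
    b = ((a & 7) << 5) | ((bb & 3) << 3) | (c & 7)
    return r, g, b
```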

The third way of performing color indexing is to find the MIN and MAX values of all three color channels (R, G and B) inside each tile. The difference between these two numbers then gives us the exact number of distinct color levels (per channel) that we need to index.
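A sketch of this third scheme (again, the names are my own): per-channel MIN/MAX over a tile, where max - min + 1 is the number of levels each channel needs.

```python
# Sketch of per-channel MIN/MAX range indexing over one tile.

def channel_ranges(tile_colors):
    """Return [(min, max), ...] for the R, G and B channels of one tile."""
    return [(min(c[i] for c in tile_colors),
             max(c[i] for c in tile_colors)) for i in range(3)]

def range_bits(lo, hi):
    """Bits needed to index the hi - lo + 1 levels of one channel."""
    levels = hi - lo + 1
    bits = 1
    while (1 << bits) < levels:
        bits += 1
    return bits
```

For example, if a tile's red values span only 10..20, eleven levels suffice, so 4 index bits replace the 8 bits of the raw channel.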

Finally, we should end up with an output file in the compression range of -1% to 400% of the original image data. What does this mean? In the worst case the output file can be about 1% bigger than the original (bad for this method); in the best case it can be about four times smaller (good for this method). So, we can never enlarge the original file by much, but we can decrease its size significantly. All of this depends on the type of the original image.

The IIF compression algorithm (in pseudo-code) is as follows (but one can modify it as one likes):

foreach tile in the image
{
    // Count the colors in each tile
    Count_Tile_Colors();

    // Determine the number of bits to index tile colors (from 1 to 8)
    Determine_Bits_Number();

    // Write color table and indexed image data
    Write_Color_Table_And_Data();
}

So simple that it could hardly be simpler. The decompression algorithm follows directly from the compression algorithm. Here it goes:

repeat until the end of the compressed stream
{
    // Get color table for each tile
    Get_Tile_Color_Table();

    // Get color indices for each tile
    Get_Tile_Indices();

    // Write original color data
    Write_Color_Data();
}
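The lossless core of both loops can be sketched as a palette-plus-indices round trip for a single tile. This is an illustration of the idea, not the actual IIF file layout:

```python
# Palette + indices round trip for one tile: an illustration of the
# lossless core of the method, not the IIF on-disk format.

def compress_tile(tile_pixels):
    """Build a color table and one palette index per pixel."""
    palette, lookup, indices = [], {}, []
    for color in tile_pixels:
        if color not in lookup:
            lookup[color] = len(palette)
            palette.append(color)
        indices.append(lookup[color])
    return palette, indices

def decompress_tile(palette, indices):
    """Restore the original pixel data exactly (the third design goal)."""
    return [palette[i] for i in indices]
```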

Since there can be a number of different approaches to this matter, I won't suggest what the best solution would be. However, all implemented solutions should execute fast enough (according to the second defined goal).

Also, if one does not like the compression rate of this method, one can improve it further by running the output through a general-purpose compression library such as zlib. The resulting files should be much smaller afterwards -- often smaller even than the PNG output.
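That post-processing pass can be sketched with Python's standard zlib module (the wrapper names are mine):

```python
# Sketch of the optional post-processing step: run the finished IIF
# stream through zlib (deflate) to squeeze it further.
import zlib

def post_compress(iif_bytes):
    return zlib.compress(iif_bytes, 9)   # level 9 = best compression

def post_decompress(data):
    return zlib.decompress(data)
```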

Results

Here is the sample image Earth.bmp, which has dimensions of 320x240 pixels:

[Sample image: Earth.bmp, 320x240 pixels]

Please see the comparison table below:

Image format    File size    Size relative to BMP
BMP             225 KB       100%
JPG             15.3 KB      6.80%
GIF             27.7 KB      12.31%
PNG             82.6 KB      36.71%
TIFF            104 KB       46.22%
PCX             89.6 KB      39.82%
TGA             225 KB       100%
IIF             83 KB        36.88%

As one can see, a significant compression rate can actually be achieved using the explained image compression method.

Points of Interest

The next step in this direction would be to enhance the compression rate of the IIF algorithm. Please feel free to send suggestions concerning this topic.

History

  • May 2007 - IIF version 1.4 (3 types of color indexing: 8b + 16b + 23b) - increased compression rate
  • May 2007 - IIF version 1.3 (2 types of color indexing: 8b + 16b) - increased compression rate
  • May 2007 - IIF version 1.2 (8b color indexing) - increased compression rate
  • April 2007 - IIF version 1.0 (4b color indexing) - basic compression rate

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

darkoman
Software Developer (Senior) Elektromehanika d.o.o. Nis
Serbia
He has a master's degree in Computer Science from the Faculty of Electronics in Nis (Serbia), and has been working as a C++/C# application developer for Windows platforms since 2001. He likes traveling, reading, and meeting new people and cultures.


Article Copyright 2007 by darkoman