This is not really overflow: an N-bit integer can be treated as either signed or unsigned. Interpreted as signed, it can store values in the range [-2^(N-1) .. 2^(N-1)-1]; interpreted as unsigned, its range is [0 .. (2^N)-1]. Whichever way you use that integer, the subrange [0 .. 2^(N-1)-1] means the same thing in both interpretations.
Read this article to understand what I mean: the same integer value, with the same bits, can be interpreted as either a signed or an unsigned integer:
Also try this:

int i;
unsigned int ui;

for (i = -5; i <= 5; ++i) {
    ui = (unsigned int)i;
    /* deliberately prints both variables with both conversions */
    printf("%2d %2d %2u %2u %04x %04x\n", i, ui, i, ui, i, ui);
}
After running this piece of code you will see that the hex value (the binary representation) of i and ui is always identical; only the printf conversion specifier and the generated assembly code treat the bits of the integer differently.
Try this code as well:
int i;

for (i = 0; i < 256; ++i)
    printf("%d%d%d%d%d%d%d%d %02x %3u %4d\n",
           i & 0x80 ? 1 : 0, i & 0x40 ? 1 : 0, i & 0x20 ? 1 : 0, i & 0x10 ? 1 : 0,
           i & 0x08 ? 1 : 0, i & 0x04 ? 1 : 0, i & 0x02 ? 1 : 0, i & 0x01 ? 1 : 0,
           i, (unsigned char)i, (signed char)i);
This piece of code prints every byte value from 0x00 to 0xFF, showing the decimal value you get if you interpret those bits as unsigned and as signed. Check out the negative numbers! Where are the highest and the lowest negative numbers? What happens if you subtract 1 from an integer whose value is currently 0 and print it in binary, signed, or unsigned format? This is where the topic of overflow comes in...