You don't say what 2's complement is (why is it called that?), and you left out what I think is the most important part which is WHY integers work this way.
First of all, the top bit is the sign bit. If the leftmost bit is 1, it's negative. Simple.
But why use 1111_1111 to represent -1 instead of, say, 1000_0001? The answer is that this rule allows addition and subtraction (and equality tests, and left shifts) to work with no changes to the computer whatsoever. For example, let's add the unsigned numbers 248 (1111_1000) + 3 (0000_0011). The answer is 251 (1111_1011).
If you add 4 more (0000_0100) the answer is 255 (1111_1111).
If you add 2 more, the answer is 257 (1_0000_0001), but since we are using 8-bit math the top bit is discarded, so the answer is 1 (0000_0001). At that point an "unsigned overflow" has occurred.
Now let's consider signed 8-bit math instead. In 2s-complement signed math, the number 248 (1111_1000) that we started with has the sign bit set, which makes it negative. In fact, the value 1111_1000 represents the number -8, so let's add -8 (1111_1000) + 3 (0000_0011). The answer is -5 (1111_1011).
If you add 4 more (0000_0100) the answer is -1 (1111_1111).
If you add 2 more, the answer is 1 (0000_0001) because we are using 8-bit math and the top bit is discarded. An unsigned overflow has occurred, but a signed overflow has not occurred.
Notice that the operations performed by the computer are identical for signed and unsigned math. That's the key advantage of the 2s-complement representation: it allows the computer to support signed and unsigned numbers with less circuitry and fewer operations. This keeps integer math simple from the computer's perspective.
Now let's consider the case of 96 (0110_0000) + 64 (0100_0000) = 160 (1010_0000). If we treat the result as unsigned, this is an ordinary addition and no overflow has occurred. But if we treat the result as signed, then 1010_0000 actually means -96, so an overflow HAS occurred. To the machine there is no real difference between the two cases: on every addition or subtraction, the CPU sets one flag bit (the carry flag) that records unsigned overflow and another flag bit (the overflow flag) that records signed overflow, but the CPU doesn't otherwise care; the two flags are simply available for the program to check if it wants to.
Unfortunately, most programming languages don't provide a way to check these flag bits, following the lead of C, which does not support overflow detection. In standard C, overflow is silent (signed overflow is technically undefined behavior, but in practice it usually just wraps like unsigned overflow), and it is not easy to tell that it has happened. Thus most video games, written in C/C++ or a related language, do not detect overflow. In this case the video game had no idea that anything had gone wrong.
The alternative often isn't any better. In languages like C# that offer overflow detection (the checked keyword), the only action the language itself can take is to throw an exception or signal an error; if the game is not designed to handle it, the game simply crashes (exits), which is no better (and arguably worse) than silent overflow.
In this case (increasing or decreasing hit points), a better way to handle the overflow would be to clamp the result to the maximum value, i.e. 2 billion hit points + 2 billion more hit points = 2 billion hit points. The game should not have to be designed to handle this case explicitly, because the game's designers never expected hit points to get that high. Rather, this is useful as an automatic language feature that limits the "damage" of an overflow. Clamping to the maximum value is often a better outcome than letting the number go negative (here it would have kept the card alive at 2 billion hit points), and often better than raising an error/exception: if the program was not designed to expect one, an exception either terminates the program instantly (which does not make users happy!) or unwinds some functions on top of the call stack, causing who-knows-what effect.