For the sake of simplicity, suppose you have an 8-bit-only processor and want to display a 16-bit unsigned number.
Assuming n is the input number, the algorithm could be (pseudocode):

1. k = 0
2. if n = 0, exit
3. r[k] = remainder of n div 10
4. n = n div 10
5. k = k + 1
6. goto 2

(where div is the integer division)
At the end, the array r contains the digits of the decimal representation of the number (in reverse order).
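The loop above can be sketched in C. This is just an illustration (the function name is ours): it uses C's native `/` and `%` operators, which on a real 8-bit processor would be replaced by the byte-wise division routine discussed next.

```c
/* Extract the decimal digits of a 16-bit unsigned number.
 * Digits are stored in r[] in reverse order (least significant first).
 * Returns the number of digits written (0 if n is 0, as in the
 * pseudocode above). A 16-bit value has at most 5 decimal digits. */
int to_decimal_digits(unsigned int n, unsigned char r[5])
{
    int k = 0;
    while (n != 0) {
        r[k] = n % 10;  /* remainder of n div 10 */
        n = n / 10;     /* integer division */
        k = k + 1;
    }
    return k;
}
```

For example, 51728 yields the digits 8, 2, 7, 1, 5, which read back in reverse give "51728".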
Of course there is a problem in the given algorithm: we have to compute the integer division (and the remainder) of a 16-bit number on an 8-bit-only processor! We can perform this operation the way we do by hand, but using 4 bits at a time, in other words using the hexadecimal representation of the numbers.
Let's try to do it in an example. Namely, we try to divide n = 51728 by 10, that is, in hexadecimalese, 0xCA10 by 0xA. 0xCA is the most significant byte and 0x10 is the least significant byte.
In the first step we divide the upper nibble of 0xCA, that is 0xC, by 0xA, obtaining 0x1 as quotient and 0x2 as remainder. Then we prepend that remainder to the lowest nibble of 0xCA (namely 0xA) in order to obtain 0x2A, and repeat the process:
CA10 |A
-----+--
  2A |1   (C/A is 12/10, that is quotient 1, remainder 2)
  21 |4   (2A/A is 42/10, that is quotient 4, remainder 2)
  30 |3   (21/A is 33/10, that is quotient 3, remainder 3)
   8 |4   (30/A is 48/10, that is quotient 4, remainder 8)
hence we have obtained the quotient, namely 0x1434 (5172 in decimal), and the remainder, 8, always working with at most a byte at a time. In other words, our 8-bit-only processor can perform it.
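The nibble-at-a-time long division above can be sketched as a C routine. This is a sketch under the assumption that dividing a single byte by 10 is available as a primitive (here written with `/` and `%`; on a real 8-bit CPU it would be a small subtraction loop or lookup table). The function name is ours.

```c
/* Divide a 16-bit value by 10, nibble by nibble, so that every
 * intermediate quantity fits in one byte. At each step the running
 * remainder (0..9) is prepended to the next nibble of the dividend,
 * giving a value of at most 0x9F, which an 8-bit ALU can divide by 10.
 * Returns the quotient and stores the final remainder via *remainder. */
unsigned int div16_by_10(unsigned int n, unsigned char *remainder)
{
    unsigned char rem = 0;
    unsigned int quotient = 0;
    int shift;

    /* walk the four nibbles from most to least significant */
    for (shift = 12; shift >= 0; shift -= 4) {
        /* remainder prepended to the next nibble: an 8-bit value */
        unsigned char part = (unsigned char)((rem << 4) | ((n >> shift) & 0xF));
        quotient = (quotient << 4) | (part / 10); /* next quotient nibble */
        rem = part % 10;                          /* carries into next step */
    }
    *remainder = rem;
    return quotient;
}
```

Running it on 0xCA10 reproduces the worked example: quotient 0x1434 (5172) and remainder 8.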