**floating point** solution.

Reading your expected result now shows that you are in fact talking about a

**fixed point** problem.

A fixed-point approach places the separation between the integral part and the fraction part at a

**fixed** position within the binary format.

E.g. with a 32-bit format: 22 integral bits and 10 fraction bits.

```
12.2 = 1100.0011001100
3.2 = 11.0011001100
```

In fixed-point arithmetic, you can multiply as plain unsigned integers and shift the result back by the proper number of fraction bits. To get a more accurate result, you may add two additional rounding bits, resulting in 10 + 2 fraction bits, where only the first 10 bits are taken and the last two are used for rounding.

E.g. with a 32-bit format: 20 integral bits and 10+2 fraction bits.

```
12.2 = 1100.0011001100[11]
3.2 = 11.0011001100[11]
```

It can be viewed as scaling: in this example you have 10+2 fraction bits, hence everything is scaled by 2^(10+2) = 4096.

```
12.2 x 4096 = 49971.2 --> 49971
3.2 x 4096 = 13107.2 --> 13107
49971 x 13107 = 654969897
654969897 / 4096 = 159904.76 --> 159904
159904 / 4096 = 39.0390625 --> 39.04
159904 --> 100111.0000101000[00]
```

Algorithm:

1) scale decimal numbers by 4096 and store as integral bit pattern

2) multiply the integral bit patterns as integer

3) divide result by 4096 and take the resulting bit pattern as fixpoint number

4) for printing: drop the two rounding bits

E.g.

```
1) 12.2 --> 1100.0011001100[11]
3.2 --> 11.0011001100[11]
2) mult --> 100111000010100000.[110000101001]
3) scale --> 100111.0000101000[00]
4) print --> 39.0390625 --> 39.04
```

To avoid overflow, you should of course merge the multiply and the scale-back (steps 2 and 3 above) into one operation, taking care to throw away the lower 10+2 fraction bits of the result while multiplying.

Cheers

Andi