File this away for future reference:
The "imprecision" is in producing the OUTPUT, the conversion of the binary / computer representation of the number into the string of characters that you display. This is true regardless of whether it is you printing the value or the debugger displaying it for you. Both processes need to take the binary value and convert it to a string of characters for your eyes.
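To see this concretely, here's a quick sketch in Python (assuming CPython's standard `float`, which is an IEEE 754 double). The stored binary value never changes; only the string you ask for does:

```python
# 0.1 has no exact binary representation; the nearest double is stored instead.
x = 0.1
print(f"{x:.1f}")   # "0.1"  -- short format hides the approximation
print(f"{x:.20f}")  # "0.10000000000000000555" -- more digits reveal what is actually stored
```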
If you are computing a value and wish to use it in other computations, always carry the binary value around; don't convert it to a string and then reconvert it to binary. The binary value is as precise as you are going to get, and converting it back and forth only adds "imprecision".
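A small Python illustration of what that round trip costs (the 6-digit format here is just an example of a lossy conversion):

```python
import math

x = math.pi / 10        # some computed binary value
s = f"{x:.6f}"          # convert it to a 6-digit decimal string
y = float(s)            # ...and reconvert the string to binary
print(x == y)           # False: the round trip threw away precision
print(abs(x - y))       # the error introduced by the detour through text
```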
Computer binary representations (base 2) and printed representations (base 10) are inherently incompatible and can only approximate each other. You control the approximation with the format specifier, which says how many digits you want to see.
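One useful rule of thumb: for an IEEE 754 double, 17 significant decimal digits are enough for the printed string to convert back to exactly the same binary value; fewer digits give a lossy approximation. A Python sketch:

```python
x = 1.0 / 3.0
for digits in (6, 12, 17):
    s = f"{x:.{digits}g}"
    # Only at 17 significant digits does the string round-trip to the same double.
    print(digits, s, float(s) == x)
```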
PS - when I was in college, I had a Comp Sci instructor who told us that "floating point numbers have a precision of about 6 significant digits" (single precision back then). Concerning format statements, I asked, "What happens when I ask it to print 10 digits after the decimal point? How does it get the extra 4 digits?" His response: "It makes them up." That's as true today as it was 45 years ago. You asked a single precision floating point number for 12 digits of precision; it made some of them up.
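You can watch those digits get made up. Python has no built-in single precision type, so this sketch simulates one by rounding the value through a 32-bit float with `struct`; asking that value for 12 decimal places prints digits that describe the binary approximation, not the number you started with:

```python
import struct

def to_single(x):
    # Round x to the nearest 32-bit (single precision) float.
    return struct.unpack("f", struct.pack("f", x))[0]

x = to_single(0.1)
print(f"{x:.12f}")  # "0.100000001490" -- only the first ~7 digits are meaningful
```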