Yes, you are correct.
I had no knowledge of IEEE 754 64-bit floating point. The Python docs
say that floats are implemented using the C 'double' data type, but I
didn't realise there was a standard for this across platforms.
Thanks for clarifying. As my question shows, I am not versed in
floating-point arithmetic!
Looking at the definition of IEEE 754, the mantissa carries 53
significant binary digits (52 stored, plus an implicit leading 1),
which means
53 * log10(2) = 15.954589770191003 significant decimal digits
(I got 16 with my previous dodgy calculation).
Does it mean it is safe to assume that this would hold on any
platform?
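For what it's worth, on a recent enough Python (2.6 or later) you can
ask the interpreter directly via sys.float_info, which mirrors the C
<float.h> constants for the float type -- a quick sketch:

```python
import sys

# sys.float_info exposes the platform double's parameters at runtime.
print(sys.float_info.mant_dig)  # mantissa bits: 53 on IEEE-754 doubles
print(sys.float_info.dig)       # guaranteed decimal digits: 15
```

If either number differs, the platform's float is not the usual
IEEE-754 double.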
Evidently not; here's some documentation we both need(ed) to read:
http://docs.python.org/tut/node16.html
"""
Almost all machines today (November 2000) use IEEE-754 floating point
arithmetic, and almost all platforms map Python floats to IEEE-754
"double precision".
"""
I'm very curious to know what the exceptions were in November 2000 and
if they still exist. There is also the question of how much it matters
to you. Presuming the representation is 64 bits, even taking 3 bits
off the mantissa and donating them to the exponent leaves you with
50 * log10(2) = 15.05 decimal digits -- perhaps you could assume that
you've got at least 15 decimal digits.
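If it helps, the "15 safe digits" claim can be checked empirically: a
decimal string with 15 significant digits should survive a round trip
through a double (the value below is arbitrary, chosen just for
illustration):

```python
s = "0.123456789012345"   # 15 significant decimal digits
t = "%.15g" % float(s)    # decimal -> double -> decimal round trip
print(t == s)
```

Provided the C library does correct conversions (see the caveat below),
this comparison comes out True for any 15-significant-digit input.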
While we're waiting for the gurus to answer, here's a routine that's
slightly less dodgy than yours:
| >>> for n in range(200):
| ... if (1.0 + 1.0/2**n) == 1.0:
| ... print n, "bits"
| ... break
| ...
| 53 bits
At least this method has no dependency on the platform's C library.
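Another check with no dependency on the C library's conversions
(assuming the struct module; its standard '>d' format is defined to be
big-endian IEEE-754 regardless of platform) is to pull the 64-bit
pattern apart by hand:

```python
import struct

# Reinterpret a double's 8 bytes as an unsigned 64-bit integer.
bits = struct.unpack('>Q', struct.pack('>d', 1.5))[0]
sign     = bits >> 63               # 1 sign bit
exponent = (bits >> 52) & 0x7FF     # 11 exponent bits (biased by 1023)
fraction = bits & ((1 << 52) - 1)   # 52 stored fraction bits
# 1.5 = 1.1 (binary) * 2**0: sign 0, unbiased exponent 0,
# and only the top fraction bit set.
print(sign, exponent - 1023, fraction == 2**51)  # prints: 0 0 True
```

The 1 + 11 + 52 split is exactly where the 53-bit (52 stored + 1
implicit) mantissa comes from.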
Note carefully the closing words of that tutorial section:
"""
(well, will display on any 754-conforming platform that does best-
possible input and output conversions in its C library -- yours may
not!).
"""
I hope some of this helps ...
Cheers,
John