David Mathog
What exactly do DBL_MAX and DBL_MIN define? It looks like it is
probably "biggest number that can be represented at a specified
precision" and not "biggest number that can be represented".
There is some odd behavior as the upper/lower limits are approached, seen in a 32-bit program compiled with gcc 4.2.3 like this:
gcc -Wall -std=c99 -pedantic -lm -g -O0 -o dmath test_math.c
Such a program prints DBL_MAX, DBL_MIN (from float.h) as:
1.797693e+308 and 2.225074e-308
The upper limit looks about right. When a larger number is entered the
conversion fails, and precision is maintained right up to that point,
as in this example (input value, then echo it back):
1e+308
1.000000000000000e+308
1.5e+308
1.500000000000000e+308
1.6e+308
1.600000000000000e+308
1.7e+308
1.700000000000000e+308
1.8e+308
(conversion failed)
However the lower limit seems to be a bit squishy. The number of
correct digits in the mantissa shrinks as the exponent becomes more
negative, until the conversion fails outright at a value much smaller
than DBL_MIN. Unfortunately that leaves room for some (admittedly
tiny) numbers with significant errors to enter a calculation without
warning. Here is a log: number entered, then printed back in %le
format:
2.3e-308
2.300000000000000e-308 <-- so precision is OK down to DBL_MIN
2.3e-309
2.299999999999998e-309 <-- but not below it
2.3e-310
2.299999999999978e-310
2.3e-315
2.300000001942595e-315
2.3e-320
2.299875581391003e-320 <-- error in the 5th digit
2.3e-325
(conversion failed)
Conversion is with:
dtmp=strtod(buffer,&cptr);
Are we really supposed to have to do a
if(dtmp < DBL_MIN){ reject_it(); }
?
Thanks,
David Mathog