How to display a "double" in all its precision???


Patricia Shanahan

The_Sage said:
Isn't it obvious that your "lack of precision" is due to rounding off and
quantizing errors -- as would be expected?

No, in this case it isn't at all obvious. Take another look at the
program in the base message of the thread.

The value being printed, x, is calculated as 1 + 1/Math.pow(2,i), where i
ranges from one to thirty.

For i in the range one through thirty, each of Math.pow(2,i),
1/Math.pow(2,i) and 1+1/Math.pow(2,i) has a mathematical result that is
exactly representable as a double, and that is required to be the result
according to the Math.pow documentation and the JLS descriptions of
divide and add.

The OP knew that the calculations were exact, and that the final double
held the expected result.

The issue was entirely one of output formatting, and using BigDecimal it
is possible to get the decimal representation of the exact result.
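A minimal sketch of that BigDecimal approach, using the same x = 1 + 1/Math.pow(2,i) as the original program (the class name here is invented for illustration):

```java
import java.math.BigDecimal;

public class ExactDoubleDemo {
    public static void main(String[] args) {
        for (int i = 1; i <= 30; i++) {
            double x = 1 + 1 / Math.pow(2, i);
            // new BigDecimal(double) converts the bits without any rounding,
            // so toPlainString() shows every digit of the stored value.
            System.out.println(new BigDecimal(x).toPlainString());
        }
    }
}
```

For i = 1 this prints 1.5, for i = 2 it prints 1.25, and so on; each line is the exact decimal expansion of the double, which is why no precision is lost.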

Patricia
 

blmblm

If you're starting from scratch, start by truly understanding unsigned
and signed char, then unsigned and signed two's complement, then single
precision floating point, and then, with a full comprehension of that,
the full spec won't be that hard to understand.

Are there CS programs out there that don't include a computer
organization class where this stuff gets drilled into your brain?

It's probably presented somewhere in most CS programs, but drilled
into the students' brains -- hm, I'm going to guess that not so
many of them do that. The ACM's most recent set of curriculum
guidelines (http://acm.org/education/curric_vols/cc2001.pdf) call
for spending about a week's worth of lecture time on bit-level
representations of various kinds of data, including integers and
floating point. You can only get across so much in a week.

And if you consider the general population of people trying to write
code, and not just those who are products of a formal CS program
somewhere .... If most programmers understood how floating point
works, would there be so many questions along the lines of "how
come when I divide 1.0 by 10 I don't get exactly one tenth?" ?
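That classic question can be answered in a couple of lines (a sketch; the long digit string below is the exact decimal expansion of the double nearest to one tenth):

```java
import java.math.BigDecimal;

public class OneTenthDemo {
    public static void main(String[] args) {
        double x = 1.0 / 10;
        // Double.toString shows a short round-tripping decimal...
        System.out.println(x); // prints 0.1
        // ...but the stored value is the nearest double, not exactly one tenth:
        System.out.println(new BigDecimal(x).toPlainString());
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```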

Not a good state of affairs, I agree.
 

Chris Uppal

jmcgill said:
Are there CS programs out there that don't include a computer
organization class where this stuff gets drilled into your brain?

I would imagine there are lots.

And I think that's defensible: a course could validly limit its coverage of
floating-point to "don't use floating point (unless you know what you are
doing)". With an optional course component which covered not only
floating-point representation issues, but also issues of numerical stability
and the like. Few programmers would need the optional component, I would
think -- it would be of interest primarily to scientists and masochists.

-- chris
 

Patricia Shanahan

Chris said:
I would imagine there are lots.

And I think that's defensible: a course could validly limit its coverage of
floating-point to "don't use floating point (unless you know what you are
doing)". With an optional course component which covered not only
floating-point representation issues, but also issues of numerical stability
and the like. Few programmers would need the optional component, I would
think -- it would be of interest primarily to scientists and masochists.

I think some of the confusion in this thread may be a result of this
strategy. Programmers seem to know that floating point rounding error exists,
without being able to recognize exact calculations, or maybe even
without realizing that some floating point calculations do have exact
results.

Patricia
 

Chris Smith

CS Imam said:
That was the succinct and correct answer, not
repeatedly insisting that nothing was being lost. Indeed, LOTS is being
lost... but on purpose it turns out.

I'll point out that while you and Patricia are right, the other
responses you got aren't as dumb as you seem to think. Specifically, no
information at all was lost in that display under the following two
assumptions:

(a) you understand a floating point value as representing a range of
possible mathematical values, as Chris Uppal pointed out; AND

(b) you know the original precision of the binary floating point number.

Under those assumptions, which are quite reasonable for most uses, you
got back a correct answer with no loss of information versus the
original. However, if you don't assume (a), then the answer is
incorrect; and if you don't assume (b), then information was lost.

Hope that clarifies,
 

Patricia Shanahan

Chris said:
I'll point out that while you and Patricia are right, the other
responses you got aren't as dumb as you seem to think. Specifically, no
information at all was lost in that display under the following two
assumptions:

(a) you understand a floating point value as representing a range of
possible mathematical values, as Chris Uppal pointed out; AND

There are three problems with regarding a floating point number as
representing a range of possible mathematical values rather than as
corresponding to a unique real:

1. It conflicts with both the JLS and ANSI/IEEE Std 754-1985. Each gives
a formula for calculating the real number value of a floating point
number, based on the values of its bit fields. The formulas differ, but
give the same results.

2. It would make describing floating point operations much harder. Every
statement of the form "In the remaining cases, where neither an
infinity, nor a zero, nor NaN is involved, and the operands have the
same sign or have different magnitudes, the exact mathematical sum is
computed." would need to be replaced by a more complicated discussion in
terms of the ranges of the two floating point numbers.

3. There are different rounding ranges for different purposes. An add is
allowed at most half an ulp of rounding error, and must round halfway
cases to even. Math.sin is allowed one ulp of
rounding error. Which range does a double x represent? Only the add
results that would round to it? Or does x's range include sine(y) if
Math.sin(y)==x?
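The ulp sizes involved here are directly observable via Math.ulp (a small sketch):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // Math.ulp(1.0) is the gap between 1.0 and the next larger double: 2^-52.
        System.out.println(Math.ulp(1.0)); // prints 2.220446049250313E-16
        // A correctly rounded add is within half this ulp of the exact sum;
        // Math.sin is only required to be within one full ulp of the true sine.
        System.out.println(Math.ulp(1.0) == Math.pow(2, -52)); // prints true
    }
}
```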

I find it simpler to go with the specs, and think of each floating point
number as having a unique value, surrounded by a range of real numbers that
would be rounded to it under the arithmetic rounding rules, and broader
ranges that could be rounded to it under some of the more relaxed
function evaluation rules.
(b) you know the original precision of the binary floating point number.

Under those assumptions, which are quite reasonable for most uses, you
got back a correct answer with no loss of information versus the
original. However, if you don't assume (a), then the answer is
incorrect; and if you don't assume (b), then information was lost.

Hope that clarifies,

Certainly I find the normal Java Double.toString result very practical
for most, but not all, purposes. Printing the shortest decimal number
that Double.valueOf(String) would round to the double is a reasonable
default.
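A sketch of that round-trip property, using one of the values from the original program (the printed string is short, yet parsing it recovers the identical double):

```java
public class RoundTripDemo {
    public static void main(String[] args) {
        double x = 1 + 1 / Math.pow(2, 30);
        String s = Double.toString(x);
        // Double.toString does not print the full 30-digit exact expansion...
        System.out.println(s);
        // ...but Double.parseDouble maps it back to exactly the same bits:
        System.out.println(Double.parseDouble(s) == x); // prints true
    }
}
```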

Patricia
 

Chris Smith

Patricia,

I believe that "conflicts" is too strong a word for the relationship
between a mental model of floating point numbers as ranges, and the JLS
and IEEE specs. A floating point value can have both a range of numbers
that it best represents, and also an exact mathematical value. It is
more useful to use the exact mathematical value for some purposes, and
the range for others.

I do suspect, though, that there is too much emphasis here on the exact
mathematical value of a floating point number. For most purposes, this
exact value is somewhat arbitrary from the perspective of the
programmer; it may or may not be precisely specified by the operations
(the "within one ulp" operations cause it to become unspecified), and
even when it is specified, it is still often not particularly relevant
to the intended operation. For most purposes, the most meaningful thing
that can be said about the exact value of the floating point number is
that it approximates the correct answer to some degree of accuracy that
depends on context. The same can be said of any other number that
rounds to that floating point value, and there's not necessarily any
good reason to choose one over another except that it happens to be
representable.

The ranges of values that are best represented by a given float are not
accuracy ranges and have nothing to do with the degree of accuracy of
the approximation, so the error in certain calculations is not relevant.
An operation can lack accuracy all it wants, and since floating point
numbers have no concept of accuracy, this would have to be tracked
elsewhere, in separate variables. All it means is that there's
generally no reason to believe that 0.100000001490116119384765625 is
really a better answer than 0.1 to that question. They are both within
the range of numbers that would be represented by a given float.
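That last point is easy to demonstrate with float (a sketch; 0.100000001490116119384765625 is the exact decimal expansion of the float nearest to 0.1):

```java
public class SameFloatDemo {
    public static void main(String[] args) {
        float a = 0.1f;
        float b = 0.100000001490116119384765625f;
        // Both decimal literals round to the same float value...
        System.out.println(a == b); // prints true
        // ...and widening to double is exact, so BigDecimal can show that value:
        System.out.println(new java.math.BigDecimal((double) a).toPlainString());
        // prints 0.100000001490116119384765625
    }
}
```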
 
