Question regarding Java Puzzlers #2


vishist

Hi,
With reference to the following code

public class Change {
    public static void main(String[] args) {
        System.out.println(2.00 - 1.10);
    }
}

It prints 0.8999999999999999. Now, I'm unable to understand the
explanation given by Joshua Bloch. Can you please explain it in a
different manner?

thanks
vishist.
 

Arne Vajhøj

vishist said:
With reference to the following code

public class Change {
    public static void main(String[] args) {
        System.out.println(2.00 - 1.10);
    }
}

It prints 0.8999999999999999. Now, I'm unable to understand the
explanation given by Joshua Bloch. Can you please explain it in a
different manner?

Not all real numbers can be represented exactly in a fixed number
of bits, because there are infinitely many of them.

Floating point is based on 1/2, 1/4, 1/8, ... and not on
1/10, 1/100, 1/1000, which means that numbers that are
exactly representable in a small number of decimal digits
are not necessarily exactly representable in computer
floating point.

So just because some math works in decimal does
not mean that the same math works in computer
floating point.

When you use the float and double types you should
expect this type of noise.

If you cannot live with that, choose another
data type (BigDecimal would be the obvious choice).
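
For example, a minimal sketch of the BigDecimal approach (the class
name here is made up; the key point is constructing from strings, so
the decimal values are taken exactly):

import java.math.BigDecimal;

public class ExactChange {
    public static void main(String[] args) {
        // Build the operands from strings; new BigDecimal(1.10) would
        // inherit the rounding error already baked into the double literal.
        BigDecimal result = new BigDecimal("2.00").subtract(new BigDecimal("1.10"));
        System.out.println(result); // prints 0.90
    }
}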

Arne
 

Stefan Ram

vishist said:
Can you please explain it in a different manner.

You have not stated what detail you want to have explained.

For example, I might answer:

»This is printed, because the print-expression
"System.out.println(2.00 - 1.10)" is part of
an expression statement. The semantics of the
expression statement require the expression in
front of the semicolon ";" to be evaluated, when
the expression statement is executed - and as a
side-effect of this evaluation a representation
of the value is printed.«
 

vishist

Stefan said:
You have not stated what detail you want to have explained.

For example, I might answer:

»This is printed, because the print-expression
"System.out.println(2.00 - 1.10)" is part of
an expression statement. The semantics of the
expression statement require the expression in
front of the semicolon ";" to be evaluated, when
the expression statement is executed - and as a
side-effect of this evaluation a representation
of the value is printed.«
Hi Stefan,
Sorry for the confusing question. My question was that when the
above statement got executed, it printed 0.8999999999999999 instead of 0.9.
Joshua Bloch (the book's author) gave an explanation for this and I'm unable
to understand that explanation. So, I wanted to clear up that question from a
different perspective.

vishist.
 

Stefan Ram

vishist said:
above statement got executed, it printed 0.8999999999999999 instead of 0.9.

The floating point values are represented as binary floating
point values. /Binary/ floating point values cannot exactly
represent all values, even if those values have a short exact
representation in the decimal system.

For example, the value of the decimal numeral »0.1« cannot be
represented as a finite binary fraction. So, when it is
represented, the true representation with infinitely many
binary digits needs to be rounded to finitely many digits.
The error becomes visible when such a value is then
converted back to a decimal representation.

public class Main
{ public static void main( final java.lang.String[] args )
{ java.lang.System.out.println( new java.math.BigDecimal( 0.1 )); }}

0.1000000000000000055511151231257827021181583404541015625

So, then, the next question to ask would be: Why does the
following program not print
»0.1000000000000000055511151231257827021181583404541015625«,
but »0.1« instead?

public class Main
{ public static void main( final java.lang.String[] args )
{ java.lang.System.out.println( 0.1 ); }}

0.1
 

Patricia Shanahan

vishist wrote:
....
Sorry for the confusing question. My question was that when the
above statement got executed, it printed 0.8999999999999999 instead of 0.9.
Joshua Bloch (the book's author) gave an explanation for this and I'm unable
to understand that explanation. So, I wanted to clear up that question from a
different perspective.

I understand exactly why the Java program gives the answer it does, and
know several different ways of explaining it, but I don't have the book
whose explanation you don't understand.

Could you give a brief summary of the explanation you don't like, and
what you don't like about it? That would help with picking an
explanation that may work better for you.

Patricia
 

vishist

Patricia said:
vishist wrote:
...

I understand exactly why the Java program gives the answer it does, and
know several different ways of explaining it, but I don't have the book
whose explanation you don't understand.

Could you give a brief summary of the explanation you don't like, and
what you don't like about it? That would help with picking an
explanation that may work better for you.

Patricia
"the problem is that the number 1.1 can't be represented exactly as a
double, so it is represented by the closest double value. The program
subtracts this value from 2. Unfortunately, the result of this
calculation is not the closest double value to 0.9. The shortest
representation of the resulting
double value is the hideous number that you see printed." is the
explanation that I got from the book and he went on to elaborate that
"not all decimals can be represented exactly using binary
floating-point."

I'm working on the previous responses for now. As for the following code
public class Main
{ public static void main( final java.lang.String[] args )
{ java.lang.System.out.println( 0.1 ); }}
printing 0.1 instead of
»0.1000000000000000055511151231257827021181583404541015625«, from the
API, it invokes Double.toString(double) and Float.toString(float). So,
the returned value is basically a string representation of the value. I
guess my explanation is correct?

vishist.
 

Stefan Ram

vishist said:
"the problem is that the number 1.1 can't be represented exactly as a
double, so it is represented by the closest double value. The program
subtracts this value from 2. Unfortunately, the result of this
calculation is not the closest double value to 0.9. The shortest
representation of the resulting
double value is the hideous number that you see printed." is the
explanation that I got from the book

I deem this better than my own explanations of the subject so
far, because he even manages to get the relevance of the printing
algorithm and everything else right using only a few words.
{ java.lang.System.out.println( 0.1 ); }}
printing 0.1 instead of
»0.1000000000000000055511151231257827021181583404541015625«, from the
API, it invokes Double.toString(double) and Float.toString(float). So,
the returned value is basically a string representation of the value. I
guess my explanation is correct?

The string representation would be
»0.1000000000000000055511151231257827021181583404541015625«,
however. As far as I understand it, Double.toString(double)
takes special measures to use a short decimal representation
(such as »0.1«) instead, if the binary representation is the
binary representation closest (or even second-closest?) to it
in order to produce »nice« representations.
 

Patricia Shanahan

Stefan Ram wrote:
....
The string representation would be
»0.1000000000000000055511151231257827021181583404541015625«,
however. As far as I understand it, Double.toString(double)
takes special measures to use a short decimal representation
(such as »0.1«) instead, if the binary representation is the
binary representation closest (or even second-closest?) to it
in order to produce »nice« representations.
....

The converted value is the shortest decimal expansion that
Double.parseDouble(String) would round to
0.1000000000000000055511151231257827021181583404541015625.

The default string conversion for float and double maintains
reversibility: If x is a double number (not a NaN) then

x == Double.parseDouble(Double.toString(x))

The converted value can be equidistant between x and one of its
neighbors if the least significant bit of x is even, because the
parseDouble rounding would return x. It can never be closer to some
other double than to x.
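
A quick sketch of that reversibility for the original puzzle (the
class name is made up):

public class RoundTrip {
    public static void main(String[] args) {
        double x = 2.00 - 1.10;
        String s = Double.toString(x);                  // shortest decimal string that parses back to x
        System.out.println(s);                          // 0.8999999999999999
        System.out.println(x == Double.parseDouble(s)); // true
    }
}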

Patricia
 

blmblm

Stefan Ram wrote:
...
...

The converted value is the shortest decimal expansion that
Double.parseDouble(String) would round to
0.1000000000000000055511151231257827021181583404541015625.

I had to think a little to come up with a plausible explanation of
why BigDecimal(0.1) should have so many significant digits -- more
than one would normally think of a 64-bit floating-point number as
being able to represent, no?

I'm guessing that the relevant BigDecimal constructor takes the
"double" closest to 0.1 and converts it to decimal with full
precision. Huh. I guess it makes a kind of sense, but I understand
why the documentation for BigDecimal recommends against using this
particular constructor.
 

Patricia Shanahan

I had to think a little to come up with a plausible explanation of
why BigDecimal(0.1) should have so many significant digits -- more
than one would normally think of a 64-bit floating-point number as
being able to represent, no?

Although every binary fraction can be expressed as a decimal fraction,
decimal is less efficient than binary at representing binary fractions.

For example, 1/8, which has only one significant bit in binary, is 0.125
in decimal.
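
To push that example one step further, a small sketch (class name made
up): every halving adds a decimal place, which is why the exact decimal
form of the double nearest 0.1 runs to 55 digits.

import java.math.BigDecimal;

public class BinaryToDecimal {
    public static void main(String[] args) {
        // 1/2^10: a single significant bit, but ten decimal places.
        System.out.println(BigDecimal.ONE.divide(new BigDecimal(1024))); // 0.0009765625
        // The exact decimal expansion of the double nearest 0.1:
        System.out.println(new BigDecimal(0.1));
    }
}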
I'm guessing that the relevant BigDecimal constructor takes the
"double" closest to 0.1 and converts it to decimal with full
precision. Huh. I guess it makes a kind of sense, but I understand
why the documentation for BigDecimal recommends against using this
particular constructor.

It's the compiler, not BigDecimal, that does the rounding. 0.1 is a
double literal, and will appear in the bytecode as the corresponding
double bit pattern, value
0.1000000000000000055511151231257827021181583404541015625
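
A small sketch of both halves of that (class name made up):
BigDecimal(double) shows the stored value exactly, and
Double.doubleToLongBits shows the bit pattern the compiler emits.

import java.math.BigDecimal;

public class LiteralBits {
    public static void main(String[] args) {
        // The compiler rounds the literal 0.1 to the nearest double ...
        System.out.println(new BigDecimal(0.1));
        // ... whose IEEE 754 bit pattern is 3fb999999999999a.
        System.out.println(Long.toHexString(Double.doubleToLongBits(0.1)));
    }
}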

Patricia
 

Greg R. Broderick

Hi Stefan,
Sorry for the confusing question. My question was that when the
above statement got executed, it printed 0.8999999999999999 instead of 0.9.
Joshua Bloch (the book's author) gave an explanation for this and I'm unable
to understand that explanation. So, I wanted to clear up that question from a
different perspective.

What have you done, on your own, to attempt to understand this?

Have you typed Joshua Bloch's code into your editor, compiled and run it? If
not, then why not?

Have you stepped through the code in a debugger? If not, then why haven't
you?

You will find that the amount of help that you receive in response to your
Usenet queries is directly proportional to the amount of your own effort that
you've demonstrated in an attempt to solve the problem.

cf. <http://www.catb.org/~esr/faqs/smart-questions.html>

Cheers!
--
---------------------------------------------------------------------
Greg R. Broderick (e-mail address removed)

A. Top posters.
Q. What is the most annoying thing on Usenet?
---------------------------------------------------------------------
 

vishist

Greg said:
What have you done, on your own, to attempt to understand this?

Have you typed Joshua Bloch's code into your editor, compiled and run it? If
not, then why not?

Have you stepped through the code in a debugger? If not, then why haven't
you?

You will find that the amount of help that you receive in response to your
Usenet queries is directly proportional to the amount of your own effort that
you've demonstrated in an attempt to solve the problem.

cf. <http://www.catb.org/~esr/faqs/smart-questions.html>

Cheers!
I ran the code and got the output. But I didn't run it through the
debugger. I guess that is where I made the mistake. I will go through
the smart-questions and will try to be much more sensible in posting
questions.

thanks
 

blmblm

Although every binary fraction can be expressed as a decimal fraction,
decimal is less efficient than binary at representing binary fractions.

For example, 1/8, which has only one significant bit in binary, is 0.125
in decimal.

Yeah .... Still feels wrong somehow to use that many significant
figures for something that in some sense is an approximation. I'm
not explaining this especially well, but maybe the idea is coming
across a little?
It's the compiler, not BigDecimal, that does the rounding. 0.1 is a
double literal, and will appear in the bytecode as the corresponding
double bit pattern, value
0.1000000000000000055511151231257827021181583404541015625

I'm not sure what you mean here by rounding, and we might be
talking at cross purposes a little. It does make sense, now that
you mention it, that the error (difference between actual value and
0.1) comes in when the compiler turns the 0.1 in the source code
into a double. What I'm thinking, though, is that the BigDecimal
constructor must then be taking that double and turning it into
a decimal representation with more significant figures than seem
reasonable to me -- I mean, I understand where they come from, but
it seems a little wrong-headed to me to use an exact representation
for something that I think is better thought of as an approximation.
 

Faton Berisha

I ran the code and got the output. But I didn't run it through the
debugger. I guess that is where I made the mistake. I will go through
the smart-questions and will try to be much more sensible in posting
questions.

thanks

I think that your question was sensible enough.

If you want to go beyond the explanation you cited from the book,
then you will have to learn about the binary representation of
integers first, and then about the binary floating point
representation of fractional numbers.

If you're not ready to get into details (mathematical ones, as well),
then I think you could just draw some useful conclusions. For example,
how many iterations do you think the following loop does? (A trial
might surprise you.)

double d = 0.0;
while ( d != 1.0 ) {
    d += 1.0 / 13;
    System.out.println(d);
}

A conclusion: it is usually not a good idea to compare doubles for
exact equality in a loop (or conditional) test.
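
A minimal sketch of one common alternative (the 1e-9 tolerance is an
arbitrary choice for this example): drive the loop with an integer
counter, and compare doubles only against a tolerance.

double d = 0.0;
for (int i = 0; i < 13; i++) {
    d += 1.0 / 13;
}
System.out.println(d);                        // close to, but not exactly, 1.0
System.out.println(Math.abs(d - 1.0) < 1e-9); // true: equal within the tolerance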

Faton Berisha
 

Patricia Shanahan

Yeah .... Still feels wrong somehow to use that many significant
figures for something that in some sense is an approximation. I'm
not explaining this especially well, but maybe the idea is coming
across a little?

The point of using BigDecimal for printing the exact values of doubles
is that the conversion from double to BigDecimal is never an
approximation.
I'm not sure what you mean here by rounding, and we might be
talking at cross purposes a little. ...

The results of double operations are defined in terms of two aspects:

1. What would be the exact result of this calculation, if we could store it?

2. Pick a representable value to store based on the answer to question 1.

"Rounding" is the process of reducing the nominal exact result to a round
number that can be stored.

In many cases the exact answer is not needed, and cannot be calculated.
The implementation just has to find out enough about it to get the
rounded answer right.
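
To make the two steps concrete for the original puzzle, a small sketch
(class name made up) that uses BigDecimal(double) to display the stored
values exactly:

import java.math.BigDecimal;

public class StoredValues {
    public static void main(String[] args) {
        // Step 2 applied to the literal 1.10: the nearest representable double.
        System.out.println(new BigDecimal(1.10));
        // The subtraction then operates on that stored value; printing the
        // result exactly shows why it is not 0.9.
        System.out.println(new BigDecimal(2.00 - 1.10));
    }
}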

Patricia
 

Chris Uppal

What I'm thinking, though, is that the BigDecimal
constructor must then be taking that double and turning it into
a decimal representation with more significant figures than seem
reasonable to me -- I mean, I understand where they come from, but
it seems a little wrong-headed to me to use an exact representation
for something that I think is better thought of as an approximation.

This, I think, is the heart of the problem. You use the word "approximation",
but it's not clear what a value is an approximation /to/. A big decimal,
seeing a double with exact value
0.1000000000000000055511151231257827021181583404541015625
has no way of knowing whether that is an "approximation" to
0.1
or to
0.100000000000000005551115
or, indeed, to
0.100000000000000005551115123125782702118158340454101562500000000001
So what is it going to choose?

That goes double when you consider that BigDecimal's /job/ is the precise
representation of numerical values -- it would be inappropriate for it to make
information-losing guesses about what the programmer "really meant". If you
want to convert doubles to BigDecimals using different rules (which is not in
itself unreasonable), then some of the possible rules are pre-packed for you.
For instance, by using Double.toString(double), and the BigDecimal(String)
constructor, you can convert using the
shortest-sequence-of-decimal-digits-which-maps-to-the-same-floating-point-value
rule (as Patricia has mentioned).
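
A small sketch of the two conversion rules side by side (the class name
is made up; BigDecimal.valueOf(double) packages the shortest-digits rule):

import java.math.BigDecimal;

public class TwoRules {
    public static void main(String[] args) {
        // Exact rule: the double's precise binary value, in full.
        System.out.println(new BigDecimal(0.1));
        // Shortest-digits rule, via Double.toString(double): prints 0.1.
        System.out.println(new BigDecimal(Double.toString(0.1)));
        // BigDecimal.valueOf(double) is a convenience for the same rule.
        System.out.println(BigDecimal.valueOf(0.1));
    }
}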

I don't really think that the concept of "approximation" is appropriate here.
There is a sense in which floating point computation can be considered to be
"like" precise computation with a certain amount of random noise added, so that
(like real physical measurements), you only ever have an approximate number,
and -- equally importantly -- you don't know what the true number should be.
That picture is fine as an approximation (;-) but it doesn't really reflect the
semantics of floating point arithmetic. The rules for Java FP are precise and
exact, down to the last bit -- there is no approximation, or error involved at
all[*]. If we programmers want to use floating point values to represent
something other than the specific set of rational numbers defined by the IEEE
format, then it is /our/ responsibility to implement whatever mapping we have
decided upon -- that mapping is not, and cannot be, the responsibility of a
fundamental library. (Not to say that pre-packed facilities for common
mappings are not handy -- and in fact Java provides such things, but as
supplements to the fundamental operations, not as replacements for them.)

Incidentally, that's one way of resolving the "puzzle" that the value
0.1000000000000000055511151231257827021181583404541015625
seems to take more bits than are available to represent it. There is a
specific set of slightly less than 2**64 rational numbers which can be
represented as floating point. Each of those is represented /exactly/, whereas
the others cannot be represented /at all/. For instance the number
0.1000000000000000055511151231257827021181583404541015626
cannot be represented in 64-bit IEEE floating point. It doesn't take ~180 bits
to represent a 55-digit decimal value because most of those 10**55 values have
no representation.

-- chris

[*] Actually some slop is allowed in the last bit for some operations under
some conditions, but that doesn't affect the issue here.
 

blmblm

Obviously it's not, and in the process I think I'm starting to sound
more clueless about floating-point representations and arithmetic
than I think I am.
Every floating point number is an exact representation of something.

Just not necessarily of the number you care about.

Agreed. I think the point I was trying to make is that each
floating-point number can be thought of as representing a range of
real numbers that includes its actual value.
Every floating point number of the same type (not counting denorms) has the
same number of significant figures, 53 bits in the case of double.

But those are binary "significant figures" (bits), not the same as
the decimal (base 10) significant figures I'm talking about.

<whine>Do I have to?</whine> Yeah. Standard reference, but helpful
to be reminded of, and probably someday I should print yet another
copy and finally actually read the whole thing.
 

blmblm

This, I think, is the heart of the problem. You use the word "approximation",
but it's not clear what a value is an approximation /to/. A big decimal,
seeing a double with exact value
0.1000000000000000055511151231257827021181583404541015625
has no way of knowing whether that is an "approximation" to
0.1
or to
0.100000000000000005551115
or, indeed, to
0.100000000000000005551115123125782702118158340454101562500000000001
So what is it going to choose ?

Yes, I thought about that, and you may be right that there's really no
sensible choice other than the one made by the BigDecimal constructor.
It still seems subtly wrong to me, but maybe not more so than any
other choice.
That goes double

! (pun intended?)
when you consider that BigDecimal's /job/ is the precise
representation of numerical values -- it would be inappropriate for it to make
information-losing guesses about what the programmer "really meant". If you
want to convert doubles to BigDecimals using different rules (which is not in
itself unreasonable), then some of the possible rules are pre-packed for you.
For instance, by using Double.toString(double), and the BigDecimal(String)
constructor, you can convert using the
shortest-sequence-of-decimal-digits-which-maps-to-the-same-floating-point-value
rule (as Patricia has mentioned).

I don't really think that the concept of "approximation" is appropriate here.
There is a sense in which floating point computation can be considered to be
"like" precise computation with a certain amount of random noise added, so that
(like real physical measurements), you only ever have an approximate number,
and -- equally importantly -- you don't know what the true number should be.
That picture is fine as an approximation (;-) but it doesn't really reflect the
semantics of floating point arithmetic.

I think this is close to what's underlying my vague sense of unease --
except for the part about "random noise". As you say:
The rules for Java FP are precise and
exact, down to the last bit -- there is no approximation, or error involved at
all[*].

Well, if you add floating-point numbers of different magnitudes,
there is round-off error involved -- possibly a precise and
well-defined error, but error.
If we programmers want to use floating point values to represent
something other than the specific set of rational numbers defined by the IEEE
format,

Rational numbers with some unusual properties under arithmetic
operations, no? e.g., addition is not always associative.
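
For instance, a quick sketch where only the grouping changes:

double a = 1.0e16, b = -1.0e16, c = 1.0;
System.out.println((a + b) + c); // 1.0  (a + b is exactly 0.0)
System.out.println(a + (b + c)); // 0.0  (b + c rounds back to -1.0e16, losing c)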

Given that, I'm inclined to stick with my claim that it's more
sensible to regard floating-point numbers as approximations of
reals than as rationals, even though almost [*] every floating-point
number has associated with it an exact rational value.

[*] Excluding NaN and other "not number" values (+/- Inf?).

But I may still just not be looking at things from the most useful
or reasonable perspective.
then it is /our/ responsibility to implement whatever mapping we have
decided upon -- that mapping is not, and cannot be, the responsibility of a
fundamental library. (Not to say that pre-packed facilities for common
mappings are not handy -- and in fact Java provides such things, but as
supplements to the fundamental operations, not as replacements for them.)

Incidentally, that's one way of resolving the "puzzle" that the value
0.1000000000000000055511151231257827021181583404541015625
seems to take more bits than are available to represent it. There is a
specific set of slightly less than 2**64 rational numbers which can be
represented as floating point. Each of those is represented /exactly/, whereas
the others cannot be represented /at all/. For instance the number
0.1000000000000000055511151231257827021181583404541015626
cannot be represented in 64-bit IEEE floating point. It doesn't take ~180 bits
to represent a 55-digit decimal value because most of those 10**55 values have
no representation.

-- chris

[*] Actually some slop is allowed in the last bit for some operations under
some conditions, but that doesn't affect the issue here.
 
