problem with 'double'

Iris

I have this problem:
I would like to add two doubles: -0.07 + 0.0175, and the result, instead of
being -0.0525, is -0.052500000000000005. I do not know why I have this error,
and I would like to avoid it or truncate the double. How can I do it?
Thank you
 
Christophe Vanfleteren

Thomas Weidenfeller

Iris said:
I have this problem:
I would like to add two doubles: -0.07 + 0.0175, and the result, instead of
being -0.0525, is -0.052500000000000005. I do not know why I have this error

It is not an error. Check a textbook about floating point
representations in computers, or read one of the MANY past threads about
this in the group.
and I would like to avoid it or truncate the double. How can I do it?

DecimalFormat

/Thomas
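For instance, a minimal sketch of the DecimalFormat suggestion (the four-digit pattern and the US locale are my choices, not part of the original answer):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        double sum = -0.07 + 0.0175;           // internally -0.052500000000000005
        // Up to four fraction digits; US symbols so the decimal point is '.'
        DecimalFormat df = new DecimalFormat("0.####",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println(df.format(sum));    // prints -0.0525
    }
}
```

Note this only changes how the value is displayed; the double itself still holds the inexact binary value.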
 
VisionSet

Iris said:
I have this problem:
I would like to add two doubles: -0.07 + 0.0175, and the result, instead of
being -0.0525, is -0.052500000000000005. I do not know why I have this error

Because the internal representation is binary, and binary cannot exactly
represent decimal.
and I would like to avoid it or truncate the double. How can I do it?

((int)(-0.052500000000000005 * 10000)) / 10000.0

But there are other ways depending on exactly what you want it for and why
you want it.
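One way to package the cast-and-divide trick above as a helper (the method name is mine; note the divisor must be a double, otherwise integer division throws the fraction away):

```java
public class Truncate {
    // Truncate (toward zero) to the given number of decimal places.
    static double truncate(double value, int places) {
        double scale = Math.pow(10, places);
        // The cast to long drops the fraction; dividing by a double
        // (not an int) keeps the result a double.
        return (long) (value * scale) / scale;
    }

    public static void main(String[] args) {
        System.out.println(truncate(-0.07 + 0.0175, 4)); // prints -0.0525
    }
}
```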
 
John C. Bollinger

VisionSet said:
Because the internal representation is binary, and binary cannot exactly
represent decimal.

Being pedantic, a more accurate statement would be that the set of
numbers exactly (finitely) representable in binary is a proper subset of
those exactly (finitely) representable in decimal, which itself is a
proper subset of all rational numbers.


John Bollinger
(e-mail address removed)
 
Tom McGlynn

Iris said:
I have this problem:
I would like to add two doubles: -0.07 + 0.0175, and the result, instead of
being -0.0525, is -0.052500000000000005. I do not know why I have this error,
and I would like to avoid it or truncate the double. How can I do it?
Thank you

This subject comes up every week or two on the Java newsgroups, and
about as frequently in every other language newsgroup that I've
followed. While I understand that binary representations of
floating point numbers are more efficient, I wonder if this continual
misunderstanding of floating point suggests that the
default floating point representation for data should be decimal rather
than binary.

This isn't going to happen any time soon, but in a world where only a tiny
fraction of applications really need to worry about the efficiency of
floating point computations, maybe it would be better to have a
default numeric model in the computer that is closer to what we use outside
of it. Of course it still won't be perfect. There will still be the
"How come 10 - 10./3 * 3 != 0?" messages, but my sense is that
that is less confusing to users than the fact that most of the short decimals
they have been using all of their lives are not representable (exactly)
by computers.

PL/I catered to this as I recall. One could ask for float binary
or float decimal numbers -- though I'm not sure whether those weren't just
different ways of describing the precision of the underlying numbers. I doubt that
we want to ever preclude binary floating point numbers, but maybe future
languages should consider adopting decimal floating point numbers as
a first-class data type. (Rather than, e.g., the fairly clumsy classes
that Java provides.)



Regards,
Tom McGlynn
 
Andrew Thompson

Tom McGlynn said:
While I understand that binary representations of
floating point numbers are more efficient,

Not just more efficient, significantly so.
...I wonder if this continual
misunderstanding of floating point suggests that the
default floating point representation for data should be decimal rather
than binary.

No. You are suggesting that every program
that does numerical calculations take a performance
hit rather than teaching each programmer (once)
about the nature of digital number storage.
That is ludicrous.
 
Michael Borgwardt

Tom said:
This subject comes up every week or two on the Java newsgroups and
equivalently frequently in every other language newsgroup that I've
followed. While I understand that binary representations of
floating point numbers are more efficient, I wonder if this continual
misunderstanding of floating point suggests that the
default floating point representation for data should be decimal rather
than binary.

Most widely-used CPUs support BCD arithmetic in their instruction set,
which is pretty much what you want. But somehow it never really made
the transition to languages higher than assembler, and hardly anyone
uses it anymore.

I think there are good reasons for this.
 
Roedy Green

Tom McGlynn said:
...I wonder if this continual
misunderstanding of floating point suggests that the
default floating point representation for data should be decimal rather
than binary.

Tim Cowlishaw, the inventor of NetRexx has been pushing for that for a
long time. He points out that decimal floating point hardware would
not be slower.
 
Roedy Green

Andrew Thompson said:
No. You are suggesting that every program
that does numerical calculations take a performance
hit rather than teaching each programmer (once)
about the nature of digital number storage.
That is ludicrous.

Not at all. There then becomes no NEED to play silly games to dance
around the problem of imperfect representation.
 
Tom McGlynn

Roedy said:
Tim Cowlishaw, the inventor of NetRexx has been pushing for that for a
long time. He points out that decimal floating point hardware would
not be slower.

I'd be surprised if one didn't give up a little bit by going
to decimal, but I imagine that the cost could
be negligibly small (e.g., 5-15%). One is also going to
pay some price in space, since one cannot pack decimal
digits into bits perfectly. Of course, if one is willing
to use something like base-1000 digits (each stored in
10 bits), then one could make that cost very small too, just 3%.
If one uses 4 bits per digit, then the cost is about 20%.

There are some more subtle issues... You lose the extra bit
of precision that base 2 gives you when you normalize. Probably
the most serious problem is that the ratio between successive
numbers that can be represented will vary a lot more in a decimal
system than in a binary one. However anyone who needs to worry
about this probably is going to be using binary floating point
anyway!

Did Tim suggest any specific representations for decimal floating point
numbers?

Regards,
Tom
 
Gary Labowitz

Roedy Green said:
Not at all. There then becomes no NEED to play silly games to dance
around the problem of imperfect representation.

No problem. If there were clear thinking at all times, programmers would see
that a finite set of bits cannot represent an infinite number of values.
It's the same as a thermometer showing the "temperature" of 18 or 19
degrees: what should it show when the temperature is "actually" 18.5
degrees? Or 18.4383749 degrees? Or 18.653847293847 degrees? There are an
infinite number of "actual" temperatures, but that thermometer only shows 18
or 19, presumably the value closest to the "actual" temperature. Do you hear
many people complaining that thermometers are inaccurate and useless
because of this?

If you need the temperature accurate to several decimal places, then you
better get a better thermometer. I doubt the chip makers are going to go to
any trouble to improve on what is already there (about 15 decimal digits)
unless you dump a bunch of Euros on them.

BTW, I teach my students NEVER to compare inexact values (float and double)
for equality. I grade down if they do.
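A sketch of that rule in practice (the helper name and the 1e-9 tolerance are my choices; pick a tolerance that suits the scale of your data):

```java
public class EpsilonCompare {
    static final double EPSILON = 1e-9;

    // Compare doubles with a tolerance instead of ==.
    static boolean nearlyEqual(double a, double b) {
        return Math.abs(a - b) < EPSILON;
    }

    public static void main(String[] args) {
        double sum = -0.07 + 0.0175;
        System.out.println(sum == -0.0525);            // false: the bits differ
        System.out.println(nearlyEqual(sum, -0.0525)); // true: close enough
    }
}
```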
 
Tor Iver Wilhelmsen

Michael Borgwardt said:
Most widely-used CPUs support BCD arithmetic in their instruction set,
which is pretty much what you want. But somehow it never really made
the transition to languages higher than assembler, and hardly anyone
uses it anymore.

I think there are good reasons for this.

You still lose precision there, too, simply because you need to stop
adding decimals at some point.

The classic example I use is base 10 representation of a fraction:

a = 1.0/3.0;

Now, "a" will either need to be imprecise, or will add to the endless
chain 0.3333333... until it runs out of memory. In the former case,

b = 3.0 * a;

will yield the incorrect answer 0.9999999...

So it's not just about IEEE binary representation either, it's about
any finite representation in general.

Perhaps some future format will represent numbers as a/b instead of as
the current a*2^b -- or at least offer that as an addition.
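The same point shows up in Java's BigDecimal (the scale of 10 here is an arbitrary choice of mine): a non-terminating expansion forces you to choose where to stop.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OneThird {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = new BigDecimal(3);
        // 1/3 has no finite decimal expansion: without a scale and
        // rounding mode, divide() throws ArithmeticException.
        BigDecimal a = one.divide(three, 10, RoundingMode.HALF_UP);
        System.out.println(a);                 // 0.3333333333
        System.out.println(a.multiply(three)); // 0.9999999999, not 1
    }
}
```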
 
Java Architect

Roedy Green said:
Not at all. There then becomes no NEED to play silly games to dance
around the problem of imperfect representation.

Instead we'd play even sillier games to dance around the horrid performance.
BTW, this is moot: Java offers BigDecimal, which ensures that -0.0525 is
represented as -0.0525. If you don't care about performance, use that.
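A quick sketch of that with the thread's original numbers (note the String constructor: new BigDecimal(-0.07) would capture the double's inexact binary value instead):

```java
import java.math.BigDecimal;

public class ExactSum {
    public static void main(String[] args) {
        // String constructor preserves the exact decimal value.
        BigDecimal a = new BigDecimal("-0.07");
        BigDecimal b = new BigDecimal("0.0175");
        System.out.println(a.add(b)); // prints -0.0525
    }
}
```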
 
Roedy Green

Java Architect said:
Instead we'd play even sillier games to dance around the horrid performance.
BTW, this is moot: Java offers BigDecimal, which ensures that -0.0525 is
represented as -0.0525. If you don't care about performance, use that.

BigDecimal is a scaled representation. What Tim was calling for was
decimal floating point.
 
Roedy Green

Tor Iver Wilhelmsen said:
The classic example I use is base 10 representation of a fraction:

a = 1.0/3.0;

Now, "a" will either need to be imprecise, or will add to the endless
chain 0.3333333... until it runs out of memory. In the former case,

The advantage of decimal floating point is that 1/3 will be inexact and 0.1
will be exact, just as the man on the street expects.

You don't waste time trying to live up to the expectation that 0.1 is
exact, dealing with special rounding etc.
 
George Neuner

Tom McGlynn said:
I wonder if this continual misunderstanding of floating point suggests that the
default floating point representation for data should be decimal rather
than binary.

This isn't going to happen any time soon, but in a world where only a tiny
fraction of applications really need to worry about the efficiency of
floating point computations, maybe it would be better to have a
a default numeric model in the computer that is closer to what we use outside
of it.

For many purposes, a fraction based implementation akin to Lisp's
rational numbers would be a reasonable choice. Using 64-bit integers
for numerator and denominator gives plenty of range and precision and
covers most typical program uses of reals.
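A toy sketch of such a rational type in Java (class and method names are mine; no overflow or division-by-zero handling, unlike a real Lisp-style implementation):

```java
public class Rational {
    final long num, den;

    Rational(long num, long den) {
        // Reduce to lowest terms and keep the sign in the numerator.
        long g = gcd(Math.abs(num), Math.abs(den));
        if (den < 0) { num = -num; den = -den; }
        this.num = num / g;
        this.den = den / g;
    }

    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    Rational multiply(Rational o) { return new Rational(num * o.num, den * o.den); }

    public String toString() { return num + "/" + den; }

    public static void main(String[] args) {
        Rational third = new Rational(1, 3);             // exactly 1/3
        System.out.println(third.multiply(new Rational(3, 1))); // prints 1/1
    }
}
```

Unlike the floating point version, 1/3 times 3 comes back exactly: no precision is lost because no expansion is ever truncated.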

George
 
