Float comparison

K

Keith Thompson

James Kuyper said:
It's an unnecessary complication if it happens to already be known
that x - y is not negative. It's a rare occurrence, but I have had the
occasion to take advantage of that simplification.

I wasn't emphasizing the use of the absolute value. My point was that
checking the difference, as opposed to, say, the ratio, might or
might not make sense.
 
B

BartC

Keith Thompson said:
It depends on how you interpret it.

Given:

double x = 1.0;

what does that value 1.0 (however it's represented) actually *mean*?
Does it represent the exact mathematical value 1, no more, no less?
Or does it represent the range of real values for which (double)1.0 is
the nearest representable approximation? Or does it represent some
quantity in the range 0.995 to 1.005, based on the precision of the
instrument used to obtain the figure?

You can make the same argument with integer and char values obtained from
the same instrument.

In double x=1.0, the right side is clearly the mathematical value 1, and the
left side very likely represents exactly 1.0.

In double x=0.1, the right side is mathematical 0.1 or 1/10, while the left
side is going to be some exact single value that is very close to 0.1, and
which I can't print out exactly in decimal, and even the binary will depend
on the hardware.

How programmers make use of this is up to them.
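
As a quick illustration (my own snippet, not BartC's, assuming an IEEE 754
double as used by most current hardware), you can ask printf for more digits
than 0.1's nearest double actually needs:

#include <stdio.h>

int main(void) {
    double x = 0.1;
    /* Print more digits than a double carries; the tail shows the
       rounding to the nearest representable binary value. */
    printf("%.25f\n", x);   /* typically 0.1000000000000000055511151 */
    printf("%a\n", x);      /* the exact value in hexadecimal (C99) */
    return 0;
}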
 
J

James Kuyper

CBFalconer said:
Because floating point values are not exact. They always simply
assert they mean a value between V (the value printed) - x_EPSILON
and V + x_EPSILON. Replace x by FLOAT or DOUBLE, depending on type
used. The EPSILON values are found in float.h.

The appropriate expression is closer to V + V*x_EPSILON, not V + x_EPSILON.

Even that expression is not the exact value, but merely an
approximation. The exact value can be as small as V +
V*x_EPSILON/FLT_RADIX, if V is slightly smaller than an exact integral
power of FLT_RADIX.

For those who can use C99, nextafter() provides a more precise way of
determining what the next (and previous) representable values are.
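
A small sketch (mine, not part of James's post) of using nextafter() to see
the neighbouring representable doubles around 1.0, where the gap below 1.0
is a factor of FLT_RADIX smaller than the gap above it:

#include <stdio.h>
#include <math.h>

int main(void) {
    double v = 1.0;
    /* Adjacent representable values; link with -lm if required. */
    printf("next below 1.0: %.20g\n", nextafter(v, 0.0));
    printf("value:          %.20g\n", v);
    printf("next above 1.0: %.20g\n", nextafter(v, 2.0));
    return 0;
}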
 
J

James Kuyper

Keith said:
I wasn't emphasizing the use of the absolute value. My point was that
checking the difference, as opposed to, say, the ratio, might or
might not make sense.

For x/y sufficiently close to 1.0, the expression

fabs(x/y - 1.0) <= tolerance

tests essentially the same thing as

fabs(x - y) <= max(fabs(x),fabs(y))*tolerance

For x/y not close to 1.0, I prefer the second approach, because I don't
have to worry about division by zero.
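
Spelled out as a C99 helper (my sketch; the name nearly_equal and the use of
fmax() are my choices, not James's code), the second form might look like:

#include <math.h>
#include <stdbool.h>

/* Relative-tolerance comparison: symmetric in x and y, and with no
   division, so nothing special happens when either argument is zero. */
static bool nearly_equal(double x, double y, double tolerance)
{
    return fabs(x - y) <= fmax(fabs(x), fabs(y)) * tolerance;
}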
 
P

Phil Carmody

CBFalconer said:
Because floating point values are not exact.

Oh, no - not this one again.

Floating point values, at least those with numerical values,
are always exact. (The meaning of 'exact' in the context of
NaNs is one which probably brings little to the discussion
at hand.)

Floating point operations do not always yield answers which
are numerically exact (i.e. exactly equal to the result of the
computation if it were done with purely abstract real numbers).
Some of these operations are subtle (such as the conversion
of a constant to a type with less precision in a simple
assignment).

Phil
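
An example of the kind of subtle narrowing conversion Phil mentions (my
illustration, not his):

#include <stdio.h>

int main(void) {
    double d = 0.1;  /* 0.1 rounded once, to the nearest double */
    float  f = 0.1;  /* the same double constant, then converted to float */
    /* Both hold exact values, but different ones, so on an IEEE 754
       implementation the test below almost certainly prints 0. */
    printf("%d\n", (double)f == d);
    return 0;
}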
 
P

Phil Carmody

James Kuyper said:
For x/y sufficiently close to 1.0, the expression

fabs(x/y - 1.0) <= tolerance

tests essentially the same thing as

fabs(x - y) <= max(fabs(x),fabs(y))*tolerance

For x/y not close to 1.0, I prefer the second approach, because I
don't have to worry about division by zero.

The former has the problem of not being symmetric, unlike the
latter. (Though depending on rounding mode, even the latter may not
be perfectly symmetric; the variation is only about 1 ulp.) If the
tolerance is sufficiently smaller than the square root of the
relevant epsilon, those differences will not be important.

Of course, if you're doing floating point calculations, you
should always make sure you've looked at the numerical
instability in your code. It's trivial to turn half of your
precision into noise with one wrong operation. Often more
than that.

Phil
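
A concrete instance of the instability Phil describes (my example, not his)
is catastrophic cancellation, where subtracting nearly equal values discards
most of the significant digits:

#include <stdio.h>

int main(void) {
    /* (1 + h) - 1 should equal h, but most of h's significand is lost
       when 1 + h is rounded to a double. */
    double h = 1e-12;
    double recovered = (1.0 + h) - 1.0;
    printf("h         = %.17g\n", h);
    printf("recovered = %.17g\n", recovered); /* only the leading digits survive */
    return 0;
}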
 
C

CBFalconer

Richard said:
CBFalconer said:

More precisely, most numbers cannot be represented exactly by
floating point values. Some numbers, however, can be so
represented - for example, low-magnitude integers are no problem
at all. It is perfectly simple to represent these numbers
exactly, using floating point.

You are wrong, because that floating point value makes no assertion
about any exact value. It only asserts that the real value is
between established limits. Any other assertion comes from other
facts, such as the overall program structure. If you don't realize
this you will make serious usage errors.
 
C

CBFalconer

Phil said:
.... snip ...

Oh, no - not this one again.

Floating point values, at least those with numerical values, are
always exact. (The meaning of 'exact' in the context of NaNs is
one which probably brings little to the discussion at hand.)

No. Try compiling and running this, and explain the results.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double d1 = 2.0, d2 = 6.0, d3 = 3.0, da, db, dc;

    da = d1 / d3;
    db = d1 / d2;
    dc = d3 * db;
    printf("da = %f, db= %f dc= %f\n", da, db, dc);
    printf("da == (2 * db) = %d\n", (da == (2 * db)));
    printf("((dc - 1.0) == 1) = %d\n", (dc - 1.0) == 1);
    return 0;
}
 
K

Keith Thompson

CBFalconer said:
You are wrong, because that floating point value makes no assertion
about any exact value. It only asserts that the real value is
between established limits. Any other assertion comes from other
facts, such as the overall program structure. If you don't realize
this you will make serious usage errors.

A floating point value makes no assertion at all. It can be
interpreted in any of several ways, depending on the context. Your
assertion that a given value represents a range is no better founded
than an assertion that it represents one specific value.

Given:
double x = 1.0;
the value of x is exactly 1.0, no more, no less. What that value
*means*, whether it's just that specific value or any arbitrary value
in some specified range, depends on the application.

If you think otherwise, please cite wording in the standard that
contradicts what I just wrote.
 
R

Richard Tobin

Thank you. How can I check for equality?
You can't. You can only see if two floats are very close to each other.

No, you can test whether they're equal. Generally this won't
correspond perfectly to whatever they represent being equal, but the
floats can be equal.
pseudocode:

if ((x-y) < SMALL_DIFFERENCE) return true;

Don't use this kind of comparison in a sort algorithm (e.g. in the
compar() function for qsort). It won't work reliably, since it does
not define a total order. Use ==, possibly after checking for NaNs.

-- Richard
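
A sketch of such a comparator (my code, not Richard's), relying only on
exact comparisons and assuming the array contains no NaNs:

#include <stdlib.h>

/* qsort comparator for doubles: <, > and == give a total order
   provided no NaNs are present. */
static int compare_doubles(const void *pa, const void *pb)
{
    double a = *(const double *)pa;
    double b = *(const double *)pb;
    if (a < b) return -1;
    if (a > b) return 1;
    return 0;
}

/* Usage: qsort(values, count, sizeof values[0], compare_doubles); */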
 
B

Beej Jorgensen

Keith Thompson said:
double x = 1.0;

As far as I can tell, that's exactly accurate. But as soon as you do
any floating point math on it, accuracy is implementation-defined...?
(5.2.4.2.2p5)

-Beej
 
K

Keith Thompson

Beej Jorgensen said:
As far as I can tell, that's exactly accurate. But as soon as you do
any floating point math on it, accuracy is implementation-defined...?
(5.2.4.2.2p5)

The accuracy *of the math* is implementation-defined (and the
implementation may state that the accuracy is unknown). But I would
argue that the result is, or at least can be interpreted as, some
exact mathematical value.

For example:

double x = 1.0; /* exactly 1.0 on any sane implementation */
x /= 3.0;

The result of the division will be a somewhat inaccurate approximation
of the mathematical value 1.0/3.0. But the value stored in x will be
*some* exact value; on my system, it's exactly
0.333333333333333314829616256247390992939472198486328125.

Or, to be just a bit more precise, x holds a floating-point
representation that might be interpreted either as that exact value,
or as a small range of real values surrounding that exact value
(presumably the range includes the mathematical value 1.0/3.0).
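
One way to see that exact stored value for yourself (my snippet; it assumes
a printf that can produce that many correctly rounded digits, as glibc's
can):

#include <stdio.h>

int main(void) {
    double x = 1.0;
    x /= 3.0;
    printf("%.60f\n", x);  /* the full decimal expansion of the stored value */
    printf("%a\n", x);     /* the exact value in hexadecimal (C99) */
    return 0;
}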
 
B

Bruce Wheeler

Keith Thompson said:
Given:
double x = 1.0;
the value of x is exactly 1.0, no more, no less. What that value
*means*, whether it's just that specific value or any arbitrary value
in some specified range, depends on the application.

If you think otherwise, please cite wording in the standard that
contradicts what I just wrote.

There was a thread a long time ago when this was discussed:
"Are Floating Point Constants 'constant'?", from 1998.

Updated to N1124:

N1124 has the following.
----------------------------------
6.4.4.2 Floating constants
3 The significand part is interpreted as a (decimal or hexadecimal)
rational number; the digit sequence in the exponent part is
interpreted as a decimal integer. For decimal floating constants, the
exponent indicates the power of 10 by which the significand part is
to be scaled. For hexadecimal floating constants, the exponent
indicates the power of 2 by which the significand part is to be
scaled.

====>
For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller
representable value immediately adjacent to the nearest representable
value, chosen in an implementation-defined manner.
<====

For hexadecimal floating constants when FLT_RADIX is a power of 2, the
result is correctly rounded.
----------------------------------
My reading of this is that
x=1.0;
can result in
x==1.0
x==the next smaller representable value to 1.0
x==the next larger representable value to 1.0

I assume that what you wrote was what was intended, but my DS9000
doesn't produce the results you expect.

c89 seems to agree with you:
--------------------------------------
3.1.3.1 Floating constants
The value part is interpreted as a decimal rational number; the
digit sequence in the exponent part is interpreted as a decimal
integer. The exponent indicates the power of 10 by which the value
part is to be scaled.

====>
If the scaled value is in the range of representable values (for its
type) but cannot be represented exactly, the result is either the
nearest higher or nearest lower value, chosen in an
implementation-defined manner.
<====
--------------------------------------
It appears that an important distinction got lost in the wording
change. Maybe somebody should submit a DR.

Regards,
Bruce Wheeler
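
On a real implementation (rather than a DS9000) this is easy to check
empirically; a sketch (mine, not Bruce's), using the fact quoted above that
a hexadecimal constant such as 0x1p+0 must be correctly rounded, hence
exact, when FLT_RADIX is a power of 2:

#include <stdio.h>

int main(void) {
    double x = 1.0;
    /* 0x1p+0 is exactly 1, so this prints 1 only if the decimal
       constant 1.0 also converted to exactly 1. */
    printf("%d\n", x == 0x1p+0);
    return 0;
}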
 
G

Guest

Beej Jorgensen said:
As far as I can tell, that's exactly accurate. But as soon as you do
any floating point math on it, accuracy is implementation-defined...?
(5.2.4.2.2p5)

It's not as bad as that.

y = 2.0 + 2.0;

y has a pan-implementation exact value.
 
G

Guest

Secondly, nobody has claimed that *every* real number can be
represented exactly by floating point. In fact, most of them can't,

*Most of them*! I think you're understating the case by amounts that
require Hebrew letters to express.
 
A

Alessio Ribeca

Mark McIntyre said:
OK, but don't use #defines or typedefs in this way; they only make your
code hard to read and more error-prone.

Why?
For testing purposes I have to compile and run the program with both
double and float types.
 
K

Keith Thompson

Bruce Wheeler said:
There was a thread a long time ago when this was discussed:
"Are Floating Point Constants 'constant'?", from 1998.

Updated to N1124:

N1124 has the following.

N1256 is newer.
<http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf> But that
particular section hasn't changed since the C99 standard, at least if
the lack of change bars in N1256 is any indication.
----------------------------------
6.4.4.2 Floating constants [...]
====>
For decimal floating constants, and also for hexadecimal floating
constants when FLT_RADIX is not a power of 2, the result is either
the nearest representable value, or the larger or smaller
representable value immediately adjacent to the nearest representable
value, chosen in an implementation-defined manner.
<==== [...]
----------------------------------
My reading of this is that
x=1.0;
can result in
x==1.0
x==the next smaller representable value to 1.0
x==the next larger representable value to 1.0

I assume that what you wrote was what was intended, but my DS9000
doesn't produce the results you expect.

Yeah, I think you're right.

My real point (which I expressed without taking this into account) is
that a stored floating-point value can be interpreted as representing
an exact value *or* a range of values. Given
double x = 1.0;
*some* exact value will be stored in x; whether it's actually 1.0 or
not (it will be on any sane implementation) is another issue.

[...]
 
