Floating point computations differ in different runs of same program


Rahul

Hi,

I have a C program which does lots of computations in floats and
doubles. It processes the same file (same data), but in different runs
of the program (on the same machine) the results sometimes vary by a
numeric value of 1 (which shows up after rounding the resulting float).
Is this expected behavior with floats and doubles (with lots of
intermediate conversions in between)?

Since there is only one thread, I would guess the result should always
be the same.

Any reasons/pointers for the above-mentioned behavior?

Thanks in advance
Rahul
 

Lionel B

Hi,

I have a C program

Um... so why are you posting to a C++ newsgroup?
which does lots of computations in floats and doubles.
It processes the same file (same data), but in different runs of the
program (on the same machine) the results sometimes vary by a numeric
value of 1 (which shows up after rounding the resulting float). Is this
expected behavior with floats and doubles (with lots of intermediate
conversions in between)?

Since there is only one thread, I would guess the result should always
be the same.

Any reasons/pointers for the above-mentioned behavior.

Sure there's no (pseudo)-random number generation going on?

You'll probably have to post some (minimal) code that demonstrates the
problem to sort this out... but comp.lang.c might be a better bet.
 

kasthurirangan.balaji

Um... so why are you posting to a C++ newsgroup?

Sure there's no (pseudo)-random number generation going on?

You'll probably have to post some (minimal) code that demonstrates the
problem to sort this out... but comp.lang.c might be a better bet.

I assume that you are trying to compare doubles. Do not use the "=="
operator for comparing doubles. The best way is to use fabs,
subtraction, and the less-than operator with a defined precision:

fabs(d1-d2) < 0.0001

where d1,d2 are two doubles.

Thanks,
Balaji.
 

Daniel Kraft

I assume that you are trying to compare doubles. Do not use the "=="
operator for comparing doubles. The best way is to use fabs,
subtraction, and the less-than operator with a defined precision:

fabs(d1-d2) < 0.0001

where d1,d2 are two doubles.

That's surely good advice, but please explain how this could lead to
the same program returning different results when run twice, as the OP
asked? I can't see a connection.

Cheers,
Daniel
 

Lionel B

^^^^^^^^
Please don't quote sigs
I assume that you are trying to compare doubles.

Why do you assume that? And even if true, why should that lead to
non-deterministic results?

We don't know the answer until the OP posts some code.
Do not use "=="
operator for comparing doubles. Best way is using abs, subtraction and
less than operator with the precisions defined.

Generally good advice, although there are situations where it is "safe"
to compare doubles (when no arithmetic has been performed on the
comparands). At least I think so... I'm sure someone will point to
something in the standard that ensures it is never guaranteed safe ;-)
fabs(d1-d2) < 0.0001

where d1,d2 are two doubles.

Not necessarily the best way to do it either... depends on the magnitudes
of the doubles you are trying to compare (e.g. in your example, d1 =
0.00001, d2 = 0.00002 come out equal). There has been plenty of
discussion on this issue in this ng if I recall.
 

Fred Zwarts

Rahul said:
Hi,

I have a C program which does lots of computations in floats and
doubles. It processes the same file (same data), but in different runs
of the program (on the same machine) the results sometimes vary by a
numeric value of 1 (which shows up after rounding the resulting float).
Is this expected behavior with floats and doubles (with lots of
intermediate conversions in between)?

Since there is only one thread, I would guess the result should always
be the same.

Any reasons/pointers for the above-mentioned behavior.

Possible causes are:
+ Use of uninitialized variables.
+ Use of a random generator.
+ Use of the changing environment, e.g. the current time.
+ ...
 

Richard Herring

In message
Do not use "=="
operator for comparing doubles.

Do not offer such sweeping generalisations.

If it weren't appropriate to use operator== on doubles, the language
wouldn't supply it.
Best way is using abs, subtraction and
less than operator with the precisions defined.

The "best" way depends on what you are trying to do. If testing for
equality is what you want to do, then operator== is the best way.
fabs(d1-d2) < 0.0001

where d1,d2 are two doubles.

This may be appropriate in some circumstances, but it is not a test for
equality.
 

Tim Slattery

Richard Herring said:
In message


Do not offer such sweeping generalisations.

If it weren't appropriate to use operator== on doubles, the language
wouldn't supply it.

Since floating point numbers - single or double precision - are
approximations, the == operator is not appropriate for them.
The "best" way depends on what you are trying to do. If testing for
equality is what you want to do, then operator== is the best way.


This may be appropriate in some circumstances, but it is not a test for
equality.

Maybe, but it's the best you can do with floating point numbers.
 

Victor Bazarov

Tim said:
Since floating point numbers - single or double precision - are
approximations, the == operator is not appropriate for them.

I do not understand what relevance there is between the fact that
FP numbers are approximations and the appropriateness of equality
operator for them. Could you please elaborate? Also, consider
that if I write

double a = 0.1; // it's most likely an approximation
double b = a;
assert(a == b);

the assertion is true, approximation or not.
Maybe, but it's the best you can do with floating point numbers.

Just as well as you can do any other test. Like < or >. Just don't
call it "equality". That's all Richard is saying. And why "maybe"?

V
 

Pete Becker

Since floating point numbers - single or double precision - are
approximations, the == operator is not appropriate for them.

Floating-point numbers are only approximations in a very limited sense:
they don't exactly represent physical quantities, which can,
presumably, be determined to far greater precision. Once you have your
values, the operations on those floating-point numbers are completely
determinate. No approximations involved.

The problem is that programmers assume that floating-point numbers will
work just like real numbers, and when the results differ, they start
talking about approximations and randomness rather than making the
considerable effort needed to understand how floating-point works.

So, yes, if you don't know what you're doing (like most people who try
it), floating-point math gives you approximate answers. Hacking in
half-understood approximate comparisons doesn't solve this knowledge
problem.
Maybe, but it's the best you can do with floating point numbers.

No, it may be the best that YOU can do, but it's by no means a
universally better approach than testing for equality. Granted,
numerical methods are hard enough that they serve as PhD thesis topics,
but that doesn't mean that a simplistic replacement for equality
testing eliminates problems. It doesn't. It just hides them.
 

Kai-Uwe Bux

Victor said:
I do not understand what relevance there is between the fact that
FP numbers are approximations and the appropriateness of equality
operator for them. Could you please elaborate? Also, consider
that if I write

double a = 0.1; // it's most likely an approximation
double b = a;
assert(a == b);

the assertion is true, approximation or not.

Really? The standard has this weird provision [5/10]:

The values of the floating operands and the results of floating
expressions may be represented in greater precision and range than that
required by the type; the types are not changed thereby.

I think that the standard would actually allow a compliant implementation to
treat your snippet as follows:

represent 0.1 in a register with maximum internal precision.
write the contents to a memory region representing a (truncating).
write the contents to a memory region representing b (truncating).
read from region b into another register.
compare the two registers yielding false.

It would skip reading object a into a register from memory as an
optimization since it has the value already in the first register.

However, I have to admit that I never was able to fully understand the
impact or non-impact of [5/10]. So I might be missing something.
(Actually, I hope that I am missing something. :-))


Best

Kai-Uwe Bux
 

Pete Becker

double a = 0.1; // it's most likely an approximation
double b = a;
assert(a == b);

the assertion is true, approximation or not.

Really? The standard has this weird provision [5/10]:

The values of the floating operands and the results of floating
expressions may be represented in greater precision and range than that
required by the type; the types are not changed thereby.

I think that the standard would actually allow a compliant implementation to
treat your snippet as follows:

represent 0.1 in a register with maximum internal precision.
write the contents to a memory region representing a (truncating).
write the contents to a memory region representing b (truncating).
read from region b into another register.
compare the two registers yielding false.

I haven't been through the analysis in C++, but my recollection is that
in C the compiler must do narrowing conversions when storing values. So
it is not allowed to keep either b or a at higher precision. The kind
of thing where you get into trouble is with return values, which, for
example on x86, are typically carried at 80-bit precision in an fpu
register. For example:

double f(double a, double b)
{
    return a/b;
}

int main()
{
    double res = f(1.0, 3.0);
    if (res == f(1.0, 3.0))
        printf("they're equal\n");
    else
        printf("they're not equal\n");
    double res1 = f(1.0, 3.0);
    if (res == f(1.0, 3.0))
        printf("they're equal\n");
    else
        printf("they're not equal\n");
    return 0;
}

The first if statement can take either branch. The second if statement
can only take the first.

On the other hand, most compilers don't do that by default. You have to
compile with an option that says to follow the floating-point rules.
Otherwise they generate faster code, without the conversions.
 

Pete Becker

double a = 0.1; // it's most likely an approximation
double b = a;
assert(a == b);

the assertion is true, approximation or not.

Really? The standard has this weird provision [5/10]:

The values of the floating operands and the results of floating
expressions may be represented in greater precision and range than that
required by the type; the types are not changed thereby.

I think that the standard would actually allow a compliant implementation to
treat your snippet as follows:

represent 0.1 in a register with maximum internal precision.
write the contents to a memory region representing a (truncating).
write the contents to a memory region representing b (truncating).
read from region b into another register.
compare the two registers yielding false.

I haven't been through the analysis in C++, but my recollection is that
in C the compiler must do narrowing conversions when storing values. So
it is not allowed to keep either b or a at higher precision. The kind
of thing where you get into trouble is with return values, which, for
example on x86, are typically carried at 80-bit precision in an fpu
register. For example:

double f(double a, double b)
{
    return a/b;
}

int main()
{
    double res = f(1.0, 3.0);
    if (res == f(1.0, 3.0))
        printf("they're equal\n");
    else
        printf("they're not equal\n");
    double res1 = f(1.0, 3.0);
    if (res == f(1.0, 3.0))

Whoops, sorry. That should be:

if (res == res1)
 

kasthurirangan.balaji

On 2008-02-14 19:22:43 -0500, Kai-Uwe Bux <[email protected]> said:
double a = 0.1;  // it's most likely an approximation
double b = a;
assert(a == b);
the assertion is true, approximation or not.
Really? The standard has this weird provision [5/10]:
  The values of the floating operands and the results of floating
  expressions may be represented in greater precision and range than that
  required by the type; the types are not changed thereby.
I think that the standard would actually allow a compliant implementation to
treat your snippet as follows:
  represent 0.1 in a register with maximum internal precision.
  write the contents to a memory region representing a (truncating).
  write the contents to a memory region representing b (truncating).
  read from region b into another register.
  compare the two registers yielding false.
I haven't been through the analysis in C++, but my recollection is that
in C the compiler must do narrowing conversions when storing values. So
it is not allowed to keep either b or a at higher precision. The kind
of thing where you get into trouble is with return values, which, for
example on x86, are typically carried at 80-bit precision in an fpu
register. For example:
double f(double a, double b)
{
return a/b;
}
int main()
{
double res = f(1.0, 3.0);
if (res == f(1.0, 3.0))
   printf("they're equal\n");
else
   printf("they're not equal\n");
double res1 = f(1.0, 3.0);
if (res == f(1.0, 3.0))

Whoops, sorry. That should be:

if (res == res1)
   printf("they're equal\n");
else
   printf("they're not equal\n");
return 0;
}
The first if statement can take either branch. The second if statement
can only take the first.
On the other hand, most compilers don't do that by default. You have to
compile with an option that says to follow the floating-point rules.
Otherwise they generate faster code, without the conversions.

--
  Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference
(www.petebecker.com/tr1book)

I apologize if the wrong message came across from my words; that was
definitely not my intent. I too faced such a problem with floating-point
numbers, and this is what I did after lots of googling. I said
"precision defined", but epsilon is what needs to be used.

My intent was to provide a solution, and it was based on my assumption.

Thanks,
Balaji.
 

Richard Herring

Tim Slattery said:
Since floating point numbers - single or double precision - are
approximations, the == operator is not appropriate for them.


Maybe, but it's the best you can do with floating point numbers.

I was going to compose a detailed refutation of this post, but I see
Pete Becker has already said everything I would have done, and probably
better.

First, understand the algorithm. Second, understand how floating-point
actually works. _Then_ you can start giving advice about what condition
you should be testing for.
 

James Kanze

Since floating point numbers - single or double precision - are
approximations, the == operator is not appropriate for them.

That's simply false. Floating point numbers are not, in
themselves, approximations. Every floating point number
represents an exact value.

Of course, that value might be an approximation of what is
wanted. But the same thing is true for int: any value
representing the population of France, be it int or double, will
be an approximation (and in this case, double is probably just
as precise as int---maybe more so, if ints only have 16 bits).

The problem is that double is often used as an abstraction for
real numbers. But it's not a perfect abstraction, and you have
to understand the differences, and where the abstraction fails,
if you want to use double in this way.
Maybe, but it's the best you can do with floating point numbers.

If it's not appropriate, then it's the wrong way. There are
cases where the right thing to do is compare using ==, and there
are cases where it's not. And you need to understand how
floating point works, and what you are trying to do, to be able
to even begin determining whether == is appropriate in a specific
case, and if it isn't, what you should use to replace it.
(FWIW: I don't think I've ever seen a case where your example
would be appropriate.)
 

dohboy

Possible causes are:
+ Use of uninitialized variables.
+ Use of a random generator.
+ Use of the changing environment, e.g. the current time.
+ ...

Maybe a vortex in the space-time continuum. It happens to my programs all the time. ;-)
Possibly explains what happens to my socks in the dryer, too.
 
