How to test whether a 'float' or 'double' is numerically zero?


Peng Yu

Hi,

Suppose T is 'float' or 'double'.

T x;

x < 10 * std::numeric_limits<T>::epsilon();

I can use the above comparison to test if 'x' is numerically zero. But
I'm wondering what should be a good multiplicative constant before
epsilon?

Thanks,
Peng
 

Anders Dalvander

I can use the above comparison to test if 'x' is numerically zero.

No, as x can also be negative.
But I'm wondering what should be a good multiplicative constant before
epsilon?

Epsilon is the smallest value such that 1.0 + epsilon != 1.0. You
need to scale it with the numbers you are comparing. Comparing against
zero is always hard. You are probably best off using abs(x) <
your_own_epsilon. Set your_own_epsilon to whatever you want, such as
0.00000001 perhaps.
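
In code, that might look like the following sketch (the default
tolerance is an arbitrary placeholder; tune it to your problem):

#include <cmath>

// Absolute-tolerance zero test. The default tolerance is arbitrary;
// pick one that matches the scale of the numbers in your computation.
template <typename T>
bool nearly_zero(T x, T tolerance = T(1e-8))
{
    return std::abs(x) < tolerance;
}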

Regards,
Anders Dalvander
 

Peng Yu

No, as x can also be negative.

Right, I meant std::abs(x).
Epsilon is the smallest value such that 1.0 + epsilon != 1.0. You
need to scale it with the numbers you are comparing. Comparing against
zero is always hard. You are probably best off using abs(x) <
your_own_epsilon. Set your_own_epsilon to whatever you want, such as
0.00000001 perhaps.

So there is no generally accepted value for such an epsilon?

Thanks,
Peng
 

Juha Nieminen

Peng said:
x < 10 * std::numeric_limits<T>::epsilon();

I can use the above comparison to test if 'x' is numerically zero.

No you can't. A value of x distinct from zero might also test as
"zero" with that.
 

Ron AF Greve

Hi,

Consider a machine where the smallest number that can be represented is
0.0001.

Let's assume I have the following calculation (and assume the 0.00005
values are the result of some computation):

0.0001 - 0.00005 - 0.00005

Obviously this should result in zero. However, the last two terms would
be rounded to zero, since the machine can only keep four digits behind
the dot. So what should be zero is actually 0.0001, and a correct
threshold would be 0.0002 (a multiplier of 2 on the machine's epsilon).
Reasoning: 0.0001 < 0.0002, therefore it is zero?

Now consider the following: the same formula, only we also divide by
0.0001 afterwards.

( 0.0001 - 0.00005 - 0.00005 ) / 0.0001 = 1

However, the 1 actually should be a zero, so our first conclusion was
incorrect; a correct multiplier for epsilon would have to be 10001.

Of course one could go on: epsilon's multiplier could be anything.

Conclusion: there is no single correct multiplier for epsilon. There can
be one per formula, but that is probably not very practical.
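
To see the same effect with real IEEE doubles instead of the
hypothetical four-digit machine, here is a small sketch (the constants
are simply ones known to exhibit cancellation in binary floating point):

#include <cmath>
#include <cstdio>
#include <limits>

int main()
{
    double eps = std::numeric_limits<double>::epsilon();
    double r = 0.3 - 0.2 - 0.1;   // mathematically 0, actually about -2.8e-17
    double scaled = r / 1e-16;    // still mathematically 0, actually about -0.28

    // The same "should be zero" quantity passes or fails a fixed
    // 10*epsilon test depending on how it is scaled afterwards.
    std::printf("r      = %g, 'zero'? %d\n", r,      std::fabs(r)      < 10 * eps);
    std::printf("scaled = %g, 'zero'? %d\n", scaled, std::fabs(scaled) < 10 * eps);
}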


Regards, Ron AF Greve

http://www.InformationSuperHighway.eu
 

Peng Yu

Hi,

Consider a machine where the smallest number that can be represented is
0.0001.

Let's assume I have the following calculation (and assume the 0.00005
values are the result of some computation):

0.0001 - 0.00005 - 0.00005

Obviously this should result in zero. However, the last two terms would
be rounded to zero, since the machine can only keep four digits behind
the dot. So what should be zero is actually 0.0001, and a correct
threshold would be 0.0002 (a multiplier of 2 on the machine's epsilon).
Reasoning: 0.0001 < 0.0002, therefore it is zero?

Now consider the following: the same formula, only we also divide by
0.0001 afterwards.

( 0.0001 - 0.00005 - 0.00005 ) / 0.0001 = 1

However, the 1 actually should be a zero, so our first conclusion was
incorrect; a correct multiplier for epsilon would have to be 10001.

Of course one could go on: epsilon's multiplier could be anything.

Conclusion: there is no single correct multiplier for epsilon. There can
be one per formula, but that is probably not very practical.

I see. Then the problem is how to derive it for a particular formula.

Probably I need to write down the formula, take the derivative with
respect to each of its arguments, and check how much error there could
be in each argument. Then I would end up with a bound on the rounding
error (the equivalent of epsilon). Right?
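
(In symbols: for f(x1, ..., xn) with an error dx_i in each argument,
the first-order bound would be

|df| <= |df/dx1|*|dx1| + ... + |df/dxn|*|dxn|

and the multiplier for epsilon would then follow from this bound for
the particular formula.)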

Thanks,
Peng
 

Rune Allnor

I see. Then the problem is how to derive it for a particular formula.

Probably I need to write down the formula, take the derivative with
respect to each of its arguments, and check how much error there could
be in each argument. Then I would end up with a bound on the rounding
error (the equivalent of epsilon). Right?

Numerical analysis is an art in itself. There are departments
in universities which deal almost exclusively with the analysis
of numerics, which essentially boils down to error analysis.

In my field of work certain analytical solutions were formulated
in the early '50s, but a stable numerical solution wasn't found
until the early/mid '90s.

You might want to check with the math department at your local
university on how to approach whatever problem you work with.

Rune
 

Peng Yu

In my field of work certain analytical solutions were formulated
in the early '50s, but a stable numerical solution wasn't found
until the early/mid '90s.

Would you please give some example references on this?

Thanks,
Peng
 

Erik Wikström

Right, I meant std::abs(x).


So there is no generally accepted value for such an epsilon?

No, different applications require different precision: some would
consider a variable equal to zero if it was within 0.0001 of zero, while
others might require 0.0000001. You have to analyse your problem to find
a value that suits you.
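
In code, "analyse your problem" usually amounts to scaling the tolerance
by the magnitude of the inputs. A sketch (the factor of 100 is an
arbitrary placeholder):

#include <cmath>
#include <limits>

// Treat x as zero relative to the scale of the computation that
// produced it. The caller supplies the scale (e.g. the largest
// operand magnitude); the relative tolerance is an arbitrary default.
template <typename T>
bool nearly_zero(T x, T scale,
                 T rel_tol = T(100) * std::numeric_limits<T>::epsilon())
{
    return std::abs(x) <= rel_tol * scale;
}

// Example: is a - b numerically zero, given the size of a and b?
//   nearly_zero(a - b, std::max(std::abs(a), std::abs(b)))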
 

Rune Allnor

Would you please give some example references on this?

At the risk of being inaccurate, as I haven't reviewed the material
in 5 years and am writing off the top of my head:

Around 1953-55, Thomson and Haskell proposed a method to
compute the propagation of seismic waves through layered
media. The method used terms of the form

x = (exp(y)+1)/(exp(z)+1)

where y and z were of large magnitude and 'almost equal'.
In a perfect formulation x would be very close to 1.

Since y and z are large and one uses an imperfect numerical
representation, the computation errors in the exponents
become important. So basically the terms that should
cancel didn't, and one was left with a numerically unstable
solution.

Several attempts were made to handle this (Ng and Reid
in the '70s, Henrik Schmidt in the '80s), with various
degrees of success. And complexity. As far as I am concerned,
the problem wasn't solved until around 1993, when Sven Ivansson
came up with a numerically stable scheme.

What all these attempts had in common was that they took
the original analytical formulation and organized the terms
in various ways to avoid the complicated, large-magnitude
internal terms.

I am sure there are similar examples in other areas.

As an example of error analysis, you could check out the
analysis of Horner's rule for evaluating polynomials, which
is treated in most intro books on numerical analysis.
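
For reference, Horner's rule itself is only a few lines; it is the
rounding-error analysis of this loop that the textbooks walk through:

#include <vector>

// Evaluate c[0] + c[1]*x + ... + c[n]*x^n with n multiplications
// and n additions, processing coefficients from highest degree down.
double horner(const std::vector<double>& c, double x)
{
    double result = 0.0;
    for (auto it = c.rbegin(); it != c.rend(); ++it)
        result = result * x + *it;
    return result;
}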

Rune
 

Rune Allnor

Around 1953-55, Thomson and Haskell proposed a method to
compute the propagation of seismic waves through layered
media. The method used terms of the form

 x = (exp(y)+1)/(exp(z)+1)

where y and z were of large magnitude and 'almost equal'.
In a perfect formulation x would be very close to 1.

Typo correction: the problematic terms were of the form

x = exp(y)-exp(z)

where y and z are large and x is small.
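
One standard way to sidestep this particular cancellation (a sketch,
not necessarily what Ivansson did) is to factor out one exponential and
use expm1, which is accurate for small arguments:

#include <cmath>

// exp(y) - exp(z) == exp(z) * (exp(y - z) - 1), and expm1 computes
// exp(y - z) - 1 without cancellation when y and z are nearly equal.
// Note this only fixes the cancellation; exp(z) itself can still
// overflow when z is very large.
double exp_diff(double y, double z)
{
    return std::exp(z) * std::expm1(y - z);
}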

Rune
 

Ron AF Greve

Hi,

You could indeed do an analysis that way. Actually, that kind of thing
is also done when measuring something and one has to know the error in
the measurement. Taking into account the accuracy of the measuring
equipment and the kinds of operations you perform (multiplication,
addition, etc.), you can then tell what the error range is (like: I
measured 5V +/- 0.5V).

It is a lot of work though.

Regards, Ron AF Greve

http://www.InformationSuperHighway.eu
 

James Kanze

Suppose T is 'float' or 'double'.
x < 10 * std::numeric_limits<T>::epsilon();
I can use the above comparison to test if 'x' is numerically
zero.

If you want to test whether x is numerically zero, "x == 0.0" is
the only correct way.
But I'm wondering what should be a good multiplicative
constant before epsilon?

There isn't one, since the idiom is broken (in general---there
are specific cases where it might be appropriate).
 

James Kanze

Really? What if x is -10000? What if it is equal to
std::numeric_limits<T>::epsilon()?
To answer your question literally: comparing to zero is easy, just use
if (x == 0). However, this usually does not give you much if x is the
result of some computation; with this expression you can pretty much
only check whether x has been assigned a literal zero beforehand.

It depends on the computation. There are a lot of contexts
where you get 0.0 exactly, and that's what you want to test for.
There are fewer contexts where this is true for other values (0.0
is a bit special), but they also exist.
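
For instance, under IEEE 754 arithmetic all of the assertions below
hold exactly, so "== 0.0" is a meaningful test for results of this kind:

#include <cassert>

int main()
{
    double a = 2.5;
    assert(a - a == 0.0);      // x - x is exactly 0.0 for any finite x
    assert(0.0 * a == 0.0);    // multiplication by an exact zero
    double b = 0.5 + 0.25;     // operands and result exactly representable
    assert(b - 0.75 == 0.0);
}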
 

James Kanze

I see. Then the problem is how to derive it for a particular formula.

No. The problem is how to implement the formula so that it
gives the correct results.
Probably I need to write down the formula, take the derivative with
respect to each of its arguments, and check how much error there could
be in each argument. Then I would end up with a bound on the rounding
error (the equivalent of epsilon). Right?

Not necessarily. You need to better understand how machine
floating point works, and the mathematics which underlies it.

Think of it for a minute. If I had a system in which sin(0.0)
returned anything but 0.0 (exactly), I'd consider it defective.
For other values, this is somewhat less obvious, but 0.0 (and in
some contexts, 1.0 and -1.0) are a bit special.
 
