Floating-point comparison problem


red floyd

arnuld said:
searching the archives at Google Groups gave me a template created
by Jerry Coffin. I just added it to my program, but my program is not
compiling:

Did you remember to #include <limits>?

That's where std::numeric_limits lives.
#include <iostream>
#include <cassert>

template<class T> bool float_equality(T const& a, T const& b, T const& factor)
{
assert(!std::numeric_limits<T>::is_integer);

T x = std::numeric_limits<T>::epsilon * factor * a;
T y = std::numeric_limits<T>::epsilon * factor * b;

T diff = std::abs(a-b);

return (diff < x) || (diff < y);
}


int main()
{
double d1 = 0.1;
const double d2 = 0.9;
d1 *= 9;
const double my_epsilon = 1E-10;
if( float_equality( d1, d2, my_epsilon ) )
{
std::cout << " == \n";
}
else
{
std::cout << " != \n";
}

return 0;
}

=============== OUTPUT ========================
/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra float-compare.cpp
float-compare.cpp: In function ‘bool float_equality(const T&, const T&, const T&) [with T = double]’:
float-compare.cpp:26:   instantiated from here
float-compare.cpp:11: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:12: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:14: error: call of overloaded ‘abs(double)’ is ambiguous
/usr/include/stdlib.h:691: note: candidates are: int abs(int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:143: note: long int std::abs(long int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:174: note: long long int __gnu_cxx::abs(long long int)

/home/arnuld/programs $
 

Jerry Coffin

searching the archives at Google Groups gave me a template created
by Jerry Coffin. I just added it to my program, but my program is not
compiling:

The comparison code should look more like this:

#include <limits>
#include <math.h>
#include <assert.h>

template<class T>
bool float_equality(T const& a, T const& b, T const& factor)
{
assert(!std::numeric_limits<T>::is_integer);

T x = std::numeric_limits<T>::epsilon() * factor * a;
T y = std::numeric_limits<T>::epsilon() * factor * b;

T diff = abs(a-b);

return (diff < x) || (diff < y);
}

I.e. std::numeric_limits<T>::epsilon is a function, and we need to use
its result. I've since come up with a method I like somewhat better
though:

#include <math.h>

bool isEqual(double a, double b, double prec = 1e-6) {
int mag1, mag2;

frexp(a, &mag1);
frexp(b, &mag2);

if ( mag1 != mag2)
return false;

double epsilon = ldexp(prec, mag1);

double difference = abs(a-b);

return difference <= epsilon;
}

The use of frexp and ldexp makes this a bit dense, but the general idea
is fairly simple: frexp separates a floating point number into two
parts, the significand and the exponent. ldexp reverses that, putting an
exponent together with a significand to produce a complete floating
point number.

This does work a bit differently though. For the previous code, the
third number you passed was supposed to represent the amount of
precision you thought might have been lost due to rounding and such. For
example, if you thought you might have lost a couple of digits due to
rounding, you'd pass 100. It would then figure out how many digits that
meant should still be good, and go from there. This has a basic problem
of being somewhat different from what you'd usually expect to pass.

The newer code works a bit more like you'd expect: you pass the
allowable relative error directly. For example, to ensure agreement to
10 significant digits, you can pass 1e-10 as the allowable error.
 

Reetesh Mukul

On Thu, 14 Feb 2008 22:07:38 -0800, red floyd wrote:
Did you remember to #include <limits>?
That's where std::numeric_limits lives.

That does not make any difference. Even after including <limits>, the
error remains the same:

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra float-compare.cpp
float-compare.cpp: In function 'bool float_equality(const T&, const T&, const T&) [with T = double]':
float-compare.cpp:28:   instantiated from here
float-compare.cpp:13: error: invalid operands of types 'double ()()throw ()' and 'const double' to binary 'operator*'
float-compare.cpp:14: error: invalid operands of types 'double ()()throw ()' and 'const double' to binary 'operator*'
float-compare.cpp:16: error: call of overloaded 'abs(double)' is ambiguous
/usr/include/stdlib.h:691: note: candidates are: int abs(int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:143: note: long int std::abs(long int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:174: note: long long int __gnu_cxx::abs(long long int)
/home/arnuld/programs $

-- http://lispmachine.wordpress.com/

You should use,

std::numeric_limits<T>::epsilon()

in place of
std::numeric_limits<T>::epsilon.

It will work now.

Regards,
Reetesh Mukul
 

arnuld

Next month I will start to work on a C++-based software named CAT++, which
is going to provide FORTRAN-like arrays in C++ and will be used within
the scientific community, and hence will depend heavily on
numerical computations. I was reading sections 29.16 and 29.17 of the FAQ:

http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.16

as a test, I just tried this program on Linux with GCC 4.2.2, and it gave
me two different results:


#include <iostream>
#include <string>
#include <vector>

int main()
{
const double x = 0.05;
const double y = 0.07;
const double z = x + y;
const double key_num = x + y;

if( key_num == z )
{
std::cout << "==\n";
}
else
{
std::cout << "!=\n";
}

return 0;
}

========== OUTPUT ============

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
==
/home/arnuld/programs $


It always outputs ==. Now I changed key_num to this:

const double key_num = 0.12;

and this program now always outputs !=

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
!=
/home/arnuld/programs $


As the FAQ explains, floating point is inaccurate, but my project is heavily
oriented towards number-crunching, so I was thinking of stopping development
of that software in C++ and simply using FORTRAN for this specific
application in a special domain.
 

Ralph D. Ungermann

arnuld said:
Here is a strange thing that happens on my system (Arch Linux, AMD64,
GCC 4.2.2):

int main()
{
double a = 1.0 / 10.0;
double b = a * 10.0;

if( float_equality( a, b ))
{
std::cout << "EPSILON at job\n";
}
else
{
std::cout << "OOPS!" << std::endl;
}

return 0;
}

Opposite to what the FAQ says, this code always outputs: OOPS!

Is it really that strange that 0.1 != 1.0?
int main()
{
double a = 1.0 / 10.0;
double b = a * 10.0;
const double c = 1.0;

if( b != c )
{
std::cout << "Surprise ! \n" << std::endl;
}
else
{
std::cout << "== \n";
}

return 0;
}

it always outputs ==


What is wrong here?

Nothing: 1.0 == 1.0 sounds reasonable. I guess you need a break :)

-- ralph
 

Lionel B

On Fri, 15 Feb 2008 00:41:13 -0700, Jerry Coffin wrote:

[...]
The comparison code should look more like this:

#include <limits>
#include <math.h>
#include <assert.h>

template<class T>
bool float_equality(T const& a, T const& b, T const& factor) {
assert(!std::numeric_limits<T>::is_integer);

T x = std::numeric_limits<T>::epsilon() * factor * a;
T y = std::numeric_limits<T>::epsilon() * factor * b;

T diff = abs(a-b);

return (diff < x) || (diff < y);
}

Am I missing something, or shouldn't that be:

return (diff < abs(x)) || (diff < abs(y));

otherwise a pair of negative numbers will never return equal ...
I.e. std::numeric_limits<T>::epsilon is a function, and we need to use
its result. I've since come up with a method I like somewhat better
though:

#include <math.h>

bool isEqual(double a, double b, double prec = 1e-6) {
int mag1, mag2;

frexp(a, &mag1);
frexp(b, &mag2);

if ( mag1 != mag2)
return false;

double epsilon = ldexp(prec, mag1);

double difference = abs(a-b);

return difference <= epsilon;
}

That's very neat, cheers.
 

James Kanze

arnuld said:
Next month I will start to work on a C++-based software named CAT++, which
is going to provide FORTRAN-like arrays in C++ and will be used within
the scientific community, and hence will depend heavily on
numerical computations. I was reading sections 29.16 and 29.17 of the FAQ:

as a test, I just tried this program on Linux with GCC 4.2.2, and
it gave me two different results:
#include <iostream>
#include <string>
#include <vector>
int main()
{
const double x = 0.05;
const double y = 0.07;
const double z = x + y;
const double key_num = x + y;
if( key_num == z )
{
std::cout << "==\n";
}
else
{
std::cout << "!=\n";
}
return 0;
}
========== OUTPUT ============
/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
==
/home/arnuld/programs $
It always outputs ==. Now I changed key_num to this:
const double key_num = 0.12;
and this program now always outputs !=

Which seems normal to me. I certainly wouldn't expect anything
else.
/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra test2.cpp
/home/arnuld/programs $ ./a.out
!=
/home/arnuld/programs $
As the FAQ explains, floating point is inaccurate, but my project is
heavily oriented towards number-crunching, so I was thinking of
stopping development of that software in C++ and simply using FORTRAN
for this specific application in a special domain.

It's not so much a question of floating point being inaccurate
per se, but rather that it follows somewhat different rules than
the real numbers. You can use them to simulate real numbers,
but you have to know what you are doing. (The term "inaccurate"
here generally should be interpreted to mean that they are an
inaccurate simulation of the real numbers.)

Changing to Fortran won't change anything, of course, because
the problem isn't with the language; it's with the way floating
point is implemented in hardware.

Until you've read and understood "What Every Computer Scientist
Should Know About Floating-Point Arithmetic"
(http://docs.sun.com/source/806-3568/ncg_goldberg.html), you
shouldn't even try to do anything with float or double. (But
note that that article is only an introduction. If you want to
write complex numeric applications, you'll need to know a lot
more.)
 

arnuld

As the FAQ explains, floating point is inaccurate, but my project is heavily
oriented towards number-crunching, so I was thinking of stopping development
of that software in C++ and simply using FORTRAN for this specific
application in a special domain.

Searching the archives at Google Groups gave me a template created
by Jerry Coffin. I just added it to my program, but my program is not
compiling:


#include <iostream>
#include <cassert>

template<class T> bool float_equality(T const& a, T const& b, T const& factor)
{
assert(!std::numeric_limits<T>::is_integer);

T x = std::numeric_limits<T>::epsilon * factor * a;
T y = std::numeric_limits<T>::epsilon * factor * b;

T diff = std::abs(a-b);

return (diff < x) || (diff < y);
}


int main()
{
double d1 = 0.1;
const double d2 = 0.9;
d1 *= 9;
const double my_epsilon = 1E-10;
if( float_equality( d1, d2, my_epsilon ) )
{
std::cout << " == \n";
}
else
{
std::cout << " != \n";
}

return 0;
}

=============== OUTPUT ========================
/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra float-compare.cpp
float-compare.cpp: In function ‘bool float_equality(const T&, const T&, const T&) [with T = double]’:
float-compare.cpp:26:   instantiated from here
float-compare.cpp:11: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:12: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:14: error: call of overloaded ‘abs(double)’ is ambiguous
/usr/include/stdlib.h:691: note: candidates are: int abs(int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:143: note: long int std::abs(long int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:174: note: long long int __gnu_cxx::abs(long long int)

/home/arnuld/programs $
 

arnuld

Did you remember to #include <limits>?

That's where std::numeric_limits lives.

That does not make any difference. Even after including <limits>, the
error remains the same:

/home/arnuld/programs $ g++ -ansi -pedantic -Wall -Wextra float-compare.cpp
float-compare.cpp: In function ‘bool float_equality(const T&, const T&, const T&) [with T = double]’:
float-compare.cpp:28:   instantiated from here
float-compare.cpp:13: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:14: error: invalid operands of types ‘double ()()throw ()’ and ‘const double’ to binary ‘operator*’
float-compare.cpp:16: error: call of overloaded ‘abs(double)’ is ambiguous
/usr/include/stdlib.h:691: note: candidates are: int abs(int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:143: note: long int std::abs(long int)
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.2.2/../../../../include/c++/4.2.2/cstdlib:174: note: long long int __gnu_cxx::abs(long long int)
/home/arnuld/programs $
 

arnuld

I.e. std::numeric_limits<T>::epsilon is a function, and we need to use
its result. I've since come up with a method I like somewhat better
though:

#include <math.h>

bool isEqual(double a, double b, double prec = 1e-6) {
int mag1, mag2;

frexp(a, &mag1);
frexp(b, &mag2);

if ( mag1 != mag2)
return false;

double epsilon = ldexp(prec, mag1);

double difference = abs(a-b);

return difference <= epsilon;
}

It is getting much more complex; I think the FAQ's version is much
simpler to work with.
 

arnuld

Referring to the "isEqual()" function of the FAQ version:
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.17

Here is a strange thing that happens on my system (Arch Linux, AMD64,
GCC 4.2.2):

int main()
{
double a = 1.0 / 10.0;
double b = a * 10.0;

if( float_equality( a, b ))
{
std::cout << "EPSILON at job\n";
}
else
{
std::cout << "OOPS!" << std::endl;
}

return 0;
}

Opposite to what the FAQ says, this code always outputs: OOPS!


int main()
{
double a = 1.0 / 10.0;
double b = a * 10.0;
const double c = 1.0;

if( b != c )
{
std::cout << "Surprise ! \n" << std::endl;
}
else
{
std::cout << "== \n";
}

return 0;
}

it always outputs ==


What is wrong here?


-- arnuld
http://lispmachine.wordpress.com/
 

Jerry Coffin

It is getting much more complex; I think the FAQ's version is much
simpler to work with.

This is still pretty trivial to work with -- you normally just use it
as:

if (isEqual(x,y))
whatever
else
whatever_else

When/if you want to specify a degree of precision, you can do that. For
example, if you want to specify agreement to 10 decimal digits, you'd do
something like:

if (isEqual(x,y, 1e-10))
equal();
else
not_equal();

I'll openly admit that _internally_, this is more complex than the code
in the FAQ. That's a simple matter of added complexity to make it work
right though. For example, using code like in the FAQ, you're stuck with
things like:

if (isEqual(1e-30, 1e-130))


coming out as true even though the numbers differ by 100 orders of
magnitude. At the same time, two numbers in the range of, say, 1e+100
will be shown as different, even if they differ by only the smallest
difference that's possible (in fact, for numbers much greater than 1,
it's identical to using simple comparison, with no tolerance for
rounding differences at all).

Saying the code in the FAQ is simple (to use) is a bit like observing
that a pair of clock-hands painted onto a board makes a clock that's
really simple to read -- you don't have to learn to read time; you just
always say: "It's 10" (or whatever).

Learning to read time adds complexity -- and a real clock is clearly
more complex than a board painted with clock hands. Most people consider
it a worthwhile trade-off to get something that's at least close to
correct more than twice a day.
 

James Kanze

Just some comments on Jerry's original code. I missed it when
he posted it.


Whatever you do, don't call this isEqual. It doesn't test for
equality, and it doesn't establish an equivalence relationship.
Something like isApproximatelyEqual may require a bit more
typing, but it won't confuse the reader.

And the above condition is almost certainly wrong. On my Intel
based machine, it means that your function will return false for
the values 1.0 and 0.9999999999999998, regardless of the
precision requested.

You really do have to calculate the epsilon as a function of the
two values involved, something like:

if ( sign( a ) != sign( b ) ) { // not sure, but maybe...
return false ;
}
double tolerance = fabs(a + b) * prec / 2.0 ;
return fabs( a - b ) <= tolerance ;

Except, of course, that still has problems with very big values
(for which a + b might overflow) and very small values (for
which a - b might underflow, and result in zero). And I'm not
really sure what you should do for values which are near the
minimum representable: Do they compare approximately equal to
zero? Do they compare equal even if their signs are not the
same?

The correct answers to the above questions, of course, depend on
what is being done with the numbers. As Pete and I have said,
there isn't any real solution except understanding machine
floating point, and using that understanding intelligently.
This is still pretty trivial to work with -- you normally just use it
as:

if (isEqual(x,y))
whatever
else
whatever_else

And this is a perfect example of why the function shouldn't be
called isEqual. Any reader would expect that:
isEqual(x,y) && isEqual(y,z) <==> isEqual(x,z)
A function where that doesn't hold (modulo "singular values"
like NaN's) shouldn't be called isEqual. Ever.

[...]
I'll openly admit that _internally_, this is more complex than the code
in the FAQ.

I'm not sure what the FAQ says, but your code isn't really
correct (although I'm not sure that there is an absolute
definition of "correct" in this case).
 

Jerry Coffin

[ ... ]
Whatever you do, don't call this isEqual. It doesn't test for
equality, and it doesn't establish an equivalence relationship.
Something like isApproximatelyEqual may require a bit more
typing, but it won't confuse the reader.

You have a good point -- this was originally posted as a follow-up to a
post that used the name isEqual, so I used the same even though it's
clearly not accurate. I believe a comment to that effect was made at the
time, but it should have been incorporated into the code.
And the above condition is almost certainly wrong. On my Intel
based machine, it means that your function will return false for
the values 1.0 and 0.9999999999999998, regardless of the
precision requested.

Good point.
You really do have to calculate the epsilon as a function of the
two values involved, something like:

That's what I was using frexp and ldexp to do. I didn't take all the
corner cases into account, but the idea was definitely to adjust the
magnitude of the precision to fit the magnitudes of the numbers.
Unfortunately, both you and I took roughly the same route, of adjusting
the precision specification to fit the magnitude of the numbers, which
makes some problems such as possible underflow and overflow difficult to
avoid.

After some thought, I realized that it would be better to reverse that:
adjust the magnitude of the numbers to fit the precision specification.
This makes most of the problems disappear completely. I've included the
code at the end of this post.
Except, of course, that still has problems with very big values
(for which a + b might overflow) and very small values (for
which a - b might underflow, and result in zero). And I'm not
really sure what you should do for values which are near the
minimum representable: Do they compare approximately equal to
zero? Do they compare equal even if their signs are not the
same?

See below -- I'm pretty sure my new code avoids problems with either
underflow or overflow.

I'm not sure the problems with numbers extremely close to zero are
entirely real. The question being asked is whether two numbers are equal
to a specified precision. The answer to that is a simple yes/no. That
question may not always be the appropriate one for every possible
calculation, but that doesn't mean there's anything wrong with the code
as it stands.
I'm not sure what the FAQ says, but your code isn't really
correct (although I'm not sure that there is an absolute
definition of "correct" in this case).

This is a basic question of what should be specified, and whether that
specification fits all possible purposes. The code in the FAQ might fit
some specification, but only a very limited one (i.e. that all the
numbers involved must be quite close to 1.0 or -1.0).

I wouldn't attempt to claim that it fits every possible situation, but I
think the following code is (at least starting to get close to) correct.
The specification I'm following is that it is supposed to determine
whether two numbers are equal within a specified relative difference.
Here's the code, along with a few test cases:

#include <math.h>
#include <stdlib.h> /* for abs(int), used on the exponents below */

bool isNearlyEqual(double a, double b, double prec = 1e-6) {
int mag1, mag2;

double num1 = frexp(a, &mag1);
double num2 = frexp(b, &mag2);

if (abs(mag1-mag2) > 1)
return false;

num2 = ldexp(num2, mag2-mag1);

return fabs(num2-num1) < prec;
}

#ifdef TEST
#include <iostream>

int main() {

std::cout.setf(std::ios_base::boolalpha);

// these should yield false -- they differ after 5 digits
std::cout << isNearlyEqual(1e-20, 1.00001e-20) << std::endl;
std::cout << isNearlyEqual(1e200, 1.00001e200) << std::endl;
std::cout << isNearlyEqual(-1e10, -1.0001e10) << std::endl;

// these should yield true; all are equal to at least 6 digits
std::cout << isNearlyEqual(1e-20, 1.000001e-20) << std::endl;
std::cout << isNearlyEqual(1e200, 1.000001e200) << std::endl;
std::cout << isNearlyEqual(1.0, 0.99999998) << std::endl;
std::cout << isNearlyEqual(-1e10, -1.000001e10) << std::endl;
return 0;
}

#endif

I'd certainly welcome any comments on this code.

As an aside, I'd note that I don't really like the 'if (abs(mag2-mag1)>
1)' part, but I can't see anything a lot better at the moment. In
theory, I'd like it to take the tolerance into account -- e.g. if
somebody specifies a tolerance of 100, it should allow numbers that are
different by a factor of 100 to yield true. This would add some
complexity, however, and I can't think of any real uses to justify it.
 

James Kanze

[ ... ]
You really do have to calculate the epsilon as a function of the
two values involved, something like:
That's what I was using frexp and ldexp to do. I didn't take all the
corner cases into account, but the idea was definitely to adjust the
magnitude of the precision to fit the magnitudes of the numbers.

I realized that. I was just wondering why you didn't take the
standard approach. Or what I believe is the standard
approach---I'm really not that up in numeric programming. As I
pointed out, the standard approach has problems with overflow
and underflow, but depending on the application, that will often
not be an issue.

[...]
See below -- I'm pretty sure my new code avoids problems with
either underflow or overflow.
I'm not sure the problems with numbers extremely close to zero are
entirely real. The question being asked is whether two numbers are equal
to a specified precision. The answer to that is a simple yes/no. That
question may not always be the appropriate one for every possible
calculation, but that doesn't mean there's anything wrong with the code
as it stands.

[...]
This is a basic question of what should be specified, and whether that
specification fits all possible purposes.

[and later...]
I wouldn't attempt to claim that it fits every possible
situation,[...]

I think that that is really the crux of the problem. More
important than the actual code is understanding exactly what it
is doing, and being able to analyse when it is appropriate, and
when not.
The code in the FAQ might fit
some specification, but only a very limited one (i.e. that all the
numbers involved must be quite close to 1.0 or -1.0).

As I said, I haven't seen it. If it is something like "return
fabs(a-b) < prec", then I agree that it's almost never
appropriate.

Except, of course, when it is. In process control software,
I've had cases where the requirement was to keep the actual
value within a specified epsilon of the target value. In such
cases, of course, the "tolerance" is usually something between
1% and 10%, and the actual values will be of similar magnitude.
On the other hand, you rarely test for equality in such cases,
since the action you take will depend on whether the actual
value is less than or greater than the target value. (Is
"target value" the correct English word here: "valeur de
consigne" in French, "Sollwert" in German?)
I wouldn't attempt to claim that it fits every possible
situation, but I think the following code is (at least
starting to get close to) correct. The specification I'm
following is that it is supposed to determine whether two
numbers are equal within a specified relative difference.
Here's the code, along with a few test cases:
#include <math.h>
bool isNearlyEqual(double a, double b, double prec = 1e-6) {
int mag1, mag2;
double num1 = frexp(a, &mag1);
double num2 = frexp(b, &mag2);
if (abs(mag1-mag2) > 1)
return false;
num2 = ldexp(num2, mag2-mag1);
return fabs(num2-num1) < prec;
}

I'm still curious as to why you are trying to use frexp and
ldexp, rather than a more classical approach. Whatever that is:
I've been searching the Web for something, but none of my usual
numeric resources pages seem to turn up anything---should I
conclude that people competent in numeric processing don't have
such a function in their toolbox, and that thus, we are probably
wrong in trying to define one.

(FWIW: The reason I don't think that ldexp and frexp would be in
a classical solution, is that the classical solutions
doubtlessly date from the days of Fortran.)

[...]
As an aside, I'd note that I don't really like the 'if
(abs(mag2-mag1)> 1)' part, but I can't see anything a lot
better at the moment.

I think it's a common characteristic for numerical functions to
special case limit cases. On the other hand, you definitely
need to test border values here.
In theory, I'd like it to take the tolerance into account --
e.g. if somebody specifies a tolerance of 100, it should allow
numbers that are different by a factor of 100 to yield true.
This would add some complexity, however, and I can't think of
any real uses to justify it.

Agreed. I suspect as well that you'll get some errors when the
limit is around 2---your test will return false when one
number is 1.9999 times the other, or some such. I'd say that
you're missing an essential first line in your code:

assert( prec > 0.0 && prec < 1.0 ) ;

That seems like a reasonable constraint. (The first condition
should actually be something like "prec >= pow( 2.0,
-std::numeric_limits< double >::digits )" if the function is to
make sense. But maybe it is more reasonable to use prec >= 0.0,
with the documentation saying that if prec is too small, the
function just becomes a very expensive way to write ==.)
 

Jerry Coffin

On Feb 16, 9:09 pm, Jerry Coffin <[email protected]> wrote:

[ ... ]
[...]
This is a basic question of what should be specified, and whether that
specification fits all possible purposes.

[and later...]
I wouldn't attempt to claim that it fits every possible
situation,[...]

I think that that is really the crux of the problem. More
important than the actual code is understanding exactly what it
is doing, and being able to analyse when it is appropriate, and
when not.

I wholeheartedly agree.
As I said, I haven't seen it. If it is something like "return
fabs(a-b) < prec", then I agree that it's almost never
appropriate.

It's not quite that bad, but pretty close:

inline bool isEqual(double x, double y)
{
const double epsilon = /* some small number such as 1e-5 */;
return std::abs(x - y) <= epsilon * std::abs(x);
}

That's undoubtedly where the poor name originated as well...
Except, of course, when it is. In process control software,
I've had cases where the requirement was to keep the actual
value within a specified epsilon of the target value. In such
cases, of course, the "tolerance" is usually something between
1% and 10%, and the actual values will be of similar magnitude.
On the other hand, you rarely test for equality in such cases,
since the action you take will depend on whether the actual
value is less than or greater than the target value. (Is
"target value" the correct English word here: "valeur de
consigne" in French, "Sollwert" in German?)

Yup -- target value looks entirely appropriate here. You're right
though: for a case like this, you're generally looking at whether the
value is in one of two ranges, not really looking for equality per se.

[ ... ]
I'm still curious as to why you are trying to use frexp and
ldexp, rather than a more classical approach. Whatever that is:
I've been searching the Web for something, but none of my usual
numeric resources pages seem to turn up anything---should I
conclude that people competent in numeric processing don't have
such a function in their toolbox, and that thus, we are probably
wrong in trying to define one.

Mostly because virtually all the existing solutions I've seen have
obvious problems. As I said (or at least implied) previously, I'm not
out to write something that applies in every possible situation, but I
would like to eliminate obvious, well-known problems.

[ ... ]
Agreed. I suspect as well that you'll get some errors when the
limit is around 2---your test will return false when one
number is 1.9999 times the other, or some such. I'd say that
you're missing an essential first line in your code:

assert( prec > 0.0 && prec < 1.0 ) ;

That seems like a reasonable constraint. (The first condition
should actually be something like "prec >= pow( 2.0,
-std::numeric_limits< double >::digits )" if the function is to
make sense.

Right -- an error with prec near 2 wouldn't surprise me, but when you
get down to it, I'm not terribly concerned with prec being greater than
one.

In theory, I'm pretty sure the correct computation should be:

pow(std::numeric_limits<double>::radix, -std::numeric_limits<double>::digits)

Although essentially all existing hardware IS binary, the 'digits' is in
terms of the specified radix, which could theoretically be something
other than 2.

I do like the idea of asserting on the range of prec. In this case I
think we can reasonably just use epsilon:

assert(prec>=std::numeric_limits<double>::epsilon() && prec<1.0);

In theory you could argue that epsilon is technically the wrong value --
IIRC, it's the smallest value that can be added to 1.0 and get a number
distinguished from 1.0. What we (sort of) want is the smallest value
that can be subtracted from 1.0 and get a value distinguishable from
1.0. That number could be something like one FP "quanta" smaller than
epsilon.
But maybe it is more reasonable to use prec >= 0.0,
with the documentation saying that if prec is too small, the
function just becomes a very expensive way to write ==.)

That would certainly work as well. I'll have to sit back and think a bit
about which is really better. Realistically, a value equal to epsilon is
probably pretty pointless as well, but I do like it from a viewpoint of
helping embed documentation of the function's intent into the code.
 

James Kanze

On 2008-02-17 08:05:09, James Kanze wrote:
"Setpoint" is usually used for this.

Thanks. I'm not familiar with the term, but then, I've never
worked in an English speaking country.

I'll admit that I particularly like the German in this case:
Istwert and Sollwert---literally "is-value" and
"should-be-value". Although my library was originally developed
in French, and has since been converted to English, the
parameters of the verification functions in the test suites have
always been "ist" and "soll". Short, simple, direct and to the
point.
 

James Kanze

[ ... ]
As I said, I haven't seen it. If it is something like "return
fabs(a-b) < prec", then I agree that it's almost never
appropriate.
It's not quite that bad, but pretty close:
inline bool isEqual(double x, double y)
{
const double epsilon = /* some small number such as 1e-5 */;
return std::abs(x - y) <= epsilon * std::abs(x);
}
That's also undoubtedly where the poor name originated as well...

Hmmm. If the FAQ actually recommends using this function to
replace ==, then I'd say that it's an error in the FAQ. Or is
it just a suggestion along the lines of "you might have to do
something like this in certain cases".
Yup -- target value looks entirely appropriate here. You're
right though: for a case like this, you're generally looking
at whether the value is in one of two ranges, not really
looking for equality per se.

Typically, you have three possibilities to consider: in range
(do nothing), below range, and above range. The action you take
for below range will not be the same as for above range. (The
one exception might be absolute limits: if you get out of those,
your system is malfunctioning; you want to shut down
immediately and let the backup take over.)

Actually, typically, it's more complicated than that, since for
real physical values, you almost always have to take history
into account as well, in order to avoid oscillations in the
value due to overcorrection. (All of the work I've done in this
domain was in Germany, so I only know the German word for it:
PID-Regler. But the PID part is based on an international
vocabulary: proportional, integral and derivative. Basically,
the correction you apply is proportional to the error, the
integral of the error and the derivative of the error.)
 
