Testing for very small doubles with DBL_EPSILON and _isnan()


Olumide

Hello,

I recently had a colleague introduce me to the macro DBL_EPSILON and
the function _isnan(), because some of my functions return 1.#QNANO
values. However, I'm not sure how best to use DBL_EPSILON and
_isnan(). Which is the better way to test for a small double:

Method 1: _isnan( really_small_double );

Method 2: if( really_small_double < DBL_EPSILON ) // do stuff

Thanks,

- Olumide
 

Keith Thompson

Olumide said:
I recently had a colleague introduce me to the macro DBL_EPSILON and
the function _isnan(), because some of my functions return 1.#QNANO
values. However, I'm not sure how best to use DBL_EPSILON and
_isnan(). Which is the better way to test for a small double:

Method 1: _isnan( really_small_double );

Method 2: if( really_small_double < DBL_EPSILON ) // do stuff

Neither.

C99 has a standard macro isnan(); if your implementation doesn't
support it, it might instead have something similar called "_isnan()".
You'll have to consult your implementation's documentation to find out
exactly what it does. But it almost certainly tests whether its
argument is a NaN -- which stands for "Not a Number". It's not a
really small number; it's not a number at all.

DBL_EPSILON is the difference between 1.0 and the next representable
number above 1.0. Whether that qualifies as "really small" is
something you'll have to decide for yourself. It depends on the
requirements of your application. There is no single answer.

Oh, and (x < DBL_EPSILON) is true for very large negative numbers.

I *think* this applies to both C and C++ (I'm reading and posting in
comp.lang.c), but cross-posts between comp.lang.c and comp.lang.c++
are almost never a good idea.
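
A minimal sketch of both points, assuming a C99 compiler (on the
poster's MSVC, _isnan() from <float.h> plays the role of isnan()):

#include <float.h>   /* DBL_EPSILON */
#include <math.h>    /* isnan(), NAN -- C99 */
#include <stdio.h>

int main(void)
{
    double not_a_number = NAN;      /* printed as 1.#QNAN by MSVC */
    double big_negative = -1.0e300;

    /* isnan() is the right test for a NaN result. */
    printf("%d\n", isnan(not_a_number));          /* nonzero */

    /* Every ordered comparison involving a NaN is false... */
    printf("%d\n", not_a_number < DBL_EPSILON);   /* 0 */

    /* ...while (x < DBL_EPSILON) is true for huge negative values. */
    printf("%d\n", big_negative < DBL_EPSILON);   /* 1 */
    return 0;
}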
 

Barry Schwarz

Hello,

I recently had a colleague introduce me to the macro DBL_EPSILON and
the function _isnan(), because some of my functions return 1.#QNANO
values. However, I'm not sure how best to use DBL_EPSILON and
_isnan(). Which is the better way to test for a small double:

Method 1: _isnan( really_small_double );

Method 2: if( really_small_double < DBL_EPSILON ) // do stuff

You misunderstand DBL_EPSILON. It is not a very small number. It is
the smallest number that can be added to 1.0 to produce a result >
1.0. If a double has d digits of precision, then DBL_EPSILON is on
the order of pow(10,-d). Almost all floating point representations
can handle numbers much smaller.
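
For concreteness, a two-line program (assuming IEEE 754 doubles) shows
the scale difference between DBL_EPSILON and the smallest normal
positive double, DBL_MIN:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("DBL_EPSILON = %g\n", DBL_EPSILON); /* about 2.22e-16 */
    printf("DBL_MIN     = %g\n", DBL_MIN);     /* about 2.23e-308 */
    return 0;
}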
 

Olumide

You misunderstand DBL_EPSILON.  It is not a very small number.  It is
the smallest number that can be added to 1.0 to produce a result >
1.0.  If a double has d digits of precision, then DBL_EPSILON is on
the order of pow(10,-d).  Almost all floating point representations
can handle numbers much smaller.

Thanks everyone. I'm sorry I didn't express myself clearly enough. By
"too small" I mean a value that cannot be represented by a double,
hence the value 1.#QNANO (BTW, I'm using Visual Studio .NET 2003). So
what I meant to ask is: what is the proper technique for testing
whether the result of a computation is not representable as a double?

Thanks.
 

Olumide

> There's no such thing as too small to represent as a double. Really
> really small values become zero. NaNs come from things like 0/0,
> inf-inf, etc.

Perhaps then the problem is a NaN. My functions perform lots of
computation of the type: a^b, and occasionally, a is a very small
number, while b is negative, so that the result tends to 1/0 . Might
this explain why my function occasionally returns 1.#QNANO?

Thanks.
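
A small sketch of the quoted point, assuming IEEE 754 doubles: a result
whose magnitude is too small does not become a NaN; it underflows to
zero.

#include <stdio.h>

int main(void)
{
    double tiny = 1e-200;
    /* 1e-400 is below even the smallest subnormal double (~4.9e-324),
       so the product underflows to 0. No NaN is involved. */
    printf("%g\n", tiny * tiny);
    return 0;
}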
 

Barry Schwarz

DBL_MIN is a very small number.

Actually it is a very small positive number but it is still larger
than most (half+1) floating point values the system can represent.
 

Keith Thompson

Olumide said:
Perhaps then the problem is a NaN. My functions perform lots of
computation of the type: a^b, and occasionally, a is a very small
number, while b is negative, so that the result tends to 1/0 . Might
this explain why my function occasionally returns 1.#QNANO?

It's hard to tell.

Is a^b supposed to be a raised to the power b? If so, I'd expect
something like, say, 1e-20 ^ (-100) to yield Infinity, not a NaN.
You'll need to analyze your code, perhaps using a debugger, to
determine where it's going wrong. If you can narrow your code down to
a small self-contained program that exhibits the problem, we can
probably help you -- or, very likely, you'll find the problem yourself
in the process of narrowing it down.
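
A quick check of that expectation, assuming IEEE 754 doubles and a C99
pow():

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* (1e-20)^-100 is 1e2000, far beyond DBL_MAX, so pow() overflows
       to +infinity (shown as 1.#INF by MSVC), not to a NaN. */
    printf("%g\n", pow(1e-20, -100.0));
    return 0;
}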

But please decide which language you're using, C or C++, and post just
to the appropriate newsgroup.

Also, what happened to the "... said:" line at the top of your quoted
text? That's an attribution line. Please leave those in place for any
quoted text. Thanks.
 

Dik T. Winter

> You misunderstand DBL_EPSILON. It is not a very small number. It is
> the smallest number that can be added to 1.0 to produce a result >
> 1.0.

You misunderstand DBL_EPSILON. It is the difference between 1.0 and the
next larger number. With standard rounding:
1.0 + 3 * DBL_EPSILON / 4 > 1.0
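
A sketch of that behaviour: under round-to-nearest, 1.0 + 3*DBL_EPSILON/4
is closer to 1.0 + DBL_EPSILON than to 1.0, so it rounds up and compares
greater than 1.0 even though the addend is smaller than DBL_EPSILON.

#include <float.h>
#include <stdio.h>

int main(void)
{
    /* volatile discourages the compiler from carrying extra precision */
    volatile double sum = 1.0 + 3.0 * DBL_EPSILON / 4.0;
    printf("%d\n", sum > 1.0);  /* 1 under round-to-nearest */
    return 0;
}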
 

Dik T. Winter

> Perhaps then the problem is a NaN. My functions perform lots of
> computation of the type: a^b, and occasionally, a is a very small
> number, while b is negative, so that the result tends to 1/0 . Might
> this explain why my function occasionally returns 1.#QNANO?

No, that would result in an infinity, not in a NAN. The only way that could
yield a NAN would be if a was negative (and with a well-designed library,
b non-integral). In all other cases a number or an infinity should be
returned (with the now common IEEE standard arithmetic).

So I suspect your a has become negative.
 

Dik T. Winter

> Thanks everyone. I'm sorry I didn't express myself clearly enough. By
> "too small" I mean cannot be represented by a double, thus the value
> 1.#QNANO (BTW, I'm using Visual Studio .NET 2003). Therefore, I meant
> to ask what is the proper technique for testing the result of a
> computation is not representable as a double?

You misunderstand. A NAN does not mean what you think it means. It means
either that the result is not a real number (but complex, like sqrt(-2.0))
or that the question asked makes no sense (like 0.0/0.0).

The proper way to check for it is to test with an isnan() function, if
available, otherwise the test a != a should yield true for a NAN, but
beware of compilers that are too eager in their optimisation.
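
A minimal sketch of the two tests, tried in the order described (the
helper name is illustrative, not from the thread):

#include <math.h>

static int nan_test(double a)
{
#ifdef isnan            /* C99 classification macro, when available */
    return isnan(a);
#else
    return a != a;      /* a NaN is the only value that compares unequal
                           to itself; beware over-eager optimisers */
#endif
}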
 

Barry Schwarz

You misunderstand DBL_EPSILON. It is the difference between 1.0 and the
next larger number. With standard rounding:
1.0 + 3 * DBL_EPSILON / 4 > 1.0

I took the definition right off of K&R II, page 258.

Your definition is from (or consistent with) n1256 but that document
gives five possible rounding methods, at least two of which render
your assertion false.
 

James Kuyper

Barry said:
I took the definition right off of K&R II, page 258.

Your definition is from (or consistent with) n1256 but that document
gives five possible rounding methods, at least two of which render
your assertion false.

Actually, it defines five different values for FLT_ROUNDS, but I would
not consider the value of -1 to describe a specific rounding method; and
it allows for other possible rounding modes.
The standard uses the phrase "default rounding" many times, and that may
be what he meant by "standard rounding", but as far as I can tell the
standard never defines what default rounding is. Since most of those
uses are found in Annex F, it's probably defined by IEC 60559, in which
case it would not be applicable unless __STDC_IEC_559__ is pre-defined by
the implementation.

However, his point that DBL_EPSILON is defined by subtraction, not by
addition, is a valid and significant one, since the two definitions
will not, in general, denote the same number.
 

Tim Rentsch

Barry Schwarz said:
Actually it is a very small positive number but it is still larger
than most (half+1) floating point values the system can represent.

Apparently someone is being deliberately obtuse. Of course what
was meant (and I expect how it was read by most people) is that
DBL_MIN is a (positive) number with a very small magnitude. The
earlier discussion makes this reading the only sensible one.
 

Olumide

...
> > Perhaps then the problem is a NaN. My functions perform lots of
> > computation of the type: a^b, and occasionally, a is a very small
> > number, while b is negative, so that the result tends to 1/0 . Might
> > this explain why my function occasionally returns 1.#QNANO?
>
> No, that would result in an infinity, not in a NAN. The only way that could
> yield a NAN would be if a was negative (and with a well-designed library,
> b non-integral). In all other cases a number or an infinity should be
> returned (with the now common IEEE standard arithmetic).
>
> So I suspect your a has become negative.

Okay, here is a snippet of a typical section of code that occasionally
results in the output being 1.#QNANO:

double deltaX, deltaY, deltaZ;
initializeDeltaXYZ( ... );
double r2 = deltaX * deltaX + deltaY * deltaY + deltaZ * deltaZ;
double output = 2 * m_alpha * pow( r2, m_alpha - 1 )
              + 4 * m_alpha * (m_alpha - 1) * pow( deltaX, 2 ) * pow( r2, m_alpha - 2 );

What I would like to know is:

1. why output is sometimes evaluated as 1.#QNANO in this case
2. how to test when this happens.

Thanks.
 

Olumide

Okay, here is a snippet of a typical section of code that occasionally
results in the output being 1.#QNANO:

double deltaX, deltaY, deltaZ;
initializeDeltaXYZ( ... );
double r2 = deltaX * deltaX + deltaY * deltaY + deltaZ * deltaZ;
double output = 2 * m_alpha * pow( r2, m_alpha - 1 )
              + 4 * m_alpha * (m_alpha - 1) * pow( deltaX, 2 ) * pow( r2, m_alpha - 2 );

What I would like to know is:

1. why output is sometimes evaluated as 1.#QNANO in this case
2. how to test when this happens.

I forgot to add: m_alpha is a double, with typical values 0.5, 1.5,
or 2.5.
 

Barry Schwarz

Apparently someone is being deliberately obtuse. Of course what
was meant (and I expect how it was read by most people) is that
DBL_MIN is a (positive) number with a very small magnitude. The
earlier discussion makes this reading the only sensible one.

DBL_MIN was introduced into a discussion in which it had no relevance,
or at least none that I could find. Or maybe your ability to infer
intended meaning is just sharper than mine.

On the other hand, considering your penchant for clarity (such as your
post in the VLA thread 45 minutes earlier), I guess you are precise
while I am obtuse. I can live with that. I am also stubborn as well
as persistent.
 

Nate Eldredge

Olumide said:
Okay, here is a snippet of a typical section of code that occasionally
results in the output being 1.#QNANO:

double deltaX, deltaY, deltaZ;
initializeDeltaXYZ( ... );
double r2 = deltaX * deltaX + deltaY * deltaY + deltaZ * deltaZ;
double output = 2 * m_alpha * pow( r2, m_alpha - 1 )
              + 4 * m_alpha * (m_alpha - 1) * pow( deltaX, 2 ) * pow( r2, m_alpha - 2 );

I'd suggest printing the values of all these variables (r2, m_alpha, and
all the deltas) before the computation of `output'. That will probably
bring enlightenment.
What I would like to know is:

1. why output is sometimes evaluated as 1.#QNANO in this case

2. how to test when this happens.

As mentioned earlier, my guess would be that r2 comes out negative, so
you could test for that. It's also conceivable that one of the deltas
got set to a NaN somehow, depending on what initializeDeltaXYZ does.
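
A debugging sketch along those lines; the wrapper function and the test
values are illustrative, not from the original program (on MSVC 2003,
_isnan() from <float.h> stands in for C99's isnan()):

#include <math.h>
#include <stdio.h>

static double compute_output(double deltaX, double deltaY, double deltaZ,
                             double m_alpha)
{
    double r2 = deltaX * deltaX + deltaY * deltaY + deltaZ * deltaZ;
    double output = 2 * m_alpha * pow( r2, m_alpha - 1 )
                  + 4 * m_alpha * (m_alpha - 1) * pow( deltaX, 2 )
                  * pow( r2, m_alpha - 2 );

    if (isnan(output))   /* C99; use _isnan() on MSVC */
        fprintf(stderr, "NaN: deltaX=%g deltaY=%g deltaZ=%g r2=%g m_alpha=%g\n",
                deltaX, deltaY, deltaZ, r2, m_alpha);
    return output;
}

int main(void)
{
    /* One plausible failure mode: with all three deltas 0 and
       m_alpha = 0.5, pow(r2, m_alpha - 2) is pow(0, -1.5) = +infinity,
       pow(deltaX, 2) is 0, and 0 * infinity is a NaN. */
    compute_output(0.0, 0.0, 0.0, 0.5);
    return 0;
}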
 

Tim Rentsch

Barry Schwarz said:
DBL_MIN was introduced into a discussion in which it had no relevance,
or at least none that I could find. Or maybe your ability to infer
intended meaning is just sharper than mine.

The earlier discussion shows pretty clearly that DBL_EPSILON is a
positive number with a relatively small (but not inordinately
small) magnitude. But if DBL_EPSILON "is not a very small
number" and DBL_MIN is, it's more likely what's being talked
about is the magnitude than the sign. I admit, I thought
that was obvious from the discussion.

On the other hand, considering your penchant for clarity (such as your
post in the VLA thread 45 minutes earlier), I guess you are precise
while I am obtuse. I can live with that. I am also stubborn as well
as persistent.

I try to write both clearly and precisely, perhaps moreso than
I should. On the other hand, I know not everyone does. If a
statement seems incorrect or not to make sense, I think it helps
to try to give the writer the benefit of the doubt -- assume
that they were trying to say something reasonable, but perhaps
just phrased it poorly. Sometimes doing that leads to more
confusion rather than less, but usually I think it helps much
more than it hurts. What's the most sensible meaning (reading)
a statement (especially a confusing or ambiguous one) can have?
I recommend taking that approach, and not just passively but
actively.

Also, my apologies if my comment about deliberate obtuseness
was misplaced.
 

Keith Thompson

Tim Rentsch said:
Apparently someone is being deliberately obtuse. Of course what
was meant (and I expect how it was read by most people) is that
DBL_MIN is a (positive) number with a very small magnitude. The
earlier discussion makes this reading the only sensible one.

But note that one of the tests the OP proposed was:

if (really_small_double < SOME_TINY_CONSTANT)

The OP needs to think about how to handle negative numbers.

And yes, DBL_MIN is a very small positive number, but it's not useful
in determining whether some given number is very small.
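
A sketch of a magnitude test that copes with the sign issue, with the
tolerance left to the caller (the function name is illustrative, and
DBL_EPSILON is only a sensible scale for values near 1.0):

#include <math.h>

static int is_negligible(double x, double tolerance)
{
    /* compare the magnitude, not the signed value, so that -1e300
       is not mistaken for a "really small" number */
    return fabs(x) < tolerance;
}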
 

JosephKK

Neither.

C99 has a standard macro isnan(); if your implementation doesn't
support it, it might instead have something similar called "_isnan()".
You'll have to consult your implementation's documentation to find out
exactly what it does. But it almost certainly tests whether its
argument is a NaN -- which stands for "Not a Number". It's not a
really small number; it's not a number at all.

DBL_EPSILON is the difference between 1.0 and the next representable
number above 1.0.

Not really. It specifies the guaranteed accuracy of mathematical
operations, not the resolution. The difference can be pretty subtle,
but consider: if you are dealing with largish quantities, say
Avogadro's number, the smallest representable difference between
neighbouring values is still tens of millions of atoms. The accuracy
ratio scales directly with the size of the numbers represented, and
this is but one of the contributing components to a property called
numerical instability.
 
