Float comparison


Flash Gordon

CBFalconer said:
You did. See the first quote above. As I read it.

You have not read up on the technique then. Simplifying a lot, as a
result you get one floating-point number which is as close as you can
get to the mathematical result, and another floating point number which
is in effect the binary places to the right of what you could get into
the first number (i.e. it is the exact error). This means that you end
up with twice the number of bits in the result as were in either of the
operands. The technique, however, does RELY on all of the numbers being
exact numbers and not ranges. It also relies on the optimiser not
cheating and doing things the C standard does not allow it to do. One
thing it does NOT rely on is having extra bits in the operations
performed by the hardware (it is a technique explicitly designed to get
more precision than the hardware supports).

To save you buying the book, here is one reference to it that describes
the technique. http://www.mrob.com/pub/math/f161.html
I've not reviewed it beyond a brief skim, so it could contain errors.
For other possible links, just google for it.
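
For the curious, here is a minimal sketch of the Fast2Sum step the
technique is built on (my own illustration, assuming IEEE
round-to-nearest and an optimiser that sticks to the C standard):

#include <stdio.h>

/* Fast2Sum (Dekker): for |a| >= |b|, s is the rounded sum and e the
   exact rounding error, so a + b == s + e holds exactly. It relies on
   the operands being exact values, not ranges. */
static void fast2sum(double a, double b, double *s, double *e)
{
    *s = a + b;
    *e = b - (*s - a);
}

int main(void)
{
    double s, e;
    fast2sum(1.0, 0x1p-60, &s, &e); /* second operand far below one ULP */
    printf("s=%a e=%a\n", s, e);    /* e recovers the lost low bits */
    return 0;
}

On an IEEE system this prints s=0x1p+0 e=0x1p-60: the error term holds
exactly the bits that did not fit into s.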
 

CBFalconer

Flash said:
CBFalconer wrote:
.... snip ...


You have said that your EPSILON is the appropriate xxx_EPSILON,
now you are saying it is half the LSB of the significand. Which
is it, since it can't be both. This is part of the problem, you
say a term means one thing then some time later decide it means
something else without bothering to tell anyone that you have
stopped using the previous meaning.

I am saying that, if defined to the weight of the LSB in float.h,
the system is WRONG.
 

CBFalconer

Flash said:
You have not read up on the technique then. Simplifying a lot, as a
result you get one floating-point number which is as close as you can
get to the mathematical result, and another floating point number which
is in effect the binary places to the right of what you could get in to
the first number (i.e. it is the exact error). This means that you end
up with twice the number of bits in the result as were in either of the
operands.

Which is more bits. I never denied that doubling the digits
available could improve the precision.

....
To save you buying the book, here is one reference to it that
describes the technique. http://www.mrob.com/pub/math/f161.html
....
 

Flash Gordon

CBFalconer said:
I am saying that, if defined to the weight of the LSB in float.h,
the system is WRONG.

No, in that case YOU are WRONG. The definition of xxx_EPSILON at the
bottom of page 26 of n1256 is:

| the difference between 1 and the least value greater than 1 that is
| representable in the given floating point type, b**(1−p)
^^^^^^^^^^^^^

If you have problems with the English then read the simple equation at
the end.

The definitions of b and p (given earlier in that section) are:
b  base or radix of exponent representation (an integer > 1)
p  precision (the number of base-b digits in the significand)

I'm sure that you have been shown this definition before.
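
For what it's worth, here is a quick sketch (mine, not from the
standard) that plugs float.h's own b (FLT_RADIX) and p (DBL_MANT_DIG)
into that equation:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* b**(1-p) straight from the definition quoted above */
    double eps = pow((double)FLT_RADIX, 1.0 - DBL_MANT_DIG);
    printf("b**(1-p)    = %a\n", eps);
    printf("DBL_EPSILON = %a\n", DBL_EPSILON);
    return 0;
}

On an IEEE-754 system both lines print 0x1p-52.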

Oh, and if the maths is a problem read the words. Note the word
"representable". That means that 1.0+xxx_EPSILON is ALWAYS a
representable number, which means that it exactly corresponds to a bit
pattern, so xxx_EPSILON CANNOT be smaller than the value represented by
one bit.

Note the results of the following as well:
#include <float.h>
#include <fenv.h>
#include <math.h>
#include <stdio.h>

/* With eps == DBL_EPSILON, one + eps should be exactly the next
   representable double after one, whatever the rounding mode. */
void check(double one, double eps)
{
    double pluseps = one + eps;
    double next = nextafter(one, 2.0);
    double plushalf = one + eps / 2.0;
    printf("Rounding mode: %d\n", fegetround());
    if (next == pluseps)
        printf("Epsilon is one bit at 1.0, 1.0=%a, pluseps=%a, next=%a, "
               "plushalf=%a, DBL_EPSILON=%a\n",
               1.0, pluseps, next, plushalf, DBL_EPSILON);
    else
        puts("Oops");
}

int main(void)
{
    check(1.0, DBL_EPSILON);
    fesetround(FE_DOWNWARD);
    check(1.0, DBL_EPSILON);
    fesetround(FE_TONEAREST);
    check(1.0, DBL_EPSILON);
    fesetround(FE_TOWARDZERO);
    check(1.0, DBL_EPSILON);
    fesetround(FE_UPWARD);
    check(1.0, DBL_EPSILON);
}
Mark-Gordons-MacBook-Air:~ markg$ gcc -std=c99 -pedantic -lm -Wall -Wextra t.c
Mark-Gordons-MacBook-Air:~ markg$ ./a.out
Rounding mode: 0
Epsilon is one bit at 1.0, 1.0=0x1p+0, pluseps=0x1.0000000000001p+0, next=0x1.0000000000001p+0, plushalf=0x1p+0, DBL_EPSILON=0x1p-52
Rounding mode: 1024
Epsilon is one bit at 1.0, 1.0=0x1p+0, pluseps=0x1.0000000000001p+0, next=0x1.0000000000001p+0, plushalf=0x1p+0, DBL_EPSILON=0x1p-52
Rounding mode: 0
Epsilon is one bit at 1.0, 1.0=0x1p+0, pluseps=0x1.0000000000001p+0, next=0x1.0000000000001p+0, plushalf=0x1p+0, DBL_EPSILON=0x1p-52
Rounding mode: 3072
Epsilon is one bit at 1.0, 1.0=0x1p+0, pluseps=0x1.0000000000001p+0, next=0x1.0000000000001p+0, plushalf=0x1p+0, DBL_EPSILON=0x1p-52
Rounding mode: 2048
Epsilon is one bit at 1.0, 1.0=0x1p+0, pluseps=0x1.0000000000001p+0, next=0x1.0000000000001p+0, plushalf=0x1.0000000000001p+0, DBL_EPSILON=0x1p-52
Mark-Gordons-MacBook-Air:~ markg$

This is on an Intel platform in x86 (32-bit) mode, so you can check the
details of the floating point. Note that 1+DBL_EPSILON is not affected
by rounding mode, but smaller values are (as per the standard).
 

gwowen

I interpret "at the beginning" to mean in the 1st year if not
1st semester.  I don't imagine even top universities cover
Lebesgue measure and integration in the first year.

Oxford do, or at least they did in the early 1990s. A quick check of
their web pages suggests that even they don't anymore.

I would expect every half-decent undergraduate mathematics course to
offer Lebesgue measure and integration as an option, but not
necessarily amongst their compulsory modules.

I would too, but you'd probably be unpleasantly surprised by
how many don't.
 

Guest

CBFalconer wrote:


So you are saying the standard's definition of xxx_EPSILON is wrong?
The last time I checked, standards were allowed to define terms, and
language standards were allowed to define constants to be part of the
language. So I would say that the standard's definition of xxx_EPSILON
is correct by *definition*, and if you want to talk about something that
does not meet that definition you need to use a different term, for
which you provide an accurate definition that you stick to.

Chuck only follows the standard when it is consistent with his view
of mathematics and physics.

--
Nick Keighley

"Almost every species in the universe has an irrational fear of the
dark. But they're wrong- cos it's not irrational. It's Vashta
Nerada."
The Doctor
 

Dik T. Winter

>
> You did. See the first quote above. As I read it.

That is the title of an article published in a well-known journal. It
describes how you can extend the precision available using standard
floating-point operations. Note that in that case each extended-precision
floating-point number is represented by *two* floating-point variables.
So bits are added.
 

Flash Gordon

CBFalconer said:
Which is more bits. I never denied that doubling the digits
available could improve the precision.

No, but in response to an article where Dik referred you to a technique
for extending precision, one which RELIES on floating point numbers
being exact and not ranges, you went on about the need to add bits. So
you obviously failed to check the reference yet again and instead
assumed that it was talking rubbish. You then claimed that Dik had said
that you can
extend precision without adding bits, which was also obviously an
unfounded accusation.

So now how about dealing with the issue that this technique (which is
used) relies on floating point numbers representing exact real numbers?
....
....

I've left the link in so you don't have to hunt for a description of
this method to deal with the question.
 

user923005

 > "Dik T. Winter" wrote:
 > > (e-mail address removed) writes:
 > >> "Dik T. Winter" wrote:
 > >>
 > ... snip ....
 > >>
 > >>> Dekker, T. J. 1971, A Floating-point Technique for Extending the
 > >>> Available Precision, Numerische Mathematik 18(3), pp 224-242.
 > >>
 > >> You can't extend the precision without adding bits, or removing
 > >> the influence of other bits.
 > >
 > > Who says you can do that?  What is the relevance of that remark?
 >
 > You did.  See the first quote above.  As I read it.

That is the title of an article published in a well-known journal.  It
describes how you can extend the precision available using standard
floating-point operations.  Note that in that case each extended-precision
floating-point number is represented by *two* floating-point variables.
So bits are added.

Two popular implementations of that idea are Keith Briggs's
doubledouble {now obsolete} and the QD library found here:
http://crd.lbl.gov/~dhbailey/mpdist/
QD contains both a double-wide {~32 decimal digits} and a quad-wide
{~64 decimal digits} floating-point implementation, based on 8-byte
double-precision floating point.
Most arbitrary-precision number packages rely on FFTs and so use a
different technique.
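
To make the two-variable representation concrete, here is a rough
sketch of the kind of double-double addition those libraries build on
(my own illustration of the idea, not the actual doubledouble or QD
API; assumes IEEE round-to-nearest):

#include <stdio.h>

/* A double-double value: hi carries the leading bits, lo the exact
   remainder, giving roughly twice the precision of a plain double. */
typedef struct { double hi, lo; } ddouble;

/* Knuth's TwoSum: s + e == a + b exactly, with no ordering
   requirement on a and b. */
static void two_sum(double a, double b, double *s, double *e)
{
    double t;
    *s = a + b;
    t = *s - a;
    *e = (a - (*s - t)) + (b - t);
}

/* Add a plain double to a double-double and renormalise. */
static ddouble dd_add_d(ddouble x, double y)
{
    ddouble r;
    double e;
    two_sum(x.hi, y, &r.hi, &e);
    e += x.lo;
    two_sum(r.hi, e, &r.hi, &r.lo);
    return r;
}

int main(void)
{
    ddouble x = { 1.0, 0.0 };
    x = dd_add_d(x, 0x1p-60);            /* far below one ULP of 1.0 */
    printf("hi=%a lo=%a\n", x.hi, x.lo); /* lo keeps the small addend */
    return 0;
}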
 

Keith Thompson

Keith Thompson said:
Whoops! I meant to declare these as tinyfloat, not double:

tinyfloat x = 1.0;
tinyfloat foo = 1.05859375; // halfway between ymin and xmax
tinyfloat y = 1.125;

Ignore that. Bad day :) They were double, after all. Change
1.125 to 1.128, and apply the mean value theorem to the resulting
chi square test.
 

Keith Thompson

Keith Thompson said:
So EPSILON is your abbreviation for FLT_EPSILON, DBL_EPSILON, or
LDBL_EPSILON. And epsilon is a value that varies for different values
of x; it seems to be something like (nextafter(x, +INFINITY) - x) --
so it should really be thought of as a function of x.

Sorry, I meant that the algebraic norm of the result of nextafter()
with +INFINITY has a different measure from the generic EPSILON
under the vanishing transformation. Take ymax and divide through
by the range * i^4. That's degenerate, obviously.
Is that correct? *Please* give a straight answer to that question.

If I now understand you correctly, EPSILON and epsilon are two *very*
different things. If you had been deliberately trying to confuse the
issue, you could hardly have done a better job.

I actually believe Chuck may be right on a certain point, but I'll
have to think about this some more.
 

Richard Bos

CBFalconer said:
Search for ISO10206.

That is Pascal Extended, which, as you've been told before, is not
Pascal. Moreover, I do not care enough to look up whether either of
those languages has got a new standard since the ones I have.

Richard
 
