Float comparison


CBFalconer

Keith said:
Was it the Goldberg paper he read? There was another paper (or
web page) that, I think, James Kuyper cited, having to do with
error analysis (I don't have the URL handy).

I read both - not all of the Goldberg. Nobody seems to have picked
up on my reason for rejecting the 'round to even' method, requoted
below:
------ requote from cbf ------
I was thinking about that last night, and I think I have a strong
argument for using the simpler (is the bit 1) criterion for
rounding. Consider the value:

1110.1000

Now we say it is exactly halfway between the two possible rounded
values, because there are no more 1 bits in the value. However,
that value represents a real, and the reals outnumber the rationals
by an order of infinities. Therefore there almost MUST be another
1 bit somewhere to the right, and therefore we should round up.
----- end requote ------

Incidentally, this thread has become ridiculous. I see 600
messages in it today.
 

Keith Thompson

And again, you've snipped a large portion of my article, including
some direct questions I asked you, without marking the snippage.

Please stop doing that.
And thus, when we are evaluating the fp-object-value created, there
is no difference whatsoever. If we find a difference, we are
evaluating something that is not in that fp-object, i.e. the
programming behind its existence.

Either floating-point constant, when assigned (in a C program) to a
double object, results (on my system) in the same value being stored.

That value, expressed as a real, is
0.333333333333333314829616256247390992939472198486328125 or, using the
notation I introduced earlier:
REAL(0.333333333333333314829616256247390992939472198486328125). It is
not REAL(0.3333333333333333).

[snip]

(See, I marked it.)
What I have read from
<http://docs.sun.com/source/806-3568/ncg_goldberg.html> so far does
not disagree with me in the slightest. Goldberg's paper is more
thorough than I have been so far. My writings here have been
gradually refined; his paper did the refining before publication.

Quoting from Goldberg's paper:

The term _floating-point number_ will be used to mean a real
number that can be exactly represented in the format under
discussion.

Thus, in Goldberg's terminology (and my notation),
REAL(0.333333333333333314829616256247390992939472198486328125) is a
floating-point number, but REAL(0.3333333333333333) is not.

When Goldberg talks about relative error, he's referring to the
difference between the real value represented by a floating-point
value vs. the exact real mathematical result of a computation.

Please cite where Goldberg says that a floating-point number
represents a range of real numbers.
 

Keith Thompson

CBFalconer said:
I have been using lower case to define real arithmetic values.
EPSILON defines a defined value, as in the C standard. I think I
have been consistent in this action.

You most certainly have not.

In this thread, you have written:

Think about WHY EPSILON changes at the power of two.

and

The practical EPSILON, which causes a minimum visible difference
in fp-object values, changes when x becomes an exact power of two.

Sheesh!
 

Keith Thompson

CBFalconer said:
It refers to whichever, if any, is applicable to the fp-system
under discussion.

That's not a yes or no answer. Please try again.

[snip]
Compute that x*(1+EPSILON), using the appropriate float.h value of
EPSILON, for various x's from 1.0 to something less than 2.0. Note
that you get the same result from x+EPSILON in that range. (Here
range means from x=1.0 to less than 2.0.)

That EPSILON should correspond to one half the weight of the least
significant bit in the significand of 1.0.

No, it doesn't. DBL_EPSILON is the value of the LSB of the
significand of 1.0, not half the value.

In the representation of 1.0, the LSB is 0. Changing the LSB to 1
changes the value from 1.0 to the next representable number above 1.0.
That is, by definition, 1.0+DBL_EPSILON.
 

Keith Thompson

CBFalconer said:
That is the representable form of the sum, not the input.

What sum? What input?

Perhaps you've misunderstood the standard's definition of DBL_EPSILON.
It's

the difference between
1
and
the least value greater than 1 that is representable in the
given floating point type

In the example system, the latter is 1.0+1.0/8.0, which means that
the corresponding EPSILON is 1.0/8.0 by definition. No rounding
occurs in the expression 1.0+1.0/8.0; all operands, and the final
result, are exactly representable.
 

Flash Gordon

CBFalconer said:
I may well. I am at least willing to defend things.

The reasoning is given in the paper
http://docs.sun.com/source/806-3568/ncg_goldberg.html
I do not see a need for anyone to go into further detail than the
justification given in this paper (which you have already been pointed
at) when it does such a good job.
I don't know. You are probably correct.

OK, so in future please do NOT call your preferred rounding the common
(or whatever it was) rounding, and do NOT base your arguments on the
assumption that it is the rounding method used.
That doesn't mean the
usual is right.

No, but when experts have decided on what they think is probably the
best way, you need a damn good argument for why it is wrong. Of course,
we also have the "standard" work often referred to here for which you
have the URL which gives reasons for this being the preferred method.
If it was, there wouldn't be any choice of
methods, would there?

There are unusual situations in which something else is better and this
is acknowledged by the C standard and the floating point standards.
 

Phil Carmody

Why clearly?

0 ^^ 0 is clearly 1 where the zeros are cardinals, since there
is exactly one mapping from the empty set to itself. But
writing "0." strongly suggests you are considering some other
variety of numbers.

Have you missed the bit about FPs approximating reals in this thread?

Phil
 

Phil Carmody

Ben Bacarisse said:
In what way did I say it might?

Because there's no way that software can put "significand" into
the C standard unless it modifies the file. Is you a bit fick?

Phil
 

Flash Gordon

William said:
What property exactly is Richard trying to describe, then?
The phrase "vanishingly small" is pretty ambiguous, and I
interpret it to mean "having measure zero". If you
can attribute some other meaning to the phrase, then
please do so.

Most people who are not mathematicians use "vanishingly small" to mean
an extremely small number, one that in practical terms (rather than
mathematical ones) is as near to zero as makes no odds for the purposes
under discussion when the phrase is used. For example, the percentage
of half-Asian, half-black women running businesses in the UK is
vanishingly small. This does not mean there are none (mathematically,
the percentage is either zero or strictly positive). When used by
non-mathematicians who know enough maths to talk about the relative
sizes of the sets of different types of numbers (integers, algebraic
numbers, etc.), it means, naively, that one set is so much larger than
the other that if you randomly pick a number from the larger set there
is a very small chance it is in the smaller. This may or may not
correspond to "having measure zero", and if it does it is by sheer
chance, because you cannot expect them to know what that term means.
 

Phil Carmody

Too early, blear still in eyes...

Why clearly?

0 ^^ 0 is clearly 1 where the zeros are cardinals,

Yes, yes, yes - 0^^0 is clearly 0.
since there
is exactly one mapping from the empty set to itself. But
writing "0." strongly suggests you are considering some other
variety of numbers.

Yes, yes, yes - I'm considering the reals.

Get it now? If you think I'm confused by placing the two next
to each other you've completely misunderstood _why_ I placed
them next to each other.

Phil
 

Antoninus Twink

Your explanation is a good one. How do I find out whether it's
correct, without having to wait until I'm an expert mathematician
(which, realistically, isn't ever going to happen)?

Why are you so paranoid, Heathfield? Why do you think everyone is out to
tell you lies? Surely as a self-confessed Christian fundamentalist
you're used to taking *plenty* of things on trust.

You've now had three detailed explanations of what Lebesgue measure is
and what it seeks to do, from three different viewpoints. Each of the
explanations has been substantially correct. Why are you so keen to
refuse new knowledge? I'd really love to be able to understand the
mental processes of a self-described fundamentalist - this attitude of
denial is just bizarre.
 

Flash Gordon

CBFalconer said:
I read both - not all of the Goldberg. Nobody seems to have picked
up on my reason for rejecting the 'round to even' method, requoted
below:

OK, I'll pick up on it. You have not addressed all the maths and proofs
in the paper. They explain why the proposed (and these days normally
used) method is better than the alternatives.

A simpler reason why your argument is not valid: given that the number
was produced either by approximating a real value to store it as a
floating-point number, or by calculations based on such numbers, the
number is just as likely to be fractionally below the half-way point
(i.e. the last bit, or in extreme cases every bit, you have calculated
is wrong) as to be slightly larger (i.e. that there is another 1 bit
somewhere to the right).

Oh, and your rounding method was out of date for computing at least as
far back as 1906, and probably earlier; see
http://digital.library.cornell.edu/...;cc=math;view=toc;subview=short;idno=05170001
For a simpler (and more plain-English) argument, read the Wikipedia
article, which seems reasonable to me on a first read:
http://en.wikipedia.org/wiki/Rounding

Also note that your favourite language, Pascal, uses round-to-even in
its Round function.
Incidentally, this thread has become ridiculous. I see 600
messages in it today.

Yes, and strangely enough there does not seem to be anyone else who
agrees with your position. Not even the posters who have demonstrated
far more knowledge of mathematics (at least one of whom I have reason to
suspect has written formal papers on mathematics) than the main
protagonists in this thread.
 

Flash Gordon

CBFalconer said:
No. The range of z = x*y depends on the fp-object value of z
alone. This is not an error measure. It is a minimum error
measure.

As previously demonstrated, the minimum error measure is zero, just as
it is for the integer types.
 

Flash Gordon

CBFalconer said:
Why not? I presented the reasons. You cited somebody as God.

You cite yourself in such a manner by talking about the floating point
systems you have implemented. Surely the ubiquity of the IEEE system,
and Kahan having a PhD (Maths, University of Toronto, 1958) and being
Professor Emeritus of Mathematics, and of E.E. & Computer Science,
suggests he might know just a *little* more about the subject than you,
and that his works should be taken seriously until proved to be in
error?

Oh, and his lecture notes on IEEE 754 talk about representable numbers,
meaning the exactly representable numbers, and do not include the word
"range" anywhere in the paper. Said notes are available at
http://www.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF
To address a different part of the thread, the notes explain why in IEEE
754 implementing nextafter is extremely easy.
These notes also explain simply some reasons for the default rounding mode.
 

Ben Bacarisse

CBFalconer said:
... Nobody seems to have picked
up on my reason for rejecting the 'round to even' method, requoted
below:

I did.

I gave what I thought was a reasonable response to it.
Incidentally, this thread has become ridiculous. I see 600
messages in it today.

I agree, though I am not sure it is the size alone that makes it so.

Is there any reasonable chance you will be able to give the ranges of
the five example numbers for your simple 3-bit system?
 

Ben Bacarisse

Phil Carmody said:
Because there's no way that software can put "significand" into
the C standard unless it modifies the file. Is you a bit fick?

We understand each other perfectly.
 

Kenny McCormack

Richard said:
Girls, it's quite clear to anyone with half a brain that the two of
you are unable to communicate. Rather than identifying areas of
conflict and coming to mutual agreements, you constantly branch off,
leaving the hot thread, only to return later when it's cold and
incomprehensible.

You will never agree and most people don't have a CLUE about your
individual stances at this time.

Give it up. Or see a doctor/shrink.

Or take it on the road and start charging admission...

Remember, the best use of psychosis is to turn it into paid
entertainment. Happens all the time...
 

Dik T. Winter

> "Dik T. Winter" wrote: ....
>
> Why not? I presented the reasons. You cited somebody as God.

Your reason was based only on the misguided idea that a floating-point
number represents a range rather than a specific (rational) number.
There are very good reasons *not* to do that. As an example, I posted
some time ago an article where I showed some basic building blocks used
to double the precision of floating-point arithmetic, and those
building blocks require that you assume a precise value in a
floating-point variable, not some range or other. For a complete
article about it see:

Dekker, T. J. 1971, A Floating-point Technique for Extending the Available
Precision, Numerische Mathematik 18(3), pp 224-242.

For reasons behind the IEEE arithmetic read the article by Goldberg (to
which links have been posted). Kahan has also a few articles written about
it.

And finally: assuming that the stored floating-point values represent
ranges makes error analysis very difficult. For a reasonably complete
course on error analysis in algebra see:

Wilkinson, J. H. The Algebraic Eigenvalue Problem, reprinted in 1988

where the analysis is developed assuming storing exact rational numbers in
floating-point variables.
 

Richard Bos

Flash Gordon said:
Also note that your favourite language, Pascal, uses round-to-even in
its Round function.

Since when? BS6192:1982 specified round towards zero.

Richard
 
