Cosine algo, was Re: sqrt algo


Dik T. Winter

>
> My trusty old Casio fx-7000G gives 0. Presumably it keeps some extra
> significant figures that are not shown to the user. I believe this is
> quite common.

Extra precision does not matter. The value returned should be on the
order of the precision used. But you probably have to display it in
scientific notation to see that.
 

Michael Mair

Dik said:
Extra precision does not matter. The value returned should be on the
order of the precision used. But you probably have to display it in
scientific notation to see that.

FWIW: My old TI-34 gives me 0 even in scientific notation, regardless of
how I enter (k+1/2)*pi for reasonably small integers k, as long as I
keep the input rounded to the nearest decimal fraction within precision
(which is 12 digits at 10 displayed).


Cheers
Michael
 

Joe Wright

Michael said:
FWIW: My old TI-34 gives me 0 even in scientific notation, regardless of
how I enter (k+1/2)*pi for reasonably small integers k, as long as I
keep the input rounded to the nearest decimal fraction within precision
(which is 12 digits at 10 displayed).

My C implementation gives 6.1232339957367660e-17 for cos(pi/2), which
looks pretty close to zero to me.
 

Dik T. Winter

> Dik T. Winter wrote: ....
>
> FWIW: My old TI-34 gives me 0 even in scientific notation, regardless of
> how I enter (k+1/2)*pi for reasonably small integers k, as long as I
> keep the input rounded to the nearest decimal fraction within precision
> (which is 12 digits at 10 displayed).

In that case your old TI-34 probably does a range reduction with a pi
of roughly machine precision. This is an old trick to make the cosine
function look good to the layman (or even to some experts). Your old
TI-34 is not calculating cos(x) but cos(a * x) for some a close to 1.
Given enough data points it is possible to determine quite a lot about
a bad algorithm, such as the value of pi (or 1/pi) it uses in range
reduction. I know of quite a few algorithms where the argument is
first multiplied by 2/pi (in machine precision), which is followed by
a much simplified range reduction (now amounting to taking the
fractional part).

Such functions will be quite a bit wrong, especially for large values
of the argument. That is one of the reasons I insisted on two-argument
sine and cosine functions in the Ada standard, alongside the
one-argument versions. The first argument is the same; the second
gives the length of one full cycle. So sin(x, 360) gives the sine when
x is in degrees, sin(x, 2 * pi) would give the standard sine with a
rounded value of pi, and sin(x) gives the standard sine with an
unrounded value of pi. The major reason is that the most common
expression in numerical mathematics, sin(2 * pi * x), gives bad
results, while the equivalent two-argument expression sin(x, 1.0)
gives good results.
 

Joe Wright

Joe said:
Michael Mair wrote:
[ much snippage ]
FWIW: My old TI-34 gives me 0 even in scientific notation, regardless of
how I enter (k+1/2)*pi for reasonably small integers k, as long as I
keep the input rounded to the nearest decimal fraction within precision
(which is 12 digits at 10 displayed).

My C implementation gives 6.1232339957367660e-17 for cos(pi/2), which
looks pretty close to zero to me.

It is 0.00000000000000006123233995736766 decimal. It still looks like
zero to me.

The floating-point value zero, in which all bits of the object are
zero, is largely artificial: it is usually the result of
initialization or of assigning a 0.0 value or constant. It is not
likely to be the calculated result of any of the trig functions.

As stated up-thread (I forget who) floats, if they 'round' at all, round
toward infinity, not zero.
 

Dik T. Winter

> As stated up-thread (I forget who) floats, if they 'round' at all, round
> toward infinity, not zero.

I have not seen it, but it is wrong. Floating point operations round to
the nearest representable value, which may be smaller or larger.
 

Arthur J. O'Dwyer

It is 0.00000000000000006123233995736766 decimal. It still looks
like zero to me.

Which "it"? cos(pi/2), or 6.1232339957367660e-17? The former looks like
zero to me; the latter looks different from zero.

Obviously what we need is a computer program that, given any decimal
number or ratio expressed in exact terms, will tell us the IEEE
floating-point number that most closely approximates it. Call this
function 'IEEE'. Then the whole argument boils down to:

Is IEEE(cos(IEEE(pi/2))) == IEEE(0.0) ?

where "pi/2" is a shorthand for "1.5707963267948966..." out to more
decimal places than fit in a 'double'.

I'm sure such programs exist, but I don't know of any, can't find any on
Google, and certainly don't trust my floating-point-magic skills enough to
write one. If anybody can find or write one, though, they could settle the
whole pointless debate right now. :/


[Joe Wright wrote:]
As stated up-thread (I forget who) floats, if they 'round' at all, round
toward infinity, not zero.

In general, it depends whose floating-point chip and/or algorithms
you're using. But I have heard, and believe, that the C Standard defines
floats and doubles so that only IEEE-standard implementations are
conforming --- and they round to the nearest representable value.
(Sorry, I obviously don't read c.l.c enough anymore! :)

HTH,
-Arthur
 

Keith Thompson

Arthur J. O'Dwyer said:
In general, it depends whose floating-point chip and/or algorithms
you're using. But I have heard, and believe, that the C Standard
defines floats and doubles so that only IEEE-standard implementations
are conforming --- and they round to the nearest representable value.
(Sorry, I obviously don't read c.l.c enough anymore! :)

C99 requires IEEE floating-point if and only if __STDC_IEC_559__ is
defined. Conforming C implementations can and do use other
floating-point formats (VAX, Cray, IBM mainframe, etc.).

Most new hardware uses IEEE, but the C standard doesn't require it.
 

Michael Mair

Dik said:
In that case your old TI-34 probably does a range reduction with a pi
of roughly machine precision. This is an old trick to make the cosine
function look good to the layman (or even to some experts). Your old
TI-34 is not calculating cos(x) but cos(a * x) for some a close to 1.
Given enough data points it is possible to determine quite a lot about
a bad algorithm, such as the value of pi (or 1/pi) it uses in range
reduction. I know of quite a few algorithms where the argument is
first multiplied by 2/pi (in machine precision), which is followed by
a much simplified range reduction (now amounting to taking the
fractional part).

Yep, that is what I guessed, so I tried it out with slightly larger
values. I would probably have to go quite a bit further out, but that
is beside the point. For the few "layman's" tasks I still use an
actual calculator for today, this is indeed good enough ;-)

Such functions will be quite a bit wrong, especially for large values
of the argument. That is one of the reasons I insisted on two-argument
sine and cosine functions in the Ada standard, alongside the
one-argument versions. The first argument is the same; the second
gives the length of one full cycle. So sin(x, 360) gives the sine when
x is in degrees, sin(x, 2 * pi) would give the standard sine with a
rounded value of pi, and sin(x) gives the standard sine with an
unrounded value of pi. The major reason is that the most common
expression in numerical mathematics, sin(2 * pi * x), gives bad
results, while the equivalent two-argument expression sin(x, 1.0)
gives good results.

Interesting, thanks :)

Cheers
Michael
 

pete

Dik T. Winter wrote:
I have not seen it, but it is wrong.
Floating point operations round to
the nearest representable value, which may be smaller or larger.

If the operation is addition, or an adding function like fma, then the
rounding mode is characterized by the value of FLT_ROUNDS:
-1 indeterminable
0 toward zero
1 to nearest
2 toward positive infinity
3 toward negative infinity
 

Rob Thorpe

Keith said:
C99 requires IEEE floating-point if and only if __STDC_IEC_559__ is
defined. Conforming C implementations can and do use other
floating-point formats (VAX, Cray, IBM mainframe, etc.).

Most new hardware uses IEEE, but the C standard doesn't require it.

Not quite true. Most new hardware can be persuaded to work in a
completely IEEE-compatible way, but will not always do so by default.

Cray and VAX have their own different, difficult formats.
IBM mainframes have their own format, but modern machines also support
IEEE.
x86 does not quite follow IEEE by default (the x87 unit is normally
excessively accurate, keeping extended-precision intermediates), but
it can be made to conform if you really need the less accurate
answers.
AFAICR, Alpha and UltraSPARC are slightly different from IEEE by
default unless a particular flag is set.
 

Dik T. Winter

>
> In scientific notation it still gives 0, albeit in the form
> 0.00000000E+00.

In that case the implementation of the cosine on your calculator is not
optimal.
 

P.J. Plauger

Which "it"? cos(pi/2), or 6.1232339957367660e-17? The former looks like
zero to me; the latter looks different from zero.

Obviously what we need is a computer program that, given any decimal
number or ratio expressed in exact terms, will tell us the IEEE
floating-point number that most closely approximates it. Call this
function 'IEEE'. Then the whole argument boils down to:

Is IEEE(cos(IEEE(pi/2))) == IEEE(0.0) ?

where "pi/2" is a shorthand for "1.5707963267948966..." out to more
decimal places than fit in a 'double'.

I'm sure such programs exist, but I don't know of any, can't find any on
Google, and certainly don't trust my floating-point-magic skills enough to
write one. If anybody can find or write one, though, they could settle the
whole pointless debate right now. :/

We have such a program. We use it all the time in developing and testing
our math functions. But the answer may not be as clear as you'd like.
While it's true that cos(pi/2) is zero for a sufficiently precise
representation of pi, it's *not* necessarily so for one accurate just
to machine precision. And, of course, cos(n*pi/2) is less and less
likely to be zero for larger integer values of n, given pi to just
machine precision.

We define the *sensitivity* of a function as the number of ulps
(units in the least-significant place) a function result changes
for a +/-1 ulp change in the argument. All the oscillating functions
get very sensitive around their zeros (other than for an argument
of zero). We ensure that we give the best function result taking
each argument as exact, even for cos(1e37), but that's admittedly
not likely to be useful in most applications. Our goal is simply
not to add to any loss of significance.
[Joe Wright wrote:]
As stated up-thread (I forget who) floats, if they 'round' at all, round
toward infinity, not zero.

In general, it depends whose floating-point chip and/or algorithms
you're using. But I have heard, and believe, that the C Standard defines
floats and doubles so that only IEEE-standard implementations are
conforming --- and they round to the nearest representable value.
(Sorry, I obviously don't read c.l.c enough anymore! :)

Not true. Other representations besides IEEE-754 are permitted.
In C99, Annex F and Annex G impose additional constraints on an
implementation that advertises conformance to IEEE-754 (actually
IEC-60559), but you can conform without this promise.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Arthur J. O'Dwyer

We have such a program. We use it all the time in developing and testing
our math functions. But the answer may not be as clear as you'd like.
While it's true that cos(pi/2) is zero for a sufficiently precise
representation of pi, it's *not* necessarily so for one accurate just
to machine precision. And, of course, cos(n*pi/2) is less and less
likely to be zero for larger integer values of n, given pi to just
machine precision.

Right --- that's why I introduced the IEEE() notation in that post,
so I could avoid saying nonsensical things like "cos(n*pi/2) isn't zero
for some n." All you mean is that IEEE(cos(IEEE(n*pi/2))) isn't
IEEE(0.0) for some values of n, and that's obvious. But the current
debate just asks, "Is it equal to zero /for the certain values of n*pi/2
with which Pete is testing his implementation?/"
I suppose I should have written

For which n is IEEE(cos(IEEE(n*pi/2))) == IEEE(0.0) ?

It's true that Pete is probably testing something more like

For which n is IEEE(cos(IEEE(IEEE(n)*IEEE(pi)/IEEE(2)))) == IEEE(0.0) ?

but that seems to me less interesting and less useful, for some reason.

-Arthur
 

P.J. Plauger

Right --- that's why I introduced the IEEE() notation in that post,
so I could avoid saying nonsensical things like "cos(n*pi/2) isn't zero
for some n." All you mean is that IEEE(cos(IEEE(n*pi/2))) isn't IEEE(0.0)
for some values of n, and that's obvious.

No, that's not "all I mean".
But the current debate
just asks, "Is it equal to zero /for the certain values of n*pi/2 with
which Pete is testing his implementation?/"

And I tried to say that this answer is also not obvious. In fact,
it depends on the precision and rounding mode, even for n==1.
I suppose I should have written

For which n is IEEE(cos(IEEE(n*pi/2))) == IEEE(0.0) ?

It's true that Pete is probably testing something more like

For which n is IEEE(cos(IEEE(IEEE(n)*IEEE(pi)/IEEE(2)))) == IEEE(0.0) ?

but that seems to me less interesting and less useful, for some reason.

Dunno why.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Arthur J. O'Dwyer

Arthur J. O'Dwyer said:
Obviously what we need is a computer program that, given any decimal
number or ratio expressed in exact terms, will tell us the IEEE
floating-point number that most closely approximates it. Call this
function 'IEEE'. Then the whole argument boils down to:

Is IEEE(cos(IEEE(pi/2))) == IEEE(0.0) ? [...]
We have such a program. We use it all the time in developing and testing
our math functions. But the answer may not be as clear as you'd like.
While it's true that cos(pi/2) is zero for a sufficiently precise
representation of pi, it's *not* necessarily so for one accurate just
to machine precision. And, of course, cos(n*pi/2) is less and less
likely to be zero for larger integer values of n, given pi to just
machine precision.

Right --- that's why I introduced the IEEE() notation in that post,
so I could avoid saying nonsensical things like "cos(n*pi/2) isn't zero
for some n." All you mean is that IEEE(cos(IEEE(n*pi/2))) isn't IEEE(0.0)
for some values of n, and that's obvious.

No, that's not "all I mean".
But the current debate
just asks, "Is it equal to zero /for the certain values of n*pi/2 with
which Pete is testing his implementation?/"

And I tried to say that this answer is also not obvious. In fact,
it depends on the precision and rounding mode, even for n==1.

Precision being a parameter to IEEE(), and "rounding mode" being
irrelevant, as far as I can tell --- is there any case in which a
programmer would want a cosine implementation that rounded down, or toward
positive infinity, or whatever, instead of simply taking the closest
representable value?
(Okay, rounding down would be a nice feature if 1.0 weren't exactly
representable; but I'm fairly sure it /is/ representable, even in the
weirdest implementations C allows.)

-Arthur
 

P.J. Plauger

Arthur J. O'Dwyer said:
On Tue, 18 Oct 2005, P.J. Plauger wrote:

Obviously what we need is a computer program that, given any decimal
number or ratio expressed in exact terms, will tell us the IEEE
floating-point number that most closely approximates it. Call this
function 'IEEE'. Then the whole argument boils down to:

Is IEEE(cos(IEEE(pi/2))) == IEEE(0.0) ? [...]
We have such a program. We use it all the time in developing and
testing our math functions. But the answer may not be as clear as
you'd like.
While it's true that cos(pi/2) is zero for a sufficiently precise
representation of pi, it's *not* necessarily so for one accurate just
to machine precision. And, of course, cos(n*pi/2) is less and less
likely to be zero for larger integer values of n, given pi to just
machine precision.

Right --- that's why I introduced the IEEE() notation in that post,
so I could avoid saying nonsensical things like "cos(n*pi/2) isn't zero
for some n." All you mean is that IEEE(cos(IEEE(n*pi/2))) isn't
IEEE(0.0)
for some values of n, and that's obvious.

No, that's not "all I mean".
But the current debate just asks, "Is it equal to zero /for the
certain values of n*pi/2 with which Pete is testing his
implementation?/"

And I tried to say that this answer is also not obvious. In fact,
it depends on the precision and rounding mode, even for n==1.

Precision being a parameter to IEEE(), and "rounding mode" being
irrelevant, as far as I can tell --- is there any case in which a
programmer would want a cosine implementation that rounded down, or toward
positive infinity, or whatever, instead of simply taking the closest
representable value?

'Fraid so. People who do interval arithmetic round the lower bound
down and the upper bound up. Not that I approve, but it *is* done.
(Okay, rounding down would be a nice feature if 1.0 weren't exactly
representable; but I'm fairly sure it /is/ representable, even in the
weirdest implementations C allows.)

It is, but pi isn't.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Christian Bau

"Arthur J. O'Dwyer said:
Right --- that's why I introduced the IEEE() notation in that post,
so I could avoid saying nonsensical things like "cos(n*pi/2) isn't zero
for some n." All you mean is that IEEE(cos(IEEE(n*pi/2))) isn't
IEEE(0.0) for some values of n, and that's obvious. But the current
debate just asks, "Is it equal to zero /for the certain values of n*pi/2
with which Pete is testing his implementation?/"
I suppose I should have written

For which n is IEEE(cos(IEEE(n*pi/2))) == IEEE(0.0) ?

The mathematical cosine function can only be zero when the argument is
an irrational number. I have never seen a C implementation that
allowed irrational floating-point numbers, so if x is the value of a C
double or float or long double, then cos(x) is not zero.

There would be a remote possibility that a floating-point number x is
so close to (n + 1/2) pi that the absolute value of cos(x) is less
than DBL_MIN; I am sure someone has verified that for the IEEE formats
this is not the case.
 

P.J. Plauger

The mathematical cosine function can only be zero when the argument is
an irrational number. I have never seen a C implementation that
allowed irrational floating-point numbers, so if x is the value of a C
double or float or long double, then cos(x) is not zero.

There would be a remote possibility that a floating-point number x is
so close to (n + 1/2) pi that the absolute value of cos(x) is less
than DBL_MIN; I am sure someone has verified that for the IEEE formats
this is not the case.

64-bit double:

cos(pi/2) == 0.612324e-16
DBL_EPSILON == 2.220446e-16

80-bit long double:

cosl(pi/2) == -0.250828e-19
cosl(5*pi/2) == -0.169937e-19
LDBL_EPSILON == 1.084202e-19

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
