Cosine algo, was Re: sqrt algo


pete

pete wrote:
.. and now for a float sqrt algorithm:

I just wrote a cosine function called c_os, for no special reason.
It calls no functions that I haven't also written.

On my machine, it seems to work better than
my MS compiler's standard library cos function.
They don't get the zeros right. I do.

c_os(-123 * pi / 6) is 0.000000e+000
c_os(-129 * pi / 6) is 0.000000e+000

cos(-123 * pi / 6) is 7.839514e-015
cos(-129 * pi / 6) is -4.409261e-015


/* BEGIN c_os.c output */

const double pi = 4 * atan(1);

c_os(-120 * pi / 6) is 1.000000e+000
c_os(-121 * pi / 6) is 8.660254e-001
c_os(-122 * pi / 6) is 5.000000e-001
c_os(-123 * pi / 6) is 0.000000e+000
c_os(-124 * pi / 6) is -5.000000e-001
c_os(-125 * pi / 6) is -8.660254e-001
c_os(-126 * pi / 6) is -1.000000e+000
c_os(-127 * pi / 6) is -8.660254e-001
c_os(-128 * pi / 6) is -5.000000e-001
c_os(-129 * pi / 6) is 0.000000e+000
c_os(-130 * pi / 6) is 5.000000e-001
c_os(-131 * pi / 6) is 8.660254e-001

cos(-120 * pi / 6) is 1.000000e+000
cos(-121 * pi / 6) is 8.660254e-001
cos(-122 * pi / 6) is 5.000000e-001
cos(-123 * pi / 6) is 7.839514e-015
cos(-124 * pi / 6) is -5.000000e-001
cos(-125 * pi / 6) is -8.660254e-001
cos(-126 * pi / 6) is -1.000000e+000
cos(-127 * pi / 6) is -8.660254e-001
cos(-128 * pi / 6) is -5.000000e-001
cos(-129 * pi / 6) is -4.409261e-015
cos(-130 * pi / 6) is 5.000000e-001
cos(-131 * pi / 6) is 8.660254e-001

/* END c_os.c output */

/* BEGIN c_os.c */

#include <stdio.h>
#include <math.h>
#include <float.h>
#include <errno.h>

double c_os(double x);
double p_i(void);
double sq_rt(double x);

int main(void)
{
    const double pi = 4 * atan(1);

    puts("/* BEGIN c_os.c output */\n");
    puts("const double pi = 4 * atan(1);\n");
    printf("c_os(-120 * pi / 6) is %e\n", c_os(-120 * pi / 6));
    printf("c_os(-121 * pi / 6) is %e\n", c_os(-121 * pi / 6));
    printf("c_os(-122 * pi / 6) is %e\n", c_os(-122 * pi / 6));
    printf("c_os(-123 * pi / 6) is %e\n", c_os(-123 * pi / 6));
    printf("c_os(-124 * pi / 6) is %e\n", c_os(-124 * pi / 6));
    printf("c_os(-125 * pi / 6) is %e\n", c_os(-125 * pi / 6));
    printf("c_os(-126 * pi / 6) is %e\n", c_os(-126 * pi / 6));
    printf("c_os(-127 * pi / 6) is %e\n", c_os(-127 * pi / 6));
    printf("c_os(-128 * pi / 6) is %e\n", c_os(-128 * pi / 6));
    printf("c_os(-129 * pi / 6) is %e\n", c_os(-129 * pi / 6));
    printf("c_os(-130 * pi / 6) is %e\n", c_os(-130 * pi / 6));
    printf("c_os(-131 * pi / 6) is %e\n", c_os(-131 * pi / 6));
    putchar('\n');
    printf("cos(-120 * pi / 6) is %e\n", cos(-120 * pi / 6));
    printf("cos(-121 * pi / 6) is %e\n", cos(-121 * pi / 6));
    printf("cos(-122 * pi / 6) is %e\n", cos(-122 * pi / 6));
    printf("cos(-123 * pi / 6) is %e\n", cos(-123 * pi / 6));
    printf("cos(-124 * pi / 6) is %e\n", cos(-124 * pi / 6));
    printf("cos(-125 * pi / 6) is %e\n", cos(-125 * pi / 6));
    printf("cos(-126 * pi / 6) is %e\n", cos(-126 * pi / 6));
    printf("cos(-127 * pi / 6) is %e\n", cos(-127 * pi / 6));
    printf("cos(-128 * pi / 6) is %e\n", cos(-128 * pi / 6));
    printf("cos(-129 * pi / 6) is %e\n", cos(-129 * pi / 6));
    printf("cos(-130 * pi / 6) is %e\n", cos(-130 * pi / 6));
    printf("cos(-131 * pi / 6) is %e\n", cos(-131 * pi / 6));
    puts("\n/* END c_os.c output */");
    return 0;
}

double c_os(double x)
{
    unsigned n, negative, sine;
    double a, b, c;
    static double pi, two_pi, four_pi, half_pi, quarter_pi;

    if (x >= -DBL_MAX && DBL_MAX >= x) {
        if (1 > pi) {
            pi = p_i();
            two_pi = 2 * pi;
            four_pi = 4 * pi;
            half_pi = pi / 2;
            quarter_pi = pi / 4;
        }
        if (0 > x) {
            x = -x;
        }
        while (x > two_pi) {
            b = x / 2;
            a = two_pi;
            while (b > a) {
                a *= 2;
            }
            x -= a;
        }
        if (x > pi) {
            x = two_pi - x;
        }
        if (x > half_pi) {
            x = pi - x;
            negative = 1;
        } else {
            negative = 0;
        }
        if (x > quarter_pi) {
            x = half_pi - x;
            sine = 1;
        } else {
            sine = 0;
        }
        c = x * x;
        x = n = 0;
        a = 1;
        do {
            b = a;
            a *= c / ++n;
            a /= ++n;
            b -= a;
            a *= c / ++n;
            a /= ++n;
            x += b;
        } while (b > DBL_EPSILON / 2);
        if (sine) {
            x = sq_rt(1 - x * x);
        }
        if (negative) {
            x = -x;
        }
    } else {
        x = -HUGE_VAL;
        errno = EDOM;
    }
    return x;
}

double p_i(void)
{
    unsigned n;
    double p, a, b;

    p = 0;
    a = 3;
    n = 1;
    do {
        a /= 9;
        b = a / n;
        a /= 9;
        n += 2;
        b -= a / n;
        n += 2;
        p += b;
    } while (b > DBL_EPSILON / 4);
    a = 2;
    n = 1;
    do {
        a /= 4;
        b = a / n;
        a /= 4;
        n += 2;
        b -= a / n;
        n += 2;
        p += b;
    } while (b > DBL_EPSILON / 2);
    return 4 * p;
}

double sq_rt(double x)
{
    int n;
    double a, b;

    if (DBL_MAX >= x) {
        if (x > 0) {
            for (n = 0; x > 2; x /= 4) {
                ++n;
            }
            while (0.5 > x) {
                --n;
                x *= 4;
            }
            a = x;
            b = (1 + x) / 2;
            do {
                x = b;
                b = (a / x + x) / 2;
            } while (x > b);
            while (n > 0) {
                x *= 2;
                --n;
            }
            while (0 > n) {
                x /= 2;
                ++n;
            }
        } else {
            if (0 > x) {
                errno = EDOM;
                x = HUGE_VAL;
            }
        }
    } else {
        errno = ERANGE;
    }
    return x;
}

/* END c_os.c */
 

Dik T. Winter

> pete wrote: ....
> They don't get the zeros right. I do.
>
> c_os(-123 * pi / 6) is 0.000000e+000
> c_os(-129 * pi / 6) is 0.000000e+000
>
> cos(-123 * pi / 6) is 7.839514e-015
> cos(-129 * pi / 6) is -4.409261e-015

The values you deliver are correct if and only if "-123 * pi / 6" is
exact, which it is not. There is a rounding error, so what your function
actually receives is the exact value plus some small error.
The cosine of that is approximately that small error.
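A minimal sketch of that point, assuming long double carries more
precision than double on the implementation in use (it does not with
MSVC, where the estimate below would print as zero):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 4 * atan(1);
    double x = -123 * pi / 6;                       /* what c_os and cos actually receive */
    long double closer = -123 * (4 * atanl(1)) / 6; /* the same expression with more bits */
    long double err = x - closer;                   /* rough size of the accumulated rounding */

    printf("estimated argument error : %Le\n", err);
    printf("cos(x)                   : %e\n", cos(x));  /* comparable magnitude */
    return 0;
}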
 

Mark McIntyre

pete wrote:
I just wrote a cosine function called c_os, for no special reason.
It calls no functions that I haven't also written.

On my machine, it seems to work better than
my MS compiler's standard library cos function.
They don't get the zeros right. I do.

It's equally possible that you're getting the cos of a number close to
n pi/2 wrong.... :)

Seriously though, both answers are within the possible accuracy of a
double. One might be mathematically better than the other, but neither
is more right than the other in computing terms.

It's even possible (I've not read your algo) that you have
special-case handling.
 

Christian Bau

pete said:
I just wrote a cosine function called c_os, for no special reason.
It calls no functions that I haven't also written.

On my machine, it seems to work better than
my MS compiler's standard library cos function.
They don't get the zeros right. I do.

c_os(-123 * pi / 6) is 0.000000e+000
c_os(-129 * pi / 6) is 0.000000e+000

cos(-123 * pi / 6) is 7.839514e-015
cos(-129 * pi / 6) is -4.409261e-015

Well, they might get it "righter" than you do.

Whatever value "pi" is in your C program, it is _not_ the mathematical
constant Pi, but it is equal to Pi + epsilon for some very small
epsilon. Pi is an irrational number, and floating point numbers in your
C implementation are most likely all rational numbers.

When you calculate (-129 * pi / 6), that error epsilon is multiplied by
-129/6 = -21.5, so your argument is off by -21.5 epsilon from a zero of
the cosine function (plus whatever rounding errors the multiplication
and division added).

So calculating cos (-129 * pi / 6) using a _good_ implementation is very
unlikely to produce a value of zero.
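The size of that epsilon can be estimated with a couple of lines,
assuming long double carries more precision than double; the long
decimal constant below is just Pi written out to more digits than a
double can hold:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 4 * atan(1);
    const long double Pi = 3.141592653589793238462643383279502884L;  /* reference value */
    long double eps = (long double)pi - Pi;   /* representation error of the double pi */

    printf("pi - Pi     : %Le\n", eps);            /* around 1.2e-016 in magnitude */
    printf("-21.5 * eps : %Le\n", -21.5L * eps);   /* the -21.5 epsilon offset described above */
    return 0;
}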
 

pete

Christian said:
Well, they might get it "righter" than you do.

They might, but ...
I subtracted the theoretical value from the calculated value,
in the cases where the values were rational numbers
which could most likely be represented exactly in floating point,
like 1.0 and 0.5.

c_os(-120 * pi / 6) - 1.0 is 0.000000e+000
c_os(-122 * pi / 6) - 0.5 is 1.221245e-015
c_os(-123 * pi / 6) + 0.0 is 0.000000e+000
c_os(-124 * pi / 6) + 0.5 is 1.998401e-015
c_os(-126 * pi / 6) + 1.0 is 0.000000e+000
c_os(-128 * pi / 6) + 0.5 is -4.329870e-015
c_os(-129 * pi / 6) + 0.0 is 0.000000e+000
c_os(-130 * pi / 6) - 0.5 is 1.221245e-015

cos(-120 * pi / 6) - 1.0 is 0.000000e+000
cos(-122 * pi / 6) - 0.5 is 3.182025e-015
cos(-123 * pi / 6) + 0.0 is 7.839514e-015
cos(-124 * pi / 6) + 0.5 is 4.242944e-015
cos(-126 * pi / 6) + 1.0 is 0.000000e+000
cos(-128 * pi / 6) + 0.5 is -6.364809e-015
cos(-129 * pi / 6) + 0.0 is -4.409261e-015
cos(-130 * pi / 6) - 0.5 is -1.272257e-015

In all cases, c_os had the smaller or same difference.

I tried it with a different value for pi,
pi = 3.1415926535897932384626433832795028841971693993751;
and got the same results.

I think the c_os function is OK.
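One way a table like the one above can be produced (a sketch that
assumes it is compiled and linked together with the c_os.c posted
earlier):

#include <math.h>
#include <stdio.h>

double c_os(double x);   /* from c_os.c */

int main(void)
{
    const double pi = 4 * atan(1);
    static const struct {
        int k;
        double ref;   /* exactly representable theoretical value */
    } t[] = {
        { -120,  1.0 }, { -122,  0.5 }, { -123,  0.0 }, { -124, -0.5 },
        { -126, -1.0 }, { -128, -0.5 }, { -129,  0.0 }, { -130,  0.5 },
    };
    size_t i;

    for (i = 0; i != sizeof t / sizeof *t; ++i) {
        double x = t[i].k * pi / 6;

        printf("c_os(%d * pi / 6) - (%+.1f) is %e\n",
               t[i].k, t[i].ref, c_os(x) - t[i].ref);
        printf("cos (%d * pi / 6) - (%+.1f) is %e\n",
               t[i].k, t[i].ref, cos(x) - t[i].ref);
    }
    return 0;
}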
 

Christian Bau

pete said:
They might, but ...
I subtracted the theoretical value from the calculated value,
in the cases where the values were rational numbers
which could most likely be represented exactly in floating point,
like 1.0 and 0.5.

c_os(-120 * pi / 6) - 1.0 is 0.000000e+000
c_os(-122 * pi / 6) - 0.5 is 1.221245e-015
c_os(-123 * pi / 6) + 0.0 is 0.000000e+000
c_os(-124 * pi / 6) + 0.5 is 1.998401e-015
c_os(-126 * pi / 6) + 1.0 is 0.000000e+000
c_os(-128 * pi / 6) + 0.5 is -4.329870e-015
c_os(-129 * pi / 6) + 0.0 is 0.000000e+000
c_os(-130 * pi / 6) - 0.5 is 1.221245e-015

cos(-120 * pi / 6) - 1.0 is 0.000000e+000
cos(-122 * pi / 6) - 0.5 is 3.182025e-015
cos(-123 * pi / 6) + 0.0 is 7.839514e-015
cos(-124 * pi / 6) + 0.5 is 4.242944e-015
cos(-126 * pi / 6) + 1.0 is 0.000000e+000
cos(-128 * pi / 6) + 0.5 is -6.364809e-015
cos(-129 * pi / 6) + 0.0 is -4.409261e-015
cos(-130 * pi / 6) - 0.5 is -1.272257e-015

In all cases, c_os had the smaller or same difference.

Seems you didn't get the point.

Whatever value your variable "pi" has, it is not the mathematical
constant Pi. Therefore, cos ((2k + 1/2) pi) is not _supposed_ to be
zero. The larger you choose k, the further away from zero it _should_
be.

Your c_os function does its argument reduction with an approximation to
the mathematical Pi rounded to double, most likely with 53 bits of
mantissa. If the library you used uses the built-in FCOS instruction of
the P4 chip, that uses an approximation to Pi with 66 bits of mantissa.
It will _correctly_ give non-zero results when you try to calculate cos
(-129 * pi / 6) using your incorrect approximation pi.
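A rough sketch of that idea for moderate arguments (not pete's code,
and not what the FPU does internally): keep pi/2 as two doubles, a high
part whose low bits are zeroed so that k * hi is exact for small k,
plus a low part that restores the bits a single double cannot hold.
The tail constant 6.123233995736766e-17 is approximately Pi/2 minus its
nearest double:

#include <math.h>
#include <stdio.h>

static double reduce_half_pi(double x)
{
    double pio2 = 2 * atan(1.0);                        /* pi/2 rounded to double           */
    double hi   = ldexp(floor(ldexp(pio2, 27)), -27);   /* top ~28 bits of pi/2             */
    double lo   = (pio2 - hi) + 6.123233995736766e-17;  /* rest of pi/2, plus the extra bits */
    double k    = floor(x / pio2 + 0.5);                /* nearest multiple of pi/2         */

    return (x - k * hi) - k * lo;   /* two-step subtraction keeps the extra precision */
}

int main(void)
{
    const double pi = 4 * atan(1);
    double x = -129 * pi / 6;

    /* how far x really is from a zero of the cosine, and what cos() reports */
    printf("reduced against a wider pi/2 : %e\n", reduce_half_pi(x));
    printf("cos(x)                       : %e\n", cos(x));
    return 0;
}

Real library reductions split the constant into more pieces and handle
huge arguments separately; this is only the first step of that idea.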
 

pete

Christian said:
Seems you didn't get the point.

I didn't.
I've never seen a pocket calculator that didn't yield 0 exactly
as the cosine of the product of an odd integer
and the calculator's approximation of pi / 2.

Both cos and c_os hit 1.0 and -1.0 exactly,
for cos(-120 * pi / 6) and cos(-126 * pi / 6).
That makes me think that c_os and cos are normalizing
large x the same way as each other on my system.

c_os normalizes x down into the range between 0 and pi / 4.
I assume that most implementations of cos
normalize x down to at least within 2 * pi.
I think that if c_os's approximation of pi
matches the implementation's approximation of pi,
then I shouldn't see any difference in the normalized value for x.
The point isn't that the value of x is exact
(I know it usually isn't),
but rather that it is possible for cos and c_os to see large x
the same way.
 

pete

Dik said:
The values you deliver are correct if and only if "-123 * pi / 6" is
exact, which it is not.

Couldn't 0.0 also be delivered if the true cosine
of the actual argument was positive
and also closer to 0.0 than to DBL_MIN ?
 

pete

Mark said:
It's equally possible that you're getting the cos of a number close to
n pi/2 wrong.... :)
It's even possible (I've not read your algo) that you have
special-case handling.

There's just a smidgeon of special case handling in sq_rt.

In these cases:

c_os(-123 * pi / 6) + 0.0 is 0.000000e+000
c_os(-129 * pi / 6) + 0.0 is 0.000000e+000

cos(-123 * pi / 6) + 0.0 is 7.839514e-015
cos(-129 * pi / 6) + 0.0 is -4.409261e-015

For values of x which are equal to the product
of an odd integer and c_os's approximation of pi / 2,
the c_os function normalizes x all the way down to 0.0
and calculates the sine of 0.0 from the cosine.
c_os then reports the sine of 0.0 as the cosine of pi / 2.
The special case is that this involves using sq_rt
to find the root of zero. sq_rt(0) is a special case.

When I was writing c_os, it didn't originally
normalize x any lower than pi / 2,
but I found that I was having accuracy problems all around
the vicinity of pi / 2.

The infinite series that c_os approximates
converges rapidly for small x,
but not so rapidly when x is greater than 1.

The two generally recognized sources of error
when summing a series are:
1. the error in each term
2. the error from only summing a finite number of terms.
A more rapidly converging series reduces both errors.
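As a rough illustration, counting the terms of the standard cosine
series needed before the next term drops below DBL_EPSILON / 2, for a
few sizes of x (this uses the textbook alternating form of the series,
not the pair-summed loop in c_os):

#include <float.h>
#include <math.h>
#include <stdio.h>

static unsigned terms_needed(double x)
{
    double term = 1, c = x * x;
    unsigned n = 0, count = 1;

    while (fabs(term) > DBL_EPSILON / 2) {
        term *= -c / ((n + 1) * (n + 2));   /* next term: multiply by -x*x/((n+1)(n+2)) */
        n += 2;
        ++count;
    }
    return count;
}

int main(void)
{
    printf("x = pi/8 : %u terms\n", terms_needed(atan(1) / 2));
    printf("x = pi/4 : %u terms\n", terms_needed(atan(1)));
    printf("x = 2    : %u terms\n", terms_needed(2));
    printf("x = 6    : %u terms\n", terms_needed(6));
    return 0;
}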

I was wondering whether my implementation's cos function
doesn't normalize x when x is about pi / 2.

c_os agrees with cos on my system for some input.

c_os(-120 * pi / 6) - 1.0 is 0.000000e+000
c_os(-126 * pi / 6) + 1.0 is 0.000000e+000

cos(-120 * pi / 6) - 1.0 is 0.000000e+000
cos(-126 * pi / 6) + 1.0 is 0.000000e+000

I suspect that it's because the series-summing do loop
converges after just one iteration when x == 0.0,
and also because c_os and cos are using
effectively the same value for pi.
 

Christian Bau

pete said:
I didn't.
I've never seen a pocket calculator that didn't yield 0 exactly
as the cosine of the product of an odd integer
and the calculator's approximation of pi / 2.

Both cos and c_os hit 1.0 and -1.0 exactly,
for cos(-120 * pi / 6) and cos(-120 * pi / 6).
That makes me think that c_os and cos are normalizing
large x the same way as each other on my system.

The values 1.0 and -1.0 are much harder to miss than 0.0.

For x = 2*k*Pi + eps, cos (x) is about 1 - eps^2 / 2. With a typical 53-bit
mantissa for double numbers, you should get a result of 1.0 as long as
eps < 2^-27. That's quite a large number :)
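A quick check of that bound, assuming a cos of reasonable quality;
ldexp(1.0, -28) is 2^-28, comfortably below 2^-27:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double eps = ldexp(1.0, -28);   /* 2^-28 */

    printf("1 - eps * eps / 2 = %.17g\n", 1 - eps * eps / 2);  /* rounds to exactly 1 */
    printf("cos(eps)          = %.17g\n", cos(eps));           /* should also be 1    */
    return 0;
}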
 

pete

Christian Bau wrote:
The values 1.0 and -1.0 are much harder to miss than 0.0.

I think I'm dealing with that correctly by returning
sq_rt(1 - c_os(0) * c_os(0)) as the value for c_os(pi / 2).

Christian Bau also wrote:
For x = 2*k*Pi + eps, cos (x) is about 1 - eps^2 / 2. With a typical 53-bit
mantissa for double numbers, you should get a result of 1.0 as long as
eps < 2^-27. That's quite a large number :)

Thank you.
 

Old Wolf

pete said:
I've never seen a pocket calculator that didn't yield 0
exactly as the cosine of the product of an odd integer
and the calculator's approximation of pi / 2.

I think this is a QoI issue. ISTR a post by P.J. Plauger
one time, where he discussed the tradeoff between speed
and accuracy in the [-2pi, 2pi] range, and accuracy for
large multiples of 2pi. In fact accuracy for large multiples
of 2pi is a complicated problem, as Christian Bau pointed out.
He seemed to get user complaints whichever way he went in
his C library, so he made two different versions of the
functions depending on what the user was going to use them for.

Obviously the CPU manufacturers have to make a similar
QoI decision when implementing their hardware trig functions.
 

Willem

pete wrote:
) Christian Bau wrote:
)
)> The values 1.0 and -1.0 are much harder to miss than 0.0.
)
) I think I'm dealing with that correctly by returning
) sq_rt(1 - c_os(0) * c_os(0)) as the value for c_os(pi / 2).

You're really not getting it, are you ?

The point is that a value close to 1 is much more likely to be
rounded to exactly 1, whereas a value close to 0 is more likely to
*not* be rounded to exactly 0 because that's how floating point works.


SaSW, Willem
 

Dik T. Winter

>
> Couldn't 0.0 also be delivered if the true cosine
> of the actual argument was positive
> and also closer to 0.0 than to DBL_MIN ?

It could, but that is not the case. Your c_os function accepts an argument
of type double. In IEEE that means a mantissa of 53 bits. The relative
error in that argument is about 2^(-53) (^ is exponentiation). Since the
argument itself is much larger than 1, the absolute error is certainly
larger than that value. Reduction with a true pi would ask for the cosine
of pi/2 + error, and that is approximately equal in magnitude to the
error, which is much larger than DBL_MIN.

In general, in double precision there is no representable argument to
the function for which the true cosine of the actual argument is closer
to 0.0 than to DBL_MIN.
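The sizes involved can be put next to each other in a couple of lines;
half an ulp of the argument is roughly the smallest the argument error
can plausibly be:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 4 * atan(1);
    double x = -129 * pi / 6;
    double half_ulp = fabs(x) * DBL_EPSILON / 2;   /* roughly half an ulp of x */

    printf("half an ulp of x : %e\n", half_ulp);   /* a few times 1e-15  */
    printf("DBL_MIN          : %e\n", DBL_MIN);    /* around 2.2e-308    */
    return 0;
}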
 

Dik T. Winter

>
> I think I'm dealing with that correctly by returning
> sq_rt(1 - c_os(0) * c_os(0)) as the value for c_os(pi / 2).

No, you are not. Try sq_rt((1 - c_os(0)) * (1 + c_os(0))) which gives
a much better approximation... The reason that (1 - c)*(1 + c) is
much better than (1 - c * c) is easily shown. Consider decimal
arithmetic with numbers of 5 significant digits. Assume c = 0.99999.
In that arithmetic c*c == 1, so 1 - c*c == 0. On the other hand,
1 - c == 0.00001 and 1 + c == 2.00000, and so
(1 - c) * (1 + c) == 0.00002. Much closer to the actual value which is
0.0000199999.
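The same effect shows up in double precision, with c playing the role
of the 0.99999 above:

#include <stdio.h>

int main(void)
{
    double c = 1 - 1e-12;

    printf("1 - c*c           = %.17e\n", 1 - c * c);
    printf("(1 - c) * (1 + c) = %.17e\n", (1 - c) * (1 + c));
    /* the exact value is about 1.999999999999e-12; the factored
       form keeps many more correct digits than 1 - c*c does */
    return 0;
}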
 

Dik T. Winter

> I didn't.
> I've never seen a pocket calculator that didn't yield 0 exactly
> as the cosine of the product of an odd integer
> and the calculator's approximation of pi / 2.

Any pocket calculator worth the money will return a non-zero value,
unless it uses symbolic calculations.
And apparently you have never seen the older HP pocket calculators.

When you use pi to the calculator's precision, cos(pi/2) should give
a result of roughly the size of the calculator's precision. If it does
not, it is wrong.
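The double-precision analogue of that, assuming a library cos of
reasonable quality:

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 4 * atan(1);

    printf("cos(pi / 2) = %e\n", cos(pi / 2));   /* about 6.1e-017 with IEEE doubles */
    printf("DBL_EPSILON = %e\n", DBL_EPSILON);   /* 2.2e-016: the same general scale */
    return 0;
}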
 

pete

Dik said:
No, you are not. Try sq_rt((1 - c_os(0)) * (1 + c_os(0))) which gives
a much better approximation... The reason that (1 - c)*(1 + c) is
much better than (1 - c * c) is easily shown. Consider decimal
arithmetic with numbers of 5 significant digits. Assume c = 0.99999.
In that arithmetic c*c == 1, so 1 - c*c == 0. On the other hand,
1 - c == 0.00001 and 1 + c == 2.00000, and so
(1 - c) * (1 + c) == 0.00002.
Much closer to the actual value which is 0.0000199999.

That's interesting.
Thank you.
 

Martin Eisenberg

pete said:
I've never seen a pocket calculator that didn't yield 0 exactly
as the cosine of the product of an odd integer
and the calculator's approximation of pi / 2.

Many pocket calculators have higher working precision than display
precision to hide the junk accumulating in the lowest digits over
time. Some can switch that mode off.


Martin
 

pete

Willem said:
pete wrote:
) Christian Bau wrote:
)
)> The values 1.0 and -1.0 are much harder to miss than 0.0.
)
) I think I'm dealing with that correctly by returning
) sq_rt(1 - c_os(0) * c_os(0)) as the value for c_os(pi / 2).

You're really not getting it, are you ?

The point is that a value close to 1 is much more likely to be
rounded to exactly 1, whereas a value close to 0 is more likely to
*not* be rounded to exactly 0 because that's how floating point works.

I think I'm starting to get it.

1 + DBL_MIN == 1
0 + DBL_MIN == DBL_MIN
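Spelled out as a tiny program on an IEEE-style double:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("1.0 + DBL_MIN == 1.0 : %d\n", 1.0 + DBL_MIN == 1.0);   /* 1: absorbed     */
    printf("0.0 + DBL_MIN == 0.0 : %d\n", 0.0 + DBL_MIN == 0.0);   /* 0: not absorbed */
    printf("DBL_EPSILON : %e\n", DBL_EPSILON);   /* spacing of doubles near 1.0 */
    printf("DBL_MIN     : %e\n", DBL_MIN);       /* smallest positive normal    */
    return 0;
}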
 

Gerry Quinn

Dik T. Winter wrote:
Any pocket calculator worth the money will return a non-zero value,
unless it uses symbolic calculations.
And apparently you have never seen the older HP pocket calculators.

When you use pi to the calculator's precision, cos(pi/2) should give
a result of roughly the size of the calculator's precision. If it does
not, it is wrong.

My trusty old Casio fx-7000G gives 0. Presumably it keeps some extra
significant figures that are not shown to the user. I believe this is
quite common.

- Gerry Quinn
 
