Sine code for ANSI C


P.J. Plauger

Dan Pop said:

But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the real numbers set. The real value exactly represented
is no more relevant than any other value from that set.

That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly
represented is way more relevant than any other value from the set.
In particular, it is the value that library functions should assume
when deciding what function value to return. Do that and you will
please a large number of your customers. Don't do it and you won't
even please those who really believe the 1/2 ulp fuzz. (They should
be doing their own error propagation analysis no matter what, and
they'll almost always benefit from a result in the center of the range.)
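
To make "exact" concrete, here is a minimal illustration (a sketch of
mine, not part of the original exchange): dyadic values and many
operations on them incur no rounding at all, so the bit pattern denotes
exactly one real number.

#include <stdio.h>

int main(void)
{
    double a = 0.5;       /* 2^-1, exactly representable */
    double b = 0.25;      /* 2^-2, exactly representable */
    double c = 1048576.0; /* 2^20, exactly representable */

    /* Both comparisons hold exactly; no 1/2 ulp fuzz anywhere. */
    printf("%d\n", a + b == 0.75);        /* prints 1 */
    printf("%d\n", c * 2.0 == 2097152.0); /* prints 1 */
    return 0;
}
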
Dan Pop said:
The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to compute
sin(DBL_MAX) with 53-bit precision, ignoring the fact that DBL_MAX
stands for an interval so large as to make this function call completely
devoid of any meaning?

You have to ignore the possibility that DBL_MAX stands for a large range
because *it might not.* Granted, it probably does have that much fuzz
practically all the time; but you, the library vendor, don't know that.
And as we've discussed at length here, it's hard to say just where along
the road to possible perdition it's okay to stop trying. Once you tool
up for doing the job right along a good part of that road, it ain't that
much more work to go all the way. Then you don't have to worry about
some customer coming along and saying, "Hey, why did you give up on
computing that value properly? I happen to know what I'm doing." And
that customer might even be right.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

CBFalconer

Dan said:
.... snip ...

The point is not whether it can be calculated, but rather how much
precision should the calculation produce? Does it make sense to
compute sin(DBL_MAX) with 53-bit precision, ignoring the fact that
DBL_MAX stands for an interval so large as to make this function
call completely devoid of any meaning?

PJP's point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.

If I ask for the value of sin(12345678*PI) I can confidently
assert that the value is 0. Using that value will never cause an
unexplained fault. The following illustrates the problem:

[1] c:\c\junk>a
1 3.1415926535897931 0.0000000000000001
2 6.2831853071795862 -0.0000000000000002
4 12.5663706143591725 -0.0000000000000005
8 25.1327412287183449 -0.0000000000000010
16 50.2654824574366899 -0.0000000000000020
32 100.5309649148733797 -0.0000000000000039
64 201.0619298297467594 -0.0000000000000078
128 402.1238596594935188 -0.0000000000000157
256 804.2477193189870377 -0.0000000000000313
512 1608.4954386379740754 -0.0000000000000627
1024 3216.9908772759481508 -0.0000000000001254
2048 6433.9817545518963016 -0.0000000000002508
4096 12867.9635091037926031 -0.0000000000005016
8192 25735.9270182075852063 -0.0000000000010032
16384 51471.8540364151704125 -0.0000000000020064
32768 102943.7080728303408250 -0.0000000000040128
65536 205887.4161456606816500 -0.0000000000080256
131072 411774.8322913213633001 -0.0000000000160512
262144 823549.6645826427266002 -0.0000000000321023
524288 1647099.3291652854532003 -0.0000000000642046
1048576 3294198.6583305709064007 -0.0000000001284093
2097152 6588397.3166611418128014 -0.0000000002568186
4194304 13176794.6333222836256027 -0.0000000005136371
8388608 26353589.2666445672512054 -0.0000000010272743
16777216 52707178.5332891345024109 -0.0000000020545485
33554432 105414357.0665782690048218 -0.0000000041090971
67108864 210828714.1331565380096436 -0.0000000082181941
134217728 421657428.2663130760192871 -0.0000000164363883
268435456 843314856.5326261520385742 -0.0000000328727765
536870912 1686629713.0652523040771484 -0.0000000657455530
1073741824 3373259426.1305046081542969 -0.0000001314911060

[1] c:\c\junk>cat sines.c
#include <stdio.h>
#include <limits.h>
#include <math.h>

int main(void)
{
    double PI = 4.0 * atan(1.0);
    unsigned int i;

    for (i = 1; i < INT_MAX; i += i) {
        printf("%11u %28.16f %20.16f\n", i, PI * i, sin(PI * i));
    }
    return 0;
}
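
A short aside on why that last column doubles so regularly (my own
derivation, assuming 4.0 * atan(1.0) returns the double nearest pi):
that double is PI = pi - eps with eps ~= 1.2246e-16, and since i is a
power of two, i * PI is computed exactly. Then

    sin(i * PI) = sin(i*pi - i*eps) = (-1)^(i+1) * sin(i*eps)
               ~= (-1)^(i+1) * i * eps

which gives +eps at i = 1 and about -i * eps at the even i, exactly
the pattern printed above.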
 

Christian Bau

"P.J. Plauger said:
You have to ignore the possibility that DBL_MAX stands for a large range
because *it might not.* Granted, it probably does have that much fuzz
practically all the time; but you, the library vendor, don't know that.
And as we've discussed at length here, it's hard to say just where along
the road to possible perdition it's okay to stop trying. Once you tool
up for doing the job right along a good part of that road, it ain't that
much more work to go all the way. Then you don't have to worry about
some customer coming along and saying, "Hey, why did you give up on
computing that value properly? I happen to know what I'm doing." And
that customer might even be right.

Somehow your mentioning of sin (DBL_MAX) sparked a little idea that
might help you calculate sin (x) for arbitrarily large values quite
quickly, especially useful if you use long double with 15 exponent bits
and an enormous range:

Instead of storing huge numbers of bits of 1/2pi and doing huge
precision argument reduction, just store the values of sin (65536 ^ N)
and cos (65536 ^ N) with, let's say, 20 or 24 extra bits of precision.
Every large floating-point number can be written as x = +/- k * 2^n
where k is, for example, a 53 bit or 64 bit integer. So take a value of
x, write it as the sum of very few small multiples of (65536 ^ N), which
should be quite simple, then use the addition theorems for calculating
sin (a+b) when sin and cos of a and b are known. That should work quite
quickly for arbitrarily large x.
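
For what it's worth, here is a tiny sketch (mine, with made-up values)
of just the addition-theorem step of that scheme; a real implementation
would store sin and cos of the anchors 65536^N with the 20-odd extra
bits, rather than computing them on the fly as done here:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 65536.0;    /* the anchor 65536^1 */
    double sin_a = sin(a); /* a library would precompute these */
    double cos_a = cos(a); /* to extra precision               */
    double b = 0.375;      /* small, exactly representable remainder */
    double x = a + b;      /* 65536.375, also exact */

    /* sin(a+b) = sin(a)cos(b) + cos(a)sin(b) */
    printf("direct:   %.17g\n", sin(x));
    printf("addition: %.17g\n", sin_a * cos(b) + cos_a * sin(b));
    return 0;
}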
 

Joe Wright

CBFalconer said:

... snip ...

I got really interested in P.J.'s argument reduction to sin() et al.,
and while constructing something to test with I have found something
very strange. A very simple program:

#include <stdio.h>
#include <math.h>

#define R PI*2/360 /* Radians per Degree */

int main(void) {
    double d, r = R;
    int i, lin;

    for (lin = 0, i = 30; i < 6000000; i += 360000) {
        d = i * R;
        printf("\n%2d. Deg %8d, Rad %.16e Sin %+.16e", ++lin, i, d,
               sin(d));
    }
    puts("");
    for (lin = 0, i = 30; i < 6000000; i += 360000) {
        d = i * r;
        printf("\n%2d. Deg %8d, Rad %.16e Sin %+.16e", ++lin, i, d,
               sin(d));
    }
    puts("");
    return 0;
}

Sorry about the line wraps.

I use the DJGPP versions of gcc.

The point of the exercise was to simulate fairly large integral
(degree) angles, convert the (int) degrees to (double) radians and
present the result to sin() and see what we get. I know the sine of
30 degrees is 0.5, and I suppose the sine of 390 and 750 degrees
(and so on) is the same.

The program consists of two virtually identical loops. The
difference is only the assignment of d.

The loops produce 17 lines of output, identical except (in my case)
for line 14. In that case, the calculation of d yields a different
result, off by one in the least significant bit of the mantissa of
d. Passing these two values to sin() results in a difference in the
18 low-order bits of the result's mantissa.

If you see the same difference, can you explain it?
 

CBFalconer

Christian said:
.... snip ...

Somehow your mentioning of sin (DBL_MAX) sparked a little idea
that might help you calculate sin (x) for arbitrarily large
values quite quickly, especially useful if you use long double
with 15 exponent bits and an enormous range:

Instead of storing huge numbers of bits of 1/2pi and doing huge
precision argument reduction, just store the values of
sin (65536 ^ N) and cos (65536 ^ N) with, let's say, 20 or 24
extra bits of precision. ... snip ...

That isn't necessary. Assuming the argument is precise (i.e.
extended by zero bits on the right) all we need to compute (arg
mod PI) to the precision of PI is a couple of guard bits.
 

Christian Bau

CBFalconer said:
That isn't necessary. Assuming the argument is precise (i.e.
extended by zero bits on the right) all we need to compute (arg
mod PI) to the precision of PI is a couple of guard bits.

When you write PI you mean a floating point approximation of the
mathematical constant pi? For large x, reduction modulo an approximation
to pi will give a result that is very much different from reduction
modulo pi. That is why some libraries store pi or 1/pi with a precision
of several thousand bits.
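
A small sketch (my illustration) of how fast that difference grows: the
double PI is off from pi by about 1.22e-16, and fmod removes roughly
x/pi multiples, so reducing against the approximation drags in an error
of about (x/pi) * 1.22e-16 radians -- already around 0.04 radians by
x = 1e15.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double PI = 4.0 * atan(1.0); /* nearest double to pi */
    double x = 1.0e15;
    double r = fmod(x, 2.0 * PI); /* reduction modulo the approximation */

    printf("sin(reduced): %.17g\n", sin(r));
    printf("sin(x):       %.17g\n", sin(x)); /* if the library reduces
                                                against many bits of pi,
                                                these two disagree badly */
    return 0;
}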
 

Irrwahn Grausewitz

Joe Wright said:
I got really interested in P.J.'s argument reduction to sin() et al.,
and while constructing something to test with I have found something
very strange. A very simple program:

#include <stdio.h>
#include <math.h>

#define R PI*2/360 /* Radians per Degree */

If you see the same difference, can you explain it?

If you actually define PI somewhere and parenthesize the
replacement text for R, the code not only compiles, but
all the strangeness should be gone.
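
For concreteness, the repaired macro might look like this (my sketch;
the parentheses are the important part): without them, i * R expands
to i * PI * 2 / 360, which evaluates left to right as ((i * PI) * 2) / 360
and rounds differently from i * (PI * 2 / 360), which is what the
precomputed r measures.

#include <math.h>

#define PI 3.14159265358979323846
#define R (PI * 2 / 360) /* Radians per Degree, now a single unit */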

Regards
 

CBFalconer

Christian said:
When you write PI you mean a floating point approximation of the
mathematical constant pi? For large x, reduction modulo an
approximation to pi will give a result that is very much different
from reduction modulo pi. That is why some libraries store pi or
1/pi with a precision of several thousand bits.

You're right. I was thinking of the case where we start with a
number of the form "arg = 2 * PI * rotations + delta", where we
can recover delta provided we use the same value for PI and the
number has not lost precision in the first place.
 

Dan Pop

P.J. Plauger said:
That's just one interpretation of floating-point representation. It is
akin to the oft-repeated presumption in this thread that every value
carries a +/- 1/2 ulp error with it. But that's not always the case.
Many a floating-point calculation is *exact*. So the real value exactly

When I asked you for concrete examples, you provided exactly zilch.

Dan
 

Dan Pop

CBFalconer said:
PJP's point is that that interval is the range of values specified,
with the actual value being the most likely value in the range.
The shape of the probability curve in the interval is known only
to the end user. It may be an impulse function precisely at the
specified value.

Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?

To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.

Dan
 

P.J. Plauger

Dan Pop said:
Or it may not. If I deserve 5 digits of precision, why should the
implementation waste precious CPU resources to give me 16 digits of
*bogus* precision?

Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.
Dan Pop said:
To deserve 16 digits of precision, it is *my* job to call sin() with
an argument of the right magnitude. It's as simple as that.

Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously. Absent such information, a high quality
sin function favors getting the best possible answer for the argument
given, over doing something fast in the hopes that *all* users are
as fuzzy and in a hurry as you always seem to be. And as I've indicated
several times in this thread, computing the sine of *any* angle
smaller in magnitude than pi/4 can be done pretty fast. Odds are
that any logic you add to reduce the precision in this range would
take about as long to execute as the time you save. So all you have
to do is reduce your own arguments, as quick and sloppy as suits
your needs, and you avoid all that wasteful accuracy demanded by
professional programmers.
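
In that spirit, a do-it-yourself "quick and sloppy" reduction might be
no more than this sketch (mine, not library code); the caller accepts
whatever fuzz fmod against a 53-bit two-pi introduces, and the library
then only ever sees a small argument:

#include <math.h>

double sloppy_sin(double x)
{
    const double PI     = 3.14159265358979323846;
    const double TWO_PI = 6.28318530717958647692;

    double r = fmod(x, TWO_PI); /* cheap, fuzzy for huge x */
    if (r > PI)
        r -= TWO_PI; /* fold into [-PI, PI] */
    else if (r < -PI)
        r += TWO_PI;
    return sin(r); /* the library now gets a small argument */
}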

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Dan Pop said:

When I asked you for concrete examples, you provided exactly zilch.

Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense
: > >> to call sine with integral arguments or with arguments that are
: > >> provably *exact* representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dik T. Winter

Dan Pop said:
> When I asked you for concrete examples, you provided exactly zilch.

I think I gave one? (Zeros of the Riemann zeta function.)

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.
 

CBFalconer

Dan said:
.... snip ...

When I asked you for concrete examples, you provided exactly zilch.

I expanded my earlier example showing the errors arising from
large arguments to sin to show possible user argument reductions.
The first one I tried was fmod, and I am pleasantly surprised by
its accuracy here. The library is DJGPP 2.03.

#include <stdio.h>
#include <limits.h> /* INT_MAX */
#include <math.h>   /* fmod, sin, atan */
#include <stdlib.h> /* strtod */

/* try delta 0.001953125 (i.e. 2**-n i.e. single bit)
         0.00048828125
         0.0001220703125
         0.000030517578125
         0.00000762939453125
         0.0000019073485328125
   to show the precision preserved by reduce() */

/* -------------------1 */

/* return arg modulo PI */
double reduce(double arg, double PI)
{
    return fmod(arg, PI);
} /* reduce */

/* -------------------1 */

int main(int argc, char **argv)
{
    double PI = 4.0 * atan(1.0);
    unsigned int i;
    double arg, narg, delta;

    delta = 0.0;
    if (argc > 1) delta = strtod(argv[1], NULL);

    for (i = 1; i < (INT_MAX / 2U); i = 4 * i) {
        arg = i * PI + delta;
        narg = reduce(arg, PI);
        printf("%11u %28.16f %20.16f\n%11c %28.16f %20.16f\n",
               i, arg, sin(arg), ' ', narg, sin(narg));
    }
    return 0;
}

c:\c\junk>a 0.0000019073485328125
1 3.1415945609383260 -0.0000019073485328
0.0000019073485329 0.0000019073485329
4 12.5663725217077058 0.0000019073485328
0.0000019073485333 0.0000019073485333
16 50.2654843647852232 0.0000019073485314
0.0000019073485333 0.0000019073485333
64 201.0619317370952785 0.0000019073485113
0.0000019073485191 0.0000019073485191
256 804.2477212263355568 0.0000019073484878
0.0000019073485191 0.0000019073485191
1024 3216.9908791832967836 0.0000019073485074
0.0000019073486328 0.0000019073486328
4096 12867.9635110111412359 0.0000019073481312
0.0000019073486328 0.0000019073486328
16384 51471.8540383225190453 0.0000019073466264
0.0000019073486328 0.0000019073486328
65536 205887.4161475680302829 0.0000019073406072
0.0000019073486328 0.0000019073486328
262144 823549.6645845500752330 0.0000019073165305
0.0000019073486328 0.0000019073486328
1048576 3294198.6583324782550335 0.0000019072202235
0.0000019073486328 0.0000019073486328
4194304 13176794.6333241909742355 0.0000019068349957
0.0000019073486328 0.0000019073486328
16777216 52707178.5332910418510437 0.0000019052940843
0.0000019073486328 0.0000019073486328
67108864 210828714.1331584453582764 0.0000018991304387
0.0000019073486328 0.0000019073486328
268435456 843314856.5326280593872070 0.0000018744758563
0.0000019073486328 0.0000019073486328
 

glen herrmannsfeldt

Dik T. Winter wrote:

But if you really do think what is quoted here with "> >>", you should
distrust *every* solution of linear systems where the system has an
order over 40 or somesuch.

I would certainly look carefully at them, especially
if done with matrix inversion.

-- glen
 

Dik T. Winter

glen herrmannsfeldt said:
> Dik T. Winter wrote:
> ... snip ...
> I would certainly look carefully at them, especially
> if done with matrix inversion.

Also with Gaussian elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.
It is quite possible that within the set of matrices given, there is
at least one singular matrix. The number 40 I use above comes from
von Neumann. And indeed, before Wilkinson, numerical mathematicians
could not deal with the error analysis of large linear systems.
 

Dan Pop

P.J. Plauger said:
Because it doesn't know that it shouldn't? You could supply a useful
hint by calling sinf, in this particular case at least.

sinf is useless in portable C code.

P.J. Plauger said:
Simple for you, perhaps, but the interface for sin currently fails
to provide any information about how many of the bits you're supplying
should be taken seriously.

ALL of them, as I have already explained to you. It's the magnitude of
the value that decides how many digits of precision I deserve in the
result.

Dan
 

Dan Pop

P.J. Plauger said:
Just last week I replied to you as follows:

: > >> So, I was asking about *concrete* examples where it makes sense
: > >> to call sine with integral arguments or with arguments that are
: > >> provably *exact* representations of the intended value.
: > >
: > >And I gave one.
: >
: > Nope, you gave me nothing in the way of a *concrete* example. Or maybe
: > the term is beyond your grasp... Clue: "concrete" and "hypothetical"
: > are not exactly synonyms in any language I'm familiar with.
:
: The term is beyond my grasp.

Can't you remember *anything*?

Of course I do. Otherwise, I couldn't point out that you have still
failed to support your assertion with concrete examples; therefore I
refuse to consider it seriously, period.

Dan
 

Dan Pop

Dik T. Winter said:
Also with Gaussian elimination. If you really think that the input
numbers are intervals, in almost all cases the result makes no sense.

Unless the input numbers can be exactly represented in floating point,
it is sheer foolishness to consider them as exact values. Furthermore,
in real life applications, most input numbers are obtained with a
precision even lower than that provided by the floating point
representation, so they stand for even larger intervals. That's why
the results are intervals, too. Consider the difference between
1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
as a plain 1 in both cases.


Dan
 

Dik T. Winter

Dan Pop said:
> Unless the input numbers can be exactly represented in floating point,
> it is sheer foolishness to consider them as exact values. Furthermore,
> in real life applications, most input numbers are obtained with a
> precision even lower than that provided by the floating point
> representation, so they stand for even larger intervals. That's why
> the results are intervals, too. Consider the difference between
> 1 +/- 1000 and 1 +/- 0.001. Only a patent fool will display the result
> as a plain 1 in both cases.

Ah, so, what method do *you* use to solve linear systems on the computer?
 
