Sine code for ANSI C

Dan Pop

In an earlier message, someone said:
Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.

You have failed to understand the substance of this discussion. Nobody
argued for less precision than could be *meaningfully* computed.

Back to my extreme example: one could consider DBL_MAX as an *exact*
value and spend significant CPU resources computing sin(DBL_MAX) with
double precision accuracy, but when would that precision be *meaningful*?

Give the user as much precision as makes sense given the magnitude of
the argument, and let *him* do the argument reduction if he needs more
precision, because only he can decide the proper way of performing
the argument reduction for his application. More often than not, trying
to outsmart the user results in dumb behaviour.

Dan
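
For scale, a small sketch (assuming IEEE 754 doubles): the gap between
DBL_MAX and its nearest representable neighbour already spans an
astronomical number of full periods of sine, so once the argument is read
as "some real number near DBL_MAX" rather than an exact value, any result
in [-1, 1] is as defensible as any other.

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Width of the interval between DBL_MAX and the next double below it. */
        double gap = DBL_MAX - nextafter(DBL_MAX, 0.0);
        double twopi = 6.283185307179586;

        printf("gap below DBL_MAX:      %g\n", gap);          /* about 2e292 */
        printf("periods of sine in gap: %g\n", gap / twopi);  /* about 3e291 */
        return 0;
    }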
 
Dik T. Winter

>
> And does it? Last time I checked, mathematics define the sine as having
> a real argument and the C programming language provides zilch support for
> real numbers.
Yup.

>
> Please elaborate, again, with concrete examples.

You want a concrete example; I do think that such examples are possible.
> Where do these
> "mathematical applications" get their input data from? How do they
> handle the *incorrect* (from the mathematics POV) result of calling sine
> for pretty much *any* argument except 0?

Pretty much as in every such case. Careful error analysis.
 
P.J. Plauger

Consider this program fragment, as no real examples have
been presented so far:

for(x=0;x+60!=x;x+=60) printf("%20.15g\n",sin(x*A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
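
As an aside, here is a compilable version of that fragment (a sketch:
A is taken to be pi/180 rounded to double, which is what the description
implies, and the loop runs for an enormous number of iterations on IEEE
doubles):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* A converts degrees to radians; both A and the product x*A carry
           ordinary double rounding error that grows with x. */
        const double A = 3.14159265358979323846 / 180.0;
        double x;

        /* Stops only when adding 60 no longer changes x, i.e. when 60 drops
           below one ulp of x. */
        for (x = 0; x + 60 != x; x += 60)
            printf("%20.15g\n", sin(x * A));
        return 0;
    }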

Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?

Not sure what this has to do with the current discussion, but
I'll answer anyway. There was a market for machines that were
fast -- at least by the standards of those days -- and a prevailing
view that inaccurate floating-point was safe enough if you just
carried a few extra sacrificial precision bits. That view was
correct often enough that not all of Cray's customers went
bankrupt.


What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.

As before.


What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.

Most likely.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.


Okay.

What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
of the square root algorithm is done slightly differently
to avoid precision loss in a halve operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)

Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.

And what "fatal" error are you going to introduce, given that
the math library currently has no such concept? Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.
Giving a fatal error tells the user that something is
wrong and should be fixed.

Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?
Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.

Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?
Actually, the time when this happened to me (probably
when I was in ninth grade, due to an undefined variable
if I remember right), someone carefully explained to me
why the library would refuse such a calculation. It made
sense at the time, and it still does.

Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Christian Bau

<snipped lots of stuff>

Apart from the precision of an individual call to sin(x), would people
think that fabs (sin (x)) <= 1.0 must be true for _all_ values of x, no
matter how large? And should sin (x)*sin(x) + cos(x)*cos(x) be close to
1 for all x, no matter how large?
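
A quick way to put those two questions to an implementation (a sketch
that just spot-checks a few large arguments with whatever sin/cos the
local library provides):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double xs[] = { 1.0, 1.0e10, 1.0e100, 1.0e300 };
        int i;

        for (i = 0; i < 4; i++) {
            double s = sin(xs[i]), c = cos(xs[i]);
            printf("x = %-8g  |sin(x)| <= 1: %d   sin^2+cos^2 = %.17g\n",
                   xs[i], fabs(s) <= 1.0, s*s + c*c);
        }
        return 0;
    }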
 
glen herrmannsfeldt

P.J. Plauger said:
(snip)

Okay, so what? The programmer has unrealistic expectations.
It's the library vendor's responsibility to meet possibly
realistic expectations. The vendor can't possibly guess
all the ways that library functions will be called with
arguments that need "correcting" so as not to hurt the
programmer's feelings.

That example is more realistic than any you have posted
so far.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?
(snip)
Most likely.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?

When they use a single precision function they expect
less accurate answers than a double precision function.
What does this tell the designer of a sine function about
where it's okay to stop delivering accurate results?

It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.
Nice criteria. I don't see them anywhere in the C Standard.
Are you going to propose them as a DR? Be fun to see you
defend why you pick the exponent 18 for 24-bit (IEEE)
precision, 50 for 53-bit and 100 for 64-bit(!) or 113-bit.

The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.

The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.
And what "fatal" error are you going to introduce, given that
the math library currently has no such concept? Even IEEE 754
always provides for continuing execution with some error
code. Looks like you're bravely guiding the standards community
into new and uncharted territory. Brave of you.

It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.
Can you quantify the "wrongness" in terms of your criteria?
Should they not also be applied to the various rounding
functions? If not, why not?

There are no problems of any use to anyone where they
are useful.

(snip)
Happens all the time, when users call math functions with
arguments having bits that don't really exist. Just whose
responsibility is that, anyway?
Things that make sense for a ninth-grade class don't necessarily
scale to professional libraries licensed to professional
programmers, both working to language standards developed
by international organizations.

It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.

-- glen
 
Christian Bau

glen herrmannsfeldt said:
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.

Just for a thought: There is this thing called "inverse error analysis".
You try to calculate y = f (x). Usually you assume that your
floating-point math will produce a value y' instead which is not quite
equal to y, and you want to keep the difference between y and y' small.
But instead you might assume that you calculate y = f (x'), that is your
floating-point arithmetic produced the exact function value for some
different x', hopefully close to x.

Now consider y = cos (x). For x = 0, cos (x) = 1, and it becomes smaller
very slowly; for small x we have cos (x) approximately 1 - x^2/2. Now
take 64 bit IEEE floating point. The largest number smaller than 1 is 1
- 2^-53. Choose x such that cos x is about 1 - 2^-54, x = pow (2.0,
-53.0/2). For this x, the best that your implementation of cos (x) can
do is to return a result of either 1 or 1 - 2^-53. 1 = cos (x') for x' =
0, 1 - 2^-53 = cos (x') for x' about pow (2.0, -26). So the best you can
possibly get is cos (x'), with x' either zero or 41 percent larger than
x. And there is nothing you can do about it.

No matter how good your cos () implementation is, the worst case with
inverse error analysis is horrendously bad.
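
The numbers above can be checked directly (a sketch, assuming IEEE
doubles and a reasonably accurate acos for recovering x'):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = pow(2.0, -53.0 / 2);          /* cos(x) is about 1 - 2^-54      */
        double y = cos(x);                       /* rounds to 1 or 1 - 2^-53       */
        double below_one = nextafter(1.0, 0.0);  /* 1 - 2^-53, largest double < 1  */

        /* The two x' whose exact cosine equals the two possible results: */
        double xprime_one = 0.0;                 /* cos(0) == 1                     */
        double xprime_below = acos(below_one);   /* about 2^-26, ~41% larger than x */

        printf("x                = %.17g\n", x);
        printf("cos(x)           = %.17g\n", y);
        printf("x' for result 1  = %.17g\n", xprime_one);
        printf("x' for result <1 = %.17g\n", xprime_below);
        return 0;
    }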
 
P.J. Plauger

That example is more realistic than any you have posted
so far.

Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.
When they use a single precision function they expect
less accurate answers than a double precision function.

No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.
It says that when the operations require exceptional
ability to perform, far above and beyond the call of
duty, it is time to give up. When thousands of
digits of pi are required to compute the result on
a 64 bit operand, something is wrong.

"Far above and beyond" is once again qualitative arm waving.
You may hear a weaker call to duty than others, but there's
nowhere in the C Standard that justifies an arbitrary
point beyond which it's okay to give up on sine, or any other
function. You think sine is hard to get right for some
arguments, try lgamma, tgamma, and erfc (all in C99).
The requirements of the C standard have already been
discussed, and that didn't seem to bother you any.

What, you mean that a conforming implementation has no explicit
precision requirements? I understand that; in fact I was one
of the principal proponents of that (lack of) requirement in
the C Standard. Dan Pop already responded to that issue better
than I can. It is well understood that every implementation
has certain "quality of implementation" features. (I also
pioneered use of that term.) If getc takes two days per character,
nobody will buy your library. Similarly, if math functions
trap out on certain unspecified values, word will get out and
your customers will stay away in droves.

Math libraries are tougher to quantify, because they require so
many measurements. But once you do the measurements and publish
them, you'd better be prepared to justify the results. You can't
just say, "I decided not to implement pow for negative exponents."
Or more to the point, "at some point the results of sine begin
to really suck, but I'd rather not say where that is." And if
you *do* choose to publish your threshold of suckiness, you'd
better have a rationale for choosing it. Otherwise, some third
party tester will report that your sine function occasionally
is off by 40 billion ulp, and not provide the rationale for why
that might be.
The numbers above are not for IEEE, as it hadn't been
invented yet. Extended precision has 112 bit fraction.

Oh, I see. You were talking about criteria accompanying a
40-year-old architecture, with software technology to match.
It seems that on most implementations sqrt() has the
ability to generate a fatal error when given a negative
argument.

Really? The C Standard says it should set errno to EDOM and
return some failure value. If your implementation also obeys
IEC 60559 (aka IEEE 754) then you should return a NaN. I
find very few customers who want a program to terminate rather
than produce a NaN. I know even fewer who would *ever* expect
sine to return a NaN for any finite argument.
There are no problems of any use to anyone where they
are useful.

Is that a Zen koan or just the nonsense it appears to be?
It wasn't in a class. They didn't teach it in ninth grade,
and as far as I know they don't now. It was explained to
me by a professional physicist.

As a one-time professional physicist myself, I now begin to
understand the nature of your education.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Dr Chaos

Again, so what? We're talking about the requirements placed on
a vendor of high quality math functions. How various innocents
misuse the library doesn't give the vendor any more latitude.
It's what the *professionals* expect, and the C Standard
indicates, that matter. Sadly, the C Standard gives *no*
latitude for copping out once an argument to sine gets large
in magnitude.

Then that's a likely thought-bug in the C Standard.

No, they expect less *precise* answers. There's a difference,
and until you understand it you're not well equipped to
critique the design of professional math libraries.

The design of the professional math libraries is not the issue; it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.

I think accumulation of rotations is probably best done
with complex multiplication.
 
P.J. Plauger

Then that's a likely thought-bug in the C Standard.

Likely, but it's there, and some of us have to live with it.
The design of the professional math libraries is not the issue; it's
whether the effort is worthwhile, as opposed to accommodating likely
poorly thought-out algorithms by the user.

The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."
I think accumulation of rotations is probably best done
with complex multiplication.

And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
John Cochran

SNIP....
The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."

It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.
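
For IEEE doubles that cutoff is easy to locate (a sketch: it finds the
first power of two whose spacing to the next double exceeds 2*pi, which
lands at 2^55, roughly 3.6e16):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double twopi = 6.283185307179586;
        double x = 1.0;

        /* Walk up the powers of two until consecutive doubles are more
           than 2*pi apart. */
        while (nextafter(x, HUGE_VAL) - x <= twopi)
            x *= 2.0;

        printf("spacing first exceeds 2*pi at x = %g (spacing = %g)\n",
               x, nextafter(x, HUGE_VAL) - x);
        return 0;
    }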
 
P.J. Plauger

SNIP....


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.

Yes, that's a reasonable cutoff point, on the face of it. Just don't
look too close. You're falling prey to the same error in logic that
traps most people who first study this problem -- assuming that there
must be some intrinsic error, say 1/2 ulp, in the argument. If we're
going to apply that criterion to the library uniformly, then rounding
1e38 is equally suspect. (How happy would you be if round occasionally
produced a fatal error? Particularly when the answer is obvious and
easy to compute.)

But if you assume the argument is exact, as *all* library functions
really must do, then your statement is incorrect. There *is* a well
defined angle corresponding to *every* finite floating-point argument.
You (or I, to be specific) may not like the amount of work required
to compute it accurately, but the value is known and well defined.

If, OTOH, you want to let the library vendors off the hook whenever
a function is hard to compute, I've got a little list. And it doesn't
stop at sin/cos/tan.

And if, OTOOH, you want to let the library vendors off the hook
whenever a typical programmer probably doesn't know what he's doing,
*boy* do I have a list.

The trick with standards, as with library design, is to have a
rational framework that's uniformly applied. The sine function
may illustrate some of the nastiest consequences of carrying the
current framework to its logical conclusion, but I assure you
that there are plenty of other monsters lurking out there. Any
change you propose to the framework just gives you a different
set to battle.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
Christian Bau

SNIP....


It seems to me that a reasonable cutoff point would be where
the difference between consecutive floating point numbers is greater
than two pi. At that point you can't even determine the *sign* of the
correct answer, let alone determine any value that is justifiable.
The only thing that you can justify is a claim that the answer lies
somewhere between -1.0 and 1.0.

Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number. Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.
 
James Kanze

|> I agree that it's a Quality of Implementation issue just how fast a
|> library function supplies nonsense when called with nonsense
|> arguments. But I've yet to hear an objective criterion for
|> determining how much of an argument is garbage. Absent that, the
|> best way I know for library writers to satisfy customers is to
|> assume that every input value is exact, and to produce the closest
|> possible representation to the nearest internal representation of
|> the corresponding function value.

Maybe I'm just being naïve, but I always thought that it was a
necessary quality of a correct implementation that it give correct
results for all legal input. And that it wasn't the job of library
implementers to decide what was or was not reasonable input for my
application.
 
Joe Wright

P.J. Plauger said:
Likely, but it's there, and some of us have to live with it.




The design of professional math libraries *is* the issue. Until
such time as standards quantify what calculations are "worthwhile"
and what merely accommodate poorly thought out algorithms, we
have an obligation to assume that whatever is specified might
be considered worthwhile to some serious users.

The other side of the coin is knowing where to stop once the
"worthwhile" police get empowered. Several people who have
contributed to this thread are convinced that computing the
sine of a sufficiently large angle is not worthwhile, but *nobody*
has ventured a cutoff point that has any defensible logic behind
it. And I assure you that as soon as any such defense is mounted,
I and others can apply it to a variety of other math functions.
You will then hear the usual howls, "but *that's* different."




And why do you think this can be done with any more accuracy,
or precision, than the techniques cited (and sneered at) so far
for generating large angles?

P.J.
I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

...the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?

2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

...not the solution for large angle argument reduction?

Keep up the good work.
 
CBFalconer

Joe said:
.... snip ...

2. Large Angles. The circle is 360 degrees or '2 pi radians'.
Why is something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?

That depends highly on how you compute the fmod. Say you compute
r/(2*PI), truncate it to an integer, multiply by (2*PI), and
subtract that from r. Now you have the difference of two
comparable magnitudes, with attendant loss of significant bits.
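
A small sketch of that loss, using the value from the question above
(a good fmod returns the remainder of the stored operands exactly, so
the difference below is the rounding and cancellation introduced by the
divide-truncate-multiply-subtract route):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double twopi = 2.0 * 3.14159265358979323846;
        double r = 52147.3;

        /* divide, truncate, multiply, subtract: two ~5e4 values cancel to ~3 */
        double q     = floor(r / twopi);
        double naive = r - q * twopi;

        double viafmod = fmod(r, twopi);   /* remainder of the stored values */

        printf("naive reduction: %.17g\n", naive);
        printf("fmod:            %.17g\n", viafmod);
        return 0;
    }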

Compare with methods of computing sigma f(n) for n = 1 ....
infinity. If you start with n=1 (using the normally available
floating point system) you will end up with something quite
divergent from the true answer, regardless of whether the series
actually converges or not. A reasonably accurate computation
requires starting with the smallest terms.
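
The same effect in miniature (a sketch in single precision so it shows
up after a modest number of terms; the target is pi^2/6, about 1.644934):

    #include <stdio.h>

    int main(void)
    {
        /* Sum 1/n^2 forward and backward in float.  The forward sum stalls
           once 1/n^2 drops below one ulp of the running total; summing the
           smallest terms first does noticeably better. */
        const int N = 10000000;
        float fwd = 0.0f, bwd = 0.0f;
        int n;

        for (n = 1; n <= N; n++)
            fwd += 1.0f / ((float)n * (float)n);
        for (n = N; n >= 1; n--)
            bwd += 1.0f / ((float)n * (float)n);

        printf("forward:  %.7f\n", fwd);
        printf("backward: %.7f\n", bwd);
        return 0;
    }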
 
Dik T. Winter

>
> Yup what?!? Please elaborate.

I would have thought that was simple. Yes, I agree with that paragraph.
>
> This is not good enough.

I'm so sorry.
>
> With the exception of numerical analysis, mathematical results are exact,
> by definition.

You are forgetting a few fields: numerical algebra, computational number
theory, statistics... But in some cases (especially computational number
theory) final results can be exact while intermediate results are not.
For instance in a project I was involved in, we have shown that the first
1,200,000,000 non-trivial zeros of the Riemann zeta function have real
part 1/2. However, none of the calculated zeros was exact. (Moreover,
the calculation involved the sine function. Moreover, we had to get
reasonable precision, as it involved separating places where the sign
of the function changes. %)
---
% If you are interested, there are well-known formulas that indicate in
which region the n-th non-trivial zero resides. However, these regions
overlap. But using this you can set up algorithms that will locate
groups of zeros. The problem is that these zeros will get arbitrarily
close to each other, so using floating point on the machine we did the
runs on (a CDC Cyber 205) the best separation we could get was 10**(-13).
This was not enough, so sometimes we had to resort to double precision.
However, the argument was *exact*. A precise floating-point (hence
rational) number.
 
P.J. Plauger

I tend to come down on your side on these things (except casting
malloc, maybe). I am not a mathematician but am very interested in
your take on the following floating point issues..

1. Accuracy vs Precision. #define Pi 3.1416 is precise to five
digits and accurate within its precision. If I do something like..

double Pi2 = Pi * 2.0;

..the constant 2.0 is accurate and precise to 16 digits. The result
of the multiplication is accurate to only five digits while it is
precise to 16. Does this make sense?
Yes.

2. Large Angles. The circle is 360 degrees or '2 pi radians'. Why is
something like..

double r = 52147.3, s;
s = sin(fmod(r,2*PI));

..not the solution for large angle argument reduction?

People once thought it was, or should be. Indeed, that was one of the
arguments for adding the rather finicky fmod to IEEE floating point
and eventually the C Standard. But if you think about it hard enough
and long enough -- took me an afternoon and several sheets of paper --
you realize that it doesn't cut it. You effectively have to keep
subtracting 2*pi from your argument r until it's less than 2*pi.
fmod does this by subtracting the various multiples 2*pi*2^n. If
*any one* of them does not have nearly 16 good fraction digits, as
well as all the digits it needs to the left of the decimal point, it's
going to mess up the whole set of subtractions. So if you want to
reduce numbers as large as 10^38, you have to represent pi to about
log10(10^(38+16)) or 54 digits. For 113-bit IEEE long double, you
need well over 4000 digits.
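
The scale of the problem is easy to demonstrate (a sketch, not the
method described in the next paragraph; it assumes long double carries
more precision than double, which holds for x87 extended precision but
is not guaranteed, and even the long double reduction here is far from
exact):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double      twopi  = 6.283185307179586;                 /* 2*pi to double   */
        const long double twopil = 6.283185307179586476925286766559L; /* a few more digits */

        double x  = 1.0e16;                  /* large, exactly representable */
        double r1 = fmod(x, twopi);          /* reduce with a 53-bit 2*pi    */
        double r2 = (double)fmodl((long double)x, twopil);

        /* With x around 1e16 the two reduced arguments differ by a sizable
           fraction of a radian, purely because of the digits of 2*pi that
           the double constant throws away. */
        printf("sin after double reduction:      %.17g\n", sin(r1));
        printf("sin after long double reduction: %.17g\n", sin(r2));
        return 0;
    }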

We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.
Keep up the good work.

Thanks.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
CBFalconer

P.J. Plauger said:
.... snip ...

We've developed an arbitrary-precision package that represents
numbers as arrays of floating-point values, each of which uses only
half its fraction bits. So we can do adjustable precision argument
reduction fairly rapidly. Still takes a lot of storage to represent
the worst case, and a bit more logic than I wish we had to use, but
it does the job. Not only that, we need the same sort of thing
to handle several other difficult math functions, though with
nowhere near as much precision, of course. So it's not like we
indulge in heroics just for sin/cos/tan.

It seems to me that the reduction could be fairly rapid if an
estimate is formed by normal division and then broken up into
single-bit binary portions (so that multiples of PI do not add
non-zero significant bits) to do the actual reduction. Not worked
out at all, just the glimmer of a method. The idea is to remove the
leftmost digits of the original argument first.
 
Dan Pop

In an earlier message, Christian Bau said:
Well, you actually _can_ find the correct answer quite well. A value of
type double represents a single real number.

But, when used in a real number context (as opposed to an integer number
context -- floating point can be used in both contexts) it stands for a
whole subset of the set of real numbers. The real value exactly represented
is no more relevant than any other value from that set.
Of course we all know that
if I assign x = a + b; then usually x is _not_ equal to the mathematical
sum of a and b, and given only x I might not draw any useful conclusions
about sin (a + b). However, sin (x) can still be calculated quite well.

The point is not whether it can be calculated, but rather how much
precision the calculation should produce. Does it make sense to compute
sin(DBL_MAX) with 53-bit precision, ignoring the fact that DBL_MAX
stands for an interval so large as to make this function call completely
devoid of any meaning?

Dan
 
