Sine code for ANSI C

Tim Prince

Grumble said:
> AMD64 supports x87. Thus FSIN is available. What did you mean by
> "not really true of AMD64 either"?

Possibly referring to compilers complying with ABIs that disallow x87, or
taking advantage of the higher performance of SSE parallel libraries. Use
of fsin on IA64 is extremely unlikely, even though it's still there.
 

Dan Pop

P.J. Plauger said:
>> Huh? If I want the phase of an oscillator after 50,000 radians, are you
>> saying that is not computable? Please elaborate.
>>
>> There was a thread hereabouts many months ago on this very subject and
>> AFAIK no one suggested that it was not computable, it just couldn't be
>> done with doubles. And I see no inherent problems.

> Right. This difference of opinion highlights two conflicting
> interpretations of floating-point numbers:
>
> 1) They're fuzzy. Assume the first discarded bit is
> somewhere between zero and one. With this viewpoint,
> CBFalconer is correct that there's no point in trying
> to compute a sine accurately for large arguments --
> all the good bits get lost.
>
> 2) They are what they are. Assume that every floating-point
> representation exactly represents some value, however that
> representation arose. With this viewpoint, osmium is correct
> that there's a corresponding sine that is worth computing
> to full machine precision.
>
> I've gone to both extremes over the past several decades.
> Our latest math library, still in internal development,
> can get exact function values for *all* argument values.
> It uses multi-precision argument reduction that can gust
> up to over 4,000 bits [sic]. "The Standard C Library"
> represents an intermediate viewpoint -- it stays exact
> until about half the fraction bits go away.
>
> I still haven't decided how hard we'll try to preserve
> precision for large arguments in the next library we ship.

Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
second interpretation has no more clues about floating point than the
guys who expect 0.1 to be accurately represented in binary floating point.
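
A minimal C illustration of that last point:

    #include <stdio.h>

    int main(void)
    {
        double x = 0.1;        /* stored as the nearest double, not one tenth */
        printf("%.20f\n", x);  /* prints 0.10000000000000000555... */
        return 0;
    }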

Dan
 

P.J. Plauger

Dan Pop said:
>> This difference of opinion highlights two conflicting
>> interpretations of floating-point numbers:
>>
>> 1) They're fuzzy. Assume the first discarded bit is
>> somewhere between zero and one. With this viewpoint,
>> CBFalconer is correct that there's no point in trying
>> to compute a sine accurately for large arguments --
>> all the good bits get lost.
>>
>> 2) They are what they are. Assume that every floating-point
>> representation exactly represents some value, however that
>> representation arose. With this viewpoint, osmium is correct
>> that there's a corresponding sine that is worth computing
>> to full machine precision.
>>
>> I've gone to both extremes over the past several decades.
>> Our latest math library, still in internal development,
>> can get exact function values for *all* argument values.
>> It uses multi-precision argument reduction that can gust
>> up to over 4,000 bits [sic]. "The Standard C Library"
>> represents an intermediate viewpoint -- it stays exact
>> until about half the fraction bits go away.
>>
>> I still haven't decided how hard we'll try to preserve
>> precision for large arguments in the next library we ship.
>
> Why bother? Floating point numbers *are* fuzzy. Whoever sticks to the
> second interpretation has no more clues about floating point than the
> guys who expect 0.1 to be accurately represented in binary floating point.

Sorry, but some of our customers are highly clued and they *do*
know when their floating-point numbers are fuzzy and when they're
not. In the latter case, the last thing they want/need is for us
library writers to tell them that we've taken the easy way out
on the assumption that their input values to our functions are
fuzzier than they think they are.

It's the job of the library writer to return the best internal
approximation to the function value for a given input value
*treated as an exact number.* If the result has fuzz in a
particular application, it's up to the authors of that
application to analyze the consequence.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
> Sorry, but some of our customers are highly clued and they *do*
> know when their floating-point numbers are fuzzy and when they're
> not.

Concrete examples, please.

Dan
 

P.J. Plauger

Dan Pop said:
> Concrete examples, please.

What is the sine of 162,873 radians? If you're working in radians,
you can represent this input value *exactly* even in a float. Do
you want to be told that the return value has about 16 low-order
garbage bits because nobody could possibly expect an angle that
large to have any less fuzz? Maybe you do, but some don't. And I,
for one, have trouble justifying in this case why a standard
library function shouldn't deliver on a not-unreasonable
expectation. (The fact that it's hard to deliver on the expectation
doesn't make it unreasonable.)
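
A short C sketch of the example, using C99's float functions (illustrative
only; the digits printed depend on the library): 162873 needs just 18 bits,
so a float holds it exactly, and any error in the result comes entirely from
how carefully the implementation reduces that exact value modulo 2*pi.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float x = 162873.0f;   /* exact: 162873 < 2^24, no rounding on input */

        /* Crude reduction with a float-precision 2*pi: the modulus is off by
           about 2e-7, so after ~26,000 periods the reduced angle is wrong by
           several milliradians -- roughly the 16 garbage bits noted above. */
        float crude = sinf(fmodf(x, 6.2831853f));

        /* A library that reduces the argument with enough precision can
           return the sine of the exact input to full precision. */
        printf("crude: %.9f  careful: %.9f\n", crude, sin((double)x));
        return 0;
    }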

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
> What is the sine of 162,873 radians? If you're working in radians,
> you can represent this input value *exactly* even in a float.

You *can*, but does it make physical sense to call sine with an integer
argument (even if represented as a float)?

In real life applications, the argument of sine is computed using
floating point arithmetic (on non-integer values), so it *is* a fuzzy
value, with the degree of fuzziness implied by its magnitude.

So, I was asking about *concrete* examples where it makes sense to call
sine with integral arguments or with arguments that are provably *exact*
representations of the intended value.

To *me*, as a user, having a sine that spends CPU cycles to provide
the answer with the precision implied by the assumption that the
argument represents an exact value, is unacceptable. If I call
sin(DBL_MAX) I deserve *any* garbage in the -1..1 range, even if DBL_MAX
is an exact integer value.

Dan
 

P.J. Plauger

Dan Pop said:
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?

Yes. It also makes sense to call sine with a double having 40 bits of
fraction *that are exact* and expect the 53-bit sine corresponding to
that exact number, regardless of whether there's an exact integer
contribution as well. Same problem.

> In real life applications, the argument of sine is computed using
> floating point arithmetic (on non-integer values), so it *is* a fuzzy
> value, with the degree of fuzziness implied by its magnitude.

Not necessarily. Once again, you're presuming that everybody programs
like you do. Library vendors don't have that luxury.

> So, I was asking about *concrete* examples where it makes sense to call
> sine with integral arguments or with arguments that are provably *exact*
> representations of the intended value.

And I gave one.

> To *me*, as a user, having a sine that spends CPU cycles to provide
> the answer with the precision implied by the assumption that the
> argument represents an exact value, is unacceptable.

What if the cycles are spent only on large arguments? All you have
to do then is avoid the large arguments you know to be meaningless
in your application.

> If I call sin(DBL_MAX) I deserve *any* garbage in the -1..1 range,
> even if DBL_MAX is an exact integer value.

Probably. You also deserve *any* number of CPU cycles spent generating
that garbage.

Fortunately for you, most sine functions meet your quality
requirements.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
> Yes. It also makes sense to call sine with a double having 40 bits of
> fraction *that are exact* and expect the 53-bit sine corresponding to
> that exact number, regardless of whether there's an exact integer
> contribution as well. Same problem.

Concrete examples, please.

> Not necessarily. Once again, you're presuming that everybody programs
> like you do. Library vendors don't have that luxury.

Concrete examples, please. Assuming a competent approach to the problem.

> And I gave one.

Nope, you gave me nothing in the way of a *concrete* example. Or maybe
the term is beyond your grasp... Clue: "concrete" and "hypothetical"
are not exactly synonyms in any language I'm familiar with.

> What if the cycles are spent only on large arguments? All you have
> to do then is avoid the large arguments you know to be meaningless
> in your application.

Even not so large arguments can still have plenty of fuzziness, and
getting a 53-bit accurate answer for the value actually represented is
still a waste of CPU resources.

> Probably. You also deserve *any* number of CPU cycles spent generating
> that garbage.

A *good* implementation (which is what we're talking about, right?) is
supposed to produce garbage (where garbage is asked for) as fast as
possible.

> Fortunately for you, most sine functions meet your quality
> requirements.

Glad to hear it. I hope I'll never be forced to use yours.

Dan
 

Dik T. Winter

Dan Pop said:
> You *can*, but does it make physical sense to call sine with an integer
> argument (even if represented as a float)?

Must everything make physical sense? Perhaps it makes mathematical sense?
> In real life applications, the argument of sine is computed using
> floating point arithmetic (on non-integer values), so it *is* a fuzzy
> value, with the degree of fuzziness implied by its magnitude.

Not in mathematical applications, where the argument to the sine function
can very well be exact.
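
One concrete instance of the sort Dik may have in mind: partial sums of
sin(n)/n, which converge to (pi - 1)/2. Every argument handed to sin()
below is an exact integer by construction, with no physical measurement
in sight (a sketch, not anyone's production code):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* The sum of sin(n)/n for n = 1..N converges to (pi - 1)/2;
           how close the partial sum lands depends on sin() honoring
           each exact integer argument. */
        double sum = 0.0;
        long n;
        for (n = 1; n <= 1000000L; n++)
            sum += sin((double)n) / (double)n;
        printf("partial sum = %.9f\n", sum);
        printf("limit       = %.9f\n", (3.14159265358979323846 - 1.0) / 2.0);
        return 0;
    }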
 

Dan Pop

Dik T. Winter said:
> Must everything make physical sense? Perhaps it makes mathematical sense?

And does it? Last time I checked, mathematics defines the sine as having
a real argument, and the C programming language provides zilch support for
real numbers.

> Not in mathematical applications, where the argument to the sine function
> can very well be exact.

Please elaborate, again, with concrete examples. Where do these
"mathematical applications" get their input data from? How do they
handle the *incorrect* (from the mathematics POV) result of calling sine
for pretty much *any* argument except 0?

Dan
 

P.J. Plauger

Dan Pop said:
> Concrete examples, please.
> .....
> Concrete examples, please. Assuming a competent approach to the problem.
> .....
> Nope, you gave me nothing in the way of a *concrete* example. Or maybe
> the term is beyond your grasp... Clue: "concrete" and "hypothetical"
> are not exactly synonyms in any language I'm familiar with.

The term is beyond my grasp.

> Even not so large arguments can still have plenty of fuzziness, and
> getting a 53-bit accurate answer for the value actually represented is
> still a waste of CPU resources.

So all the sine function needs is an indication of how much fuzz to
assume in the argument. I've yet to hear a specific criterion. Concrete
example, please.

> A *good* implementation (which is what we're talking about, right?) is
> supposed to produce garbage (where garbage is asked for) as fast as
> possible.

I agree that it's a Quality of Implementation issue just how fast a
library function supplies nonsense when called with nonsense arguments.
But I've yet to hear an objective criterion for determining how much
of an argument is garbage. Absent that, the best way I know for library
writers to satisfy customers is to assume that every input value is
exact, and to produce the nearest internal representation of the
corresponding function value.

Of course, if you don't want to take responsibility for analyzing error
propagation, that's fine. Just don't foist the burden on somebody who
doesn't know how fuzzy your inputs really are.

> Glad to hear it. I hope I'll never be forced to use yours.

Me too.

Me too.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
> So all the sine function needs is an indication of how much fuzz to
> assume in the argument. I've yet to hear a specific criterion. Concrete
> example, please.

Assume all the bits of the argument are exact and the *only* source
of fuzziness is the magnitude of the value. By the time the least
significant bit of the mantissa means more than 2 * pi, the fuzziness
is complete (any result in the -1..1 range is correct).

Is that concrete enough for you?
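
In C, that criterion reduces to a one-line ulp test (the helper name is
mine, purely for illustration; nextafter is C99):

    #include <math.h>

    /* True once 1 ulp of x spans more than a full period of sine, i.e.
       once every value in [-1, 1] is an equally "correct" answer.  For
       IEEE doubles this first happens at |x| = 2^55, where the ulp is 8,
       which exceeds 2*pi.  Hypothetical helper, not from any library. */
    static int argument_is_pure_fuzz(double x)
    {
        double ax = fabs(x);
        return nextafter(ax, HUGE_VAL) - ax > 6.28318530717958647692;
    }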

Dan
 

osmium

You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean-going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect someone could provide the name of such a ship.
 

Dan Pop

osmium said:
> You might mention to Mr. Pop the notion of a revolution counter, such as a
> Veeder-Root counter on the propeller shaft of an ocean-going ship. My
> guess is that there are a lot of radians between England and New Jersey. If
> Dan Pop needs the name of a particular ship to make the point sufficiently
> concrete, I expect someone could provide the name of such a ship.

You may also want to mention a concrete example where applying the sine
function to such a revolution counter makes sense. Unless I'm missing
something, only the last, possibly incomplete, revolution counts in the
context of the sine function, so please clue me in.

Dan
 

osmium

Dan Pop said:
> You may also want to mention a concrete example where applying the sine
> function to such a revolution counter makes sense. Unless I'm missing
> something, only the last, possibly incomplete, revolution counts in the
> context of the sine function, so please clue me in.

When I was a child I said, "Prime numbers, what are they good for?" Now,
finally, I and a lot of other people know what they are good for. For a
mathematician sitting in an ivory tower to decide which equations make
sense and which are worth solving is the height of arrogance. My reading
is that that is the point P.J. Plauger has been trying to make. The world
is so complex that the work load must be divided amongst several heads.
P.J. does his mathematician thing and engineers do their engineering
thing. When his fussiness causes his sine routine to take 50 msec when
Intel does it in 2 picoseconds, we might revisit the problem. I don't
think we are there yet.
 

Dan Pop

osmium said:
> When I was a child I said, "Prime numbers, what are they good for?" Now,
> finally, I and a lot of other people know what they are good for. For a
> mathematician sitting in an ivory tower to decide which equations make
> sense and which are worth solving is the height of arrogance. My reading
> is that that is the point P.J. Plauger has been trying to make. The world
> is so complex that the work load must be divided amongst several heads.
> P.J. does his mathematician thing and engineers do their engineering
> thing. When his fussiness causes his sine routine to take 50 msec when
> Intel does it in 2 picoseconds, we might revisit the problem. I don't
> think we are there yet.

I somehow failed to find in this empty verbiage the answer to the concrete
question I was asking in my previous post...

Dan
 

P.J. Plauger

Dan Pop said:
> Assume all the bits of the argument are exact and the *only* source
> of fuzziness is the magnitude of the value. By the time the least
> significant bit of the mantissa means more than 2 * pi, the fuzziness
> is complete (any result in the -1..1 range is correct).
>
> Is that concrete enough for you?

Yes, it's concrete enough to show that you still don't get it.

If the argument were in quadrants, I'd agree that for sufficiently
large arguments the answer is always one of {0, 1, -1}, and for
still larger arguments the answer is always zero. You'd still
want to return the appropriate member of this simple set, just as you
want to return 1e38 from the round function when called with 1e38.
Here, round is operating nowhere near its most interesting range,
but there's still a well defined function value for each input value,
and it's not the place of the library to set the Uninteresting Argument
Exception in the FPP, or set errno to EUNINTERESTING_ARGUMENT.

The argument to sine is not really in quadrants, but in radians.
That means that even when there are no fraction bits represented in
the argument x, there *still are* in x/(pi/2) (which is how you convert
radians to quadrants). So each argument value still corresponds to
some reduced argument that can be expressed to full machine precision
(almost always as some value other than {1, 0, -1} but still computable
with enough hard work). Which means that there's a corresponding
function return value that's also computable to full machine precision
that you really ought to return.
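
The flavor of that conversion is easy to sketch in C, even though a real
library must work much harder. The classic first step (Cody and Waite's
split constant, with fdlibm's values below) breaks pi/2 into a 33-bit head
and a tail, so that n times the head is computed exactly for modest n:

    #include <math.h>

    /* Reduce x (>= 0, not too large) to r = x - n*(pi/2), |r| <= pi/4,
       and return the quadrant n mod 4.  Valid only while n*PIO2_HI is
       exact, i.e. n below about 2^20; past that point a library needs
       ever more bits of pi -- ultimately the multi-thousand-bit
       reduction mentioned earlier.  An illustrative sketch only. */
    static double reduce_pio2(double x, int *quadrant)
    {
        static const double PIO2_HI = 1.57079632673412561417e+00; /* 33 bits of pi/2 */
        static const double PIO2_LO = 6.07710050650619224932e-11; /* pi/2 - PIO2_HI */
        double n = floor(x / 1.57079632679489661923 + 0.5);

        *quadrant = (int)fmod(n, 4.0);
        return (x - n * PIO2_HI) - n * PIO2_LO;
    }

With the quadrant in hand, sin(x) is then one of +/-sin(r) or +/-cos(r)
of the reduced r.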

You may think that it's unlikely that all the bits of that value are
meaningful in a given application, and you're probably right. But
you're not *definitely* right. So as a library vendor, best engineering
practice is to supply all the bits that *might* make sense, in case
they actually do. A good quality implementation will continue to
compute the sine of small arguments quickly, but if it has to take
progressively longer to reduce ever larger arguments, then so be it.
If you don't want the cost, reduce the arguments quickly yourself, by
whatever crude methods are good enough for your application, before
calling sine.

Note also that even sin(1e-38) *might not* be good to full precision,
because the argument happens not to be good to full precision, but
you have no criterion for judging how many bits are worth computing.
Since it's so easy to compute them, and since they're often all
meaningful, nobody wastes much time debating the cost of producing
bits that are potentially garbage.

Now it so happens that it gets progressively *much* harder to follow
this best engineering practice as x gets larger in magnitude. The
temptation is strong to decree that the bits are meaningless beyond
some point and just return garbage. I've done this very thing in the
past, but I'm less inclined to do it now -- at least not without some
sanctioned way to report significance loss in lieu of computing a
difficult function value.

You also didn't get that you're still not giving a sufficiently concrete
criterion for when the sine function should stop trying. If you don't
like that a sine function wastes your program microseconds computing
what you shouldn't have told it to compute, then you need to be more
precise about where it can and should give up. You state above that
the function result is definitely meaningless once 1 ulp in the argument
weighs more than 2*pi, but why go that far? Aside from a phase factor,
you've lost all angle information at pi/2. But then how meaningful is it
to have just a couple of bits of fraction information? To repeat what
you stated above:

> Even not so large arguments can still have plenty of fuzziness, and
> getting a 53-bit accurate answer for the value actually represented is
> still a waste of CPU resources.

So you've taken on a serious responsibility here. You have to tell us
library vendors just how small "not so large" is, so we know where to
produce quick garbage instead of slower correct answers. If you don't,
you have no right to deem our products unacceptable, or even simply
wasteful.

Of course, you also need to get some standards organization to agree
with you, so we all have the same *concrete* criteria to obey. But I'm
sure you can sweet talk one of them into doing as you say, once you
actually say it.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

CBFalconer

P.J. Plauger said:
> .... snip ...
>
> I agree that it's a Quality of Implementation issue just how fast
> a library function supplies nonsense when called with nonsense
> arguments. But I've yet to hear an objective criterion for
> determining how much of an argument is garbage. Absent that, the
> best way I know for library writers to satisfy customers is to
> assume that every input value is exact, and to produce the
> nearest internal representation of the corresponding function
> value.
>
> Of course, if you don't want to take responsibility for analyzing
> error propagation, that's fine. Just don't foist the burden on
> somebody who doesn't know how fuzzy your inputs really are.

I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sin x)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is on
the order of PI.

We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.

For a concrete example, consider a mechanism that rotates and
detents at 45 degree intervals. The sine of the resultant angle
can only have 5 discrete values. However the net rotation is
described by the sum of a counter (of full turns) and one of 8
discrete angles, in units of 45 deg. After letting that engine
whir for a day and a half at 1000 rpm and recording the final
angle, we want to know the sine of that angle. The
computation machinery knows nothing about detents, all it has to
work with is PI, the net rotation angle, and the computation
algorithm for sin(x).

At some point the accuracy of the results will become worse than
the accuracy of the detents, and all blows up. This is not the
same point as that reached by simple modulo PI arithmetic.
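
That thought experiment fits in a few lines of C, and it shows where the
error actually creeps in: a good sin() returns the sine of the value it is
handed almost exactly, while the damage is done earlier, when PI (known
only to double precision) is multiplied by two million turns. A sketch
with numbers matching the description above:

    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        /* A day and a half at 1000 rpm, detented at 45-degree steps. */
        double turns  = 1.5 * 24.0 * 60.0 * 1000.0;  /* 2,160,000 full turns */
        int    detent = 3;                           /* final rest: 135 degrees */

        double total = 2.0 * M_PI * (turns + detent / 8.0);  /* net rotation */

        printf("sine via net rotation: %.15f\n", sin(total));
        printf("sine of detent alone:  %.15f\n", sin(2.0 * M_PI * detent / 8.0));
        return 0;
    }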
 

P.J. Plauger

CBFalconer said:
> I am gradually coming around to your point of view here, so I am
> rewording it. To me, the argument is that the input argument
> represents a range of values (absent contrary information) with a
> known uncertainty. The most probable actual value is the exact
> value. Since d(sin x)/dx is strictly bounded, the resultant
> function is never going to fly off in all directions, and will not
> become totally meaningless unless that argument uncertainty is on
> the order of PI.
>
> We can, at the cost of some computational complexity, assume the
> input argument is exact and reduce it to a principal value with an
> accuracy governed by the precision of PI. There is no essential
> difference between this operation and forming a real value for
> 1/3. The critical thing there is the precision of our knowledge
> of 3.
>
> For a concrete example, consider a mechanism that rotates and
> detents at 45 degree intervals. The sine of the resultant angle
> can only have 5 discrete values. However the net rotation is
> described by the sum of a counter (of full turns) and one of 8
> discrete angles, in units of 45 deg. After letting that engine
> whir for a day and a half at 1000 rpm and recording the final
> angle, we want to know the sine of that angle. The computation
> machinery knows nothing about detents, all it has to work with
> is PI, the net rotation angle, and the computation algorithm
> for sin(x).
>
> At some point the accuracy of the results will become worse than
> the accuracy of the detents, and all blows up. This is not the
> same point as that reached by simple modulo PI arithmetic.

I think that you're saying a close approximation to what I've
said.

Thanks,

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
