Sine code for ANSI C


Dan Pop

An earlier post asked:
> Ah, so, what method do *you* use to solve linear systems on the computer?

I don't solve linear systems on the computer. Yet, as far as I know, all
the usual methods come bundled with an error analysis that allows
computing the precision of the results. If you don't compute this
precision, what's the point in solving the linear systems in the first
place? There are more efficient random number generators...

Dan
 

P.J. Plauger

Dan Pop said:
> sinf is useless in portable C code.

But wait a minute. You've just outlined a "concrete example" where
you know you deserve only 5 digits of precision. sinf will meet
your needs *portably*. And it won't waste your precious CPU resources
to give you 16 digits of *bogus* precision. How can you call that
useless when you just outlined a concrete use case where it portably
meets your needs exactly?
> ALL of them, as I have already explained to you. It's the magnitude of the
> value that decides how many digits of precision I deserve in the result.

But you've yet to commit to any *concrete* rule for deciding how many
digits you deserve given the magnitude of the argument value. And you
say we're at fault if a library takes more time than it should. We're
looking to you for guidance in this area, particularly since you seem
to have such a clear vision of the right way to do things. Please provide
wording for a correction to the C Standard so we can all critique it and
then adopt it. We're counting on you for leadership.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dan Pop

P.J. Plauger said:
> But wait a minute. You've just outlined a "concrete example" where
> you know you deserve only 5 digits of precision. sinf will meet
> your needs *portably*. And it won't waste your precious CPU resources
> to give you 16 digits of *bogus* precision. How can you call that
> useless when you just outlined a concrete use case where it portably
> meets your needs exactly?

sinf is a C99 invention, and the portability of C99 code is severely
impaired by the scarcity of implementations claiming C99 conformance.

C99 is perfectly futile to anyone concerned about the portability of his
code. At best, it's a guide to which C89 user-space identifiers not to
use in your code (because C99 has no qualms about stomping on the C89
user name space).
> But you've yet to commit to any *concrete* rule for deciding how many
> digits you deserve given the magnitude of the argument value.

I have already provided the rule, last week. And when I asked if it was
concrete enough for you, you didn't object.

Dan
 

P.J. Plauger

Dan Pop said:
> sinf is a C99 invention, and the portability of C99 code is severely
> impaired by the scarcity of implementations claiming C99 conformance.

Actually it's described in C89.

> C99 is perfectly futile to anyone concerned about the portability of his
> code. At best, it's a guide to which C89 user-space identifiers not to
> use in your code (because C99 has no qualms about stomping on the C89
> user name space).

sinf has been reserved since 1989, so I can't imagine how using it
could possibly "stomp on" any program. But that's okay, I understand
that you're trolling.
> I have already provided the rule, last week. And when I asked if it was
> concrete enough for you, you didn't object.

Oh sorry, I thought you were explaining your (mis)understanding
of the problem with this interaction:

: > >So all the sine function needs is an indication of how much fuzz to
: > >assume in the argument. I've yet to hear a specific criterion. Concrete
: > >example please.
: >
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

So what you're committing to is the requirement that the sine
should *not* be computed for any value where 1 ulp weighs more
than 2*pi. And presumably, sine should deliver the best possible
approximation to any argument less than this magnitude. (This
is fine with me, since it still requires an argument reduction
using about 2 1/2 words of precision, so it will break the
vast majority of sine functions that have been in use for the
past several decades. But hey, we're covered, so I'm happy.)
Of course, you still need to specify what to return for the
garbage value, and what error to report (if any). I await your
further guidance.
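
[Editor's aside, not part of the original exchange: a minimal sketch,
assuming IEEE 754 doubles and a C99 <math.h>, that locates the magnitude
at which 1 ulp first exceeds 2*pi -- the cutoff implied by the criterion
above.]

    #include <math.h>
    #include <stdio.h>

    /* Editor's sketch: find the smallest power of two at which one ulp
       of a double exceeds 2*pi (assumes IEEE 754 64-bit doubles). */
    int main(void)
    {
        double two_pi = 2.0 * acos(-1.0);
        double x = 1.0;

        while (nextafter(x, HUGE_VAL) - x <= two_pi)
            x *= 2.0;

        printf("1 ulp first exceeds 2*pi at x = %g (2^%d)\n", x, ilogb(x));
        return 0;
    }

[On IEEE doubles this reports 2^55, about 3.6e16.]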

So you're willing to stand behind this criterion even after I
described, in a later posting, the various ways in which it was
arbitrary? If this is submitted as a DR to WG14, you're
willing, as its original author and proposer, to defend it
against cranky attacks by people who haven't thought through
the matter as carefully as you?

Brave of you. We can call it the Pop Criterion.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Christian Bau

[email protected] (Dan Pop) said:
> I don't solve linear systems on the computer. Yet, as far as I know, all
> the usual methods come bundled with an error analysis that allows
> computing the precision of the results. If you don't compute this
> precision, what's the point in solving the linear systems in the first
> place? There are more efficient random number generators...

It depends. Do you want a set of numbers that is close to the correct
solution of the system, or do you want a set of numbers that is very
close to being a solution, that is, where each equation is solved with a
small error?

For many systems, you can get the second much more easily than the first.
Or let's say you want to minimise f(x). It often doesn't matter whether
you find an x' that is close to the minimising x; what you want is an x'
such that f(x') is close to the smallest value of f(x).
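
[Editor's aside: a minimal C sketch of the point just made. For
f(x) = (x - 1)^4, a flat minimum, an x' visibly far from the true
minimiser x = 1 still yields an f(x') extremely close to the minimum.]

    #include <stdio.h>

    static double f(double x)
    {
        double d = x - 1.0;
        return d * d * d * d;   /* (x - 1)^4, minimum 0 at x = 1 */
    }

    int main(void)
    {
        double xprime = 1.01;   /* off by 1e-2 in the argument */
        printf("x' = %g, f(x') = %g\n", xprime, f(xprime));  /* f = 1e-8 */
        return 0;
    }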
 

Dik T. Winter

Christian Bau said:
>
> It depends. Do you want a set of numbers that is close to the correct
> solution of the system, or do you want a set of numbers that is very
> close to being a solution, that is, where each equation is solved with
> a small error?

The methods most commonly used give the exact solution to a system
that differs from the original system by a small error. This does *not*
mean that the solution is close to the exact solution of the exact original
problem. It is close to it when the condition number of the original
matrix is good. And that is what the error analysis does (backward
error analysis, as introduced by J. H. Wilkinson).

To compute the precision of the result you have to do much more than is
feasible in most cases. Have a look at Karlsruhe arithmetic from the
University of Karlsruhe.
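
[Editor's aside: a small C illustration of the distinction drawn above,
using the classic ill-conditioned 2x2 system from Forsythe & Moler.
The exact solution is (1, -1); the candidate (0.341, -0.087) is far from
it yet leaves a far smaller residual than the much "closer" candidate
(0.999, -1.001). A tiny residual implies a tiny error only when the
matrix is well conditioned.]

    #include <stdio.h>

    /* Residuals of 0.780x + 0.563y = 0.217, 0.913x + 0.659y = 0.254. */
    static void residual(double x, double y)
    {
        double r1 = 0.780 * x + 0.563 * y - 0.217;
        double r2 = 0.913 * x + 0.659 * y - 0.254;
        printf("(%6.3f, %6.3f): residuals %+.2e  %+.2e\n", x, y, r1, r2);
    }

    int main(void)
    {
        residual(0.999, -1.001);  /* near the true solution, larger residual */
        residual(0.341, -0.087);  /* far from it, tiny residual              */
        return 0;
    }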
 

Dan Pop

P.J. Plauger said:
> Actually it's described in C89.

Are you trolling? It may be described in C89 (as a reserved identifier)
but no conforming C89 implementation is required to provide it. Your
own published implementation of the standard C89 library doesn't...
> sinf has been reserved since 1989, so I can't imagine how using it
> could possibly "stomp on" any program. But that's okay, I understand
> that you're trolling.

You're trolling. I was talking in general terms: there are dozens of C99
standard library functions whose names belong to the C89 user name space,
as well as additions to other headers.

> Oh sorry, I thought you were explaining your (mis)understanding
> of the problem with this interaction:

: > >So all the sine function needs is an indication of how much fuzz to
: > >assume in the argument. I've yet to hear a specific criterion. Concrete
: > >example please.
: >
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

> So what you're committing to is the requirement that the sine
> should *not* be computed for any value where 1 ulp weighs more
> than 2*pi.

That's too strong. I would still expect

sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)

to be reasonably close to 1.0, if not equal to it. But, other than that,
any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.
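
[Editor's aside: the expectation above is easy to test. A minimal
sketch, assuming a C99 <math.h>; whether the identity holds at DBL_MAX
depends entirely on the quality of the library's argument reduction.]

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double s = sin(DBL_MAX);
        double c = cos(DBL_MAX);
        printf("sin^2 + cos^2 at DBL_MAX = %.17g\n", s * s + c * c);
        return 0;
    }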
> And presumably, sine should deliver the best possible
> approximation to any argument less than this magnitude.

Not even that. If the value in question covers the range [min, max],
sine should provide the best possible approximation of sin(x), where x
is one value in the range [min, max].
> (This
> is fine with me, since it still requires an argument reduction
> using about 2 1/2 words of precision, so it will break the
> vast majority of sine functions that have been in use for the
> past several decades.

Only the ones that have been *misused*. If I measure the volume of a
sphere with a precision of 1 cubic centimeter, does it make any sense
to compute *and display* its radius with picometer precision?
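
[Editor's aside: the arithmetic behind the sphere example. Since
V = (4/3)*pi*r^3, a volume error dV propagates to a radius error
dr = dV / (4*pi*r^2). The numbers below (a roughly 4000 cm^3 sphere)
are the editor's, chosen only for illustration.]

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double pi = acos(-1.0);
        double V  = 4000.0;                     /* cm^3, assumed measurement */
        double dV = 1.0;                        /* cm^3, stated precision    */
        double r  = cbrt(3.0 * V / (4.0 * pi)); /* about 9.85 cm             */
        double dr = dV / (4.0 * pi * r * r);    /* about 8e-4 cm, i.e. 8 um  */
        printf("r = %.10f cm +/- %.10f cm\n", r, dr);
        return 0;
    }

[So the radius is good to a few micrometers; every digit past that,
let alone picometers, is noise.]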
> But hey, we're covered, so I'm happy.)
> Of course, you still need to specify what to return for the
> garbage value, and what error to report (if any). I await your
> further guidance.

For the garbage value, see above. Set errno to any value you consider
sensible and document it. ERANGE would be fine with me.
> So you're willing to stand behind this criterion even after I
> described, in a later posting, the various ways in which it was
> arbitrary?

Yup. I didn't buy any of your sophistry.
> If this is submitted as a DR to WG14, you're
> willing, as its original author and proposer, to defend it
> against cranky attacks by people who haven't thought through
> the matter as carefully as you?

Considering that the current standard doesn't impose *any* precision on
sin, I would not expect my DR to be taken seriously. Whatever
considerations led to the current committee decision would still be valid
after the submission of my DR (no precision specification is *more*
permissive than *any* precision specification).

If I were to submit a DR, it would be one requiring each implementation
to document the precision of the 4 arithmetic operations on each floating
point type. But I know, from past c.s.c discussion, that the committee
would turn a deaf ear to my DR.
> Brave of you. We can call it the Pop Criterion.

Call it the common sense criterion.

Dan
 

Dan Pop

Dik T. Winter said:
> To compute the precision of the result you have to do much more than is
> feasible in most cases.

What is the value of a result with unknown precision? To repeat my
earlier example, if you don't know if it's 1 +/- 0.001 or 1 +/- 1000,
the value of 1 is of pretty little use.

Dan
 

P.J. Plauger

Dan Pop said:
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

> > So what you're committing to is the requirement that the sine
> > should *not* be computed for any value where 1 ulp weighs more
> > than 2*pi.
>
> That's too strong. I would still expect
>
>     sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)
>
> to be reasonably close to 1.0, if not equal to it. But, other than that,
> any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.

There you go again, getting un-concrete. You'll have to specify
what "reasonably" means, or some nit-picker will take apart
your revision of the sine specification.
> > And presumably, sine should deliver the best possible
> > approximation to any argument less than this magnitude.
>
> Not even that. If the value in question covers the range [min, max],
> sine should provide the best possible approximation of sin(x), where x
> is one value in the range [min, max].

Okay, then I see no reason why we shouldn't apply this principle
to *every* math function. That would mean, for example, that
the range of permissible values can be huge for large arguments
to exp, lgamma, tgamma, as well as many combinations of arguments
to pow.
> Only the ones that have been *misused*. If I measure the volume of a
> sphere with a precision of 1 cubic centimeter, does it make any sense
> to compute *and display* its radius with picometer precision?

No. But if you specify the behavior of a math function and don't
provide a way to specify the uncertainty in its arguments, does
it make any sense to demand that the math functions know how
much uncertainty is there? I've been talking about the requirements
on the authors of math functions and you keep going back to how
they should be properly used. Two different universes of discourse.
> For the garbage value, see above. Set errno to any value you consider
> sensible and document it. ERANGE would be fine with me.

Okay, so a programmer knows that a call to sine was silly if

1) the return value is between -1 and +1, and

2) errno indicates that some function value is either too large to
represent (overflow) or too small to represent (underflow).

That should stand up well to criticism.
> Yup. I didn't buy any of your sophistry.


> Considering that the current standard doesn't impose *any* precision on
> sin, I would not expect my DR to be taken seriously.

So that's your copout for not actually doing anything?
> Whatever
> considerations led to the current committee decision would still be valid
> after the submission of my DR (no precision specification is *more*
> permissive than *any* precision specification).

Not even if you were brave enough to pioneer requirements on
precision in the math library? That's not too brave, given that
other languages have done so.
> If I were to submit a DR, it would be one requiring each implementation
> to document the precision of the 4 arithmetic operations on each floating
> point type. But I know, from past c.s.c discussion, that the committee
> would turn a deaf ear to my DR.

So that's your copout for not actually doing anything?
> Call it the common sense criterion.

I won't call it squat if you never submit it as a DR. It's just
more idle carping.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dik T. Winter

Dan Pop said:
>
> What is the value of a result with unknown precision? To repeat my
> earlier example, if you don't know if it's 1 +/- 0.001 or 1 +/- 1000,
> the value of 1 is of pretty little use.

I have precisely stated what the result means in numerical algebra.
But if you do not trust it, then every use of linear systems of order
around 40 should be abandoned. Or look it up in the books by Wilkinson
and Reinsch. Getting guaranteed results is very expensive. The Karlsruhe
group even had IBM design specific extensions to their 370 to make it
feasible.
 

Dr Chaos

P.J. Plauger said:
> Okay, so a programmer knows that a call to sine was silly if
>
> 1) the return value is between -1 and +1, and

I'd think about NaN in an IEEE context.
> 2) errno indicates that some function value is either too large to
> represent (overflow) or too small to represent (underflow).
>
> That should stand up well to criticism.

In the case of sin(), yes, because large arguments in floating point
really are mathematically silly. The same is not true of, say, the
logarithmic gamma, or just the plain logarithm.
 

P.J. Plauger

> I'd think about NaN in an IEEE context.

And sin(x) suddenly becomes NaN beyond some arbitrary number of
radians? I'm still waiting for the justification for any particular
magnitude of the cutoff.
> In the case of sin(), yes, because large arguments in floating point
> really are mathematically silly.

They really aren't, but this seems to be a point too subtle for
many to grasp.
> The same is not true of, say, the
> logarithmic gamma, or just the plain logarithm.

But the repeated argument is that the silliness stems from the fact
that the floating-point value x stands for the interval (x - 1ulp,
x + 1ulp). If it didn't, it would stand for the value it seems to
stand for, and the library writer might be obliged to actually
compute the function value corresponding to it. Well, what's sauce
for the goose is sauce for the gander. Why should sine get progressively
fuzzier as that interval covers a wider range of function values and
not require/allow/expect exactly the same from every other function
that does the same? The exponential suffers from *exactly* the
same fuzziness problem, to take just the best established of the
fast-moving functions I mentioned earlier. Aren't we doing all our
customers a disservice by giving them a misleading value for exp(x)
when x is large?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

John Cochran

P.J. Plauger said:
> But the repeated argument is that the silliness stems from the fact
> that the floating-point value x stands for the interval (x - 1ulp,
> x + 1ulp). If it didn't, it would stand for the value it seems to
> stand for, and the library writer might be obliged to actually
> compute the function value corresponding to it. Well, what's sauce
> for the goose is sauce for the gander. Why should sine get progressively
> fuzzier as that interval covers a wider range of function values and
> not require/allow/expect exactly the same from every other function
> that does the same? The exponential suffers from *exactly* the
> same fuzziness problem, to take just the best established of the
> fast-moving functions I mentioned earlier. Aren't we doing all our
> customers a disservice by giving them a misleading value for exp(x)
> when x is large?

Huh?

What large values for exp(x)?

For most of the floating point implementations out there, an argument
of about 709.78, give or take, will give a result close to the largest
representable value. The main difference between sin() and exp() is that
for every value you can pass to exp() there is an answer, and it is
possible to come up with the floating point number closest to the
infinitely precise result. With sin(), however, once you get to an input
where the magnitude of the ulp is greater than 2 pi, it becomes
impossible to decide what the correct answer is.
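
[Editor's aside: the 709.78 figure is just log(DBL_MAX) for IEEE 754
doubles, as this sketch shows; anything much beyond it overflows.]

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        printf("log(DBL_MAX) = %.4f\n", log(DBL_MAX)); /* about 709.7827 */
        printf("exp(709.0)   = %g\n", exp(709.0));     /* finite         */
        printf("exp(710.0)   = %g\n", exp(710.0));     /* overflows: inf */
        return 0;
    }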
 

Dan Pop

P.J. Plauger said:
: > Assume all the bits of the argument are exact and the *only* source
: > of fuzziness is the magnitude of the value. By the time the least
: > significant bit of the mantissa means more than 2 * pi, the fuzziness
: > is complete (any result in the -1..1 range is correct).
: >
: > Is that concrete enough for you?
:
: Yes, it's concrete enough to show that you still don't get it.

> > > So what you're committing to is the requirement that the sine
> > > should *not* be computed for any value where 1 ulp weighs more
> > > than 2*pi.
> >
> > That's too strong. I would still expect
> >
> >     sin(DBL_MAX) * sin(DBL_MAX) + cos(DBL_MAX) * cos(DBL_MAX)
> >
> > to be reasonably close to 1.0, if not equal to it. But, other than that,
> > any value in the [-1, 1] range is OK once 1 ulp exceeds 2*pi.
>
> There you go again, getting un-concrete. You'll have to specify
> what "reasonably" means, or some nit-picker will take apart
> your revision of the sine specification.

As close to 1.0 as the C standard requires 2.0 - 1.0 to be ;-)

If you're not interested in a serious discussion, please don't waste my
time.
> > > And presumably, sine should deliver the best possible
> > > approximation to any argument less than this magnitude.
> >
> > Not even that. If the value in question covers the range [min, max],
> > sine should provide the best possible approximation of sin(x), where x
> > is one value in the range [min, max].
>
> Okay, then I see no reason why we shouldn't apply this principle
> to *every* math function. That would mean, for example, that
> the range of permissible values can be huge for large arguments
> to exp, lgamma, tgamma, as well as many combinations of arguments
> to pow.

Precisely. I didn't mean sine to be treated as an exception.
> No. But if you specify the behavior of a math function and don't
> provide a way to specify the uncertainty in its arguments, does
> it make any sense to demand that the math functions know how
> much uncertainty is there?

The amount of uncertainty assumed by the library is dictated by the
nature of the floating point representation being used. In most real
life applications, the amount is higher than that, but that is none of
the library implementor's concern.
> I've been talking about the requirements
> on the authors of math functions and you keep going back to how
> they should be properly used. Two different universes of discourse.

They stop being different once common sense is applied. The implementor
need not waste any resources providing more precision than a proper usage
of the function requires. The additional "precision" is garbage, anyway:
only a fool would look for *significant* information in those bits.
> Okay, so a programmer knows that a call to sine was silly if
>
> 1) the return value is between -1 and +1, and
>
> 2) errno indicates that some function value is either too large to
> represent (overflow) or too small to represent (underflow).

The *exact* meaning of ERANGE is as documented for each function using
it. But, as I said, use *any* value *you* consider sensible and document
it.

Again, I'm not interested in this kind of idiotic nitpicking...
> So that's your copout for not actually doing anything?

Why should I do *anything*? Are we discussing here the C standard or your
implementation of the C library? Are you so stupid as to be unable to
tell the difference between the two?

I have never expressed any objection to what the C standard says about the
<math.h> stuff, so why the hell should I have to submit *any* DR on this
issue?

Dan
 

Dr Chaos

> And sin(x) suddenly becomes NaN beyond some arbitrary number of
> radians?

Yes.
> I'm still waiting for the justification for any particular
> magnitude of the cutoff.

When the least significant bit of the argument induces a change of at
least 2 pi in its value (so that the output is essentially indeterminate
under a 1 lsb fluctuation), it is extremely likely that the programmer
is making a blunder and ought to know.
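
[Editor's aside: a hedged sketch of the diagnostic proposed above; the
name sin_checked and the reporting channel are the editor's invention.
It flags arguments whose ulp exceeds 2*pi, where the result carries no
information.]

    #include <math.h>
    #include <stdio.h>

    static double sin_checked(double x)
    {
        double ulp = nextafter(fabs(x), HUGE_VAL) - fabs(x);
        if (ulp > 2.0 * acos(-1.0))
            fprintf(stderr, "sin_checked: argument %g is a likely "
                            "blunder (1 ulp > 2*pi)\n", x);
        return sin(x);
    }

    int main(void)
    {
        printf("%g\n", sin_checked(1.0));   /* fine             */
        printf("%g\n", sin_checked(1e17));  /* triggers warning */
        return 0;
    }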
> They really aren't, but this seems to be a point too subtle for
> many to grasp.

Explain a situation, using computations with standard fixed-precision
floating point, in which computing sin of a value passed in a standard
floating point format would be valuable for magnitudes beyond the above.
> But the repeated argument is that the silliness stems from the fact
> that the floating-point value x stands for the interval (x - 1ulp,
> x + 1ulp). If it didn't, it would stand for the value it seems to
> stand for, and the library writer might be obliged to actually
> compute the function value corresponding to it. Well, what's sauce
> for the goose is sauce for the gander. Why should sine get progressively
> fuzzier as that interval covers a wider range of function values and
> not require/allow/expect exactly the same from every other function
> that does the same? The exponential suffers from *exactly* the
> same fuzziness problem,

It only does in the legalistic sense, not in the useful sense, because
in those cases people would be comparing magnitudes, and possibly
dividing or taking logarithms of the large results. (And they most
likely should have been taking logarithms first.)

And the error relative to the value remains small, and that's what
people almost always care about when using exp.

At some point you have to return +Inf for exp() of too large an
argument. Is it *really* positive infinity? No. It is far from
positive infinity; well, infinitely so.

There's an arbitrary cutoff right there. Why aren't you forcing
people to convert to an arbitrary-precision format?
> to take just the best established of the
> fast-moving functions I mentioned earlier. Aren't we doing all our
> customers a disservice by giving them a misleading value for exp(x)
> when x is large?

sin and cos are different from exp, and especially from lgamma.

I would be pleased with a library which flagged excessively large
arguments to sin/cos. It's a blunder, like dereferencing NULL.

For a library whose arguments had arbitrary precision, then yes, it
would be a fine idea to get sin() right.
 

P.J. Plauger

John Cochran said:
> Huh?
>
> What large values for exp(x)?
>
> For most of the floating point implementations out there, an argument
> of about 709.78, give or take, will give a result close to the largest
> representable value. The main difference between sin() and exp() is that
> for every value you can pass to exp() there is an answer, and it is
> possible to come up with the floating point number closest to the
> infinitely precise result. With sin(), however, once you get to an input
> where the magnitude of the ulp is greater than 2 pi, it becomes
> impossible to decide what the correct answer is.

Sigh. Let's try again. You're still arguing from the notion that
a given floating-point number represents a range of values, from
the next lower representable value to the next higher. If that
were the one, obvious, and only meaning of a floating-point argument
value, then I'd agree that the uncertainty in the value of sin(x)
eventually gets so large that it makes little sense to ascribe a
given value to the argument. All we can say is the value is in the
range [-1, 1].

But if that's how we're going to treat sine, it's hard to make a
case for treating any other function differently. The fast moving
functions, of which exp and tgamma are just two examples, *also*
get fuzzier as their arguments get larger. I should stick to
calculus, but I'll instead give a concrete example. If you compute
exp(700.0), you get an answer of about 1.014232054735004e+304.
Now try computing the exponential of the next larger representable
value (which you can obtain from nextafter(700.0, INF)). It looks
like 1.01423205473512e+304, printed to the same precision. That
value is 947 ulp bigger than exp(700.0).
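
[Editor's aside: the measurement described above is easy to reproduce.
A sketch assuming a C99 <math.h>; the exact ulp counts it prints depend
on the accuracy of the exp and sin being tested.]

    #include <math.h>
    #include <stdio.h>

    /* Count the representable doubles between a and b. */
    static long ulps_between(double a, double b)
    {
        long n = 0;
        if (a > b) { double t = a; a = b; b = t; }
        while (a < b) { a = nextafter(a, INFINITY); ++n; }
        return n;
    }

    int main(void)
    {
        double x  = 700.0;
        double x1 = nextafter(x, INFINITY);
        printf("exp: %ld ulps apart\n", ulps_between(exp(x), exp(x1)));
        printf("sin: %ld ulps apart\n", ulps_between(sin(x), sin(x1)));
        return 0;
    }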

So by the logic repeatedly expressed in this thread, it's perfectly
fine for exp(700.0) to return *any* value within plus or minus
900-odd ulp. That's nearly *eleven garbage bits*. After all,
whoever computed that 700.0 ought to know the effect of a 1 ulp
error on the argument, so it's silly -- and a waste of time --
for the library to try too hard to get the exponential right.
Right?

So much for the requirements on exp, cosh, sinh, tgamma, and
pow -- to name the functions I know off the top of my head would
go to hell.

But if you assume, for the sake of computing any of these fast
moving functions, that the argument represents some *exact*
value, then there is a well defined function value that can
be delivered corresponding to that exact value. And I'll bet
there's more than one physicist, engineer, and/or economist
who'd rather get that value than something about 1,000 ulp
off. Pragmatically speaking, I do know that plenty of customers
will complain if your exponential function sucks as much as
the interval interpretation would permit.

Now here's the harder thing to understand. *Every* finite
argument has a well defined sine corresponding to it. It's
not easy to compute, but it's well defined. It would be
easier to understand if the sine took an argument in quadrants.
For larger arguments, the granularity gets steadily worse,
but the arguments still stand for an obvious number of
quadrants, or degrees if you prefer to think in those terms.
Once you get to where the least significant bit weighs 0.5,
you can represent only multiples of 45 degrees. All your
results are 0, 1, or sqrt(1/2), possibly with a minus sign.
Go to the next larger exponent and your values are only
0, 1, and -1, corresponding to multiples of 90 degrees. Go
to the next larger exponent and your values are all zero,
because the sine of any multiple of 180 degrees is zero.
And thus it remains for all the larger finite representable
values.
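
[Editor's aside: the granularity argument above, made concrete. The
step between consecutive representable arguments doubles with each
binade; once it reaches two quadrants (180 degrees), every
representable argument is a multiple of half a turn.]

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x;
        for (x = 0x1p51; x <= 0x1p54; x *= 2.0) {
            double step = nextafter(x, HUGE_VAL) - x;
            printf("near %g, arguments step by %g quadrants (%g degrees)\n",
                   x, step, 90.0 * step);
        }
        return 0;
    }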

You might argue that the sine in quadrants becomes silly
once you get to counting by 180 degrees, or multiples
thereof. But the round function gets equally silly for
comparable numbers -- once the fraction bits go away, the
answer is obvious and trivial to compute. *But it's still
well defined and meaningful.* If you were to start
throwing exceptions, or returning NaNs, just because the
rounded result is so obvious, I assure you that many
people would have occasion to complain, and justifiably
so. Why should sine be any different?

The added wrinkle with the conventional sine is that it
counts in radians. Thus *every* nonzero argument to sine
corresponds to an angle in quadrants or degrees that
takes an infinite number of bits to precisely represent
(pi being irrational and all). Nevertheless, you can in
principle multiply any finite representable floating-point
value by enough bits of pi to retain the reduced angle
(in the interval [0, 2*pi) say) to sufficient precision
to define the corresponding sine function to the same
precision as the argument. How that happens is up to
the implementor. If the program is computing exact multiples
of radians, it doesn't even have to know how to represent
pi to generate meaningful, and often exact, arguments
to the sine function.

I don't know of any technique to do this calculation other
than to do argument reduction to ever more precision for
ever larger arguments, but it can and has been done.
Since the C Standard says nothing about any limitations
on the size of the argument to sine, it's hard to make a
case that a library vendor has any right *not* to do all
this work. FWIW, I wish there were a way to get off the
hook. But I've yet to see presented in this thread any
rationale for *not* computing sine properly that holds
water. And I've yet to see a rationale for picking a
cutoff point, or an error code, or an error return value,
for large sines that also holds water.

Also FWIW, I just computed the change in the sine function
between sin(700.0) and sin(nextafter(700.0, INF)). It's
-859 ulp. That's a *smaller* interval than for exp in
the same argument range. So tell me again why it's okay to
punt on sine and not on exp.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Dan Pop said:
> Why should I do *anything*? Are we discussing here the C standard or your
> implementation of the C library? Are you so stupid as to be unable to
> tell the difference between the two?
>
> I have never expressed any objection to what the C standard says about the
> <math.h> stuff, so why the hell should I have to submit *any* DR on this
> issue?

Don't get me wrong, I never expected you to. It's obvious by now
that you're a stuck opposer. You love to be rude to people --
by calling them stupid, for example -- and you love to sit on
the sidelines and criticize the work of others. But I'm quite
certain you will *never* do anything that would put you in
the position of having your own detailed technical proposals
be criticized by others.

I respond to you because it gives me the opportunity to explain
things to others. I have no expectation that you'll ever let in
a new idea or admit you're anything but right all the time.
Once I find myself just responding to your jibes and shifting
arguments, I know it's time to quit.

This is that time.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Dr Chaos

P.J. Plauger said:
> Sigh. Let's try again. You're still arguing from the notion that
> a given floating-point number represents a range of values, from
> the next lower representable value to the next higher.

I think they're arguing from the notion that "the purpose of
computing is insight, not numbers."
> But if that's how we're going to treat sine, it's hard to make a
> case for treating any other function differently.
> The fast moving
> functions, of which exp and tgamma are just two examples, *also*
> get fuzzier as their arguments get larger. I should stick to
> calculus, but I'll instead give a concrete example. If you compute
> exp(700.0), you get an answer of about 1.014232054735004e+304.
> Now try computing the exponential of the next larger representable
> value (which you can obtain from nextafter(700.0, INF)). It looks
> like 1.01423205473512e+304, printed to the same precision. That
> value is 947 ulp bigger than exp(700.0).

And the relative error is what?
> So by the logic repeatedly expressed in this thread, it's perfectly
> fine for exp(700.0) to return *any* value within plus or minus
> 900-odd ulp. That's nearly *eleven garbage bits*.

What matters is the number of non-garbage bits.

Suppose that number were zero.
> After all,
> whoever computed that 700.0 ought to know the effect of a 1 ulp
> error on the argument, so it's silly -- and a waste of time --
> for the library to try too hard to get the exponential right.

To some degree, yes, but the case is far stronger for sin/cos.
> But if you assume, for the sake of computing any of these fast
> moving functions, that the argument represents some *exact*
> value, then there is a well defined function value that can
> be delivered corresponding to that exact value. And I'll bet
> there's more than one physicist, engineer, and/or economist
> who'd rather get that value than something about 1,000 ulp
> off.

Let's stick to sin and cos. I haven't heard of one explicit
example of somebody who really thought about this and really
needed it.

By contrast, if I had a student writing some optics simulation
software, say simulating chaos in erbium-doped fiber ring lasers, and
he or she took sin() or cos() of a very large value, they were making a
clear error by doing so. It conceivably could happen if they
implemented some textbook formulae naively.

I would feel more comfortable if the library automatically signaled
this, as it would be an instructive point, and it might prevent
wasted calculation or, worse, an improper scientific inference.
> Pragmatically speaking, I do know that plenty of customers
> will complain if your exponential function sucks as much as
> the interval interpretation would permit.

Who would complain, and what would be the particular application
they'd complain about?
> You might argue that the sine in quadrants becomes silly
> once you get to counting by 180 degrees, or multiples
> thereof. But the round function gets equally silly for
> comparable numbers -- once the fraction bits go away, the
> answer is obvious and trivial to compute. *But it's still
> well defined and meaningful.* If you were to start
> throwing exceptions, or returning NaNs, just because the
> rounded result is so obvious, I assure you that many
> people would have occasion to complain, and justifiably
> so. Why should sine be any different?

Because the uses of sine are different, and rounding
produces useful results.
> But I've yet to see presented in this thread any
> rationale for *not* computing sine properly that holds
> water.

A programmer's conceptual blunder.
> And I've yet to see a rationale for picking a
> cutoff point, or an error code, or an error return value,
> for large sines that also holds water.
>
> Also FWIW, I just computed the change in the sine function
> between sin(700.0) and sin(nextafter(700.0, INF)). It's
> -859 ulp. That's a *smaller* interval than for exp in
> the same argument range. So tell me again why it's okay to
> punt on sine and not on exp.

Relative error, and the facts of actual use.
 

Christian Bau

John Cochran said:
> Huh?
>
> What large values for exp(x)?
>
> For most of the floating point implementations out there, an argument
> of about 709.78, give or take, will give a result close to the largest
> representable value. The main difference between sin() and exp() is that
> for every value you can pass to exp() there is an answer, and it is
> possible to come up with the floating point number closest to the
> infinitely precise result. With sin(), however, once you get to an input
> where the magnitude of the ulp is greater than 2 pi, it becomes
> impossible to decide what the correct answer is.

If the argument x is around 709, then according to the theory that a
double number represents an interval, x represents quite a large
interval: for example, +/- 2^-44 with the IEEE 64-bit representation.
Not quite as bad as for the sine function, but you have to start
thinking about whether or not a maths library should return results
with full precision.
 

Dr Chaos

Dr Chaos said:
> Let's stick to sin and cos. I haven't heard of one explicit
> example of somebody who really thought about this and really
> needed it.
>
> By contrast, if I had a student writing some optics simulation
> software, say simulating chaos in erbium-doped fiber ring lasers, and
> he or she took sin() or cos() of a very large value, they were making a
> clear error by doing so. It conceivably could happen if they
> implemented some textbook formulae naively.
>
> I would feel more comfortable if the library automatically signaled
> this, as it would be an instructive point, and it might prevent
> wasted calculation or, worse, an improper scientific inference.

I have a suggestion so that your heroic efforts go toward something
more useful. Here is something that might, conceivably, occasionally
be used:

    sin_2pi_product(double z1, double z2)

It returns the sine of 2*pi*z1*z2, where z1 and z2 are rationals in
IEEE double and 2*pi is expressed to as many significant digits as
necessary.
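
[Editor's aside: a hedged sketch of the proposed function, assuming IEEE
754 doubles and a correctly rounded C99 fma(). The product z1*z2 is
split into a rounded part plus its exact rounding error; since
sin(2*pi*t) has period 1 in t, both parts are reduced modulo 1 (fmod is
exact for doubles) before pi is ever rounded. The final multiplication
by 2*pi still costs a rounding or two; this is a sketch, not a proven
implementation.]

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    double sin_2pi_product(double z1, double z2)
    {
        double hi = z1 * z2;              /* rounded product              */
        double lo = fma(z1, z2, -hi);     /* exact error of that rounding */
        double t  = fmod(hi, 1.0) + fmod(lo, 1.0);  /* reduce mod period 1 */
        return sin(2.0 * M_PI * t);
    }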
 
