Sine code for ANSI C


Dik T. Winter

> No. The *precision* will be reduced. Whether or not the *accuracy*
> is reduced depends on your conjecture about the bits that don't
> exist off the right end of the fraction. Those of us who write
> libraries for a living shouldn't presume that those missing bits
> are garbage. We serve our customers best by assuming that they're
> all zeros, and delivering up the best approximation to the correct
> answer for that assumed value.

Right. The basic functions should be black boxes that assume the input
value is exact. See how the precision of these functions is defined
in Ada (and I had nothing to do with it).
 

CBFalconer

Dik T. Winter said:
P.J. Plauger said:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic].

Have you looked at how it was done at DEC? There is a write-up
about it in the old SigNum Notices of the sixties or seventies.
Mary Decker, I think.

It has been my opinion for a very long time that the real sine
function is not so very important. The most important function in
numerical mathematics is sin2pi(x), which calculates sin(2.pi.x),
and which allows easy and exact range reduction. Alas, that
function is not in C.

That just makes the problem show up elsewhere. Compare that
function with an argument of (1e18 + 0.1) against an argument of
(0.1), for example. At least on any common arithmetic system
known to me.
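
For concreteness, a minimal demonstration of that example with IEEE-754
doubles (a sketch; the spacing quoted in the comment assumes the usual
53-bit double format):

----------------------------------------------------------------------------
#include <stdio.h>

int main(void)
{
    /* With 53-bit doubles, 1 ulp at 1e18 is 128, so adding 0.1 changes
       nothing: the "+ 0.1" is lost before any library function is called. */
    double big   = 1e18 + 0.1;
    double small = 0.1;

    printf("1e18 + 0.1 == 1e18 ? %s\n", big == 1e18 ? "yes" : "no");
    printf("fraction kept in big argument: %.17g\n", big - 1e18);
    printf("small argument               : %.17g\n", small);
    return 0;
}
----------------------------------------------------------------------------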
 

CBFalconer

P.J. Plauger said:
No. The *precision* will be reduced. Whether or not the *accuracy*
is reduced depends on your conjecture about the bits that don't
exist off the right end of the fraction. Those of us who write
libraries for a living shouldn't presume that those missing bits
are garbage. We serve our customers best by assuming that they're
all zeros, and delivering up the best approximation to the correct
answer for that assumed value.

Yes, precision is a much better word. However there is an
argument for assuming the first missing bit is a 1.
 

P.J. Plauger

I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic].

Have you looked at how it was done at DEC? There is a write-up about
it in the old SigNum Notices of the sixties or seventies. Mary Decker,
I think.

I think I'm familiar with that approach, but I've studied so many
over the past ten years that I'm no longer certain. Our latest
approach is to use an array of half-precision values to represent
2/pi to as many bits as we see fit. We have a highly portable and
reasonably fast internal package that does the magic arithmetic
needed for range reduction.
It has been my opinion for a very long time that the real sine function
is not so very important. The most important function in numerical mathematics
is sin2pi(x), which calculates sin(2.pi.x), and which allows easy and
exact range reduction. Alas, that function is not in C.

Yes, sinq and its variants avoid the problem of needing all those
bits of pi. But many formulas actually compute radians before calling
on sin/cos/tan. For them to work in quadrants (or whole go-roundies)
would just shift the 2*pi factor to someplace in the calculation
that retains even less precision.

(If the OP is not suitably frightened about writing his/her own
sine function by now, s/he should be.)
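
For contrast with the multi-precision scheme described above, here is a
minimal sketch of the classic fixed-precision (Cody-Waite style) reduction.
It is illustrative only -- the function name is made up, the split constants
are built at run time rather than quoted, and a production reduction carries
many more bits of pi/2 (or 2/pi) than this:

----------------------------------------------------------------------------
#include <math.h>

/* Sketch only: reduce x to r with x ~= n*(pi/2) + r and |r| about <= pi/4.
 * The "hi" part of pi/2 is truncated to ~24 bits so that n*hi is exact for
 * moderate n; "lo" carries the rest of the double value of pi/2.  A real
 * library extends "lo" with further terms of the true pi/2, and switches to
 * multi-precision arithmetic for very large x. */
static double reduce_pio2_sketch(double x, int *quadrant)
{
    const double pio2 = 1.5707963267948966;   /* double nearest to pi/2 */
    const double hi   = (double)(float)pio2;   /* top ~24 bits of pio2 */
    const double lo   = pio2 - hi;             /* exact remainder of the split */

    double n = nearbyint(x / pio2);            /* nearest multiple of pi/2 */
    double r = (x - n * hi) - n * lo;          /* n*hi is exact while |n| < 2^29 */

    *quadrant = (int)(n - 4.0 * floor(n / 4.0));   /* n mod 4; assumes moderate x */
    return r;
}
----------------------------------------------------------------------------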

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Dik T. Winter said:
P.J. Plauger said:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic].

Have you looked at how it was done at DEC? There is a write-up
about it in the old SigNum Notices of the sixties or seventies.
Mary Decker, I think.

It has been my opinion for a very long time that the real sine
function is not so very important. The most important function in
numerical mathematics is sin2pi(x), which calculates sin(2.pi.x),
and which allows easy and exact range reduction. Alas, that
function is not in C.

That just makes the problem show up elsewhere. Compare that
function with an argument of (1e18 + 0.1) against an argument of
(0.1), for example. At least on any common arithmetic system
known to me.

No, again. Working in quadrants, or go-roundies, *does* solve the
argument-reduction problem -- you just chuck the leading bits
that don't contribute to the final value. But you're *still*
conjecturing that a lack of fraction bits is a lack of accuracy,
and that's not necessarily true. Hence, it behooves the library
writer to do a good job in case it *isn't* true.
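
A minimal sketch of the sin2pi idea quoted above, to make "chuck the leading
bits" concrete. Note that sin2pi is not a standard C function, M_PI is POSIX
rather than ISO C, and the final call to sin only stands in for a dedicated
polynomial kernel:

----------------------------------------------------------------------------
#include <math.h>

/* sin2pi(x) = sin(2*pi*x).  Because the period is exactly 1, range
 * reduction is just dropping the integer part of x, which binary
 * floating point does exactly: the leading bits that cannot affect
 * the result are simply discarded. */
static double sin2pi(double x)
{
    double f = x - floor(x);     /* exact: f is the fractional part, in [0, 1) */
    return sin(2.0 * M_PI * f);  /* stand-in kernel; a real sin2pi would
                                    evaluate a polynomial in f directly */
}
----------------------------------------------------------------------------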

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Yes, precision is a much better word. However there is an
argument for assuming the first missing bit is a 1.

Right, and I acknowledged that argument earlier in this thread.
But when you're talking about how to write a professional sine
function, the argument is irrelevant.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

CBFalconer

P.J. Plauger said:
Dik T. Winter said:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic].

Have you looked at how it was done at DEC? There is a write-up
about it in the old SigNum Notices of the sixties or seventies.
Mary Decker, I think.

It has been my opinion for a very long time that the real sine
function is not so very important. The most important function in
numerical mathematics is sin2pi(x), which calculates sin(2.pi.x),
and which allows easy and exact range reduction. Alas, that
function is not in C.

That just makes the problem show up elsewhere. Compare that
function with an argument of (1e18 + 0.1) against an argument of
(0.1), for example. At least on any common arithmetic system
known to me.

No, again. Working in quadrants, or go-roundies, *does* solve the
argument-reduction problem -- you just chuck the leading bits
that don't contribute to the final value. But you're *still*
conjecturing that a lack of fraction bits is a lack of accuracy,
and that's not necessarily true. Hence, it behooves the library
writer to do a good job in case it *isn't* true.

In the example I gave above, using any binary representation, it
is true. Maybe I should have used + (1.0/3.0) to ensure problems
on any non-trinary system :)

At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.
 

P.J. Plauger

In the example I gave above, using any binary representation, it
is true.

What is true? A typical IEEE double representation will render
1e18 + 0.1 as 1e18, if that's what you intend to signify.
When fed as input to a function, the function can't know that
some poor misguided programmer had aspirations for some of the
bits not retained. But that has nothing to do with how hard
the sine function should work to deliver the best representation
of sin(x) when x == 1e18. You're once again asking the library
writer to make up stories about bits dead and gone -- that's
the responsibility of the programmer in designing the program
and judging the meaningfulness of the result returned by sin(x)
*for that particular program.* The library writer can't and
shouldn't care about what might have been.
Maybe I should have used + (1.0/3.0) to ensure problems
on any non-trinary system :)

So what if you did? That still doesn't change the analysis.
There exists a best internal approximation for 1/3 in any given
floating-point representation. There also exists a best internal
approximation for the sine *of that representation.* The library
can't and shouldn't surmise that 0.333333 (speaking decimal)
is the leading part of 0.33333333333... It could just as well
be the leading part of 0.3333334.
At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.

I mostly agree with that sentiment, except for the "most anticipated
users" part. If I followed that maxim to its logical extreme, my
company would produce very fast math functions that yield results
of mediocre accuracy, with spurious overflows and traps for extreme
arguments. An incredibly large number of math libraries meet that
de facto spec, *and the users almost never complain.*

We aim to please more exacting users, however, to the point that
they often don't notice the pains we've taken to stay out of their
way. So, to review your checklist:

-- We don't aim for instantaneous results, even though a few of
the better designed modern CPUs can now evaluate the common
math functions in single hardware instructions. Our compromise
is to keep the code as fast as possible while still writing in
highly portable C.

-- We *do* aim for no range limitations. We think through every
possible input representation, and produce either a suitable
special value (Inf, -Inf, 0, -0, or NaN) or a finite result close
to the best internal approximation of the function value for that
input, treating the input as exact.

-- We provide full accuracy only for a handful of the commonest
math functions. But we generally aim for worst-case errors of
3 ulp (units in the least-significant place) for float, 4 for
double, and 5 for 113-bit long double. The rare occasions
where we fail to achieve this goal are around the zeros of
messy functions.

Most important, we have tools to *measure* how well we do,
and how well other libraries do. At some point in the not
too distant future, we'll publish some objective comparisons
at our web site. We find that programmers are more likely to
consider upgrading an existing library when they have a
better understanding of its limitations.
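
As a sketch of what measuring error in ulps can look like (illustrative code,
not Dinkumware's tooling: the helper name ulp_error is made up, and it assumes
long double is wider than double so it can serve as a reference; where it is
not, the reference would have to come from a multi-precision package):

----------------------------------------------------------------------------
#include <math.h>
#include <stdio.h>

/* Error of 'got' relative to the reference 'want', expressed in ulps of
 * double near 'want'.  Illustrative only; ignores zeros, subnormals, NaNs. */
static double ulp_error(double got, long double want)
{
    int exp;
    frexpl(fabsl(want), &exp);                 /* |want| = m * 2^exp, 0.5 <= m < 1 */
    long double ulp = ldexpl(1.0L, exp - 53);  /* spacing of doubles near want */
    return (double)(fabsl((long double)got - want) / ulp);
}

int main(void)
{
    double x = 2.5;
    printf("sin(2.5) error: %.3f ulp\n",
           ulp_error(sin(x), sinl((long double)x)));
    return 0;
}
----------------------------------------------------------------------------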

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

CBFalconer

P.J. Plauger said:
.... snip ...


I mostly agree with that sentiment, except for the "most anticipated
users" part. If I followed that maxim to its logical extreme, my
company would produce very fast math functions that yield results
of mediocre accuracy, with spurious overflows and traps for extreme
arguments. An incredibly large number of math libraries meet that
de facto spec, *and the users almost never complain.*

They probably know no better. However I would not castigate traps
for extreme arguments too much - it is surely better for the user
to know something has happened than to use values with zero
significant digits. In a way it is unfortunate that hardware has
replaced software floating point, because it makes it
impracticable to fix problems.

Not all applications require the same FP library, and the usual
library/linker etc construction of C applications makes such
customization fairly easy.
We aim to please more exacting users, however, to the point that
they often don't notice the pains we've taken to stay out of their
way. So, to review your checklist:

-- We don't aim for instantaneous results, even though a few of
the better designed modern CPUs can now evaluate the common
math functions in single hardware instructions. Our compromise
is to keep the code as fast as possible while still writing in
highly portable C.

-- We *do* aim for no range limitations. We think through every
possible input representation, and produce either a suitable
special value (Inf, -Inf, 0, -0, or NaN) or a finite result close
to the best internal approximation of the function value for that
input, treating the input as exact.

A precision measure (count of significant bits) would be handy.
-- We provide full accuracy only for a handful of the commonest
math functions. But we generally aim for worst-case errors of
3 ulp (units in the least-significant place) for float, 4 for
double, and 5 for 113-bit long double. The rare occasions
where we fail to achieve this goal are around the zeros of
messy functions.

Those objectives are generally broad enough to eliminate any worry
over whether the next missing bit is 0 or 1.
Most important, we have tools to *measure* how well we do,
and how well other libraries do. At some point in the not
too distant future, we'll publish some objective comparisons
at our web site. We find that programmers are more likely to
consider upgrading an existing library when they have a
better understanding of its limitations.

Constructing those tools, in itself, is an application requiring a
much different math library. I suspect Gauss has no small hand.
 

P.J. Plauger

They probably know no better. However I would not castigate traps
for extremes arguments too much - it is surely better for the user
to know something has happened than to use values with zero
significant digits.

Only sin/cos/tan suffer from catastrophic significance loss, and
these functions rarely trap. I'm talking about things like

hypot(DBL_MAX / 2, DBL_MAX / 2)

which has a well defined and precise value, but which often
causes an intermediate overflow and/or trap in naive implementations.
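
A small illustration of that hypot example. The names hypot_naive and
hypot_scaled are made up, and the scaled version is only a sketch of the
usual rescaling trick, not any particular library's implementation:

----------------------------------------------------------------------------
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Naive hypot: the intermediate x*x overflows to +Inf even though the
 * true result, (DBL_MAX/2)*sqrt(2), is comfortably representable. */
static double hypot_naive(double x, double y)
{
    return sqrt(x * x + y * y);
}

/* Scaled variant: factor out the larger magnitude so the squares stay
 * in range. */
static double hypot_scaled(double x, double y)
{
    double ax = fabs(x), ay = fabs(y);
    if (ax < ay) { double t = ax; ax = ay; ay = t; }
    if (ax == 0.0) return 0.0;
    double r = ay / ax;
    return ax * sqrt(1.0 + r * r);
}

int main(void)
{
    double a = DBL_MAX / 2;
    printf("naive : %g\n", hypot_naive(a, a));   /* typically inf */
    printf("scaled: %g\n", hypot_scaled(a, a));  /* about 1.27e308 */
    return 0;
}
----------------------------------------------------------------------------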
In a way it is unfortunate that hardware has
replaced software floating point, because it makes it
impracticable to fix problems.

I don't understand that comment.
Not all applications require the same FP library, and the usual
library/linker etc construction of C applications makes such
customization fairly easy.

Indeed. That's how we can sell replacement and add-on libraries.
A precision measure (count of significant bits) would be handy.

That's the ulp measure described below.
Those objectives are generally broad enough to eliminate any worry
over whether the next missing bit is 0 or 1.

No, they're orthogonal to that issue, as I keep trying to explain
to you. We do the best we can with the function arguments given.
It is the inescapable responsibility of the programmer to
understand the effect of uncertainties in those argument values on
the uncertainties in the function results.
Constructing those tools, in itself, is an application requiring a
much different math library. I suspect Gauss has no small hand.

His spirit indeed lives on. We mostly use Maplesoft these
days, having long since given up on Mathematica as too
buggy.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

CBFalconer

P.J. Plauger said:
.... snip ...


I don't understand that comment.

I mean that the library/routine creator is fairly well restricted
to the FP format supplied by the hardware, at the cost of
horrendous time penalties.
Indeed. That's how we can sell replacement and add-on libraries.
.... snip ...


That's the ulp measure described below.

I meant something that was automatically associated with each
value, and revised by the process of calculation. For example,
multiplying something with 3 digit precision by something with two
digit precision can have no better than 2 digit precision,
regardless of actual significant bits generated. Subtraction of
similar sized numbers can produce violent reductions. This is the
sort of thing one can build into a custom software FP
representation, rather than doing some sort of worst case analysis
up front.
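
A toy sketch of that kind of significance tracking; the tracked type and its
update rules below are purely illustrative:

----------------------------------------------------------------------------
#include <math.h>
#include <stdio.h>

typedef struct {
    double value;
    int    sig_bits;   /* estimated significant bits carried by value */
} tracked;

/* Multiplication: the result is no more significant than the weaker operand. */
static tracked t_mul(tracked a, tracked b)
{
    tracked r = { a.value * b.value,
                  a.sig_bits < b.sig_bits ? a.sig_bits : b.sig_bits };
    return r;
}

/* Subtraction: cancellation removes roughly as many leading bits as the
 * operands had in common. */
static tracked t_sub(tracked a, tracked b)
{
    tracked r = { a.value - b.value, 0 };
    int sig = a.sig_bits < b.sig_bits ? a.sig_bits : b.sig_bits;
    if (r.value != 0.0 && a.value != 0.0) {
        int lost = (int)floor(log2(fabs(a.value) / fabs(r.value)));
        if (lost > 0)
            sig -= lost;
    }
    r.sig_bits = sig > 0 ? sig : 0;
    return r;
}

int main(void)
{
    tracked a = { 1.000123, 40 }, b = { 1.000101, 40 };
    tracked p = t_mul(a, b);
    tracked d = t_sub(a, b);
    printf("product    %g keeps about %d significant bits\n", p.value, p.sig_bits);
    printf("difference %g keeps about %d significant bits\n", d.value, d.sig_bits);
    return 0;
}
----------------------------------------------------------------------------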

The replacement library can hardly replace the underlying FP
operations, barring the sort of kludges designed to use or bypass
emulators.
No, they're orthogonal to that issue, as I keep trying to explain
to you. We do the best we can with the function arguments given.
It is the inescapable responsibility of the programmer to
understand the effect of uncertainties in those argument values on
the uncertainties in the function results.

Where you assume that those arguments are exact. In reality, if
they come from almost any physical entity, they express some form
of measurement range. The only exactness is in the individual
terms of a Taylor series, for example. If you compute something
to 3 ulp, a 1/2 bit error in the argument value is practically
meaningless and can usually be ignored, barring a nearby
singularity. All of which leads to the same thing.

Without suitable actions and corrections results can have
unexpected biases. The horrible example is failure to round (as
did some of the early Microsoft Basic fp routines, and my first
efforts :). A more subtle example is rounding up (or down) from
0.5.

None of this is criticism, but I think it is essential to have a
clear idea of the possible failure mechanisms of our algorithms in
order to use them properly. Early nuclear pulse height analyzers
gave peculiar results until the effects of differential
non-linearity were recognized. After that the causes of such
non-linearity were often weird and seemingly totally disconnected.
 

Christian Bau

"P.J. Plauger said:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.

Obviously you don't get _exact_ values for any argument except sin (0.0)
:).

I guess you mean "the exact value, correctly rounded to the nearest
valid floating point number", since libraries that round to one of the
two nearest valid floating point numbers are already available?
 

-wombat-

suri said:
im sorry i meant i *could not* find the file

I stopped using Linux shortly after growing out of puberty. I developed more
sophisticated tastes in life... but I digress...

As someone else pointed out, there is no canonical ANSI C implementation.
There are various and sundry different ways of computing transcendental
functions with various accuracies and efficiencies.

The FreeBSD code mentions "a special Remez algorithm", but it boils down to
a Horner method polynomial computation (and if you don't know what the
Horner method is, google it):

----------------------------------------------------------------------------
/*
* Copyright (c) 1987, 1993
* The Regents of the University of California. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by the University of
* California, Berkeley and its contributors.
* 4. Neither the name of the University nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* @(#)trig.h 8.1 (Berkeley) 6/4/93
*/

#include "mathimpl.h"

vc(thresh, 2.6117239648121182150E-1 ,b863,3f85,6ea0,6b02, -1, .85B8636B026EA0)
vc(PIo4, 7.8539816339744830676E-1 ,0fda,4049,68c2,a221, 0, .C90FDAA22168C2)
vc(PIo2, 1.5707963267948966135E0 ,0fda,40c9,68c2,a221, 1, .C90FDAA22168C2)
vc(PI3o4, 2.3561944901923449203E0 ,cbe3,4116,0e92,f999, 2, .96CBE3F9990E92)
vc(PI, 3.1415926535897932270E0 ,0fda,4149,68c2,a221, 2, .C90FDAA22168C2)
vc(PI2, 6.2831853071795864540E0 ,0fda,41c9,68c2,a221, 3, .C90FDAA22168C2)

ic(thresh, 2.6117239648121182150E-1 , -2, 1.0B70C6D604DD4)
ic(PIo4, 7.8539816339744827900E-1 , -1, 1.921FB54442D18)
ic(PIo2, 1.5707963267948965580E0 , 0, 1.921FB54442D18)
ic(PI3o4, 2.3561944901923448370E0 , 1, 1.2D97C7F3321D2)
ic(PI, 3.1415926535897931160E0 , 1, 1.921FB54442D18)
ic(PI2, 6.2831853071795862320E0 , 2, 1.921FB54442D18)

#ifdef vccast
#define thresh vccast(thresh)
#define PIo4 vccast(PIo4)
#define PIo2 vccast(PIo2)
#define PI3o4 vccast(PI3o4)
#define PI vccast(PI)
#define PI2 vccast(PI2)
#endif

#ifdef national
static long fmaxx[] = { 0xffffffff, 0x7fefffff};
#define fmax (*(double*)fmaxx)
#endif /* national */

static const double
zero = 0,
one = 1,
negone = -1,
half = 1.0/2.0,
small = 1E-10, /* 1+small**2 == 1; better values for small:
* small = 1.5E-9 for VAX D
* = 1.2E-8 for IEEE Double
* = 2.8E-10 for IEEE Extended
*/
big = 1E20; /* big := 1/(small**2) */

/* sin__S(x*x) ... re-implemented as a macro
* DOUBLE PRECISION (VAX D format 56 bits, IEEE DOUBLE 53 BITS)
* STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
* CODED IN C BY K.C. NG, 1/21/85;
* REVISED BY K.C. NG on 8/13/85.
*
*          sin(x*k) - x
* RETURN  --------------- on [-PI/4,PI/4], where k = pi/PI, PI is the rounded
*               x
* value of pi in machine precision:
*
* Decimal:
* pi = 3.141592653589793 23846264338327 .....
* 53 bits PI = 3.141592653589793 115997963 ..... ,
* 56 bits PI = 3.141592653589793 227020265 ..... ,
*
* Hexadecimal:
* pi = 3.243F6A8885A308D313198A2E....
* 53 bits PI = 3.243F6A8885A30 = 2 * 1.921FB54442D18
* 56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
*
* Method:
* 1. Let z=x*x. Create a polynomial approximation to
* (sin(k*x)-x)/x = z*(S0 + S1*z^1 + ... + S5*z^5).
* Then
* sin__S(x*x) = z*(S0 + S1*z^1 + ... + S5*z^5)
*
* The coefficient S's are obtained by a special Remez algorithm.
*
* Accuracy:
* In the absence of rounding error, the approximation has absolute error
* less than 2**(-61.11) for VAX D FORMAT, 2**(-57.45) for IEEE DOUBLE.
*
* Constants:
* The hexadecimal values are the intended ones for the following constants.
* The decimal values may be used, provided that the compiler will convert
* from decimal to binary accurately enough to produce the hexadecimal values
* shown.
*
*/

vc(S0, -1.6666666666666646660E-1 ,aaaa,bf2a,aa71,aaaa, -2, -.AAAAAAAAAAAA71)
vc(S1, 8.3333333333297230413E-3 ,8888,3d08,477f,8888, -6, .8888888888477F)
vc(S2, -1.9841269838362403710E-4 ,0d00,ba50,1057,cf8a, -12, -.D00D00CF8A1057)
vc(S3, 2.7557318019967078930E-6 ,ef1c,3738,bedc,a326, -18, .B8EF1CA326BEDC)
vc(S4, -2.5051841873876551398E-8 ,3195,b3d7,e1d3,374c, -25, -.D73195374CE1D3)
vc(S5, 1.6028995389845827653E-10 ,3d9c,3030,cccc,6d26, -32, .B03D9C6D26CCCC)
vc(S6, -6.2723499671769283121E-13 ,8d0b,ac30,ea82,7561, -40, -.B08D0B7561EA82)

ic(S0, -1.6666666666666463126E-1 , -3, -1.555555555550C)
ic(S1, 8.3333333332992771264E-3 , -7, 1.111111110C461)
ic(S2, -1.9841269816180999116E-4 , -13, -1.A01A019746345)
ic(S3, 2.7557309793219876880E-6 , -19, 1.71DE3209CDCD9)
ic(S4, -2.5050225177523807003E-8 , -26, -1.AE5C0E319A4EF)
ic(S5, 1.5868926979889205164E-10 , -33, 1.5CF61DF672B13)

#ifdef vccast
#define S0 vccast(S0)
#define S1 vccast(S1)
#define S2 vccast(S2)
#define S3 vccast(S3)
#define S4 vccast(S4)
#define S5 vccast(S5)
#define S6 vccast(S6)
#endif

#if defined(vax)||defined(tahoe)
# define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*(S5+z*S6)))))))
#else /* defined(vax)||defined(tahoe) */
# define sin__S(z) (z*(S0+z*(S1+z*(S2+z*(S3+z*(S4+z*S5))))))
#endif /* defined(vax)||defined(tahoe) */

/* cos__C(x*x) ... re-implemented as a macro
* DOUBLE PRECISION (VAX D FORMAT 56 BITS, IEEE DOUBLE 53 BITS)
* STATIC KERNEL FUNCTION OF SIN(X), COS(X), AND TAN(X)
* CODED IN C BY K.C. NG, 1/21/85;
* REVISED BY K.C. NG on 8/13/85.
*
*                        x*x
* RETURN  cos(k*x) - 1 + ----- on [-PI/4,PI/4], where k = pi/PI,
*                          2
* PI is the rounded value of pi in machine precision:
*
* Decimal:
* pi = 3.141592653589793 23846264338327 .....
* 53 bits PI = 3.141592653589793 115997963 ..... ,
* 56 bits PI = 3.141592653589793 227020265 ..... ,
*
* Hexadecimal:
* pi = 3.243F6A8885A308D313198A2E....
* 53 bits PI = 3.243F6A8885A30 = 2 * 1.921FB54442D18
* 56 bits PI = 3.243F6A8885A308 = 4 * .C90FDAA22168C2
*
*
* Method:
* 1. Let z=x*x. Create a polynomial approximation to
* cos(k*x)-1+z/2 = z*z*(C0 + C1*z^1 + ... + C5*z^5)
* then
* cos__C(z) = z*z*(C0 + C1*z^1 + ... + C5*z^5)
*
* The coefficient C's are obtained by a special Remez algorithm.
*
* Accuracy:
* In the absence of rounding error, the approximation has absolute error
* less than 2**(-64) for VAX D FORMAT, 2**(-58.3) for IEEE DOUBLE.
*
*
* Constants:
* The hexadecimal values are the intended ones for the following constants.
* The decimal values may be used, provided that the compiler will convert
* from decimal to binary accurately enough to produce the hexadecimal values
* shown.
*/

vc(C0, 4.1666666666666504759E-2 ,aaaa,3e2a,a9f0,aaaa, -4, .AAAAAAAAAAA9F0)
vc(C1, -1.3888888888865302059E-3 ,0b60,bbb6,0cca,b60a, -9, -.B60B60B60A0CCA)
vc(C2, 2.4801587285601038265E-5 ,0d00,38d0,098f,cdcd, -15, .D00D00CDCD098F)
vc(C3, -2.7557313470902390219E-7 ,f27b,b593,e805,b593, -21, -.93F27BB593E805)
vc(C4, 2.0875623401082232009E-9 ,74c8,320f,3ff0,fa1e, -28, .8F74C8FA1E3FF0)
vc(C5, -1.1355178117642986178E-11 ,c32d,ae47,5a63,0a5c, -36, -.C7C32D0A5C5A63)

ic(C0, 4.1666666666666504759E-2 , -5, 1.555555555553E)
ic(C1, -1.3888888888865301516E-3 , -10, -1.6C16C16C14199)
ic(C2, 2.4801587269650015769E-5 , -16, 1.A01A01971CAEB)
ic(C3, -2.7557304623183959811E-7 , -22, -1.27E4F1314AD1A)
ic(C4, 2.0873958177697780076E-9 , -29, 1.1EE3B60DDDC8C)
ic(C5, -1.1250289076471311557E-11 , -37, -1.8BD5986B2A52E)

#ifdef vccast
#define C0 vccast(C0)
#define C1 vccast(C1)
#define C2 vccast(C2)
#define C3 vccast(C3)
#define C4 vccast(C4)
#define C5 vccast(C5)
#endif

#define cos__C(z) (z*z*(C0+z*(C1+z*(C2+z*(C3+z*(C4+z*C5))))))
----------------------------------------------------------------------------
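
For readers who would rather not google Horner's method: the sin__S and
cos__C macros above are just hand-unrolled versions of this nested
multiply-add loop (a generic sketch, not part of the FreeBSD source):

----------------------------------------------------------------------------
/* Evaluate c[0] + c[1]*z + ... + c[n-1]*z^(n-1) by Horner's method:
 * start from the highest coefficient and repeatedly multiply and add. */
static double horner(const double c[], int n, double z)
{
    double r = c[n - 1];
    for (int i = n - 2; i >= 0; --i)
        r = r * z + c[i];
    return r;
}
----------------------------------------------------------------------------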
 

Christian Bau

CBFalconer said:
At any rate, the ideal system produces instantaneous results with
no range limitations and the full accuracy of the arithmetic
representation. So the designer should pick a compromise that
will suffice for most anticipated users. Here there be dragons.

Alternatively, a choice of functions that use different compromises
between speed and precision. Let the user of the library decide what is
more important.
 

Christian Bau

"P.J. Plauger said:
We aim to please more exacting users, however, to the point that
they often don't notice the pains we've taken to stay out of their
way.

Just a thought: High quality arithmetic especially benefits the clueless
user, who would have no idea why things go wrong if they go wrong and
who would have no chance to fix things if they go wrong.
 

CBFalconer

Christian said:
Alternatively, a choice of functions that use different compromises
between speed and precision. Let the user of the library decide what
is more important.

That is not as feasible as changing complete libraries. For one
thing, the language specifies the names involved. For another, an
efficient library is highly interlinked. A routine to accept an
argument and a pointer to a structure with a list of coefficients
for polynomial expansion is likely to be heavily used. So a mix
and match strategy is very likely to create virtually insoluble
bugs. The need for -lm in linking makes this easy, by simply
replacing the library proper at the appropriate times.
 

P.J. Plauger

P.J. Plauger said:
I've gone to both extremes over the past several decades.
Our latest math library, still in internal development,
can get exact function values for *all* argument values.
It uses multi-precision argument reduction that can gust
up to over 4,000 bits [sic]. "The Standard C Library"
represents an intermediate viewpoint -- it stays exact
until about half the fraction bits go away.

I still haven't decided how hard we'll try to preserve
precision for large arguments in the next library we ship.

Obviously you don't get _exact_ values for any argument except sin (0.0)
:).

And +1, and -1, of course. Yes, IEC 60559 requires that the
floating-point inexact status bit be set for any value that
can't be precisely represented, and we do that. What I
meant by "exact" above was shorthand for "the nearest
internal representation to the actual function value, taking
the argument(s) as exact."
I guess you mean "the exact value, correctly rounded to the nearest
valid floating point number", since libraries that round to one of the
two nearest valid floating point numbers are already available?

Several libraries do give the best rounded result for sin(x), for
small enough x. One or two are perverse enough to do proper
argument reduction over a huge range, perhaps even the full
range of values. Several more sensible libraries compute a very
good sin(x) over a large but not unbounded range. I've yet to
decide which of these choices best serves our customers.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Paul Hsieh

-wombat- said:
Some platforms have hardware instructions that compute sin() and the
compiler will emit them, bypassing the libm library's implementation. This
is pretty much true for ia32 and ia64.

I'm pretty sure it's not true of IA64 and it's not really true of AMD64
either. (Though I personally don't agree with the choice in either
case.)

As to the original question:

http://www.research.scea.com/gdc2003/fast-math-functions.html

shows some techniques that I think are close to the state of the art.
 

Christian Bau

"P.J. Plauger said:
And +1, and -1, of course.

Of course not. sin (+1) and sin (-1) are irrational, and sin (x) is not
+/- 1 for any rational x.
What I
meant by "exact" above was shorthand for "the nearest
internal representation to the actual function value, taking
the argument(s) as exact."

That would be "the exact value, correctly rounded to the nearest valid
floating point number", right?
Several libraries do give the best rounded result for sin(x), for
small enough x.

I think that is actually not too difficult if you write code
specifically for one floating point implementation: Do the calculation
in double double precision, then round to double. Estimate the maximum
rounding error which should be only a tiny bit more than 1/2 ulp. Then
some non-trivial maths should give you a small set of worst-case numbers
(those values of x such that sin (x) is almost exactly between two
neighbouring floating point numbers).

If you are very careful to keep the rounding errors small then you
should have very few values x for which you cannot prove that you get
the correct result. Check those values by hand (even if you cannot
_prove_ that sin (x) is rounded correctly, about half the time or more
it should happen anyway). Play around with coefficients to get the
correct result more often, and write an explicit test for the remaining
cases.
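
The "compute in wider precision, then round once" step can be sketched as
follows, using long double as a stand-in for the double-double arithmetic
described (this alone does not give correct rounding; it only narrows the
set of doubtful cases near the halfway points):

----------------------------------------------------------------------------
#include <math.h>

/* Compute in wider precision, then round once to double.  On x86 the
 * 80-bit long double gives roughly 11 extra bits; a serious attempt would
 * use double-double or a multi-precision package instead. */
static double sin_via_wider(double x)
{
    return (double)sinl((long double)x);
}
----------------------------------------------------------------------------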
One or two are perverse enough to do proper
argument reduction over a huge range, perhaps even the full
range of values.

fdlibm does that. And an implementation that is suitable for Java is
forced to do this by the Java standard.
Several more sensible libraries compute a very
good sin(x) over a large but not unbounded range. I've yet to
decide which of these choices best serves our customers.

Some people would just pick which choice produces the fastest benchmark
results :-(

An interesting mathematical problem: find approximations for sine and
cosine such that

s*s + c*c <= 1.0

is guaranteed when evaluated in floating-point arithmetic.
 

Dik T. Winter

> I meant something that was automatically associated with each
> value, and revised by the process of calculation. For example,
> multiplying something with 3 digit precision by something with two
> digit precision can have no better that 2 digit precision,
> regardless of actual significant bits generated. Subtraction of
> similar sized numbers can produce violent reductions. This is the
> sort of thing one can build into a custom software FP
> representation, rather than doing some sort of worst case analysis
> up front.

This would mean that almost all results in numerical algebra would be
without value. E.g. calculating the solution of a system of 40 linear
equations might easily result in something without value in this measure.
Moreover, you have to be extremely careful here. I once designed an
arcsine routine where subtraction of equal-sized numbers was commonplace;
however, in the ultimate result the introduced inaccuracies did not play
a role, as careful analysis proved.
>
> Where you assume that those arguments are exact. In reality, if
> they come from almost any physical entity, they express some form
> of measurement range. The only exactness is in the individual
> terms of a Taylor series, for example. If you compute something
> to 3 ulp, a 1/2 bit error in the argument value is practically
> meaningless and can usually be ignored, barring a nearby
> singularity. All of which leads to the same thing.

Wrong. If you compute (for instance) the sine with an accuracy of 3
ulp, that means that you expect exact values. A 1/2 bit error in the
argument might give a huge change in the result.
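
A quick illustration of that last point: near 1e18 adjacent doubles are 128
apart, so even a one-ulp change in the argument moves the sine to an
unrelated value (a small demonstration, assuming IEEE-754 doubles):

----------------------------------------------------------------------------
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e18;
    double y = nextafter(x, 2e18);     /* the next representable double: x + 128 */

    printf("spacing at 1e18 : %g\n", y - x);
    printf("sin(x)          : %+.17g\n", sin(x));
    printf("sin(next double): %+.17g\n", sin(y));
    return 0;
}
----------------------------------------------------------------------------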
 
