Sine code for ANSI C


glen herrmannsfeldt

CBFalconer wrote:

(snip regarding sin() function)
And that problem is inherent. Adding precision bits for the
reduction will not help, because the input value doesn't have
them. It is the old problem of differences of similar sized
quantities.


When I was in high school I knew someone with a brand
new HP-55 calculator. (You can figure out when that was
if you want.) He was so proud of it, and sure of the
answers it gave. For the sine (in degrees) of 9.999999999e99
it gave something like 0.53, which is obviously wrong
because 9.999999999e99 is a multiple of 180.

Yes, argument reduction is always a problem, because
people will expect it to be right, no matter how useless
the result is. It is a little easier in degrees than
in radians, yet few languages support SIND() and
such (PL/I being one that does).
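For illustration, here is a minimal sketch of such a degrees-based
sine in C. The name sind() and the constant are assumptions, not
PL/I's actual implementation; the point is that reduction modulo 360
is exact, because fmod introduces no rounding error on finite doubles.

#include <math.h>

/* hypothetical sind(): reduce in degrees, where reduction is exact,
   then convert only the small residue to radians */
double sind(double degrees)
{
    double r = fmod(degrees, 360.0);  /* exact for all finite doubles */
    return sin(r * (3.14159265358979323846 / 180.0));
}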

-- glen
 

glen herrmannsfeldt

P.J. Plauger wrote:

(snip, someone wrote)

(someone else wrote)
No. The *precision* will be reduced. Whether or not the *accuracy*
is reduced depends on your conjecture about the bits that don't
exist off the right end of the fraction. Those of us who write
libraries for a living shouldn't presume that those missing bits
are garbage. We serve our customers best by assuming that they're
all zeros, and delivering up the best approximation to the correct
answer for that assumed value.

Why assume they are all zero? In what cases would someone
want the sine of a number that was a large integer number of
radians? Well, 50,000 isn't so large, but say that it
was 1e50?

The first math library that I used would refuse to do
double precision sine at pi*2**50, pretty much no fraction
bits left. I can almost imagine some cases for an integer
number of degrees, but that would almost only make sense
in decimal floating point arithmetic. (Many calculators
but few computers.)
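To put a number on that: near 1e50 the gap between adjacent doubles
is about 2e34, astronomically larger than 2*pi, so the reduced angle
consists entirely of bits the caller never supplied. A quick check
(a sketch; nextafter is from the C99 <math.h>):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 1e50;
    /* distance to the next representable double: about 2.1e34 */
    printf("ulp(1e50) = %g\n", nextafter(x, 2.0 * x) - x);
    return 0;
}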

-- glen
 

glen herrmannsfeldt

Dik T. Winter wrote:
(snip)
(someone wrote)
(someone else wrote)
Must everything make physical sense? Perhaps it makes mathematical sense?
(snip)

Not in mathematical applications, where the argument to the sine function
can very well be exact.

Show me a useful, real life, mathematical problem that requires the
evaluation of 167873 radians. Now, I can almost imagine problems
involving large integer numbers of degrees. If those degrees
are multiplied by (pi/180), possibly with a different value
of pi than is used in the argument reduction, all useful bits
are lost.

Yes, radians are nice in many cases, but this is not one.

-- glen
 

glen herrmannsfeldt

osmium wrote:

(snip)
You might mention to Mr. Pop the notion of a revolution counter, such as a
Veeder-Root counter on the propeller shaft of an ocean going ship. My
guess is that there are a lot of radians between England and New Jersey. If
Dan Pop needs the name of a particular ship to make the point sufficiently
concrete, I expect some one could provide the name of such a ship.

Design a rotating shaft counter that can count in exact radians
and I will figure out how to calculate the sine of it.

I can easily design one that will count revolutions, degrees,
or most any other integer multiple of revolutions, but not
radians.

There is no point in doing argument reduction with an exact,
to thousands of bits, representation of pi when the user can't
generate arguments with such pi, and has no source of such
arguments.

Go over to comp.lang.pl1, and you will find the sind() and
cosd() functions, which could do exact argument reduction
on large integers. (I have no idea whether they do or not.)

-- glen
 

glen herrmannsfeldt

P.J. Plauger wrote:
(snip)
Yes, it's concrete enough to show that you still don't get it.
(snip)

You may think that it's unlikely that all the bits of that value are
meaningful in a given application, and you're probably right. But
you're not *definitely* right. So as a library vendor, best engineering
practice is to supply all the bits that *might* make sense, in case
they actually do. A good quality implementation will continue to
compute the sine of small arguments quickly, but if it has to take
progressively longer to reduce ever larger arguments, then so be it.
If you don't want the cost, reduce the arguments quickly, by your own
crude methods that are good enough for your application, before calling
sine.

By returning a value you are making a claim that there is some
sense in calculating that value. There is no sense in calculating
such large values of radians such that the uncertainty in the
angle is greater than pi.
Note also that even sin(1e-38) *might not* be good to full precision,
because the argument happens not to be good to full precision, but
you have no criterion for judging how many bits are worth computing.
Since it's so easy to compute them, and since they're often all
meaningful, nobody wastes much time debating the cost of producing
bits that are potentially garbage.

Well, small arguments in radians are easy to compute. One
could, for example, take a numeric derivative at that point,
say (sin(2e-38)-sin(1e-38))/(2e-38-1e-38), and likely get an
exact 1.00.
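A sketch of that calculation, assuming a library that (like any good
one) returns x itself for arguments this far below 1, where sin(x)
and x agree to full double precision:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* numerator and denominator are equal bit for bit, so this
       divided difference prints exactly 1 */
    printf("%.17g\n", (sin(2e-38) - sin(1e-38)) / (2e-38 - 1e-38));
    return 0;
}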

(snip)
You also didn't get that you're still not giving a sufficiently concrete
criterion for when the sine function should stop trying. If you don't
like that a sine function wastes your program microseconds computing
what you shouldn't have told it to compute, then you need to be more
precise about where it can and should give up. You state above that
the function result is definitely meaningless once 1 ulp in the argument
weighs more than 2*pi, but why go that far? Aside from a phase factor,
you've lost all angle information at pi/2. But then how meaningful is it
to have just a couple of bits of fraction information? To repeat what
you stated above:

(snip)

Any of those are close enough for me.
So you've taken on a serious responsibility here. You have to tell us
library vendors just how small "not so large" is, so we know where to
produce quick garbage instead of slower correct answers. If you don't,
you have no right to deem our products unacceptable, or even simply
wasteful.

At that point, all you are saying is that the user should do
their own argument reduction, using the appropriate method
which only the user can know, before calling sin().

Java, at least, defines a standard value for pi, so that programs
can use that in generating arguments. Should you assume that
the argument, in radians, is a multiple of that value of pi,
or of a much more accurate one?

-- glen
 

glen herrmannsfeldt

CBFalconer said:
"P.J. Plauger" wrote:
... snip ...
(snip)

I am gradually coming around to your point of view here, so I am
rewording it. To me, the argument is that the input argument
represents a range of values (absent contrary information) with a
known uncertainty. The most probable actual value is the exact
value. Since d(sinx)/dx is strictly bounded, the resultant
function is never going to fly off in all directions, and will not
become totally meaningless unless that argument uncertainty is in
the order of PI.

You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.
We can, at the cost of some computational complexity, assume the
input argument is exact and reduce it to a principal value with an
accuracy governed by the precision of PI. There is no essential
difference between this operation and forming a real value for
1/3. The critical thing there is the precision of our knowledge
of 3.

Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)
For a concrete example, consider a mechanism that rotates and
detents at 45 degree intervals. The sine of the resultant angle
can only have 5 discrete values. However the net rotation is
described by the sum of a counter (of full turns) and one of 8
discrete angles, in units of 45 deg. After letting that engine
whir for a day and a half at 1000 rpm and recording
the final angle, we want to know the sine of that angle. The
computation machinery knows nothing about detents, all it has to
work with is PI, the net rotation angle, and the computation
algorithm for sin(x).

Now you multiply the angle, in multiples of 45 degrees,
by (pi/4) accurate to 53 or so bits for an IEEE double.
If the argument reduction is done with a 4000 bit accurate
pi, you find many more than 5 values for sine.
At some point the accuracy of the results will become worse than
the accuracy of the detents, and all blows up. This is not the
same point as that reached by simple modulo PI arithmetic.

It may have been fixed by now, but it was well known in
the past that Postscript could not do a proper 90 degree
or 180 degree rotation. It did it by calculating the sine
and cosine of angles in radians, converted from degrees.
It seems that the value of pi used was different from that
used in argument reduction, so that sin(180 degrees) was
not zero. If it is not zero, it is possible that a horizontal
line in the input will not be horizontal in the output,
when rounded (or truncated, I forget) to pixel positions.
I have been told by professional typesetters that it was
visible in the output of high resolution phototypesetters.
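The failure mode is easy to reproduce in C (a sketch; the low-order
digits depend on the library). Converting 180 degrees with pi rounded
to double cannot land exactly on the mathematical pi, so the sine
comes back on the order of 1e-16 rather than zero:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double pi = 3.14159265358979323846;  /* pi rounded to double */
    /* prints roughly 1.2e-16, not 0 */
    printf("sin(180 deg) = %.17g\n", sin(180.0 * (pi / 180.0)));
    return 0;
}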

-- glen
 

P.J. Plauger

P.J. Plauger wrote:

(snip, someone wrote)


(someone else wrote)



Why assume they are all zero?

For the same reason that the library assumes 0.5 (which really looks
like 0.5000000 or thereabouts) is 0.5. The value of a floating-point
number is defined by its bits. Part of that definition effectively
requires you to treat all the bits you don't see as zeros.
In what cases would someone
want the sine of a number that was a large integer number of
radians? Well, 50,000 isn't so large, but say that it
was 1e50?

One of the interesting challenges for us library writers is that
we have to serve all possible customers. We don't get to rule on
which ones are being silly -- or overly demanding. But if we
slight so much as a single case, however extreme or apparently
unlikely, customers rightly castigate us, and reviewers emphasize
the failure all out of proportion to the number of cases we get
right. So *you* can decide that 50,000 radians is worth reducing
properly but 1e50 is not. Mazeltov. Until the C Standard lets us
off the hook beyond some given argument magnitude, however, we
have no excuse not to try.

In fact, one of the commercial test suites we currently use tests
sin(x) up to about 1e18. Is that silly? I can't find any place in
the C Standard that says it isn't. So far, no major potential
customer has seized on this suite as an acceptance test, but *you*
try to explain to some high-level decision maker why it's okay
to pay us for a product that fails a handful of tests.
The first math library that I used would refuse to do
double precision sine at pi*2**50, pretty much no fraction
bits left.

That's nice. Who got to pick the cutoff point?
I can almost imagine some cases for an integer
number of degrees, but that would almost only make sense
in decimal floating point arithmetic. (Many calculators
but few computers.)

Really? I can't imagine why.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

Dik T. Winter wrote:
(snip)
(someone wrote)

(someone else wrote)

Must everything make physical sense? Perhaps it makes mathematical sense?

Show me a useful, real life, mathematical problem that requires the
evaluation of 167873 radians.

Now you're playing Dan Pop's game. Somebody contrives a problem
that's arguably practical and you get to rule, from the catbird
seat, that it's not really a real-life problem. (And yet both
you and Dan Pop toss out equally academic problems as though
they're pragmatic engineering issues.)

But it doesn't *matter* whether or not I can score points at your
game, because that's not the game I'm in (as I keep repeating).
My company endeavors to make high quality libraries that satisfy
programming language standards and meet the needs of demanding
customers. We don't have the luxury of demanding in turn that our
customers prove that everything they do makes sense. Our treaty
point is the C Standard, in this case. We don't match it, they
have a reason to be pissed. We match it, they don't have a case.
(Well, there are also QOI issues, but that's not at issue here.)
Now, I can almost imagine problems
involving large integer numbers of degrees.

Why is that better? It's just easier to comprehend the truncations
involved, and easier to compute the associated nonsense, or valid
reduced argument. Still the same problem.
If those degrees
are multiplied by (pi/180), possibly with a different value
of pi than is used in the argument reduction, all useful bits
are lost.

Indeed. As a library vendor, you use the best possible value of
pi. If that's not what the customer used, it's the customer's
problem. (I once had a customer who was sure we had a bad tangent
function, until I pointed out that he was approximating pi in
his program with 22/7.)
Yes, radians are nice in many cases, but this is not one.

Only because it's harder to reduce the argument accurately.
In either case, you can't give any more meaning to an argument
reduced from a large magnitude, *or any less.*

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

osmium wrote:

(snip)


Design a rotating shaft counter that can count in exact radians
and I will figure out how to calculate the sine of it.

I can easily design one that will count revolutions, degrees,
or most any other integer multiple of revolutions, but not
radians.

There is no point in doing argument reduction with an exact,
to thousands of bits, representation of pi when the user can't
generate arguments with such pi, and has no source of such
arguments.

Your logic is specious. What if the customer is computing
directly in radians? It's an artifact of the library function
that requires the use of many bits of pi. It turns out that
the easiest way to compute the sine is to convert the argument
to quadrants and compute over the interval [-0.5, 0.5]. So we have to
work in high precision to give the customer the best answer.
The customer *does not* have to work in equally high precision
to give us an input worthy of computing a sine.
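For flavor, here is a sketch of the classic two-constant reduction in
the Cody and Waite style; the split constants below are fdlibm's, the
rest is an assumption, and a real library adds more terms or switches
to a Payne-Hanek scheme once the argument grows:

#include <math.h>

#define TWO_OVER_PI 0.63661977236758134308
#define PIO2_1  1.57079632673412561417e+00  /* first 33 bits of pi/2 */
#define PIO2_1T 6.07710050650619224932e-11  /* pi/2 - PIO2_1 */

/* Reduce x toward [-pi/4, pi/4]; the quadrant picks sin/cos and sign.
   n * PIO2_1 is exact while n fits in about 20 bits, which is why the
   high part of pi/2 is deliberately truncated to 33 bits. */
double reduce(double x, int *quadrant)
{
    double n = floor(x * TWO_OVER_PI + 0.5);
    *quadrant = (int)fmod(n, 4.0);
    if (*quadrant < 0)
        *quadrant += 4;
    return (x - n * PIO2_1) - n * PIO2_1T;
}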

You've once again fallen into the trap of making up stories
about the validity of arguments presented to a library function.
That's *not* a luxury we library writers can indulge.
Go over to comp.lang.pl1, and you will find the sind() and
cosd() functions, which could do exact argument reduction
on large integers. (I have no idea whether they do or not.)

Yes, it's easier to do. Why do you think the resulting reduced
arguments are any worthier of evaluating, or any less?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

P.J. Plauger wrote:
(snip)


By returning a value you are making a claim that there is some
sense in calculating that value. There is no sense in calculating
such large values of radians such that the uncertainty in the
angle is greater than pi.

You've made up a story again, about the "uncertainty" in the reduced
angle. All the library writer knows is that there's little *precision*
in the representation of that reduced angle. But it still has a well
defined value and it may well be meaningful to the customer.
Well, small arguments in radians are easy to compute. One
could, for example, take a numeric derivative at that point,
say (sin(2e-38)-sin(1e-38))/(2e-38-1e-38), and likely get an
exact 1.00.

You missed the point. Of *course* they're easy to compute. So
too is the sine of 1e38 quadrants -- it's zero. But should we
decide how much garbage to give back to customers based on how
easy it is for us to compute it? Suppose I make up a story that
most people who call sine with tiny arguments don't know what
the hell they're doing. I could modulate your claim above:

: By returning a value you are making a claim that there is some
: sense in calculating that value. There is no sense in calculating
: such tiny values of radians such that the magnitude of the
: angle is much less than pi.

Who knows, maybe I could even cite studies. But in any case, I
wouldn't last long in front of a customer waving a bug report
under my nose. And I would have no defense in the C Standard.

Conjectures about the validity of input arguments just don't
cut it.
(snip)


(snip)

Any of those are close enough for me.

That's nice. Would you mind writing that up as a DR? I'd love it
if the C Standard was changed to make less work for me.
At that point, all you are saying is that the user should do
their own argument reduction, using the appropriate method
which only the user can know, before calling sin().

Yes, that's exactly what I'm saying. And I said it in response to
Dan Pop's claim. He alleges that taking a long time to compute the
sine of a large angle is "wasteful" of CPU cycles, even "unacceptable."

Groucho Marx: Doctor, doctor, it hurts when I do this. What should I do?

Doctor: Don't do that.
Java, at least, defines a standard value for pi, so that programs
can use that in generating arguments. Should you assume that
the argument, in radians, is a multiple of that value of pi,
or of a much more accurate one?

Whatever value pi may have when truncated to machine precision
is useless for reducing arguments properly. The error shows up
after just a few radians. It's hard to defend a conjecture that
the sine of 5*pi is never worth computing properly.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

P.J. Plauger

You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.

Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
Which value of pi do you use? Rounded to the precision of
the argument? (As the user might have used in generating it).
Much more accurate than the argument? (Unlikely used by
the user.)

Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 

Grumble

Tim said:
Possibly referring to compilers complying with ABIs disallowing
x87, or taking advantage of higher performance of SSE parallel
libraries. Use of fsin on IA64 is extremely unlikely, even though
it's still there.

I assume you meant AMD64, not IA64?

Where did you read that the AMD64 ABI disallowed x87?

http://www.x86-64.org/abi.pdf
 

CBFalconer

P.J. Plauger said:
.... snip ...

Whatever value pi may have when truncated to machine precision
is useless for reducing arguments properly. The error shows up
after just a few radians. It's hard to defend a conjecture that
the sine of 5*pi is never worth computing properly.

This I have to strongly disagree with. What you have to do is
effectively extend the significant bits of the argument while
reducing it. As long as you assume the argument is exact, you can
do this from the unlimited supply of zeroes. The accuracy of 10
mod 3 is dependent on the accuracy of 3. It cannot exceed the
accuracy of either 10 or 3, but it need lose nothing in its
computation.
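The effect shows up with the widely quoted test value sin(1e22),
which a correct reduction (extending the argument with zeros, as
described above) evaluates to -0.85220084976718880... A sketch;
the first line is only right if the library reduces with enough
bits of pi:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double two_pi = 2.0 * 3.14159265358979323846;
    printf("library: %.17g\n", sin(1e22));
    /* reducing against a 53-bit 2*pi first gives an unrelated angle */
    printf("naive  : %.17g\n", sin(fmod(1e22, two_pi)));
    return 0;
}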
 

CBFalconer

glen said:
You say that, and then your argument seems to indicate the
opposite. If the argument was generated using floating
point arithmetic that either truncated or rounded the
result (do you know which?), the uncertainty is at least
0.5 ulp.

Consider how you compute 1e30 DIV 3 to any desired accuracy. The
result depends on the accuracy of 3 alone if you assume that 1e30
is exact. Similarly, consider the results on division of 1e30 by
0.33333. (exactly 5 3s)
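In C, that comparison looks like this (a sketch): the quotient is
good to roughly the accuracy of the divisor, so five digits of 3
buy only five digits of quotient.

#include <stdio.h>

int main(void)
{
    printf("%.17g\n", 1e30 / 3.0);      /* about 3.3333333333333332e+29 */
    printf("%.17g\n", 1e30 / 0.33333);  /* about 3.0000300003000030e+30 */
    return 0;
}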
 

Sam Dennis

P.J. Plauger said:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.

I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).

I think that it'd be particularly ironic to combine this with unlimited
precision for floating-point constants.
 

Dan Pop

Sam Dennis said:
P.J. Plauger said:
[Trigonometry on angles that make ordinary libraries dizzy and such]
Until the C Standard lets us off the hook beyond some given argument
magnitude, however, we have no excuse not to try.

I think that the Standard lets implementors off the hook at any and all
magnitudes, signs and, generally, values, actually, and not even _just_
for the library... from the outset! C99, 5.2.4.2.2p4:

`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. for an expression
involving floating-point mathematics (except in the very few situations
where there are absolute requirements imposed) and document this, it is
not disqualified from full conforming hosted implementation status (not
through another (?) loophole).

True and irrelevant, as we're discussing what a high quality
implementation should do.

If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:

Other strictly conforming programs must merely be accepted (whatever
that means; the standard provides no further clues).

So consider the following "implementation" for Unix systems:

echo "Program accepted."
cp /bin/true a.out

combined with an implementation of /bin/true that exercises all the
translation limits enumerated in 5.2.4.1 and the documentation required
by the standard.

Now, every translation unit, either correct or not, will receive the one
diagnostic required by the standard, all strictly conforming programs are
"accepted" (as well as all other programs ;-) and the *one* C program
mentioned above is translated and executed.

The real question is: how many users is my implementation going to have?
Answer: as many as yours ;-) This illustrates the importance of a factor
that is not even mentioned in the C standard: the quality of
implementation.

Dan
 

Sam Dennis

Dan said:
Sam Dennis said:
`The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that
the accuracy is unknown.'

So if I write my implementation to always produce 42. [...]

True and irrelevant, as we're discussing about what a high quality
implementation should do.

Ridiculous accuracy for silly arguments is unnecessary for high quality
programs, which should accommodate libraries coded by mere mortals that
can furthermore calculate logarithms in less time than it takes for the
operator to look up the answer himself. (Admittedly, such programs can
not operate on my hypothetical implementation if they actually make use
of the floating-point facilities provided by C itself; the portable way
to get guaranteed accuracy is to use a library and its types for all of
one's non-integral calculations, written in strictly conforming C.)
If you're looking for loopholes in the C standard, it contains one so
large as to make even a *completely* useless implementation (a two line
shell script) fully conforming.

1 The implementation shall be able to translate and execute at
least one program that contains at least one instance of every
one of the following limits:

I've decided that that was probably a political decision to allow buggy
compilers to claim conformance. Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out

(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)

Well, obviously there's the #error thing, which means that #include has
to be handled, which means...

cat "$@"
echo "Program accepted."
cp /bin/true a.out

...and the documentation must state that #include source file searching
is not supported and that all output is diagnostics (or similar words).
But, yes, that'd be a conforming implementation; and, because any input
is `acceptable' to it, any data is a conforming program; that's no news
to you, of course.

Yes, the Standard has huge loopholes. Those two are best ignored, with
useful definitions of `conforming' implementation and programs mentally
substituted (in the latter case, by `strictly conforming program').
This illustrates the importance of a factor that is not even mentioned
in the C standard: the quality of implementation.

Oh, but it would be a perfectly good implementation in most other ways.

Actually, I've been thinking for a long time about writing this, except
that the quality of implementation, as it's normally understood, should
be variable, for conformance testing of programs.

Catching all the potential problems isn't feasible, but a useful number
is probably just about manageable.
 

glen herrmannsfeldt

P.J. Plauger wrote:
(I wrote)
Another conjecture. And what if it wasn't? Then you'd have no
excuse other than to take the argument at face value and do
the best you can with it. I say again -- it is up to the programmer
to do error-propagation analysis and decide how much precision to
trust in the final answer. The library writer doesn't have the
luxury of giving back garbage bits because they *might* not be
meaningful.

Consider this program fragment, as no real examples have
been presented so far:

for (x = 0; x + 60 != x; x += 60)
    printf("%20.15g\n", sin(x * A));

A should have the appropriate value so that x is
expressed in degrees. Now, consider the effect
of ordinary argument reduction, and unusually
accurate argument reduction.

If you do argument reduction using an appropriate
approximation to pi/4 to machine precision, you get
the answer expected by the programmer. If you use
a pi/4 much more accurate than machine precision, the
result will slowly diverge from the expected value.
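A runnable variant (the sampling of x here is hypothetical; the loop
as written above would need on the order of 10^16 iterations to
terminate). Every x printed is an exact multiple of 60, so on the
"degrees" reading the sine must be 0 or +/-0.866...; watch the
computed values drift as x grows and x*A can no longer encode the
angle exactly:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double A = 3.14159265358979323846 / 180.0; /* degrees to radians */
    double x;
    for (x = 60.0; x < 1e18; x *= 1e3)  /* 60, 6e4, ..., 6e16: all exact */
        printf("%20.15g %20.15g\n", x, sin(x * A));
    return 0;
}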

But I kinda like the freedom you offer us. We could write *all*
library functions on the basis that the return value can be
anywhere between f(x - 1/2 ulp) and f(x + 1/2 ulp) -- speaking
symbolically, of course. Boy would that make it easier to compute
exponentials, powers, gamma functions, etc. But I bet you wouldn't
like the results.
(snip)
Again, you're *presuming* that the user did some sloppy calculation
to arrive at a large argument to sine. If that's what happened, then
the result will be correspondingly inaccurate. But that's *always*
true of every calculation, whether it involves sin(x) or any other
math function. It's true even if no math functions are called at
all. The authors of the math functions can only take what they're
given and do the best possible job with them. Why is that such
a hard principle to grasp?

Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?

The IBM 360/91 was documented as not following the S/360
architecture because its floating point divide gave rounded
results (0.5ulp) instead of the truncated results (1ulp)
that the architecture specified.

Consider someone doing a single precision sine. Most
likely they use single precision instead of double
because they don't need so much accuracy and hope that
the result will be generated faster.

As there are plenty of multiple precision libraries
around, users wanting more than machine precision
know where to find it.

The OS/360 math library writers had to work extra hard
to get accurate results given the base 16 floating
point of S/360. Sometimes it took a little extra work
to get good answers, which is fine. (The last iteration
of the square root algorithm is done slightly differently
to avoid precision loss in a halving operation.) A lot of
extra work to get useless answers is not. (The sine
routine gives a fatal error when the argument is greater
than pi*2**18 (single), pi*2**50 (double), or pi*2**100
(extended precision).)

Giving a fatal error tells the user that something is
wrong and should be fixed. Supplying answers using bits
that don't really exist allows the user to believe that
useful answers are being generated, possibly wasting
much calculation and money.

Actually, the time this happened to me, probably in ninth
grade (due to an undefined variable, if I remember right),
someone carefully explained to me why the library would
refuse such a calculation. It made sense at the time, and
it still does.

-- glen
 

Richard Bos

glen herrmannsfeldt said:
Why did people buy expensive computers that Cray built,
even though they couldn't even divide to machine precision?

Because Crays are _very_ specialised machines, which can make
assumptions that a library writer for a normal computer cannot.

I must agree with Mr. Plauger, here. If a library gives me a value with
more precision than I need or expect, I can always ignore that extra
precision. If it gives me a value with less precision than it could, as
some people seem to be advocating, I can _never_ add that precision
myself without doing the library's job for it, which should not be the
aim of a good quality library.

Richard
 

Dan Pop

Sam Dennis said:
I've decided that that was probably a political decision to allow buggy
compilers to claim conformance.

It's actually a pragmatic decision: for any given implementation, it is
possible to construct a strictly conforming program exceeding the
implementation's resources, while still staying within the translation
limits mentioned in the standard.

One year or so ago I explored the ways of avoiding this wording by
adding a few more translation limits. Then, the standard could reasonably
require the translation and execution of *any* program staying within the
translation limits.
Specifying accuracy for floating-point
operations such that enough current implementations can conform, on the
other hand, is a genuinely hellish task.

I don't think so. If the standard could specify minimal accuracy
requirements for the floating point types, it could also specify
minimal accuracy requirements for the floating point operations. I have
already debated this issue in comp.std.c and my opponents came with bogus
counterarguments (like the unfeasibility of documenting the accuracy of
[A conforming implementation, once documentation is attached]
echo "Program accepted."
cp /bin/true a.out

(The extension below can be ignored if you don't consider `accepted' to
require even an appropriate diagnostic from a one line file that starts
with `#error'.)

Who says that the one and only correctly translated program has to
contain the #error directive?
Oh, but it would be a perfectly good implementation in most other ways.

I.e. for people with no need for floating point. This is actually the
case with certain implementations for embedded control applications:
floating point support is too expensive and irrelevant to the typical
application domain of the processor, yet the standard says that it must
be provided. A much better solution would have been to make floating
point support optional for freestanding implementations.

In the case of hosted implementations, people would choose the ones
providing proper support for floating point even if they don't need it.

OTOH, considering that practically every hosted implementation in current
use today is for systems with native floating point support, the issue is
moot: the implementation delivers whatever the hardware delivers.

Dan
 
