jacob navia wrote, On 13/12/07 09:03:
This is typical of the "regulars". Talk to say nothing. That part
of the standard has nothing to say about the precision
of intermediate results (which is what we are talking about here),
only about how big the LIMITS of the floating point representation
are.
It also talks about precision. It even gives a way to test it.
On the subject of precision, the standard says:
<quote>
The accuracy of the floating-point operations (+, -, *, /) and of the
library functions in <math.h> and <complex.h> that return floating-point
results is implementation-defined. The implementation may state that the
accuracy is unknown.
<end quote>
To be precise, accuracy and precision are not the same.
i.e. IT CONFIRMS what I am saying. There aren't any guarantees at all.
If there isn't even an accepted standard that is enforced, how
can you guarantee anything?
A question, Mr "flash":
If you are going to use the honorific, it is "Mr Gordon". If you are
trying to imply I am anonymous, just do a whois lookup on my domain (it
is obvious if you look that it is a personal domain). I will even give
you a link for it:
http://whois.domaintools.com/flash-gordon.me.uk
How could the standard guarantee ANYTHING about the precision of
floating point calculations when it doesn't even guarantee a common
standard?
Yes I am seriously saying that the absence of an enforced standard
makes any guarantee IMPOSSIBLE.
Please prove the contrary.
Easy: it provides information about precision, just not accuracy.
Importantly, it provides a mechanism to *test* the precision, so you can
check whether your method is appropriate and apply it only where it is
required.
How learned you are Mr Gordon....
Anything else?
Like all the "regulars", you deal in word games. If you have anything
else to propose, please say it clearly. If not, say clearly
what "adverse side effects" would happen with my proposition.
That depends on what the library depends on and what it does. Some
algorithms become unstable if too high or too low a precision or
accuracy is used. There is no guarantee that the library, specifically
the maths functions in it, doesn't use any such algorithms. For other
algorithms, the optimum value of various constants depends on the
precision and accuracy with which the calculations are done.
NOTHING is guaranteed by the standard. See above
Yes, see above.
This is stupid, since the library can't change the way the program is
compiled. If you replace the call to the user function with a call to
a library function, the problem remains THE SAME. You fail to grasp what
the problem is.
You fail to grasp that a high quality extended precision maths library
is likely to have key parts implemented in assembler for common targets,
and by doing so it *can* guarantee precision and accuracy.
Note that GMP, which one person suggested, states on the front page, "with
highly optimized assembly code for the most common inner loops for a lot
of CPUs". It goes on to say in the manual, "All calculations are
performed to the precision of the destination variable. Each function is
defined to calculate with 'infinite precision' followed by a truncation
to the destination precision, but of course the work done is only what's
needed to determine a result under that definition." So it provides
precisely the guarantees you say are impossible, and GMP will give
guaranteed behaviour in a more portable manner than your suggestion,
although there are other costs to using GMP.
This is not necessary; you just change the precision, that's all!
Unless it causes problems. Or some of the library functions change the
precision (e.g. to increase stability) and then set it to what the
implementation assumes it is normally set to.
This is an example of how the "regulars" spread nonsense with the
sole objective of "contradicting jacob"
No.
as they announced here
yesterday.
No one announced that.