higher precision doubles

S

supercalifragilisticexpialadiamaticonormalizeringelimatisticantations

My understanding when strictfp was being added was that the x86
instruction set allowed for efficient rounding to the correct 32 and 64
bit mantissa widths, but did not allow for the overflow and underflow
processing to be done based on the correct exponent widths.

Remind me: why was it considered desirable for (non-strict) arithmetic
not to use however much precision was available to it? (Anyone needing
strict adherence to IEEE 32-bit or 64-bit FP arithmetic would be using
strictfp anyway, after all.)
 
R

Roedy Green

On Wed, 10 Aug 2011 20:16:49 -0400,
supercalifragilisticexpialadiamaticonormalizeringelimatisticantations
wrote:
Remind me: why was it considered desirable for (non-strict) arithmetic
not to use however much precision was available to it? (Anyone needing
strict adherence to IEEE 32-bit or 64-bit FP arithmetic would be using
strictfp anyway, after all.)

The reason for strictfp was to get exactly the same results on every
machine and every optimising compiler. Extra precision only works
while values are kept on chip. You lose it as soon as anything gets
stored in RAM. A machine with more FP registers does not need to
store as often.

I think it would have been a mistake to drop the extra precision
for non-strict arithmetic.
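For illustration, a minimal sketch of what strictfp actually pins down (note: since Java 17 the strict semantics are the default on every JVM, so the modifier is redundant on modern releases):

```java
public class StrictDemo {
    // strictfp forces intermediate results to true IEEE 754 double
    // range and precision, so overflow happens at exactly the same
    // point on every JVM, even ones with extended-precision registers.
    strictfp static double scale(double a, double b) {
        return a * b;
    }

    public static void main(String[] args) {
        // 1e308 * 10 exceeds Double.MAX_VALUE (~1.8e308) and must
        // overflow under strict semantics.
        System.out.println(scale(1e308, 10.0)); // prints Infinity
    }
}
```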


--
Roedy Green Canadian Mind Products
http://mindprod.com
The modern conservative is engaged in one of man's oldest exercises in moral philosophy; that is,
the search for a superior moral justification for selfishness.
~ John Kenneth Galbraith (born: 1908-10-15 died: 2006-04-29 at age: 97)
 
A

Arne Vajhøj

I don't recall x86's mode-switching semantics off the top of my head,
but I do believe that it is possible to run 64-bit instructions in
32-bit mode.

I find that hard to believe. 64-bit instructions would be expecting
64-bit virtual addresses.

Arne
 
N

Nasser M. Abbasi

Motivating example, in Go we have:

package main

import "fmt"
import "math"

func main() {
	x := math.Sin(2 * math.Pi)
	fmt.Printf("x = %.30f, is zero = %v\n", x, x == 0)
}

x = 0.000000000000000000000000000000, is zero = true

In Java we have:
....



zero1=-2.4492935982947064E-16
zero2=-2.4492935982947064E-16
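The elided Java code was presumably along these lines (a hedged reconstruction: the variable names zero1/zero2 come from the output shown, the Math/StrictMath split is my assumption):

```java
// Both the (possibly intrinsified) Math.sin and the fully specified
// StrictMath.sin see 2*Math.PI only as the nearest double, whose
// sine is not zero.
public class SinTwoPi {
    public static void main(String[] args) {
        double zero1 = Math.sin(2 * Math.PI);
        double zero2 = StrictMath.sin(2 * Math.PI);
        System.out.println("zero1=" + zero1);
        System.out.println("zero2=" + zero2);
    }
}
```

The value -2.4492935982947064E-16 is just 2*Math.PI minus the real 2π, i.e. the representation error of the double argument; the sine itself is computed essentially correctly, but of a slightly-off input.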

Well, both are not correct :)

Here is the correct math answer

Mathematica:

x = Sin[2 Pi]

0

An exact zero. As was created by God. Not a floating point zero.

--Nasser
 
L

Lew

Nasser said:
Jan said:
Motivating example, in Go we have:

package main

import "fmt"
import "math"

func main() {
	x := math.Sin(2 * math.Pi)
	fmt.Printf("x = %.30f, is zero = %v\n", x, x == 0)
}

x = 0.000000000000000000000000000000, is zero = true

In Java we have:
...



zero1=-2.4492935982947064E-16
zero2=-2.4492935982947064E-16

Well, both are not correct :)

Here is the correct math answer

Mathematica:

x = Sin[2 Pi]

0

An exact zero. As was created by God. Not a floating point zero.

Computer programming has nothing to do with religion.

I hate to break this to you, but Mathematica is a computer program that runs on computers, and therefore numerical analysis is relevant.

Since Mathematica runs on floating point hardware, created by humans, your statement about "God" and the result being "[n]ot a floating point zero" is flat out wrong. It actually *is* a floating point zero, perforce and ineluctably.

Truth is truth, and God has nothing to do with this one (except for giving humans the skill to invent computers and floating point representation, and also to make Mathematica hide the inaccuracies thereof, to a degree).
 
J

Jan Burse

Nasser said:
Mathematica:

x = Sin[2 Pi]
0

An exact zero. As was created by God. Not a floating point zero.

--Nasser

Well, from my original post you should see that I am
interested in IEEE as a correctness measure, not
in a symbolic computation solution.

Actually Mathematica, judging from my playing with
Wolfram Alpha, is pretty good at solving symbolic
equations with sin and cos. Just try the following:

sin(x)=cos(x)

And it will give you:

x = 2*(pi*n - tan^-1(1 +/- sqrt(2)))

But since Galois we know that closed-form solutions
can be limited. So when you try the following

sin(x)+x=cos(x)

You will get:

x ~ 0.456625

And then you can press the more digits button:

x ~ 0.45662470456763082444

And press the more digits button again:

x ~ 0.4566247045676308244376974571284573758982

And again:

x ~ 0.4566247045676308244376974571284573758982
3161389225632524227823077345386290064230

Etc. This is already better than picking an IEEE
format, since we can gradually increase the precision.
Any pointers to Java libraries that can do that would be
highly appreciated.

I guess we could start with BigDecimal and some Newton
iteration. But a subproblem here is that applying Newton to
-cos(x)+sin(x)+x would also demand a cos/sin on
BigDecimal, and I have not yet seen a reference to a library
that can do that on BigDecimal.

A library should be possible, since Mathematica can
do it. But maybe Taylor is not the preferred method here,
because of slow convergence; maybe better some Chebyshev
polynomial expansion.

But there is no Heaven here on Earth. All we can do
in the present case is gradually approximate the
real number, not jump straight onto it.
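A rough sketch of that gradual-precision idea, assuming hand-rolled Taylor-series sin/cos on BigDecimal (hypothetical helper names, no argument reduction, so only sensible for small |x|):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigNewton {

    // Smallest term we still add: a couple of guard digits
    // below the working precision.
    static BigDecimal eps(MathContext mc) {
        return BigDecimal.ONE.movePointLeft(mc.getPrecision() + 2);
    }

    // Taylor series sin(x) = x - x^3/3! + x^5/5! - ...
    static BigDecimal sin(BigDecimal x, MathContext mc) {
        BigDecimal term = x, sum = x, x2 = x.multiply(x, mc);
        for (int n = 2; term.abs().compareTo(eps(mc)) > 0; n += 2) {
            term = term.multiply(x2, mc)
                       .divide(BigDecimal.valueOf((long) n * (n + 1)), mc)
                       .negate();
            sum = sum.add(term, mc);
        }
        return sum;
    }

    // Taylor series cos(x) = 1 - x^2/2! + x^4/4! - ...
    static BigDecimal cos(BigDecimal x, MathContext mc) {
        BigDecimal term = BigDecimal.ONE, sum = BigDecimal.ONE;
        BigDecimal x2 = x.multiply(x, mc);
        for (int n = 1; term.abs().compareTo(eps(mc)) > 0; n += 2) {
            term = term.multiply(x2, mc)
                       .divide(BigDecimal.valueOf((long) n * (n + 1)), mc)
                       .negate();
            sum = sum.add(term, mc);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Solve sin(x) + x = cos(x), i.e. f(x) = sin(x) + x - cos(x) = 0,
        // by Newton: x <- x - f(x)/f'(x), with f'(x) = cos(x) + 1 + sin(x).
        MathContext mc = new MathContext(50);
        BigDecimal x = new BigDecimal("0.5");
        for (int i = 0; i < 10; i++) {
            BigDecimal f  = sin(x, mc).add(x, mc).subtract(cos(x, mc), mc);
            BigDecimal df = cos(x, mc).add(BigDecimal.ONE, mc)
                                      .add(sin(x, mc), mc);
            x = x.subtract(f.divide(df, mc), mc);
        }
        System.out.println(x); // 0.45662470456763082443769745712845737...
    }
}
```

Rerunning with a larger MathContext reproduces the more-digits behaviour of Wolfram Alpha, at the cost of recomputing from scratch each time.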

Bye
 
J

Jan Burse

Lew said:
Since Mathematica runs on *floating point hardware*, created by humans,

What you are saying here is not quite
true. Mathematica actually has 3 modes for
dealing with Sin. The modes are as follows:

- Symbolic
- Machine Precision
- High Precision

What the poster showed, Sin[2 Pi] = 0, was an
implicit FunctionExpand of Sin, i.e. a symbolic
manoeuvre. If he forced Mathematica to use
machine precision or high precision he
would get another result. Here is an example:

N[Pi, 18] = 3.14159265358979324

N[Sin[3.14159265358979324], 18] =

-1.53735661672049712×10^-18

Whether machine precision uses the floating-point
hardware is unspecified. Most likely it does, but it
could also use some software emulation, so as to
prevent differences from platform to platform, like
the strictfp modifier does. I would need to dig further
into the Mathematica documentation for this question.

For more details and many example uses
of the modes see for example here:
http://reference.wolfram.com/mathematica/ref/Sin.html
For a first intro into the modes see the "Scope" section.

Bye
 
J

Jan Burse

Jan said:
N[Sin[3.14159265358979324], 18] =

-1.53735661672049712×10^-18

Oops, just reading:
http://reference.wolfram.com/mathematica/tutorial/MachinePrecisionNumbers.html

So the second N[.,.] was not necessary. One could
simply write the following, though the output on
Wolfram Alpha looks a little bit different:

Sin[3.14159265358979324] =

-1.53736... ×10^-18


Or could try the following:

N[Sin[314159265358979324 / 100000000000000000], 9] =

-1.53735662×10^-18

N[Sin[314159265358979324 / 100000000000000000], 18] =

-1.53735661672049712×10^-18

N[Sin[314159265358979324 / 100000000000000000], 36] =

-1.53735661672049711580283060062489418×10^-18

Note how good the high precision computation is. The lower
precision number is always the rounding of the higher
precision number.

But the mechanics for high-precision numbers look a little
bit more complicated to me than BigDecimal. At least if
I look at:
http://reference.wolfram.com/mathematica/tutorial/ArbitraryPrecisionNumbers.html

There is a max extra precision, which is needed in the
evaluation function N[.,.]. BigDecimal does not know about
evaluation functions. And there is some special handling
of near zero values.

Bye
 
J

Jan Burse

Jan said:
There is a max extra precision, which is needed in the
evaluation function N[.,.]. BigDecimal does not know about
evaluation functions.

But the MathContext parameter in add(), multiply(), etc. of
BigDecimal is very similar to N[.,.] for these operations. A
library for sin(), exp(), ... that takes a MathContext parameter
would be nice.
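A small sketch of that parallel with plain JDK BigDecimal (only an analogy to N[.,.], nothing Mathematica-specific here):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class PerOpPrecision {
    public static void main(String[] args) {
        BigDecimal pi = new BigDecimal(
            "3.14159265358979323846264338327950288");

        // round(mc) plays the role of N[x, n]: pick a precision per call.
        System.out.println(pi.round(new MathContext(9))); // 3.14159265

        // And every arithmetic op can carry its own MathContext, too.
        BigDecimal third = BigDecimal.ONE.divide(
            new BigDecimal(3), new MathContext(30));
        System.out.println(third); // 30 significant digits of 1/3
    }
}
```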

Bye
 
