more than 16 significant figures


Jeremy Watts

Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy; is
there a way to do this in Java?
 

Stefan Schulz

Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy; is
there a way to do this in Java?

Depends on what you want to do. I do not recall how accurate the internal
representation really is, but 16 figures sounds a bit slim for doubles. So
you can probably get more digits by using an explicit NumberFormat instead of
the default Double.toString(double).
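
For instance, a minimal sketch that shows what the double actually stores
(the extra digits beyond roughly 16 are exact digits of the binary value,
not extra accuracy):

public class ShowDigits {
    public static void main(String[] args) {
        double d = 1.0 / 3.0;
        System.out.println(d);                           // Double.toString: 0.3333333333333333
        System.out.println(new java.math.BigDecimal(d)); // exact stored value; only the first
                                                         // ~16 digits agree with 1/3
    }
}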
 

Remon van Vliet

Jeremy Watts said:
Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy,
is there a way to do this in Java?

No, the highest precision primitive types in Java are 64 bits (i.e. long and
double). You'll have to use BigDecimal or a custom class to go beyond 64-bit
numbers.
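
For example, a minimal BigDecimal sketch at 50 significant figures (the 50 is
an arbitrary choice, set via java.math.MathContext):

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class WidePrecision {
    public static void main(String[] args) {
        MathContext mc = new MathContext(50, RoundingMode.HALF_EVEN); // 50 significant figures
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal(3), mc);
        System.out.println(third);                                  // 0.333... to 50 digits
        System.out.println(third.multiply(new BigDecimal(3), mc));  // 0.999...9, the divide's
                                                                    // rounding shows up here
    }
}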

Remon
 

Patricia Shanahan

Jeremy said:
Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy; is
there a way to do this in Java?

Why do you want to avoid BigDecimal? Without knowing that, it is hard to
know what to suggest as an alternative.

Patricia
 

Patricia Shanahan

Stefan said:
Depends on what you want to do. I do not recall how accurate the internal
representation really is, but 16 figures sounds a bit slim for doubles. So
you can probably get more digits by using an explicit NumberFormat instead of
the default Double.toString(double).

16 significant decimal digits is about right. Your mileage may vary,
depending on the numbers and the calculation - some numbers, such as any
int value, can be stored exactly in a double.

There are 53 significant bits for a normalized double, equivalent to
almost 16 decimal digits.
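
(For anyone checking the arithmetic: 53 x log10(2) is about 53 x 0.30103,
i.e. roughly 15.95 decimal digits - hence "almost 16".)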

Patricia
 

Jeremy Watts

Patricia Shanahan said:
Why do you want to avoid BigDecimal? Without knowing that, it is hard to
know what to suggest as an alternative.

I am working on some math routines, and some of them are prone to 'ill
conditioning' - meaning that small round-off errors can completely skew an
answer in some circumstances.

Most of my routines use BigDecimal to get around this (basically a sledge
hammer approach that uses a very large number of decimal places to ensure
that ill conditioning doesn't occur), however one of my routines is pretty
involved and the use of BigDecimal is slowing it down very significantly,
so a return to normal number handling seems inevitable.

There are other measures I can employ to reduce the possibility of ill
conditioning, but I wondered if an increase beyond 16 sig. figs. is possible,
as this would help things along no end.
 

Tom N

Jeremy said:
I am working on some math routines, and some of them are prone to 'ill
conditioning' - meaning that small round-off errors can completely
skew an answer in some circumstances.

Most of my routines use BigDecimal to get around this (basically a
sledge hammer approach that uses a very large number of decimal places
to ensure that ill conditioning doesn't occur), however one of my
routines is pretty involved and the use of BigDecimal is slowing it
down very significantly, so a return to normal number handling seems
inevitable.

There are other measures I can employ to reduce the possibility of ill
conditioning, but I wondered if an increase beyond 16 sig. figs. is
possible, as this would help things along no end.

Unless you are developing new mathematical algorithms, this is probably a problem that other people have
faced, and there are possibly well-known solutions or work-arounds (other than increased precision).

Many of the number-crunching algorithms still in use were developed in Fortran days, and there wasn't a lot of
choice available other than algorithmic changes.

I remember one simple work-around from my numerical computing course at university. When
computing a series which produced increasingly small terms with alternating sign, the work-around was to add
up all the positive terms and separately add up all the negative terms, and then add the two sums together.
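
Roughly what that looks like in Java (just a sketch; the alternating series here is made up for illustration, and
the two totals will typically differ only in the last few digits):

public class SplitSum {
    public static void main(String[] args) {
        double[] terms = new double[1000];
        for (int i = 0; i < terms.length; i++) {
            terms[i] = (i % 2 == 0 ? 1.0 : -1.0) / (i + 1);  // alternating, shrinking terms
        }

        double naive = 0.0, pos = 0.0, neg = 0.0;
        for (double t : terms) {
            naive += t;                            // straight left-to-right accumulation
            if (t >= 0.0) pos += t; else neg += t; // accumulate each sign separately
        }
        double split = pos + neg;                  // combine the two partial sums at the end

        System.out.println("naive: " + naive);
        System.out.println("split: " + split);
    }
}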

There may be a similar easy solution if you ask the right question of the right person (maybe in a "math"
newsgroup).

Alternatively, you could write the routine in another language which supports higher-precision floats and call it
from Java.
 

Patricia Shanahan

Jeremy said:
I am working on some math routines, and some of them are prone to 'ill
conditioning' - meaning that small round-off errors can completely skew an
answer in some circumstances.

Most of my routines use BigDecimal to get around this (basically a sledge
hammer approach that uses a very large number of decimal places to ensure
that ill conditioning doesn't occur), however one of my routines is pretty
involved and the use of BigDecimal is slowing it down very significantly,
so a return to normal number handling seems inevitable.

There are other measures I can employ to reduce the possibility of ill
conditioning, but I wondered if an increase beyond 16 sig. figs. is possible,
as this would help things along no end.

Once you go beyond the hardware-supported precision you are going to
have to pay time for extra digits.

Are your numbers rational? If so, you can get infinite precision by
using a rational number package, and that may be faster than a very wide
BigDecimal.
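
Something like this, as a very rough sketch of what a rational package does on
top of BigInteger (the Rational class here is hypothetical and skips sign
handling, zero denominators, and so on):

import java.math.BigInteger;

public class Rational {
    final BigInteger num, den;

    Rational(BigInteger num, BigInteger den) {
        BigInteger g = num.gcd(den);              // keep values in lowest terms
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    Rational add(Rational o) {
        return new Rational(num.multiply(o.den).add(o.num.multiply(den)),
                            den.multiply(o.den));
    }

    Rational multiply(Rational o) {
        return new Rational(num.multiply(o.num), den.multiply(o.den));
    }

    public String toString() { return num + "/" + den; }

    public static void main(String[] args) {
        Rational third = new Rational(BigInteger.ONE, BigInteger.valueOf(3));
        System.out.println(third.add(third).add(third));  // prints 1/1 - exact, no rounding
    }
}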

Otherwise, your best bet for getting both performance and accuracy is to
look for a different algorithm.

Patricia
 

Roedy Green

Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy; is
there a way to do this in Java?

A long gives you 19 digits. See
http://mindprod.com/jgloss/primitive.html
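
The 19 digits are just Long.MAX_VALUE; to use them for decimals you have to do
the scaling yourself (a fixed-point sketch, here with an arbitrary scale of 10^18):

public class LongDigits {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE);          // 9223372036854775807 (19 digits)

        long scale = 1000000000000000000L;           // 10^18, i.e. 18 decimal places
        long oneThird = scale / 3;                   // 0.333333333333333333 in fixed point
        System.out.println(oneThird);
    }
}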

 

Roedy Green

I remember one simple work-around from my numerical computing course at university. When
computing a series which produced increasingly small terms with alternating sign, the work-around was to add
up all the positive terms and separately add up all the negative terms, and then add the two sums together.

I think you mean the opposite.

 

Lasse Reichstein Nielsen

Roedy Green said:
I think you mean the opposite.

Add up the negatives first and then the positives? Or what do you mean
by opposite? :)

I think it's correct (since I remember being taught the same thing).

/L
 

Tom N

Roedy said:
I think you mean the opposite.

What is the opposite?

subtract none of the negative terms and together subtract none of the positive terms, but before that subtract
them apart?
 

Remon van Vliet

Tom N said:
What is the opposite?

subtract none of the negative terms and together subtract none of the
positive terms, but before that subtract them apart?

I think what he means (and do correct me if I'm wrong), if for some odd
reason you're restricted to bytes, then this:

byte a = 70 + 40 + 80 + 90 - 50 - 60 - 100;

goes wrong, where this won't:

byte a = 70 - 50 + 40 - 60 + 80 - 100 + 90;

In other words, alternating the pos/neg so you don't get overflows and
related problems. Whether or not that's the opposite of what you said I don't
know; I don't think so though. I think both examples illustrate the reasoning
behind rethinking algorithms when precision becomes an issue.
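
The floating-point version of the same reordering effect (the magnitudes are
picked just to trigger it):

public class OrderMatters {
    public static void main(String[] args) {
        System.out.println(1.0e16 + 1.0 - 1.0e16);   // 0.0 - the 1.0 is absorbed by the big term
        System.out.println(1.0e16 - 1.0e16 + 1.0);   // 1.0 - let the big terms cancel first
    }
}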

By the way, am I the only one interested in knowing what problem is being
solved here? What problem requires such a high precision?

Remon
 

Joan

Jeremy Watts said:
Hi,

Is there a way to achieve decimal numbers accurate to greater than 16
significant figures, without using BigDecimal?

In PHP you can adjust the configuration file to achieve greater accuracy; is
there a way to do this in Java?

If you google for "quad" you will find lots of information about the
accuracy/precision problem and several packages that will give more digits.
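
The core trick in those "quad" / double-double packages is recovering the
rounding error of each addition; a minimal sketch of the usual two-sum step
(written from memory, so treat it as an illustration rather than a reference
implementation):

public class TwoSum {
    // Returns the rounding error of s = a + b, so that (a + b) == s + err exactly.
    static double twoSumError(double a, double b, double s) {
        double bVirtual = s - a;
        double aVirtual = s - bVirtual;
        return (a - aVirtual) + (b - bVirtual);
    }

    public static void main(String[] args) {
        double a = 1.0e16, b = 1.0;
        double s = a + b;                        // the 1.0 disappears into rounding...
        double e = twoSumError(a, b, s);         // ...but can be recovered exactly
        System.out.println(s + " + " + e);       // 1.0E16 + 1.0
    }
}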
 

George Neuner

16 significant decimal digits is about right. Your mileage may vary,
depending on the numbers and the calculation - some numbers, such as any
int value, can be stored exactly in a double.

There are 53 significant bits for a normalized double, equivalent to
almost 16 decimal digits.

Java specifies IEEE 754-1985 representation for floating point. 754
guarantees a minimum 12 significant figures for the range of
representable decimal values. For portability reasons you should
assume that beyond 14 digits lies random garbage.

Before somebody objects that Intel/AMD, etc. perform extended-precision
(80-bit) floating point, I will remind everyone that any temporary values
which may be stored back into memory in the course of a computation are not
required to maintain the extra precision of the CPU.

I would also remind everyone that 754 compliant hardware is not
available universally[1]. A JVM implemented on non-compliant hardware
would have a couple of choices: forget about compliance entirely and
provide only the native FP format, translate to/from IEEE format for
storage, or provide a software IEEE emulation.


[1] actually I am not aware of *any* fully 754 compliant hardware
based system available anywhere. Software support can enable better
results from non-compliant hardware, but, AFAIK, the only fully
compliant IEEE-754 systems are software emulation libraries.

George
 

Tom N

Remon said:
"
positive terms, but before that subtract

I think what he means (and do correct me if I'm wrong),

I don't think Roedy knows what he means or he would have come back and explained himself.
if for some
odd reason you're restricted to bytes, then this:

byte a = 70 + 40 + 80 + 90 - 50 - 60 - 100;
goes wrong, where this won't:

byte a = 70 - 50 + 40 - 60 + 80 - 100 + 90;

In other words, alternating the pos/neg so you don't get overflows and
related problems. Whether or not that's the opposite of what you said
I don't know; I don't think so though

This artificial example is using bytes, not floating-point numbers (the latter don't overflow anywhere near as
easily), and it doesn't use increasingly small terms.

I'm not suggesting that adding all the positive terms and negative terms separately is a universal solution.
Obviously solutions to errors in numeric computation are not universal or they would be applied routinely in
all cases.

Overflow of integers is not really a good example to use when discussing errors in floating point calculations.

Integers are able to represent accurately all values in their range, whereas floats are only able to represent
accurately a small fraction of the values in their range due to the limited size of the mantissa. Most
programmers ignore this fact unless the error becomes glaringly obvious.

Whether or not adding all the positive terms and negative terms separately is a good idea is an aside to
the point I was making (and which you seem to agree with) that algorithmic changes would probably be better
than increased precision.

http://www.numerical-recipes.com/nronline_switcher.html
To quote "Numerical Recipes in C" from the above link, chapter 20.1 says "A convenient fiction is that a
computer’s floating-point arithmetic is "accurate enough." If you believe this fiction, then numerical analysis
becomes a very clean subject."
and
"Proper numerical analysts cringe when they hear a user say, "I was getting roundoff errors with single
precision, so I switched to double." The actual meaning is, "for this particular algorithm, and my particular
data, double precision seemed able to restore my erroneous belief in the ‘convenient fiction’."

I have just unsuccessfully spent a couple of hours going through my numerical computing books trying to
find an example of what I suggested.

The Technische Universiteit Eindhoven Numerical Methods and Algorithms course notes make the same
suggestion as I made.
http://www.win.tue.nl/casa/education/courses/2N330/inf/..\lectures\introduction%5Cintroduction.pdf

This is an interesting historical paper (although I'd guess that it is still applicable today).
"Pitfalls in Computation, or Why a Math Book Isn't Enough" by George E. Forsythe
http://historical.ncstrl.org/litesite-data/stan/CS-TR-70-147.pdf
I think both examples illustrate
the reasoning behind rethinking algorithms when precision becomes an
issue.

By the way, am I the only one interested in knowing what problem is
being solved here? What problem requires such a high precision?

Yes, we can only make generic suggestions without knowing this.
 

Esmond Pitt

George said:
Java specifies IEEE 754-1985 representation for floating point. 754
guarantees a minimum 12 significant figures for the range of
representable decimal values. For portability reasons you should
assume that beyond 14 digits lies random garbage.

Where? It specifies 53 bits of precision for doubles, and this is 15.95
decimal digits. In Table 3 it also seems to support M = 10^17-1 before
garbage digits occur in conversion to decimal.
 

George Neuner

Where? It specifies 53 bits of precision for doubles, and this is 15.95
decimal digits. In Table 3 it also seems to support M = 10^17-1 before
garbage digits occur in conversion to decimal.

The existence of the figures doesn't make them *significant*. The
conversion to decimal is *not* where the garbage comes from.


First off, let me say I'll have to look up the "12 digit" thing - I
learned that long ago and I don't have the document handy to provide a
cite just now. Historically the IEEE double precision format was
intended to provide functionality similar to that of the best
electronic calculators available at the time which had 10 displayed
digits and carried 12..13 significant digits internally.


WRT "garbage" ... the 754 document deals only with the storage formats
and operations on values. Nowhere does it explain the rationale for
ignoring the low bits of a value - for that you need to rely on
knowledge of hardware, the computation process and mathematical
intuition.

1) Most decimal values cannot be represented exactly in base 2.
Excepting the infrequent cases in which the value can be exactly
represented *within the confines of the format*, you have to assume
that most values will have at least 1 non-significant bit in their
mantissa. Applying a binary function to a pair of values each having
one non-significant bit results in a value with two non-significant
bits. Applying a transcendental function may result in several bits
becoming non-significant [e.g., logarithms]. The number of
non-significant bits in the result cannot be less than the maximum
number of non-significant bits in the operand(s) and may be more
depending on the function.

2) Rounding and renormalizing intermediate results progressively
poisons the least significant bits of each successive calculation
involving them.

3) Storing intermediate results to memory may result in truncation and
loss of any extended precision available to the hardware. When
truncated values are later reused at extended precision, their
meaningless extended bits contaminate the computation making the
extended bits of the new result meaningless.

4) The FPU on some computers is actually 32-bit or 64-bit. Some
computers have no FP hardware at all. See #1.


Java (like most languages) makes no guarantee that intermediate values
preserve precision available to hardware. Register pressure in a
complicated expression may force intermediate values to be written
back to memory ... whether such values preserve the hardware's
precision is up to the JVM developer. If the hardware actually
provides less precision than an IEEE-754 double, the JVM developer has
to choose between being non-compliant or providing FP through software
emulation.


There's a reasonably non-technical paper that discusses the
convergence of mathematics and computer hardware called: "What Every
Computer Scientist Should Know About Floating-Point Arithmetic" by
David Goldberg. It can explain better than I can the rationale for
caution with floating point.


George
 

Patricia Shanahan

George Neuner wrote:
....
Java specifies IEEE 754-1985 representation for floating point. 754
guarantees a minimum 12 significant figures for the range of
representable decimal values. For portability reasons you should
assume that beyond 14 digits lies random garbage.

I'm very curious about the sources of the "12 significant figures" and
"14 digits" numbers. Section number references to ANSI/IEEE Std
754-1985 would be sufficient, because I do have a copy handy.
I would also remind everyone that 754 compliant hardware is not
available universally[1]. A JVM implemented on non-compliant hardware
would have a couple of choices: forget about compliance entirely and
provide only the native FP format, translate to/from IEEE format for
storage, or provide a software IEEE emulation.

I don't think "provide only the native FP format" would result in a
valid JVM. The JVM standard says:

"The floating-point types are float and double, which are conceptually
associated with the 32-bit single-precision and 64-bit double-precision
IEEE 754 values and operations as specified in IEEE Standard for Binary
Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York)."

[http://java.sun.com/docs/books/vmspec/2nd-edition/html/Concepts.doc.html#19511]

Patricia
 

Joan

George Neuner said:
....

4) The FPU on some computers is actually 32-bit or 64-bit. Some
computers have no FP hardware at all. See #1.

The PDP-11 in 1970 had software floating point: single, double, and triple.
 
