more than 16 significant figures


George Neuner

The PDP-11 in 1970 had software floating point: single, double, and triple precision.

Yeah, and the VAX in 1980 had quadruple as well. But in 1990 the i386
had no FPU at all, and Sun 5 and DECstation 2 workstations had
64-bit-only FPUs. Today many embedded CPUs, MPUs and cores remain
integer-only.

My point is that you can't assume properties of the computed values
solely from the format they happen to be stored in.
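
For instance (a minimal Java sketch, not from the thread): a 64-bit
double can hold a value that was only ever computed to 24-bit float
accuracy, and the storage format won't tell you.

    public class StorageVsComputation {
        public static void main(String[] args) {
            float f = 1.0f / 3.0f;   // computed with a 24-bit significand
            double d = f;            // widened: stored in 64 bits
            double e = 1.0 / 3.0;    // computed with a 53-bit significand
            System.out.println(d);   // 0.3333333432674408 -- float accuracy
            System.out.println(e);   // 0.3333333333333333
        }
    }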

George
 

George Neuner

George Neuner wrote:
...
I would also remind everyone that 754-compliant hardware is not
available universally[1]. A JVM implemented on non-compliant hardware
would have a couple of choices: forget about compliance entirely and
provide only the native FP format, translate to/from IEEE format for
storage, or provide a software IEEE emulation.

I don't think "provide only the native FP format" would result in a
valid JVM. The JVM standard says:

"The floating-point types are float and double, which are conceptually
associated with the 32-bit single-precision and 64-bit double-precision
IEEE 754 values and operations as specified in IEEE Standard for Binary
Floating-Point Arithmetic, ANSI/IEEE Standard 754-1985 (IEEE, New York)."

[http://java.sun.com/docs/books/vmspec/2nd-edition/html/Concepts.doc.html#19511]
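
For concreteness, a few behaviors that clause pins down on any compliant
JVM (a minimal sketch, purely illustrative):

    public class Ieee754Demo {
        public static void main(String[] args) {
            double nan = 0.0 / 0.0;                // IEEE 754 quiet NaN
            System.out.println(nan == nan);        // false: NaN is unordered
            System.out.println(1.0 / 0.0);         // Infinity, no exception
            System.out.println(0.1 + 0.2 == 0.3);  // false: binary rounding
            System.out.println(Double.MIN_VALUE);  // 4.9E-324, a subnormal
        }
    }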


I *did* say that such a JVM would be non-compliant.


I'm not certain, however, whether that's entirely true in the case of
small devices running J2ME. The old JavaCard 1.1 spec said
programmers could not depend on floating point (among other things)
being available. Since J2ME has subsumed JavaCard, I would assume that
compliance under ME is a matter of degree.

Hopefully someone who knows better will weigh in.

George
 

Joan

George Neuner said:
Yeah, and the VAX in 1980 had quadruple as well. But in 1990 the i386
had no FPU at all, and Sun 5 and DECstation 2 workstations had
64-bit-only FPUs. Today many embedded CPUs, MPUs and cores remain
integer-only.

My point is that you can't assume properties of the computed values
solely from the format they happen to be stored in.

How about boolean? That should not be too hard.
 

Robert Maas, see http://tinyurl.com/uh3t

From: "Jeremy Watts said:
most of my routines use BigDecimal to get around this (basically a
sledgehammer approach that uses a very large number of decimal
places to ensure that ill-conditioning doesn't occur)

You seem to be making a blind guess as to how much accuracy is
needed, then adding a whole lot more accuracy "just to be safe", and
still you have no idea whether you are getting the accuracy you really
need for the final output. That's very sloppy programming technique.

Why don't you use interval arithmetic? You can specify exactly how much
accuracy you need for the final output, build up a dataflow DAG from
there back to the raw input, and, working backwards, compute values
accurate enough at each stage that the next stage of calculation will
satisfy the requirements. Thus no unnecessary work is done at any
stage of calculation, yet you are guaranteed to get output whose
precision achieves the requirement.
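
A minimal sketch of the idea in Java (this Interval class is
hypothetical, not a library type): every operation rounds its bounds
outward, so the true result is always enclosed and the final width
tells you exactly how much accuracy survived.

    public class Interval {
        final double lo, hi;

        Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

        Interval add(Interval o) {
            // nextDown/nextUp over-estimate the error of round-to-nearest
            // but keep the enclosure valid
            return new Interval(Math.nextDown(lo + o.lo),
                                Math.nextUp(hi + o.hi));
        }

        double width() { return hi - lo; }  // bound on the uncertainty

        public static void main(String[] args) {
            Interval x = new Interval(0.1, 0.1);  // 0.1 itself is inexact
            Interval s = x.add(x).add(x);         // enclosure of ~0.3
            System.out.println("[" + s.lo + ", " + s.hi + "]");
        }
    }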
 

Esmond Pitt

George said:
[lesson in binary representation of decimal numbers]

Thanks George, I have known all that since about 1972; I just want to
know where your twelve-digit claim comes from. 53 significant bits are
specified in the standard, and that equals 15.9 significant decimal
digits in my book, unless the log tables have changed since I went to
school.
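
(For the record, the arithmetic checks out: $53 \cdot \log_{10} 2
\approx 53 \times 0.30103 \approx 15.95$, i.e. just under 16 decimal
digits for an IEEE 754 double.)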

I can't find '12' anywhere in the IEEE 754 texts available to me, and
your historical claims about IEEE 754 being aligned to calculators
rather than the PDP-11, VAX, 8087 &c seem dubious too.
 

George Neuner

George said:
[lesson in binary representation of decimal numbers]

Thanks George, I have known all that since about 1972; I just want to
know where your twelve-digit claim comes from. 53 significant bits are
specified in the standard, and that equals 15.9 significant decimal
digits in my book, unless the log tables have changed since I went to
school.

I can't find '12' anywhere in the IEEE 754 texts available to me, and
your historical claims about IEEE 754 being aligned to calculators
rather than the PDP-11, VAX, 8087 &c seem dubious too.

I can't find it in the standard either, so it evidently came from some
other source that I conflated in my mind. Most likely it is from one
of the many analyses of the standard functions; I'm still looking for
the actual cite. I apologize for creating confusion on this point.



Regarding the history, I didn't say the standard was aligned to
calculators - you inferred that. I said that it was meant to match the
functionality of the best calculators available at the time. This was
incidental, nothing but a marketing point for micro sales, but it is
still true.

In 1976 when the committee began working, micros were almost
universally ignored by business. The impetus for the 754 working
committee was the impending release of Intel's 8087 which was a large
leap forward in functionality. The capabilities of the 8087 scared
micro manufacturers into cooperating to create a "real" computer math
standard to which they could all claim adherence for marketing
purposes.

VisiCalc appeared in 1979 and business woke up to the possibilities of
micros. When IBM entered the micro market in 1981, a large part of
its strategy was to hype the spreadsheet capabilities of its new
machine. IBM had been part of the 754 working committee from the
beginning and gained leverage by pointing to its involvement in the
math standard to demonstrate that 8087-equipped micros were designed
to be better than the best calculators available. It was total hype,
but it was a selling point.


As for 754 being based on the VAX, 8087, etc. - it most definitely was.
William Kahan, who co-authored the draft proposals, was both a fan of
the VAX and a designer of the 8087.


George
 

Dale King

Tom said:
Integers are able to represent accurately all values in their range, whereas floats are only able to represent
accurately a small fraction of the values in their range due to the limited size of the mantissa.

It has more to do with the fact that the set of reals is uncountably
infinite while the set of integers is countably infinite. There are an
infinite number of reals between any two reals so no finite
representation can represent all values over any range.
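
Concretely: a 32-bit float takes at most 2^32 distinct values, yet its
range runs to about 3.4e38, so above 2^24 even the integers develop
gaps. A minimal Java sketch:

    public class FloatGaps {
        public static void main(String[] args) {
            float f = 16777216f;                 // 2^24
            System.out.println(f + 1f == f);     // true: 2^24 + 1 rounds back
            System.out.println(Math.ulp(1e8f));  // 8.0: floats near 1e8 are 8 apart
        }
    }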
 

Patricia Shanahan

Dale said:
It has more to do with the fact that the set of reals is uncountably
infinite while the set of integers is countably infinite. There are an
infinite number of reals between any two reals so no finite
representation can represent all values over any range.

I don't think uncountability is really an issue here. The rationals are
countably infinite, yet there are an infinite number of rationals
between any two rationals so no finite representation can represent all
values in a range.

Patricia
 

Dale King

Patricia said:
I don't think uncountability is really an issue here. The rationals are
countably infinite, yet there are an infinite number of rationals
between any two rationals so no finite representation can represent all
values in a range.

Good point. After some research, the concept I was going after was dense
ordered sets. The reals and the rationals are dense sets, which means
that for any two points a, b in X with a < b there is another point x
in X such that a < x < b.

That is the property that makes it impossible to represent all values
over a range with a finite representation.
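
In symbols, a set $X$ is dense (in itself) when
$$\forall a, b \in X:\; a < b \implies \exists x \in X:\; a < x < b,$$
and for the rationals the midpoint $x = (a + b)/2$ is always such a
witness.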
 

Raymond DeCampo

Dale said:
Good point. After some research, the concept I was going after was dense
ordered sets. The reals and the rationals are dense sets, which means
that for any two points a, b in X with a < b there is another point x
in X such that a < x < b.

That is the property that makes it impossible to represent all values
over a range with a finite representation.

Not really. For example, the set { 1/n | n is a positive integer } does
not have the property (there is no element of the set strictly between
1/2 and 1, for instance), but you cannot have a finite representation of
all such numbers in [0, 1].

Ray
 

Patricia Shanahan

Raymond said:
Dale said:
Good point. After some research, the concept I was going after was
dense ordered sets. The reals and the rationals are dense sets,
which means that for any two points a, b in X with a < b there is
another point x in X such that a < x < b.

That is the property that makes it impossible to represent all values
over a range with a finite representation.


Not really. For example, the set { 1/n | n is a positive integer } does
not have the property (there is no element of the set strictly between
1/2 and 1, for instance), but you cannot have a finite representation of
all such numbers in [0, 1].

Ray

Yup. I think the actual condition for a finite representation is just:
Is the set of numbers you want to represent finite or not?

The dense set property is interesting because it indicates that there is
NO range containing at least two elements of the set over which all
elements of the set have a finite representation. Ray's set has a finite
representation for any range [x,1] where 0 < x < 1, since only the
finitely many 1/n with n <= 1/x fall in it. The rationals and reals
don't.

Patricia
 

Tom N

Dale said:
It has more to do with the fact that the set of reals is uncountably
infinite while the set of integers is countably infinite. There are an
infinite number of reals between any two reals so no finite
representation can represent all values over any range.

Finite = limited.

If the mantissa were not limited in size, that is, if it were infinite, then it could represent all reals.
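
Indeed, even a simple decimal like 0.1 already needs a non-terminating
binary mantissa:
$$0.1_{10} = 0.0\overline{0011}_2 = 0.0001100110011\ldots_2,$$
which is why no finite significand can hold it exactly.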
 
