Math errors in Python


Chris S.

Richard said:
Do you really think Pi equals 22/7?

Of course not. That's just a common approximation. Irrational numbers
are an obvious exception, but we shouldn't sacrifice the accuracy of
common decimal math just for their sake.
22/7 = 3.14285714286

What do you get on your $20 calculator?

The same thing actually.
 

Gary Herron

Sqrt is a fair criticism, but Pi equals 22/7,

What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
They don't even share three digits beyond the decimal point. (Can you
really be that ignorant about numbers and expect to contribute
intelligently to a discussion about numbers? Pi is a non-repeating
and non-ending number in base 10 or any other base.)

exactly the form this
arithmetic is meant for. Any decimal can be represented by a fraction,
yet not all fractions can be represented by decimals. My point is that
such simple accuracy should be supported out of the box.


So are our brains, yet we somehow manage to compute 12.10 + 8.30
correctly using nothing more than simple skills developed in
grade-school. You could theoretically compute an infinitely long
equation by simply operating on single digits, yet Python, with all of
its resources, can't overcome this hurdle?

However, I understand Python's limitation in this regard. This
inaccuracy stems from the traditional C mindset, which typically
dismisses any approach not directly supported in hardware. As the FAQ
states, this problem is due to the "underlying C platform". I just find
it funny how a $20 calculator can be more accurate than Python running
on a $1000 Intel machine.

If you are happy doing calculations with decimal numbers like 12.10 +
8.30, then the Decimal package may be what you want, but that fails as
soon as you want 1/3. But then you could use a rational arithmetic
package and get 1/3, but that would fail as soon as you needed sqrt(2)
or Pi. But then you could try ... what? Can you see the pattern
here? Any representation of the infinity of numbers on a finite
computer *must* necessarily be unable to represent some (actually
infinitely many) of those numbers. The inaccuracies stem from that
fact.
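A minimal sketch of that pattern, using the decimal module that ships with Python 2.4 (written with print() calls so it also runs on current Pythons):

from decimal import Decimal

# Exact for the decimal literals this arithmetic is meant for:
print(Decimal("12.10") + Decimal("8.30"))   # 20.40, exactly

# ...but 1/3 has no finite decimal expansion, so it must be rounded:
print(Decimal(1) / Decimal(3))              # 0.3333333333333333333333333333

# A rational type would hold 1/3 exactly, but then sqrt(2) and pi escape
# it in turn -- no finite representation covers every real number.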

Hardware designers have settled on a binary representation of floating
point numbers, and both C and Python use the underlying hardware
implementation. (Try your calculation in C -- you'll get the same
result if you choose to print out enough digits.)

And BTW, your calculator is not, in general, more accurate than the
modern IEEE binary hardware representation of numbers used on most of
today's computers. It is more accurate on only a select subset of all
numbers, and it does a good job of fooling you in those cases where it
loses accuracy, by doing calculations on more digits than it displays,
and rounding off to the on-screen digits.

So while a calculator will fool you into believing it is accurate when
it is not, it is Python's design decision to not cater to fools.

Dr Gary Herron
 

Michel Claveau - abstraction méta

Hi!


No. BCD works differently: two digits per byte. The calculation is
basically integer arithmetic; it's the library that manages the decimal
point.

There are no rounding problems.


@-salutations
 

Paul Rubin

Gary Herron said:
Any representation of the infinity of numbers on a finite computer
*must* necessarily be unable to represent some (actually infinitely
many) of those numbers. The inaccuracies stem from that fact.

Well, finite computers can't even represent all the integers, but
we can reasonably think of Python as capable of doing exact integer
arithmetic.

The issue here is that Python's behavior confuses the hell out of some
new users. There is a separate area of confusion, that

a = 2 / 3

sets a to 0, and to clear that up, the // operator was introduced and
Python 3.0 will supposedly treat / as floating-point division even
when both operands are integers. That doesn't solve the also very
common confusion that a computation like (1.0/3.0)*3.0 can come out as
0.999999... rather than 1. Rational arithmetic can solve that.
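A short sketch of that division confusion, under Python 2.x semantics (the __future__ import is how 2.x opts into the 3.0 behaviour):

from __future__ import division   # opt in to 3.0-style "true" division

print(2 // 3)   # floor division: 0 in both Python 2 and 3
print(2 / 3)    # 0.666... with the future import; plain Python 2 gives 0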

Yes, with rational arithmetic, it will still be true that
sqrt(5.)**2.0 doesn't quite equal 5, but hardly anyone ever complains
about that.

And yes, there are languages that can do exact arithmetic on arbitrary
algebraic numbers, but they're normally not used for general-purpose
programming.
 

Heiko Wundram

On Sunday, 19 September 2004 09:05, Chris S. wrote:
That's nonsense. My 7-year-old TI-83 performs that calculation just
fine, and you're telling me, in this day and age, that Python running on
a modern 32-bit processor can't even compute simple decimals accurately?
Don't defend bad code.

Do you actually know how your TI-83 works? If you did, you wouldn't be
complaining about bad code or something. The TI-83 is hiding something from
you, not Python.

This discussion is so senseless and inflammatory that I take the OP to be a
troll...

Heiko.
 

Heiko Wundram

On Sunday, 19 September 2004 09:39, Gary Herron wrote:
That's called rational arithmetic, and I'm sure you can find a package
that implements it for you. However what would you propose for
irrational numbers like sqrt(2) and transcendental numbers like PI?

Just as an example, try gmpy: unlimited-precision integer and rational
arithmetic. But don't expect anything beyond the four basic operations on
rationals, because algorithms like sqrt and pow would become so slow that
nobody sensible would use them; people rather just stick to the binary
arithmetic the computer uses (which might have some minor effects on
precision, but these can be bounded).
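For instance (a sketch assuming gmpy is installed; mpq is its rational type):

import gmpy   # third-party: unlimited-precision integers and rationals

x = gmpy.mpq(1, 3)           # the rational 1/3, held exactly
print(x * 3)                 # 1 -- no 0.999... artifact
print(x + gmpy.mpq(1, 7))    # 10/21, again exact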

Heiko.
 

Peter Otten

Paul said:
I haven't tried 2.4 yet. After

a = Decimal("1") / Decimal("3")
b = a * Decimal("3")
print b

What happens? Is that arithmetic the way I know it?

The author is currently working on an installer, but just dropping it into
2.3's site-packages should work, too.

Decimal as opposed to rational:
Decimal("0.9999999999999999999999999999")

Many people can cope with the inaccuracy induced by base 10 representations
and are taken by surprise by base 2 errors.
But you are right I left too much room for interpretation.

Peter
 

Peter Otten

Chris said:
Call it what you will, it doesn't produce the correct result. Where I
come from, that's either bad or broken.

If there is a way to always get the "correct" result in numerical
mathematics, I don't know it. But I'm not an expert. Can you enlighten me?

Expressions like a*b+c are not affected by the choice of float/Decimal.
Values are normally read from a file or given interactively by a user. I
supposed that what you called inconvenient was being limited to decimal
constants (Decimal("1.2") vs. 1.2 for floats), and questioned its
significance, especially as scientific users will probably continue to use
floats.

Peter
 

Alex Martelli

Chris S. said:
Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this

Of course it doesn't. What a silly assertion.
arithmetic is meant for. Any decimal can be represented by a fraction,

And pi can't be represented by either (if you mean _finite_ decimals and
fractions).
yet not all fractions can be represented by decimals. My point is that
such simple accuracy should be supported out of the box.

In Python 2.4, decimal computations are indeed "supported out of the
box", although you do explicitly have to request them (the default
remains floating-point). In 2.3, you have to download and use any of
several add-on packages (decimal computations and rational ones have
very different characteristics, so you do have to choose) -- big deal.

So are our brains, yet we somehow manage to compute 12.10 + 8.30
correctly using nothing more than simple skills developed in

Using base 10, sure. Or, using fractions, even something that decimals
would not let you compute finitely, such as 1/7+1/6.
grade-school. You could theoretically compute an infinitely long
equation by simply operating on single digits,

Not in finite time, you couldn't (excepting a few silly cases where the
equation is "infinitely long" only because of some rule that _can_ be
finitely expressed, so you don't even have to LOOK at all the equation
to solve [which is what I guess you mean by "compute"...?] it -- if you
have to LOOK at all of the equation, and it's infinite, you can't get
done in finite time).
yet Python, with all of
its resources, can't overcome this hurdle?

The hurdle of decimal arithmetic, you mean? Download Python 2.4 and
play with decimal to your heart's content. Or do you mean fractions?
Then download gmpy and ditto. There are also packages for symbolic
computation and even more exotic kinds of arithmetic.

In practice, with the sole exception of monetary computations (which may
often be constrained by law, or at the very least by customary
practice), there is no real-life use in which the _accuracy_ of floating
point isn't ample. There are nevertheless lots of traps in arithmetic,
but switching to forms of arithmetic different from float doesn't really
make all the traps magically disappear, of course.
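Two such traps, sketched in a couple of lines -- neither goes away merely by changing the radix:

print((1e16 + 1.0) - 1e16)                     # 0.0 -- the added 1.0 was lost to rounding
print(0.1 + (0.2 + 0.3) == (0.1 + 0.2) + 0.3)  # False -- float addition isn't associative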

However, I understand Python's limitation in this regard. This
inaccuracy stems from the traditional C mindset, which typically
dismisses any approach not directly supported in hardware. As the FAQ

Ah, I see, a case of "those who can't be bothered to learn a LITTLE
history before spouting off" etc etc. Python's direct precursor, the
ABC language, used unbounded-precision rationals. As a result (obvious
to anybody who bothers to learn a little about the inner workings of
arithmetic), the simplest-looking string of computations could easily
consume all the memory at your computer's disposal, and then some, and
apparently unbounded amounts of time. It turned out that users object,
most of the time, to having some apparently trivial computation take
hours, rather than seconds, in order to be unboundedly precise rather
than, say, precise to "just" a couple hundred digits (far more digits
than you need to count the number of atoms in the Galaxy). So,
unbounded rationals as a default are out -- people may sometimes SAY
they want them, but in fact, in an overwhelming majority of the cases,
they actually do not (oh yes, people DO lie, first of all to
themselves:).
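A sketch of that blow-up, using the Fraction type from later Pythons' standard library (the iteration is illustrative; ABC's unbounded rationals grew the same way):

from fractions import Fraction

x = Fraction(1, 3)
for _ in range(10):
    x = x * x + 1                   # an innocent-looking iteration...
print(len(str(x.denominator)))      # ...489: the denominator is now 3**1024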

As for decimals, that's what a very-high level language aiming for a
niche very close to Python used from the word go. It got started WAY
before Python -- I was productively using it over 20 years ago -- and
had the _IBM_ brand on it, which at the time pretty much meant the
thousand-pound gorilla of computers. So where is it now, having had
all of these advantages (started years before, had IBM behind it, AND
was totally free of "the traditional C mindset", which was very far from
traditional at the time, particularly within IBM...!)...?

Googlefight is a good site for this kind of comparison... try:

<http://www.googlefight.com/cgi-bin/compare.pl?q1=python&q2=rexx
&B1=Make+a+fight%21&compare=1&langue=us>

and you'll see...:
"""
Number of results on Google for the keywords python and rexx:

python
(10 300 000 results)
versus
rexx
( 419 000 results)

The winner is: python
"""

Not just "the winner", an AMAZING winner -- over TWENTY times more
popular, despite all of Rexx's advantages! And while there are no doubt
many fascinating components to this story, a key one is among the pearls
of wisdom you can read by doing, at any Python interactive prompt:

import this

and it is: "practicality beats purity". Rexx has always been rather
puristic in its adherence to its principles; Python is more pragmatic.
It turns out that this is worth a lot in the real world. Much the same
way, say, C ground PL/I into the dust. Come to think of it, Python's
spirit is VERY close to C (4 and 1/2 of the 5 principles listed as
"the spirit of C" in the C ANSI Standard's introduction are more closely
followed by Python than by other languages which borrowed C's syntax,
such as C++ or Java), while Rexx does show some PL/I influence (not
surprising for an IBM-developed language, I guess).

Richard Gabriel's famous essay on "Worse is Better", e.g. at
<http://www.jwz.org/doc/worse-is-better.html>, has more, somewhat bitter
reflections in the same vein.

Python never had any qualms in getting outside the "directly supported
in hardware" boundaries, mind you. Dictionaries and unbounded precision
integers are (and have long been) Python mainstays, although neither the
hardware nor the underlying C platform has any direct support for
either. For non-integer computations, though, Python has long been well
served by relying on C, and nowadays typically the HW too, to handle
them, which implied the use of floating-point; and leaving the messy
business of implementing the many other possibly useful kinds of
non-integer arithmetic to third-party extensions (many in fact written
in Python itself -- if you're not in a hurry, they're fine, too).

With Python 2.4, somebody finally felt enough of an itch regarding the
issue of getting support for decimal arithmetic in the Python standard
library to go to the trouble of scratching it -- as opposed to just
spouting off on a mailing list, or even just implementing what they
personally needed as just a third-party extension (there are _very_ high
hurdles to jump, to get your code into the Python standard library, so
it needs strong motivation to do so as opposed to just releasing your
own extension to the public).
states, this problem is due to the "underlying C platform". I just find
it funny how a $20 calculator can be more accurate than Python running
on a $1000 Intel machine.

You can get a calculator much cheaper than that these days (and "Intel
machines" not too far out of the mainstream sell for well under half, as
well as several times, your stated price). It's pretty obvious that the
price of the hardware has nothing to do with that "_CAN_ be more
accurate" issue (my emphasis) -- which, incidentally, remains perfectly
true even in Python 2.4: it can be less, more, or just as accurate as
whatever calculator you're targeting, since the precision of decimal
computation is one of the aspects you can customize specifically...


Alex
 

Grant Edwards

Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
arithmetic is meant for.

Any decimal can be represented by a fraction, yet not all
fractions can be represented by decimals. My point is that
such simple accuracy should be supported out of the box.

It is. Just not with floating point.
So are our brains, yet we somehow manage to compute 12.10 + 8.30
correctly using nothing more than simple skills developed in
grade-school. You could theoretically compute an infinitely long
equation by simply operating on single digits, yet Python, with all of
its resources, can't overcome this hurdle?

Sure it can.
However, I understand Python's limitation in this regard. This
inaccuracy stems from the traditional C mindset, which
typically dismisses any approach not directly supported in
hardware. As the FAQ states, this problem is due to the
"underlying C platform". I just find it funny how a $20
calculator can be more accurate than Python running on a $1000
Intel machine.

You're clueless on so many different points, I don't even know
where to start...
 

Gary Herron

A nice thoughtful answer Alex, but possibly wasted, as it's been
suggested that he is just a troll. (Note his assertion that Pi=22/7
in one post and the assertion that it is just a common approximation
in another, and this in a thread about numeric imprecision.)

Gary Herron


 

Alex Martelli

Gary Herron said:
What? WHAT? Are you nuts? Pi and 22/7 are most certainly not equal.
They don't even share three digits beyond the decimal point. (Can you
really be that ignorant about numbers and expect to contribute
intelligently to a discussion about numbers? Pi is a non-repeating
and non-ending number in base 10 or any other base.)

Any _integer_ base -- you can find infinitely many irrational bases in
which pi has repeating or terminating expansion (for example, you could
use pi itself as a base;-). OK, OK, I _am_ being silly!-)
If you are happy doing calculations with decimal numbers like 12.10 +
8.30, then the Decimal package may be what you want, but that fails as
soon as you want 1/3.

But it fails in exactly the same way as a cheap calculator of the same
precision, and some people just have a fetish for that.
But then you could use a rational arithmetic
package and get 1/3, but that would fail as soon as you needed sqrt(2)
or Pi. But then you could try ... what? Can you see the pattern

Uh, "constructive reals", such as those you can find at
<http://www.hpl.hp.com/personal/Hans_Boehm/crcalc/> ...?

"Numbers are represented exactly internally to the calculator, and then
evaluated on demand to guarantee an error in the displayed result that
is strictly less than one in the least significant displayed digit. It
is possible to scroll the display to the right to generate essentially
arbitrary precision in the result." It has trig, logs, etc.
here? Any representation of the infinity of numbers on a finite
computer *must* necessarily be unable to represent some (actually
infinitely many) of those numbers. The inaccuracies stem from that
fact.

Yes, _but_. There is after all a *finite* set of reals you can describe
(constructively and univocally) by equations that you can write finitely
with a given finite alphabet, right? So, infinitely many (and indeed
infinitely many MORE, since reals overall are _uncountably_ infinite;-)
reals are of no possible constructive interest -- if we were somehow
given one, we would have no way to verify that it is what it is claimed
to be, anyway, since no claim for it can be written finitely over
whatever finite alphabet we previously agreed to use. So, I think we
can safely restrict discourse by ignoring, at least, the _uncountably_
infinite aspects of reals and sticking to some "potentially
constructively interesting" subset that is _countably_ infinite.

At this point, the theoretical problems aren't much worse than those you
meet with, say, integers, or just rationals, etc. Sure, you can't
represent any but a finite subset of integers (or rationals, etc) in a
finite computer _in a finite time_, yet that implies no _inaccuracy_
whatsoever -- specify your finite alphabet and the maximum size of
equation you want to be able to write, and I'll give you the specs for
how big a computer I will need to serve your needs. Easy!

A "constructive reals" library able to hold and manipulate all reals
that can be described as the sum of convergent series such that the Nth
term of the series is a ratio of polynomials in N whose tuples of
coefficients fit comfortably in memory (with space left over for some
computation), for example, would amply suffice to deal with all commonly
used 'transcendentals', such as the ones arising from trigonometry,
logarithms, etc, and many more besides. (My memories of arithmetic are
SO rusty I don't even recall if adding similarly constrained continued
fractions to the mix would make any substantial difference, sigh...).

If you ask for some sufficiently big computation you may happen to run
out of memory -- not different from what happens if you ask for a
raising-to-power between two Python longs which happen to be too big
for your computer's memory. Buy more memory, move to a 64-bit CPU (and
a good OS for it), whatever: it's not a problem of _accuracy_, anyway.

It MAY be a problem of TIME -- if you're in any hurry, and have upgraded
your computer to have a few hundred terabytes of memory, you MAY be
disappointed at how deucedly long it takes to get that multiplication
between longs that just happened to overflow the memory resources of
your previous machine which had just 200 TB. If you ask for an infinite
representation of whatever, it will take an infinite time for you to see
it, of course -- your machine will keep emitting digits at whatever
rate, even very fast, but if the digits never stop coming then you'll
never stop staring at them able to truthfully say "I've seen them ALL".
But that's an effect that's easy to get even with such a simple
computation as 1/3... it may easily be held with perfect accuracy inside
the machine, just by using rationals, but if you want to see it as a
decimal number you'll never be done. Similarly for sqrt(2) and so on.
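A long-division generator makes the point: the fraction is held exactly, yet the stream of display digits never ends (digits_of is illustrative):

from itertools import islice

def digits_of(p, q):
    # yield the decimal digits of p/q (for 0 <= p < q), forever
    while True:
        p *= 10
        yield p // q
        p %= q

print(list(islice(digits_of(1, 3), 12)))   # [3, 3, 3, ...] -- it never stops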

But again it's not a problem of _accuracy_, just one of patience;-). If
the machine is well programmed you'll never see even one wrong digit, no
matter how long you keep staring and hoping to catch an accuracy issue.

The reason we tend to use limited accuracy more often than strictly
needed is that we typically ARE in a hurry. E.g., I have measured the
radius of a hemispherical fishbowl at 98.13 cm and want to know how much
water I need to fetch to fill it: I do NOT want to spend eons checking
out the millionth digit -- I started with a measurement that has four or
so significant digits (way more than _typical_ real-life measurements in
most cases, btw), it's obvious that I'll be satisfied with just a few
more significant digits in the answer. In fact, Python's floats are
_just fine_ for just about any real-life computation, excluding ones
involving money (which may often be constrained by law or at least by
common practice) and some involving combinatorial arithmetic (and thus,
typically, ratios between very large integers), but the latter only
apply to certain maniacs trying to compute stuff about games (such as,
yours truly;-).
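The fishbowl arithmetic, for concreteness (the litre conversion is the only thing added to the numbers above):

import math

r_cm = 98.13                                  # measured radius: four significant digits
litres = (2.0 / 3.0) * math.pi * r_cm ** 3 / 1000.0   # hemisphere volume, cm^3 -> litres
print(round(litres, 1))                       # ~1979.1 -- measurement error dominates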

So while a calculator will fool you into believing it is accurate when
it is not, it is Python's design decision to not cater to fools.

Well put (+1 QOTW). But constructive reals are still COOL, even if
they're not of much practical use in real life;-).


Alex
 

Alex Martelli

Paul Rubin said:
The issue here is that Python's behavior confuses the hell out of some
new users. There is a separate area of confusion, that

a = 2 / 3

sets a to 0, and to clear that up, the // operator was introduced and
Python 3.0 will supposedly treat / as floating-point division even
when both operands are integers. That doesn't solve the also very
common confusion that a computation like (1.0/3.0)*3.0 can come out as
0.999999... rather than 1. Rational arithmetic can solve that.

Yes, but applying rational arithmetic by default might slow some
computations far too much for beginners' liking! My favourite for
Python 3.0 would be to have decimals by default, with special notations
to request floats and rationals (say '1/3r' for a rational, '1/3f' for a
float, '1/3' or '1/3d' for a decimal with some default parameters such
as number of digits). This is because my guess is that most naive users
would _expect_ decimals by default...

Yes, with rational arithmetic, it will still be true that
sqrt(5.)**2.0 doesn't quite equal 5, but hardly anyone ever complains
about that.

And yes, there are languages that can do exact arithmetic on arbitrary
algebraic numbers, but they're normally not used for general-purpose
programming.

Well, you can pretty easily use constructive reals with Python, see for
example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a
vastly vaster set than just algebraic numbers. If we DO want precision,
after all, why should sqrt(5) be more important than log(3)?


Alex
 

Alex Martelli

Heiko Wundram said:
On Sunday, 19 September 2004 09:39, Gary Herron wrote:

Just as an example, try gmpy: unlimited-precision integer and rational
arithmetic. But don't expect anything beyond the four basic operations on
rationals, because algorithms like sqrt and pow would become so slow that
nobody sensible would use them; people rather just stick to the binary
arithmetic the computer uses (which might have some minor effects on
precision, but these can be bounded).

Guilty as charged, but with a different explanation. I don't support
raising a rational to a rational exponent, not because it would "become
slow", but because it could not return a rational result in general.
When it CAN return a rational result, I'm happy as a lark to support it:
raising to the power 1/2 (which is the same as saying, taking the
square root) is supported in gmpy only when the base is a rational which
IS the square of some other rational -- and similarly for other
fractional exponents.
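The rule is easy to sketch outside gmpy with later Pythons' Fraction type (rational_sqrt is an illustrative helper, not gmpy's API):

from fractions import Fraction
from math import isqrt

def rational_sqrt(q):
    # return sqrt(q) as a Fraction IF q is the square of a rational, else refuse
    rn, rd = isqrt(q.numerator), isqrt(q.denominator)
    if rn * rn == q.numerator and rd * rd == q.denominator:
        return Fraction(rn, rd)
    raise ValueError("not the square of a rational")

print(rational_sqrt(Fraction(4, 9)))   # 2/3
# rational_sqrt(Fraction(2, 1)) raises ValueError: sqrt(2) is not rational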

Say you're content with finite precision, and your problem is that
getting only a few dozen bits' worth falls far short of your ambition,
as you want _thousands_. Well, you don't have to "stick to the
arithmetic your computer uses", with its paltry dozens of bits' worth of
precision -- you can have just as many as you wish. For example, the
square root of 2 computed to 2222 bits:
mpf('1.41421356237309504880168872420969807856967187537694807317667973799
073247846210703885038753432764157273501384623091229702492483605585073721
264412149709993583141322266592750559275579995050115278206057147010955997
160597027453459686201472851741864088919860955232923048430871432145083976
260362799525140798968725339654633180882964062061525835239505474575028775
996172983557522033753185701135437460340849884716038689997069900481503054
402779031645424782306849293691862158057846311159666871301301561856898723
723528850926486124949771542183342042856860601468247207714358548741556570
696776537202264854470158588016207584749226572260020855844665214583988939
4437092659180031138824646815708263e0',2222)

Of course, this still has bounded accuracy (gmpy doesn't do constructive
reals...):
mpf('1.21406321925474744732602075007044436621136403661789690072865954475
776298522118244419272674806546441529118557492550101271984681584381130555
892259118178248950179953390159664508815540959644741794226362686473376767
055696411211498987561487078708187675060063022704148995680107509652317604
479364576039827518913272446772069713871266672454279184421635785339332972
791970690781583948212784883346298572710476658954707852342842150889381157
563045936231138515406709376167997169879900784347146377935422794796191261
624849740964942283842868779082292557869166024095318326003777296248197487
885858223175591943112711481319695526039760318353849240080721341697065981
8471278600062647147473105883272095e-674',2222)

i.e., there IS an error of about 10 to the minus 674 power, i.e. a
precision of barely more than a couple of thousand bits -- but then,
that IS what you asked for, with that '2222'!-)

Computing square roots (or whatever) directly on rationals would be no
big deal, were there demand -- you'd still have to tell me what kind of
loss of accuracy you're willing to tolerate, though. I personally find
it handier to compute with mpf's (high-precision floats) and then turn
the result into rationals with a Stern-Brocot algorithm...:
mpq(87787840362947056221389837099888119784184900622573984346903816053706
510038371862119498502008227696594958892073744394524220336403937617412073
521953746033135074321986796669379393248887099312745495535792954890191437
233230436746927180393035328284490481153041398619700720943077149557439382
34750528988254439L,62075377226361968890337286609853165704271551096494666
544033975362265504696870569409265091955693911548812764050925469857560059
623789287922026078165497542602022607603900854658753038808290787475128940
694806084715129308978288523742413573494221901565588452667869917019091383
93125847495825105773132566685269L)

If you need the square root of two as a rational number with an error of
less than 1 in 2**2000, I think this is a reasonable approach. As for
speed, this is quite decently usable in an interactive session in my
cheap and cheerful Mac iBook 12" portable (not the latest model, which
is quite a bit faster, much less the "professional" Powerbooks -- I'm
talking about an ageing, though good-quality, consumer-class machine!).
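A toy version of that last conversion step (stern_brocot is an illustrative helper; the actual code isn't shown in the post) -- descend the tree of mediants until the fraction is within tolerance of the target:

def stern_brocot(x, tol):
    # approximate x > 0 by a fraction, walking down the Stern-Brocot tree
    ln, ld, rn, rd = 0, 1, 1, 0          # bounds start at 0/1 and 1/0 ("infinity")
    while True:
        mn, md = ln + rn, ld + rd        # the mediant of the two bounds
        if abs(float(mn) / md - x) < tol:
            return mn, md
        if float(mn) / md < x:
            ln, ld = mn, md              # mediant too small: new lower bound
        else:
            rn, rd = mn, md              # mediant too large: new upper bound

print(stern_brocot(2 ** 0.5, 1e-9))      # (47321, 33461), a convergent of sqrt(2)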

gmpy (or to be more precise the underlying GMP library) runs optimally
on AMD Athlon 32-bit processors, which happen to be dirt cheap these
days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
Athlon chip would no doubt let you use way more than these humble couple
thousand bits for such interactive computations while maintaining a
perfectly acceptable interactive response time.


Alex
 

Alex Martelli

Gary Herron said:
A nice thoughtful answer Alex, but possibly wasted, as it's been
suggested that he is just a troll. (Note his assertion that Pi=22/7
in one post and the assertion that it is just a common approximation
in another, and this in a thread about numeric imprecision.)

If he's not a troll, he _should_ be -- it's just too sad to consider the
possibility that somebody is really that ignorant and arrogant at the
same time (although, tragically, human nature is such as to make that
entirely possible). Nevertheless, newsgroups and mailing lists have an
interesting characteristic: no "thoughtful answer" need ever be truly
wasted, even if the person you're answering is not just a troll, but a
robotized one, _because there are other readers_ who may find
interest, amusement, or both, in that answer. On a newsgroup, or
very-large-audience mailing list, one doesn't really write just for the
person you're nominally answering, but for the public at large.


Alex
 

Dennis Lee Bieber

Chris S. said:
That's nonsense. My 7-year-old TI-83 performs that calculation just
fine, and you're telling me, in this day and age, that Python running on

Most calculators use 1) BCD, and 2) they keep guard digits
(about two extra digits) that are not displayed. Using the guard digits,
the calculator performs rounding to the display resolution.

1.0 / 3.0  =>  0.3333333     (displayed)
               0.3333333|33  (internal; digits after | are the guards)
    * 3.0  =>  0.9999999|99  (internal)
               1.0           (displayed, after rounding off the guards)
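That hiding trick is easy to simulate with the decimal module -- compute on ten digits, display eight (the particular precisions are illustrative):

from decimal import Decimal, getcontext

getcontext().prec = 10                    # "internal" precision: 8 shown + 2 guards
x = Decimal(1) / Decimal(3)               # 0.3333333333 internally
y = x * 3                                 # 0.9999999999 internally
print(y.quantize(Decimal("1.0000000")))   # 1.0000000 -- rounded to the display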

Strangely, HP's tended not to hold guard digits... My HP-48sx
gives the all-9s result, and I recall older models also not having
guards.

Most calculators that use guard digits can be identified by 1) the example
sequence returning 1.0, and 2) doing the 1/3, then manually subtracting the
value you see on the display -- often you'll get a small residue like
3.3E-…

Chris S. said:
a modern 32-bit processor can't even compute simple decimals accurately?
Don't defend bad code.

Before you accuse Python of bad code (you might as well accuse
Intel and AMD, since they make the floating-point processors in most
machines), take the time to learn how calculators function internally.
My college actually offered a class on using scientific calculators,
including details on guard digits, arithmetic vs algebraic vs RPN, etc.
--
 

Alex Martelli

Dennis Lee Bieber said:
Strangely, HP's tended not to hold guard digits... My HP-48sx
gives the all-9s result, and I recall older models also not having
guards.

Nothing strange there -- HP's calculators were squarely aimed at
scientists and engineers, who are supposed to know what they're doing
when it comes to numeric computation (they mostly _don't_, but they like
to kid themselves that they do!-).


Alex
 

Paul Rubin

Yes, but applying rational arithmetic by default might slow some
computations far too much for beginners' liking!

I dunno, lots of Lisp dialects do rational arithmetic by default.
Well, you can pretty easily use constructive reals with Python, see for
example <http://more.btexact.com/people/briggsk2/XR.html> -- that's a
vastly vaster set than just algebraic numbers. If we DO want precision,
after all, why should sqrt(5) be more important than log(3)?

I don't know that it's generally tractable to do exact computation on
constructive reals. How do you implement comparison (<, >, ==)?
 

Dan Bishop

Michel Claveau - abstraction méta-galactique non triviale en fuite perpétuelle said:
Hi!


No. BCD works differently: two digits per byte. The calculation is
basically integer arithmetic; it's the library that manages the decimal
point.

There are no rounding problems.

Yes, there are. Rounding problems don't occur in the contrived
examples that show that "BCD is better than binary", but they do
occur, especially with division.
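For instance, with an eight-digit decimal context standing in for an 8-digit BCD calculator:

from decimal import Decimal, getcontext

getcontext().prec = 8
print(Decimal(1) / Decimal(7) * 7)   # 0.99999998 -- decimal rounds on division too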
 
