Math errors in Python


Alex Martelli

Andrew Dalke said:
Uncle Tim:

Personally I've found that pie is usually round, though
if you're talking price I agree -- I can usually get a
slice for about $3, more like $3.14 with tax. I like
mine apple, with a bit of ice cream.

Strange spelling though.

Yeah, everybody knows it's spelled "py"!


Alex
 

Paul Foley

[Chris S.]
Sqrt is a fair criticism, but Pi equals 22/7, exactly the form this
arithmetic is meant for.
That's absurd. Pi is 3.

Except in Indiana, where it's 4, of course.

--
Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
-- Howard Aiken
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
 

Paul Foley

There is not much of a precision/speed tradeoff in Common Lisp: you can
use rational numbers (which give you exact results with the operations
+, -, * and /) internally and round them off to decimal before
display. With the OP's example:
(+ 1210/100 830/100)
102/5
(coerce * 'float)
20.4
Integers can have an unlimited number of digits, but the precision of
floats and reals is still limited to what the hardware can do, so if

Most CL implementations only support the hardware float types, that's
true, but it's not required by the spec.

CLISP's long-float has arbitrary precision (set by the user in
advance).

[And the Common Lisp type named "real" is the union of floats and
rationals; they're certainly not limited by hardware support]
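
For comparison, the same exact-rational trick in Python (a sketch added
in editing; the fractions module arrived later, in Python 2.6):

from fractions import Fraction

# Exact rational arithmetic, mirroring the CL example above:
total = Fraction(1210, 100) + Fraction(830, 100)
print(total)         # 102/5 -- exact; no binary rounding occurred
print(float(total))  # 20.4  -- rounded to a float only for display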


--
Don't worry about people stealing your ideas. If your ideas are any good,
you'll have to ram them down people's throats.
-- Howard Aiken
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
 

Frithiof Andreas Jensen

Alex Martelli wrote:

Nothing strange there -- HP's calculators were squarely aimed at
scientists and engineers, who are supposed to know what they're doing
when it comes to numeric computation (they mostly _don't_, but they like
to kid themselves that they do!-).

Oi!!! I resemble that remark!

;-)
 

Alex Martelli

Frithiof Andreas Jensen wrote:


Oi!!! I resemble that remark!

;-)

OK, I should have used first person plural to count myself in, since,
after all, I _am_ an engineer...: _we_ mostly don't, but we like to kid
ourselves that we do!-)


Alex
 

Heiko Wundram

On Sunday, 19 September 2004, at 19:41, Alex Martelli wrote:
gmpy (or to be more precise the underlying GMP library) runs optimally
on AMD Athlon 32-bit processors, which happen to be dirt cheap these
days, so a cleverly-purchased 300-dollars desktop Linux PC using such an
Athlon chip would no doubt let you use way more than these humble couple
thousand bits for such interactive computations while maintaining a
perfectly acceptable interactive response time.

But still, no algorithm implemented in software will ever beat the
FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my
point... And error calculation is always possible, so that you can give
bounds to your result, even when using normal floating point arithmetic. And,
even when using GMPy, you have to know about the underlying limitations of
binary floating point so that you can reorganize your code if need be to add
precision (because one calculation might be much less precise if done in some
way than in another).

Heiko.
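
A minimal sketch of that reorganization point (added in editing, using
plain Python floats): grouping the same sum two ways gives two different
answers, so restructuring a computation really can buy back precision.

a, b, c = 2.0**53, 1.0, -2.0**53

print((a + b) + c)  # 0.0 -- the 1.0 is rounded away when added to 2**53
print(a + (b + c))  # 1.0 -- exact: the two huge terms cancel first

print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: float + isn't associative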
 

Dan Bishop

On 19 Sep 2004 15:24:31 -0700, Dan Bishop wrote:
[...]
There are, of course, reasonably accurate rational approximations of
pi. For example, 355/113 (accurate to 6 decimal places), 312689/99532
(9 decimal places), or 3126535/995207 (11 decimal places). Also, the
IEEE 754 double-precision representation of pi is equal to the
rational number 4503599627370496/281474976710656.
(16L, 0L)

a little glitch somewhere? ;-)

Oops. I meant 884279719003555/281474976710656.
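
Those figures are easy to check in a current Python (verification added
in editing; note that 4503599627370496/281474976710656 is exactly
2**52/2**48, which explains the "(16L, 0L)" above):

import math

print(abs(355/113 - math.pi))                        # ~2.7e-07: 6 decimal places
print(4503599627370496 / 281474976710656)            # 16.0 -- the glitch
print(884279719003555 / 281474976710656 == math.pi)  # True: the exact double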
 

Alex Martelli

Heiko Wundram said:
On Sunday, 19 September 2004, at 19:41, Alex Martelli wrote:

But still, no algorithm implemented in software will ever beat the
FADD/FMUL/FDIV/FPOW/FSIN/FCOS etc. instructions in runtime, that was my

Yep, the hardware would have to be designed in a very lousy way for its
instructions to run slower than software running on the same CPU;-).

If you're not using some "vectorized" package such as Numeric or
numarray, though, it's unlikely that you care about speed -- and if you
_are_ using Numeric or numarray, it doesn't matter to you what type
Python itself uses for some literal such as 3.17292 -- it only matters
(speedwise) what your computational package is using (single precision,
double precision, whatever).
point... And error calculation is always possible, so that you can give
bounds to your result, even when using normal floating point arithmetic. And,

Sure! Your problems come when the bounds you compute are not good
enough for your purposes (given how deucedly loose error-interval
computations tend to be, that's going to happen more often than actual
accuracy loss in your computations... try an interval-arithmetic package
some day, to see what I mean...).
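
For the curious, a bare-bones interval class shows how loose such bounds
get (an illustrative sketch added in editing, not any particular
package):

class Interval:
    """Closed interval [lo, hi]; every operation widens the bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)

x = Interval(0.99, 1.01)   # a value known to within 1%
print(x * x * x - x * x)   # about [-0.0498, 0.0502], yet the true range of
                           # x**3 - x**2 here is only about [-0.0098, 0.0102]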
even when using GMPy, you have to know about the underlying limitations of
binary floating point so that you can reorganize your code if need be to add
precision (because one calculation might be much less precise if done in some
way than in another).

Sure. Throwing more precision at a badly analyzed and structured
algorithm is putting a band-aid on a wound. I _have_ taught numeric
analysis to undergrads and nobody could have passed my course unless
they had learned to quote that "party line" back at me, obviously.

In the real world, the band-aid stops the blood loss often enough that
few practising engineers and scientists are seriously motivated to
remember and apply all they've learned in their numeric analysis courses
(assuming they HAVE taken some: believe it or not, it IS quite possible
to get a degree in engineering, physics, etc, in most places, without
even getting ONE course in numeric analysis! the university where I
taught was an exception only for _some_ of the degrees they granted --
you couldn't graduate in _materials_ engineering without that course,
for example, but you COULD graduate in _buildings_ engineering while
bypassing it...).

Yes, this IS a problem. But I don't know what to do about it -- after
all, I _am_ quite prone to taking such shortcuts myself... if some
computation is giving me results that smell wrong, I just do it over
with 10 or 100 times more bits... yeah, I _do_ know that will only work
99.99% of the time, leaving a serious problem, possibly hidden and
unsuspected, more often than one can be comfortable with. In my case, I
have excuses -- I'm more likely to have fallen into some subtle trap of
_statistics_, making my precise computations pretty meaningless anyway,
than to be doing perfectly correct statistics in numerically smelly ways
(hey, I _have_ been brought up, as an example of falling into traps, in
"American Statistician", but not yet, AFAIK, in any journal dealing with
numerical analysis...:).


Alex
 

Dan Bishop

Andrew Dalke said:
Andrea:

Or that the walls were 0.25 cubits thick, if you're talking
inner diameter vs. outer. ;)

Or it could be 9.60 cubits across and 30.16 cubits around, and the
numbers are rounded to the nearest cubit.

Also, I've heard that the original Hebrew uses an uncommon spelling of
the word for "line" or "circumference". Perhaps that affects the
meaning.
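
The rounding argument checks out (a one-line verification added in
editing):

print(30.16 / 9.60)               # 3.1416666... -- pi to within the rounding
print(round(9.60), round(30.16))  # 10 30 -- the whole-cubit figures in the verse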
 

Grant Edwards

the problem with BCD or other 'decimal' computations is that it either
doesn't have the dynamic range of binary floating point (~ +-10**310)

Huh? Why would BCD floating point have any less range than
binary floating point? Due to the space inefficiencies of BCD,
it would take a few more bits to cover the same range, but I
don't see your point.
 

Grant Edwards

This is from the Bible...

007:023 And he made a molten sea, ten cubits from the one brim to the
other: it was round all about, and his height was five cubits:
and a line of thirty cubits did compass it round about.

So it's clear that pi must be 3

If you've only got 1 significant digit in your measured values,
then Pi == 3 is a perfectly reasonable value to use.
 

Dennis Lee Bieber

Huh? Why would BCD floating point have any less range than
binary floating point? Due to the space inefficiencies of BCD,
it would take a few more bits to cover the same range, but I
don't see your point.

There /was/ an "or" in that sentence, which you trimmed out...

Though working with numbers that are stored in >150 bytes
doesn't interest me. Uhm, actually, to handle the +/- exponent range,
make that 300+ bytes (150+ bytes before the decimal, and the same after
it). As soon as you start storing an exponent as a separate component
you introduce a loss of precision in computations.
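
The byte count is easy to confirm (an editorial back-of-envelope,
assuming packed BCD at 2 digits per byte):

# ~308 decimal digits reach the top of the double range (~1.8e308),
# and packed BCD stores two digits per byte, on each side of the point:
print(308 / 2)  # 154.0 -- the "150+ bytes" per side mentioned above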
--
 

Grant Edwards

There /was/ an "or" in that sentence, which you trimmed out...

Sorry about that, but I wasn't addressing the other complaint,
just the lack of range part.
Though working with numbers that are stored in >150 bytes
doesn't interest me. Uhm, actually, to handle the +/- exponent
range, make that 300+ bytes (150+ bytes before the decimal,
and the same after it).

To get the same range and precision as a 32-bit IEEE, you need
4 bytes for mantissa and 2 for the exponent. That's 6 bytes,
not 300.
As soon as you start storing an exponent as a separate
component you introduce a loss of precision in computations.

I thought you were complaining about range and storage required
for BCD vs. binary.

Floating point BCD can have the same range and precision as
binary floating point with about a 50% penalty in storage
space.

If you're going to compare fixed point versus floating point,
that's a completely separate (and orthogonal) issue.
 

Dennis Lee Bieber

To get the same range and precision as a 32-bit IEEE, you need
4 bytes for mantissa and 2 for the exponent. That's 6 bytes,
not 300.
I'll concede that I may have missed something in the thread...
But if one were to propose using a floating BCD for something with only
7 significant decimal digits just to get decimal "repeating digits"
rather than binary ones... (1/3 = 0.33333333...) I'd be looking for a
different proposal.

Add in that an exponent shift is a change of 10, vs a change of
2 (assuming common normalized binary -- my college machine used
exponents that were powers of 16, meaning a normalized binary could have
up to three 0-bits) -- somehow it just feels like the powers of two
would retain finer precision when doing addition.

--
 

Grant Edwards

I'll concede that I may have missed something in the thread...
But if one were to propose using a floating BCD for something
with only 7 significant decimal digits just to get decimal
"repeating digits" rather than binary ones... (1/3=
0.33333333....) I'd be looking for a different proposal.

What proposal? I was just pointing out that the poster who
claimed that BCD didn't have the range that binary did was
wrong. He wasn't comparing BCD vs. binary, he was comparing
fixed point vs. floating point. Fixed point BCD and fixed
point binary both have the same issues with range.
Add in that an exponent shift is a change of 10, vs a change
of 2 (assuming common normalized binary -- my college machine
used exponents that were powers of 16, meaning a normalized
binary could have up to three 0-bits) -- somehow it just feels
like the powers of two would retain finer precision when doing
addition.

For the same storage space, binary FP will have more precision
and/or range: BCD leaves almost 40% of its code space unused.
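
Those storage claims can be made concrete with a little arithmetic (an
editorial sketch, not from the thread):

import math

print(24 * math.log10(2))  # ~7.2: decimal digits carried by a 24-bit significand
print(7 * 4)               # 28: bits needed for those 7 digits in BCD
print(6 / 16)              # 0.375: fraction of each BCD nibble's codes unused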
 

Dennis Lee Bieber

What proposal? I was just pointing out that the poster who

Hypothetical "proposal" -- not Python related. Merely that if I
were in the position of approving a proposed software design...
(typically I'm on the other side of that system -- the one proposing the
design and hoping for approval <G>).

--
 

Carl Banks

Chris S. said:
I just find
it funny how a $20 calculator can be more accurate than Python running
on a $1000 Intel machine.

Actually, if you look at Intel's track record, it isn't that surprising.

How many Intel Pentium engineers does it take to change a light bulb?
Three. One to screw in the bulb, and one to hold the ladder.
 

Grant Edwards

Actually, if you look at Intel's track record, it isn't that surprising.

How many Intel Pentium engineers does it take to change a light bulb?
Three. One to screw in the bulb, and one to hold the ladder.

Intel, where quality is Job 0.9999999997.
 

Brian van den Broek

Grant Edwards said unto the world upon 2004-09-21 16:12:
Intel, where quality is Job 0.9999999997.

Since we're playing:

Why'd Intel call it the Pentium chip?

'Cause they added 100 to 486 and got 585.999999999989

Brian vdB
 

Richard Hanson

Peter said:
The author is currently working on an installer, but just dropping it into
2.3's site-packages should work, too.

I just dropped decimal.py from 2.4's Lib dir into 2.3.4's Lib dir.
Seems to work. Any gotchas with this route?

By the way, I got decimal.py revision 1.24 from CVS several days ago
and noted a speedup of over an order of magnitude -- almost
twenty-five times faster with this simple snippet calculating a square
root to 500 decimal places. :)

[On Win98SE:]

| from time import clock
| from decimal import *
|
| a = Decimal('18974018374087403187404701740918.7481704084710473048017483047104')
| t = clock()
| b = a.sqrt(Context(prec=500))
|
| print "Time: ", clock()-t
| print "b =", b

With decimal.py from 2.4a3.2 dropped into 2.3.4's Lib dir:

| IDLE 1.0.3
| >>> ================================ RESTART ================================
| >>>
| Time: 7.40197958397
| b = 4355917627100793.0054682072286...[elided]...67722472416430409564807807874919604463
| >>>

With decimal.py from CVS (revision 1.24) in 2.3.4's Lib dir:

| IDLE 1.0.3
| >>> ================================ RESTART ================================
| >>>
| Time: 0.300008380965
| b = 4355917627100793.0054682072286...[elided]...67722472416430409564807807874919604463
| >>>

For a check, I did:

| >>> setcontext(Context(prec=500))
| >>> b * b
| Decimal("18974018374087403187404701740918.748170408471047304801748304710400...[lotsa zeroes]...00")

Pretty damn impressive! -- Try it, you'll like it!

Good job to the crew for Decimal and the latest optimizations!


now-I-just-need-atan[2]()-ly y'rs,
Richard Hanson
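
On a modern Python 3, the same experiment might look like this (a rough
equivalent sketched in editing: time.clock is gone, and precision is
usually set on the global context):

from time import perf_counter
from decimal import Decimal, getcontext

getcontext().prec = 500
a = Decimal('18974018374087403187404701740918.7481704084710473048017483047104')

t = perf_counter()
b = a.sqrt()
print("Time:", perf_counter() - t)
print("b =", b)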
 
