Python Decimal Library dmath.py v0.3 Released


Mark H. Harris

hi folks,

Python Decimal Library dmath.py v0.3 Released

https://code.google.com/p/pythondecimallibrary/

This code provides the C-accelerated decimal module with
scientific/transcendental functions for arbitrary precision.
I have also included pilib.py, a library of historic
algorithms for generating pi, which uses dmath.py.
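
For readers new to decimal, the kind of arbitrary-precision work this targets
looks roughly like the following with the stdlib module alone (none of the
library's own function names are assumed here):

from decimal import Decimal, getcontext

getcontext().prec = 50      # 50 significant digits
print(Decimal(1).exp())     # e to 50 digits
print(Decimal(2).ln())      # natural log of 2 to 50 digits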

I wish to thank Oscar, Wolfgang, Steven, Chris (and others)
for the help you gave me understanding decimal, particularly
format, rounding, and context managers.

Everything is documented well (including issues) on the code
site, and things are working better and are certainly cleaner.
No doubt there are still bugs, but it's getting closer.

Thanks again.

marcus
 

Mark H. Harris

hi folks,

Terry, I posted this mod as an idea on python-ideas, as you suggested.
Also, I made the additional suggestion that decimal floating point be
considered as the primary floating point type for python4.x, with an
optional binary floating type (2.345b) if the user might like that.

As Steven pointed out last week, we have a fast module now for decimal
floating point; it seems this is a good time to consider this as a serious
idea for future python(s).
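
For context, this is the difference at stake today (stdlib decimal shown; the
2.345b binary-literal syntax above is only the proposal, not something that
runs):

from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float)
print(Decimal('0.1') + Decimal('0.2'))  # Decimal('0.3')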

marcus
 

Mark H. Harris

Python Decimal Library dmathlib.py v0.3 Released

Is this available on PyPI? It seems there already is a "dmath" package
on PyPI that was written by someone else some time ago so you might
need to use a different name:

hi Oscar; thanks again for your help.

Yes, I actually intended to call it dmathlib.py at first, and then had my
brain stuck on dmath, partly because several folks threw that name
around, and partly because I got used to abbreviating it and forgot
to fix things up name-wise.

[ Done ]

Actually, code.google tries to help folks with that, because if the name
is taken you have to create a unique name; which I did, but I forgot to
fix up all the references and headings. Thanks again for keeping me
accountable.

Kind regards,
marcus
 

Mark H. Harris

On 3 March 2014, Oscar Benjamin wrote:
Is this available on PyPI? It seems there already is a "dmath" package
on PyPI that was written by someone else some time ago so you might
need to use a different name:

Oscar, thanks again for your help, and for keeping me accountable. I did
intend to use the name pythondecimallibrary but got
dmath stuck in my mind from the discussions earlier last week. At any
rate, the naming problem is fixed. Thanks again.

Python3.3 Decimal Library v0.3 is Released here:

https://code.google.com/p/pythondecimallibrary/

*pdeclib.py* is the decimal library, and *pilib.py* is the PI library.


marcus
 

Oscar Benjamin

Python3.3 Decimal Library v0.3 is Released here:

https://code.google.com/p/pythondecimallibrary/

*pdeclib.py* is the decimal library, and *pilib.py* is the PI library.

Is it on PyPI though? I was referring to a PyPI name so that people
could install it with "pip install pdeclib" (or whatever you called
it). That's how open source Python projects are usually distributed.
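
For reference, a minimal sketch of how a two-module project like this could be
packaged (the metadata values below are placeholders, not taken from the actual
project):

# setup.py -- minimal packaging sketch; name/version/description are assumptions
from distutils.core import setup

setup(
    name='pdeclib',
    version='0.3',
    description='Decimal scientific/transcendental functions plus a pi library',
    url='https://code.google.com/p/pythondecimallibrary/',
    py_modules=['pdeclib', 'pilib'],
)

A source distribution built with "python setup.py sdist" can then be registered
and uploaded to PyPI, after which "pip install pdeclib" works.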


Oscar
 

Mark H. Harris

Is it on PyPI though? I was referring to a PyPI name so that people
could install it with "pip install pdeclib"
Oscar

hi Oscar, I'm sorry, I completely missed the point of your question. No, it's not on PyPI, but I don't mind putting it there. Are there special instructions, or is it fairly straightforward?

marcus
 

Wolfgang Maier

On Monday, 3 March 2014 12:34:30 UTC+1, Mark H. Harris wrote:

hi folks,

Python Decimal Library dmath.py v0.3 Released

https://code.google.com/p/pythondecimallibrary/

This code provides the C-accelerated decimal module with
scientific/transcendental functions for arbitrary precision.
I have also included pilib.py, a library of historic
algorithms for generating pi, which uses dmath.py.

I wish to thank Oscar, Wolfgang, Steven, Chris (and others)
for the help you gave me understanding decimal, particularly
format, rounding, and context managers.

Hi Marcus and thanks for the acknowledgement.
Here's one more suggestion for your code.
Your current implementation of fact() for calculating factorials has nothing to offer that isn't provided by math.factorial. Since this is all about Decimal calculations, shouldn't you modify it to something like:

import math
from decimal import Decimal

def fact(x):
    """ fact(x) factorial {x} int x > 0

    (x must be integral)
    """
    return +Decimal(math.factorial(x))

to make it return a Decimal rounded to context precision?
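
A quick check of the behaviour being suggested (precision value chosen only for
illustration):

import math
from decimal import Decimal, getcontext

getcontext().prec = 6
exact = math.factorial(20)   # 2432902008176640000, an exact int
print(Decimal(exact))        # construction is exact, regardless of precision
print(+Decimal(exact))       # unary plus rounds to context: Decimal('2.43290E+18')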
 

Mark H. Harris

def fact(x):
    """ fact(x) factorial {x} int x > 0 """
    return +Decimal(math.factorial(x))

to make it return a Decimal rounded to context precision?

hi Wolfgang, I'm not sure. We're doing some things with very large factorials where (existentially) we want to know 1) how many zeros are coming up at the end of the very large number (thousands of digits) and 2) what are the last significant figures (say twenty of them) that come just before the zero chain.
What I don't want is for the Decimal module to overflow (I didn't know it would do that), and we don't want the number rounded into scientific notation at some number of places. We want to actually see the digits; all of them.
Python will do multiplications til the proverbial cows come home; well, as long as you don't try it recursively --- killing the recursive depth. Decimal has some limits internally which I still do not understand (and I have been looking at the doc and playing with it for hours). If I want to build a BIGNUM int in memory, only the memory should limit what can be built, not some arbitrary limit inside Decimal.
Does any of this make sense? And can you help me understand the overflow in Decimal a little bit better? I know you're a busy guy; maybe you just know a link.
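
For the use case described above, plain ints side-step the Decimal context
limits entirely; a sketch (the function names here are made up for
illustration):

import math

def trailing_zeros_of_factorial(n):
    """Length of the trailing zero chain of n! (count the factors of 5)."""
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

def digits_before_zero_chain(n, k=20):
    """The last k significant digits of n! just before the trailing zeros."""
    return str(math.factorial(n)).rstrip('0')[-k:]

print(trailing_zeros_of_factorial(1000))   # 249
print(digits_before_zero_chain(1000))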

Thanks much.
 

Mark H. Harris

Wolfgang, the answer is not so much; in fact, not at all.
But it is an interesting question for me; where I am
continuing to learn the limits of Decimal, and the
decimal context. I don't need rounding for integer
multiplication, of course.

I am interested in arbitrary limits, like emax, for instance.
The doc is a little ambiguous. Is emax the max exponent,
and if so, is 999999999 the limit, or is that the default
context value which might be bumped up?
If so, why have a limit on the emin & emax values?
I'm playing with it. Shouldn't a Decimal value
be able to continue to grow to the limit of memory if we
wanted to be silly about it? According to the doc, 'clamping'
occurs if the exponent falls outside the range of emin &
emax while the significant digits are allowed to grow and grow;
yet the doc also states that overflow occurs if we blow past
the emax exponent value. So what is the difference between
overflow and clamping? Am I able to set emin & emax
arbitrarily high or low?

I have discovered just by playing with integer multiplication
that those BIGNUMS don't seem to have a physical limit. Of
course there isn't a decimal to keep track of, and they can
just grow and grow; wouldn't want to make a Decimal from
one of those, other than it is interesting to me as I'm trying
to understand Decimal floating point.

marcus
 

Mark H. Harris

On Monday, March 3, 2014 3:18:37 PM UTC-6, Mark H. Harris wrote:

Yeah, you can set Emin & Emax enormously large (or small), can set
off overflow, and set clamping.

I am needing a small utility (tk?) that will allow the context to be set
manually by the interactive user, dynamically (for a particular problem).
It's like, one context doesn't fit all. Some of the pdeclib funcs will need
to take into account the various signals also.

The context manager is fabulous, but not for the average user; all the
try-block stuff is encapsulated (which makes the coding clean), but it
is not obvious in any way what is happening with __init__(), __enter__()
and __exit__() (although I did find a couple of good articles on the
subject). The 'with localcontext(ctx=None) as ctx' is genius, but not
for the average user who just wants to get work done with python.
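
A minimal sketch of the pattern being described: localcontext() hands back a
copy of the current context, and the implicit __exit__() restores the old one
when the block ends (precision value chosen only for illustration).

from decimal import Decimal, localcontext

with localcontext() as ctx:     # ctx is a copy of the current context
    ctx.prec = 50               # the change applies only inside the with-block
    root2 = Decimal(2).sqrt()   # computed with 50 significant digits
print(root2)                    # the previous context is back in force here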

So, the bottom line is we have this fabulous speedy decimal module
that is nothing short of wonderful for experts, and completely out
of the grasp of average users relatively new to python or with
limited experience with decimal floating point arithmetic. "They would
like to sing soft and sweet, like the cucumber, but they can't!"

So, we need a complete wrapper around the decimal module (or better
yet we need to make decimal floating point default) so that average
users may take advantage of precise floating point math without
having to be experts on decimal floating point arithmetic constraints
in the new high speed module. :-}

marcus
 

Wolfgang Maier

Mark H. Harris said:
hi Wolfgang, I'm not sure. We're doing some things with very large factorials where (existentially) we want to know 1) how many zeros are coming up at the end of the very large number (thousands of digits) and 2) what are the last significant figures (say twenty of them) that are just before the zero chain.

That's ok, but I do not understand
- why you shouldn't be able to use math.factorial for this purpose and
- why a factorial function accepting and returning ints should be part of your dmath package.

math.factorial is accurate and faster than your pure-Python function, especially for large numbers. Compare:
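
(Illustrative timing setup only; the naive loop below stands in for a
pure-Python factorial, and the actual numbers depend on the machine.)

import math
import timeit

def naive_fact(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

n = 10000
print(timeit.timeit(lambda: math.factorial(n), number=10))
print(timeit.timeit(lambda: naive_fact(n), number=10))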
What I don't want is for the Decimal module to overflow (didn't know it would do that), and we don't want the number rounded in scientific notation at some number of places. We want to actually see the digits; all of them.

Well, that may be your use-case, but then math.factorial is for you.
On the other hand, you may be interested in getting context-rounded factorials, and rounding to context precision is what you'd expect from a Decimal function; so, logically, that's how it should be implemented in your package if you think it needs to have a fact function (which I'm not sure of).
Python will do multiplications til the proverbial cows come home; well, as long as you don't try it recursively --- killing the recursive depth.

While that's true, you pay a heavy price for abusing this feature in terms of performance, because with VERY large integers there will be just too much memory shuffling activity. (I haven't looked into how math.factorial handles this internally, but it's certainly performing much better for large numbers than Python integer multiplication.)

Best,
Wolfgang
 

Mark H. Harris

Well, that may be your use-case, but then math.factorial is for you.
On the other hand, you may be interested in getting
context-rounded factorials and rounding to context
precision is what you'd expect from a Decimal function,
so, logically, that's how it should be implemented in your
package if you think it needs to have a fact function (which I'm not sure of).


hi Wolfgang, you are absolutely right. I see what you're getting at
now. I've stuffed something into the library that really does not
belong in that library; it was just convenient for me. I get that.

marcus
 

Wolfgang Maier

On Monday, March 3, 2014 2:03:19 PM UTC-6, Mark H. Harris wrote:

Wolfgang, answer is not so much, in fact, not at all.
But it is an interesting question for me; where I am
continuing to learn the limits of Decimal, and the
decimal context. I don't need rounding for integer
multiplication, of course.

You don't want it and you don't get it for integer multiplication, but you may get it with Decimal multiplication and not high-enough precision:
>>> from decimal import Decimal, getcontext
>>> ctx = getcontext()
>>> ctx.prec = 2
>>> Decimal(11) * Decimal(11)
Decimal('1.2E+2')

This is the very nature of rounding to context precision, and functions dealing with Decimals shouldn't behave differently in my opinion. If you don't want rounding, either use sufficiently high precision or use integers.
I am interested in arbitrary limits, like emax, for instance.
The doc is a little ambiguous. Is emax the max exponent,
and if so, is 999999999 the limit, or is that the default
context value which might be bumped up?

I don't find much ambiguity in the docs here:

" class decimal.Context(prec=None, rounding=None, Emin=None, Emax=None, capitals=None, clamp=None, flags=None, traps=None)

Creates a new context. If a field is not specified or is None, the default values are copied from the DefaultContext. If the flags field is not specified or is None, all flags are cleared.
...
The Emin and Emax fields are integers specifying the outer limits allowable for exponents. Emin must be in the range [MIN_EMIN, 0], Emax in the range [0, MAX_EMAX]."

So, Emax is the maximal exponent allowed in a specific context and the constant MAX_EMAX is the maximum Emax that you can set in any context.
Also (on my system):

>>> from decimal import getcontext
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])

shows that my default context Emax is way smaller than MAX_EMAX.
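
A small sketch of the relationship, and of what the Overflow signal looks like
(stdlib decimal only; the exact MAX_EMAX value is platform dependent):

from decimal import Decimal, getcontext, Overflow, MAX_EMAX

ctx = getcontext()
print(MAX_EMAX)                # hard upper bound for any context's Emax

ctx.Emax = 999999              # the per-context limit shown above
try:
    Decimal(10) ** 1000000     # adjusted exponent 1000000 > Emax
except Overflow:
    print("Overflow signal raised")

ctx.Emax = MAX_EMAX            # a context can be widened up to the module limit
print(Decimal(10) ** 1000000)  # now fine: Decimal('1E+1000000')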

I have discovered just by playing with integer multiplication
that those BIGNUMS don't seem to have a physical limit. Of
course there isn't a decimal to keep track of, and they can
just grow and grow; wouldn't want to make a Decimal from
one of those, other than it is interesting to me as I'm trying
to understand Decimal floating point.

decimal can handle BIGNUMS fairly well, you just need to increase the context Emax. Have you ever tried to calculate stuff with ints as big as 10**MAX_EMAX (i.e., 10**999999999999999999), or even close to it, and still had a responsive system?
My point in suggesting the fix for your epx function was that the unnecessary juggling of extremely large values (Decimal or int) is a killer for performance.
 

Mark H. Harris

decimal can handle BIGNUMS fairly well, you just need to increase the context Emax. Have you ever tried to calculate stuff with ints as big as 10**MAX_EMAX (i.e., 10**999999999999999999), or even close to it, and still had a responsive system?

hi Wolfgang, yes correct you are... I've been playing with it; very
interesting. I think I've almost got my arms around these things.
If I don't get it right in my head, then my library is gonna suck. Still
looking, but I think I may have one or two more funcs that have a
similar problem to epx(). I've decided to go through them one by
one while I'm testing and make sure that they are as efficient as I can
make them. Thanks again for your inputs.

marcus
 

Stefan Krah

[I found this via the python-ideas thread]

Wolfgang Maier said:
math.factorial is accurate and faster than your pure-Python function,
especially for large numbers.

For huge numbers it is actually slower than decimal if you use this
Python function:

http://www.bytereef.org/mpdecimal/quickstart.html#factorial-in-pure-python


Be sure to set MAX_EMAX and MIN_EMIN, that's missing in the example.
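
Not a verbatim copy of the linked example, but a sketch in the same spirit
(binary splitting, with the context limits widened as suggested):

from decimal import Decimal, getcontext, MAX_PREC, MAX_EMAX, MIN_EMIN

def dfactorial(n, m=0):
    """Binary-splitting product of the integers in (m, n]."""
    if n > m + 1:
        mid = (n + m) // 2
        return dfactorial(mid, m) * dfactorial(n, mid)
    return Decimal(n) if n > m else Decimal(1)

ctx = getcontext()
ctx.prec = MAX_PREC     # effectively exact for anything that fits in memory
ctx.Emax = MAX_EMAX
ctx.Emin = MIN_EMIN

print(dfactorial(1000)) # 1000! as an exact Decimal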


If you want to *see* all digits of a very large number, then decimal is
probably even faster than gmpy. See:

http://www.bytereef.org/mpdecimal/benchmarks.html#arbitrary-precision-libraries



Stefan Krah
 

Mark H. Harris

Be sure to set MAX_EMAX and MIN_EMIN, that's missing in the example.

hi Stefan!

Good to hear from you, sir. My hat is off to you for sure,
and thank you very kindly for your good work on the
decimal.Decimal code; ~very nice! {hands clapping}

Thanks for the input above, will read.

Stay in touch. I have an expansion of my decimal idea for
python (stay tuned on python-ideas) which I think will be
for the better, if we can win wide adoption; it could be
an uphill climb, but who knows.

It's been good to meet you in the code, and now a true
pleasure to meet you on-line as well. I hope to see you sometime
at a python convention.

Best regards,

Mark H Harris
 

Mark H. Harris


Greetings, just a minor update here for version testing and portability.
I have tested pdeclib.py | pilib.py on the following Python versions:

Py2.5 [ not supported, problems with "with" and localcontext ]
Py2.6.1 [ runs correctly as written ]
Py2.7.2 [ runs correctly as written ]
Py2.7.6 [ runs correctly as written ]
Py3.2.0 [ runs correctly as written ]
Py3.3.4 [ runs very fast ]

Py2.7.2 was interesting for me to verify, because that is the version
currently supported by the QPython people for the Android platform.
I now have the pdeclib module loaded on my phone (Samsung Galaxy SII,
Android 4.1.2, QPython 0.9.7.2 / Py2.7.2) and running quite well;
although not as speedy as 3.3, it imports & runs without errors so far.

If you put pdeclib on your Android tablet|phone, you may run the script
from your personal directory, of course, or you may place it in the
python2.7 lib/site-packages folder so that it is on the PYTHONPATH
regardless of which directory you start python2.7.2 from.
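
A quick way to confirm the placement worked (assuming pdeclib.py really was
copied into site-packages), from any directory:

import pdeclib
print(pdeclib.__file__)   # should point into .../python2.7/.../site-packages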

marcus
 
