numpy magic: automatically cast scalar returns to Python types float & int?

robert

While turning algorithms written for the old NumPy modules into numpy code, I suffer from this: upon further processing of the returns of numpy calculations, lots of data in an app's object tree becomes elementary numpy types.
First there is some inefficiency in the calculations. And then you get data inflation and questionable dependencies - e.g. with pickle, ZODB, MPI ...:

<type 'numpy.int32'>


To avoid this you'd need a type cast in the Python code everywhere you get a scalar from numpy into a Python variable - an error-prone task. Or you'd have to check/re-render your whole object tree.
Wouldn't it be much better if numpy returned Python scalars for float64 (maybe even for float32) and int32, int64, ... where possible, as numarray and Numeric did?
I suppose numpy knows internally very quickly how to cast.
Or is there maybe a config setting to turn numpy this way?
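
To make it concrete, here is a minimal illustration (the dict and the names are just made up for demonstration):

import numpy

# A numpy reduction returns a numpy scalar, not a Python float:
total = numpy.array([1.5, 2.5]).sum()
print(type(total))              # numpy.float64, not float

# It silently spreads through ordinary Python data structures:
record = {"total": total, "count": numpy.array([1, 2, 3]).max()}
print(type(record["count"]))    # a numpy integer scalar, not int

# The only remedy is an explicit cast at every boundary:
record = {"total": float(total), "count": int(record["count"])}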

Robert
 
Tim Hochberg

robert said:
While turning algorithms written for the old NumPy modules into numpy code, I suffer from this: upon further processing of the returns of numpy calculations, lots of data in an app's object tree becomes elementary numpy types.
First there is some inefficiency in the calculations. And then you get data inflation and questionable dependencies - e.g. with pickle, ZODB, MPI ...:


<type 'numpy.int32'>


To avoid this you'd need a type cast in the Python code everywhere you get a scalar from numpy into a Python variable - an error-prone task. Or you'd have to check/re-render your whole object tree.
Wouldn't it be much better if numpy returned Python scalars for float64 (maybe even for float32) and int32, int64, ... where possible, as numarray and Numeric did?
I suppose numpy knows internally very quickly how to cast.

The short answer is no, it would not be better. There are some trade-offs
involved here, but overall, always returning numpy scalars is a
significant improvement over returning Python scalars some of the time.
Which is why numpy does it that way now; it was a conscious choice, it
didn't just happen. Please search the archives of numpy-discussion for
previous discussions of this, and if that is not enlightening enough,
please ask on the numpy-discussion list (the address of which just
changed and I don't have it handy, but I'm sure you can find it).

For your particular issue, you might try tweaking pickle to convert
int64 objects to int objects - assuming, of course, that you have enough of
these to matter; otherwise, I suggest just leaving things alone.
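
Something along these lines, for instance - an untested sketch using the copyreg
mechanism (copy_reg on Python 2); adjust the type list to the scalars you
actually produce:

import copyreg   # 'copy_reg' on Python 2
import pickle
import numpy

def _reduce_to_int(x):
    # Store any numpy integer scalar as a plain Python int.
    return (int, (int(x),))

def _reduce_to_float(x):
    # Store a numpy float scalar as a plain Python float.
    return (float, (float(x),))

# After registration, the pickles contain no reference to numpy at all,
# so they can be loaded by apps that don't have numpy installed.
for t in (numpy.int32, numpy.int64):
    copyreg.pickle(t, _reduce_to_int)
copyreg.pickle(numpy.float64, _reduce_to_float)

data = {"count": numpy.int64(7), "mean": numpy.float64(2.5)}
restored = pickle.loads(pickle.dumps(data))
print(type(restored["count"]), type(restored["mean"]))  # int, float

Note that copyreg registration is process-wide, and the values come back as
plain int/float - the numpy type is deliberately lost.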

-tim
 
robert

Tim said:
The short answer is no, it would not be better. There are some trade-offs
involved here, but overall, always returning numpy scalars is a
significant improvement over returning Python scalars some of the time.
Which is why numpy does it that way now; it was a conscious choice, it
didn't just happen. Please search the archives of numpy-discussion for
previous discussions of this, and if that is not enlightening enough,
please ask on the numpy-discussion list (the address of which just
changed and I don't have it handy, but I'm sure you can find it).

I didn't find the relevant reasoning in the time I had. My guess is that the reasoning is isolated-module-centric.
All further computations in Python are much slower, and I cannot even see a speed increase when (in the rare case) putting a numpy scalar back into a numpy array:
>>> a = array([1., 0, 0, 0, 0])
>>> f = 1.0
>>> fn = a[0]
>>> type(fn)
<type 'numpy.float64'>
>>> timeit.Timer("f+f", "from __main__ import f").timeit(10000)
0.0048265910890909324
>>> timeit.Timer("f+f", "from __main__ import f").timeit(100000)
0.045992158221226376
>>> timeit.Timer("fn+fn", "from __main__ import fn").timeit(100000)
0.14901307289054877
>>> timeit.Timer("a[1]=f", "from __main__ import a, f").timeit(100000)
0.060825607723899111
>>> timeit.Timer("a[1]=fn", "from __main__ import a, fn").timeit(100000)
0.059519575812004177
>>> timeit.Timer("x=a[0]", "from __main__ import a").timeit(100000)
0.12302317752676117
>>> timeit.Timer("x=float(a[0])", "from __main__ import a").timeit(100000)
0.31556273213496411

The creation of numpy scalar objects doesn't seem to be cheap/advantageous anyway:

>>> oa = array([1.0, 1.0, 1.0, 1.0, 1], numpy.object)
>>> oa
array([1.0, 1.0, 1.0, 1.0, 1], dtype=object)
>>> timeit.Timer("x=a[0]", "from __main__ import a").timeit(100000)
0.12025438987348025
>>> timeit.Timer("x=oa[0]", "from __main__ import oa").timeit(100000)
0.050609225474090636
>>> timeit.Timer("a+a", "from __main__ import a").timeit(100000)
1.3081539692893784
>>> timeit.Timer("oa+oa", "from __main__ import oa").timeit(100000)
1.5201345422392478
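
For anyone re-running this, here is a self-contained script form of the same
measurement - a sketch that assumes timeit's globals parameter (Python 3.5+);
the absolute numbers will of course differ by machine:

import timeit
import numpy as np

a = np.array([1., 0, 0, 0, 0])
f = 1.0        # plain Python float
fn = a[0]      # numpy.float64 scalar

for stmt in ("f + f", "fn + fn", "a[1] = f", "a[1] = fn",
             "x = a[0]", "x = float(a[0])"):
    t = timeit.timeit(stmt, globals=globals(), number=100000)
    print("%-18s %.4f s" % (stmt, t))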

For your particular issue, you might try tweaking pickle to convert
int64 objects to int objects - assuming, of course, that you have enough of
these to matter; otherwise, I suggest just leaving things alone.

(I haven't had int64s so far, so I don't know how they relate to Python longs.)

The main problem is with the hundreds of everyday, normal float variables (now numpy.float64) and int variables (now numpy.int32): speed issues, memory consumption ...
And a pickled object tree cannot be read by an app which does not have numpy available, and the pickles are very big.
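
The size difference is easy to demonstrate - an illustrative snippet; exact byte counts depend on the Python and numpy versions and the pickle protocol:

import pickle
import numpy as np

plain = pickle.dumps(1.0)
scalar = pickle.dumps(np.float64(1.0))
print(len(plain), len(scalar))   # the numpy scalar pickle is several times larger
print(b"numpy" in scalar)        # True: the pickle references numpy itself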

I still really wonder how all these observations, and the things I can imagine so far, can sum up to an overall advantage for letting numpy.float64 & numpy.int32 scalars out by default - and possibly also numpy.float32, which has some importance in practice.
Letting out nan and inf objects and offering an explicit type cast is of course OK.

Robert
 
Robert Kern

robert said:
I didn't find the relevant reasoning in the time I had. My guess is that the reasoning is isolated-module-centric.

I gave you a brief rundown on this list already.

http://mail.python.org/pipermail/python-list/2006-October/411145.html

And I'll note again that a fuller discussion is given in Chapter 2 of the _Guide
to NumPy_.

http://numpy.scipy.org/numpybooksample.pdf

And yet again, the best place for numpy questions is the numpy mailing list, not
here. Here, you will get maybe one or two people responding to you, usually me,
and I'm a cranky SOB. There you will get much nicer people answering your
questions and more fully.

http://www.scipy.org/Mailing_Lists

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
robert

Robert said:

I think I took this into account already for this discussion:
>>> array([1., .0, 4, 4,], float32)
array([ 1.,  0.,  4.,  4.], dtype=float32)
>>> a = _
>>> a + 3
array([ 4.,  3.,  7.,  7.], dtype=float32)
>>> a + 3.0
array([ 4.,  3.,  7.,  7.], dtype=float32)
>>> 3 + a
array([ 4.,  3.,  7.,  7.], dtype=float32)
>>> 3.0 + a
array([ 4.,  3.,  7.,  7.], dtype=float32)
>>> 3.0 * a
array([  3.,   0.,  12.,  12.], dtype=float32)


So numpy anyway does not force the precision upwards for float32 x Python float (which is good).

There remains the argument that the (float64, int32) scalars coming out should - by default - support the array interface.
How many people actually expect and use this? I would never have noticed it if it hadn't been mentioned here. I have never seen such code anywhere, nor felt such a requirement myself. That's very special, and maybe it could be turned on by a global option - if there is more than a handful of users for it.
I still think this is over-design, and that it brings many more disadvantages than advantages to let these beasts out by default into a general-purpose language like Python. The target area for numpy output is much bigger than that of, e.g., a Matlab script someone uses in a rush to create a paper. Maybe for users who want to make an alt-Matlab-only box out of Python there could be a global switch somewhere, or enforcing versions of the data types ...

Seeing the speed impact and the pickle problems now everywhere in post-computations on numpy output, it's a critical decision to migrate code to numpy. Almost a killer. Even if I spread float() casts everywhere, this costs a lot of speed, makes the code ugly, etc. - and it's a leaky sieve.

I think I'll stay on as a voice voting heavily against this scheme of numpy scalar types: 11 on a scale from 0 to 10 :)

And yet again, the best place for numpy questions is the numpy mailing list, not
here. Here, you will get maybe one or two people responding to you, usually me,
and I'm a cranky SOB. There you will get much nicer people answering your
questions and more fully.

http://www.scipy.org/Mailing_Lists

Maybe once I take the hurdle I'll use it. Access to and searching of such lists is somewhat proprietary. Numerics is a major field in Python land, and there are lots of cross-relations to other libs and techniques. Maybe there could be an NNTP-comfortable comp.lang.python.numeric for users - and also a comp.lang.python.net and comp.lang.python.ui. I think that would greatly strengthen Python's "marketing" in the numerics domain. Main c.l.p's posting frequency is too high these days anyway.


Robert
 
Robert Kern

robert said:
There remains the argument that the (float64, int32) scalars coming out should - by default - support the array interface.
How many people actually expect and use this? I would never have noticed it if it hadn't been mentioned here. I have never seen such code anywhere, nor felt such a requirement myself. That's very special, and maybe it could be turned on by a global option - if there is more than a handful of users for it.

It derived from our experience building scipy. Writing a library of functions
that work on scalars, vectors and higher-dimensional arrays requires either a
certain amount of generic behavior in its types or a lot of hairy code. We went
for the former. "Global options" affecting the behavior of types don't fit very
well in a library.
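
For example (a toy illustration, not actual scipy code):

import numpy as np

def rms(x):
    # One implementation serves scalars, vectors and matrices alike,
    # because numpy scalars expose the same interface as arrays
    # (.mean(), .ndim, .dtype, ...). A plain Python float would not.
    return np.sqrt((x * x).mean())

a = np.arange(6.0).reshape(2, 3)
print(rms(a))        # whole matrix
print(rms(a[0]))     # one row vector
print(rms(a[0, 0]))  # a single numpy.float64 scalar - works unchanged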
robert said:
I still think this is over-design, and that it brings many more disadvantages than advantages to let these beasts out by default into a general-purpose language like Python.

It's a judgement call. You judged differently than we have.

robert said:
I think I'll stay on as a voice voting heavily against this scheme of numpy scalar types: 11 on a scale from 0 to 10 :)

Vote all you like; no one's taking a poll at this time.

robert said:
Maybe once I take the hurdle I'll use it. Access to and searching of such lists is somewhat proprietary. Numerics is a major field in Python land, and there are lots of cross-relations to other libs and techniques. Maybe there could be an NNTP-comfortable comp.lang.python.numeric for users - and also a comp.lang.python.net and comp.lang.python.ui. I think that would greatly strengthen Python's "marketing" in the numerics domain. Main c.l.p's posting frequency is too high these days anyway.

When you have to put "numpy" in your subject lines because you're asking
questions about how and why numpy does this one particular thing, it's time to
think about posting to the appropriate list. If you need NNTP, use GMane.

http://dir.gmane.org/gmane.comp.python.numeric.general

Here's the root of the problem: many of the people you want to talk to aren't
here. They don't read comp.lang.python; mostly it's just me, and I'm getting
crankier by the character.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
 
