Python from Wise Guy's Viewpoint


mike420

THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

420

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching
 

Jarek Zgoda

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

Yes. By deleting a name from the namespace. You'd better read some tutorial,
this will save you some time.
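For instance (a minimal sketch, in modern Python syntax):

```python
def foo():
    return 42

assert foo() == 42   # the name 'foo' is bound in the current namespace

del foo              # deleting the name "undefines" the function

try:
    foo()
    assert False, "unreachable"
except NameError:
    pass             # the name is gone; calling it now raises NameError
```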
 

Peter Hansen

Frode said:
Excuse my ignorance wrt. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)

-Peter
 

Peter Hansen

Warning! Troll alert! I missed the three newsgroup cross-post
the first time, so I thought this might be a semi-serious question.

-Peter
 

Frode Vatvedt Fjeld

Jarek Zgoda said:
Yes. By deleting a name from the namespace. You'd better read some
tutorial, this will save you some time.

Excuse my ignorance wrt. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?
 

John Thingstad

Excuse my ignorance wrt. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?
Neither is completely correct. Functions are internally dealt with using
dictionaries (Pythonese for hash-tables). The bytecode compiler gives the
function an ID, and the lookup is done using a dictionary. Removing the
function from the dictionary removes the function.
 

Frode Vatvedt Fjeld

Peter Hansen said:
Both are correct, in essence. (And depending on how one interprets
your second point, which is quite ambiguous.)

But this implies a rather enormous overhead in calling a function,
doesn't it?

What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1. really is true, I'd expect to get an
"undefined name" exception or some such.
 

Peter Hansen

(I'm replying only because I made the mistake of replying to a
triply-crossposted thread which was, in light of that, obviously
troll-bait. I don't plan to continue the thread except to respond
to Frode's questions. Apologies for c.l.p readers.)
But this implies a rather enormous overhead in calling a function,
doesn't it?

"Enormous" is of course relative. Yes, the overhead is more than in,
say C, but I think it's obvious (since people program useful software
using Python) that the overhead is not unacceptably high?

As John Thingstad wrote in his reply, there is a dictionary lookup
involved and dictionaries are extremely fast (yes, yet another relative
term... imagine that!) in Python so that part of the overhead is
relatively unimportant. There is actually other overhead involved
(e.g. setting up the stack frame, which is, I believe, much more costly
than the trivial dictionary lookup).

Note also that if you have a reference to the original function in,
say, a local variable, removing the original doesn't really remove it,
but merely makes it unavailable under the original name. The local variable
can still be used to call it.
What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1. really is true, I'd expect to get an
"undefined name" exception or some such.

See below.

Python 2.3.1 (#47, Sep 23 2003, 23:47:32) [MSC v.1200 32 bit (Intel)] on win32
>>> def foo():
...     print 'in foo'
...
>>> def bar():
...     foo()
...
>>> del foo
>>> bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in bar
NameError: global name 'foo' is not defined

On the other hand, as I said above, one can keep a reference to the original.
If I'd done "baz = foo" just before the "del foo", then I could easily have
done "baz()" and the original method would still have been called.

Python is dynamic. Almost everything is looked up in dictionaries at
runtime like this. That's its nature, and much of its power (as with
the many other such languages).
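In fact the module namespace is literally a dictionary you can poke at (a quick sketch, in modern Python syntax):

```python
def foo():
    return "original"

# The module namespace really is a dict; the name 'foo' is just a key in it:
assert globals()["foo"] is foo

# Rebinding the key rebinds the name -- the next lookup finds the new object:
globals()["foo"] = lambda: "patched"
assert foo() == "patched"
```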

-Peter
 

Jarek Zgoda

Peter Hansen said:
Warning! Troll alert! I missed the three newsgroup cross-post
the first time, so I thought this might be a semi-serious question.

That's why I set FUT to this group.
 

Frode Vatvedt Fjeld

John Thingstad said:
[..] Functions are internally dealt with using dictionaries (Pythonese
for hash-tables). The bytecode compiler gives the function an ID, and
the lookup is done using a dictionary. Removing the function from the
dictionary removes the function.

So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?
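One can actually watch what the compiler emits for a call site, using the standard `dis` module (modern Python syntax shown):

```python
import dis

def bar():
    foo()   # 'foo' does not even have to exist when bar is compiled

# The call is compiled to a LOAD_GLOBAL of the *name* 'foo'; the
# namespace lookup happens each time bar() runs, not at compile time.
opnames = [ins.opname for ins in dis.get_instructions(bar)]
print(opnames)
assert "LOAD_GLOBAL" in opnames
```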
 

Joachim Durchholz

Oh, you're trolling for an inter-language flame fest...
well, anyway:
3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being correct).
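The failure mode can be sketched with a toy dict-based multimethod (hypothetical code, not any particular Lisp's dispatch machinery):

```python
# A toy binary multimethod: dispatch on the types of both arguments.
_impls = {}

def defmethod(t1, t2, fn):
    _impls[(t1, t2)] = fn

def combine(a, b):
    try:
        impl = _impls[(type(a), type(b))]
    except KeyError:
        raise TypeError("no method for %r" % ((type(a), type(b)),))
    return impl(a, b)

# Developer A extends the int axis, developer B the str axis, independently:
defmethod(int, int, lambda a, b: a + b)
defmethod(str, str, lambda a, b: a + b)

assert combine(1, 2) == 3
assert combine("a", "b") == "ab"

# ...but neither developer defined the (int, str) combination:
try:
    combine(1, "b")
    assert False, "unreachable"
except TypeError:
    pass
```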

Regards,
Jo
 

Paul Rubin

Frode Vatvedt Fjeld said:
[..] Functions are internally dealt with using dictionaries (Pythonese
for hash-tables). The bytecode compiler gives the function an ID, and
the lookup is done using a dictionary. Removing the function from the
dictionary removes the function.

So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

Hah, you wish. If the function name is global, there is a dictionary
lookup, at runtime, on every call.

def square(x):
    return x*x

def sum_of_squares(n):
    sum = 0
    for i in range(n):
        sum += square(i)
    return sum

print sum_of_squares(100)

looks up "square" in the dictionary 100 times. An optimization:

def sum_of_squares(n):
    sum = 0
    sq = square
    for i in range(n):
        sum += sq(i)
    return sum

Here, "sq" is a local copy of "square". It lives in a stack slot in
the function frame, so the dictionary lookup is avoided.
 

Marcin 'Qrczak' Kowalczyk

The longer answer: Multimethods have modularity issues (if whatever domain
they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations;

This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?
 

Alex Martelli

Frode Vatvedt Fjeld wrote:
...
Excuse my ignorance wrt. Python, but to me this seems to imply that
one of these statements about functions in Python is true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

Both, depending on how you define "existing call". A "call" that IS
in fact existing, that is, pending on the stack, will NOT in any way
be "affected"; e.g.:

def foo():
    print 'foo, before'
    remove_foo()
    print 'foo, after'

def remove_foo():
    global foo   # needed so that 'del foo' unbinds the module-level name
    print 'rmf, before'
    del foo
    print 'rmf, after'

the EXISTING call to foo() will NOT be "affected" by the "del foo" that
happens right in the middle of it, since there is no further attempt to
look up the name "foo" in the rest of that call's progress.

But any _further_ lookup is indeed affected, since the name just isn't
bound to the function object any more. Note that other references to
the function object may have been stashed away in many other places (by
other names, in a list, in a dict, ...), so it may still be quite
possible to call that function object -- just not to look up its name
in the scope where it was earlier defined, once it has been undefined.
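Concretely (a small sketch, in modern Python syntax):

```python
def foo():
    return "original"

stash = {"backup": foo}   # a second reference, tucked away in a dict
baz = foo                 # and a third, under another name

del foo                   # the name 'foo' is now unbound in this scope...

# ...but the function object itself is alive and perfectly callable:
assert baz() == "original"
assert stash["backup"]() == "original"
```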

As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet accurately.
So, on my 30-months-old creaky main box (I keep mentioning its venerable
age in the hope Santa will notice...:)...:

[alex@lancelot ext]$ timeit.py -c -s'def foo():pass' 'foo'
10000000 loops, best of 3: 0.143 usec per loop
[alex@lancelot ext]$ timeit.py -c -s'def foo():return' 'foo()'
1000000 loops, best of 3: 0.54 usec per loop

So: a name lookup takes about 140 nanoseconds; a name lookup plus a
call of the simplest possible function -- one that just returns at
once -- about 540 nanoseconds. I.e., the call itself plus the
return take about 400 nanoseconds _in the simplest possible case_;
the lookup adds a further 140 nanoseconds, accounting for about 25%
of the overall lookup-call-return pure overhead.

Yes, managing less than 2 million function calls a second, albeit on
an old machine, is NOT good enough for some applications (although,
for many of practical importance, it already is). But the need for speed
is exactly the reason optimizing compilers exist -- for those times
in which you need MANY more millions of function calls per second.
Currently, the best optimizing compiler for Python is Psyco, the
"specializing compiler" by Armin Rigo. Unfortunately, it currently
only supports Intel-386-and-compatible CPUs -- so I can use it on my
old AMD Athlon, but not, e.g., on my tiny Palmtop, whose little CPU is
an "ARM" (Intel-made these days I believe, but not 386-compatible)
[ for plans by Armin, and many others of us, on how to fix that in the
reasonably near future, see http://codespeak.net/pypy/ ]

Anyway, here's psyco in action on the issue in question:

import time
import psyco

def non_compiled(name):
def foo(): return
start = time.clock()
for x in xrange(10*1000*1000): foo()
stend = time.clock()
print '%s %.2f' % (name, stend-start)

compiled = psyco.proxy(non_compiled)

non_compiled('noncomp')
compiled('psycomp')


Running this on the same good old machine produces:

[alex@lancelot ext]$ python2.3 calfoo.py
noncomp 5.93
psycomp 0.13

The NON-compiled 10 million calls took an average of 593 nanoseconds
per call -- roughly the already-measured 540 nanoseconds for the
call itself, plus about 50 nanoseconds for each leg of the loop's
overhead. But, as you can see, Psyco has no trouble optimizing that
by over 45 times -- to about 80 million function calls per second,
which _is_ good enough for many more applications than the original
less-than-2 million function calls per second was.

Psyco entirely respects Python's semantics, but its speed-ups take
particularly good advantage of the "specialized" cases in which the
possibilities for extremely dynamic behavior are not, in fact, being
used in a given function that's on the bottleneck of your application
(Psyco can also automatically use a profiler to find out about that
bottleneck, if you want -- here, I used the finer-grained approach
of having it compile ["build a compiled proxy for"] just one function
in order to be able to show the speed-ups it was giving).

Oh, BTW, you'll notice I explicitly ran that little test with
python2.3 -- that was to ensure I was using the OLD release of
psyco, 1.0; as my default Python I use the current CVS snapshot,
and on that one I have installed psyco 1.1, which does more
optimizations and in particular _inlines function calls_ under
propitious conditions -- therefore, the fact that running
just "python calfoo.py" would have shown a speed-up of _120_
(rather than just 45) would have been "cheating", a bit, as it's
not measuring any more anything related to name lookup and function
call overhead. That's a common problem with optimizing compilers:
once they get smart enough they may "optimize away" the very
construct whose optimization you were trying to check with a
sufficiently small benchmark. I remember when the whole "SPEC"
suite of benchmarks was made obsolete at a stroke by one advance
in compiler optimization techniques, for example:).

Anyway, if your main interest is in having your applications run
fast, rather than in studying optimization yields on specific
constructs in various circumstances, be sure to get the current
Psyco, 1.1.1, to go with the current Python, 2.3.2 (the pre-alpha
Python 2.4a0 is recommended only to those who want to help with
Python's development, including testing -- throughout at least 2004
you can count on 2.3.something, NOT 2.4, being the production,
_stable_ version of Python, recommended to all).


Alex
 

Lulu of the Lotus-Eaters

|1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
| 90% of the code is function applications. Why not make it convenient?

Haskell is cool. But to do what you want, you need uniform currying of
all function calls (i.e. every call is a call with *exactly one*
argument, often returning a new function). That's not a reasonable
model for Python, for lots of reasons (but you are welcome to use
Haskell, I understand you can download versions of it for free).

|2. Statements vs Expressions business is very dumb. Try writing
| a = if x :
| y
| else: z

Try writing ANYTHING that isn't Python... wow, it doesn't run in the
Python interpreter.

|3. no multimethods (why? Guido did not know Lisp, so he did not know
| about them)

Been there, done that... we got them:

http://gnosis.cx/download/gnosis/magic/multimethods.py

|4. splintering of the language: you have the inefficient main language,
| and you have a different dialect being developed that needs type

I think this might be a reference to Pyrex. It's cool, but it's not a
fork of Python.

|5. Why do you need "def" ? In Haskell, you'd write
| square x = x * x

Again, you are welcome to use Haskell. If you'd like, you can also
write the following in Python:

square = lambda x: x*x

|6. Requiring "return" is also dumb (see #5)

'return' is NOT required in a function. Functions will happily return
None if you don't specify some other value you want returned.
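That is:

```python
def no_return():
    x = 1 + 1   # do some work, but never 'return' anything

assert no_return() is None   # falling off the end returns None implicitly
```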

|7. Syntax and semantics of "lambda" should be identical to
| function definitions (for simplicity and uniformity)

Obviously, they can't be *identical* in syntax... the word 'lambda' is
SPELLED differently than the word 'def'. The argument has been made for
code blocks in Python at times, but never (yet) convincingly enough to
persuade the BDFL.

|8. Can you undefine a function, value, class or unimport a module?

Yes.

|9. Syntax for arrays is also bad [a (b c d) e f] would be better
| than [a, b(c,d), e, f]

Hmmm... was the OP attacked by a pride of commas as a child?

It's true that the space bar is bigger on my keyboard than is the comma
key... but I don't find it all THAT hard to press ','.

Actually, the OP's example would require some new syntax for tuples as
well, since there's no way of knowing whether '(b c d)' would be a
function invocation or a tuple. Of course other syntaxes are
*possible*. In fact, here's a quick solution to everything s/he wants:

% cp hugs python

Yours, Lulu...

--
mertz@ | The specter of free information is haunting the `Net! All the
gnosis | powers of IP- and crypto-tyranny have entered into an unholy
..cx | alliance...ideas have nothing to lose but their chains. Unite
| against "intellectual property" and anti-privacy regimes!
-------------------------------------------------------------------------
 

Alex Martelli

Joachim said:
Oh, you're trolling for an inter-language flame fest...
well, anyway:


Multimethods suck.

Multimethods are wonderful, and we're using them as part of the
implementation of pypy, the Python runtime coded in Python. Sure,
we had to implement them, but that was a drop in the ocean in
comparison to the amount of other code in pypy as it stands, much
less the amount of code we want to add to it in the future. See
http://codespeak.net/ for more about pypy (including all of its
code -- subversion makes it available for download as well as for
online browsing).

So, you're both wrong:).


Alex
 

Pascal Costanza

Joachim said:
Oh, you're trolling for an inter-language flame fest...
well, anyway:



Multimethods suck.

Do they suck more or less than the Visitor pattern?
The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being
correct).

So how do you implement an equality operator correctly with only single
dynamic dispatch?
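(For reference, Python's own answer here is not multiple dispatch but a two-step protocol: an `__eq__` that doesn't recognize its operand returns `NotImplemented`, and the interpreter then tries the reflected operand. A sketch, in modern syntax, with made-up `Meters`/`Feet` classes:)

```python
class Meters:
    def __init__(self, v): self.v = v
    def __eq__(self, other):
        if isinstance(other, Meters):
            return self.v == other.v
        if isinstance(other, Feet):
            return self.v == other.v * 0.3048
        return NotImplemented

class Feet:
    def __init__(self, v): self.v = v
    def __eq__(self, other):
        if isinstance(other, Feet):
            return self.v == other.v
        return NotImplemented   # Feet knows nothing about Meters

assert Meters(0.3048) == Feet(1)   # Meters.__eq__ handles the mixed case
assert Feet(1) == Meters(0.3048)   # Feet punts with NotImplemented; Python
                                   # retries the reflected Meters.__eq__,
                                   # so symmetry is preserved
```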


Pascal
 

Terry Reedy

cc'ed in case you are not reading c.l.python, which I am limiting this
to.
So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

No. In Python, all names are associated with objects in namespaces.
Lookup is done as needed at the appropriate runtime. Function objects
are 1st class and are no different from any others in this respect.
The same goes for slots in collection objects being associated with
member objects.

The free online tutorial at www.python.org explains Python basics like
this.

Terry J. Reedy
 

Kenny Tilton

Joachim said:
Oh, you're trolling for an inter-language flame fest...
well, anyway:



Multimethods suck.

The longer answer: Multimethods have modularity issues

Lisp consistently errs on the side of more expressive power. The idea of
putting on a straitjacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"

kenny
 

Kenny Tilton

Kenny said:
Lisp consistently errs on the side of more expressive power. The idea of
putting on a straitjacket while coding to protect us from ourselves
just seems batty. Similarly, an ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing. An admitted former static
typing bigot, he finished by wondering aloud, "Will we all be coding in
Python ten years from now?"

http://www.artima.com/weblogs/viewpost.jsp?thread=4639
 
