Article of interest: Python pros/cons for the enterprise

Carl Banks

The number of libraries you get "out of the box" appears to me as a
more likely explanation for the productivity increase.

The productivity increase of the language appears to me as a more
likely explanation for the number of libraries you get "out of the
box".

:)


Carl Banks
 

Steven D'Aprano

| I even use "named anonymous functions" *cough* by assigning lambda
| functions to names:
|
| foo = lambda x: x+1

Even though I consider the above to be clearly inferior to

def foo(x): return x+1

since the latter names the function 'foo' instead of the generic
'<lambda>'.

Absolutely. If foo() was a function that the user would see, I would
certainly use the def form to create it.

But in a situation like this:


def parrot(x, y, z, func=None):
    if func is None:
        func = lambda x: x+1
    return func(x+y+z)


I don't see any advantage to writing it as:

def parrot(x, y, z, func=None):
    if func is None:
        def func(x): return x+1
    return func(x+y+z)
 

Sebastian Kaliszewski

Jeff said:
I disagree with you completely. Your points don't make any sense to me
at all. I believe I am putting forth less effort by having a generic
resource-management infrastructure, rather than a memory-specific
language feature -- that's not just an implication, it's my honest belief.

Yet your belief is provably wrong. Language developers realised this a
long time ago.

It's quite simple.

1. Your "generic" resource-management infrastructure is not generic to begin
with! It does not work for mutually dependent resources.
2. Your "generic" infrastructure increases burden on the programmer
everywhere any resource (including trivial ones like memory) is used, while
GC kills that burden in 95% of the cases. The C++ish approach puts the notion
of ownership everywhere - both in the 95% of cases where it's useless and in
the remaining 5% where it's actually needed. That's not reduced effort by any
means.
3. You can't handle clean-up errors in a reasonable way in the C++ish
approach, so anything more complex should not be handled that way anyway.


rgds
 

Duncan Booth

Steven D'Aprano said:
Absolutely. If foo() was a function that the user would see, I would
certainly use the def form to create it.

But in a situation like this:


def parrot(x, y, z, func=None):
    if func is None:
        func = lambda x: x+1
    return func(x+y+z)


I don't see any advantage to writing it as:

def parrot(x, y, z, func=None):
    if func is None:
        def func(x): return x+1
    return func(x+y+z)

I take it you never feel the need to inspect tracebacks, nor insert a
breakpoint or print statement at an arbitrary point in the code.

Granted none of those may apply in this particular simple case, but if
you pass functions/lambdas around a lot it can be frustrating when you
get an error such as:

TypeError: <lambda>() takes exactly 2 arguments (1 given)

and the traceback only tells you which line generated the TypeError, not
which lambda was involved. On the other hand:

TypeError: parrot_func() takes exactly 2 arguments (1 given)

while it might not identify the function uniquely in all situations, it
at least tells you something useful about the function being called.
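The naming difference Duncan describes is visible directly on the function objects themselves (a small illustration, not part of the original posts):

```python
# The name recorded on the function object is what tracebacks report.
foo = lambda x: x + 1   # assigned to a name, but still anonymous inside

def bar(x):
    return x + 1        # the def statement stores the real name

print(foo.__name__)  # -> <lambda>
print(bar.__name__)  # -> bar
```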
 

Paul Rubin

Nicola Musatti said:
According to which metric? This statement appears as totally
gratuitous to me. You seem to forget that deallocation of local
objects only entails stack readjustment.

Why do you think they are local objects? They are passed as arguments
to functions that could be storing references elsewhere.
 

Steven D'Aprano

I take it you never feel the need to inspect tracebacks, nor insert a
breakpoint or print statement at an arbitrary point in the code.


Nah, my code is always perfect, first time, every time.

*wink*
Granted none of those may apply in this particular simple case,

This sort of simple case is the one where I would use lambda.

but if
you pass functions/lambdas around a lot it can be frustrating when you
get an error such as:

TypeError: <lambda>() takes exactly 2 arguments (1 given)

and the traceback only tells you which line generated the TypeError, not
which lambda was involved. On the other hand:

In the simple cases I'm talking about, there is only one lambda in scope
at a time. If that were not the case, I'd use def.

TypeError: parrot_func() takes exactly 2 arguments (1 given)

while it might not identify the function uniquely in all situations, it
at least tells you something useful about the function being called.

Sure. For a more complicated case where I'm passing the function around a
lot, def is the way to go. Likewise if the function is complex, I'd use
def instead of trying to cram the whole algorithm into a single
expression for lambda.

I'm not saying that I never use nested def, only that I sometimes assign
lambdas to names. And why not? A lambda function is a first class object
like everything else :)
 

Duncan Booth

Steven D'Aprano said:
In the simple cases I'm talking about, there is only one lambda in scope
at a time. If that were not the case, I'd use def.

In the fictional example you gave there are potentially two lambdas: the
one passed in or the default one.

I use lambda quite often myself, but I do feel that if you do have a name
for the function then you might as well tell the function what it's called.
It probably won't matter, but it doesn't hurt you (2 extra characters to
type), and it might just save you grief further down the line. Each to his
own though.
 

Nicola Musatti

The productivity increase of the language appears to me as a more
likely explanation for the number of libraries you get "out of the
box".

In the case of Python I suspect you have a point even without the
smiley, given how much of what's available was developed without any
major corporation's support. On the other hand, had the kind of money
that's been poured into Java and/or .NET been poured into *standard*
C++, I don't think it would be so far behind. Witness the kinds of
libraries/frameworks that used to and still come with some commercial
C++ implementations, and even some free/open source ones; Boost, ACE
and wxWidgets are the first that come to mind.

Cheers,
Nicola Musatti
 

Aahz

It's not abuse. It's meaningful and compact. The $scalars are
intuitive to anybody who has ever written a shell script, the @arrays
are immediately recognizable... I agree it takes some getting used to,
but then it becomes clear as day.

You're entitled to your opinion, but speaking as a former expert Perl
programmer, I disagree with you.
 

Terry Reedy

| On Sun, 24 Feb 2008 21:13:08 -0500, Terry Reedy wrote:
|
| > | I even use "named anonymous functions" *cough* by assigning lambda
| > | functions to names:
| > |
| > | foo = lambda x: x+1
| >
| > Even though I consider the above to be clearly inferior to
| >
| > def foo(x): return x+1
| >
| > since the latter names the function 'foo' instead of the generic
| > '<lambda>'.
|
| Absolutely. If foo() was a function that the user would see, I would
| certainly use the def form to create it.
|
| But in a situation like this:
|
|
| def parrot(x, y, z, func=None):
|     if func is None:
|         func = lambda x: x+1
|     return func(x+y+z)

Since functions are constants with respect to their code attribute, you
might as well condense that to

def parrot(x, y, z, func=lambda xyz: xyz+1):
    return func(x+y+z)

Then you can claim some actual space saving.

| I don't see any advantage to writing it as:
|
| def parrot(x, y, z, func=None):
|     if func is None:
|         def func(x): return x+1
|     return func(x+y+z)

Good habit?
Don't mislead the newbies ;-?
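Terry's condensed form behaves the same for callers; a quick sketch with hypothetical calls (not from the thread):

```python
def parrot(x, y, z, func=lambda xyz: xyz+1):
    # The default lambda is evaluated once, at definition time,
    # which is safe here because it is never mutated.
    return func(x + y + z)

print(parrot(1, 2, 3))            # -> 7 (default adds 1 to 6)
print(parrot(1, 2, 3, func=abs))  # -> 6
```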

tjr
 

Robert Brown

Larry Bugbee said:
Python's dynamic typing is just fine. But if I know the type, I want
the ability to nail it. ...local variables, arguments, return values,
etc. And if I don't know or care, I'd leave it to dynamic typing.

This is the approach taken by Common Lisp. Often just a few type
declarations, added to code in inner loops, results in vastly faster code.
Also, although I don't tend to use type declarations while interactively
developing code, I often add them later. Mostly, I add them to document the
code, but the added safety and faster execution time are nice benefits.

bob
 

Paul Rubin

Robert Brown said:
This is the approach taken by Common Lisp. Often just a few type
declarations, added to code in inner loops, results in vastly faster code.

That is just a dangerous hack of improving performance by turning off
some safety checks, I'd say. Static typing in the usual sense of the
phrase means that the compiler can guarantee at compile time that a
given term will have a certain type. That can be done by automatic
inference or by checking user annotations, but either way, it should
be impossible to compile code that computes improperly typed values.
Lisp and Python don't attempt to make any such compile time checks.
They check at runtime, or (in the case of Lisp with the checks turned
off) they don't check at all.
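The distinction Paul draws (compile-time guarantees versus runtime checks) shows up in a couple of lines of Python; an illustrative sketch:

```python
def double(x):
    return x * 2

# The function compiles without complaint; the type error only
# surfaces when an unsupported value flows through the operation.
print(double(21))  # -> 42
try:
    double(None)   # NoneType * int is rejected at runtime
except TypeError:
    print("caught a runtime TypeError")
```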
 

Paul Boddie

Witness the kinds of
libraries/frameworks that used to and still come with some commercial
C++ implementations, and even some free/open source ones; Boost, ACE
and wxWidgets are the first that come to mind.

Oh, that's another good reason for C++'s decline: the fragmentation of
the development community through a plethora of proprietary products,
each one with its advocates and a relatively small common ground
(admittedly growing over the years thanks to Free Software and
standards) between them all. When Java came along, even though the
best GUI offering was AWT, it was better than nothing and it was one
of the batteries included. Although Sun's Java was also proprietary,
it was easier for people to obtain and redistribute, often without per-
seat or per-unit licensing costs.

Of course, C++ isn't the only language with this problem. The Lisp
scene has also been plagued by an unnecessary deference to commercial
interests, which means that the hottest topic on comp.lang.lisp right
now is probably Paul Graham's much-anticipated but arguably
disappointing Lisp "successor", Arc, amongst the usual in-fighting and
parade-dampening.

Paul
 

Nicola Musatti

Oh, that's another good reason for C++'s decline: the fragmentation of
the development community through a plethora of proprietary products,
each one with its advocates and a relatively small common ground
(admittedly growing over the years thanks to Free Software and
standards) between them all. When Java came along, even though the
best GUI offering was AWT, it was better than nothing and it was one
of the batteries included. Although Sun's Java was also proprietary,
it was easier for people to obtain and redistribute, often without per-
seat or per-unit licensing costs.

C++ was born and acquired its popularity in a period when freely
available software wasn't as common as it is today and corporations
didn't see any kind of advantage in investing in it.

By the way, funny you should mention AWT, given how it was soon
superseded by Swing, which in turn competes against SWT. And given the
state of the Python web framework scene, if I were you I'd start
looking for another language ;-)

Cheers,
Nicola Musatti
 

Nicola Musatti

On Feb 25, 3:59 pm, Sebastian Kaliszewski said:
1. Your "generic" resource-management infrastructure is not generic to begin
with! It does not work for mutually dependent resources.

How so? Could you give a concrete example?
2. Your "generic" infrastructure increases burden on the programmer
everywhere any resource (including trivial ones like memory) is used, while
GC kills that burden in 95% of the cases. The C++ish approach puts the notion
of ownership everywhere - both in the 95% of cases where it's useless and in
the remaining 5% where it's actually needed. That's not reduced effort by any
means.

Like others around here you seem not to be aware of the existence of
the standard C++ library. That and local variables usually deal with
well over half the cases of memory management in any non-trivial
application, and boost::shared_ptr can deal with a good portion of the
rest.
3. You can't handle clean-up errors in a reasonable way in the C++ish
approach, so anything more complex should not be handled that way anyway.

So it's okay for a Python mechanism to deal with 95% of the cases, but
not for a C++ one? At least in C++ resource management only becomes
more complicated if you need more control.

Cheers,
Nicola Musatti
 

Carl Banks

The Lisp
scene has also been plagued by an unnecessary deference to commercial
interests, which means that the hottest topic on comp.lang.lisp right
now is probably Paul Graham's much-anticipated but arguably
disappointing Lisp "successor", Arc, amongst the usual in-fighting and
parade-dampening.

It looks like his main contribution was to get rid of superfluous
parentheses. Which, admittedly, is no small thing for Lisp.


Carl Banks
 

Marc 'BlackJack' Rintsch

At least in C++ resource management only becomes more complicated if you
need more control.

I think this is the point where so many people here disagree. I'm coming
from a "garbage collection" background in OOP programming. In C++
resource management becomes instantly more complicated because I have to
think about memory management and must actively manage it in *every case*.
Writing code in a RAII style and using smart pointer templates is a cost
for me. A cost that's quite high, because it feels completely wrong to
have to think about it and to write in that "value style", because that
goes against my expectations/picture of OOP -- a graph of
independent/loosely coupled objects communicating with each other. In
this sense C++ looks like a quite crippled and fragile OOP language to me.

Ciao,
Marc 'BlackJack' Rintsch
 

Aaron Watters

This I found less hard to believe. Python is more expressive than Java
and usually requires less code for the same task. Moreover the
availability of libraries is comparable.

I tend to cheat when I code in Java and pretend
I'm writing in Python. But even then the biggest
pain comes in when I try to use really advanced
data structures and get all knotted up in the verbosity
-- and when I try to figure out what I was doing later
it's even worse. For example in Python I tend to build
things like dictionaries of tuples to lists of
dictionaries without thinking about it, but in Java
the equivalent of

D[ (x,y) ] = [ { a: b } ]

is too horrible to be imagined, even if you cheat
and use the non-type-safe containers. Of course
this is in addition to other Java annoyances like
no proper support for multivalued returns or
function pointers, and overgeneralized
libraries.
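For comparison, the nested structure Aaron mentions really is effortless in Python; a tiny sketch with made-up keys and values:

```python
# A dict keyed by tuples, whose values are lists of dicts.
D = {}
D[("x", "y")] = [{"a": "b"}]          # built in one line
D[("x", "y")].append({"c": "d"})      # and grown just as easily

print(D[("x", "y")])  # -> [{'a': 'b'}, {'c': 'd'}]
```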

However, I have found in the corporate
environment that managers frequently don't
like it when you do in a few days things
that they themselves don't know how
to do in less than several months. Especially
when it makes the other programmers angry.
Sometimes I think programmers should get
sociology/psychology/poli.sci degrees and pick up the
programming stuff on the job, since most of
what counts seems to be politics, really.

-- Aaron Watters

===
http://www.xfeedme.com/nucular/pydistro.py/go?FREETEXT=spam+eggs
 

Paul Rubin

Nicola Musatti said:
Read the title. This is about "C Traps and Pitfalls".

Whoops, I think the same author wrote a similar one about C++. He hangs
out here on this newsgroup sometimes. I didn't notice that my keyword search
got the wrong one.
 

Ask a Question

Want to reply to this thread or ask your own question?

You'll need to choose a username for the site, which only take a couple of moments. After that, you can post your question and our members will help you out.

Ask a Question

Members online

Forum statistics

Threads
473,796
Messages
2,569,645
Members
45,371
Latest member
TroyHursey

Latest Threads

Top