Python Productivity Gain?


Dave Brueck

William said:
It used to be that Python programs were shorter, faster, readable,
writable, and simply better. But this was during the days when most
programmers had a Unix background. Nowadays, most of the programmers
are coming from a Windows background, and Python programs have become
as verbose and unreadable as Visual Basic or Perl.

I have a tough time taking this comment seriously - didja forget some smilies?
If not, you're basing this opinion on ______?

Is there any evidence that implies that for the same tasks the programs are now
longer, slower, less readable, less writeable, or worse? (or that any slip has
been caused by more Windows programmers?)
Ruby has not been corrupted as such. It makes complicated things less
complicated. But it still makes simple things not as simple as Python.

???
 

Dave Brueck

kbass said:
In different articles that I have read, people have constantly alluded to
the productivity gains of Python. One person stated that Python's
productivity gain was 5 to 10 times over Java in some cases. The
strange thing that I have noticed is that there were no examples of this
productivity gain (i.e., projects, programs, etc.).

If you're interested in funding a study, I'm sure you could get someone to do a
truer test. Short of that, the evidence is mostly anecdotal because it's rare
"in the real world" to rewrite one non-trivial application in a different
language just for the heck of it. There are almost always changes in program
architecture or feature set so that it's tough to do an apples-to-apples
comparison.

That said,

http://tinyurl.com/39jsh

-Dave
 

Paul Rubin

Paul Prescod said:
Are you going to get the SAME PROGRAMMERS to solve the same problem
twice? If so, the second language will have a big advantage. Are you
going to get different programmers? How do you know they are the same
skill?

Maybe it doesn't matter. If you hire your programmers by running an
ad in the paper, and advertising "Python programmers wanted" gets you
better programmers than advertising "VB programmers wanted", maybe
that by itself is good enough reason to do your project in Python,
irrespective of whether Python is objectively better than VB.
 

jmdeschamps

Peter Hansen said:
Harry George wrote in a thought-provoking post:

My background is (roughly in order) APL, FORTRAN, BASIC, Assembly, C,
university :), Pascal, C++, Object Pascal, Java, LabVIEW, and Python
(with a dozen others I forget) and I'm telling you Python is a really
great language. I've also dumped my previously favourite languages
(to wit, BASIC, C, C++, Delphi, and Java) to focus on Python.

Now all you need are 19 others and we'll have a significant data point.
(Signifying what? That's what I want to know. ;-)

-Peter

Well here goes! I started in Prolog (coming from formal logic it was a
breeze), did Pascal, some Forth (not much), a little Smalltalk, C,
C++, Java, JavaScript, HyperTalk/SuperTalk (+ other Talks), VB, VBA,
AppleScript, started doing Perl (for CGI), then read about doing this
stuff in Python instead! Ported my research work from Java to Python
and never looked back (but sure, you can always use another language :)
Wow, I never had so much fun since working with Craftsman on NeXT! Thanks
GvR!

Jean-Marc
ps That's 2, 18 to go!
 

Anton Vredegoor

GerritM said:
Fortran, Basic, Assembly many times, Pascal, C, Objective-C, Object Pascal,
C++, Java and Python.
Yes Python beats the rest wrt productivity for most of my applications :)

Now do we need 18 more for a SIGNIFICANT data point?

Fortran, C, Lisp, Elan, Gfabasic, Pure C, Borland Pascal, Borland C,
Borland C++, Visual Basic, Delphi, Python

Python is the first language out of this list for me that I can forget
about while using it, so that I can concentrate on the algorithmic
aspects of the problem.

My next language will probably be one that suggests a better algorithm
after I type in some pseudocode. Maybe using some interface to
comp.lang.xxxxxx for the interpreter?

17 to go.

Anton
 

Harry George

Paul Prescod said:
It would be easy to find 20 such people but it would be easy to find
20 such people for almost any language. I think that the PERCENTAGE of
people who are happy with Python and would gladly stick with it for a
few years is higher than with most other languages but if you're just
looking for 20 fanboys you can certainly find them for any modern
language.

Paul Prescod


I should have phrased this as: If there are 20 people you know who
have varied programming backgrounds, and they *all* are excited by
Python within 2 weeks of meeting it, then that is a significant data
point.

That's what is happening here. Assorted FORTRAN, COBOL, C/C++, VB,
Perl, Lisp, Prolog, and Java programmers. Basically the same reaction
from each one.
 

Matthias

Harry George said:
Normally a science passes through phases:

a) Natural History. Wander around, get the lay of the land,
collect specimens, and try to organize what you find into mnemonically
effective schemes.

b) Field Research. Pose a hypothesis, isolate a piece of the field as
best you can, and apply your experimental factors and controls.
Observe results and interpret with a large grain of salt.

c) Lab Research. Set up isolated environments with significant
attention to eliminating non-experimental reasons for variation. Pose
the hypotheses. Observe results, and interpret with recognition that
a lab may be a poor model for reality.

You are asking that we jump to lab research when the field barely
sustains field research. Mostly we are still in natural history and
anecdotes.

We agree in our description of the current situation. That only a
little lab research (with a selection of suitably created language
variants [1]) is being done is, however, a political decision. It's
sexier to create-and-hype new languages/tools/processes than to do
actual research.
Of course, even in the natural history phase pioneers and advance
scouts are capable of detecting an easier pass through the mountains
of complexity. If 20 people from varied backgrounds, each of whom has
worked in several languages, tell me that Python is a really great
language, then I'll take that as a significant data point. Especially
if they are dumping their previously favorite languages (as varied as
COBOL, Perl, Java, C++, VB, Modula-3, Lisp, Prolog) to focus on
Python.

The problem with this approach is that if you go to a LISP/ Prolog/
Modula-3/ etc. newsgroup and ask around, you will find 20 people from
varied backgrounds telling you that LISP/ Prolog/ Modula-3/ etc. is a
really great language. In better-randomized samples, a popularity vote
will be strongly biased toward languages with strong marketing.

If people say "I'm doing X and I'm very happy with language Y for
reason Z" that's fine. But we probably should stop over-selling
languages (or tools or processes) claiming large increases in
productivity without having solid evidence in hand.

Matthias

---
[1] An example of a study that I would call scientific is Lutz
Prechelt, Walter F. Tichy: "A Controlled Experiment to Assess the
Benefits of Procedure Argument Type Checking", IEEE Transactions on
Software Engineering, 1998, where two almost identical languages are compared in
a small application domain. The only difference is that one language
does some type checking, the other does none.
 

Cameron Laird

.
[good analytic points]
.
.
We agree in our description of the current situation. That only a
little lab research (with a selection of suitably created language
variants [1]) is being done is, however, a political decision. It's
sexier to create-and-hype new languages/tools/processes than to do
actual research.
.
.
.
No. Or even, "No!"

I'm 'bout as quick to impute covert political motives as anyone.
I sure don't see it in this case, though. Creating-and-hyping
is sooooo much different an activity than "actual research" that
I simply don't see them as alternatives for most individuals.
When I apply my razor-of-Occam, I come up with an abundance of
explanations of "the current situation" without needing to
invoke politics; the expense of research jumps to my mind, first,
as apparently is true for others who've written in this thread.

Maybe you mean something different by "political" than I understand.

Incidentally, although I haven't made an opportunity to speak
with him deeply about this, I'm willing to bet that Guido *does*
regard Python as an experiment, and an instance of "actual research".
 

Paul Prescod

Matthias said:
...

A more scientific approach would be: Take a language X, build variants
X-with-OOP, X-with-static-typing, X-with-funny-syntax and let
developers use it under controlled settings. Watch them. Generate
bug statistics. Look for differences. Try to explain them. This
would be hard work, difficult to do and expensive. But I expect this
approach would find better [1] languages faster. The benefits might
be substantial.

And what would the business model for this endeavor be?

The main business model for language development these days is battling
for control of developers. Sun tries to woo them to portable code with
Java and Microsoft tries to pull them back to Windows with C#.

Productivity is just a marketing checkbox for the people writing the
cheques.

Even people like ESR and RMS use languages as ideological tools: "Open
Java or Python will stomp you" and "Make Python GPL-compatible or we
can't recommend it for GNU projects."

Paul Prescod
 

Paul Prescod

Harry said:
...
Of course, even in the natural history phase pioneers and advance
scouts are capable of detecting an easier pass through the mountains
of complexity. If 20 people from varied backgrounds, each of whom has
worked in several languages, tell me that Python is a really great
language, then I'll take that as a significant data point. Especially
if they are dumping their previously favorite languages (as varied as
COBOL, Perl, Java, C++, VB, Modula-3, Lisp, Prolog) to focus on
Python.

It would be easy to find 20 such people but it would be easy to find 20
such people for almost any language. I think that the PERCENTAGE of
people who are happy with Python and would gladly stick with it for a
few years is higher than with most other languages but if you're just
looking for 20 fanboys you can certainly find them for any modern language.

Paul Prescod
 

Matthias

[...]
Maybe you mean something different by "political" than I understand.

I used politics in the broad sense of "social relations involving
authority or power" (WordNet), not in the narrow sense related to
governments or political parties. My wording could have been more
careful, sorry.
Incidentally, although I haven't made an opportunity to speak
with him deeply about this, I'm willing to bet that Guido *does*
regard Python as an experiment, and an instance of "actual research".

I'm thankful that Python exists as it is my favorite language for some
application domains. But I think in your last sentence your concept
of "research" is different from the one I was using in previous posts.
I was using "research" as "systematic investigation to establish
facts" (WordNet) as in physics, psychology, some social sciences, or
engineering. I'm aware that other understandings are common and make
sense.

My posting was not against creators of new languages, tools, and
methodologies, but against publishing strong and general claims about
their potential benefits without offering the slightest piece of
unbiased evidence.

Some questions concerning programming languages that are debated almost
endlessly on Usenet (typing, garbage collection, syntax issues) could be
_resolved_ using lab experiments. That so much effort goes into
marketing and apparently so little into actual experiments is what I
criticize.
 

Cameron Laird

.
.
.
My posting was not against creators of new languages, tools, and
methodologies, but against publishing strong and general claims about
their potential benefits without offering the slightest piece of
unbiased evidence.

Some questions concerning programming languages that are debated almost
endlessly on Usenet (typing, garbage collection, syntax issues) could be
_resolved_ using lab experiments. That so much effort goes into
marketing and apparently so little into actual experiments is what I
criticize.

Me, too! That is, experiments to resolve questions about
garbage collection and so on intrigue me, and I keep pushing
myself to design ways to cut the cost of their resolution.
I am quite impatient with all the marketing dissipation.
I applaud your clear denunciation of unsubstantiated claims.
 

Corey Coughlin

It is a difficult problem, but I don't think it's completely
insurmountable. Take some programmers right out of school, or just a
general population of people, give them training in language X for a
fixed period of time, set them up to perform some task, and see how
long it takes them. Sure, some of them will be better programmers
than others, but with a large enough sample population you should be
able to draw some conclusions on the average, if there is an effect to
be measured. And yes, the bigger the population, the better the
results, so it would be fairly expensive to conduct, but still, you
could draw conclusions. Getting funding would be tricky, though,
that's a given.
 

Peter Hansen

Corey said:
It is a difficult problem, but I don't think it's completely
insurmountable. Take some programmers right out of school, or just a
general population of people, give them training in language X for a
fixed period of time, set them up to perform some task, and see how
long it takes them. Sure, some of them will be better programmers
than others, but with a large enough sample population you should be
able to draw some conclusions on the average, if there is an effect to
be measured. And yes, the bigger the population, the better the
results, so it would be fairly expensive to conduct, but still, you
could draw conclusions. Getting funding would be tricky, though,
that's a given.

And this only looks at productivity for new programmers, which might
be an interesting problem for some people, but not others. I would
venture to say that Python provides a larger productivity boost for
_experienced_ programmers, especially those who are very experienced
with OOP and maybe even more so those who have adopted agile methods...

-Peter
 

beliavsky

kbass said:
In different articles that I have read, people have constantly alluded to
the productivity gains of Python. One person stated that Python's
productivity gain was 5 to 10 times over Java in some cases. The
strange thing that I have noticed is that there were no examples of this
productivity gain (i.e., projects, programs, etc.). Can someone give me
some real-life examples of productivity gains using Python as opposed to
other programming languages?

From my own personal experience, I have been programming with Python for
about 6 months (but I have been programming in other languages for over 10
years) and I have noticed that the more I have gotten used to programming in
Python, the more my programming speed has increased. But ... this is true
of any language that you program in, as long as you are learning the
methodologies and concepts of the programming language. Your thoughts?

Kevin

My main other language is Fortran 95. For simple text analysis or
"database" programming (sorting a list, merging two lists etc.),
programming in Python is much faster, since more functionality is
built-in and the code is generic. If you write a sorting routine in
Python, it will work for lists with any type of elements (ints,
reals, etc.).
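
To make the genericity point concrete, here is a minimal sketch (the
name sorted_copy is just illustrative); the identical routine handles
ints, floats, or strings:

# One sort routine, any comparable element type.
def sorted_copy(items):
    """Return a sorted copy of items, leaving the original untouched."""
    result = list(items)   # accepts any sequence
    result.sort()          # comparison is resolved per element type
    return result

print(sorted_copy([3, 1, 2]))          # integers
print(sorted_copy([3.5, 1.25]))        # floats, same routine
print(sorted_copy(["pear", "apple"]))  # strings, same routine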

For numerical work, I still prefer Fortran 95 to Numeric Python.
Genericity is not as important here, and in any case a single Fortran
95 code can be written to do a calculation in single, double, or
quadruple precision, using the KIND feature. Fortran 95, properly
used, is safer than Python, because (for example)
1. function interfaces are checked at compile time
2. constants can be declared
3. the DO loop is more restrictive -- the looping variable cannot be
changed inside the loop

Well-written numerical F95 code is clearer to me than comparable
Python code because one can specify what input arguments of a function
will not be changed (using the INTENT(IN) feature) and what the
dimensions of all arguments are. It's also clear from reading the
declarations what the function is returning, whereas a Python function
can return anything, depending on how it is executed.

There are several independent implementations of languages like C++
and Fortran on both Linux and Windows. I don't think this is true for
Python. They create stand-alone executables that don't require an
interpreter on the target computer. Their language committees are
unlikely to make changes that break old code, as the prospective
change in Python's integer division will do.

I do like Python, but no language is a panacea. I think Python should
add some OPTIONAL safety features.
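
As a rough sketch of what one such optional check could look like in
plain Python today (the expects decorator below is hypothetical, not an
existing language feature):

# A hand-rolled "optional safety" check: argument types are asserted
# at call time, and the whole mechanism can be switched off globally.
CHECK_TYPES = True  # set to False to disable every check at once

def expects(*types):
    """Build a decorator that asserts the types of positional arguments."""
    def decorate(func):
        def wrapper(*args):
            if CHECK_TYPES:
                for arg, t in zip(args, types):
                    assert isinstance(arg, t), \
                        "%s: expected %s, got %r" % (func.__name__, t, arg)
            return func(*args)
        return wrapper
    return decorate

@expects(list)
def mean(xs):
    return float(sum(xs)) / len(xs)

mean([1.0, 2.0, 3.0])   # passes the check
# mean(3)               # would raise AssertionError instead of failing later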
 

Paul Prescod

Matthias said:
...

If people say "I'm doing X and I'm very happy with language Y for
reason Z" that's fine. But we probably should stop over-selling
languages (or tools or processes) claiming large increases in
productivity without having solid evidence in hand.

I feel that anecdotal evidence is better than no evidence. Before we had
modern science, people would tell each other which berries to eat and
which to avoid, based on anecdotal evidence. Sometimes you would get it
wrong (tomatoes), but you got it right more often than wrong, and that
saved lives (in our case it could be weeks of effort wasted).

I am quite comfortable saying that individual programmers are more
productive in Python than in most languages, with a very high degree of
confidence for lower-level languages and more fuzziness for higher-level
languages. I've converted many programmers who have no reason to be
biased in favour of Python and they all agree with me. I've seldom heard
anyone claim (for example) that they were more productive when they had
to manage memory allocations (or registers!) manually.

I would be less comfortable talking about teams, especially large teams.
I don't have as many anecdotes to draw from there and "logic" could lead
you either way (e.g. common wisdom is that static typing helps large
teams to avoid stepping on each other's toes).

Paul Prescod
 

Harry George

It is a difficult problem, but I don't think it's completely
insurmountable. Take some programmers right out of school, or just a
general population of people, give them training in language X for a
fixed period of time, set them up to perform some task, and see how
long it takes them. Sure, some of them will be better programmers
than others, but with a large enough sample population you should be
able to draw some conclusions on the average, if there is an effect to
be measured. And yes, the bigger the population, the better the
results, so it would be fairly expensive to conduct, but still, you
could draw conclusions. Getting funding would be tricky, though,
that's a given.

It is common for a ComSci prof or grad student to crank up such a
study, using undergrad and grad students as the subjects. These
subjects can generally be coerced to participate ("it is required for
the course"). For "novice programmer" research, high school students
are often used. These tend to be self-selected, and are not
representative of the general population.

So it is possible to set up such an experiment, and even to attend to
all the statistical niceties. The problem is that the experimental
model fails to match reality in other ways.

For example, real world teams have usually solved interpersonal
pecking orders and courting rituals before the coding starts. They
have domain knowledge beyond reading a (possibly fake) case study.
They have well-honed development environments, and may have existing
sets of unittests. Their requirements/directions are subject to major
changes in midstream.

These conditions are hard to duplicate in a short-term academic
setting. They cannot be solved by a larger sample size. That's why I
suggest that "lab research" is not ready for prime time in this field.

Some researchers have gone out in the field to use working teams.
Others retrospectively examine past projects. These have the flavor
of "field research".
 

Cameron Laird

.
.
.
My main other language is Fortran 95. For simple text analysis or
"database" programming (sorting a list, merging two lists etc.),
programming in Python is much faster, since more functionality is
built-in and the code is generic. If you write a sorting routine in
Python, it will work for lists with any type of elements (ints,
reals, etc.).

For numerical work, I still prefer Fortran 95 to Numeric Python.
Genericity is not as important here, and in any case a single Fortran
95 code can be written to do a calculation in single, double, or
quadruple precision, using the KIND feature. Fortran 95, properly
used, is safer than Python, because (for example)
1. function interfaces are checked at compile time
2. constants can be declared
3. the DO loop is more restrictive -- the looping variable cannot be
changed inside the loop

Well-written numerical F95 code is clearer to me than comparable
Python code because one can specify what input arguments of a function
will not be changed (using the INTENT(IN) feature) and what the
dimensions of all arguments are. It's also clear from reading the
declarations what the function is returning, whereas a Python function
can return anything, depending on how it is executed.

There are several independent implementations of languages like C++
and Fortran on both Linux and Windows. I don't think this is true for
Python. They create stand-alone executables that don't require an
interpreter on the target computer. Their language committees are
unlikely to make changes that break old code, as the prospective
change in Python's integer division will do.

I do like Python, but no language is a panacea. I think Python should
add some OPTIONAL safety features.

These are good points to raise.

Fortran's my first language. I have little opportunity nowadays to
exercise it, much as I'd like to do so. I'm certainly not as current
with it as you.

When I read, "It's also clear from reading the declarations what the
function is returning ...", I take it that you have in mind such
distinctions as FLOAT vs. INT. Reasoning about types is a *frequent*
topic of discussion in comp.lang.python. I'll summarize my experience
this way: FLOAT vs. INT (and so on) takes little of my day-to-day
attention. I focus on unit tests and coding which is semantically
transparent in a more comprehensive way than just type-correctness.
Therefore, while I acknowledge the advantages you describe for Fortran,
I categorize them mostly as, "no big deal".

There are a few other factual clarifications I hope I can contribute.
Perhaps most important is to recommend "dual-level" programming; it's
eminently feasible to *combine* Fortran and Python. I think of language
choice in a connected, rather than discrete, space; my alternatives
are not 1.0 Python vs. 1.0 Fortran, but 1.0 Fortran vs. (0.6 Fortran,
0.4 Python) vs. ...
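
For instance, the f2py wrapper generator makes that combination nearly
painless. A sketch, with illustrative file and module names:

# Build step (shell), assuming stats.f90 holds the Fortran sd() function:
#     f2py -c -m fstats stats.f90
# The generated extension module is then importable from Python:
import fstats                  # hypothetical f2py-built module

data = [2.0, 4.0, 4.0, 5.0]    # a plain Python list; f2py converts it
print(fstats.sd(data))         # Fortran does the arithmetic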

While there's only one "independent" implementation of Python, in the
strictest sense, it's frankly impressive how many *distinct*
implementations there are
<URL: http://phaseit.net/claird/comp.lang.python/python_varieties.html >.
In particular, I think you'll have an interest in the
performance-enhancing Psyco and the "compilers" which produce
stand-alone executables. Perhaps you hadn't encountered them before.

No language is a panacea, indeed. Python comes remarkably close,
though.
 

beliavsky

These are good points to raise.

Thanks for your informative reply.
Fortran's my first language. I have little opportunity nowadays to
exercise it, much as I'd like to do so. I'm certainly not as current
with it as you.

If you are willing to spend the time to learn it, a subset Fortran 95
language called F is free for Windows, Linux, and other platforms --
see http://www.fortran.com/F . A project to create a full Fortran 95
open-source compiler is well underway and may be completed this year.
When I read, "It's also clear from reading the declarations what the
function is returning ...", I take it that you have in mind such
distinctions as FLOAT vs. INT. Reasoning about types is a *frequent*
topic of discussion in comp.lang.python. I'll summarize my experience
this way: FLOAT vs. INT (and so on) takes little of my day-to-day
attention. I focus on unit tests and coding which is semantically
transparent in a more comprehensive way than just type-correctness.
Therefore, while I acknowledge the advantages you describe for Fortran,
I categorize them mostly as, "no big deal".

It's not just float vs. int. Below is a very simple illustration --
code to compute the standard deviation of a set of numbers, in Fortran
95 and Python. In the F95 code, it is clear that
(1) x(:) is a 1-d array of reals that will not be changed inside the
function (note the intent(in))
(2) the function returns a single value (F95 functions can return
arrays and structures, if they are declared as such).
(3) the function has no side-effects because it is declared PURE.

In the python code, all you know is that sd() takes one argument. It
could change that argument or some other global variable. It could
return a scalar that is real, integer, or something else. It could
return a list, a 1-D Numeric array, a 2-D Numeric array etc.

pure function sd(x) result(value)
! compute the sd of a vector
real, intent(in) :: x(:)
real :: value
integer :: n
real :: xmean
n = size(x)
value = 0.0
if (n < 2) return
xmean = sum(x)/n
value = sqrt(sum((x-xmean)**2)/(n-1.0))
end function sd

from Numeric import size, sum, sqrt  # array-aware helpers used below

def sd(x):
    """ compute the sd of a vector """
    n = size(x)
    if n < 2: return -1
    xmean = sum(x)/n
    return sqrt(sum((x-xmean)**2)/(n-1.0))

In this case, and for other numerical work, I prefer Fortran 95 to
Python, even ignoring speed advantages and the advantage of an
executable over a script. F95 can look a lot like Python -- no curly
braces or semicolons, and a lack of C/C++ trickery in general.

Of course, Python and other scripting languages were not primarily
designed for numerical work. Numeric Python is powerful and elegant.

Another point. Python advocates often claim it is better than compiled
languages because the code is much shorter. They rebut worries about
the loss of safety by recommending unit testing. In a Fortran 95 or
C++ program, I don't think as many tests need to be written, because
the compiler catches many more things. If you include the amount of
testing code when counting the amount of code needed, I suspect
Python's brevity advantage will partly disappear. Also, I regard unit
tests to check what happens when a Python function is called with
invalid arguments (int instead of float, scalar instead of array) as
low-level, tedious work that I would rather delegate to a compiler.
Scripting advocates claim that their languages are higher level than
compiled languages, but in this case the reverse is true -- the
compiler does more work for you than the interpreter does.
 

Dave Brueck

beliavsky said:
Another point. Python advocates often claim it is better than compiled
languages because the code is much shorter. They rebut worries about
the loss of safety by recommending unit testing. In a Fortran 95 or
C++ program, I don't think as many tests need to be written, because
the compiler catches many more things. If you include the amount of
testing code when counting the amount of code needed, I suspect
Python's brevity advantage will partly disappear. Also, I regard unit
tests to check what happens when a Python function is called with
invalid arguments (int instead of float, scalar instead of array) as
low-level, tedious work that I would rather delegate to a compiler.
Scripting advocates claim that their languages are higher level than
compiled languages, but in this case the reverse is true -- the
compiler does more work for you than the interpreter does.

Interesting thread! One minor nit though: writing the type of low-level tests
you describe above is almost always a no-no for Python programs. Good tests
tend to be geared towards functionality, so that the number and type of tests
correlates more closely to the features and algorithmic complexity of the code
than it does to the language.

Those ultra tedious low-level tests are usually a waste of time because they
end up covering cases that either never happen in practice or cases that are
already covered by tests that cover real functionality. In fact, really the
only time where I've come across the need for those types of tests is in
_other_ languages (C++) because they are more common in languages that require
the programmer to manage more details, and because failing to test those
conditions results in a hard crash of the program (tests like "properly rejects
null pointers passed in" and "doesn't access beyond array bounds").
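
To illustrate the distinction with a hypothetical sketch (mystats and
its sd function stand in for whatever module is under test):

import unittest

from mystats import sd   # hypothetical module under test

class SdFeatureTest(unittest.TestCase):
    # A feature-level test: it checks a real result on real data. A
    # scalar passed where an array belongs would fail here too, so no
    # separate type-checking test is needed.
    def test_known_sample(self):
        self.assertAlmostEqual(sd([2.0, 4.0, 4.0, 5.0]), 1.2583, places=4)

if __name__ == "__main__":
    unittest.main()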

IOW, in the *worst* case you invest the same amount of time and effort testing
your Python application as you spend testing your C++ (or whatever)
application - I haven't come across ANY case in practice where you end up
investing more effort; in fact I'd say that in our projects we end up spending
considerably less effort because (1) we get to skip precisely the type of tests
you're talking about and (2) the setup/teardown/test code is much more concise
and reusable (which is why in the past we've used Python as the test language
for applications written in other languages).

In theory, yes, you could have a Python function that gets passed a scalar when
it was expecting an array, but in order for that to occur you almost always
have a gaping hole in your suite of higher-level _feature_ tests. With adequate
feature and system test coverage (which you'll need regardless of the
implementation language), the probability of encountering a scalar vs array
error is way less likely than e.g. a null pointer access bug.

-Dave
 
