benchmark

C

Chris Mellon

Really, how silly can it be when you suggest someone is taking a
position and tweaking the benchmarks to prove a point [...]

I certainly didn't intend to suggest that you had tweaked -anything- to
prove your point.

I do, however, think there is little value in slavishly implementing the
same algorithm in different languages. To constrain a dynamic language
by what can be achieved in a static language seemed like such an
-amazingly- artificial constraint to me.

I don't know about that... it can be very useful to (say) demonstrate
that Lisp-style lists are fast in Lisp, and slow in Python. Or that
try...except is fast in Python, and slow in Java.

That's true, but note that the original post doesn't attempt to draw
any conclusions about what's fast or slow from the benchmark, which is
one reason why it's a poor example of benchmarking.
And if your aim is to compare languages, then it's only fair to keep the
algorithm constant. Imagine how we would holler and shout if the
benchmark compared Ruby using Quicksort and Python using Bubblesort.

That's definitely true, and (for example) the Alioth benchmarks are
intended to benchmark specific algorithms for comparison's sake.
I guess what some of us are complaining about is that the algorithm
chosen doesn't suit Python's execution model very well, and hence Python
is slow. If the algorithm chosen had suited Python, and hence Python came
up looking really fast, we'd be ecstatic. How about that, hey? *wink*

The "best" way to implement this problem, as bitfield manipulation,
would actually show python in even worse light. I suspect the main
thing that this benchmark is actually testing is loop overhead, and
secondarily object allocation speed. Python is pretty slow in the
former and reasonable in the latter, so I don't find the results very
surprising at all.
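
To make the loop-overhead point concrete, here is a small illustrative timing
sketch (not taken from the benchmark under discussion) comparing an explicit
Python loop with the equivalent built-in, which runs the loop in C:

import timeit

def manual_sum(n=100000):
    total = 0
    for i in range(n):        # every iteration pays interpreter dispatch overhead
        total += i
    return total

def builtin_sum(n=100000):
    return sum(range(n))      # the loop itself runs in C inside the interpreter

print("manual loop:  %.4fs" % timeit.timeit(manual_sum, number=100))
print("built-in sum: %.4fs" % timeit.timeit(builtin_sum, number=100))

On a typical CPython build the built-in version is noticeably faster, and that
per-iteration interpreter cost is exactly the kind of gap an object-allocating,
loop-heavy benchmark tends to measure.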
...

I can't disagree with that one bit.

As with genitals, the important thing about benchmark comparison is
what you're going to do with the results.
 
D

Dhananjay

Really, how silly can it be when you suggest someone is taking a
position and tweaking the benchmarks to prove a point [...]

I certainly didn't intend to suggest that you had tweaked -anything-
to prove your point.

That was not how I read it at first, but I'll put that down to a
misreading on my part.
I do, however, think there is little value in slavishly implementing
the same algorithm in different languages. To constrain a dynamic
language by what can be achieved in a static language seemed like such
an -amazingly- artificial constraint to me. That you're a fan of
Python makes such a decision even more confusing.

It is a sufficiently well understood maxim that any comparison
between two factors should attempt to keep all other factors as equal
as possible (ceteris paribus, everything else being equal), slavishly
if you will. It is my perception that had I changed the algorithms, I
would have drawn a much higher level of criticism for comparing
apples and oranges.

I simply could not understand your point with regard to dynamic vs.
static languages. If you are by any chance referring to making the
code a little less OO, I believe the entire exercise could be redone
using a procedural algorithm, and all the languages would run much,
much faster than they currently do. But that would essentially be
moving from an OO-based design to a procedural design. Is that what
you are referring to (I suspect not; I suspect it is something else)?
If not, I would certainly appreciate you spending five minutes
describing it.

I am a fan of Python on its own merits. There is little relationship
between that and this exercise.

It's great that you saw value in Python enough to choose it for actual
project work. It's a shame you didn't endeavour to understand it well
enough before including it in your benchmark.

I have endeavoured hard, and maybe there's a shortcoming in the
results of that endeavour. But I haven't quite understood what it is I
haven't understood (hope that makes sense :) )
As for it being "disappointing", the real question is: has it been
disappointing for you in actual real-world code?

I am extremely happy with it. But there definitely are some projects I
worked on earlier for which I would simply not choose any dynamic
language (not Ruby, not Python, not Groovy). These languages
simply cannot be up to the performance demands of some projects.
Honestly, performance benchmarks seem to be the dick size comparison
of programming languages.

Not sure if there is a real-life equivalent use case if I were to take
this analogy further. But there are some days (mind you, not most days)
when one needs a really big dick. Always helpful to know the size.
 
T

Terry Reedy

Is there any reason why psyco is not a part of the core Python
feature set?

Psyco was a PhD project. I do not believe the author ever offered it
for inclusion. Last I knew, it was almost but not completely compatible.

Is there a particular reason it is better kept as
a separate extension?

If he did, he would have to commit to updating it to work with new
versions of Python (2.6/3.0), which I don't believe he wants to do.
Last I knew, he was working with the PyPy project instead and its JIT
technology. On the other hand, extensions that become part of the core
are also restricted by Python's release schedule, including the rule of
no new features in bug-fix (dot-dot) releases. So library extensions
need to be rather stable but still maintained.
Are there any implications of using psyco?

It compiles statements to machine code for each set of types used in the
statement or code block over the history of the run. So code used
polymorphically with several combinations of types can end up with
several compiled versions (same as with C++ templates). (But a few
extra megabytes in the running image is less of an issue than it was
even 5 or so years ago.) And time spent compiling for a combination
used just once gains little. So it works best with numeric code used
just for ints or floats.
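
For context, enabling psyco in a Python 2 program is typically a couple of
lines at the top of the script; a minimal sketch (psyco was never ported to
Python 3, and the bound function name below is purely hypothetical):

# Python 2 only -- psyco specializes functions to machine code per type combination.
try:
    import psyco
    psyco.full()                 # let psyco profile and specialize whatever it can
    # psyco.bind(hot_function)   # or compile just one hot spot (hypothetical name)
except ImportError:
    pass                         # psyco not installed: run on the plain interpreter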

Terry J. Reedy
 
B

bearophileHUGS

alex23:
Honestly, performance benchmarks seem to be the dick size comparison
of programming languages.

I don't agree:
- benchmarks can show you which language to use for your purpose
(because there are many languages, and a scientist has to choose the
right tool for the job);
- they can show where a language implementation needs improvements (for
example, the Haskell community has improved one of their compilers
several times thanks to the Shootout; the D community has not yet done
the same because the language is still evolving too fast, so
performance tuning is premature);
- making some code faster for a benchmark can teach you how to make
code faster in general, how CPUs work, or even some bits of
computer science;
- if the benchmarks are well chosen and well used, they can show you
which languages are faster (you may say 'which implementations are
faster', and that's partially true, but some languages have semantics
that allow better or much better optimizations).

A computer is a machine useful for many purposes, and programming
languages allow its users to make the machine act as they want. So
computers and languages give some power: they allow you to do things
you can't do without a computer. A language can give you power because
it lets you write less bug-prone code, or because it gives you more
pre-built modules that let you do more things in less time, or because
it gives you the power to perform computations in less time, to find a
specific solution faster. So Python and C give you different kinds of
power, and they are both useful. Other languages like D and Java try to
be a compromise; they try to give you as much as possible of both
"powers" (and they sometimes succeed: a D or OCaml program may be
almost as fast as C, while being on the whole much simpler and safer to
write than C code).

Bye,
bearophile
 
D

Dhananjay

Terry Reedy wrote:
Are there any implications of using psyco?

It compiles statements to machine code for each set of types used in the
statement or code block over the history of the run.  So code used
polymorphically with several combinations of types can end up with
several compiled versions (same as with C++ templates).  (But a few
extra megabytes in the running image is less of an issue than it was
even 5 or so years ago.)  And time spent compiling for a combination
used just once gains little.  So it works best with numeric code used
just for ints or floats.

Terry J. Reedy

Sounds to me very much like polymorphic inline caching / site caching,
which is something I have seen being worked on and introduced in recent
versions of Groovy, JRuby and Ruby 1.9 (and I read it's being looked at
in the Microsoft CLR as well, but I could be wrong there). I am no
expert in this, so please correct me if I deserve to be.

But if site caching is indeed being adopted by so many dynamic language
runtime environments, I kind of wonder what makes Python hold back from
bringing it into its core. Is it a question of time and effort,
or is there something that doesn't make it appropriate to Python?

Cheers,
Dhananjay
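
For readers unfamiliar with the term, a polymorphic inline cache remembers,
per call site, which lookup result applies to each receiver type seen so far,
so repeated calls with the same types can skip the generic dynamic lookup.
A minimal, purely illustrative Python sketch (the CallSite class and its names
are invented for this example and are not part of any runtime):

class CallSite(object):
    def __init__(self, name):
        self.name = name
        self.cache = {}                    # receiver type -> resolved attribute

    def invoke(self, receiver, *args):
        klass = type(receiver)
        method = self.cache.get(klass)
        if method is None:                 # cache miss: do the slow lookup once
            method = getattr(klass, self.name)
            self.cache[klass] = method
        return method(receiver, *args)     # cache hit: no dynamic lookup needed

site = CallSite("upper")
print(site.invoke("hello"))    # first call: lookup miss, result cached for str
print(site.invoke("world"))    # second call: cache hit

A real runtime does this at the bytecode or JIT level rather than with a
dictionary, but the idea is the same: the cost of dynamic dispatch is paid
once per type seen at a call site, not once per call.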
 
S

sturlamolden

I know one benchmark doesn't mean much but it's still disappointing to see
Python as one of the slowest languages in the test:

http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-p...


And how does this reflect the performance of real world Python
programs?

Google uses Python to run the YouTube web site. NASA uses Python to
process image data from the Hubble space telescope. Would they do that
if Python was unbearably sluggish? Do you get faster downloads from a
bittorrent client written in Java (e.g. Azureus) than the original
BitTorrent client (a Python program)?

Using a high level language efficiently is an art. The key is using
Python's built-in data types and extension libraries (e.g. PIL and
NumPy). That is the opposite of what authors of these 'benchmarks'
tend to do.

It seems the majority of these 'benchmarks' are written by people who
think like C++ programmers.
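
As a concrete illustration of that point, compare an element-by-element Python
loop with a single vectorized NumPy expression; this sketch is illustrative
only and not taken from the benchmark in question:

import numpy as np

data = np.arange(1000000, dtype=np.float64)

# "Thinking like a C++ programmer": iterate element by element in Python.
scaled_slow = [x * 2.5 for x in data]

# Idiomatic Python with NumPy: one vectorized expression, the loop runs in C.
scaled_fast = data * 2.5

On a typical machine the vectorized version is dramatically faster, simply
because the per-element work happens in compiled code rather than in the
interpreter loop.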
 
A

alex23

Is it a question of time and effort,
or is there something that doesn't make it appropriate to Python?

I don't think I've ever seen anyone who has raised concerns about the
speed of python actually offer to contribute to resolving it, so I'm
guessing it's the former.
 
C

cokofreedom

I don't think I've ever seen anyone who has raised concerns about the
speed of python actually offer to contribute to resolving it, so I'm
guessing it's the former.

Contribute to resolving it? Part of me just wants to say that to "speed
up" Python would be such a huge undertaking that the outcome would
alter the language beyond what people like. Another part thinks: why
speed it up? It is pretty fast at present, and I've rarely seen
real-world applications that need that 80/20 rule applied heavily.

Benchmarks for showing what languages are good at are fine, but in
general most conform to a standard range of speed. I cannot find the
article, but there was a good piece about how it takes most programmers
roughly the same time to program in any language. Reading through the
code is another matter; I think Python is faster than most in that
respect.

I'd look to improve the worst-case scenarios of Python before trying
to speed up everything. Hell, Timsort is pretty damn fast.
 
K

Kris Kennaway

Angel said:
Steven said:
Wait... I've just remembered, and a quick test confirms... Python only
prints bare objects if you are running in an interactive shell. Otherwise
output of bare objects is suppressed unless you explicitly call print.

Okay, I guess he is forgiven. False alarm, my bad.
Well... there must be something, because this is what I got in a normal script
execution:

[angel@jaulat test]$ python iter.py
Time per iteration = 357.467989922 microseconds
[angel@jaulat test]$ vim iter.py
[angel@jaulat test]$ python iter2.py
Time per iteration = 320.306909084 microseconds
[angel@jaulat test]$ vim iter2.py
[angel@jaulat test]$ python iter2.py
Time per iteration = 312.917997837 microseconds

What is the standard deviation on those numbers? What is the confidence
level that they are distinct? In a thread complaining about poor
benchmarking it's disappointing to see crappy test methodology being
used to try and demonstrate flaws in the test.

Kris
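
For what it's worth, the methodology being asked for is straightforward:
repeat the measurement and report a mean and standard deviation rather than a
single number. A minimal sketch, where work() is a hypothetical stand-in for
the code under test:

import math
import timeit

def work():
    # stand-in for the code being benchmarked
    return sum(i * i for i in range(10000))

# ten independent timings, each averaging over 100 calls
runs = [timeit.timeit(work, number=100) for _ in range(10)]
mean = sum(runs) / len(runs)
stdev = math.sqrt(sum((r - mean) ** 2 for r in runs) / (len(runs) - 1))
print("mean=%.4fs  stdev=%.4fs  n=%d" % (mean, stdev, len(runs)))

Only when the difference between two variants is large compared to their
standard deviations is it meaningful to call one faster than the other.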
 
K

Kris Kennaway

jlist said:
I think what makes more sense is to compare the code one most
typically writes. In my case, I always use range() and never use psyco.
But I guess for most of my work with Python performance hasn't been
a issue. I haven't got to write any large systems with Python yet, where
performance starts to matter.

Hopefully when you do you will improve your programming practices to not
make poor choices - there are few excuses for not using xrange ;)

Kris
 
M

M8R-n7vorv

Hopefully when you do you will improve your programming practices to not
make poor choices - there are few excuses for not using xrange ;)

Kris

And can you shed some light on how that relates to one of the zens
of Python?

There should be one-- and preferably only one --obvious way to do it.

Dhananjay
 
C

cokofreedom

And can you shed some light on how that relates to one of the zens
of Python?

There should be one-- and preferably only one --obvious way to do it.

Dhananjay

And that is xrange, but if you need a list, range is better :p
 
P

Peter Otten

And can you shed some light on how that relates to one of the zens
of Python?

There should be one-- and preferably only one --obvious way to do it.

For the record, the impact of range() versus xrange() is negligible -- on my
machine the xrange() variant even runs a tad slower. So it's not clear
whether Kris actually knows what he's doing.

For the cases where xrange() is an improvement over range() "Practicality
beats purity" applies. But you should really care more about the spirit
than the letter of the "zen".

Peter
 
M

M8R-n7vorv

And that is xrange, but if you need a list, range is better :p

Interesting to read from PEP 3000: "Python 2.6 will support forward
compatibility in the following two ways:

* It will support a "Py3k warnings mode" which will warn
dynamically (i.e. at runtime) about features that will stop working in
Python 3.0, e.g. assuming that range() returns a list."
 
K

Kris Kennaway

Peter said:
For the record, the impact of range() versus xrange() is negligible -- on my
machine the xrange() variant even runs a tad slower. So it's not clear
whether Kris actually knows what he's doing.

You are only thinking in terms of execution speed. Now think about
memory use. Using iterators instead of constructing lists is something
that needs to permeate your thinking about python or you will forever be
writing code that wastes memory, sometimes to a large extent.

Kris
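
To make the memory argument concrete, here is a small Python 2 sketch (xrange
exists only in Python 2; Python 3's range already behaves lazily):

import sys

n = 10 ** 6
eager = range(n)     # Python 2: builds a list of one million ints immediately
lazy = xrange(n)     # constant-size object that produces values on demand

print(sys.getsizeof(eager))   # several megabytes for the list object alone
print(sys.getsizeof(lazy))    # a few dozen bytes

total = 0
for i in lazy:                # iterates without ever materializing the list
    total += i

Whether that matters for the benchmark at hand is a separate question, which
is exactly the disagreement in the following posts.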
 
P

Peter Otten

Kris said:
You are only thinking in terms of execution speed.

Yes, because my remark was made in the context of the particular benchmark
supposed to be the topic of this thread.
Now think about memory use.

Now you are moving the goalposts. But still, try increasing the chain
length. I guess you'll find that the impact of range() -- in
Chain.__init__() at least -- on the memory footprint is also negligible,
because the Person objects consume much more memory than the temporary list.
Using iterators instead of constructing lists is something
that needs to permeate your thinking about python or you will forever be
writing code that wastes memory, sometimes to a large extent.

I like and use an iterator/generator/itertools-based idiom myself, but
for "small" sequences lists are quite competitive, and the notion of what a
small list might be is constantly growing.

In general I think that if you want to promote a particular coding style you
should pick an example where you can demonstrate actual benefits.

Peter
 
B

bearophileHUGS

Peter Otten:
In general I think that if you want to promote a particular coding style you
should pick an example where you can demonstrate actual benefits.

The good thing is that Python 3 has only xrange (named range), so
this discussion will be mostly over ;-)

Bye,
bearophile
 
K

Kris Kennaway

Peter said:
Yes, because my remark was made in the context of the particular benchmark
supposed to be the topic of this thread.

No, you may notice that the above text has moved off onto another
discussion.

Kris
 
