benchmark

bearophileHUGS

I know one benchmark doesn't mean much but it's still disappointing to see
Python as one of the slowest languages in the test:

http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-p...

That Python code is bad: it uses range() instead of xrange(), the big loop is at module level instead of inside a function, it tests == None, and so on. That person can try this version (with Psyco); I have changed very little, the code is essentially the same:


import time, psyco
from psyco.classes import __metaclass__

class Person:
    def __init__(self, count):
        self.count = count
        self.prev = None
        self.next = None

    def shout(self, shout, deadif):
        if shout < deadif:
            return shout + 1
        self.prev.next = self.next
        self.next.prev = self.prev
        return 1

class Chain:
    def __init__(self, size):
        self.first = None
        last = None
        for i in xrange(size):
            current = Person(i)
            if self.first is None:
                self.first = current
            if last is not None:
                last.next = current
                current.prev = last
            last = current
        self.first.prev = last
        last.next = self.first

    def kill(self, nth):
        current = self.first
        shout = 1
        while current.next != current:
            shout = current.shout(shout, nth)
            current = current.next
        self.first = current
        return current

def main():
    ITER = 100000
    start = time.time()
    for i in xrange(ITER):
        chain = Chain(40)
        chain.kill(3)
    end = time.time()
    print 'Time per iteration = %s microseconds' % ((end - start) * 1000000 / ITER)

psyco.full()
main()

(us = microseconds.) On my PC (which seems similar to his) this version needs about 38.9 us per iteration instead of 189.

On my PC the Java version takes 1.17 us, while the C++ version (with MinGW 4.2.1) takes 9.8 us. A raw D translation needs 14.34 us, while a cleaned-up version (using structs, no getters/setters) needs 4.67 us. I don't know why my C++ is so slow (applying the same changes to the C++ version doesn't change its running time much).

Bye,
bearophile
 
Steven D'Aprano

Stefan Behnel said:
Just ignore that. If the code had been designed for Python from the start, it would have performed a lot better.


I recommend folks copy and paste his code into an interactive session,
and watch the thousands of <__main__.Person object at 0xb7f18e2c> that
flash onto the screen. You want to know why it's so slow? That's part of
the reason.

Doing so on my computer gives a final result of:

"Time per iteration = 502.890818119 microseconds"

When I make a single two-character modification to the program (inserting "t=" at the beginning of the line "chain.kill(3)"), I get this instead:

"Time per iteration = 391.469910145 microseconds"

In other words, about 20% of the time he measures is the time taken to
print junk to the screen.
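For the record, here's the behaviour in question, shown in an interactive session (the class and the object address are just illustrative):

>>> class Person:
...     pass
...
>>> Person()        # bare expression: the shell echoes its repr
<__main__.Person object at 0xb7f18e2c>
>>> t = Person()    # bound to a name: nothing is echoed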
 
jlist

I think what makes more sense is to compare the code one most typically writes. In my case, I always use range() and never use psyco. But I guess for most of my work with Python, performance hasn't been an issue. I haven't had to write any large systems with Python yet, where performance starts to matter.
 
alex23

Steven said:
In other words, about 20% of the time he measures is the time taken to
print junk to the screen.

Which makes his claim that "all the console outputs have been removed
so that the benchmarking activity is not interfered with by the IO
overheads" somewhat confusing...he didn't notice the output? Wrote it
off as a weird Python side-effect?

I find his reluctance to entertain more idiomatic implementations particularly telling. It seems he's less interested in actual performance comparisons and more interested in showing that writing static-language-style code in dynamic languages is bad, which isn't really anything new to anyone anywhere, I would've thought.

All up, this reminds me of last year's benchmarking fiasco that demonstrated Storm's ORM handling being -incredibly- faster than SQLAlchemy's SQL expression handling, something that just didn't seem to be borne out by user experience. Eventually, Mike Bayer reverse engineered the benchmarks to discover that, surprise surprise, the tests -weren't equal-; in one test SQLA was issuing a commit per insert, while Storm was performing -no commits whatsoever-.

Benchmarks, IMO, are like statistics. You can tweak them to prove
pretty much any position you already take.
 
Steven D'Aprano

Which makes his claim that "all the console outputs have been removed so
that the benchmarking activity is not interfered with by the IO
overheads" somewhat confusing...he didn't notice the output? Wrote it
off as a weird Python side-effect?

Wait... I've just remembered, and a quick test confirms... Python only prints bare objects if you are running in an interactive shell. Otherwise output of bare objects is suppressed unless you explicitly call print.

Okay, I guess he is forgiven. False alarm, my bad.
 
Angel Gutierrez

Steven said:
Wait... I've just remembered, and a quick test confirms... Python only prints bare objects if you are running in an interactive shell. Otherwise output of bare objects is suppressed unless you explicitly call print.

Okay, I guess he is forgiven. False alarm, my bad.
Well... there must be something, because this is what I got in a normal script execution:

[angel@jaulat test]$ python iter.py
Time per iteration = 357.467989922 microseconds
[angel@jaulat test]$ vim iter.py
[angel@jaulat test]$ python iter2.py
Time per iteration = 320.306909084 microseconds
[angel@jaulat test]$ vim iter2.py
[angel@jaulat test]$ python iter2.py
Time per iteration = 312.917997837 microseconds

iter.py - Original script
iter2.py - xrange instead of range
iter2.py (2nd) - 't=' added
 
M8R-n7vorv

bearophile said:
That Python code is bad: it uses range() instead of xrange(), the big loop is at module level instead of inside a function, it tests == None, and so on. That person can try this version (with Psyco); I have changed very little, the code is essentially the same:

Yes, this was pointed out in the comments. I had updated the code to use xrange instead of range, and is / is not instead of == / !=, which is how the benchmark got updated to 192 microseconds. Moving the main loop into a main function resulted in no discernible difference.

Testing with psyco resulted in a time of 33 microseconds per iteration.
bearophile said:
On my PC the Java version takes 1.17 us, while the C++ version (with MinGW 4.2.1) takes 9.8 us. A raw D translation needs 14.34 us, while a cleaned-up version (using structs, no getters/setters) needs 4.67 us. I don't know why my C++ is so slow (applying the same changes to the C++ version doesn't change its running time much).

I wonder what optimisation level you are using. I, to the best of my recollection, used -O3.

Cheers,
Dhananjay
http://blog.dhananjaynene.com
 
Bruno Desthuilliers

Steven D'Aprano wrote:
I recommend folks copy and paste his code into an interactive session,
and watch the thousands of <__main__.Person object at 0xb7f18e2c> that
flash onto the screen. You want to know why it's so slow? That's part of
the reason.

This only happens when run in an interactive session.
 
M8R-n7vorv

Which makes his claim that "all the console outputs have been removed
so that the benchmarking activity is not interfered with by the IO
overheads" somewhat confusing...he didn't notice the output? Wrote it
off as a weird Python side-effect?

Gee, I really hope I am a little more capable than writing it off. And to answer your question bluntly: no, I did not notice the output, because there wasn't any. Run a python program as "python filename.py" instead of using the interactive console, and you will not get any output except exceptions or anything that your code explicitly spews out.
alex23 said:
I find his reluctance to entertain more idiomatic implementations particularly telling. It seems he's less interested in actual performance comparisons and more interested in showing that writing static-language-style code in dynamic languages is bad, which isn't really anything new to anyone anywhere, I would've thought.

My reluctance to entertain more idiomatic implementations was in the context of what made sense to me as part of the exercise. I have fully, and in great detail, explained the rationale in the post itself.

alex23 said:
Benchmarks, IMO, are like statistics. You can tweak them to prove pretty much any position you already take.

How's this position of mine for starters: http://blog.dhananjaynene.com/2008/06/whyhow-i-ended-up-selecting-python-for-my-latest-project/ ? And if you are not sure, you could browse this as well: http://blog.dhananjaynene.com/2008/07/presentation-contrasting-java-and-dynamic-languages/

Really, how silly can it be to suggest someone is taking a position and tweaking the benchmarks to prove a point, when I am actually quite enthusiastic about python, really like coding in it, and it was disappointing to me, just like to jack who started off this thread, that python did not do so well. In fact I would argue that it wasn't entirely easy to actually publish the findings, given that these were not the findings I would have wished for.

Cheers,
Dhananjay
http://blog.dhananjaynene.com
 
B

bearophileHUGS

jlist:
I think what makes more sense is to compare the code one most
typically writes. In my case, I always use range() and never use psyco.

If you don't use Python 3 and your loops can be long, then I suggest you start using xrange a lot :) (If you use Psyco you don't need xrange.)
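To make the difference concrete, a minimal sketch (Python 2 semantics; the loop body and the constant are arbitrary):

big = 10 ** 7
# range(big) would build a list of ten million ints in memory before
# the loop even starts; xrange(big) yields one value at a time, so
# memory use stays constant for the whole iteration.
total = 0
for i in xrange(big):
    total += i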


M8R-n7v...:
I wonder what optimisation level you are using. I, to the best of my recollection, used -O3.

For this program I have used the best (simple) combination of flags I have found by trial and error:
-O3 -s -fomit-frame-pointer

I'll try adding a freelist, to see (and probably show you) the results. HotSpot speeds up this code first of all because its GC is much better than the one currently used by D, because it optimizes away the getters/setters (which D is unable to do), and probably because it compiles the methods of the two classes as static calls (while D treats all of them as virtual).

Bye,
bearophile
 
Bruno Desthuilliers

Stefan Behnel wrote:
Just ignore that. If the code had been designed for Python from the start, it
would have performed a lot better.

Currently it looks like syntax-adapted Java code to me.

Yep, sure. I don't have time for this now, but I think I would have chosen an itertools-based solution here instead of that hand-made linked list.
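Bruno doesn't show code, but for a flavour of what a Python-first design could look like, here is a sketch built on collections.deque from the stdlib (deque rather than itertools, and kill_chain is a made-up name, so treat it as an illustration only, not Bruno's actual solution):

from collections import deque

def kill_chain(size, nth):
    # Rotate a deque instead of maintaining a hand-rolled
    # doubly linked list of Person objects.
    chain = deque(range(size))
    while len(chain) > 1:
        chain.rotate(-(nth - 1))   # skip the nth-1 survivors
        chain.popleft()            # the nth person is removed
    return chain[0]                # number of the last survivor

# kill_chain(40, 3) matches Chain(40).kill(3) from the benchmark.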
 
M8R-n7vorv

bearophile said:
I know one benchmark doesn't mean much but it's still disappointing to see Python as one of the slowest languages in the test:

http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-p...

I was actually disappointed myself with the results from a python perspective. One thing I did notice was that both ruby and groovy have substantially improved performance in their new versions. There is a likelihood that maybe this particular code is not best suited to pythonic idioms, but the same would've been the case, I guess, for ruby, jruby and groovy, yet they performed really well. While I am a relative newcomer to the python world, from what I have seen, ruby, jruby and groovy are all making substantial improvements in their performance (if this post had not included the newer versions of these languages, python would've been on top of all of them), but I haven't seen any evidence of the same in upcoming versions of python.

Having said that, the same code with psyco works really fast (and beats all the other dynamic languages quite handsomely). But I did not include it in the comparisons because psyco is really not a part of the core feature set of python and I was unclear about the implications thereof.

Is there any reason why psyco is not a part of the core python feature set? Is there a particular reason it is better kept as a separate extension? Are there any implications of using psyco?

Cheers,
Dhananjay
http://blog.dhananjaynene.com
 
M8R-n7vorv

Earlier I wrote:
Yes, this was pointed out in the comments. I had updated the code to use xrange instead of range, and is / is not instead of == / !=, which is how the benchmark got updated to 192 microseconds. Moving the main loop into a main function resulted in no discernible difference.

Testing with psyco resulted in a time of 33 microseconds per iteration.

I have since updated the post to reflect the python with psyco timings
as well.
 
alex23

M8R-n7vorv said:
Really, how silly can it be to suggest someone is taking a position and tweaking the benchmarks to prove a point [...]

I certainly didn't intend to suggest that you had tweaked -anything-
to prove your point.

I do, however, think there is little value in slavishly implementing
the same algorithm in different languages. To constrain a dynamic
language by what can be achieved in a static language seemed like such
an -amazingly- artificial constraint to me. That you're a fan of
Python makes such a decision even more confusing.

It's great that you saw enough value in Python to choose it for actual project work. It's a shame you didn't endeavour to understand it well enough before including it in your benchmark.

As for it being "disappointing", the real question is: has it been
disappointing for you in actual real-world code?

Honestly, performance benchmarks seem to be the dick size comparison
of programming languages.
 
Chris Mellon

M8R-n7vorv said:
Really, how silly can it be to suggest someone is taking a position and tweaking the benchmarks to prove a point [...]

alex23 said:
I certainly didn't intend to suggest that you had tweaked -anything- to prove your point.

I do, however, think there is little value in slavishly implementing the same algorithm in different languages. To constrain a dynamic language by what can be achieved in a static language seemed like such an -amazingly- artificial constraint to me. That you're a fan of Python makes such a decision even more confusing.

It's great that you saw enough value in Python to choose it for actual project work. It's a shame you didn't endeavour to understand it well enough before including it in your benchmark.

As for it being "disappointing", the real question is: has it been disappointing for you in actual real-world code?

Honestly, performance benchmarks seem to be the dick size comparison of programming languages.

I actually think that modelling this problem the way he chose to, with a Person class and by manually popping people out of a linked list instead of simply representing the alive/dead state of the soldiers, is a poor solution in general. Whenever you talk about performance, you need a context to evaluate it in, and you need an idea of what you're trying to measure and why it's important for your purposes. A solution which models the soldiers as bits in a bitfield (sketched below) is going to run much, much faster in C/C++/D than the current OO/linked-list one (not to mention in much less space), and the JIT in Java/C# and probably python with psyco can improve on that as well.
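Chris doesn't give code, but the flag-based idea translates to Python too. A minimal sketch, with a plain list of booleans standing in for the bitfield and last_survivor as a hypothetical name:

def last_survivor(size, nth):
    # Each soldier is just an alive/dead flag; no objects, no links.
    alive = [True] * size
    remaining = size
    shout = 1
    pos = 0
    while remaining > 1:
        if alive[pos]:               # the dead are skipped, not unlinked
            if shout == nth:
                alive[pos] = False   # every nth shout is fatal
                remaining -= 1
                shout = 1
            else:
                shout += 1
        pos = (pos + 1) % size       # walk the circle
    return alive.index(True)

# last_survivor(40, 3) names the same survivor as Chain(40).kill(3).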
 
Steven D'Aprano

M8R-n7vorv said:
Really, how silly can it be to suggest someone is taking a position and tweaking the benchmarks to prove a point [...]

alex23 said:
I certainly didn't intend to suggest that you had tweaked -anything- to prove your point.

I do, however, think there is little value in slavishly implementing the same algorithm in different languages. To constrain a dynamic language by what can be achieved in a static language seemed like such an -amazingly- artificial constraint to me.

I don't know about that... it can be very useful to (say) demonstrate
that Lisp-style lists are fast in Lisp, and slow in Python. Or that
try...except is fast in Python, and slow in Java.
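For instance, the kind of idiom such a comparison would exercise, as a tiny sketch (get_count is a made-up helper):

# EAFP: attempt the operation and handle the failure, rather than
# testing first. In CPython entering the try block is cheap; the
# cost is paid only when an exception is actually raised.
def get_count(mapping, key):
    try:
        return mapping[key]
    except KeyError:
        return 0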

And if your aim is to compare languages, then it's only fair to keep the
algorithm constant. Imagine how we would holler and shout if the
benchmark compared Ruby using Quicksort and Python using Bubblesort.

I guess what some of us are complaining about is that the algorithm
chosen doesn't suit Python's execution model very well, and hence Python
is slow. If the algorithm chosen had suited Python, and hence Python came
up looking really fast, we'd be ecstatic. How about that, hey? *wink*

alex23 said:
Honestly, performance benchmarks seem to be the dick size comparison of programming languages.

I can't disagree with that one bit.
 
