Comparison of Ruby, Python, PHP, Groovy, etc.


Marc Heiler

Two things I notice.

It seems Ruby 1.9 indeed managed to keep up with the Python versions
better than the older Ruby versions did. And it seems to be (almost) as
fast as Perl for that test.

The other thing, which is very strange, is that the IronRuby
implementation is significantly slower than IronPython, whereas the
other versions aren't by as much. What is wrong here?

IronRuby 0.9.0     6.038   39x
IronPython 2.0.2   0.978    6x

vs

Ruby 1.9.1 p129    2.688   18x
Python 3.1.1       1.566   10x
 

Pascal J. Bourguignon

Urabe Shyouhei said:
Are those executables compiled with identical compilers + compile flags?

The question is understandable, but these implementations might very
well be written in different languages, so it doesn't really matter.
We can assume they are as uniform as possible if they come from a
common distribution.


I added a comment to the web site, but I'm not sure it was taken into
account (I didn't get the same feedback as for a second, shorter
comment). So here it is again:


For completeness, could you please try Common Lisp too?

You could use sbcl 1.0.29 (MS-Windows port in progress) at:
http://prdownloads.sourceforge.net/sbcl/sbcl-1.0.29-x86-windows-binary.msi


-------(bench1.lisp)----------------------------------------------------
(declaim (optimize (speed 3) (space 2) (debug 0) (safety 0)))
(declaim (ftype (function (single-float single-float) fixnum) iterate))

(defparameter *bailout* 16.0)
(defparameter *max-iterations* 1000)


(defun bench1 ()
  (format t "Rendering...~%") (force-output)
  (loop :for y fixnum :from -39 :to 39 :do
     (terpri)
     (loop :for x fixnum :from -39 :to 39 :do
        (princ (if (zerop (iterate (the single-float (/ x 40.0))
                                   (the single-float (/ y 40.0))))
                   "*"
                   " "))))
  (finish-output))

(defun iterate (x y)
  (declare (single-float x y))
  (loop
     :with cr single-float = (- y 0.5)
     :with ci single-float = x
     :with zi single-float = 0.0
     :with zr single-float = 0.0
     :for i fixnum :from 0 :below *max-iterations*
     :do (let ((temp (* zr zi))
               (zr2 (* zr zr))
               (zi2 (* zi zi)))
           (declare (single-float temp zr2 zi2))
           (setf zr (+ (- zr2 zi2) cr)
                 zi (+ temp temp ci))
           (when (< (the single-float *bailout*) (the single-float (+ zi2 zr2)))
             (return-from iterate i)))
     :finally (return-from iterate 0)))


(time (bench1))
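For readers who want the same escape-time kernel without installing SBCL, here is a rough Ruby equivalent (my sketch; Marco's actual script may differ in details):

```ruby
BAILOUT = 16.0
MAX_ITERATIONS = 1000

# Escape-time test for one point of the Mandelbrot set: returns the
# iteration count at which the orbit escapes, or 0 if it never does.
def iterate(x, y)
  cr = y - 0.5
  ci = x
  zi = 0.0
  zr = 0.0
  i = 0
  while i < MAX_ITERATIONS
    i += 1
    temp = zr * zi
    zr2 = zr * zr
    zi2 = zi * zi
    zr = zr2 - zi2 + cr
    zi = temp + temp + ci
    return i if zi2 + zr2 > BAILOUT
  end
  0
end
```

The loop body is deliberately identical, operation for operation, to the Lisp and C versions above, so all implementations do the same floating-point work.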
 

Urabe Shyouhei

Pascal said:
The question is understandable, but these implementations might very
well be written in different languages, so it doesn't really matter.
We can assume they are as uniform as possible if they come from a
common distribution.

It does matter very much. At the very least, identical compilers should be
used for the underlying language of each implementation. The report says the
test was run on Windows XP, so I suspect there is no such thing as "a common
distribution" there.
 

Marco Mastrodonato

Urabe said:
Are those executables compiled with identical compilers + compile flags?

Ruby p111 is mswin32 (one-click installer); the others are mingw32,
downloaded from http://rubyinstaller.org/downloads/
The Iron and Java versions, Python, Groovy, PHP, etc. were downloaded from
their main sites. The Perl exe is a Strawberry Perl installation. Java was
compiled with NetBeans.
 

Marco Mastrodonato

Marc said:
Two things I notice.

It seems Ruby 1.9 indeed managed to keep up with the Python versions
better than the older Ruby versions did. And it seems to be (almost) as
fast as Perl for that test.

The other thing, which is very strange, is that the IronRuby
implementation is significantly slower than IronPython, whereas the
other versions aren't by as much. What is wrong here?

1. Python is still faster, but version 1.9.1 does very well at catching up,
and beats Perl. Its script was heavily optimized to get that result: 2.7s
against 4s for the unoptimized version.

2. I don't think there is any correlation between these versions: IronRuby,
IronPython, Ruby, and Python are different projects with different
development.
 

Urabe Shyouhei

Marco said:
Ruby p111 is mswin32 (one-click installer); the others are mingw32,
downloaded from http://rubyinstaller.org/downloads/
The Iron and Java versions, Python, Groovy, PHP, etc. were downloaded from
their main sites. The Perl exe is a Strawberry Perl installation. Java was
compiled with NetBeans.

So the next thing you should do is recompile them yourself, to make the
compilation environment uniform among them. Fairness is the most essential
part when you want to do an emotional yo-yo on a benchmark like that.

And luckily, all the implementations nominated are open source.
 

Marco Mastrodonato

Pascal said:
I added a comment to the web site, but I'm not sure it was taken into
account (I didn't get the same feedback as for a second, shorter
comment). So here it is again:


For completeness, could you please try Common Lisp too?

I only got the second comment. Anyway, I'll add Lisp ASAP; thanks for
your work.
 

lith

Lua 5.1.4

Does this make use of the Lua JIT [1]?

BTW, I'm not so sure that the type declarations in Groovy make the code
run faster. IIRC, with older versions they simply introduced type
checks that had the opposite effect.

[1] http://luajit.org
 

Marco Mastrodonato

Urabe said:
So the next thing you should do is recompile them yourself, to make the
compilation environment uniform among them. (cut ...)

Yes, I could do that, but it would be a different comparison. I don't
think many people compile from source themselves on Windows.
 

brabuhr

Marco Mastrodonato said:
Yes, I could do that, but it would be a different comparison. I don't
think many people compile from source themselves on Windows.

Another thing to consider is to not write the program output to the
console when running the benchmarks (e.g. ./program > /dev/null or
program.exe > NUL). Presumably the intention is to compare the speed
of the implementations calculating the result, not to test how fast
the desktop environment can display it. While it may not make a
significant difference in all cases, it is best to remove that
variable from the test.
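In Ruby, the same idea can be sketched by building the frame in memory, timing only the computation, and reporting the time on standard error so standard output can still be redirected (the workload here is a stand-in pattern, not the real Mandelbrot kernel):

```ruby
# Sketch: time only the computation, report on STDERR, and emit the
# frame with one write so "program > /dev/null" removes console cost.
t0 = Time.now
out = +""                                # build the frame in memory
(-39..39).each do |y|
  out << "\n"
  (-39..39).each { |x| out << ((x + y).even? ? "*" : " ") }  # stand-in workload
end
$stdout.print out                        # redirectable to /dev/null or NUL
$stderr.puts "Elapsed %.3f" % (Time.now - t0)
```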

For example, in one case I saw a benchmark that claimed something to
the effect of "look, BRANDX is not slow: this program in BRANDX is only
slightly slower than C!". But on closer examination, it took the C
program less than 1 second to calculate the result and the BRANDX
program a few seconds; in both cases, it took cmd.exe many seconds to
display the result.

Using your Java program* (10 runs):

  Min.  1st Qu.  Median   Mean  3rd Qu.   Max.    Std-dev
 158.0    178.8   196.5  195.6    205.5  249.0   27.37882

and sending output to /dev/null:

  Min.  1st Qu.  Median   Mean  3rd Qu.   Max.    Std-dev
 106.0    108.2   109.0  109.0    110.0  112.0    1.699673

[*] Modified to send the time output to standard error
 

Marco Mastrodonato

brabuhr said:
Another thing to consider is to not write the program output to the
console when running the benchmarks

brabuhr, thanks for your advice. I agree with you; I didn't mean to test
the stdout write speed. Honestly, I tried writing into a variable and
sending it to output only at the end... I saw there was no difference
(for the Ruby script), so I went back to the original, which is nicer to
use because you get an early feel for the speed. In your test, instead,
there is a noticeable difference.
 

Urabe Shyouhei

Marco said:
Yes, I could do that, but it would be a different comparison. I don't
think many people compile from source themselves on Windows.

If you really need speed, recompiling is the easiest way to achieve it. Is it
really worth comparing those executables? Their "slowness" might stem from a
bad compilation. I don't know about the other languages, but the 1.8.6-p111
versus 1.8.6-p368 case is (I believe) due to a difference in their compilers.
What is the point of your article, then? Are you really comparing languages,
and not the compilers behind them?
 

Marco Mastrodonato

Urabe said:
If you really need speed, recompiling is the easiest way to achieve it.

The aim is a comparison of languages using the downloadable packages,
without the need to compile every interpreter. I could agree with you
about a "real comparison", but this is the practice of 99.5% of Windows
users.

@Pascal
Sorry, I'm having some trouble with Lisp. Do I have to use the txt format,
or do I have to compile it first? Take a look:

C:\Lavoro\Progetti\Test\Bench\multilanguage>sbcl --no-userinit --eval
'(load (compile-file "bench1.lisp"))' --eval '(quit)'
This is SBCL 1.0.29, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.

SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.

This is experimental prerelease support for the Windows platform: use
at your own risk. "Your Kitten of Death awaits!"

debugger invoked on a END-OF-FILE:
end of file on #<SB-IMPL::STRING-INPUT-STREAM {23B59BD1}>

Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL.

restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE] Ignore runtime option --eval "'(load".
1: [ABORT ] Skip rest of --eval and --load options.
2: Skip to toplevel READ/EVAL/PRINT loop.
3: [QUIT ] Quit SBCL (calling #'QUIT, killing the process).

(SB-IMPL::STRING-INCH #<SB-IMPL::STRING-INPUT-STREAM {23B59BD1}> T NIL)
0]
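(A note on the error above: this looks like a shell-quoting problem rather than a Lisp one. cmd.exe does not treat single quotes as string delimiters, so SBCL received the literal token '(load and hit end-of-file while reading the rest. Using double quotes, and escaping the inner ones, should work; an untested sketch for cmd.exe:)

```
sbcl --no-userinit --eval "(load (compile-file \"bench1.lisp\"))" --eval "(quit)"
```

Alternatively, start sbcl with no arguments and type (load (compile-file "bench1.lisp")) at the REPL.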
 

Piyush Ranjan

I just ran a few of these benchmarks on my machine. I got different results.
Linux 2.6.28-11-generic GNU/Linux
Intel Core 2 Duo 2GHz, 2MB L2 cache, 3GB DDR RAM

Language              Time for 100 iterations   Times slower than java -server
java1.6 -server       0.18                       1
Ruby1.8               7.78                      44.07
Ruby1.9.2             4.2                       23.78
Jruby                 2.5                       14.16
Jruby1.3.1 --server   2.31                      13.1
java1.6 -client       0.18                       1.01
python 2.6.2          3.04                      17.21

JRuby is the fastest Ruby here, at 13 times slower than Java. I am sure
there are other command-line options which may allow JRuby to perform
faster that I am not aware of. It also shows Ruby 1.9.2 is slower than
Python 2.6.2.
 

Piyush Ranjan

Okay, that table got completely screwed up. I will try once more, or you
can get these results here: http://pastie.org/594583

Language              Time for 100 iterations   Times slower than java -server
java1.6 -server       0.18                       1
Ruby1.8.7             7.78                      44.07
Ruby1.9.2             4.2                       23.78
Jruby                 2.5                       14.16
Jruby1.3.1 --server   2.31                      13.1
java1.6 -client       0.18                       1.01
python 2.6.2          3.04                      17.21

JRuby is the fastest Ruby here, at 13 times slower than Java. I am sure
there are other command-line options which may allow JRuby to perform
faster that I am not aware of. It also shows Ruby 1.9.2 is slower than
Python 2.6.2.



 

Nathan Keel

Marco said:
Comparison of script languages for fractal geometry; these are the
languages I tested:

Java
Lua 5.1.4
Php 5.3.0
Python 2.6.2
Python 3.1.1
Jython 2.5.0
Groovy 1.6.3
Jruby 1.3.1
Ruby 1.9.1 p129
Ruby 1.8.6 p368
Ruby 1.8.6 p111
IronRuby 0.9.0
IronPython 2.0.2
Perl 5.10.0

Let me know your comments
http://mastrodonato.info/index.php/...t-languages-for-the-fractal-geometry/?lang=en

A decent and honest try, but there are too many variables involved.
Even at a quick look, the Perl code could use a little fine tuning (it
looked like v4 code in a lot of ways (from 15 years ago): improperly
using local, etc.), and there are wasteful assignments (not a big
deal). I'd recommend posting the code in the respective newsgroups and
asking for advice on how to speed it up or write it more efficiently,
and seeing what people come up with. Implementing essentially the same
code the same way in different languages may show a slower elapsed
time, but if you recoded it in a way that takes advantage of each
language, you might see some (slight, but important) differences in
your results.

Don't get me wrong, I appreciate what you're doing here and it's a good
attempt; I just think there are too many variables. I haven't run
Windows for a long time, but I wouldn't be surprised if a *nix variant
gave different results (I know it does for me on Linux using a very
comparable system). For that matter, you might offer something in C and
C++ to compare to compiled Java, unless you believe those aren't viable
comparison languages for some reason. (If so, know that I and many
others develop online and offline with them, because they are faster,
and sometimes that matters for heavily trafficked sites where every
tiny, otherwise trivial thing makes a difference.)
 

brabuhr

Nathan Keel said:
A decent and honest try, but there are too many variables involved.
... Don't get me wrong, I appreciate what you're doing here and it's
a good attempt, I just think there are too many variables...
you might offer something in C and C++ to compare to compiled Java,
unless you believe that those aren't viable comparison languages for
some reason?

OpenJDK Client VM (build 14.0-b08, mixed mode, sharing)
java Bench1 > /dev/null
Java Elapsed 0.08
Java Elapsed 0.079
Java Elapsed 0.079

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]
ruby bench1.rb > /dev/null
Ruby Elapsed 3.515
Ruby Elapsed 3.352
Ruby Elapsed 3.523

jruby 1.3.0 (ruby 1.8.6p287) (2009-06-03 5dc2e22) (OpenJDK Client VM
1.6.0_0) [i386-java]
jruby bench1.rb > /dev/null
Ruby Elapsed 4.185
Ruby Elapsed 3.760
Ruby Elapsed 3.626

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]
ruby -rubygems bench2.rb > /dev/null
Ruby Elapsed 0.059
Ruby Elapsed 0.058
Ruby Elapsed 0.060

jruby 1.3.0 (ruby 1.8.6p287) (2009-06-03 5dc2e22) (OpenJDK Client VM
1.6.0_0) [i386-java]
jruby -rubygems bench2.rb > /dev/null
Ruby Elapsed 0.409
Ruby Elapsed 0.410
Ruby Elapsed 0.412


require 'ffi-inliner'

BAILOUT = 16
MAX_ITERATIONS = 1000

class Bench2
  extend Inliner

  def initialize
    puts "Rendering..."
    for y in -39..39
      for x in -39..39
        print iterate(x/40.0, y/40.0) == 0 ? "*" : " "
      end
      print "\n"
    end
  end

  inline <<-EO
    int n;

    int iterate(double x, double y)
    {
      int i = 0;
      double zi = 0.0;
      double zr = 0.0;
      double zi2, zr2, temp;
      double ci = x;
      double cr = y-0.5;
      while(i < #{MAX_ITERATIONS}) {
        i++;
        temp = zr * zi;
        zr2 = zr * zr;
        zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        if(zi2 + zr2 > #{BAILOUT}) return i;
      }
      return 0;
    }
  EO
end

time = Time.now
Bench2.new
STDERR.puts "Ruby Elapsed %.3f" % (Time.now - time)

In this case, the original Ruby version of the method looked so much
like C that I don't think you lose much in readability by inlining
(except for loss of vim's syntax highlighting).
 

lith

Marco said:
Comparison of script languages for fractal geometry

Related numbers from the Computer Language Shootout:
http://shootout.alioth.debian.org/u32q/benchmark.php?test=mandelbrot

brabuhr said:
OpenJDK Client VM (build 14.0-b08, mixed mode, sharing)
java Bench1 > /dev/null
Java Elapsed 0.08
Java Elapsed 0.079
Java Elapsed 0.079

ruby 1.9.2dev (2009-08-25 trunk 24642) [i686-linux]
ruby -rubygems bench2.rb > /dev/null
Ruby Elapsed 0.059
Ruby Elapsed 0.058
Ruby Elapsed 0.060

I guess this wasn't the first run, when the inline code got compiled?

Anyway, since the runtime is so short, with the Java version you are to
some extent measuring the JVM startup time.

Another thing: if I'm not totally mistaken, a Ruby float is a double in
the Java world.
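One way to separate interpreter/VM startup from the computation is to time several runs inside one process, so the startup cost is paid once and JIT warm-up is amortized (a sketch; the lambda is a placeholder workload, not the benchmark kernel):

```ruby
require 'benchmark'

# Placeholder workload; substitute the real bench1 kernel here.
work = -> { (1..200_000).reduce(:+) }

runs = 5
# Benchmark.realtime measures wall-clock seconds for the whole block;
# dividing by runs gives a per-run time with startup excluded.
total = Benchmark.realtime { runs.times { work.call } }
$stderr.puts "per-run: %.4fs over #{runs} runs" % (total / runs)
```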
 
