Metaruby, BFTS, Cardinal and Rubicon - State of play?


Ryan Davis

> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
> me interested and reminded me of metaruby. I did a little reading and
> am wondering about the general state of play with regard to both
> metaruby and bfts. Is rubicon (rubytests on rubyforge) basically dead,
> awaiting the release of bfts?

Metaruby and BFTS are moving along, slowly (for no other reason than
the sheer number of projects we have on our plate, not lack of
interest). We'd happily take people interested in either one.

Rubicon is dead for all intents and purposes.
 

Chris Roos

>> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
>> me interested and reminded me of metaruby. I did a little reading and
>> am wondering about the general state of play with regard to both
>> metaruby and bfts. Is rubicon (rubytests on rubyforge) basically dead,
>> awaiting the release of bfts?
>
> Metaruby and BFTS are moving along, slowly (for no other reason than
> the sheer number of projects we have on our plate, not lack of
> interest). We'd happily take people interested in either one.

I'd like to help but am unsure as to my time/skill applicability. Is
it possible to get a look at the source out of general interest?

Chris
 

M. Edward (Ed) Borasky

Ryan said:
>> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
>> me interested and reminded me of metaruby. I did a little reading and
>> am wondering about the general state of play with regard to both
>> metaruby and bfts. Is rubicon (rubytests on rubyforge) basically dead,
>> awaiting the release of bfts?
>
> Metaruby and BFTS are moving along, slowly (for no other reason than the
> sheer number of projects we have on our plate, not lack of interest).
> We'd happily take people interested in either one.
>
> Rubicon is dead for all intents and purposes.

Remind me again what Metaruby is. I know what BFTS is and can't wait to
get my mitts on it.
 

pat eyler

> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
> me interested and reminded me of metaruby.

I'm glad somebody read it ;)

I'm even happier that it's drawing some attention to the various
Ruby implementations and the growing toolkit around them.

> Remind me again what Metaruby is. I know what BFTS is and can't wait to
> get my mitts on it.


Metaruby is the reimplementation of Ruby in Ruby, with a translation
mechanism to convert the core to C.
 

Austin Ziegler

On Sep 13, 2006, at 1:48 AM, Chris Roos wrote:
>> I read Pat Eyler's interview with Kevin Tew[1] this morning. It got
>> me interested and reminded me of metaruby.
>
> I'm glad somebody read it ;)
>
> I'm even happier that it's drawing some attention to the various
> Ruby implementations and the growing toolkit around them.

I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
shootout. Microbenchmarks don't show anything useful even when they're
run correctly -- and the shootout has never been run correctly. It
isn't even administered correctly. (I was similarly annoyed that Joel
Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

-austin
 

M. Edward (Ed) Borasky

Robert said:
> And you benchmark algorithms written in the same language, BUT the
> shootout benchmarks *different languages*. I have looked at the
> algorithms they use just once; that was enough.
>
> The point is: use it as a tool if you find it useful, but here it is
> used for advocacy.
>
> Cheers
> Robert

It's a perfectly natural desire to want to compare languages. Doing that
requires micro-benchmarks that will execute in all of the languages. But
I think you're missing my point, which is that Ruby is slower than the
other dynamic languages on microbenchmarks because the implementation of
Ruby hasn't been performance-tuned to the extent that Perl, Python and
PHP have been. As far as I can tell, nothing fundamental in the syntax
or semantics of the Ruby language prohibits that tuning.

So rather than whine about advocacy or say "I looked at the algorithms
they use just once, that was enough", why not look at the algorithms
they use and tune the Ruby interpreter so it executes those algorithms
as efficiently as Perl, PHP and Python? Benchmarketing is a fact of life
in the "computer industry". Fortunes are made and lost because one gizmo
is faster than another gizmo on some "meaningless benchmark".

In short:

1. I don't see any fundamental reason why Ruby can't be as fast as Perl,
Python, or PHP.
2. It isn't there yet.
 

Austin Ziegler

> Well ... as a working performance engineer, I'm going to defend
> microbenchmarks as virtually (no pun intended) the *only* way to
> improve performance overall for the Ruby interpreter, coupled of
> course with profiling said interpreter and careful design of the data
> structures the interpreter must maintain during execution.

Benchmarking for internal purposes is fine. What the shootout does is
something different entirely. Have you ever really *looked* at the code
they run for the various different versions? Some of it is so blatantly
tweaked to run faster on the benchmark that it's not funny. (There's a
Perl example I looked at a couple of years ago that *deliberately* had
obfuscated code because the obfuscated code took advantage of internals
that you're not supposed to use and ran faster than the other versions.)
There's no excuse for that sort of thing showing up on a benchmarking
site.

It's no different than NVidia or ATI detecting a benchmark program and
optimizing certain things for that program only.

It gets worse, Ed: the administrators behind the shootout don't care.
They never have. They continually promote their website, but when
challenged on the methodology used or technical issues, they give the
quote that television psychics use: for entertainment purposes only.

They're dishonest and run a benchmark comparison site that is so flawed
that you can't even remotely trust it.

The saying goes "lies, damned lies, and statistics". Well, any published
benchmark is even worse than statistics in that line.

I'm *not* against the concept of benchmarking. I'm definitely against
the concept of comparative benchmarking in the way that the shootout
does it. I will often benchmark the code that I write to make sure that
I'm writing efficient code.
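
That kind of internal benchmarking can be sketched with Ruby's standard
Benchmark library. The workload and iteration count below are arbitrary
illustrations, not anything from this thread:

```ruby
require 'benchmark'

# Compare two ways of building a string -- a check on our own code,
# not a cross-language comparison. N is an arbitrary iteration count.
N = 100_000

Benchmark.bm(10) do |bm|
  bm.report('append:') do
    s = ''
    N.times { |i| s << i.to_s }
  end
  bm.report('join:') do
    parts = Array.new(N) { |i| i.to_s }
    parts.join
  end
end
```

Benchmark.bm prints user, system, and wall-clock times for each labeled
block -- enough to compare alternatives within one Ruby on one machine,
and nothing more.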

But I won't pretend that the results are useful for comparisons.

They never are.

-austin
 

Ryan Davis

> I'd be happier if Mr Tew didn't try to lend legitimacy to the alioth
> shootout. Microbenchmarks don't show anything useful even when they're
> run correctly -- and the shootout has never been run correctly. It
> isn't even administered correctly. (I was similarly annoyed that Joel
> Spolsky used it in his latest slam on Ruby. Stupid, Joel, stupid.)

Thank you, once again, for derailing a thread with your personal
vendetta.
 

Austin Ziegler

> Thank you, once again, for derailing a thread with your personal
> vendetta.

Look. We *know* that the shootout is crap. We've known this for three
years now. But we *still* have people come in and use it for a variety
of reasons, most of which are completely bogus.

If we, as Ruby users, want to have a set of benchmarks, we should
develop our own and hold them to a rigorous standard. That is, the
exact *opposite* of what Mr. Gouy's shootout does.

I'd gladly support a Ruby benchmark suite that people could use in
talking about things. But in a variety of different threads we've seen
that not only don't the shootout people know anything about
benchmarks, most other people don't either (see the "For
performance..." thread).

Zed Shaw has had to do a lot of teaching about statistics. I'm sure
that Ed Borasky could teach us a lot about benchmarking with *good*
benchmarks to be tested under controlled or controllable situations so
we could improve the performance of Ruby in various situations.

-austin
 

Chris Roos

Just scanned the new messages. There appears to be no answer to my
initial question about getting a look at the source (unless I missed
it in the noise?)

Cheers,

Chris
 

M. Edward (Ed) Borasky

Austin said:
> Look. We *know* that the shootout is crap. We've known this for three
> years now. But we *still* have people come in and use it for a variety
> of reasons, most of which are completely bogus.

Before tossing the whole shootout overboard, I'd like to get my hands on
the complete set of timings for all the benchmarks for all the
languages. I described roughly the process in another email.

> If we, as Ruby users, want to have a set of benchmarks, we should
> develop our own and hold them to a rigorous standard. That is, the
> exact *opposite* of what Mr. Gouy's shootout does.
>
> I'd gladly support a Ruby benchmark suite that people could use in
> talking about things. But in a variety of different threads we've seen
> that not only don't the shootout people know anything about
> benchmarks, most other people don't either (see the "For
> performance..." thread).

If the BFTS has room, why shouldn't benchmarks be part of the test
suite? I've said before, if you aren't running performance tests, you
aren't doing test driven development. :)
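
One way to act on that is to fold a timing assertion into an ordinary
test case. A minimal sketch -- the fib method and the one-second budget
are invented for illustration, and Minitest stands in here for the
Test::Unit of the era:

```ruby
require 'minitest/autorun'
require 'benchmark'

# Hypothetical function under test.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

class FibPerformanceTest < Minitest::Test
  TIME_BUDGET = 1.0 # seconds; an arbitrary budget for illustration

  def test_result_is_correct
    assert_equal 55, fib(10)
  end

  def test_stays_within_budget
    elapsed = Benchmark.realtime { fib(20) }
    assert_operator elapsed, :<, TIME_BUDGET
  end
end
```

A budget like this catches gross regressions; it is deliberately loose
so normal machine-to-machine variation doesn't make the suite flaky.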

There are some Ruby benchmarks -- the YARV project has a small suite
they use, and there's my MatrixBenchmark.

> Zed Shaw has had to do a lot of teaching about statistics. I'm sure
> that Ed Borasky could teach us a lot about benchmarking with *good*
> benchmarks to be tested under controlled or controllable situations so
> we could improve the performance of Ruby in various situations.

Again, see my other email ... I think there's something to be gained
from at least a rough analytical pass at the data from the shootout.
What I proposed would at least identify those benchmarks in the set that
are reasonable indicators of comparative language performance and which
are "special cases".
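
A rough sketch of that kind of pass, assuming the shootout timings were
available as a simple table -- the numbers and the 5x cutoff below are
invented placeholders, not real shootout data: normalize each benchmark
to its fastest language, then flag benchmarks with an extreme spread as
special cases.

```ruby
# Invented placeholder timings (seconds); not real shootout data.
TIMINGS = {
  'nbody'    => { 'perl' => 21.4, 'python' => 18.9, 'ruby' => 34.2 },
  'regexdna' => { 'perl' => 2.1,  'python' => 6.8,  'ruby' => 29.5 },
}

# Normalize each benchmark's timings to the fastest language for that
# benchmark, so 1.0 always means "fastest here".
def ratios(timings)
  timings.each_with_object({}) do |(bench, by_lang), out|
    fastest = by_lang.values.min
    out[bench] = by_lang.transform_values { |t| (t / fastest).round(2) }
  end
end

SPREAD_LIMIT = 5.0 # arbitrary cutoff for calling a benchmark a special case

ratios(TIMINGS).each do |bench, by_lang|
  label = by_lang.values.max > SPREAD_LIMIT ? 'special case' : 'comparable'
  puts format('%-10s %-13s %s', bench, label, by_lang.inspect)
end
```

Benchmarks where every language lands within a small factor of the
fastest say something about comparative implementations; the ones with
a huge spread usually say more about one library or one submission than
about the language.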

And despite the objections I've heard, I think it's a perfectly
reasonable idea to compare *general-purpose* dynamic scripting languages
like Perl, Python and Ruby on benchmarks. People are going to ask the
question; we might as well see what sort of confidence one can have in
the answer the shootout gives. Right now, what we have is Ruby coming
out near the bottom on the overall score.
 

Isaac Gouy

Austin said:
> Look. We *know* that the shootout is crap. We've known this for three
> years now. But we *still* have people come in and use it for a variety
> of reasons, most of which are completely bogus.
>
> If we, as Ruby users, want to have a set of benchmarks, we should
> develop our own and hold them to a rigorous standard. That is, the
> exact *opposite* of what Mr. Gouy's shootout does.
>
> I'd gladly support a Ruby benchmark suite that people could use in
> talking about things. But in a variety of different threads we've seen
> that not only don't the shootout people know anything about
> benchmarks, most other people don't either (see the "For
> performance..." thread).
>
> Zed Shaw has had to do a lot of teaching about statistics. I'm sure
> that Ed Borasky could teach us a lot about benchmarking with *good*
> benchmarks to be tested under controlled or controllable situations so
> we could improve the performance of Ruby in various situations.
>
> -austin

I just read the comment you made on Pat Eyler's blog.
You do seem to be engaged in a personal vendetta.
That's a shame.
 

Austin Ziegler

> I just read the comment you made on Pat Eyler's blog.
> You do seem to be engaged in a personal vendetta.
> That's a shame.

It wouldn't be necessary if you made the changes that I recommended two
years ago: either present it so that people take what you've done a hell
of a lot less seriously, or take it a lot more seriously and do some
rigorous analysis and reject invalid submissions.

As it stands, you refuse to do either. So, yes, I *will* bring up the
fact that the shootout is a "silly place" every time anyone refers to
it for serious matters. It's not worth looking at because you pimp it
yet refuse to take responsibility for ensuring quality results.

The shootout is utterly worthless, Ed Borasky's opinions
notwithstanding. To make it useful, you need to start weeding out bad
implementations and bad assumptions, which you have yet to do despite
being told about this problem at least two years ago.

-austin
 

Isaac Gouy

Austin said:
> It wouldn't be necessary if you made the changes that I recommended two
> years ago: either present it so that people take what you've done a hell
> of a lot less seriously, or take it a lot more seriously and do some
> rigorous analysis and reject invalid submissions.

Austin, you've stepped from criticism to personal vendetta.

You make abusive comments without any reference to what is actually
shown on the shootout website. In a similar tirade six months ago, you
complained about a benchmark that, even then, hadn't been shown in the
shootout for at least six months.

And now, once again, you're making abusive comments based on your
memories of something that no longer exists:
"(There's a Perl example I looked at a couple of years ago ..."

Where exactly is that Perl program shown in the current benchmarks?
Here's the link http://shootout.alioth.debian.org/gp4/
 
