A better benchmarking syntax (was: Automatic Benchmark Iterations)

  • Thread starter Daniel Schierbeck

Daniel Schierbeck

Reading Phrogz' post about automatic benchmark iterations, and then
seeing Mauricio's lovely Adaptative Benchmark[1], I came to think we
might need a friendlier syntax for benchmarks altogether. Minutes
later, I discovered that someone had almost the same idea as me[2].
Anyway, I'd like to just throw it out here and hear what you people
think.

The idea is to make benchmarks syntactically similar to the current
Test::Unit test cases, e.g.

class SortBenchmark < Benchmark
  def setup
    @array = [1, 6, 2, 9, 4, 6, 2]
  end

  def teardown
    @array = nil
  end

  def report_quicksort
    @array.quicksort
  end

  def report_mergesort
    @array.mergesort
  end
end

Automatic iteration could be added, either as a class method call or by
creating a different base class (IterativeBenchmark?).
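As a sketch of how the proposed base class and the iteration idea might fit together (BenchmarkCase, run, and the iterations: keyword are assumptions, not an existing API; Array#sort stands in for the hypothetical #quicksort and #mergesort):

```ruby
# Hypothetical sketch of the proposed base class -- all names here are
# assumptions, not an existing library API.
class BenchmarkCase
  # Runs every report_* method `iterations` times and returns the
  # elapsed wall-clock time for each, keyed by method name.
  def self.run(iterations: 1_000)
    bench = new
    results = {}
    instance_methods(false).grep(/\Areport_/).sort.each do |name|
      bench.setup if bench.respond_to?(:setup)
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      iterations.times { bench.send(name) }
      results[name] = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
      bench.teardown if bench.respond_to?(:teardown)
    end
    results
  end
end

# The example from the post, with Array#sort standing in for the
# hypothetical #quicksort and #mergesort.
class SortBenchmark < BenchmarkCase
  def setup
    @array = [1, 6, 2, 9, 4, 6, 2]
  end

  def teardown
    @array = nil
  end

  def report_sort
    @array.sort
  end
end

SortBenchmark.run(iterations: 10_000).each do |name, secs|
  puts format("%-15s %.4fs", name, secs)
end
```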

So, what do y'all think? If I'm not the only one liking this, I might
whip something up when I get some spare time...


Cheers,
Daniel



1. http://eigenclass.org/hiki.rb?cmd=view&p=adaptative+benchmark&key=TDD
2. http://djberg96.livejournal.com/52734.html
 
Daniel Berger

> So, what do y'all think? If I'm not the only one liking this, I might
> whip something up when I get some spare time...

+1 - please do. :)

Dan
 
Trans

> The idea is to make benchmarks syntactically similar to the current
> Test::Unit test cases [...]

Cool! Go BDD with it:

benchmark "compare sorting methods" do

  compare "quick sort" do
    @array.quicksort
  end

  compare "merge sort" do
    @array.mergesort
  end

end

Or something like that.

T.
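For what it's worth, that block DSL can be sketched in a few lines on top of the stdlib; `benchmark` and `compare` are the hypothetical methods from the post above, not an existing API:

```ruby
require 'benchmark'

# Minimal sketch of the proposed block DSL. The 1_000 inner iterations
# are arbitrary; a real framework would pick them adaptively.
def benchmark(description)
  puts description
  yield
end

def compare(label, &block)
  elapsed = Benchmark.realtime { 1_000.times(&block) }
  puts format("  %-10s %.4fs", label, elapsed)
end

array = Array.new(500) { rand(1_000) }

benchmark "compare sorting methods" do
  compare("sort")    { array.sort }
  compare("sort_by") { array.sort_by { |e| e } }
end
```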
 
pat eyler

> Cool! Go BDD with it: [...]
> Or something like that.

I'd like to see a way to specify with or without rehearsals, and with or without
statistics (see
http://on-ruby.blogspot.com/2006/12/benchmarking-lies-and-statistics.html
for more info).

Maybe:

benchmark "compare sorting methods" do
  with do
    tsort
    rehearsal
  end

  compare "quick sort" do
    @array.quicksort
  end

  compare "merge sort" do
    @array.mergesort
  end
end
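For what it's worth, the rehearsal part already exists in Ruby's stdlib: Benchmark.bmbm runs a warm-up pass first and then reports a second, real pass. The statistics side would still be new:

```ruby
require 'benchmark'

array = Array.new(1_000) { rand(10_000) }

# bmbm prints a "Rehearsal" section first (to warm up allocation and
# caches), then the timings of the real pass.
Benchmark.bmbm do |x|
  x.report("sort")    { 500.times { array.sort } }
  x.report("sort_by") { 500.times { array.sort_by { |e| e } } }
end
```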
 
Daniel Schierbeck

> Cool! Go BDD with it: [...]
> Or something like that.

I must admit to being a great fan of RSpec (although the pace of API
changes has thrown me off for now), but I'm not sure this path is the
best. I did actually think about it at first. The strength of Test::Unit
is its simple syntax: just remember #setup, #teardown, and #test_*. I'd
like to add the same simplicity to any new benchmarking system. I'll
give it some more thought :)

Any idea for a name? "benchmark" is unfortunately already taken...


Cheers,
Daniel
 
Daniel Berger

> The strength of Test::Unit is its simple syntax: just remember #setup,
> #teardown, and #test_*. I'd like to add the same simplicity to any new
> benchmarking system.

I agree. Keep it simple.
> Any idea for a name? "benchmark" is unfortunately already taken...

Assuming you keep the Test::Unit-like syntax, I vote for bench-unit.

Regards,

Dan
 
pat eyler

> The strength of Test::Unit is its simple syntax: just remember #setup,
> #teardown, and #test_*. I'd like to add the same simplicity to any new
> benchmarking system. I'll give it some more thought :)

Whichever way you go, please do consider configurable (and
extensible) additions like rehearsals and statistics.

> Any idea for a name? "benchmark" is unfortunately already taken...

What about BenchBench (a play on "Benchmarkers' Tool Bench")?
 
Daniel Schierbeck

> I'd like to see a way to specify with or without rehearsals, and with
> or without statistics (see
> http://on-ruby.blogspot.com/2006/12/benchmarking-lies-and-statistics.html
> for more info).

I think statistics may be beyond the immediate scope of this -- I'd need
the help of someone better at it than me, that's for sure!

One thing on my priority list is a nice Rake task that makes it easy to
run the benchmarks and set different options.
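A minimal sketch of such a Rake task -- the :bench task name and the benchmarks/*_benchmark.rb layout are assumptions, not an existing convention:

```ruby
# Hypothetical Rakefile fragment -- the task name and directory layout
# are assumptions.
require 'rake'
extend Rake::DSL

desc "Run every benchmark file under benchmarks/"
task :bench do
  Dir.glob("benchmarks/**/*_benchmark.rb").sort.each do |file|
    ruby file  # Rake helper: shells out to the Ruby interpreter
  end
end
```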


Daniel
 
Daniel Schierbeck

> Whichever way you go, please do consider configurable (and
> extensible) additions like rehearsals and statistics.

> What about BenchBench (a play on "Benchmarkers' Tool Bench")?

I'm not sure... the other guy[1] suggested BenchUnit, which I think
reads well, but I'm not sure how descriptive it is.


Cheers,
Daniel


1. http://djberg96.livejournal.com/52734.html
 
David Chelimsky

> I must admit to being a great fan of RSpec, although the pace of API
> changes has thrown me off for now,

Thanks for being a fan. FWIW, the recent API changes are the last
significant ones. What's new in 0.8 will be the basis of 1.0.

Cheers,
David
 
Tim Pease

> Any idea for a name? "benchmark" is unfortunately already taken...

Algorithm for generating product names

1) locate random object on desk
2) pick random character from alphabet
3) replace first character of object (from step 1) with the chosen
character (from step 2)

Example

1) apricot
2) 'h'
3) hpricot

From my own desk

1) light bulb
2) 'f'
3) fight_bulb

See, piece of cake!

Blessings,
TwP
 
Daniel Schierbeck

> Thanks for being a fan. FWIW, the recent API changes are the last
> significant ones. What's new in 0.8 will be the basis of 1.0.
>
> Cheers,
> David

Lovely :) It's a great tool!


Daniel
 
Daniel Schierbeck

> Algorithm for generating product names
>
> 1) locate random object on desk
> 2) pick random character from alphabet
> 3) replace first character of object (from step 1) with the chosen
> character (from step 2)
>
> See, piece of cake!

1) 'laptop'
2) 'b'
3) 'baptop' ?!

That's a weird cake...


Daniel ;)
 

Daniel Schierbeck

I've given this some thought, and it seems like a much bigger project
than I immediately imagined -- I was surprised to find a much greater
need for features than anticipated. I am currently pondering whether to
do this as a Summer of Code project (that is, if I'm lucky enough to be
selected).

Unfortunately, I do not have great experience with benchmarking --
neither the underlying math (well, I have some), nor the actual needs of
the people who would use an extended framework.

The only thing I have ever used benchmarking for was to compare the
speed of two methods with the same functionality (e.g. sorting).

So what I initially want to ask you guys is this: what else do you use
benchmarking for? Do you benchmark your entire application, and compare
results from different revisions? I'd really like to know.

Second, if there are others who would be interested in contributing to
any project, given that the need is real, I would be delighted to hear
from them!


Cheers,
Daniel
 
pat eyler

> I've given this some thought, and it seems like a much bigger project
> than I immediately imagined -- I was surprised to find a much greater
> need for features than anticipated. I am currently pondering whether to
> do this as a Summer of Code project (that is, if I'm lucky enough to be
> selected).

I think this would make an awesome summer of code project, and would
benefit a lot of other projects downstream. Please do sign up.

> Unfortunately, I do not have great experience with benchmarking,
> neither the underlying math (well, I have some), nor the actual need of
> the people who would use an extended framework.

The math isn't my strong point either (on the statistical side), but I'm
happy to kibitz (and to try to engage other folks who can help) if you'd
like.

> The only thing I have ever used benchmarking for was to compare the
> speed of two methods with the same functionality (e.g. sorting).
>
> So what I initially want to ask you guys is this: what else do you use
> benchmarking for? Do you benchmark your entire application, and compare
> results from different revisions? I'd really like to know.

Well, the JRuby/Rubinius/etc. folks could sure use a common benchmarking
suite to look at sets of results from different builds (and different
platforms). I do some revision-vs-revision benchmarking of apps as well.

> Second, if there are others who would be interested in contributing to
> any project, given that the need is real, I would be delighted to hear
> from them!

I'll weigh in some more as scope and plans become more clear (and after
I get the other eleventy-six things off my plate).
 
James Edward Gray II

> I think this would make an awesome summer of code project, and would
> benefit a lot of other projects downstream. Please do sign up.

I agree and I would be happy to mentor it, so be sure and get it in
the list. ;)

James Edward Gray II
 
Jan Friedrich

Daniel said:
> So what I initially want to ask you guys is this: what else do you use
> benchmarking for? Do you benchmark your entire application, and compare
> results from different revisions? I'd really like to know.

At the moment I want to benchmark two revisions (in version control) of
a method in one project, so a solution for doing such things would be
nice, but it is probably not easy to handle.

In addition, if you test some small piece of code, you have to execute
it many times (often 1000 times or more) to get significant results. I
would be happy to see support for this common task in a framework.
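With today's stdlib that repetition looks something like the following; this is exactly the boilerplate a framework could absorb:

```ruby
require 'benchmark'

ITERATIONS = 100_000

# A single String#upcase is far below timer resolution, so we repeat it
# many times and report the total elapsed wall-clock time.
elapsed = Benchmark.realtime do
  ITERATIONS.times { "ruby".upcase }
end

puts format("%d iterations: %.4f s", ITERATIONS, elapsed)
```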


regards
Jan
 
 
Daniel Schierbeck

> The math isn't my strong point either (on the statistical side), but
> I'm happy to kibitz (and to try to engage other folks who can help) if
> you'd like.

I would surely need the help of statistics gurus to get it right, so
please do. The project itself might not begin immediately, though (I've
got exams three weeks from now :x).

> Well, the JRuby/Rubinius/etc. folks could sure use a common
> benchmarking suite to look at sets of results from different builds
> (and different platforms). I do some revision-vs-revision benchmarking
> of apps as well.

I thought that was what had the most potential. I might take a look
around to see if there are any existing solutions (there probably are
for Rails) and if I could extract some code into a library. I'll
probably have to write a lot of things from the ground up, though.
> I'll weigh in some more as scope and plans become more clear (and after
> I get the other eleventy-six things off my plate).

I know the feeling ;)


Cheers,
Daniel
 
Daniel Schierbeck

> I agree and I would be happy to mentor it, so be sure and get it in
> the list. ;)

How could I say no now :D

I'll see how things look for my summer, and then I'll consider whether
to do this. It looks like it could be a very cool project.


Cheers, and thanks for the support,
Daniel
 
