Non-OO interface to mysql


Peter J. Holzer

>> If you repeat that a few times you will see that in about 50% of the
>> cases the procedural call is slightly faster than the OO call and in
>> 50% of the cases the OO call is slightly faster - or in other words,
>> the difference is not even measurable with a simple benchmark.

> Totally agree, but this particular code gives OO an
> edge due to method caching.
>
> A slight tweak: $arg . ("a".."z")[int(rand(26))]
> consistently tilts for the faster procedural call ...
> at least for some small Solaris and Linux sample
> runs.

Interesting. I hadn't expected that the values of the arguments have an
influence on the method cache. How does the method cache work?

hp
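
The benchmark itself isn't shown in the thread; a minimal reconstruction of the kind of comparison being discussed might look like this (the class Foo, its method meth, and the sub proc are made-up names, not the original code):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

package Foo;
sub new  { bless {}, shift }
sub meth { return $_[1] }        # trivial method: echo its argument

package main;
sub proc { return $_[0] }        # equivalent plain subroutine

my $obj = Foo->new;

# Compare a method call against a plain sub call with a constant argument.
cmpthese(500_000, {
    oo         => sub { $obj->meth('x') },
    procedural => sub { proc('x') },
});
```

Run a few times, the two rows tend to swap places, which is the point being made above.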
 

Klaus

On Dec 21, 8:53 pm, "Peter J. Holzer" <[email protected]> wrote:

>> [Non-OO DB interface]
>> No need to write wrapper functions. A plain...
>>     $rows = DBI::mysql_selectall_arrayref($dbh, "select * from table");
>> ...should work.

> No, because $dbh is not an object of the class DBI, but of the class
> DBI::db.
> DBI uses quite a lot of classes internally, and several of them depend
> on the DBD driver actually in use, so you generally don't know (and
> shouldn't care) which class an object belongs to (just which interface
> it implements, to borrow some terminology from Java).

Thanks for your correction.
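
The point about class versus interface can be illustrated without a live database. Fake::db below is a hypothetical stand-in for DBI::db (not a real DBI class): what matters is whether the handle can do selectall_arrayref, not which package it happens to be blessed into.

```perl
use strict;
use warnings;

package Fake::db;                 # stand-in for DBI::db, illustration only
sub new { bless {}, shift }
sub selectall_arrayref { return [ [ 1, 'one' ] ] }

package main;
my $dbh = Fake::db->new;

print ref($dbh), "\n";            # prints "Fake::db", not "DBI"

# Check the interface instead of the concrete class:
if ($dbh->can('selectall_arrayref')) {
    my $rows = $dbh->selectall_arrayref;
    print scalar(@$rows), " row(s)\n";
}
```

This is why DBI::mysql_selectall_arrayref($dbh, ...) cannot work: method resolution starts from the object's actual class, which is driver-dependent.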
 

C.DeRykus

>> Totally agree, but this particular code gives OO an
>> edge due to method caching.
>> A slight tweak: $arg . ("a".."z")[int(rand(26))]
>> consistently tilts for the faster procedural call ...
>> at least for some small Solaris and Linux sample
>> runs.

> Interesting. I hadn't expected that the values of the arguments have an
> influence on the method cache. How does the method cache work?

Actually, I just assumed that expensive method-pointer
lookups were being cached and that leveled the playing
field. Normally, that extra lookup would slow the OO
call and procedural would win. Also, I assume that if
the method argument is constant, as it is here, the call
result itself could be cached too, and that'd apply to the procedural
call too.
But caching the method-pointer lookup would seem
independent of the constant-argument caching. Maybe
Perl doesn't bother with the method-pointer caching if the arguments
can't be cached. I don't know....
 

Peter J. Holzer

>>>> If you repeat that a few times you will see that in about 50% of the
>>>> cases the procedural call is slightly faster than the OO call and in
>>>> 50% of the cases the OO call is slightly faster - or in other words,
>>>> the difference is not even measurable with a simple benchmark.

>>> Totally agree, but this particular code gives OO an
>>> edge due to method caching.
>>> A slight tweak: $arg . ("a".."z")[int(rand(26))]
>>> consistently tilts for the faster procedural call ...
>>> at least for some small Solaris and Linux sample
>>> runs.

>> Interesting. I hadn't expected that the values of the arguments have an
>> influence on the method cache. How does the method cache work?

> Actually, I just assumed that expensive method pointer
> lookups were being cached and that leveled the playing
> field. Normally, that extra lookup would slow the OO
> call and procedural would win.

Well, the purpose of my test code was to find out how expensive these
pointer lookups really are. A single pointer lookup would be completely
insignificant compared to a normal perl subroutine call (which is
already quite expensive). A hash lookup would be more expensive. If @ISA
had to be searched (recursively) for each call, it would be even more
expensive. My simple benchmark indicated that the overhead was very low
(about 15 ns per call), so I assumed it was only an extra pointer lookup
or so. Your modification dramatically increased the overhead of the OO
code (to about 285 ns per call). So my assumption was obviously wrong.
What irritates me is that I don't see what could cause the difference.
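
The varying-argument variant under discussion can be sketched like this (again, Foo, meth, and proc are made-up names standing in for the original benchmark code, which isn't shown in the thread):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

package Foo;
sub new  { bless {}, shift }
sub meth { return $_[1] }

package main;
sub proc { return $_[0] }

my $obj     = Foo->new;
my @letters = ('a' .. 'z');
my $arg     = 'x';

# Same comparison as before, but each call now builds a fresh,
# randomly varying argument ($arg . one random letter).
cmpthese(200_000, {
    oo         => sub { $obj->meth($arg . $letters[int rand 26]) },
    procedural => sub { proc($arg . $letters[int rand 26]) },
});
```

Note that the string concatenation and rand call add the same cost to both branches, so any remaining gap should come from the call mechanism itself.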

> Also, I assume that if the method argument is constant, as it is
> here, the call result itself could be cached too

No, this is not possible. Consider the call «rand(26)» in your code
(rand is a builtin, but it could easily be implemented in Perl): the
argument is a constant (26), but the return value changes every time.
If perl cached the return value, the result wouldn't be very random,
would it?

> and that'd apply to the procedural call too.

Right.

In both the OO and the procedural case the compiler could determine that
a function is "pure" (i.e., the result depends only on the arguments)
and optimize accordingly. Perl doesn't, AFAIK, with one important
exception: a function with prototype () returning a constant is inlined.
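
That one exception is easy to demonstrate (this is also how "use constant" works under the hood):

```perl
use strict;
use warnings;

# Empty () prototype + constant body => perl inlines the sub at
# compile time; later uses become the bare constant, not a call.
sub ANSWER () { 42 }

print ANSWER + 1, "\n";   # prints 43; folded at compile time, no call
```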

There is a module "Memoize" to add a caching layer to any sub.
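
A short sketch of Memoize (a core module) applied to a hypothetical sub; the call counter shows that repeated calls with the same argument hit the cache instead of re-running the body:

```perl
use strict;
use warnings;
use Memoize;

my $calls = 0;
sub slow_square {
    my ($n) = @_;
    $calls++;                  # count how often the body actually runs
    return $n * $n;
}
memoize('slow_square');        # wrap the sub in a result cache

slow_square(7) for 1 .. 3;     # same argument three times
print "result: ", slow_square(7), "\n";   # 49
print "actual calls: $calls\n";           # 1 - later calls are cached
```

This is only safe for pure functions; memoizing something like rand() would reintroduce exactly the problem described above.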

hp
 

C.DeRykus

....

> Well, the purpose of my test code was to find out how expensive these
> pointer lookups really are. A single pointer lookup would be completely
> insignificant compared to a normal perl subroutine call (which is
> already quite expensive). A hash lookup would be more expensive. If @ISA
> had to be searched (recursively) for each call, it would be even more
> expensive. My simple benchmark indicated that the overhead was very low
> (about 15 ns per call), so I assumed it was only an extra pointer lookup
> or so. Your modification dramatically increased the overhead of the OO
> code (to about 285 ns per call). So my assumption was obviously wrong.
> What irritates me is that I don't see what could cause the difference.

> No, this is not possible. Consider the call «rand(26)» in your code
> (rand is a builtin, but it could easily be implemented in Perl): the
> argument is a constant (26), but the return value changes every time.
> If perl cached the return value, the result wouldn't be very random,
> would it?

You're right, that doesn't make sense generally, although
I was thinking about the original sub foo() in the benchmark
code.

> Right.

> In both the OO and the procedural case the compiler could determine that
> a function is "pure" (i.e., the result depends only on the arguments)
> and optimize accordingly. Perl doesn't, AFAIK, with one important
> exception: a function with prototype () returning a constant is inlined.

It certainly appears, though, that Perl sneaks in some kind
of optimization. And I would guess that, since foo() was
just trivially returning the exact arg that's passed, it's a
prime candidate for optimizing... somewhere in op.c maybe?
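
One way to check what the compiler actually did, rather than guessing, is B::Deparse (ships with perl): deparsing a sub that calls an inlinable constant sub shows the call already replaced by the folded constant. Here foo is a made-up example sub, not the foo() from the benchmark.

```perl
use strict;
use warnings;
use B::Deparse;

sub foo () { 42 }              # inlinable: empty prototype, constant body

# Deparse a sub that uses foo() to see what survived compilation.
my $deparse = B::Deparse->new;
my $text    = $deparse->coderef2text(sub { return foo() + 1 });
print $text, "\n";             # body typically reads: return 43;
```

Only the constant-sub case is folded like this; a sub that merely echoes its argument, like the benchmark's foo(), shows up in the deparse as a genuine call.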
 
