Possible inaccuracy in Python 2.4 when using profiler calibration

Brian Quinlan

I have a misinformed theory that I'd like to share with the list.

I believe that profiler calibration no longer makes sense in Python 2.4
because C functions are tracked and they have a different call overhead
than Python functions (and calibration is done only using Python
functions). Here is my reasoning (in code form):

% c:\python24\python
>>> import profile
>>> p = profile.Profile()
>>> for i in range(5):
...     print p.calibrate(1000000)
...
4.00375499355e-006
3.95700929469e-006
3.94034406478e-006
4.00315854962e-006
3.99454335716e-006
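
For reference (it matters for the negative numbers below): the profiler
subtracts this bias from every timing delta it records, once per profiler
event. Roughly, the idea is something like this (a paraphrase of the idea,
not the actual profile.py source):

# Sketch of how the bias is applied (a paraphrase, not real profile.py
# code): each time an event is dispatched (call, return, c_call,
# c_return, ...), the elapsed time since the previous event is reduced
# by the bias before being charged to any function.
class ProfilerSketch:
    bias = 0.0

    def __init__(self, timer):
        self.timer = timer
        self.t = timer()

    def dispatch(self, frame, event, arg):
        elapsed = self.timer() - self.t - self.bias
        # ... charge `elapsed` to the current function based on `event` ...
        self.t = self.timer()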

Code
----
import profile
profile.Profile.bias = 3.90e-006 # very conservative

def bar():
    l = []
    for i in range(100000):
        l += [i] # plain Python statement, no C call for the profiler to track

def foo():
    l = []
    for i in range(100000):
        l.append(i) # C function that can be tracked

def baz():
    bar()
    foo()

profile.run('baz()', "out.prof")
from pstats import Stats
s = Stats('out.prof')
s.sort_stats('time', 'calls')
s.print_stats()

from timeit import Timer

t1 = Timer('bar()', 'from __main__ import bar')
print 'bar():', t1.timeit(1000) / 1000

t2 = Timer('foo()', 'from __main__ import foo')
print 'foo():', t2.timeit(1000) / 1000

Output
------

Thu Jun 15 10:22:29 2006 out.prof

100008 function calls in -0.090 CPU seconds

Ordered by: internal time, call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.058    0.058    0.062    0.062 cal-test.py:4(bar)
        2    0.006    0.003    0.006    0.003 :0(range)
        1    0.004    0.004   -0.090   -0.090 cal-test.py:14(baz)
        1    0.001    0.001    0.001    0.001 :0(setprofile)
        1    0.000    0.000   -0.090   -0.090 profile:0(baz())
        1    0.000    0.000   -0.090   -0.090 <string>:1(?)
        0    0.000             0.000          profile:0(profiler)
        1   -0.066   -0.066   -0.157   -0.157 cal-test.py:9(foo)
   100000   -0.094   -0.000   -0.094   -0.000 :0(append)


bar(): 0.0582713573932
foo(): 0.0370039776005

Analysis
--------

As you can see, the profiler result for "bar" is pretty reasonable, but the
results for "foo" and "append" are not. I believe that is because calling
the C function "append" takes less time than is accounted for by the bias
(which was measured using the call time of a Python function).
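
Working backwards from those numbers (a rough back-of-the-envelope; it
assumes the bias is subtracted roughly once per append call from the time
charged to :0(append)), each tracked append seems to cost the profiler only
about 3e-006 seconds, which is less than the 3.9e-006 bias, so every call is
charged a small negative time and 100000 of them add up to roughly -0.09
seconds:

# Back-of-the-envelope check using the numbers from the output above.
bias = 3.90e-006
ncalls = 100000
tottime = -0.094                      # tottime reported for :0(append)

apparent = tottime / ncalls           # about -0.94e-06 s per call
measured = apparent + bias            # about  2.96e-06 s per call

print 'apparent per-call time: %g s' % apparent
print 'measured per-call time: %g s' % measured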

So the bias computation doesn't make sense in Python 2.4.

What do y'all think? Is this a well known fact? Should I construct a
test to see if C function call overhead is actually less than Python
function call overhead?
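
One rough way to do that (a sketch, not a polished benchmark;
per_event_overhead and the choice of {}.keys as the C callable are just
illustrations) would be to time a tight loop of calls with and without the
profiler attached and divide the difference by the number of profiler
events:

import profile
import time

def per_event_overhead(func, n=100000):
    # Time n calls of func with and without the profiler and return the
    # extra cost per profiler event (one call + one return per invocation).
    def loop():
        for i in xrange(n):
            func()

    timer = time.clock
    t0 = timer(); loop(); plain = timer() - t0

    p = profile.Profile()
    p.bias = 0.0                      # make sure no bias gets subtracted here
    t0 = timer(); p.runcall(loop); profiled = timer() - t0

    return (profiled - plain) / (2.0 * n)

def python_func():
    pass

print 'per-event overhead, Python function:', per_event_overhead(python_func)
print 'per-event overhead, C function:     ', per_event_overhead({}.keys)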

Cheers,
Brian
 
Dieter Maurer

Brian Quinlan said:
I have a misinformed theory that I'd like to share with the list.

I believe that profiler calibration no longer makes sense in Python
2.4 because C functions are tracked and they have a different call
overhead than Python functions (and calibration is done only using
Python functions). Here is my reasoning (in code form):

I fear it never made sense -- even with pure Python functions:

I tried to calibrate under Linux and failed miserably: apparently, the
"clock" resolution is very coarse, introducing a very high variance, which
is very bad for calibration.
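
For what it's worth, the coarseness is easy to see directly with something
like this (a sketch; time.clock here just stands in for whatever timer the
profiler ends up using, which differs between platforms):

import time

def granularity(timer, samples=10):
    # Spin until the timer value changes and record the smallest step seen.
    steps = []
    for i in range(samples):
        t0 = timer()
        t1 = timer()
        while t1 == t0:
            t1 = timer()
        steps.append(t1 - t0)
    return min(steps)

print 'smallest observable time.clock() step:', granularity(time.clock)

On older Linux systems that step often comes out on the order of 0.01
seconds, which is enormous next to a per-call bias of a few microseconds.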
 
