Unexpected timing results


Steven D'Aprano

I have two code snippets to time a function object being executed. I
expected that they should give roughly the same result, but one is more
than an order of magnitude slower than the other.

Here are the snippets:

import time

def timer1():
    timer = time.time
    func = lambda: None
    itr = [None] * 1000000
    t0 = timer()
    for _ in itr:
        func()
    t1 = timer()
    return t1 - t0

def timer2():
    timer = time.time
    func = lambda: None
    itr = [None] * 1000000
    t = 0.0
    for _ in itr:
        t0 = timer()
        func()
        t1 = timer()
        t += t1 - t0
    return t

Here are the results:

timer1: 0.54168200492858887
timer2: 6.1631934642791748

Of course I expect timer2 should take longer to execute in total,
because it is doing a lot more work. But it seems to me that all that
extra work should not affect the time measured, which (I imagine) should
be about the same as timer1. Possibly even less as it isn't timing the
setup of the for loop.

Any ideas what is causing the difference? I'm running Python 2.3 under
Linux.

Thanks,

Larry Bates

Steven said:
I have two code snippets to time a function object being executed. I
expected that they should give roughly the same result, but one is more
than an order of magnitude slower than the other.

[snip]

Any ideas what is causing the difference? I'm running Python 2.3 under
Linux.

Steven,

In timer2 you are making a million extra calls to the timer()
function and doing a million additions and a million
subtractions that are not being done in the timer1() function.
I think that is where your "extra" time is located.
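
For what it's worth, that bookkeeping can be timed on its own (a minimal
sketch of my own, not from Steven's post; the numbers will of course vary
by machine):

import time

def bookkeeping_only(n=1000000):
    # Only the extra per-iteration work timer2 adds on top of timer1:
    # two timer() calls, one subtraction and one addition.
    timer = time.time
    t = 0.0
    start = timer()
    for _ in range(n):
        t0 = timer()
        t1 = timer()
        t += t1 - t0
    return timer() - start

print("bookkeeping alone: %.3f seconds" % bookkeeping_only())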

-Larry Bates

Kent Johnson

Steven said:
I have two code snippets to time a function object being executed. I
expected that they should give roughly the same result, but one is more
than an order of magnitude slower than the other.

[snip]

Any ideas what is causing the difference? I'm running Python 2.3 under
Linux.

You are including the cost of one call to timer() in each loop: the cost
of returning the time from the first call to timer() and the cost of
getting the time in the second call. On my computer with Python 2.4:

D:\Projects\CB>python -m timeit -s "import time;timer=time.time" "t=timer()"
1000000 loops, best of 3: 0.498 usec per loop

which is almost exactly the same as the difference between timer1() and
timer2() on my machine:
timer1 0.30999994278
timer2 0.812000274658

using Python 2.4.2 on Win2k.
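
Spelling that comparison out with the numbers in this thread (just a quick
sketch; the per-iteration difference should be roughly the cost of one
timer() call):

n = 1000000.0
print("Kent:   %.3f usec" % ((0.812000274658 - 0.30999994278) / n * 1e6))
print("Steven: %.3f usec" % ((6.1631934642791748 - 0.54168200492858887) / n * 1e6))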

Kent

Steven D'Aprano

....

In timer2 you are making a million extra calls to the timer()
function and doing a million additions and a million
subtractions that are not being done in the timer1() function.
I think that is where your "extra" time is located.

But those extra additions, subtractions, etc. are not being timed, because
they are outside the part where the stopwatch is ticking, in a manner of
speaking.

I know that timer2 will take longer to execute than timer1. That's not
the surprising bit. But I'm only measuring part of the execution time,
and the extra work happens outside the part being timed.

The only extra work I can see is a call to timer() each loop. The first
call shouldn't count, because it happens before the clock starts. Is
calling time.time() really such an expensive operation, especially when
I've rebound the name so as to avoid a name lookup?

I guess it is: I've run two more tests, both going back to the form of
timer1, but changing the function being timed. The first uses math.sin.
The second uses time.time itself.

Here is the code:

import math
import time

def timer3():
    timer = time.time
    func = math.sin
    itr = [None] * 1000000
    t0 = timer()
    for _ in itr:
        func(0.1)
    t1 = timer()
    return t1 - t0

def timer4():
    timer = time.time
    func = timer
    itr = [None] * 1000000
    t0 = timer()
    for _ in itr:
        func()
    t1 = timer()
    return t1 - t0

And the results:
5.2274949550628662



It looks like the time function under Linux at least is very slow.
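
For a cross-check one could also use the timeit module (in the standard
library since Python 2.3); a minimal sketch, with the usual caveat that
the absolute figure depends on the machine:

import timeit

t = timeit.Timer("timer()", "import time; timer = time.time")
n = 1000000
print("%.3f usec per time.time() call" % (t.timeit(n) / n * 1e6))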
 

Magnus Lycka

Steven said:
It looks like the time function under Linux at least is very slow.

Perhaps you should try doing the same thing in C.
Then you can see whether the problem is in the
wrapper or in the system call.
 

Steven D'Aprano

Perhaps you should try doing the same thing in C.
Then you can see whether the problem is in the
wrapper or in the system call.

Good idea.

Now, has anyone got a copy of "Learning C for Dummies" I can borrow?

*chagrined smile*
 

Roel Schroeven

Steven D'Aprano wrote:
Good idea.

Now, has anyone got a copy of "Learning C for Dummies" I can borrow?

*chagrined smile*

First, for reference, the results of the Python version on the machine I
used for testing (Debian 3.1, P4 3GHz):

timer1: 0.36007809639
timer2: 3.18800234795

On Windows the difference is much less.

C (compiled with gcc 3.3.5 without optimizations) (see below for the code):

timer1: 0.000603
timer2: 2.624557

I think it looks like the problem is in the system call.



C code:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

void func(void)
{
}

/* Wall-clock time in seconds, from gettimeofday(). */
double gettime(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* Like the Python timer1: time the whole loop from outside
   (100000 calls here). */
double timer1(void)
{
    double t0, t1;
    int i;
    t0 = gettime();
    for (i = 0; i < 100000; ++i)
    {
        func();
    }
    t1 = gettime();
    return t1 - t0;
}

/* Like the Python timer2: time each call separately and sum the
   differences (1000000 calls here). */
double timer2(void)
{
    int i;
    double t = 0.0;
    double t0, t1;
    for (i = 0; i < 1000000; ++i)
    {
        t0 = gettime();
        func();
        t1 = gettime();
        t += t1 - t0;
    }
    return t;
}

int main(void)
{
    printf("timer1: %lf\n", timer1());
    printf("timer2: %lf\n", timer2());
    return 0;
}
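
To reproduce, something like the following should do (the file name is
mine; gcc compiles without optimization by default, matching the run
above):

gcc -o timertest timertest.c
./timertest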
 

Steven D'Aprano

On Fri, 24 Feb 2006 10:51:09 +0000, Roel Schroeven wrote:

[snip]
I think it looks like the problem is in the system call.

Thank you Roel for going the extra mile. I appreciate the unexpected bonus.
 
