Calculating Elapsed Time

Jean Johnson

Hello -

I have a start and end time that is written using the
following:

time.strftime("%b %d %Y %H:%M:%S")

How do I calculate the elapsed time?

JJ



 
Bengt Richter

Hello -

I have a start and end time that is written using the
following:

time.strftime("%b %d %Y %H:%M:%S")

How do I calculate the elapsed time?
Perhaps the time was available as a number (from time.time()) before
strftime was used? If so, the elapsed time is just the difference of the
two numbers, e.g. 83.0 seconds. Even if not, a string like
'Dec 06 2005 22:08:34' can round-trip back to a number via
time.strptime and time.mktime.
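A minimal sketch of that round trip, assuming both stamps were written
with time.strftime("%b %d %Y %H:%M:%S") under an English/C locale and in
the same timezone (the start string below is made up just to reproduce
the 83-second figure):

import time

FORMAT = "%b %d %Y %H:%M:%S"

def elapsed_seconds(start_str, end_str):
    # parse the formatted strings back into struct_time values,
    # convert those to seconds since the epoch, then subtract
    start = time.mktime(time.strptime(start_str, FORMAT))
    end = time.mktime(time.strptime(end_str, FORMAT))
    return end - start

print elapsed_seconds("Dec 06 2005 22:07:11", "Dec 06 2005 22:08:34")
# prints 83.0

Note that %b is locale-dependent and mktime assumes local time, so both
strings need to come from the same locale and timezone.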


for more info, see time module docs at

http://www.python.org/doc/current/lib/module-time.html

and in general, learn how to find info starting at

http://www.python.org/doc/

also, interactively,
import time
help(time)

Regards,
Bengt Richter
 
malv

"Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second." says the doc.
Can anything be said about precision if indeed your system returns
figures after the decimal point?
Thx.
malv
 
Peter Hansen

malv said:
"Note that even though the time is always returned as a floating point
number, not all systems provide time with a better precision than 1
second." says the doc.
Can anything be said about precision if indeed your system returns
figures after the decimal point?

A few things.

1. "Precision" is probably the wrong word there. "Resolution" seems
more correct.

2. If your system returns figures after the decimal point, it probably
has better resolution than one second (go figure). Depending on what
system it is, your best bet to determine why is to check the
documentation for your system (also go figure), since the details are
not really handled by Python. Going by memory, Linux will generally be
1ms resolution (I might be off by 10 there...), while Windows XP has
about 64 ticks per second, so .015625 resolution...

-Peter
 
Fredrik Lundh

Peter said:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...

here's a silly little script that measures the difference between
two distinct return values, and reports the maximum frequency
it has seen so far:

import time

def test(func):
    # track the highest apparent tick frequency seen so far
    mm = 0
    t0 = func()
    while 1:
        t1 = func()
        if t0 != t1:
            # two distinct readings: 1/(t1-t0) is the apparent frequency
            m = max(1 / (t1 - t0), mm)
            if m != mm:
                print m
                mm = m
        t0 = t1

test(time.time)
# test(time.clock)

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

</F>
 
Grant Edwards

2. If your system returns figures after the decimal point, it
probably has better resolution than one second (go figure).
Depending on what system it is, your best bet to determine
why is to check the documentation for your system (also go
figure), since the details are not really handled by
Python. Going by memory, Linux will generally be 1ms
resolution (I might be off by 10 there...),

In my experience, time.time() on Linux has a resolution of
about 1us. The delta I get when I do

print time.time()-time.time()

is usually about 2-3us, but some of that is probably due to the
overhead involved.
 
Grant Edwards

here's a silly little script that measures the difference between
two distinct return values, and reports the maximum frequency
it has seen so far:

[snip script]

if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement. I don't
know which library call the time module uses, but if it's
gettimeofday(), that is limited to 1us resolution.
clock_gettime() provides an API with 1ns resolution. Not sure
what the actual data resolution is...
 
Grant Edwards

At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement. I don't
know which library call the time module uses, but if it's
gettimeofday(), that is limited to 1us resolution.
clock_gettime() provides an API with 1ns resolution. Not sure
what the actual data resolution is...

Depending on which clock is used, the resolution of
clock_gettime() appears to be as low as 1ns. I usually get
deltas of 136ns when calling clock_gettime() using CLOCK_PROCESS_CPUTIME_ID.
I suspect the function/system call overhead is larger than the
clock resolution.

IIRC, time.time() uses gettimeofday() under Linux, so you can
expect 1us resolution.
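The Python of this era has no direct wrapper for clock_gettime(), but a
ctypes sketch shows the idea (assumptions: Linux, where librt.so.1
exports the call and CLOCK_PROCESS_CPUTIME_ID is 2 per <time.h>; none of
this code is from the thread itself):

import ctypes

class timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long),
                ("tv_nsec", ctypes.c_long)]

librt = ctypes.CDLL("librt.so.1")
CLOCK_PROCESS_CPUTIME_ID = 2     # from <time.h> on Linux

def cputime_ns():
    # per-process CPU time in integer nanoseconds, so no
    # double-precision rounding is involved
    ts = timespec()
    if librt.clock_gettime(CLOCK_PROCESS_CPUTIME_ID, ctypes.byref(ts)):
        raise OSError("clock_gettime failed")
    return ts.tv_sec * 1000000000 + ts.tv_nsec

t0 = cputime_ns()
t1 = cputime_ns()
print t1 - t0    # back-to-back delta, on the order of the 136ns above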
 
Fredrik Lundh

Grant said:
At least under Linux, I suspect you're just measuring loop time
rather than the granularity of the time measurement.

Yeah, I said it was silly. On the other hand, the Linux box is a lot faster
than the Windows box I'm using, and I do get the same result no matter
what Python version I'm using...

(and in this context, neither 262144 nor 1789772 are random numbers...)

</F>
 
Grant Edwards

Yeah, I said it was silly. On the other hand, the Linux box is a lot faster
than the Windows box I'm using, and I do get the same result no matter
what Python version I'm using...

(and in this context, neither 262144 nor 1789772 are random numbers...)

262144 is 3.8us. That seems pretty large. What do you get
when you do this:

import time
for i in range(10):
    print time.time()-time.time()

After the first loop, I usually get one of three values:

3.099us, 2.14us, 2.86us.

In any case, the resolution of time.time() _appears_ to be less
than 1us.
 
Bengt Richter

In my experience, time.time() on Linux has a resolution of
about 1us. The delta I get when I do

print time.time()-time.time()

is usually about 2-3us, but some of that is probably due to the
overhead involved.
Try it here:
9.9999904632568359

(This NT4 box is slow ;-)
BTW, time.time here is just the 100 Hz scheduling slice:
149147.75106031806

Regards,
Bengt Richter
 
Grant Edwards

0.00095367431640625

Yup. That has less overhead than my original example because
you've avoided the extra name lookup:
>>> t = time.time
>>> for i in range(10):
...     print t()-t()
...
-4.05311584473e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-2.86102294922e-06
-1.90734863281e-06
-2.14576721191e-06
-2.14576721191e-06
-9.53674316406e-07
-1.90734863281e-06

The min delta seen is 0.95us. I'm guessing that's
function/system call overhead and not timer resolution.
 
Peter Hansen

Fredrik said:
Peter said:
Going by memory, Linux will generally be 1ms resolution (I might be
off by 10 there...), while Windows XP has about 64 ticks per second,
so .015625 resolution...
[snip script]
if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

For the record, the WinXP box I'm on is also 100 for time.time. Which
is quite odd, as I have a distinct memory of us having done the above
type of measurement before and having had it come out at 64. Colour me
puzzled.

-Peter
 
Christopher Subich

Peter said:
A few things.

1. "Precision" is probably the wrong word there. "Resolution" seems
more correct.

2. If your system returns figures after the decimal point, it probably
has better resolution than one second (go figure). Depending on what
system it is, your best bet to determine why is to check the
documentation for your system (also go figure), since the details are
not really handled by Python. Going by memory, Linux will generally be
1ms resolution (I might be off by 10 there...), while Windows XP has
about 64 ticks per second, so .015625 resolution...

One caveat: on Windows systems, time.clock() is actually the
high-precision clock (and on *nix, it's an entirely different
performance counter). Its semantics for time differentials, IIRC, are
exactly the same, so if that's all you're using it for, it might be worth
wrapping time.time / time.clock as a module-local timer function
depending on sys.platform.
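A minimal sketch of such a wrapper, along the same lines as what the
timeit module uses for its default timer (assuming the Python 2
semantics described in this thread):

import sys
import time

if sys.platform == "win32":
    # on Windows, time.clock() is the high-resolution performance counter
    timer = time.clock
else:
    # elsewhere, time.clock() is CPU time, so use the wall clock
    timer = time.time

# usage: portable wall-clock elapsed time
t0 = timer()
# ... do some work ...
print timer() - t0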
 
Grant Edwards

>>> t = time.time
>>> for i in range(10):
...     print t()-t()
...
-4.05311584473e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-2.86102294922e-06
-1.90734863281e-06
-2.14576721191e-06
-2.14576721191e-06
-9.53674316406e-07
-1.90734863281e-06

The min delta seen is 0.95us. I'm guessing that's
function/system call overhead and not timer resolution.

After looking at the implementation of time.time under Linux,
it should have _exactly_ 1us resolution. I suspect that on a
Linux platform the resolution of time.time() is being limited
by IEEE double representation.

Yup: a current time.time() value, expressed in whole microseconds,
runs to 16 digits.

1us resolution for the time.time() value requires 16
significant digits. That's more than IEEE 64-bit floating point
can represent accurately, which is why the apparent resolution of
time.time() under Linux is only approximately 1us.
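A quick sketch of that digit arithmetic (the ~1.1e9-second epoch value
matches the dates in this thread; nothing below is from the original
post):

import math
import time

t = time.time()

# 1us resolution on a ~1.1e9 s timestamp needs 16 decimal digits
print len("%d" % (t * 1000000))    # -> 16

# a C double carries 53 mantissa bits, about 15.95 decimal digits;
# the spacing between adjacent doubles near t is
m, e = math.frexp(t)               # t == m * 2**e, with 0.5 <= m < 1
print 2.0 ** (e - 53)              # -> 2**-22, about 2.4e-07, for t near 1.1e9

This is why the deltas above print as multiples of 2**-22 s (e.g.
1.90734863281e-06 == 8 * 2**-22) rather than as exact microseconds.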
 
Fredrik Lundh

... except if I run it under my latest 2.4 build, where I get 524288 ...
262144 is 3.8us. That seems pretty large. What do you get
when you do this:

import time
for i in range(10):
    print time.time()-time.time()

After the first loop, I usually get one of three values:

3.099us, 2.14us, 2.86us.

I get two different values:

-1.90734863281e-06
-2.14576721191e-06

on this hardware (faster than the PC I'm using right now, but still not a
very fast machine). let's check a faster linux box:

$ python2.4 test.py
-6.91413879395e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-1.90734863281e-06

if I keep running the script over and over again, I do get individual

-1.19209289551e-06

items from time to time on both machines...

</F>
 
Christopher Subich

Fredrik said:
if I run this on the Windows 2K box I'm sitting at right now, it settles
at 100 for time.time, and 1789772 for time.clock. on linux, I get 100
for time.clock instead, and 262144 for time.time.

Aren't the time.clock semantics different on 'nix? I thought, at least
on some 'nix systems, time.clock returned a "cpu time" value that
measured actual computation time, rather than wall-clock time [meaning
stalls in IO don't count].

This is pretty easily confirmed, at least on one particular system, by
comparing time.clock() and time.time() deltas across a pause at the
interactive prompt (so the delay is just typing time):

Python 2.2.3 (#1, Nov 12 2004, 13:02:04)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-42)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(0.0019519999999999989, 7.6953330039978027)

The first figure (time.clock) is about 2ms of CPU time; the second
(time.time) is almost 8 seconds of wall-clock time.

So caveat programmer when using time.clock; its meaning is different on
different platforms.
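A minimal way to reproduce that without depending on typing speed (a
sketch assuming the Unix behaviour, where time.clock() measures CPU
time):

import time

cpu0, wall0 = time.clock(), time.time()
time.sleep(2)                  # a stall that burns no CPU
print time.clock() - cpu0      # ~0.0: the sleep doesn't count as CPU time
print time.time() - wall0      # ~2.0: wall-clock time sees the stall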
 
Peter Hansen

Peter said:
For the record, the WinXP box I'm on is also 100 for time.time. Which
is quite odd, as I have a distinct memory of us having done the above
type of measurement before and having had it come out at 64. Colour me
puzzled.

But another XP (SP1) box at a customer site is reporting 64Hz. Mine is
SP2. It isn't reasonable to think that actually changed with SP2.
Colour me even more puzzled... time for some research, or for an expert
to weigh in.

-Peter
 
Grant Edwards

I get two different values:

-1.90734863281e-06
-2.14576721191e-06

on this hardware (faster than the PC I'm using right now, but still not a
very fast machine). let's check a faster linux box:

$ python2.4 test.py
-6.91413879395e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-2.14576721191e-06
-1.90734863281e-06
-1.90734863281e-06

if I keep running the script over and over again, I do get individual

-1.19209289551e-06

items from time to time on both machines...

We're seeing floating point representation issues.

The resolution of the underlying call is exactly 1us. Calling
gettimeofday() in a loop in C results in deltas of exactly 1 or
2 us. Python uses a C double to represent time, and a double
doesn't have enough bits to accurately represent 1us resolution.
 
Bengt Richter

We're seeing floating point representation issues.

The resolution of the underlying call is exactly 1us. Calling
gettimeofday() in a loop in C results in deltas of exactly 1 or
2 us. Python uses a C double to represent time, and a double
doesn't have enough bits to accurately represent 1us resolution.
Is there a timer chip that is programmed to count in exactly 1us steps?
If this is trying to be platform independent, I think it has to be
faking it sometimes. E.g., I thought on Windows you could sometimes
get a time based on the Pentium time stamp counter, which is read as 64 bits
with the RDTSC instruction and counts at the full CPU clock rate (probably
affected by power management slowdown when applicable, but I don't know this).
If you write in C, you can set a base value (which ISTR clock does
the first time it's called) and return deltas that fit at full time
stamp counter LSB resolution in the 53 bits of a double for quite a while,
even at a GHz (over 100 days, I think ... let's see: 2**53 / 1e9 / 86400
= 104.24999137431705 days, yep). So there has to be a floating convert
and multiply in order to get seconds.

It would be interesting to dedicate a byte code to optimized inline raw time stamp reading
into successive 64-bit slots of a pre-allocated space, and have a way to get a call to
an identified function name translated to this byte code instead of the normal function call
instructions. One could do it with a byte-code-hacking decorator for code within a function,
if the byte code were available and a module giving access to the buffer were available.
Then the smallest interval would be a loop through the byte code switch (I think you
can read the register in user mode, unless a bit has been set to prevent it, so there
shouldn't even be kernel call and context switch overhead). Of course, if no counter is available,
the byte code would probably have to raise an exception instead, or fake it with a timer chip register.

I have a rdtsc.dll module, but I haven't re-compiled it for the current
version. Another back-burner thing. (I was trying to get CPU chip version
detection right so the module would refuse to load if there wasn't a
suitable chip, IIRC.) Sigh.

Regards,
Bengt Richter
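For what it's worth, the time stamp counter can be read from user mode
without a compiled extension module. A sketch, assuming Linux on x86-64,
a CPython with ctypes and mmap, and a system that allows
writable-executable anonymous mappings; the machine code is just RDTSC
followed by combining EDX:EAX into one 64-bit result (none of this is
from the thread):

import ctypes
import mmap

# x86-64 machine code: rdtsc; shl rdx,32; or rax,rdx; ret
CODE = "\x0f\x31\x48\xc1\xe2\x20\x48\x09\xd0\xc3"

page = mmap.mmap(-1, mmap.PAGESIZE,
                 prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
page.write(CODE)

# treat the page as a C function returning the 64-bit cycle counter
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
rdtsc = ctypes.CFUNCTYPE(ctypes.c_uint64)(addr)

t0 = rdtsc()
t1 = rdtsc()
print t1 - t0    # CPU cycles between two back-to-back reads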
 
