time.time or time.clock

Discussion in 'Python' started by Ron Adam, Jan 13, 2008.

  1. Ron Adam

    Ron Adam Guest

    I'm having some cross-platform issues with timing loops. It seems
    time.time is better on some computers/platforms and time.clock on
    others, but it's not always clear which, so I came up with the
    following to try to determine which.


    import time

    # Determine if time.time is better than time.clock
    # The one with better resolution should be lower.
    if time.clock() - time.clock() < time.time() - time.time():
        clock = time.clock
    else:
        clock = time.time


    Will this work most of the time, or is there something better?


    Ron
     
    Ron Adam, Jan 13, 2008
    #1

  2. John Machin

    John Machin Guest

    On Jan 14, 7:05 am, Ron Adam <> wrote:
    > I'm having some cross platform issues with timing loops. It seems
    > time.time is better for some computers/platforms and time.clock others, but


    Care to explain why it seems so?

    > it's not always clear which, so I came up with the following to try to
    > determine which.
    >
    > import time
    >
    > # Determine if time.time is better than time.clock
    > # The one with better resolution should be lower.
    > if time.clock() - time.clock() < time.time() - time.time():
    >     clock = time.clock
    > else:
    >     clock = time.time
    >
    > Will this work most of the time, or is there something better?
    >


    Manual:
    """
    clock( )

    On Unix, return the current processor time as a floating point number
    expressed in seconds. The precision, and in fact the very definition
    of the meaning of ``processor time'', depends on that of the C
    function of the same name, but in any case, this is the function to
    use for benchmarking Python or timing algorithms.

    On Windows, this function returns wall-clock seconds elapsed since the
    first call to this function, as a floating point number, based on the
    Win32 function QueryPerformanceCounter(). The resolution is typically
    better than one microsecond.
    [snip]

    time( )

    Return the time as a floating point number expressed in seconds since
    the epoch, in UTC. Note that even though the time is always returned
    as a floating point number, not all systems provide time with a better
    precision than 1 second. While this function normally returns non-
    decreasing values, it can return a lower value than a previous call if
    the system clock has been set back between the two calls.
    """

    AFAICT that was enough indication for most people to use time.clock on
    all platforms ... before the introduction of the timeit module; have
    you considered it?

    It looks like your method is right sometimes by accident. func() -
    func() will give a negative answer with a high resolution timer and a
    meaningless answer with a low resolution timer, where "high" and "low"
    are relative to the time taken for the function call, so you will pick
    the high resolution one most of the time because the meaningless
    answer is ZERO (no tick, no change). Some small fraction of the time
    the low resolution timer will have a tick between the two calls and
    you will get the wrong answer (-big < -small). In the case of two
    "low" resolution timers, both will give a meaningless answer and you
    will choose arbitrarily.
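    John's point can be checked directly: Python evaluates the left operand
    first, so the expression below computes (earlier call) - (later call).
    This is a minimal sketch illustrating the argument, not code from the
    original post:

    ```python
    import time

    # The left operand is evaluated first, so this is earlier - later:
    # negative when the timer ticks between the calls, and exactly zero
    # when it doesn't (no tick, no change).
    diff = time.time() - time.time()
    print(diff <= 0)
    ```
    
    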

    HTH,
    John
     
    John Machin, Jan 13, 2008
    #2

  3. John Machin wrote:

    > AFAICT that was enough indication for most people to use time.clock on
    > all platforms ...


    which was unfortunate, given that time.clock() isn't even a proper clock
    on most Unix systems; it's a low-resolution sample counter that can
    happily assign all time to a process that uses, say, 2% CPU and zero
    time to one that uses 98% CPU.

    > before the introduction of the timeit module; have you considered it?


    whether or not "timeit" suits his requirements, he can at least replace
    his code with

    clock = timeit.default_timer

    which returns a good wall-time clock (which happens to be time.time() on
    Unix and time.clock() on Windows).
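    As a minimal sketch of that replacement (the loop body below is a
    hypothetical stand-in for whatever is being timed):

    ```python
    import timeit

    # Platform-appropriate wall-clock timer, chosen by the timeit module.
    clock = timeit.default_timer

    start = clock()
    total = 0
    for i in range(100000):
        total += i  # hypothetical stand-in for per-frame work
    elapsed = clock() - start
    print("elapsed: %.6f s" % elapsed)
    ```
    
    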

    </F>
     
    Fredrik Lundh, Jan 13, 2008
    #3
  4. Ron Adam

    Ron Adam Guest

    John Machin wrote:
    > On Jan 14, 7:05 am, Ron Adam <> wrote:
    >> I'm having some cross platform issues with timing loops. It seems
    >> time.time is better for some computers/platforms and time.clock others, but

    >
    > Care to explain why it seems so?
    >
    >> it's not always clear which, so I came up with the following to try to
    >> determine which.
    >>
    >> import time
    >>
    >> # Determine if time.time is better than time.clock
    >> # The one with better resolution should be lower.
    >> if time.clock() - time.clock() < time.time() - time.time():
    >>     clock = time.clock
    >> else:
    >>     clock = time.time
    >>
    >> Will this work most of the time, or is there something better?
    >>

    >
    > Manual:
    > """
    > clock( )
    >
    > On Unix, return the current processor time as a floating point number
    > expressed in seconds. The precision, and in fact the very definition
    > of the meaning of ``processor time'', depends on that of the C
    > function of the same name, but in any case, this is the function to
    > use for benchmarking Python or timing algorithms.
    >
    > On Windows, this function returns wall-clock seconds elapsed since the
    > first call to this function, as a floating point number, based on the
    > Win32 function QueryPerformanceCounter(). The resolution is typically
    > better than one microsecond.
    > [snip]
    >
    > time( )
    >
    > Return the time as a floating point number expressed in seconds since
    > the epoch, in UTC. Note that even though the time is always returned
    > as a floating point number, not all systems provide time with a better
    > precision than 1 second. While this function normally returns non-
    > decreasing values, it can return a lower value than a previous call if
    > the system clock has been set back between the two calls.
    > """
    >
    > AFAICT that was enough indication for most people to use time.clock on
    > all platforms ... before the introduction of the timeit module; have
    > you considered it?


    I use it to time a Visual Python loop which controls frame rate updates
    and sets velocities according to time between frames, rather than frame
    count. The time between frames depends both on the desired frame rate
    and the background load on the computer, so it isn't constant.

    time.clock() isn't high enough resolution on Ubuntu, and time.time()
    isn't high enough resolution on Windows.

    I do use timeit for benchmarking, but haven't tried using it in a
    situation like this.


    > It looks like your method is right sometimes by accident. func() -
    > func() will give a negative answer with a high resolution timer and a
    > meaningless answer with a low resolution timer, where "high" and "low"
    > are relative to the time taken for the function call, so you will pick
    > the high resolution one most of the time because the meaningless
    > answer is ZERO (no tick, no change). Some small fraction of the time
    > the low resolution timer will have a tick between the two calls and
    > you will get the wrong answer (-big < -small).


    If the difference is between two high resolution timers then it will be
    good enough. I think the time between two consecutive func() calls is
    probably low enough to rule out low resolution timers.


    > In the case of two
    > "low" resolution timers, both will give a meaningless answer and you
    > will choose arbitrarily.


    In the case of two low resolution timers, it will use time.time. In this
    case I probably need to raise an exception. My program won't work
    correctly with a low resolution timer.

    Thanks for the feedback, I will try to find something more dependable.

    Ron
     
    Ron Adam, Jan 14, 2008
    #4
  6. Ron Adam

    Ron Adam Guest

    Fredrik Lundh wrote:
    > John Machin wrote:
    >
    >> AFAICT that was enough indication for most people to use time.clock on
    >> all platforms ...

    >
    > which was unfortunate, given that time.clock() isn't even a proper clock
    > on most Unix systems; it's a low-resolution sample counter that can
    > happily assign all time to a process that uses, say, 2% CPU and zero
    > time to one that uses 98% CPU.
    >
    > > before the introduction of the timeit module; have you considered it?

    >
    > whether or not "timeit" suits his requirements, he can at least replace
    > his code with
    >
    > clock = timeit.default_timer
    >
    > which returns a good wall-time clock (which happens to be time.time() on
    > Unix and time.clock() on Windows).



    Thanks for the suggestion Fredrik, I looked at timeit and it does the
    following.


    import sys
    import time

    if sys.platform == "win32":
        # On Windows, the best timer is time.clock()
        default_timer = time.clock
    else:
        # On most other platforms the best timer is time.time()
        default_timer = time.time



    I was hoping I could determine which to use by the values returned. But
    maybe that isn't as easy as it seems.


    Ron
     
    Ron Adam, Jan 14, 2008
    #6
  8. Ron Adam

    Guest

    """
    <snipped>
    time.clock() isn't high enough resolution for Ubuntu, and time.time()
    isn't
    high enough resolution on windows.

    Take a look at datetime. It is good to the micro-second on Linux and
    milli-second on Windows.
    """

    import datetime

    begin_time = datetime.datetime.now()
    for j in range(100000):
        x = j + 1  # wait a small amount of time
    print "Elapsed time =", datetime.datetime.now() - begin_time

    ## You can also access the individual time values
    print begin_time.second
    print begin_time.microsecond  ## etc.
     
    , Jan 14, 2008
    #8
  9. wrote:
    > """
    > <snipped>
    > time.clock() isn't high enough resolution for Ubuntu, and time.time()
    > isn't high enough resolution on Windows.
    >
    > Take a look at datetime. It is good to the micro-second on Linux and
    > milli-second on Windows.


    datetime.datetime.now() does the same thing as time.time(); it uses the
    gettimeofday() API on platforms that have it (and so does time.time()),
    and calls the fallback implementation in time.time() if gettimeofday()
    isn't supported. From the datetime sources:

    #ifdef HAVE_GETTIMEOFDAY
        struct timeval t;
    #ifdef GETTIMEOFDAY_NO_TZ
        gettimeofday(&t);
    #else
        gettimeofday(&t, (struct timezone *)NULL);
    #endif
        ...
    #else /* ! HAVE_GETTIMEOFDAY */

        /* No flavor of gettimeofday exists on this platform. Python's
         * time.time() does a lot of other platform tricks to get the
         * best time it can on the platform, and we're not going to do
         * better than that (if we could, the better code would belong
         * in time.time()!) We're limited by the precision of a double,
         * though.
         */

    (note the "if we could" part).

    </F>
     
    Fredrik Lundh, Jan 14, 2008
    #9
  10. John Machin

    John Machin Guest

    On Jan 15, 4:50 am, wrote:
    > """
    > <snipped>
    > time.clock() isn't high enough resolution for Ubuntu, and time.time()
    > isn't high enough resolution on Windows.
    >
    > Take a look at datetime. It is good to the micro-second on Linux and
    > milli-second on Windows.
    > """


    On Windows, time.clock has MICROsecond resolution, but your method
    appears to have exactly the same (MILLIsecond) resolution as
    time.time, but with greater overhead, especially when the result is
    required in seconds-and-a-fraction as a float:

    >>> def datetimer(start=datetime.datetime(1970,1,1,0,0,0), nowfunc=datetime.datetime.now):
    ...     delta = nowfunc() - start
    ...     return delta.days * 86400 + delta.seconds + delta.microseconds / 1000000.0
    ...
    >>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
    ['1200341583.484', '1200381183.484', '39600.0']
    >>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
    ['1200341596.484', '1200381196.484', '39600.0']
    >>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
    ['1200341609.4530001', '1200381209.4530001', '39600.0']
    >>> tt = time.time(); td = datetimer(); diff = td - tt; print map(repr, (tt, td, diff))
    ['1200341622.562', '1200381222.562', '39600.0']
    >>>


    The difference of 39600 seconds (11 hours) would be removed by using
    datetime.datetime.utcnow.

    >
    > import datetime
    > begin_time = datetime.datetime.now()
    > for j in range(100000):
    >     x = j + 1  # wait a small amount of time
    > print "Elapsed time =", datetime.datetime.now() - begin_time
    >
    > ## You can also access the individual time values
    > print begin_time.second
    > print begin_time.microsecond  ## etc.


    Running that on my Windows system (XP Pro, Python 2.5.1, AMD Turion 64
    Mobile cpu rated at 2.0 GHz), I get
    Elapsed time = 0:00:00.031000
    or
    Elapsed time = 0:00:00.047000
    Using 50000 iterations, I get it down to 15 or 16 milliseconds. 15 ms
    is the lowest non-zero interval that can be observed.

    This is consistent with results obtained by using time.time.

    Approach: get first result from timer function; call timer in a tight
    loop until returned value changes; ignore the first difference so
    found and save the next n differences.

    Windows time.time appears to tick at 15 or 16 ms intervals, averaging
    about 15.6 ms. For comparison, Windows time.clock appears to tick at
    about 2.3 MICROsecond intervals.
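    The measurement approach described above can be sketched as follows
    (the function name and sample count are illustrative, not from the
    original post):

    ```python
    import time

    def measure_tick(timer, samples=5):
        # Discard the first, partial interval: spin until the returned
        # value first changes.
        prev = timer()
        while timer() == prev:
            pass
        prev = timer()
        # Record the next `samples` tick-to-tick differences.
        deltas = []
        while len(deltas) < samples:
            now = timer()
            if now != prev:
                deltas.append(now - prev)
                prev = now
        # The smallest observed difference approximates the tick interval.
        return min(deltas)

    print("time.time ticks about every %g s" % measure_tick(time.time))
    ```
    
    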

    Finally, some comments from the Python 2.5.1 datetimemodule.c:

    /* No flavor of gettimeofday exists on this platform. Python's
     * time.time() does a lot of other platform tricks to get the
     * best time it can on the platform, and we're not going to do
     * better than that (if we could, the better code would belong
     * in time.time()!) We're limited by the precision of a double,
     * though.
     */

    HTH,
    John
     
    John Machin, Jan 14, 2008
    #10
