timeGetTime

Discussion in 'C Programming' started by marcus, Apr 19, 2005.

  1. marcus

    marcus Guest

how come this snippet produces output like:
    12297359.000000
    12297359.000000
    12297375.000000
    12297375.000000
    12297390.000000
    12297390.000000
    12297406.000000
    .
    .

    while (1)
        printf("%f\n", (double)timeGetTime());

    why does it sometimes claim that no time has passed and sometimes
    claim that about 16 milliseconds has passed? It should have millisecond
    resolution (OK, I know Windows is not a realtime operating
    system, but I still think it should do better than this; I was
    planning to do some profiling on my code)
    marcus, Apr 19, 2005
    #1

  2. marcus wrote:
    > why does it sometimes claim that no time has passed and sometimes
    > claim that about 16 milliseconds has passed?


    Because that's the *granularity* with which the system clock runs on
    your machine.

    > It should be millisecond


    Should? Who said that?

    > resolution on it (ok I know windows is not a realtime operating
    > system, but anyway I think it should do better than this I was
    > planning to do some profiling on my code)


    Don't use it for profiling. Use something else (QueryPerformanceCounter,
    for example). Many systems have better ways to keep time; you just need
    to research your particular system, and for that you should post to
    a newsgroup that deals with your system: comp.os.ms-windows.programmer,
    in your case.
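    For example, the same start/stop timing pattern looks like this. (A
    portable C sketch: QueryPerformanceCounter itself needs <windows.h>,
    so the C11 timespec_get stands in for it here, and the loop being
    timed is just placeholder work.)

    ```c
    #include <stdio.h>
    #include <time.h>

    /* Milliseconds between two timespec readings. On Windows you would
       take the difference of two QueryPerformanceCounter values and
       divide by QueryPerformanceFrequency instead. */
    static double elapsed_ms(const struct timespec *a, const struct timespec *b)
    {
        return (b->tv_sec - a->tv_sec) * 1000.0
             + (b->tv_nsec - a->tv_nsec) / 1e6;
    }

    int main(void)
    {
        struct timespec start, end;
        volatile double sink = 0.0;

        timespec_get(&start, TIME_UTC);
        for (int i = 0; i < 1000000; i++)   /* placeholder for the code being profiled */
            sink += i * 0.5;
        timespec_get(&end, TIME_UTC);

        printf("elapsed: %f ms\n", elapsed_ms(&start, &end));
        return 0;
    }
    ```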
    Victor Bazarov, Apr 19, 2005
    #2

  3. marcus wrote:
    >
    > how come this snippet has got an output like:
    > 12297359.000000
    > 12297359.000000
    > 12297375.000000
    > 12297375.000000
    > 12297390.000000
    > 12297390.000000
    > 12297406.000000
    > .
    > .
    >
    > while (1)
    > printf("%f\n", timeGetTime());
    >
    > why does it sometimes claim that no time has passed and sometimes
    > claim that about 16 milliseconds has passed?


    Because the time-counting chip in your computer has
    a resolution of only about 16 milliseconds?

    > It should be millisecond
    > resolution on it


    It has millisecond resolution:
    all the time values are in units of milliseconds.
    But nowhere does the documentation say that the value will
    increase in steps of 1 millisecond :)

    > (ok I know windows is not a realtime operating
    > system,


    'realtime operating system' has nothing to do with it.

    > but anyway I think it should do better than this I was
    > planning to do some profiling on my code)


    You can do it.
    Execute the code in question 16 times and divide the resulting
    time by 16, and you get an accuracy of roughly 1 millisecond. (Well,
    sort of: provided the process didn't get swapped out or interrupted,
    and nothing else is happening on your machine besides running
    your program. You get the idea.)
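    A rough sketch of that averaging idea, using the portable clock()
    from <time.h> in place of timeGetTime (the work() function is just
    a placeholder for the code being profiled):

    ```c
    #include <stdio.h>
    #include <time.h>

    #define REPS 16  /* run the code under test this many times */

    /* Placeholder for the code being profiled. */
    static void work(void)
    {
        volatile double sink = 0.0;
        for (int i = 0; i < 100000; i++)
            sink += i * 0.5;
    }

    int main(void)
    {
        clock_t start = clock();
        for (int r = 0; r < REPS; r++)
            work();
        clock_t end = clock();

        /* Total CPU time in ms, divided by the repeat count. */
        double total_ms = (end - start) * 1000.0 / CLOCKS_PER_SEC;
        printf("per-run: %f ms\n", total_ms / REPS);
        return 0;
    }
    ```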

    --
    Karl Heinz Buchegger
    Karl Heinz Buchegger, Apr 19, 2005
    #3
  5. marcus

    Maett Guest

    On 19 Apr 2005 10:03:32 -0700, marcus <> wrote:

    > how come this snippet has got an output like:
    > 12297359.000000
    > 12297359.000000
    > 12297375.000000
    > 12297375.000000
    > 12297390.000000
    > 12297390.000000
    > 12297406.000000
    > .
    > .
    >
    > while (1)
    > printf("%f\n", timeGetTime());
    >
    > why does it sometimes claim that no time has passed and sometimes
    > claim that about 16 milliseconds has passed? It should be millisecond
    > resolution on it (ok I know windows is not a realtime operating
    > system, but anyway I think it should do better than this I was
    > planning to do some profiling on my code)
    >


    The basic Windows clock tick is usually 10 ms (on a PC without Intel hyperthreading) or 15.625 ms (on a PC with it). Since the time value you used increments in multiples of 1 ms, you observe increments of 15 or 16 ms.
    Maett, Apr 19, 2005
    #5
  6. (marcus) writes:
    > how come this snippet has got an output like:
    > 12297359.000000
    > 12297359.000000
    > 12297375.000000
    > 12297375.000000
    > 12297390.000000
    > 12297390.000000
    > 12297406.000000
    > .
    > .
    >
    > while (1)
    > printf("%f\n", timeGetTime());


    There is no standard C (or C++, as far as I know) function called
    timeGetTime. You should ask in a newsgroup dedicated to whatever
    system you're using (assuming the documentation doesn't answer your
    question).

    (It's probably an issue involving the underlying resolution of
    whatever time information timeGetTime() accesses.)

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
    Keith Thompson, Apr 19, 2005
    #6
  7. marcus

    marbac Guest

    marcus wrote:

    >
    > why does it sometimes claim that no time has passed and sometimes
    > claim that about 16 milliseconds has passed? It should be millisecond
    > resolution on it (ok I know windows is not a realtime operating
    > system, but anyway I think it should do better than this I was
    > planning to do some profiling on my code)


    If you plan to measure the speed of one algorithm compared to another:

    http://www.math.uwaterloo.ca/~jamuir/rdtscpm1.pdf

    In my environment (Linux/gcc) the function rdtscll is located in <asm/msr.h>.
    I imagine there is something equivalent on your system.
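    For example, with GCC or Clang on x86 the __rdtsc intrinsic from
    <x86intrin.h> reads the same time-stamp counter as rdtscll (the busy
    loop here is just placeholder work to measure):

    ```c
    #include <stdio.h>
    #include <x86intrin.h>  /* __rdtsc intrinsic (GCC/Clang, x86 only) */

    int main(void)
    {
        unsigned long long t1 = __rdtsc();

        volatile double sink = 0.0;     /* placeholder for the code being measured */
        for (int i = 0; i < 100000; i++)
            sink += i * 0.5;

        unsigned long long t2 = __rdtsc();
        printf("cycles: %llu\n", t2 - t1);
        return 0;
    }
    ```

    Note that this counts CPU cycles, not wall-clock time, so you have to
    divide by the clock rate yourself if you want seconds.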

    regards marbac
    marbac, Apr 19, 2005
    #7
  8. marcus

    Randy Howard Guest

    In article <6Ze9e.11779$>,
    says...
    >
    > If you plan to measure the speed of one algorithm compared to another:
    >
    > http://www.math.uwaterloo.ca/~jamuir/rdtscpm1.pdf
    >
    > In my environment (Linux/gcc) the function rdtscll is located in <asm/msr.h>
    > I can imagine that there is something equivalent in your system.


    Fundamentally broken on SMP systems, and perhaps dual-core as well.
    This is also OT for both of these groups you have posted it to.

    --
    Randy Howard (2reply remove FOOBAR)
    "Making it hard to do stupid things often makes it hard
    to do smart ones too." -- Andrew Koenig
    Randy Howard, Apr 19, 2005
    #8
  9. Randy Howard <> writes:
    > [RDTSC is f]undamentally broken on SMP systems, and perhaps
    > dual-core as well.


    Not necessarily; some SMP motherboards synchronize the TSCs of all
    CPUs. I would expect that multi-core CPUs would have a single shared
    TSC, or individual but synchronized TSCs for each core.

    DES
    --
    Dag-Erling Smørgrav -
    =?iso-8859-1?q?Dag-Erling_Sm=F8rgrav?=, Apr 20, 2005
    #9
  10. marcus

    Randy Howard Guest

    In article <>, says...
    > Randy Howard <> writes:
    > > [RDTSC is f]undamentally broken on SMP systems, and perhaps
    > > dual-core as well.

    >
    > Not necessarily; some SMP motherboards synchronize the TSCs of all
    > CPUs.


    Since not all of them do (most I've seen do not, and I've worked on
    dozens of different IA32 SMP platforms), that information is of little
    use, since you cannot rely upon it.

    > I would expect that multi-core CPUs would have a single shared
    > TSC, or individual but synchronized TSCs for each core.


    I haven't tried it on a dual-core system yet, but hopefully they
    will do so. Even so, it doesn't do much good for those who
    intend to rely on rdtsc for timing, since not all platforms will
    work as expected.

    --
    Randy Howard (2reply remove FOOBAR)
    "Making it hard to do stupid things often makes it hard
    to do smart ones too." -- Andrew Koenig
    Randy Howard, Apr 20, 2005
    #10
