timeGetTime


marcus

how come this snippet produces output like:
12297359.000000
12297359.000000
12297375.000000
12297375.000000
12297390.000000
12297390.000000
12297406.000000
 

Victor Bazarov

marcus said:
why does it sometimes claim that no time has passed and sometimes
claim that about 16 milliseconds has passed?

Because that's the *granularity* with which the system clock runs on
your machine.
It should have millisecond

Should? Who said that?
resolution (OK, I know Windows is not a realtime operating
system, but I think it should do better than this; I was
planning to do some profiling on my code)

Don't use it for profiling. Use something else (QueryPerformanceCounter,
for example). Many systems have better ways to keep time; you just need
to research your particular system, and for that you need to post to
a newsgroup that deals with your system: comp.os.ms-windows.programmer,
in your case.
 

Karl Heinz Buchegger

marcus said:
how come this snippet produces output like:
12297359.000000
12297359.000000
12297375.000000
12297375.000000
12297390.000000
12297390.000000
12297406.000000
.
.

while (1)
    printf("%f\n", timeGetTime());

why does it sometimes claim that no time has passed and sometimes
claim that about 16 milliseconds has passed?

Because the time-counting chip in your computer has
a resolution of only 16 milliseconds?
It should have millisecond
resolution

It has millisecond resolution.
All the time values are in milliseconds.
But nowhere does the documentation say that the value will
increase in steps of 1 millisecond :)
(OK, I know Windows is not a realtime operating
system,

'realtime operating system' has nothing to do with it.
but I think it should do better than this; I was
planning to do some profiling on my code)

You can do it.
Execute the code in question 16 times and divide the resulting
time by 16, and you get an accuracy of 1 millisecond. (Well,
sort of: provided the process didn't get swapped out or interrupted
and nothing else is running on your machine besides
your program. You get the idea.)
 

Maett

On 19 Apr 2005 10:03:32 -0700, marcus said:
how come this snippet produces output like:
12297359.000000
12297359.000000
12297375.000000
12297375.000000
12297390.000000
12297390.000000
12297406.000000
.
.

while (1)
    printf("%f\n", timeGetTime());

why does it sometimes claim that no time has passed and sometimes
claim that about 16 milliseconds has passed? It should have millisecond
resolution (OK, I know Windows is not a realtime operating
system, but I think it should do better than this; I was
planning to do some profiling on my code)

The basic Windows clock tick is usually 10 ms (on a PC without Intel hyperthreading) or 15.625 ms (on a PC with Intel hyperthreading). But since the time value you used increments in multiples of 1 ms, you observe increments of 15 or 16 ms.
 

Keith Thompson

how come this snippet produces output like:
12297359.000000
12297359.000000
12297375.000000
12297375.000000
12297390.000000
12297390.000000
12297406.000000
.
.

while (1)
    printf("%f\n", timeGetTime());

There is no standard C (or C++, as far as I know) function called
timeGetTime. You should ask in a newsgroup dedicated to whatever
system you're using (assuming the documentation doesn't answer your
question).

(It's probably an issue involving the underlying resolution of
whatever time information timeGetTime() accesses.)
 

marbac

marcus said:
why does it sometimes claim that no time has passed and sometimes
claim that about 16 milliseconds has passed? It should have millisecond
resolution (OK, I know Windows is not a realtime operating
system, but I think it should do better than this; I was
planning to do some profiling on my code)

If you plan to measure the speed of one algorithm compared to another:

http://www.math.uwaterloo.ca/~jamuir/rdtscpm1.pdf

In my environment (Linux/gcc) the function rdtscll is located in <asm/msr.h>;
I can imagine that there is something equivalent on your system.

regards marbac
 

Randy Howard

If you plan to measure the speed of one algorithm compared to another:

http://www.math.uwaterloo.ca/~jamuir/rdtscpm1.pdf

In my environment (Linux/gcc) the function rdtscll is located in <asm/msr.h>;
I can imagine that there is something equivalent on your system.

Fundamentally broken on SMP systems, and perhaps dual-core as well.
This is also OT for both of these groups you have posted it to.
 

Dag-Erling Smørgrav

Randy Howard said:
[RDTSC is f]undamentally broken on SMP systems, and perhaps
dual-core as well.

Not necessarily; some SMP motherboards synchronize the TSCs of all
CPUs. I would expect that multi-core CPUs would have a single shared
TSC, or individual but synchronized TSCs for each core.

DES
 

Randy Howard

Randy Howard said:
[RDTSC is f]undamentally broken on SMP systems, and perhaps
dual-core as well.

Not necessarily; some SMP motherboards synchronize the TSCs of all
CPUs.

Since not all of them do (most I've seen do not, and I've worked on
dozens of different IA32 SMP platforms), that is worthless
information, since you cannot rely upon it.
I would expect that multi-core CPUs would have a single shared
TSC, or individual but synchronized TSCs for each core.

I haven't tried it on a dual-core system yet, but hopefully they
will do so. Even so, it doesn't do much good for those who
intend to rely on rdtsc for timing, as not all platforms will
work as expected.
 
