Using clock() to measure run time

Charles M. Reinke

I'm using the function clock() to measure the run time of a program so that
I can compare among several different algorithms. My code looks like:

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <time.h>

int main() {
    clock_t start, stop;
    double t = 0.0;

    /* Start timer */
    assert((start = clock()) != -1);

    /* Do lotsa fancy calculations */

    /* Stop timer */
    stop = clock();
    t = (double)(stop - start) / CLOCKS_PER_SEC;

    printf("Run time: %f\n", t);

    return(0);
} /* main */

The question is, does this give me the "real life" time that passes while
the process is executing, or just the processor time actually used by this
process? Put another way, if I run the exact same code when the machine is
"idle" and again when the processor is being shared by a bunch of other
processes, will the above give me *roughly* the same results or a
significantly longer time for the latter case?

Thanx all!

--

Charles M. Reinke
Georgia Institute of Technology
School of Electrical and Computer Engineering
(404) 385-2579
 
Jens.Toerring

Charles M. Reinke said:
I'm using the function clock() to measure the run time of a program so that
I can compare among several different algorithms. My code looks like:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <time.h>
int main() {
    clock_t start, stop;
    double t = 0.0;
    /* Start timer */
    assert((start = clock()) != -1);
    /* Do lotsa fancy calculations */
    /* Stop timer */
    stop = clock();
    t = (double)(stop - start) / CLOCKS_PER_SEC;
    printf("Run time: %f\n", t);
    return(0);
} /* main */
The question is, does this give me the "real life" time that passes while
the process is executing, or just the processor time actually used by this
process? Put another way, if I run the exact same code when the machine is
"idle" and again when the processor is being shared by a bunch of other
processes, will the above give me *roughly* the same results or a
significantly longer time for the latter case?

clock() is supposed to give the CPU time, not the wall-clock time, so it
should not matter (too much) whether the machine is under load or not. Here
are the relevant sentences from the C89 standard:

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of an
implementation-defined era related only to the program invocation. To
determine the time in seconds, the value returned by the clock
function should be divided by the value of the macro CLK_TCK. If the
processor time used is not available or its value cannot be
represented, the function returns the value (clock_t)-1.

Please note that here you're asked to divide by CLK_TCK, not
CLOCKS_PER_SEC. The C99 standard instead tells you to divide by
CLOCKS_PER_SEC, and if it is defined I would prefer it over CLK_TCK,
since it is also the one required by the (newer) POSIX standard.
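If you want to see the difference for yourself, a quick (untested) sketch
along these lines should do it -- note that sleep() is POSIX, not standard
C, so substitute whatever delay your platform offers. While the program just
waits, clock() barely advances, but time() keeps ticking:

#include <stdio.h>
#include <time.h>
#include <unistd.h>   /* sleep() -- POSIX, not standard C */

int main(void)
{
    clock_t c0, c1;
    time_t t0, t1;

    c0 = clock();       /* CPU time used so far */
    t0 = time(NULL);    /* wall-clock time */

    sleep(2);           /* burns (almost) no CPU time */

    c1 = clock();
    t1 = time(NULL);

    /* "cpu" should be close to 0, "wall" close to 2 seconds */
    printf("cpu:  %f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("wall: %f s\n", difftime(t1, t0));
    return 0;
}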

Regards, Jens
 
Gordon Burditt

I'm using the function clock() to measure the run time of a program so that
I can compare among several different algorithms. My code looks like:

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <time.h>

int main() {
    clock_t start, stop;
    double t = 0.0;

    /* Start timer */
    assert((start = clock()) != -1);

    /* Do lotsa fancy calculations */

    /* Stop timer */
    stop = clock();
    t = (double)(stop - start) / CLOCKS_PER_SEC;

    printf("Run time: %f\n", t);

    return(0);
} /* main */

The question is, does this give me the "real life" time that passes while
the process is executing, or just the processor time actually used by this
process?

clock() is supposed to give an approximation to processor time used.
On a uni-tasking machine (e.g. MS-DOS), it may give something more
like wall clock time. On a multi-processor machine it may run
faster than real time. In that sense, clock() is like an employee
time clock (or the game clock in basketball, football, or whatever):
it runs only some of the time, is likely to only run 8 hours on a
weekday, and if you've got multiple employees, they may rack up
more than 24 hours of work in 1 day.

Put another way, if I run the exact same code when the machine is
"idle" and again when the processor is being shared by a bunch of other
processes, will the above give me *roughly* the same results or a
significantly longer time for the latter case?

Generally, I'd say yes: assuming the OS makes an attempt to measure
actual CPU time (UNIX does, and I think recent versions of Windows
do also, assuming the C implementation uses those features), you get
roughly the same results, subject to things like task switching
polluting the cache, so a run with a lot of task switches has many
more cache misses. There is also the issue of which process the
task switch itself gets "charged" to.
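If you want to see the "more than 24 hours of work in 1 day" effect
directly, here is a rough sketch (POSIX threads, so not standard C, and
whether clock() sums the CPU time of all threads is up to the
implementation -- on Linux/glibc it does). With two threads busy-looping,
clock() should report roughly twice the elapsed wall time; compile with
something like cc -pthread:

#include <stdio.h>
#include <time.h>
#include <pthread.h>   /* POSIX threads -- not standard C */

/* Spin for a fixed number of iterations just to burn CPU time. */
static void *burn(void *arg)
{
    volatile unsigned long i;
    (void)arg;
    for (i = 0; i < 200000000UL; i++)
        ;
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    clock_t c0, c1;
    time_t t0, t1;

    c0 = clock();
    t0 = time(NULL);

    pthread_create(&a, NULL, burn, NULL);
    pthread_create(&b, NULL, burn, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    c1 = clock();
    t1 = time(NULL);

    /* With two free cores, "cpu" should be roughly twice "wall". */
    printf("cpu:  %f s\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("wall: %f s\n", difftime(t1, t0));
    return 0;
}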

Gordon L. Burditt
 
Charles M. Reinke

Gordon Burditt said:
clock() is supposed to give an approximation to processor time used.
On a uni-tasking machine (e.g. MS-DOS), it may give something more
like wall clock time. On a multi-processor machine it may run
faster than real time. In that sense, clock() is like an employee
time clock (or the game clock in basketball, football, or whatever):
it runs only some of the time, is likely to only run 8 hours on a
weekday, and if you've got multiple employees, they may rack up
more than 24 hours of work in 1 day.


Generally, I'd say yes: assuming the OS makes an attempt to measure
actual CPU time (UNIX does, and I think recent versions of Windows
do also, assuming the C implementation uses those features), you get
roughly the same results, subject to things like task switching
polluting the cache, so a run with a lot of task switches has many
more cache misses. There is also the issue of which process the
task switch itself gets "charged" to.

Gordon L. Burditt

Your explanation was very helpful. I'm using Linux on a dual-processor
machine, so I think I should be OK.

-Charles
 
Charles M. Reinke

clock() is supposed to give the CPU time, not the wall-clock time, so it
should not matter (too much) whether the machine is under load or not. Here
are the relevant sentences from the C89 standard:

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of an
implementation-defined era related only to the program invocation. To
determine the time in seconds, the value returned by the clock
function should be divided by the value of the macro CLK_TCK. If the
processor time used is not available or its value cannot be
represented, the function returns the value (clock_t)-1.

Please note that here you're asked to divide by CLK_TCK, not
CLOCKS_PER_SEC. The C99 standard instead tells you to divide by
CLOCKS_PER_SEC, and if it is defined I would prefer it over CLK_TCK,
since it is also the one required by the (newer) POSIX standard.

Regards, Jens

Thanx, this is exactly what I was looking for. BTW, I tried CLK_TCK in
place of CLOCKS_PER_SEC and got something that looked more like microseconds
rather than sec. I think I'll stick with CLOCKS_PER_SEC, since I don't
really need that much precision for processes that take 28+ min. (in
parallel over 4 processors) to run. :^)

-Charles
 
Keith Thompson

Charles M. Reinke said:

Thanx, this is exactly what I was looking for. BTW, I tried CLK_TCK in
place of CLOCKS_PER_SEC and got something that looked more like microseconds
rather than sec. I think I'll stick with CLOCKS_PER_SEC, since I don't
really need that much precision for processes that take 28+ min. (in
parallel over 4 processors) to run. :^)

Watch out for overflow. If clock_t is a 32-bit signed integer type
and CLOCKS_PER_SEC is 1000000 (1e6), then it will overflow in less
than 36 minutes; assuming signed integer overflow wraps around, you
can get repeated results in less than 72 minutes. (The arithmetic:
2^31 ticks at 10^6 ticks per second is about 2147 seconds, or just
under 36 minutes; the full 2^32 range is about 4295 seconds, just
under 72 minutes.)

This program will show the characteristics of the clock_t type.

#include <time.h>
#include <stdio.h>
#include <limits.h>

int main(void)
{
    int bits = sizeof(clock_t) * CHAR_BIT;
    if ((clock_t)1 / 2 > 0) {
        printf("clock_t is a %d-bit floating-point type\n", bits);
    }
    else {
        printf("clock_t is a %d-bit %s integer type\n",
               bits, (clock_t)-1 < 0 ? "signed" : "unsigned");
    }
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}
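If the overflow ever becomes a real nuisance for those 28+ minute runs, one
option outside standard C (a sketch only, for POSIX systems such as Linux)
is times() from <sys/times.h>: its tick rate, sysconf(_SC_CLK_TCK), is
commonly 100, so even a 32-bit counter lasts for months of CPU time rather
than minutes.

#include <stdio.h>
#include <unistd.h>        /* sysconf() -- POSIX */
#include <sys/times.h>     /* times(), struct tms -- POSIX */

int main(void)
{
    struct tms start, stop;
    long ticks_per_sec = sysconf(_SC_CLK_TCK);

    times(&start);

    /* Do lotsa fancy calculations */

    times(&stop);

    /* tms_utime is user CPU time; add tms_stime if system time matters too */
    printf("CPU time: %f s\n",
           (double)(stop.tms_utime - start.tms_utime) / ticks_per_sec);
    return 0;
}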
 
Charles M. Reinke

Keith Thompson said:
Watch out for overflow. If clock_t is a 32-bit signed integer type
and CLOCKS_PER_SEC is 1000000 (1e6), then it will overflow in less
than 36 minutes; assuming signed integer overflow wraps around, you
can get repeated results in less than 72 minutes.

Keith Thompson (The_Other_Keith) (e-mail address removed)
San Diego Supercomputer Center <*>
We must do something. This is something. Therefore, we must do this.

Thanx for the warning. I've already experienced overflow when running on
only 1 processor, but it gave a negative number so I knew what was going on
immediately. I didn't know how many times it had wrapped around, though (and
it wasn't worth the effort to code around the problem), so I just threw
those results out.

<OT>
BTW, I noticed you're at SDSC--I'll be at ORNL this summer to test the 3D
parallel version of our code on their machines (current version is 2D).
It'll be *fun* to get off the beowulf cluster and work with a "real"
supercomputer for a while.
</OT>

-Charles
 
