Mike Wahler said:
I understand it to be saying that the type of the value
returned by your standard library implementation's 'clock()'
function (type 'clock_t') can only represent time intervals
of up to approximately 36 minutes before "wrapping" from its
maximum value back to zero and starting to count up again.
The specific underlying types and the values involved
are implementation-defined. I use "your" as an abbreviation
for "your implementation's" below:
This means that the resolution of your 'clock()' function
combined with the range of your type 'clock_t' results in
a maximum representable interval of 2147 seconds.
This suggests to me that your 'clock_t' type is a 32-bit
integer type, using 31 'value' bits, and one bit to indicate the
sign for the -1 'error value', and that your 'CLOCKS_PER_SEC'
macro is defined as 1000000 (one million), since the highest
value representable with 31 bits is 2147483647, and
2147483647 / 1000000 == 2147 (using integer division).
This agrees with the statement above about 'defined in
microseconds.' 2147 / 60 == 35.78, or about 36 minutes.
As I say, the specifics of this stuff are
implementation-defined, and the above is my deduction. To determine
for sure the exact limitations and behavior, you should
write a small test program that uses 'sizeof(clock_t)',
'CHAR_BIT', and 'CLOCKS_PER_SEC' to determine the exact
values your implementation uses.
I believe in your case the above test program would probably
be only academic, though, because:
About your specific question:
If you store your elapsed time in a type 'clock_t'
object then yes, your implementation limits this value
to about 36 minutes before it "wraps" around back
to zero.
But you need not store the value in a type 'clock_t'
object. Use a type with a range large enough for
your anticipated needs, e.g. type 'double'. The required
range of type 'double' is significantly greater than
that of a 31-bit integer value.
double start = (double)clock();
double elapsed = 0;
/* etc */
elapsed = (double)clock() - start;
/* convert ticks to seconds */
elapsed /= (double)CLOCKS_PER_SEC;
After again reviewing the ISO standard, I have less
confidence in my previous conclusion. As a matter
of fact, the more I think about it, I don't think
it's valid at all.
Here is everything the standard has to say about 'clock()':
(specifically, note 7.23.1 / 4)
<begin ISO 9899 quote>
7.23 Date and time <time.h>
7.23.1 Components of time
[...]
2 The macros defined are NULL (described in 7.17); and
CLOCKS_PER_SEC
which expands to a constant expression with type clock_t (described
below) that is the number per second of the value returned by the
clock function.
3 The types declared are size_t (described in 7.17);
clock_t
and
time_t
which are arithmetic types capable of representing times; and
struct tm
which holds the components of a calendar time, called the
broken-down time.
4 The range and precision of times representable in clock_t
and time_t are implementation-defined.
[...]
7.23.2.1 The clock function
Synopsis
1 #include <time.h>
clock_t clock(void);
Description
2 The clock function determines the processor time used.
Returns
3 The clock function returns the implementation’s best approximation
to the processor time used by the program since the beginning of an
implementation-defined era related only to the program invocation.
To determine the time in seconds, the value returned by the clock
function should be divided by the value of the macro CLOCKS_PER_SEC.
If the processor time used is not available or its value cannot be
represented, the function returns the value (clock_t)(-1). (266)
(266) In order to measure the time spent in a program, the clock
function should be called at the start of the program and its
return value subtracted from the value returned by subsequent
calls.
<end ISO 9899 quote>
Contrast (from above):
4 The range and precision of times representable in clock_t
and time_t are implementation-defined.
with the quotation of your man page:
"...Because of this, the value returned [by clock()] will
wrap around after accumulating only 2147 seconds of CPU time
(about 36 minutes)."
It is not clear to me whether only the actual value returned
will wrap, with some larger range used internally by
clock(), or whether the internal range itself has the
stated limitation.
The ISO standard is imo equally vague, only mentioning
"the range and precision of times representable in
clock_t and time_t", saying nothing about the type
clock() uses internally to keep time between invocations
or the range of that type.
7.23.2.1 / 3 talks about an "implementation-defined era"
but does not seem to say what unit of measurement is used
to specify this "era". I'm assuming that falls under
'implementation-defined' as well.
So lacking (imo) definitive knowledge about this, it looks
like we'd need an algorithm to figure out how far the
value has 'wrapped' each time, and make adjustments,
accumulating the adjusted values. Or use some other
measurement tool with a range sufficient for your needs.
'time()' probably gives a much larger range, but probably
also a coarser resolution, so if you don't need the resolution
provided by 'clock()' that might be an option.
Perhaps someone else can give a more definitive answer
and address my uncertainties stated above.
Maybe I'm just too tired right now and simply cannot
see the obvious; it wouldn't be the first time.
-Mike