Big difference in time spent between two runs

xianwei

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main ( int argc, char *argv[] )
{
    long i = 10000000L;
    clock_t start, end;
    double duration;

    printf("Time to do %ld loops: ", i);
    start = clock();
    while (i--)
        ;    /* empty busy loop; an optimizing compiler may remove it */
    end = clock();

    duration = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%f seconds\n", duration);

    return EXIT_SUCCESS;
} /* ---------- end of function main ---------- */

I ran the above program.
The first time it took 0.031000 seconds.
The second time it took 0.015000 seconds.
If I try again and again, the time spent is always either 0.031 or
0.015 seconds.
Why is there such a big difference?

Thank you!!!
 
CBFalconer

xianwei said:
.... snip ...

I ran the above program.
The first time it took 0.031000 seconds.
The second time it took 0.015000 seconds.
If I try again and again, the time spent is always either 0.031 or
0.015 seconds. Why is there such a big difference?

Because the resolution of your clock is obviously roughly 0.0155 s.
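
You can check that for yourself with a quick probe like this (a rough
sketch; printing five ticks is an arbitrary choice):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t prev, now;
    int seen = 0;

    prev = clock();
    /* spin on clock() and print the gap between successive distinct
       values, i.e. the tick size of this clock on your system */
    while (seen < 5) {
        now = clock();
        if (now != prev) {
            printf("tick: %f seconds\n",
                   (double)(now - prev) / CLOCKS_PER_SEC);
            prev = now;
            seen++;
        }
    }
    return 0;
}

On your system I would expect it to print roughly 0.0155 each time.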
 
Eric Sosman

xianwei said:
.... snip (code) ...

I ran the above program.
The first time it took 0.031000 seconds.
The second time it took 0.015000 seconds.
If I try again and again, the time spent is always either 0.031 or
0.015 seconds.

Have you ever seen a hummingbird, and wondered
how fast its wings flutter? You might try to find
an answer by measuring the time for ten million beats.
So you set up your highly accurate wingbeat counter,
and then you start your timer: an hourglass ...
 
Charlie Gordon

xianwei said:
.... snip (code) ...

I ran the above program.
The first time it took 0.031000 seconds.
The second time it took 0.015000 seconds.
If I try again and again, the time spent is always either 0.031 or
0.015 seconds.
Why is there such a big difference?

I suspect the clock() function on your system has a granularity of around 15
milliseconds. If this is the case, the clock() function will return the
same value for all calls during each 15 millisecond interval. Depending on
when exactly you start your measurements within that interval, a task
lasting less than 15 ms can be "clocked" as lasting 0 ms or 15 ms.
Similarly, one that takes between 15 and 30 ms might be reported as taking
exactly 15 ms or exactly 30 ms.

On top of this granularity issue, you should look into what the clock()
function actually measures. Does it measure elapsed time? Total processor
time? Processor time spent in your program vs. time spent in the system? Or
something else entirely? Your "timings" will also be affected by other
tasks the computer performs while your program executes, and by many other
characteristics of your system (cache memory, bus sharing with I/O
devices, etc.).

For your particular concern, I suggest you try to synchronize your timing
efforts with this small loop:

clock_t last, start;

last = start = clock();
while (start == last) {    /* spin until clock() ticks over */
    start = clock();
}

You should also try to measure longer computations, by repeating them in a
loop or by increasing the constants, as in the sketch below.
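
Here is a rough sketch combining the synchronization loop above with the
repetition idea (REPS and the workload are arbitrary choices, not
recommendations):

#include <stdio.h>
#include <time.h>

#define REPS 10

static void workload(void)
{
    /* volatile so the empty loop is not optimized away */
    volatile long i = 10000000L;
    while (i--)
        ;
}

int main(void)
{
    clock_t last, start, end;
    int r;

    /* synchronize: spin until clock() ticks over, so the measurement
       starts right on a tick edge */
    last = start = clock();
    while (start == last)
        start = clock();

    for (r = 0; r < REPS; r++)
        workload();
    end = clock();

    /* the +/- one-tick error is now spread over REPS runs */
    printf("average: %f seconds per run\n",
           (double)(end - start) / CLOCKS_PER_SEC / REPS);
    return 0;
}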

You should also consider using more accurate timing functions, such as the
non-standard gettimeofday() on Linux.
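
For example, something like this (a sketch only; gettimeofday() is POSIX,
not standard C, and measures wall-clock time with microsecond resolution
on many systems):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t1;
    volatile long i = 10000000L;  /* volatile keeps the loop alive */
    double elapsed;

    gettimeofday(&t0, NULL);
    while (i--)
        ;
    gettimeofday(&t1, NULL);

    elapsed = (t1.tv_sec - t0.tv_sec)
            + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%f seconds (wall clock)\n", elapsed);
    return 0;
}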

You should repeat the tests many, many times and average the results,
discarding extreme values.
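
A sketch of that last idea, with made-up sample values for illustration:

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* sort the samples, drop the `drop` smallest and `drop` largest,
   and average what remains */
static double trimmed_mean(double *samples, size_t n, size_t drop)
{
    double sum = 0.0;
    size_t i;

    qsort(samples, n, sizeof *samples, cmp_double);
    for (i = drop; i < n - drop; i++)
        sum += samples[i];
    return sum / (double)(n - 2 * drop);
}

int main(void)
{
    double t[] = { 0.031, 0.015, 0.016, 0.031, 0.015, 0.016, 0.047 };
    size_t n = sizeof t / sizeof t[0];

    printf("trimmed mean: %f seconds\n", trimmed_mean(t, n, 1));
    return 0;
}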

Effective code profiling is *very* difficult. Drawing conclusions or making
changes based on profiling data is not easy either: what holds on one
architecture does not necessarily hold on another, even a slightly
different one. There is no definitive truth in this domain.
 
xianwei

Have you ever seen a hummingbird, and wondered
how fast its wings flutter? You might try to find
an answer by measuring the time for ten million beats.

Thank you, you are right, I should replace one million with ten
million.
When I do that, the time stays between 0.231 and 0.285 seconds, which I
think is good.

Testing how fast hummingbird wings flutter does not sound like a good
idea!! :)
 
xianwei

"xianwei" <[email protected]> a ¨¦crit dans le message de (e-mail address removed)...


.... snip (code) ...
I ran the above program.
The first time it took 0.031000 seconds.
The second time it took 0.015000 seconds.
If I try again and again, the time spent is always either 0.031 or
0.015 seconds.
Why is there such a big difference?

I suspect the clock() function on your system has a granularity of around 15
milliseconds.
.... snip ...

Thank you for your explanation of the question.

You should repeat the tests many, many times and average the results,
discarding extreme values.

Yes, when I enlarge the loop count, the measured times differ by only a
small amount.
Thank you for your enthusiasm.
 
Eric Sosman

xianwei wrote on 09/25/07 08:42:
Thank you, you are right, I should replace one million with ten
million.
When I do that, the time stays between 0.231 and 0.285 seconds, which I
think is good.

Testing how fast hummingbird wings flutter does not sound like a good
idea!! :)

The point is that the "granularity" of your measuring
instrument influences how precisely you can measure. An
hourglass is fine for measuring durations on the order of,
well, hours, but is not well suited for measuring milliseconds.
There are ways to improve the measurement precision of a
"coarse" clock; one of them is to measure more repetitions
of the activity whose duration interests you.
 
Keith Thompson

CBFalconer said:
xianwei wrote:
... snip ...

Because the resolution of your clock is obviously roughly 0.0155 s.

Most likely 1/60 second, but that's just a semi-educated guess.
 
Peter J. Holzer

PCs can have some peculiar number, tied back to the old XT.

That would be 1/18.2 seconds (or rather 1 / (4.77E6 / 4 / 65536)
seconds). Was CLOCKS_PER_SEC actually a floating point constant on
MS-DOS compilers? I don't remember but I guess it must have been.

hp
 
Charlie Gordon

Peter J. Holzer said:
That would be 1/18.2 seconds (or rather 1 / (4.77E6 / 4 / 65536)
seconds). Was CLOCKS_PER_SEC actually a floating point constant on
MS-DOS compilers? I don't remember but I guess it must have been.

For those who wonder why 18.2 Hz: that makes 64K ticks per hour.
This may be the very reason for the rather odd original PC frequency:
65536 * 65536 * 4 / 3600 = 4.772185 MHz
 
Sjouke Burry

No. CLOCKS_PER_SEC is (mostly) an int.


Watcom compiler:
#define CLOCKS_PER_SEC 100

Microsoft C Compiler Version 2.00.000 V 6.00A
#define CLOCKS_PER_SEC 1000

Borland BC5 however disagrees.
#define CLOCKS_PER_SEC 1000.0

Digital Mars compiler:
#define CLOCKS_PER_SEC ((clock_t)1000)
 
Richard Tobin

This may be the very reason for the rather odd original PC frequency:
65536 * 65536 * 4 / 3600 = 4.772185 MHz

No, that comes from 4/3 of the frequency of an NTSC colour
sub-carrier oscillator. I'm not sure whether the PC had one of these
anyway and reused it, or whether they were just cheap.

-- Richard
 
Charlie Gordon

Richard Tobin said:
No, that comes from 4/3 of the frequency of an NTSC colour
sub-carrier oscillator. I'm not sure whether the PC had one of these
anyway and reused it, or whether they were just cheap.

I am positive it did not, since the graphics were done on a separate adapter
board.
But you are right, these oscillators were common and cheap, and the
frequency may not have been exactly what I stated. The BIOS did take
advantage of the 64K ticks in an hour, but there might have been an
adjustment; I'll take a look.
 
Peter J. Holzer

Sjouke Burry said:
No. CLOCKS_PER_SEC is (mostly) an int.
.... snip (compiler examples) ...

These examples are rather irrelevant, since the clock frequency in these
cases is (presumably) exactly 100 Hz or 1000 Hz. But in MS-DOS the
frequency was 18.2 Hz. If CLOCKS_PER_SEC had been defined as 18, that would
have caused an error of about 1.1%, which I think would have been
noticeable. (Maybe I should get out my old Turbo-C++ 1.0 disks and have a
look.)

hp
 
Sjouke Burry

Peter said:
These examples are rather irrelevant, since the clock frequency in these
cases is (presumably) exactly 100 Hz or 1000 Hz.
.... snip ...
Pardon me??? That whole list is from DOS computers.
However, they all modify their clock ticks in some way.
I have never seen a clock tick have a one-to-one relation to the 18.2 Hz
clock; they all modify it to give a "decimal" clock tick.
When you print all the clock changes, however, you can recognize the
relation to the system clock.
 
Keith Thompson

Peter J. Holzer said:
.... snip ...
But in MS-DOS the frequency was 18.2 Hz. If CLOCKS_PER_SEC had been
defined as 18, that would have caused an error of about 1.1%, which I
think would have been noticeable.

CLOCKS_PER_SEC doesn't necessarily match the actual clock frequency.
It just gives you the factor by which you need to scale the value
returned by the clock() function.

For example, a physical clock frequency of 18.2 Hz and a
CLOCKS_PER_SEC value of 1000 would be consistent. Successive calls to
clock() might return
0
55
110
165
...
989
1044
and so forth; each result exceeds the previous one by 54 or 55.
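
Here is a toy computation showing where such a sequence could come from,
assuming a hypothetical 18.2 Hz tick reported with a CLOCKS_PER_SEC of
1000:

#include <stdio.h>

int main(void)
{
    /* about 54.945 "clocks" per hardware tick */
    double tick = 1000.0 / 18.2;
    int n;

    /* prints 0, 55, 110, 165, ..., 989, 1044 (rounded to nearest) */
    for (n = 0; n < 20; n++)
        printf("%.0f\n", n * tick);
    return 0;
}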
 
Ben Pfaff

Keith Thompson said:
CLOCKS_PER_SEC doesn't necessarily match the actual clock frequency.
It just gives you the factor by which you need to scale the value
returned by the clock() function.

And in fact, CLOCKS_PER_SEC has the same value (1,000,000) on all
XSI-conformant UNIX systems, even though such systems are not
required to have a clock that ticks at any particular frequency.
 
Peter J. Holzer

Pardon me??? That whole list is from DOS computers.
However, they all modify their clock ticks in some way.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

That's the crux. If they modify the clock ticks, they aren't the same any
more. If a program does NOT modify the clock ticks, the frequency is
18.2 Hz, which is not an integral number.

I have never seen a clock tick have a one-to-one relation to the 18.2 Hz
clock; they all modify it to give a "decimal" clock tick.

I am positive that Turbo C (up to and including Turbo-C++ 1.0, which
was the last version I used) did not modify the clock rate, because
on occasion I needed a higher resolution and had to reprogram the timer
chip myself. It is possible that clock() did the conversion internally,
as Keith suggests, but I doubt that, because:

* I think I would remember it

* I found some old benchmark code of mine, which contains a comment
on the granularity of the times() function on Ultrix, but none on the
granularity of clock() on MS-DOS. I think I would have added a comment
if the granularity of clock() was worse than CLOCKS_PER_SEC suggested.

So I think that CLOCKS_PER_SEC should have been 18.2 on Turbo C, but I
don't remember whether they actually defined it as 18.2 or approximated it
with 18. However, I notice that in your examples the Borland compiler is
the only one which defines CLOCKS_PER_SEC as a floating point constant,
which strongly suggests that it was 18.2 in earlier versions, and that when
they changed it to 1000, they didn't want to break programs which
(erroneously) assumed that CLOCKS_PER_SEC was of type double.

hp
 
Peter J. Holzer

CLOCKS_PER_SEC doesn't necessarily match the actual clock frequency.

I know, I have used systems where they didn't match (in fact I'm using
one right now). I am quite sure that they did match on MS-DOS with the
Turbo-C compilers, though. I should have written "... the unit of time
returned by clock is exactly 1/100 or 1/1000 second" instead of "...
clock frequency ...".

hp
 
