Another Tricky Problem I am Messing With (Not Homework)


joebenjamin

This is a problem I was trying to help a few friends figure out for fun. I
am not sure how to go about it, but there is something I am missing
here. Here is what we want the output to be:
Read in a time period from the keyboard (for example 1.2 seconds,
3.4 seconds, or 8.37 seconds). Once the time has been read in, we want to
print out the word “TICK”, then wait the designated time period and
print the word “TICK” again. Repeat until there are 6 “TICK”s on the
screen. It would be really neat if we knew how to alternate between TICK
and TOCK; in any case, this might be the output:

Enter a time ==> 2.27
TICK <wait 2.27 seconds>
TICK <wait 2.27 seconds>
TICK <wait 2.27 seconds>
TICK <wait 2.27 seconds>
TICK <wait 2.27 seconds>
TICK

Here is what I started. I am lost because I am a newbie lol... This is
wrong; any ideas?

Code:
#include <stdio.h>
#include <conio.h>
#include <time.h>

int main( )
{
   clock_t start, end;
   float   total_time;

   int i;
   int j;
   int k;
   int timer;

   printf("Enter any time in seconds\n");
   scanf ("%i", &timer);
   getchar ();

   printf( "Start timing\n" );
   start = clock();

   for ( i=0; i<5000; i++ )
      for ( j=0; j<1000; j++ )
         for ( k=0; k<100; k++ );

   end = clock();

   total_time = ( end - start ) / CLK_TCK;
   printf("TICK\n");
   printf("TICK\n");
   printf("TICK\n");
   printf("TICK\n");
   printf("TICK\n");
   printf("TICK\n");
   printf( "\nTotal Time Elapsed : %0.3f seconds\n", total_time );

   getch();
}
 

Miguel Guedes

joebenjamin said:
This is a problem I was trying to help a few friends figure out for fun. I
am not sure how to go about it, but there is something I am missing
here. Here is what we want the output to be:
Read in a time period from the keyboard (for example 1.2 seconds,
3.4 seconds, or 8.37 seconds). Once the time has been read in, we want to
print out the word “TICK”, then wait the designated time period and
print the word “TICK” again. Repeat until there are 6 “TICK”s on the
screen. It would be really neat if we knew how to alternate between TICK
and TOCK; in any case, this might be the output.

Had nothing to do so I decided to give it a shot... :)

Made the code as simple as I could; hopefully you'll understand what the code
does, as I find it self-explanatory.

It's been a while since I've coded in C (since the late 80's) and I'm not sure
whether I've used any non-standard C features. Hopefully some of the regulars
here can spot'em and correct'em if I have.



#include <stdio.h>
#include <conio.h>
#include <time.h>

int main( )
{
    clock_t start, target, end;
    float timer;
    int i, j;
    char msg[][5] = {"TICK", "TOCK"};

    do
    {
        printf("\nEnter any time in seconds: ");
        scanf ("%f", &timer);

        i = j = 0;

        printf( "Start timing\n" );

        start = clock();
        while(i++ < 6)
        {
            target = clock() + (clock_t)(timer*(float)CLOCKS_PER_SEC);

            while(clock() < target);

            printf("%s ", msg[j]);

            if(++j > 1)
                j = 0;
        }
        end = clock() - start;

        printf( "\nTotal Time Elapsed : %0.3f seconds, %0.3f/iteration\n",
                (float)end/CLOCKS_PER_SEC, (float)end/((i-1)*CLOCKS_PER_SEC) );

        printf("Another go? y/[n] ");
    } while(getch() == 'y');
}
 

Bart van Ingen Schenau

Miguel Guedes wrote:

{
    target = clock() + (clock_t)(timer*(float)CLOCKS_PER_SEC);

    while(clock() < target);

Although this is a standard-conforming way to wait for a certain time
period, it is actually not a very friendly way on a multi-user system.
This loop probably consumes 100% CPU time while it is doing nothing.

For this kind of task, it is advisable to look for an
implementation-specific method. Functions like Sleep (Windows) and
sleep/usleep (unix) come to mind.
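
For instance, a minimal sketch of such a wrapper (assuming a Windows
Sleep() or a POSIX nanosleep() is available on the target; untested):

#ifdef _WIN32
#include <windows.h>           /* Sleep(milliseconds) */
#else
#include <time.h>              /* nanosleep() (POSIX) */
#endif

/* Suspend the calling process for roughly 'seconds' (may be fractional).
   Unlike a busy loop on clock(), this yields the CPU while waiting. */
static void delay_seconds(double seconds)
{
#ifdef _WIN32
    Sleep((DWORD)(seconds * 1000.0));
#else
    struct timespec ts;
    ts.tv_sec  = (time_t)seconds;
    ts.tv_nsec = (long)((seconds - (double)ts.tv_sec) * 1e9);
    nanosleep(&ts, NULL);
#endif
}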

Bart v Ingen Schenau
 

Keith Thompson

Bart van Ingen Schenau said:
Miguel Guedes wrote:


Although this is a standard-conforming way to wait for a certain time
period, it is actually not a very friendly way on a multi-user system.
This loop probably consumes 100% CPU time while it is doing nothing.

And it doesn't necessarily do what you want it to do. The clock()
function returns an indication of CPU time, not real time. So if, for
example, your program is getting 25% of the CPU, the loop will wait 4
times as long as you probably want it to (while wasting 25% of the CPU
doing nothing particularly useful).
For this kind of task, it is advisable to look for an
implementation-specific method. Functions like Sleep (Windows) and
sleep/usleep (unix) come to mind.

Indeed. Delaying for a specified time interval is one of those things
that cannot be done well using only standard C, but that can probably
be done *very* well using some system-specific interface.

(It would have been easy enough for the standard to define a sleep()
function, but applications that need that functionality almost always
need other functionality that can't be defined so easily in a portable
interface.)
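
(As a rough illustration of the wall-clock vs. processor-time
distinction, here is a sketch using only standard C: time() and
difftime() for wall-clock seconds, clock() for processor time.)

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t  wall_start = time(NULL);   /* wall-clock time, in seconds */
    clock_t cpu_start  = clock();      /* processor time used so far  */

    /* ... do work, or sleep, here ... */

    printf("wall: %.0f s, cpu: %.3f s\n",
           difftime(time(NULL), wall_start),
           (double)(clock() - cpu_start) / CLOCKS_PER_SEC);
    return 0;
}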
 

Willem

Keith wrote:
) Indeed. Delaying for a specified time interval is one of those things
) that cannot be done well using only standard C, but that can probably
) be done *very* well using some system-specific interface.

A good bet would probably be the POSIX standard.
select(), for example, can be used to sleep for a specified time interval.
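
(A sketch of that trick, assuming POSIX <sys/select.h>; untested:)

#include <sys/select.h>   /* select() and struct timeval (POSIX) */

/* Sleep for a fractional number of seconds by calling select()
   with no file descriptors, only a timeout. */
static void select_sleep(double seconds)
{
    struct timeval tv;
    tv.tv_sec  = (long)seconds;
    tv.tv_usec = (long)((seconds - (double)tv.tv_sec) * 1e6);
    select(0, NULL, NULL, NULL, &tv);
}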


SaSW, Willem
--
Disclaimer: I am in no way responsible for any of the statements
made in the above text. For all I know I might be
drugged or something..
No I'm not paranoid. You all think I'm paranoid, don't you !
#EOT
 

Bill Reid

Keith Thompson said:
And it doesn't necessarily do what you want it to do. The clock()
function returns an indication of CPU time, not real time.

Really? It seems to work to a few thousandths of a second
here (note that _sleep() is my development package's version of
a sleep/wait/delay function for Windows, and CLK_TCK
is its macro for converting the value returned by clock()
to seconds):

#include <stdio.h> /* required to breathe */
#include <time.h>  /* required for clock functions */
#include <dos.h>   /* required for "sleep" function */

int main(void) {
    int inc;
    clock_t start, end;

    for(inc = 1; inc < 5; inc++) {
        printf("Sleeping for %d seconds\n", inc);
        start = clock();
        _sleep(inc);
        end = clock();
        printf("Slept for %f seconds\n", ((end - start) / CLK_TCK));
    }
}

Sleeping for 1 seconds
Slept for 1.000000 seconds
Sleeping for 2 seconds
Slept for 2.000000 seconds
Sleeping for 3 seconds
Slept for 3.000000 seconds
Sleeping for 4 seconds
Slept for 4.000000 seconds

Hey, that appears to be "perfect timing"! Before, when I ran
this, some of the clock() timings were off by about 0.0002 to
0.0005 seconds; maybe clock() IS dependent on CPU usage...

As usual, I'm confused...
So if, for
example, your program is getting 25% of the CPU, the loop will wait 4
times as long as you probably want it to (while wasting 25% of the CPU
doing nothing particularly useful).

Actually, I would have just thought that you'd be kinda chasing
your CPU cycles around each other and wind up with a mess, as
was previously mentioned...

Yes, I think just about all compilers for general-purpose computers
let you call, in some way, a system timer for suspending your program
for a period of time...
Indeed. Delaying for a specified time interval is one of those things
that cannot be done well using only standard C, but that can probably
be done *very* well using some system-specific interface.

Yes, since some type of timer and process suspension/activation is
needed for most general-purpose computer OSs...
(It would have been easy enough for the standard to define a sleep()
function, but applications that need that functionality almost always
need other functionality that can't be defined so easily in a portable
interface.)

Portable to what? Embedded systems again? I'm surprised they
stooped so low as to include something to print to a "screen"...
 

Keith Thompson

Bill Reid said:

Yes, really.

C99 7.23.2.1p3:

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of
an implementation-defined era related only to the program
invocation. To determine the time in seconds, the value returned
by the clock function should be divided by the value of the macro
CLOCKS_PER_SEC. If the processor time used is not available or its
value cannot be represented, the function returns the value
(clock_t)(-1).

It seems to work to a few thousandths of a second
here (note that _sleep() is my development package's version of
a sleep/wait/delay function for Windows, and CLK_TCK
is its macro for converting the value returned by clock()
to seconds):

#include <stdio.h> /* required to breathe */
#include <time.h>  /* required for clock functions */
#include <dos.h>   /* required for "sleep" function */

int main(void) {
    int inc;
    clock_t start, end;

    for(inc = 1; inc < 5; inc++) {
        printf("Sleeping for %d seconds\n", inc);
        start = clock();
        _sleep(inc);
        end = clock();
        printf("Slept for %f seconds\n", ((end - start) / CLK_TCK));
    }
}

Sleeping for 1 seconds
Slept for 1.000000 seconds
Sleeping for 2 seconds
Slept for 2.000000 seconds
Sleeping for 3 seconds
Slept for 3.000000 seconds
Sleeping for 4 seconds
Slept for 4.000000 seconds

Hey, that appears to be "perfect timing"! Before, when I ran
this, some of the clock() timings were off by about 0.0002 to
0.0005 seconds; maybe clock() IS dependent on CPU usage...

As usual, I'm confused...

I have no idea how your '_sleep' function works. I suspect that
either '_sleep' delays for a specified interval of CPU time, or your
program is using exactly 1 second of CPU time per second of real time.
But if so, _sleep(4) consumes a full 4 seconds of CPU time doing nothing.

Note that clock_t could be signed, unsigned, or floating-point, and
CLK_TCK isn't defined in standard C; the correct macro is
CLOCKS_PER_SEC.
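
(For reference, a sketch of turning two clock() readings into seconds
portably, assuming only what the standard guarantees about clock_t
being an arithmetic type:)

#include <time.h>

/* Convert a clock() interval to seconds; the cast to double is what
   makes this safe whether clock_t is integer or floating-point. */
double clock_seconds(clock_t start, clock_t end)
{
    return (double)(end - start) / CLOCKS_PER_SEC;
}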

[...]
 

buuuuuum

Yes, really.

C99 7.23.2.1p3:

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of
an implementation-defined era related only to the program
invocation. To determine the time in seconds, the value returned
by the clock function should be divided by the value of the macro
CLOCKS_PER_SEC. If the processor time used is not available or its
value cannot be represented, the function returns the value
(clock_t)(-1).

Does this mean that if I have a program that measures its own execution
time, and I run it several times, it should always output the same value?
I'd think it should always take the same processor time to run if it
doesn't do any I/O, for example.

But I tried some code here, and even when I use sleep(), which I thought
would keep the program from using processor time, the program still
outputs the correct value.
 

Bill Reid

Keith Thompson said:
Yes, really.

C99 7.23.2.1p3:

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of
an implementation-defined era related only to the program
invocation. To determine the time in seconds, the value returned
by the clock function should be divided by the value of the macro
CLOCKS_PER_SEC. If the processor time used is not available or its
value cannot be represented, the function returns the value
(clock_t)(-1).

Well, yeah, it's supposed to tell you how long the program has
been running. On my system the program running time is counted
in milliseconds starting from zero as soon as the program starts.
On some systems, this timing service is not available, so it returns
-1...

What I was really getting at was your statement that you
snipped:
So if, for
example, your program is getting 25% of the CPU, the loop will wait 4
times as long as you probably want it to (while wasting 25% of the CPU
doing nothing particularly useful).

At the very least, this should depend on the system, and at the worst,
may be conflating two values which aren't related to each other on any
system that supports stuff like clock() and sleep()...
I have no idea how your '_sleep' function works. I suspect that
either '_sleep' delays for a specified interval of CPU time,

No possibility it just delays execution of the program for a
specified amount of time regardless of the "CPU time"?
or your
program is using exactly 1 second of CPU time per second of real time.

Since it is nominally a "multi-tasking" single-processor system it
must be sharing that CPU time with all the other programs I'm running,
but that doesn't seem to affect the perceived time as the program
runs or timings returned by clock() much. For example, I'll go ahead
and run it while doing a heavy download from the net and starting
up another big program:

Sleeping for 1 seconds
Slept for 1.001000 seconds
Sleeping for 2 seconds
Slept for 2.000000 seconds
Sleeping for 3 seconds
Slept for 3.003000 seconds
Sleeping for 4 seconds
Slept for 4.004000 seconds

OK, like I said, sometimes it's off by a few thousandths of a second,
but it still managed to sneak in there while everything else was happening
and my second-hand watch perception was that the timings were exactly
as advertised...
But if so, _sleep(4) consumes a full 4 seconds of CPU time doing nothing.

I think on a modern GUI single-processor system the CPU is always
doing SOMETHING, but the point of the "sleep" stuff is that my program
does NOTHING for a specified period of time...
Note that clock_t could be signed, unsigned, or floating-point, and
CLK_TCK isn't defined in standard C; the correct macro is
CLOCKS_PER_SEC.

Yeah, I noticed that, but for this development package, they
use a different name for the same macro, which in this case is
a value supporting millisecond granularity...
 

Richard Bos

Bill Reid said:
Really? It seems to work to a few thousandths of a second
here (note that _sleep() is my development package's version of
a sleep/wait/delay function for Windows, and CLK_TCK
is its macro for converting the value returned by clock()
to seconds):

"Seems" is correct. It may do so under MS-DOS, when no other program is
running, but it isn't the right function to use for most systems.
Yes, I think just about all compilers for general-purpose computers
let you call, in some way, a system timer for suspending your program
for a period of time...

Yup. And that's the best way to handle this. Busy-looping is not, unless
you enjoy having your account suspended by your friendly local BOFH.

Richard
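
(Putting the pieces together, a sketch of the original TICK/TOCK
exercise on top of POSIX nanosleep(); a rough illustration, not tested
on the OP's system:)

#include <stdio.h>
#include <time.h>      /* nanosleep() (POSIX) */

int main(void)
{
    const char *msg[] = { "TICK", "TOCK" };
    double seconds;
    int i;

    printf("Enter a time ==> ");
    if (scanf("%lf", &seconds) != 1 || seconds < 0.0)
        return 1;

    for (i = 0; i < 6; i++) {
        printf("%s\n", msg[i % 2]);
        fflush(stdout);              /* show the word before sleeping */
        if (i < 5) {                 /* no wait needed after the last one */
            struct timespec ts;
            ts.tv_sec  = (time_t)seconds;
            ts.tv_nsec = (long)((seconds - (double)ts.tv_sec) * 1e9);
            nanosleep(&ts, NULL);
        }
    }
    return 0;
}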
 

Mark McIntyre

Well, yeah, it's supposed to tell you how long the program has
been running.

No, it's supposed to tell you the processor time used, which is
entirely different from wall-clock time.

I assume you're not used to multitasking operating systems.
On my system the program running time is counted
in milliseconds starting from zero as soon as the program starts.

However, this isn't what clock() measures.
No possibility it just delays execution of the program for a
specified amount of time regardless of the "CPU time"?

It's possible, but since it's a nonstandard function, we can't say.
Since it is nominally a "multi-tasking" single-processor system it
must be sharing that CPU time with all the other programs I'm running,
but that doesn't seem to affect the perceived time as the program
runs or timings returned by clock() much.

My guess: when you're running this process, it takes 100% of the CPU
available for the duration of the "busy wait" loop. Try running a huge
numerical simulation at the same time, say BOINC or converting a nice
big AVI into MPG. Either you will see a difference, or your
background jobs will all freeze, or your clock() is nonstandard and
you need a new compiler.

Yeah, I noticed that, but for this development package, they
use a different name for the same macro,

It can't be the same macro if it has a different name....
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Bill Reid

Mark McIntyre said:
No, it's supposed to tell you the processor time used, which is
entirely different from wall-clock time.

Well, if the program is suspended, it's not using processor time,
and it's not "running", now is it?
I assume you're not used to multitasking operating systems.

Actually, it might be more true that I've forgotten more
about multi-tasking, multi-user, multi-processor systems
than you'll ever know...
However, this isn't what clock() measures.

No, it's what it returns!
It's possible, but since it's a nonstandard function, we can't say.

We can't even make an educated "guess"? Come on, what
would one of my forays into this group be without some completely
unfounded leap of non-logic?
My guess:

Thanks for that! My faith in this group has been restored!
Occasionally, it is shaken by somebody who actually responds
based on knowledge, fact, and practical application of
technology to solve real-world problems quickly, but you've
redeemed it!
when you're running this process, it takes 100% of the CPU
available for the duration of the "busy wait" loop.

Now let's just think about this for a second...you "guess" that
the "sleep" function bumps the CPU usage of the process up to
100% for the duration of the "sleep" on a "multi-tasking" system
(the exact kind of system you are an "expert" on!).

And yet, it seems to ignorant me that a "multi-tasking" single
processor system must always be stopping execution of processes
to allow other processes to get their share of "CPU time", to
create this "illusion" that they are all "running" at the same time.

But I guess what really happens is that the OS sees a new
process that needs to run when another process is running,
so it bumps the "CPU usage" of the currently running process
to "100%" to allow the new process to run! Brilliant! In fact,
I "guess" you "think" that ALL "simultaneously" running programs
are always using "100%" of CPU time, because how else could
a "multi-tasking" system work?!?!!
Try running a huge
numerical simulation at the same time, say BOINC or converting a nice
big AVI into MPG.

You snipped out (classic behavior) my description of what I did,
as well as the program itself (no need to confuse anybody with the
facts, let's just "guess"). I downloaded about a meg of data from
the net while at the same time started a graphics program that
opened up a thumbnail view of hundreds of JPEGs in a directory,
AND simultaneously ran my friggin' little like six-line program, that
you "guess" was mostly in a "busy/wait state" (whatever the hell that is)
for 10 seconds, and "guess" what? I reported what ACTUALLY
happened, and you snipped it!

Yeah, sure, I could run ever more computationally-intensive
programs; in fact, I run a HUGE numerical simulation every night
that takes everything my poor little computer has to offer and
with nothing else running it still takes HOURS for it to complete
(a few months ago one of your "peers" amusingly sneered at me
that I just wasn't used to writing large-scale programs to support
their "guess" on a subject).

"Guess" what, I'm not going to waste my time running anything
else when I'm running that, because as I've tried to stress repeatedly,
I don't make money by writing programs that DON'T work; in
other words, I'm NOT a "professional" programmer.

I do know this: the documentation for any and all "sleep" functions
of ANY sort is always careful to say that the specified "sleep" time
is the guaranteed MINIMUM amount of time the process will
"sleep". In other words, when it comes time to "wake up", if
there is a gigantic process running, it might take a few milliseconds
to get the "sleeping" process back into the mix...
Either you will see a difference, or your
background jobs will all freeze, or your clock() is nonstandard and
you need a new compiler.

Aside from your rampant "guessing", you keep conflating "sleep"
with "clock()" (though I suspect they may be somewhat tied together
technically at the "implementation"), and any multi-tasking "CPU usage"
with "100%" of all the cycles all the time. My "background jobs" are
not going to "freeze", I was watching several performance monitors
(more processes!) as well as the on-screen progress of my programs
when I ran them before using "sleep". Nothing "froze", and people
who actually know how "multi-tasking" works (and perhaps more
importantly, WHY it works) know why...

As far as my clock() being "non-standard", I say GREAT! Since
it worked for that little test purpose, I'm HAPPY it's "non-standard"!
I guess I can use it if I wanted to "clock" the running time of my
programs, including the "sleep" times (not that I have a lot of use
for that, but something to keep in mind).

Once again, I can only make money if my programs actually WORK,
because again I AM NOT A PROFESSIONAL PROGRAMMER.
It can't be the same macro if it has a different name....

Yup, spoken like a true "professional"...
 

Mark McIntyre

Well, if the program is suspended, it's not using processor time,
and it's not "running", now is it?

Bizarre - that's precisely what we've all been trying to explain to
you. However, the way you phrased the original statement led me to
believe you thought it told you how long had elapsed on the wall clock
since you started the programme.
Actually, it might be more true that I've forgotten more
about multi-tasking, multi-user, multi-processor systems
than you'll ever know...

I sincerely doubt that, but I'm unlikely to ever care enough to test
it.
We can't even make an educated "guess"?

Why? Since when did this become comp.lang.allsortsotstuffabitlikec
Now let's just think about this for a second...you "guess" that
the "sleep" function bumps the CPU usage of the process up to
100% for the duration of the "sleep" on a "multi-tasking" system

Oh, I'm sorry. I *thought* you were claiming that your clock()
function was returning the wall clock time between two points in time,
irrespective of whether anything else was running on the box. Now you
seem in fact to be claiming quite the reverse. Perhaps I'll just
ignore you till you make up your minds.
(the exact kind of system you are an "expert" on!).

I have at no stage claimed to be an "expert" (your quotes).
And yet, it seems to ignorant me that a "multi-tasking" single
processor system must always be stopping execution of processes
to allow other processes to get their share of "CPU time", to
create this "illusion" that they are all "running" at the same time.

No shit, sherlock?

(snip ramblings )
You snipped out (classic behavior) my description of what I did,

ah, I see - you're more interested in taking offense at people
pointing out errors in your logic than in understanding.
programs; in fact, I run a HUGE numerical simulation every night
that takes everything my poor little computer has to offer

<irony>
Yeah, and I have babes just begging to be rogered every night, due to
the enormous size of my tackle.
</irony>
Yup, spoken like a true "professional"...

My mistake - I thought you wanted help. Apparently you just want
blown.
--
Mark McIntyre

"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
 

Bill Reid

Mark McIntyre said:
Bizarre - thats precisely what we've all been trying to explain to
you.

Actually, the confusion here is your reliance on something in
the "C" standard that clearly talks about an "approximation" by
the "implementation", and promoting that language into a "legal"
requirement for process control on every system that has a
"standard C" compiler...
However, the way you phrased the original statement led me to
believe you thought it told you how long had elapsed on the wall clock
since you started the programme.

Yup, because that's pretty much the way it worked for ME...on
MY "implementation"...
I sincerely doubt that, but I'm unlikely to ever care enough to test
it.

Well, here's a little test for you to ignore then:

How do you think your precious little "clock" function handles
"user wait"? And now a really hard one: if you switch back and
forth between a bunch of "user wait" for several "concurrent"
processes" a few hundred times a second, how many CPU cycles
have actually been used by each process? And for bonus points,
how much "CPU time" have they each used?

And the final question would be: why the hell and how the hell
do you think the "implementation" is going to count every freakin'
cycle used by a "programme" and divide by the wall clock time
elapsed since it started to come up with your "C" standard-derived
notion of "CPU time"?
Why? Since when did this become comp.lang.allsortsotstuffabitlikec

Because the "C" standard controls all, the AC, DC, and the
horizontal and vertical...at least, according to this group...
Oh, I'm sorry. I *thought* you were claiming that your clock()
function was returning the wall clock time between two points in time,
irrespective of whether anything else was running on the box.

Well, I didn't "guess" about it, I ran a little test and that's what it
appeared to do...you know, what you snipped out...
Now you
seem in fact to be claiming quite the reverse. Perhaps I'll just
ignore you till you make up your minds.

That's always a good tactic, just ignore all new information...I'm
guessing you started this life strategy about the time DOS was superseded
by Windows...
I have at no stage claimed to be an "expert" (your quotes).

Well, smarter than me, and I worked with that stuff on multi-$million
systems for 9 years at the atomic technical level...
No shit, sherlock?

Will you understand the plot by the end of the story if it is
explained to you in elementary terms, Watson?
(snip ramblings )

Another good tactic...
ah, I see - you're more interested in taking offense at people
pointing out errors in your logic than in understanding.

To understand anything, we must first understand the
"experiment"...is that elementary enough for you?
<irony>
Yeah, and I have babes just begging to be rogered every night, due to
the enormous size of my tackle.
</irony>

Chicks generally don't get turned on by numerical simulations,
no matter how large...but thanks for the insight to more of your
underlying insecurities...
My mistake - I thought you wanted help. Apparently you just want
blown.

Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...

So anyway, as you were, and apparently always will be...
 

Keith Thompson

Bill Reid said:
Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...
[...]

I don't believe I've misunderstood the C standard, but I never quite
understood what the program you posted is doing, partly because it
used some non-standard function called "_sleep".

Here's what the standard says about the clock() function
(C99 7.23.2.1):

Synopsis

#include <time.h>
clock_t clock(void);

Description

The clock function determines the processor time used.

Returns

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of
an implementation-defined era related only to the program
invocation. To determine the time in seconds, the value returned
by the clock function should be divided by the value of the macro
CLOCKS_PER_SEC. If the processor time used is not available or its
value cannot be represented, the function returns the value
(clock_t)(-1).

with a footnote:

In order to measure the time spent in a program, the clock
function should be called at the start of the program and its
return value subtracted from the value returned by subsequent
calls.

Now the behavior you described for the program you posted *seemed* to
be inconsistent with that. Either something odd is going on on your
system (e.g., your program uses CPU time even during the _sleep()
calls), or your implementation's clock() function isn't working
properly ("properly" meaning consistently with the standard's
requirements), or something else is going on.

Perhaps you can shed some light on this.
 

Bill Reid

Keith Thompson said:
Bill Reid said:
Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...
[...]

I don't believe I've misunderstood the C standard, but I never quite
understood what the program you posted is doing, partly because it
used some non-standard function called "_sleep".

Here's what the standard says about the clock() function
(C99 7.23.2.1):

Synopsis

#include <time.h>
clock_t clock(void);

Description

The clock function determines the processor time used.

Returns

The clock function returns the implementation's best approximation
to the processor time used by the program since the beginning of
an implementation-defined era related only to the program
invocation. To determine the time in seconds, the value returned
by the clock function should be divided by the value of the macro
CLOCKS_PER_SEC. If the processor time used is not available or its
value cannot be represented, the function returns the value
(clock_t)(-1).

with a footnote:

In order to measure the time spent in a program, the clock
function should be called at the start of the program and its
return value subtracted from the value returned by subsequent
calls.

Now the behavior you described for the program you posted *seemed* to
be inconsistent with that. Either something odd is going on on your
system (e.g., your program uses CPU time even during the _sleep()
calls), or your implementation's clock() function isn't working
properly ("properly" meaning consistently with the standard's
requirements), or something else is going on.

Perhaps you can shed some light on this.

Well, I've shed a little light (AND some heat) on this, but
it just seems to fly right over the heads of the "frequent posters"
here...and then they REALLY don't help matters by "snipping"
out the explanation...

But <big sigh, waiting for this to be snipped out again>, AS I
SAID IN THE POST YOU ARE RESPONDING TO, you
are missing/misinterpreting a key word in the standard:

"The clock function returns the implementation's best approximation..."

"APPROXIMATION"! SEE IT?!??!!!

"APPROXIMATION"!!! SEE IT!!????!!!?!!

"APPROXIMATION"!!!!!!! SEE IT??!???!!!??!??!!!

Your so-called "requirement" is only an "APPROXIMATION"
by the "IMPLEMENTATION".

In other words, it's whatever the friggin' OS has handily available
to return to the function. THE STANDARD DOES NOT "REQUIRE"
THAT THE OS IMPLEMENT A "PERFORMANCE MONITOR"
(WHICH ARE HIDEOUS PERFORMANCE-DRAINERS) FOR
EVERY FRIGGIN' PROCESS TO SUPPLY WHAT YOU HAVE
ERRONEOUSLY DECIDED IS THE CORRECT "PROCESSOR
TIME".

Does that shed enough light for you?

MY OS apparently just returns something that is effectively
the same as "wall clock" time as ITS "approximation" of "processor
time". If you think that it is a slaggard among OSs for doing so,
you might want to re-read all the stuff I wrote in the post you
responded to, carefully, and look up the confusing "technical
words", and read up on "performance monitors" and ponder
why sys admins rarely run them and get so mad if somebody
else does and eventually have a little light bulb go off over your
head as you get the "clear light" as to how modern high-performance
"multi-tasking" systems actually work...

Then you'll understand perfectly, without even being told, what
the pesky "_sleep" function is doing, and you'll be so "enlightened"
that you'll suddenly "grok" why that "confusing" six-line program
I wrote behaves the way it does (and why there's a good chance
it would behave that way on a LOT of systems).

You'll even understand stuff like why I am only using tiny fraction
of my "processor time" right now as I type this, even with many other
programs running "simultaneously". You'll understand that they are
all effectively mostly a"_sleep", waiting for the OS to return a
user "event" to wake them up again, perhaps to use as much as
50% of "processor time" if I do something really crazy...

And if all that doesn't shed enough light, consider carefully the
apparent contradiction between the "requirement" you believe
the standard imposes, and the footnote that you included in
your own post:
In order to measure the time spent in a program, the clock
function should be called at the start of the program and its
return value subtracted from the value returned by subsequent
calls.

Whaaaa? You mean I can just call clock() at the beginning of
the program (or any time, like I did in my program) and then subtract
subsequent call return values to measure the TIME SPENT (wall
clock time?) "in a program"? Shirley, they must have meant
"processor time" when they wrote "time spent", and were just
a little "sloppy" with their language...but I always sloppy thinking
results inevitably in sloppy writing...
 

Keith Thompson

Bill Reid said:
Keith Thompson said:
Bill Reid said:
Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...
[...]

I don't believe I've misunderstood the C standard, but I never quite
understood what the program you posted is doing, partly because it
used some non-standard function called "_sleep".

Here's what the standard says about the clock() function
(C99 7.23.2.1): [...]
Perhaps you can shed some light on this.
[...]

But <big sigh, waiting for this to be snipped out again>, AS I
SAID IN THE POST YOU ARE RESPONDING TO, you
are missing/misinterpreting a key word in the standard:

"The clock function returns the implementation's best approximation..."

"APPROXIMATION"! SEE IT?!??!!!

"APPROXIMATION"!!! SEE IT!!????!!!?!!

"APPROXIMATION"!!!!!!! SEE IT??!???!!!??!??!!!

Your so-called "requirement" is only an "APPROXIMATION"
by the "IMPLEMENTATION".
[...]

MY OS apparently just returns something that is effectively
the same as "wall clock" time as ITS "approximation" of "processor
time".
[...]

Then you'll understand perfectly, without even being told, what
the pesky "_sleep" function is doing, and you'll be so "enlightened"
that you'll suddenly "grok" why that "confusing" six-line program
I wrote behaves the way it does (and why there's a good chance
it would behave that way on a LOT of systems).

[...]

Stop shouting, and stop treating us like idiots just because we may
have missed (or disagreed with) some point you made. I'm trying to
take part in a technical discussion here. You're welcome to join me
in that endeavor.

Yes, the value returned by the clock() function is an *approximation*.
I'm very well aware of that; I don't expect it to count every CPU
cycle unless the underlying system makes it reasonably easy to do
that.

Unless I've missed something, you still haven't explained what
_sleep() does; you've just insulted me for not already knowing. I
could guess that it behaves similarly to the POSIX sleep() function,
i.e., that it suspends execution of the current program (process,
whatever) for a specified time interval, allowing other programs
(processes, whatever) to use the CPU until the specified interval
expires. But since it's a non-standard function, I didn't see any
point in making even an educated guess when you could easily tell us
what it does.

The point is that, based on your description, buried somewhere within
the insults, your system's clock() function appears to be broken.
Yes, the standard only requires an "approximation", but did you miss
the context: "best approximation"?

If your system's clock() function indicates that your program used 4
seconds of processor time while it was sleeping for 4 seconds and not
using the processor, then I can hardly imagine that that's "the
implementation's best approximation to the processor time used by the
program". A sin() function that always returns 0.0 would be a
similarly bad approximation; you might be able to argue that the
standard allows it, but it would unquestionably be broken.

So the whole point of this discussion is apparently that your
implementation has a broken, or at least non-standard, clock()
function.

You claim that there's a good chance that your program would behave
the same way on a lot of systems. I believe you're mistaken. For one
thing, most systems aren't likely to have a function called '_sleep'.

Here's my version of your program. It depends on the POSIX sleep()
function, but is otherwise standard C. Note that the standard says
only that clock_t is an arithmetic type, so I've allowed for all the
possibilities (floating, unsigned, and signed). I've also used the
standard CLOCKS_PER_SEC macro rather than CLK_TCK. And I've examined
the result of clock() again after performing some CPU-intensive
calculations. A decent optimizer could eliminate the calculations,
but I compiled the program without asking for optimization. If
necessary, I could have declared 'result' as volatile.

I observed, as the program ran, that each sleep(N) call took
approximately N seconds (I didn't time it precisely).

========================================================================
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <math.h>

static void show_clock(clock_t c)
{
    if ((clock_t)1 / 2 > (clock_t)0) {
        /* clock_t is floating-point */
        printf("%f", (double)c);
    }
    else if ((clock_t)-1 > (clock_t)0) {
        /* clock_t is unsigned */
        printf("%luU", (unsigned long)c);
    }
    else {
        /* clock_t is signed */
        printf("%ld", (long)c);
    }
}

static void do_stuff(void)
{
#define ITERATIONS 10000000
    long i;
    double result;
    for (i = 0; i < ITERATIONS; i++) {
        result = sin((double)i / ITERATIONS);
    }
}

int main(void)
{
    int inc;
    clock_t start, end;

    printf("CLOCKS_PER_SEC = ");
    show_clock(CLOCKS_PER_SEC);
    putchar('\n');

    for (inc = 1; inc < 5; inc++) {
        printf("Sleeping for %d seconds\n", inc);
        start = clock();
        sleep(inc);
        end = clock();

        printf("start = ");
        show_clock(start);
        printf(", end = ");
        show_clock(end);
        putchar('\n');

        printf("Slept for %f seconds of processor time\n",
               (((double)end - start) / CLOCKS_PER_SEC));
    }

    do_stuff();
    printf("After computations, clock() returns ");
    show_clock(clock());
    putchar('\n');

    return 0;
}
========================================================================

And here's the output I got on one system:
========================================================================
CLOCKS_PER_SEC = 1000000
Sleeping for 1 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 2 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 3 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 4 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
After computations, clock() returns 940000
========================================================================

The amount of processor time consumed by each sleep() call was
approximately zero; it was not anywhere near the wall clock time
consumed. This was very different from the results you got.

If you reply to this with another long screed full of insults, words
in all-caps, and repeated exclamation points, I will ignore you. If
you care to calm down and discuss technical issues, I'll be glad to
continue the discussion. It's up to you.
 

Old Wolf

MY OS apparently just returns something that is effectively
the same as "wall clock" time as ITS "approximation" of "processor
time".

Then you'll understand perfectly, without even being told, what
the pesky "_sleep" function is doing, and you'll be so "enlightened"

I think you are refusing to explain your _sleep function
on purpose because you know that you are talking crap.

(Note to anyone who may have only just joined this thread -
Bill wrote his own function titled "_sleep" but has not yet
explained what it does, other than to "sleep/wait/delay").

Your posted results showed that calling _sleep(4) caused
your system to sleep for approximately 4 seconds of
processor time.

Yet you seem to be claiming in other messages that your
_sleep function does not consume any significant amount
of processor time while it is 'running'.

Only a true incompetent would think that 4 seconds
is a good approximation of 0 seconds in the context
of how much processor time was used in a wall clock
4 second sleep.
I AM NOT A PROFESSIONAL PROGRAMMER.

No shit sherlock. BTW, looking forward to your post
next week where you tell a plumber how to unblock
a toilet.
 

Bill Reid

Keith Thompson said:
Bill Reid said:
Keith Thompson said:
[...]
Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...
[...]

I don't believe I've misunderstood the C standard, but I never quite
understood what the program you posted is doing, partly because it
used some non-standard function called "_sleep".

Here's what the standard says about the clock() function
(C99 7.23.2.1): [...]
Perhaps you can shed some light on this.
[...]

But <big sigh, waiting for this to be snipped out again>, AS I
SAID IN THE POST YOU ARE RESPONDING TO, you
are missing/misinterpreting a key word in the standard:

"The clock function returns the implementation's best approximation..."

"APPROXIMATION"! SEE IT?!??!!!

"APPROXIMATION"!!! SEE IT!!????!!!?!!

"APPROXIMATION"!!!!!!! SEE IT??!???!!!??!??!!!

Your so-called "requirement" is only an "APPROXIMATION"
by the "IMPLEMENTATION".
[...]

MY OS apparently just returns something that is effectively
the same as "wall clock" time as ITS "approximation" of "processor
time".
[...]

Then you'll understand perfectly, without even being told, what
the pesky "_sleep" function is doing, and you'll be so "enlightened"
that you'll suddenly "grok" why that "confusing" six-line program
I wrote behaves the way it does (and why there's a good chance
it would behave that way on a LOT of systems).

[...]

Stop shouting, and stop treating us like idiots just because we may
have missed (or disagreed with) some point you made.

OK, "you" ("us") first...
I'm trying to
take part in a technical discussion here. You're welcome to join me
in that endeavor.

I like technical discussions, except I'm really not that technically
knowledgeable, at least by primary occupation...this sometimes
goads people who are not very technically knowledgeable but rely
on a resume indicating technical knowledge to "take advantage"...
Yes, the value returned by the clock() function is an *approximation*.
I'm very well aware of that; I don't expect it to count every CPU
cycle unless the underlying system makes it reasonably easy to do
that.

Take it another logical step further, because the standard already
does...clock() is under NO REQUIREMENT to do any particular
damn thing at all, except return -1 if it does NOTHING, for the
simple reason that you absolutely cannot "sue" somebody for not
performing a good enough "approximation"...
Unless I've missed something, you still haven't explained what
_sleep() does; you've just insulted me for not already knowing.

I don't KNOW what it does either! I just ASSUME it calls
the task scheduler of the OS and says "stop my execution for
n seconds, put all other processes ahead of me in the priority
queue, even if there are no other processes, take my 'context'
(or 'process pointer' in some UNIX systems) and put it in
limbo-land for those n seconds, then start me up again".

I CAN'T speak for the internals of every multi-tasking operating
system out there, but in general all the ones I DO know can stop
the process from running, store the state of the process (registers,
etc.), keep the process from running for a specified amount of time,
then restore the process state and start it running again.

Since that capability is just an OS call away, why not use it
for sleep() (or even more pertinently, _sleep())?
I
could guess that it behaves similarly to the POSIX sleep() function,
i.e., that it suspends execution of the current program (process,
whatever) for a specified time interval, allowing other programs
(processes, whatever) to use the CPU until the specified interval
expires.

There you go, it's that simple. POSIX is derived from UNIX,
of course, and UNIX had this capability built into the kernel from
the get-go...certain other "microcomputer" OSs had to re-invent
the wheel in their own kludgy way, but eventually they got there,
because there probably aren't a lot of other ways to implement
a "multi-tasking" system; the differences will be nomenclatural
rather than significant...
But since it's a non-standard function, I didn't see any
point in making even an educated guess when you could easily tell us
what it does.

I DID tell you what it did! I wrote a program that uses it, and
ran it, and reported the results!

As far as the "fine" documentation is concerned, give me a break...

void _sleep(unsigned seconds);

Description

Suspends execution for an interval (seconds).

With a call to _sleep, the current program is suspended from execution for
the number of seconds specified by the argument seconds. The interval is
accurate only to the nearest hundredth of a second or to the accuracy of
the operating system clock, whichever is less accurate.

Return Value

None.

---end of compiler package "man page"

Other than that, I'm only vaguely familiar with the internals of
the OS involved, and some other "sleep" functions for thread
control and so forth, so your guess is as good as mine...except,
I REALLY don't think it is...
The point is that, based on your description, buried somewhere within
the insults, your system's clock() function appears to be broken.

Never say die, eh?
Yes, the standard only requires an "approximation", but did you miss
the context: "best approximation"?

Again, who and how are you gonna sue over the words "best
approximation"? I already said I ain't suing because I LIKE the
way it works!
If your system's clock() function indicates that your program used 4
seconds of processor time while it was sleeping for 4 seconds and not
using the processor, then I can hardly imagine that that's "the
implementation's best approximation to the processor time used by the
program".

OK, thank God you don't have this compiler (and probably OS).

Again, I would point out that what you are asking for requires
a de facto "performance monitor" and actually counting cycles
in some way, and that's not very practical for "real life"...but I
will grant you, clock() (really the OS) counting the number
of seconds that a process is suspended is a VERY poor
"approximation"...

Of course, using the word "approximation" was sloppy to
begin with (for "standard" language), because it has led us
into an argument over the even more sloppy word "broken"...
A sin() function that always returns 0.0 would be a
similarly bad approximation; you might be able to argue that the
standard allows it, but it would unquestionably be broken.

Yeah, but that's not taking the information from an "outside
source" where the quality of information can't be "regulated".
So the whole point of this discussion is apparently that your
implementation has a broken, or at least non-standard, clock()
function.

Yup...never say die...if you say it enough times, somebody
somewhere will even believe the Earth is flat...
You claim that there's a good chance that your program would behave
the same way on a lot of systems. I believe you're mistaken. For one
thing, most systems aren't likely to have a function called '_sleep'.

Well, change _sleep() to sleep() or whatever and compile that
six-line bad boy on as many systems as you can, and see what
happens, and report back...it may very well behave more "correctly"
on a UNIX system, give it a shot...
Here's my version of your program.

Or write a new one!!!
It depends on the POSIX sleep()
function, but is otherwise standard C. Note that the standard says
only that clock_t is an arithmetic type, so I've allowed for all the
possibilities (floating, unsigned, and signed). I've also used the
standard CLOCKS_PER_SEC macro rather than CLK_TCK. And I've examined
the result of clock() again after performing some CPU-intensive
calculations. A decent optimizer could eliminate the calculations,
but I compiled the program without asking for optimization. If
necessary, I could have declared 'result' as volatile.

I observed, as the program ran, that each sleep(N) call took
approximately N seconds (I didn't time it precisely).

<snip program and output, quoted in full in Keith's post above>

The amount of processor time consumed by each sleep() call was
approximately zero; it was not anywhere near the wall clock time
consumed. This was very different from the results you got.

Great! You've managed to prove EXACTLY NOTHING,
but you spent a lot of time doing it!
If you reply to this with another long screed full of insults, words
in all-caps, and repeated exclamation points, I will ignore you. If
you care to calm down and discuss technical issues, I'll be glad to
continue the discussion. It's up to you.

Well, I don't know if I met your decorum requirements, but
I know that I am unable to discuss certain important internal
considerations relating to the individual compilers and OSs
involved, so I really can't contribute anything more "intelligent"
in any event.

The remainder is just "semantics" and "logic", but most
importantly as used (abused?) to "defend your ego", and
since my posts were merely empirical and technical in nature
I really don't "have a dog in this hunt"...if you want to say
my compiler (and/or OS) is "broken" because its "approximation"
is a little TOO "approximate"...

I can agree with the following statements:

* The original poster was wasting his time to try to use
clock() in a "busy" loop to "stop execution" of a program
for a period of time; use some form of "sleep()" instead

* The "C" standard apparently wants clock() to in some
way return "processor time" used by a program, NOT
"wall clock time" (but you can't always git whut u want)

* clock() probably returns different values on different
systems with different compilers (and almost certainly returns
-1 for some)

* people who think that a "sleep" function on a modern
multi-tasking operating system performs a "busy-wait" because
of some interpretation of the "C" standard "requirement" for
"clock()" deserve all the abuse that can be heaped upon
their heads, 'specially if they state that they have some type
of superior technical "knowledge" that brought them to
that conclusion...
 

Keith Thompson

Bill Reid said:
I like technical discussions, except I'm really not that technically
knowledgeable, at least by primary occupation...this sometimes
goads people who are not very technically knowledgeable but rely
on a resume indicating technical knowledge to "take advantage"...

If you're "really not that technically knowledgeable", you really
should pay attention to those of us who are.

[...]
As far as the "fine" documentation is concerned, give me a break...

void _sleep(unsigned seconds);

Description

Suspends execution for an interval (seconds).

With a call to _sleep, the current program is suspended from
execution for the number of seconds specified by the argument
seconds. The interval is accurate only to the nearest hundredth of a
second or to the accuracy of the operating system clock, whichever
is less accurate.

Return Value

None.

---end of compiler package "man page"

Great. What does your implementation's documentation say about the
clock() function?

[...]
* people who think that a "sleep" function on a modern
multi-tasking operating system performs a "busy-wait" because
of some interpretation of the "C" standard "requirement" for
"clock()" deserve all the abuse that can be heaped upon
their heads, 'specially if they state that they have some type
of superior technical "knowledge" that brought them to
that conclusion...

Who claimed that a sleep function performs a busy-wait?
 
