Keith Thompson said:
Bill Reid said:
Keith Thompson said:
[...]
Ah yes, the non-sequitur vulgar life strategy tactic. I NEVER
asked for any help here, just pointed out your and Keith Thompson's
errors of understanding the "C" standard and computer science
after the 1970s...
[...]
I don't believe I've misunderstood the C standard, but I never quite
understood what the program you posted is doing, partly because it
used some non-standard function called "_sleep".
Here's what the standard says about the clock() function
(C99 7.23.2.1): [...]
Perhaps you can shed some light on this.
[...]
But <big sigh, waiting for this to snipped out again>, AS I
SAID IN THE POST YOU ARE RESPONDING TO, you
are missing/misinterpreting a key word in the standard:
"The clock function returns the implementation's best approximation..."
"APPROXIMATION"! SEE IT?!??!!!
"APPROXIMATION"!!! SEE IT!!????!!!?!!
"APPROXIMATION"!!!!!!! SEE IT??!???!!!??!??!!!
Your so-called "requirement" is only an "APPROXIMATION"
by the "IMPLEMENTATION".
[...]
MY OS apparently just returns something that is effectively
the same as "wall clock" time as ITS "approximation" of "processor
time".
[...]
Then you'll understand perfectly, without even being told, what
the pesky "_sleep" function is doing, and you'll be so "enlightened"
that you'll suddenly "grok" why that "confusing" six-line program
I wrote behaves the way it does (and why there's a good chance
it would behave that way on a LOT of systems).
[...]
Stop shouting, and stop treating us like idiots just because we may
have missed (or disagreed with) some point you made.
OK, "you" ("us") first...
I'm trying to
take part in a technical discussion here. You're welcome to join me
in that endeavor.
I like technical discussions, except I'm really not that technically
knowledgeable, at least by primary occupation...this sometimes
goads people who are not very technically knowledgeable but rely
on a resume indicating technical knowledge to "take advantage"...
Yes, the value returned by the clock() function is an *approximation*.
I'm very well aware of that; I don't expect it to count every CPU
cycle unless the underlying system makes it reasonably easy to do
that.
Take it another logical step further, because the standard already
does...clock() is under NO REQUIREMENT to do any particular
damn thing at all, except return -1 if it does NOTHING, for the
simple reason that you absolutely cannot "sue" somebody for not
performing a good enough "approximation"...
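One point here is checkable rather than rhetorical: the only hard guarantee is that an implementation with no processor-time information at all returns (clock_t)-1. A minimal sketch of that check (the function name is mine):

```c
#include <time.h>

/* The one thing the standard pins down: if processor time is not
 * available, clock() returns (clock_t)-1.  Anything else is the
 * implementation's "best approximation". */
int clock_available(void)
{
    return clock() != (clock_t)-1;
}
```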
Unless I've missed something, you still haven't explained what
_sleep() does; you've just insulted me for not already knowing.
I don't KNOW what it does either! I just ASSUME it calls
the task scheduler of the OS and says "stop my execution for
n seconds, put all other processes ahead of me in the priority
queue, even if there are no other processes, take my 'context'
(or 'process pointer' in some UNIX systems) and put it in
limbo-land for those n seconds, then start me up again".
I CAN'T speak for the internals of every multi-tasking operating
system out there, but in general all the ones I DO know can stop
the process from running, store the state of the process (registers,
etc.), keep the process from running for a specified amount of time,
then restore the process state and start it running again.
Since that capability is just an OS call away, why not use it
for sleep() (or even more pertinently, _sleep())?
I
could guess that it behaves similarly to the POSIX sleep() function,
i.e., that it suspends execution of the current program (process,
whatever) for a specified time interval, allowing other programs
(processes, whatever) to use the CPU until the specified interval
expires.
There you go, it's that simple. POSIX is derived from UNIX,
of course, and UNIX had this capability built into the kernel from
the get-go...certain other "microcomputer" OSs had to re-invent
the wheel in their own kludgy way, but eventually they got there,
because there probably aren't a lot of other ways to implement
a "multi-tasking" system; the differences will be nomenclatural
rather than significant...
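The disagreement is easy to make concrete: across a sleep(), wall-clock time (time()) should advance while processor time (clock()) should not. A sketch assuming a POSIX system; on an implementation like the one described earlier in the thread, cpu would come back near the wall figure instead:

```c
#include <time.h>
#include <unistd.h>

/* Sleep for 'secs' seconds and report both elapsed wall-clock time
 * and elapsed processor time, in seconds. */
static void measure_sleep(unsigned secs, double *wall, double *cpu)
{
    time_t  w0 = time(NULL);
    clock_t c0 = clock();

    sleep(secs);

    *wall = difftime(time(NULL), w0);
    *cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
}
```

On a conforming POSIX system, measure_sleep(2, &w, &c) should leave w near 2.0 and c near 0.0, since the process consumes essentially no processor time while suspended.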
But since it's a non-standard function, I didn't see any
point in making even an educated guess when you could easily tell us
what it does.
I DID tell you what it did! I wrote a program that uses it, and
ran it, and reported the results!
As far as the "fine" documentation is concerned, give me a break...
void _sleep(unsigned seconds);
Description
Suspends execution for an interval (seconds).
With a call to _sleep, the current program is suspended from execution for
the number of seconds specified by the argument seconds. The interval is
accurate only to the nearest hundredth of a second or to the accuracy of
the operating system clock, whichever is less accurate.
Return Value
None.
---end of compiler package "man page"
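Given only that documentation, code meant to travel between systems usually hides the non-standard name behind a thin wrapper. A sketch; the POSIX sleep() fallback and the HAVE_VENDOR_SLEEP configuration macro are my assumptions, not anything the compiler vendor defines:

```c
#include <time.h>
#include <unistd.h>   /* POSIX sleep(); assumed fallback */

/* Portable whole-second suspend: use the vendor's _sleep() where it
 * exists, POSIX sleep() elsewhere.  HAVE_VENDOR_SLEEP is hypothetical
 * and would be set by the build configuration. */
static void suspend_seconds(unsigned seconds)
{
#ifdef HAVE_VENDOR_SLEEP
    _sleep(seconds);                /* vendor routine documented above */
#else
    while (seconds > 0)
        seconds = sleep(seconds);   /* re-sleep if interrupted by a signal */
#endif
}
```

The loop around sleep() matters: POSIX sleep() returns the unslept remainder when a signal interrupts it, so looping preserves the documented "suspend for the whole interval" behavior.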
Other than that, I'm only vaguely familiar with the internals of
the OS involved, and some other "sleep" functions for thread
control and so forth, so your guess is as good as mine...except,
I REALLY don't think it is...
The point is that, based on your description, buried somewhere within
the insults, your system's clock() function appears to be broken.
Never say die, eh?
Yes, the standard only requires an "approximation", but did you miss
the context: "best approximation"?
Again, who and how are you gonna sue over the words "best
approximation"? I already said I ain't suing because I LIKE the
way it works!
If your system's clock() function indicates that your program used 4
seconds of processor time while it was sleeping for 4 seconds and not
using the processor, then I can hardly imagine that that's "the
implementation's best approximation to the processor time used by the
program".
OK, thank God you don't have this compiler (and probably OS).
Again, I would point out that what you are asking for requires
a de facto "performance monitor" and actually counting cycles
in some way, and that's not very practical for "real life"...but I
will grant you, clock() (really the OS) counting the number
of seconds that a process is suspended is a VERY poor
"approximation"...
Of course, using the word "approximation" was sloppy to
begin with (for "standard" language), because it has led us
into an argument over the even more sloppy word "broken"...
A sin() function that always returns 0.0 would be a
similarly bad approximation; you might be able to argue that the
standard allows it, but it would unquestionably be broken.
Yeah, but that's not taking the information from an "outside
source" where the quality of information can't be "regulated".
So the whole point of this discussion is apparently that your
implementation has a broken, or at least non-standard, clock()
function.
Yup...never say die...if you say it enough times, somebody
somewhere will even believe the Earth is flat...
You claim that there's a good chance that your program would behave
the same way on a lot of systems. I believe you're mistaken. For one
thing, most systems aren't likely to have a function called '_sleep'.
Well, change _sleep() to sleep() or whatever and compile that
six-line bad boy on as many systems as you can, and see what
happens, and report back...it may very well behave more "correctly"
on a UNIX system, give it a shot...
Here's my version of your program.
Or write a new one!!!
It depends on the POSIX sleep()
function, but is otherwise standard C. Note that the standard says
only that clock_t is an arithmetic type, so I've allowed for all the
possibilities (floating, unsigned, and signed). I've also used the
standard CLOCKS_PER_SEC macro rather than CLK_TCK. And I've examined
the result of clock() again after performing some CPU-intensive
calculations. A decent optimizer could eliminate the calculations,
but I compiled the program without asking for optimization. If
necessary, I could have declared 'result' as volatile.
I observed, as the program ran, that each sleep(N) call took
approximately N seconds (I didn't time it precisely).
========================================================================
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <math.h>

static void show_clock(clock_t c)
{
    if ((clock_t)1 / 2 > (clock_t)0) {
        /* clock_t is floating-point */
        printf("%f", (double)c);
    }
    else if ((clock_t)-1 > (clock_t)0) {
        /* clock_t is unsigned */
        printf("%luU", (unsigned long)c);
    }
    else {
        /* clock_t is signed */
        printf("%ld", (long)c);
    }
}

static void do_stuff(void)
{
#define ITERATIONS 10000000
    long i;
    double result;
    for (i = 0; i < ITERATIONS; i++) {
        result = sin((double)i / ITERATIONS);
    }
}

int main(void)
{
    int inc;
    clock_t start, end;

    printf("CLOCKS_PER_SEC = ");
    show_clock(CLOCKS_PER_SEC);
    putchar('\n');
    for (inc = 1; inc < 5; inc++) {
        printf("Sleeping for %d seconds\n", inc);
        start = clock();
        sleep(inc);
        end = clock();
        printf("start = ");
        show_clock(start);
        printf(", end = ");
        show_clock(end);
        putchar('\n');
        printf("Slept for %f seconds of processor time\n",
               ((double)end - start) / CLOCKS_PER_SEC);
    }
    do_stuff();
    printf("After computations, clock() returns ");
    show_clock(clock());
    putchar('\n');
    return 0;
}
========================================================================
And here's the output I got on one system:
========================================================================
CLOCKS_PER_SEC = 1000000
Sleeping for 1 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 2 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 3 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
Sleeping for 4 seconds
start = 0, end = 0
Slept for 0.000000 seconds of processor time
After computations, clock() returns 940000
========================================================================
The amount of processor time consumed by each sleep() call was
approximately zero; it was not anywhere near the wall clock time
consumed. This was very different from the results you got.
Great! You've managed to prove EXACTLY NOTHING,
but you spent a lot of time doing it!
If you reply to this with another long screed full of insults, words
in all-caps, and repeated exclamation points, I will ignore you. If
you care to calm down and discuss technical issues, I'll be glad to
continue the discussion. It's up to you.
Well, I don't know if I met your decorum requirements, but
I know that I am unable to discuss certain important internal
considerations relating to the individual compilers and OSs
involved, so I really can't contribute anything more "intelligent"
in any event.
The remainder is just "semantics" and "logic", but most
importantly as used (abused?) to "defend your ego", and
since my posts were merely empirical and technical in nature
I really don't "have a dog in this hunt"...if you want to say
my compiler (and/or OS) is "broken" because its "approximation"
is a little TOO "approximate"...
I can agree with the following statements:
* The original poster was wasting his time to try to use
clock() in a "busy" loop to "stop execution" of a program
for a period of time; use some form of "sleep()" instead
* The "C" standard apparently wants clock() to in some
way return "processor time" used by a program, NOT
"wall clock time" (but you can't always git whut u want)
* clock() probably returns different values on different
systems with different compilers (and almost certainly returns
-1 for some)
* people who think that a "sleep" function on a modern
multi-tasking operating system performs a "busy-wait" because
of some interpretation of the "C" standard "requirement" for
"clock()" deserve all the abuse that can be heaped upon
their heads, 'specially if they state that they have some type
of superior technical "knowledge" that brought them to
that conclusion...
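The first bullet deserves a concrete illustration. A clock()-based busy loop pins a full core for the whole interval and, as this thread shows, depends on behavior the standard leaves loose; a sleep()-based delay does neither. A sketch assuming POSIX sleep() (both function names are mine):

```c
#include <time.h>
#include <unistd.h>

/* The pattern the original poster tried: spin until clock() has
 * advanced by 'secs' of processor time.  Burns 100% CPU for the
 * whole interval. */
static void delay_busy(double secs)
{
    clock_t stop = clock() + (clock_t)(secs * CLOCKS_PER_SEC);
    while (clock() < stop)
        ;                        /* busy-wait */
}

/* The better tool: let the scheduler suspend the process. */
static void delay_sleep(unsigned secs)
{
    while (secs > 0)
        secs = sleep(secs);      /* re-sleep if a signal interrupts */
}
```

Note that delay_busy "works" on a conforming system only because spinning consumes processor time at roughly the wall-clock rate; delay_sleep works everywhere POSIX sleep() exists and leaves the CPU free for other processes.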