Pausing/Waiting in C

Kwebway Konongo

Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause before
reading off the next event in the list. It has been some time since I last
did C, and that was the first K&R version! Is there a command to pause an
app for a period of time? All the commands I am familiar with specify
pauses in integer numbers of seconds, and what I would like is fractions
of a second, preferably milliseconds if possible.

TIA

Paul
 
Keith Thompson

Kwebway Konongo said:
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause before
reading off the next event in the list. It has been some time since I last
did C, and that was the first K&R version! Is there a command to pause an
app for a period of time? All the commands I am familiar with specify
pauses in integer numbers of seconds, and what I would like is fractions
of a second, preferably milliseconds if possible.

There is no good portable way to do this.

(The clock() function returns an indication of the amount of CPU time
your program has consumed. You might be tempted to write a loop that
executes until the result of the clock() function reaches a certain
value. Resist this temptation. Though it uses only standard C
features, it has at least two major drawbacks: it measures CPU time,
not wall clock time, and this kind of busy loop causes your program to
waste CPU time, possibly affecting other programs on the system.)

However, most operating systems will provide a good way to do this.
Ask in a newsgroup that's specific to whatever OS you're using, such
as comp.unix.programmer or comp.os.ms-windows.programmer.win32 -- but
see if you can find an answer in the newsgroup's FAQ first.
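(To give you the flavor of what such a system-specific answer looks
like: on a POSIX system -- an assumption, since you haven't said what
you're running -- a millisecond sleep is typically built on nanosleep():

    /* Sketch, not standard C: millisecond sleep via POSIX nanosleep().
       Assumes a POSIX system (_POSIX_C_SOURCE >= 199309L) so that
       <time.h> declares nanosleep() and struct timespec. */
    #include <time.h>

    int msleep(long ms)
    {
        struct timespec ts;
        ts.tv_sec  = ms / 1000;
        ts.tv_nsec = (ms % 1000) * 1000000L;
        return nanosleep(&ts, NULL);  /* 0 on success, -1 if interrupted */
    }

The details will differ on other systems, which is why the OS-specific
newsgroup is the right place to ask.)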
 
Victor Silva

Kwebway said:
Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause before
reading off the next event in the list. It has been some time since I last
did C, and that was the first K&R version! Is there a command to pause an
app for a period of time? All the commands I am familiar with specify
pauses in integer numbers of seconds, and what I would like is fractions
of a second, preferably milliseconds if possible.

TIA

Paul

Maybe you can use something like sleep().
 
user923005

Kwebway said:
Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause before
reading off the next event in the list. It has been some time since I last
did C, and that was the first K&R version! Is there a command to pause an
app for a period of time? All the commands I am familiar with specify
pauses in integer numbers of seconds, and what I would like is fractions
of a second, preferably milliseconds if possible.
From the C-FAQ:
19.37: How can I implement a delay, or time a user's response,
with sub-second resolution?

A: Unfortunately, there is no portable way. V7 Unix, and derived
systems, provided a fairly useful ftime() function with
resolution up to a millisecond, but it has disappeared from
System V and POSIX. Other routines you might look for on your
system include clock(), delay(), gettimeofday(), msleep(),
nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
usleep(). (A function called wait(), however, is at least under
Unix *not* what you want.) The select() and poll() calls (if
available) can be pressed into service to implement simple
delays. On MS-DOS machines, it is possible to reprogram the
system timer and timer interrupts.

Of these, only clock() is part of the ANSI Standard. The
difference between two calls to clock() gives elapsed execution
time, and may even have subsecond resolution, if CLOCKS_PER_SEC
is greater than 1. However, clock() gives elapsed processor time
used by the current program, which on a multitasking system may
differ considerably from real time.

If you're trying to implement a delay and all you have available
is a time-reporting function, you can implement a CPU-intensive
busy-wait, but this is only an option on a single-user, single-
tasking machine as it is terribly antisocial to any other
processes. Under a multitasking operating system, be sure to
use a call which puts your process to sleep for the duration,
such as sleep() or select(), or pause() in conjunction with
alarm() or setitimer().

For really brief delays, it's tempting to use a do-nothing loop
like

    long int i;
    for(i = 0; i < 1000000; i++)
        ;

but resist this temptation if at all possible! For one thing,
your carefully-calculated delay loops will stop working properly
next month when a faster processor comes out. Perhaps worse, a
clever compiler may notice that the loop does nothing and
optimize it away completely.

References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8, 215-6;
POSIX Sec. 4.5.2.
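
To make the select() suggestion concrete: on a Unix-like system (an
assumption -- this is not standard C), a simple sub-second delay can
be built like this:

    /* Sketch: sub-second delay by calling select() with no file
       descriptors, only a timeout.  Unix-specific, not standard C. */
    #include <sys/select.h>

    void delay_ms(long ms)
    {
        struct timeval tv;
        tv.tv_sec  = ms / 1000;
        tv.tv_usec = (ms % 1000) * 1000L;
        (void)select(0, NULL, NULL, NULL, &tv);  /* sleeps until timeout */
    }

Unlike a busy loop, this puts the process to sleep for the duration.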
 
Keith Thompson

Victor Silva said:
Maybe you can use something like sleep().

Maybe he can, but there is no sleep() function in the C standard
library (and the system-specific sleep() functions I'm familiar with
don't meet his requirements).

If the phrase "something like sleep()" is intended to exclude sleep()
itself, then you're probably right, but it's still system-specific.
(Hint: a system's documentation for sleep() might have links to other
similar functions.)
 
Barry

user923005 said:
19.37: How can I implement a delay, or time a user's response,
with sub-second resolution?
[snip]

Most of your response has nothing to do with C. You should have
just referred the OP to an appropriate newsgroup.

Instead you posted a response irrelevant to C, one that lacks many
of the proper solutions.
 
Nelu

Keith Thompson said:
Maybe he can, but there is no sleep() function in the C standard
library (and the system-specific sleep() functions I'm familiar with
don't meet his requirements).

If the phrase "something like sleep()" is intended to exclude sleep()
itself, then you're probably right, but it's still system-specific.
(Hint: a system's documentation for sleep() might have links to other
similar functions.)

Is it ok to use stdin like this:

    #include <stdio.h>

    int abuseSTDIN(void) {
        char a[2];
        /* push back two characters, then read one of them again */
        if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
            return EOF;
        }
        if (NULL == fgets(a, 2, stdin)) {
            return EOF;
        }
        return !EOF;
    }

?

If yes, this can be run for a number of seconds, in a loop with calls
to mktime and difftime to get some kind of sub-second resolution (like
CLOCKS_PER_SEC). That approximation can be used later to implement a
kind of sleep function using milliseconds (unless the function returns
EOF, in which case it may not work).
Also, by going through the I/O system it may be more CPU-friendly,
although it will by no means replace a system-specific sleep function.
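For the calibration I mean something like this (a rough sketch only;
abuseSTDIN() is the function above, and the result is a very crude
estimate):

    /* Sketch of the calibration idea: count how many calls to
       abuseSTDIN() (defined above) fit into a few wall-clock seconds,
       measured with time() and difftime(). */
    #include <time.h>

    long calls_per_sec(int seconds)
    {
        long calls = 0;
        time_t start = time(NULL);

        /* loop until 'seconds' have elapsed on the wall clock */
        while (difftime(time(NULL), start) < seconds) {
            if (abuseSTDIN() == EOF)
                return -1;          /* the stdin trick failed */
            calls++;
        }
        return calls / seconds;     /* crude CALLS_PER_SEC estimate */
    }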
 
David T. Ashley

Kwebway Konongo said:
Hi everyone,
I'm developing an application in C; basically a linked list, with a series
of "events" to be popped off, separated by a command to pause before
reading off the next event in the list. It has been some time since I last
did C, and that was the first K&R version! Is there a command to pause an
app for a period of time? All the commands I am familiar with specify
pauses in integer numbers of seconds, and what I would like is fractions
of a second, preferably milliseconds if possible.

OS-dependent question.

Any form of spin-wait is bad programming practice (but I suppose it would
work).

In Linux it is usleep():

http://www.hmug.org/man/3/usleep.php

In Windows, the native API call is Sleep(), which takes a count of
milliseconds. Since Microsoft tries to support straightforward Unix
applications, there is also a good chance you'll find sleep() or usleep()
in one compatibility layer or another.
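
If you need both, a thin wrapper is the usual approach -- a sketch,
assuming either Win32 or a POSIX-ish system:

    /* Sketch: one msleep() for both worlds.  Win32's Sleep() takes
       milliseconds; POSIX usleep() takes microseconds (and some
       systems reject a full second or more, so loop or use
       nanosleep() for long waits). */
    #ifdef _WIN32
    #include <windows.h>
    static void msleep(unsigned long ms) { Sleep(ms); }
    #else
    #include <unistd.h>
    static void msleep(unsigned long ms) { usleep(ms * 1000UL); }
    #endif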
 
Keith Thompson

Nelu said:
Keith Thompson said:
Maybe he can, but there is no sleep() function in the C standard
library (and the system-specific sleep() functions I'm familiar with
don't meet his requirements).

If the phrase "something like sleep()" is intended to exclude sleep()
itself, then you're probably right, but it's still system-specific.
(Hint: a system's documentation for sleep() might have links to other
similar functions.)

Is it ok to use stdin like this:

    #include <stdio.h>

    int abuseSTDIN(void) {
        char a[2];
        /* push back two characters, then read one of them again */
        if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
            return EOF;
        }
        if (NULL == fgets(a, 2, stdin)) {
            return EOF;
        }
        return !EOF;
    }

?

Is it ok? I'd say definitely not (which isn't *necessarily* meant to
imply that it wouldn't work).

The standard guarantees only one character of pushback. I suppose you
could ungetc() a single '\n' character and read it back with fgets().

What is the result supposed to indicate? Since EOF is non-zero, !EOF
is just 0.
If yes, this can be run for a number of seconds, in a loop with calls
to mktime and difftime to get some kind of sub-second resolution (like
CLOCKS_PER_SEC). That approximation can be used later to implement a
kind of sleep function using milliseconds (unless the function returns
EOF, in which case it may not work).
Also, by going through the I/O system it may be more CPU-friendly,
although it will by no means replace a system-specific sleep function.

I think your idea is to execute this function a large number of times,
checking the value of time() before and after the loop, and using the
results for calibration, to estimate the time the function takes to
execute. I don't see how mktime() applies here. There's no guarantee
that the function will take the same time to execute each time you
call it.

Pushing back a character with ungetc() and then reading it with
fgets() is not likely to cause any physical I/O to take place, so this
method is likely to be as antisocially CPU-intensive as any other busy
loop.
 
Nelu

Keith Thompson said:
[snip]

Is it ok? I'd say definitely not (which isn't *necessarily* meant to
imply that it wouldn't work).

The standard guarantees only one character of pushback. I suppose you
could ungetc() a single '\n' character and read it back with fgets().

I wasn't sure whether pushing '\n' back to stdin would make fgets
return.
What is the result supposed to indicate? Since EOF is non-zero, !EOF
is just 0.

Just EOF if it failed. I could've used 0, but I thought it was more
suggestive this way.
I think your idea is to execute this function a large number of times,
checking the value of time() before and after the loop, and using the
results for calibration, to estimate the time the function takes to
execute. I don't see how mktime() applies here. There's no guarantee
that the function will take the same time to execute each time you
call it.

I meant to say time(). I know there are no guarantees; that's why I
said approximation.
Pushing back a character with ungetc() and then reading it with
fgets() is not likely to cause any physical I/O to take place, so this
method is likely to be as antisocially CPU-intensive as any other busy
loop.

Yes, you're right.
 
jaysome

Keith Thompson said:
[snip]

However, most operating systems will provide a good way to do this.
Ask in a newsgroup that's specific to whatever OS you're using, such
as comp.unix.programmer or comp.os.ms-windows.programmer.win32 -- but
see if you can find an answer in the newsgroup's FAQ first.

s/most operating systems/most implementations/

The OP mentioned nothing about using an OS, and not using an OS at all
is entirely conformant to the C Standard. Many, if not most,
implementations targeted at embedded processors, where there is no OS
running, provide good ways to do this.

Best regards
 
Barry

Nelu said:
Keith Thompson said:
[snip]
I think your idea is to execute this function a large number of times,
checking the value of time() before and after the loop, and using the
results for calibration, to estimate the time the function takes to
execute. I don't see how mktime() applies here. There's no guarantee
that the function will take the same time to execute each time you
call it.

I meant to say time(). I know there are no guarantees; that's why I
said approximation.

The value returned by time() on most(?) implementations is in seconds.
Half of your statement seems to be about time(), the other half about
clock().
 
Nelu

Barry said:
The value returned by time() on most(?) implementations is in seconds.
Half of your statement seems to be about time(), the other half about
clock().

No, I was talking about counting how many times the function gets called
in a number of seconds. It will likely be called many more times than
the number of seconds, so you get sub-second approximations for a single
call or a number of calls. That gives you something similar to
CLOCKS_PER_SEC (call it CALLS_PER_SEC), although it's just a very bad
approximation.
 
Trev

David said:
OS-dependent question.

Any form of spin-wait is bad programming practice (but I suppose it would
work).

Sounds like the OP is asking about something I'm trying to do...

Incidentally, why is "spin-wait" bad? In my case, I'm trying to write a
pseudo-realtime application on a dedicated server, waiting out the
latency until the next event is due to occur.
Sadly, the boss doesn't realise that my realtime app development
experience is precisely zero!
 
David T. Ashley

Trev said:
Sounds like the OP is asking about something I'm trying to do...

Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
pseudo-realtime application on a dedicated server, with latencies until
the next event is due to occur.
Sadly, the boss doesn't realise that my realtime app development
experience is precisely zero!

Spin-wait is bad on a server because it chews up (i.e., consumes to no
productive purpose) CPU bandwidth that would be better returned to the
operating system.

For example, on Linux, there might be a daemon that needs to wait for 10
minutes. "sleep(600);" allows the OS to do other things for 10 minutes. A
spin-wait will consume a large fraction of CPU bandwidth--potentially as
much as 100%--to do nothing but repeatedly check the time. On a shared
system -- and any server is shared, at least between your process and the
operating system -- it is horribly inefficient.

Now, on an embedded system, spin-wait may be valid. In fact, the most
common software architecture for small systems is just to spin-wait until
the next time tick and then do the things you need to do. That is OK
because you're the only process on the system, and there is nobody else who
can make better use of the CPU.
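
In sketch form, that architecture is just the following (tick_count is
hypothetical here, standing in for a counter that a periodic timer
interrupt on the target would increment):

    /* Sketch: classic small-system main loop, spin-waiting on a tick.
       tick_count is hypothetical; on real hardware it would be a
       volatile counter incremented by a periodic timer interrupt. */
    extern volatile unsigned long tick_count;

    int main(void)
    {
        unsigned long last_tick = tick_count;
        for (;;) {
            while (tick_count == last_tick)
                ;                   /* spin until the next tick */
            last_tick = tick_count;
            /* ... do this tick's work: poll inputs, update outputs ... */
        }
    }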

If you have any further questions or observations, please write me directly
at (e-mail address removed) and answer my SPAM filtering system's automatic reply. I
might know one or two things about small embedded systems.
 
Barry

Nelu said:
No, I was talking about counting how many times the function gets called
in a number of seconds. It will likely be called a lot more times than
the number of seconds so you get under a second approximations for a
call or a number of calls and it will give you something similar to
CLOCKS_PER_SEC, you can name it CALLS_PER_SEC, although it's just a very
bad approximation.

I only read your post and gave you too much credit. Your explanation is
worse than what I thought you were doing.
 
Barry

David T. Ashley said:
Spin-wait is bad on a server because it chews up (i.e. consumes towards no
productive purpose) CPU bandwidth that would best be returned to the
operating system.

For example, on Linux, there might be a daemon that needs to wait for 10
minutes. "sleep(600);" allows the OS to do other things for 10 minutes. A
spin-wait will consume a large fraction of CPU bandwidth--potentially as
much as 100%--to do nothing but repeatedly check the time. On a shared
system -- and any server is shared, at least between your process and the
operating system -- it is horribly inefficient.

Now, on an embedded system, spin-wait may be valid. In fact, the most
common software architecture for small systems is just to spin-wait until
the next time tick and then do the things you need to do. That is OK
because you're the only process on the system, and there is nobody else who
can make better use of the CPU.

Of course, we have gotten so far off topic for clc that it doesn't
matter. But every embedded system I have worked on (and you have used
all of them, directly or indirectly :)) also responds to hardware
interrupts.
 
Default User

Barry said:
[snip]
Most of your response has nothing to do with C. You should have
just referred the OP to an appropriate newsgroup.

He gave the answer that's in the comp.lang.c FAQ on the matter. Try to
pay attention.




Brian
 
