Pausing/Waiting in C

Discussion in 'C Programming' started by Kwebway Konongo, Jan 10, 2007.

  1. Hi everyone,
    I'm developing an application in C; basically a linked list, with a series
    of "events" to be popped off, separated by a command to pause reading off
    the next event in the list. It has been sometime since I last did C, and
    that was the first K&R version! Is there a command to pause an app for
    a period of time, as all the commands I am familiar with specify pauses
    for integer numbers of seconds, and what I would like is fractions of a
    second, preferably milliseconds if possible

    TIA

    Paul

    --
    ----
    Home: http://www.paullee.com
    Woes: http://www.dr_paul_lee.btinternet.co.uk/zzq.shtml
     
    Kwebway Konongo, Jan 10, 2007
    #1

  2. Kwebway Konongo <> writes:
    > I'm developing an application in C; basically a linked list, with a series
    > of "events" to be popped off, separated by a command to pause reading off
    > the next event in the list. It has been sometime since I last did C, and
    > that was the first K&R version! Is there a command to pause an app for
    > a period of time, as all the commands I am familiar with specify pauses
    > for integer numbers of seconds, and what I would like is fractions of a
    > second, preferably milliseconds if possible


    There is no good portable way to do this.

    (The clock() function returns an indication of the amount of CPU time
    your program has consumed. You might be tempted to write a loop that
    executes until the result of the clock() function reaches a certain
    value. Resist this temptation. Though it uses only standard C
    features, it has at least two major drawbacks: it measures CPU time,
    not wall clock time, and this kind of busy loop causes your program to
    waste CPU time, possibly affecting other programs on the system.)

    However, most operating systems will provide a good way to do this.
    Ask in a newsgroup that's specific to whatever OS you're using, such
    as comp.unix.programmer or comp.os.ms-windows.programmer.win32 -- but
    see if you can find an answer in the newsgroup's FAQ first.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, Jan 11, 2007
    #2
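(As a concrete sketch of the OS-specific route Keith suggests, assuming a POSIX system -- the helper name msleep() is an invention here, not something from the thread:)

```c
#define _POSIX_C_SOURCE 199309L
#include <errno.h>
#include <time.h>

/* Millisecond sleep built on POSIX nanosleep(); not portable standard C.
   Returns 0 on success, -1 on error. */
int msleep(long ms)
{
    struct timespec req;
    req.tv_sec = ms / 1000;
    req.tv_nsec = (ms % 1000) * 1000000L;

    /* Restart if interrupted by a signal, so the full delay elapses. */
    while (nanosleep(&req, &req) == -1) {
        if (errno != EINTR)
            return -1;
    }
    return 0;
}
```

nanosleep() updates the remaining time in its second argument, which is why the same struct can be passed back in after an EINTR.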

  3. Kwebway Konongo

    Victor Silva Guest

    Kwebway Konongo wrote:

    > Hi everyone,
    > I'm developing an application in C; basically a linked list, with a series
    > of "events" to be popped off, separated by a command to pause reading off
    > the next event in the list. It has been sometime since I last did C, and
    > that was the first K&R version! Is there a command to pause an app for
    > a period of time, as all the commands I am familiar with specify pauses
    > for integer numbers of seconds, and what I would like is fractions of a
    > second, preferably milliseconds if possible
    >
    > TIA
    >
    > Paul
    >


    Maybe you can use something like sleep().
     
    Victor Silva, Jan 11, 2007
    #3
  4. Kwebway Konongo

    user923005 Guest

    Kwebway Konongo wrote:
    > Hi everyone,
    > I'm developing an application in C; basically a linked list, with a series
    > of "events" to be popped off, separated by a command to pause reading off
    > the next event in the list. It has been sometime since I last did C, and
    > that was the first K&R version! Is there a command to pause an app for
    > a period of time, as all the commands I am familiar with specify pauses
    > for integer numbers of seconds, and what I would like is fractions of a
    > second, preferably milliseconds if possible


From the C-FAQ:

19.37: How can I implement a delay, or time a user's response, with
sub-second resolution?

    A: Unfortunately, there is no portable way. V7 Unix, and derived
    systems, provided a fairly useful ftime() function with
    resolution up to a millisecond, but it has disappeared from
    System V and POSIX. Other routines you might look for on your
    system include clock(), delay(), gettimeofday(), msleep(),
    nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
    usleep(). (A function called wait(), however, is at least under
    Unix *not* what you want.) The select() and poll() calls (if
    available) can be pressed into service to implement simple
    delays. On MS-DOS machines, it is possible to reprogram the
    system timer and timer interrupts.

    Of these, only clock() is part of the ANSI Standard. The
    difference between two calls to clock() gives elapsed execution
    time, and may even have subsecond resolution, if CLOCKS_PER_SEC
    is greater than 1. However, clock() gives elapsed processor time
    used by the current program, which on a multitasking system may
    differ considerably from real time.

    If you're trying to implement a delay and all you have available
    is a time-reporting function, you can implement a CPU-intensive
    busy-wait, but this is only an option on a single-user, single-
    tasking machine as it is terribly antisocial to any other
    processes. Under a multitasking operating system, be sure to
    use a call which puts your process to sleep for the duration,
    such as sleep() or select(), or pause() in conjunction with
    alarm() or setitimer().

    For really brief delays, it's tempting to use a do-nothing loop
    like

long int i;
for (i = 0; i < 1000000; i++)
    ;

    but resist this temptation if at all possible! For one thing,
    your carefully-calculated delay loops will stop working properly
    next month when a faster processor comes out. Perhaps worse, a
    clever compiler may notice that the loop does nothing and
    optimize it away completely.

References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8, 215-6;
POSIX Sec. 4.5.2.
     
    user923005, Jan 11, 2007
    #4
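(The select() route the FAQ mentions can be sketched as follows, assuming a Unix-like system with <sys/select.h>; the helper name delay_ms() is an invention here:)

```c
#include <stddef.h>
#include <sys/select.h>

/* Sub-second delay via select(), as the FAQ suggests; Unix-specific,
   not standard C. Returns 0 when the timeout expires normally. */
int delay_ms(long ms)
{
    struct timeval tv;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;

    /* With nfds == 0 and all fd sets NULL, select() watches nothing
       and simply blocks until the timeout expires. */
    return select(0, NULL, NULL, NULL, &tv);
}
```

Unlike a busy loop, the process is asleep in the kernel for the duration, so no CPU time is wasted.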
  5. Victor Silva <> writes:
    > Kwebway Konongo wrote:
    >> I'm developing an application in C; basically a linked list, with a series
    >> of "events" to be popped off, separated by a command to pause reading off
    >> the next event in the list. It has been sometime since I last did C, and
    >> that was the first K&R version! Is there a command to pause an app for
    >> a period of time, as all the commands I am familiar with specify pauses
    >> for integer numbers of seconds, and what I would like is fractions of a
    >> second, preferably milliseconds if possible

    >
    > Maybe you can use something like sleep().


    Maybe he can, but there is no sleep() function in the C standard
    library (and the system-specific sleep() functions I'm familiar with
    don't meet his requirements).

    If the phrase "something like sleep()" is intended to exclude sleep()
    itself, then you're probably right, but it's still system-specific.
    (Hint: a system's documentation for sleep() might have links to other
    similar functions.)

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, Jan 11, 2007
    #5
  6. Kwebway Konongo

    Barry Guest

    "user923005" <> wrote in message
    news:...
    > Kwebway Konongo wrote:
    > > Hi everyone,
> > I'm developing an application in C; basically a linked list, with a series
> > of "events" to be popped off, separated by a command to pause reading off
> > the next event in the list. It has been sometime since I last did C, and
> > that was the first K&R version! Is there a command to pause an app for
> > a period of time, as all the commands I am familiar with specify pauses
> > for integer numbers of seconds, and what I would like is fractions of a
> > second, preferably milliseconds if possible

    >
> From the C-FAQ:

> 19.37: How can I implement a delay, or time a user's response, with
> sub-second resolution?
    >
    > A: Unfortunately, there is no portable way. V7 Unix, and derived
    > systems, provided a fairly useful ftime() function with
    > resolution up to a millisecond, but it has disappeared from
    > System V and POSIX. Other routines you might look for on your
    > system include clock(), delay(), gettimeofday(), msleep(),
    > nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
    > usleep(). (A function called wait(), however, is at least under
    > Unix *not* what you want.) The select() and poll() calls (if
    > available) can be pressed into service to implement simple
    > delays. On MS-DOS machines, it is possible to reprogram the
    > system timer and timer interrupts.
    >
    > Of these, only clock() is part of the ANSI Standard. The
    > difference between two calls to clock() gives elapsed execution
    > time, and may even have subsecond resolution, if CLOCKS_PER_SEC
    > is greater than 1. However, clock() gives elapsed processor time
    > used by the current program, which on a multitasking system may
    > differ considerably from real time.
    >
    > If you're trying to implement a delay and all you have available
    > is a time-reporting function, you can implement a CPU-intensive
    > busy-wait, but this is only an option on a single-user, single-
    > tasking machine as it is terribly antisocial to any other
    > processes. Under a multitasking operating system, be sure to
    > use a call which puts your process to sleep for the duration,
    > such as sleep() or select(), or pause() in conjunction with
    > alarm() or setitimer().
    >
    > For really brief delays, it's tempting to use a do-nothing loop
    > like
    >
> long int i;
> for (i = 0; i < 1000000; i++)
>     ;
    >
    > but resist this temptation if at all possible! For one thing,
    > your carefully-calculated delay loops will stop working properly
    > next month when a faster processor comes out. Perhaps worse, a
    > clever compiler may notice that the loop does nothing and
    > optimize it away completely.
    >
> References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8, 215-6;
> POSIX Sec. 4.5.2.
    >


Most of your response has nothing to do with C. You should have
just referred the OP to an appropriate newsgroup.

Instead you posted a response irrelevant to C, and lacking
many proper solutions.
     
    Barry, Jan 11, 2007
    #6
  7. Kwebway Konongo

    Nelu Guest

    Keith Thompson <> wrote:
    > Victor Silva <> writes:
    >> Kwebway Konongo wrote:
    >>> I'm developing an application in C; basically a linked list, with a series
    >>> of "events" to be popped off, separated by a command to pause reading off
    >>> the next event in the list. It has been sometime since I last did C, and
    >>> that was the first K&R version! Is there a command to pause an app for
    >>> a period of time, as all the commands I am familiar with specify pauses
    >>> for integer numbers of seconds, and what I would like is fractions of a
    >>> second, preferably milliseconds if possible

    >>
    >> Maybe you can use something like sleep().

    >
    > Maybe he can, but there is no sleep() function in the C standard
    > library (and the system-specific sleep() functions I'm familiar with
    > don't meet his requirements).
    >
    > If the phrase "something like sleep()" is intended to exclude sleep()
    > itself, then you're probably right, but it's still system-specific.
    > (Hint: a system's documentation for sleep() might have links to other
    > similar functions.)
    >


    Is it ok to use stdin like this:

int abuseSTDIN() {
    char a[2];
    if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
        return EOF;
    }
    if (NULL == fgets(a, 2, stdin)) {
        return EOF;
    }
    return !EOF;
}

    ?

    If yes, this can be run for a number of seconds, in a loop with calls
    to mktime and difftime to get some kind of sub-second resolution (like
CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    a sleep function using milliseconds later (unless the function returns
    EOF in which case the function may not work).
    Also, by using the I/O system it may be more CPU friendly although
    it will by no means replace a system-specific sleep function.

    --
    Ioan - Ciprian Tandau
    tandau _at_ freeshell _dot_ org (hope it's not too late)
    (... and that it still works...)
     
    Nelu, Jan 11, 2007
    #7
  8. "Kwebway Konongo" <> wrote in message
    news:...
    > Hi everyone,
    > I'm developing an application in C; basically a linked list, with a series
    > of "events" to be popped off, separated by a command to pause reading off
    > the next event in the list. It has been sometime since I last did C, and
    > that was the first K&R version! Is there a command to pause an app for
    > a period of time, as all the commands I am familiar with specify pauses
    > for integer numbers of seconds, and what I would like is fractions of a
    > second, preferably milliseconds if possible


    OS-dependent question.

    Any form of spin-wait is bad programming practice (but I suppose it would
    work).

    In Linux it is usleep():

    http://www.hmug.org/man/3/usleep.php

In Windows, the equivalent Win32 API call is Sleep() (capital S),
which takes its delay in milliseconds. Since Microsoft also tries to
support straightforward Unix ports, you may find usleep()-style
functions in some Windows C runtimes as well.
     
    David T. Ashley, Jan 11, 2007
    #8
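(A rough cross-platform wrapper along the lines of this post -- assuming Win32 Sleep(), which counts milliseconds, and POSIX usleep(), which counts microseconds; the name pause_ms() is an invention here:)

```c
#define _XOPEN_SOURCE 500

#ifdef _WIN32
#include <windows.h>
/* Win32 Sleep() takes milliseconds and returns nothing. */
static int pause_ms(unsigned long ms)
{
    Sleep(ms);
    return 0;
}
#else
#include <unistd.h>
/* POSIX usleep() takes microseconds; returns 0 on success. */
static int pause_ms(unsigned long ms)
{
    return usleep(ms * 1000UL);
}
#endif
```

Both branches put the process to sleep rather than spinning, so either is safe on a shared system.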
  9. Nelu <> writes:
    > Keith Thompson <> wrote:
    >> Victor Silva <> writes:
    >>> Kwebway Konongo wrote:
    >>>> I'm developing an application in C; basically a linked list, with a series
    >>>> of "events" to be popped off, separated by a command to pause reading off
    >>>> the next event in the list. It has been sometime since I last did C, and
    >>>> that was the first K&R version! Is there a command to pause an app for
    >>>> a period of time, as all the commands I am familiar with specify pauses
    >>>> for integer numbers of seconds, and what I would like is fractions of a
    >>>> second, preferably milliseconds if possible
    >>>
    >>> Maybe you can use something like sleep().

    >>
    >> Maybe he can, but there is no sleep() function in the C standard
    >> library (and the system-specific sleep() functions I'm familiar with
    >> don't meet his requirements).
    >>
    >> If the phrase "something like sleep()" is intended to exclude sleep()
    >> itself, then you're probably right, but it's still system-specific.
    >> (Hint: a system's documentation for sleep() might have links to other
    >> similar functions.)
    >>

    >
    > Is it ok to use stdin like this:
    >
> int abuseSTDIN() {
>     char a[2];
>     if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
>         return EOF;
>     }
>     if (NULL == fgets(a, 2, stdin)) {
>         return EOF;
>     }
>     return !EOF;
> }
    >
    > ?


    Is it ok? I'd say definitely not (which isn't *necessarily* meant to
    imply that it wouldn't work).

    The standard guarantees only one character of pushback. I suppose you
    could ungetc() a single '\n' character and read it back with fgets().

    What is the result supposed to indicate? Since EOF is non-zero, !EOF
    is just 0.

    > If yes, this can be run for a number of seconds, in a loop with calls
    > to mktime and difftime to get some kind of sub-second resolution (like
> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    > a sleep function using milliseconds later (unless the function returns
    > EOF in which case the function may not work).
    > Also, by using the I/O system it may be more CPU friendly although
    > it will by no means replace a system-specific sleep function.


    I think your idea is to execute this function a large number of times,
    checking the value of time() before and after the loop, and using the
    results for calibration, to estimate the time the function takes to
    execute. I don't see how mktime() applies here. There's no guarantee
    that the function will take the same time to execute each time you
    call it.

    Pushing back a character with ungetc() and then reading it with
    fgets() is not likely to cause any physical I/O to take place, so this
    method is likely to be as antisocially CPU-intensive as any other busy
    loop.

    --
    Keith Thompson (The_Other_Keith) <http://www.ghoti.net/~kst>
    San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
    We must do something. This is something. Therefore, we must do this.
     
    Keith Thompson, Jan 11, 2007
    #9
  10. Kwebway Konongo

    Nelu Guest

    Keith Thompson <> wrote:
    > Nelu <> writes:
    >> Keith Thompson <> wrote:
    >>> Victor Silva <> writes:
    >>>> Kwebway Konongo wrote:
    >>>>> I'm developing an application in C; basically a linked list, with a series
    >>>>> of "events" to be popped off, separated by a command to pause reading off
    >>>>> the next event in the list. It has been sometime since I last did C, and
    >>>>> that was the first K&R version! Is there a command to pause an app for
    >>>>> a period of time, as all the commands I am familiar with specify pauses
    >>>>> for integer numbers of seconds, and what I would like is fractions of a
    >>>>> second, preferably milliseconds if possible
    >>>>
    >>>> Maybe you can use something like sleep().
    >>>
    >>> Maybe he can, but there is no sleep() function in the C standard
    >>> library (and the system-specific sleep() functions I'm familiar with
    >>> don't meet his requirements).
    >>>
    >>> If the phrase "something like sleep()" is intended to exclude sleep()
    >>> itself, then you're probably right, but it's still system-specific.
    >>> (Hint: a system's documentation for sleep() might have links to other
    >>> similar functions.)
    >>>

    >>
    >> Is it ok to use stdin like this:
    >>
>> int abuseSTDIN() {
>>     char a[2];
>>     if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
>>         return EOF;
>>     }
>>     if (NULL == fgets(a, 2, stdin)) {
>>         return EOF;
>>     }
>>     return !EOF;
>> }
    >>
    >> ?

    >
    > Is it ok? I'd say definitely not (which isn't *necessarily* meant to
    > imply that it wouldn't work).
    >
    > The standard guarantees only one character of pushback. I suppose you
    > could ungetc() a single '\n' character and read it back with fgets().


    I wasn't sure whether pushing '\n' back to stdin would make fgets
    return.

    >
    > What is the result supposed to indicate? Since EOF is non-zero, !EOF
    > is just 0.
    >


    Just EOF if it failed. I could've used 0, I thought it was more
    suggestive this way.

    >> If yes, this can be run for a number of seconds, in a loop with calls
    >> to mktime and difftime to get some kind of sub-second resolution (like
>> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    >> a sleep function using milliseconds later (unless the function returns
    >> EOF in which case the function may not work).
    >> Also, by using the I/O system it may be more CPU friendly although
    >> it will by no means replace a system-specific sleep function.

    >
    > I think your idea is to execute this function a large number of times,
    > checking the value of time() before and after the loop, and using the
    > results for calibration, to estimate the time the function takes to
    > execute. I don't see how mktime() applies here. There's no guarantee
    > that the function will take the same time to execute each time you
    > call it.
    >


    I meant to say time(). I know there are no guarantees that's why I said
    approximation.

    > Pushing back a character with ungetc() and then reading it with
    > fgets() is not likely to cause any physical I/O to take place, so this
    > method is likely to be as antisocially CPU-intensive as any other busy
    > loop.
    >


    Yes, you're right.

    --
    Ioan - Ciprian Tandau
    tandau _at_ freeshell _dot_ org (hope it's not too late)
    (... and that it still works...)
     
    Nelu, Jan 11, 2007
    #10
  11. "Nelu" <>
    <snip>
    > int abuseSTDIN()

    No, do not abuse stdin. LS
     
    Lane Straatman, Jan 11, 2007
    #11
  12. Kwebway Konongo

    jaysome Guest

    On Wed, 10 Jan 2007 16:06:01 -0800, Keith Thompson <>
    wrote:

    >Kwebway Konongo <> writes:
    >> I'm developing an application in C; basically a linked list, with a series
    >> of "events" to be popped off, separated by a command to pause reading off
    >> the next event in the list. It has been sometime since I last did C, and
    >> that was the first K&R version! Is there a command to pause an app for
    >> a period of time, as all the commands I am familiar with specify pauses
    >> for integer numbers of seconds, and what I would like is fractions of a
    >> second, preferably milliseconds if possible

    >
    >There is no good portable way to do this.
    >
    >(The clock() function returns an indication of the amount of CPU time
    >your program has consumed. You might be tempted to write a loop that
    >executes until the result of the clock() function reaches a certain
    >value. Resist this temptation. Though it uses only standard C
    >features, it has at least two major drawbacks: it measures CPU time,
    >not wall clock time, and this kind of busy loop causes your program to
    >waste CPU time, possibly affecting other programs on the system.)
    >
    >However, most operating systems will provide a good way to do this.
    >Ask in a newsgroup that's specific to whatever OS you're using, such
    >as comp.unix.programmer or comp.os.ms-windows.programmer.win32 -- but
    >see if you can find an answer in the newsgroup's FAQ first.


    s/most operating systems/most implementations/

The OP mentioned nothing about using an OS, and running without an OS
is entirely conformant to the C Standard. Many, if not most,
implementations targeted at embedded processors with no OS running
still provide good ways to do this.

    Best regards
    --
    jay
     
    jaysome, Jan 11, 2007
    #12
  13. Kwebway Konongo

    Barry Guest

    "Nelu" <> wrote in message
    news:...
    > Keith Thompson <> wrote:
    > > Nelu <> writes:
    > >> Keith Thompson <> wrote:
    > >>> Victor Silva <> writes:
    > >>>> Kwebway Konongo wrote:
> >>>>> I'm developing an application in C; basically a linked list, with a series
> >>>>> of "events" to be popped off, separated by a command to pause reading off
> >>>>> the next event in the list. It has been sometime since I last did C, and
> >>>>> that was the first K&R version! Is there a command to pause an app for
> >>>>> a period of time, as all the commands I am familiar with specify pauses
> >>>>> for integer numbers of seconds, and what I would like is fractions of a
> >>>>> second, preferably milliseconds if possible
    > >>>>
    > >>>> Maybe you can use something like sleep().
    > >>>
    > >>> Maybe he can, but there is no sleep() function in the C standard
    > >>> library (and the system-specific sleep() functions I'm familiar with
    > >>> don't meet his requirements).
    > >>>
    > >>> If the phrase "something like sleep()" is intended to exclude sleep()
    > >>> itself, then you're probably right, but it's still system-specific.
    > >>> (Hint: a system's documentation for sleep() might have links to other
    > >>> similar functions.)
    > >>>
    > >>
    > >> Is it ok to use stdin like this:
    > >>
> >> int abuseSTDIN() {
> >>     char a[2];
> >>     if (EOF == ungetc('\n', stdin) || EOF == ungetc('a', stdin)) {
> >>         return EOF;
> >>     }
> >>     if (NULL == fgets(a, 2, stdin)) {
> >>         return EOF;
> >>     }
> >>     return !EOF;
> >> }
    > >>
    > >> ?

    > >
    > > Is it ok? I'd say definitely not (which isn't *necessarily* meant to
    > > imply that it wouldn't work).
    > >
    > > The standard guarantees only one character of pushback. I suppose you
    > > could ungetc() a single '\n' character and read it back with fgets().

    >
    > I wasn't sure whether pushing '\n' back to stdin would make fgets
    > return.
    >
    > >
    > > What is the result supposed to indicate? Since EOF is non-zero, !EOF
    > > is just 0.
    > >

    >
    > Just EOF if it failed. I could've used 0, I thought it was more
    > suggestive this way.
    >
    > >> If yes, this can be run for a number of seconds, in a loop with calls
    > >> to mktime and difftime to get some kind of sub-second resolution (like
> >> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    > >> a sleep function using milliseconds later (unless the function returns
    > >> EOF in which case the function may not work).
    > >> Also, by using the I/O system it may be more CPU friendly although
    > >> it will by no means replace a system-specific sleep function.

    > >
    > > I think your idea is to execute this function a large number of times,
    > > checking the value of time() before and after the loop, and using the
    > > results for calibration, to estimate the time the function takes to
    > > execute. I don't see how mktime() applies here. There's no guarantee
    > > that the function will take the same time to execute each time you
    > > call it.
    > >

    >
    > I meant to say time(). I know there are no guarantees that's why I said
    > approximation.


    The value returned by time() on most(?) implementations is in seconds.
    Half of your statement seems to be about time(), the other half about
    clock().

    >
    > > Pushing back a character with ungetc() and then reading it with
    > > fgets() is not likely to cause any physical I/O to take place, so this
    > > method is likely to be as antisocially CPU-intensive as any other busy
    > > loop.
    > >

    >
    > Yes, you're right.
    >
    > --
    > Ioan - Ciprian Tandau
    > tandau _at_ freeshell _dot_ org (hope it's not too late)
    > (... and that it still works...)
    >
     
    Barry, Jan 11, 2007
    #13
  14. Kwebway Konongo

    Nelu Guest

    Barry <> wrote:
    >
    > "Nelu" <> wrote in message
    > news:...
    >> Keith Thompson <> wrote:
    >> > Nelu <> writes:

    <snip>
    >> >> If yes, this can be run for a number of seconds, in a loop with calls
    >> >> to mktime and difftime to get some kind of sub-second resolution (like
>> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    >> >> a sleep function using milliseconds later (unless the function returns
    >> >> EOF in which case the function may not work).
    >> >> Also, by using the I/O system it may be more CPU friendly although
    >> >> it will by no means replace a system-specific sleep function.
    >> >
    >> > I think your idea is to execute this function a large number of times,
    >> > checking the value of time() before and after the loop, and using the
    >> > results for calibration, to estimate the time the function takes to
    >> > execute. I don't see how mktime() applies here. There's no guarantee
    >> > that the function will take the same time to execute each time you
    >> > call it.
    >> >

    >>
    >> I meant to say time(). I know there are no guarantees that's why I said
    >> approximation.

    >
    > The value returned by time() on most(?) implementations is in seconds.
    > Half of your statement seems to be about time(), the other half about
    > clock().


No, I was talking about counting how many times the function gets called
in a number of seconds. It will likely be called far more times than
there are seconds, so you get a sub-second approximation for one call
or a batch of calls. That gives you something similar to
CLOCKS_PER_SEC -- call it CALLS_PER_SEC -- although it's still a very
bad approximation.


    --
    Ioan - Ciprian Tandau
    tandau _at_ freeshell _dot_ org (hope it's not too late)
    (... and that it still works...)
     
    Nelu, Jan 11, 2007
    #14
  15. Kwebway Konongo

    Trev Guest

    David T. Ashley wrote:
    > "Kwebway Konongo" <> wrote in message
    > news:...
    > > Hi everyone,
    > > I'm developing an application in C; basically a linked list, with a series
    > > of "events" to be popped off, separated by a command to pause reading off
    > > the next event in the list. It has been sometime since I last did C, and
    > > that was the first K&R version! Is there a command to pause an app for
    > > a period of time, as all the commands I am familiar with specify pauses
    > > for integer numbers of seconds, and what I would like is fractions of a
    > > second, preferably milliseconds if possible

    >
    > OS-dependent question.
    >
    > Any form of spin-wait is bad programming practice (but I suppose it would
    > work).
    >


    Sounds like the OP is asking about something I'm trying to do...

    Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
    pseudo-realtime application on a dedicated server, with latencies until
    the next event is due to occur.
    Sadly, the boss doesn't realise that my realtime app development
    experience is precisely zero!
     
    Trev, Jan 11, 2007
    #15
  16. "Trev" <> wrote in message
    news:...
    >
    > David T. Ashley wrote:
    >> "Kwebway Konongo" <> wrote in message
    >> news:...
    >> > Hi everyone,
    >> > I'm developing an application in C; basically a linked list, with a
    >> > series
    >> > of "events" to be popped off, separated by a command to pause reading
    >> > off
    >> > the next event in the list. It has been sometime since I last did C,
    >> > and
    >> > that was the first K&R version! Is there a command to pause an app for
    >> > a period of time, as all the commands I am familiar with specify pauses
    >> > for integer numbers of seconds, and what I would like is fractions of a
    >> > second, preferably milliseconds if possible

    >>
    >> OS-dependent question.
    >>
    >> Any form of spin-wait is bad programming practice (but I suppose it would
    >> work).
    >>

    >
    > Sounds like the OP is asking about something I'm trying to do...
    >
    > Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
    > pseudo-realtime application on a dedicated server, with latencies until
    > the next event is due to occur.
    > Sadly, the boss doesn't realise that my realtime app development
    > experience is precisely zero!


    Spin-wait is bad on a server because it chews up (i.e. consumes towards no
    productive purpose) CPU bandwidth that would best be returned to the
    operating system.

    For example, on Linux, there might be a daemon that needs to wait for 10
    minutes. "sleep(600);" allows the OS to do other things for 10 minutes. A
    spin-wait will consume a large fraction of CPU bandwidth--potentially as
    much as 100%--to do nothing but repeatedly check the time. On a shared
    system -- and any server is shared, at least between your process and the
    operating system -- it is horribly inefficient.

    Now, on an embedded system, spin-wait may be valid. In fact, the most
    common software architecture for small systems is just to spin-wait until
    the next time tick and then do the things you need to do. That is OK
    because you're the only process on the system, and there is nobody else who
    can make better use of the CPU.

    If you have any further questions or observations, please write me directly
    at and answer my SPAM filtering system's automatic reply. I
    might know one or two things about small embedded systems.
     
    David T. Ashley, Jan 11, 2007
    #16
  17. Kwebway Konongo

    Barry Guest

    "Nelu" <> wrote in message
    news:...
    > Barry <> wrote:
    > >
    > > "Nelu" <> wrote in message
    > > news:...
    > >> Keith Thompson <> wrote:
    > >> > Nelu <> writes:

    > <snip>
    > >> >> If yes, this can be run for a number of seconds, in a loop with calls
    > >> >> to mktime and difftime to get some kind of sub-second resolution (like
    > >> >> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    > >> >> a sleep function using milliseconds later (unless the function returns
    > >> >> EOF in which case the function may not work).
    > >> >> Also, by using the I/O system it may be more CPU friendly although
    > >> >> it will by no means replace a system-specific sleep function.
    > >> >
    > >> > I think your idea is to execute this function a large number of times,
    > >> > checking the value of time() before and after the loop, and using the
    > >> > results for calibration, to estimate the time the function takes to
    > >> > execute. I don't see how mktime() applies here. There's no guarantee
    > >> > that the function will take the same time to execute each time you
    > >> > call it.
    > >> >
    > >>
    > >> I meant to say time(). I know there are no guarantees that's why I said
    > >> approximation.

    > >
    > > The value returned by time() on most(?) implementations is in seconds.
    > > Half of your statement seems to be about time(), the other half about
    > > clock().

    >
    > No, I was talking about counting how many times the function gets called
    > in a number of seconds. It will likely be called a lot more times than
    > the number of seconds so you get under a second approximations for a
    > call or a number of calls and it will give you something similar to
    > CLOCKS_PER_SEC, you can name it CALLS_PER_SEC, although it's just a very
    > bad approximation.
    >


    I only read your post and gave you too much
    credit. Your explanation is worse than what I thought
    you were doing.
     
    Barry, Jan 11, 2007
    #17
  18. Kwebway Konongo

    Nelu Guest

    Barry <> wrote:
    >
    > "Nelu" <> wrote in message
    > news:...
    >> Barry <> wrote:
    >> >
    >> > "Nelu" <> wrote in message
    >> > news:...
    >> >> Keith Thompson <> wrote:
    >> >> > Nelu <> writes:

    >> <snip>
    >> >> >> If yes, this can be run for a number of seconds, in a loop with calls
    >> >> >> to mktime and difftime to get some kind of sub-second resolution (like
    >> >> >> CLOCKS_PER_SEC). That approximation can be used to implement a kind of
    >> >> >> a sleep function using milliseconds later (unless the function returns
    >> >> >> EOF in which case the function may not work).
    >> >> >> Also, by using the I/O system it may be more CPU friendly although
    >> >> >> it will by no means replace a system-specific sleep function.
    >> >> >
    >> >> > I think your idea is to execute this function a large number of times,
    >> >> > checking the value of time() before and after the loop, and using the
    >> >> > results for calibration, to estimate the time the function takes to
    >> >> > execute. I don't see how mktime() applies here. There's no guarantee
    >> >> > that the function will take the same time to execute each time you
    >> >> > call it.
    >> >> >
    >> >>
    >> >> I meant to say time(). I know there are no guarantees that's why I said
    >> >> approximation.
    >> >
    >> > The value returned by time() on most(?) implementations is in seconds.
    >> > Half of your statement seems to be about time(), the other half about
    >> > clock().

    >>
    >> No, I was talking about counting how many times the function gets called
    >> in a number of seconds. It will likely be called a lot more times than
    >> the number of seconds so you get under a second approximations for a
    >> call or a number of calls and it will give you something similar to
    >> CLOCKS_PER_SEC, you can name it CALLS_PER_SEC, although it's just a very
    >> bad approximation.
    >>

    >
    > I only read your post and gave you too much
    > credit. Your explanation is worse than what I thought
    > you were doing.


    What did you think I was doing?

    --
    Ioan - Ciprian Tandau
    tandau _at_ freeshell _dot_ org (hope it's not too late)
    (... and that it still works...)
     
    Nelu, Jan 11, 2007
    #18
  19. Kwebway Konongo

    Barry Guest

    "David T. Ashley" <> wrote in message
    news:...
    > "Trev" <> wrote in message
    > news:...
    > >
    > > David T. Ashley wrote:
    > >> "Kwebway Konongo" <> wrote in message
    > >> news:...
    > >> > Hi everyone,
    > >> > I'm developing an application in C; basically a linked list, with a
    > >> > series
    > >> > of "events" to be popped off, separated by a command to pause reading
    > >> > off
    > >> > the next event in the list. It has been sometime since I last did C,
    > >> > and
    > >> > that was the first K&R version! Is there a command to pause an app for
    > >> > a period of time, as all the commands I am familiar with specify pauses
    > >> > for integer numbers of seconds, and what I would like is fractions of a
    > >> > second, preferably milliseconds if possible
    > >>
    > >> OS-dependent question.
    > >>
    > >> Any form of spin-wait is bad programming practice (but I suppose it would
    > >> work).
    > >>

    > >
    > > Sounds like the OP is asking about something I'm trying to do...
    > >
    > > Incidentally, why is "spin-wait" bad? In my case, I'm trying to do a
    > > pseudo-realtime application on a dedicated server, with latencies until
    > > the next event is due to occur.
    > > Sadly, the boss doesn't realise that my realtime app development
    > > experience is precisely zero!

    >
    > Spin-wait is bad on a server because it chews up (i.e. consumes towards no
    > productive purpose) CPU bandwidth that would best be returned to the
    > operating system.
    >
    > For example, on Linux, there might be a daemon that needs to wait for 10
    > minutes. "sleep(600);" allows the OS to do other things for 10 minutes. A
    > spin-wait will consume a large fraction of CPU bandwidth--potentially as
    > much as 100%--to do nothing but repeatedly check the time. On a shared
    > system -- and any server is shared, at least between your process and the
    > operating system -- it is horribly inefficient.
    >
    > Now, on an embedded system, spin-wait may be valid. In fact, the most
    > common software architecture for small systems is just to spin-wait until
    > the next time tick and then do the things you need to do. That is OK
    > because you're the only process on the system, and there is nobody else who
    > can make better use of the CPU.
    >


    Of course we have gotten so far off topic for clc it doesn't matter.
    But, every embedded system I have worked on (and you have used all
    of them, directly or indirectly :)) also responds to hardware interrupts.
     
    Barry, Jan 11, 2007
    #19
  20. Kwebway Konongo

    Default User Guest

    Barry wrote:

    >
    > "user923005" <> wrote in message
    > news:...
    > > Kwebway Konongo wrote:
    > > > Hi everyone,
    > > > I'm developing an application in C; basically a linked list, with a
    > > > series of "events" to be popped off, separated by a command to pause
    > > > reading off the next event in the list. It has been sometime since I
    > > > last did C, and that was the first K&R version! Is there a command to
    > > > pause an app for a period of time, as all the commands I am
    > > > familiar with specify pauses for integer numbers of seconds, and
    > > > what I would like is fractions of a second, preferably
    > > > milliseconds if possible

    > >
    > > > From the C-FAQ:


    [snip]

    > Most of your response has nothing to do with C. You should have
    > just referred the OP to an appropriate news group.


    He gave the answer that's in the comp.lang.c FAQ on the matter. Try to
    pay attention.




    Brian
     
    Default User, Jan 11, 2007
    #20