Is it possible to switch between threads manually?

Discussion in 'C++' started by hayyal, Oct 1, 2007.

  1. hayyal

    hayyal Guest

    Hi folks,

    I have a program which utilizes 5 threads to complete its work.
    When the application is running, execution switches between the
    threads seemingly at random, as expected.

    My question is: while the process is running, can I interrupt it
    manually and switch between threads?
    For example, the application is running; when I interrupt it,
    execution happens to be on thread3.
    Now, from thread3, can I switch to thread5 and continue with execution?
    If yes, is there any difference between what the operating system does
    and what I did?
    If yes, is there a chance of getting a core dump?
    If yes, what happens to the stacks of the other threads?
    If yes, what happens to the thread which was interrupted manually (in
    this case, thread3)?
    Will thread3 start from where it was stopped, or from where the OS
    next schedules it to start?

    Appreciate your views and comments on this topic.

    Thanks & regards
    Nagaraj Hayyal
     
    hayyal, Oct 1, 2007
    #1

  2. hayyal

    Guest Guest

    First, the current C++ standard has no knowledge of threads, so your
    question is off topic; perhaps comp.programming.threads is a better
    suited group for your questions.
    The whole idea with threads is that more than one of them can execute
    concurrently, so unless you only have a single-core CPU without hyper-
    threading capabilities (or similar), your application's threads should
    be executing concurrently.
    It depends on your threading library, but I have never heard of one that
    allows that kind of manipulation, nor can I come up with a reason to do so.
    Each thread should have its own stack, so not much, I would imagine.
    It stops running?
    If you find a library that allows you to do such a thing, then it would
    probably continue from where it was stopped.
    My view is that it is a weird idea with few practical applications.
    If you want to allow such behaviour then write the application to allow
    it; I would imagine that for some applications, arbitrarily stopping a
    thread in the middle of execution could have disastrous consequences.
     
    Guest, Oct 1, 2007
    #2

  3. hayyal

    yanlinlin82 Guest

    I know Visual C++ has this function. When you interrupt the program,
    there is a menu item 'Thread' in 'Debug', which lists all the threads
    of the current program so you can switch between them. I think this
    function belongs to the debugger. Other platforms must have a similar one.
     
    yanlinlin82, Oct 1, 2007
    #3
  4. hayyal

    werasm Guest

    Sorry, OT, but your response got my attention.

    Not necessarily. Threads were used long before dual-core processors
    existed. In software that requires real-time response under certain
    circumstances (especially if one only has one processor), threads are
    used to prioritize the part that requires the real-time response. They
    are also used in cases where one waits on blocking IO calls whilst
    keeping the GUI active, for example.
    Yes, true, if they make use of round-robin scheduling.
    If you make use of pre-emptive scheduling (OS dependent) you may be
    able to do this by using the priorities of threads. I don't think
    Windows supports pre-emptive scheduling, for one. We use it under
    Linux, but it is only advised when something really needs to finish
    before anything else (even the kernel).
    Yes, and for some it is absolutely required (if I understand you
    correctly). E.g. a little controller responsible for controlling
    some mechanism that acts on incoming missiles monitors status
    (threadX), which takes X time. Suddenly an event happens indicating
    an incoming missile, and the controller (threadY) has to respond
    within Y time, but if the rest of threadX completes execution the Y
    deadline would not be met. Stop (or pause) threadX, continue with
    threadY until complete, and continue with threadX where you left off
    last time.

    Actually, in Windows this happens all the time - it is called
    round-robin scheduling, where threads each get a share of the
    processor, the ones not having a share at that particular instant in
    time being saved until they do get their share. Code executed in a
    thread doesn't have control over when it executes relative to other
    threads, except if one uses synchronization primitives at specific
    points (mutexes, semaphores, condition variables).

    Regards,

    Werner
     
    werasm, Oct 1, 2007
    #4
  6. hayyal

    James Kanze Guest

    This depends very much on the system. I've never heard of a
    system which has a request: switch to thread x, however.
    I'm not too sure what you're trying to do, but Posix threads
    (and all real-time systems, at least) have a provision for
    thread priority; if an external event unblocks a thread with
    higher priority than the one running, the thread with higher
    priority is guaranteed to run. (Note that on a modern machine,
    this doesn't mean that the originally running thread is paused,
    however. Most modern machines are capable of running several
    threads at the same time.) This feature is optional, however,
    and may not be implemented on the particular Posix
    implementation which you're using.
    There's always a chance of getting a core dump. Even without
    threads.
    If you get a core dump, the process is terminated.
    You'll have to explain what you're actually trying to do. In
    comp.programming.threads, since that's where the threading
    experts hang out. Note, however, that most threading issues are
    very system dependent, and scheduling policies differ even
    between different Posix implementations.
     
    James Kanze, Oct 2, 2007
    #6
  7. hayyal

    James Kanze Guest

    He should have said "pseudo-concurrently":). Conceptually,
    each thread represents a separate thread of execution, which
    runs in parallel with the other threads. (Obviously, real
    parallelism requires as many CPU's as there are active threads.)
    Except that they weren't always called threads:). Back before
    MMU's were standard, when memory wasn't protected, there really
    wasn't any difference between a thread and a process. My first
    experience with "multi-threaded" code, back in the 1970's,
    called them processes, but since everyone could access all of
    the memory, they behaved pretty much like threads today. (And
    an incorrectly managed pointer could corrupt the OS data
    structures---gave a whole new dimension to undefined behavior,
    which really could wipe out a hard disk.)

    And of course, even back then, we had multiprocessor systems.
    (But they were easier to understand and program, since there was
    no cache, no pipeline, and no hardware reordering.)
    I'm not sure about your vocabulary here. Scheduling policy is
    somewhat orthogonal to preemption; Windows definitely has
    preemptive threads, even if it doesn't support a priority based
    threading policy. Preemption simply means that context switches
    may occur at any time, regardless of what the thread is doing.
    (Without preemption, you don't need locks, because your thread
    will retain control of the CPU until you tell the system it can
    change threads. It's actually a much easier model to deal with,
    but of course, it makes no sense in a multi-CPU environment.)

    Note too that there are many variants of scheduling policies.
    Early Unix, for example, used a decaying priority for process
    scheduling: every so often, the "priority" of all of the
    processes was upped, by a factor which depended on their nice value,
    and as a process used CPU (and possibly other resources), its
    priority "decayed".

    I think that Posix has provisions for relative priority, within
    a process, as well. (But again, all scheduling policy options
    are optional; a Posix system doesn't have to implement them.)
     
    James Kanze, Oct 2, 2007
    #7
  8. hayyal

    werasm Guest

    Yes, I'm aware of this. I did maintenance on a Multibus 2 platform
    using Intel's iRMX in '97. After that we moved to VxWorks. In both
    cases they were called tasks.
    Yes, that cost an inexperienced programmer many late nights, the
    biggest culprit being sprintf most of the time.
    I got my vocab wrong. I meant to say "Preemptive Priority Scheduling".
    Admittedly we always call it just "Preemptive Scheduling", although
    I see your point. The threads certainly do get preempted, whether
    one is using "Round-Robin" or "Preemptive Priority". In the one case
    time sharing applies, whereas in the other the thread with the highest
    priority gets the processor (Still often used in embedded processors
    today, in fact we are using it for ARM using Linux OS).
    Yes, that was what I meant.
    ... After which the system would preempt you and give control
    to the other thread? ...;-)
    I've never used or heard of this model (non-preemptive) before. I
    could perhaps think of simulating it, but that would require locks.
    I've implemented something like an Ada rendezvous (if I understand
    it correctly) that waits for another thread to complete an operation
    specified by it. This seems to simulate this model, as the thread
    effectively suspends when it enters the rendezvous, but it certainly
    requires locks.

    Do you have examples (of non-preemptive sched) for interest sake?

    Regards,

    Werner
     
    werasm, Oct 2, 2007
    #8
  9. hayyal

    James Kanze Guest

    I've heard that word as well; back in the 1970's and 1980's, I
    tended to make the distinction: process ("processus" in French)
    when there was no memory protection, task ("tâche" in French)
    when there was. The real-time embedded processors I mostly
    worked on had processes; IBM mainframes had tasks.

    This distinction went out the window when I started working on
    Unix (late 1980's), which had "processes", but used memory
    protection.
    Or strcpy( malloc( strlen( s ) ), s ). On big-endian machines,
    that got the allocator writing to low memory very quickly.

    [...]
    If you request/authorize the switch, is preemption the correct
    word? (My "feeling" for the word preempt is that it implies
    something happening without my particularly desiring it.)
    I'm not aware of anyone implementing it under processes. It
    obviously requires co-operating processes/threads, and so isn't
    appropriate for processes on a general purpose, multi-user
    system. Threads within a process are supposed to collaborate,
    however, and I think that in most cases, it would be preferable
    to the preemptive model we find everywhere.
    I've considered that once or twice as well. A single mutex
    lock, always held except when you wanted to allow a context
    switch (instead of only holding it when you didn't want to allow
    one). It makes a lot of things considerably easier, but it does
    require wrapping all system calls that might block, to ensure
    that they count as legitimate context switch locations. (My
    main consideration was to allow lock free logging---logging, of
    course, uses common resources which need protection.)
    Nothing recent, but we used it a lot on the 8080. (The
    non-preemptive kernel I used on the 8080 fit in less than 80
    bytes. Very useful when you only had 2K ROM for the entire
    program.) I think that early Windows (pre-Windows 95) also used
    non-preemptive scheduling for its processes, but I'm not really
    sure; I never actually programmed on the system---I just heard
    rumors that non-cooperating processes could hang the system.
     
    James Kanze, Oct 3, 2007
    #9
  10. hayyal

    Jerry Coffin Guest

    [ ... ]
    That's true of kernel threading as a rule. Back in the bad old days of
    user threading, some packages provided this, though it was far more
    common for a thread to just yield, and the scheduler picked what thread
    to run next.

    Depending on the target system, it sounds like the OP is asking for
    something closer to what Windows calls "fibers". If he wants something
    portable, I believe he'll have to do it himself though. In that case, it
    seems like the old cthreads package would be a reasonable place to
    start. It's been a long time since I played with that, but just glancing
    over the source code, cthread_thread_schedule(newthread) sounds like
    it's at least pretty close to what the OP is apparently asking for.
     
    Jerry Coffin, Oct 3, 2007
    #10
  13. hayyal

    Jerry Coffin Guest

    [ ... ]
    It's pretty much a user-land thread. You start with a normal kernel
    thread. You create N-1 other fibers, as well as convert the original
    thread to a fiber. Then you have a pool of N fibers that you can switch
    between as you see fit. The kernel scheduler continues to schedule the
    group of them as (essentially) a single thread, and you can pick which
    fiber is going to execute at any given time.

    IIRC, MS introduced fibers when they were still doing a JVM. I believe
    they were introduced primarily (exclusively?) to support Java threads
    using the existing kernel thread management instead of writing a whole
    new thread manager into the JVM.
     
    Jerry Coffin, Oct 4, 2007
    #13
  14. * James Kanze:
    You mean co-routines.

    Yes, they are co-routines.

    It's been a long time since I last implemented it (then in terms of
    longjmp + a little assembly, it was a fad around 1990). But the
    interesting thing is that co-routines are so useful that they have been
    implemented in terms of e.g. Java threads, where the efficiency argument
    is void. And Windows API fibers are seemingly implemented in terms of
    Windows threads: you start with threads and convert them to fibers.

    Cheers, & hth.,

    - Alf
     
    Alf P. Steinbach, Oct 4, 2007
    #14
  15. Certainly this is correct for Windows 3.1; one's code had to
    call the "Yield" API function to give other processes a chance!
    Despite which, I often feel that those early Windowses were
    somehow more reliable ... :)
     
    tragomaskhalos, Oct 4, 2007
    #15
  16. hayyal

    Jerry Coffin Guest

    [ ... ]
    One did not normally call the yield function. Most code had a loop to
    repeatedly 1) call GetMessage, and 2) process the message that was
    retrieved. GetMessage was where the yielding happened.

    Of course, when/if you had to carry out processing that wasn't in
    (direct) response to a message, things got a bit uglier...
     
    Jerry Coffin, Oct 5, 2007
    #16
  17. * Jerry Coffin:
    I think this has wandered off-topic.

    But there are some relevant C++ perspectives.

    In particular, the flurry of research on "active objects", mostly based
    on coroutines (although some were thread-based), seemed to die out quite
    silently in the latter half of the 1990's. Even though the term "active
    object" is now being used for just about anything, like "well, it's sort
    of active, that's cool". I suspect practically useful active objects in
    C++ need language support, like Ada's rendezvous.

    Cheers,

    - Alf
     
    Alf P. Steinbach, Oct 5, 2007
    #17
  18. hayyal

    James Kanze Guest

    Yes, that's the word.
    I think it was around 1985, or a little before, that I last used
    it. But I think it was really at the base of the USL threading
    package---as you say, with longjmp, etc. (In my earlier case,
    it was a lot simpler: I just had two stacks, one for the main
    process, and the other for the co-process. And a special
    function which swapped the stack pointer. This was all in
    assembler, so I just passed parameters and return values in
    registers.)
    Well, co-routines are very much like non-pre-emptive threads.
    Which makes thread safety an order of magnitude or two simpler.
    That sounds like what Unix (or at least Solaris) called LWP/threads.
    For the most part, my impression is that Unix (or at least Solaris)
    is moving away from that---kernel threads (the earlier LWP) have
    gotten to the point that they're fast enough that you don't need
    anything even lower level.
     
    James Kanze, Oct 5, 2007
    #18
