Re: C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

Discussion in 'C++' started by Ian Collins, Sep 28, 2008.

  1. Ian Collins

    Ian Collins Guest

    Chris M. Thomasson wrote:
    > "gremlin" <> wrote in message
    > news:...
    >> http://www.cilk.com/multicore-blog/...up-answers-the-Multicore-Proust-Questionnaire
    >>

    >
    > I get a not found error:
    >
    > The requested URL
    > /multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
    > was not found on this server.
    >
    > Where is the correct location?


    The link is the correct location; I just tried it.

    --
    Ian Collins.
     
    Ian Collins, Sep 28, 2008
    #1

  2. Chris M. Thomasson, Sep 28, 2008
    #2

  3. "Ian Collins" <> wrote in message
    news:...
    > Chris M. Thomasson wrote:
    >> "gremlin" <> wrote in message
    >> news:...
    >>> http://www.cilk.com/multicore-blog/...up-answers-the-Multicore-Proust-Questionnaire
    >>>

    >>
    >> I get a not found error:
    >>
    >> The requested URL
    >> /multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
    >> was not found on this server.
    >>
    >> Where is the correct location?

    >
    > The link is the correct location; I just tried it.


    Hey, it works now! Weird; perhaps temporary server glitch. Who knows.

    ;^/
     
    Chris M. Thomasson, Sep 28, 2008
    #3
  4. Ian Collins

    Ian Collins Guest

    Chris M. Thomasson wrote:
    > "Ian Collins" <> wrote in message
    > news:...
    >> Chris M. Thomasson wrote:
    >>> "gremlin" <> wrote in message
    >>> news:...
    >>>> http://www.cilk.com/multicore-blog/...up-answers-the-Multicore-Proust-Questionnaire
    >>>>
    >>>>
    >>>
    >>> I get a not found error:
    >>>
    >>> The requested URL
    >>> /multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
    >>>
    >>> was not found on this server.
    >>>
    >>> Where is the correct location?

    >>
    >> The link is the correct location; I just tried it.

    >
    > Hey, it works now! Weird; perhaps temporary server glitch. Who knows.
    >

    Well, it is running on Windows :)

    --
    Ian Collins.
     
    Ian Collins, Sep 28, 2008
    #4
  5. "Chris M. Thomasson" <> wrote in message
    news:GxDDk.13217$...
    > "Ian Collins" <> wrote in message
    > news:...
    >> Chris M. Thomasson wrote:
    >>> "gremlin" <> wrote in message
    >>> news:...
    >>>> http://www.cilk.com/multicore-blog/...up-answers-the-Multicore-Proust-Questionnaire
    >>>>
    >>>
    >>> I get a not found error:
    >>>
    >>> The requested URL
    >>> /multicore-blog/bid/6703/C-Inventor-Bjarne-Stroustrup-answers-the-Multicore-Proust-Questionnaire
    >>> was not found on this server.
    >>>
    >>> Where is the correct location?

    >>
    >> The link is the correct location; I just tried it.

    >
    > Hey, it works now! Weird; perhaps temporary server glitch. Who knows.
    >
    > ;^/





    Q: The most important problem to solve for multicore software:

    A: How to simplify the expression of potential parallelism.


    Hmm... What about scalability? That's a very important problem to solve.
    Perhaps the most important. STM simplifies the expression of parallelism, but
    it's not really scalable at all.



    I guess I would have answered:


    CT-A: How to simplify the expression of potential parallelism __without
    sacrificing scalability__.






    Q: My worst fear about how multicore technology might evolve:

    A: Threads on steroids.


    Well, threads on steroids and proper distributed algorithms can address the
    scalability issue. Nothing wrong with threading on steroids. Don't be
    afraid!!!!! I am a threading freak, so I am oh so VERY BIASED!!! ;^|





    Oh well, that's my 2 cents.
     
    Chris M. Thomasson, Sep 28, 2008
    #5
  6. James Kanze

    James Kanze Guest

    Re: C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

    On Sep 28, 6:31 am, "Chris M. Thomasson" <> wrote:
    > "Chris M. Thomasson" <> wrote in
    > messagenews:GxDDk.13217$...
    > > "Ian Collins" <> wrote in message
    > >news:...
    > >> Chris M. Thomasson wrote:
    > >>> "gremlin" <> wrote in message
    > >>>news:...
    > >>>>http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroust....


    [...]
    > Q: The most important problem to solve for multicore software:


    > A: How to simplify the expression of potential parallelism.


    > Hmm... What about scalability? That's a very important
    > problem to solve. Perhaps the most important. STM simplifies
    > the expression of parallelism, but it's not really scalable at
    > all.


    And what do you think simplifying the expression of potential
    parallelism achieves, if not scalability?

    [...]
    > Q: My worst fear about how multicore technology might evolve:


    > A: Threads on steroids.


    > Well, threads on steroids and proper distributed algorithms
    > can address the scalability issue. Nothing wrong with
    > threading on steroids. Don't be afraid!!!!! I am a threading
    > freak, so I am oh so VERY BIASED!!! ;^|


    I'm not too sure what Stroustrup was getting at here, but having
    to write explicitly multithreaded code (with e.g. manual locking
    and synchronization) is not a good way to achieve scalability.
    Futures are probably significantly easier to use, and in modern
    Fortran, if I'm not mistaken, there are special constructs to
    tell the compiler that certain operations can be parallelized.
    And back some years ago, there was a fair amount of research
    concerning automatic parallelization by the compiler; I don't
    know where it is now.
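
    For what it's worth, a future-based call might look something like this.
    Purely illustrative; parallel_sum is my own example, and the
    std::async/std::future names follow the C++0x threading proposals, which
    may not be exactly what finally ships:

    #include <future>
    #include <numeric>
    #include <vector>

    // Sum the two halves of a vector.  The programmer only states the
    // *potential* parallelism; the runtime decides whether the lower half
    // actually runs on another core or lazily in this thread.
    long parallel_sum(const std::vector<long>& v)
    {
        std::vector<long>::const_iterator mid = v.begin() + v.size() / 2;
        std::future<long> lower = std::async([&v, mid]
            { return std::accumulate(v.begin(), mid, 0L); });
        long upper = std::accumulate(mid, v.end(), 0L);
        return lower.get() + upper;   // blocks until the other half is done
    }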

    Of course, a lot depends on the application. In my server,
    there's really nothing that could be parallelized in a given
    transaction, but we can run many transactions in parallel. For
    that particular model of parallelization, classical explicit
    threading works fine.

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
     
    James Kanze, Sep 28, 2008
    #6
  7. Ian Collins

    Ian Collins Guest

    Re: C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

    James Kanze wrote:
    > On Sep 28, 6:31 am, "Chris M. Thomasson" <> wrote:
    >
    >> A: Threads on steroids.

    >
    >> Well, threads on steroids and proper distributed algorithms
    >> can address the scalability issue. Nothing wrong with
    >> threading on steroids. Don't be afraid!!!!! I am a threading
    >> freak, so I am oh so VERY BIASED!!! ;^|

    >
    > I'm not too sure what Stroustrup was getting at here, but having
    > to write explicitly multithreaded code (with e.g. manual locking
    > and synchronization) is not a good way to achieve scalability.
    > Futures are probably significantly easier to use, and in modern
    > Fortran, if I'm not mistaken, there are special constructs to
    > tell the compiler that certain operations can be parallelized.
    > And back some years ago, there was a fair amount of research
    > concerning automatic parallelization by the compiler; I don't
    > know where it is now.
    >

    We, along with Fortran and C programmers, can use OpenMP, which from my
    limited experience works very well.
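
    For the simple cases it really is as easy as this (illustrative only; the
    scale() function is made up, and you need to build with the compiler's
    OpenMP switch):

    #include <cstddef>

    // Scale an array in place.  The pragma is the only change to the
    // serial code; the runtime splits the iterations across the cores.
    void scale(double* a, std::size_t n, double k)
    {
    #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(n); ++i)
            a[i] *= k;
    }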

    --
    Ian Collins.
     
    Ian Collins, Sep 28, 2008
    #7
  8. "James Kanze" <> wrote in message
    news:...
    On Sep 28, 6:31 am, "Chris M. Thomasson" <> wrote:
    > > "Chris M. Thomasson" <> wrote in
    > > messagenews:GxDDk.13217$...
    > > > "Ian Collins" <> wrote in message
    > > >news:...
    > > >> Chris M. Thomasson wrote:
    > > >>> "gremlin" <> wrote in message
    > > >>>news:...
    > > >>>>http://www.cilk.com/multicore-blog/bid/6703/C-Inventor-Bjarne-Stroust...


    [...]
    > > Q: The most important problem to solve for multicore software:


    > > A: How to simplify the expression of potential parallelism.


    > > Hmm... What about scalability? That's a very important
    > > problem to solve. Perhaps the most important. STM simplifies
    > > the expression of parallelism, but it's not really scalable at
    > > all.


    > And what do you think simplifying the expression of potential
    > parallelism achieves, if not scalability?


    Take one attempt at simplifying the expression of potential parallelism:
    STM. Unfortunately, it's not really able to scale. The simplification can
    introduce overhead that interferes with scalability. IMVHO, message-passing
    has potential. At least I know how to implement it in a way that basically
    scales to any number of processors.
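
    By message-passing I mean roughly this kind of thing: a bare-bones blocking
    queue that threads communicate through instead of sharing state directly.
    This is a C++0x-style sketch; msg_queue and its members are my own names,
    not from any particular library:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // Minimal blocking message queue: producers push, consumers pop.
    // Locking is per-queue, so contention stays local rather than global.
    template<class T>
    class msg_queue {
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
    public:
        void push(T msg) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
            cv_.notify_one();
        }
        T pop() {
            std::unique_lock<std::mutex> lock(m_);
            while (q_.empty()) cv_.wait(lock);   // wait for a message
            T msg = q_.front();
            q_.pop();
            return msg;
        }
    };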




    [...]
    > > Q: My worst fear about how multicore technology might evolve:


    > > A: Threads on steroids.


    > > Well, threads on steroids and proper distributed algorithms
    > > can address the scalability issue. Nothing wrong with
    > > threading on steroids. Don't be afraid!!!!! I am a threading
    > > freak, so I am oh so VERY BIASED!!! ;^|


    > I'm not too sure what Stroustrup was getting at here, but having
    > to write explicitly multithreaded code (with e.g. manual locking
    > and synchronization) is not a good way to achieve scalability.


    That's relative to the programmer. I have several abstractions packaged into
    a library which allows one to create highly scalable programs using
    threads. However, if the programmer is not skilled in the art of
    multi-threading, well, then it's not going to do any good!

    ;^(




    > Futures are probably significantly easier to use,


    They have some "caveats". I have implemented futures, and know that a truly
    scalable impl needs to use distributed queuing which does not really follow
    true global FIFO. There can be ordering anomalies that the programmer does
    not know about, and they will bite them in the a$% if some of their algorithms
    depend on certain orders of actions.




    > and in modern
    > Fortran, if I'm not mistaken, there are special constructs to
    > tell the compiler that certain operations can be parallelized.
    > And back some years ago, there was a fair amount of research
    > concerning automatic parallelization by the compiler; I don't
    > know where it is now.


    No silver bullets in any way, shape, or form. Automatic parallelization
    sometimes works for a narrow type of algorithm, usually breaking up arrays
    across multiple threads. But the programmer is not out of the woods,
    because they will still need to manually implement enhancements that are KEY
    to scalability (e.g., cache-blocking). No silver bullets indeed.
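
    Here is the kind of thing I mean by cache-blocking: a hand-tiled matrix
    transpose. The function and the block size are made up purely for
    illustration:

    #include <algorithm>
    #include <cstddef>

    const std::size_t BLOCK = 64;   // tune for the cache / line size

    // Tiling keeps each BLOCK x BLOCK tile cache-resident while it is
    // worked on; an auto-parallelizer will not do this for you.
    void transpose_blocked(const double* a, double* b, std::size_t n)
    {
        for (std::size_t ii = 0; ii < n; ii += BLOCK)
            for (std::size_t jj = 0; jj < n; jj += BLOCK)
                for (std::size_t i = ii; i < std::min(ii + BLOCK, n); ++i)
                    for (std::size_t j = jj; j < std::min(jj + BLOCK, n); ++j)
                        b[j * n + i] = a[i * n + j];
    }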




    > Of course, a lot depends on the application. In my server,
    > there's really nothing that could be parallelized in a given
    > transaction, but we can run many transactions in parallel. For
    > that particular model of parallelization, classical explicit
    > threading works fine.


    Absolutely.
     
    Chris M. Thomasson, Sep 28, 2008
    #8
  9. "Daniel T." <> wrote in message
    news:-sjc.supernews.net...
    > James Kanze <> wrote:
    >
    >> > Q: My worst fear about how multicore technology might evolve:

    >>
    >> > A: Threads on steroids.

    >>
    >> > Well, threads on steroids and proper distributed algorithms
    >> > can address the scalability issue. Nothing wrong with
    >> > threading on steroids. Don't be afraid!!!!! I am a threading
    >> > freak, so I am oh so VERY BIASED!!! ;^|

    >>
    >> I'm not too sure what Stroustrup was getting at here, but having
    >> to write explicitly multithreaded code (with e.g. manual locking
    >> and synchronization) is not a good way to achieve scalability.

    >
    > Agreed. I have played around with Occam's expression of Communicating
    > Sequential Processes (CSP). I would like to see CSP explored further in
    > C++.


    IMO, CSP is WAY too high-level to be integrated into the language. However,
    you can definitely use C++0x to fully implement Occam and/or CSP. If you
    want to use CSP out of the box, well, C++ is NOT for you; period. Keep in
    mind, C++ is a low-level systems language.
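
    For instance, a crude occam-style rendezvous channel falls out of the C++0x
    threading facilities in a few lines. This is only my own sketch (the channel
    class is not from any existing library), and it assumes a single sender and
    a single receiver for brevity:

    #include <condition_variable>
    #include <mutex>

    // Synchronous channel: send() does not return until a receiver has
    // actually taken the value, which is the CSP/occam rendezvous.
    template<class T>
    class channel {
        std::mutex m_;
        std::condition_variable cv_;
        T slot_;
        bool full_;
        bool taken_;
    public:
        channel() : full_(false), taken_(false) {}
        void send(T v) {
            std::unique_lock<std::mutex> lock(m_);
            slot_ = v;
            full_ = true;
            taken_ = false;
            cv_.notify_all();
            while (!taken_) cv_.wait(lock);   // wait for the rendezvous
        }
        T receive() {
            std::unique_lock<std::mutex> lock(m_);
            while (!full_) cv_.wait(lock);    // wait for a value
            T v = slot_;
            full_ = false;
            taken_ = true;
            cv_.notify_all();
            return v;
        }
    };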




    > What I have found so far is
    > http://www.twistedsquare.com/cppcspv1/docs/index.html
     
    Chris M. Thomasson, Sep 28, 2008
    #9
  10. Re: C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

    On Sep 28, 7:00 pm, "Daniel T." <> wrote:

    > [...]
    > Agreed. I have played around with Occam's expression of Communicating
    > Sequential Processes (CSP). I would like to see CSP explored further in
    > C++.
    >
    > What I have found so far is http://www.twistedsquare.com/cppcspv1/docs/index.html


    Well, CSP is a very nice language concept and OCCAM is an interesting
    instance of it.

    However, CSP does not match very well with objects and shared
    resources. If you want to map processes in the CSP sense to objects,
    you will end up with what you would call a single-threaded object. The
    object has its private state space and any request comes in via events.
    The object can await several potential events, but whenever one or more
    events are applicable, it selects one non-deterministically and performs
    the corresponding action on its own private data space.

    It is worth mentioning that at the time CSP was published, there
    appeared another elegant language concept: Distributed Processes (DP).
    This is something that can be mapped very naturally to objects and
    gives you a high level of potential parallelism. In DP an object starts
    its own thread which operates on its own data space, and other objects
    can call its methods asynchronously. The initial thread and the called
    methods are then executed in an interleaved manner. If the initial
    thread finishes, the object continues to serve the potentially
    simultaneous method calls, i.e. it becomes a (shared) passive object.
    http://brinch-hansen.net/papers/1978a.pdf
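
    In C++ terms (and this is only a rough illustration of the idea using the
    planned C++0x threads; the counter class is my own example, not Brinch
    Hansen's notation), a DP-style object would be something like:

    #include <mutex>
    #include <thread>

    // A DP-style object, very loosely: a private data space, an initial
    // statement running in the object's own thread, and externally
    // callable operations that interleave with it under one lock.
    class counter {
        int value_;              // private data space
        std::mutex m_;           // serialises the interleaving
        std::thread initial_;    // the object's own thread
    public:
        counter() : value_(0), initial_(&counter::run, this) {}
        ~counter() { initial_.join(); }

        void increment() {       // called asynchronously by other objects
            std::lock_guard<std::mutex> lock(m_);
            ++value_;
        }
    private:
        void run() {             // the initial statement
            std::lock_guard<std::mutex> lock(m_);
            value_ += 100;       // pretend this is the object's own work
            // once this returns, the object becomes a passive, shared object
        }
    };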

    Well, both language proposals are much higher level with respect to
    parallelism than what is planned for the brave new C++0x.

    Best Regards,
    Szabolcs
     
    Szabolcs Ferenczi, Sep 28, 2008
    #10
  11. "Szabolcs Ferenczi" <> wrote in message
    news:...
    On Sep 28, 7:00 pm, "Daniel T." <> wrote:

    > > [...]
    > > Agreed. I have played around with Occam's expression of Communicating
    > > Sequential Processes (CSP). I would like to see CSP explored further in
    > > C++.
    > >
    > > What I have found so far
    > > is http://www.twistedsquare.com/cppcspv1/docs/index.html


    > Well, CSP is a very nice language concept and OCCAM is an interesting
    > instance of it.


    [...]

    > It is worth mentioning that at the time CSP was published, there
    > appeared another elegant language concept: Distributed Processes (DP).
    > This is something that can be mapped very naturally to objects and
    > gives you high level of potential parallelism. In DP an object starts
    > its own thread which operates on its own data space and other objects
    > can call its methods asynchronously.


    Each object starts its own thread? lol, NO WAY! I can definitely implement a
    high-performance version of DP in which a plurality of threads multiplexes
    multiple objects. N number of threads for O number of objects. N = 4; O =
    10000. Sure. No problem. DP does not have to work like you explicitly
    suggest. Sorry, but you are making a huge mistake.




    > The initial thread and the called
    > methods then are executed in an interleaved manner. If the initial
    > thread finishes, the object continues to serve the potentially
    > simultaneous method calls, i.e. it becomes a (shared) passive object.
    > http://brinch-hansen.net/papers/1978a.pdf


    > Well, both language proposals are much higher level with respect to
    > parallelism than what is planned into the new brave C++0x.


    DP is NOT as expensive as you claim it is. E.g. each object starts its OWN
    thread. No way!

    ;^|
     
    Chris M. Thomasson, Sep 29, 2008
    #11
  12. Re: C++ inventor Bjarne Stroustrup answers the Multicore Proust Questionnaire

    On Sep 29, 12:06 pm, "Chris M. Thomasson" <> wrote:
    > "Szabolcs Ferenczi" <> wrote in message
    >
    > news:...
    > On Sep 28, 7:00 pm, "Daniel T." <> wrote:
    >
    > > > [...]
    > > > Agreed. I have played around with Occam's expression of Communicating
    > > > Sequential Processes (CSP). I would like to see CSP explored further in
    > > > C++.

    >
    > > > What I have found so far
    > > > is http://www.twistedsquare.com/cppcspv1/docs/index.html

    > > Well, CSP is a very nice language concept and OCCAM is an interesting
    > > instance of it.

    >
    > [...]
    >
    > > It is worth mentioning that at the time CSP was published, there
    > > appeared another elegant language concept: Distributed Processes (DP).
    > > This is something that can be mapped very naturally to objects and
    > > gives you high level of potential parallelism. In DP an object starts
    > > its own thread which operates on its own data space and other objects
    > > can call its methods asynchronously.

    >
    > Each object starts its own thread? lol, NO WAY!


    Hmm... Yes, *logically* "Each object starts its own thread". That is
    the way it is defined by the author of the concept.

    > I can definitely implement a
    > high-performance version of DP in which a plurality of threads multiplexes
    > multiple objects.


    Obviously, you do not know what you are talking about. You go and read
    about the programming concept Distributed Processes (DP) before you
    start claiming how you would like to hack it.
    http://brinch-hansen.net/papers/1978a.pdf

    > N number of threads for O number of objects. N = 4; O =
    > 10000. Sure. No problem. DP does not have to work like you explicitly
    > suggest. Sorry, but you are making a huge mistake.


    Well, it is not me who suggests it that way but the author of the
    programming language concept. Again, you may perhaps try to read about
    it first.
    http://brinch-hansen.net/papers/1978a.pdf

    > > The initial thread and the called
    > > methods then are executed in an interleaved manner. If the initial
    > > thread finishes, the object continues to serve the potentially
    > > simultaneous method calls, i.e. it becomes a (shared) passive object.
    > >http://brinch-hansen.net/papers/1978a.pdf
    > > Well, both language proposals are much higher level with respect to
    > > parallelism than what is planned into the new brave C++0x.

    >
    > DP is NOT as expensive as you claim it is. E.g. each object starts it's OWN
    > thread. No way!


    I did not claim it was expensive. You have concluded it but it is just
    because of your ignorance. It is just because you talk about it
    without knowing anything about it.

    I hope I could help, though.

    Best Regards,
    Szabolcs
     
    Szabolcs Ferenczi, Oct 20, 2008
    #12
  13. "Szabolcs Ferenczi" <> wrote in message
    news:...
    On Sep 29, 12:06 pm, "Chris M. Thomasson" <> wrote:
    > > "Szabolcs Ferenczi" <> wrote in message
    > >
    > > news:...
    > > On Sep 28, 7:00 pm, "Daniel T." <> wrote:
    > >
    > > > > [...]
    > > > > Agreed. I have played around with Occam's expression of
    > > > > Communicating
    > > > > Sequential Processes (CSP). I would like to see CSP explored further
    > > > > in
    > > > > C++.

    > >
    > > > > What I have found so far
    > > > > is http://www.twistedsquare.com/cppcspv1/docs/index.html
    > > > Well, CSP is a very nice language concept and OCCAM is an interesting
    > > > instance of it.

    > >
    > > [...]
    > >
    > > > It is worth mentioning that at the time CSP was published, there
    > > > appeared another elegant language concept: Distributed Processes (DP).
    > > > This is something that can be mapped very naturally to objects and
    > > > gives you high level of potential parallelism. In DP an object starts
    > > > its own thread which operates on its own data space and other objects
    > > > can call its methods asynchronously.

    > >
    > > Each object starts its own thread? lol, NO WAY!


    > Hmm... Yes, *logically* "Each object starts its own thread". That is
    > the way it is defined by the author of the concept.


    Right. I corrected your major mistake. You stated that each object starts its
    own thread. Well, your idea of how to implement DP is not scalable. I
    corrected you; why don't you try learning from it? Wow.




    > > I can definitely implement a
    > > high-performance version of DP in which a plurality of threads
    > > multiplexes
    > > multiple objects.


    > Obviously, you do not know what you are talking about.


    Wrong. Try again.




    > You go and read
    > about the programming concept Distributed Processes (DP) before you
    > start claiming how you would like to hack it.
    > http://brinch-hansen.net/papers/1978a.pdf


    Listen, you would implement it with a thread per object because that's what
    you explicitly said. I know for a fact that I can do it in a way which
    is scalable. Multiple objects can be bound to a single thread and
    communication between them is multiplexed. DP can be single-threaded in that
    context. However, I would do it with a thread-pool.
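
    Something along these lines is what I mean: N worker threads drain one queue
    of calls, so O objects never need O threads. A rough C++0x-ish sketch;
    object_pool and its members are my own names:

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // N worker threads multiplex method calls addressed to any number of
    // objects; an "object" here is whatever state the posted closure touches.
    class object_pool {
        std::queue<std::function<void()> > q_;
        std::mutex m_;
        std::condition_variable cv_;
        std::vector<std::thread> workers_;
        bool done_;
    public:
        explicit object_pool(unsigned n) : done_(false) {
            for (unsigned i = 0; i < n; ++i)
                workers_.push_back(std::thread(&object_pool::work, this));
        }
        void post(std::function<void()> call) {   // send a "message"
            { std::lock_guard<std::mutex> lock(m_); q_.push(call); }
            cv_.notify_one();
        }
        ~object_pool() {
            { std::lock_guard<std::mutex> lock(m_); done_ = true; }
            cv_.notify_all();
            for (std::size_t i = 0; i < workers_.size(); ++i)
                workers_[i].join();
        }
    private:
        void work() {
            for (;;) {
                std::function<void()> call;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    while (!done_ && q_.empty()) cv_.wait(lock);
                    if (q_.empty()) return;        // done_ set, queue drained
                    call = q_.front();
                    q_.pop();
                }
                call();                            // run the call on this worker
            }
        }
    };

    Posting std::bind(&some_object::method, &some_object) from anywhere then
    gives you the DP-style asynchronous call with only N threads in play,
    however many objects there are.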





    > > N number of threads for O number of objects. N = 4; O =
    > > 10000. Sure. No problem. DP does not have to work like you explicitly
    > > suggest. Sorry, but you are making a huge mistake.


    > Well, it is not me who suggest it that way but the author of the
    > programming language concept.


    He is smarter than you are.


    > Again, you may perhaps try to read about
    > it first.

    http://brinch-hansen.net/papers/1978a.pdf

    IMVHO, the author SURELY knows that multiplexing can be used to implement
    DP; you do not. Sorry, but that's the way it is. You think that a thread
    per object is needed. Well, it's not; you're mistaken.




    > > > The initial thread and the called
    > > > methods then are executed in an interleaved manner. If the initial
    > > > thread finishes, the object continues to serve the potentially
    > > > simultaneous method calls, i.e. it becomes a (shared) passive object.
    > > >http://brinch-hansen.net/papers/1978a.pdf
    > > > Well, both language proposals are much higher level with respect to
    > > > parallelism than what is planned into the new brave C++0x.

    > >
    > > DP is NOT as expensive as you claim it is. E.g. each object starts it's
    > > OWN
    > > thread. No way!


    > I did not claim it was expensive.


    Yes you did. You said that an object creates a thread. Dare me to quote you?
    Well, what if there are 100,000 objects?




    > You have concluded it but it is just
    > because of your ignorance. It is just because you talk about it
    > without knowing anything about it.


    > > I hope I could help, though.


    You helped me confirm my initial thoughts. Sorry, but DP can be implemented
    via a thread-pool and multiplexing. No thread-per-object crap is needed. I
    quote you:




    "> Well, it is not me who suggest it that way but the author of the
    > programming language concept. "


    Sorry. But the author knows that DP can be implemented through
    message-passing, a thread-pool, and multiplexing such that it can be
    single-threaded, or run on a ten-thousand-processor system. You need to
    understand that fact. I suggest that you learn how to create scalable
    algorithms. DP is one of them. However, the way you describe it is
    detestable at best.

    :^|
     
    Chris M. Thomasson, Oct 21, 2008
    #13
  14. "Chris M. Thomasson" <> wrote in message
    news:cJbLk.6826$...
    > "Szabolcs Ferenczi" <> wrote in message
    > news:...
    > On Sep 29, 12:06 pm, "Chris M. Thomasson" <> wrote:
    >> > "Szabolcs Ferenczi" <> wrote in message
    >> >
    >> > news:...
    >> > On Sep 28, 7:00 pm, "Daniel T." <> wrote:
    >> >
    >> > > > [...]
    >> > > > Agreed. I have played around with Occam's expression of
    >> > > > Communicating
    >> > > > Sequential Processes (CSP). I would like to see CSP explored
    >> > > > further in
    >> > > > C++.
    >> >
    >> > > > What I have found so far
    >> > > > is http://www.twistedsquare.com/cppcspv1/docs/index.html
    >> > > Well, CSP is a very nice language concept and OCCAM is an interesting
    >> > > instance of it.
    >> >
    >> > [...]
    >> >
    >> > > It is worth mentioning that at the time CSP was published, there
    >> > > appeared another elegant language concept: Distributed Processes
    >> > > (DP).
    >> > > This is something that can be mapped very naturally to objects and
    >> > > gives you high level of potential parallelism. In DP an object starts
    >> > > its own thread which operates on its own data space and other objects
    >> > > can call its methods asynchronously.
    >> >
    >> > Each object starts its own thread? lol, NO WAY!

    >
    >> Hmm... Yes, *logically* "Each object starts its own thread". That is
    >> the way it is defined by the author of the concept.

    >
    > Right. I corrected your major mistake. You stated that each object starts
    > its own thread. Well, your idea of how to implement DP is not scalable. I
    > corrected you; why don't you try learning from it? Wow.


    I corrected you by informing you that a thread-pool and multiplexing can
    implement DP using a bounded number of threads. Your answer was that I
    don't know what I am writing about. Well, you make me laugh, Szabolcs. One
    thing you know how to do is make me laugh. Thanks.
     
    Chris M. Thomasson, Oct 21, 2008
    #14
  15. I was perhaps WAY too harsh... Let me sum things up... I know for a FACT
    that:

    Distributed Processes
    http://brinch-hansen.net/papers/1978a.pdf

    can be implemented with a thread-pool, message-passing and multiplexing such
    that N threads can handle O objects. Think N == 4; O == 1,000,000. I KNOW
    the author understands this fact; it's COMMON SENSE. So be it. However...

    Szabolcs Ferenczi seemed to suggest that an object needed to have its own
    thread. Well, if I take that to another extreme, an object needs its own
    personal process. I think I flamed him too harshly. He is in need of
    information, not flames. Well, I am sorry!

    ;^/
     
    Chris M. Thomasson, Oct 21, 2008
    #15