When to use a garbage collector?

Discussion in 'C++' started by Carlo Milanesi, Jun 10, 2008.

  1. Hello,
    traditionally, in C++, dynamically allocated memory has been
    managed explicitly by calling "delete" in the application code.

    Now, in addition to the standard library strings, containers, and
    auto_ptrs, gurus suggest that it may be better to use a reference-counted
    smart pointer, or a garbage collector.

    But in which cases is it better to use one technique, and in which cases
    another? IOW, what is the design criterion?

    And if, after completing a working system, one technique turns out to
    perform better than another, or to be better for other reasons, is it
    advisable to change the memory management strategy of that working system?

    --
    Carlo Milanesi
    http://digilander.libero.it/carlmila
     
    Carlo Milanesi, Jun 10, 2008
    #1

  2. Carlo Milanesi

    Krice Guest

    On Jun 10, 18:30, Carlo Milanesi <>
    wrote:
    > But in which cases is it better to use one technique, and in which cases
    > another?


    It's probably a question that has no answer. But I like the
    old-school delete, because it's transparent: you know exactly
    what it's doing and when.
     
    Krice, Jun 10, 2008
    #2

  3. Carlo Milanesi

    Jerry Coffin Guest

    In article <484e9dfd$0$17941$>,
    says...

    [ ... ]

    > Now, in addition to the standard library strings, containers, and
    > auto_ptrs, gurus suggest that it may be better to use a reference-counted
    > smart pointer, or a garbage collector.


    _Some_ gurus -- there are others (who I think qualify as gurus) who are
    opposed to the use of garbage collection to varying degrees, or at least
    are of the view that its use should be restricted to specific
    situations.

    > But in which cases is it better to use one technique, and in which cases
    > another? IOW, what is the design criterion?


    The cost of garbage collection varies based on the type of collector
    used. Nearly every garbage collector has a "mark" phase, in which
    "live" objects (i.e. those that are still accessible) are marked as
    being in use. The cost of this phase is normally about linear in the
    number of objects currently accessible to the program.

    After that, different garbage collectors work in a number of different
    ways. A copying collector copies all those live objects into a
    contiguous space in memory, leaving another contiguous free space. This
    makes allocations extremely cheap, and the cost of the collection as a
    whole is also linear in the number of objects that are accessible.

    Other collectors leave the "live" objects where they are, and create
    free blocks of all the contiguous chunks of memory in the heap not
    currently occupied by live objects. In this case, the cost of the
    collection part tends to relate most closely to the number of contiguous
    chunks of free memory in the heap.

    On the other side, life isn't simple either. Manual memory management
    tends to have costs associated most closely with the number of
    allocations and frees used. This cost can be mitigated (drastically) in
    certain cases, such as allocating a large number of objects of identical
    size, or releasing a large number of objects all at the same time, if the
    allocator is written to allow them to be released together rather than
    individually.
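
    To make that concrete, here is a minimal sketch of such an allocator -- a
    fixed-size pool that hands out identically sized blocks and releases them
    all at once. The class and its names are made up purely for illustration,
    and it only suits objects whose destructors don't need to run:

    #include <cstddef>
    #include <vector>

    // Minimal fixed-size pool: every block has the same size, and all memory
    // is returned in one shot by reset() or the destructor.
    class FixedPool
    {
        std::vector<char*> chunks_;     // big raw chunks from ::operator new
        std::size_t block_size_;
        std::size_t blocks_per_chunk_;
        std::size_t next_;              // next unused block in the last chunk

    public:
        // block_size should be a multiple of the strictest alignment the
        // stored objects need.
        explicit FixedPool(std::size_t block_size,
                           std::size_t blocks_per_chunk = 1024)
            : block_size_(block_size),
              blocks_per_chunk_(blocks_per_chunk),
              next_(blocks_per_chunk)   // forces a chunk allocation on first use
        {}

        ~FixedPool() { reset(); }

        void* allocate()
        {
            if (next_ == blocks_per_chunk_) {   // current chunk exhausted
                chunks_.push_back(static_cast<char*>(
                    ::operator new(block_size_ * blocks_per_chunk_)));
                next_ = 0;
            }
            return chunks_.back() + block_size_ * next_++;
        }

        // No per-object deallocate: everything is released together.
        void reset()
        {
            for (std::size_t i = 0; i != chunks_.size(); ++i)
                ::operator delete(chunks_[i]);
            chunks_.clear();
            next_ = blocks_per_chunk_;
        }
    };

    Most allocations are then just a pointer bump, and teardown costs one
    ::operator delete per chunk instead of one free per object.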

    That only begins to tell the real story though: much of the cost of
    manual allocation/deletion arises when objects are copied. A garbage
    collector can (and does) keep track of different pointers that refer to
    an object, and only deletes the object when all pointers that give
    access to the object are gone. This makes it easy to keep a single
    object, and create new pointers/references to that object whenever
    needed.

    With manual memory management, it's far more common to duplicate the
    entire object, so each time the object is used, there's a separate
    instance of the object to look at, and each instance has exactly one
    owner that's responsible for deleting the object when it's no longer
    needed. Allocating space to hold extra copies of the object, and copying
    the relevant data into each copy, can take a considerable amount of
    time. With a GC in place, you can usually avoid this copying by just
    passing around pointers and everything shares access to that one object.
    OTOH, when/if you need to copy the object anyway (e.g. if one copy will
    be modified to become different from the other), this does little good.
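
    To put the contrast in code (Buffer and the two consume functions are
    hypothetical, and boost::shared_ptr merely stands in for "some way to let
    everyone share one object" -- under a real GC you would simply pass a
    pointer around):

    #include <vector>
    #include <boost/shared_ptr.hpp>

    typedef std::vector<char> Buffer;

    // Copy-per-owner style: each consumer gets its own duplicate and is the
    // sole owner responsible for its lifetime.
    void consume_copy(Buffer buf) { /* read buf, a private duplicate */ }

    // Shared style: consumers hold cheap handles to one immutable buffer,
    // which goes away when the last handle does.
    void consume_shared(boost::shared_ptr<const Buffer> buf) { /* read *buf */ }

    void example()
    {
        boost::shared_ptr<Buffer> buf(new Buffer(1024 * 1024));

        consume_copy(*buf);    // about a megabyte copied for this call alone
        consume_shared(buf);   // only a pointer and a reference count change hands
    }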

    As such, a tremendous amount depends upon things like: 1) what you're
    allocating dynamically, 2) how you're using the dynamically allocated
    objects, and 3) the degree to which objects you'd copy with manual
    memory management can be shared in the presence of GC. All of these (and
    more) depend on the application and design, not just the memory
    management itself.

    --
    Later,
    Jerry.

    The universe is a figment of its own imagination.
     
    Jerry Coffin, Jun 10, 2008
    #3
  4. Carlo Milanesi

    Stefan Ram Guest

    Jerry Coffin <> writes:
    >Allocating space to hold extra copies of the object, and copying
    >the relevant data into each copy, can take a considerable amount of
    >time. With a GC in place, you can usually avoid this copying by just
    >passing around pointers and everything shares access to that one object.


    »There were two versions of it, one in Lisp and one in
    C++. The display subsystem of the Lisp version was faster.
    There were various reasons, but an important one was GC:
    the C++ code copied a lot of buffers because they got
    passed around in fairly complex ways, so it could be quite
    difficult to know when one could be deallocated. To avoid
    that problem, the C++ programmers just copied. The Lisp
    was GCed, so the Lisp programmers never had to worry about
    it; they just passed the buffers around, which reduced
    both memory use and CPU cycles spent copying.«

    <XNOkd.7720$>

    »A lot of us thought in the 1990s that the big battle would
    be between procedural and object oriented programming, and
    we thought that object oriented programming would provide
    a big boost in programmer productivity. I thought that,
    too. Some people still think that. It turns out we were
    wrong. Object oriented programming is handy dandy, but
    it's not really the productivity booster that was
    promised. The real significant productivity advance we've
    had in programming has been from languages which manage
    memory for you automatically.«

    http://www.joelonsoftware.com/articles/APIWar.html

    »[A]llocation in modern JVMs is far faster than the best
    performing malloc implementations. The common code path
    for new Object() in HotSpot 1.4.2 and later is
    approximately 10 machine instructions (data provided by
    Sun; see Resources), whereas the best performing malloc
    implementations in C require on average between 60 and 100
    instructions per call (Detlefs, et. al.; see Resources).
    And allocation performance is not a trivial component of
    overall performance -- benchmarks show that many
    real-world C and C++ programs, such as Perl and
    Ghostscript, spend 20 to 30 percent of their total
    execution time in malloc and free -- far more than the
    allocation and garbage collection overhead of a healthy
    Java application (Zorn; see Resources).«

    http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends

    »Perhaps the most important realisation I had while developing
    this critique is that high level languages are more important
    to programming than object-orientation. That is, languages
    which have the attribute that they remove the burden of
    bookkeeping from the programmer to enhance maintainability and
    flexibility are more significant than languages which just
    add object-oriented features. While C++ adds object-orientation
    to C, it fails in the more important attribute of being high
    level. This greatly diminishes any benefits of the
    object-oriented paradigm.«

    http://burks.brighton.ac.uk/burks/pcinfo/progdocs/cppcrit/index005.htm
     
    Stefan Ram, Jun 10, 2008
    #4
  5. Carlo Milanesi

    Guest

    On Jun 10, 11:30 am, Carlo Milanesi <>
    wrote:
    > Hello,
    >      traditionally, in C++, dynamically allocated memory has been
    > managed explicitly by calling "delete" in the application code.
    >
    > Now, in addition to the standard library strings, containers, and
    > auto_ptrs, gurus suggest that it may be better to use a reference-counted
    > smart pointer, or a garbage collector.
    >
    > But in which cases is it better to use one technique, and in which cases
    > another? IOW, what is the design criterion?
    >
    > And if, after completing a working system, one technique turns out to
    > perform better than another, or to be better for other reasons, is it
    > advisable to change the memory management strategy of that working system?


    I almost always use smart pointers for objects. In any moderately
    sized system, the complexity of figuring out when to delete something
    is just not worth it. With smart pointers, you never have to delete.

    Of course, memory managed *within* a class can be handled with raw
    pointers.

    So I don't see a real choice between using deletes and using smart
    pointers unless there is a special circumstance or platform.

    We have yet to move to, or even consider, other forms of garbage
    collection such as that used in Java. Call me old-fashioned ;).
     
    , Jun 10, 2008
    #5
  6. Carlo Milanesi

    Guest

    On Jun 10, 9:42 am, Krice <> wrote:
    > On Jun 10, 18:30, Carlo Milanesi <>
    > wrote:
    >
    > > But in which cases is it better to use one technique, and in which cases
    > > another?

    >
    > It's probably a question that has no answer. But I like the
    > old school delete, because it's transparent, you know exactly
    > what it's doing and when.


    I don't think so. Can you tell whether the delete here will be called:

    void foo()
    {
        A * a = new A(/* ... */);
        /* code that may throw today or some time in the future possibly
           after some code change */
        delete a;
    }

    To the OP: never deallocate resources explicitly in code. That would
    not be "traditional," rather "developmental." :) I used to do that
    when I didn't know better. :/
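
    For example, a minimal sketch of the RAII alternative (std::auto_ptr is
    what the standard library offers today; boost::scoped_ptr works just as
    well):

    #include <memory>

    void foo()
    {
        std::auto_ptr<A> a(new A(/* ... */));
        /* code that may throw today or some time in the future */
    }   // 'a' deletes the A object on every path out of foo(), exceptions included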

    Ali
     
    , Jun 10, 2008
    #6
  7. Carlo Milanesi

    Guest

    On Jun 10, 12:03 pm, "" <>
    wrote:
    > On Jun 10, 11:30 am, Carlo Milanesi <>
    > wrote:


    > I almost always use smart pointers for objects. In any moderately
    > sized
    > system, the complexity of figuring out when to delete something is
    > just
    > not worth it.


    The problem is that the complex code that tries to figure out when to
    'delete' an object may never be executed. Lines of code that don't throw
    today can start throwing in the future as the code changes, and then the
    explicit delete statement is never reached.

    > Of course, memory managed *within* a class can be handled with raw
    > pointers.


    Unfortunately that statement must be qualified further: you are
    talking about classes that manage a single object, right? Because the
    following class fits your description, but is not exception-safe and
    cannot manage the memory it hopes to manage:

    class HopefulManager
    {
        One * one_;
        Two * two_;

    public:

        HopefulManager()
        :
        one_(new One()),
        two_(new Two()) // <- 1, <- 2
        {
            // some other code that may throw <- 3
        }

        ~HopefulManager()
        {
            delete two_;
            delete one_;
        }
    };

    The three places that may cause resource leaks are:

    1) new may throw, one_'s object is leaked

    2) Two() may throw, one_'s object is leaked

    3) Any line in the constructor body may throw, one_'s and two_'s objects
    are leaked
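
    A minimal exception-safe sketch of the same class (std::auto_ptr members
    here; boost::scoped_ptr would arguably be a better fit, and the class name
    is of course made up):

    #include <memory>

    class SaferManager
    {
        std::auto_ptr<One> one_;
        std::auto_ptr<Two> two_;

    public:

        SaferManager()
        :
        one_(new One()),
        two_(new Two())
        {
            // If new Two() or anything in this body throws, the members that
            // were already constructed are destroyed and their objects deleted,
            // so none of the three leaks above can happen.
        }

        // No user-written destructor needed: each member deletes its own object.
        // (Copying is a separate issue -- auto_ptr transfers ownership -- so
        // don't copy this class.)
    };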

    > So I don't see a real choice between using deletes and using smart
    > pointers unless there is a special circumstance or platform.


    What I said above holds regardless of special circumstances or platforms.
    Pure C++... :)

    Ali
     
    , Jun 10, 2008
    #7
  8. Carlo Milanesi

    Jerry Coffin Guest

    In article <-berlin.de>,
    -berlin.de says...
    > Jerry Coffin <> writes:
    > >Allocating space to hold extra copies of the object, and copying
    > >the relevant data into each copy, can take a considerable amount of
    > >time. With a GC in place, you can usually avoid this copying by just
    > >passing around pointers and everything shares access to that one object.

    >
    > »There were two versions of it, one in Lisp and one in
    > C++. The display subsystem of the Lisp version was faster.
    > There were various reasons, but an important one was GC:
    > the C++ code copied a lot of buffers because they got
    > passed around in fairly complex ways, so it could be quite
    > difficult to know when one could be deallocated. To avoid
    > that problem, the C++ programmers just copied. The Lisp
    > was GCed, so the Lisp programmers never had to worry about
    > it; they just passed the buffers around, which reduced
    > both memory use and CPU cycles spent copying.«
    >
    > <XNOkd.7720$>


    Intentionally or otherwise, I suspect your post is likely to generate
    far more heat than light. Most of it is unsupported assertions, and
    none of it is from anybody who appears to deserve the title of "guru",
    at least with respect to C++ (and IMO, probably not in any other respect
    either).

    The first quote appears to be purely apocryphal -- an unsupported
    statement from somebody posting under a pseudonym, about software of
    unknown origin written by people of unknown skills.

    Joel Spolsky spends a lot of time writing about software, but his
    credentials seem questionable at best. In particular, I've seen nothing
    to give a really strong indication that he's much of a programmer
    (himself) at all.

    IBM, of course, has a great deal of collective knowledge about
    programming -- but the bit you quote is written with the specific intent
    of promoting Java. It's been discussed here before, and at very best
    it's misleading when applied to more than the very specific domain about
    which it's written.

    Finally we get yet another reference to Ian Joyner's "Critique of C++."
    IMO, there should be something similar to Godwin's law relating to
    anybody who quotes (any part of) this. First of all, it has nothing to
    do with the C++ of today, or anytime in the last decade or more. As of
    the first edition, some (a small fraction) was reasonably accurate about
    the C++ of the time -- but the updates in his second and third editions
    were insufficient to keep it relevant to the C++ of their times, and
    the third edition still predates the original C++ standard by a couple
    of years. With respect to the C++ of today, it varies from irrelevant to
    misleading to downright false. Second, a great deal of it was misleading
    when it was originally written. Third, nearly all the rest of it was
    downright false when written.

    When you get down to it, despite being umpteen pages long, the
    criticisms in this paper that have at least some degree of validity with
    respect to current C++ can be summarized as:

    1) member functions should be virtual by default.
    2) C++ should have Concepts [JVC: C++ 0x will].
    3) Unified syntax for "." and "->" would be nice.
    4) "static" is overloaded in too many (confusing) ways.
    5) Modules would be better than headers.
    6) Support for DbC would have some good points.

    When you get down to it, however, it would be much easier to summarize
    his critique in a single sentence: "C++ isn't Eiffel." Many of his
    individual arguments aren't really supported at all -- they're simply
    statements that C++ must be wrong because it's different from Eiffel.

    Don't get me wrong: my previous statement that GC is favored for some
    situations under some circumstances still stands -- but IMO, none of
    these quotes provides any real enlightenment. Quite the contrary, the
    quote from IBM means _almost_ nothing, and the other three (between
    them) mean far less still.

    --
    Later,
    Jerry.

    The universe is a figment of its own imagination.
     
    Jerry Coffin, Jun 10, 2008
    #8
  9. Carlo Milanesi

    Fran Guest

    On Jun 10, 5:37 pm, wrote:

    > The problem is that the complex code that tries to figure out when to
    > 'delete' an object may never be executed. Lines of code that don't throw
    > today can start throwing in the future as the code changes, and then the
    > explicit delete statement is never reached.


    Is there any chance that C++0x will give us mandatory checking of
    throw() clauses in function definitions? That would enable the
    compiler to warn about leaks when exceptions might happen between
    calls to new and delete.
    --
    franl
     
    Fran, Jun 11, 2008
    #9
  10. Carlo Milanesi

    James Kanze Guest

    On Jun 11, 12:54 am, Jerry Coffin <> wrote:
    > In article <-berlin.de>,
    > -berlin.de says...
    > > Jerry Coffin <> writes:
    > > >Allocating space to hold extra copies of the object, and
    > > >copying the relevant data into each copy, can take a
    > > >considerable amount of time. With a GC in place, you can
    > > >usually avoid this copying by just passing around pointers
    > > >and everything shares access to that one object.


    > > »There were two versions of it, one in Lisp and one in
    > > C++. The display subsystem of the Lisp version was faster.
    > > There were various reasons, but an important one was GC:
    > > the C++ code copied a lot of buffers because they got
    > > passed around in fairly complex ways, so it could be quite
    > > difficult to know when one could be deallocated. To avoid
    > > that problem, the C++ programmers just copied. The Lisp
    > > was GCed, so the Lisp programmers never had to worry about
    > > it; they just passed the buffers around, which reduced
    > > both memory use and CPU cycles spent copying.«


    > > <XNOkd.7720$>


    > Intentionally or otherwise, I suspect your post is likely to
    > generate far more heat than light. Most of it is unsupported
    > assertions, and none of it is from anybody who appears to
    > deserve the title of "guru", at least with respect to C++ (and
    > IMO, probably not in any other respect either).


    You noticed that too.

    > The first quote appears to be purely apocryphal -- an
    > unsupported statement from somebody posting under a pseudonym,
    > about software of unknown origin written by people of unknown
    > skills.


    The first quote is probably the one which does correspond most
    to practical reality; I seem to recall a similar statement being
    made by Walter Bright (who certainly does qualify as a C++
    guru). But of course, it doesn't have to be that way.

    > Joel Spolsky spends a lot of time writing about software, but
    > his credentials seem questionable at best. In particular, I've
    > seen nothing to give a really strong indication that he's much
    > of a programmer (himself) at all.


    Another case of "those who can, do; those who can't, teach (or
    write articles)".

    > IBM, of course, has a great deal of collective knowledge about
    > programming -- but the bit you quote is written with the
    > specific intent of promoting Java. It's been discussed here
    > before, and at very best it's misleading when applied to more
    > than the very specific domain about which it's written.


    > Finally we get yet another reference to Ian Joyner's "Critique
    > of C++." IMO, there should be something similar to Godwin's
    > law relating to anybody who quotes (any part of) this. First
    > of all, it has nothing to do with the C++ of today, or anytime
    > in the last decade or more. As of the first edition, some (a
    > small fraction) was reasonably accurate about the C++ of the
    > time -- but the updates in his second and third editions were
    > insufficient to keep it relevant to the C++ of their times,
    > and the third edition still predates the original C++ standard
    > by a couple of years. With respect to the C++ of today, it
    > varies from irrelevant to misleading to downright false.
    > Second, a great deal of it was misleading when it was
    > originally written. Third, nearly all the rest of it was
    > downright false when written.


    > When you get down to it, despite being umpteen pages long, the
    > criticisms in this paper that have at least some degree of
    > validity with respect to current C++ can be summarized as:


    > 1) member functions should be virtual by default.


    Which is just wrong, at least from a software engineering point
    of view.

    > 2) C++ should have Concepts [JVC: C++ 0x will].
    > 3) Unified syntax for "." and "->" would be nice.


    I don't think I agree with this one, either.

    > 4) "static" is overloaded in too many (confusing) ways.


    The price we pay for C compatibility.

    > 5) Modules would be better than headers.


    I don't think anyone could disagree with that one. Of course,
    just about everyone has a different definition of what they mean
    by "modules".

    > 6) Support for DbC would have some good points.


    Interestingly, I think that C++ today has the best support of
    any language, although it's not automatic, and many programmers
    fail to use it.

    > When you get down to it, however, it would be much easier to
    > summarize his critique in a single sentence: "C++ isn't
    > Eiffel." Many of his individual arguments aren't really
    > supported at all -- they're simply statements that C++ must be
    > wrong because it's different from Eiffel.


    > Don't get me wrong: my previous statement that GC is favored
    > for some situations under some circumstances still stands --
    > but IMO, none of these quotes provides any real enlightenment.
    > Quite the contrary, the quote from IBM means _almost_ nothing,
    > and the other three (between them) mean far less still.


    I don't think that there is complete consensus among the gurus
    as to when garbage collection would be appropriate. I would be
    very suspicious, however, of anyone who claimed that it is
    always appropriate, or never appropriate. That it's not
    available in the standard toolkit is a definite flaw in the
    language, but requiring it to be used in every case would
    probably be even worse (but I don't think anyone has ever
    proposed that).

    --
    James Kanze (GABI Software) email:
    Conseils en informatique orientée objet/
    Beratung in objektorientierter Datenverarbeitung
    9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
     
    James Kanze, Jun 11, 2008
    #10
  11. Carlo Milanesi <> writes:

    > Hello,
    > traditionally, in C++, dynamically allocated memory has been
    > managed explicitly by calling "delete" in the application code.
    >
    > Now, in addition to the standard library strings, containers, and
    > auto_ptrs, gurus suggest that it may be better to use a reference-counted
    > smart pointer, or a garbage collector.
    >
    > But in which cases is it better to use one technique, and in which
    > cases another? IOW, what is the design criterion?


    Reference counted smart pointers: never. They leak memory as soon as
    you have bidirectional associations or cycles in your data
    structures.
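
    A minimal illustration (boost::shared_ptr here, but any naive
    reference-counted pointer behaves the same way; Node and leak() are
    made-up names for the sketch):

    #include <boost/shared_ptr.hpp>

    struct Node
    {
        boost::shared_ptr<Node> other;   // the back-and-forth association
    };

    void leak()
    {
        boost::shared_ptr<Node> a(new Node);
        boost::shared_ptr<Node> b(new Node);
        a->other = b;
        b->other = a;
    }   // a and b go out of scope, but each Node still holds a count of 1
        // on the other, so neither destructor ever runs: both Nodes leak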

    Garbage collectors: always. There are even real-time garbage
    collectors, if you have real-time constraints.


    > And if, after completing a working system, one technique turns out to
    > perform better than another, or to be better for other reasons, is it
    > advisable to change the memory management strategy of that working
    > system?


    Well, usually garbage collectors give better performance.
    http://www.jwz.org/doc/gc.html
    But of course it depends on the application and datasets. It might be
    easier to change the garbage collection strategy, selecting one more
    adapted to the application, than to change the application to use
    another memory management style. On the other hand, if you write your
    code with reference counting, it is easy enough to disable reference
    counting and fall back to garbage collection.

    --
    __Pascal Bourguignon__
     
    Pascal J. Bourguignon, Jun 11, 2008
    #11
  12. Carlo Milanesi

    dizzy Guest

    Carlo Milanesi wrote:

    > Hello,
    > traditionally, in C++, dynamically allocated memory has been
    > managed explicitly by calling "delete" in the application code.


    I think this is a misunderstanding. If by "traditionally" you mean C++ code
    written until about 1998, maybe so, but the current C++ standard along with
    auto_ptr<> and various third-party shared_ptr/counted_ptr implementations
    has existed for at least 10 years now. Not to mention that C++0x is just
    around the corner and it will change the way we think about C++, so we
    really need to drop tradition and use the best solutions offered by a
    modern C++ implementation.

    You probably didn't mean that by "traditionally"; in that case I'm stating
    the above for those who do mean it that way :)

    > Now, in addition to the standard library strings, containers, and
    > auto_ptrs, gurus suggest that it may be better to use a reference-counted
    > smart pointer, or a garbage collector.


    I'm no guru, but I'll state my opinion based on my experience.

    There are many "problems" with using blind pointers. IMO they all derive
    from the fact that a pointer is just too semantically rich. Take, for
    example, a complex C program; you will see pointers used like this:
    - 90% of cases used as a reference (that is, used to pass arguments by
    reference; they cannot logically be NULL inside the function receiving
    them, but no explicit syntax states that)
    - 9% of the time used to signal optionality of a value (that is, if the
    value exists the pointer is != 0, and if it doesn't it is 0)
    - 1% used for pointer arithmetic and other advanced pointer semantics

    OK, those numbers are obviously exaggerated, but you get the idea. Using a
    pointer is like designing a class type that has all the program logic in it
    (a monster with a huge number of member functions and data with no
    invariants to keep). Good design dictates that classes should be made as
    specialized as you can, to do one simple thing and do it well (and maintain
    the invariants of that).

    Same with pointers: if you need to pass something around by reference, use
    a reference (which you know can't be NULL, since in order to initialize a
    reference from NULL you would have had to dereference NULL, which is UB
    anyway). If you need to signal an optional value, use something like
    boost::optional (many even very experienced C++ programmers still prefer
    pointers for that). If you need a pointer to own some memory and delete it
    when the pointer goes out of scope, use a scoped_ptr (or maybe an auto_ptr
    if you prefer std-only code or if you need auto_ptr's move semantics). And
    so on.
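
    Roughly, in code (Widget and the function names are hypothetical, just to
    show what each type documents about the value it carries):

    #include <boost/optional.hpp>
    #include <boost/scoped_ptr.hpp>

    struct Widget { int id; };

    // Pass by reference: the argument must exist, and the signature says so.
    void repaint(Widget& w) { /* redraw w */ (void)w; }

    // Optional value: "nothing found" is a legitimate answer, and the return
    // type says so instead of relying on a NULL-pointer convention.
    boost::optional<Widget> find_widget(int id)
    {
        if (id == 0)
            return boost::none;   // nothing found
        Widget w = { id };
        return w;
    }

    // Scoped ownership: this function owns the Widget it creates, and the
    // object is deleted on every path out of the scope, exception or not.
    void build_and_use()
    {
        boost::scoped_ptr<Widget> w(new Widget());
        repaint(*w);
    }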

    All this specialization helps your program by moving a lot of your program
    logic into the C++ compile-time type system (the fact that there is no
    operator++ on auto_ptr<> means you can't do pointer arithmetic by mistake
    on auto_ptr<> values; it becomes a compile-time error), resulting in a less
    error-prone program, not to mention one that is easier to read for a reader
    who knows what all those "alternative pointer types" do (she won't have to
    study how you use a raw pointer; she sees an auto_ptr and already knows
    some facts about your usage of it).

    > But in which cases is it better to use one technique, and in which cases
    > another? IOW, what is the design criterion?


    I think in general you should almost never use raw pointers. First go
    through these alternative solutions and see which best fits your needs: C++
    references, boost::optional (optional is interesting also because you can
    use it to build a sort of pointer value that is either NULL or points to
    something valid, for example by making a boost::optional<T&>), and
    auto_ptr / scoped_ptr / shared_ptr / weak_ptr.

    About GC I can't say much, since I haven't used it in C++ (only in Java,
    where it is forced on you). Since for my kind of development I deal with a
    lot of resources for which RAII and scoped objects map very well, I have no
    need of GC.

    > And if, after completing a working system, one technique turns out to
    > perform better than another, or to be better for other reasons, is it
    > advisable to change the memory management strategy of that working system?


    I think there are some situations in which a GC should perform better than
    probably all of those solutions above; maybe someone more experienced with
    GCs can provide an example (because for every example I can come up with
    right now, I can also find a C++ solution using a custom allocator).

    --
    Dizzy
     
    dizzy, Jun 11, 2008
    #12
  13. Carlo Milanesi

    dizzy Guest

    Pascal J. Bourguignon wrote:

    > Carlo Milanesi <> writes:
    >
    >> Hello,
    >> traditionally, in C++, dynamically allocated memory has been
    >> managed explicitly by calling "delete" in the application code.
    >>
    >> Now, in addition to the standard library strings, containers, and
    >> auto_ptrs, gurus suggest that may be better to use a reference-counted
    >> smart pointer, or a garbage-collector.
    >>
    >> But in which cases it is better to use one technique and in which
    >> cases another? IOW, which is the design criterion?

    >
    > Reference counted smart pointers: never. They leak memory as soon as
    > you have bidirectional associations or cycles in your data
    > structures.


    By that logic you mean he will always have bidirectional associations or
    cycles in his data structures (and thus should NEVER use shared_ptr). In my
    years of C++ I've run into that very rarely, and when I did, I used
    weak_ptr to break the cycle. How often do you have bidirectional
    associations in your data structures? In those projects that you have,
    what percentage of the data structures have cycles?
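
    For the record, breaking such a cycle is a one-line change. A minimal
    sketch (Parent/Child and no_leak() are made-up names):

    #include <boost/shared_ptr.hpp>
    #include <boost/weak_ptr.hpp>

    struct Child;

    struct Parent
    {
        boost::shared_ptr<Child> child;    // owning direction
    };

    struct Child
    {
        boost::weak_ptr<Parent> parent;    // back-pointer: observes, doesn't own
    };

    void no_leak()
    {
        boost::shared_ptr<Parent> p(new Parent);
        boost::shared_ptr<Child> c(new Child);
        p->child = c;
        c->parent = p;
    }   // p's count drops to 0 (the weak_ptr doesn't keep Parent alive), the
        // Parent is destroyed, and that releases the last reference to the Child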

    > Garbage collectors: always. There are even real-time garbage
    > collectors, if you have real-time constraints.


    GCs are no silver bullet. They may be good in some scenarios, but I don't
    think they are good in every situation. Plus, memory management is just a
    small part of resource management in a C++ program (at least in my
    programs).

    > Well, usually garbage collectors give better performance.
    > http://www.jwz.org/doc/gc.html
    > But of course it depends on the application and datasets.


    Ah so then you contradict your previous "Garbage collectors: always".

    --
    Dizzy
     
    dizzy, Jun 11, 2008
    #13
  14. dizzy <> writes:
    > Pascal J. Bourguignon wrote:
    >> Carlo Milanesi <> writes:
    >>> But in which cases it is better to use one technique and in which
    >>> cases another? IOW, which is the design criterion?

    >>
    >> Reference counted smart pointers: never. They leak memory as soon as
    >> you have bidirectional associations or cycles in your data
    >> structures.

    >
    > By that logic you mean he will always have bidirectional associations or
    > cycles in his data structures (and thus should NEVER use shared_ptr). In
    > my years of C++ I've run into that very rarely, and when I did, I used
    > weak_ptr to break the cycle. How often do you have bidirectional
    > associations in your data structures? In those projects that you have,
    > what percentage of the data structures have cycles?


    Often enough. But the main point is, of course, that if your language
    doesn't allow you to express some ideas easily, then you won't try to
    express those ideas. If having circular references in C++ is a PITA,
    then we will try very hard to avoid them. (And thus burn a lot of
    wetware cycles that would be better allocated to resolving the true
    problems instead of these technicalities.)


    >> Well, usually garbage collectors give better performance.
    >> http://www.jwz.org/doc/gc.html
    >> But of course it depends on the application and datasets.

    >
    > Ah so then you contradict your previous "Garbage collectors: always".


    Not really; in the following sentence, which you cut out, I explained that
    if the current garbage collection algorithm wasn't good enough for your
    application, it would be better to change that garbage collection
    algorithm for another one better adapted to your particular
    circumstances, rather than going back to managing memory manually.

    --
    __Pascal Bourguignon__
     
    Pascal J. Bourguignon, Jun 11, 2008
    #14
  15. Carlo Milanesi

    dizzy Guest

    Fran wrote:

    > On Jun 10, 5:37 pm, wrote:
    >
    >> The problem is that the complex code that tries to figure out when to
    >> 'delete' an object may never be executed. Lines of code that don't throw
    >> today can start throwing in the future as the code changes, and then the
    >> explicit delete statement is never reached.

    >
    > Is there any chance that C++0x will give us mandatory checking of
    > throw() clauses in function definitions? That would enable the
    > compiler to warn about leaks when exceptions might happen between
    > calls to new and delete.


    Why is there such a need? Always assume anything may throw and code
    accordingly. In the exceptional cases where writing exception-safe code is
    not possible (making the code expensive or error-prone), I'm sure you can
    find a solution (like std::stack has for top()/pop()).

    --
    Dizzy
     
    dizzy, Jun 11, 2008
    #15
  16. Carlo Milanesi

    Krice Guest

    On Jun 11, 00:25, wrote:
    > void foo()
    > {
    > A * a = new A(/* ... */);
    > /* code that may throw today or some time in the future possibly
    > after some code change */
    > delete a;
    >
    > }


    Throw what? A ball?
     
    Krice, Jun 11, 2008
    #16
  17. Carlo Milanesi

    Jerry Coffin Guest

    In article
    <bcb28001-8bed-4732-8191-b97f61e511b3@k13g2000hse.googlegroups.com>,
    says...

    [ ... ]

    > > The first quote appears to be purely apocryphal -- an
    > > unsupported statement from somebody posting under a pseudonym,
    > > about software of unknown origin written by people of unknown
    > > skills.

    >
    > The first quote is probably the one which does correspond most
    > to practical reality; I seem to recall a similar statement being
    > made by Walter Bright (who certainly does qualify as a C++
    > guru). But of course, it doesn't have to be that way.


    Right -- my point wasn't that the quote was wrong, only that it didn't
    really add much. Somebody who disagreed with (essentially) the same point
    when I said it probably wouldn't find much in this to convince them (of
    anything).

    > > Joel Spolsky spends a lot of time writing about software, but
    > > his credentials seem questionable at best. In particular, I've
    > > seen nothing to give a really strong indication that he's much
    > > of a programmer (himself) at all.

    >
    > Another case of "those who can, do; those who can't teach (or
    > write articles)".


    ...except that most of the people I can think of who write specifically
    about C++ really _can_ write code, and most of them clearly _do_, and as
    a rule do it quite well at that. The only prominent exception would be
    Scott Meyers, who's pretty open about the fact that he consults about
    C++, teaches C++, but does NOT really write much C++ at all. OTOH, I'm
    pretty sure that if he really needed (or wanted to) he could write code
    quite nicely as well -- though given his talents as a teacher, I think
    it would be rather a waste if he spent his time that way.

    Others, however (e.g. David Abrahams, Andrei Alexandrescu, Andrew
    Koenig, Herb Sutter) who write about C++, also appear to write a fair
    amount of code, and mostly do it quite well at that (and no, I'm not
    claiming to be such a guru that I'm in a position to rate the experts,
    or anything like that...)

    [ ... ]

    > > When you get down to it, despite being umpteen pages long, the
    > > criticisms in this paper that have at least some degree of
    > > validity with respect to current C++ can be summarized as:

    >
    > > 1) member functions should be virtual by default.

    >
    > Which is just wrong, at least from a software engineering point
    > of view.


    Right -- I don't mean to imply that all these are correct, or anything
    like that -- I just mean that:

    1) They're clearly enough defined to be fairly sure what he's saying.
    2) They aren't obviously obsolete.
    3) They aren't simply of the form: "Eiffel does it differently."

    They're points that can be discussed intelligently, their strengths and
    weaknesses can be examined, etc. They're not necessarily right, but at
    least you can define (to at least some degree) what it means for them to
    be right or wrong.

    [ ... ]

    > I don't think that there is complete consensus among the gurus
    > as to when garbage collection would be appropriate. I would be
    > very suspicious, however, of anyone who claimed that it is
    > always appropriate, or never appropriate. That it's not
    > available in the standard toolkit is a definite flaw in the
    > language, but requiring it to be used in every case would
    > probably be even worse (but I don't think anyone has ever
    > proposed that).


    I'm not sure it really needs to be part of the standard library, but I
    do think it would be a good thing to tighten up the language
    specification to the point that almost any known type of GC could be
    included without leading to undefined behavior -- but we've been over
    that before...

    --
    Later,
    Jerry.

    The universe is a figment of its own imagination.
     
    Jerry Coffin, Jun 11, 2008
    #17
  18. Jerry Coffin <> writes:
    > I'm not sure it really needs to be part of the standard library, but I
    > do think it would be a good thing to tighten up the language
    > specification to the point that almost any known type of GC could be
    > included without leading to undefined behavior


    Well said!

    > -- but we've been over
    > that before...


    Ok.

    --
    __Pascal Bourguignon__
     
    Pascal J. Bourguignon, Jun 11, 2008
    #18
  19. Carlo Milanesi

    Guest

    On Jun 11, 6:56 am, Krice <> wrote:
    > On Jun 11, 00:25, wrote:
    >
    > > void foo()
    > > {
    > > A * a = new A(/* ... */);
    > > /* code that may throw today or some time in the future possibly
    > > after some code change */
    > > delete a;

    >
    > > }

    >
    > Throw what? A ball?


    No, an exception; unless you are imagining a type named "ball" of
    course. :p

    I can't believe you are serious; you must have forgotten the smiley...
    Seriously though, if you really didn't know what I meant by "throw,"
    you should learn about exceptions.

    Ali
     
    , Jun 11, 2008
    #19
  20. Carlo Milanesi

    Krice Guest

    On Jun 11, 19:43, wrote:
    > Seriously though, if you really didn't know what I meant with "throw,"
    > you should learn about exceptions.


    Exceptions are not logical. If construction of the object
    fails, then what? The program fails also, usually. I never
    check anything, not since they invented exceptions, so
    I'm assuming that there are no exceptions :)
     
    Krice, Jun 11, 2008
    #20
