Re: C++ Memory Management Question

Discussion in 'C++' started by Nobody, Jan 1, 2012.

  1. Nobody

    Nobody Guest

    On Sat, 31 Dec 2011 08:56:29 -0500, Datesfat Chicks wrote:

    > It seems to me that C++ has more inherent reliance on dynamic memory
    > allocation than C.


    The STL relies upon dynamic memory where libc normally requires that the
    caller provide buffers. The C++ language itself doesn't rely upon it.
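
    For instance (my illustration, not from the original exchange):

        #include <cstdio>
        #include <string>

        int main()
        {
            /* libc style: the caller supplies the storage. */
            char buf[32];
            snprintf(buf, sizeof buf, "%d", 42);

            /* STL style: the string allocates its own storage dynamically. */
            std::string s(100, 'x');
        }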

    > a)I'm assuming that malloc() and free() are still available in C++?


    Yes.

    > b)Do these draw from the same memory pool as "new" and "delete"?


    It isn't specified, so you should assume that they don't.

    > My reason for asking that question is that I'm looking for a graceful
    > strategy if dynamic memory gets exhausted.
    >
    > The obvious strategy that comes to mind is:
    >
    > a)When the program starts, grab a good chunk of memory using malloc().
    >
    > b)If the dynamic memory gets exhausted via "new" and friends, handle
    > the exception. In the exception handler, free() the memory allocated
    > by (a) and set a flag to signal the program to gracefully exit as soon
    > as it can.
    >
    > c)Hopefully the memory released by free() would be enough to allow the
    > program to exit gracefully (rather than a sudden less controlled
    > exit).
    >
    > Is this sane? Is there a better way?


    If you're programming in C++, a trivial modification to your strategy
    would be to grab a chunk of memory using new[] rather than malloc() and
    release it using delete[] rather than free().
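
    Something like this (an untested sketch; the size and the names are mine):

        #include <new>

        static char *rescue = new char[4 << 20];  // 4 MiB reserved at startup
        static bool shutting_down = false;

        void release_rescue_memory()
        {
            delete[] rescue;        // hand the reserve back to the allocator
            rescue = 0;
            shutting_down = true;   // tell the program to wind down gracefully
        }

        int main()
        {
            try {
                // ... normal work that may allocate ...
            } catch (const std::bad_alloc &) {
                release_rescue_memory();
                // clean up and exit while the freed reserve keeps 'new' working
            }
        }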

    But you're probably better off designing your "graceful failure"
    strategy so that it doesn't depend upon additional memory, rather than
    trying to guess how much memory you need to reserve for it. Certainly,
    that's likely to be harder to do in C++ than in C (and even in C, it's
    easier if you use the OS' native APIs rather than the portable C APIs).

    One final point: on Linux, using too much memory often results not in a
    std::bad_alloc exception (or malloc() returning NULL) but in SIGKILL from
    the kernel's "OOM killer", in which case any recovery strategy is doomed.
    Even without that aspect, you can't control how third-party libraries
    handle an out-of-memory situation; they may simply abort(), or not bother
    checking the value returned from malloc() and promptly segfault upon
    dereferencing the NULL pointer.
    Nobody, Jan 1, 2012
    #1

  2. Goran

    Goran Guest

    On Jan 2, 10:48 pm, Datesfat Chicks <> wrote:
    > On Sun, 01 Jan 2012 09:16:38 +0000, Nobody <> wrote:
    >
    > >One final point: on Linux, using too much memory often results not in a
    > >std::bad_alloc exception (or malloc() returning NULL) but in SIGKILL from
    > >the kernel's "OOM killer", in which case any recovery strategy is doomed.
    > >Even without that aspect, you can't control how third-party libraries
    > >handle an out-of-memory situation; they may simply abort(), or not bother
    > >checking the value returned from malloc() and promptly segfault upon
    > >dereferencing the NULL pointer.

    >
    > Both good points.  I had made the assumption that it would be MY code
    > running out of memory.


    Actually, these are the wrong points for this newsgroup. In C and C++,
    there's no such thing as an OOM killer, and there's no SIGKILL. Other
    systems don't have such a thing either, and even on systems that do use
    overcommit, it's an optional feature. Not bothering to check can be a
    strategy on OOM-killer systems, sure, but it is not a good general
    answer, and it is certainly not meaningful in light of standard C or
    C++. That said, proper C++ code will not "check" anything anyhow; it
    will possibly catch an exception later, upon a stack unwind.
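
    To illustrate what I mean (a sketch of mine, not anyone's real code):

        #include <iostream>
        #include <new>
        #include <string>
        #include <vector>

        std::vector<std::string> build_table()
        {
            std::vector<std::string> rows;
            // No checks here: push_back() may throw std::bad_alloc, and
            // RAII destroys everything already built during the unwind.
            for (int i = 0; i < 1000; ++i)
                rows.push_back(std::string(100, 'x'));
            return rows;
        }

        int main()
        {
            try {
                std::vector<std::string> table = build_table();
            } catch (const std::bad_alloc &) {
                std::cerr << "out of memory; operation abandoned\n";
            }
        }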

    As for libraries that abort() (or segfault) upon OOM, I think that
    everyone would agree that they have a design bug. What they are doing
    is akin to coming into someone's house and shitting on the floor
    because someone else was in the bathroom.

    Goran.
    Goran, Jan 3, 2012
    #2

  3. Nobody

    Nobody Guest

    On Tue, 03 Jan 2012 12:53:47 -0500, Datesfat Chicks wrote:

    > I'm a little surprised at the SIGKILL response. If you're going to
    > have a std::bad_alloc exception, then it seems an improper
    > implementation if that exception is useless because the process gets
    > killed by another mechanism before the exception is thrown.


    They serve different purposes. std::bad_alloc is thrown if you want more
    memory but it cannot be provided (i.e. it isn't available from the heap
    and the OS won't let you enlarge the heap). SIGKILL occurs if the OS
    decides that it wants the memory which your process already has.

    The latter is primarily an issue on systems where overcommit is enabled,
    and overcommit being enabled is the norm. If overcommit is disabled, the
    OS won't allow a process to fork() unless it can provide sufficient
    virtual memory to back all writable pages in the process' address space.
    This is usually overkill, as most processes which call fork() will
    promptly call execve(), having modified only a tiny fraction of their
    writable memory.
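
    I.e. the usual pattern is (a POSIX sketch, not standard C++):

        #include <unistd.h>

        int main()
        {
            pid_t pid = fork();     /* copy-on-write duplicate of the parent */
            if (pid == 0) {
                /* The child touches almost nothing before execve()
                   throws the copied address space away. */
                execlp("ls", "ls", (char *)0);
                _exit(127);         /* only reached if exec fails */
            }
            /* parent continues ... */
        }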

    The problem comes when the kernel finds that it actually needs more memory
    than is available, e.g. because one copy-on-write page too many has
    actually been written to. The only way out of that situation is to revoke
    allocations which have already been granted by sacrificing an existing
    process. When doing so, it will typically choose the most "greedy"
    unprivileged process available. Any process where handling out-of-memory
    conditions was a significant design consideration would seem to be a
    likely candidate to be sacrificed.

    BTW: regarding your original problem (how to handle the C++ side of
    allocation failures), I remembered std::set_new_handler(). If a handler
    has been registered via this function, operator new should[1] call it if
    it cannot obtain the desired memory. In fact, it should call it repeatedly
    until either the allocation succeeds or the handler unregisters itself
    (IOW: if you write such a handler, it must unregister itself if it cannot
    release any more memory, otherwise operator new will get stuck in an
    infinite loop). std::bad_alloc will only be thrown if allocation fails and
    no handler is registered.

    [1] The global operator new can be relied upon to do so; user-defined
    versions of operator new may be a different matter.
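
    Roughly like this (an untested sketch of mine; the pool size is arbitrary):

        #include <new>

        namespace {
            char *emergency_pool = new char[1 << 20];  // 1 MiB reserved up front

            void low_memory_handler()
            {
                if (emergency_pool) {
                    // Release the reserve; operator new retries the allocation.
                    delete[] emergency_pool;
                    emergency_pool = 0;
                } else {
                    // Nothing left to give back: unregister, so operator new
                    // throws std::bad_alloc instead of calling us forever.
                    std::set_new_handler(0);
                }
            }
        }

        int main()
        {
            std::set_new_handler(low_memory_handler);
            // ... rest of the program ...
        }
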
    Nobody, Jan 3, 2012
    #3
  4. Goran

    Goran Guest

    On Jan 3, 6:53 pm, Datesfat Chicks <> wrote:
    > On Tue, 3 Jan 2012 00:02:21 -0800 (PST), Goran <>
    > wrote:
    >
    > >On Jan 2, 10:48 pm, Datesfat Chicks <> wrote:
    > >> On Sun, 01 Jan 2012 09:16:38 +0000, Nobody <> wrote:

    >
    > >> >One final point: on Linux, using too much memory often results not in a
    > >> >std::bad_alloc exception (or malloc() returning NULL) but in SIGKILL from
    > >> >the kernel's "OOM killer", in which case any recovery strategy is doomed.
    > >> >Even without that aspect, you can't control how third-party libraries
    > >> >handle an out-of-memory situation; they may simply abort(), or not bother
    > >> >checking the value returned from malloc() and promptly segfault upon
    > >> >dereferencing the NULL pointer.

    >
    > >> Both good points. I had made the assumption that it would be MY code
    > >> running out of memory.

    >
    > >Actually, these are the wrong points for this newsgroup. In C and C++,
    > >there's no such thing as an OOM killer, and there's no SIGKILL. Other
    > >systems don't have such a thing either, and even on systems that do use
    > >overcommit, it's an optional feature. Not bothering to check can be a
    > >strategy on OOM-killer systems, sure, but it is not a good general
    > >answer, and it is certainly not meaningful in light of standard C or
    > >C++. That said, proper C++ code will not "check" anything anyhow; it
    > >will possibly catch an exception later, upon a stack unwind.

    >
    > I'm a little surprised at the SIGKILL response.  If you're going to
    > have a std::bad_alloc exception, then it seems an improper
    > implementation if that exception is useless because the process gets
    > killed by another mechanism before the exception is thrown.


    Well... malloc() returns NULL if the allocation request can't be
    fulfilled, or else a pointer to a block of the requested size. That's
    standard C. In standard C++, new returns the object (or objects), or
    throws bad_alloc. That is the scope of this group.
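
    Side by side, that's (illustrative only):

        #include <cstdlib>
        #include <new>

        int main()
        {
            /* Standard C: failure is a null return, checked explicitly. */
            void *p = std::malloc(1024);
            if (p == 0) { /* handle failure */ }
            std::free(p);

            /* Standard C++: failure is a std::bad_alloc exception. */
            try {
                int *q = new int[256];
                delete[] q;
            } catch (const std::bad_alloc &) {
                /* handle failure */
            }
        }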

    Now... on systems with an OOM killer, malloc() might return a pointer,
    and the code would die when trying to use it (most likely, when writing
    to it). The same goes for new, except that the OOM killer is slightly
    more likely to kick in sooner, because in C++ construction usually
    happens right after the allocation.

    When does the OOM killer kick in, in practice? When there is address
    space for the request, but the system overcommitted when it granted that
    memory, then ran out of virtual memory and wants to get rid of whichever
    process it finds most offensive. So even on OOM-killer-enabled systems,
    bad_alloc is not useless - it still covers the "no address space, buddy"
    case.

    Finally, what is well-behaved code to do about the OOM killer on such
    systems? IMHO, have a process-restart strategy and a data-loss-
    prevention strategy. Indeed, such systems do provide restart mechanisms.

    > >As for libraries who abort() upon OOM (or segfault), I think that
    > >everyone would agree that they have a design bug. I mean, what they
    > >are doing is akin to coming into someone's house and shitting on the
    > >floor because someone else was in the bathroom.

    >
    > I do that even if someone else is in the bathroom.
    >
    > However, for people I really like, I put a little plastic wrap or
    > aluminum foil down and do it on that.  That way you get the aroma and
    > shock effect without the cleanup difficulties.


    I guess you found yourself offended by my analogy because your own code
    aborts or segfaults on NULL from malloc. There's no need for that.
    Nobody cares if you do that, as long as they don't have to deal with
    such code. And if the code only runs on a system with an OOM killer...
    I guess it has to be said that this strategy is arguably friendlier to
    such a system than standard C or C++ is. But this is very much out of
    scope for this newsgroup.

    Goran.
    Goran, Jan 4, 2012
    #4
  5. Nobody

    Nobody Guest

    On Wed, 04 Jan 2012 00:11:41 -0800, Goran wrote:

    > So even on OOM-killer enabled systems, bad_alloc is
    > not useless - it still covers the "no address space, buddy" case.


    It also covers the case where the process is subject to explicit resource
    limits which are set well below the total amount of virtual memory on the
    system.
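
    E.g. on Linux you can impose such a limit on yourself (an untested
    sketch; whether heap growth runs into RLIMIT_AS this way is
    platform-dependent):

        #include <sys/resource.h>
        #include <iostream>
        #include <new>
        #include <vector>

        int main()
        {
            rlimit lim;
            getrlimit(RLIMIT_AS, &lim);
            lim.rlim_cur = 256ul * 1024 * 1024;   // 256 MiB soft cap
            setrlimit(RLIMIT_AS, &lim);

            try {
                std::vector<char> big(1ul << 30); // 1 GiB: should fail under the cap
            } catch (const std::bad_alloc &) {
                std::cerr << "allocation refused in user space, as intended\n";
            }
        }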

    Even on systems which lack an OOM killer, memory-hungry processes still
    have to worry about issues other than user-space allocation failure
    (std::bad_alloc, or a NULL return from malloc() or nothrow new). E.g. if
    the system has "excessive" amounts of swap, allocation will succeed and
    the process won't get OOM-killed, but it may run so slowly that just
    having it killed would have been preferable.

    IOW, the "official" allocation-failure mechanism is only one piece of the
    puzzle, and it's only worth solving if you can also deal with the other
    aspects.
    Nobody, Jan 4, 2012
    #5
  6. Jorgen Grahn

    Jorgen Grahn Guest

    On Sun, 2012-01-01, Nobody wrote:
    [...]
    > One final point: on Linux, using too much memory often results not in a
    > std::bad_alloc exception (or malloc() returning NULL) but in SIGKILL from
    > the kernel's "OOM killer", [...]


    I believe you (and the followups) are using the term "OOM killer"
    incorrectly here. The "OOM killer" of Linux is/was a feature which
    kills random processes when the OS is desperate for memory. Those
    processes don't have to allocate any memory at all in order to be
    candidates for killing.

    http://linux-mm.org/OOM_Killer

    Possibly, the word you want is "overcommitting".

    /Jorgen

    --
    // Jorgen Grahn <grahn@ Oo o. . .
    \X/ snipabacken.se> O o .
    Jorgen Grahn, Jan 4, 2012
    #6
  7. Nobody

    Nobody Guest

    On Wed, 04 Jan 2012 19:28:51 +0000, Jorgen Grahn wrote:

    >> One final point: on Linux, using too much memory often results not in a
    >> std::bad_alloc exception (or malloc() returning NULL) but in SIGKILL from
    >> the kernel's "OOM killer", [...]

    >
    > I believe you (and the followups) are using the word "OOM killer"
    > incorrectly here. The "OOM killer" of Linux is/was a feature which
    > kills random processes when the OS is desperate for memory.


    Yes, I'm aware of this, and thought that I had explained it.

    > Those processes don't have to allocate any memory at all in order to be
    > candidates for killing.
    >
    > http://linux-mm.org/OOM_Killer


    That link describes the factors which are used to choose a process to
    sacrifice. One of them is the amount of memory which it is using. In fact,
    it says:

    > 3) we don't kill anything innocent of eating tons of memory


    That memory doesn't have to come from malloc/new (it could be a result of
    modifying large portions of the data and/or bss segments, or using a lot
    of stack), but a process which doesn't use much memory one way or another
    isn't a likely candidate for being sacrificed.

    > Possibly, the word you want is "overcommitting".


    Overcommit is the reason the kernel needs an OOM-killer. If you disable
    overcommit, the kernel will never find itself in the position of needing
    to sacrifice a process in order to dig itself out of the hole that
    overcommitting got it into. But if you do that, don't expect large
    processes (e.g. Firefox) to be able to spawn any child processes; the
    fork() will fail unless there's enough free memory to duplicate the
    parent's entire address space, even though 99% of it will be discarded as
    soon as exec() is called.
    Nobody, Jan 4, 2012
    #7
