Re: C++ Memory Management Question

Discussion in 'C++' started by Victor Bazarov, Dec 31, 2011.

  1. On 12/31/2011 8:56 AM, Datesfat Chicks wrote:
    > I'm just learning C++ (after being a 19-year C veteran), so forgive
    > any naive questions or concerns.
    >
    > It seems to me that C++ has more inherent reliance on dynamic memory
    > allocation than C. It doesn't have to be that way in all cases (many
    > classes would be implemented without dynamic allocation), but it seems
    > more natural in C++ that dynamic allocation would appear in programs.


    The reality is simpler: it's easier to do dynamic memory properly in C++
    (purposefully so, actually), and that's the reason more people resort to
    it. In fact I've written plenty of code that never allocated anything
    in the free store (except by any library mechanisms that I had no
    control over). You don't have to use dynamic memory if you don't need it.

    > Here are my questions:
    >
    > a) I'm assuming that malloc() and free() are still available in C++?


    That's not a question, you know. And, they are.

    > b)Do these draw from the same memory pool as "new" and "delete"?


    Usually. Unless you've overridden 'new' and 'delete' and made THEM
    "draw" from some other place.

    > My reason for asking that question is that I'm looking for a graceful
    > strategy if dynamic memory gets exhausted.


    Good luck.

    > The obvious strategy that comes to mind is:
    >
    > a) When the program starts, grab a good chunk of memory using malloc().


    And do what with it? Just sit on it? That's a serious waste of memory.

    > b) If the dynamic memory gets exhausted via "new" and friends, handle
    > the exception. In the exception handler, free() the memory allocated
    > by (a) and set a flag to signal the program to gracefully exit as soon
    > as it can.


    Explain "gracefully exit". I mean, explain it to yourself, keeping in
    mind that your process was *interrupted* by a *failed attempt* to get
    more memory.

    > c) Hopefully the memory released by free() would be enough to allow the
    > program to exit gracefully (rather than a sudden, less controlled
    > exit).


    So, your strategy is "reserve [a preset amount of] some resource until
    it's desperately needed, then release it and pray that it's enough to
    kill the process", right?

    > Is this sane?


    How about I don't answer this, and you instead take a second look at it
    at your convenience?

    > Is there a better way?


    This question is asked here countless times every year. How about *you*
    google for it and try to find out how successful your predecessors were?
    And if you find something that suits you, by all means use it. If you
    don't, *and* you have spare time, by all means look for it. But my
    advice (based on experience trying to handle it and seeing others trying
    to handle it) is, don't waste your time. If your process has run out of
    memory, the best solution is to restart it with *more memory*.

    And, again, good luck!

    V
    --
    I do not respond to top-posted replies, please don't ask
     
    Victor Bazarov, Dec 31, 2011
    #1

  2. On 12/31/2011 3:44 PM, Victor Bazarov wrote:
    > > Is there a better way?

    >
    > my
    > advice (based on experience trying to handle it and seeing others trying
    > to handle it) is, don't waste your time. If your process has run out of
    > memory, the best solution is to restart it with *more memory*.


    So, if your word processor cannot load a huge document, or your video
    editor cannot load a huge piece of footage, you'd like the application
    to terminate at once without saving anything :(
    I think it would be better to undo the last command, and inform the user
    that the command failed because of a memory shortage (relative to the
    size of the document).

    To guarantee a graceful exit, I would ensure that the shutdown routines do
    not allocate any memory, by reserving beforehand all the needed memory
    space.

    --

    Carlo Milanesi
    http://carlomilanesi.wordpress.com/
     
    Carlo Milanesi, Dec 31, 2011
    #2

  3. On 12/31/2011 12:18 PM, Carlo Milanesi wrote:
    > On 12/31/2011 3:44 PM, Victor Bazarov wrote:
    >> > Is there a better way?

    >>
    >> my
    >> advice (based on experience trying to handle it and seeing others trying
    >> to handle it) is, don't waste your time. If your process has run out of
    >> memory, the best solution is to restart it with *more memory*.

    >
    > So, if your word processor cannot load a huge document, or your video
    > editor cannot load a huge piece of footage, you'd like the application
    > to terminate at once without saving anything :(


    No, not without saying anything. But saying anything often does NOT
    require memory allocation, nor do I consider "saying anything" a
    graceful exit.

    Neither a word processor nor a video editor should even attempt loading
    anything if it determines that the attempt might cause it to run out of
    memory. If they can't determine that before opening the file, they
    aren't worth our time to discuss them.

    Now, the usual way WRT handling memory constraints is *not to let the
    process run out of memory* in the first place, instead of trying to
    do anything after it has happened.

    > I think it would be better to undo the last command, and inform the user
    > that the command failed because of a memory shortage (relative to the
    > size of the document).


    Undoing the last command can require allocating memory...

    > To guarantee a graceful exit, I would ensure that the shutdown routines do
    > not allocate any memory, by reserving beforehand all the needed memory
    > space.


    That's good. What happens if the process runs out of memory while
    trying to allocate the memory for those shutdown routines?

    V
    --
    I do not respond to top-posted replies, please don't ask
     
    Victor Bazarov, Dec 31, 2011
    #3
  4. On 31/12/2011 19:04, Victor Bazarov wrote:
    >> to terminate at once without saving anything :(

    >
    > No, not without saying anything. But saying anything often does NOT
    > require memory allocation, nor do I consider "saying anything" a
    > graceful exit.


    I wrote "saving" not "saying". I mean, if an application has several
    documents open and one document cannot be loaded, it's better to abort
    only the loading of that document, without closing all the documents,
    and possibly losing changes to them.

    > Neither a word processor nor a video editor should even attempt loading
    > anything if it determines that the attempt might cause it to run out of
    > memory. If they can't determine that before opening the file, they
    > aren't worth our time to discuss them.


    Is there a way to determine if the next memory allocation will succeed?

    To execute Adobe Flash applets, the Firefox Web browser executes a
    process named "plugin-container.exe". I've seen that if there is not
    enough memory to run one Adobe Flash applet, all Adobe Flash applets
    running in different Firefox windows terminate at once, showing an error
    message in a gray area.
    I don't think that is good behavior, but I think it is rather typical.
    It's good that Firefox itself doesn't terminate.

    > Now, the usual way WRT handling memory constraints is *not to let the
    > process run out of memory* in the first place, instead of trying to
    > do anything after it has happened.


    With some programs that is not possible (or it is quite hard to do), as
    it is the user that decides the size of the data to keep in memory.

    >> I think it would be better to undo the last command, and inform the user
    >> that the command failed because of a memory shortage (relative to the
    >> size of the document).

    >
    > Undoing the last command can require allocating memory...


    Undoing a partially performed (uncommitted) document load may mean just
    unwinding the stack and letting destructors do their deallocations. And
    this is already done by the exception mechanism. If all has been properly
    loaded, then a pointer to the loaded contents is inserted into the
    document.
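
    In code, that commit-at-the-end shape might look like this (a sketch;
    the Document/Contents types and load_contents() are hypothetical):

        #include <memory>
        #include <string>
        #include <utility>
        #include <vector>

        struct Contents {                     // the parsed data, however large
            std::vector<std::string> lines;
        };

        struct Document {
            std::unique_ptr<Contents> contents;
        };

        // May throw std::bad_alloc (or anything else) part-way through;
        // a real loader would read and parse 'path' here.
        std::unique_ptr<Contents> load_contents(const std::string& path)
        {
            std::unique_ptr<Contents> c(new Contents);
            c->lines.push_back("contents of " + path);
            return c;
        }

        void open_into(Document& doc, const std::string& path)
        {
            // Partial results are owned by this local object. If loading
            // throws, unwinding destroys it and frees the partial load;
            // 'doc' is never touched.
            std::unique_ptr<Contents> loaded = load_contents(path);

            // Commit point: a pointer move, which cannot fail or allocate.
            doc.contents = std::move(loaded);
        }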

    >> To guarantee a graceful exit, I would ensure that the shutdown routines do
    >> not allocate any memory, by reserving beforehand all the needed memory
    >> space.

    >
    > That's good. What happens if the process runs out of memory while trying
    > to allocate the memory for those shutdown routines?


    That allocation should happen at startup of the application, before
    larger allocations. If that fails, nothing can be done.

    --

    Carlo Milanesi
    http://carlomilanesi.wordpress.com/
     
    Carlo Milanesi, Dec 31, 2011
    #4
  5. MikeWhy Guest

    "Datesfat Chicks" <> wrote in message
    news:...
    > For the type of applications I have in mind, the task would already
    > have been run with the maximum amount of memory available, and short
    > of rephrasing the computation or using a different computer, no
    > options would be available to the user.


    What system? The virtual address space for a 32-bit Windows app is 2 GB;
    much larger on 64-bit. Are you really in danger of running out of heap? What
    is your process doing?
     
    MikeWhy, Dec 31, 2011
    #5
  6. Datesfat Chicks <> wrote:
    > The reason for releasing memory when "new" fails is that without doing
    > that, the application couldn't even make it to a point where it could
    > finish writing log files and let the user know they're copulated.


    There could be at least two things that might throw a spanner
    into the works when you try to "pre-allocate" memory. First,
    on multi-tasking systems, when you release the pre-allocated
    memory you're not always guaranteed that you will get it back
    when you then call 'new' - a different process may have been
    run in between and just grabbed the memory you released
    (whether this can happen depends a lot on the system and how
    it handles memory). And then there's something called "memory
    overcommitment", i.e. even though malloc() returns successfully
    you still might have your process killed when you try to use
    the memory. This feature was added on some systems because
    there are a lot of programs that pre-allocate memory they
    never use, so the system signals that memory is available but
    may not actually give it to you when you need it and memory
    has become exhausted. This can only be avoided by either
    getting the administrator of the machine to switch off
    overcommitment or by "using" the memory you got (by writing
    to it). Of course, the details again will be rather system
    specific.
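
    The "write to it" workaround might look like the sketch below. The
    4096-byte page size is an assumption (real code would query the OS),
    and on an overcommitting system the process can still be killed at the
    write itself - which is the point: it fails early rather than later:

        #include <cstddef>
        #include <cstdlib>

        // Allocate, then touch one byte per page, forcing the OS to
        // actually back the memory instead of merely promising it.
        void* allocate_backed(std::size_t bytes)
        {
            unsigned char* p =
                static_cast<unsigned char*>(std::malloc(bytes));
            if (p == nullptr)
                return nullptr;

            const std::size_t page = 4096;  // assumed; e.g. sysconf on POSIX
            for (std::size_t i = 0; i < bytes; i += page)
                p[i] = 0;                   // the write commits the page

            return p;
        }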

    On the other hand, is a scenario where memory is so exhausted
    that you won't get enough even for simple clean-up tasks
    really something you have to worry that much about? Normally
    you will run out of memory when you need really large amounts
    of it, and then there will typically still be enough available
    for the clean-up (as long as it doesn't also need huge
    amounts). So, is that a problem you need to solve in a way
    that leaves not the slightest chance of it ever occurring? As
    I read in one of Tanenbaum's books, sometimes the best
    solution to a rare problem is to stick your head in the sand
    and pretend that it won't happen (see also the "ostrich
    algorithm" ;-)
    Regards, Jens
    --
    \ Jens Thoms Toerring ___
    \__________________________ http://toerring.de
     
    Jens Thoms Toerring, Dec 31, 2011
    #6
  7. Goran Guest

    On Dec 31 2011, 7:04 pm, Victor Bazarov <>
    wrote:
    > On 12/31/2011 12:18 PM, Carlo Milanesi wrote:
    >
    > > On 12/31/2011 3:44 PM, Victor Bazarov wrote:
    > >> > Is there a better way?

    >
    > >> my
    > >> advice (based on experience trying to handle it and seeing others trying
    > >> to handle it) is, don't waste your time. If your process has run out of
    > >> memory, the best solution is to restart it with *more memory*.

    >
    > > So, if your word processor cannot load a huge document, or your video
    > > editor cannot load a huge piece of footage, you'd like the application
    > > to terminate at once without saving anything :(

    >
    > No, not without saying anything.  But saying anything often does NOT
    > require memory allocation, nor do I consider "saying anything" a
    > graceful exit.
    >
    > Neither a word processor nor a video editor should even attempt loading
    > anything if it determines that the attempt might cause it to run out of
    > memory.  If they can't determine that before opening the file, they
    > aren't worth our time to discuss them.


    I disagree. There's no guarantee whatsoever that any allocation will
    succeed, so theoretically, your idea is unsound. Practically, it's
    difficult to implement more often than not. I was doing it, and
    looking back, it was an error. E.g. it's easy only if you merely load
    file contents into memory. I don't believe that happens all that
    often.

    What you're saying is equivalent to going to the supermarket to see
    whether it has mayonnaise, then going back for money, then coming back
    to buy.

    Goran.
     
    Goran, Jan 1, 2012
    #7
  8. On 1/1/2012 2:25 AM, Goran wrote:
    > On Dec 31 2011, 7:04 pm, Victor Bazarov<>
    > wrote:
    >> Neither a word processor nor a video editor should even attempt loading
    >> anything if it determines that the attempt might cause it to run out of
    >> memory. If they can't determine that before opening the file, they
    >> aren't worth our time to discuss them.

    >
    > I disagree. There's no guarantee whatsoever that any allocation will
    > succeed, so theoretically, your idea is unsound. Practically, it's
    > difficult to implement more often than not. I was doing it, and
    > looking back, it was an error. E.g. it's easy only if you merely load
    > file contents into memory. I don't believe that happens all that
    > often.
    >
    > What you're saying is equivalent to going to supermarket to see
    > whether it has mayonnaise, then going back for money, then coming back
    > to buy.


    Not at all. There is always some reasonable portion that, when
    allocated fresh, is likely to be granted to the process. And if
    allocating that reasonable portion fails, quit and tell the user he
    can't use that system for that operation without any changes (e.g.
    making some more resources available by either adding spare ones or
    releasing the ones currently used). I am saying that if you come to the
    store to buy mayonnaise, there is (a) no sense to reach into your pocket
    if the mayo is not on the shelf (file is missing), (b) no need to buy
    the lifetime's worth in advance (even if you know what your lifetime is
    going to be). We buy mayo in portions, consume, then come to buy some
    more. Besides, we have replenishment of money, so it's a bad analogy
    anyway.

    In the 1960s, programmers wrote programs solving large systems of linear
    equations (hundreds and even thousands of unknowns) on a system that only
    had 100 cells of memory. How did they manage? What is so different about
    it when it's not 100 but 100 billion cells? There will be problems that
    aren't going to fit in memory whole. Another example: on MS-DOS there was
    a text editor called MultiEdit that managed to deal with files much larger
    than the available 640KB. How did they do it?

    What I am saying is that there is always a way to deal with large
    amounts of data without having to load it all in memory.
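
    A trivial illustration of the principle - a sketch that processes a
    file of any size while keeping only one line in memory:

        #include <fstream>
        #include <iostream>
        #include <string>

        // Counts lines in an arbitrarily large file with bounded memory -
        // the same idea, writ small, that lets an editor page a huge file
        // through a small buffer.
        int main(int argc, char* argv[])
        {
            if (argc < 2)
                return 1;
            std::ifstream in(argv[1]);
            std::string line;             // only one line in core at a time
            unsigned long count = 0;
            while (std::getline(in, line))
                ++count;
            std::cout << count << " lines\n";
        }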

    V
    --
    I do not respond to top-posted replies, please don't ask
     
    Victor Bazarov, Jan 1, 2012
    #8
  9. Goran Guest

    On Jan 1, 3:39 pm, Victor Bazarov <> wrote:
    > On 1/1/2012 2:25 AM, Goran wrote:
    > > On Dec 31 2011, 7:04 pm, Victor Bazarov<>
    > > wrote:
    > >> Neither a word processor nor a video editor should even attempt loading
    > >> anything if it determines that the attempt might cause it to run out of
    > >> memory.  If they can't determine that before opening the file, they
    > >> aren't worth our time to discuss them.

    >
    > > I disagree. There's no guarantee whatsoever that any allocation will
    > > succeed, so theoretically, your idea is unsound. Practically, it's
    > > difficult to implement more often than not. I was doing it, and
    > > looking back, it was an error. E.g. it's easy only if you merely load
    > > file contents into memory. I don't believe that happens all that
    > > often.

    >
    > > What you're saying is equivalent to going to supermarket to see
    > > whether it has mayonnaise, then going back for money, then coming back
    > > to buy.

    >
    > Not at all.  There is always some reasonable portion that, when
    > allocated fresh, is likely to be granted to the process.  And if
    > allocating that reasonable portion fails,


    But in your previous post you said

    >> Neither a word processor nor a video editor should even attempt loading
    >> anything if it determines that the attempt might cause it to run out of
    >> memory.


    ... and now you're saying that you will try to allocate memory.

    That doesn't make good sense.

    > quit and tell the user he
    > can't use that system for that operation without any changes (e.g.
    > making some more resources available by either adding spare ones or
    > releasing the ones currently used).  I am saying that if you come to the
    > store to buy mayonnaise, there is (a) no sense to reach into your pocket
    > if the mayo is not on the shelf (file is missing), (b) no need to buy
    > the lifetime's worth in advance (even if you know what your lifetime is
    > going to be).  We buy mayo in portions, consume, then come to buy some
    > more.  Besides, we have replenishment of money, so it's  a bad analogy
    > anyway.


    I disagree. We're possibly lacking mayonnaise, not money. In my
    analogy, it is assumed that one does have money, just like it is
    assumed that one can _call_ the allocation function. You changed my
    analogy!

    > In the 1960s, programmers wrote programs solving large systems of linear
    > equations (hundreds and even thousands of unknowns) on a system that only
    > had 100 cells of memory.  How did they manage?  What is so different about
    > it when it's not 100 but 100 billion cells?  There will be problems that
    > aren't going to fit in memory whole.  Another example: on MS-DOS there was
    > a text editor called MultiEdit that managed to deal with files much larger
    > than the available 640KB.  How did they do it?
    >
    > What I am saying is that there is always a way to deal with large
    > amounts of data without having to load it all in memory.


    I agree with that. But that's another problem, really, one of design.
    The question here is clearly how to handle the situation where an
    expected resource isn't there __without__ changing the design (not
    right away, at least).

    Goran.
     
    Goran, Jan 1, 2012
    #9
  10. Jorgen Grahn Guest

    On Sat, 2011-12-31, Victor Bazarov wrote:
    > On 12/31/2011 8:56 AM, Datesfat Chicks wrote:
    >> I'm just learning C++ (after being a 19-year C veteran), so forgive
    >> any naive questions or concerns.
    >>
    >> It seems to me that C++ has more inherent reliance on dynamic memory
    >> allocation than C. It doesn't have to be that way in all cases (many
    >> classes would be implemented without dynamic allocation), but it seems
    >> more natural in C++ that dynamic allocation would appear in programs.

    >
    > The reality is simpler: it's easier to do dynamic memory properly in C++
    > (purposefully so, actually), and that's the reason more people resort to
    > it. In fact I've written plenty of code that never allocated anything
    > in the free store (except by any library mechanisms that I had no
    > control over). You don't have to use dynamic memory if you don't need it.


    Another angle on the same answer:

    I tend to use much less explicit dynamic allocation in C++ than in C,
    but that's because the standard containers like std::vector,
    std::string and std::map replace malloc()ed arrays (and the nasty
    fixed-size arrays).

    I believe I do /more/ dynamic allocation in C++ -- if you count the
    ones done for me by the standard containers.
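
    For instance (a trivial sketch of that trade):

        #include <iostream>
        #include <string>
        #include <vector>

        int main()
        {
            // In C this would be a malloc()ed, realloc()ed array of char*
            // with explicit cleanup. The vector allocates dynamically too;
            // it just does it - and the freeing - for you.
            std::vector<std::string> names;
            names.push_back("alpha");
            names.push_back("beta");
            for (std::vector<std::string>::size_type i = 0;
                 i != names.size(); ++i)
                std::cout << names[i] << '\n';
        }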

    /Jorgen

    --
    // Jorgen Grahn <grahn@ Oo o. . .
    \X/ snipabacken.se> O o .
     
    Jorgen Grahn, Jan 1, 2012
    #10
  11. Joe keane Guest

    In article <4eff43fa$0$16659$>,
    Carlo Milanesi <> wrote:
    >So, if your word processor cannot load a huge document, or your video
    >editor cannot load a huge piece of footage, you'd like the application
    >to terminate at once without saving anything :(


    no it could produce a useful message

    a) your hard drive is full
    b) this can't possibly work on your machine
    c) we ran into VM problems, if you close something else, maybe it will work
     
    Joe keane, Jan 1, 2012
    #11
  12. On 01/01/2012 23:04, Joe keane wrote:
    > In article<4eff43fa$0$16659$>,
    > Carlo Milanesi<> wrote:
    >> So, if your word processor cannot load a huge document, or your video
    >> editor cannot load a huge piece of footage, you'd like the application
    >> to terminate at once without saving anything :(

    >
    > no it could produce a useful message
    >
    > a) your hard drive is full
    > b) this can't possibly work on your machine
    > c) we ran into VM problems, if you close something else, maybe it will work


    I was being sarcastic, of course.
    However, those messages are not really informative.
    Microsoft Paint ("pbrush.exe", the bitmap editing utility included in
    Microsoft Windows) does something better.
    I don't have the English version of it, but when I try to create an
    image of 20000 x 20000 pixels on my system (your system may be
    different), it waits a couple of seconds, then displays a message
    similar to "Insufficient memory or resources. Close one or more
    applications and retry", and it does not terminate, keeping the
    previous bitmap open. Obviously it tried to create such a large bitmap
    and failed, because if it had decided up front that such a bitmap was
    too large, it wouldn't have waited that long, and it wouldn't have
    suggested closing other applications.

    --

    Carlo Milanesi
    http://carlomilanesi.wordpress.com/
     
    Carlo Milanesi, Jan 1, 2012
    #12
  13. Ian Collins Guest

    On 01/ 2/12 08:05 AM, Leigh Johnston wrote:
    > On 01/01/2012 14:39, Victor Bazarov wrote:
    >> On 1/1/2012 2:25 AM, Goran wrote:
    >>> On Dec 31 2011, 7:04 pm, Victor Bazarov<>
    >>> wrote:
    >>>> Neither a word processor nor a video editor should even attempt loading
    >>>> anything if it determines that the attempt might cause it to run out of
    >>>> memory. If they can't determine that before opening the file, they
    >>>> aren't worth our time to discuss them.
    >>>
    >>> I disagree. There's no guarantee whatsoever that any allocation will
    >>> succeed, so theoretically, your idea is unsound. Practically, it's
    >>> difficult to implement more often than not. I was doing it, and
    >>> looking back, it was an error. E.g. it's easy only if you merely load
    >>> file contents into memory. I don't believe that happens all that
    >>> often.
    >>>
    >>> What you're saying is equivalent to going to supermarket to see
    >>> whether it has mayonnaise, then going back for money, then coming back
    >>> to buy.

    >>
    >> Not at all. There is always some reasonable portion that, when allocated
    >> fresh, is likely to be granted to the process. And if allocating that
    >> reasonable portion fails, quit and tell the user he can't use that
    >> system for that operation without any changes (e.g. making some more
    >> resources available by either adding spare ones or releasing the ones
    >> currently used). I am saying that if you come to the store to buy
    >> mayonnaise, there is (a) no sense to reach into your pocket if the mayo
    >> is not on the shelf (file is missing), (b) no need to buy the lifetime's
    >> worth in advance (even if you know what your lifetime is going to be).
    >> We buy mayo in portions, consume, then come to buy some more. Besides,
    >> we have replenishment of money, so it's a bad analogy anyway.
    >>
    >> In 1960s programmers wrote solving large systems of linear equations
    >> (hundreds and even thousands) on a system that only had 100 cells of
    >> memory. How did they manage? What is so different about it when it's not
    >> 100 but 100 billion cells? There will be problems that aren't going to
    >> fit in memory whole. Another example: on MS-DOS there was a text editor
    >> called MultiEdit that managed to deal with files much larger than the
    >> available 640KB. How did they do it?
    >>
    >> What I am saying is that there is always a way to deal with large
    >> amounts of data without having to load it all in memory.

    >
    > I agree with Goran; you are talking rubbish; there is no standard way to
    > determine if an allocation *attempt* will succeed or not; to avoid OOM,
    > either pre-allocate whatever you need up front (if memory requirements
    > are bounded) or else handle allocation *failures*.


    What part of "there is always a way to deal with large amounts of data
    without having to load it all in memory" do you consider to be rubbish?
    It's one of the oldest problems in computing.

    --
    Ian Collins
     
    Ian Collins, Jan 2, 2012
    #13
  14. Ian Collins Guest

    On 01/ 3/12 10:02 AM, Leigh Johnston wrote:
    >
    > When one replies to a post one is not necessarily only replying to the
    > very last sentence of a post.


    Then snip appropriately.

    --
    Ian Collins
     
    Ian Collins, Jan 2, 2012
    #14
  15. Ian Collins Guest

    On 01/ 3/12 10:14 AM, Leigh Johnston wrote:
    >
    > Is that your first of many ad hominems of 2012? In that spirit you are
    > a proven homophobic bigoted troll who deserves little respect; why
    > should I waste time teaching you how to use C++ properly? Read a book
    > and clean up your act.


    So why do you keep on giving it the attention it craves?

    --
    Ian Collins
     
    Ian Collins, Jan 2, 2012
    #15
  16. On Monday, January 2, 2012 at 10:32:44 PM UTC+8, Leigh Johnston wrote:
    > On 02/01/2012 11:26, Paul
    > "Leigh Johnston"<> wrote in message
    > > news:...
    > >> On 01/01/2012 19:36, Paul
    > >> "Leigh Johnston"<> wrote in message
    > >>> news:...
    > >>>> On 01/01/2012 14:39, Victor Bazarov wrote:
    > >>>>> On 1/1/2012 2:25 AM, Goran wrote:
    > >>>>>> On Dec 31 2011, 7:04 pm, Victor Bazarov<>
    > >>>>>> wrote:
    > >>>>>>> Neither a word processor nor a video editor should even attempt
    > >>>>>>> loading
    > >>>>>>> anything if it determines that the attempt might cause it to run out
    > >>>>>>> of
    > >>>>>>> memory. If they can't determine that before opening the file, they
    > >>>>>>> aren't worth our time to discuss them.
    > >>>>>>
    > >>>>>> I disagree. There's no guarantee whatsoever that any allocation will
    > >>>>>> succeed, so theoretically, your idea is unsound. Practically, it's
    > >>>>>> difficult to implement more often than not. I was doing it, and
    > >>>>>> looking back, it was an error. E.g. it's easy only if you merely load
    > >>>>>> file contents into memory. I don't believe that happens all that
    > >>>>>> often.
    > >>>>>>
    > >>>>>> What you're saying is equivalent to going to supermarket to see
    > >>>>>> whether it has mayonnaise, then going back for money, then coming back
    > >>>>>> to buy.
    > >>>>>
    > >>>>> Not at all. There is always some reasonable portion that, when
    > >>>>> allocated
    > >>>>> fresh, is likely to be granted to the process. And if allocating that
    > >>>>> reasonable portion fails, quit and tell the user he can't use that
    > >>>>> system for that operation without any changes (e.g. making some more
    > >>>>> resources available by either adding spare ones or releasing the ones
    > >>>>> currently used). I am saying that if you come to the store to buy
    > >>>>> mayonnaise, there is (a) no sense to reach into your pocket if the mayo
    > >>>>> is not on the shelf (file is missing), (b) no need to buy the
    > >>>>> lifetime's
    > >>>>> worth in advance (even if you know what your lifetime is going to be).
    > >>>>> We buy mayo in portions, consume, then come to buy some more. Besides,
    > >>>>> we have replenishment of money, so it's a bad analogy anyway.
    > >>>>>
    > >>>>> In 1960s programmers wrote solving large systems of linear equations
    > >>>>> (hundreds and even thousands) on a system that only had 100 cells of
    > >>>>> memory. How did they manage? What is so different about it when it's
    > >>>>> not
    > >>>>> 100 but 100 billion cells? There will be problems that aren't going to
    > >>>>> fit in memory whole. Another example: on MS-DOS there was a text editor
    > >>>>> called MultiEdit that managed to deal with files much larger than the
    > >>>>> available 640KB. How did they do it?
    > >>>>>
    > >>>>> What I am saying is that there is always a way to deal with large
    > >>>>> amounts of data without having to load it all in memory.
    > >>>>
    > >>>> I agree with Goran; you are talking rubbish; there is no standard way to
    > >>>> determine if an allocation *attempt* will succeed or not; to avoid OOM
    > >>>> either pre-allocate whatever you need up front if memory requirements
    > >>>> are
    > >>>> bounded otherwise you must handle allocation *failures*.
    > >>>>
    > >>> The problem with this discussion is that it's very OS specific.
    > >>>
    > >>> To create a robust memory management system would require good knowledge
    > >>> of
    > >>> the underlying OS and an understanding of all possible allocation
    > >>> failures.
    > >>>
    > >>> I think it is possible that an OS may provide methods for checking
    > >>> available
    > >>> memory.
    > >>>
    > >>> I do agree with Carlo that professional programs should exit gracefully,
    > >>> if
    > >>> possible. True it may not always be possible as there may be times when a
    > >>> complete system crash is inevitable, but a program should at least *try*
    > >>> to
    > >>> save work or recover under low memory conditions.
    > >>
    > >> That is fine if "save work" or "recover" do not allocate memory otherwise
    > >> they may also suffer a memory allocation failure. Aborting the program
    > >> (what you call "crashing") on allocation failure is perfectly acceptable
    > >> for a subset of all possible programs and this is what will happen if you
    > >> don't handle (catch) allocation failure exceptions such as std::bad_alloc.
    > >>

    > >
    > > This is not what I meant by crashing; I said a system crash, that is,
    > > the OS encountering an unrecoverable and critical error. You are
    > > speaking about aborting a program, a program just dying whenever it
    > > encounters an exception.

    >
    > This thread is not about unrecoverable or critical OS errors; it is
    > about memory allocation failures.
    >
    > Even if an OS provided a mechanism to determine available memory, what
    > guarantee would there be that the amount of available memory reported by
    > such a function would be the same as the amount of available memory when
    > the allocation request is actually made?
    >
    > Are you suggesting that one should check available memory before calling
    > any function that may allocate memory? This is of course absurd.
    >
    > Are you suggesting that in C++ one should only use the no-throw version
    > of 'new' and check every allocation request at the site of the request
    > and report an error and "save work" or "recover"? This is of course absurd.
    >
    > HTH.
    >
    > /Leigh


    What are you talking about?
    If you just play with programs that need a small amount of memory from the
    OS, then a lot of error-checking operations can be omitted.

    But those are definitely low-paid programmers' jobs.

    Please don't babysit the ambitious, talented young too much, or they'll
    pick up the bad habit of avoiding exploring and experimenting for further
    professional tricks.
     
    88888 Dihedral, Jan 3, 2012
    #16
  17. Goran Guest

    On Jan 3, 6:45 am, 88888 Dihedral <>
    wrote:
    > > Are you suggesting that in C++ one should only use the no-throw version
    > > of 'new' and check every allocation request at the site of the request
    > > and report an error and "save work" or "recover"?  This is of course absurd.

    >
    > > HTH.

    >
    > > /Leigh

    >
    > What are you talking about?
    > If you just play with programs that need small amount of memory from the OS,
    > then a lot error checking operations can be omitted.


    Leigh is pretty much right on the money when calling systematic use of
    nothrow absurd.

    In normal C++, one uses throwing new and said error-checking is
    indeed, well, not omitted, but replaced by exception handling and
    RAII. You can't reasonably use new(nothrow) without lulling yourself
    into a false sense of security. Or, you can, but you need to renounce
    the STL and other parts of the standard library, Boost and many other
    libraries; basically, you need to use only C libraries. That's doable,
    but, as he said, absurd.
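
    To make the contrast concrete - a sketch, with an illustrative Widget
    type:

        #include <iostream>
        #include <new>

        struct Widget { char buf[64]; };

        int main()
        {
            // Idiomatic C++: throwing new. Failure is handled in one
            // place, and RAII cleans up whatever was built beforehand.
            try {
                Widget* a = new Widget;
                delete a;
            } catch (const std::bad_alloc&) {
                std::cerr << "allocation failed\n";
            }

            // new(std::nothrow) returns a null pointer instead of
            // throwing, so every call site needs its own check - and the
            // library code you call in between still throws regardless.
            Widget* b = new (std::nothrow) Widget;
            if (b == nullptr) {
                std::cerr << "allocation failed\n";
                return 1;
            }
            delete b;
        }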

    Goran.
     
    Goran, Jan 3, 2012
    #17
  18. Ian Collins Guest

    On 01/ 4/12 09:53 AM, Scott Lurndal wrote:
    >
    > Define "use C++ properly". Isn't "properly" rather context-dependent?
    >
    > For example, I've been writing in C++ since 1989. Almost all of it
    > (several hundreds of thousands of lines) has been written commercially for
    > hypervisors (one for 3Leaf, one for SGI) and operating systems for very
    > large systems (256 processors in 1995) (Burroughs/Unisys).
    >
    > In such code, one doesn't have an RTL, and the run-time overhead of exceptions
    > and newer features such as run-time typing is prohibitive.


    Shouldn't that be in the past tense? I'm not aware of any contemporary
    compilers where the overhead of using exceptions is any more than C-style
    return checking. In every comparison I've run since the late 90s,
    exceptions have been faster. Also, nothing that calls itself a C++
    compiler lacks a standard library.

    > Would you argue that
    > such code is not "proper"? I wouldn't. So exceptions aren't always the "proper"
    > answer to "how do I determine if my allocation failed (succeeded)?".


    Nor would I. I have also worked in situations where "C with classes"
    (and even "C with function overloading") was a good fit.

    > The first OS was written using cfront 2.1[*]. There were no exceptions, templates,
    > funky casts, etc. Using classes to encapsulate data and associated methods with
    > data was of enough value that we felt it was the correct choice of implementation
    > language. When 3.0 came out with templates, exceptions etc, we eschewed those
    > features due to the additional cost at run-time (space bloat in the case of templates).


    Unfortunately, memories of those long-past runtime performance issues
    have stuck around.

    --
    Ian Collins
     
    Ian Collins, Jan 3, 2012
    #18
  19. gwowen Guest

    On Jan 2, 9:14 pm, Leigh Johnston <> wrote:

    > Is that your first of many ad hominems of 2012?  In that spirit you are
    > a proven homophobic bigoted troll who deserves little respect; why
    > should I waste time teaching you how to use C++ properly?


    And yet I suspect you will *continue* to waste your time on this, as
    you have proved yourself utterly incapable of not doing so, despite
    the colossal evidence that it's a waste of perfectly good IP packets.












    /**
    * Space above left for Leigh to call me a troll
    */
    --
    Killfiles cleared each calendar year...
     
    gwowen, Jan 4, 2012
    #19
  20. none Guest

    In article <4f00ee2d$0$6836$>,
    Carlo Milanesi <> wrote:
    >On 01/01/2012 23:04, Joe keane wrote:
    >> In article<4eff43fa$0$16659$>,
    >> Carlo Milanesi<> wrote:
    >>> So, if your word processor cannot load a huge document, or your video
    >>> editor cannot load a huge piece of footage, you'd like the application
    >>> to terminate at once without saving anything :(

    >>
    >> no it could produce a useful message
    >>
    >> a) your hard drive is full
    >> b) this can't possibly work on your machine
    >> c) we ran into VM problems, if you close something else, maybe it will work

    >
    >I was being sarcastic, of course.
    >However, those messages are not really informative.
    >Microsoft Paint ("pbrush.exe", the bitmap editing utility included in
    >Microsoft Windows) does something better.
    >I don't have the English version of it, but when I try to create an
    >image of 20000 x 20000 pixels on my system (your system may be
    >different), it waits a couple of seconds, then displays a message
    >similar to "Insufficient memory or resources. Close one or more
    >applications and retry", and it does not terminate, keeping the
    >previous bitmap open. Obviously it tried to create such a large bitmap
    >and failed, because if it had decided up front that such a bitmap was
    >too large, it wouldn't have waited that long, and it wouldn't have
    >suggested closing other applications.


    Well, this can be implemented relatively simply if, like MS-Paint, you
    are willing to preallocate all the memory for a new bitmap. On a
    create-new-bitmap request, simply try{ allocate memory for x*y pixels }
    and catch(std::bad_alloc&).

    Most of the time, the std::bad_alloc will be thrown because the
    allocation attempt was too large, so the error message should work.
    However, it might happen that even attempting to print the error
    message later will fail.
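
    A sketch of that shape (the 4-bytes-per-pixel layout and the names are
    illustrative, and real code would also guard x*y against overflow):

        #include <cstddef>
        #include <new>
        #include <stdexcept>
        #include <vector>

        // Attempt the big allocation up front; on failure, report it
        // without disturbing whatever bitmap is currently open.
        bool try_create_bitmap(std::size_t x, std::size_t y,
                               std::vector<unsigned char>& out)
        {
            try {
                std::vector<unsigned char> pixels(x * y * 4); // may throw
                out.swap(pixels);            // commit only on success
                return true;
            } catch (const std::bad_alloc&) {
                return false;  // "Insufficient memory or resources..."
            } catch (const std::length_error&) {
                return false;  // absurdly large request
            }
        }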
     
    none, Jan 9, 2012
    #20
