bad alloc

Discussion in 'C++' started by Paul, Aug 30, 2011.

  1. Paul

    Paul Guest

    There are numerous C++ examples and code snippets which use STL
    allocators in containers such as STL vector and string.
    It has come to my attention that nobody ever seems to care about
    checking that the allocation has been successful. As we have this
    exception handling, why is it not used very often in practice?
    Surely it should be common practice to "try" all allocations, or do
    people just not care and allow the program to crash if it runs out of
    memory?
    I think if people were more conscious of this error checking, the
    reserve function would be used more often.
    Paul, Aug 30, 2011
    #1

  2. Krice

    Krice Guest

    On Aug 30, 09:26, Paul <> wrote:
    > or do people just not care and allow the program to crash if
    > it runs out of memory?


    Yes, that's what I do, and I think many others do too. It's because
    try-catch syntax is confusing and because 4GB+ computers
    rarely run out of memory. And if they do, how are you going
    to recover from the situation with try-catch if you really
    need to allocate that memory? The only time it's reasonable
    to use try-catch is when something is left out of the
    program (scaling).
    Krice, Aug 30, 2011
    #2

  3. Goran

    Goran Guest

    On Aug 30, 8:26 am, Paul <> wrote:
    > There are numerous C++ examples and code snippets which use STL
    > allocators in containers such as STL vector and string.
    > It has come to my attention that nobody ever seems to care about
    > checking that the allocation has been successful. As we have this
    > exception handling, why is it not used very often in practice?
    > Surely it should be common practice to "try" all allocations, or do
    > people just not care and allow the program to crash if it runs out of
    > memory?


    Good-style exception-aware C++ code does __NOT__ check for allocation
    failures. Instead, it's written in such a manner that said (and other)
    failures don't break it __LOGICALLY__. This is done through careful
    design and observation of the exception safety guarantees
    (http://en.wikipedia.org/wiki/Exception_handling#Exception_safety) of
    the code. A simple example:

    FILE* f = fopen(...);
    if (!f) throw whatever();
    vector<int> v;
    v.push_back(2);
    fclose(f);

    This snippet should have the no-leak (basic) exception safety
    guarantee, but it doesn't (possible resource leak of the FILE* if
    there's an exception between the vector creation and fclose; for
    example, push_back will throw if there's no memory to allocate the
    vector's internal storage).

    To satisfy the no-leak guarantee, the above should be:

    FILE* f = fopen(...);
    if (!f) throw whatever();
    try
    {
      vector<int> v;
      v.push_back(2);
      fclose(f);
    }
    catch(...)
    {
      fclose(f);
      throw;
    }

    The above is pretty horrible, hence one would reach for RAII and use
    fstream in lieu of FILE*, and no try/catch would be needed for the
    code to function correctly.

    Another example:

    container1 c1;
    container2 c2;
    c1.add(stuff);
    c2.add(stuff);

    Suppose that "stuff" needs to be in both c1 and c2, otherwise
    something is wrong. If so, the above needs strong exception safety.
    The correction would be:

    c1.add(stuff);
    try
    {
      c2.add(stuff);
    }
    catch(...)
    {
      c1.remove(stuff);
      throw;
    }

    Again, writing this is annoying, and for this sort of thing there's
    an application of RAII in a trick called "scope guard". Using a
    scope guard, this should turn out as:

    c1.add(stuff);
    ScopeGuard guardc1 = MakeObjGuard(c1, &container1::remove, ByRef(stuff));
    c2.add(stuff);
    guardc1.Dismiss();

    Similar examples can be made for other exception safety levels, but
    IMO the above two happen in the vaast majority of cases.

    > I think if people were more conscious of this error checking, the
    > reserve function would be used more often.


    I am quite convinced that this is the wrong way to reason about
    error checking in exception-aware code. First off, using reserve
    lulls you into a false sense of security. So space is reserved for
    the vector, and elements will be copied into it. What if a copy
    ctor/assignment throws in the process? Code that is sensitive to
    exceptions will still be at risk. Second, it pollutes the code with
    gratuitous snippets no one cares about. There's a better way, see
    above.

    What's so wrong with this reasoning? The idea that one needs to
    make sure that each possible failure mode is looked after. That is
    possible, but __extremely__ tedious. Instead, one should think in
    this manner: here are pieces of code that might throw (that should
    be a vaaaaaast majority of the total code). If they throw, what
    will go wrong with the code (internal state, resources, etc.)?
    (E.g. the FILE* will leak, c1 and c2 won't be in sync...) For those
    things, an appropriate cleanup action should be taken (in the
    vaaast majority of cases, said cleanup action is going to be "apply
    RAII"). Said cleanup action must be a no-throw operation (hence the
    use of destructors with RAII). There should also be a clear idea of
    where the no-throw areas are, and they should be a tiny amount of
    the code (in C++, these are typically primitive type assignments,
    C-function calls and use of a non-throwing swap).
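
    (A common place the non-throwing swap shows up is the copy-and-swap
    idiom for a strong-guarantee assignment; a sketch, with Widget as a
    made-up class:)

    #include <vector>

    class Widget
    {
    public:
        Widget& operator=(const Widget& other)
        {
            Widget tmp(other);   // may throw; *this untouched if it does
            swap(tmp);           // no-throw: the commit point
            return *this;
        }
        void swap(Widget& other)   // must not throw
        {
            data_.swap(other.data_);   // vector::swap doesn't throw
        }
    private:
        std::vector<int> data_;
    };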

    There's a school of thought that says that allocation failure
    should simply terminate everything. This is based on the notion
    that, once there's no memory, the world has ended for the code
    anyhow. This notion is false in a significant number of cases (and
    is not in the spirit of C or C++; if it were, malloc or new would
    terminate()). Why is the notion wrong? Because normally, code goes
    like this: work, work, allocate some resources, work (with those),
    free those, allocate more, work, allocate, work, free, free, etc.
    That is, for many a code path, there's a "hill climb" where
    resources are allocated while working "up", and they are
    deallocated, all or at least a significant part, while going "down"
    (e.g. perhaps only the calculation result is kept allocated). So
    once the code hits a brick wall going up, there's a big chance
    there will be plenty of breathing room once it comes back down (due
    to all that freeing). IOW, there's no __immediate need__ to die. Go
    down, clean up behind you, and you'll be fine. Going back and
    saying "I couldn't do X due to OOM" is kinda better than dying on
    the spot, wouldn't you say?
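
    In code, the "go back down and report" idea might look something
    like this (a sketch; do_huge_calculation is hypothetical and is
    assumed to manage everything it allocates through RAII):

    #include <iostream>
    #include <new>        // std::bad_alloc

    void do_huge_calculation();   // hypothetical: the "hill climb"

    bool run_operation()
    {
        try
        {
            do_huge_calculation();   // allocates going up, frees coming down
            return true;
        }
        catch (const std::bad_alloc&)
        {
            // Unwinding has already freed what the operation allocated,
            // so there is breathing room again; report and carry on.
            std::cerr << "Couldn't complete the operation: out of memory.\n";
            return false;
        }
    }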

    Finally, there's a school of thought that says that allocations
    should not be checked at all. IMO, that school comes from
    programming on systems with overcommit, where, upon OOM conditions,
    an external force (the OOM killer) is brought in to terminate the
    code __not__ at the spot of the allocation, but at the spot where
    the allocated memory is used. Indeed, if this is the observed
    behavior, then any checking for allocation failure is kinda
    useless. However, this behavior is not prescribed by the C or C++
    language and is therefore outside the scope of this newsgroup ;-).

    Goran.
    Goran, Aug 30, 2011
    #3
  4. Paul

    Paul Guest

    On Aug 30, 11:26 am, Goran <> wrote:
    > On Aug 30, 8:26 am, Paul <> wrote:
    >
    > [snip]
    >
    > Again, writing this is annoying, and for this sort of thing there's
    > an application of RAII in a trick called "scope guard".
    I am not familiar with the scope guard objects; I will need to look
    them up.

    > > I think if people were more conscious of this error checking, the
    > > reserve function would be used more often.

    >
    > I am quite convinced that this is the wrong way to reason about
    > error checking in exception-aware code. First off, using reserve
    > lulls you into a false sense of security. [snip]


    Granted, there are other possible exceptions that could be thrown
    and should also be considered. I was using reserve as an example of
    guarding against the program crashing from OOM. I would have
    thought you'd at least inform the user and then close the program
    in a civilised manner, or postpone the current operation, inform
    the user of the memory situation, and handle this without closing
    the program (i.e. by freeing other resources).
    >
    > There's a school of thought that says that allocation failure
    > should simply terminate everything. [snip] IOW, there's no
    > __immediate need__ to die. Go down, clean up behind you, and
    > you'll be fine. Going back and saying "I couldn't do X due to OOM"
    > is kinda better than dying on the spot, wouldn't you say?

    I agree. I think, as a minimum, a professional distribution should
    check for memory allocation failures in a lot of situations,
    especially in a memory-hungry program that may be expected to run
    on a range of PCs with, say, 512MB - 4GB.
    >
    > Finally, there's a school of thought that says that allocations
    > should not be checked at all. IMO, that school comes from
    > programming on systems with overcommit. [snip]

    I'm not so sure about this; as a user I'd hate a program to crash
    with no error msg. An error should be displayed before closing at
    the very least, which is why I wonder why I never see anyone using
    try blocks. I can understand small test programs, but these aside,
    just as programs check that the return from new is not null,
    shouldn't people also check STL containers for allocation errors
    (as a very minimum)?
    Paul, Aug 30, 2011
    #4
  5. Paul

    Paul Guest

    On Aug 30, 10:14 am, Krice <> wrote:
    > On Aug 30, 09:26, Paul <> wrote:
    >
    > > or do people just not care and allow the program to crash if
    > > it runs out of memory?
    >
    > Yes, that's what I do, and I think many others do too. It's because
    > try-catch syntax is confusing and because 4GB+ computers
    > rarely run out of memory. And if they do, how are you going
    > to recover from the situation with try-catch if you really
    > need to allocate that memory? The only time it's reasonable
    > to use try-catch is when something is left out of the
    > program (scaling).


    I think that pre-STL it was pretty much standard practice to check
    for memory allocation failures, for example:

    float* m1 = new float[16];
    if(!m1){
      //output an exit msg.
      exit(1);
    }

    Larger programs would have memory-handling routines that took care
    of all the allocations, so this code was not required all over the
    place.

    However, since the boom of the STL, I think people just assume that
    std::vector handles all that and they don't need to worry about it.
    Perhaps it was often taught in such a way, to promote using the STL
    containers, that the error checking was deliberately ignored to
    make them appear more straightforward to use and simpler to code.
    I guess what I am really wondering is whether many people are not
    making a conscious decision to ignore error checking but are
    instead overlooking it altogether?

    Personally I don't mind the try-catch syntax, and it's a lot better
    than ifs and elses. I think it should be used more as standard
    practice when coding with STL containers and not as an exception
    (no pun) to the norm, which often seems to be the case.

    When I learned C++ I was taught that the return from new should
    always be checked for null. This was put across as an important
    point, but it was usually said that, for simplicity, any further
    examples would skip the error checking. Should a similar point not
    apply to using STL containers and try-catch blocks?
    Paul, Aug 30, 2011
    #5
  6. Paul

    Paul Guest

    On Aug 30, 2:34 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 14:26, Paul wrote:
    > [snip]
    >
    > The only sane remedy for most cases of allocation failure is to
    > terminate the application, which is what will happen with an
    > uncaught exception.

    I don't agree with this.
    A program is allocated a heap size, which is not normally all of
    the system's memory resources. If a program uses all of its heap
    allocation it can request more memory from the OS.
    Program termination is a last resort, and even then the allocation
    exception should still be caught and the program terminated
    properly.


    > Rule of thumb for C++ exception-aware code: liberally sprinkle
    > throws but have very few try/catches.

    What is the point in having throws if you're not going to catch them?
    Paul, Aug 30, 2011
    #6
  7. Goran

    Goran Guest

    On Aug 30, 3:26 pm, Paul <> wrote:
    > On Aug 30, 10:14 am, Krice <> wrote:
    >
    > [snip]
    >
    > I think that pre-STL it was pretty much standard practice to check
    > for memory allocation failures, for example:
    >
    > float* m1 = new float[16];
    > if(!m1){
    >   //output an exit msg.
    >   exit(1);
    > }

    No, that's completely wrong: new float[whatever] should throw, just
    as vector's push_back should throw. Either you get to the "if" with
    m1 pointing to allocated memory and not being NULL, or an exception
    is thrown and you can't possibly assign to m1 nor reach the "if".
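
    That is, with a standard-conforming compiler, the checked version
    would have to look something like this (a sketch; function name
    made up):

    #include <cstdio>
    #include <cstdlib>
    #include <new>        // std::bad_alloc

    void allocate_matrix()
    {
        try
        {
            float* m1 = new float[16];   // throws std::bad_alloc on failure
            // ... use m1 ...
            delete[] m1;
        }
        catch (const std::bad_alloc&)
        {
            // output an exit msg.
            std::fprintf(stderr, "out of memory\n");
            std::exit(1);
        }
    }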

    > [snip]
    >
    > When I learned C++ I was taught that the return from new should
    > always be checked for null.


    You were taught wrong. What compiler was that? Microsoft's, version
    5 or 6, no MFC? Those had nonstandard behavior (operator new
    returned NULL).

    Goran.
    Goran, Aug 30, 2011
    #7
  8. Paul

    Paul Guest

    On Aug 30, 2:30 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 13:54, Paul wrote:
    > [snip]
    >
    > You are not familiar with scope guard objects as you are not
    > familiar with RAII, hence your n00bish question. At least you have
    > admitted a gap in your knowledge for once.
    >
    > [snip]
    >
    > What is so special about reserve? See below.
    >
    > [snip]
    >
    > Why does reserve help? If it is a std::vector of std::string, for
    > example, then reserve will only allocate space for the std::string
    > objects; it will not allocate space for what each std::string
    > element may subsequently allocate. Also, not all containers have
    > reserve.

    Well, reserve would cause the vector to allocate space. Thus you
    only need to put the reserve operation in the try-catch block, and
    if this doesn't throw you know the allocation was successful and
    the program can continue.
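
    For a vector of plain ints, I mean something like this (a sketch):

    #include <iostream>
    #include <new>        // std::bad_alloc
    #include <vector>

    bool prepare(std::vector<int>& v, std::size_t n)
    {
        try
        {
            v.reserve(n);   // all of the allocation happens here...
        }
        catch (const std::bad_alloc&)
        {
            std::cerr << "not enough memory for " << n << " elements\n";
            return false;
        }
        // ...so pushing up to n ints cannot throw bad_alloc now.
        return true;
    }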
    Paul, Aug 30, 2011
    #8
  9. Goran

    Goran Guest

    On Aug 30, 2:54 pm, Paul <> wrote:
    > On Aug 30, 11:26 am, Goran <> wrote:
    > [snip]
    >
    > I'm not so sure about this; as a user I'd hate a program to crash
    > with no error msg.


    Attention, it's the user's choice, not yours. (Well, it's the
    choice of whoever set the operating system up.) The program is
    being shut down by an external force outside your control (not
    entirely, but you should not be fighting the OOM killer ;-)).


    > An error should be displayed before closing at the very
    > least, which is why I wonder why I never see anyone using try
    > blocks.


    If nothing else, main(), and any thread function, should have a
    giant try/catch as the outermost activity. That catch must be a
    no-throw zone that somehow informs the operator about what went
    wrong. Otherwise, yes, there should be a really, really small
    number of try/catch blocks. Good C++ code simply doesn't need them
    (cf. RAII and scope guard in my other answer).
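
    Something like this, as a sketch (run_application is hypothetical
    and stands for all the real work):

    #include <exception>
    #include <iostream>

    void run_application();   // hypothetical

    int main()
    {
        try
        {
            run_application();
            return 0;
        }
        catch (const std::exception& e)
        {
            // No-throw zone: report, then die gracefully.
            std::cerr << "Fatal error: " << e.what() << '\n';
        }
        catch (...)
        {
            std::cerr << "Fatal error: unknown exception\n";
        }
        return 1;
    }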

    > I can understand small test programs, but these aside, just as
    > programs check that the return from new is not null, shouldn't
    > people also check STL containers for allocation errors (as a very
    > minimum)?


    No (cf. my other answer).

    Goran.
    Goran, Aug 30, 2011
    #9
  10. Paul

    Paul Guest

    On Aug 30, 2:54 pm, Goran <> wrote:
    > On Aug 30, 3:26 pm, Paul <> wrote:
    >
    > [snip]
    >
    > You were taught wrong. What compiler was that? Microsoft's,
    > version 5 or 6, no MFC? Those had nonstandard behavior (operator
    > new returned NULL).

    Yes, I was taught a long time ago, and perhaps I am
    overgeneralising between new(nothrow) and malloc, but as I remember
    it, an allocation failure was represented by a return of null. I
    see now that new does not return null on failure unless the nothrow
    version is used.
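
    I.e., if I have it right now, these are the two forms that report
    failure by returning null (a sketch; function name made up):

    #include <cstdlib>   // std::malloc, std::free
    #include <new>       // std::nothrow

    void nothrow_and_malloc()
    {
        // Both return null on failure; plain new throws instead.
        float* a = new (std::nothrow) float[16];
        float* b = static_cast<float*>(std::malloc(16 * sizeof(float)));
        if (!a || !b) { /* handle the failure */ }
        delete[] a;      // deleting/freeing null is harmless
        std::free(b);
    }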
    Paul, Aug 30, 2011
    #10
  11. Paul

    Paul Guest

    On Aug 30, 3:02 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 14:54, Paul wrote:
    > [snip]
    >
    > > A program is allocated a heap size, which is not normally all of
    > > the system's memory resources. If a program uses all of its heap
    > > allocation it can request more memory from the OS.
    >
    > This will either happen automatically or can be done inside an
    > overloaded operator new.


    Either or?
    Are you suggesting that you expect all programs to automatically
    have access to the system's full memory resources?
    Or are you suggesting that you should write your own operator new
    so that all programs automatically request more resources on an
    allocation failure?

    > > Program termination is a last resort, and even then the
    > > allocation exception should still be caught and the program
    > > terminated properly.
    >
    > Define "terminated properly" when nothing can be guaranteed as we
    > have run out of memory.

    Because allocation failed doesn't necessarily mean you have run out
    of memory. The program may still be able to free resources, close
    streams, etc.
    Paul, Aug 30, 2011
    #11
  12. Paul

    Paul Guest

    On Aug 30, 3:31 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 15:20, Paul wrote:
    > [snip]
    >
    > > Either or?
    >
    > Either.
    >
    > > Are you suggesting that you expect all programs to automatically
    > > have access to the system's full memory resources?
    >
    > No, I am suggesting that you don't use a catch to allocate more
    > "system" memory and then "try again", which is what you appeared
    > to be suggesting.
    >
    > > Or are you suggesting that you should write your own operator new
    > > so that all programs automatically request more resources on an
    > > allocation failure?
    >
    > Depends on the OS. See also "set_new_handler".
    >
    > > Because allocation failed doesn't necessarily mean you have run
    > > out of memory. The program may still be able to free resources,
    > > close streams, etc.
    >
    > Such OS resources are normally automatically freed by the OS on
    > process termination. For the few instances where this is not the
    > case, then all you need to do is rely on RAII and have a top-level
    > try/catch in main() or equivalent.

    But you're missing or avoiding the point. You don't need to have
    program termination.

    Notice your opinion is the direct opposite of Goran's. :)

    Perhaps you could slow the program down and make it run so it's
    using fewer resources. Allowing it to just use up all memory and
    then crash doesn't seem like a very good idea to me.
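
    (Looking up set_new_handler: if I understand it, the idea is a hook
    that new calls on failure, so it can try to free something and let
    the allocation be retried. An untested sketch; release_spare_memory
    is hypothetical:)

    #include <cstdlib>
    #include <iostream>
    #include <new>       // std::set_new_handler

    void release_spare_memory();   // hypothetical: frees an emergency pool

    void on_out_of_memory()
    {
        // If we can free something, new will retry the allocation;
        // otherwise give up instead of looping forever.
        static bool retried = false;
        if (retried)
        {
            std::cerr << "out of memory\n";
            std::abort();
        }
        release_spare_memory();
        retried = true;
    }

    int main()
    {
        std::set_new_handler(on_out_of_memory);
        // ... rest of the program ...
    }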
    Paul, Aug 30, 2011
    #12
  13. Paul

    Paul Guest

    On Aug 30, 3:03 pm, Goran <> wrote:
    > On Aug 30, 2:54 pm, Paul <> wrote:
    >
    >
    >
    >
    >
    > > On Aug 30, 11:26 am, Goran <> wrote:

    >
    > > > On Aug 30, 8:26 am, Paul <> wrote:

    >
    > > > > There are numerous C++ examples and code snippets which use STL
    > > > > allocators in containers such as STL vector an string.
    > > > > It has come to my attention that nobody ever seems to care about
    > > > > checking the allocation has been successfull. As we have this
    > > > > exception handling why is it not used very often in practise?.
    > > > > Surely it should be common practise to "try" all allocations, or do
    > > > > people just not care and allow the program to crash if it runs out of
    > > > > memory?

    >
    > > > Good-style exception-aware C++ code does __NOT__ check for allocation
    > > > failures. Instead, it's written in such a manner that said (and other)
    > > > failures don't break it __LOGICALLY__. This is done through a careful
    > > > design and observation of exception safety guarantees (http://
    > > > en.wikipedia.org/wiki/Exception_handling#Exception_safety) of the
    > > > code. Some simple example:

    >
    > > > FILE* f = fopen(...);
    > > > if (!f) throw whatever();
    > > > vector<int> v;
    > > > v.push_back(2);
    > > > fclose(f);

    >
    > > > This snippet should have no-leak (basic) exception safety guarantee,
    > > > but it doesn't (possible resource leak: FILE*, if there's an exception
    > > > between vector creation and fclose. For example, push_back will throw
    > > > if there's no memory to allocate internal vector storage.

    >
    > > > To satisfy no-leak guarantee, the above should be:

    >
    > > > FILE* f = fopen(...);
    > > > if (!f) throw whatever();
    > > > try
    > > > {
    > > >  vector<int> v;
    > > >  v.push_back(2);
    > > >  fclose(f);}

    >
    > > > catch(...)
    > > > {
    > > >  fclose(f);
    > > >  throw;

    >
    > > > }

    >
    > > > The above is pretty horrible, hence one would reach for RAII and use
    > > > fstream in lieu of FILE* and no try/catch would be needed for the code
    > > > to function correctly.

    >
    > > > Another example:

    >
    > > > container1 c1;
    > > > container2 c2;
    > > > c1.add(stuff);
    > > > c2.add(stuff);

    >
    > > > Suppose that "stuff" needs to be in both c1 and c, otherwise something
    > > > is wrong. If so, the above needs strong excetion safety. Correction
    > > > would be:

    >
    > > > c1.add(stuff);
    > > > try
    > > > {
    > > >  c2.add(stuff);}

    >
    > > > catch(...)
    > > > {
    > > >   c1.remove(stuff);
    > > >   throw;

    >
    > > > }

    >
    > > > Again, writing this is annoying, and for this sort of things there's
    > > > an application of RAII in a trick called "scope guard". Using scope
    > > > guard, this should turn out as:

    >
    > > > c1.add(stuff);
    > > > ScopeGuard guardc1 = MakeObjGuard(c1, &container1::remove,
    > > > ByRef(stuff));
    > > > c2.add(stuff);
    > > > guardc1.Dismiss();

    >
    > > > Similar examples can be made for other exception safety levels but IMO
    > > > the above two happen in vaast majority of cases.

    >
    > > I am not familiar with the scopeguard objects I will need to look them
    > > up.

    >
    > > > > I think if people were more conscious of this error checking the
    > > > > reserve function would be used more often.

    >
    > > > I am very convinced that this is a wrong way to go reasoning about
    > > > error checking with exception-aware code. First off, using reserve
    > > > lulls into a false sense of security. So space is reserved for the
    > > > vector, and elements will be copied in it. What if copy ctor/
    > > > assignment throws in the process? Code that is sensitive to exceptions
    > > > will still be at risk. Second, it pollutes the code with gratuitous
    > > > snippets no one cares about. There's a better way, see above.

    >
    > > > What's so wrong with this reasoning? The idea that one needs to make
    > > > sure that each possible failure mode is looked after. This is
    > > > possible, but is __extremely__ tedious. Instead, one should think in
    > > > this manner: here are pieces of code that might throw (that should be
    > > > a vaaaaaast majority of total code). If they throw, what will go wrong
    > > > with the code (internal state, resources etc)? (e.g. FILE* will leak,
    > > > c1 and c2 won't be in sync...) For those things, appropriate cleanup
    > > > action should be taken (in vaaast majority of cases, said cleanup
    > > > action is going to be "apply RAII"). Said cleanup action must be a no-
    > > > throw operation (hence use of destructors w/RAII). There should also
    > > > be a clear idea where no-throw areas are, and they should be a tiny
    > > > amount of the code (in C++, these are typically primitive type
    > > > assignments, C-function calls and use of no-throwing swap).

    >
    > > Granted there are other possible exceptions that could be thrown and
    > > should also be considered. I was using the reserve as an example to
    > > guard against program crashing from OOM. I would have thought at least
    > > inform the user then close the program in a civilised manner or
    > > postone the current operation and inform the user of the memory
    > > situation and handle this without closing the program (i.e: by freeing
    > > other recources)

    >
    > > > There's a school of thought that says that allocation failure should
    > > > simply terminate everything. This is based on the notion that, once
    > > > there's no memory, the world has ended for the code anyhow. This
    > > > notion is false in a significant number of cases (and is not in the
    > > > spirit of C or C++; if it were, malloc or new would terminate()). Why
    > > > is that notion wrong? Because normally, code goes like this: work,
    > > > work, allocate some resources, work (with those), free those,
    > > > allocate more, work, allocate, work, free, free etc. That is, for
    > > > many a code path, there's a "hill climb" where resources are
    > > > allocated while working "up", and they are deallocated, all or at
    > > > least a significant part, while going "down" (e.g. perhaps the
    > > > calculation result is kept allocated). So once code hits a brick wall
    > > > going up, there's a big chance there will be plenty of breathing room
    > > > once it comes down (due to all that freeing). IOW, there's no
    > > > __immediate need__ to die. Go down, clean up behind you and you'll be
    > > > fine. Going back and saying "I couldn't do X due to OOM" is kinda
    > > > better than dying on the spot, wouldn't you say?

    >
    > > I agree. I think, as a minimum, a professional distribution should
    > > check for memory allocation failures in a lot of situations,
    > > especially in a memory-hungry program that may be expected to run on
    > > a range of PCs with, say, 512MB - 4GB.

    >
    > > > Finally, there's a school of thought that says that allocations should
    > > > not be checked at all. IMO, that school comes from programming under
    > > > systems with overcommit, where, upon OOM conditions, external force
    > > > (OOM killer) is brought in to terminate the code __not__ at the spot
    > > > of the allocation, but at the spot allocated memory is used. Indeed,
    > > > if this is the observed behavior, then any checking for allocation
    > > > failure is kinda useless. However, this behavior is not prescribed by
    > > > C or C++ language and is therefore outside the scope of this
    > > > newsgroup ;-).

    >
    > > I'm not so sure about this; as a user I'd hate a program to crash
    > > with no error msg.

    >
    > Attention, it's the user's choice, not yours. (Well, it's a choice of
    > whoever set the operating system up.) The program is being shut down
    > by an external force outside your control (not entirely, but you
    > should not be fighting the OOM killer ;-)).
    >

    It's not outwith your control if you are allowing it to crash because
    an allocation failed.
    You can write the program in such a way that it doesn't crash if an
    allocation fails. OK, any operations that depend on that allocation
    cannot proceed, but the whole program need not crash.
    A small message to the user informing them that the operation failed
    due to lack of memory may be all that is required.
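
    Something as simple as this at the boundary of the operation would do
    (a sketch; doOperation() is an assumed worker that may allocate):

    #include <cstdio>
    #include <new>

    void doOperation(); // may throw std::bad_alloc somewhere inside

    void onUserRequest() {
        try {
            doOperation();
        }
        catch (const std::bad_alloc&) {
            // unwinding has already released the operation's partial state
            std::fputs("The operation failed: not enough memory.\n", stderr);
        }
    }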

    > > An error should be displayed before closing at the very
    > > least, which is why I wonder why I never see anyone using try blocks.

    >
    > If nothing else, main(), and any thread function, should have a giant
    > try/catch as the outermost activity. That catch must be a no-throw
    > zone that somehow informs the operator about what went wrong.
    > Otherwise, yes, there should be a really, really small number of
    > try/catch blocks. Good C++ code simply doesn't need them (cf. RAII and
    > scope guard in my other answer).
    >
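
    A minimal sketch of such an outermost handler (runApplication() stands
    in for the real work):

    #include <cstdio>
    #include <exception>

    void runApplication(); // assumed application entry point

    int main() {
        try {
            runApplication();
            return 0;
        }
        catch (const std::exception& e) {
            std::fputs(e.what(), stderr);   // fputs allocates nothing
            std::fputs("\n", stderr);
        }
        catch (...) {
            std::fputs("unknown fatal error\n", stderr);
        }
        return 1;
    }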

    Well, this scope guard thing seems to require a log-on to read; all the
    links I tried did. But that aside, with standard try-catch blocks there
    is no need to have them around everything to write safe code.
    In reality a programmer should only need to try-catch possible errors
    outwith their control, within reason.
    Obviously the PC could be switched off at the wall, and you aren't
    going to try-catch for that sort of error.
    Paul, Aug 30, 2011
    #13
  14. Paul

    Paul Guest

    On Aug 30, 3:52 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 15:47, Paul wrote:
    > > On Aug 30, 3:31 pm, Leigh Johnston<>  wrote:
    > >> On 30/08/2011 15:20, Paul wrote:

    >
    > >>> On Aug 30, 3:02 pm, Leigh Johnston<>    wrote:
    > >>>> On 30/08/2011 14:54, Paul wrote:

    >
    > >>>>> On Aug 30, 2:34 pm, Leigh Johnston<>      wrote:
    > >>>>>> On 30/08/2011 14:26, Paul wrote:

    >
    > >>>>>>> On Aug 30, 10:14 am, Krice<>        wrote:
    > >>>>>>>> On 30 elo, 09:26, Paul<>        wrote:

    > >>>>>>>> [snip: requote of Krice's reply]

    >
    > >>>>>>> I think that pre-STL it was pretty much standard practise to check for
    > >>>>>>> memory allocation failures, for example:

    >
    > >>>>>>> float* m1 = new float[16];
    > >>>>>>> if(!m1){
    > >>>>>>>     // output an exit msg, then bail out
    > >>>>>>>     exit(1);
    > >>>>>>> }
    > >>>>>>> Larger programs would have memory handling routines that took care of
    > >>>>>>> all the allocations so this code was not required all over the place.

    >
    > >>>>>>> However, since the boom of the STL I think people just assume that
    > >>>>>>> std::vector handles all that and they don't need to worry about it.
    > >>>>>>> Perhaps it was often taught in such a way to promote using the STL
    > >>>>>>> containers that the error checking was deliberately ignored to make
    > >>>>>>> them appear more straightforward to use and simpler to code.
    > >>>>>>> I guess what I am really wondering is if it's the case that many
    > >>>>>>> people are not making a conscious decision to ignore error checking
    > >>>>>>> but are instead overlooking it altogether?

    >
    > >>>>>>> Personally I don't mind the try-catch syntax, and it's a lot better
    > >>>>>>> than ifs and elses. I think it should be used more as standard
    > >>>>>>> practise when coding with STL containers and not as an exception (no
    > >>>>>>> pun) to the norm, which seems to often be the case.

    >
    > >>>>>>> When I learned C++ I was taught that the return from new should always
    > >>>>>>> be checked for null. This was put across as an important point but it
    > >>>>>>> was usually said that for simplicity any further examples would skip
    > >>>>>>> the error checking. Should a similar point not apply for using STL
    > >>>>>>> containers and try-catch blocks?
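
    Worth noting for the quoted snippet: under standard C++ a failed plain
    new throws std::bad_alloc rather than returning null, so a null check
    like the one above only fires with the nothrow form. A minimal sketch:

    #include <new>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        float* m1 = new (std::nothrow) float[16]; // null on failure, no throw
        if (!m1) {
            std::fputs("allocation failed\n", stderr);
            return EXIT_FAILURE;
        }
        delete [] m1;
        return 0;
    }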

    >
    > >>>>>> The only sane remedy for most cases of allocation failure is to
    > >>>>>> terminate the application which is what will happen with an uncaught
    > >>>>>> exception.

    >
    > >>>>> I don't agree with this.

    >
    > >>>> Of course you don't agree with this due to your obvious lack of real
    > >>>> world experience.

    >
    > >>>>> A program is allocated a heap size, which is not normally all of the
    > >>>>> system's memory resources. If a program uses all of its heap
    > >>>>> allocation it can request more memory from the OS.

    >
    > >>>> This will either happen automatically or can be done inside an
    > >>>> overloaded operator new.

    >
    > >>> Either or?

    >
    > >> Either.

    >
    > >>> Are you suggesting that you expect all programs to automatically have
    > >>> access to the system's full memory resources?

    >
    > >> No I am suggesting that you don't use a catch to allocate more "system"
    > >> memory and then "try again" which is what you appeared to be suggesting.

    >
    > >>> Or are you suggesting that you should write your own new function for
    > >>> all programs to automatically request more resources on an allocation
    > >>> failure?

    >
    > >> Depends on the OS.  See also "set_new_handler".
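
    A sketch of the set_new_handler route, using an emergency reserve that
    the handler can release so the failed allocation gets retried (the
    reserve size and names are illustrative):

    #include <new>

    char* g_reserve = 0; // emergency reserve, set up at start-up

    void newHandler() {
        if (g_reserve) {
            delete [] g_reserve; // free the reserve; operator new retries
            g_reserve = 0;
            return;
        }
        throw std::bad_alloc(); // nothing left: fail the allocation normally
    }

    int main() {
        g_reserve = new char[1024 * 1024];
        std::set_new_handler(newHandler);
        // ... rest of the program ...
        return 0;
    }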

    >
    > >>>>> Program termination is a last resort and even then the allocation
    > >>>>> exception should still be caught and the program terminated properly.

    >
    > >>>> Define "terminated properly" when nothing can be guaranteed as we have
    > >>>> run out of memory.

    >
    > >>> Because an allocation failed doesn't necessarily mean you have run out
    > >>> of memory. The program may still be able to free resources, close
    > >>> streams etc.

    >
    > >> Such OS resources are normally automatically freed by the OS on process
    > >> termination.  For the few instances where this is not the case then all
    > >> you need to do is rely on RAII and have a top-level try/catch in main()
    > >> or equivalent.

    >
    > > But you're missing or avoiding the point. You don't need to have
    > > program termination.

    >
    > > Notice your opinion is the direct opposite of Goran's. :)

    >
    > > Perhaps you could slow the program down and make it run so it's using
    > > fewer resources. Allowing it to just use up all memory and then crash
    > > doesn't seem like a very good idea to me.

    >
    > That is just n00bish guff.  I cannot help you with your fundamental
    > problem which is simply a lack of real world experience.
    >

    Bollocks.
    You have spouted bullshit and now that you realise it you are trying to
    get out of the argument with some bullshit about noobish guff.

    It's you that's noobish, saying that the only sane option is to allow a
    program to crash if an allocation fails, and you know it.
    Paul, Aug 30, 2011
    #14
  15. Paul

    Paul Guest

    On Aug 30, 4:04 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 15:57, Paul wrote:
    > > On Aug 30, 3:03 pm, Goran<>  wrote:
    > >> On Aug 30, 2:54 pm, Paul<>  wrote:

    >
    > >>>> [snip: requote of Goran's FILE*/container exception-safety examples]
    >
    > >>>> Again, writing this is annoying, and for this sort of things there's
    > >>>> an application of RAII in a trick called "scope guard". Using scope
    > >>>> guard, this should turn out as:

    >
    > >>>> c1.add(stuff);
    > >>>> ScopeGuard guardc1 = MakeObjGuard(c1, &container1::remove, ByRef(stuff));
    > >>>> c2.add(stuff);
    > >>>> guardc1.Dismiss();

    >
    > >>>> Similar examples can be made for other exception safety levels but IMO
    > >>>> the above two happen in vaast majority of cases.
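
    For reference, a minimal hand-rolled guard doing the same job as the
    quoted ScopeGuard/MakeObjGuard (a sketch, not the library's actual
    implementation; container1 and Stuff are as in the quoted example):

    template <typename Container, typename Item>
    class RemoveGuard {
    public:
        RemoveGuard(Container& c, const Item& i) : c_(c), i_(i), active_(true) {}
        ~RemoveGuard() { if (active_) c_.remove(i_); } // rollback on early exit
        void dismiss() { active_ = false; }            // commit: keep the item
    private:
        Container& c_;
        const Item& i_;
        bool active_;
    };

    // usage mirroring the quoted snippet:
    //   c1.add(stuff);
    //   RemoveGuard<container1, Stuff> guard(c1, stuff);
    //   c2.add(stuff);   // may throw; the guard then removes stuff from c1
    //   guard.dismiss(); // both adds succeeded, cancel the rollback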

    >
    > >>> [snip: requote of the reserve/OOM exchange answered in post #13]

    >
    > >> If nothing else, main(), and any thread function, should have a giant
    > >> try/catch as the outermost activity. That catch must be a no-throw
    > >> zone that somehow informs the operator about what went wrong.
    > >> Otherwise, yes, there should be a really, really small number of
    > >> try/catch blocks. Good C++ code simply doesn't need them (cf. RAII
    > >> and scope guard in my other answer).

    >
    > > Well, this scope guard thing seems to require a log-on to read; all
    > > the links I tried did. But that aside, with standard try-catch blocks
    > > there is

    >
    > "Require a log on"?  You really are full of it.  Try learning about
    > things *before* you make posts about them on Usenet.
    >

    So I can't be arsed registering with some site ATM; WTF is your problem
    with that?
    Paul, Aug 30, 2011
    #15
  16. Paul

    Paul Guest

    On Aug 30, 4:15 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 16:00, Paul wrote:
    >
    > > Bollocks.
    > > You have spouted bullshit and now that you realise it you are trying
    > > to get out of the argument with some bullshit about noobish guff.

    >
    > > It's you that's noobish, saying that the only sane option is to allow
    > > a program to crash if an allocation fails, and you know it.

    >
    > I said the following:
    >
    > "The only sane remedy for most cases of allocation failure is to
    > terminate the application which is what will happen with an uncaught
    > exception."
    >
    > I stand by what I said so I am not trying to "get out of" anything.  As
    > usual you fail and continue to fail with your bullshit: note the words
    > "most cases of" in what I said.
    >

    It's not the only sane remedy. It's the most insane remedy.

    Imagine a program that controlled a robot:
    One process for each limb, one for the head and one for the central
    gyromatic balance system. The robot is in walk mode and you run out of
    memory because a screeching parrot flies past.
    What is the "sane" option your program should take:
    Shut down eye and hearing modules and slow his walking speed?
    Just let the program crash and allow the robot to fall over in a heap?

    If you were working on critical systems such as aircraft controls,
    would you still think that allowing errors to go uncaught is the only
    sane remedy?
    Paul, Aug 30, 2011
    #16
  17. Paul

    Paul Guest

    On Aug 30, 4:20 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 16:15, Paul wrote:
    > >>>>> An error should be displayed before closing at the very
    > >>>>> least, which is why I wonder why I never see anyone using try blocks.

    >
    > >>>> If nothing else, main(), and any thread function, should have a giant
    > >>>> try/catch as the outermost activity. That catch must be a no-throw
    > >>>> zone that somehow informs the operator about what went wrong.
    > >>>> Otherwise, yes, there should be a really, really small number of
    > >>>> try/catch blocks. Good C++ code simply doesn't need them (cf. RAII
    > >>>> and scope guard in my other answer).

    >
    > >>> Well, this scope guard thing seems to require a log-on to read; all
    > >>> the links I tried did. But that aside, with standard try-catch blocks there is

    >
    > >> "Require a log on"?  You really are full of it.  Try learning about
    > >> things *before* you make posts about them on Usenet.

    >
    > > So I can't be arsed registering with some site ATM; WTF is your
    > > problem with that?

    >
    > My problem is the ignorant noise that you continually emit in this
    > newsgroup.  Again: try reading about things *before* you make uninformed
    > posts about those things in newsgroups.
    >

    Why is it ignorant noise to raise a perfectly reasonable discussion
    about error checking allocations?
    Paul, Aug 30, 2011
    #17
  18. Paul

    Paul Guest

    On Aug 30, 4:15 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 16:00, Paul wrote:
    >
    > [snip: same quote as in post #16]
    >

    But what you said is bollocks.

    Imagine a robot control program with a process for each limb, one for
    head functions (eyes, ears etc.) and one for the central gyroscopic
    balance system.
    If the robot is walking along and a screeching, colourful parrot flies
    past, the robot may have a momentary memory overload. What should the
    program do:
    Shut down eyes and ears and decrease walking speed?
    Just crash and allow the robot to fall over in a heap?

    If you worked on an aircraft control system, would you think it's only
    sane to allow a program to crash with an alloc error?

    It's not acceptable to allow a program to wildly allocate memory until
    it crashes, without error checking.
    Paul, Aug 30, 2011
    #18
  19. Paul

    Paul Guest

    On Aug 30, 6:58 pm, Leigh Johnston <> wrote:
    > On 30/08/2011 18:31, Paul wrote:
    >
    > [snip: requote of the exchange in post #16]
    >
    > > It's not the only sane remedy. It's the most insane remedy.

    >
    > In your own deluded, ill-informed opinion.
    >
    > > Imagine a program that controlled a robot:
    > > One process for each limb, one for the head and one for the central
    > > gyromatic balance system. The robot is in walk mode and you run out of
    > > memory because a screeching parrot flies past.
    > > What is the "sane" option your program should take:
    > > Shut down eye and hearing modules and slow his walking speed?
    > > Just let the program crash and allow the robot to fall over in a heap?

    >
    > Are you blind as well as dense?  I said:
    >
    > "The only sane remedy for *most cases of* allocation failure is to
    > terminate the application which is what will happen with an uncaught
    > exception."
    >
    > *most cases of*
    >
    > If I were designing robot control software then I would design it in
    > such a way that certain individual sub-systems allocate all their
    > required memory up-front, so that if some fatal error does occur the
    > robot can safely come to a standstill and then reboot its systems.
    >
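
    A sketch of that up-front style for one sub-system (the names and the
    fixed worst case are illustrative):

    #include <cstddef>
    #include <vector>

    class SensorLog {
    public:
        // reserve worst-case capacity at start-up, before the sub-system
        // is live; a failure here can still be handled safely
        explicit SensorLog(std::size_t maxSamples) { samples_.reserve(maxSamples); }

        // never allocates at run time while within the reserved capacity
        bool record(double s) {
            if (samples_.size() == samples_.capacity())
                return false;      // degrade (drop the sample), don't throw
            samples_.push_back(s); // no reallocation below capacity
            return true;
        }

    private:
        std::vector<double> samples_;
    };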

    But how do you know how much memory the eyes and ears will process on
    the robot's journey? What happens if a screeching hyena leaps out in
    front of it? Does the robot do a detailed analysis of the object's
    movement, colour scheme and the sound it emits, or does it just store
    this information for later processing and treat it as a simple object
    to be avoided?

    It wouldn't be a very good robot if it stopped to reboot, would it?


    >
    >
    > > If you were working on critical systems such as aircraft controls,
    > > would you still think that allowing errors to go uncaught is the only
    > > sane remedy?

    >
    > Critical systems such as aircraft controls will (I assume) allocate all
    > the memory that is needed for all of their operations up-front so the
    > situation of a memory allocation failure during flight will never occur.
    >

    But say they don't know how much memory they will need: consider an
    air traffic control centre which, due to unforeseen circumstances, has
    to deal with hundreds of additional flight paths one day.
    Does the system just crash, or does it prioritise and deal with the
    problem?

    The air traffic controller may have one or two planes flashing
    on-screen with flight path unknown, but I'd say that's a hell of a lot
    better than having a BSOD.
    Paul, Aug 30, 2011
    #19
  20. Dombo

    Dombo Guest

    On 30-Aug-11 11:14, Krice wrote:
    > On 30 elo, 09:26, Paul<> wrote:
    >> or do people just not care and allow the program to crash if
    >> it runs out of memory?

    >
    > Yes, that's what I do and I think many others. It's because
    > try-catch syntax is confusing and because 4Gb+ computers
    > rarely run out of memory.


    What is so confusing about try-catch? Exceptions make dealing with
    failed allocations a lot less tedious than in C.

    > And if they do, how are you going
    > to recover from the situation with try-catch if you really
    > need to allocate that memory?


    That is the real question: what do you do when you have run out of
    memory? The application could inform the user that the operation failed
    because there was not enough memory and keep on running. If a program
    has cached a lot of information, it could flush its caches to free some
    memory and retry the operation. But chances are that it will never get
    to that point. My experience is that in a typical PC environment the
    system tends to become unresponsive, or worse, long before memory
    allocation fails. Before that happens the user has probably restarted
    his PC.
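
    A sketch of that flush-and-retry idea (buildReport() and dropCaches()
    are assumed hooks):

    #include <new>

    void buildReport(); // the operation; may throw std::bad_alloc
    void dropCaches();  // frees droppable cached memory; must not throw

    bool runReport() {
        try { buildReport(); return true; }
        catch (const std::bad_alloc&) {
            dropCaches();                       // make room for a retry
            try { buildReport(); return true; } // one retry with memory freed
            catch (const std::bad_alloc&) {
                return false;                   // still failing: tell the user
            }
        }
    }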
    Dombo, Aug 30, 2011
    #20