Does C++ (under Linux) overcommit memory?

Discussion in 'C++' started by jon wayne, Feb 17, 2007.

  1. jon wayne

    jon wayne Guest

    Hi

    I was always under the assumption that Linux overcommits memory
    by default, but I'm getting unexpected results when requesting a
    large amount of memory using new (C++).

    In the sense, say I try to dynamically allocate a large array p
    (int *p):

    p = (int *) malloc(N * sizeof(int)); // -- 1

    and replace it by

    p = new int[N * sizeof(int)]; // -- 2

    where N = 1000000000000000

    The second statement always throws a bad_alloc exception. Agreed,
    if you tried to access p you'd get a SIGSEGV, but why should a
    plain allocation throw bad_alloc? C doesn't seem to mind it;
    shouldn't C++ behave the same way?

    I suspect it could be because C++ uses a different memory
    management library. Could someone please clarify?

    (When I run strace, I find both of the above versions end up
    calling mmap().)
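
    For reference, here's a minimal sketch of roughly what I'm running
    (the names and the printf reporting are just for illustration):

    #include <cstdlib>
    #include <cstdio>
    #include <new>

    int main()
    {
        // Deliberately absurd request; the literal needs a 64-bit type.
        const unsigned long long N = 1000000000000000ULL;

        // Version 1: C-style. With overcommit enabled, malloc may hand
        // back a non-null pointer even though the memory can't all be
        // backed by RAM + swap.
        int *p1 = (int *) std::malloc(N * sizeof(int));
        std::printf("malloc returned %p\n", (void *) p1);
        std::free(p1);

        // Version 2: C++-style. This is the one that throws for me.
        try {
            int *p2 = new int[N];
            std::printf("new succeeded: %p\n", (void *) p2);
            delete[] p2;
        } catch (std::bad_alloc &) {
            std::printf("new threw bad_alloc\n");
        }
        return 0;
    }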

    Environment:

    gcc 3.4.3

    Linux 2.4.21-40.EL

    I'd really appreciate some info on this,

    Regards
    jon wayne, Feb 17, 2007
    #1

  2. peter koch

    peter koch Guest

    On 17 Feb., 11:10, "jon wayne" <> wrote:
    > Hi
    >
    > I was always under the assumption that Linux overcommits memory
    > by default, but I'm getting unexpected results when requesting a
    > large amount of memory using new (C++).

    That I don't know.
    >
    > In the sense, say I try to dynamically allocate a large array p
    > (int *p):
    >
    > p = (int *) malloc(N * sizeof(int)); // -- 1
    >
    > and replace it by
    >
    > p = new int[N * sizeof(int)]; // -- 2

    That is not an equivalent replacement (unless sizeof(int) happens
    to be 1 on your platform). The corresponding expression is

    p = new int[N];

    >
    > where N = 1000000000000000
    >
    > The second statement always throws a bad_alloc exception. Agreed,
    > if you tried to access p you'd get a SIGSEGV, but why should a
    > plain allocation throw bad_alloc? C doesn't seem to mind it;
    > shouldn't C++ behave the same way?

    I would normally recommend that you use std::vector. Here you'd have:

    std::vector<int> v(N);

    (and use &v[0] whenever you want a pointer to its first element).
    In that case you'd get the segmentation violation right away,
    because the vector initialises its elements, so the overcommitment
    kicks in immediately.
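
    Something like this, say (just a sketch; I've scaled N down to a
    value that can plausibly fit, since your original N is absurdly
    large):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const std::size_t N = 100000000; // ~400 MB of ints; adjust to taste

        // The constructor initialises all N elements to zero, so every
        // page is written to immediately. If the kernel overcommitted,
        // this is where the failure surfaces, rather than at some
        // random later access that is much harder to debug.
        std::vector<int> v(N);

        std::printf("first element: %d\n", v[0]);
        return 0;
    }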
    [snip]

    /Peter
    peter koch, Feb 17, 2007
    #2

  3. jon wayne wrote:
    > Hi
    >
    > I was always under the assumption that Linux overcommits memory
    > by default, but I'm getting unexpected results when requesting a
    > large amount of memory using new (C++).
    >
    > In the sense, say I try to dynamically allocate a large array p
    > (int *p):
    >
    > p = (int *) malloc(N * sizeof(int)); // -- 1
    >
    > and replace it by
    >
    > p = new int[N * sizeof(int)]; // -- 2
    >
    > where N = 1000000000000000
    >
    > The second statement always throws a bad_alloc exception. Agreed,
    > if you tried to access p you'd get a SIGSEGV, but why should a
    > plain allocation throw bad_alloc? C doesn't seem to mind it;
    > shouldn't C++ behave the same way?



    You don't need the '* sizeof(int)' in C++; new does that for you.
    Try this:

    p = new int[N]; // -- 2

    >
    > I suspect it could be because C++ uses a different memory management
    > library - could someone please clarify.


    new is different from malloc in another way too: with new, each
    allocated object is also default-constructed. That makes no
    difference for int, but it might for a class you have written.
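
    For example (Tracer is a made-up class, purely to show the
    difference):

    #include <cstdlib>
    #include <cstdio>

    struct Tracer
    {
        Tracer() { std::printf("constructed\n"); }
    };

    int main()
    {
        // malloc only hands out raw bytes; no constructor runs, so
        // nothing is printed here.
        Tracer *a = (Tracer *) std::malloc(3 * sizeof(Tracer));
        std::free(a);

        // new[] allocates *and* default-constructs each element, so
        // "constructed" is printed three times.
        Tracer *b = new Tracer[3];
        delete[] b;
        return 0;
    }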

    john
    John Harrison, Feb 17, 2007
    #3
  4. jon wayne wrote:
    > Hi
    >
    > I was always under the assumption that Linux overcommits memory
    > by default, but I'm getting unexpected results when requesting a
    > large amount of memory using new (C++).
    >
    > In the sense, say I try to dynamically allocate a large array p
    > (int *p):
    >
    > p = (int *) malloc(N * sizeof(int)); // -- 1
    >
    > and replace it by
    >
    > p = new int[N * sizeof(int)]; // -- 2
    >
    > where N = 1000000000000000


    I'd say N doesn't fit in a plain int or long on your platform, so
    the value most probably wraps to something negative. I believe this
    is undefined behaviour right away, and if not, then new is surprised
    by your requesting a negative number of elements.
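
    You can see the wrap with something like this (a sketch; the exact
    truncated value is implementation-defined, but on a typical 32-bit
    two's-complement int it comes out negative):

    #include <cstdio>

    int main()
    {
        // The literal needs a 64-bit type; stuffing it into a 32-bit
        // int silently truncates the value.
        long long big = 1000000000000000LL;
        int truncated = (int) big;
        std::printf("big = %lld, truncated = %d\n", big, truncated);
        return 0;
    }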

    HTH,
    - J.
    Jacek Dziedzic, Feb 17, 2007
    #4
  5. On Sat, 17 Feb 2007 14:30:49 +0100, Jacek Dziedzic
    <> wrote:

    >jon wayne wrote:
    >> Hi
    >>
    >> I was always under the assumption that Linux overcommits memory
    >> by default, but I'm getting unexpected results when requesting a
    >> large amount of memory using new (C++).
    >>
    >> In the sense, say I try to dynamically allocate a large array p
    >> (int *p):
    >>
    >> p = (int *) malloc(N * sizeof(int)); // -- 1
    >>
    >> and replace it by
    >>
    >> p = new int[N * sizeof(int)]; // -- 2
    >>
    >> where N = 1000000000000000

    >
    > I'd say N doesn't fit in a plain int or long on your platform, so
    > the value most probably wraps to something negative. I believe this
    > is undefined behaviour right away, and if not, then new is surprised
    > by your requesting a negative number of elements.


    Depends on what std::size_t is on your machine. The number may be valid on a
    64-bit computer.
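
    A quick way to check (sketch):

    #include <cstddef>
    #include <cstdio>

    int main()
    {
        // Prints 8 on LP64 Linux, 4 on a 32-bit build. In the 32-bit
        // case the OP's N * sizeof(int) can't be represented in size_t.
        std::printf("sizeof(std::size_t) = %u\n",
                    (unsigned) sizeof(std::size_t));
        return 0;
    }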

    -dr
    Dave Rahardja, Feb 17, 2007
    #5
  6. Michael

    Michael Guest

    > In the sense, say I try to dynamically allocate a large array p
    > (int *p):
    >
    > p = (int *) malloc(N * sizeof(int)); // -- 1
    >
    > and replace it by
    >
    > p = new int[N * sizeof(int)]; // -- 2
    >
    > where N = 1000000000000000
    >
    > The second statement always throws a bad_alloc exception.


    As noted, you're allocating 4 or so times more memory in the second
    case.

    Also, in the first case, are you checking if p == NULL after the
    malloc? It's quite possible that it's failing too, and you're just
    not checking it.
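
    That is, something along these lines (a sketch; the size is only
    illustrative):

    #include <cstdlib>
    #include <cstdio>

    int main()
    {
        const std::size_t N = 1000000000; // large, but representable

        int *p = (int *) std::malloc(N * sizeof(int));
        if (p == NULL) {
            // Without this check, a failed malloc looks like success
            // until the first dereference.
            std::printf("malloc failed\n");
            return 1;
        }
        std::printf("malloc succeeded\n");
        std::free(p);
        return 0;
    }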

    Michael
    Michael, Feb 17, 2007
    #6
