jon wayne
Hi
I was always under the assumption that Linux overcommits memory
by default, but I'm getting unexpected results
when requesting a large amount of memory using new (C++).
That is, say I dynamically allocate a large array p (int *p):

    p = (int *) malloc(N * sizeof(int));    // (1)

and then replace it with

    p = new int[N * sizeof(int)];           // (2)

where N = 1000000000000000.
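
For reference, here is a minimal standalone version of what I'm testing; the printf reporting is just my test harness, not part of the real code:

    #include <cstdio>       // printf
    #include <cstdlib>      // malloc, free
    #include <new>          // std::bad_alloc

    int main()
    {
        // Deliberately huge; note that on a 32-bit target this
        // constant does not fit in size_t and gets truncated.
        const unsigned long long N = 1000000000000000ULL;

        // (1) C-style allocation: returns NULL on failure
        int *p = (int *) malloc(N * sizeof(int));
        std::printf("malloc: %s\n", p ? "succeeded" : "returned NULL");
        std::free(p);   // free(NULL) is a no-op

        // (2) C++-style allocation: throws std::bad_alloc on failure
        try {
            int *q = new int[N * sizeof(int)];
            std::printf("new: succeeded\n");
            delete [] q;
        } catch (std::bad_alloc &) {
            std::printf("new: threw bad_alloc\n");
        }
        return 0;
    }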
The second statement always throws a bad_alloc exception.
Agreed, if you then tried to access p you'd get a SIGSEGV, but why
should the plain allocation itself fail with bad_alloc? C doesn't seem
to mind it; shouldn't C++ behave the same way?
I suspect it could be because C++ uses a different memory management
library. Could someone please clarify?
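
For what it's worth, there is also the nothrow form of new, which is supposed to return NULL on failure just like malloc does instead of throwing. A sketch of that variant, in case it helps narrow down whether only the error reporting differs:

    #include <cstdio>
    #include <new>      // std::nothrow

    int main()
    {
        const unsigned long long N = 1000000000000000ULL;

        // nothrow new: same allocation path, but reports failure
        // C-style (NULL return) instead of throwing bad_alloc
        int *r = new (std::nothrow) int[N * sizeof(int)];
        std::printf("nothrow new: %s\n", r ? "succeeded" : "returned NULL");
        delete [] r;    // deleting a null pointer is a no-op
        return 0;
    }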
(When I strace both versions, I find they both end up
calling mmap().)
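
In case it's relevant, the kernel's overcommit policy can be read from /proc/sys/vm/overcommit_memory; a quick sketch for checking it:

    #include <cstdio>

    // Reads the kernel overcommit policy:
    //   0 = heuristic overcommit (the default), 1 = always overcommit
    int main()
    {
        std::FILE *f = std::fopen("/proc/sys/vm/overcommit_memory", "r");
        if (f) {
            int mode = -1;
            if (std::fscanf(f, "%d", &mode) == 1)
                std::printf("vm.overcommit_memory = %d\n", mode);
            std::fclose(f);
        }
        return 0;
    }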
Environment:
gcc 3.4.3
Linux 2.4.21-40.EL
I'd really appreciate some info on this.
Regards