Does C++ (under Linux) overcommit memory?

jon wayne

Hi

I was always under the assumption that Linux always overcommits memory
by default, but I'm getting unexpected results when requesting a large
amount of memory using new (C++).

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

where N = 1000000000000000

The second statement always generates a bad_alloc exception.
Agreed that if you try to access p it'd give a SIGSEGV, but why
should a plain allocation give a bad_alloc? "C" doesn't seem to mind
it; shouldn't C++ behave the same way?

I suspect it could be because C++ uses a different memory management
library. Could someone please clarify?

(When I do an strace, I find that both of the above versions end up
calling mmap().)
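
For illustration, a minimal sketch of the comparison being described (not from the original post), assuming a 64-bit build so the size arithmetic itself does not overflow. Whether either call succeeds depends on the overcommit setting and the address-space limits of the machine:

#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <new>

int main()
{
    const std::size_t N = 1000000000000000ULL;   // the count from the post

    // (1) malloc: with overcommit the call may "succeed" even though the
    //     memory can never be backed; it may also return NULL outright.
    int* p1 = static_cast<int*>(std::malloc(N * sizeof(int)));
    std::cout << "malloc returned " << static_cast<void*>(p1) << "\n";
    std::free(p1);

    // (2) new[]: the C++ runtime may reject the request up front and
    //     throw std::bad_alloc instead of returning an address.
    try {
        int* p2 = new int[N];
        std::cout << "new[] returned " << static_cast<void*>(p2) << "\n";
        delete[] p2;
    } catch (const std::bad_alloc&) {
        std::cout << "new[] threw std::bad_alloc\n";
    }
    return 0;
}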

ENV -

gcc 3.4.3

Linux 2.4.21-40.EL

I'd really appreciate some info on this,

Regards
 
peter koch

jon said:
Hi

I was always under the assumption that Linux always overcommits memory
by default, but I'm getting unexpected results when requesting a large
amount of memory using new (C++).

That I don't know.

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

That is not a replacement (unless sizeof(int) happens to be 1 on your
platform). The corresponding expression is

p = new int[ N ];
where N = 1000000000000000

The second statement always generates a bad_alloc exception.
Agreed that if you try to access p it'd give a SIGSEGV, but why
should a plain allocation give a bad_alloc? "C" doesn't seem to mind
it; shouldn't C++ behave the same way?

I would normally recommend that you use std::vector. Here, you'd have:

std::vector<int> v(N);

(and use &v[0] whenever you want a pointer to its first element).
In that case you'd get the segmentation violation right away, because
the vector initialises its elements, so the overcommitted memory is
touched immediately.
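
A short sketch of that suggestion (not from the original reply); the count is kept small here so the construction actually completes. With the huge N from the post, the value-initialisation of the elements touches every page at construction time, which is why the failure would show up immediately:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t N = 1000;   // a modest size so construction completes
    std::vector<int> v(N);        // allocates and value-initialises N ints
    int* p = &v[0];               // raw pointer to the first element
    std::cout << *p << " " << v.size() << "\n";
    return 0;
}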
[snip]

/Peter
 
John Harrison

jon said:
Hi

I was always under the assumption that Linux always overcommits memory
by default, but I'm getting unexpected results when requesting a large
amount of memory using new (C++).

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

where N = 1000000000000000

The second statement always generates a bad_alloc exception.
Agreed that if you try to access p it'd give a SIGSEGV, but why
should a plain allocation give a bad_alloc? "C" doesn't seem to mind
it; shouldn't C++ behave the same way?


You don't need the '* sizeof(int)' in C++; new does that for you. Try this:

p = new int[ N ]; // -- 2
I suspect it could be because C++ uses a different memory management
library. Could someone please clarify?

new is different from malloc in another way too. With new, each allocated
object is also default-constructed. That makes no difference for int,
but it might for a class you have written.
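
To illustrate that difference, a small sketch with a hypothetical Widget class (not from the thread):

#include <cstdlib>
#include <iostream>

struct Widget {
    int value;
    Widget() : value(42) {}                 // default constructor
};

int main()
{
    Widget* a = new Widget[3];              // three Widgets, each constructed
    Widget* b = static_cast<Widget*>(std::malloc(3 * sizeof(Widget)));
                                            // raw storage only, no constructors run
    std::cout << a[0].value << "\n";        // prints 42
    delete[] a;                             // runs the destructors as well
    std::free(b);                           // no destructors were ever run on b
    return 0;
}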

john
 
Jacek Dziedzic

jon said:
Hi

I was always under the assumption that Linux always overcommits memory
by default, but I'm getting unexpected results when requesting a large
amount of memory using new (C++).

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

where N = 1000000000000000

I'd say N is not a long constant, so it most probably
wraps to something negative. I believe this is undefined
behaviour right away, and if not, then new is surprised
by you requesting a negative number of elements.
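
A sketch of the kind of wrap-around check implied here (not from the original reply), assuming the count arrives as an unsigned 64-bit value rather than a plain int constant:

#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <limits>

int main()
{
    const unsigned long long N = 1000000000000000ULL;

    // If N * sizeof(int) cannot be represented in size_t, the byte count
    // silently wraps, which is the effect being described above.
    if (N > std::numeric_limits<std::size_t>::max() / sizeof(int)) {
        std::cout << "N * sizeof(int) would overflow size_t here\n";
        return 1;
    }

    int* p = static_cast<int*>(
        std::malloc(static_cast<std::size_t>(N) * sizeof(int)));
    std::cout << (p ? "malloc returned a pointer\n" : "malloc returned NULL\n");
    std::free(p);
    return 0;
}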

HTH,
- J.
 
Dave Rahardja

jon said:
Hi

I was always under the assumption that Linux always overcommits memory
by default, but I'm getting unexpected results when requesting a large
amount of memory using new (C++).

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

where N = 1000000000000000

Jacek Dziedzic said:
I'd say N is not a long constant, so it most probably
wraps to something negative. I believe this is undefined
behaviour right away, and if not, then new is surprised
by you requesting a negative number of elements.

That depends on what std::size_t is on your machine. The number may be
valid on a 64-bit computer.
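
A quick way to see what the platform offers (not from the original reply):

#include <cstddef>
#include <iostream>
#include <limits>

int main()
{
    // On a 32-bit build size_t is typically 4 bytes and the requested
    // count cannot be represented; on a 64-bit build it fits, although
    // the allocation itself may still be impossible.
    std::cout << "sizeof(std::size_t): " << sizeof(std::size_t) << "\n";
    std::cout << "max size_t:          "
              << std::numeric_limits<std::size_t>::max() << "\n";
    std::cout << "requested count:     " << 1000000000000000ULL << "\n";
    return 0;
}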

-dr
 
Michael

For example, say I try to dynamically allocate a large array p (int *p):

p = (int *) malloc(N * sizeof(int)); // -- 1

and replace it by

p = new int[ N * sizeof(int)]; // -- 2

where N = 1000000000000000

The second statement always generates a bad_alloc exception.

As noted, you're asking for roughly four times as much memory in the
second case (N * sizeof(int) elements instead of N).

Also, in the first case, are you checking whether p == NULL after the
malloc? It's quite possible that it's failing too and you're just
not checking.
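
A sketch of that check (not from the original reply):

#include <cstddef>
#include <cstdio>
#include <cstdlib>

int main()
{
    const std::size_t N = 1000000000;        // illustrative count
    int* p = static_cast<int*>(std::malloc(N * sizeof(int)));
    if (p == NULL) {                         // the check the reply asks about
        std::printf("malloc failed\n");
        return 1;
    }
    std::printf("malloc succeeded\n");
    std::free(p);
    return 0;
}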

Michael
 
