forcing new to fail (or throw an exception)

H.S.

Hello,

Here is a little question. I was reading up on the FAQ on pointers:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.6

and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
enough memory, by trying to allocate a huge amount. Here is what I am trying:
int main(){
double *ldP;
ldP = new double [2048*2048*2048];

delete ldP;
return 0;
}

It compiles okay. It runs okay too.

What am I missing here? How can I try to allocate memory huge enough
that new throws an exception?

thanks,
->HS
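
A minimal sketch (not from the original post) of one way to see the exception: simply request far more memory than the process can possibly get and catch what new throws. How large the request has to be depends on the platform and, as discussed later in the thread, on the kernel's overcommit policy.

#include <iostream>
#include <limits>
#include <new>       // std::bad_alloc
#include <cstddef>   // std::size_t

int main() {
    try {
        // Ask for an absurdly large array; operator new[] cannot satisfy
        // the request and should throw std::bad_alloc.
        std::size_t n =
            std::numeric_limits<std::size_t>::max() / sizeof(double) / 2;
        double *p = new double[n];
        delete[] p;                    // not reached in practice
        std::cout << "allocation unexpectedly succeeded\n";
    } catch (const std::bad_alloc &e) {
        std::cout << "new threw: " << e.what() << '\n';
    }
    return 0;
}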
 
Victor Bazarov

H.S. said:
Hello,

Here is a little question. I was reading up on the FAQ on pointers:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.6

and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
enough memory, by trying to allocate a huge amount. Here is what I am trying:
int main(){
double *ldP;
ldP = new double [2048*2048*2048];

Try

size_t s = 2048*2048*2048;
std::cout << "About to allocate " << s << " doubles" << std::endl;
double ldP = new double;
delete ldP;

Should be

delete[] ldP;
return 0;
}

It compiles okay. It runs okay too.

What am I missing here? How can I try to allocate memory huge enough
that new throws an exception?

Hard to say. Your program (due to wrong 'delete') had undefined
behaviour. Try fixing it.

V
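
As a side note on the delete/delete[] point above, a small illustration (not from the thread): memory obtained with new must be released with delete, and memory obtained with new[] with delete[]; mixing the two forms is undefined behaviour.

double *one  = new double;       // single object
double *many = new double[16];   // array of 16 doubles

delete   one;                    // matches new
delete[] many;                   // matches new[]; plain 'delete many;' would be UB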
 
H.S.

Victor said:
H.S. said:
Hello,

Here is a little question. I was reading up on the FAQ on pointers:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.6

and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
enough memory, by trying to allocate a huge amount. Here is what I am trying:
int main(){
double *ldP;
ldP = new double [2048*2048*2048];

Try

size_t s = 2048*2048*2048;

which generates:
$> g++ -o testmem testmem.cc
testmem.cc: In function ‘int main()’:
testmem.cc:5: warning: overflow in implicit constant conversion
$> ./testmem
About to allocate 0 doubles
std::cout << "About to allocate " << s << " doubles" << std::endl;
double ldP = new double;
delete ldP;

Should be

delete[] ldP;


Thanks for the correction.
Hard to say. Your program (due to wrong 'delete') had undefined
behaviour. Try fixing it.

So after removing my mistakes, and correcting the one in your code (sort
of), here is what throws the exception (this is on a Debian Testing
kernel, 2.6.21, since max memory allocation depends on the kernel
options(?)):

#include <iostream>
int main(){
double *ldP;
size_t s = 2048*2048*58;
std::cout << "About to allocate " << s << " doubles" << std::endl;
ldP = new double [s];

delete [] ldP;
return 0;
}

$> g++ -o testmem testmem.cc
$> ./testmem
About to allocate 243269632 doubles
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

thanks,
->HS
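
The 'overflow in implicit constant conversion' warning shown above comes from 2048*2048*2048 being evaluated in int, where 2^33 does not fit, so the product wraps to 0 (hence the "About to allocate 0 doubles" output). A sketch of the usual fix, assuming a 64-bit size_t (on a 32-bit platform the product still cannot be represented):

#include <cstddef>

// Force the multiplication to be done in std::size_t rather than int:
std::size_t s = static_cast<std::size_t>(2048) * 2048 * 2048;  // 2^33 doubles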
 
BobR

H.S. said:
#include <iostream>
int main(){
// > double *ldP;
size_t s = 2048*2048*58;
std::cout << "About to allocate " << s << " doubles" << std::endl;
// > ldP = new double [s];
double *ldP( new double[ s ] );
delete [] ldP;
return 0;
}

$> g++ -o testmem testmem.cc
$> ./testmem
About to allocate 243269632 doubles
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

thanks,
->HS

FYI: When you want a BIG number, try this:

// std::size_t big(-1); // compiler 'warning', but usually works (a)
int bigint(-1);
std::size_t big( bigint );
// #include <limits>
std::size_t big2( std::numeric_limits<std::size_t>::max() );

std::cout<<"size_t big()="<<big<<std::endl;
std::cout<<"size_t big2()="<<big2<<std::endl;

// out: size_t big()=4294967295
// out: size_t big2()=4294967295

Of course the '-1' trick (a) only works on 'unsigned' types
(and may be UB on some systems?).
Use the 'numeric_limits<>' version.
 
Robert Bauck Hamar

BobR said:
FYI: When you want a BIG number, try this:

// std::size_t big(-1); // compiler 'warning', but usually works (a)
int bigint(-1);
std::size_t big( bigint );
// #include <limits>
std::size_t big2( std::numeric_limits<std::size_t>::max() );

std::cout<<"size_t big()="<<big<<std::endl;
std::cout<<"size_t big2()="<<big2<<std::endl;

// out: size_t big()=4294967295
// out: size_t big2()=4294967295

Of course the '-1' trick (a) only works on 'unsigned' types
(and may be UB on some systems?).

No, it's well defined. The result should be the least unsigned integer
congruent to -1 modulo 2**N (where ** means power, and N is the number of
bits in std::size_t), and that would be 2**N - 1.
Use the 'numeric_limits<>' version.

The numeric_limits<> version also works on signed integers. And it doesn't
confuse readers who don't know that std::size_t is unsigned, or who haven't
studied the technicalities of integral conversions.

And: std::size_t is defined in <cstddef> (and some of the other C headers).
It should be included in order to use std::size_t.
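
Putting the last two posts together, a small self-contained sketch (nothing here beyond what was already said):

#include <cstddef>   // std::size_t
#include <limits>    // std::numeric_limits
#include <iostream>

int main() {
    // -1 converted to an unsigned type is well defined and yields the
    // maximum value, but numeric_limits<> says what is meant and does not
    // rely on the reader knowing that std::size_t is unsigned.
    std::size_t big  = static_cast<std::size_t>(-1);
    std::size_t big2 = std::numeric_limits<std::size_t>::max();
    std::cout << "big  = " << big  << '\n'
              << "big2 = " << big2 << '\n';   // the two values are equal
    return 0;
}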
 
James Kanze

Victor said:
H.S. said:
Here is a little question. I was reading up on the FAQ on pointers:
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.6
and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
enough memory, by trying to allocate a huge amount. Here is what I am trying:
int main(){
double *ldP;
ldP = new double [2048*2048*2048];
Try
size_t s = 2048*2048*2048;
which generates:
$> g++ -o testmem testmem.cc
testmem.cc: In function ‘int main()’:
testmem.cc:5: warning: overflow in implicit constant conversion
$> ./testmem
About to allocate 0 doubles

Curious that he didn't get the warning for his code. Or maybe
he didn't notice it. In fact, of course, according to the
standard, that shouldn't be a warning, but an error. (Strictly
speaking: the program is ill formed, and the compiler must issue
a diagnostic. Formally speaking, once the compiler has issued
the diagnostic, it can do whatever it likes, including reformat
your hard drive. From a quality of implementation point of
view, of course, either the program should not compile, or the
compiler should document this as an extension. In this case, at
any rate, I'd definitely post a bug report to g++. Supposing 32-bit
ints, of course.)
std::cout << "About to allocate " << s << " doubles" << std::endl;
double ldP = new double;
delete ldP;
Should be
delete[] ldP;

Thanks for the correction.

Come now. We both know that the wrong delete wasn't the
problem. The problem was the overflow, which would have been
undefined behavior if the expression hadn't been a constant
expression.
So after removing my mistakes, and correcting the one in your code (sort
of), here is what throws the exception (this is on a Debian Testing
kernel, 2.6.21, since max memory allocation depends on the kernel
options(?)):
#include <iostream>
int main(){
double *ldP;
size_t s = 2048*2048*58;
std::cout << "About to allocate " << s << " doubles" << std::endl;
ldP = new double [s];

delete [] ldP;
return 0;
}
$> g++ -o testmem testmem.cc
$> ./testmem
About to allocate 243269632 doubles
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

You still haven't tested much. (I know, because operator new
doesn't work correctly with the default configuration of Linux.)
Try smaller blocks, and then accessing the allocated memory.
For some configurations, you'll get a core dump. (It may be
hard to simulate if you have a lot of memory.)

Basically, operator new can fail for three reasons: there's not
enough space available in the address space of the process (what
you're seeing, probably), the allocation would cause the process
to exceed some artificially imposed system limits (e.g. with
ulimit -m under Linux), or there really isn't enough virtual
memory. In its default configuration, Linux doesn't work in
this last case: operator new (based on what the OS told it) will
return an apparently valid pointer, which will cause a core dump
when dereferenced. (Older versions of AIX had a similar
problem, and Linux can be configured so that it behaves
correctly, too.)

Note that in this last case, at least some configurations of
some versions of Windows will pop up a window, asking you to
stop some other programs in order to make more memory available.
(I think some other configurations will just silently increase
the size of the swap space, and silently continue.)
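
A sketch of the experiment being suggested, assuming a Linux box (the 4 GiB figure is only an example and should be adjusted to exceed the machine's RAM plus swap): allocate a block the kernel is willing to promise, then write to every element. Under the default overcommit policy the failure, if it comes, tends to show up at the writes (the process may simply be killed) rather than as std::bad_alloc from new; setting vm.overcommit_memory to 2 is the usual way to make Linux refuse the allocation up front instead.

#include <iostream>
#include <new>
#include <cstddef>

int main() {
    const std::size_t n = 512u * 1024u * 1024u;   // 512M doubles = 4 GiB
    try {
        double *p = new double[n];        // may "succeed" under overcommit
        for (std::size_t i = 0; i < n; ++i)
            p[i] = 0.0;                   // touching the pages is what really
                                          // commits the memory
        std::cout << "allocated and touched " << n << " doubles\n";
        delete[] p;
    } catch (const std::bad_alloc &) {
        std::cout << "new threw std::bad_alloc\n";
    }
    return 0;
}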
 
BobR

Robert Bauck Hamar said:
BobR said:
// std::size_t big(-1); // compiler 'warning', but usually works (a) [snip]
Of course the '-1' trick (a) only works on 'unsigned' types
(and may be UB on some systems?).

No, it's well defined. The result should be the least unsigned integer
congruent to -1 modulo 2**N (where ** means power, and N is the number of
bits in std::size_t), and that would be 2**N - 1.

Thanks.
 
