bad_alloc after 4GB on 64bit


cman

Hi guys, why does this fail raising bad_alloc
int *v = new int [6000000000];

if this succeeds
int *v = (int *) malloc((unsigned)6000000000)

both on the same machine, same compiler (g++), 64-bit Linux Red Hat
Enterprise 4, no ulimits on the user, enough virtual memory, etc.

in both cases compiled with
g++ source.cpp -o executable
the file is always a C++ .cpp file; the line above is the only one that
differs.

Thanks
 

Stuart Redmann

cman said:
Hi guys, why does this fail raising bad_alloc
int *v = new int [6000000000];

AFAIK, any integer literal will be interpreted as a 32-bit signed integer, even
though the compiler is a 64-bit compiler. I think this is meant to be so for
backward compatibility with 32-bit compilers. If you want an integer literal to
be a 64-bit literal, you have to append LL: 6000000000LL should
do the trick.
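
A minimal sketch of what I mean (illustrative only; whether the allocation then
succeeds is a separate question, since it still requests count * sizeof(int) bytes):

#include <iostream>
#include <new>

int main() {
    long long count = 6000000000LL;   // the LL suffix forces a 64-bit literal
    try {
        int *v = new int[count];      // still requests count * sizeof(int) bytes
        std::cout << (void*)v << std::endl;
        delete[] v;
    } catch (const std::bad_alloc &) {
        std::cout << "bad_alloc" << std::endl;
    }
    return 0;
}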
if this succeeds
int *v = (int *) malloc((unsigned)6000000000)

both on the same machine, same compiler (g++), 64-bit Linux Red Hat
Enterprise 4, no ulimits on the user, enough virtual memory, etc.

in both cases compiled with
g++ source.cpp -o executable
the file is always a C++ .cpp file; the line above is the only one that
differs.

Regards,
Stuart
 

Leo Havmøller

int *v = new int [6000000000];

Request for 24000000000 bytes.
int *v = (int *) malloc((unsigned)6000000000)

Request for 6000000000 bytes.
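
Put differently, new T[n] requests n * sizeof(T) bytes. A quick check of the
numbers (assuming a 4-byte int):

#include <cstdio>

int main() {
    unsigned long long count = 6000000000ULL;
    // new int[count] asks the allocator for count * sizeof(int) bytes
    std::printf("%llu\n", count * (unsigned long long)sizeof(int));  // 24000000000
    return 0;
}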

Leo Havmøller.
 

cman

Leo said:
int *v = new int [6000000000];

Request for 24000000000 bytes.
int *v = (int *) malloc((unsigned)6000000000)

Request for 6000000000 bytes.

OK, this was my mistake; however, this is not the problem.
Here is a new test source code that shows the problem:

#include <iostream>
#include <cstdlib>
int main() {
    //char *v = new char[48000000000]; //FAILS
    char *v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
    std::cout << (void*)v << std::endl;
    return 0;
}

the commented line fails with bad_alloc while the other one succeeds,
that is, returns a nonzero pointer.
 

Tadeusz B. Kopec

Leo said:
int *v = new int [6000000000];

Request for 24000000000 bytes.
int *v = (int *) malloc((unsigned)6000000000)

Request for 6000000000 bytes.

OK, this was my mistake; however, this is not the problem. Here is a new
test source code that shows the problem:

#include <iostream>
#include <cstdlib>
int main() {
//char *v = new char[48000000000]; //FAILS
char *v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
std::cout << (void*)v << std::endl;
return 0;
}

the commented line fails with bad_alloc while the other one succeeds,
that is, returns a nonzero pointer.

I guess it's a feature of the OS, not the language or compiler. This is based on an
article by Herb Sutter (in one of his 'Exceptional C++...' books; the article's
title is something like 'Why not bother with catching bad_alloc').
I mean that the OS might wait to actually allocate the memory until you try to
access it. operator new tries to initialise the allocated memory, while malloc
doesn't. If I am right, calloc should fail.
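
A minimal sketch to test that hypothesis (using the 48000000000-byte figure from
the example above; behaviour depends on the OS overcommit policy, and on Linux
calloc may still succeed via copy-on-write zero pages):

#include <cstdio>
#include <cstdlib>

int main() {
    const size_t big = 48000000000ULL;   // ~48 GB; assumes a 64-bit size_t

    void *m = std::malloc(big);          // may succeed without any pages being backed yet
    std::printf("malloc: %p\n", m);

    void *c = std::calloc(big, 1);       // must deliver zeroed memory
    std::printf("calloc: %p\n", c);

    std::free(m);
    std::free(c);
    return 0;
}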
 

karpov2007

cman said:
#include <iostream>
#include <cstdlib>
int main() {
//char *v = new char[48000000000]; //FAILS
char * v = (char*)malloc((unsigned)48000000000); //SUCCEEDS

Let me clarify: is the size of int equal to 4 bytes?
If yes then:
char *v = new char[48000000000]; //FAILS
Allocate: 48 000 000 000 bytes.

char *v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
Allocate: 755 359 744 bytes!!!

755359744 < 48000000000 :)
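
A small check of that arithmetic (assuming a 32-bit unsigned; the conversion
keeps only the value modulo 2^32):

#include <cstdio>

int main() {
    // 48000000000 mod 2^32 = 48000000000 - 11 * 4294967296 = 755359744
    unsigned truncated = (unsigned)48000000000ULL;
    std::printf("%u\n", truncated);   // prints 755359744 with a 32-bit unsigned
    return 0;
}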

Programming under 64-bit systems in general is fraught with errors.
Welcome:
http://www.viva64.com/articles/20_issues_of_porting_C++_code_on_the_64-bit_platform.html
http://www.viva64.com/articles/Forgotten_problems.html
http://www.viva64.com/articles/Viva64_-_what_is_and_for.html
http://www.viva64.com
 

James Kanze

#include <iostream>
#include <cstdlib>
int main() {
//char *v = new char[48000000000]; //FAILS
char * v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
Let me clarify: is the size of int equal to 4 bytes?

Probably. I think he mentioned Linux, which is generally
I32LP64 in 64-bit mode.
If yes then:
char *v = new char[48000000000]; //FAILS
Allocate: 48 000 000 000 bytes.
char *v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
Allocate: 755 359 744 bytes!!!
755359744 < 48000000000 :)
Programming under 64-bit systems in general is fraught with errors.

No more so than programming under anything else. Porting
non-portable code to a different system is fraught with errors.
By definition. The problem is that a lot of code is
non-portable (e.g. assumes a 32-bit long) for no real reason.

In this case, one has to wonder why the cast to unsigned, which
serves no possible purpose. malloc takes a size_t (which is 8
bytes on an I32LP64 system), just like operator new.
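
For illustration, a sketch of the same call without the narrowing cast (on an
LP64 system the unsuffixed literal is already a 64-bit long, so it converts to
size_t unchanged):

#include <cstdio>
#include <cstdlib>

int main() {
    // no cast: the argument is converted to size_t (8 bytes on LP64)
    char *v = (char *)std::malloc(48000000000ULL);
    std::printf("%p\n", (void *)v);
    std::free(v);
    return 0;
}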
 

Ron Natalie

Stuart said:
cman said:
Hi guys, why does this fail raising bad_alloc
int *v = new int [6000000000];

AFAIK, any integer literal will be interpreted as a 32-bit signed integer,

Well not exactly. An integer literal (without a suffix) will be
represented as either int or long int. Unfortunately, long int
on this implementation is still 32 bits.
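
A quick, purely illustrative way to see which type the compiler actually gives
the unsuffixed literal (on the LP64 box under discussion this should print "long"):

#include <cstdio>

void which(int)       { std::puts("int"); }
void which(long)      { std::puts("long"); }
void which(long long) { std::puts("long long"); }

int main() {
    // the literal's type is the first of int, long int, ... that can hold it
    which(6000000000);
    return 0;
}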
 

karpov2007

And more importantly, sizeof(long int) is still 4.
Are you sure? RHEL4 64-bit uses the LP64 data model.


Red Hat Enterprise Linux 3.0 ES x86_64 on an Intel Xeon EM64T:

#include <cstdio>
int main(int argc, char** argv) {
    printf("pointer: %d\n", (int)sizeof(char*));
    printf("int: %d\n", (int)sizeof(int));
    printf("long: %d\n", (int)sizeof(long));
}

pointer: 8
int: 4
long: 8
 

Alex Buell

Red Hat Enterprise Linux 3.0 ES x86_64 on an Intel Xeon EM64T:

#include <cstdio>
int main(int argc, char** argv) {
    printf("pointer: %d\n", (int)sizeof(char*));
    printf("int: %d\n", (int)sizeof(int));
    printf("long: %d\n", (int)sizeof(long));
}

pointer: 8
int: 4
long: 8

What happens if you use 'long long'? On my 32-bit system it's 64 bits.
 

karpov2007

What happens if you use 'long long'? On my 32-bit system it's 64 bits.

It must be 8 bytes. But I have not understood the point of this question. I
was simply objecting to the claim that in RHEL4 64-bit, sizeof(long int) = 4.
 

James Kanze

Stuart said:
cman wrote:
Hi guys, why does this fail raising bad_alloc
int *v = new int [6000000000];
AFAIK, any integer literal will be interpreted as a 32-bit signed integer,
Well not exactly. An integer literal (without a suffix) will
be represented as either int or long int. Unfortunately, long
int on this implementation is still 32 bits.

Which implementation? The OP mentioned a 64-bit Red Hat
Linux; on my system (using Mandriva instead of Red Hat, but I
don't think that matters), long int is 64 bits.
 

cman

#include <iostream>
#include <cstdlib>
int main() {
//char *v = new char[48000000000]; //FAILS
char * v = (char*)malloc((unsigned)48000000000); //SUCCEEDS

Let me clarify: is the size of int equal to 4 bytes?
If yes then:
char *v = new char[48000000000]; //FAILS
Allocate: 48 000 000 000 bytes.

char *v = (char*)malloc((unsigned)48000000000); //SUCCEEDS
Allocate: 755 359 744 bytes!!!

755359744 < 48000000000 :)

I think you are right. The (unsigned) must have wrapped the value to 32
bits. I don't remember why we put that unsigned there; it came from some of
our previous tests. We thought it would convert the value to unsigned
long, but it actually converted it to unsigned int.
We will fix and repeat the test when the 64-bit machine is free; right now
there is a BIG process taking all the memory running there...
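
For reference, a sketch of a corrected test (no narrowing cast, so both calls
really request the full 48000000000 bytes; whether either succeeds depends on
the machine and its overcommit settings):

#include <iostream>
#include <cstdlib>
#include <new>

int main() {
    // both allocations now request the full 48000000000 bytes
    char *m = (char *)std::malloc((size_t)48000000000ULL);
    std::cout << "malloc: " << (void *)m << std::endl;

    try {
        char *n = new char[48000000000ULL];
        std::cout << "new:    " << (void *)n << std::endl;
        delete[] n;
    } catch (const std::bad_alloc &) {
        std::cout << "new:    bad_alloc" << std::endl;
    }

    std::free(m);
    return 0;
}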

BTW: yes, int is 32 bits there.
Strange thing: shouldn't int be the native and fastest datatype on a
machine? Why isn't it 64 bits, then?

Thanks
 

Victor Bazarov

cman said:
[..talking about 64bit machine..]
BTW: yes, int is 32 bits there.
Strange thing: shouldn't int be the native and fastest datatype on a
machine? Why isn't it 64 bits, then?

Who says that the 64-bit int is the fastest on a 64-bit machine? It
is quite possible that the 32-bit int is faster (and just as native).

V
 

cman

Victor said:
Who says that the 64-bit int is the fastest on a 64-bit machine? It
is quite possible that the 32-bit int is faster (and just as native).

The 16-bit type is certainly faster, then!
 

James Kanze

cman wrote:

[...]
BTW: yes, int is 32 bits there.
Strange thing: shouldn't int be the native and fastest datatype on a
machine? Why isn't it 64 bits, then?

Historical reasons, no doubt. I think you'll find that in
practice, 1, 2, 4 and 8 byte integers are pretty much the same
speed; depending on the application, the smaller ones may even
run faster, because of better locality. Up until 32 bits, the
size of an int automatically grew, without question, because the
bounds were otherwise too small to be really useful. Beyond 32
bits, that's not so clearly the case, and most 64-bit
implementations which derive from earlier 32-bit ones use 32-bit
ints.
 
