Dynamic multidimensional array, deallocation of pointer not malloced..

welch.ryan

Hi all,

Having a problem with addressing large amounts of memory. I have a
simple piece of code here that is meant to allocate a large piece of
memory on a ppc64 machine. The code is:

/*
Test to see what happens when we try to allocate a massively huge
piece of memory.
*/

#include <iostream>
#include <string>
#include <stdexcept>
using namespace std;

int main(int argc, char** argv) {
    cout << "Attemping to allocate.." << endl;

    const int ROWS = 635000;
    const int COLS = 2350;

    // Allocate.
    try {
        int** test = new int*[ROWS];
        for (int i = 0; i < ROWS; i++) {
            test[i] = new int[COLS];
            for (int j = 0; j < COLS; j++) {
                test[i][j] = 0;
            }
        }

        cout << "Allocation succeeded!" << endl;
        cout << "Press a key to deallocate and continue.." << endl;
        string blank;
        getline(cin, blank);

        // Deallocate.
        for (int k = 0; k < ROWS; k++) {
            delete[] test[k];
        }
        delete[] test;

        cout << "Deallocation completed!" << endl;
        cout << "Press a key to terminate.." << endl;
        getline(cin, blank);
    }
    catch (bad_alloc& e) {
        cout << "Allocation failed.." << endl;
    }

    return 0;
}

If I set ROWS and COLS to 5000 and 5000, it works just fine. However,
if I set ROWS to 635000 and COLS to 2350, it will give me the
following error upon deallocation:

HugeMemory.exe(29468) malloc: *** Deallocation of a pointer not
malloced: 0x20afd2000; This could be a double free(), or free() called
with the middle of an allocated block; Try setting environment
variable MallocHelp to see tools to help debug

Note that the allocation step succeeds, and that I only receive this
error after allowing the code to deallocate the array.

Any ideas?

Thanks,
Ryan
 
Ian Collins

[snip: original post quoted]

Looks suspect; has the machine got nigh on 6 GB of virtual memory
available? Make sure your operator new behaves correctly by attempting
to allocate way more than the machine can provide.
 
Branimir Maksimovic

[snip: original post quoted]

There are a lot of allocations there. It looks to me like malloc
internally has an overflow in its pointer arithmetic. Perhaps you are
somehow using a 32-bit malloc on a 64-bit setup?

Try the following and see if it works:

const unsigned rows = 635000;
const unsigned cols = 2350;
int (*p)[cols] = new int[rows][cols];
for (unsigned i = 0; i < rows; ++i)
{
    for (unsigned j = 0; j < cols; ++j)
        p[i][j] = 0;
}
cin.get();
delete[] p;

Greetings, Branimir
 
welch.ryan

[snip: original post quoted]

Looks suspect; has the machine got nigh on 6 GB of virtual memory
available? Make sure your operator new behaves correctly by attempting
to allocate way more than the machine can provide.


Hmm.. the machine has 8 GB of RAM, so that probably isn't the issue.
I'll try maxing it out to see what happens.
 
welch.ryan

[snip: original post quoted]

There are a lot of allocations there. It looks to me like malloc
internally has an overflow in its pointer arithmetic. Perhaps you are
somehow using a 32-bit malloc on a 64-bit setup?

Try the following and see if it works:

const unsigned rows = 635000;
const unsigned cols = 2350;
int (*p)[cols] = new int[rows][cols];
for (unsigned i = 0; i < rows; ++i)
{
    for (unsigned j = 0; j < cols; ++j)
        p[i][j] = 0;
}
cin.get();
delete[] p;

Greetings, Branimir


I can't seem to get that code to compile, it complains about the line
before the for loop. I'm using gcc 4.0.1 for apple/darwin. Any ideas?

I think you're probably right, it has something to do with pointer
arithmetic. I'm just not sure what. The malloc() failure is happening
on the for loop where I'm deallocating each row of the array, I've
figured out that much. Beyond that, I'm not sure.

I've tried the following compiler options too but they don't warn me
of anything:

g++ -o HugeMemory.exe -O3 -mcpu=powerpc64 -arch ppc64 -faltivec -Wall -
Wconversion HugeMemory.cpp
 
Ian Collins

[snip: Branimir's snippet quoted]


I can't seem to get that code to compile, it complains about the line
before the for loop. I'm using gcc 4.0.1 for apple/darwin. Any ideas?

The code is fine, with the exception of an integer overflow warning from
gcc.
I think you're probably right, it has something to do with pointer
arithmetic. I'm just not sure what. The malloc() failure is happening
on the for loop where I'm deallocating each row of the array, I've
figured out that much. Beyond that, I'm not sure.
Update your code to scan the rows for duplicate addresses. If you find
one, something is wrong!
 
Branimir Maksimovic

On May 12, 2:02 am, (e-mail address removed) wrote:

[snip: earlier suggestion quoted]

I can't seem to get that code to compile, it complains about the line
before the for loop. I'm using gcc 4.0.1 for apple/darwin. Any ideas?


What is the error message?

Greetings, Branimir.
 
Erik Wikström

[snip: original post quoted]


I have absolutely no idea, but you could try to make the code a bit more
simple by allocating everything in one large block instead:

#include <iostream>
#include <string>
#include <stdexcept>
using namespace std;

int main() {
    cout << "Attemping to allocate.." << endl;

    const int ROWS = 635000;
    const int COLS = 2350;

    // Allocate.
    try {
        int* test = new int[ROWS * COLS];
        for (int i = 0; i < ROWS * COLS; i++) {
            test[i] = 0;
        }

        cout << "Allocation succeeded!" << endl;
        cout << "Press a key to deallocate and continue.." << endl;

        string blank;
        getline(cin, blank);

        // Deallocate.
        delete[] test;

        cout << "Deallocation completed!" << endl;
        cout << "Press a key to terminate.." << endl;
        getline(cin, blank);
    }
    catch (bad_alloc& e) {
        cout << "Allocation failed.." << endl;
    }

    return 0;
}

Do you still get the same error (or some other)? If you do, there's
probably something wrong with your standard library.
 
welch.ryan

[snip: original post quoted]

There are a lot of allocations there. It looks to me like malloc
internally has an overflow in its pointer arithmetic. Perhaps you are
somehow using a 32-bit malloc on a 64-bit setup?

Try the following and see if it works:

const unsigned rows = 635000;
const unsigned cols = 2350;
int (*p)[cols] = new int[rows][cols];
for (unsigned i = 0; i < rows; ++i)
{
    for (unsigned j = 0; j < cols; ++j)
        p[i][j] = 0;
}
cin.get();
delete[] p;

Greetings, Branimir


This is what happens:

HugeMemory2.cpp:4: error: expected unqualified-id before 'for'
HugeMemory2.cpp:4: error: expected constructor, destructor, or type
conversion before '<' token
HugeMemory2.cpp:4: error: expected unqualified-id before '++' token
HugeMemory2.cpp:9: error: expected constructor, destructor, or type
conversion before '.' token
HugeMemory2.cpp:10: error: expected unqualified-id before 'delete'

I thought maybe it was because there's no 'int' after 'unsigned' but
that didn't help. :(
 
welch.ryan

[snip: original post quoted]

I have absolutely no idea, but you could try to make the code a bit more
simple by allocating everything in one large block instead:

[snip: Erik's single-block version quoted]

Do you still get the same error (or some other)? If you do, there's
probably something wrong with your standard library.


Nope! That code succeeds. However, that code should require around 6
GB of RAM, correct? If I use top to check the memory usage of the
process, I see:

3688 HugeMemory 0.0% 0:39.81 1 13 34 232K 6.51M 1.56G 1.63G

So it's not even coming close.. or am I reading that incorrectly?
Quite strange..
 
Markus Schoder

[snip: Erik's single-block suggestion quoted]


Nope! That code succeeds. However, that code should require around 6 GB
of RAM, correct? If I use top to check the memory usage of the process,
I see:

3688 HugeMemory 0.0% 0:39.81 1 13 34 232K 6.51M 1.56G 1.63G

So it's not even coming close.. or am I reading that incorrectly? Quite
strange..


Could be a 32-bit overflow in top or somewhere else. The reported numbers
are almost exactly 4 GB short of what one would expect.
 
welch.ryan

[snip: earlier exchange quoted]

Could be a 32-bit overflow in top or somewhere else. The reported numbers
are almost exactly 4 GB short of what one would expect.


Interesting.. okay, so let's suppose top is reporting it incorrectly
and that code really does allocate all of that memory successfully.
Then the question is: why does my original code (using a
multidimensional approach) fail, yet allocating it as one large block
seems to work?
 
welch.ryan

[snip: earlier exchange quoted]

Update your code to scan the rows for duplicate addresses. If you find
one, something is wrong!


Okay, I wrote something that I *think* would detect duplicate
addresses. Don't laugh at the implementation..

#include <iostream>
#include <string>
#include <stdexcept>
#include <map>
using namespace std;

int main(int argc, char** argv) {
    cout << "Attempting to allocate.." << endl; // I can spell correctly now..

    const int ROWS = 635000;
    const int COLS = 2350;

    // Keep track of all used addresses.
    map<int*, int> addresses;

    // Allocate.
    try {
        int** test = new int*[ROWS];
        for (int i = 0; i < ROWS; i++) {
            test[i] = new int[COLS];
            addresses[test[i]] += 1;
            for (int j = 0; j < COLS; j++) {
                test[i][j] = 0;
            }
        }

        // Check for duplicate addresses.
        cout << "Checking for duplicate addresses.." << endl;
        map<int*, int>::iterator iter = addresses.begin();
        map<int*, int>::iterator end = addresses.end();
        while (iter != end) {
            if (iter->second > 1) {
                cout << "--> Duplicate address detected: " << iter->first << endl;
            }
            iter++;
        }

        cout << "Allocation succeeded!" << endl;
        cout << "Press a key to deallocate and continue.." << endl;
        string blank;
        getline(cin, blank);

        // Deallocate.
        for (int k = 0; k < ROWS; k++) {
            int** ptr = test + k;
            delete[] ptr;
        }
        delete[] test;

        cout << "Deallocation completed!" << endl;
        cout << "Press a key to terminate.." << endl;
        getline(cin, blank);
    }
    catch (bad_alloc& e) {
        cout << "Allocation failed.." << endl;
    }

    return 0;
}

That code detects no duplicate addresses.

One additional thing I've noticed, I don't know if this helps: the
addresses in these error messages are about 8 apart from each other,
for example:

HugeMemory.exe(537) malloc: *** Deallocation of a pointer not
malloced: 0x2009350; This could be a double free(), or free() called
with the middle of an allocated block; Try setting environment
variable MallocHelp to see tools to help debug
HugeMemory.exe(537) malloc: *** Deallocation of a pointer not
malloced: 0x2009358; This could be a double free(), or free() called
with the middle of an allocated block; Try setting environment
variable MallocHelp to see tools to help debug
HugeMemory.exe(537) malloc: *** Deallocation of a pointer not
malloced: 0x2009360; This could be a double free(), or free() called
with the middle of an allocated block; Try setting environment
variable MallocHelp to see tools to help debug
 
Branimir Maksimovic

Try the following and see if it works:

const unsigned rows = 635000;
const unsigned cols = 2350;
int (*p)[cols] = new int[rows][cols];
for (unsigned i = 0; i < rows; ++i)
{
    for (unsigned j = 0; j < cols; ++j)
        p[i][j] = 0;
}
cin.get();
delete[] p;

Greetings, Branimir

[snip: compile errors quoted]


Strange. Are you sure you entered the code correctly?
I guess that error is triggered by something else in your code,
as I tried it with Comeau online, g++ 3.4.4 and the latest VC++.

Greetings, Branimir.
 
Ian Collins

Interesting.. okay, so let's suppose top is reporting it incorrectly
and that code really does allocate all of that memory successfully.
Then the question is: why does my original code (using a
multidimensional approach) fail, yet allocating it as one large block
seems to work?
The evidence is building a good case for a bug in your allocator; time
to try a tool/platform-specific forum to see if it is a known problem.
 
welch.ryan

The evidence is building a good case for a bug in your allocator; time
to try a tool/platform-specific forum to see if it is a known problem.

I'm starting to think you're right.. any suggestions for such a forum?
 
welch.ryan

[snip: earlier exchange quoted]


Maybe it's a bad line ending or something. I can't find my handy dandy
perl script to fix them..
 
Paul

[snip: original post quoted]


How long does it take for your code to run? I'm asking this because the
code you posted seems very inefficient.

You are invoking "new[]" 635,000 times. An alternative is to make only two
calls to "new[]": one for the row pointers, and a second to allocate the
pool of memory for the int data. Then, in the loop, you point the row
pointers at the right places in the int data pool.

Not only will this more than likely bypass your problem with the allocator,
your code should also see a significant increase in speed, both in
allocation and in deallocation (at least in this area of code). However,
you should test this (but I would be very surprised if there isn't a
significant speed increase).

Here are the internals of your code snippet rewritten to make only two
calls to new[] and then two calls to delete[] to deallocate the memory.

int *pool;
int** test = new int*[ROWS]; // allocate row pointers
pool = new int[ROWS * COLS]; // allocate memory pool for data
for (int i = 0; i < ROWS; i++)
{
    test[i] = pool;
    pool += COLS;
}

// Deallocate.
delete[] test[0]; // deallocate pool
delete[] test;    // deallocate row pointers

- Paul
 
welch.ryan

You're right, that's definitely far more efficient. In my case, though,
I perform one massive allocation up front and the code can then run for
hours to weeks, so the initial allocation time wasn't a real issue for me.

However, the interesting thing is that I don't get that malloc() error
on deletion anymore. I still don't know why it was happening in the
first place, but at least this method works.

Thanks for the help everyone, I really appreciate it!

Cheers,
Ryan
 
