some memory questions about two-d arrays

Gumby

I want to make a two-d array of unsigned ints that
I can change the size of as I need more memory. To
do this I put in the h file a simple pointer to the
memory and row/col variables to retain the current size.

private:
unsigned int* mergeKeyTable;
unsigned int rows;
unsigned int cols;

and then in the cc file, methods setup the amount of memory and
rows, cols record what the dimensions are:

Foo::Foo()
{
........
initializeTable(20,20); // start off with 400 u.ints
}

void Foo::initializeTable(unsigned therows, unsigned thecols)
{
rows = therows;
cols = thecols;
mergeKeyTable = new unsigned int[rows * cols];
memset(mergeKeyTable, 0, sizeof(unsigned int)*rows*cols);
}

void Foo::addTableValue(unsigned rowpos, unsigned colpos, unsigned value)
{
//the table is too small, make a bigger one
if (rowpos > rows || colpos > cols)
{
delete []mergeKeyTable; // problem here?, how can delete
// know the size?
initializeTable(rowpos, colpos); // make the bigger one..
}
// put the value at the right position via row-major pointer arithmetic
*(mergeKeyTable + (cols * rowpos) + colpos) = value;
}

initializeTable() runs from the constructor, and if addTableValue() is called
with a dimension bigger than what fits in the table, it records the new
dimensions and then news a bigger table. 'value' is then put in the table
at the right position.

The question is how I can delete the memory properly. Usually delete is
passed a single object like a class instance or a basic type, or if you did
make an array via new, the size would be right there in the brackets, like this:

unsigned int* mergeKeyTable = new unsigned int[100 * 100];
......
delete [] mergeKeyTable; // OK delete 10000 Uints

then that would be ok, since the compiler can figure out what the
size of mergeKeyTable is because the numbers are right there in the
code. But if new is passed the rows and cols at run time and creates
the array, does delete[] do the right thing?

Mark
 
Simon Saunders

mergeKeyTable = new unsigned int[rows * cols];
memset(mergeKeyTable, 0, sizeof(unsigned int)*rows*cols);
}

void Foo::addTableValue(unsigned rowpos, unsigned colpos, unsigned value)
{
//the table is too small, make a bigger one
if (rowpos > rows || colpos > cols)

You should be using >= here, not >.
{
delete []mergeKeyTable; // problem here?, how can delete
// know the size?

There is no problem here (although you might want to initialize a new
array before deleting the old one in case the memory allocation fails).
When you create an array with new[] the size is stored somewhere so it can
be retrieved when you call delete[]. See
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.14.
Consider using a std::vector instead of a raw array; then you can simply
call its resize member function as necessary, and the fiddly memory
allocation details are all taken care of for you.
 
wittempj

If you combine Simon's suggestion with the use of templates you
do not have to worry about memory management, and your class becomes
much more usable, since the same code can store any datatype you ever
need matrices for.

#include <iostream>
#include <vector>
#include <stdexcept>

using namespace std;

template <class T> class Matrix
{
    typedef vector<T> column;
    vector<column> data;
public:
    Matrix(size_t rows, size_t cols)
    {
        if (cols < 1) throw runtime_error("number of columns should be at least 1");
        if (rows < 1) throw runtime_error("number of rows should be at least 1");

        data.resize(cols);
        for (size_t i = 0; i < cols; ++i)
        {
            column c;
            c.resize(rows);
            data[i] = c;    // assign into slot i, not over the whole vector
        }
    }

    size_t cols() const
    {
        return data.size();       // one entry per column
    }
    size_t rows() const
    {
        return data[0].size();    // each column holds one entry per row
    }
};

int main()
{
    try
    {
        Matrix<int> m(3, 2);
        cout << m.cols() << endl;
        cout << m.rows() << endl;
    }
    catch (exception& e)
    {
        cerr << e.what() << endl;
    }

    return 0;
}
 
Donovan Rebbechi

Foo::Foo()
{
........
initializeTable(20,20); // start off with 400 u.ints

Why 400? Why not make these constructor arguments? Why make
a separate function to do the constructor's work?

Foo::Foo(int r, int c)
  : rows(r), cols(c), mergeKeyTable(new unsigned int[r*c])
{
    std::fill(mergeKeyTable, mergeKeyTable + r*c, 0);
}

void Foo::initializeTable(unsigned therows, unsigned thecols)
{
rows = therows;
cols = thecols;
mergeKeyTable = new unsigned int[rows * cols];
memset(mergeKeyTable, 0, sizeof(unsigned int)*rows*cols);
}

void Foo::addTableValue(unsigned rowpos, unsigned colpos, unsigned value)
{
//the table is too small, make a bigger one
if (rowpos > rows || colpos > cols)
{
delete []mergeKeyTable; // problem here?, how can delete
// know the size?

delete[] knows the size due to compiler magic. The implementation takes care
of it (one way would be to store the size at the address preceding the start
of the array)

But the real problems here are (a) you throw away memory before you allocate
so if new[] fails, you're in trouble, (b) even if you do succeed, you're
throwing away all the data in the table, and (c) you need rowpos>=rows etc
in the above check.

Suggestion: have a separate resize() member function:

// copy data without resizing
// precondition: x.rows <= rows, x.cols <= cols
void Foo::copy(const Foo& x)
{
    assert(x.rows <= rows && x.cols <= cols);
    for (unsigned i = 0; i < x.rows; ++i)
        for (unsigned j = 0; j < x.cols; ++j)
            mergeKeyTable[i + j*rows] = x.mergeKeyTable[i + j*x.rows];
}

// swap data in two tables
void Foo::swap(Foo& x)
{
    std::swap(x.rows, rows);
    std::swap(x.cols, cols);
    std::swap(x.mergeKeyTable, mergeKeyTable);
}

void Foo::resize(unsigned r, unsigned c)
{
    if (r >= rows || c >= cols)
    {
        Foo t(r, c);
        t.copy(*this);   // copy the old data into the new table
        t.swap(*this);   // *this now owns the new memory; t frees the old
    }
}

unsigned int* mergeKeyTable = new unsigned int[100 * 100];
.....
delete [] mergeKeyTable; // OK delete 10000 Uints

then that would be ok, since the compiler can figure out what the
size of mergeKeyTable is because the numbers are right there in the
code.

But that's just a trivial case because you could just use a static
array. For all interesting uses of new, size is not known at compile
time.
But if the new function is passed the rows and cols at run time and
creates the array does the delete[] do the right thing?

Yes.

Cheers,
 
Gumby

Simon said:
mergeKeyTable = new unsigned int[rows * cols];
memset(mergeKeyTable, 0, sizeof(unsigned int)*rows*cols);
}

void Foo::addTableValue(unsigned rowpos, unsigned colpos, unsigned value)
{
//the table is too small, make a bigger one
if (rowpos > rows || colpos > cols)

You should be using >= here, not >.

Ok, that wasn't the real code anyway; I was just slapping something
down quickly to ask the hypothetical question in the newsgroup.
I find a question makes more sense with some context.
{
delete []mergeKeyTable; // problem here?, how can delete
// know the size?

There is no problem here (although you might want to initialize a new
array before deleting the old one in case the memory allocation fails).
When you create an array with new[] the size is stored somewhere so it can
be retrieved when you call delete[]. See
http://www.parashift.com/c++-faq-lite/freestore-mgmt.html#faq-16.14.
Consider using a std::vector instead of a raw array; then you can simply
call its resize member function as necessary, and the fiddly memory
allocation details are all taken care of for you.

Hm, yeah, I know about the STL; I'm using it all the time elsewhere in
the code. I didn't consider using std::vector for this table because I
was doing various unusual sorting things with it, but I suppose the
vector would act just like a raw array, so maybe I could do it that way.


thanks,
Mark
 
Gumby

Donovan said:
Why 400? Why not make these constructor arguments? Why make
a separate function to do the constructor's work?

Because I want to do it that way. In some other places in the code (not
in this posting) I want to be able to command the table to change size
(grow); if I hard-coded it into the constructor that wouldn't be
possible. If I have an initializeTable() method I can use it elsewhere
easily.

Foo::Foo(int r, int c)
  : rows(r), cols(c), mergeKeyTable(new unsigned int[r*c])
{
    std::fill(mergeKeyTable, mergeKeyTable + r*c, 0);
}

void Foo::initializeTable(unsigned therows, unsigned thecols)
{
rows = therows;
cols = thecols;
mergeKeyTable = new unsigned int[rows * cols];
memset(mergeKeyTable, 0, sizeof(unsigned int)*rows*cols);
}

void Foo::addTableValue(unsigned rowpos, unsigned colpos, unsigned value)
{
//the table is too small, make a bigger one
if (rowpos > rows || colpos > cols)
{
delete []mergeKeyTable; // problem here?, how can delete
// know the size?

delete[] knows the size due to compiler magic. The implementation takes
care of it (one way would be to store the size at the address preceding
the start of the array)

But the real problems here are (a) you throw away memory before you
allocate so if new[] fails, you're in trouble, (b) even if you do succeed,
you're throwing away all the data in the table, and (c) you need
rowpos>=rows etc in the above check.

I want to throw away the data each time I finish processing in the
function that calls addTableValue(). I only load a particular set of
data into the table once per function call; after that it's of no use.
That's why I don't care if I toss the table that's too small, because
I'm loading it all with fresh data anyway.
Suggestion: have a separate resize() member function:

// copy data without resizing
// precondition: x.rows <= rows, x.cols <= cols
void Foo::copy(const Foo& x)
{
    assert(x.rows <= rows && x.cols <= cols);
    for (unsigned i = 0; i < x.rows; ++i)
        for (unsigned j = 0; j < x.cols; ++j)
            mergeKeyTable[i + j*rows] = x.mergeKeyTable[i + j*x.rows];
}

// swap data in two tables
void Foo::swap(Foo& x)
{
    std::swap(x.rows, rows);
    std::swap(x.cols, cols);
    std::swap(x.mergeKeyTable, mergeKeyTable);
}

void Foo::resize(unsigned r, unsigned c)
{
    if (r >= rows || c >= cols)
    {
        Foo t(r, c);
        t.copy(*this);   // copy the old data into the new table
        t.swap(*this);   // *this now owns the new memory; t frees the old
    }
}

No point in that, I don't care about saving the data each time (see
above); each time I call the function that uses addTableValue() the
data set is brand new. The mergeKeyTable is kind of a scratch area. I
wanted it in the class itself so that it would be persistent, not
constantly being newed and deleted, and not dependent on function stack
size limits. This way it only needs a new/delete cycle when it's
resized, which should happen very seldom.

Mark
 
Donovan Rebbechi

But the real problems here are (a) you throw away memory before you
allocate so if new[] fails, you're in trouble, (b) even if you do succeed,
you're throwing away all the data in the table, and (c) you need
rowpos>=rows etc in the above check.

I want to throw away the data each time I finish processing in the function
that calls addTableValue(). I only load a particular set

OK, but you do understand point (a), right? Your code is not exception-safe,
because if the memory allocation fails, you not only lose data, you also leave
the object in an inconsistent state (the pointer member no longer addresses
valid memory). So it's good practice to acquire new resources first and
release the old resources only after the acquisition is successful (whether
those resources are memory, locks, filehandles or whatever). Otherwise, when
you get a failure, you have no way to roll back to the original consistent
state, so you're temporarily stuck with a corrupted object.

Cheers,
 
Gumby

Donovan said:
But the real problems here are (a) you throw away memory before you
allocate so if new[] fails, you're in trouble, (b) even if you do
succeed, you're throwing away all the data in the table, and (c) you
need rowpos>=rows etc in the above check.

I want to throw away the data each time I finish processing in the
function that calls addTableValue(). I only load a particular set

OK, but you do understand point (a), right? Your code is not
exception-safe, because if the memory allocation fails, you not only lose
data, you also leave the object in an inconsistent state (the pointer
member no longer addresses valid memory). So it's good practice to
acquire new resources first and release the old resources only after the
acquisition is successful (whether those resources are memory, locks,
filehandles or whatever). Otherwise, when you get a failure, you have no
way to roll back to the original consistent state, so you're temporarily
stuck with a corrupted object.

In the case of what I'm working on, it's not mission-critical but
rather a high-performance concept demo system. If a memory allocation
fails, at most I'd just like to detect that it happened and where, but
it doesn't matter if it crashes. If this system (a quad Dell PowerEdge
server with tons of memory) can't fulfill a piddly few thousand bytes
of memory request, the computer is on the verge of dying anyway. I
would like to do the new before the delete and only move the pointer
after testing that the new memory is good.
 
