Visual C++ and large 2d arrays

ico.bukvic

Hi all,

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):

//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24

float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;

gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];

for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}

The interesting thing is that this code works flawlessly in gcc but in
Visual C++ (2003 .NET) whenever numvoices exceeds ~120, the program
crashes, sometimes reporting unknown exception. The problem is that
this is a code for an external module for another application and uses
additional third-party libs so it is difficult to point fingers at the
culprit. Yet, the fact remains that this crash occurs only on Windows
using Visual C++, while it works flawlessly on OSX (gcc) and Linux
(gcc) using the same libs.

Any ideas as to why this would be the case?

For what it's worth, I also tried substituting these with vectors with
no difference whatsoever.

Any help is most appreciated!

Sincerely,

Ico
 
Greg Herlihy

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):

//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24

float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;

gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];

for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}

The interesting thing is that this code works flawlessly in gcc but in
Visual C++ (2003 .NET) whenever numvoices exceeds ~120, the program
crashes, sometimes reporting unknown exception. The problem is that
this is a code for an external module for another application and uses
additional third-party libs so it is difficult to point fingers at the
culprit. Yet, the fact remains that this crash occurs only on Windows
using Visual C++, while it works flawlessly on OSX (gcc) and Linux
(gcc) using the same libs.

Any ideas as to why this would be the case?


Could it be that on Windows numvoices is a char - and the program is
crashing when numvoices' value exceeds 128 and overflows?

Greg
 
ico.bukvic

Could it be that on Windows numvoices is a char - and the program is
crashing when numvoices' value exceeds 128 and overflows?

Greg

No idea. How can this be tested and/or alleviated?

Ico
 
ico.bukvic

Actually, upon closer inspection, the ceiling appears to shift a lot
and optimizing code does increase it (without code optimization it
hovers around 90, and with full optimization it can get up to 120 or
so).

Ico
 
GeekBoy

Maybe because you have here a single dimensional array and not a 2D one as
you claim?

Define 2D array:

"A 2D array is an array that has both rows and columns. You must use 2 sets
of square brackets when declaring a 2D array and when using it."

e.g.:
int arr[3][3];
arr[0][0] = 5;
 
Gianni Mariani

Hi all,

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):

//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24

float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;

gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];

for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}

The interesting thing is that this code works flawlessly in gcc but in
Visual C++ (2003 .NET) whenever numvoices exceeds ~120, the program
crashes, sometimes reporting unknown exception. The problem is that
this is a code for an external module for another application and uses
additional third-party libs so it is difficult to point fingers at the
culprit. Yet, the fact remains that this crash occurs only on Windows
using Visual C++, while it works flawlessly on OSX (gcc) and Linux
(gcc) using the same libs.

Any ideas as to why this would be the case?

For what it's worth, I also tried substituting these with vectors with
no difference whatsoever.

Any help is most appreciated!


You're probably corrupting memory somewhere. If you are running on a
Linux box, try using valgrind (remember to set the appropriate
environment variable).

The environment variable is GLIBCXX_FORCE_NEW (or GLIBCPP_FORCE_NEW
depending on version of GCC).

For gcc 3.3.2 the variable is GLIBCPP_FORCE_NEW
For gcc 3.4.0 and above it is GLIBCXX_FORCE_NEW

If you're not sure which, check:
$ strings /usr/lib/libstdc++.so.6 | grep FORCE
GLIBCXX_FORCE_NEW


If you're running only on Windows, you have to buy a third-party tool.

Just a note, if you litter your code with news and deletes, you're bound
to have a problem like this show up. Use vectors or a 2D matrix class
and avoid problems like this.
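A minimal sketch of the kind of 2D matrix class Gianni means (the class name and interface here are invented for illustration): one contiguous allocation owned by a vector, so there is no manual new/delete anywhere to get wrong.

```cpp
#include <vector>
#include <cstddef>

// A minimal 2D matrix of floats backed by a single std::vector.
class Matrix2D {
public:
    Matrix2D(std::size_t rows, std::size_t cols)
        : cols_(cols), data_(rows * cols, 0.0f) {}

    // Element access: row-major indexing into the flat buffer.
    float &operator()(std::size_t r, std::size_t c)
    { return data_[r * cols_ + c]; }

    const float &operator()(std::size_t r, std::size_t c) const
    { return data_[r * cols_ + c]; }

private:
    std::size_t cols_;
    std::vector<float> data_;  // the vector handles all cleanup
};
```

Usage is then `Matrix2D gvoiceSpat(numvoices, MAXCHANNELS); gvoiceSpat(i, j) = ...;` with no destructor code needed at all.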
 
James Kanze

Could it be something like the problem described at the following link
even though I am using "new" to allocate memory?

No, or at least not directly.

I'm not sure what your version of VC++ does if new fails because
of insufficient memory. The standard says you get an exception,
and the latest versions of VC++ are conformant, but VC++ 6.0 still
returned a null pointer (when it detected the situation). For
starters, you might try 1) wrapping your code in a try block,
and catching std::bad_alloc, and 2) testing the result of each
new for NULL. Realistically, however: at around 100, you're
allocating under 50 KB; I can't imagine that failing on any
modern machine.

Greg Herlihy's suggestion about char overflowing is interesting
as well. Logically, it should mean a threshold of exactly 128,
however. Still, if you compile using unsigned char (option /J,
I think), you should be able to eliminate it.

Other than that, I don't see anything wrong with it. I just ran
it on my PC (a very small machine, with only 256 MB), and had no
problems with numvoices at 100000; at 1000000, the program
became very slow, as the machine started paging, and I finally
got a bad_alloc exception. (This is with VC++ 2005.)

All I can suggest is that you try to isolate a 15-20 line bit of
code which can be compiled on its own, and manifests the error,
and post that.
 
Lionel B

Hi all,

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):

//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24

float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;

gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];

for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}


Maybe because you have here a single dimensional array and not a 2D one
as you claim?

Define 2D array:

"A 2D array is an array that has both rows and columns. You must use 2
sets of square brackets when declaring a 2D array and when using it."

e.g.:
int arr[3][3];
arr[0][0] = 5;


*Please* don't top-post (rearranged)

The OP described it as a *dynamic* array and presents a pretty standard
implementation. Note the two levels of "new" in the code. "Array"
elements can then be referenced as:

gvoiceSpat[i][j]

etc. just like a "proper" 2D array.

Try the usual; a *minimal* program which demonstrates the problem (my
suspicion would be that the problem exists elsewhere in the code).
 
Sylvester Hesp

Hi all,

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):

//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24

float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;

gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];

for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}

The interesting thing is that this code works flawlessly in gcc but in
Visual C++ (2003 .NET) whenever numvoices exceeds ~120, the program
crashes, sometimes reporting unknown exception. The problem is that
this is a code for an external module for another application and uses
additional third-party libs so it is difficult to point fingers at the
culprit. Yet, the fact remains that this crash occurs only on Windows
using Visual C++, while it works flawlessly on OSX (gcc) and Linux
(gcc) using the same libs.

Any ideas as to why this would be the case?


Based on the reactions earlier in this thread, my first guess would be heap
corruption, which means your problem has to do with something completely
different in your code rather than the allocations of these floats. Some
ways to get your heap corrupted are: writing to the memory pointed to by an
uninitialized pointer, freeing an allocated object more than once, freeing
an object that has never been allocated, writing to an object after it has
been freed, writing beyond the range of allocated buffers, using delete[]
for objects allocated with new and similarly using delete for objects
allocated with new[].

Memory tracking tools like BoundsChecker or Rational Purify can help you
detect these kinds of problems. But since this is a C++ group, by simply
using a std::vector instead of user-allocated arrays you can avoid these
pitfalls altogether.

- Sylvester
 
ico.bukvic

been freed, writing beyond the range of allocated buffers, using delete[]
for objects allocated with new and similarly using delete for objects
allocated with new[].

What do you mean by this? Am I not supposed to use delete at all, even
in the destructor if I created something with new?

The issue is also I tried vector and it had the same effect.

This only manifests itself on VC++ (I use 2003 version). gcc on OSX
and Linux are absolutely rock solid with the code.

I tried increasing heap and stack sizes with no difference whatsoever,
yet it seems as if the "magic limit" beyond which the object fails to
initialize and/or brings the host application down remains more or
less consistent.

Any other ideas?

Ico
 
Victor Bazarov

been freed, writing beyond the range of allocated buffers, using
delete[] for objects allocated with new and similarly using delete
for objects allocated with new[].

What do you mean by this? Am I not supposed to use delete at all, ...

I think the hint you've been given is that you should use 'delete'
for pointers obtained from 'new' and 'delete[]' for pointers you
got from 'new[]', but don't freely mix those up.

V
 
ico.bukvic

I think the hint you've been given is that you should use 'delete'
for pointers obtained from 'new' and 'delete[]' for pointers you
got from 'new[]', but don't freely mix those up.

Thanks for the clarification. I am quite confident that this is
exactly what I've been doing because the aforesaid vars are the only
ones that needed special destructors anyhow.

Ico
 
James Kanze

I've made a 2d dynamic array as follows (this is a snippet so not all
variables are accounted for):
//numvoices are dynamic (1-1000), entered by user
//MAXCHANNELS is currently defined as 24
float **gvoiceSpat;
float **notechannelGain;
float **notechannelGainSpread;
gvoiceSpat = new float *[numvoices];
notechannelGain = new float *[numvoices];
notechannelGainSpread = new float *[numvoices];
for (i = 0; i < numvoices; i++)
{
gvoiceSpat[i] = new float[MAXCHANNELS];
notechannelGain[i] = new float[MAXCHANNELS];
notechannelGainSpread[i] = new float[MAXCHANNELS];
}
The interesting thing is that this code works flawlessly in gcc but in
Visual C++ (2003 .NET) whenever numvoices exceeds ~120, the program
crashes, sometimes reporting unknown exception. The problem is that
this is a code for an external module for another application and uses
additional third-party libs so it is difficult to point fingers at the
culprit. Yet, the fact remains that this crash occurs only on Windows
using Visual C++, while it works flawlessly on OSX (gcc) and Linux
(gcc) using the same libs.
Any ideas as to why this would be the case?

Based on the reactions earlier in this thread, my first guess would be heap
corruption, which means your problem has to do with something completely
different in your code rather than the allocations of these floats. Some
ways to get your heap corrupted are: writing to the memory pointed to by an
uninitialized pointer, freeing an allocated object more than once, freeing
an object that has never been allocated, writing to an object after it has
been freed, writing beyond the range of allocated buffers, using delete[]
for objects allocated with new and similarly using delete for objects
allocated with new[].
Memory tracking tools like BoundsChecker or Rational Purify can help you
detect these kinds of problems.

Valgrind is a good choice for Linux. It's not Purify, but it's
easier to use, and far more affordable.
But since this is a C++ group, by simply
using a std::vector instead of user-allocated arrays you can avoid these
pitfalls altogether.

Not necessarily. In the end, std::vector uses the heap as well,
and if he's corrupted his heap (which is the most likely
situation), then std::vector will get into trouble as well.
 
Ian Collins

James said:
Not necessarily. In the end, std::vector uses the heap as well,
and if he's corrupted his heap (which is the most likely
situation), then std::vector will get into trouble as well.
But if he sticks to the standard library rather than managing his own
allocations, he is less likely to mess up the heap.
 
James Kanze

been freed, writing beyond the range of allocated buffers, using delete[]
for objects allocated with new and similarly using delete for objects
allocated with new[].
What do you mean by this? Am I not supposed to use delete at all, even
in the destructor if I created something with new?

What he means is that if you allocate an array (as you were
doing), you should use delete[], and not delete. Otherwise you
have undefined behavior.
The issue is also I tried vector and it had the same effect.

Sounds like his first suggestion: you've corrupted the free
space arena somehow: writing beyond the end of the allocated
memory, writing to already freed memory, freeing the same memory
twice, or something along those lines.
This only manifests itself on VC++ (I use 2003 version). gcc on OSX
and Linux are absolutely rock solid with the code.

Just chance. The effects of writing beyond the end of allocated
memory, for example, vary enormously depending on the actual
implementation of the allocator, as does freeing already freed
memory.
I tried increasing heap and stack sizes with no difference whatsoever,
yet it seems as if the "magic limit" beyond which the object fails to
initialize and/or brings the host application down remains more or
less consistent.
Any other ideas?

Everything points to a corrupted heap. If you can, try running
the program under Purify. (Purify is pretty much priced out of
the reach of the hobby programmer. But it saves much more than
it costs for a company, so it's more expensive not buying it.)
Otherwise, try valgrind on the Linux platform. It may pick up
the error even if there are no visible symptoms otherwise.
 
James Kanze

But if he sticks to the standard library rather than managing his own
allocations, he is less likely to mess up the heap.

Of course, using the standard library systematically will
eliminate large categories of bugs, and just makes good sense
from a software engineering point of view. But once the heap is
corrupt, the standard library is just as fragile as anything
else. And of course, if he's not using a debugging version of
the standard library... things like "*v.end() = 3.14159" will
still get him into trouble.
 
Sylvester Hesp

Not necessarily. In the end, std::vector uses the heap as well,
and if he's corrupted his heap (which is the most likely
situation), then std::vector will get into trouble as well.

Naturally, but what I meant was that by using the standard library (or any
other isolated, well tested and proven container implementation for that
matter) you can avoid the programming errors I described. Just replacing the
piece of code that reveals the bug with a std::vector implementation obviously
doesn't magically make it all work :)

- Sylvester Hesp
 
280Z28

been freed, writing beyond the range of allocated buffers, using delete[]
for objects allocated with new and similarly using delete for objects
allocated with new[].

What do you mean by this? Am I not supposed to use delete at all, even
in the destructor if I created something with new?

The issue is also I tried vector and it had the same effect.

This only manifests itself on VC++ (I use 2003 version). gcc on OSX
and Linux are absolutely rock solid with the code.

I tried increasing heap and stack sizes with no difference whatsoever,
yet it seems as if the "magic limit" beyond which the object fails to
initialize and/or brings the host application down remains more or
less consistent.

Any other ideas?

Ico

Application Verifier is an extremely powerful, free, and easy to use
tool for bounds checking in Windows. See if it helps find the problem.

http://www.microsoft.com/technet/prodtechnol/windows/appcompatibility/appverifier.mspx
 
