Speaking of VLAs...
It seems that the standard doesn't specify how VLAs are implemented, although
the most obvious ways are:
1. stack (like the non-portable alloca())
2. malloc() on the heap
In either case you can declare an absurdly large VLA for which there isn't
enough memory. I haven't been able to find anything in the C99 standard
about what happens then.
So what is the 'standard' way to handle an out-of-memory condition for a VLA?
There is no place to 'return NULL' the way malloc() does.
Others have already pointed out that you face the same issue with
automatic arrays of fixed size. Possible consequences of the
undefined behavior range from some sort of trap and an orderly
shutdown by the operating system on platforms with memory protection,
to just plain overwriting something else, such as other data or code
areas, with who knows what results.
I did not read all the replies, but I have not noticed anyone pointing
out that there are platforms out there, Linux among them, that do lazy
allocation. They may return a non-null pointer from malloc() even
though they do not really give the memory to the application. Later,
when the application actually tries to access the memory, the system
starts killing it or other, unrelated programs.
I do not know, because I am not a member of the C standard committee,
but I think it likely that VLAs are there at the insistence of those
who claim that they cannot abide the lack of alloca(), or equivalent.
Their programs cannot possibly survive the overhead of actually
calling malloc().
Say for the moment that there is some real justification here. It
seems to me that this position only makes sense in a program that
performs a large number of small allocations that are needed only for
relatively short lengths of time.
If you are going to allocate an array of 1,000,000 doubles, most
likely just initializing the data in the array to meaningful values
far outweighs the allocation, even if malloc() makes an underlying
system call. If such an array is around for much of the life of a
program, it is actually far more efficient to allocate it statically,
which gives the bonus of zero-initializing all the elements by default.
So it seems to me that if one wanted to use VLAs at all, the most
prudent strategy would be to use them for relatively small blocks of
memory with relatively short life times.
Note also that alloca(), which VLAs supersede in a standard way, has
no more guarantees than VLAs do.
Finally, this could well become a quality-of-implementation (QOI)
issue. If enough people are convinced, on the strength of some vague,
unproven assertions about efficiency, that they must replace all other
dynamic allocation with VLAs, and they put enough pressure on compiler
vendors, most likely the vendors will respond.
Here's what an implementation could do when it encountered the
definition of a VLA. It could call some sort of platform-specific
run-time library function that would somehow determine whether or not
the space is available for the VLA. If not, it could do something
else instead, such as make a system call to increase the available
space, or fall back to calling malloc() in the normal fashion. Of
course this adds at least
some of the overhead that the "efficiency" proponents say that they
can't stand. So maybe the VLA advocates wouldn't like that at all.
The C standard does not recommend against using VLAs at all, or for
objects above a certain size. It doesn't recommend against using
gets(), nor does it recommend against scanf("%s", char_pointer).
This is C, after all, and one of the first principles is that you
don't pay for what you don't use. If you want full protection for
dynamic allocation (except on systems with lazy allocation), then you
call malloc() or its siblings and pay for it. If you don't want to pay
that price, you don't have to: you can use VLAs and take your chances.
As Heinlein was fond of pointing out, "TANSTAAFL".