<OT>
I suspect that beyond a certain point, the OS will either fail to load
any more instances of your program or will terminate it soon after
starting it. In any case, the OS (especially Linux) shouldn't crash.
</OT>
<OT>
My recent experience on a 64-bit Linux cluster indicates that Linux
reacts fairly typically [compared to other systems]
- if a new memory allocation request would exceed the maximum system
allocatable memory, then the request will be denied
- this will not directly terminate the process that made the memory
request, but carefully written programs that check the allocation
results will usually terminate themselves, while programs written
without sufficient allocation checking will typically misbehave and
then crash with a segmentation fault (see the C sketch after this list).
- either way, strange things often happen, as even the programs that
check their allocations are often in the middle of something when
memory exhaustion is reached -- and many common utilities do not
output any kind of "memory limit" message as they terminate
- allocations that are backed by the swap file result in memory paging.
Either the Linux paging system is not particularly good or else
my typical application (Maple) has very bad locality of reference,
as the swapping slows down the system enormously
- what is a bit different about our Linux cluster (at least compared to
the SGI IRIX systems I am more accustomed to) is that while heavy
paging is going on, Linux often drops input characters
(though it might be that what it is dropping is network traffic).
This often gives me the impression that the process, or even the
whole system, has died. (Yes, I am certain that the characters
are dropped, not merely buffered for later processing.)
On IRIX each network process would have its own input buffer, so
for small amounts of input, the delay until echo might be quite
perceptible, but the characters would not be dropped completely
[true, there are system network buffers that could get clogged, but
the IRIX scheduler would temporarily give priority to the processes
with network input waiting, thus allowing the system buffers to drain
into the process buffers.]
- a side-effect of our Linux cluster's above tendency to drop
characters while swapping is that using ^Z to suspend a process might
not work, as the ^Z itself might well be dropped. If you keep trying,
you might eventually catch the process able to accept characters,
and then the ^Z will work as expected. The ^Z is not disabled or
ignored -- as per the above, if the system is heavily paging,
the ^Z might simply never arrive to be processed.
- suspending a large, heavily swapping Linux process suspends the
swapping-in for that process, but does not make any more total system
memory available. Further memory requests (e.g., for running 'top'
to figure out what's happening on the system) may have to be
backed by the swap file instead of by primary memory -- but those
new requests often have much greater locality of reference,
and Linux will typically just swap out more of the suspended process
in order to regain physical memory, so the perceived effect is often
that the system speeds right up again when the large process is
suspended. But if total virtual memory is nearly full, then there
might not be enough virtual memory left to satisfy even moderate
requests, with results as discussed at the beginning.
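
To illustrate the first point above, here is a minimal C sketch of the
difference that allocation checking makes. The chunk size and the
exact wording of the message are just illustrative assumptions, not
taken from any real utility:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK ((size_t)16 * 1024 * 1024)  /* 16 MB per request -- arbitrary */

    int main(void)
    {
        size_t total = 0;
        for (;;) {
            char *p = malloc(CHUNK);
            if (p == NULL) {
                /* the careful program's path: the request was denied,
                   so report it and terminate cleanly */
                fprintf(stderr, "memory limit: denied after %zu MB\n",
                        total / (1024 * 1024));
                return EXIT_FAILURE;
            }
            /* touch the memory so the pages are actually committed;
               depending on the kernel's overcommit settings, malloc
               may succeed and the exhaustion only show up here */
            memset(p, 1, CHUNK);
            total += CHUNK;
        }
        /* a program without the NULL check would instead call
           memset(NULL, 1, CHUNK) once malloc fails, and crash with
           a segmentation fault rather than exit cleanly */
    }

If you want the denial to come from a per-process limit rather than
from exhausting (and slowing down) the whole machine, run it in a
shell with the virtual memory limit lowered via 'ulimit -v' first.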
</OT>