Morris Dovey
Keith Thompson (in (e-mail address removed)) said:
| Out of curiosity, how often do real-world programs really do
| something fancy in response to a malloc() failure?
I don't have a definitive answer to this one, but I'll guess that there
are more than one would be inclined to expect - about a third of the
systems I've worked on over the last 20 years have had recovery
strategies for dealing with this problem.
| The simplest solution, as you say, is to immediately abort the
| program (which is far better than ignoring the error). The next
| simplest solution is to do some cleanup (print a coherent error
| message, flush buffers, close files, release resources, log the
| error, etc.) and *then* abort the program.
| I've seen suggestions that, if a malloc() call fails, the program
| can fall back to an alternative algorithm that uses less memory.
| How realistic is this? If there's an algorithm that uses less
| memory, why not use it in the first place? (The obvious answer:
| because it's slower.) Do programmers really go to the effort of
| implementing two separate algorithms, one of which will be used
| only on a memory failure (and will therefore not be tested as
| thoroughly as the primary algorithm)?
I'm not sure about "alternative algorithms"; but I have an application
that efficiently "recycles" the malloc()ed allocations, rather than
free()ing them. If it decides it's using too much of the available
memory (or if the workload exceeds a threshold), the application
presents a request to a supervisory node for cloning (starting the same
application on a new node) and transfers a portion of its database to
be handled by the new node.
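The recycling part of that scheme is essentially a free list: released
blocks are kept and handed back out instead of being free()d, so
steady-state operation makes no new malloc() calls. A minimal sketch,
for fixed-size nodes (all names here are invented for illustration):

```c
#include <stdlib.h>

struct node {
    struct node *next;       /* links nodes on the free list */
    char payload[56];        /* whatever the application stores */
};

static struct node *free_list = NULL;

/* Hand out a recycled node if one is available; otherwise fall
 * back to malloc().  The caller must still check for NULL. */
static struct node *node_get(void)
{
    if (free_list != NULL) {
        struct node *n = free_list;
        free_list = n->next;
        return n;
    }
    return malloc(sizeof(struct node));
}

/* "Free" a node by pushing it onto the free list for reuse. */
static void node_put(struct node *n)
{
    n->next = free_list;
    free_list = n;
}
```

A real implementation would also track how many nodes are outstanding,
so the application can notice when it nears its memory threshold and
trigger the cloning request described above.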
Some systems are designed to operate in multiple modes - for example,
it's not unusual for launch vehicles to have a "boost" mode, in which
processing essential to sub-orbital flight is given highest priority
for all available resources, an "orbital" mode in which data
acquisition is given the highest priority, a "dump" mode in which
activities related to data transmission are given highest priority,
and perhaps "diagnostic" and "maintenance" modes. Resource
allocation/retention strategy is (re)configured by a mode change.
Other systems incorporate selective degradation in which priorities
are managed over a spectrum and resource allocations are made (and
retained) according to the current priority. A low-priority processing
unit may be asked to surrender a portion (or all) of the resources it
currently "owns".
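The surrender step can be sketched as a walk over the processing units
from lowest priority upward, reclaiming only as much as the shortfall
requires (the `unit` and `reclaim` names are invented for illustration):

```c
#include <stddef.h>

struct unit {
    int    priority;          /* lower value = lower priority */
    size_t owned;             /* resource units currently held */
};

/* Reclaim up to `needed` resource units, asking the lowest-priority
 * units to surrender first; units[] is assumed sorted by ascending
 * priority.  Returns the amount actually reclaimed. */
static size_t reclaim(struct unit *units, size_t n, size_t needed)
{
    size_t reclaimed = 0;
    for (size_t i = 0; i < n && reclaimed < needed; i++) {
        size_t take = units[i].owned;
        size_t shortfall = needed - reclaimed;
        if (take > shortfall)
            take = shortfall; /* surrender only a portion */
        units[i].owned -= take;
        reclaimed += take;
    }
    return reclaimed;
}
```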
Choice of approach pretty much depends on the functional requirements.