Flash Gordon
On the other hand, banking is a bad analogy for a different reason. A
depositor has no right to withdraw more money from a bank than the
current account balance, and interest rates give depositors an incentive
to take out no more than they actually need. In memory allocation,
there's no corresponding limit, and no corresponding incentive, to
restrain allocation requests. Therefore, the chance that a given
request will make the system run out of memory cannot, even to a rough
first approximation, be estimated by methods similar to those given above.
A better reason that you cannot use a method like the above for
estimating the likelihood of failure is that it depends on the type
of user and usage pattern. If a user regularly does video editing, there
is a good chance that half to three quarters (or more) of the memory is
in use before you start on anything else. Or, if the user regularly has
three VMware images open with half a gigabyte of RAM allocated to each,
and also has to work on large word-processor documents, sometimes
copying from one document to another, then the chance of a 100-byte
allocation failing while copying a small amount of text is much higher.
Before you even look at estimating the frequency (or probability) of a
memory allocation of a given size failing, you first have to gather some
statistics on the likely base level of memory utilisation.
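As a sketch of what gathering that baseline might look like: on Linux the kernel exposes current memory figures in /proc/meminfo, and a small sampler can record MemTotal and MemAvailable periodically. This is Linux-specific and the helper name `meminfo_field_kb` is my own, purely for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse one /proc/meminfo-style line, e.g. "MemAvailable:  123456 kB".
 * Returns the value in kilobytes, or -1 if the line is not the
 * requested field. (Hypothetical helper, named for this sketch.) */
long meminfo_field_kb(const char *line, const char *field)
{
    size_t flen = strlen(field);
    if (strncmp(line, field, flen) != 0 || line[flen] != ':')
        return -1;
    return strtol(line + flen + 1, NULL, 10);
}

/* Take one sample of the baseline: scan /proc/meminfo (Linux-specific)
 * for MemTotal and MemAvailable. Returns 0 on success, -1 otherwise. */
int sample_baseline(long *total_kb, long *avail_kb)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long v;

    *total_kb = *avail_kb = -1;
    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        if ((v = meminfo_field_kb(line, "MemTotal")) >= 0)
            *total_kb = v;
        else if ((v = meminfo_field_kb(line, "MemAvailable")) >= 0)
            *avail_kb = v;
    }
    fclose(f);
    return (*total_kb > 0 && *avail_kb >= 0) ? 0 : -1;
}
```

Logging such samples over a working day, per user, gives the distribution of free memory that any probability estimate would have to be conditioned on.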
Probably an easier way of producing an estimate is to write an
instrumented malloc which logs each failure, how large the allocation
was, the name of the process, and so on. Then get a representative
selection of users to run with that library and collect the results.
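A minimal sketch of such an instrumented allocator, assuming you can route calls through a wrapper (for instance with a `#define malloc(n) logged_malloc(n)` in a shared header, or an LD_PRELOAD shim on Linux). The function name and log format here are invented for illustration; a real deployment would also record the process name and append to a per-user log file.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static unsigned long failed_allocs;   /* running count of failures */

/* Wrapper around malloc that records every failed request: the
 * requested size and a timestamp go to stderr, and a counter is
 * bumped so the failure rate can be reported at process exit. */
void *logged_malloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL && size > 0) {
        failed_allocs++;
        fprintf(stderr, "malloc failure: %zu bytes at %ld\n",
                size, (long)time(NULL));
    }
    return p;
}
```

The same wrapping applies to calloc and realloc. Aggregating the logs from a representative set of users then gives an empirical failure frequency per request size, which is exactly the statistic the analytical approach above cannot supply.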