Flash Gordon
On 22/01/08 11:59, Malcolm McLean wrote:
> Try to follow the logic.
>
> Memory allocation failure always causes a program to behave in some way
> that is sub-optimal for the user. Otherwise the program wouldn't request
> the memory at all. So failures, like the Federal Government running out
> of money, are rare in terms of cycles. Flash can just about tolerate his
> machine running out of memory once a month. I'd say that if the machine
> runs out of memory every day, pretty soon the system will be replaced.
Actually it would not, because that would be too expensive (the notebook
cost over 1000 UKP). Now that RAM prices for notebooks have dropped it
*might* be upgraded.
> The question is, where will the failure occur? In a big allocation, or a
> little allocation?
Depends. On the days when I hit allocation failures I have normally
already started all the memory hogs, and I can see from a monitor that
memory is exhausted *before* I hit the errors. In fact, I run a resource
monitor continuously for precisely this reason.
> With some simplifying assumptions,
Without fully documenting the assumptions, your calculation is worth nothing.
> the answer is very easy to calculate, and so you can work out the size
> of allocation that is orders of magnitude less likely to fail than a
> hardware failure; that is to say, the error-handling code is vanishingly
> unlikely to be executed.
So when my resource monitor says that 100% of memory is in use, I am
still more likely to have the hardware fail than to have a small memory
allocation fail? I think not.
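
For what it's worth, here is a minimal sketch of the kind of
back-of-envelope calculation being proposed. The failure model and every
number in it are assumptions invented purely for illustration, which is
exactly the problem: change the assumptions and the answer changes with
them.

#include <stdio.h>

int main(void)
{
    /* All of these figures are made-up assumptions, for illustration only. */
    double total_bytes    = 2.0e9;    /* pretend the machine has 2GB      */
    double request_bytes  = 64.0;     /* a "small" allocation              */
    double hw_fail_per_op = 1.0e-15;  /* assumed hardware failure rate     */

    /* Toy model: free memory at the moment of the call is uniformly
       distributed over [0, total_bytes], so a request of s bytes fails
       with probability s / total_bytes. */
    double alloc_fail = request_bytes / total_bytes;

    printf("P(allocation failure) ~ %g\n", alloc_fail);
    printf("P(hardware failure)   ~ %g\n", hw_fail_per_op);
    printf("ratio                 ~ %g\n", alloc_fail / hw_fail_per_op);
    return 0;
}

Under that made-up model even a 64-byte request comes out several orders
of magnitude more likely to fail than the hardware, so the conclusion
rests entirely on which assumptions you pick.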
> If you remove those simplifying assumptions the question does become a
> bit more difficult.
You need to because they are false. Basing your decisions on false
assumptions is always a bad idea, especially when someone has pointed
out that they are false.
> However, the costs of error-handling code are not negligible.
Depends on whether you are good at design and programming.
> Adding code that will never be executed is a classic example of
> over-engineering.
Ah, but it has already been pointed out that this is not code that will
never be executed.
> The answer to the OP's question is "the point at which a hardware
> failure becomes statistically more likely".
Which makes it easy. Since you cannot calculate either probability with
any great assurance, you have to assume you need the checks.
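
To put a figure on the cost being argued about, this is roughly what the
disputed check looks like. dup_string is a made-up example, not anyone's
real code:

#include <stdlib.h>
#include <string.h>

/* Hypothetical example: duplicate a string, returning NULL on failure
   so the caller can report the problem or degrade gracefully instead of
   dereferencing a null pointer. */
char *dup_string(const char *s)
{
    size_t len = strlen(s) + 1;
    char *copy = malloc(len);     /* a small allocation, still checked */

    if (copy == NULL)
        return NULL;              /* the check itself: two lines */

    memcpy(copy, s, len);
    return copy;
}

Two extra lines per allocation, plus whatever recovery or reporting the
caller wants to do with the NULL; how expensive that is depends on the
design, as noted above.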