Leet said:
I believe you are completely wrong on this point. Very often a SIGSEGV
will be caused by (say) a single bad array access - the consequences
will be highly localized, and carrying on with the program will not
cause any significant problems.
And very often this will not be the case, and quitting before more
damage is done is the best thing you can do.
For example, consider a program which moves a tree of files from one
filesystem to another by copying them and then deleting the originals
once the copy finishes successfully. And suppose you get a SIGSEGV
while building the list of files to copy. If you ignore it, you might
end up making copies of half the files, then deleting all the originals!
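The danger can be sketched concretely. This is a hypothetical toy (the
function names, file names, and the truncated `build_file_list` are all
made up to stand in for the fault): the copy pass walks the list that
was being built when the fault hit, but the delete pass walks the real
tree, so "recovering" with a half-built list silently destroys files.

```python
# Toy sketch: why carrying on after a fault mid-list-build loses data.
import os
import shutil
import tempfile

def build_file_list(src):
    # Stand-in for the faulting step: imagine a SIGSEGV hit halfway
    # through and we "recovered" by keeping whatever we had so far.
    return sorted(os.listdir(src))[:1]   # truncated: only one of two files

def move_tree(src, dst, file_list):
    # Copy pass uses the (truncated) list...
    for name in file_list:
        shutil.copy(os.path.join(src, name), os.path.join(dst, name))
    # ...but the delete pass walks the *actual* tree.
    for name in os.listdir(src):
        os.remove(os.path.join(src, name))

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    with open(os.path.join(src, name), "w") as f:
        f.write("data")

move_tree(src, dst, build_file_list(src))
print(sorted(os.listdir(dst)))   # only ['a.txt'] made it across...
print(os.listdir(src))           # ...but both originals are gone: []
```

The copy of b.txt never happened, yet its original was deleted, which
is exactly the half-copied, all-deleted outcome described above.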
If you really want to recover from local faults, then use a language
that does bounds checking on array accesses and pointer/reference
dereferences and throws exceptions when these checks fail. Then, if
you know such errors really won't corrupt the larger program's state
and that the fault really is localized, you can write an exception
handler to do the error recovery and contain the fault within whatever
bounds you've determined in advance it actually *can* be confined to.
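As a minimal sketch of that kind of contained recovery, here is Python
(which bounds-checks every index and raises `IndexError` rather than
scribbling over memory); the record format and function names are
invented for illustration:

```python
# Fault containment via language-level checks: a bad index raises an
# exception we can catch at a boundary we chose in advance.

def parse_record(fields):
    # fields[2] is out of bounds for short records; the language
    # guarantees this raises IndexError instead of reading garbage.
    return {"name": fields[0], "score": int(fields[2])}

def parse_all(records):
    parsed, failed = [], []
    for rec in records:
        try:
            parsed.append(parse_record(rec))
        except (IndexError, ValueError):
            # The fault is provably confined to this one record: no
            # shared state was touched, so skipping it is safe.
            failed.append(rec)
    return parsed, failed

ok, bad = parse_all([["ann", "x", "3"], ["bob"], ["cyd", "y", "7"]])
print([r["name"] for r in ok])   # ['ann', 'cyd']
print(bad)                       # [['bob']]
```

The per-record `try`/`except` is the pre-determined containment
boundary: the handler only ever discards the record that faulted.
That guarantee is exactly what a raw SIGSEGV in C can't give you.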
Who wants their customer to run their program and have it just crash
with a segfault?
I'd much rather the customer hit a segfault and file a bug report,
giving me a chance to fix it, than have the program silently carry on,
corrupting data or who knows what else for years on end, with the
error never surfacing. There was a trend in business a
decade or two ago called "total quality management" (or TQM), and the
basic idea was that when faults happen, you should not whitewash over
them, and you should instead stop what you're doing and not proceed
until you've corrected the problem. This was carried a little too far
(like most trendy business ideas), but there is some merit to this
approach. Ignoring failures just (a) causes problems and (b) encourages
people to stop caring about whether they cause failures.
- lOGAN