Handling errors and exceptional conditions


TonyMc

I confess I have always taken an ad hoc approach to error handling in
programs that I write. I think that is justified since they are written
only for my own use, so if I enter strange data and the program calls
exit(EXIT_FAILURE) after printing a message to stderr, I can cope with
that and try again.

However, for my own learning, I am interested in perhaps starting to use
a more systematic approach to error handling and treating unexpected
conditions. So far I have the following functions and macros:

void syserror(const char *fname, int line, const char *fmt, ...);
void syswarning(const char *fname, int line, const char *fmt, ...);
void error(const char *fname, int line, const char *fmt, ...);
void warning(const char *fname, int line, const char *fmt, ...);

#define SYSERROR(...) syserror(__FILE__, __LINE__, __VA_ARGS__)

with similar #defines for the other three functions.

In my program, I then do something like:

   if ((fp = fopen(fname, "r")) == NULL)
      SYSERROR("Can't open input file %s.", fname);

The sysxxxx() functions differ from the xxxx() functions in using
strerror() to print the system-specific error message from the errno
value. These are useful only for handling errors generated by library
functions that set errno to a useful value. The error() and warning()
functions are for problems encountered in my own code rather than in
calls to the OS. The error() variety calls exit(); the warning()
variety just sends a message to stderr and returns.

So, is that the sort of scheme that others use? Are there obvious
problems with it? And what alternatives do people use? I have seen
jumps to cleanup code at the end of a function, setjmp/longjmp (which, I
confess, does my head in - it feels like going back in time) and
obviously there are much more robust and sophisticated techniques which
attempt to fix the problem and continue rather than simply exiting or
issuing a message. I'm interested to know what others do. Also, if you
can point me to some resources that discuss different strategies in C,
that might be helpful.
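
For reference, the "jumps to cleanup code at the end of a function" style mentioned above usually looks something like this generic sketch (the function and its details are invented for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read a whole file into a malloc'd buffer; returns NULL on failure.
   Every exit path funnels through the cleanup labels, so each
   resource is released exactly once. */
char *slurp(const char *path, long *len_out)
{
    char *buf = NULL;
    FILE *fp = fopen(path, "rb");
    if (fp == NULL)
        goto out;

    if (fseek(fp, 0, SEEK_END) != 0)
        goto close_fp;
    long len = ftell(fp);
    if (len < 0)
        goto close_fp;
    rewind(fp);

    buf = malloc((size_t)len + 1);
    if (buf == NULL)
        goto close_fp;

    if (fread(buf, 1, (size_t)len, fp) != (size_t)len) {
        free(buf);
        buf = NULL;
        goto close_fp;
    }
    buf[len] = '\0';
    if (len_out)
        *len_out = len;

close_fp:
    fclose(fp);
out:
    return buf;
}
```

The appeal is that acquisition order and release order are visible in one place, which is much harder to get wrong than scattering fclose()/free() calls through every error branch.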

Thanks,
Tony
 

D Yuniskis

TonyMc said:
However, for my own learning, I am interested in perhaps starting to use
a more systematic approach to error handling and treating unexpected
conditions. So far I have the following functions and macros:

void syserror(const char *fname, int line, const char *fmt, ...);
void syswarning(const char *fname, int line, const char *fmt, ...);
void error(const char *fname, int line, const char *fmt, ...);
void warning(const char *fname, int line, const char *fmt, ...);

#define SYSERROR(...) syserror(__FILE__, __LINE__, __VA_ARGS__)

with similar #defines for the other three functions.

It appears each of your error routines will be similar and
*probably* do a similar set of actions (though syserror() may
exit()?)

Personally, I would write a single myerror() routine and pass a
parameter -- error_type to it (a manifest constant). So,

#define SYSERROR(...) myerror(SYSTEM, __VA_ARGS__)
#define SYSWARN(...)  myerror(SYSWARN, __VA_ARGS__)

In this way, if you change the way you want to handle your
errors, the change can more easily be applied to *all*.
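
Fleshed out, that single-routine shape might look like the following (the enum names and which varieties call exit() are my guesses at the intent, not the poster's actual code):

```c
#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { WARN, ERR, SYSWARNING, SYSERR } error_t;

/* Fatal varieties call exit(); the rest return to the caller. */
static int is_fatal(error_t type)
{
    return type == ERR || type == SYSERR;
}

void myerror(error_t type, const char *fname, int line, const char *fmt, ...)
{
    int saved = errno;                        /* printing may clobber errno */
    va_list ap;

    fprintf(stderr, "%s:%d: ", fname, line);
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    if (type == SYSWARNING || type == SYSERR) /* SYS* append strerror() */
        fprintf(stderr, ": %s", strerror(saved));
    fputc('\n', stderr);
    if (is_fatal(type))
        exit(EXIT_FAILURE);
}

#define WARNING(...)  myerror(WARN,   __FILE__, __LINE__, __VA_ARGS__)
#define SYSERROR(...) myerror(SYSERR, __FILE__, __LINE__, __VA_ARGS__)
```

With all four behaviours in one switchable routine, a later policy change (say, logging to a file as well as stderr) touches exactly one function.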
In my program, I then do something like:

   if ((fp = fopen(fname, "r")) == NULL)
      SYSERROR("Can't open input file %s.", fname);

The sysxxxx() functions differ from the xxxx() functions in using
strerror() to print the system-specific error message from the errno
value. These are useful only for handling errors generated by library
functions that set errno to a useful value. The error() and warning()
functions are for problems encountered in my own code rather than in
calls to the OS. The error() variety calls exit(); the warning()
variety just sends a message to stderr and returns.

So, is that the sort of scheme that others use? Are there obvious
problems with it? And what alternatives do people use? I have seen
jumps to cleanup code at the end of a function, setjmp/longjmp (which, I
confess, does my head in - it feels like going back in time) and
obviously there are much more robust and sophisticated techniques which
attempt to fix the problem and continue rather than simply exiting or
issuing a message. I'm interested to know what others do. Also, if you
can point me to some resources that discuss different strategies in C,
that might be helpful.

I've been moving towards more and more aggressive error *reporting*
in my projects (I've always been aggressive about error *detection*
due to the markets that I address but, now, thinking more about
*what* I convey to the user). I use the __FILE__, __LINE__
macros as keys to a database that "myerror()" accesses at
run-time. This allows me to handle traditional error reporting
along with providing real feedback to the user about the likely
cause of the "error". E.g., I can produce a page from the user
manual to explain it, if applicable.

This, admittedly, is a heavyweight approach but, so far, has
some promise!
 

TonyMc

D Yuniskis said:
It appears each of your error routines will be similar and
*probably* do a similar set of actions (though syserror() may
exit()?)

Personally, I would write a single myerror() routine and pass a
parameter -- error_type to it (a manifest constant). So,

Indeed, they all call a common function (which is declared static within
the error handling translation unit). Each of the four interface
functions sets up the arguments for that common function and then calls
it. Same thing you are suggesting, but the error_type business goes on
"behind the scenes". I'm not sure which way is better. Why would you
choose one approach rather than the other?

Best
Tony
 

D Yuniskis

TonyMc said:
Indeed, they all call a common function (which is declared static within
the error handling translation unit). Each of the four interface
functions sets up the arguments for that common function and then calls
it. Same thing you are suggesting, but the error_type business goes on
"behind the scenes". I'm not sure which way is better. Why would you
choose one approach rather than the other?

<shrug> Efficiency? I.e., doing:

#define SYSERROR(...) syserror(__VA_ARGS__)

syserror(...) {
return myerror(SYSTEM_ERROR, ...)
}

myerror(error_t type, ...) {
...
}

adds one more level of function invocation vs.

#define SYSERROR(...) myerror(SYSTEM_ERROR, __VA_ARGS__)

However, some compilers may be smart enough to "absorb" the
wrapper function into the *wrapped* function. Of course,
other compilers *aren't*.

I don't like burying *important* detail any deeper than
necessary. I.e., the latter case makes it fairly obvious
to the developer that all of the "ERROR" routines are essentially
the same -- without having to track down the implementation of
syserror(), syswarn(), etc.
 

Eric Sosman

D said:
TonyMc said:
[...]
Indeed, they all call a common function (which is declared static within
the error handling translation unit). Each of the four interface
functions sets up the arguments for that common function and then calls
it. Same thing you are suggesting, but the error_type business goes on
"behind the scenes". I'm not sure which way is better. Why would you
choose one approach rather than the other?

<shrug> Efficiency? I.e., doing:

#define SYSERROR(...) syserror(__VA_ARGS__)

syserror(...) {
return myerror(SYSTEM_ERROR, ...)
}

myerror(error_t type, ...) {
...
}

adds one more level of function invocation vs.

#define SYSERROR(...) myerror(SYSTEM_ERROR, __VA_ARGS__)

However, some compilers may be smart enough to "absorb" the
wrapper function into the *wrapped* function. Of course,
other compilers *aren't*.
[...]

If you are getting errors so frequently that the efficiency
of the error-handling mechanism becomes a concern, maybe it's time
to start wondering why you get so many errors ...

The cure for "Every time I ram my car into a telephone pole,
it takes the repair shop a week to get me back on the road again"
is not to try to make the repair shop work faster, but to learn
how to steer.
 

D Yuniskis

Eric said:
D said:
TonyMc said:
[...]
Indeed, they all call a common function (which is declared static within
the error handling translation unit). Each of the four interface
functions sets up the arguments for that common function and then calls
it. Same thing you are suggesting, but the error_type business goes on
"behind the scenes". I'm not sure which way is better. Why would you
choose one approach rather than the other?

<shrug> Efficiency? I.e., doing:

#define SYSERROR(...) syserror(__VA_ARGS__)

syserror(...) {
return myerror(SYSTEM_ERROR, ...)
}

myerror(error_t type, ...) {
...
}

adds one more level of function invocation vs.

#define SYSERROR(...) myerror(SYSTEM_ERROR, __VA_ARGS__)

However, some compilers may be smart enough to "absorb" the
wrapper function into the *wrapped* function. Of course,
other compilers *aren't*.
[...]

If you are getting errors so frequently that the efficiency
of the error-handling mechanism becomes a concern, maybe it's time
to start wondering why you get so many errors ...

The cure for "Every time I ram my car into a telephone pole,
it takes the repair shop a week to get me back on the road again"
is not to try to make the repair shop work faster, but to learn
how to steer.

Of course! But, there are other aspects of "efficiency" besides
code size or execution time. As I said:
 

Nick Keighley

TonyMc said:
I confess I have always taken an ad hoc approach to error handling in
programs that I write.  I think that is justified since they are written
only for my own use, so if I enter strange data and the program calls
exit(EXIT_FAILURE) after printing a message to stderr, I can cope with
that and try again.

However, for my own learning, I am interested in perhaps starting to use
a more systematic approach to error handling and treating unexpected
conditions.  So far I have the following functions and macros:

void syserror(const char *fname, int line, const char *fmt, ...);
void syswarning(const char *fname, int line, const char *fmt, ...);
void error(const char *fname, int line, const char *fmt, ...);
void warning(const char *fname, int line, const char *fmt, ...);

#define SYSERROR(...)  syserror(__FILE__, __LINE__, __VA_ARGS__)

with similar #defines for the other three functions.  

In my program, I then do something like:

   if ((fp = fopen(fname, "r")) == NULL)
      SYSERROR("Can't open input file %s.", fname);

The sysxxxx() functions differ from the xxxx() functions in using
strerror() to print the system-specific error message from the errno
value.  These are useful only for handling errors generated by library
functions that set errno to a useful value.  The error() and warning()
functions are for problems encountered in my own code rather than in
calls to the OS.  The error() variety calls exit(); the warning()
variety just sends a message to stderr and returns.

So, is that the sort of scheme that others use?  Are there obvious
problems with it?  And what alternatives do people use?  I have seen
jumps to cleanup code at the end of a function, setjmp/longjmp (which, I
confess, does my head in - it feels like going back in time)

setjmp/longjmp is tricky and can leak system resources (e.g. memory or
file handles). At best setjmp/longjmp is a primitive for implementing
something a little more robust. I've never used it. I read enough of
Plauger's Library book to scare me.
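
To make the resource-leak hazard concrete, here is the bare primitive in a contrived sketch: the malloc() in parse() is simply abandoned when longjmp() fires, because no free() on the error path ever runs.

```c
#include <setjmp.h>
#include <stdlib.h>

static jmp_buf on_error;

static void parse(const char *input)
{
    char *scratch = malloc(64);    /* acquired after setjmp()... */
    if (input[0] != '(')
        longjmp(on_error, 1);      /* ...and leaked here: no free() runs */
    /* ... normal processing ... */
    free(scratch);                 /* only the success path releases it */
}

int try_parse(const char *input)
{
    if (setjmp(on_error) != 0)
        return -1;                 /* we land here after longjmp() */
    parse(input);
    return 0;
}
```

Any robust scheme built on top of this has to track acquired resources itself (or confine setjmp/longjmp to regions that hold none), which is exactly the bookkeeping the raw primitive doesn't do for you.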
and
obviously there are much more robust and sophisticated techniques which
attempt to fix the problem and continue rather than simply exiting or
issuing a message.  I'm interested to know what others do.  Also, if you
can point me to some resources that discuss different strategies in C,
that might be helpful.

You need to separate, at least conceptually, error reporting from
error handling or recovery (reporting could be thought of as part of
handling). Reporting just means writing to some stream with as much
context as possible. FILE and LINE yes but also which "object"
generated the error. Where "object" is something meaningful to the
user. It might be a communications channel or a piece of hardware or
an invoice number.

You could add Info (or Activity), Violation, and Critical (or Fatal)
levels to your Warning and Error reporting.

Info/Activity: significant system activity (command processed,
packet received). This is handy when your system fails: you can see
what it was doing just before things started to go pear-shaped.

Violation (still debating the name): this is for violations of
external interfaces or APIs. This means /your/ software is OK but
someone outside the system boundary is doing something wrong. They
have the syntax of a packet wrong, or a value is out of range, or they
can't do That at This time. May trigger a Warning or Error.

Warning: something odd happened but you think you can continue.

Error: something bad happened but the system can continue. Some
work has to be discarded or backed out: a link may be reset, an
invoice may stop being processed.

Critical/Fatal: something really, really bad happened. The hardware
has failed, your data structure is corrupt, the database appears to
have vanished, you have no main memory or disc space. Clean up as best
you can and die.
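
One way to encode those levels in C (the identifier names and the threshold filter are my own sketch of how such a scheme is often wired up, not Nick's design):

```c
typedef enum {            /* ordered least to most severe */
    SEV_INFO,
    SEV_VIOLATION,
    SEV_WARNING,
    SEV_ERROR,
    SEV_FATAL
} severity_t;

/* Everything at or above this level gets reported. */
static severity_t log_threshold = SEV_INFO;

static const char *severity_name(severity_t s)
{
    static const char *names[] =
        { "info", "violation", "warning", "error", "fatal" };
    return names[s];
}

int should_report(severity_t s)
{
    return s >= log_threshold;
}
```

Ordering the enum by severity is what makes a single comparison serve as the filter; during debugging you drop log_threshold to SEV_INFO, and in production raise it to SEV_WARNING or higher.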


How sophisticated you make it depends on your domain. Unix filters
tend to be less sophisticated in their error recovery than helicopter
autopilots.
 
