Mechanism to generate annotated "error codes"

D

Don Y

Hi Malcolm,

A parser is a pure function (assuming IO is not handled by the
function). It takes one sequence of bits - the script - and it returns
another sequence of bits - the result. So all it's doing is
shuffling bits about in memory. If a pure function calls a procedure,
it's no longer a pure function.
Understood.

However most parsers will do IO. That leads to a question. If a pure
function takes a function pointer as a parameter, and that parameter
is to a procedure, is it still a pure function? The answer is no, but
here the classification is beginning to lose its usefulness.

Parsers are an obvious case of invalid input not being reasonably
caught by caller. You could require that caller pass only correct
scripts to the parser, but that means caller has to do almost as much
work as the parser itself to verify that the script is correct. So you

That was the point I was trying to bring out. Do you call the
"parameter" passed to it a "programming error"? That would imply
the caller had the responsibility of checking it for correctness
(eliminating the need for the parser's functionality).
have to treat malformed scripts as part of the normal flow control of
the parser. The caller's then got the problem. If the script was typed
into the source by the programmer, it's an internal error. If the
script was provided by outside, it should usually be treated as normal
flow control - you expect complex user-provided scripts to be
malformed.

I think "expect" is overly cynical. I would say "you PREPARE FOR
<any input> to be 'malformed'" -- since you can't control the
outside world.

Returning to my original example (scanning a string of chars to
extract the numeric value expected to be represented therein),
that function (pure function using your terminology) can report
the nature of any "incompatible input" to any degree of detail
that it considers appropriate. Not just "hey, I couldn't find
a valid 'number' in this mess..."

Those reports make sense in the context of a function that
is responsible for extracting the numeric value represented by a
string of characters. REGARDLESS OF WHAT THAT NUMBER IS BEING
USED FOR. And, regardless of the actual value!

E.g., if that routine also had the responsibility of testing that
the value was within a particular set of bounds, it could add
"Value specified exceeds the maximum allowed.", etc. to its
lexicon of errors.

None of these reports need to know whether the "value" is being
used as a representation of AGE, azimuth/elevation, speed,
stock market index, etc. You are dealing with a *value* -- just
like when you complain that the first letter of a sentence is
not capitalized (you don't even look at the CONTENT of the
sentence to understand the nature of that "report").

But, you obviously don't want that "pure function" to become
a "procedure" by virtue of informing the user directly of
this error (this would also badly contort the design of
your code).

Nor do you need to return the text of the message to the caller!
(how do you decide whether you should return the en_US version
of the message or the es version?)

Instead, return an identifier that allows the caller to fetch
a version of the error that *it* considers appropriate.

It, in turn, can embellish -- or replace -- that message with
something appropriate to its understanding of the application.
E.g., perhaps the caller is responsible for obtaining the
coordinates of a point in 2D space. It *uses* the called routine
to parse the string (that is provided to *it* by ITS caller!)
for the X coordinate, then the Y coordinate. An error reported
during the first parse might cause it to report an error to
*its* caller like: "Bad value for X coordinate." AUGMENTED
by the report returned by the function that had actually
detected the lower level error "Non-numeric character encountered
in value."

Or not (depends on the range of errors that you document
get_point() as reporting). You could, instead, ignore the
detail presented to you by any of your "underlings" and
just report: "Bad point".
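A minimal sketch of this identifier-plus-catalog idea, in C. Everything here (the identifier names, the locale set, both catalogs) is hypothetical, invented for illustration; the point is only that the callee returns a code and the *caller* decides which rendering of it to fetch:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical error identifiers -- nothing here comes from a real system */
typedef enum {
    ERR_NONE = 0,
    ERR_NONNUMERIC,   /* "Non-numeric character encountered in value." */
    ERR_TOO_LARGE,    /* "Value specified exceeds the maximum allowed." */
    ERR_COUNT
} err_id;

typedef enum { LOC_EN_US, LOC_ES, LOC_COUNT } msg_locale;

/* One message catalog per locale; the caller picks the catalog,
 * the callee only ever returns the identifier. */
static const char *catalog[LOC_COUNT][ERR_COUNT] = {
    [LOC_EN_US] = {
        [ERR_NONE]       = "OK",
        [ERR_NONNUMERIC] = "Non-numeric character encountered in value.",
        [ERR_TOO_LARGE]  = "Value specified exceeds the maximum allowed.",
    },
    [LOC_ES] = {
        [ERR_NONE]       = "OK",
        [ERR_NONNUMERIC] = "Caracter no numerico encontrado en el valor.",
        [ERR_TOO_LARGE]  = "El valor especificado excede el maximo permitido.",
    },
};

/* Resolve an identifier to a message the caller considers appropriate */
const char *error_text(err_id e, msg_locale loc)
{
    if ((unsigned)e >= ERR_COUNT || (unsigned)loc >= LOC_COUNT)
        return "Unknown error";
    return catalog[loc][e];
}
```

The caller is free to embellish ("Bad value for X coordinate: " plus the resolved text) or to ignore the detail entirely and substitute its own "Bad point".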
 
D

Don Y

Hi Malcolm,

You've described the problem.

Fred scatters his code with

log_error(ERR_TOOMANYFILESOPEN);

Bert does the same with

fprintf_error(EC_FILECOUNTEXCEEDED, "Too many files open (%d)\n",
Nfiles);

Now when we merge Fred's code with Bert's, what's going to happen?

If the same "underling" is signaling that the count of files has
been exceeded, then that underling would have reported (to fred and
bert) in the manner in which the underling is documented.

E.g., (pseudocode)
error_t error;

if (!OpenFile(name_of_file, &error))

That line of code (or variant thereof) in bert and fred's code
would yield the same error code (and corresponding report) in
each case. I.e., "Too many files open. [The system can only
open 20 files for each user. Close some file(s).]".

(the bracketed portion being "extra clarification")

Bert and fred can embellish/replace this as appropriate to their
contexts/roles. E.g., bert might want to add "Too many photos
open. [A maximum of 9 photos can be examined at any given time.]"

The user sees the "too many photos open" and, if that is
sufficient for him to understand the nature of the problem,
fine. If he asks for more clarification, the "maximum of 9"
is offered up. If this still does not resolve the issue
("I'm only viewing 7 photos!"), more, lower level details
are exposed by revealing the details of the error reported
by OpenFile -- "20 files for each user". This suggests
that files are open elsewhere that aren't accounted for in
the "9 photos" context -- but that *are* known to lower
level OpenFile routine. Either a bug *or* the error is
being reported too low in the application -- perhaps this
layer should have been embellished by the layer *above*...
the one that called OpenPhoto()!

Meanwhile, fred might have opted to *replace* the error with
a unilateral proclamation: "Out of resources" with no further
reporting.

Note that this doesn't protect against fred writing his own
oPEN_fILE() routine and implementing whatever errors he wants
in it. So, *his* oPEN_fILE might return reports like:
"Golly, gee... I just can't keep track of all these files
you keep wanting to open!" :>

There is no way around this. Just like you can write your own
strtod() to parse a string for a numeric value instead of
relying on the one provided by the library. Fred can report
KB in units of 1,000 bytes while Bert can use 1,024.

But, if both fred and bert are aware of the ConvertToKB()
function's existence, then their notions of KB will be
consistent.

If they are both aware of the OpenFile() routine's existence,
then their error messages (from *that* layer) will be
consistent.

And, when BOTH fred and bert stop using OpenFile(), the
"Too many files open" error message will magically be excised
from the application -- along with the ERR_TOOMANYFILESOPEN
identifier.

I can't FORCE fred and bert to use OpenFile. I can't *force*
them to write good error messages. But, if I provide a
mechanism that makes presenting these messages to the user
easy and noticeably improves the quality of that experience,
they can exploit that mechanism more easily than inventing an
ad hoc mechanism that they have to maintain for themselves.

Corporate policy and peer pressure are the only real
policemen. All I can do is make a mechanism that *wants*
to be used, is easy to use, and reduces the chance of
error creeping into that aspect of the code.

Alice joins the team and encounters:
MakeErrorCode(...)
and it doesn't take long for her to see how she can
exploit it to "explain" her code.
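MakeErrorCode()'s definition is never shown here, so the following is pure speculation about how such a mechanism *could* gin up a code that is unique per raise-site: hash the file name (FNV-1a below) and mix in the line number. All names are my own:

```c
#include <assert.h>
#include <stdint.h>

/* FNV-1a string hash -- small, fast, and good enough for uniquifying
 * source file names (speculative sketch; not the thread's mechanism) */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s != '\0') {
        h ^= (unsigned char)*s++;
        h *= 16777619u;
    }
    return h;
}

/* Expands to a distinct code at every source location it appears */
#define ERROR_SITE_ID() (fnv1a(__FILE__) ^ (uint32_t)__LINE__)
```

Two raises of the same logical error on different lines then yield different codes, which is exactly what lets support map a reported "8 digit error code" back to the one conditional that raised it.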

In regulated environments, this can go a long way to complying
with requirements for testing (has every error code been "touched"
during the execution of the test suite? just instrument the
routine that resolves error codes and tally each unique code
that is presented to it!), elimination of dead code, etc.
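The "instrument the resolver" idea can be sketched as below. The names and the MAX_CODES bound are assumptions; the point is that because every error presented to the user funnels through one resolver, that single choke point can tally test-suite coverage of the whole error lexicon:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_CODES 4096   /* assumed upper bound on distinct codes */

static bool seen[MAX_CODES];
static size_t unique_seen;

/* Every error shown to the user passes through here, so tallying at
 * this choke point records which codes the test suite "touched". */
const char *resolve_error(unsigned code)
{
    if (code < MAX_CODES && !seen[code]) {
        seen[code] = true;
        unique_seen++;
    }
    /* ... the real lookup of the message text would happen here ... */
    return "(message text)";
}

/* After the test suite runs, compare this against the known lexicon */
size_t error_codes_touched(void)
{
    return unique_seen;
}
```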
 
D

Don Y

Hi Lowell,

That's the wrong question, because the answer necessarily depends on the
relationship between Fred and Bert. If they are independent third-party
library developers, you're not likely to get them to use *your* error-

If they are truly independent (i.e., you are purchasing a COTS library),
then you *wrap* their library functions to map their error_t's to *your*
error scheme.

E.g.,

void *
MyMalloc(
    size_t size,
    error_t *error
) {
    void *ptr;

    if (size > MAX_ALLOCATION) {
        MakeErrorCode(Err_TooBig,
            "Allocation too large.",
            "The requested piece of memory was larger than the "
            "maximum supported for the process."
        );
        *error = Err_TooBig;
        return NULL;
    }

    ptr = malloc(size);
    if (ptr == NULL) {
        MakeErrorCode(Err_NoMemory,
            "Out of memory",
            "The free memory pool has been exhausted."
        );

        *error = Err_NoMemory;
        return NULL;
    }

    return ptr;
}

[Note that I am not trying to be "friendly" with my error messages
in this example. Also, the semantics for this example try to
resemble the semantics expected of traditional malloc. I would,
instead, return a "Result_t"]
reporting system now. If they both work for you, then you can require
them to use any system you want, and any sane system should work
(although simpler systems may be more effort to scale up).

In practice, the former case usually means that each set of third-party
code has its own type for errors, and the functions you call them from
will report errors upwards according to their own conventions, which need
have nothing to do with the third-party code. But you're limited to the
information that their APIs already provide.

Of course! Just as if you want to change the syntax of "decimal
strings" to prohibit leading (or TRAILING!) zeroes yet *mandate*
at least one character on each *side* of the decimal point, you
can't use strtod(3). Or, if you want to conditionally be able to
limit the range of values that can be represented by such
strings, etc.
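As an illustration of wrapping such a stricter grammar yourself, here is one possible reading of "no leading (or trailing) zeroes, at least one digit on each side of the decimal point"; the exact rules are my guess at the intent, and the function name is invented:

```c
#include <assert.h>
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Accept "D+.D+" where the integer part has no leading zero (unless it
 * is a lone "0") and the fraction has no trailing zero (unless it is a
 * lone "0"). One reading of the stricter grammar; not a fixed spec. */
bool strict_decimal(const char *s)
{
    const char *dot = strchr(s, '.');
    size_t ilen, flen, i;

    if (dot == NULL)
        return false;                 /* decimal point is mandatory */
    ilen = (size_t)(dot - s);
    flen = strlen(dot + 1);
    if (ilen == 0 || flen == 0)
        return false;                 /* a digit on EACH side */
    for (i = 0; i < ilen; i++)
        if (!isdigit((unsigned char)s[i]))
            return false;
    for (i = 0; i < flen; i++)
        if (!isdigit((unsigned char)dot[1 + i]))
            return false;
    if (ilen > 1 && s[0] == '0')
        return false;                 /* no leading zeroes */
    if (flen > 1 && dot[flen] == '0')
        return false;                 /* no trailing zeroes */
    return true;
}
```

Range limits, if wanted, would be a second check layered on top after a strtod()-style conversion of the now-validated string.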
 
M

Malcolm McLean

"parameter" passed to it a "programming error"?  That would imply
the caller had the responsibility of checking it for correctness
(eliminating the need for the parser's functionality).
You have to specify that in the "contract". To say that "the result of
passing a negative number to square root is undefined" is reasonable.
So then the code can loop forever, because it never converges on root
x. But that doesn't have to be the contract. You could say that "the
result of passing a negative number will always be NaN". Then
if(datamissing) return sqrt(-1); /* generate a NaN */ is correct
code.
To say "this function must be passed a valid Basic program" is
probably unreasonable. But again it depends on the situation.
I think "expect" is overly cynical.  I would say "you PREPARE FOR
<any input> to be 'malformed'" -- since you can't control the
outside world.
In the case of a non-trivial script in any programming language,
written by a human, the first time it is automatically checked by a
machine, almost certainly an error of some sort will be uncovered. In
the case of sqrt(), most experienced programmers know that generally
machines can't handle imaginary numbers and that sqrt(-1) * sqrt(-1)
won't yield the mathematically correct result.
Returning to my original example (scanning a string of chars to
extract the numeric value expected to be represented therein),
that function (pure function using your terminology) can report
the nature of any "incompatible input" to any degree of detail
that it considers appropriate.  Not just "hey, I couldn't find
a valid 'number' in this mess..."

Those reports make sense in the context of a function that
is responsible for extracting the numeric value represented by a
string of characters.  REGARDLESS OF WHAT THAT NUMBER IS BEING
USED FOR.  And, regardless of the actual value!
So somewhere in the code we have a function

bool isvalidnumber(char *str)

calling that function with any C string whatsoever is not a
programming error. Having called that function, you can then call atoi
(which has undefined behaviour on overflow) or strtod().
If isvalidnumber() returns false, and you are expecting a number in the
input, that's a parse error.
E.g., if that routine also had the responsibility of testing that
the value was within a particular set of bounds, it could add
"Value specified exceeds the maximum allowed.", etc. to its
lexicon of errors.

None of these reports need to know whether the "value" is being
used as a representation of AGE, azimuth/elevation, speed,
stock market index, etc.  You are dealing with a *value* -- just
like when you complain that the first letter of a sentence is
not capitalized (you don't even look at the CONTENT of the
sentence to understand the nature of that "report").

But, you obviously don't want that "pure function" to become
a "procedure" by virtue of informing the user directly of
this error (this would also badly contort the design of
your code).
Exactly. So you pass errors like that up and up, until you reach a
procedure. The procedure then has access to IO, and can tell the user
that it expected a number but found a non-numerical string. There is a
problem of how to make the message informative - sometimes "parse
error reading input file" is all you want, sometimes you want the line
number, sometimes you want to print the offending string. Low-level
functions to extract numbers can't sensibly take these decisions.
Nor do you need to return the text of the message to the caller!
(how do you decide whether you should return the en_US version
of the message or the es version?)

Instead, return an identifier that allows the caller to fetch
a version of the error that *it* considers appropriate.

It, in turn, can embellish -- or replace -- that message with
something appropriate to its understanding of the application.
E.g., perhaps the caller is responsible for obtaining the
coordinates of a point in 2D space.  It *uses* the called routine
to parse the string (that is provided to *it* by ITS caller!)
for the X coordinate, then the Y coordinate.  An error reported
during the first parse might cause it to report an error to
*its* caller like:  "Bad value for X coordinate." AUGMENTED
by the report returned by the function that had actually
detected the lower level error "Non-numeric character encountered
in value."
It's found a non-numerical string for the x co-ordinate. So processing
should normally stop at that point. The NaN x co-ordinate should
never be passed to the triangle drawing routine. There are problems if
we have something like this

draw_triangle( void (*get_point)(double *x, double *y) )
{
}
because get_point() has no way of passing the error condition up the
call chain.
Or not (depends on the range of errors that you document
get_point() as reporting).  You could, instead, ignore the
detail presented to you by any of your "underlings" and
just report:  "Bad point".
You do that, but you don't need a centralised error system. The errors
are part of the contract between caller and function.
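One way to repair the draw_triangle() signature above so the callback *can* pass an error condition up the call chain is to give it a status return. A sketch; the status type, names, and example callbacks are all invented:

```c
#include <assert.h>

/* Invented status type for the sketch */
typedef enum { PT_OK = 0, PT_BAD_POINT } pt_status;

/* A status return lets the error travel back up the call chain
 * instead of smuggling a NaN co-ordinate into the geometry. */
typedef pt_status (*get_point_fn)(double *x, double *y);

pt_status draw_triangle(get_point_fn get_point)
{
    double x[3], y[3];
    int i;

    for (i = 0; i < 3; i++) {
        pt_status s = get_point(&x[i], &y[i]);
        if (s != PT_OK)
            return s;        /* stop before drawing with bad data */
    }
    /* ... rasterize the triangle here ... */
    return PT_OK;
}

/* Example callbacks */
static pt_status good_point(double *x, double *y)
{
    *x = 1.0;
    *y = 2.0;
    return PT_OK;
}

static pt_status bad_point(double *x, double *y)
{
    (void)x;
    (void)y;
    return PT_BAD_POINT;     /* e.g., the parse failed upstream */
}
```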
 
S

Shao Miller

Well, Ben Pfaff has provided one example where parenthesizing a macro
definition interferes with it's intended usage.

ITS ITS ITS ITS ITS ITS ITS

"it's" == "it is"
"its" implies ownership.
 
D

Don Y

Hi Lowell,


[much elided]
The distinction you're making here is about providing appropriate
abstraction.

Yes. At any given level, the extent to which you can "generalize"
or "specialize" the information that you report varies. You
interpret an "error" (it's really hard to avoid using that term
though "report" seems too generic) in the context that *you* are
aware of.

"What is the nature of this activity that I have delegated to
this underlying routine -- GIVEN MY ROLE IN THE UNIVERSE?"
Your technique of ginning up an error value from the file
name and line number doesn't help with this, because the agreement on

The file/line/function is only used to create a KNOWN TO BE UNIQUE
identifier for this INSTANCE of a particular error. E.g., you might
check for a particular "requirement" in several different (disjoint)
places in a function. They might all represent the same general
error (e.g., out of memory) that you will report.

But, they originated in different places! One was raised as
the result of conditional_1 while another might be from
conditional_3. By reporting *where* the error was raised,
someone "remotely" can see exactly where IN THE CODE the
system is reporting the error. This helps cover the case
of buggy code:

"Your software is complaining that I have 'too many files
open'. When pressed for details, it tells me that there
is a limit of '9' files. Well, I only have *7* files open!"

"Could you please tell me the 8 digit error code displayed
in the lower left corner?"

"Sure. 10330449."

"Hmmm... ah, OK. Yes, I see. That error stems from the
fact that the code WHERE IT IS RAISED says:
if (num_files > 7) {
instead of:
if (num_files > MAX_FILES) {
Thanks for bringing this to our attention!"

(Of course, that last comment would not be conveyed to the
user in that level of detail. Instead, you'd say, "I'll
submit a report of this problem and we'll have an update
available for you _______")
semantic returns is between caller and callee -- going further back the
call chain isn't helpful (in fact, it's actively harmful) to maintaining
usable abstractions.

Now that you're thinking in terms of abstractions, it also becomes less
important to worry about exactly what the "identifier" is, and you can
just focus on what it means. It's not a number, it's an object, which
represents everything the caller needs to know about the error.

Exactly. From the *caller's* viewpoint, it is a "bad area code"
or "bad IQ value", etc. It doesn't need to be concerned with
*what* makes it "bad".

OTOH, if the user is confused or DISAGREES with that assessment, you
want a means for getting that information to the user. But *you*
(the caller) don't want to have to independently examine the
issue (duplicating the efforts of the callee -- possibly in a
DIFFERENT manner!). So, you want to be able to expose the next
level of detail to the user... "drill down".

The new information will be more general (in terms of abstraction
since it knows nothing about the meaning of the "bad value")
and more SPECIFIC in terms of the problem that it is reporting:
"non decimal digit", "too many decimal points", etc.
It can
attach additional information, or translate into a different convention
for representing errors, before it reports back to *its* caller.
Exactly.

You'd like a system for adding more information about the error, so each
level can use the specificity of the lower level, and add on the
abstraction that it was hiding from the higher level. This is a
fundamentally different (and vastly more interesting) topic than what
the bottom-level caller reports to its immediate caller.

Yes. Ideally, the UI level first reports to the "user"
(whatever that "user" might be). In a spreadsheet application,
the way that a particular error (<frown> again, bad word) is
reported would make sense to a "spreadsheet user".

From the spreadsheet's point of view, a more detailed description
of the error makes sense.

E.g., if OpenFile() fails because of Err_NoMemory, the application
might free up some memory and repeat the operation. OTOH, if
Err_NotFound is returned, it might just report that to the user.
The code only cares about the "text" of the error when it needs
to report that to an organic being.

The "error CODES" are just symbolic ways for the software to
*efficiently* understand the results reported. The software
needs to be able to efficiently decide if this is an "error",
"warning" (potential "qualification") or successful result.
It doesn't want to understand English, French, Portuguese, etc.
to garner that information.
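The Err_NoMemory/Err_NotFound branching described above might be sketched as below; the code and action names are invented for illustration:

```c
#include <assert.h>

/* Invented codes and actions for the sketch */
typedef enum { Err_None, Err_NoMemory, Err_NotFound } app_err;
typedef enum { ACT_NONE, ACT_FREE_AND_RETRY, ACT_REPORT } app_action;

/* The software branches on the symbolic code; it never parses the
 * (possibly localized) message text to decide what to do. */
app_action recovery_action(app_err e)
{
    switch (e) {
    case Err_NoMemory:
        return ACT_FREE_AND_RETRY;  /* free some memory, repeat the op */
    case Err_NotFound:
        return ACT_REPORT;          /* nothing to recover; tell the user */
    default:
        return ACT_NONE;
    }
}
```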
 
K

Keith Thompson

Shao Miller said:
I think you meant:

#define ANSWER (6 * 7)

I think you need to read "The Hitchhiker's Guide to the Galaxy", by
Douglas Adams. 42 is given as "the ultimate answer to life, the
universe, and everything". It's eventually discovered that the question
is "What do you get when you multiply six by nine", demonstrating that
there's something fundamentally wrong with the universe.

See also <https://github.com/Keith-S-Thompson/42>.
 
D

Don Y

Hi Stefan,

Most C-program processes do not create such nice error
messages when the memory for objects with automatic storage
duration is exhausted. Sometimes, they just die
ungracefully.

Sure. But you're surely not advocating the continuation of
that practice? "Just SIGSEGV and let the user figure out
what's wrong. Don't even bother checking malloc's return value!"
<grin>

Most programs (esp desktop/mainframe) operate on the assumption
that they will have what they need -- often without ever THINKING
about what they actually need! ("We'll cross that bridge when we
come to it"). Few actively consider how they will react to
these conditions when/if they are encountered.

"Ooops!"
Saying that the address book was full, when in fact there is
no memory with dynamic storage duration available anymore,
might mislead the user into believing that the specific
memory of his address book was full when in fact the
general dynamic memory is exhausted, which means that now
all of his "books" suddenly are full. It might make him
believe the false idea that there is a fixed number of
entries reserved for his address book, which now is reached.

Drilling down (to the error reported by malloc itself) exposes
that detail to the user.

In my case, the database presents remedies to each "error"
so the user is advised on the course of action he should/could
follow. That's the point of being able to give very specific
information to the "user" -- so he can best utilize the
software to achieve *his* goals.

E.g., if "system memory" is exhausted, he can learn that he
can remedy this by freeing up memory *anywhere* (within reason).
I.e., he can trade an MP3 file for one or more address book entries.

OTOH, if the address book has fixed size structures that impose
a limit on the number of entries, a more appropriate remedy would
be "delete one or more address book entries before trying to add".

Malloc() doesn't need to know about these remedies. It doesn't
know *where* it will be used nor *how*. The context appropriate
for reporting the error to the user is the context that offers
explanations and remedies to the user (note that this context
can change as the user drills down).
If the lack of dynamic memory of the C-program process is a
lack of system memory, something less destructive, such as
just closing an unused window, might be more appropriate as
a counter measure than deleting precious address-book entries.

Usually most users will get a better, more correct idea of
the state of things from "Out of memory" than from "Address
book full" in the case that the process encounters a null
result from malloc.

You can't make assumptions about the skill level of your user
(well, you *can* but then you implicitly restrict your user
base). You want to be able to offer as much detail to the
user as the application considers appropriate for the types
of users it expects to interact with.

E.g., I see a SIGSEGV and I know what the problem is. If it
was also tagged with a __FILE__ __LINE__, then I would also
know *where* it is!

OTOH, your grandmother's eyes would cross if she saw the
same "report".
 
I

Ian Collins

Hi Stefan,



Sure. But you're surely not advocating the continuation of
that practice? "Just SIGSEGV and let the user figure out
what's wrong. Don't even bother checking malloc's return value!"
<grin>

There is nothing a program can do if the memory for objects with
automatic storage duration is exhausted.
 
M

Malcolm McLean

You can't make assumptions about the skill level of your user
(well, you *can* but then you implicitly restrict your user
base).  You want to be able to offer as much detail to the
user as the application considers appropriate for the types
of users it expects to interact with.
Normally you've got at least three audiences for an error report. The
user usually has no interest in the program other than as a tool to
get his work done, but he wants some response other than the program
simply failing to work as expected. The first line technical support
person needs to know if a problem is on his list of known issues, and
the programmers need to know in which routine the program failed and,
ideally, what data it was operating on at the time.
 
S

Shao Miller

Hi Shao,

If you can't pack all of the information you wish to have associated with an exception or error condition or whatever into an 'int', then perhaps you're better off allocating some status and passing it around:

/* my.h */
/* A simple error code */
typedef int myerr_t;

/* Extended status information */
typedef struct s_status s_status;

struct s_status {
const char * message;
/* ... */
};

/* my.c */
myerr_t operation_foo(
int * param1,
char * param2,
s_status * status
) {
myerr_t result;
s_status status_internal;

/*
* If the caller hasn't provided storage for status,
* then they're not interested. Use our own for
* called functions
*/
if (!status) {
status = &status_internal;
memset(status, 0, sizeof *status);
}
/* ... */
result = operation_bar(13, param2, status);
if (!MYERR_SUCCESS(result)) {
/* Actually, we have specific info for this case */
SetStatusMessage(status, "Bar failed while blargling");
SetStatusSourcePoint(status, __FILE__, __LINE__);
/* Twiddle the case for out-of-range major error type */
if (MYERR_TEST(result, MyErrOutOfRange))
SetStatusTwiddle(status);
goto err_bar;
}

/* Success! */
result = SetStatusGeneralSuccess(status);

err_bar:

return result;
}

[...and more...]

However, it's possible that you've already considered and dismissed
material akin to this response post's content. :)

<grin> I've tried to think hard about the sorts of housekeeping
to which any *manual* system would be vulnerable. Since I am
operating in a rather "flush" environment, I am planning on
using those resources to make the code and user interfaces
more robust and full-featured.

Just out of curiosity, did you skip over the 's_status' part of my
previous response, or dismiss it as Not The Right Strategy For You?

I'm trying to avoid complicating the discussion with the introduction
of mechanisms of passing "qualifying information" (what you call
"extended status information") up to the caller. This is a whole
'nother can of worms as each potential error/result could potentially
have its own idea as to "what's important/pertinent".

I think you've misunderstood or I've failed to communicate or both. :)

If I've read all of your posts correctly, including your older posts in
different threads, you're interested in satisfying the following
requirements for any exceptional condition:

- In which translation unit did the exception occur?
- What was the nature of the exception?
- How is the low-level exception relevant as part of the
higher-level's usage?
- Attach some human-readable text to the exception or allow for such a
look-up
- Avoid manual accounting of the infinitude of possible exceptions in a
giant header file
- Allow for a change in a called function, such as a new ability to
detect an exceptional condition, to be transparent to the caller, but
not prohibitive of the caller being able to be changed to detect this
new exception

Is that right? Are some requirements missing? It's a bit of a
"put-together."
My current approach just focuses on indicating *where* the "test
failed" and in which context that was encountered.

I don't understand why the code example I provided above fails to
satisfy these.

If you're concerned about the distinction between 'myerr_t' and
's_status', then you can omit 'myerr_t' altogether.

There is nothing that I'm aware of to prevent something like an
's_status' from being a linkable node in a linked-list. A low-level
function can determine that "there are an invalid number of decimal
points when a decimal number is expected," then the calling function can
catch that and further attach that "the speed of the car is expected to
be a decimal number," and its calling function to catch that and further
attach that "the time taken to reach the destination cannot be computed,
given the input," etc.

Again, if everything you need can't fit in an 'int', or you don't want
to manually account which 'int' values map to which errors, you can use
pointers. If you are concerned about constants, maybe you could
consider "address constants," where the compiler worries about accounting.

You can have:

/* exceptions.h */
#define EXCEPTION_SIGNATURE 0x4213
typedef char exception_t[1];

/* some_exceptions.h */

struct s_exception {
int signature;
int * exception_type;
struct s_exception * next;
/* Members for __FILE__, __LINE__, message, etc. */
};

struct s_exception_too_many_decimal_points {
struct s_exception common[1];
int decimal_points_found;
const char * copy_of_user_input;
};

extern exception_t ExceptionTooManyDecimalPoints;

Now a called function can produce and populate a 'struct
s_exception_too_many_decimal_points' object, attach it to a chain of one
or more exceptions (or status, or whatever else you might wish to call
it), and allow the caller to detect it:

/* caller.c */

void some_calling_func(
const char * user_data,
struct s_exception ** exception_list
) {
#define user_data "42..13" /* Dummy example */
struct s_exception * exception;
double d;

/* So don't bother with returning a value */
get_decimal_number(&d, user_data, exception_list);

for (exception = *exception_list;
exception;
exception = exception->next) {
/* Something odd happened */
if (exception->exception_type ==
ExceptionTooManyDecimalPoints) {
PushNewGeneralException(
exception_list,
"Speed must be a decimal number",
__FILE__,
__func__,
__LINE__
);
continue;
}
/* Handle other exceptions we understand */
/* ... */
/* Handle unknown case */
UnhandledException(
exception_list,
"Oh no!",
__FILE__,
__func__,
__LINE__
);
/* UnhandledException will terminate */
/* Can't reach here */
continue;
}

/* Check the range */
if (d < 0.0 || d > 200.0) {
PushNewGeneralException(
exception_list,
"Speed cannot be greater than 200.0",
__FILE__,
__func__,
__LINE__
);
}

return;
}

Here 'some_calling_func' hasn't been developed to the point of defining
its own special class of exceptions (via exception_t in a header), so it
simply pushes "general" exceptions onto the head of the list of
exceptions. A caller can now catch all of this detail, unless
'some_calling_func' gets an exception it can't handle, in which case it
forces the program to terminate (it needn't unless it's critical).

Or if you don't like that the arguments to 'PushNewGeneralException'
aren't "gathered together," you could have, in 'some_calling_func':

/* ...as before... */
/* Something odd happened */
if (exception->exception_type ==
ExceptionTooManyDecimalPoints) {
static const struct s_exception my_exception = {
EXCEPTION_SIGNATURE,
ExceptionGeneral,
NULL,
"Speed must be a decimal number",
__FILE__,
__func__,
__LINE__
};
PushNewGeneralException(
exception_list,
&my_exception
);
continue;
}
/* ...as after... */

Then you could actually scan your final program for the exception
signature and enumerate all exceptions in the program, perhaps for
documentation purposes. Some implementations also have extensions where
you could specify a particular "section" for static-duration objects, so
you could choose to put all exceptions in one section, or even as a
table (as iPXE demonstrates with other aspects of its code, albeit not
exceptions) that can be accessed and enumerated by the program itself.
E.g., imagine someone says the application wasn't accepting their
"input" (cf the "get value" example). People are notoriously
inaccurate at telling you the *exact* message that they are
receiving:
"I typed in my age but it said the number I typed was bad"
"No, I'm sure it didn't say that (because I *know* what all of
the error messages are and none of them are 'the number was bad')"
"Well, that's what it *meant*! I can't remember the actual WORDING..."
"Could you do whatever it was you were doing and provide me with
the error identifier located at the end of the message?"
"OK, it says _______"
"Ah, you've just typed in a '-' sign but you did so after you
started typing in the numeric value/digits. If you want the value
to be negative, you need to type the '-' sign first. OTOH, if you
are trying to type '2012-1965' and hoping the machine will interpret
that as 47, I'm sorry but the machine doesn't have that capability..."

I'm not worried (in this discussion) about providing the extra
details (e.g., "2012-1965") to better explain/understand the
nature of the error to that finer degree.

So omit the 'PushNewGeneralException' pieces above and simply call
'UnhandledException' for any exceptional condition.
 
D

Don Y

Hi Malcolm,

You have to specify that in the "contract". To say that "the result of

Yes, of course. The point I was illustrating is you tend to want to
*hide* detail, not expose it. (Divide and Conquer)
passing a negative number to square root is undefined" is reasonable.
So then the code can loop forever, because it never converges on root

Exactly. When preparing a specification for a client, any time they
respond "I don't care" to one of my questions, I say, sotto voce,
"In the case of ______, crash and burn."

Amazing how quickly people start to "care"! :>
x. But that doesn't have to be the contract. You could say that "the
result of passing a negative number will always be NaN". Then
if(datamissing) return sqrt(-1); /* generate a NaN */ is correct
code.
To say "this function must be passed a valid Basic program" is
probably unreasonable. But again it depends on the situation.

Exactly. I was drawing attention to your partitioning of pure
vs. I/O capable functions and the constraints you placed on
providing "correct input".

[In fact, I litter the preamble of each function with a litany
of assertions to catch any such violations as well as make them
crystal clear: In case the developer might not have understood
the written specification, *this* is what it boils down to!]
In the case of a non-trivial script in any programming language,
written by a human, the first time it is automatically checked by a
machine, almost certainly an error of some sort will be uncovered. In
the case of sqrt(), most experienced programmers know that machines
generally can't handle imaginary numbers and that sqrt(-1) * sqrt(-1)
won't yield the mathematically correct result.

Yes, but there is a wide world of other possibilities and users
out there. If, for example, you prohibit leading zeroes in a
value, someone typing in "0001" might be surprised to see it
rejected.
So somewhere in the code we have a function

bool isvalidnumber(char *str)

calling that function with any C string whatsoever is not a
programming error. Having called that function, you can then call atoi
(which has undefined behaviour on overflow) or strtod().

Except the "isvalidnumber" functionality is folded into the
"atoi/strtod" function. So, that composite function has to
be able to report parsing problems as well as conversion
problems (overflow, loss of significant digits, etc.)
If isvalidnumber() returns false, and you are expecting a number in the
input, that's a parse error.


Exactly. So you pass errors like that up and up, until you reach a
procedure. The procedure then has access to IO, and can tell the user
that it expected a number but found a non-numerical string. There is a
problem of how to make the message informative - sometimes "parse
error reading input file" is all you want, sometimes you want the line
number, sometimes you want to print the offending string. Low-level
functions to extract numbers can't sensibly take these decisions.

I've ignored the issue of providing "supporting data" for the
message in this discussion (it's already too complicated! :> ).
But, anything that an upper level would *want* to obtain for
"report augmentation" would have to be made available to it
by the appropriate underlings.

E.g., my "malloc" is highly parametrized. You specify the
heap against which the request is made, the allocation policy
to use, the selection policy, etc. To simply return "NULL"
for any sort of error does very little to help the user *or*
the developer/helpdesk sort out the exact nature of the problem.
It's found a non-numerical string for the x co-ordinate. So processing
should normally stop at that point. The NaN x co-ordinate should
never be passed to the triangle drawing routine. There are problems if
we have something like this

draw_triangle( void (*get_point)(double *x, double *y) )
{
}
because get_point() has no way of passing the error condition up the
call chain.

get_point would return an error indication/report. draw_triangle would
*also* return an error indication/report. "Turtles all the way down!"

Eventually, something would "handle" the error and decide how to
continue -- prompt for a new "triangle", etc.

(You have to *plan* how you are going to handle errors. You can't
just squeeze that in after-the-fact)
You do that, but you don't need a centralised error system. The errors
are part of the contract between caller and function.

Yes. The centralization deals with how the errors/reports are
"bound to human readable terms". I.e., the messages, explanations
and remedies.

A 1960's style program would just spit out "error 2345" and quit.
My goal is to present locale-specific error messages in increasing
levels of detail (an experienced user only needs a brief message
to know what he did wrong; a novice might need their hands held)
along with explanations for the "error report" and possible remedies.

I.e., smart applications that don't just perform a task but do
so in a way that facilitates the user's performance of that task.

As I said, the (run-time) mechanism is not the problem. I've
already got that working well.

The problem lies with the compile time support. Making it
robust *and* efficient.

The semantics of the whole thing constrain what a developer
should -- and should *not* -- be able to do. The mechanism
should enforce those semantics.

E.g., an error code isn't a number. It isn't a string. It's
an ERROR CODE. A number or a string might *represent* a
particular error code but the semantics of numbers/strings
don't (and CAN'T!) translate to error codes themselves.

An error code is always a constant. You can't "alter" an
error code. OTOH, you can create a container to hold an
error code. You can copy that representation to a similarly
typed container. etc.

You can check to see if two error codes are equivalent
(whatever that means). But, there are no ordering operators
to *rank* error codes.

You can determine if an error code (again, "result" is a better
choice of term) represents a true "error", a "warning" (i.e.,
a qualified success) or actual "success". But, there are
no ordering relationships among these classifications other
than error != success, warning != error, etc.

You can't perform arithmetic on error codes.

You can't perform string operations on error codes.

You can't *duplicate* error codes (though you can have multiple
references to a particular error code).

You can't take the address of an error code (though you can
get a pointer to an error code container!).

You can't dereference an error code.

So, any implementation scheme that equivalences particular error codes
to other "normal" C constructs is suspect.

For example:
#define Err_NoMemory (23)
opens the door for:
if (Err_NoMemory + 2 < 90) {
which doesn't make sense -- it relies on some magical, NONGUARANTEED
aspect that the developer *thinks* relates to "90" and "2" in some
way.

Or:
const char *Err_NoMemory = "Insufficient Memory";
being exploited as:
printf("There is %s to perform your task.", Err_NoMemory)

Etc. (I'm sure you can extrapolate this sort of problem).

So, this limits where you can expose the error code in the source
file. Note how jarring the visual syntax of my examples have been
wrt MakeErrorCode usage:

if (size > MAX_ALLOCATION) {
    MakeErrorCode(Err_TooBig,
    "Allocation too large.",
    "The requested piece of memory was larger than the "
    "maximum supported for the process."
    )
    *error = Err_TooBig;
}

OK, I could preface the MakeErrorCode invocation with some leading
whitespace:

if (size > MAX_ALLOCATION) {
            MakeErrorCode(Err_TooBig,
                "Allocation too large.",
                "The requested piece of memory was larger than the "
                "maximum supported for the process."
            )
    *error = Err_TooBig;
}

But, what you *really* want to do is:

if (size > MAX_ALLOCATION) {
    *error = MakeErrorCode(Err_TooBig,
                 "Allocation too large.",
                 "The requested piece of memory was "
                 "larger than the maximum supported "
                 "for the process."
             );
}

[apologies for all the whitespace if you are using a proportional
typeface]

But, this invites MakeErrorCode() being used in expressions
(you can do things to prevent this).

And, most of all, means any tool you develop to implement this
functionality (create a unique error code named by the first
parameter, squirrel away the explanations offered by the other
parameters, mark it as an ERROR and not a WARNING, etc.) now
has to understand C syntax.

Ah, wait! Let's have the preprocessor run through the source
(since it understands C). Then, extract the necessary parameters
from that macro (having been flagged in the preprocessed output),
*then* we can actually worry about compiling the module!

Hmmm... the source needs to be rewritten to make the "value"
of Err_NoMemory available. And, what if it also requires other
error codes defined in other modules?

I.e., this gets expensive -- though it can still be automated
and run well on today's faster machines. It just doesn't
scale well.

I have another approach that I am implementing that should make
a lot of these issues go away. But, it still feels klunky.
(The ideal would be to build the support into the compiler
but that ties you to *that* compiler! Processing source
files in a separate tool is more flexible)
 
D

Don Y

Hi Ian,

There is nothing a program can do if the memory for objects with
automatic storage duration is exhausted.

In a general sense, no. But, in my case, the program can choose
to handle a signal delivered by the OS when it exhausts this
resource. It can then request its stack to be increased and
resume execution.

Or, *it* can inform the user of this problem and die GRACEFULLY.

Or, it can let the OS's default behavior KILL the process.
 
I

Ian Collins

Hi Ian,



In a general sense, no. But, in my case, the program can choose
to handle a signal delivered by the OS when it exhausts this
resource. It can then request its stack to be increased and
resume execution.

What if the OS is unwilling or unable to do this? Process limits aren't
usually something a user process can increase.
 
D

Don Y

Hi Malcolm,

Normally you've got at least three audiences for an error report. The

Yes. -----------------^^^^^^^^
user usually has no interest in the program other than as a tool to
get his work done, but he wants some response other than the program
simply failing to work as expected.

Given that the first response most users take to an error is to
REPEAT THE SAME ACTION (hoping for different results :> ), you
now have a user who has made the same mistake TWICE. Chances
are, his level of anxiety is already elevated. Giving him the
same error message is a foolish response to this second attempt.
You should automatically begin embellishment.

if the user KNEW he had made an error in the first case, he would
retry WITH GREATER CARE in the second attempt and the error would
not manifest. OTOH, if he doesn't understand what he did wrong
and repeats the same actions -- only to be met with the same
UNINFORMATIVE REPORT (hey, if it was informative, he would now
KNOW what was wrong with his first attempt and wouldn't be
retrying it!) -- *you* are the problem.

"OK, so I typed a 'bad value'. What the hell is WRONG WITH IT???!"
The first line technical support
person needs to know if a problem is on his list of known issues, and

Or, how to interpret the error message for the user. *Effective*
help desk staff make note of what problems users have with a
product/program and convey that information back to the development
staff. Sometimes, the language is defective/insufficient. Or,
the process is unintuitive ("Why do I have to enter my SSN before
I *can* enter my name?").
the programmers need to know in which routine the program failed and,
ideally, what data it was operating on at the time.

Yes. Being able to report where EXACTLY in the code the 'reporting
condition' was detected goes a long way to expediting resolution
of the problem.

"The malloc() on line 27 of file foo.c returned NULL. I know
this because the error was reported in the conditional on line 28!"

Time for my afternoon walk before it gets much cooler...
 
D

Don Y

Hi Ian,

What if the OS is unwilling or unable to do this? Process limits aren't
usually something a user process can increase.

(sigh) I'd really rather not get into this "distraction", but...

The simple answer to your question is: then the process
(which might be all or part of an "application") can either
die gracefully, die ungracefully, or blindly trudge along
expecting the OS to take a hatchet to it at some point
(if you've already tried to *use* those resources that don't
yet exist -- i.e., and trapped to the OS on a page fault -- then
that hatchet job will be "real soon now" :> )

The longer answer has to address more issues. :(

A process (well, technically, an executing THREAD within a
process) can ask for resources (various flavors of memory be
some of those) at any time. The OS can choose to grant,
deny or *defer* that request.

A process (same caveat as above) can also *implicitly*
request additional resources. E.g., if it has been configured
with a "stack allocation" of 20KB but only 10K have been
actualized, a page fault in the stack segment can cause the
OS to provide the underlying physical memory without the
"process" knowing this has happened.

[of course, a process watching a system timer might wonder
why that last instruction took 100us to execute! :> If
this sort of behavior is not acceptable -- e.g., if the
process has timeliness constraints -- then the configuration
could be modified to instantiate all of the stack allocation
at startup. Faults thereafter would have to be *negotiated*
with the OS!]

A "smart" application would review its past behavior and
anticipate its needs -- requesting those resources at
the start of execution (instead of taking multiple hits
each time it "needs a little more"). This can also
provide more immediate feedback to the user of resource
shortages before the application STUMBLES onto that
limit (e.g., chugging along, trapping 9 times, getting
an extra 1K each time -- and continuing -- until eventually
discovering that the 10th -- and final -- trap's request
won't be satisfied! "Why didn't you tell me that before
I got started?" "Because you didn't ASK!" :> )

A *really* smart application would scrutinize its actual
workload and adjust its resource requests to best fit
the actual needs of that workload. I.e., aiming for
the sky when you only need "a little more" is likely
to leave your request unsatisfied.

The last aspect is the *transience* of those resources.

A process can *lose* resources while it is executing.
This can be implicit or explicit (without or with formal
notification).

E.g., as stack penetration decreases, the OS can silently
choose to recover that memory from the process (anything
"past" the current stack pointer is "undefined"!).

Similarly, the OS can reclaim portions of the TEXT segment.

[of course, these can be configured per the minimum
stack allocation for the same reasons outlined before]

The OS can also (asynchronously) *ask* a process to free
up resources -- voluntarily. The process can choose to
comply -- or not.

This allows a process to be designed to exploit resources
when they are available -- without penalizing other processes
that want to co-execute (since it has no knowledge of when
or if those processes might come along!). A cooperative
process will free up as much as it can as soon as it can.
So, if the process has been using those resources to
*anticipate* future needs or speed up responses (e.g.,
precalculating lots of options in anticipation of the
user asking for one or more of them), it can <shrug> and
give up those hopeful gains -- and resort to a leaner
implementation.

A process can also be *told* to release resources (on pain
of death :> ). So, an uncooperative process ("I asked you,
nicely, to free up some memory and you thumbed your nose at
me!") can't benefit from this antisocial behavior.

If the process truly can't operate without those resources,
"Sorry, Charlie!". (the OS said it NEEDS them. If you
can't provide them, the OS will HAVE TO take them from
somewhere. That could very well be *you*! The consequences
of this can vary -- but, in any case, *you* won't have
control over them!)

(deep breath)

In concert, the two mechanisms let the "system" dynamically
reapportion resources to fit the needs and priorities of
the processes/applications running on it.

E.g., if X requests more of a resource, the OS can choose
to grant it, if a surplus is available; *or*, can approach
other "consumers" looking for a "hand-out". Worst case,
it can kill consumers to make those resources available.

Or not.

The application writer can ignore all of these mechanisms
and just live with the default behavior. But, that means
the application is less capable and more brittle: "You
give me no options OTHER THAN to terminate you..."

As I said, I would really rather this not turn into another
"distraction"... :-/
 
D

Don Y

Hi Stefan,

The include-file can be generated:

printf( "#define YES (%d)\n", ++i );

Sure! But how do you know "YES" is ever USED anywhere?
(replace YES with MOTOR_OVERHEATED and every other
"error" condition that this file IMPLIES exists)

Also, you have now created a dependency that "every"
file relies upon. Add a new error code and every file
that #includes "errorcodes.h" has to be recompiled.

You also tempt people to do things like:

/* errors generated by get_value() */
#define TOO_MANY_DECIMAL_POINTS (27)
#define MISSING_SIGN (28)
#define LOST_PRECISION (29)

/* errors generated by run_motor() */
...

if ( (error >= TOO_MANY_DECIMAL_POINTS)
  && (error <= LOST_PRECISION) ) {
    // this must be an error propagated up from get_value()

So, if you later add a TOO_MANY_SIGNS error code, you risk
breaking this code (which should never have existed in the
first place). Error codes aren't "numbers" that can be
manipulated as such. They have no relationships to each
other.
This is a kind of library. When you define anything (say, a
function) in a library, you can never know whether the
client really uses it.

In *my* scheme, errors that are never signaled magically disappear
from the "error space" -- just like opting not to link in a
routine from a library.
I only see two extremes: Either maintain one table per
function (which I do), or one global table for all your
code (like Microsofts HRESULTs, see

http://source.winehq.org/source/include/winerror.h

). You seem to prefer some kind of mixture: one table
per subsystem. This is confusing since now merging, splitting
or changing subsystem boundaries will become more difficult,
because one also needs to adjust this table each time.

You have those problems IF YOU WANT TO MAINTAIN THAT TABLE!
*I* don't have "that table"! :>
If you do not want them to overlap, then you have
effectively one single global table per programmer (=Don Y).

No. No table. Period.
What happens when you join another programmer or
organization for a project? Will they agree to use your
table? Will they force you to use their error-code
convention?

That's a management issue. Another programmer joins your team.
Does your Management insist on coding standards and procedures?
If not, then, "Hey, this is how *my* code reports errors. I
don't really care how *yours* does -- except to the extent
that I use yours. Demonstrate how your scheme handles these
issues and I'll show you how MINE does..."

Sure. So you examine the entire source tree each time you
want to locate the source of an error? How do you guarantee
that get_isbn_number() doesn't use "TOO_MANY_DECIMAL_POINTS"
to indicate a malformed ISBN? What do you use to find error
code "instances" in the source tree? Is "TOO_MANY_POTATOES"
an error code?
You add tags in comments

#define OVERFLOW_DOUBLE 2 /* tag:numeric tag:iserror */

Then you can find all numeric error codes with grep.

Again, you are having to do this MANUALLY. I'm sorry but *I*
am not infallible :> And, most of the folks I've worked with
are no better than me!
If it is programmed in this way.


Such a tool cannot know the domain vocabulary of the
application.

The tool doesn't have to know the vocabulary. It provides a means
for the *developer* to write the messages. All it does is bind
error codes to messages, ensure that those error codes are
unique, indicate where each is raised in the source tree and
give a handle to this information back to the developer (and
the code he writes).
Usually, an application calls a library function, which
returns application-domain independent error codes. The
application then translates these into error messages which
use the language (domain vocabulary and natural language)
of the user of the application.

Exactly. As you move up the abstraction hierarchy, you apply
more context to your messages. "Bad value" becomes "Bad house
number" (as in a street address). The advantage is that an
upper layer of abstraction can get a description of the
exact nature of the "problem" instead of just treating it as
a "bad number" (what, specifically, *makes* it a "bad number"?)
When there are no keystrokes, it usually means that no one
has pressed a key, which does not seem to be an error. The
computer usually waits patiently.

Or the keyboard is unplugged. Or the pty connected to stdin
has disconnected. Or...
When the temperature controller defines this syntax
for its interface, it does not have to be the same syntax
that is used in the user interface of the application.

The temperature is an *aspect* of the application (not a hardware
device, etc.). It would be silly to expect users to specify
temperatures using roman numerals when all other numeric values
are specified in decimal.

I.e., "get_value()" would tend to be an implicit part of the
design of the user interface. You wouldn't want the user to
learn different rules for different parts of the application.
The application defines the syntax it expects from its
user. It then checks this syntax reporting any errors
to the user.

You don't *really* parse the user input, keystroke by keystroke,
in your main(), do you?? No atof(), strtod(), sscanf(), etc.?
If there are no errors, it converts this
syntax into whatever syntax the temperature controller
uses. (We do not want the user to learn a new application
syntax whenever the temperature controller is exchanged.)

If the temperature controller then reports any syntax
errors, that's an error of the application programmer,
not of the user.


I do not see anything above the application layer. The
application is the highest layer that communicates directly
with the user. It uses a layer below, which is the library
layer.

There are *many* layers. Within libraries, between libraries,
within subsystems, between subsystems, etc. Any of these can
be designed to talk to the user.

E.g., main() calls OpenFileDialog() to get a filename from
the user. OpenFileDialog() is "communicating directly with the
user" -- yet it is NOT at the top/application layer!
It is the duty of the application to explain that to him.


No, it exposes its own codes. When they happen to be the
same as the codes of a lower level, this is a coincidence.
For example say:

int f(){ return g(); }

here, f returns its own codes, converting the codes it
got from g, even if, in this special case, the conversion
happens to be simple, because it is the identity function,
by pure coincidence.


You have to translate them anyway, because all text
messages of libraries usually use a specific natural
language, say: English.

Now, as we all know, users typically do not understand
English but German. Well, whatever language they understand:
The library does not need to know. A file-copy library
should not have to know about natural languages.

Yes, the application must translate each and every little error
message into a language that the user understands, including
his domain vocabulary. As they say:

»Writing an application is a dirty job,
but somebody's got to do it.«

No. All you have to do is get a suitable message to the user
accounting for L10N/I18N. The mechanism that you use to do this
is up to you.

I can use Literate Programming to prepare documentation in
French and English *alongside* the actual sources that I
am documenting. Just because you don't use that technique
doesn't mean others can't!
The uppermost level (= the application layer) has to explain
the errors to the user.

Any level that deals with any (type of) user has to explain the
errors/results to that user. The LIBRARY has to explain its
results to the APPLICATION (using your terminology). In my
scheme, a layer can rely on its successive underlings for more
*specific*/detailed information about the error.
 
S

Stefan Ram

Don Y said:
You don't *really* parse the user input, keystroke by keystroke,
in your main(), do you?? No atof(), strtod(), sscanf(), etc.?

When I say »the application does this and that«, this
includes the possibility that the application might use
library functions to accomplish parts of »this and that«.

However, the application is in charge of what those library
functions do, because it calls them to do certain things
(the library functions are just »vicarious agents« or
»auxiliary persons«; they act on behalf of the application).
One says that Gustave Eiffel built the Eiffel Tower, while,
in reality, other people took part in that effort, too.

When I write a (recursive-descent) parser, parsing char by char
is usually easier and feels safer to me. For example:

(example is provided below as appendix 1.)
There are *many* layers. Within libraries, between libraries,
within subsystems, between subsystems, etc. Any of these can
be designed to talk to the user.

That is the magic of layers: They are called »layers« for a
reason! They are opaque! So, when you look at a layer from
the top, you just see that layer and not possible layers
below it. In a sense, to any layer, there is always only one
layer: The layer /directly/ below it. It does not have to be
aware of the layer above it. And surely not of the layers
below the layer below it. So there are always just two
layers to consider. That is how layering reduces complexity!

Otherwise, they are not layers, but components or subsystems
or so.

For example, when writing TCP code, you are just concerned
with IP, not with the hardware layer below it.

When one uses a standard file dialog, one is still
responsible for what the dialog does. Insofar, when an
application calls a file dialog, it is not forced to do so.
It could build a custom dialog. To the end user, it's all
part of just the application. The file dialog has become
part of the application. The application author is still
responsible. If the file dialog does not talk to the user as
it should according to the application guidelines /
contract, it has to be replaced by a custom dialog.

Appendix 1:

Grammar:

<numeral> ::= '2' | '4' | '5'.
<product> ::= <numeral>{ '*' <numeral> }.
<sum> ::= <product>{ '+' <product> }.
start symbol: <sum>.

Semantics:

as in C.

C Code:

#include <stdio.h> /* printf */

/* scanner */

static inline char get( int const move )
{ static char const * const source = "2+4*5)";
static int pos = 0;
char const result = source[ pos ]; pos += move; return result; }

/* parser */

static inline int numeral(){ return get( 1 )- '0'; }

static int product(){ int result = numeral();
while( '*' == get( 0 )){ get( 1 ); result *= numeral(); }
return result; }

static int sum(){ int result = product();
while( '+' == get( 1 ))result += product();
return result; }

/* main */

int main( void ){ printf( "sum = %d\n", sum() ); }

Error handling:

The code above has no error detection, because it was taken
from my short parser tutorial where adding proper error
handling to the code was given as an exercise to the reader.
 
