C, really portable?


Keith Thompson

jacob navia said:
I am not saying that the standard should specify
all possible errors. I am only saying that certain RANGES
of error codes should be specified so that portable programs
could test for i/o errors. This is all the more evident in
fprintf:
r = fprintf(...);
if (r >= MIN_PRINTF_IOERROR && r < MAX_PRINTF_IOERROR ) {
// Error handling for i/o errors
}

That would be portable now you see?

if (r >= MIN_PRINTF_FORMAT_ERROR && r < MAX_PRINTF_FORMAT_ERROR)
// handle printf format errors

What do you DO now to test for fprintf errors?

Well, you have to code:
r = fprintf(....);
#ifdef LINUX
#ifdef TRIO_PRINTF
if (r == ...)
#elif GCC_PRINTF
if (r ==...)
#elif defined(WINDOWS)
#ifdef LCC_WIN32
if (r == ...)
#elif defined(WATCOM)
#elif defined (GCC)

#endif

etc. etc., ad nauseam!

Actually, that's unlikely to work.

<SOMEWHAT_OT>
There is no GCC printf. gcc is a compiler, not a complete
implementation; it uses whatever runtime library is provided by the
underlying system. On some systems, that may be the GNU libc.
</SOMEWHAT_OT>

I'm fairly sure that many of the printf() implementations out there
don't use the returned value to distinguish among different kinds of
errors.

A quick look at a small number of implementations (documentation and
source) indicates that some (most?) versions are documented merely to
return a negative value on error; the actual value returned is likely
to be -1. Since this behavior conforms to the standard, I'd say it's
perfectly acceptable.

If you want to propose requiring printf() to return different values
for different kinds of errors, go ahead. Another approach would be to
require printf() to set errno to some meaningful value on errors; I'm
not sure which is better.

C's error handling is, in many ways, weak. Some functions indicate
errors via their returned values, which are too easy to ignore; others
set errno (which has its own problems). I'm not convinced that making
incremental improvements in particular functions is going to be all
that useful. I wouldn't mind seeing some kind of exception handling
mechanism in a future C standard, though not one as elaborate as what
C++ has. But I'm not sure it could be done cleanly without breaking
backward compatibility.
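
For illustration, the closest thing standard C offers today is
setjmp()/longjmp(), which can fake a crude "throw/catch". A minimal
sketch (the checked_fprintf helper and the message text are invented
for illustration, not a proposal):

#include <setjmp.h>
#include <stdio.h>
#include <stdlib.h>

static jmp_buf io_error_env;

static void checked_fprintf(FILE *f, const char *s)
{
    if (fprintf(f, "%s", s) < 0)
        longjmp(io_error_env, 1);   /* "throw" on any output error */
}

int main(void)
{
    if (setjmp(io_error_env) != 0) {   /* the "catch" clause */
        fputs("output error\n", stderr);
        return EXIT_FAILURE;
    }
    checked_fprintf(stdout, "hello, world\n");
    return 0;
}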

Again, comp.std.c is a better place to have this discussion.
 

Keith Thompson

Netocrat said:
On Sun, 11 Dec 2005 11:08:33 -0500, Eric Sosman wrote:
[discriminating fprintf errors]
r = fprintf(....);
if (r < 0) {
perror ("fprintf"); /* debatable */
...
}

The perror() call is dubious because the Standard doesn't
require that fprintf() set errno to anything meaningful.

It does guarantee that fprintf won't set errno to zero though (it may set
it to non-zero). So the code could be extended:

errno = 0;
r = fprintf(....);
if (r < 0) {
if (errno > 0)
perror ("fprintf"); /* debatable */
else
fputs("fprintf: unspecified error\n", stderr);
...
}

You probably want "if (errno != 0)". I don't see anything in the
standard that prohibits setting errno to a negative value, though the
values EDOM, EILSEQ, and ERANGE are required to be distinct and
positive. (I think using only positive values for errors is common
practice, though.)
 

Keith Thompson

Malcolm said:
How do you know you are not suffering from a mental illness?

I won't try to guess what you actually meant by that remark, but it
could easily be interpreted as a personal insult, and I think we've
seen more than enough of that in this thread.
 

jacob navia

Eric said:
Yes, but since the supposed non-portability depends
entirely on the elided material, it's a silly snippage.
That, I think, is Emmanuel's point.

No. In general, integer overflow is not handled at all in C,
in contrast to floating-point overflow, where you can use the
<fenv.h> functions (fegetenv and friends) to query the overflow flag.

This distinction between integer overflow and floating-point
overflow is quite surprising. Why should a detail like the machine
representation make any difference? That is explained nowhere.
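
For reference, here is a minimal sketch of that asymmetry, assuming a
C99 <fenv.h> implementation (fetestexcept() is the usual way to query
the flags; on the integer side the only portable option is to test the
operands before the operation):

#include <fenv.h>
#include <limits.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON     /* needed for reliable flag tests */

int main(void)
{
    /* Floating point: the overflow can be detected after the fact. */
    volatile double big = 1e308;
    feclearexcept(FE_ALL_EXCEPT);
    double d = big * 10.0;              /* overflows to infinity */
    if (fetestexcept(FE_OVERFLOW))
        puts("floating-point overflow flag is set");
    (void)d;

    /* Signed integers: overflow is undefined, so test beforehand. */
    int b = INT_MAX, c = 1;
    if ((c > 0 && b > INT_MAX - c) || (c < 0 && b < INT_MIN - c))
        puts("b + c would overflow");
    else
        printf("b + c = %d\n", b + c);

    return 0;
}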

The code I showed was just a short way of telling people what I am
speaking about. The "..." represents omitted code that assigns
some values to those variables. A more complete code snippet would
be:

int fn(void);
int fn1(void);
int a,b=fn(),c=fn1();
a = b+c;

I.e. b and c contain *some* integer value.

This was shortened to
int a,b,c;
....
c = a+b;

It was clear what I was talking about, but instead of
concentrating on the topic under discussion, you chose to
act as if it were not clear.
 

Mark McIntyre

I wanted to show how difficult it is to write anything really portable
because of the absence of error analysis standards in C.

And instead all you've shown is that you don't have much idea what
portability is.
 

Mark McIntyre

This distinction between integer overflow and floating point
overflow is quite surprising.

Only if you don't know much about how different architectures handle
integers. Again, I think the problem is that you have a limited
experience from which you're trying to generalise.
Why should a detail like the machine
representation make any difference? That is explained nowhere.

Why should it be? The Standard is the standard, not the rationale.
 

Eric Sosman

jacob said:
Since the errors are in a RANGE of numbers, and we have at least 16
bits in an int, we can easily make up ranges big enough to accommodate
a LOT of error codes, 32767 in fact. The standard would define
several ranges, and leave a lot of logical error code space unused
for the implementations to use.

My point is that you can't cram all the distinct errors
into a mere 32767 codes without discarding information. You
think that C discards too much, and propose a richer set of
error codes. But are you the final authority on what might
be interesting? You'd like to distinguish "disk full" from
"icky format string," but the next guy would also like to know
which disk filled up, or at which character position the format's
ickiness became evident.
It is absolutely not necessary for the standard to enumerate all
categories, only the most probable ones: i/o and format errors, like a
missing specifier for a conversion, etc. These two ranges would
suffice for most applications and would improve what is possible
in standard C...

I am not claiming and have not claimed that C's way of
reporting errors is ideal. I will freely grant that it would
be useful to be able to distinguish some kinds of programming
errors from some kinds of "environmental" errors. But I am
still not persuaded that the current state of affairs makes
"reasonable usage [...] impossible," nor that the shortcomings
present any barrier to portability.
 

Jordan Abel

No. In general, integer overflow is not handled at all
in C

But it is possible to guarantee that there is no overflow, depending on
what is in the elided code segment.
 

Eric Sosman

jacob said:
Very funny.

What do you do with
fprintf(somefile,...);

1) You do not test the result. Anyway, a full disk is something that
never happens, not to you anyway.

It depends on the circumstances. I try to apply a
level of testing that is calibrated to the importance
of the output. I cannot recall ever checking the value
returned by fprintf(stderr,...), for example, but I
certainly have checked whether the "main business"
output succeeded. (Sometimes I use ferror() instead of
checking every single fprintf(), but in any event I see
to it that there's an attempt to alert the user to a
problem.)
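
A minimal sketch of that ferror()-style pattern (the function name and
messages are made up for illustration):

#include <stdio.h>

/* Write the "main business" output, checking once at the end
   instead of after every single fprintf() call. */
static int write_report(FILE *out)
{
    fprintf(out, "header\n");
    fprintf(out, "body\n");
    fprintf(out, "footer\n");

    /* fflush() pushes buffered data out so that late write
       errors are caught here as well. */
    if (fflush(out) == EOF || ferror(out)) {
        fputs("error writing report\n", stderr);
        return -1;
    }
    return 0;
}
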
2) You code:
#ifdef LINUX
#ifdef TRIO_PRINTF
if (r == ...)
#elif GCC_PRINTF
if (r ==...)
#elif defined(WINDOWS)
#ifdef LCC_WIN32
if (r == ...)
#elif defined(WATCOM)
#elif defined (GCC)

#endif

etc. etc., ad nauseam!

Second time around on this one -- and after it's
already been refuted by counter-example, too.
3) If there is an error you do a sloppy job
like assuming any error is a disk full error.

Thanks for your assessment of my code. I'm just
the sloppiest guy going.
It is easy to be ironic; it is much harder to discuss things
properly, to understand what the other person is saying/proposing,
and to answer accordingly.

I wanted to show how difficult it is to write anything really portable
because of the absence of error analysis standards in C.

Jacob, Jacob: THE ERRORS THEMSELVES ARE NOT PORTABLE!
This is
due, as this discussion demonstrates, to the mentality "error
analysis is for wimps". Macho programmers never have any errors,
and anyway it is not worth thinking about this, since errors
are messy.

This is nonsensical muttering and a retreat from rationality.
 

Chris Torek

Since ... we have at least 16 bits in an int, we can easily make
up ranges big enough to accommodate a LOT of error codes, 32767
in fact. The standard would define several ranges, and leave a lot
of logical error code space unused for the implementations to use.

Please go use VMS for several years. It *does* this. Once you
have experience with how it works (and when and how it fails),
*then* come back and propose putting it into a future C standard.
 

Malcolm

I won't try to guess what you actually meant by that remark, but it
could easily be interpreted as a personal insult, and I think we've
seen more than enough of that in this thread.
Why would it be an insult? It's the obvious riposte to the claim being made.
 

jacob navia

Eric said:
Thanks for your assessment of my code. I'm just
the sloppiest guy going.

Well, that's what *I* do <grin>.

I assume that if fprintf returns a negative value the disk
is full.

It is the only portable thing I can do without tying the
code to some specific printf implementation.

It would be much better if I could portably write

err = fprintf(...);
if (err == EOF) {
// no space. Erase temporary files
}

This could be encapsulated in a wrapper:

#include <stdarg.h>
#include <stdio.h>

int MyFprintf(FILE *f, const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int err = vfprintf(f, fmt, ap);  /* forward the arguments */
    va_end(ap);
    switch (err) {
    case EOF:
        /* No space. Erase temp files. */
        break;
    default:
        /* Success or some other error. */
        break;
    }
    return err;
}
 

Skarmander

jacob said:
Well, that's what *I* do <grin>.

I assume that if fprintf returns a negative value the disk
is full.

It is the only portable thing I can do without tying the
code to some specific printf implementation.

It would be much better if I could portably write

err = fprintf(...);
if (err == EOF) {
// no space. Erase temporary files
}
You're assuming erasing the temporary files will free up space. How do you
know? C does not even have a concept of "disks".

On my Linux system, /tmp links to a memory device. It could also easily link
to a different disk partition. Erasing temporary files would do exactly
nothing to alleviate the problem.

Now, you could say that you *do* happen to know erasing the temporary files
will be a good idea, or that you are willing to assume it will be a good
idea most of the time. Even so, why are you so bent on being able to do this
portably?

If printf() returned a "CRC failed, bad media" error, erasing the temporary
files would clearly be pointless. In fact, it's unlikely you could do
anything about such an error. Would it help if we required implementations
to return such errors where able? Would you be angry at a vendor if printf()
did not return the "disk full" error when you are expecting it to, but
instead a generic error because the low-level write routines cannot detect
this particular situation?

Would it be so bad to have to call a non-portable "get free space on disk"
function and erase the temporary files if it was low? Why would having to
depend on system-generated error codes be so much better? Almost no error
that could be generated admits a portable error handler in the first place.
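
A minimal sketch of what that might look like, assuming POSIX
statvfs() on one side and the Win32 GetDiskFreeSpaceExA() call on the
other (the free_space wrapper name is made up; only this one function
is platform-specific, callers stay portable):

#ifdef _WIN32
#include <windows.h>
/* Free bytes available to the caller at 'path', or -1 on failure. */
long long free_space(const char *path)
{
    ULARGE_INTEGER avail;
    if (!GetDiskFreeSpaceExA(path, &avail, NULL, NULL))
        return -1;
    return (long long)avail.QuadPart;
}
#else   /* assume POSIX */
#include <sys/statvfs.h>
long long free_space(const char *path)
{
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0)
        return -1;
    return (long long)vfs.f_bavail * (long long)vfs.f_frsize;
}
#endif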

Facilities for error handling are important. I think that demanding that the
standard provide more specific ones will not be helpful, however, for all
the reasons others have already mentioned. Detailed error handling is useful
only when you know what sort of things can go wrong on the particular
platforms you're writing for, and then you'll have to introduce some
unportable assumptions in the first place.

The only thing the standard could be taken to task for is that it does not
require that printf() update errno on encountering an error. It does not
disallow this either, but it needlessly complicates error recovery in
response to printf() calls even where we do know what could go wrong.

S.
 

Keith Thompson

Skarmander said:
The only thing the standard could be taken to task for is that it does
not require that printf() update errno on encountering an error. It
does not disallow this either, but it needlessly complicates error
recovery in response to printf() calls even where we do know what
could go wrong.

The standard says that fprintf() "returns the number of characters
transmitted, or a negative value if an output or encoding error
occurred". It would be nice if it distinguished between output errors
and encoding errors. But personally, I don't think it's that big a
deal.
 

Netocrat

Netocrat said:
On Sun, 11 Dec 2005 11:08:33 -0500, Eric Sosman wrote:
[discriminating fprintf errors]
r = fprintf(....);
if (r < 0) {
perror ("fprintf"); /* debatable */
...
}

The perror() call is dubious because the Standard doesn't
require that fprintf() set errno to anything meaningful.

It does guarantee that fprintf won't set errno to zero though (it may set
it to non-zero). So the code could be extended:

errno = 0;
r = fprintf(....);
if (r < 0) {
if (errno > 0)
perror ("fprintf"); /* debatable */
else
fputs("fprintf: unspecified error\n", stderr);
...
}

You probably want "if (errno != 0)".

I don't believe so.
I don't see anything in the standard that prohibits setting errno to a
negative value,

True, certain library functions may even do it in the absence of errors:

N1124,7.5 #3:

| The value of errno may be set to nonzero by a library function call
| whether or not there is an error, provided the use of errno is not
| documented in the description of the function in this International
| Standard.

(fprintf is not documented as setting errno)
though the values EDOM, EILSEQ, and ERANGE are required to be distinct
and positive.

There's also #2:

| errno ... expands to a modifiable lvalue171) that has type int,
| the value of which is set to a positive error number by several library
| functions.
(I think using only positive values for errors is common practice,
though.)

The wording of #2 seems to require a positive value for functions
documented as setting errno.

#3 also seems to require that a library function documented as setting
errno may only set it to a positive value, but it could be interpreted
purely as an allowance for functions that do not set errno rather than a
prohibition against functions that do set it (I don't subscribe to that
interpretation).
 

Keith Thompson

Netocrat said:
I don't believe so.


True, certain library functions may even do it in the absence of errors:

N1124,7.5 #3:

| The value of errno may be set to nonzero by a library function call
| whether or not there is an error, provided the use of errno is not
| documented in the description of the function in this International
| Standard.

(fprintf is not documented as setting errno)


There's also #2:

| errno ... expands to a modifiable lvalue171) that has type int,
| the value of which is set to a positive error number by several library
| functions.

Ok, I missed that.

I'd still rather test for errno!=0 than errno>0. If errno *is*
somehow set to a negative value, it probably indicates an error (in
the code that sets errno if nothing else).
 

Walter Roberson

Perl as a -language- includes a number of hooks to operating
system facilities, such as signals, fifos, pipes, subprocesses, sockets,
semaphores, and threads. These are, of course, non-portable,

Following up to myself, to reply to a different aspect of Paul's
comments:

Many of the problems with Perl -are- problems of the language.

Essentially, a lot of what Perl -is-, is an attempt to put together a
(more or less) unified API to provide traditional Unix services, to
hide some of the interface differences under the rug, and to try to
provide usable replacements for facilities not present in some OSes.

Interfaces to facilities usually provided by operating systems pervade
perl, as fully fledged members of perl, with (for example) "open"
having the same theoretical status in the language as "socket". Perl
might have started life with a text-processing emphasis, but it outgrew
that a long time ago.

Perl does aim to increase portability, by hiding OS differences when
that can be done with a reasonable amount of effort. That's a good
thing, but in practice Perl is all about compromises. If there is something
in perl which does not or cannot work or which would take more
work than anyone has bothered to put in, then that thing just
drops out... and perl doesn't make any effort to provide mechanisms
to probe to find out whether some facility has or has not been
made available.

perl does have the advantage over C that perl has exceptions, so
you can attempt something and recover if it doesn't work.
perl does -not-, though, have any kind of unified error handling
such as is being proposed by Jacob. You get whatever error the
implementation for that platform happens to return. Using Jacob's
example: perl's printf's error handling is defined only in terms
of the function returning true if the conversion succeeded; none of
the sublayers referred to for printf have any more error control.
sprintf does not mention the possibility of errors at all.


Perl is not designed as an operating-system-calling program as
such: it is more a "common look and feel", with no specific consistency
about which platform's look or feel it emulates. I have, in the
past, encountered a small number of programs -designed- for calling
operating system functions and smoothly handling argument
validation and error control. [I seem to recall that the one on the
Honeywell L6*/ STS series was quite nicely designed, but that was
20 years ago so I might not have noticed deficiencies at the time.]
 

Skarmander

Keith said:
The standard says that fprintf() "returns the number of characters
transmitted, or a negative value if an output or encoding error
occurred". It would be nice if it distinguished between output errors
and encoding errors. But personally, I don't think it's that big a
deal.
Well, the point is that if fprintf() did unambiguously update errno,
perror() would always work, and there would also be no reason to deal with
the negative value it returns (in non-standard code, of course); platforms
could return an error code, but this would be senseless duplication of the
errno functionality.

Still, this is more of a QoI (quality-of-implementation) issue.

S.
 

Richard Heathfield

Old Wolf said:
BTW, see the excellent post by Walter Roberson on how
non-portable most Perl programs are.

To be fair, most C programs are totally non-portable too. But then most C
programs are written by people who don't know C very well, who don't
realise just how portable well-written C programs can be, and who, quite
frankly, don't care.
 
