Definition of perror()?

Disc Magnet

Is perror(s) just a macro for fprintf(stderr, "%s: %s\n", s,
strerror(errno))?
 
Ben Pfaff

Disc Magnet said:
Is perror(s) just a macro for fprintf(stderr, "%s: %s\n", s,
strerror(errno))?

Literally? No. But the effects are similar in the common case.
 
Keith Thompson

Disc Magnet said:
Is perror(s) just a macro for fprintf(stderr, "%s: %s\n", s,
strerror(errno))?

perror() is a standard library function. For the actual definition,
grab a copy of
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf>
(that's the latest draft of the C standard) and read section 7.19.10.4.

Like any standard library function, it may additionally be defined as
a macro.

For a couple of reasons, the definition you present would not be
legal; first, the reference to s needs to be parenthesized, and
second, the behavior is different if s is a null pointer or points
to a null character.
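
To illustrate the second point, here's roughly what that behavior
looks like written out as an ordinary function. This is only a
sketch of what the standard describes, not its actual definition,
and the name my_perror is made up:

#include <errno.h>
#include <stdio.h>
#include <string.h>

void my_perror(const char *s)
{
    int err = errno;  /* save it; fprintf itself may change errno */

    if (s != NULL && *s != '\0')
        fprintf(stderr, "%s: %s\n", s, strerror(err));
    else
        fprintf(stderr, "%s\n", strerror(err));
}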

I'm not sure how I'd define perror() as a macro that doesn't
potentially evaluate its argument more than once. And there's
probably not much reason to bother doing so; a call to perror()
isn't likely to be a performance bottleneck.
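
For instance, a macro that tried to handle the null cases inline,
something like this (PERROR is a made-up name):

#define PERROR(s) ((s) != NULL && *(s) != '\0' \
    ? fprintf(stderr, "%s: %s\n", (s), strerror(errno)) \
    : fprintf(stderr, "%s\n", strerror(errno)))

evaluates its argument up to three times, so a call with a side
effect in the argument, say PERROR(p++), would misbehave.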
 
Eric Sosman

Keith Thompson said:
[...] For a couple of reasons, the definition you present would not be
legal; first, the reference to s needs to be parenthesized, and
second, the behavior is different if s is a null pointer or points
to a null character.

Another reason it wouldn't be legal is that you can use perror()
in a module that #include's <stdio.h> but omits <string.h>, which is
where strerror() is declared.
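
For example, this strictly conforming program calls perror() with
only <stdio.h> and <errno.h> included; if perror() expanded to the
macro above, the expansion would mention strerror() with no
declaration in scope. A minimal sketch:

#include <errno.h>
#include <stdio.h>

int main(void)
{
    errno = ERANGE;
    perror("demo");  /* must work even though <string.h> is absent */
    return 0;
}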

And there's yet another reason, this one truly nit-picky: "The
implementation shall behave as if no library function calls the
strerror function." If strerror() uses a static internal buffer to
hold the message string, and overwrites the buffer at each call, the
macro-ized perror() would cause a forbidden overwrite. That is, in

/* assumes <assert.h>, <errno.h>, <stdio.h>, <string.h> are included */
char big_enough_array[256];  /* big enough for any message string */
const char *message = strerror(EDOM);
strcpy(big_enough_array, message);
errno = ERANGE;
perror("This is a range error");
assert(strcmp(message, big_enough_array) == 0);

... the assertion must not fail, as it (almost certainly) would if
perror() called strerror() and the latter overwrote its buffer.

Keith Thompson said:
I'm not sure how I'd define perror() as a macro that doesn't
potentially evaluate its argument more than once. And there's
probably not much reason to bother doing so; a call to perror()
isn't likely to be a performance bottleneck.

Indeed. A program whose performance is limited by the speed at
which it can report its own errors most likely has problems other
than performance. ;-)
 
Eric Sosman

On 1/3/2011 5:45 PM, Keith Thompson wrote: [...]
I'm not sure how I'd define perror() as a macro that doesn't
potentially evaluate its argument more than once. And there's
probably not much reason to bother doing so; a call to perror()
isn't likely to be a performance bottleneck.

Eric Sosman said:
Indeed. A program whose performance is limited by the speed at
which it can report its own errors most likely has problems other
than performance. ;-)

An earlier poster said:
Well, in college, we had a FORTRAN compiler which was very efficient at
generating non-optimized code. (The point being, why waste CPU cycles
optimizing a program which is likely to not compile the first N times,
and which would likely be run only a single time once it actually did
compile.) It would generate extremely verbose, and extremely numerous,
error messages.

So it spat out lots of messages about *your* errors, not about
"its own errors," right? :)
 
robertwessel2

On 1/3/2011 5:45 PM, Keith Thompson wrote: [...]
I'm not sure how I'd define perror() as a macro that doesn't
potentially evaluate its argument more than once. And there's
probably not much reason to bother doing so; a call to perror()
isn't likely to be a performance bottleneck.

Eric Sosman said:
Indeed. A program whose performance is limited by the speed at
which it can report its own errors most likely has problems other
than performance. ;-)

An earlier poster said:
Well, in college, we had a FORTRAN compiler which was very efficient at
generating non-optimized code. (The point being, why waste CPU cycles
optimizing a program which is likely to not compile the first N times,
and which would likely be run only a single time once it actually did
compile.) It would generate extremely verbose, and extremely numerous,
error messages.


Good old WATFOR and WATFIV... Typically these student jobs would be
compile-link-and-go - the executable would never even get written to
disk (after the linker finished, it would just be run from memory).
So even if the student wanted to run the program more than once, it
would be recompiled, since it wouldn't have been saved anyway. And of
course, typical student problems were pretty small anyway, so not only
was the program only run once, it would only process a dataset with
(at most) a few hundred entries.
 
