ferror()


Alan Balmer

Chapter and verse, please.

Dan
Sorry for not repeating the entire post. It was in reference to the
preceding post, which was implementation-specific, and clearly
indicated as such.
 

Alan Balmer

It happened in perfectly ordinary "Unixy" code on Sun workstations
using NFS servers, even with all the hardware working perfectly.

Files written to the server would be write-behind cached on the
workstations. On the final fflush()-before-close, the last data
would be transferred from the user process to the client workstation
kernel. The kernel continued to cache the data, not sending any
of it to the NFS server yet.

On the close(), the workstation would realize that it was now
time to send the cached data to the server, which would reject
the write with EDQUOT, "user is over quota".

The close() would return the EDQUOT error to the user process,
alerting the user that his file was incomplete because he was
now out of administrator-assigned disk space.

That must have been fun to track down the first time ;-) I see your
point, though I would be inclined to call this a case of needing the
user process to cover a system design quirk. It means that you can't
trust fflush to actually force data to be written. Could be awkward if
you're relying on it for synchronization with another system and don't
really want to close and reopen the stream every time. But such things
are off-topic here anyway :)
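
Off-topic for strict ISO C, but worth a sketch: the synchronization Alan
describes is usually obtained with POSIX facilities. A minimal sketch,
assuming a POSIX system (fileno() and fsync() are POSIX, not standard C;
flush_to_disk is a hypothetical helper):

#include <stdio.h>
#include <unistd.h>   /* fileno() and fsync() -- POSIX, not ISO C */

/* Hypothetical helper: push a stream's data out of the stdio buffers,
   then ask the kernel to commit it to the device (or NFS server). */
int flush_to_disk(FILE *fp)
{
    if (fflush(fp) == EOF)        /* stdio buffers -> OS */
        return -1;
    if (fsync(fileno(fp)) != 0)   /* OS cache -> storage */
        return -1;
    return 0;
}

Even this is no absolute guarantee: a drive that caches writes
internally can still lose data on power failure, a point glen raises
below.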
 

Alan Balmer

All fflush can tell you is that the data has successfully left the
stdio buffers. It may still be buffered by the OS. Only fclose can
confirm that it successfully reached its final destination.

How does fclose confirm that? The description of fclose in this
respect is identical to that of fflush: "Any unwritten buffered data
for the stream are delivered to the host environment to be written to
the file;"

Even if it's not recoverable, the user still needs to be informed about
the problem. As it is impossible to predict the consequences of a
failed fclose, it is unacceptable to ignore this possibility.

That doesn't mean that catching the problem *before* closing the file
is a bad thing, does it?
 

Dan Pop

Alan Balmer said:
How does fclose confirm that? The description of fclose in this
respect is identical to that of fflush: "Any unwritten buffered data
for the stream are delivered to the host environment to be written to
the file;"

fclose() does more than fflush().

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file to be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
closed.
^^^^^^
This is what allows fclose to detect what fflush may not be able to
detect. Closing the associated file implies flushing all the buffers
associated with that file, even those that stdio (and, implicitly,
fflush) has no control over.

That doesn't mean that catching the problem *before* closing the file
is a bad thing, does it?

Have I said or implied otherwise? My point was that this check does NOT
make the fclose check superfluous, not that checking fflush is
superfluous. Checking fflush is still needed, but only its failure has
any relevance, its success does not guarantee that the data has been
properly written to its final destination. Only the success of fclose
provides such a guarantee. I thought that was clear enough, but it
seems that I was overoptimistic.

Dan
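
Dan's point condenses to a few lines of C. A minimal sketch
(save_and_close is a hypothetical helper, not code from the thread): an
fflush() failure is worth reporting early, but only a successful
fclose() confirms that the data was delivered:

#include <stdio.h>

/* Hypothetical helper: check both calls, for the reasons Dan gives. */
int save_and_close(FILE *fp, const char *name)
{
    int status = 0;

    if (fflush(fp) == EOF) {   /* failure is meaningful; success is not */
        fprintf(stderr, "%s: flush failed\n", name);
        status = -1;
    }
    if (fclose(fp) == EOF) {   /* only this success is a guarantee */
        fprintf(stderr, "%s: close failed\n", name);
        status = -1;
    }
    return status;
}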
 

Alan Balmer

fclose() does more than fflush().

2 A successful call to the fclose function causes the stream
pointed to by stream to be flushed and the associated file to be
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
closed.
^^^^^^
This is what allows fclose to detect what fflush may not be able to
detect. Closing the associated file implies flushing all the buffers
associated with that file, even those that stdio (and, implicitly,
fflush) has no control over.

Sorry, I don't see that. It certainly implies that the file is no
longer associated with the calling program, but I don't know what
prevents the implementation from caching the actual final writes,
directory updates, etc. until it finds a propitious moment. There may
even be more than one system involved, as in the case of a
network-connected file.
 

glen herrmannsfeldt

Chris said:
(snip)

It happened in perfectly ordinary "Unixy" code on Sun workstations
using NFS servers, even with all the hardware working perfectly.
Files written to the server would be write-behind cached on the
workstations. On the final fflush()-before-close, the last data
would be transferred from the user process to the client workstation
kernel. The kernel continued to cache the data, not sending any
of it to the NFS server yet.

The Sun tradition was to get everything to disk before notifying
the program that it was written. I am not sure now about the
cache on the workstation. There were big questions when
disk drives with write-behind cache came out: one couldn't be
sure that the data actually made it to disk in the case of
a power failure.

On the close(), the workstation would realize that it was now
time to send the cached data to the server, which would reject
the write with EDQUOT, "user is over quota".

Systems I used didn't run quota, but disk full was always
possible. I did once lose a 10-line file editing it in vi
(when I was new to vi) when the disk was full. It was
apparently a very important 10-line file.

The close() would return the EDQUOT error to the user process,
alerting the user that his file was incomplete because he was
now out of administrator-assigned disk space.
(This kind of failure generally came as a total shock to the users,
whose programs completely ignored the close() failure and often
followed the failed close() by a rename() operation that wiped
out the original backup file. Now they had plenty of disk space
for the data, but no data to go in it.)

Another effect that I saw once was a program that was writing out
a series of numbers that were supposed to be within a certain range.
It seems that the disk got full while writing, but the error was
not noticed. Later, more space became available and writing
continued. Digits from one number were concatenated with digits
from another, resulting in an out of range number.

After that, much more checking was done on writes.

-- glen
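
Both of glen's stories come down to unchecked stdio return values. A
minimal sketch of the check-writes-close-then-rename pattern that would
have saved the backup file in Chris's anecdote (safe_save and its
parameters are illustrative, not from the thread):

#include <stdio.h>

/* Illustrative helper: write to a temporary name, check every step,
   and only replace the real file after fclose() reports success. */
int safe_save(const char *tmpname, const char *realname, const char *data)
{
    FILE *fp = fopen(tmpname, "w");
    if (fp == NULL) return -1;

    if (fputs(data, fp) == EOF) {   /* check every write */
        fclose(fp);
        remove(tmpname);
        return -1;
    }
    if (fclose(fp) == EOF) {        /* EDQUOT, disk full, etc. show up here */
        remove(tmpname);
        return -1;
    }
    /* Only after a successful fclose is it safe to clobber the old copy. */
    return (rename(tmpname, realname) == 0) ? 0 : -1;
}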
 

Dan Pop

Alan Balmer said:
Sorry, I don't see that. It certainly implies that the file is no
longer associated with the calling program, but I don't know what
prevents the implementation from caching the actual final writes,
directory updates, etc. until it finds a propitious moment.

If it's delayed, an error may happen when the closing is actually
attempted, and there is no way to report it to the fclose() caller,
while the standard says that a successful fclose call causes the file
to be closed.

There may
even be more than one system involved, as in the case of a
network-connected file.

I'm sorry, but I can't find an alternate interpretation for "closing the
associated file", no matter where it is physically located and how many
systems are involved in actually performing this action.

I'm not claiming that each and every implementation actually does what
the standard requires, merely that the requirement is written in
unambiguous terms.

Dan
 

glen herrmannsfeldt

Dan Pop wrote:

(snip)
(someone wrote)
If it's delayed, an error may happen when the closing is actually
attempted, and there is no way to report it to the fclose() caller,
while the standard says that a successful fclose call causes the file
to be closed.
(snip)

I'm not claiming that each and every implementation actually does what
the standard requires, merely that the requirement is written in
unambiguous terms.

Traditional NFS was pretty strict on doing things right, though possibly
not fast. In the name of speed, some have added options like
asynchronous writes and soft mounts. Also, some disk drives now buffer
writes internally, without a guarantee that the data actually makes it
to the disk.

In a traditional NFS hard mount the client will wait forever for the
server to reply. Once we had to move a server for some diskless
machines, and it was down an entire weekend. The clients waited
patiently for it to come back, and continued on just fine when it
came back up three days later. Some people are too impatient, though.

-- glen
 

Alan Balmer

I'm sorry, but I can't find an alternate interpretation for "closing the
associated file", no matter where it is physically located and how many
systems are involved in actually performing this action.

That's the point of my concerns - I can't find (in the standard) *any*
interpretation of "closing the associated file." I don't see that the
standard can require any particular action by the system, any more
than it can guarantee that another process doesn't have the same file
open.

If there is such a guarantee for conforming implementations, I would
be interested, since it would be useful.
 

Dan Pop

Alan Balmer said:
That's the point of my concerns - I can't find (in the standard) *any*
interpretation of "closing the associated file." I don't see that the

Most likely because the semantics of closing a file are not specific to
the C language.

standard can require any particular action by the system, any more
than it can guarantee that another process doesn't have the same file
open.

That's orthogonal to the issue. From the C standard's POV there is no
other process. But the same program may (or may not; it's
implementation-specific) have more than one stream connected to the same
file. Yet there is no ambiguity WRT the meaning of closing the file: all
the changes created through that stream that have not yet been physically
applied to the file must be. There is no point in inventing a set of
semantics for "closing a file" that are specific to the C language.

The important bit for this discussion is that the failure of the file
closing operation needs to be reported to the fclose caller. If fclose
reports success, the changes have been successfully applied to the
physical file (mainly because a later failure in the process can no longer
be reported).

Again, I'm not claiming that all implementations are behaving as
specified. Anyone familiar with the umount command on "slow"
output devices under Linux knows what I'm talking about. Some OS's do
trade the semantics of the file closing operation for increased I/O
speed, rendering the I/O system faster, but less reliable. There is also
the issue of the write caching performed by certain disks, behind the
back of the OS.

Dan
 

Stephen Howe

Right. In general, pay attention to the return value of fclose().
If it's opened for input only, you couldn't/shouldn't care less.

That says nothing.
Being pedantic, you should check every return value of every call of all
functions in <stdio.h> even if opened for input only.

For all I know, the OS filing system may be up the creek and the return
value of fclose() might pick this up.
One can never tell.
Better err on the side of caution.

Stephen Howe
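
On the input side this is exactly what ferror() is for: a read loop
ends at EOF whether the data ran out or a read failed, and only
ferror() distinguishes the two. A minimal sketch (count_lines is a
made-up example):

#include <stdio.h>

/* Made-up example: count lines, but report a read error instead of
   silently treating it as a clean end-of-file. */
long count_lines(FILE *fp)
{
    long lines = 0;
    int c;

    while ((c = getc(fp)) != EOF)
        if (c == '\n')
            lines++;

    if (ferror(fp))   /* reaching EOF alone doesn't mean success */
        return -1L;
    return lines;
}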
 

Christopher Benson-Manica

Stephen Howe said:
Being pedantic, you should check every return value of every call of all
functions in <stdio.h> even if opened for input only.

I doubt you check the return value of printf - if you do, I'm glad I
don't have to read your code...
 

Eric Sosman

Christopher said:
I doubt you check the return value of printf - if you do, I'm glad I
don't have to read your code...

#include <stdio.h>
int main(void) {
    if (printf("Hello, world!\n") != 14) {
        if (fprintf(stderr, "printf failed!\n") != 15) {
            if (fprintf(stderr, "fprintf failed!\n") != 16) {
                ...
 

Keith Thompson

Eric Sosman said:
#include <stdio.h>
int main(void) {
    if (printf("Hello, world!\n") != 14) {
        if (fprintf(stderr, "printf failed!\n") != 15) {
            if (fprintf(stderr, "fprintf failed!\n") != 16) {
                ...

Harumph. You call that error checking?

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    if (printf("Hello, world!\n") != 14) {
        if (fprintf(stderr, "printf failed!\n") != 15) {
            if (fprintf(stderr, "fprintf failed!\n") != 16) {
                exit(EXIT_FAILURE);
                if (fprintf(stderr,
                            "exit(EXIT_FAILURE) failed!!\n") != 28) {
                    abort();
                    if (fprintf(stderr, "abort() failed!!\n") != 17) {
                        /* ... */
                    }
                }
            }
        }
    }
    exit(EXIT_SUCCESS);
    if (fprintf(stderr, "exit(EXIT_SUCCESS) failed!!\n") != 28) {
        abort();
        if (fprintf(stderr, "abort() failed!!\n") != 17) {
            /* ... */
        }
    }
}

(Filling in the "/* ... */"s is left as an exercise.)
 

CBFalconer

Christopher said:
I doubt you check the return value of printf - if you do, I'm glad
I don't have to read your code...

Here is an example. There are easier ways to do this, but notice
the use of the return from sprintf.

/* format doubles and align output */
/* Public domain, by C.B. Falconer */

#include <stdio.h>

#define dformat(r, d, f) fdformat(stdout, r, d, f)

/* output r in field with fpart digits after dp */
/* At least 1 blank before and after the output */
/* Returns neg on param error, else field used  */
/* Allows for exponents from -999 to +999.      */
/* Too small fields are automatically expanded  */
int fdformat(FILE *fp, double r, int fpart, int field)
{
#define CPMAX 100
    char cp[CPMAX];
    int n, spacebefore, spaceafter, minchars;

    /* Protect against evil arguments */
    if (fpart < 1) fpart = 1;
    if (r < 0.0) minchars = 9;
    else         minchars = 8;
    if (field < (fpart + minchars)) field = fpart + minchars;
    if (field >= CPMAX) return -1;

    /* Try the effect of "%.*g" and "%.*e" below */
    n = sprintf(cp, "%.*e", fpart, r);
    if (n < 0) return n;
    spacebefore = field - minchars - fpart;
    spaceafter  = field - spacebefore - n;
    return fprintf(fp, "%*c%s%*c",
                   spacebefore, ' ', cp, spaceafter, ' ');
} /* fdformat */

/* --------------- */

void testit(double r, int places, int field)
{
    /* Note use of side effect of calling dformat */
    printf(", %d (places=%d, field=%d)\n",
           dformat(r, places, field), places, field);
} /* testit */

/* --------------- */

int main(void)
{
    size_t i;
    double arr[] = { 413.12e+092,
                     257.90e+102,
                     257.9011e-103,
                     43.67e+099,
                     43.667e-99,
                     1.0, 0.0 };

    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(arr[i], 2, 12);
    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(-arr[i], 2, 12);
    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(arr[i], 3, 12);
    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(arr[i], 3, 2);
    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(arr[i], 5, 2);
    for (i = 0; i < ((sizeof arr) / (sizeof arr[0])); i++)
        testit(-arr[i], 5, 2);
    return 0;
} /* main */
 

Stephen Howe

I doubt you check the return value of printf

I do. stdout could be redirected to disk, which means even printf()
could fill a disk.
Even stderr needs checking, as that could be redirected as well.

Stephen Howe
 

Stephen Howe

Harumph. You call that error checking?
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    if (printf("Hello, world!\n") != 14) {
        if (fprintf(stderr, "printf failed!\n") != 15) {
            if (fprintf(stderr, "fprintf failed!\n") != 16) {
                exit(EXIT_FAILURE);
                if (fprintf(stderr,
                            "exit(EXIT_FAILURE) failed!!\n") != 28) {
                    abort();
                    if (fprintf(stderr, "abort() failed!!\n") != 17) {
                        /* ... */
                    }
                }
            }
        }
    }
    exit(EXIT_SUCCESS);
    if (fprintf(stderr, "exit(EXIT_SUCCESS) failed!!\n") != 28) {
        abort();
        if (fprintf(stderr, "abort() failed!!\n") != 17) {
            /* ... */
        }
    }
}

Much much better.

Stephen Howe
 
