Catching print errors

himanshu.garg

Hello,

I have written the following script to simulate write failure
conditions in a real application.

open(FILE, ">test.txt");

#
# rm test.txt or chmod 000 test.txt, before the user gives input
#
my $input = <STDIN>;

print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";

close (FILE);

The problem is if I delete the file the program doesn't show any
error msg and the file remains deleted.

If I change the permissions to 000 the program is still able to
write to file even though the permissions remain 000.

Could you suggest, what it is that I am missing?

Thank You,
HG
 
Peter J. Holzer

> Hello,
>
> I have written the following script to simulate write failure
> conditions in a real application.
>
> open(FILE, ">test.txt");
>
> #
> # rm test.txt or chmod 000 test.txt, before the user gives input
> #
> my $input = <STDIN>;
>
> print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";
>
> close (FILE);

close(FILE) or die "Couldn't close file: $!";

That won't make any difference in this case (there is no error), but in
general the last few kB are written only when the file is closed, so you
can get an error on close (and for very short files you will get an
error *only* on close, not on print).
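
As an aside (a minimal sketch, not code from the thread): if you want
write errors to surface as early as possible, you can combine a checked
print with an explicit flush and a checked close, for example:

use strict;
use warnings;
use IO::Handle;                  # provides flush() on file handles

open(my $fh, '>', 'test.txt') or die "Couldn't open test.txt: $!\n";

print {$fh} "Hello, World!\n" or die "Couldn't write to file: $!\n";

# Force the buffered data out now; a full disk or an I/O error shows
# up here instead of being deferred until close().
$fh->flush or die "Couldn't flush: $!\n";

close($fh) or die "Couldn't close file: $!\n";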

> The problem is if I delete the file the program doesn't show any
> error msg and the file remains deleted.
>
> If I change the permissions to 000 the program is still able to
> write to file even though the permissions remain 000.
>
> Could you suggest, what it is that I am missing?

Since you mentioned "rm" I assume you are using some kind of Unix.
So you are missing 2 things:


1) In Unix a file exists independently of its directory entries. The
unlink system call (used by rm) removes only the directory entry, not
the file itself. So you can still write to and read from the file. Only
when the last reference to the file (either a directory entry or an
open file handle) goes away is the file itself removed. In fact,
unlinking a file immediately after creating it is a common trick for
creating private temporary files: no other process can open the file
since it has no directory entry, and it is automatically removed when
the handle is closed (see the sketch below).

2) In Unix permissions are checked only when a file is opened. Changing
permissions doesn't have any effect on open file handles.
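
The private-temporary-file trick from point 1 looks roughly like this
(a sketch, not code from the thread; in practice a module such as
File::Temp can do this for you):

use strict;
use warnings;

my $path = "scratch.$$";    # the name is only an example

# Create the file and keep a handle open for reading and writing.
open(my $tmp, '+>', $path) or die "Couldn't create $path: $!\n";

# Remove the directory entry immediately.  The file itself lives on as
# long as $tmp stays open, but no other process can open it by name.
unlink($path) or die "Couldn't unlink $path: $!\n";

print {$tmp} "private scratch data\n" or die "Couldn't write: $!\n";
seek($tmp, 0, 0) or die "Couldn't seek: $!\n";
my $line = <$tmp>;

close($tmp);    # the last reference is gone, so the file is gone too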

hp
 
himanshu.garg

> close(FILE) or die "Couldn't close file: $!";
>
> That won't make any difference in this case (there is no error), but in
> general the last few kB are written only when the file is closed, so you
> can get an error on close (and for very short files you will get an
> error *only* on close, not on print).

use strict;

open(FILE, ">test.txt");

my $input = <STDIN>;

print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";

close (FILE) or die "Couldn't close: $!\n";

The above doesn't show the error message either since as you said
there is no error. Any suggestions as to how I can test my script to
check if it dies gracefully on disk full?

Thank You,
HG
 
Peter J. Holzer

> The above doesn't show the error message either since as you said
> there is no error. Any suggestions as to how I can test my script to
> check if it dies gracefully on disk full?

I'm not sure if I understand you correctly. If you want to distinguish
between a write error because of a full disk and a write error because
of a different reason, you can use the %! hash (see perldoc perlvar):

if ($!{ENOSPC}) {
    # disk is full
} else {
    # some other error
}

In practice, if you start doing this, you will probably have to check
for all possible error conditions and decide what you want to do with
them (e.g., you may also want to handle EDQUOT (Disk quota exceeded)
specially) and the set of possible error conditions is somewhat system
dependent.

You may also want to look at eval BLOCK to catch and handle errors in a
central place.
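
Putting both suggestions together might look something like this (a
sketch only, assuming test.txt is the file being written; it is not
code from the thread):

use strict;
use warnings;

my $file = 'test.txt';

eval {
    open(my $fh, '>', $file)      or die "open $file: $!\n";
    print {$fh} "Hello, World!\n" or die "write $file: $!\n";
    close($fh)                    or die "close $file: $!\n";
};
if ($@) {
    # $! still holds the error from the failed call, so %! can be used
    # to tell a full disk apart from other failures.
    if ($!{ENOSPC}) {
        die "Disk full while writing $file: $@";
    } elsif ($!{EDQUOT}) {
        die "Disk quota exceeded while writing $file: $@";
    } else {
        die "Writing $file failed: $@";
    }
}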

hp
 
Jamie

himanshu.garg said:

> Hello,
>
> I have written the following script to simulate write failure
> conditions in a real application.
>
> open(FILE, ">test.txt");
>
> #
> # rm test.txt or chmod 000 test.txt, before the user gives input
> #
> my $input = <STDIN>;
>
> print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";
>
> close (FILE);
>
> The problem is if I delete the file the program doesn't show any
> error msg and the file remains deleted.
>
> If I change the permissions to 000 the program is still able to
> write to file even though the permissions remain 000.
>
> Could you suggest, what it is that I am missing?

As others pointed out, the file handle is set up and the permissions are
checked on the open call (even if you unlink the file, the handle
remains valid). Also, output data is buffered.

If your platform supports it, you might create a small filesystem (on
Linux, a loop device works well), fill it up and then run your checks.
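
For example, once such a small test filesystem is mounted, a throwaway
script along these lines can fill it up before the real script is run
(the mount point /mnt/tinyfs is only an example):

use strict;
use warnings;

my $filler = '/mnt/tinyfs/filler.dat';
open(my $fh, '>', $filler) or die "open $filler: $!\n";

my $chunk = 'x' x 65536;       # write 64 kB at a time
1 while print {$fh} $chunk;    # keep going until a write fails

warn "writing stopped: $!\n";  # expect "No space left on device" here
close($fh);                    # close may report the error as well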

Kind of a drastic measure; normally there isn't anything you can do
about a full disk (which is why I would generally just ignore a problem
like that: what can I do about it?).

If this is a serious system, you might write all output to a temp file and use
rename() so that the update is atomic.

There is absolutely no way to "test" for a kill -9, and you could be
stuck with half a file written to disk; if you're updating a file, this
can have consequences.

Jamie
 
Peter J. Holzer

> Kind of a drastic measure; normally there isn't anything you can do
> about a full disk (which is why I would generally just ignore a problem
> like that: what can I do about it?).

At least you can stop and exit with an error message, alerting the user
or sysadmin that something is wrong, instead of blindly ploughing on and
possibly damaging other data.

Also, before exiting, you can maybe remove temporary files, thereby
freeing up space again. (So at least the other programs can continue to
work).
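
As an aside (not from the thread): one way to get that cleanup for free
is to let File::Temp manage the temporary file. With UNLINK => 1 the
file is removed when the program exits, whether it exits normally or
via die():

use strict;
use warnings;
use File::Temp qw(tempfile);

# The template and DIR are only examples.
my ($fh, $tmpname) = tempfile('scratch-XXXXXX', DIR => '.', UNLINK => 1);

print {$fh} "intermediate data\n" or die "write $tmpname: $!\n";
close($fh)                        or die "close $tmpname: $!\n";

# If anything above dies, File::Temp still removes $tmpname at program
# exit, freeing the space it occupied.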

> If this is a serious system, you might write all output to a temp file
> and use rename() so that the update is atomic.

But that also only works if you detect that writing failed.

Consider:

open(my $fh, '>', "$file.$$") or die;
while (my $data = whatever()) {
    print $fh $data;
}
close($fh);
rename("$file.$$", $file) or die;

If the disk fills up while you are writing to $fh, you will clobber
$file with an incomplete new version. OTOH:

open(my $fh, '>', "$file.$$") or die;
while (my $data = whatever()) {
    print $fh $data or die;
}
close($fh) or die;
rename("$file.$$", $file) or die;

Here the rename will only be reached if all the print calls and the
close call were successful. But on failure it will leave a temporary
file lying around. So something like this would be better:

open(my $fh, '>', "$file.$$") or die;
eval {
    while (my $data = whatever()) {
        print $fh $data or die;
    }
    close($fh) or die;
    rename("$file.$$", $file) or die;
};
if ($@) {
    unlink("$file.$$");
    die $@;
}

Note that in these examples I didn't actually test for the reason of the
failure. The disk might be full, or the user may have exceeded his
quota, or there might be an I/O error on the disk, etc. It doesn't
matter for the program, because in any case it cannot continue. It may
matter for the user, because he needs to do different things to remedy
the problem before starting the program again.

> There is absolutely no way to "test" for a kill -9, and you could be
> stuck with half a file written to disk; if you're updating a file, this
> can have consequences.

Yep, but the person who needs to consider these consequences is the
person who issues the kill -9, not the person who writes the script.

hp
 
Jamie

Peter J. Holzer said:

> Note that in these examples I didn't actually test for the reason of the
> failure. The disk might be full, or the user may have exceeded his
> quota, or there might be an I/O error on the disk, etc. It doesn't
> matter for the program, because in any case it cannot continue. It may
> matter for the user, because he needs to do different things to remedy
> the problem before starting the program again.

That's true, but in drastic measures like that, *usually* (as in, all
the cases I've seen so far) other tools will tell them something is
wrong.

You're right about the 1/2 file clobbering a complete one. Hadn't considered that.

> Yep, but the person who needs to consider these consequences is the
> person who issues the kill -9, not the person who writes the script.

Or a power outage. I sometimes go to great pains to apply the
atomic-rename approach to I/O.

It's funny in a way that I go through all that trouble to prevent a corrupt
file and then don't test the result of close. Especially considering this
is the one place I could actually do something about it.

Jamie
 
Peter J. Holzer

> That's true, but in drastic measures like that, *usually* (as in, all
> the cases I've seen so far) other tools will tell them something is
> wrong.

Maybe, maybe not. If you have one cronjob which creates huge temporary
files and overruns the quota there is a good chance that this goes
undetected if the job doesn't itself complain.

Disk full and I/O errors are usually written to the syslog and a
monitoring tool might pick them up from there, but you still don't know
if this resulted in data corruption and which files were affected.

hp
 
