Catching print errors

Discussion in 'Perl Misc' started by himanshu.garg@gmail.com, Apr 2, 2007.

  1. Guest

    Hello,

    I have written the following script to simulate write failure
    conditions in a real application.

    open(FILE, ">test.txt");

    #
    # rm test.txt or chmod 000 test.txt, before the user gives input
    #
    my $input = <STDIN>;

    print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";

    close (FILE);

    The problem is that if I delete the file, the program doesn't show
    any error message and the file remains deleted.

    If I change the permissions to 000, the program is still able to
    write to the file even though the permissions remain 000.

    Could you suggest what it is that I am missing?

    Thank You,
    HG
     
    , Apr 2, 2007
    #1

  2. On 2007-04-02 07:52, <> wrote:
    > Hello,
    >
    > I have written the following script to simulate write failure
    > conditions in a real application.
    >
    > open(FILE, ">test.txt");
    >
    > #
    > # rm test.txt or chmod 000 test.txt, before the user gives input
    > #
    > my $input = <STDIN>;
    >
    > print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";
    >
    > close (FILE);


    close(FILE) or die "Couldn't close file: $!";

    Won't make any difference in this case (there is no error), but in
    general the last few kB will be written only when the file is closed,
    so you can get an error on close (and for very short files you will
    get an error *only* on close, not on print).
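
    For example (an untested sketch; /mnt/full stands in for some
    filesystem that is already completely full, and the open itself is
    assumed to succeed):

    use strict;
    use warnings;

    open(my $fh, '>', '/mnt/full/out.txt') or die "open: $!\n";

    # With default buffering a short print only fills the buffer, so it
    # usually "succeeds" even though the disk has no room:
    print $fh "short line\n" or warn "print failed: $!\n";

    # The buffer is flushed here, so ENOSPC typically shows up on close:
    close($fh) or die "close failed: $!\n";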


    > The problem is that if I delete the file, the program doesn't show
    > any error message and the file remains deleted.
    >
    > If I change the permissions to 000, the program is still able to
    > write to the file even though the permissions remain 000.
    >
    > Could you suggest what it is that I am missing?


    Since you mentioned "rm" I assume you are using some kind of Unix.
    So you are missing 2 things:


    1) In Unix a file exists independently of its directory entries. The
    unlink system call (used by rm) removes only the directory entry, not
    the file. So you can still write to and read from the file. Only when
    the last reference to a file (either a directory entry or an open
    file handle) is removed is the file itself removed. In fact,
    unlinking a file immediately after creating it is a common trick to
    create private temporary files (no other process can open them since
    they don't have a directory entry, and they will be automatically
    removed when closed).

    2) In Unix permissions are checked only when a file is opened.
    Changing permissions doesn't have any effect on open file handles.
    A short sketch demonstrating both points follows.
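
    A minimal sketch (Unix only; the file name demo.txt is just an
    illustration):

    use strict;
    use warnings;

    my $name = "demo.txt";

    open(my $fh, '+>', $name) or die "Couldn't open $name: $!\n";
    chmod(0000, $name)        or die "Couldn't chmod $name: $!\n";
    unlink($name)             or die "Couldn't unlink $name: $!\n";

    # The directory entry is gone and the mode is 000, but the
    # already-open handle still works for both writing and reading:
    print $fh "still writable\n" or die "Couldn't write: $!\n";
    seek($fh, 0, 0)              or die "Couldn't seek: $!\n";
    print scalar <$fh>;          # prints "still writable"

    close($fh);  # last reference dropped: now the file is really gone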

    hp

    --
    _ | Peter J. Holzer | Blaming Perl for the inability of programmers
    |_|_) | Sysadmin WSR | to write clearly is like blaming English for
    | | | | the circumlocutions of bureaucrats.
    __/ | http://www.hjp.at/ | -- Charlton Wilbur in clpm
     
    Peter J. Holzer, Apr 2, 2007
    #2

  3. Guest

    On Apr 2, 1:26 pm, "Peter J. Holzer" <> wrote:
    > On 2007-04-02 07:52, <> wrote:
    > > Hello,
    > >
    > > I have written the following script to simulate write failure
    > > conditions in a real application.
    > >
    > > open(FILE, ">test.txt");
    > >
    > > #
    > > # rm test.txt or chmod 000 test.txt, before the user gives input
    > > #
    > > my $input = <STDIN>;
    > >
    > > print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";
    > >
    > > close (FILE);
    >
    > close(FILE) or die "Couldn't close file: $!";
    >
    > Won't make any difference in this case (there is no error), but in
    > general the last few kB will be written only when the file is closed,
    > so you can get an error on close (and for very short files you will
    > get an error *only* on close, not on print).


    use strict;

    open(FILE, ">test.txt") or die "Couldn't open file: $!\n";

    my $input = <STDIN>;

    print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";

    close (FILE) or die "Couldn't close: $!\n";

    The above doesn't show the error message either since, as you said,
    there is no error. Any suggestions as to how I can test my script to
    check that it dies gracefully when the disk is full?

    Thank You,
    HG
     
    , Apr 2, 2007
    #3
  4. On 2007-04-02 08:44, <> wrote:
    > The above doesn't show the error message either since as you said
    > there is no error. Any suggestions as to how I can test my script to
    > check if it dies gracefully on disk full?


    I'm not sure if I understand you correctly. If you want to distinguish
    between a write error because of a full disk and a write error because
    of a different reason, you can use the %! hash (see perldoc perlvar):

    if ($!{ENOSPC}) {
        # disk is full
    } else {
        # some other error
    }

    In practice, if you start doing this, you will probably have to check
    for all possible error conditions and decide what you want to do with
    them (e.g., you may also want to handle EDQUOT (Disk quota exceeded)
    specially), and the set of possible error conditions is somewhat
    system-dependent.

    You may also want to look at eval BLOCK to catch and handle errors in a
    central place.
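
    For example, a sketch of that pattern (the file name is borrowed from
    your script; the messages are illustrative):

    use strict;
    use warnings;

    eval {
        open(my $fh, '>', 'test.txt') or die "open: $!\n";
        print $fh "Hello, World!\n"   or die "print: $!\n";
        close($fh)                    or die "close: $!\n";
    };
    if ($@) {
        # %! still reflects the errno of the call that just failed
        die "Disk full: $@" if $!{ENOSPC};
        die "Write failed: $@";
    }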

    hp


    --
    _ | Peter J. Holzer | Blaming Perl for the inability of programmers
    |_|_) | Sysadmin WSR | to write clearly is like blaming English for
    | | | | the circumlocutions of bureaucrats.
    __/ | http://www.hjp.at/ | -- Charlton Wilbur in clpm
     
    Peter J. Holzer, Apr 2, 2007
    #4
  5. Jamie Guest

    In <>,
    mentions:
    >Hello,
    >
    > I have written the following script to simulate write failure
    >conditions in a real application.
    >
    >open(FILE, ">test.txt");
    >
    >#
    ># rm test.txt or chmod 000 test.txt, before the user gives input
    >#
    >my $input = <STDIN>;
    >
    >print FILE "Hello, World!\n" or die "Couldn't write to file: $!\n";
    >
    >close (FILE);
    >
    > The problem is that if I delete the file, the program doesn't show
    > any error message and the file remains deleted.
    >
    > If I change the permissions to 000, the program is still able to
    > write to the file even though the permissions remain 000.
    >
    > Could you suggest what it is that I am missing?


    As others pointed out, the file handle is set up and the permissions
    check is done at open time (even if you unlink the file, the handle
    stays valid). Also, data is buffered.

    If your platform supports it, you might create a small filesystem on
    a disk image (on Linux it's the loop device), fill it up, and then
    run your checks.
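
    Something along these lines might work (a rough, untested sketch:
    Linux only, needs root, and all paths and sizes are made up):

    use strict;
    use warnings;
    use IO::Handle;   # for autoflush

    # Build a tiny (256 KB) ext2 image and mount it via the loop device:
    system("dd if=/dev/zero of=/tmp/tiny.img bs=1024 count=256") == 0 or die "dd failed\n";
    system("mkfs.ext2 -F -q /tmp/tiny.img") == 0 or die "mkfs failed\n";
    system("mkdir -p /mnt/tiny") == 0 or die "mkdir failed\n";
    system("mount -o loop /tmp/tiny.img /mnt/tiny") == 0 or die "mount failed\n";

    # Now write until the filesystem fills up:
    open(my $fh, '>', '/mnt/tiny/fill') or die "open: $!\n";
    $fh->autoflush(1);            # flush every print so the error shows up immediately
    1 while print $fh 'x' x 4096; # loop until a print fails
    if ($!{ENOSPC}) { warn "got ENOSPC, as intended\n" }
    else            { die "write failed for another reason: $!\n" }
    close($fh);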

    That's a rather drastic measure; normally there isn't anything you
    can do about a full disk (which is why I would generally just ignore
    a problem like that: what can I do about it?).

    If this is a serious system, you might write all output to a temp file and use
    rename() so that the update is atomic.

    There is absolutely no way to "test" for a kill -9, and you could be
    stuck with half a file written to disk; if you're updating a file,
    this can have consequences.

    Jamie
    --
    http://www.geniegate.com Custom web programming
    Perl * Java * UNIX User Management Solutions
     
    Jamie, Apr 2, 2007
    #5
  6. On 2007-04-02 10:37, Jamie <> wrote:
    > That's a rather drastic measure; normally there isn't anything you
    > can do about a full disk (which is why I would generally just ignore
    > a problem like that: what can I do about it?).


    At least you can stop and exit with an error message, alerting the
    user or sysadmin that something is wrong, instead of blindly
    ploughing on and possibly damaging other data.

    Also, before exiting, you can maybe remove temporary files, thereby
    freeing up space again (so at least the other programs can continue
    to work).


    > If this is a serious system, you might write all output to a temp file and use
    > rename() so that the update is atomic.


    But that also only works if you detect that writing failed.

    Consider:

    open(my $fh, '>', "$file.$$") or die;
    while (my $data = whatever()) {
        print $fh $data;
    }
    close($fh);
    rename("$file.$$", $file) or die;

    If the disk fills up while you are writing to $fh, you will clobber
    $file with an incomplete new version. OTOH:

    open(my $fh, '>', "$file.$$") or die;
    while (my $data = whatever()) {
        print $fh $data or die;
    }
    close($fh) or die;
    rename("$file.$$", $file) or die;

    Here the rename will only be reached if all the print calls and the
    close call were successful. But it will leave a temporary file lying
    around. So something like this would be better:

    open(my $fh, '>', "$file.$$") or die;
    eval {
        while (my $data = whatever()) {
            print $fh $data or die;
        }
        close($fh) or die;
        rename("$file.$$", $file) or die;
    };
    if ($@) {
        unlink("$file.$$");
        die $@;
    }

    Note that in these examples I didn't actually test for the reason of the
    failure. The disk might be full, or the user may have exceeded his
    quota, or there might be an I/O error on the disk, etc. It doesn't
    matter for the program, because in any case it cannot continue. It may
    matter for the user, because he needs to do different things to remedy
    the problem before starting the program again.

    > There is absolutely no way to "test" for a kill -9, and you could be
    > stuck with half a file written to disk; if you're updating a file,
    > this can have consequences.


    Yep, but the person who needs to consider these consequences is the
    person who issues the kill -9, not the person who writes the script.

    hp


    --
    _ | Peter J. Holzer | Blaming Perl for the inability of programmers
    |_|_) | Sysadmin WSR | to write clearly is like blaming English for
    | | | | the circumlocutions of bureaucrats.
    __/ | http://www.hjp.at/ | -- Charlton Wilbur in clpm
     
    Peter J. Holzer, Apr 2, 2007
    #6
  7. Jamie Guest

    In <>,
    "Peter J. Holzer" <> mentions:

    >Note that in these examples I didn't actually test for the reason of the
    >failure. The disk might be full, or the user may have exceeded his
    >quota, or there might be an I/O error on the disk, etc. It doesn't
    >matter for the program, because in any case it cannot continue. It may
    >matter for the user, because he needs to do different things to remedy
    >the problem before starting the program again.


    That's true, but in drastic situations like that, *usually* (as in,
    all the cases I've seen so far) other tools will tell them something
    is wrong.

    You're right about the 1/2 file clobbering a complete one. Hadn't considered that.

    >> There is absolutely no way to "test" for a kill -9, and you could be
    >> stuck with half a file written to disk; if you're updating a file,
    >> this can have consequences.

    >
    >Yep, but the person who needs to consider these consequences is the
    >person who issues the kill -9, not the person who writes the script.


    Or a power outage. I sometimes go to great pains to apply the
    atomic-rename approach to I/O.

    It's funny in a way that I go through all that trouble to prevent a
    corrupt file and then don't test the result of close, especially
    considering this is the one place where I could actually do something
    about it.

    Jamie
    --
    http://www.geniegate.com Custom web programming
    Perl * Java * UNIX User Management Solutions
     
    Jamie, Apr 3, 2007
    #7
  8. On 2007-04-03 00:03, Jamie <> wrote:
    > In <>,
    > "Peter J. Holzer" <> mentions:
    >>Note that in these examples I didn't actually test for the reason of the
    >>failure. The disk might be full, or the user may have exceeded his
    >>quota, or there might be an I/O error on the disk, etc. It doesn't
    >>matter for the program, because in any case it cannot continue. It may
    >>matter for the user, because he needs to do different things to remedy
    >>the problem before starting the program again.

    >
    > That's true, but in drastic situations like that, *usually* (as in,
    > all the cases I've seen so far) other tools will tell them something
    > is wrong.


    Maybe, maybe not. If you have one cronjob which creates huge
    temporary files and overruns the quota, there is a good chance that
    this goes undetected if the job doesn't itself complain.

    Disk full and I/O errors are usually written to the syslog, and a
    monitoring tool might pick them up from there, but you still don't
    know whether this resulted in data corruption and which files were
    affected.

    hp

    --
    _ | Peter J. Holzer | Blaming Perl for the inability of programmers
    |_|_) | Sysadmin WSR | to write clearly is like blaming English for
    | | | | the circumlocutions of bureaucrats.
    __/ | http://www.hjp.at/ | -- Charlton Wilbur in clpm
     
    Peter J. Holzer, Apr 3, 2007
    #8
