Jon
Lately I have been reviewing how my Perl scripts handle the files they
write to. Until now I had taken it for granted that writing would
"just work", so many of my scripts never stopped when they could not
read from or write to a file.
My normal setup is Perl 5.6.0, a Linux 2.4.x kernel, and an ext2
filesystem (our newer servers do use ext3, but unfortunately most
still use ext2).
open(LOG,">>/path/to/file") or &bad_file;
print LOG "xx";
close(LOG);
Now that works fine, and if the path is invalid it errors out nicely.
However, one of my scripts may be run quite frequently, and each run
writes to the same file.
So my question is: if two copies run in the same split second and both
get around to writing to the same file at the same time, what would
the result be? Would one fail and the other succeed, or might the file
be damaged (using the code above, for example)?
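In case it helps show what I mean, here is a sketch of what I imagine a safer version of my snippet might look like, using flock to make each writer wait its turn (I am assuming flock is the right tool here, and the path is just an example, not my real log file):

```perl
#!/usr/bin/perl
use strict;
use Fcntl qw(:flock);    # imports the LOCK_EX constant

my $path = "/tmp/demo.log";    # example path only

open(LOG, ">>$path")  or die "cannot open $path: $!";
flock(LOG, LOCK_EX)   or die "cannot lock $path: $!";  # blocks until we hold an exclusive lock
print LOG "xx\n";
close(LOG)            or die "cannot close $path: $!"; # closing the handle releases the lock
```

I have not verified this under real contention; it is just the pattern I gathered from the docs, where every writer that cooperates by calling flock is serialized.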
Thank you for any advice,
Jon.