illegal seek

Dave Saville

I was debugging a totally unrelated problem yesterday and put in a
quick and dirty print line with $! in it. Now as it happened the bit of
code I was trying to debug was not giving the error I thought it was
and so $! of course had a previous value.

What printed out was "Illegal Seek" - looking backwards I found a "seek
HANDLE,0,0". Now I have never seen any code where the return from seek
is checked, although I guess it should be :)

The actual code is as follows:

open LIST, "<$clean_username" or die "$clean_username $!";
flock LIST, $LOCK_EX;
seek LIST, 0, 0;

I had read that one should re-seek after flock in case another process
modified the file between you opening it and flock coming back with the
exclusive lock.

Or does seek give an error if it is already where one wants to seek to?

TIA


Regards

Dave Saville

NB switch saville for nospam in address
 
news

Dave Saville said:
What printed out was "Illegal Seek" - looking backwards I found a "seek
HANDLE,0,0".

The error value is only set on an error: it's never implicitly reset. So,
can you be sure that this is the relevant seek that generated the
error? What happened when you /did/ check the return value from it?
The actual code is as follows:
open LIST, "<$clean_username" or die "$clean_username $!";

The "<" is implicit, and so can be omitted. This leaves a quoted simple
variable, so the quotes can now also be stripped:

open LIST, $clean_username or die "can't open $clean_username: $!";
flock LIST, $LOCK_EX;

Depending on the underlying implementation of flock and your cross
platform requirements, you cannot /guarantee/ an exclusive lock with
read-only file access. (Mostly it will work, but just be aware...)
seek LIST, 0, 0;
I had read that one should re seek after flock in case another process
modified the file between you opening it and flock coming back with the
exclusive lock.

You usually need a seek (even if it's to the current position, ick!) when
changing from reading to writing on the same file, as this flushes the
I/O buffers. However, if you've not read anything then there's no point
in rewinding the stream as it's already there. A flock is applied to the
entire file, not just to a section, so when you get the lock you've got
the file in its entirety.
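To make that concrete, here's a rough (untested) sketch of the read-then-write case; the file name is just a stand-in:

use Fcntl qw( :flock :seek );

my $file = 'list.txt';                        # stand-in name
open my $fh, '+<', $file or die "open $file: $!";
flock $fh, LOCK_EX       or die "flock: $!";

my @lines = <$fh>;                            # read...
seek $fh, 0, SEEK_CUR    or die "seek: $!";   # ...seek, even to where we already are, to flush the buffers...
print {$fh} "appended line\n";                # ...then write (we're at EOF after reading everything)
close $fh or die "close: $!";
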
Or does seek give an error if it is already where one wants to seek to?

No, it's quite happy with that. What it does object to is trying to seek
on a pipe (e.g. stdin) or non-seekable device (e.g. tape). Maybe your
file $clean_username is actually a named pipe?

Cheers,
Chris
 
Anno Siegel

Dave Saville said:
I was debugging a totally unrelated problem yesterday and put in a
quick and dirty print line with $! in it. Now as it happened the bit of
code I was trying to debug was not giving the error I thought it was
and so $! of course had a previous value.

What printed out was "Illegal Seek" - looking backwards I found a "seek
HANDLE,0,0".

$! is meaningless unless read immediately after the return from a
failed system call. The "Illegal seek" value may or may not come
from the call in your code; nobody knows.
Now I have never seen any code where the return from seek
is checked although I guess it should be :)

Then you haven't seen enough code. Of course you must check seek() if
it's important. You also ought to check the flock() call.
The actual code is as follows:

open LIST, "<$clean_username" or die "$clean_username $!";
flock LIST, $LOCK_EX;

"$LOCK_EX" looks fishy. The constant exported by Fcntl.pm is "LOCK_EX".
Are you sure this is literally your code?

Anyway, you don't want an exclusive lock on a read-only file. Many
systems (Solaris, for instance) won't even give you one. That's one
of the reasons why it's important to check the call to flock().
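For instance, a rough sketch (untested, reusing your $clean_username) with every call checked and a shared lock for the read-only handle:

use Fcntl qw( :flock );

open LIST, '<', $clean_username or die "open $clean_username: $!";
flock LIST, LOCK_SH             or die "flock $clean_username: $!";
seek LIST, 0, 0                 or die "seek $clean_username: $!";
# $! is only meaningful here, immediately after a call has failed
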
seek LIST, 0, 0;

I had read that one should re seek after flock in case another process
modified the file between you opening it and flock coming back with the
exclusive lock.

That's okay, though your code has bigger problems.
Or does seek give an error if it is already where one wants to seek to?

No. But, as noted, forget about the error code, it can come from
anywhere.

Anno
 
Dave Saville

The error value is only set on an error: it's never implicitly reset. So,
can you be sure that this is the relevant seek that generated the
error? What happened when you /did/ check the return value from it?

Because it was just before - I have not got around to changing anything
until I fully understand what is going on.
The "<" is implicit, and so can be omitted. This leaves a quoted simple
variable, so the quotes can now also be stripped:

open LIST, $clean_username or die "can't open $clean_username: $!";


Depending on the underlying implementation of flock and your cross
platform requirements, you cannot /guarantee/ an exclusive lock with
read-only file access. (Mostly it will work, but just be aware...)

I am now *really* confused - I do not understand about flock and read
only - surely the use of a lock is to protect from concurrent usage - I
am taking an exclusive lock because it is entirely possible that
another program could be updating the same file when I want to read it
- and of course I need to process it in a consistent state.

I was taking an exclusive lock on reading <$file and writing - either
>$file or >>$file

So how *should* I be doing it?

Regards

Dave Saville

NB switch saville for nospam in address
 
Anno Siegel

Dave Saville said:
Dave Saville <[email protected]> wrote:
[...]
Depending on the underlying implementation of flock and your cross
platform requirements, you cannot /guarantee/ an exclusive lock with
read-only file access. (Mostly it will work, but just be aware...)

I am now *really* confused - I do not understand about flock and read
only - surely the use of a lock is to protect from concurrent usage - I
am taking an exclusive lock because it is entirely possible that
another program could be updating the same file when I want to read it
- and of course I need to process it in a consistent state.

That's fine. If the updating process gets itself an exclusive lock,
it will have to wait till the reading process gives up its shared lock.
That's how file locking works.
I was taking an exclusive lock on reading <$file and writing - either

So how *should* I be doing it?

Get shared locks for reading and exclusive locks for writing.
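Schematically (an untested sketch; $file and the record text are just stand-ins):

use Fcntl qw( :flock );
my $file = 'records.txt';    # stand-in name

# reader: shared lock
open my $rd, '<', $file or die "open $file: $!";
flock $rd, LOCK_SH      or die "flock (shared): $!";
my @records = <$rd>;
close $rd;

# writer: exclusive lock, appending
open my $wr, '>>', $file or die "open $file: $!";
flock $wr, LOCK_EX       or die "flock (exclusive): $!";
print {$wr} "another record\n";
close $wr or die "close $file: $!";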

Anno
 
Alan J. Flavell

Get shared locks for reading and exclusive locks for writing.

Indeed. However, AIUI anyone who's reading a file in order to
subsequently update it will need an exclusive lock from the start.
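In other words, something like this rough (untested) read-modify-write sketch, holding LOCK_EX for the whole operation:

use Fcntl qw( :flock :seek );

my $file = 'data.txt';                          # stand-in name
open my $fh, '+<', $file or die "open $file: $!";
flock $fh, LOCK_EX       or die "flock: $!";    # exclusive from the outset
my @lines = <$fh>;                              # read the current contents
seek $fh, 0, SEEK_SET    or die "seek: $!";     # rewind...
truncate $fh, 0          or die "truncate: $!"; # ...drop the old contents...
print {$fh} map { "seen: $_" } @lines;          # ...and write the updated version
close $fh or die "close $file: $!";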

cheers
 
Dave Saville

Get shared locks for reading and exclusive locks for writing.

Anno

Thanks - I was used to locking on other languages/systems and there
you took exclusive all round.

Regards

Dave Saville

NB switch saville for nospam in address
 
Anno Siegel

Dave Saville said:
Thanks - I was used to locking on other languages/systems and there
you took exclusive all round.

So multiple reads on a locked file aren't possible in those languages?
That sounds less than optimal.

Anno
 
news

Indeed. However, AIUI anyone who's reading a file in order to
subsequently update it will need an exclusive lock from the start.

Yes - but in this case you'd open the file for updating (with "+< $file"),
so it's effectively an exclusive write lock that you'd need here anyway.

Chris
 
Alan J. Flavell

Yes - but in this case you'd open the file for updating (with "+< $file"),
so it's effectively an exclusive write lock that you'd need here anyway.

Yes, I agree: but I've heard people saying that it would take them a
long time to read their file, but adding a new record at the end would
be fast, so can't they take a shared lock to read the file, and then
somehow upgrade it to an exclusive lock to add the new record?
Sounds a plausible idea, but I don't think it can be done in those
terms.

cheers
 
news

Alan J. Flavell said:
Yes, I agree: but I've heard people saying that it would take them a
long time to read their file, but adding a new record at the end would
be fast, so can't they take a shared lock to read the file, and then
somehow upgrade it to an exclusive lock to add the new record?

Hmm (puts thinking hat on). What about this (with dies omitted):

use Fcntl qw( :flock :seek );

open (F, "+< somefile");
flock F, LOCK_SH;
# Do lots of reads

flock F, LOCK_EX;
seek F, 0, SEEK_END;
# Do lots of writes at EOF

close F;

Chris
 
Anno Siegel

Alan J. Flavell said:
Yes, I agree: but I've heard people saying that it would take them a
long time to read their file, but adding a new record at the end would
be fast, so can't they take a shared lock to read the file, and then
somehow upgrade it to an exclusive lock to add the new record?
Sounds a plausible idea, but I don't think it can be done in those
terms.

That's a good description of the scenario where a lock upgrade looks
tempting. But, as you say, it can't be done. To get a write lock, you'd
have to give up the read lock, which means someone else can change the
file before you do. End of story, until an OS gives us LOCK_UPGD.

Anno
 
Alan J. Flavell

Hmm (puts thinking hat on). What about this (with dies omitted):

open (F, "+< somefile");
flock F, LOCK_SH;
# Do lots of reads

flock F, LOCK_EX;
seek F, 0, SEEK_END;

[...]

First off, it seems one can't actually do that. I don't see it
clearly ruled out in the Perl documentation, and I can't testify to it
from any personal knowledge of the internals, but if I try a man page
for flock (i.e. the underlying OS implementation) it says:

A single file may not simultaneously have both shared and
exclusive locks.

So, on *that* rather popular unix-ish OS, which also claims
conformance to BSD, you'd have to let-go the shared lock before you
could get the exclusive lock, thus blowing the scheme out of the
water.

Secondly: even if it _was_ feasible, I believe there's a very high
risk of creating a deadlock situation if two simultaneous processes
apply the proposed technique. Looks to me as if it would need a
back-off strategy in most practical situations - just possibly if
you're certain that although there can be many readers there will only
ever be one read/writer at a time, the plan might work - but in view
of the problem with flock itself, I think you'd need an alternative
mechanism, maybe based on semaphore file(s)(?)
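For what it's worth, the semaphore-file idea might look something like this (completely untested; 'data.lock' is just a made-up sentinel name):

use Fcntl qw( :flock );

open my $sem, '>', 'data.lock' or die "open data.lock: $!";
flock $sem, LOCK_EX            or die "flock data.lock: $!";

# ... read the real data file, decide what to append, append it ...

close $sem;    # dropping the handle releases the lock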

But I certainly could be wrong, I can assure you that I've made
mistakes with locking before :-}
 
Vlad Tepes

....
But, as you say, it can't be done. To get a write lock, you'd
have to give up the read lock, which means someone else can change the
file before you do. End of story, until an OS gives us LOCK_UPGD.

Are you sure about this? I've tried Chris's idea. It seems to be
possible to upgrade the lock from LOCK_SH to LOCK_EX without unlocking
first, but I could be missing something.

Here are the two scripts I tried, followed by output from a small
command line session:

lockupg.pl:

#!/usr/bin/perl -w

use strict;
use Fcntl qw( :flock :seek );
$|++;

open F, "+<", "lock.txt" or die "$0 open : $!";
print "\n$0: waiting for shared lock...";
flock F, LOCK_SH or die "$0 flocksh : $!";
print "\n$0: got shared lock\n";
sleep 10;

print "$0: waiting for exclusive lock...\n";
flock F, LOCK_EX or die "$0 flockex: $!";
print "$0: got exclusive lock!\n";
seek F, 0, SEEK_END or die "$0 seekend: $!";
print F "\nwritten by $0 at @{[ scalar localtime ]}\n";
flock F, LOCK_UN or die "$0 cannot unlock: $!";
close F or die "$0 closeex: $!";

__END__


locksh.pl:

#!/usr/bin/perl

use warnings;
use strict;
use Fcntl qw( :flock :seek );

open F, "lock.txt" or die "open : $!";
print "$0: waiting for shared lock..\n";
flock F, LOCK_SH or die "$0 flocksh: $!";
print "$0: got shared lock!....\n";
sleep 10;
flock F, LOCK_UN or die "$0 unlock: $!";
print "$0: released readlock!....\n";
close F;

__END__

Here's what I did, together with output:

$ perl lockupg.pl &; sleep 2; perl locksh.pl ; sleep 3

[1] 5265
lockupg.pl: waiting for shared lock...
lockupg.pl: got shared lock
locksh.pl: waiting for shared lock..
locksh.pl: got shared lock!....
lockupg.pl: waiting for exclusive lock...
locksh.pl: released readlock!....
lockupg.pl: got exclusive lock!
[1] + 5265 done perl lockupg.pl
$ cat lock.txt
la
lala
lalala
lalalala

written by lockupg.pl at Tue Sep 9 19:33:44 2003
$
 
Anno Siegel

Vlad Tepes said:
...
But, as you say, it can't be done. To get a write lock, you'd
have to give up the read lock, which means someone else can change the
file before you do. End of story, until an OS gives us LOCK_UPGD.

Are you sure about this? I've tried Chris's idea. It seems to be
possible to upgrade the lock from LOCK_SH to LOCK_EX without unlocking
first, but I could be missing something.

Here are the two scripts I tried, followed by output from a small
command line session:

lockupg.pl:

#!/usr/bin/perl -w

use strict;
use Fcntl qw( :flock :seek );
$|++;

open F, "+<", "lock.txt" or die "$0 open : $!";
print "\n$0: waiting for shared lock...";
flock F, LOCK_SH or die "$0 flocksh : $!";
print "\n$0: got shared lock\n";
sleep 10;

print "$0: waiting for exclusive lock...\n";
flock F, LOCK_EX or die "$0 flockex: $!";
print "$0: got exclusive lock!\n";
seek F, 0, SEEK_END or die "$0 seekend: $!";
print F "\nwritten by $0 at @{[ scalar localtime ]}\n";
flock F, LOCK_UN or die "$0 cannot unlock: $!";
close F or die "$0 closeex: $!";

__END__

[more code snipped]

This looks like an upgrade, and it is even called an upgrade in the
flock man page, but it isn't an upgrade in the sense that the process
seamlessly keeps a lock on the file:

A shared lock may be upgraded to an exclusive lock, and vice versa,
simply by specifying the appropriate lock type; this results in the
previous lock being released and the new lock applied (possibly after
other processes have gained and released the lock).

Still end of story, no LOCK_UPGD yet.

Anno
 
news

Anno Siegel said:
That's a good description of the scenario where a lock upgrade looks
tempting. But, as you say, it can't be done. To get a write lock, you'd
have to give up the read lock, which means someone else can change the
file before you do. End of story, until an OS gives us LOCK_UPGD.

Hmm. It gets worse - and not just in perl but also in native C (I've
tested both). When you "upgrade" a flock from LOCK_SH to LOCK_EX the
kernel actually releases the lock transiently whilst *appearing* to
succeed atomically. Ouch.

The issue here (as far as I'm concerned) is that flock *appears* to let
you upgrade atomically but it doesn't really do so. Can this warning be
added explicitly to perldoc for flock?

(This is on GNU/Linux kernel 2.4.20. I haven't checked Solaris.)

Chris
 
Anno Siegel

Hmm. It gets worse - and not just in perl but also in native C (I've
tested both). When you "upgrade" a flock from LOCK_SH to LOCK_EX the
kernel actually releases the lock transiently whilst *appearing* to
succeed atomically. Ouch.

The issue here (as far as I'm concerned) is that flock *appears* to let
you upgrade atomically but it doesn't really do so. Can this warning be
added explicitly to perldoc for flock?

This doesn't belong in perldoc -- it depends on the underlying OS and
isn't a property of Perl. The documentation of all system calls in
Perl must be read in conjunction with the man page of that call.

In the case of flock(), the man page is quite explicit that "lock
upgrading" implies giving up the previous lock and acquiring a new one
(with the consequence that other processes can get access to the file
in between). The choice of the term "upgrading" is certainly unfortunate
for that action.

Anno
 
Alan J. Flavell

This doesn't belong in perldoc -- it depends on the underlying OS and
isn't a property of Perl.

With respect, that's an over-purist approach to documentation. As
we've seen in discussion, this was a very tempting approach that some
programmers believe in good faith is feasible: at the very least, the
Perl documentation which the Perl programmer is supposed to consult
should draw attention to the potential issue, and direct them to look
at the portability documentation, where they should then be able to
find relevant details for their particular OS.

But anyone with an eye to portable programming (and this _is_ one of
Perl's strengths, after all) should be able to select a portable
programming technique without being forced to trawl through the
portability documentation for umpteen OSes with which they're
otherwise totally unfamiliar. Am I being unreasonable?
The documentation of all system calls in
Perl must be read in conjunction with the man page of that call.

Sorry, but I stand by my point. Someone amongst the perlporters is in
a position to know about the OSes that are unfamiliar to me as a
programmer: I want to be able to select a portable programming
technique "at the point of service", i.e in this case starting out
from Perl's own documentation relating to file locking, based on their
advice, without having to do my own research on numerous unfamiliar
OSes. Whether it's from perldoc -f flock, or from perlopentut, or
from the relevant FAQ, in a practical sense I want to be directed to
any specific caveats that I need. Telling me to read the man pages
for umpteen OSes that I don't have and am completely unfamiliar with
does not answer that requirement, do you see what I mean?

best regards
 
Anno Siegel

Alan J. Flavell said:
With respect, that's an over-purist approach to documentation. As
we've seen in discussion, this was a very tempting approach that some
programmers believe in good faith is feasible: at the very least, the
Perl documentation which the Perl programmer is supposed to consult
should draw attention to the potential issue, and direct them to look
at the portability documentation, where they should then be able to
find relevant details for their particular OS.

But anyone with an eye to portable programming (and this _is_ one of
Perl's strengths, after all) should be able to select a portable
programming technique without being forced to trawl through the
portability documentation for umpteen OSes with which they're
otherwise totally unfamiliar. Am I being unreasonable?

Unreasonable, you? You must be kidding! You are a rock of reason
in the sea of Usenet.
Sorry, but I stand by my point.

It is, however, how the Perl documentation proper deals with system
calls. "perldoc -f exec" basically points you at the man pages for
execvp(3) and sh(1), which must be frustrating when your system has
neither. The get* chapters do the same, quite summarily.
Someone amongst the perlporters is in
a position to know about the OSes that are unfamiliar to me as a
programmer: I want to be able to select a portable programming
technique "at the point of service", i.e in this case starting out
from Perl's own documentation relating to file locking, based on their
advice, without having to do my own research on numerous unfamiliar
OSes. Whether it's from perldoc -f flock, or from perlopentut, or
from the relevant FAQ, in a practical sense I want to be directed to
any specific caveats that I need. Telling me to read the man pages
for umpteen OSes that I don't have and am completely unfamiliar with
does not answer that requirement, do you see what I mean?

Oh, I do see what you mean. File locking is one of the functions that
suffer most from portability issues. A nice matrix that shows flock()
features against Perl ports would be a meritorious project, but someone
has to do it, and maintain it.

At the moment, I'd stick to the most elementary locking methods if
portability is an issue. It doesn't pay to cut corners, even if it
"works" on one system. A false sense of security may be the result.

If that is unbearably slow for some reason, I'd try to optimize it
for the system(s) it is running on, turning to the relevant basic
man pages.

Anno
 
Alan J. Flavell

With respect, that's an over-purist approach to documentation.
[...]
OSes. Whether it's from perldoc -f flock, or from perlopentut, or
from the relevant FAQ, in a practical sense I want to be directed to
any specific caveats that I need. Telling me to read the man pages
for umpteen OSes that I don't have and am completely unfamiliar with
does not answer that requirement, do you see what I mean?

Oh, I do see what you mean. File locking is one of the functions that
suffer most from portability issues. A nice matrix that shows flock()
features against Perl ports would be a meritorious project, but someone
has to do it, and maintain it.

Well, for the moment I second Chris's proposal, since, as you imply,
a more thoroughgoing answer may be too much to ask for.

I've looked again at the (5.8.0) documentation to get an idea where
that might best fit. perldoc -f flock already has a cluster of
Note:... and Note also:... paragraphs, it might do no harm to add
another one there, but on balance I think the file locking section of
perlopentut might be a preferable place for it. I've looked at
perlport and concluded that's not the right place. Naturally one
should look at the latest code version rather than 5.8.0 before
finalising the draft.

I'm thinking of something along the lines of

__
/

One can NOT in general expect to upgrade a shared file lock to an
exclusive lock in an atomic fashion: an "upgrade" would involve
releasing the shared lock (at which point another process could get
the lock) before the exclusive lock can be taken. Therefore, if you
will ultimately need the exclusive lock, e.g. for updating the file
based on its previously-read content, you will need to take it from
the outset.

\___

Can it be said shorter - more accurately? Hatchets out...

all the best
 
