flock or IPC semaphore for atomic appends

Discussion in 'Perl Misc' started by jtd, Nov 14, 2003.

  1. jtd

    jtd Guest

    Hi all,

    I'm running Linux 2.4.20. I have 50-100 processes appending to a
    single file. Each process appends a block of binary data between 1
    byte and 100KB in size, at a rate of at least one append per second.
    Using flock appears to ensure that the appends are atomic, but it
    uses up a lot of CPU time compared to not locking at all.

    My questions:
    1) Is my assumption that flock will ensure that appends are atomic
    correct?
    2) Why is flock processor intensive? Does flock write something to
    disk when locking/unlocking?
    3) Would IPC semaphores be a better idea, since they are in-memory
    structures? (A sketch follows the code below.)

    use Fcntl qw(:flock);               # LOCK_EX / LOCK_UN constants

    open(F, ">>", "append.txt") or die "open append.txt: $!";
    binmode(F);
    select((select(F), $| = 1)[0]);     # autoflush F so each block hits the file before unlock
    while (1) {
        flock(F, LOCK_EX) or die "flock: $!";
        print F $data;                  # $data: the block of binary data (1 byte to 100KB)
        flock(F, LOCK_UN);
        sleep(rand(1));                 # note: rand(1) < 1, so integer sleep() truncates this to 0
    }
    close(F);
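
    (A sketch for question 3, to frame the comparison: the same critical
    section could be guarded by a SysV semaphore via the core IPC::SysV /
    IPC::Semaphore modules. The key 0xBEEF and the surrounding structure
    are made up for illustration, and having every writer create and
    initialize the semaphore is racy; one process should really set it
    up first.)

    use IPC::SysV qw(IPC_CREAT S_IRUSR S_IWUSR SEM_UNDO);
    use IPC::Semaphore;

    # 0xBEEF is an arbitrary example key; every writer must agree on it.
    my $sem = IPC::Semaphore->new(0xBEEF, 1, S_IRUSR | S_IWUSR | IPC_CREAT)
        or die "semaphore: $!";
    $sem->setval(0, 1);              # one token = unlocked (racy if each process does this)

    $sem->op(0, -1, SEM_UNDO);       # P: take the token, blocking until it is available
    # ... append the block of binary data here ...
    $sem->op(0,  1, SEM_UNDO);       # V: put the token back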

    Thanks for any suggestions,
    jtd
     
    jtd, Nov 14, 2003
    #1

  2. A. Sinan Unur

    A. Sinan Unur Guest

    jtd wrote in comp.lang.perl.misc:

    > 2) Why is flock processor intensive? Does flock write something to
    > disk when locking/unlocking?


    From perldoc -f flock:

    Two potentially non-obvious but traditional "flock" semantics
    are that it waits indefinitely until the lock is granted, and
    that its locks are merely advisory.

    I haven't looked at the source, but I think it is this waiting indefinitely
    part (polling?) that's causing CPU usage to go up.

    Sinan.

    --
    A. Sinan Unur

     
    A. Sinan Unur, Nov 14, 2003
    #2

  3. Anno Siegel

    Anno Siegel Guest

    A. Sinan Unur wrote in comp.lang.perl.misc:
    > jtd wrote in comp.lang.perl.misc:
    >
    > > 2) Why is flock processor intensive? Does flock write something to
    > > disk when locking/unlocking?

    >
    > From perldoc -f flock:
    >
    > Two potentially non-obvious but traditional "flock" semantics
    > are that it waits indefinitely until the lock is granted, and
    > that its locks are merely advisory.
    >
    > I haven't looked at the source, but I think it is this waiting indefinitely
    > part (polling?) that's causing CPU usage to go up.


    No. Typically, file locking is event driven and no polling goes on.
    Even when a lock is released implicitly (on close), the system knows
    about it and can activate the new lock owner.
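
    (For completeness: a writer that does not want to block at all can OR in
    Fcntl's LOCK_NB flag, so flock returns false immediately instead of
    sleeping until the lock is free. A minimal sketch, reusing the filehandle
    F and the pending block $data from the first post:)

    use Fcntl qw(:flock);

    if (flock(F, LOCK_EX | LOCK_NB)) {
        print F $data;               # got the lock without waiting; append the block
        flock(F, LOCK_UN);
    } else {
        # lock is held by another writer; retry later instead of blocking
    }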

    Anno
     
    Anno Siegel, Nov 14, 2003
    #3
  4. A. Sinan Unur

    A. Sinan Unur Guest

    Anno Siegel wrote in comp.lang.perl.misc:

    > A. Sinan Unur wrote in comp.lang.perl.misc:
    >> jtd wrote in comp.lang.perl.misc:
    >>
    >> > 2) Why is flock processor intensive? Does flock write something to
    >> > disk when locking/unlocking?

    ....
    >> I haven't looked at the source, but I think it is this waiting
    >> indefinitely part (polling?) that's causing CPU usage to go up.

    >
    > No. Typically, file locking is event driven and no polling goes on.
    > Even when a lock is released implicitly (on close), the system knows
    > about it and can activate the new lock owner.


    OK. Thanks for the correction.

    Sinan.
     
    A. Sinan Unur, Nov 14, 2003
    #4