A quota-based lock

Discussion in 'Java' started by Robert Stark, Aug 8, 2011.

  1. Robert Stark

    I want to write a lock to control access to a resource. There are
    different kinds of jobs using this resource, say jobs A, B, C. At the
    beginning I used the Lock API from the JDK concurrent package, but I
    suffered from serious job starvation, so I want to do something like this:

    class LockManager<T extends Enum<T>> {
        // the input is like this: A=>20, B=>70, C=>10, assigning a quota to
        // the different kinds of jobs
        public LockManager(Map<T, Integer> quota) {

        }

        private void lock(T type) { ... }

        private void unlock(T type) { ... }

        public Lock getLock(T type) {
            return new QuotaLock(type);
        }

        public class QuotaLock implements Lock {
            public QuotaLock(T type) {
                ...
            }

            public void lock() { ... }
            public void unlock() { ... }
            ...
        }
    }

    My idea is to pass in a map of the percentages you want to assign to each
    kind of job, and to provide a simple lock API.

    Input A=>20, B=>70, C=>10 means A=>20%, B=>70%, C=>10%.
    If there are no A jobs pending on the lock, then A's quota is divided
    evenly among the other pending job kinds, i.e. B=>80%, C=>20%. (This rule
    applies to the other types of jobs as well.)

    Then I got stuck. The only way I can think of is to introduce an extra
    dispatch thread and several queues. Can someone give me a hint?
     
    Robert Stark, Aug 8, 2011
    #1

  2. Eric Sosman

    On 8/8/2011 3:13 AM, Robert Stark wrote:
    > I want to write a lock to control access to a resource, there are
    > different kind of jobs using this resource, say job A,B,C, at the
    > beginning, i use Lock api from jdk concurrent package, but i suffered
    > from serious job starvation, so i want to do something like this:
    >[...]
    >
    > My idea is input a map of percentages you want to assign for each job,
    > and provide a simple lock api.
    >
    > Input A=>20, B=>70, C=>10 means A=>20%, B=>70%, C=>10%
    > If there's no A jobs pending on the lock, then its quota would be
    > divided evenly to other pending jobs that is B=>80%, C=>20%.(This rule
    > apply to other type of jobs as well).
    >
    > Then i got stuck, the only way i can think about is to introduce an
    > extra dispatch thread and several queues, can someone give me some
    > hint?


    Your requirements aren't entirely clear to me, but here are a
    couple of thoughts:

    If the locks are held for short durations (as most locks should
    be), this "quota" notion doesn't seem to make a lot of sense. If a
    worker takes the lock, does something critical for a microsecond, then
    drops it and goes about its business for a second, whatever problems
    you're having aren't really to do with the lock per se. So I guess
    you're thinking about a lock that proxies for an unshareable resource
    a worker will use for macroscopic time, and it's not truly the lock you
    want quotas for, but the resource behind it.

    If so, right away you can see you'll need a way to ensure that a
    worker will release the resource "soon" to give other workers a chance
    at it. If this property isn't inherent in your design, you'll need to
    include some kind of preemption mechanism, a way to wrest the resource
    from its holder involuntarily (possibly involving rollbacks or other
    kinds of recovery for the worker that got preempted). Lacking either
    preemption or a guarantee of timely release, you can't be sure that
    Worker A (20% quota) won't hold the resource for six solid months.

    This is the kind of problem an O/S' task scheduler solves: It
    doles out various unshareable resources (CPU cycles, RAM locations,
    I/O access, ...) to competing consumers. I'm not an expert on the
    various techniques used, but I don't think it's a "solved problem"
    (if it were, there wouldn't be so many different schedulers). That
    is, I don't think there's a one-size-fits-all sure-fire solution to
    your problem; you'll need to consider your goals and constraints in
    quite a bit of detail to arrive at a tolerable compromise, probably
    less than completely satisfactory.

    So, one thought is to give the problem to the O/S' scheduler!
    If the unshareable resource is something the O/S understands, maybe
    your workers should be separate "tasks" or "jobs" or whatever the
    O/S wants to call them. It could save you a lot of hard work if you
    can let an existing software component do the job for you.

    Failing that, I'd suggest keeping things as simple as possible.
    Start with an ordinary queue, with each worker joining the end of
    the queue when it wants the resource and then getting the resource
    when it reaches the front. If Worker A holds the resource (and
    eventually releases it), all the other workers will get a crack at
    it (if they want it) before A gets it again. Don't mess around with
    quotas or priorities or whatnot until and unless the simple solution
    is found wanting.
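
    (For what it's worth, the stock ReentrantLock already gives roughly this
    "ordinary queue" behaviour when built in fair mode; a minimal sketch, with
    the Resource/useResource names made up purely for illustration:)

    import java.util.concurrent.locks.ReentrantLock;

    class Resource {
        // true = "fair" mode: the longest-waiting thread gets the lock next
        private final ReentrantLock lock = new ReentrantLock(true);

        void useResource() {
            lock.lock();
            try {
                // ... touch the unshareable resource here ...
            } finally {
                lock.unlock();
            }
        }
    }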

    If the simple solution doesn't work out, report back (with more
    detail) and I'm sure folks will suggest more sophisticated approaches.
    But for starters, KISS.

    --
    Eric Sosman
     
    Eric Sosman, Aug 8, 2011
    #2

  3. markspace

    On 8/8/2011 12:13 AM, Robert Stark wrote:
    > I want to write a lock to control access to a resource, there are
    > different kind of jobs using this resource, say job A,B,C, at the
    > beginning, i use Lock api from jdk concurrent package, but i suffered
    > from serious job starvation, so i want to do something like this:



    I've got to agree with Eric: I don't think the problem specification is
    clear. Could you show, in a short code example, how the problem
    currently occurs? Make a couple of jobs and show how one gets
    "starved". See SSCCE:

    <http://sscce.org/>

    If you have limited resources, I don't see how changing the lock system
    will help. Limited resources mean some jobs might get starved if other
    jobs hog the resources. I think an example of how the current system is
    working may help us understand better why you think there's room for
    improvement.
     
    markspace, Aug 8, 2011
    #3
  4. On 8/8/2011 4:58 AM, Eric Sosman wrote:
    > Failing that, I'd suggest keeping things as simple as possible.
    > Start with an ordinary queue, with each worker joining the end of
    > the queue when it wants the resource and then getting the resource
    > when it reaches the front. If Worker A holds the resource (and
    > eventually releases it), all the other workers will get a crack at
    > it (if they want it) before A gets it again. Don't mess around with
    > quotas or priorities or whatnot until and unless the simple solution
    > is found wanting.


    I like that idea, with a twist: have the higher-priority jobs get put
    into the queue closer to the top.

    --

    Knute Johnson
     
    Knute Johnson, Aug 8, 2011
    #4
  5. markspace

    On 8/8/2011 9:48 AM, Knute Johnson wrote:
    >
    > I like that idea with a twist, have the higher priority jobs get put in
    > the queue closer to the top.



    No, see this is why "one little twist" doesn't work. Higher priority
    jobs will then starve out the lower priority jobs if there are enough of
    them. Putting them in "closer" will bump them ahead of other jobs,
    which may stay permanently in the back of the queue if there are enough
    bumps.


    FIFO scheduling is best until you figure out something else that
    actually works. Perturbing that FIFO algorithm isn't likely to work and
    will cause starvation, as noted above.
     
    markspace, Aug 8, 2011
    #5
  6. On 8/8/2011 10:00 AM, markspace wrote:
    > On 8/8/2011 9:48 AM, Knute Johnson wrote:
    >>
    >> I like that idea with a twist, have the higher priority jobs get put in
    >> the queue closer to the top.

    >
    >
    > No, see this is why "one little twist" doesn't work. Higher priority
    > jobs will then starve out the lower priority jobs if there are enough of
    > them. Putting them in "closer" will bump them ahead of other jobs, which
    > may stay permanently in the back of the queue if there are enough bumps.
    >
    >
    > FIFO scheduling is best until you figure out something else that
    > actually works. Perturbing that FIFO algorithm isn't likely to work and
    > will cause starvation, as noted above.
    >


    No priority scheme will ever be truly fair. I'll bet you could get
    pretty close without being too complicated. I'll think about it some more.

    --

    Knute Johnson
     
    Knute Johnson, Aug 8, 2011
    #6
  7. markspace

    On 8/8/2011 11:39 AM, Knute Johnson wrote:

    > No priority scheme will ever be truly fair. I'll bet you could get
    > pretty close without being too complicated. I'll think about it some more.



    A simple priority system might involve multiple queues, where the high
    priority queues are serviced X times more than the lower ones.

    E.g., two queues. Queue A gets 10 jobs executed for each 1 job that
    queue B gets executed. But because queue B is always guaranteed to be
    serviced eventually, there is no starvation.

    This is a simple step up from round-robin service (which is what Eric
    proposed). There are many existing algorithms; check out any text on
    OSs and job scheduling.
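
    For illustration, a rough sketch of that two-queue idea (the class and
    method names are invented, and the 10:1 ratio is hard-coded just to show
    the mechanism):

    import java.util.ArrayDeque;
    import java.util.Queue;

    class TwoLevelDispatcher {
        private final Queue<Runnable> high = new ArrayDeque<Runnable>();
        private final Queue<Runnable> low = new ArrayDeque<Runnable>();
        private int highServedInRow = 0;

        synchronized void submitHigh(Runnable job) { high.add(job); }
        synchronized void submitLow(Runnable job) { low.add(job); }

        // Pick the next job: at most 10 consecutive high-priority jobs, so the
        // low-priority queue is delayed but never starved.
        synchronized Runnable next() {
            if (!high.isEmpty() && (highServedInRow < 10 || low.isEmpty())) {
                highServedInRow++;
                return high.poll();
            }
            highServedInRow = 0;
            return low.poll(); // null if both queues are empty
        }
    }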
     
    markspace, Aug 8, 2011
    #7
  8. On 08.08.2011 20:57, markspace wrote:
    > On 8/8/2011 11:39 AM, Knute Johnson wrote:
    >
    >> No priority scheme will ever be truly fair. I'll bet you could get
    >> pretty close without being too complicated. I'll think about it some
    >> more.

    >
    >
    > A simple priority system might involve multiple queues, where the high
    > priority queues are serviced X times more than the lower ones.
    >
    > E.g., two queues. Queue A gets 10 jobs executed for each 1 job that
    > queue B gets executed. But because queue B is always guaranteed to be
    > serviced eventually, there is no starvation.
    >
    > This is a simple step up from round-robin service (which is what Eric
    > proposed). There are many algorithms existing. Check out any text on OSs
    > and job scheduling.


    Another idea would be to measure the time a task has access to the
    resource, sum it up per task category, and for the next task pick the
    first one from the category which is furthest below its specified share
    (percentage). Basically, your approach measures executions and this
    approach measures actual resource usage time.
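
    A sketch of how that selection could look, assuming a job-kind enum and
    nanosecond bookkeeping (all the names here are invented, not from any
    library):

    import java.util.EnumMap;
    import java.util.Map;
    import java.util.Set;

    enum Kind { A, B, C }  // illustrative job categories

    class TimeShareChooser {
        private final Map<Kind, Double> quota;  // e.g. A=0.2, B=0.7, C=0.1
        private final Map<Kind, Long> heldNanos = new EnumMap<Kind, Long>(Kind.class);

        TimeShareChooser(Map<Kind, Double> quota) {
            this.quota = quota;
            for (Kind k : Kind.values()) heldNanos.put(k, 0L);
        }

        // Record how long a job of this kind just held the resource.
        synchronized void recordHold(Kind kind, long nanos) {
            heldNanos.put(kind, heldNanos.get(kind) + nanos);
        }

        // Of the kinds that currently have waiters, pick the one whose share of
        // the total hold time is furthest below its quota.
        synchronized Kind pickNext(Set<Kind> kindsWithWaiters) {
            long total = 0;
            for (long n : heldNanos.values()) total += n;
            Kind best = null;
            double bestDeficit = Double.NEGATIVE_INFINITY;
            for (Kind k : kindsWithWaiters) {
                double actualShare = (total == 0) ? 0.0 : (double) heldNanos.get(k) / total;
                double deficit = quota.get(k) - actualShare;  // positive = short-changed
                if (deficit > bestDeficit) { bestDeficit = deficit; best = k; }
            }
            return best;  // null if nobody is waiting
        }
    }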

    An interesting thing to learn would be whether tasks are simply executed
    by threads calling a method or whether tasks can also be submitted (e.g.
    as Runnable or similar). That would also affect the internal layout and
    the way scheduling could be done.

    Kind regards

    robert

    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Aug 8, 2011
    #8
  9. Eric Sosman

    On 8/8/2011 3:46 PM, Robert Klemme wrote:
    > On 08.08.2011 20:57, markspace wrote:
    >> On 8/8/2011 11:39 AM, Knute Johnson wrote:
    >>
    >>> No priority scheme will ever be truly fair. I'll bet you could get
    >>> pretty close without being too complicated. I'll think about it some
    >>> more.

    >>
    >>
    >> A simple priority system might involve multiple queues, where the high
    >> priority queues are serviced X times more than the lower ones.
    >>
    >> E.g., two queues. Queue A gets 10 jobs executed for each 1 job that
    >> queue B gets executed. But because queue B is always guaranteed to be
    >> serviced eventually, there is no starvation.
    >>
    >> This is a simple step up from round-robin service (which is what Eric
    >> proposed). There are many algorithms existing. Check out any text on OSs
    >> and job scheduling.

    >
    > Another idea would be to take the time a task has access to the
    > resource, sum up per task category and for the next task pick the first
    > one from the category which is furthest below its specified share
    > (percentage). Basically your approach measures executions and this
    > approach measures actual resource usage time.


    Yes, all these disciplines are plausible. My main piece of advice
    is KISS: Begin with the simplest possible solution, and elaborate it
    only when there's solid evidence it won't suffice.

    Sometimes the evidence can be gathered in advance: If you know
    things about arrival rates and hold times and latency requirements, you
    may be able to do a calculation that shows simple FIFO won't hack it.
    More often, given the inherent complexity and "brittleness" of software
    systems, you'll need to implement first and measure afterwards to learn
    about a solution's shortcomings. This can lead to discarding the first
    solution -- but, hey: It was the simplest one you could imagine, so you
    probably didn't expend inordinate effort on it, right? Much cheaper to
    jettison a simple approach than a complicated one.

    KISS.

    --
    Eric Sosman
     
    Eric Sosman, Aug 9, 2011
    #9
  10. Tom Anderson

    On Mon, 8 Aug 2011, Knute Johnson wrote:

    > No priority scheme will ever be truly fair.


    True. And this is why humanity will never be happy.

    It is also a problem in programming.

    tom

    --
    One chants out between two worlds Fire - Walk With Me.
     
    Tom Anderson, Aug 9, 2011
    #10
  11. Tom Anderson

    On Mon, 8 Aug 2011, Robert Stark wrote:

    > I want to write a lock to control access to a resource, there are
    > different kind of jobs using this resource, say job A,B,C, [...] My idea
    > is input a map of percentages you want to assign for each job, and
    > provide a simple lock api.
    >
    > Input A=>20, B=>70, C=>10 means A=>20%, B=>70%, C=>10% If there's no A
    > jobs pending on the lock, then its quota would be divided evenly to
    > other pending jobs that is B=>80%, C=>20%.


    So there's a single resource, right? And at any time, only one thread can
    hold the lock on the resource? But you want to share the lock between the
    threads in a fair (or at least controlled) way over time? Do you want to
    be fair in terms of time for which the lock is held, or the number of
    lock acquisitions?

    > Then i got stuck, the only way i can think about is to introduce an
    > extra dispatch thread and several queues, can someone give me some hint?


    You will need a separate queue for each kind of thread. I don't think you
    need another thread to handle the dispatching.

    When a thread tries to acquire the lock, if the lock is available, it
    takes it (of course). If the lock is not available, it joins the right
    queue.

    When a thread releases the lock, if there are no threads waiting, it does
    nothing (of course). If there are threads waiting, it decides which of the
    kinds of thread should get the lock next, and hands the lock to the next
    thread in that queue.

    The only problem is how you decide which kind of thread should get the
    lock next.

    The easiest would be to keep counters for each kind of thread, counting
    how many times threads of that kind have acquired the lock (or how long in
    total they have held it for, if you prefer). You can then easily calculate
    the total number of acquisitions (or the total time of holding), and so
    the share of the lock which each kind of thread has had so far. At any
    point in time, you can compare that historical share to your desired
    share, and put the kinds of threads in order, from the most short-changed
    to the most overprivileged, and award the lock to a thread of the most
    needy kind which currently has threads waiting.

    This approach will give you a fair apportionment over time.

    However, it means that if a kind of thread does not make full use of its
    allowance at some point in time (perhaps because there are not many of
    that kind of thread running), then it effectively builds up a 'share
    credit' which will allow it to make greater demands on the resource later
    on. You might consider that unfair. You might not.

    If you do consider it unfair, you could address it by decaying the share
    counts over time - it would be a simple use of timestamps and Math.exp()
    to apply an exponential decay immediately before updating the counters.
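
    For illustration only, here is a rough sketch of that scheme for a single
    resource: per-kind waiter counts, per-kind acquisition counters, and a
    hand-off to the most short-changed kind on release. All names are
    invented, the acquire is deliberately uninterruptible to keep the
    bookkeeping simple, and the decay idea is left out.

    import java.util.EnumMap;
    import java.util.Map;

    class QuotaLock<K extends Enum<K>> {

        private final Map<K, Double> quota;     // desired share per kind, e.g. A=0.2, B=0.7, C=0.1
        private final Map<K, Long> acquired;    // how often each kind has held the lock so far
        private final Map<K, Integer> waiting;  // how many threads of each kind are waiting
        private boolean held = false;
        private K reservedFor = null;           // kind chosen by the last unlock(), if any

        QuotaLock(Class<K> type, Map<K, Double> quota) {
            this.quota = new EnumMap<K, Double>(type);
            this.quota.putAll(quota);
            this.acquired = new EnumMap<K, Long>(type);
            this.waiting = new EnumMap<K, Integer>(type);
            for (K k : type.getEnumConstants()) {
                acquired.put(k, 0L);
                waiting.put(k, 0);
            }
        }

        synchronized void lock(K kind) {
            waiting.put(kind, waiting.get(kind) + 1);
            boolean interrupted = false;
            while (held || (reservedFor != null && reservedFor != kind)) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    interrupted = true;  // uninterruptible acquire; restore the flag below
                }
            }
            waiting.put(kind, waiting.get(kind) - 1);
            held = true;
            reservedFor = null;
            acquired.put(kind, acquired.get(kind) + 1);
            if (interrupted) Thread.currentThread().interrupt();
        }

        synchronized void unlock() {
            held = false;
            reservedFor = pickMostShortChanged();  // null if nobody is waiting
            notifyAll();
        }

        // Among the kinds that have waiters, pick the one whose historical share
        // of acquisitions is furthest below its quota (the most short-changed).
        private K pickMostShortChanged() {
            long total = 0;
            for (long n : acquired.values()) total += n;
            K best = null;
            double bestDeficit = Double.NEGATIVE_INFINITY;
            for (K kind : waiting.keySet()) {
                if (waiting.get(kind) == 0) continue;
                double share = (total == 0) ? 0.0 : (double) acquired.get(kind) / total;
                double deficit = quota.get(kind) - share;
                if (deficit > bestDeficit) { bestDeficit = deficit; best = kind; }
            }
            return best;
        }
    }

    Counting hold time instead of acquisitions, or applying the exponential
    decay described above, only changes the bookkeeping around 'acquired' and
    pickMostShortChanged().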

    tom

    --
    One chants out between two worlds Fire - Walk With Me.
     
    Tom Anderson, Aug 9, 2011
    #11
  12. On 09.08.2011 02:41, Eric Sosman wrote:
    > On 8/8/2011 3:46 PM, Robert Klemme wrote:
    >> On 08.08.2011 20:57, markspace wrote:
    >>> On 8/8/2011 11:39 AM, Knute Johnson wrote:
    >>>
    >>>> No priority scheme will ever be truly fair. I'll bet you could get
    >>>> pretty close without being too complicated. I'll think about it some
    >>>> more.
    >>>
    >>>
    >>> A simple priority system might involve multiple queues, where the high
    >>> priority queues are serviced X times more than the lower ones.
    >>>
    >>> E.g., two queues. Queue A gets 10 jobs executed for each 1 job that
    >>> queue B gets executed. But because queue B is always guaranteed to be
    >>> serviced eventually, there is no starvation.
    >>>
    >>> This is a simple step up from round-robin service (which is what Eric
    >>> proposed). There are many algorithms existing. Check out any text on OSs
    >>> and job scheduling.

    >>
    >> Another idea would be to take the time a task has access to the
    >> resource, sum up per task category and for the next task pick the first
    >> one from the category which is furthest below its specified share
    >> (percentage). Basically your approach measures executions and this
    >> approach measures actual resource usage time.

    >
    > Yes, all these disciplines are plausible. My main piece of advice
    > is KISS: Begin with the simplest possible solution, and elaborate it
    > only when there's solid evidence it won't suffice.
    >
    > Sometimes the evidence can be gathered in advance: If you know
    > things about arrival rates and hold times and latency requirements, you
    > may be able to do a calculation that shows simple FIFO won't hack it.
    > More often, given the inherent complexity and "brittleness" of software
    > systems, you'll need to implement first and measure afterwards to learn
    > about a solution's shortcomings. This can lead to discarding the first
    > solution -- but, hey: It was the simplest one you could imagine, so you
    > probably didn't expend inordinate effort on it, right? Much cheaper to
    > jettison a simple approach than a complicated one.
    >
    > KISS.


    Absolutely agree. And we still need the OP to state his problem /
    requirements clearly. Robert, what is it that you really need? What is
    your use case and your ultimate goal?

    Kind regards

    robert



    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Aug 10, 2011
    #12
  13. Robert Stark

    Sorry, I use Google Groups and I could not find my post until today.
    Thank you all!
    There is a single resource and I want more sophisticated control over
    it. There are different types of jobs (client requests, back-end jobs)
    running in my system, and every job holds the resource for 1 ms to 20 ms.
    Some back-end jobs run for several hours and continuously acquire this
    resource; in that case, client requests suffer low throughput and even
    starvation, so I came up with this idea, and I do want to keep it as
    simple as possible. Client requests come once in a while (10-50 per
    second), acquire the lock 1-2 times and exit.

    I keep a queue for each type of job, and keep counters to make the
    scheduling decision; the counters expire when their sum exceeds 100, or
    when some job's counter exceeds its quota and no other jobs are pending.
    But I still introduce an extra dispatch thread, and I find it extremely
    hard to write something like what Tom Anderson described...
    I want to complete it and test how it goes.

    On Aug 10, 3:36 pm, Robert Klemme <> wrote:
    > On 09.08.2011 02:41, Eric Sosman wrote:
    > >[...]
    >
    > Absolutely agree.  And we still need the OP to state his problem /
    > requirements clearly.  Robert, what is it that you really need?  What is
    > your use case and your ultimate goal?
    >
    > Kind regards
    >
    > robert
    >
    > --
    > remember.guy do |as, often| as.you_can - without end
    > http://blog.rubybestpractices.com/
     
    Robert Stark, Aug 10, 2011
    #13
  14. Please do not top post.

    On 10.08.2011 13:40, Robert Stark wrote:
    > Sorry, i use google groups and i could not find my post until today.
    > Thank you all!
    > There is a single resource and i want more sophisticated control over
    > it, there're different types of jobs( client requests, back-end jobs)
    > running in my system, every job would hold the resource for 1ms~20ms,
    > some back-end jobs will run for several hours and they will
    > continuously acquire this resource, in this case, client requests will
    > suffered low throughput even starvation, so i come up with this idea,
    > and i do want to keep it as simple as possible. Client request may
    > come once a while (10-50 per second), acquire lock 1-2 times and exit.


    This can never work: if you have jobs that - by design - hold the
    resource for hours then no amount of lock implementation smartness will
    prevent starvation without preemption. You cannot have a resource with
    exclusive access, long access times and responsiveness at the same time.
    Doing preemption manually will be difficult. It's better to break up
    long running tasks into smaller sub tasks which need exclusive resource
    access. Whether that's possible or not depends of course on your
    business logic (which we still haven't seen).

    > I want to complete it and test how it goes.


    Good idea.

    Cheers

    robert

    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Aug 10, 2011
    #14
  15. On Wed, 10 Aug 2011 18:55:43 +0200, Robert Klemme wrote:

    > This can never work: if you have jobs that - by design - hold the
    > resource for hours then no amount of lock implementation smartness will
    > prevent starvation without preemption. You cannot have a resource with
    > exclusive access, long access times and responsiveness at the same time.
    > Doing preemption manually will be difficult. It's better to break up
    > long running tasks into smaller sub tasks which need exclusive resource
    > access. Whether that's possible or not depends of course on your
    > business logic (which we still haven't seen).
    >

    Total agreement.

    If you have a long-running job that requires exclusive resource access,
    then by definition no other task will ever get a look-in while it's
    running.

    If such long-running background jobs are doing housekeeping tasks on a
    database, which they often are, it's usually possible to identify
    relatively short processing cycles that amend the database and can be
    split out into separate transactions that run on the same timescale as
    online transactions. By splitting the task up this way it can take its
    turn in obtaining the resource lock for each transaction, and your locking
    mechanism may not need to be any more complex than a FIFO queue.

    The long-running task may run a bit slower due to transaction commit
    overheads and (possibly) a need to save running totals between
    transactions, but often the running totals etc. can be collected in a
    separate read-only, and hence non-locking, transaction and the results
    committed in a separate, short update transaction.

    On the bright side, if the long process is redesigned along these lines
    and also keeps track of progress, it will recover *much* faster when a
    system crash, exception or whatever occurs while it's running.
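
    For illustration, a sketch of the shape such a chunked job could take;
    hasMoreWork() and processNextBatch() are placeholders for whatever the
    real job and its transaction boundaries look like:

    import java.util.concurrent.locks.ReentrantLock;

    abstract class ChunkedBackgroundJob {
        private final ReentrantLock resourceLock;

        ChunkedBackgroundJob(ReentrantLock resourceLock) {
            this.resourceLock = resourceLock;
        }

        // Re-acquire the lock for each small batch instead of holding it for
        // the whole run, so online transactions get a turn in between.
        void run() {
            while (hasMoreWork()) {
                resourceLock.lock();
                try {
                    processNextBatch();     // one short transaction's worth of work
                } finally {
                    resourceLock.unlock();  // let waiting client requests in
                }
            }
        }

        abstract boolean hasMoreWork();     // placeholder
        abstract void processNextBatch();   // placeholder
    }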


    --
    martin@ | Martin Gregorie
    gregorie. | Essex, UK
    org |
     
    Martin Gregorie, Aug 10, 2011
    #15
  16. Robert Stark

    On Aug 11, 3:37 am, Patricia Shanahan <> wrote:
    > On 8/10/2011 9:55 AM, Robert Klemme wrote:
    > > Please do not top post.
    > >
    > > On 10.08.2011 13:40, Robert Stark wrote:
    > >>[...]
    > >
    > > This can never work: if you have jobs that - by design - hold the
    > > resource for hours then no amount of lock implementation smartness will
    > > prevent starvation without preemption. You cannot have a resource with
    > > exclusive access, long access times and responsiveness at the same time.
    > > Doing preemption manually will be difficult. It's better to break up
    > > long running tasks into smaller sub tasks which need exclusive resource
    > > access. Whether that's possible or not depends of course on your
    > > business logic (which we still haven't seen).
    >
    > I interpreted the paragraph you quoted rather differently. My picture is
    > that each back-end job runs for a long time, but during that time
    > repeatedly acquires the contended resource, still only keeping it for
    > order milliseconds at a time. I base this on "continuously acquire"
    > rather than "continuously hold" the resource.
    >
    > If that picture is correct, simple FIFO lock handling may be sufficient
    > to let the client requests get through fast enough.
    >
    > However, any time a lock becomes enough of an issue that it requires
    > this sort of discussion, the first thing to do is to examine whether it
    > can be refactored to reduce contention. Is the lock really covering only
    > one resource? Can that resource be replicated? Can it be divided up?
    >
    > Patricia

    You are right, Patricia:
    "each back-end job runs for a long time, but during that time
    repeatedly acquires the contended resource, still only keeping it for
    order milliseconds at a time"
    Sorry I failed to make that clear the first time.
    I think I have to do more testing using a simple FIFO lock; maybe
    there's something wrong in my code!
     
    Robert Stark, Aug 11, 2011
    #16
  17. markspace

    On 8/10/2011 6:30 PM, Robert Stark wrote:
    > I think i have to do more test using a simple FIFO lock, maybe there's
    > something wrong in my code!



    If this is the case, try encapsulating your lock, and recording the
    timing of each lock and unlock. Print out any thread that holds the
    lock for a long time (> 500 ms?). This might clue you in to where some
    threads are being starved.

    Also note the "fair" constructor for ReentrantLock:

    <http://download.oracle.com/javase/6/docs/api/java/util/concurrent/locks/ReentrantLock.html>
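
    One possible shape for that encapsulation (the class name, the 500 ms
    threshold and the choice of fair mode are only placeholders to illustrate
    the idea; reentrant use of the wrapper is ignored for simplicity):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    class TimedLock {
        private static final long REPORT_THRESHOLD_NANOS = TimeUnit.MILLISECONDS.toNanos(500);
        private final ReentrantLock lock = new ReentrantLock(true);  // fair mode, as noted above
        private long acquiredAt;  // guarded by the lock itself

        void lock() {
            lock.lock();
            acquiredAt = System.nanoTime();
        }

        void unlock() {
            long heldFor = System.nanoTime() - acquiredAt;
            lock.unlock();
            if (heldFor > REPORT_THRESHOLD_NANOS) {
                System.err.println(Thread.currentThread().getName() + " held the lock for "
                        + TimeUnit.NANOSECONDS.toMillis(heldFor) + " ms");
            }
        }
    }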
     
    markspace, Aug 11, 2011
    #17
  18. On 11.08.2011 03:30, Robert Stark wrote:
    > On Aug 11, 3:37 am, Patricia Shanahan<> wrote:
    >> On 8/10/2011 9:55 AM, Robert Klemme wrote:
    >>>[...]
    >>
    >> I interpreted the paragraph you quoted rather differently. My picture is
    >> that each back-end job runs for a long time, but during that time
    >> repeatedly acquires the contended resource, still only keeping it for
    >> order milliseconds at a time. I base this on "continuously acquire"
    >> rather than "continuously hold" the resource.


    I considered that as well but figured that then the starvation issue
    wouldn't be so dramatic. :)

    >> If that picture is correct, simple FIFO lock handling may be sufficient
    >> to let the client requests get through fast enough.


    I would have assumed that this is what Robert tried first and hit the
    starvation issue then.

    >> However, any time a lock becomes enough of an issue that it requires
    >> this sort of discussion, the first thing to do is to examine whether it
    >> can be refactored to reduce contention. Is the lock really covering only
    >> one resource? Can that resource be replicated? Can it be divided up?


    Absolutely. Often refactoring helps a great deal more. Maybe there is
    a solution where an immutable fraction of the state can be shared,
    reducing the need for locking. Or state can be copied and merged after
    manipulation. There are so many options...

    > You are right, Patricia.
    > "each back-end job runs for a long time, but during that time
    > repeatedly acquires the contended resource, still only keeping it for
    > order milliseconds at a time"


    Thanks for clarifying.

    > Sorry i failed to make it clear in the first time.


    No prob, I might actually have stumbled over my non-native readerness. :)

    > I think i have to do more test using a simple FIFO lock, maybe there's
    > something wrong in my code!


    Please see Mark's suggestion of using fair mode as well. You do pay a
    performance penalty for the fair mode. Whether that is relevant in your
    case needs to be measured of course.

    Kind regards

    robert

    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Aug 11, 2011
    #18
