Control flow design question

Discussion in 'Java' started by JannaB, Jun 22, 2009.

  1. JannaB

    JannaB Guest

    I have a sockets server, listening on a single port for connections of
    from 1 to 400 different terminals on the internet.

    These terminals are grouped into what I call "channels." That is,
    terminal 1,5 and 119 may be all from channel "A."

    I only want to process 1 channel at a time. In other words, if I get a
    signal from terminal 5, and one from 119, the latter must wait until
    the former is processed. (Incidentally, I don't write out from the
    sockets server; rather, it writes some JSON data that is ultimately
    transmitted back to the appropriate channel terminals.)

    Each connection to my sockets server will be making some JDBC inserts
    and updates.

    I am wondering, structurally, how best to handle this, realising that
    architectural questions such as this are best handled properly from
    the outset. Thank you, Janna B.
     
    JannaB, Jun 22, 2009
    #1

  2. Eric Sosman

    Eric Sosman Guest

    JannaB wrote:
    > I have a sockets server, listening on a single port for connections of
    > from 1 to 400 different terminals on the internet.
    >
    > These terminals are grouped into what I call "channels." That is,
    > terminal 1,5 and 119 may be all from channel "A."
    >
    > I only want to process 1 channel at a time. In other words, if I get a
    > signal from terminal 5, and one from 119, the latter must wait until
    > the former is processed. (Incidentally, I don't write out from the
    > sockets server; rather, it writes some JSON data that is ultimately
    > transmitted back to the appropriate channel terminals.)
    >
    > Each connection to my sockets server will be making some JDBC inserts
    > and updates.
    >
    > I am wondering, structurally, how best to handle this, realising that
    > architectural questions such as this are best handled properly from
    > the outset. Thank you, Janna B.


    My first thought would be to use a different incoming
    socket for connections on each channel, then write what
    amounts to an ordinary single-threaded one-request-at-a-time
    server for each channel's socket. But if you've got to use
    the same IP/port for all the clients, that won't work.
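    A minimal sketch of this first idea (class and method names are
    illustrative, not from the thread): each channel gets its own
    ServerSocket, served by a single thread that accepts and answers
    requests strictly one at a time, since accept() blocks until the
    current request is finished.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the one-socket-per-channel idea: one ServerSocket per channel,
// one thread per channel, one request served at a time. handle() is an
// illustrative placeholder; the real version would do the JDBC work and
// build the JSON reply.
class PerChannelServer implements Runnable {
    private final ServerSocket server;

    PerChannelServer(ServerSocket server) { this.server = server; }

    public void run() {
        while (!server.isClosed()) {
            // accept() blocks, so a channel never serves two requests at once
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println(handle(in.readLine()));
            } catch (IOException e) {
                if (server.isClosed()) return;   // normal shutdown
            }
        }
    }

    String handle(String request) {
        return "{\"echo\":\"" + request + "\"}";   // placeholder for JDBC + JSON
    }
}
```

    One such thread would then be started per channel, e.g.
    new Thread(new PerChannelServer(new ServerSocket(somePort))).start().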

    Another approach would be to maintain N queues of requests,
    one for each channel. A single thread accepts incoming
    requests on the single port, figures out which channel each
    belongs to, and appends each request to its channel's queue.
    Each queue's requests are processed by one dedicated worker
    thread.
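    The second approach might be sketched like this, assuming each request
    carries its channel id (ChannelDispatcher and its names are illustrative,
    not from the thread). The accept thread calls dispatch(); the first
    request for a channel lazily creates that channel's queue plus its one
    dedicated worker thread, which gives one-request-at-a-time, FIFO
    processing per channel.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Sketch of the N-queue scheme: one queue and one dedicated worker thread
// per channel. The handler would hold the JDBC insert/update logic.
class ChannelDispatcher<T> {
    private final ConcurrentMap<String, BlockingQueue<T>> queues = new ConcurrentHashMap<>();
    private final Consumer<T> handler;

    ChannelDispatcher(Consumer<T> handler) { this.handler = handler; }

    // Called by the single accept thread once the channel is known.
    void dispatch(String channel, T request) {
        queues.computeIfAbsent(channel, ch -> {
            BlockingQueue<T> q = new LinkedBlockingQueue<>();
            Thread worker = new Thread(() -> {
                try {
                    while (true) handler.accept(q.take());   // FIFO, one at a time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "worker-" + ch);
            worker.setDaemon(true);
            worker.start();
            return q;
        }).add(request);
    }
}
```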

    A disadvantage of the second approach is that it forces
    you to have the same number of worker threads as channels,
    which might not be convenient (the one-request-per-channel
    requirement means you can't have *more* workers than channels,
    but if the channel count is large compared to the "CPU count"
    you might well want to have fewer). Instead, you could have
    N queues as above but just M worker threads: A worker finds
    the queue with the oldest (or highest-priority) request at
    the front and processes that request, but during the processing
    it marks the queue "ineligible" so no other worker will look
    at it. When the queue's first request is completed, the queue
    becomes "eligible" again and the worker repeats its cycle.
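    The M-workers-over-N-queues variant can be sketched with an atomic
    "busy" flag per queue standing in for the eligible/ineligible marking
    (WorkerPool and its names are illustrative, not from the thread): a
    worker claims a queue, serves at most one request from it, then
    releases it, so no two workers ever serve the same channel at once.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Sketch of M workers over per-channel queues: the busy flag makes a queue
// "ineligible" while one of its requests is being processed, so per-channel
// requests stay serialized and in FIFO order.
class WorkerPool<T> {
    private static final class ChannelQueue<T> {
        final Queue<T> items = new ConcurrentLinkedQueue<>();
        final AtomicBoolean busy = new AtomicBoolean(false);
    }

    private final ConcurrentMap<String, ChannelQueue<T>> queues = new ConcurrentHashMap<>();
    private final Consumer<T> handler;

    WorkerPool(int workers, Consumer<T> handler) {
        this.handler = handler;
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(this::workLoop, "worker-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    void submit(String channel, T item) {
        queues.computeIfAbsent(channel, ch -> new ChannelQueue<>()).items.add(item);
    }

    private void workLoop() {
        while (true) {
            boolean didWork = false;
            for (ChannelQueue<T> q : queues.values()) {
                // claim the queue; skip it if another worker is serving it
                if (q.busy.compareAndSet(false, true)) {
                    try {
                        T item = q.items.poll();
                        if (item != null) {
                            handler.accept(item);
                            didWork = true;
                        }
                    } finally {
                        q.busy.set(false);   // queue becomes "eligible" again
                    }
                }
            }
            if (!didWork) {
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            }
        }
    }
}
```

    (A production version would block on a condition instead of polling
    with a short sleep, and pick the oldest or highest-priority queue
    rather than scanning in map order.)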

    --
    Eric Sosman
     
    Eric Sosman, Jun 22, 2009
    #2

  3. JannaB

    JannaB Guest

    Thanks Eric,

    As it turns out, I CAN use a different IP/Port for each socket, so I
    CAN go with your former idea. Seems like the most efficient AND
    safest. However, what to do about db connections? I must keep
    persistent connections; I cannot keep re-establishing connections. -
    Janna

    On Jun 22, 9:05 am, Eric Sosman <> wrote:
    > [...]
    >      My first thought would be to use a different incoming
    > socket for connections on each channel, then write what
    > amounts to an ordinary single-threaded one-request-at-a-time
    > server for each channel's socket.  But if you've got to use
    > the same IP/port for all the clients, that won't work.
     
    JannaB, Jun 22, 2009
    #3
  4. On 22.06.2009 15:05, Eric Sosman wrote:
    > JannaB wrote:
    >> [...]
    >> I only want to process 1 channel at a time. In other words, if I get a
    >> signal from terminal 5, and one from 119, the latter must wait until
    >> the former is processed.

    >
    > My first thought would be to use a different incoming
    > socket for connections on each channel, then write what
    > amounts to an ordinary single-threaded one-request-at-a-time
    > server for each channel's socket. But if you've got to use
    > the same IP/port for all the clients, that won't work.


    That does not exactly meet the requirement as stated above: the OP said
    that he wanted to process only one channel at a time. It may be, though,
    that he meant he wants to process only one event at a time _per channel_.

    > Another approach would be to maintain N queues of requests,
    > one for each channel. A single thread accepts incoming
    > requests on the single port, figures out which channel each
    > belongs to, and appends each request to its channel's queue.
    > Each queue's requests are processed by one dedicated worker
    > thread.


    Yep. That would be my favorite, although I have no idea how the channel
    is obtained from the connection.

    > A disadvantage of the second approach is that it forces
    > you to have the same number of worker threads as channels,
    > which might not be convenient (the one-request-per-channel
    > requirement means you can't have *more* workers than channels,
    > but if the channel count is large compared to the "CPU count"
    > you might well want to have fewer). Instead, you could have
    > N queues as above but just M worker threads: A worker finds
    > the queue with the oldest (or highest-priority) request at
    > the front and processes that request, but during the processing
    > it marks the queue "ineligible" so no other worker will look
    > at it. When the queue's first request is completed, the queue
    > becomes "eligible" again and the worker repeats its cycle.


    In that case I'd rather have M queues and M workers and put something
    into each enqueued event that allows detection of the channel. Then
    place events for multiple channels in one queue.
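    A fixed assignment of channels to M queues can be as simple as hashing
    the channel id onto a queue index (queueFor is an illustrative name,
    not from the thread):

```java
// Minimal sketch of a fixed channel-to-queue assignment: hash the channel id
// onto one of M queue indices. The mapping is stable, so all events of a
// channel land in the same queue and are served by that queue's single worker.
class ChannelAssignment {
    static int queueFor(String channel, int m) {
        return Math.floorMod(channel.hashCode(), m);   // always in [0, m)
    }
}
```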

    Kind regards

    robert


    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Jun 22, 2009
    #4
  5. In article
    <>,
    JannaB <> wrote:

    On Jun 22, 9:05 am, Eric Sosman <> wrote:

    > >      My first thought would be to use a different incoming
    > > socket for connections on each channel, then write what
    > > amounts to an ordinary single-threaded one-request-at-a-time
    > > server for each channel's socket.  But if you've got to use
    > > the same IP/port for all the clients, that won't work.

    >
    > As it turns out, I CAN use a different IP/Port for each socket, so I
    > CAN go with your former idea. Seems like the most efficient AND
    > safest. However, what to do about db connections? I must keep
    > persistent connections; I cannot keep re-establishing connections. -


    If you're going with Eric's first plan, a single socket for each
    channel that handles one request at a time, let each handler maintain
    its own persistent connection. Each channel handler can use the same
    or a different login for its individual connection. That user should
    generally have the fewest privileges needed to effect a transaction for
    a given channel.

    Alternatively, you might look at connection pooling:

    <http://java.sun.com/developer/onlineTraining/Programming/JDCBook/conpool.html>
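    A per-channel handler holding one persistent connection might look like
    this sketch (ChannelDbHandler and the connector supplier are illustrative
    assumptions; plug in your real URL and credentials, or a pool's
    getConnection, as the supplier):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.function.Supplier;

// Sketch of a per-channel handler that keeps one persistent JDBC connection
// and reopens it only if it has been closed or lost, instead of
// re-establishing a connection for every request.
class ChannelDbHandler {
    private final Supplier<Connection> connector;   // real data source goes here
    private Connection conn;

    ChannelDbHandler(Supplier<Connection> connector) { this.connector = connector; }

    synchronized Connection connection() throws SQLException {
        if (conn == null || conn.isClosed()) {
            conn = connector.get();   // reconnect only when needed
        }
        return conn;
    }
}
```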

    [Please don't top post.]
    --
    John B. Matthews
    trashgod at gmail dot com
    <http://sites.google.com/site/drjohnbmatthews>
     
    John B. Matthews, Jun 22, 2009
    #5
  6. On 22.06.2009 21:17, Eric Sosman wrote:
    > Robert Klemme wrote:
    >> In that case I'd rather have M queues and M workers and put something
    >> into the event that is enqueued which allows detection of the channel.
    >> Then place events for multiple channels in one queue.

    >
    > If I understand you, requests would be sprayed across all M
    > queues and thus be eligible for processing by any of the M workers.


    Yes, but not requests of a single channel. Those all go to a single
    queue! That's what I meant although I see that I did not state it
    explicitly.

    > But if worker W1 is processing a request from channel A, worker W2
    > must not start work on another channel A request until W1 finishes.


    That would be easily achieved by having fixed assignments of channels to
    queues. Of course, this could waste threads, but since we are talking
    about a "huge number of channels" scenario this is probably negligible.

    That could be solved with a more complex scheme but, as you said, I
    believe we have provided quite a bit of food for thought already.

    > Also, if it is important to process A's requests in the order they
    > arrived or in order by their priorities (the O.P. didn't address
    > the matter, but an ordering discipline of some kind is often wanted),
    > spraying A's events across multiple queues will make it harder
    > to keep their relative order intact.


    This is also solved by the fixed assignment of channels to queues.

    > At any rate, it seems we've given the O.P. sufficient food
    > for thought.


    Definitely. :) I'd also throw Doug Lea's book into the mix, which is
    an excellent source:
    http://gee.cs.oswego.edu/dl/cpj/

    Kind regards

    robert

    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Jun 22, 2009
    #6
  7. On 22.06.2009 23:44, Eric Sosman wrote:
    > Robert Klemme wrote:
    >> On 22.06.2009 21:17, Eric Sosman wrote:
    >>> Robert Klemme wrote:
    >>>> In that case I'd rather have M queues and M workers and put
    >>>> something into the event that is enqueued which allows detection of
    >>>> the channel. Then place events for multiple channels in one queue.
    >>>
    >>> If I understand you, requests would be sprayed across all M
    >>> queues and thus be eligible for processing by any of the M workers.

    >>
    >> Yes, but not requests of a single channel. Those all go to a single
    >> queue! That's what I meant although I see that I did not state it
    >> explicitly.

    >
    > I still don't get it. If channels A and B send all their
    > requests to queue Q1 for service by thread T1, and if only T1
    > processes requests on Q1, then in effect you've combined A and B
    > into one super-channel and applied the "No simultaneous service"
    > constraint to the combined channel instead of to A and B separately.
    > In particular, while T1 is busy with an A request, idle thread T2
    > cannot work on a B request -- a constraint not present in the
    > original problem.
    >
    > And if the threads *can* pluck work from any queue at all,
    > then ensuring that T1 and T2 don't both work on A requests at
    > the same time becomes difficult again.


    As I said, there may be some thread wastage, and different schemes can be
    realized. Since we are talking about the high-load situation, your
    concern is valid but negligible - at least if there is traffic on all
    channels most of the time. All threads will be busy most of the time
    anyway.

    Cheers

    robert

    --
    remember.guy do |as, often| as.you_can - without end
    http://blog.rubybestpractices.com/
     
    Robert Klemme, Jun 22, 2009
    #7
