How to design enterprise level real time event communication?

Discussion in 'Java' started by Timasmith, Oct 18, 2006.

  1. Timasmith

    Timasmith Guest

Suppose you were designing an app from scratch (I am using Java + a J2EE
server, but that's neither here nor there) which has a cluster of
application servers, 1000 distributed fat clients, and the need to push
events out to the client.

Normal processing is your typical CRUD access, but there are a
number of use cases where it would be rather advantageous to send
business events.

So let's say you are working on a widget and someone else depleted the
inventory. Instead of continuing to work on the widget, it would be
nice to be notified of the event so you can beat up on that person.

I can think of a couple of strategies off the top of my head:

    a) Client application has a thread constantly polling the server for
    any new events - say every minute (configurable)
    b) Client application has an open socket on the server with a thread
    blocking until something is written to the socket.
    c) Client application has an open port which the server can connect to
    and push events out.

    So my thoughts would be

a) Wasteful; with 1000 users perhaps that communication adds up -
regardless of whether the event queue is stored in the database or in one of
the application servers (or all app servers).

    b) Seems like the server might get overloaded with 1000 sockets open.

c) Not bad; perhaps a security concern that anyone could connect to
that port. The server also has to efficiently handle clients that
disconnected without telling it.
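For concreteness, the client side of option (b) could be as small as one daemon thread blocked on a read. A rough, self-contained sketch - the one-line wire format and the in-process stand-in server are invented for the demo:

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Option (b): the client keeps one socket open and a daemon thread
// blocks on readLine() until the server pushes an event down the wire.
public class PushClientSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the app server: accepts one client, pushes one event.
        ServerSocket server = new ServerSocket(0); // ephemeral port
        new Thread(() -> {
            try (Socket s = server.accept();
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("INVENTORY_DEPLETED widget-42");
            } catch (IOException ignored) {}
        }).start();

        BlockingQueue<String> events = new LinkedBlockingQueue<>();

        // The client side: connect once, then block until something arrives.
        Socket client = new Socket("localhost", server.getLocalPort());
        Thread listener = new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    events.put(line); // hand off to the UI/business layer
                }
            } catch (IOException | InterruptedException ignored) {}
        });
        listener.setDaemon(true);
        listener.start();

        System.out.println("got event: " + events.take());
        client.close();
        server.close();
    }
}
```

The blocking read costs nothing while idle; the real design questions are reconnect logic and what the event payload looks like.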

    Any other ideas?
    Timasmith, Oct 18, 2006
    #1

  2. "Timasmith" <> writes:

    > [...]
    > b) Client application has an open socket on the server with a thread
    > blocking until something is written to the socket.
    > [...]
    > b) Seems like the server might get overloaded with 1000 sockets open.


    I recommend b. 1000 sockets isn't much, and even if it is you can
    write a proxy that sits between the server and the clients and takes
    some load off the server.

Kicking off the push is usually the interesting part, as most DBs don't
provide much help, so your app has to do it itself.
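The app doing it itself can be as simple as a subscriber registry that the business layer walks right after an update commits. A sketch with invented names - in the real system each Consumer would wrap a connected client's socket:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-process event publisher: the service layer calls publish()
// once the DB transaction commits, and every registered client callback
// receives the event. Callbacks that throw are dropped from the registry.
public class EventPublisherSketch {
    private final List<Consumer<String>> subscribers =
            new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> c) {
        subscribers.add(c);
    }

    // Call this from the business layer after a successful update.
    public void publish(String event) {
        for (Consumer<String> c : subscribers) {
            try {
                c.accept(event);
            } catch (RuntimeException e) {
                subscribers.remove(c); // drop clients that died mid-push
            }
        }
    }

    public static void main(String[] args) {
        EventPublisherSketch bus = new EventPublisherSketch();
        bus.subscribe(e -> System.out.println("client A saw: " + e));
        bus.subscribe(e -> System.out.println("client B saw: " + e));
        bus.publish("INVENTORY_DEPLETED widget-42");
    }
}
```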

    m.
    Matt Atterbury, Oct 18, 2006
    #2

  3. Timasmith

    davout Guest

    How about using a JMS message queue?

Clients would push event messages onto the queue, while the server could
run multiple threaded queue readers that pull messages off the queue.

    "Timasmith" <> wrote in message
    news:...
> [...]
    davout, Oct 18, 2006
    #3
  4. Timasmith

    Patrick May Guest

    "Timasmith" <> writes:
> [...]


    This is exactly the type of problem that TIBCO Rendezvous
    (http://www.tibco.com) was designed to solve. The TIBCO approach,
    along with the MQ Series queueing approach, has been incorporated into
    JMS.

    I would also recommend looking at Jini (http://www.jini.org),
    both the Remote Event capabilities and the use of a JavaSpace for
    sharing events.

    Regards,

    Patrick

    ------------------------------------------------------------------------
    S P Engineering, Inc. | Large scale, mission-critical, distributed OO
    | systems design and implementation.
    | (C++, Java, Common Lisp, Jini, middleware, SOA)
    Patrick May, Oct 18, 2006
    #4
  5. Timasmith

    David Guest

    On Wed, 18 Oct 2006 03:43:42 UTC, "Timasmith" <> wrote:

> [...]


B is usually the most desirable, at least to me.

A is roughly the same as B, except that with B the client need not
actually poll for work: it is ready to accept new events, and when it
isn't it can ignore messages or disconnect from the server.

C isn't so much a security concern, as the same problem exists with B.
You must do a little extra work to ensure that the client and server
know how to communicate.

A computer with lots of sockets open for communication isn't necessarily
burdened. Most systems handle 10,000 without any problem at all. What
matters are the system limits: TCP/IP tuning that may cap the
number of open sockets, bandwidth requirements (10,000 idle sessions
are easy to support; 10 very active video streams may not be), and external
limits imposed by greedy vendors/administrators (Microsoft licensing).

Your typical 1-2 GHz PC with sufficient memory should handle just about
anything. Multiple applications shouldn't be a problem unless you
are trying to display a dozen TV channels on a dozen monitors.

Whenever you start to think about whether A, B, or C is the right
solution for a problem, make sure you consider bandwidth and other
concerns. Perhaps B takes more effort to code for some reason.
There are many ways to solve the problem. The most important
thing to remember is that if your problem seems too big for your
computer, or network of computers, you can probably restructure
the problem to be much simpler and use existing infrastructure.

    David
    David, Oct 18, 2006
    #5
  6. Timasmith

    Guest

Timasmith wrote:

> [...]


On our project, we implemented solution B, to have minimal
delay between the update in the database and the event sent to the
clients.

Depending on the number of business events and the acceptable delay
after a modification in the database, solution A may match your
problem.

Solution C is incompatible with our network topology (there is a firewall
between the server and the clients, for security reasons), so check that it
doesn't cause a problem in your case.
    , Oct 18, 2006
    #6
  7. Timasmith

    Timasmith Guest

    David wrote:
> [...]

So sockets are OK, but I am thinking that the server thread would block
while listening for communication on the socket.

Would I need 1000 threads, one listening on each socket?

    Trying to remember back to my Comp Sci days...

After perusing Google a bit, perhaps I could use a Jabber server/client
for this purpose - it might be easier than writing the code.
    Timasmith, Oct 18, 2006
    #7
  8. Timasmith

    Patrick May Guest

    "Timasmith" <> writes:
    > So sockets are ok but I am thinking that the server thread would
    > block while listening for communication on the socket.
    >
    > Would I need 1000 threads to each be listening on each socket?


    The Java NIO classes support the POSIX select() mechanism,
    allowing one thread to handle multiple connections. I suspect that
    you'd be better off using JMS or a JavaSpace, though.
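A minimal sketch of that NIO approach - one thread multiplexing every connection through a Selector instead of a thread per socket. The port, event text, and the blocking test client are invented for the example:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

// One thread, many client sockets: every accepted channel is registered
// with a Selector, and the loop wakes only when a socket needs service.
public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("localhost", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // A plain blocking client, just to exercise the event loop below.
        Thread clientThread = new Thread(() -> {
            try (Socket s = new Socket("localhost", port);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                System.out.println("client got: " + in.readLine());
            } catch (IOException ignored) {}
        });
        clientThread.start();

        // The single event-loop thread: accept, push one event, stop.
        boolean pushed = false;
        while (!pushed) {
            selector.select(); // blocks until some channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.write(ByteBuffer.wrap(
                            "INVENTORY_DEPLETED widget-42\n"
                                    .getBytes(StandardCharsets.UTF_8)));
                    ch.close();
                    pushed = true;
                }
            }
            selector.selectedKeys().clear();
        }
        clientThread.join();
        server.close();
        selector.close();
    }
}
```

A production loop would also register accepted channels for OP_READ/OP_WRITE and keep per-channel state, but the one-thread structure is the same.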

    Regards,

    Patrick

    ------------------------------------------------------------------------
    S P Engineering, Inc. | Large scale, mission-critical, distributed OO
    | systems design and implementation.
    | (C++, Java, Common Lisp, Jini, middleware, SOA)
    Patrick May, Oct 18, 2006
    #8
  9. Timasmith

    David Guest

    On Wed, 18 Oct 2006 14:11:06 UTC, "Timasmith" <> wrote:

> [...]

    You can have a small number of threads (even one) manage and process
    messages for all threads in your application. If you would rather
    use a separate thread/process for each socket that is allowable too.

Sockets can be blocking or non-blocking. To keep from blocking on
sockets without any data, use the non-blocking socket calls. select()
is the magic function that your management thread/process needs to
use. select() allows your application to look at all the sockets
and returns three sets of descriptors -- the sockets with data to read,
the sockets that can now be written to, and the sockets with
errors to report. *nix and Windows have slightly different
implementations of select() but they do the same thing.

It is perfectly legal to support TCP, UDP, and raw IP in the same
application, as well as multiple services. You don't usually
do that, since the application can get rather complex and you
must still handle all the incoming messages within the desired
time frame.

I have applications that are much like IM servers are today. They
are mostly idle, with bursts of activity. Sometimes the bursts
come from incoming sockets, and other times they are internal
states that trigger mass notifications.

    A well designed communications application can be ported, supported,
    and extended without much trouble. Just like in other areas, try
    to keep your communications separate from the rest of the logic.

    I started with very early Unix examples of multi-protocol and
    multi-service servers and moved on from there. I'm sure that you
    could find reasonable examples that are close to what you are trying
    to accomplish.

    David
    David, Oct 19, 2006
    #9
  10. Timasmith

    Guest

    Timasmith wrote:
> [...]


Hi, I would consider setting up UDP (or better, multicast) socket
communication, because you are in the typical one-to-many scenario;
if you adopt either multicast or UDP sockets you should easily
avoid the overhead. If you then have different classes of messages for
different profiles, I would adopt a publish/subscribe approach,
letting the client - after being authenticated and profiled - subscribe to
the event socket of interest for its profile :)
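To make the UDP variant concrete, a tiny loopback-only sketch with an invented payload (true multicast would instead use a MulticastSocket joining a group address, and note UDP gives no delivery guarantee, so events can be silently lost):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Connectionless push: the server fires one datagram per event at each
// subscriber and never holds an open socket per client.
public class UdpPushSketch {
    public static void main(String[] args) throws Exception {
        DatagramSocket subscriber = new DatagramSocket(); // ephemeral port
        int port = subscriber.getLocalPort();

        // "Server" side: one datagram per event, per subscribed client.
        byte[] payload = "INVENTORY_DEPLETED widget-42"
                .getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket server = new DatagramSocket()) {
            server.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), port));
        }

        // Client side: block until an event datagram arrives.
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        subscriber.receive(packet);
        System.out.println("event: " + new String(packet.getData(), 0,
                packet.getLength(), StandardCharsets.UTF_8));
        subscriber.close();
    }
}
```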

    I hope it helps

    Ciao
    ANdreaT

    --
    Andrea Tomasini - -
    Software Development Process Analysis and Consulting
    Blog: http://processinpractice.andreat.de
    Home: http://www.andreat.de
    , Oct 19, 2006
    #10
