Multithreaded and/or nonblocking server?

macapone

Hi Folks,

I'm trying to write a TCP server that will process and respond to
client XML queries. Nothing new there, plenty of examples on the web.
But here's the catch...

When a client connects, the connection is persistent, similar to a chat
server. The client "chats" a request and gets a response, chats
another request, etc. However, some requests are long-running, and
while waiting for a response to request #1, the client should be able
to send requests #2 and #3 and get responses (without having to open a
new connection to the server).

The only way to do this is to either fork or thread the request
processing code on the server.

Right now I'm using Net::Daemon as my server, although I'm certainly
open to alternatives. I've also looked at POE and a few other modules
out there. Basically, I need my handle_request() function to somehow
asynchronously process the request and then communicate the result back
to the client (via event handler? signal? fork? not sure...) when it
has finished processing. It seems as though all the examples out there
are geared towards forking/spawning/etc. when a connection is made, and
closing that connection when finished. But I'm looking to keep the
connection open and be able to send requests and get responses
asynchronously.

I'm hoping someone out there has done this and can point me in the
right direction. Any thoughts?

Thanks in advance!
MAC
 
robic0

> I'm trying to write a TCP server that will process and respond to
> client XML queries. Nothing new there, plenty of examples on the web.
[snip]

You have said a lot there. I haven't used Net::Daemon (Unix service?).
Are you writing the client and the server?

I don't know about Perl, but on the server side on a Win32 platform
a simple server model goes something like this:

A single, usually non-threaded routine waits for connect requests. Upon
connect, the user info (IP, etc.) is recorded into an array. Then a
service thread (the same function every time) is started to handle just
the requests from that client. When the client disconnects, ahhh, the
record of his connect data is expunged.

This happens via many synchronization layers, depending on how complex a
model.

Writing the client is trivial.
The general server thread CAN, however (if the connect routine is threaded
and queued), host multiple requests from the same IP, etc.; it complicates
things, but is still trivial.
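
A minimal Perl sketch of that accept-loop-plus-service-thread model, for
what it's worth (the line-oriented echo protocol and the port number here
are made up for illustration):

use strict;
use warnings;
use threads;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 1426,    # arbitrary example port
    Listen    => 5,
    Proto     => 'tcp',
    ReuseAddr => 1,
) or die "listen: $!";

# Single accept loop; each client gets its own detached service thread
# running the same function.
while (my $client = $listener->accept) {
    threads->create(\&service, $client)->detach;
}

sub service {
    my $client = shift;
    $client->autoflush(1);
    while (defined(my $line = <$client>)) {
        print $client "got: $line";    # one request per line
    }
    close $client;    # client disconnected; his record gets expunged here
}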

Doing this in Perl is laughable since it puts the onus on a fixed design dll
w32 layer to be expanded, having to matriculate through the perl interfaces.
Not something likely. Generality at best is what you can expect.

You should learn synchronization control programming, then use w32 (or
Unix) to create your own DLLs. After that step, you may shine on Perl
altogether.

-good luck-
 
Eric Schwartz

robic0 said:
> Doing this in Perl is laughable since it puts the onus on a fixed design dll
> w32 layer to be expanded, having to matriculate through the perl interfaces.
> Not something likely. Generality at best is what you can expect.

I normally avoid following up on trolls, but this was a thing of
beauty. I have rarely seen such an artful misuse of "matriculate".
Thanks for brightening my day a little.

-=Eric
 
macapone

I should have clarified the platform; it is Unix (Linux), although
Net::Daemon does have a perl-win32 port.

My sense is that this is a threading issue or an event issue. Perl is
lamentably remiss in real thread documentation, it being a (relatively)
new addition to the language.

I know that it can be done quite easily in Java (a proof-of-concept
written by a coworker worked well), but tragically, I only speak Perl
for now.
 
xhoster

> Hi Folks,
>
> I'm trying to write a TCP server that will process and respond to
> client XML queries. Nothing new there, plenty of examples on the web.
> But here's the catch...

Does the server only talk to one client, or does it have to deal with
multiple concurrent clients as well as multiple concurrent requests per
client?

> When a client connects, the connection is persistent, similar to a chat
> server. The client "chats" a request and gets a response, chats
> another request, etc. However, some requests are long-running, and
> while waiting for a response to request #1, the client should be able
> to send requests #2 and #3 and get responses (without having to open a
> new connection to the server).

Are the responses nice discrete packages? If the response to request #2
is 38 Gig and the server is trying to send that to the client, can the
server refuse to listen to any more requests from the client until the
client has finished reading response #2?

What happens when requests 2, 3, 4, etc. are just as long-running as
request 1, and pretty soon you have 1_987_235 pending requests?

> The only way to do this is to either fork or thread the request
> processing code on the server.

I'm not sure that that is the only way. I guess the range of options would
depend on the exact nature of the requests. Of course, you also have to
do something just about as challenging on the client, too.

> Right now I'm using Net::Daemon as my server, although I'm certainly
> open to alternatives. I've also looked at POE and a few other modules
> out there. Basically, I need my handle_request() function

Net::Daemon doesn't have a handle_request method, so it isn't exactly clear
what handle_request() means to you.

> to somehow
> asynchronously process the request and then communicate the result back
> to the client (via event handler? signal? fork? not sure...)

None of those are effective ways of communicating back to a client. Those
are ways of communicating within/among the server.

> when it
> has finished processing. It seems as though all the examples out there
> are geared towards forking/spawning/etc. when a connection is made, and
> closing that connection when finished.

You are probably looking in the wrong place. A lot of work in the OS,
TCP/IP, etc, has gone into enabling that strand of copper connecting your
computer to mine to look like an abstraction of a whole bunch of different
connections which we can make and break at will, run in "parallel", etc.
Now it seems like you are trying to take that abstraction, and use Perl to
throw away all of it back down to a virtual strand of copper wire, and then
re-abstract back up again. I don't think many people want to do that in
Perl.

Anyway, look into Threads::Running or Thread::Queue. Or just flocking
sockets.

Xho
 
X

xhoster

> I should have clarified the platform; it is Unix (Linux), although
> Net::Daemon does have a perl-win32 port.
>
> My sense is that this is a threading issue or an event issue. Perl is
> lamentably remiss in real thread documentation, it being a (relatively)
> new addition to the language.

What parts of the docs did you read and not understand?

> I know that it can be done quite easily in Java (a proof-of-concept
> written by a coworker worked well), but tragically, I only speak Perl
> for now.

Which specific features in Java did your cow orker use that you couldn't
figure out how to do in Perl?


#server
use strict;
use IO::Socket qw(:DEFAULT :crlf);
use IO::File;
use Storable;
use threads;
use threads::shared;

my $port = 1426;
my $socket = IO::Socket::INET->new(
    Proto     => 'tcp',
    LocalPort => $port,
    Listen    => 1,
    Reuse     => SO_REUSEADDR,
) or die "Can't create listen socket: $!";

my $conn;
my $conn_semaphore :shared;

my $x = 'x' x 10_000; # extra data to stress the locking/buffering mechanism

# Handles a single client connection for this demo.
{
    $conn = $socket->accept
        or ($! eq 'Interrupted system call' and redo)
        or die $!;
    $conn->autoflush(1);
    # Read requests off the socket as fast as they arrive; each one is
    # handled in its own detached thread, so a slow request doesn't
    # block the next read.
    while (my $request = Storable::fd_retrieve($conn)) {
        last unless @$request;
        threads->create("do_it", $request)->detach;
    }
    Storable::nstore_fd([], $conn) or die $!; # end-of-stream marker
    close $conn or die $!;
}

sub do_it {
    my $request = shift;
    # simulate long-running queries
    select undef, undef, undef, rand();
    my $answer = sqrt $request->[1];
    {
        # Serialize writers: only one thread may write to the shared
        # socket at a time, so replies don't interleave mid-message.
        lock $conn_semaphore;
        Storable::nstore_fd([ $request->[0], $answer, $x ], $conn) or die $!;
    }
}

__END__

# client
use strict;
use IO::Socket qw(:DEFAULT :crlf);
use IO::File;
use Storable;

my $port = 1426;
my $socket = IO::Socket::INET->new("localhost:$port")
    or die "Can't create socket: $!";

## Get ahead of the server
foreach (1..20) {
    ## Each request is a serial number followed by the query.
    Storable::nstore_fd([ $_, $_ ], $socket) or die $!;
}

## But not too much ahead
foreach (21..500) {
    Storable::nstore_fd([ $_, $_ ], $socket) or die $!;
    my $reply = Storable::fd_retrieve($socket);
    print "@$reply[0,1]\n";
}
Storable::nstore_fd([], $socket) or die $!; # end marker

# Drain whatever replies are still in flight until the server's
# end-of-stream marker (an empty list) comes back.
while (1) {
    my $reply = Storable::fd_retrieve($socket);
    last unless @$reply;
}


Xho
 
Csaba

> bwah-ha-ha-ha-ha-ha. what a maroon!


According to dictionary.com, "maroon" as a noun means

1. often Maroon
   1. A fugitive Black slave in the West Indies in the 17th and 18th
      centuries.
   2. A descendant of such a slave.
2. A person who is marooned, as on an island.


I fail to see the connection, especially since being abandoned on an island
usually means no Usenet access...
 
Rocco Caputo

> When a client connects, the connection is persistent, similar to a chat
> server. The client "chats" a request and gets a response, chats
> another request, etc. However, some requests are long-running, and
> while waiting for a response to request #1, the client should be able
> to send requests #2 and #3 and get responses (without having to open a
> new connection to the server).
>
> The only way to do this is to either fork or thread the request
> processing code on the server.

This is true if your server-side bottleneck is CPU. If you're
spending most of your time waiting on I/O, you can make that
non-blocking and do it in the server itself.
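
For instance, a minimal single-process select loop (a sketch only; it
assumes short, line-oriented requests that can be answered on the spot,
and the port number is arbitrary):

use strict;
use warnings;
use IO::Select;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 1426, Listen => 5, Proto => 'tcp', ReuseAddr => 1,
) or die "listen: $!";

my $sel = IO::Select->new($listener);
while (my @ready = $sel->can_read) {
    for my $fh (@ready) {
        if ($fh == $listener) {
            $sel->add($listener->accept);    # new client connection
        }
        elsif (defined(my $line = <$fh>)) {
            print $fh "ok: $line";           # answer without blocking others
        }
        else {
            $sel->remove($fh);               # client went away
            close $fh;
        }
    }
}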

> Right now I'm using Net::Daemon as my server, although I'm certainly
> open to alternatives. I've also looked at POE and a few other modules
> out there. Basically, I need my handle_request() function to somehow
> asynchronously process the request and then communicate the result back
> to the client (via event handler? signal? fork? not sure...) when it
> has finished processing.

I can vouch for POE supporting client NBIO in the server as well as
forking off processes to handle long-term jobs. Examples of both are
at http://poe.perl.org/?POE_Cookbook .

> But I'm looking to keep the
> connection open and be able to send requests and get responses
> asynchronously.

Your protocol will need enough header information that your client can
match responses back to its requests. Whatever you're doing will
probably fall apart if responses arrive out of order.
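
For example, a client-side sketch of matching replies to requests by ID
(the Request-ID header and the framing here are hypothetical; adapt to
whatever your protocol actually looks like):

use strict;
use warnings;
use IO::Socket qw(:DEFAULT :crlf);

my %pending;    # request id => callback to run on the reply

sub send_request {
    my ($sock, $id, $body, $callback) = @_;
    $pending{$id} = $callback;
    print $sock "Request-ID: $id$CRLF$body$CRLF$CRLF";
}

sub handle_reply {
    my ($id, $body) = @_;    # parsed out of the response header
    my $callback = delete $pending{$id}
        or warn "reply for unknown request $id\n";
    $callback->($body) if $callback;
}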
 
macapone

Hi Folks,

Haven't responded to this thread recently, partly because I wanted to
try out some of your suggestions and digest your comments. Here's what
I'm facing, to answer some of your questions:

The server will talk to multiple clients at once (this was why I was
using the Net::Daemon module); so, multiple concurrent clients and
multiple concurrent requests per client. 99.9% of the client requests
will be quickly answered (immediate response), so I won't end up with a
huge backlog of pending requests; however, there will be the 0.1% that
would take "a while" (meaning 5 to 30 seconds). Each client request
has a header, Request-ID: \d+, and the clients are responsible for
tracking their request-ids.

So, a given client will connect to the server and start sending
requests. Should a long-running request occur, the client needs to
still be able to send short requests and get their responses. That is
where I need some kind of non-blocking mechanism. Ideally, the client
can continue to send requests even as it is reading a large response,
but life would definitely go on if we didn't have that particular
ability. What is most important is for the client to be able to continue to
send requests and get responses while the server is still processing a
previous long request. (These "long" requests, by the way, are
database searches, compliments of MySQL and the DBI.)

The problem I face with threads and forking (well, the real problem
is my ignorance) is that once the server forks/threads/whatever, I
seem to lose the ability to talk back to the client. So, solutions
that don't thread/fork/whatever block on long requests, and solutions
that thread make me unable to write back to my socket. I'm sure
someone will recognize this as a common issue... again, my ignorance is
probably what's tripping me up here.

What I find odd, though, is that I can't find any other examples of
this functionality. There are thousands of examples about not blocking
between connections, but I haven't seen anything about not blocking on
requests for a given connection.

Any more insight? Let me know what more info I need to provide...
Thanks for all helpful comments!!!

MAC
 
xhoster

> Hi Folks,
>
> Haven't responded to this thread recently, partly because I wanted to
> try out some of your suggestions and digest your comments. Here's what
> I'm facing, to answer some of your questions:
>
> The server will talk to multiple clients at once (this was why I was
> using the Net::Daemon module); so, multiple concurrent clients and
> multiple concurrent requests per client.

I'd be tempted to use threads for the concurrent requests for one client,
but forking to handle the different client connections. Anyway, I'd focus
on the within-client concurrency first, and worry about the
multiple-client functionality later.
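
A rough sketch of that hybrid, assuming a line-oriented protocol (the
handle_request() here is a hypothetical stand-in for your real handler,
and a real version would lock a shared semaphore around the reply write,
as in my Storable example earlier):

use strict;
use warnings;
use threads;
use IO::Socket::INET;
use POSIX ':sys_wait_h';

$SIG{CHLD} = sub { 1 while waitpid(-1, WNOHANG) > 0 };    # reap children

my $listener = IO::Socket::INET->new(
    LocalPort => 1426, Listen => 5, Proto => 'tcp', ReuseAddr => 1,
) or die "listen: $!";

while (my $conn = $listener->accept) {
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {             # child: owns this one connection
        close $listener;
        while (defined(my $req = <$conn>)) {
            # one thread per request inside the child
            threads->create(sub { handle_request($conn, $req) })->detach;
        }
        exit 0;
    }
    close $conn;                 # parent: straight back to accept()
}

sub handle_request {
    my ($conn, $req) = @_;       # stand-in for the real work
    print $conn "done: $req";
}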

> 99.9% of the client requests
> will be quickly answered (immediate response), so I won't end up with a
> huge backlog of pending requests; however, there will be the 0.1% that
> would take "a while" (meaning 5 to 30 seconds).

Will the server know the fast ones from the slow ones just by looking at
them?

> Each client request
> has a header, Request-ID: \d+, and the clients are responsible for
> tracking their request-ids.
>
> So, a given client will connect to the server and start sending
> requests. Should a long-running request occur, the client needs to
> still be able to send short requests and get their responses. That is
> where I need some kind of non-blocking mechanism. Ideally, the client
> can continue to send requests even as it is reading a large response,
> but life would definitely go on if we didn't have that particular
> ability.

Receiving small requests while sending a large response definitely
complicates things. Receiving small requests and sending the responses to
those small requests while also sending a large response complicates it
even more, much more. Then the client has to keep track not only of "this
is the reply to request 3", but also "this is only part of the reply to
request 3; more parts are coming, and maybe not until I get parts of the
replies to other requests first".
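
One way to keep that sane is to give every frame the request ID, a
"final" flag, and one chunk of the reply; a sketch, building on Storable
as in my earlier example (the function names are made up):

use strict;
use warnings;
use Storable qw(nstore_fd fd_retrieve);

# server side: a big reply goes out as several tagged frames
sub send_chunk {
    my ($sock, $id, $final, $chunk) = @_;
    nstore_fd([ $id, $final, $chunk ], $sock) or die $!;
}

# client side: accumulate chunks per request ID until "final" arrives
my %partial;
sub receive_frame {
    my ($sock) = @_;
    my ($id, $final, $chunk) = @{ fd_retrieve($sock) };
    $partial{$id} .= $chunk;
    return unless $final;
    return ($id, delete $partial{$id});    # complete reply for $id
}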

> What is most important is for the client to be able to continue to
> send requests and get responses while the server is still processing a
> previous long request. (These "long" requests, by the way, are
> database searches, compliments of MySQL and the DBI.)

You may want to look into POE, specifically POE::Component::EasyDBI or
similar. I've never used it myself.

> The problem I face with threads and forking (well, the real problem
> is my ignorance) is that once the server forks/threads/whatever, I
> seem to lose the ability to talk back to the client.

Then you are probably doing something wrong. Without seeing some of the
code, I can't really comment on what that is. Did you try my server/client
I posted earlier that used Storable to serialize the messages? Can
you use that as a framework to demonstrate the problem you are seeing?

> So, solutions
> that don't thread/fork/whatever block on long requests, and solutions
> that thread make me unable to write back to my socket. I'm sure
> someone will recognize this as a common issue... again, my ignorance is
> probably what's tripping me up here.
>
> What I find odd, though, is that I can't find any other examples of
> this functionality. There are thousands of examples about not blocking
> between connections, but I haven't seen anything about not blocking on
> requests for a given connection.

I'm not sure what you mean here. How can you have an example of
non-blocking between connections that isn't inherently non-blocking on each
of the individual connections on which it is jointly non-blocking? The two
problems, as I see it, are, first, that when you intersperse non-blocking
calls on the same socket, the messages need to be self-contained and/or
self-delimiting; and second, that you need all the necessary components to
be non-blocking. If the Perl server's connection to the MySQL server is
blocking, it doesn't much matter that the Perl server's connection to the
Perl client is non-blocking.
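
On the self-delimiting point, the usual trick is a length prefix, so the
reader always knows exactly how many bytes belong to the current message.
A sketch (it assumes blocking sockets; a non-blocking version would also
have to cope with short reads):

use strict;
use warnings;

sub write_msg {
    my ($sock, $payload) = @_;
    # 4-byte big-endian length, then the payload itself
    print $sock pack('N', length $payload), $payload;
}

sub read_msg {
    my ($sock) = @_;
    read($sock, my $head, 4) == 4 or return;          # eof / error
    my $len = unpack 'N', $head;
    read($sock, my $payload, $len) == $len or return;
    return $payload;
}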

Xho
 
