eventmachine and threads

Julien Schmurfy

Hi,
I have an EventMachine Ruby server whose goal is to serve requests, but for each request it needs to make some requests to other servers. I know this could be achieved easily in a fully asynchronous way, but we currently have some restrictions which forbid that.

The current way we do it is by issuing the request and then suspending the thread, which is woken up when the answer arrives. The code that does the request looks like this (I have kept only the essentials; sendRequest simply builds the request and sends it on the EventMachine socket):

def sendSyncRequest(obj_name, cmd_name, args = {}, peer = nil)
  raise "Cannot freeze main thread, synchronous call cannot be made in eventmachine thread !!!" if EM::reactor_thread? # i.e. we are in the reactor loop

  # issue the call and put the thread to sleep, wake it up on return
  th = Thread.current
  return_value = nil
  error = nil

  req = sendRequest(obj_name, cmd_name, args, peer)

  req.errback do |err|
    error = err
    th.wakeup
    th.priority = 1
    th.priority = 0
  end

  req.callback do |ret|
    return_value = ret
    th.wakeup
    th.priority = 1
    th.priority = 0
  end

  # go to sleep until one of the callbacks wakes us up
  sleep

  # return the error if there was one, otherwise the result
  error || return_value
end



My problem is that 30% of the requests made this way take 200ms or a little more to return, even though the packet arrived on the network in less than 5ms (I used ngrep to sniff what happens on the wire and check the timings).

The priority change in each callback made our problem a little less severe by reducing that 30% figure, but I would like to find a better, real solution if one exists; I hate writing code without understanding why things happen, and that is exactly the case here...
I suppose the priority change reschedules the thread to the top of the run queue, but I doubt this hack will keep working under high concurrency.
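
For reference, the same wait can be expressed with a Queue so that no wakeup or priority juggling is needed at all. This is only a rough sketch, assuming sendRequest returns the same Deferrable as above; sendSyncRequestViaQueue is a hypothetical name:

require 'thread'

# Sketch only: the callbacks push the outcome onto a Queue and the calling
# thread blocks in Queue#pop until something arrives, so a callback that
# fires before the caller starts waiting is never lost.
def sendSyncRequestViaQueue(obj_name, cmd_name, args = {}, peer = nil)
  raise "synchronous call cannot be made in the eventmachine thread" if EM::reactor_thread?

  queue = Queue.new
  req = sendRequest(obj_name, cmd_name, args, peer)

  req.callback { |ret| queue.push([:ok, ret]) }
  req.errback  { |err| queue.push([:error, err]) }

  status, value = queue.pop   # blocks here until one of the callbacks runs
  raise "request failed: #{value.inspect}" if status == :error
  value
end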


I hope someone can help.
 
Chuck Remes

Julien Schmurfy said:
(question and sendSyncRequest code quoted above)

Try using EM#defer (which you "kind of" duplicated with your code above). It uses an internal thread pool (20 threads by default) to run your synchronous work. When it completes, it executes a callback.

Here's an example:

def receive_data(chunk)
  sync_operation = proc {
    # do the synchronous/blocking operation here;
    # its return value is handed to the callback
  }

  callback = proc { |result|
    # called on the reactor thread when sync_operation completes
  }

  EM.defer(sync_operation, callback)
end
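
To tie this to the original question, here is a hedged sketch of deferring the existing blocking helper from a connection's receive_data; the object and command names are placeholders, and the callback runs back on the reactor thread:

def receive_data(chunk)
  # run the blocking call on a defer-pool thread (EM::reactor_thread? is
  # false there, so sendSyncRequest's guard does not trigger)
  blocking_call = proc { sendSyncRequest("some_object", "some_command") }

  # receives blocking_call's return value, executed on the reactor thread
  reply = proc { |result| send_data(result.to_s) }

  EM.defer(blocking_call, reply)
end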


EM.defer uses threads internally. On MRI Ruby these are green threads (cooperatively multitasked), whereas JRuby gives you native threads (which can preempt). If you have hard timing requirements, JRuby is probably a better bet since you can get real parallelism. Of course, this doesn't matter if your sync operation is merely blocking on an external resource, though it would matter a lot if it were compute-bound.
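
One related knob, assuming a recent enough EventMachine that exposes it: the size of that internal pool is configurable through EM.threadpool_size (the 20-thread default mentioned above), which helps when many requests can block at the same time. A minimal sketch:

require 'eventmachine'

# Sketch: enlarge the defer pool. The pool is created lazily on the first
# EM.defer call, so the size must be set before that happens.
EM.threadpool_size = 50

EM.run do
  # start servers here; EM.defer calls from now on use the larger pool
end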

I hope this helps.

cr
 
Julien Schmurfy

George said:
Have you considered EventMachine::Deferrable?

http://eventmachine.rubyforge.org/EventMachine/Deferrable.html

Using threads with EventMachine kind of defeats the purpose of lightweight concurrency.

George

The object returned by sendRequest ("req") is a Deferrable. The reason we need to keep the synchronous calls is that they are made in functions which look like:

def do_something()
  ret = sendRequestOne()
  ret2 = sendRequestTwo(ret)
  ...
end

We could rewrite them to be asynchronous, but making the current implementation work is way faster (and takes less time too).
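
For comparison, a rough sketch of what such an asynchronous rewrite could look like, assuming sendRequestOne and sendRequestTwo return Deferrable objects like sendRequest does (do_something_async is a hypothetical name); each step nests inside the previous callback and failures have to be forwarded by hand:

def do_something_async
  result = EM::DefaultDeferrable.new

  req1 = sendRequestOne()
  req1.errback { |err| result.fail(err) }
  req1.callback do |ret|
    req2 = sendRequestTwo(ret)
    req2.errback { |err| result.fail(err) }
    req2.callback do |ret2|
      # further steps keep nesting here
      result.succeed(ret2)
    end
  end

  result # callers attach their own callback/errback to this Deferrable
end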
 
