logging.SocketHandler connections

oj

Hi folks,

I'm writing some fairly simple logging code that makes use of the
SocketHandler.

The example server code works fine, as expected.
(http://docs.python.org/lib/network-logging.html)

However, initially, I had tried it with a server that closed the
connection after receiving each record, and the SocketHandler doesn't
seem to behave as advertised.

My test script was simply this:

#!/usr/bin/python

import logging
import logging.handlers
import time
import sys

port = 12345

handler = logging.handlers.SocketHandler('localhost', port)
l = logging.getLogger("my-logger")
l.addHandler(handler)
l.addHandler(logging.StreamHandler(sys.stdout))
l.setLevel(logging.DEBUG)

for i in xrange(10):
    l.info("Log message %i", i)
    time.sleep(1)


My test server received messages 0, 3, 6 and 9.

Doing a packet capture with Wireshark confirmed that it only made 4
connections.

The SocketHandler documentation says that it will re-establish the
connection if it has been closed. After a bit of digging, I found a
patch had been submitted and accepted that made it back off
exponentially. However, this should be time based. Even if I make my
sleep here 30 seconds, my server still only receives messages 0, 3, 6
and 9.
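
For reference, the backoff seems to be controlled by a few attributes on
the handler itself; the names and defaults below are what I can see in
logging/handlers.py, so treat this as a sketch rather than gospel:

import logging.handlers

handler = logging.handlers.SocketHandler('localhost', 12345)
handler.retryStart = 1.0    # delay before the first reconnection attempt
handler.retryFactor = 2.0   # each failure multiplies the delay by this
handler.retryMax = 30.0     # the delay is capped at this many seconds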

I'm concerned that if a connection is lost at some point, I will
always lose at least 2 log messages.

Is there some reason for this that I am not aware of? Have I
misunderstood something, or is this a bug?

-Oliver
 
Vinay Sajip

However, initially, I had tried it with a server that closed the
connection after receiving each record, and the SocketHandler doesn't
seem to behave as advertised.

My test script was simply this:
[snip]
The SocketHandler documentation says that it will re-establish the
connection if it has been closed. After a bit of digging, I found a
patch had been submitted and accepted that made it back off
exponentially. However, this should be time based. Even if I make my
sleep here 30 seconds, my server still only receives messages 0, 3, 6
and 9.

I'm concerned that if a connection is lost at some point, I will
always lose at least 2 log messages.

Is there some reason for this that I am not aware of? Have I
misunderstood something, or is this a bug?

Not sure yet - can you please post your test server code? Feel free to
add it as a bug on bugs.python.org, with the code attached, and I'll
look into it. Include "logging" in the subject/summary line.

Thanks,

Vinay Sajip
 
Dennis Lee Bieber

The SocketHandler documentation says that it will re-establish the
connection if it has been closed. After a bit of digging, I found a
patch had been submitted and accepted that made it back off
exponentially. However, this should be time based. Even if I make my
sleep here 30 seconds, my server still only receives messages 0, 3, 6
and 9.

I'm concerned that if a connection is lost at some point, I will
always lose at least 2 log messages.

Is there some reason for this that I am not aware of? Have I
misunderstood something, or is this a bug?

May not be applicable, but don't normal sockets have a delay before
re-use? What happens when your sleep is, say, 2+ minutes? Could the
logging module NOT be using the reuseaddr (or however it is spelt)
option?
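
For reference, a bare-socket sketch of the option I mean, assuming it
even applies to what the handler is doing:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# SO_REUSEADDR lets an address be re-bound without waiting out TIME_WAIT
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
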
--
Wulfraed Dennis Lee Bieber KD6MOG
HTTP://wlfraed.home.netcom.com/
(Bestiaria Support Staff)
HTTP://www.bestiaria.com/
 
oj

However, initially, I had tried it with a server that closed the
connection after receiving each record, and the SocketHandler doesn't
seem to behave as advertised.
My test script was simply this:
[snip]
The SocketHandler documentation says that it will re-establish the
connection if it has been closed. After a bit of digging, I found a
patch had been submitted and accepted that made it back off
exponentially. However, this should be time based. Even if I make my
sleep here 30 seconds, my server still only receives messages 0, 3, 6
and 9.
I'm concerned that if a connection is lost at some point, I will
always lose at least 2 log messages.
Is there some reason for this that I am not aware of? Have I
misunderstood something, or is this a bug?

Not sure yet - can you please post your test server code? Feel free to
add it as a bug on bugs.python.org, with the code attached, and I'll
look into it. Include "logging" in the subject/summary line.

Thanks,

Vinay Sajip


Here is the server code. Pretty much directly copied from the example,
aside from not having the handler loop forever, and queuing the
records instead of dealing with them directly.

After further investigation, running the client with a long timeout,
without the server, so that every connection will fail, produces
results much closer to what I would expect. Connections attempted for
each message initially, but not for all of the later messages as the
retry time increases.

The point is kinda moot now, since I guess not closing the connection
is the 'right way' to do this, but I'm still interested in why I see
this behaviour when the server closes the connection.

#!/usr/bin/python

import thread
import cPickle
import logging
import logging.handlers
import SocketServer
import select
import struct
import sys
import time

port = 12345

queue = []
queue_lock = thread.allocate_lock()

class LogRecordStreamHandler(SocketServer.StreamRequestHandler):

    def handle(self):
        chunk = self.connection.recv(4)

        if len(chunk) < 4:
            print "failed to get 4 bytes in first read"
            return

        slen = struct.unpack(">L", chunk)[0]
        chunk = self.connection.recv(slen)

        while len(chunk) < slen:
            chunk = chunk + self.connection.recv(slen - len(chunk))

        obj = self.unPickle(chunk)
        record = logging.makeLogRecord(obj)

        queue_lock.acquire()
        queue.insert(0, record)
        queue_lock.release()

    def unPickle(self, data):
        return cPickle.loads(data)

class LogRecordSocketReceiver(SocketServer.ThreadingTCPServer):

    def __init__(self, host='localhost',
                 port=port, handler=LogRecordStreamHandler):
        SocketServer.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort

def listen():
    server = LogRecordSocketReceiver()
    server.serve_until_stopped()

def main():
    global queue, queue_lock

    logger = logging.getLogger()
    logger.addHandler(logging.StreamHandler(sys.stdout))

    while True:
        record = None
        locked = queue_lock.acquire(0)

        if locked:
            if len(queue):
                record = queue.pop()
            queue_lock.release()

        if record:
            logger.handle(record)
        else:
            time.sleep(1)

if __name__ == '__main__':
    thread.start_new_thread(listen, ())
    main()
 
Vinay Sajip

Here is the server code. Pretty much directly copied from the example,
aside from not having the handler loop forever, and queuing the
records instead of dealing with them directly.

After further investigation, running the client with a long timeout,
without the server, so that every connection will fail, produces
results much closer to what I would expect. Connections attempted for
each message initially, but not for all of the later messages as the
retry time increases.

The point is kinda moot now, since I guess not closing the connection
is the 'right way' to do this, but I'm still interested in why I see
this behaviour when the server closes the connection.

I've investigated this and the issue appears not to be related to
closing connections. Your server code differs from the example in the
docs in one crucial way: there is a while loop which you have left out
in the handle() function, which deals with multiple logging events
received in one packet. Add this back in, and all 9 events are
received.

def handle(self):
    while 1:
        chunk = self.connection.recv(4)

        if len(chunk) < 4:
            break

        slen = struct.unpack(">L", chunk)[0]
        chunk = self.connection.recv(slen)

        while len(chunk) < slen:
            chunk = chunk + self.connection.recv(slen - len(chunk))

        obj = self.unPickle(chunk)
        record = logging.makeLogRecord(obj)
        queue_lock.acquire()
        queue.insert(0, record)
        queue_lock.release()

So it appears that due to buffering, 3 socket events are sent in each
packet sent over the wire. You were only processing the first of each
set of three, viz. nos. 0, 3, 6 and 9. Mystery solved, it appears!
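
To spell out the framing: each event goes over the wire as a 4-byte
big-endian length followed by the pickled record dict, so several events
can arrive in a single recv() and have to be split apart by the reader. A
rough, stream-based sketch of doing that (the function name is mine, not
from the docs example):

import cPickle
import struct

def read_records(sock):
    # Yield unpickled record dicts from a socket carrying length-prefixed
    # pickles, however the records happen to be split across packets.
    buf = ""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while len(buf) >= 4:
            slen = struct.unpack(">L", buf[:4])[0]
            if len(buf) < 4 + slen:
                break          # wait for the rest of this record
            yield cPickle.loads(buf[4:4 + slen])
            buf = buf[4 + slen:]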

Regards,


Vinay Sajip
 
oj

On Nov 16, 2:31 pm, Vinay Sajip wrote:
Here is the server code. Pretty much directly copied from the example,
aside from not having the handler loop forever, and queuing the
records instead of dealing with them directly.
After further investigation, running the client with a long timeout,
without the server, so that every connection will fail, produces
results much closer to what I would expect. Connections attempted for
each message initially, but not for all of the later messages as the
retry time increases.
The point is kinda moot now, since I guess not closing the connection
is the 'right way' to do this, but I'm still interested in why I see
this behaviour when the server closes the connection.

I've investigated this and the issue appears not to be related to
closing connections. Your server code differs from the example in the
docs in one crucial way: there is a while loop which you have left out
in the handle() function, which deals with multiple logging events
received in one packet. Add this back in, and all 9 events are
received.

def handle(self):
    while 1:
        chunk = self.connection.recv(4)

        if len(chunk) < 4:
            break

        slen = struct.unpack(">L", chunk)[0]
        chunk = self.connection.recv(slen)

        while len(chunk) < slen:
            chunk = chunk + self.connection.recv(slen - len(chunk))

        obj = self.unPickle(chunk)
        record = logging.makeLogRecord(obj)
        queue_lock.acquire()
        queue.insert(0, record)
        queue_lock.release()

So it appears that due to buffering, 3 socket events are sent in each
packet sent over the wire. You were only processing the first of each
set of three, viz. nos. 0, 3, 6 and 9. Mystery solved, it appears!

Regards,

Vinay Sajip

I don't think buffering explains the behaviour I was seeing.

The server was logging message 0 about the same time the client logged
message 0, which means that the information was sent then and there.
Messages 1 and 2 hadn't yet been sent, so they couldn't be in the same
packet.

If buffering was the cause, I'd expect to see message 0 logged on the
server at the same time message 2 was logged on the client.

-Oliver
 
Vinay Sajip

[snip]

I don't think buffering explains the behaviour I was seeing.

Can you confirm that if you add the while loop back in, all messages
are seen by the server? It worked for me.

Vinay Sajip
 
oj

Can you confirm that if you add the while loop back in, all messages
are seen by the server? It worked for me.

Yes, it works in that case. This was meant to be implied by my earlier
messages, but on reflection, isn't obvious.

As I said previously, the point is sorta moot, but if you do have an
explanation as to why it behaves that way, or how I'm causing it, I
would be interested.

-Oliver
 
Vinay Sajip

Yes, it works in that case. This was meant to be implied by my earlier
messages, but on reflection, isn't obvious.

As I said previously, the point is sorta moot, but if you do have an
explanation as to why it behaves that way, or how I'm causing it, I
would be interested.

-Oliver

How can you be sure that buffering is not happening at the server end?

Vinay
 
oj

How can you be sure that buffering is not happening at the server end?

Vinay

Because the server closes the connection after receiving the first
message, which is before the client has even sent the second message.

A packet capture reveals something more interesting, though.
After receiving the first message, the server sends a FIN/ACK back to
the client, and the client responds with an ACK. However, the client
doesn't send a FIN/ACK back; it tries to maintain the connection, and
at the next message it sends a packet on the same connection (with the
PSH/ACK flags set), to which the server responds with an RST, which is
reasonable, since as far as it's concerned, that connection is closed.

So the client has tried, and failed to send the second message.

The client makes no attempt to send the third message.

So basically the client (the SocketHandler) isn't closing the
connection when the server closes it, which isn't necessarily a
problem, but there is still at least one message that the client logs
that generates no network traffic at all.
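
Digging around in logging/handlers.py, there is also a closeOnError
attribute on the handler which, if set, makes it drop its socket after a
failed send so that the next record triggers a reconnect. I haven't
tested it against my closing server, so this is just a sketch of what I
mean:

import logging.handlers

handler = logging.handlers.SocketHandler('localhost', 12345)
# if a send fails, close the socket so the next emit() reconnects
handler.closeOnError = 1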

These tests were done with the time between messages set at 2 seconds.
 
