with timeout(...):


Nick Craig-Wood

Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?
 

James Stroud

Nick said:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

From my experiments with timeouts I suspect it won't be possible to
implement it perfectly in python 2.5 - maybe we could add some extra
core infrastructure to Python 3k to make it possible?

I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?

I'm not sure if it's exactly what you are looking for, but I wrote a
timer class that does something like you describe:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.

This works like a screen saver, etc.

James
 

Diez B. Roggisch

Nick said:
Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Cross-platform isn't the issue here - reliability is. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff - which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.
Which I'd take as a personal insult, because in that rolled-back timeframe
I will possibly be proposing to my future wife or something...

Diez
 

irstas

But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff - which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.

Hey hey, isn't the Python mantra that we're all adults here? It'd
be the programmer's responsibility to use only code that has no
side effects. I certainly can ensure that no side effects occur in the
following code: 1+2. I didn't even need a time machine to do that :p
Or the primitive could be implemented so that Python throws a
TimeoutException at the earliest opportunity. Then one could write
except-blocks which deal with rolling back any undesirable side
effects. (I'm not saying such a timeout feature could be implemented
in Python, but it could be made by modifying the CPython implementation.)
 

Klaas

Did anyone write a contextmanager implementing a timeout for
python2.5?

I'd love to be able to write something like

with timeout(5.0) as exceeded:
    some_long_running_stuff()
if exceeded:
    print "Oops - took too long!"

And have it work reliably and in a cross platform way!

Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        self.timer.cancel()  # don't let the timer fire after the block exits
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the C API call PyThreadState_SetAsyncExc)

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.
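
The PyThreadState_SetAsyncExc trick can be demonstrated on its own (a sketch
in modern Python 3 syntax; the exception is only delivered once the target
thread is back executing bytecode, so the timing is best-effort):

```python
import ctypes
import threading
import time

class TimeoutException(Exception):
    pass

result = {}

def spin():
    # Sleep in small slices so the interpreter regains control often
    # enough to deliver the asynchronous exception.
    try:
        while True:
            time.sleep(0.01)
    except TimeoutException:
        result["interrupted"] = True

t = threading.Thread(target=spin)
t.start()
time.sleep(0.1)
# Ask the interpreter to raise TimeoutException in thread t.
ctypes.pythonapi.PyThreadState_SetAsyncExc(
    ctypes.c_long(t.ident), ctypes.py_object(TimeoutException))
t.join(2)
```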

-Mike
 

Nick Craig-Wood

Hey hey, isn't the Python mantra that we're all adults here?

Yes, the timeout could happen at any time, but at a defined moment in
the Python bytecode interpreter's life, so it wouldn't mess up its
internal state.
It'd be the programmer's responsibility to use only code that has no
side effects. I certainly can ensure that no side effects occur in
the following code: 1+2. I didn't even need a time machine to do
that :p Or the primitive could be implemented so that Python throws
a TimeoutException at the earliest opportunity. Then one could
write except-blocks which deal with rolling back any undesirable
side effects. (I'm not saying such a timeout feature could be
implemented in Python, but it could be made by modifying the
CPython implementation.)

I don't think timeouts would be any more difficult than using threads.

It is impossible to implement reliably at the moment though because it
is impossible to kill one thread from another thread. There is a
ctypes hack to do it, which sort of works... It needs some core
support I think.
 

Nick Craig-Wood

James Stroud said:
I'm guessing your question is far over my head, but if I understand it,
I'll take a stab:

First, did you want the timeout to kill the long running stuff?

Yes.

I'm not sure if it's exactly what you are looking for, but I wrote a
timer class that does something like you describe:

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/464959

Probably you can do whatever you want upon timeout by passing the
appropriate function as the "expire" argument.

I don't think your code implements quite what I meant!
 

Hendrik van Rooyen

Cross-platform isn't the issue here - reliability is. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

But to make that work reliably, it has to be ensured that no side effects
occur while in some_long_running_stuff - which doesn't only extend to
Python itself, but also to external modules and systems (file writing, network
communications...). Which can't be done, unless you use a time machine.
Which I'd take as a personal insult, because in that rolled-back timeframe
I will possibly be proposing to my future wife or something...

how does the timed callback in the Tkinter stuff work - in my experience so
far it seems that it does the timed callback quite reliably...

probably has to do with the fact that the mainloop runs as a stand alone
process, and that you set the timer up when you do the "after" call.

so it probably means that to emulate that kind of thing you need a
separate "thread" that is in a loop to monitor the timer's expiry, that
somehow gains control from the "long running stuff" periodically...

so Diez is probably right that the way to go is to put the timer in the
Python interpreter loop, as it's the only thing around that you could
more or less trust to run all the time.

But then it will not read as nicely as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly. But it would be useful to be able to do this without
messing with threads and GUIs and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?
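
For what it's worth, that callback shape can be roughly approximated today
with threading.Timer (the names setup_callback/cancel_callback are just the
hypothetical ones from the sketch above, in modern Python 3 syntax) - with
the big caveat that the callback runs in a separate timer thread, so it can
flag or report the timeout but cannot interrupt the long running stuff:

```python
import threading

def setup_callback(error_routine, timeout_in_milliseconds):
    # Schedule error_routine to run in a timer thread after the timeout.
    timer = threading.Timer(timeout_in_milliseconds / 1000.0, error_routine)
    timer.start()
    return timer

def cancel_callback(timer):
    # Cancel the pending callback if it hasn't fired yet.
    timer.cancel()

# The work finishes in time, so the callback never fires.
fired = []
timer = setup_callback(lambda: fired.append(True), 500)
# ... long_running_stuff would go here, finishing within 0.5s ...
cancel_callback(timer)
```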

- Hendrik
 

Nick Craig-Wood

Klaas said:
Doubt it. But you could try:

import threading

class TimeoutException(BaseException):
    pass

class timeout(object):
    def __init__(self, limit_t):
        self.limit_t = limit_t
        self.timer = None
        self.timed_out = False
    def __nonzero__(self):
        return self.timed_out
    def __enter__(self):
        self.timer = threading.Timer(self.limit_t, ...)
        self.timer.start()
        return self
    def __exit__(self, exc_c, exc, tb):
        self.timer.cancel()  # don't let the timer fire after the block exits
        if exc_c is TimeoutException:
            self.timed_out = True
            return True  # suppress exception
        return False  # raise exception (maybe)

where '...' is a ctypes call to raise the given exception in the
current thread (the C API call PyThreadState_SetAsyncExc)

Definitely not fool-proof, as it relies on thread switching. Also,
lock acquisition can't be interrupted, anyway. Also, this style of
programming is rather unsafe.

But I bet it would work frequently.

Here is my effort... You'll note from the comments that there are
lots of tricky bits.

It isn't perfect though as it sometimes leaves behind threads (see the
FIXME). I don't think it crashes any more though!

------------------------------------------------------------

"""
General purpose timeout mechanism not using alarm(), ie cross platform

Eg

from timeout import Timeout, TimeoutError

def might_infinite_loop(arg):
    while 1:
        pass

try:
    Timeout(10, might_infinite_loop, "some arg")
except TimeoutError:
    print "Oops took too long"
else:
    print "Ran just fine"

"""

import threading
import time
import sys
import ctypes
import os

class TimeoutError(Exception):
    """Thrown on a timeout"""

PyThreadState_SetAsyncExc = ctypes.pythonapi.PyThreadState_SetAsyncExc
_c_TimeoutError = ctypes.py_object(TimeoutError)

class Timeout(threading.Thread):
    """
    A general purpose timeout class

    timeout is int/float in seconds
    action is a callable
    *args, **kwargs are passed to the callable
    """
    def __init__(self, timeout, action, *args, **kwargs):
        threading.Thread.__init__(self)
        self.action = action
        self.args = args
        self.kwargs = kwargs
        self.stopped = False
        self.exc_value = None
        self.end_lock = threading.Lock()
        # start subtask
        self.setDaemon(True)  # FIXME this shouldn't be needed but is,
                              # indicating sub tasks aren't ending
        self.start()
        # Wait for subtask to end naturally
        self.join(timeout)
        # Use end_lock to kill the thread in a non-racy
        # fashion.  (Using isAlive is racy.)  Poking exceptions into
        # the Thread cleanup code isn't a good idea either
        if self.end_lock.acquire(False):
            # gained end_lock => sub thread is still running,
            # so kill it with a TimeoutError
            self.exc_value = TimeoutError()
            PyThreadState_SetAsyncExc(self.id, _c_TimeoutError)
            # release the lock so it can progress into thread cleanup
            self.end_lock.release()
        # shouldn't block since we've killed the thread
        self.join()
        # re-raise any exception
        if self.exc_value:
            raise self.exc_value

    def run(self):
        self.id = threading._get_ident()
        try:
            self.action(*self.args, **self.kwargs)
        except:
            self.exc_value = sys.exc_value
        # only end if we can acquire the end_lock
        self.end_lock.acquire()

if __name__ == "__main__":

    def _spin(t):
        """Spins for t seconds"""
        start = time.time()
        end = start + t
        while time.time() < end:
            pass

    def _test_time_limit(name, expecting_time_out, t_limit, fn, *args, **kwargs):
        """Test Timeout"""
        start = time.time()

        if expecting_time_out:
            print "Test", name, "should timeout"
        else:
            print "Test", name, "shouldn't timeout"

        try:
            Timeout(t_limit, fn, *args, **kwargs)
        except TimeoutError, e:
            if expecting_time_out:
                print "Timeout generated OK"
            else:
                raise RuntimeError("Wasn't expecting TimeoutError Here")
        else:
            if expecting_time_out:
                raise RuntimeError("Was expecting TimeoutError Here")
            else:
                print "No TimeoutError generated OK"

        elapsed = time.time() - start
        print "That took", elapsed, "seconds for timeout of", t_limit

    def test():
        """Test code"""

        # no nesting
        _test_time_limit("simple #1", True, 5, _spin, 10)
        _test_time_limit("simple #2", False, 10, _spin, 5)

        # 1 level of nesting
        _test_time_limit("nested #1", True, 4, _test_time_limit,
                         "nested #1a", True, 5, _spin, 10)
        _test_time_limit("nested #2", False, 6, _test_time_limit,
                         "nested #2a", True, 5, _spin, 10)
        _test_time_limit("nested #4", False, 6, _test_time_limit,
                         "nested #4a", False, 10, _spin, 5)

        # 2 levels of nesting
        _test_time_limit("nested #5", True, 3, _test_time_limit,
                         "nested #5a", True, 4, _test_time_limit,
                         "nested #5b", True, 5, _spin, 10)
        _test_time_limit("nested #9", False, 7, _test_time_limit,
                         "nested #9a", True, 4, _test_time_limit,
                         "nested #9b", True, 5, _spin, 10)
        _test_time_limit("nested #10", False, 7, _test_time_limit,
                         "nested #10a", False, 6, _test_time_limit,
                         "nested #10b", True, 5, _spin, 10)
        _test_time_limit("nested #12", False, 7, _test_time_limit,
                         "nested #12a", False, 6, _test_time_limit,
                         "nested #12b", False, 10, _spin, 5)

        print "All tests OK"

    test()
 

Nick Craig-Wood

Hendrik van Rooyen said:
so Diez is probably right that the way to go is to put the timer in the
python interpreter loop, as its the only thing around that you could
more or less trust to run all the time.

But then it will not read as nice as Nick's wish, but more like this:

id = setup_callback(error_routine, timeout_in_milliseconds)
long_running_stuff_that_can_block_on_IO(foo, bar, baz)
cancel_callback(id)
print "Hooray it worked !! "
sys.exit()

def error_routine():
    print "toughies it took too long - your chocolate is toast"
    attempt_at_recovery_or_explanation(foo, bar, baz)

Much more ugly.

I could live with that!

It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).
But would be useful to be able to do without messing with
threads and GUI and imports.
Could be hard to implement as the interpreter would have
to be assured of getting control back periodically, so a
ticker interrupt routine is called for - begins to sound more
like a kernel function to me.
Isn't there something available that could be got at via ctypes?

I think if we aren't executing Python bytecodes (i.e. we are blocked in
the kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under Unix you'd send a signal -
which Python would act upon the next time it got control back in the
interpreter, but I don't think it would buy us anything except a whole
host of problems!
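
The "check for timeouts every few hundred bytecodes" idea can at least be
imitated in pure Python with a trace function as the hook (a rough sketch in
modern Python 3 syntax, with invented names; like the real thing it only
triggers while Python bytecode is executing, never inside blocking C code):

```python
import sys
import time

class Timeout(Exception):
    pass

def run_with_timeout(fn, seconds):
    """Run fn(), raising Timeout once Python code runs past the deadline."""
    deadline = time.monotonic() + seconds

    def tracer(frame, event, arg):
        # The interpreter calls this on call/line events - our stand-in
        # for a periodic check in the eval loop.
        if time.monotonic() > deadline:
            raise Timeout()
        return tracer

    old = sys.gettrace()
    sys.settrace(tracer)
    try:
        return fn()
    finally:
        sys.settrace(old)

def busy():
    while True:
        pass
```

Tracing slows execution down enormously, which is exactly why the check
belongs in the interpreter core rather than in a trace function.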
 

Hendrik van Rooyen

I think if we aren't executing Python bytecodes (i.e. we are blocked in
the kernel or running in some C extension) then we shouldn't try to
interrupt. It may be possible - under Unix you'd send a signal -
which Python would act upon the next time it got control back in the
interpreter, but I don't think it would buy us anything except a whole
host of problems!

Don't the bytecodes call underlying OS functions? - so is there not a case
where a particular bytecode could block, or are they all protected by
timeouts?
Embedded code would handle this sort of thing by interrupting anyway
and trying to clear the mess up afterward - if the limit switch does not
appear after some elapsed time, while you are moving the 100 ton mass,
you should abort and alarm, regardless of anything else...
And if the limit switch sits on a LAN device, the OS timeouts could be
wholly inappropriate...

- Hendrik
 

Paul Rubin

Nick Craig-Wood said:
It could be made to work I'm sure by getting the interpreter to check
for timeouts every few hundred bytecodes (like it does for thread
switching).

Is there some reason not to use sigalarm for this?
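
On Unix it can - here is a sketch of Nick's wished-for shape using SIGALRM,
in modern Python 3 syntax (main thread only, no nesting, and no Windows,
which is presumably why a cross-platform answer is wanted; the exception is
named TimeoutError_ only to avoid shadowing the Python 3 builtin):

```python
import signal
import time
from contextlib import contextmanager

class TimeoutError_(Exception):
    pass

@contextmanager
def timeout(seconds):
    # SIGALRM only exists on Unix, and signal handlers only fire in the
    # main thread.
    def handler(signum, frame):
        raise TimeoutError_()
    old_handler = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, seconds)  # fractional seconds OK
    try:
        yield
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the alarm
        signal.signal(signal.SIGALRM, old_handler)

try:
    with timeout(0.5):
        time.sleep(2)
except TimeoutError_:
    print("Oops - took too long!")
```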
 

Nick Craig-Wood

Hendrik van Rooyen said:
Don't the bytecodes call underlying OS functions? - so is there not a case
where a particular bytecode could block, or all they all protected by
time outs?

I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.
Embedded code would handle this sort of thing by interrupting
anyway and trying to clear the mess up afterward - if the limit
switch does not appear after some elapsed time, while you are
moving the 100 ton mass, you should abort and alarm, regardless of
anything else... And if the limit switch sits on a LAN device, the
OS timeouts could be wholly inappropriate...

Well, yes there are different levels of potential reliability with
different implementation strategies for each!
 

Hendrik van Rooyen

Nick Craig-Wood said:
Well, yes there are different levels of potential reliability with
different implementation strategies for each!

Gadzooks! Foiled again by the horses for courses argument.

; - )

- Hendrik
 

Diez B. Roggisch

I believe the convention is that when calling an OS function which might
block, the global interpreter lock is dropped, thus allowing other
Python bytecode to run.


So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

Diez
 

Nick Craig-Wood

Hendrik van Rooyen said:
Gadzooks! Foiled again by the horses for courses argument.

; - )

;-)

I'd like there to be something which works well enough for day-to-day
use, i.e. doesn't ever wreck the internals of Python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very usable.
 

Nick Craig-Wood

Diez B. Roggisch said:
So what? That doesn't help you, as you are single-threaded here. The
released lock won't prevent the called C code from taking as long as it
wants. And there is nothing you can do about that.

I'm assuming that the timeout function is running in a thread...
 

John Nagle

Diez said:
Cross-platform isn't the issue here - reliability is. To put it
simply: it can't be done that way. You could of course add a timer to the
Python bytecode core that would "jump back" to a stored savepoint or
something like that.

Early versions of Scheme had a neat solution to this problem.
You could run a function with a limited amount of "fuel". When the
"fuel" ran out, the call returned with a closure. You could
run the closure again and pick up from where the function had been
interrupted, or just discard the closure.

So there's conceptually a clean way to do this. It's probably
not worth having in Python, but there is an approach that will work.
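
Python's generators make a toy version of the fuel idea easy to play with,
since a paused generator is essentially a resumable closure (a sketch, not
anything from Scheme itself):

```python
def run_with_fuel(gen, fuel):
    """Advance a generator by at most `fuel` steps.

    Returns ("done", None) if it finished, or ("paused", gen) so the
    caller can resume where it left off - or just discard it.
    """
    for _ in range(fuel):
        try:
            next(gen)
        except StopIteration:
            return ("done", None)
    return ("paused", gen)

def count_to(n):
    for i in range(n):
        yield i

state, paused = run_with_fuel(count_to(10), 4)   # "paused": fuel ran out
state, _ = run_with_fuel(paused, 100)            # "done": resumed and finished
```

The catch, of course, is that this only works for code written cooperatively
as a generator - it doesn't interrupt arbitrary long-running calls.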

LISP-type systems tend to be more suitable for this sort of thing.
Traditionally, LISP had the concept of a "break", where
execution could stop and the programmer (never the end user) could
interact with the computation in progress.

John Nagle
 

Hendrik van Rooyen

Nick Craig-Wood said:
I'd like there to be something which works well enough for day to day
use. Ie doesn't ever wreck the internals of python. It could have
some caveats like "may not timeout during C functions which haven't
released the GIL" and that would still make it very useable.

I second this (or third or whatever if my post is slow).
It is tremendously useful to start something and be told it has timed
out by a call, rather than having to unblock the I/O yourself and
"busy-loop" to see if it's successful. And from what I can see
the select functionality is not much different from busy looping...

- Hendrik
 
