or like this (untested!)
import time

finished = False
while not finished:
    before = time.time()
    do(x)   # sets finished once everything has been computed
    after = time.time()
    delta = after - before
    time.sleep(delta * 10.0/3.0)
Now the trick: do(x) can be a single piece of code, with
strategically placed yields all over....
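A rough sketch of how that might look (untested in spirit of the
above; do_steps and its workload are my own illustrative
invention, and I've used the 7/3 sleep factor for a ~30% bound):

```python
import time

def do_steps():
    # Hypothetical long computation, split into slices by yield.
    total = 0
    for _ in range(5):
        total += sum(range(10000))   # one slice of real work
        yield                        # hand control back to the throttling loop

steps = do_steps()
finished = False
while not finished:
    before = time.time()
    try:
        next(steps)                  # run one slice of the computation
    except StopIteration:
        finished = True              # generator exhausted: all work done
    after = time.time()
    delta = after - before
    time.sleep(delta * 7.0/3.0)      # sleep ~7/3 of the work time
```

The driver loop is the same as before; only do(x) has been
replaced by resuming the generator one slice at a time.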
Since time.time() measures wall-clock time, and you have no way
of knowing that your process was the only one running between
the two calls, you're placing an upper bound on how much CPU
time you're using, but the actual usage is unknown and may be
much lower on a heavily loaded machine.
Running for 100ms and sleeping for 333ms results in an upper
limit of about 23% (100/433), not 30%. Sleeping for
(delta * 7.0/3.0) gives a 30% upper bound (100/333).
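In general, a work burst of delta followed by a sleep of
k*delta bounds the duty cycle at delta/(delta + k*delta) =
1/(1 + k). A quick check (cpu_upper_bound is just an
illustrative helper, not anything from the code above):

```python
def cpu_upper_bound(k):
    """Upper bound on CPU fraction when each work burst of length
    delta is followed by a sleep of k*delta."""
    return 1.0 / (1.0 + k)

print(cpu_upper_bound(10.0/3.0))   # 3/13, about 23% -- not 30%
print(cpu_upper_bound(7.0/3.0))    # exactly 0.30
```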
All that aside, it seems to me that this situation is analogous
to when people waste all sorts of effort trying to write clever
applications that cache parts of files or other data structures
in main memory with backing store on disk. They end up with a
big, complicated, buggy app that's slower and requires more
resources than a far simpler app that lets the OS worry about
memory management.
IOW, you're probably better off not trying to write application
code that tries to out-think your OS. Use whatever prioritizing
scheme your OS kernel provides for setting up a low priority
"background" task, and let _it_ worry about divvying up the CPU.
That's what it's there for, and it's got a far better picture
of resource availability and demand.
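On a POSIX system, for instance, a Python process can ask the
kernel to treat it as a background task in one line (a sketch;
Windows would need a different mechanism, and the exact niceness
value 10 is an arbitrary choice of mine):

```python
import os
import sys

if hasattr(os, "nice"):
    # Raise our niceness so the scheduler deprioritizes us;
    # os.nice() returns the new niceness level.
    new_level = os.nice(10)
    print("now running at niceness", new_level)
else:
    # os.nice is POSIX-only (not available on Windows)
    print("os.nice not available on this platform", file=sys.stderr)
```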