Reducing cache/buffer for faster display

Rikishi42

I have these 2 scripts that are very heavy on file I/O, consume a very
reasonable amount of CPU and output their counters at a - very - relaxed
pace to the console. The output is very simply done using something like:

print "files:", nFiles, "\r",


Yet although there is no real reason for it, even a pace of one print every
10-30 secs gets buffered, only actually showing an output update every 1-2
min or so.

When I run the scripts with "python -u myscript.py", the output is complete,
very speedy and without any kind of impact on the actual work being done.


Can this option be called from within the script? Or is there another option
to make the display "a bit" speedier?


Running Python 2.7.3, but it seems to me I've already had this problem a
long, long time ago with other releases.
 
Chris Angelico

I have these 2 scripts that are very heavy on file I/O, consume a very
reasonable amount of CPU and output their counters at a - very - relaxed
pace to the console. The output is very simply done using something like:

print "files:", nFiles, "\r",


Yet although there is no real reason for it, even a pace of one print every
10-30 secs gets buffered, only actually showing an output update every 1-2
min or so.

Yup! Just add a call to sys.stdout.flush() after each print.
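A minimal Python 3 sketch of that suggestion (in Python 3, `print()` can also take a `flush=True` argument and fold the two lines into one); the loop body is a stand-in for the real per-file work:

```python
import sys
import time

n_files = 0
for _ in range(3):
    n_files += 1
    # trailing \r returns the cursor to column 0; end="" suppresses the newline
    print("files: %d" % n_files, end="\r")
    sys.stdout.flush()   # push the partial line to the terminal immediately
    time.sleep(0.01)     # stand-in for the real per-file work
print()                  # final newline so the prompt lands on a fresh line
```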

ChrisA
 
John Gordon

Yup! Just add a call to sys.stdout.flush() after each print.

Isn't terminal output line-buffered? I don't understand why there would
be an output delay. (Unless the "\r" is messing things up...)
 
Chris Angelico

Isn't terminal output line-buffered? I don't understand why there would
be an output delay. (Unless the "\r" is messing things up...)

This is a classic progress-indication case, which does indeed mess up
line-buffering. The carriage return (and no line feed, done in the
Python 2 style of a trailing comma) puts the cursor back to the
beginning of the line, ready to overwrite, and ripe for one of those
old favorite incomplete overwrite errors - if nFiles monotonically
increases, it's fine, but if it decreases, the display can get ugly.

ChrisA
 
Rikishi42

This is a classic progress-indication case, which does indeed mess up
line-buffering. The carriage return (and no line feed, done in the
Python 2 style of a trailing comma) puts the cursor back to the
beginning of the line, ready to overwrite, and ripe for one of those
old favorite incomplete overwrite errors - if nFiles monotonically
increases, it's fine, but if it decreases, the display can get ugly.

True, but that wasn't the problem here. The updates were. Thanks for the
answer, I'll try it.

The scripts in question only increase numbers. But should that not be the
case, solutions are simple enough. The numbers can be formatted to a
fixed size. In the case of random line contents (a list of filenames, say)
it's enough to create an output function that is aware of the length of the
previously printed line, and add enough spaces to the current one to wipe
the excess content.


Thanks again for the suggestion.
 
Dennis Lee Bieber

Isn't terminal output line-buffered? I don't understand why there would
be an output delay. (Unless the "\r" is messing things up...)

It's the trailing comma. The \r is being used to reset to the
beginning of the console line, but the comma "says" more output for
/this/ line will be coming... So no output until explicitly flushed, or
a new-line is issued.
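In Python 3 terms the trailing comma becomes an explicit `end` argument, which makes it easy to see that the line is emitted without the newline that would normally trigger line buffering (a small sketch, writing to an in-memory stream instead of a terminal):

```python
import io

out = io.StringIO()
# Python 2's `print "files:", nFiles, "\r",` becomes roughly:
print("files:", 42, end="\r", file=out)
# The line ends in \r, not \n, so a line-buffered terminal
# would sit on it until a newline (or a flush) comes along.
captured = out.getvalue()
```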
 
Chris Angelico

The scripts in question only increase numbers. But should that not be the
case, solutions are simple enough. The numbers can be formatted to a
fixed size. In the case of random line contents (a list of filenames, say)
it's enough to create an output function that is aware of the length of the
previously printed line, and add enough spaces to the current one to wipe
the excess content.

Yep, that's a pretty effective way to do it. One simple method is to
format the whole string as a single whole, then left-justify it
in a field of (say) 79 characters, and output that:

msg = "Progress: %d%% (%d/%d)... %s" % (done*100/total, done, total,
                                        current_file)
print msg.ljust(79) + "\r",
sys.stdout.flush()

ChrisA
 
Rikishi42

It's the trailing comma. The \r is being used to reset to the
beginning of the console line, but the comma "says" more output for
/this/ line will be coming... So no output until explicitly flushed, or
a new-line is issued.

Well, the \r seems to be the problem, all right.
But output was not completely blocked, just delayed a very long time.

So perhaps flushing and sending a newline aren't the only triggers for
output. Perhaps there's a maximum delay or a maximum cumulated size, and
the output is flushed when such a limit is reached.

Anyway, that's mainly academic. I doubt there will be a correction to
that behaviour.
 
Rikishi42

Yep, that's a pretty effective way to do it. One simple method is to
format the whole string as a single whole, then left-justify it
in a field of (say) 79 characters, and output that:

msg = "Progress: %d%% (%d/%d)... %s" % (done*100/total, done, total,
                                        current_file)
print msg.ljust(79) + "\r",
sys.stdout.flush()

Mmm, I almost went for that. It's elegant, simple and clear. But there's
one drawback: I usually reduce the terminal's window to take up less desktop
surface during those long runs.
So fixing it to 79 chars won't do. And I'm not even tempted to go for a
detection of the width of the terminal from within the script. The idea is,
after all, to keep the scripts simple (syntax) and light (execution).
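For what it's worth, later Pythons (3.3+) make the width detection a one-liner via `shutil.get_terminal_size`, so such an output function need not be heavy. A sketch (`pad_line` and `show_progress` are hypothetical names, not from the thread):

```python
import shutil
import sys

def pad_line(msg, width):
    """Pad or truncate msg to width-1 columns so it fully
    overwrites whatever the previous, longer line left behind."""
    return msg.ljust(width - 1)[:width - 1]

def show_progress(msg):
    # fallback=(79, 24) applies when stdout is not a terminal (e.g. a pipe)
    width = shutil.get_terminal_size(fallback=(79, 24)).columns
    sys.stdout.write(pad_line(msg, width) + "\r")
    sys.stdout.flush()

show_progress("files: 42")
```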

Well, good night everyone.
 
Hans Mulder

Well, the \r seems to be the problem, all right.
But output was not completely blocked, just delayed a very long time.

So perhaps flushing and sending a newline aren't the only triggers for
output. Perhaps there's a maximum delay or a maximum cumulated size, and
the output is flushed when such a limit is reached.

There's a maximum cumulated size; it's called the buffer size.
Output goes into a buffer, and when the buffer is full, it's
printed all at once.
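That limit can be made visible with a deliberately tiny buffer around an in-memory "device" (a Python 3 sketch; real stdout buffers are typically a few kilobytes, not 16 bytes):

```python
import io

# A BufferedWriter with a tiny 16-byte buffer, wrapped around an
# in-memory BytesIO standing in for the actual output device.
raw = io.BytesIO()
buffered = io.BufferedWriter(raw, buffer_size=16)

buffered.write(b"files: 42\r")   # 10 bytes: fits, stays in the buffer
first = raw.getvalue()           # nothing has reached the device yet
buffered.write(b"files: 43\r")   # would overflow 16 bytes: buffer drains
second = raw.getvalue()          # now the first write is visible
buffered.flush()                 # explicit flush pushes out the rest
final = raw.getvalue()
```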

One way to avoid it is to use an unbuffered stream.

Another, more efficient, way is to invoke the stream's .flush()
method after writing to it.
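As for getting the effect of "python -u" from within the script itself: on Python 2 the usual trick was to rebind stdout to an unbuffered file object, and Python 3.7+ offers reconfigure(). A hedged sketch of both:

```python
import sys

# Python 3.7+: switch stdout to line buffering from inside the script,
# roughly what `python -u` / PYTHONUNBUFFERED=1 arrange at startup.
# (The classic Python 2 equivalent was:
#     sys.stdout = os.fdopen(sys.stdout.fileno(), "w", 0)
# which replaced stdout with a fully unbuffered file object.)
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(line_buffering=True)
```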

Anyway, that's mainly academic. I doubt there will be a correction to
that behaviour.

It's an optimization. When it was invented, 40 years ago, it was really
necessary to do this to get something resembling performance.

The performance of a system without stream buffering would probably
be tolerable on modern hardware. But the people maintaining Python
are unlikely to cut out buffering, because few people would benefit
(yours is pretty much the only use case where buffering hurts) and
some would suffer (those who write many short strings to a disk file).


Hope this helps,

-- HansM
 
Dennis Lee Bieber

Well, the \r seems to be the problem, allright.
But output was not completely blocked, just delayed a very long time.

Of course -- once the output buffer was filled, the system would
output the data...
 
