Why is "for line in f" faster than readline()?


Alexandre Ferrieux

Hi,

In a recent thread I discovered why the "for line in f" idiom is not
suitable for live sources (pipes, sockets, ttys): it buffers its input,
blocking until a full buffer has been read before yielding anything.
When I asked why it was done this way, the answer was that it makes
things faster.

Now, *why* does such buffering gain speed over stdio's fgets(), which
already does input buffering (though in a subtler way, one that keeps it
usable with pipes etc.)?

-Alex
 

Duncan Booth

Alexandre Ferrieux said:
Now, *why* does such buffering gain speed over stdio's fgets(), which
already does input buffering (though in a subtler way, one that keeps it
usable with pipes etc.)?

Because the C runtime library has different constraints than Python's file
iterator.

In particular the stdio fgets() must not read ahead (because the stream
might not be seekable), so it is usually just implemented as a series of
calls to read one character at a time until it has sufficient characters.
That inevitably has a lot more overhead than reading one 8k buffer and
subsequently splitting it up into lines.
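A rough sketch of that "read one big block, then split it into lines" strategy, written in Python rather than C for brevity (this is an illustration of the idea, not CPython's actual implementation; `iter_lines` and its 8k `bufsize` are invented for the example):

```python
import os

def iter_lines(fd, bufsize=8192):
    pending = b""
    while True:
        chunk = os.read(fd, bufsize)      # one large read() system call
        if not chunk:                     # EOF
            if pending:
                yield pending             # last line, no trailing newline
            return
        pending += chunk
        while True:
            nl = pending.find(b"\n")
            if nl < 0:
                break                     # no complete line yet; read more
            yield pending[:nl + 1]
            pending = pending[nl + 1:]

# Example: three lines arriving through a pipe.
r, w = os.pipe()
os.write(w, b"alpha\nbeta\ngamma")
os.close(w)
assert list(iter_lines(r)) == [b"alpha\n", b"beta\n", b"gamma"]
os.close(r)
```

The per-line cost here is a substring search and a slice, instead of one function call per character.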

It would probably be possible to write an implementation of fgets() that
looked at the underlying stream, using buffering and seeking when the
stream is seekable and the cautious approach otherwise; but that isn't
what is usually done, and the incentive isn't there: fgets() exists and
works as advertised, even if it isn't very efficient. Anyone worried about
speed won't use it anyway, so improving it on specific platforms wouldn't
really help.

A lot of the C runtime is like that: it needs to be robust in a very
general purpose environment, but it doesn't need to be efficient. If you
are worried about efficiency then you should look elsewhere.
 

Alexandre Ferrieux

Duncan Booth said:
Because the C runtime library has different constraints than Python's file
iterator.

In particular the stdio fgets() must not read ahead (because the stream
might not be seekable), so it is usually just implemented as a series of
calls to read one character at a time until it has sufficient characters.

Sorry, but this is simply wrong. Run strace on a process doing fgets()
and you'll see that it issues large read()s. Moreover, the semantics of a
blocking read on those live sources are to block only while no data are
available, and to return whatever is there (possibly fewer bytes than
requested) as soon as new data arrive.
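That read() behaviour is easy to demonstrate from Python (the pipe and the byte counts here are just an illustration):

```python
import os

r, w = os.pipe()
os.write(w, b"partial")           # only 7 bytes sitting in the pipe
data = os.read(r, 8192)           # ask for up to 8 KiB...
assert data == b"partial"         # ...and get the 7 bytes immediately

os.close(w)                       # writer gone: next read sees EOF
assert os.read(r, 8192) == b""
os.close(r)
```

So a large buffered read does not, by itself, make a live source unusable; blocking until the buffer is *full* does.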

This is the key to libc's performance in both line-by-line and bulk
transfers.

In fact, the Python file iterator is not faster than fgets() itself. It
is just faster than readline, the Python wrapper around it, which you said
is a thin layer above fgets(). So instead of dismissing good old libc too
quickly, maybe the investigation could uncover a slight suboptimality in
readline...
Duncan Booth said:
Anyone worried about speed won't use it anyway, so improving it on
specific platforms wouldn't really help.

Oh really? grep, sed, awk, and all the other stdio-based tools are
pathetic snails compared to the Python file iterator, of course. But then,
why didn't anyone feel the urge to use this very same trick in C and
provide an "fgets2()", working only on seekable devices, but so much
faster?

-Alex
 
