In the sense that all data processing functions are interpreters?
No, I should have clarified:
printf() has to peek, individually, at each and every character of its
first parameter, every time it is called (well, not exactly true, there
could be a printf() compiler, but I've never seen this done).
So printf() has to look at each character. If it's a percent sign, it
has to look at the following characters; if they're a prefix code, such
as a number or other special prefixes, those have to be collected and
the right flags set. If the final format specifier is unknown, an error
message has to be issued. If the format specifier is known, it has to
dispatch to the right code to handle that data type. For each data
type, it has to use the varargs macro va_arg() to fetch the right
amount of data from the parameter list. Only THEN can it get to the
actual output formatting operation, doing an itoa() or equivalent
integer-to-string conversion in this case.
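In rough pseudo-C, that interpretation loop looks something like the
sketch below. This is only a stripped-down illustration that handles %d
and %%, ignoring flags, widths, precisions, and length modifiers; the
name mini_printf is made up, not anything from a real library:

#include <stdarg.h>
#include <stdio.h>

/* Sketch of the per-call interpretation printf() has to do.  Only %d
   and %% are handled; a real printf() has dozens more branches for
   flags, widths, precisions, length modifiers, and conversions. */
static void mini_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (const char *p = fmt; *p != '\0'; p++) {
        if (*p != '%') {                  /* ordinary character: emit it */
            putchar(*p);
            continue;
        }
        switch (*++p) {                   /* peek at what follows '%' */
        case 'd': {
            long long v = va_arg(ap, int);    /* fetch an int argument */
            char buf[32];
            int n = 0;
            if (v < 0) { putchar('-'); v = -v; }
            do { buf[n++] = (char)('0' + v % 10); v /= 10; } while (v);
            while (n--) putchar(buf[n]); /* digits were built in reverse */
            break;
        }
        case '%':
            putchar('%');
            break;
        default:                          /* unknown specifier: complain */
            fputs("mini_printf: unknown format specifier\n", stderr);
            va_end(ap);
            return;
        }
    }
    va_end(ap);
}

Every call repeats that scan and dispatch from scratch; that's the
"interpretation" I mean.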
So in a really tight loop, replacing printf( "%d", var ) with an
itoa()-style integer-to-string conversion followed by puts() *just
MIGHT* be a whole lot faster, as there's no interpretation being done.
Then again, the overhead of doing the I/O might be a whole lot greater
than any savings here.
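Concretely, something along these lines. int_to_str is a made-up helper
name (there's no standard itoa()), and note that puts() appends a
newline, so this corresponds to printf( "%d\n", var ):

#include <stdio.h>

/* Hypothetical helper: write the decimal digits of v into buf (at
   least a dozen or so bytes) and return a pointer to the start of
   the resulting string. */
static char *int_to_str(int v, char *buf, size_t size)
{
    char *p = buf + size;
    long long n = v;              /* widen so negating INT_MIN is safe */
    int neg = (n < 0);
    if (neg) n = -n;
    *--p = '\0';
    do { *--p = (char)('0' + n % 10); n /= 10; } while (n);
    if (neg) *--p = '-';
    return p;
}

int main(void)
{
    char buf[32];
    for (int var = 0; var < 100000; var++) {
        /* No format string to scan, no vararg fetch, no dispatch. */
        puts(int_to_str(var, buf, sizeof buf));
    }
    return 0;
}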
Why the hedging? This program is doing a printf(). printf() normally
outputs to standard output. Standard output is often a terminal, in
front of which is a human being. Humans can only read at a certain
maximum rate. In general, if a program is outputting numbers, those
numbers are meant to be interpreted by the human being. It takes
considerable time for human beings to interpret numbers. On a terminal
or pseudo-terminal, one often uses some program like "more" to stop
screen output every page. Now in this case the numbers are
quasi-sequential, so they're not hard to follow. Even so, I doubt if
anyone can read more than a few dozen of these a second. Modern
computers can printf() over TEN MILLION numbers a second. So optimizing
this loop is likely to be pointless, unmeasurable on the CPU usage
meter. Under typical usage, the CPU will run for a few microseconds,
then "more" will pause rading its input, which will filter back to thhe
program as "output buffer full, don't write any more", then the human
will read the output, which will take a second or more, then thee human
will press a key, more will read more input, unblockking the main
program again for a few microseconds, etc, etc, etc.... The resulting
flow will be the program runs for a few microseconds, generates a page
of output, the program gets put on hold (process put in non-runnable
status, or even paged or wholesale swapped out to disk), then when the
user presses a key to see the next page, the program runs again for a
few microseconds... The overall effect on the CPU is minuscule, a few
parts per million, so it doesn't make ANY sense to optimize this code if
the output does end up at a human.
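To put even rougher numbers on that claim, here's a quick timing hack.
Results will vary a lot by machine and C library, and you'd want to
redirect stdout to /dev/null or a file so you're timing printf() itself
rather than the terminal:

#include <stdio.h>
#include <time.h>

int main(void)
{
    enum { N = 10000000 };                      /* ten million numbers */
    clock_t start = clock();
    for (int i = 0; i < N; i++)
        printf("%d\n", i);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* Report on stderr so it isn't buried in the redirected output. */
    fprintf(stderr, "%d printf() calls in %.2f s CPU (%.0f per second)\n",
            N, secs, N / secs);
    return 0;
}

Compare whatever rate that reports against a human reading a few dozen
numbers a second and you land in the parts-per-million territory
described above.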
If stdout is redirected to a file, or to another program, then the
above blather is less relevant, but still the high overhead of disk or
pipe or network I/O is likely to dwarf the printf() time.