santosh said:
Richard wrote:
As far as the second point applies to this specific example, it is
possible that the strcpy implementation is more efficient than a manually
coded loop. This could become noticeable for large numbers of strings, or
for long strings.
Yes, it's certainly possible. So is the reverse, of course. The only proper
course is to measure, realising that the measurement will be specific to a
particular implementation on a specific machine.
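For what it's worth, the kind of harness I mean is sketched below. It's a
minimal sketch, not my actual test code; the string, the names, and the run
count are invented for illustration, and an aggressive optimiser may elide
loops like these, so the results want sanity-checking.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define RUNS 1000000L

    int main(void)
    {
        static char src[] = "some.error.identifier.with.dots";
        static char dst[sizeof src];
        clock_t start;
        long i;

        /* Time strcpy. */
        start = clock();
        for (i = 0; i < RUNS; i++)
            strcpy(dst, src);
        printf("strcpy: %.3f seconds\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        /* Time a manually coded copy loop. */
        start = clock();
        for (i = 0; i < RUNS; i++) {
            const char *s = src;
            char *d = dst;
            while ((*d++ = *s++) != '\0')
                ;
        }
        printf("manual: %.3f seconds\n",
               (double)(clock() - start) / CLOCKS_PER_SEC);

        return 0;
    }

Even then, the answer holds only for that library, that compiler, and that
machine.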
I have measured the performance of the code - my code - on an Athlon 1.4
(no slouch, but hardly a state-of-the-art mean machine) under gcc 2.95.3,
using an input file over 10 megabytes in size (a typical real-world input
would be a handful of kilobytes). Exact input file size: 11,057,780 bytes.
Number of lines: 120,000 (compared to a typical real-world input of
perhaps a few dozen lines, or maybe three or four thousand for a fairly
large system).
The profiler reports that the program took 0.1 seconds to run (on inputs
that are orders of magnitude larger than would be expected in production).
The code whose performance is in question is called ONCE, by the way, and
the profiler (which claims to measure in microseconds) reports that the
function takes zero time to run. Obviously that can't be literally true,
but it's certainly too small for my gprof implementation to measure. It
might reasonably be argued that it takes at most a microsecond.
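For anyone wanting to reproduce this kind of measurement, the usual gprof
workflow looks something like this (the program and file names here are
stand-ins, not the real ones):

    gcc -pg -O2 -o errgen errgen.c    # compile with profiling support
    ./errgen input.txt                # running it writes gmon.out
    gprof errgen gmon.out             # print the flat profile and call graph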
The purpose of the program is to take as input a list of error messages and
identifiers, and convert these into a .h file with #defines for the error
identifiers, and a .c file with a function that converts a number into an
error message.
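To make that concrete, the input pairs an identifier with its message
text, along these lines (the exact layout shown here is illustrative, not
a spec):

    E_NOMEM      "out of memory"
    E_BADINPUT   "malformed input line"

and the generated files would contain, respectively, something like

    #define E_NOMEM    1
    #define E_BADINPUT 2

and (with the generated .h #included):

    const char *error_message(int code)
    {
        switch (code) {
        case E_NOMEM:    return "out of memory";
        case E_BADINPUT: return "malformed input line";
        default:         return "unknown error code";
        }
    }

The function name and the numbering scheme are mine for the example; the
real tool's choices may differ.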
It's a programmer's tool, and requires as input a file that a programmer
uses to store intelligible identifier/message pairs. Editing such a file
might take a superhuman programmer like Chris Torek as little as - what,
five seconds? (Wow, watch those fingers fly.) And the next step would be
to run the program (perhaps automatically, on saving the file).
If this superhuman programmer did nothing but update the input file all
day, every day, he would cause 86400/5 = 17280 program runs per day. If
the file size is as large as in my test (deeply unlikely in the real
world, but just about possible for a really, really, really large
project), the total program time taken would be 17280 * 0.1 = 1728
seconds, a little less than half an hour (28 minutes 48 seconds, in
fact) - remember that this is for over seventeen THOUSAND runs.
If, for the sake of argument, the OP's code is correct and if it reduces
the cost of converting dots to underscores from 1 microsecond to *zero*,
the total time saving per day will be 17280 * 1 microsecond = 0.01728
seconds. Over a thousand years of running this program 17280 times a day,
round the clock, the total time saved will be about an hour and three
quarters (0.01728 * 365250 comes to roughly 6312 seconds).
Compare this to the time spent on the discussion so far.