Efficiency of the for loop


vamshi

Hi all,
This is a question about the efficiency of the code.
a :-
int i;
for( i = 0; i < 20; i++ )
    printf("%d",i);


b:-
int i = 10;
for( i = 0; i < 10; i++ ){
    printf("%d",i);
    printf("%d",i);
}


Which one of these will run faster? Which one will generate a smaller
executable? If I replace the printf statements with some arithmetic
expression, will the answers to the previous questions change?

Can anyone suggest any links where I can learn more about writing
efficient C code?

Thanks in advance for the replies.....
Regards,
Vamshi.
 

Ian Collins

vamshi said:
Hi all,
This is a question about the efficiency of the code.
a :-
int i;
for( i = 0; i < 20; i++ )
    printf("%d",i);


b:-
int i = 10;
for( i = 0; i < 10; i++ ){
    printf("%d",i);
    printf("%d",i);
}


Which one of these will run faster? Which one will generate a smaller
executable? If I replace the printf statements with some arithmetic
expression, will the answers to the previous questions change?
What did your measurements tell you?
Can anyone suggest any links where I can learn more about writing
efficient C code?
Write clean code and worry about efficiency only if you have to.
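Taking "measure" literally, here is a minimal sketch (mine, not from the thread) of one way to do it, using the standard clock(). Note the two snippets do not produce the same output, as pointed out later in the thread, so the comparison is indicative at best; redirect stdout to a file or /dev/null, or the terminal will dominate the timing.

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t t0, t1;
    int i, rep;

    /* Snippet a: one printf per iteration, 20 iterations. */
    t0 = clock();
    for (rep = 0; rep < 100000; rep++)
        for (i = 0; i < 20; i++)
            printf("%d", i);
    t1 = clock();
    fprintf(stderr, "a: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Snippet b: two printfs per iteration, 10 iterations. */
    t0 = clock();
    for (rep = 0; rep < 100000; rep++)
        for (i = 0; i < 10; i++) {
            printf("%d", i);
            printf("%d", i);
        }
    t1 = clock();
    fprintf(stderr, "b: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}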
 

Richard Bos

vamshi said:
This is a question about the efficiency of the code.

And as such, it is off-topic for this newsgroup, because ISO C does not
concern itself with matters of efficiency. There's a good reason for
this, too: such questions rarely have an answer that is true everywhere.
a :-
int i;
for( i = 0; i < 20; i++ )
    printf("%d",i);


b:-
int i = 10;
for( i = 0; i < 10; i++ ){
    printf("%d",i);
    printf("%d",i);
}

Which one of these will run faster? Which one will generate a smaller
executable?

Nobody knows. They don't do the same thing, to begin with, which makes
the comparison rather pointless. If you rewrote them so they did, any
compiler would be free to compile them to the same code. It's likely
that most do not, but even so, it is impossible to tell which version
compiles to faster and/or smaller code. In fact, the answer is likely to change
not only with the compiler you use, but also with the platform you
compile on, the platform you compile _for_, and the compiler options you
use.
In the end, such micro-optimisation is rarely the largest factor in the
efficiency of your program. Calling your compiler correctly may or may
not be of more influence, but the one thing which will help you write
more efficient code is to simply choose the correct algorithm. A 2%
difference between snippet A and snippet B is dwarfed by the choice of
another algorithm which calls snippet A or B 20% less often.
If you have already squeezed all the optimisation out of your
algorithms, and really must have that last 2% efficiency or that last 2%
smaller code size, there are two rules that you must memorise:
- Measure, measure, measure, and _then_ choose the function which is
best optimised for _your_ particular situation;
- Expect to have to undo and redo all micro-optimisations when you port
your code to another implementation (which may be as trivial as the
next bugfix version of your compiler).

Richard
 

shaanxxx

vamshi said:
Hi all,
This is a question about the efficiency of the code.
a :-
int i;
for( i = 0; i < 20; i++ )
    printf("%d",i);


b:-
int i = 10;
for( i = 0; i < 10; i++ ){
    printf("%d",i);
    printf("%d",i);
}

Did you miss an 'i++' in the first printf?
 

Richard

vamshi said:
Hi all,
This is a question about the efficiency of the code.
a :-
int i;
for( i = 0; i < 20; i++ )
    printf("%d",i);


b:-
int i = 10;
for( i = 0; i < 10; i++ ){
    printf("%d",i);
    printf("%d",i);
}


Which one of these will run faster? Which one will generate a smaller
executable?

Don't worry about efficiency yet: fix your understanding first.

At the start of (b), why "i = 10"? You then immediately reset i to 0 at the
start of the for loop.

Ignoring the i = 10, you probably meant the second printf in code snippet
b to say something like printf("%d", i + 10) and the for loop to run
from 0 to 9. OK, then we get 0, 10, 1, 11 and so on, but you get the picture.
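
A sketch of that presumed intent (my reconstruction, not code from the thread):

int i;
for( i = 0; i < 10; i++ ){
    printf("%d", i);        /* 0, 1, 2, ..., 9 */
    printf("%d", i + 10);   /* 10, 11, ..., 19; output interleaves as 0,10,1,11,... */
}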

This is code unravelling and can lead to faster code. Can. Not "will". Depends.

Clearly if you continue down the road of unravelling the loop as in (b)
you will probably have a larger code size. Personally, and assuming you
are a beginner, I would keep thinking about these things (an eye on
efficiency is always good, IMO), but don't let it hinder your
learning of good practice.

Best of luck.

http://www.mactech.com/articles/mactech/Vol.09/09.08/EfficientC/index.html

If I replace the printf statements with some arithmetic expression,
will the answers to the previous questions change?

Can anyone suggest any links where I can learn more about writing
efficient C code?

Thanks in advance for the replies.....
Regards,
Vamshi.

--
 

santosh

vamshi said:
Hi all, [snip]

Can anyone suggest any links where I can learn more about writing
efficient C code?

Thanks in advance for the replies.....

You're welcome in advance.

Now coming to your questions: as others have addressed the code snippet
and its possible efficiency, I'll confine myself to providing a few
links to learn C from. Also note that the Google archive for this group
is itself an invaluable source of information.

<www.lysator.liu.se/c/>
<www.eskimo.com/~scs/cclass/cclass.html>
<http://www.cs.cf.ac.uk/Dave/C/>
<http://c-faq.com/>
<http://www.le.ac.uk/cc/tutorials/c/>
<http://math.nmu.edu/c/cstart.htm>
<http://www.phys.unsw.edu.au/~mcba/c001.html>
<http://www.crasseux.com/books/ctutorial/>
<http://home.att.net/~jackklein/c/c_main.html>
<http://www.dinkumware.com/manuals/>
<http://david.tribble.com/text/cdiffs.htm>
<http://www.acm.uiuc.edu/webmonkeys/book/c_guide/>
<http://www.comeaucomputing.com/techtalk/c99/>
<http://www-128.ibm.com/developerworks/linux/library/l-c99.html>
<www.open-std.org/jtc1/sc22/wg14/>
<http://www-ccs.ucsd.edu/c/index.html>
<http://einstein.drexel.edu/courses/CompPhys/General/C_basics/c_tutorial.html>
 

Ancient_Hacker

The efficiency of code isn't very relevant to this newsgroup, but here
are some answers anyway:

(1) Efficiency in the broadest sense might mean "What is most
cost-effective in the long run". With the cost of programmers
somewhere around $30 per line of debugged good code, and the cost of
modifying code many times that, it's rarely efficient to try to
micro-manage each line of code. It's usually more cost-effective in
the really long run to write simple and clear code.

(2) The key here is to realize two, er three, er, SIX things:

(a) printf() is an INTERPRETER. Interpreters tend to be slow.

(b) The lines could be combined into printf("%d%d", i, i+1);

(c) The lines could be combined into
printf("012345678910111213141516171819");

(d) The function call could be changed to the less slow puts()

(e) Output to stdout, which is often sent to a human being, is produced
about a thousand to a million times faster than any human being can
read, absorb, and understand it.

(f) Output to a file, and especially to stdout, these days tends to be
many factors of ten slower than today's CPUs, so the whole idea of
optimizing this loop is pointless.
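
For concreteness, rough sketches of variants (b) and (c) above (my code, not Ancient_Hacker's; note that puts(), suggested in (d), appends a newline that the original printf() calls do not, so fputs() is used here):

/* Variant (b): one printf() call per iteration instead of two. */
for (i = 0; i < 10; i++)
    printf("%d%d", i, i + 1);

/* Variant (c): the whole output folded into one string literal. */
fputs("012345678910111213141516171819", stdout);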
 

santosh

Ancient_Hacker said:
The efficiency of code isn't very relevant to this newsgroup, but here
are some answers anyway:

(1) Efficiency in the broadest sense might mean "What is most
cost-effective in the long run". With the cost of programmers
somewhere around $30 per line of debugged good code, and the cost of
modifying code many times that, it's rarely efficient to try to
micro-manage each line of code. It's usually more cost-effective in
the really long run to write simple and clear code.

Also flexible/generic code.
(2) The key here is to realize two, er three, er, SIX things:

(a) printf() is an INTERPRETER. Interpreters tend to be slow.

In the sense that all data processing functions are interpreters?
(b) The lines could be combined into printf("%d%d", i, i+1);

This will duplicate each value of i.
(c) The lines could be combined into
printf("012345678910111213141516171819");

Horrible. Might as well program in assembly. This will break
fantastically upon any modification. And what if the bound of i is
100000? Do you plan on writing that out as a string literal?
(d) The function call could be changed to the less slow puts()

printf() is more convenient for converted and formatted output. If
you're forced to use puts(), over time you'll tend to duplicate most of
printf()'s functionality, and of course your replica won't be portable.
(e) Output to stdout, which is often sent to a human being, is produced
about a thousand to a million times faster than any human being can
read, absorb, and understand it.
??

(f) Output to a file, and especially to stdout, these days tends to be
many factors of ten slower than today's CPUs, so the whole idea of
optimizing this loop is pointless.

Yes, most I/O is often so slow compared to the processor that it's
probably a waste of time to try to micro-optimise I/O routines.
 

Richard

Ancient_Hacker said:
The efficiency of code isn't very relevant to this newsgroup, but here
are some answers anyway:

I can't think of any NG where it would be more relevant when designing
cross-platform C systems with an eye on efficiency.

Or?
(1) Efficiency in the broadest sense might mean "What is most
cost-effective in the long run". With the cost of programmers
somewhere around $30 per line of debugged good code, and the cost of
modifying code many times that, it's rarely efficient to try to
micro-manage each line of code. It's usually more cost-effective in
the really long run to write simple and clear code.

(2) The key here is to realize two, er three, er, SIX things:

(a) printf() is an INTERPRETER. Interpreters tend to be slow.

printf is code. All code is "slow" by that reasoning. What it does
inside the library is no more difficult or easy than stuff done in the
application itself.
(b) The lines could be combined into printf("%d%d", i, i+1);

You are being too literal I feel. He was basically asking about
unravelling loops - a very common optimisation technique.
(c) The lines could be combined into
printf("012345678910111213141516171819");

Oh for goodness sake - do you really think the output was what he was
trying to achieve most efficiently?
(d) The function call could be changed to the less slow puts()

(e) Output to stdout, which is often sent to a human being, is produced
about a thousand to a million times faster than any human being can
read, absorb, and understand it.
???


(f) Output to a file, and especially to stdout, these days tends to be
many factors of ten slower than today's CPUs, so the whole idea of
optimizing this loop is pointless.

Rubbish. Even if the output is buffered, using as few CPU cycles as
possible is good in a multithreaded/multiprocessor system.


--
 

Ancient_Hacker

In the sense that all data processing functions are interpreters?

No, I should have clarified:

printf() has to peek, individually, at each and every character of its
first parameter, every time it is called (well, not exactly true, there
could be a printf() compiler, but I've never seen this done).

So printf has to look at each character; if it's a percent sign, it has
to look at the following characters; if they're a prefix code, such as
a number or other special prefixes, those have to be collected and the
right flags set. If the final format specifier is unknown, an error
message has to be issued. If the format specifier is known, it has to
dispatch to the right code to handle that data type. For each data
type, it has to use the va_arg() macro to fetch the right amount of data
from the parameter list. Only THEN can it get to the actual output
formatting operation, doing an itoa() or equivalent in this case.

So in a really tight loop, replacing printf( "%d", var ) with puts(
itoa( var ) ) *just MIGHT* be a whole lot faster, as there's no
interpretation being done. Then again, the overhead of doing the I/O
might be a whole lot greater than any savings here.
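
itoa() is not standard C, so as a sketch of the same idea (my code, not the poster's), a hand-rolled conversion with no format-string scanning might look like:

#include <stdio.h>

/* Convert and print a non-negative value without any format-string
   interpretation; kept minimal for illustration. */
static void put_uint(unsigned v)
{
    char buf[16];               /* plenty for a 32-bit value */
    char *p = buf + sizeof buf;

    *--p = '\0';
    do {
        *--p = (char)('0' + v % 10);
        v /= 10;
    } while (v != 0);
    fputs(p, stdout);           /* puts() would append a newline */
}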



Why the "??"? This program is doing a printf(). printf() normally
outputs to standard output. Standard output is often a terminal, in
front of which is a human being. Humans can only read at a certain
maximum rate. In general, if a program is outputting numbers, those
numbers are meant to be interpreted by the human being. It takes
considerable time for human beings to interpret numbers. On a terminal
or pseudo-terminal, one often uses some program like "more" to stop
screen output every page. Now in this case the numbers are
quasi-sequential, so they're not hard to follow. Even so, I doubt if
anyone can read more than a few dozen of these a second. Modern
computers can printf() over TEN MILLION numbers a second. So optimizing
this loop is likely to be pointless, unmeasurable on the CPU usage
meter. Under typical usage, the CPU will run for a few microseconds,
then "more" will pause reading its input, which will filter back to the
program as "output buffer full, don't write any more", then the human
will read the output, which will take a second or more, then the human
will press a key, "more" will read more input, unblocking the main
program again for a few microseconds, and so on. The resulting
flow is: the program runs for a few microseconds, generates a page
of output, the program gets put on hold (process put in non-runnable
status, or even paged or wholesale swapped out to disk), then when the
user presses a key to see the next page, the program runs again for a
few microseconds. The overall effect on the CPU is minuscule, a few
parts per million, so it doesn't make ANY sense to optimize this code
if the output does end up at a human.

If stdout is redirected to a file, or to another program, then the
above blather is less relevant, but still the high overhead of disk or
pipe or network I/O is likely to dwarf the printf() time.
 

Richard

Ancient_Hacker said:
No, I should have clarified:

printf() has to peek, individually, at each and every character of its
first parameter, every time it is called (well, not exactly true, there
could be a printf() compiler, but I've never seen this done).

So printf has to look at each character; if it's a percent sign, it has
to look at the following characters; if they're a prefix code, such as
a number or other special prefixes, those have to be collected and the
right flags set. If the final format specifier is unknown, an error
message has to be issued. If the format specifier is known, it has to
dispatch to the right code to handle that data type. For each data
type, it has to use the va_arg() macro to fetch the right amount of data
from the parameter list. Only THEN can it get to the actual output
formatting operation, doing an itoa() or equivalent in this case.

So in a really tight loop, replacing printf( "%d", var ) with puts(
itoa( var ) ) *just MIGHT* be a whole lot faster, as there's no
interpretation being done. Then again, the overhead of doing the I/O
might be a whole lot greater than any savings here.





Why the "??"? This program is doing a printf(). printf() normally
outputs to standard output. Standard output is often a terminal, in
front of which is a human being. Humans can only read at a certain
maximum rate. In general, if a program is outputting numbers, those
numbers are meant to be interpreted by the human being. It takes
considerable time for human beings to interpret numbers. On a terminal
or pseudo-terminal, one often uses some program like "more" to stop
screen output every page. Now in this case the numbers are
quasi-sequential, so they're not hard to follow. Even so, I doubt if
anyone can read more than a few dozen of these a second. Modern
computers can printf() over TEN MILLION numbers a second. So optimizing
this loop is likely to be pointless, unmeasurable on the CPU usage
meter. Under typical usage, the CPU will run for a few microseconds,
then "more" will pause reading its input, which will filter back to the
program as "output buffer full, don't write any more", then the human
will read the output, which will take a second or more, then the human
will press a key, "more" will read more input, unblocking the main
program again for a few microseconds, and so on. The resulting
flow is: the program runs for a few microseconds, generates a page
of output, the program gets put on hold (process put in non-runnable
status, or even paged or wholesale swapped out to disk), then when the
user presses a key to see the next page, the program runs again for a
few microseconds. The overall effect on the CPU is minuscule, a few
parts per million, so it doesn't make ANY sense to optimize this code
if the output does end up at a human.

If stdout is redirected to a file, or to another program, then the
above blather is less relevant, but still the high overhead of disk or
pipe or network I/O is likely to dwarf the printf() time.

Step back. Take a deep breath. This thread is not about how printf() may
or may not be efficient. It is about the efficiency of the calling
harness. The size of the generated code.

All you have done is give a rather longwinded explanation of the
somewhat obvious. We all know that computers can output numbers faster
than a human can read. This is not, and never was, the point. Even if
the printfs weren't examples, we could easily consider the grander scheme
of things and print to a file handle which is associated with a fast RAM
disk or somesuch.

Even, and a big even, if the printfs were being used in an optimised
handler, it could still be important, if say (rather stupidly) the
output stream was being used to blit a load of characters directly to
video ram or similar.

You are, inadvertently, introducing white elephants IMO.

If the title was "how to optimise console I/O" then possibly fair enough.
 

Richard Bos

Richard said:
I can't think of any NG where it would be more relevant when designing
cross-platform C systems with an eye on efficiency.

Ah, there's the rub. Low-level optimisations such as the OP was
investigating rarely do work well cross-platform.
You are being too literal I feel. He was basically asking about
unravelling loops - a very common optimisation technique.

Yes, but the compiler could do that. There's no reason to apply this
technique by hand unless you've proof, not a vague suspicion, that it
will work.

Richard
 

Richard

Ah, there's the rub. Low-level optimisations such as the OP was
investigating rarely do work well cross-platform.

Turn optimisations off. General optimising techniques might work. It's
not off-topic - it's still about ways you can program in C to influence
things. It may or may not influence things depending on the compiler, I
agree.

But since we don't know how well different platforms/compilers will
optimise anyway, it's all a little academic, I agree.
Yes, but the compiler could do that. There's no reason to apply this
technique by hand unless you've proof, not a vague suspicion, that it
will work.

I know. Which is why I say it "can" but not necessarily *will* improve things.

--
 

santosh

Ancient_Hacker said:
No, I should have clarified:

printf() has to peek, individually, at each and every character of its
first parameter, every time it is called (well, not exactly true, there
could be a printf() compiler, but I've never seen this done).

Yes, I am aware of the probable internal logic of printf() type
functions, but I still don't think that qualifies it to be called
an interpreter. By that reasoning strtod() is an interpreter,
strtok() is an interpreter, just about any parser is an interpreter. It
is to avoid loading too many meanings onto a word that science assigns
specific, generally relatively narrow meanings to its terms.

[snip]
So in a really tight loop, replacing printf( "%d", var ) with puts(
itoa( var ) ) *just MIGHT* be a whole lot faster, as there's no
interpretation being done. Then again, the overhead of doing the I/O
might be a whole lot greater than any savings here.

Though the running time of the loop may be reduced, the total running
time of the code cannot be reduced since printf() type parsing and
conversion _has_ to be done somewhere.

Besides, it's rather naive to place complicated functions like printf()
in a loop and then fret about micro-optimising it. If you really want
the loop to execute at the maximum possible rate then do the minimum
possible in it.
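
One sketch of "do the minimum possible in it" (my code, not santosh's): do the conversions into a buffer and make a single stdio call at the end, so the loop body does less and only one write happens.

#include <stdio.h>

int main(void)
{
    char buf[64];               /* 30 characters are needed for 0..19 */
    size_t len = 0;
    int i;

    /* Conversion still happens per iteration, but only one write
       call is made instead of twenty. */
    for (i = 0; i < 20; i++)
        len += (size_t)sprintf(buf + len, "%d", i);
    fwrite(buf, 1, len, stdout);
    return 0;
}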

Why the ??

Just re-read what you've written above and then read your subsequent
explanations below. Do you spot the source of the confusion yet?
This program is doing a printf(). printf() normally
outputs to standard output. Standard output is often a terminal, in
front of which is a human being. Humans can only read at a certain
maximum rate. In general, if a program is outputting numbers, those
numbers are meant to be interpreted by the human being. It takes
considerable time for human beings to interpret numbers. On a terminal
or pseudo-terminal, one often uses some program like "more" to stop
screen output every page. Now in this case the numbers are
quasi-sequential, so they're not hard to follow. Even so, I doubt if
anyone can read more than a few dozen of these a second. Modern
computers can printf() over TEN MILLION numbers a second. So optimizing
this loop is likely to be pointless, unmeasurable on the CPU usage
meter. Under typical usage, the CPU will run for a few microseconds,
then "more" will pause reading its input, which will filter back to the
program as "output buffer full, don't write any more", then the human
will read the output, which will take a second or more, then the human
will press a key, "more" will read more input, unblocking the main
program again for a few microseconds, and so on. The resulting
flow is: the program runs for a few microseconds, generates a page
of output, the program gets put on hold (process put in non-runnable
status, or even paged or wholesale swapped out to disk), then when the
user presses a key to see the next page, the program runs again for a
few microseconds. The overall effect on the CPU is minuscule, a few
parts per million, so it doesn't make ANY sense to optimize this code
if the output does end up at a human.

So? Do you want printf() to output at a rate of three words a second
just because that's the rate at which a human being can read?

printf() (or for that matter any C function) is not concerned about
its efficiency. It'll do its job at as fast a rate as it can
without being incorrect. What if stdout has been redirected to a
gigabit NIC? Then you'll be thankful for how fast printf() is, as such,
rather than wanting it to be deliberately crippled for just one
particular case.
If stdout is redirected to a file, or to another program, then the
above blather is less relevant, but still the high overhead of disk or
pipe or network I/O is likely to dwarf the printf() time.

Regardless of the relative speeds of different computer devices, it's
wise for each individual device to run at its most efficient as long
as it doesn't sacrifice correctness. What if tomorrow a type of drive and
memory are invented that are faster or even nearly as fast as the CPU?
Do you want to rewrite parts of your program again to undo the
deliberate slowness you've introduced?

Instead let each logical part of the total system run at its fastest
feasible rate. The total speed of the system will be proportional to its
slowest logical part (which could be improved later). It's foolish to
slow down faster parts to be more compatible with slower ones except
where it's necessary for correct operation.
 

Richard

santosh said:
Ancient_Hacker said:
No, I should have clarified:

printf() has to peek, individually, at each and every character of its
first parameter, every time it is called (well, not exactly true, there
could be a printf() compiler, but I've never seen this done).

Yes, I am aware of the probable internal logic of printf() type
functions, but I still don't think that qualifies it to be called
an interpreter. By that reasoning strtod() is an interpreter,
strtok() is an interpreter, just about any parser is an interpreter. It
is to avoid loading too many meanings onto a word that science assigns
specific, generally relatively narrow meanings to its terms.

[snip]
So in a really tight loop, replacing printf( "%d", var ) with puts(
itoa( var ) ) *just MIGHT* be a whole lot faster, as there's no
interpretation being done. Then again, the overhead of doing the I/O
might be a whole lot greater than any savings here.

Though the running time of the loop may be reduced, the total running
time of the code cannot be reduced since printf() type parsing and
conversion _has_ to be done somewhere.

It does no harm to think they might be farmed off via RAM buffers and
operated on in another thread. As multicore and multiprocessor machines
become more mainstream, I don't think it's safe to assume "total time =
time on the watch" - better to think in terms of consumed clock cycles
or core time.
Besides, it's rather naive to place complicated functions like printf()
in a loop and then fret about micro-optimising it. If you really want
the loop to execute at the maximum possible rate then do the minimum
possible in it.

I think he was asking more about the loop than the printfs ..... hence
his demo with two printfs.

Is it really that hard to see this whole printf thing is nothing but a
ridiculous misinterpretation?

Maybe I have made the mistake? Possibly. But I don't think so in this
case - I think you've allowed yourself to be dragged down a dead end to
discuss the obvious.
 

santosh

Richard said:
santosh said:
Ancient_Hacker wrote:
[snip]
[snip]
Besides, it's rather naive to place complicated functions like printf()
in a loop and then fret about micro-optimising it. If you really want
the loop to execute at the maximum possible rate then do the minimum
possible in it.

I think he was asking more about the loop than the printfs ..... hence
his demo with two printfs.

Don't you detect a contradiction of logic in that statement? The OP
wants to know the relative speed of the two loops minus the printf()
but nevertheless places unequal printf() statements in both the loops.

The loops contain hardly anything other than printf(), so I don't
see the point of measuring their relative speed and ignoring printf()'s
contribution to it (which is probably close to 100%).
Is it really that hard to see this whole printf thing is nothing but a
ridiculous misinterpretation?

By the OP, yes. But I was responding to Ancient_Hacker's assertion that
since the reading speed of a human is far slower, the speed of printf()
is not important. It's not certain that the final output has to be read
by a human. It could be processed by another device or program in which
case printf()'s speed becomes relatively important.
Maybe I have made the mistake? Possibly. But I dont think so in this
case - I think you've allowed yourself to be dragged down a dead end to
discuss the obvious.

Yes, you're right there. We certainly are debating what is childishly
obvious. I'll stop here as far as this thread is concerned.
 

free4trample

To me it is clear that the question was about unrolling loops. Bottom
line is: the fewer times your loop has to execute, the faster it will
run. In case B, there are 10 fewer cmp, jbe and inc assembly
instructions executed; each takes at least 1 clock cycle, in reality
probably more than 1. So in the given example the user saves about 30
clock cycles, which here is insignificant because printf takes about
3e5 clock cycles on my machine to execute, so the total efficiency
improvement is only about 0.001%. But assuming the code inside the loop
were faster than printf, the efficiency improvement would grow.
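
Spelling out that arithmetic (my numbers, following the poster's assumptions above): 10 saved iterations x 3 instructions = 30 cycles saved, against roughly 10 x 3e5 = 3e6 cycles spent in printf(), so the saving is about 30 / 3e6 = 0.001%.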

I usually try to unroll loops completely if they execute 10 times
or less and are part of a nested loop which executes many, many times.
Also, loops which are longer than 10 lines probably should not be
unrolled completely, but it may prove useful to unroll them partially.

Take this loop, which I would consider to be one line long:
i=0;
while(i<10){
    printf("What ever");
    i++;
}

Partially unrolled version:
i=0;
while(i<5){
    printf("What ever");
    printf("What ever");
    i++;
}



Completely unrolled version:
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");
printf("What ever");




 

Richard

santosh said:
Richard said:
santosh said:
Ancient_Hacker wrote:
[snip]
So in a really tight loop, replacing printf( "%d", var ) with puts(
itoa( var ) ) *just MIGHT* be a whole lot faster, as there's no
interpretation being done. Then again, the overhead of doing the I/O
might be a whole lot greater than any savings here.
[snip]
Besides, it's rather naive to place complicated functions like printf()
in a loop and then fret about micro-optimising it. If you really want
the loop to execute at the maximum possible rate then do the minimum
possible in it.

I think he was asking more about the loop than the printfs ..... hence
his demo with two printfs.

Don't you detect a contradiction of logic in that statement? The OP
wants to know the relative speed of the two loops minus the printf()
but nevertheless places unequal printf() statements in both the loops.

Yes, but he's a beginner trying to get his head around optimising
techniques. Nothing he can do can optimise the call to printf ... only
the surrounding harness can be altered. The rest is immaterial IMO - had
he put "a=a+1;b=b+f();" would you be zooming in on the increment and the
call to f()?
 

Ancient_Hacker

Yes, I am aware of the probable internal logic of printf() type
functions, but I still don't think that qualifies it to be called
an interpreter.

Well, it has a lexical scanner that looks for significant characters
("%"), it parses prefix characters, finds a command character, and jumps
to various execute routines. Opinions may differ, but many folks would
call that pretty much an all-up interpreter.




By that reasoning strtod() is an interpreter.

Well, it does parse, it just doesn't have commands and execute routines
per se.

strtok() is an interpreter.

No, strtok() is just a really poor scanner. It doesn't interpret a
thing.
just about any parser is an interpreter. It
is to avoid loading too many meanings onto a word that science assigns
specific, generally relatively narrow meanings to its terms.

Yes, I'm for clarity too. Too bad you're kinda fuzzy.
Though the running time of the loop may be reduced, the total running
time of the code cannot be reduced since printf() type parsing and
conversion _has_ to be done somewhere.

No... You don't get it: itoa() is about eight lines of code while
printf() is hundreds.

Besides, it's rather naive to place complicated functions like printf()
in a loop and then fret about micro-optimising it. If you really want
the loop to execute at the maximum possible rate then do the minimum
possible in it.

I didn't put the code in a loop, the poor OP did.

So? Do you want printf() to output at a rate of three words a second
just because that's the rate at which a human being can read?

No, you missed my point. There's no point in optimizing code that's
writing digits to a human being. The computer is several million
times faster than the human.

printf() (or for that matter any C function) is not concerned about
its efficiency. It'll do its job at as fast a rate as it can
without being incorrect. What if stdout has been redirected to a
gigabit NIC? Then you'll be thankful for how fast printf() is, as such,
rather than wanting it to be deliberately crippled for just one
particular case.

Nobody said anything about crippling printf(), quite the opposite.

Regardless of the relative speeds of different computer devices, it's
wise for each individual device to run at its most efficient as long
as it doesn't sacrifice correctness. What if tomorrow a type of drive and
memory are invented that are faster or even nearly as fast as the CPU?
Do you want to rewrite parts of your program again to undo the
deliberate slowness you've introduced?

You have to look at the bigger picture. Most programs have plenty of
areas that could use optimizing. It's generally more useful to focus
on the existing bottlenecks than to optimize areas that are unlikely to
ever need optimization. We're only here on this planet for a limited
time. Better to spend one's time wisely.
Instead let each logical part of the total system run at its fastest
feasible rate. The total speed of the system will be proportional to its
slowest logical part (which could be improved later). It's foolish to
slow down faster parts to be more compatible with slower ones except
where it's necessary for correct operation.

Nobody said that.
 

Richard

Ancient_Hacker said:
Well, it has a lexical scanner that looks for significant characters
("%"), it parses prefix characters, finds a command character, and jumps
to various execute routines. Opinions may differ, but many folks would
call that pretty much an all-up interpreter.

How do you see it being any different from any application code which
formats data? It is nothing particularly special - and is prone to the
same issues which slow down any block of code which has data-driven
code branches or is linked to physical bottlenecks such as the console,
disk, audio hardware, etc. Why you are so determined to allow this routine to
color your views on how a newbie might understand the basics of loop
unravelling is beyond me.
 
