while (1) vs. for ( ;; )

P

pete

Tim Rentsch wrote:
I was surprised at how many people reported that a compiler
they use issues a warning for 'while(1)' and gave that as a
reason for giving preference to the 'for(;;)' form. It
seems like a choice should be made on the basis of what's a
better expression (lower defect rate, higher productivity,
better understood by the readers), not on the basis of some
errant warning message.

The warning is the tie breaker.

There is no difference in defect rate,
productivity, or understanding by the readers.
 
A

akarl

Richard said:
Seems you've never heard of break.

I really meant to say: In Modula there is a *separate*
exit-in-the-middle loop construct, i.e. a construct whose intention is
clear when you read the word LOOP.

August
 
L

Lawrence Kirby

What about the following?
#define loop_forever for(;;)

There are a few loops that really don't end, although rarely more than
one in a program. For those this might be OK. There are far more loops
which do end but the termination condition is in the middle of the loop,
not at the start or end, e.g.

for (;;) {
    ...

    if (cond) break;

    ...
}

This loop does not loop for ever and writing it as

loop_forever {
    ...

    if (cond) break;

    ...
}

would be highly misleading.

Incidentally the for (;;) form is what K&R suggested for a loop construct
without a termination condition. It is quite natural as such because it
doesn't specify one. I find while (1) to be slightly kludgy by comparison
because it specifies a redundant condition expression.

Lawrence
 
D

Default User

Christopher said:
Would you suppose that such a view is common in the world of
professional programming? My work experience is still quite limited,
and it's hard to tell ubiquitous programming conventions from
idiosyncratic oddities...

Single return has been mandated in both of the Common Coding Standards
(one C, the other C++) I've worked to at my company.



Brian
 
D

Default User

Richard Heathfield wrote:

No. In my experience, people in the world of professional programming
prefer:

* enormous functions
* multiple exit points from functions and loops
* multiple entry points, if they can get them
* spaghetti logic
* tight coupling
* vry shrt var nms
* evnshrterfnnms
* low cohesion
* lots of debugging
* hardly any testing
* lots of maintenance


That's not been my experience.




Brian
 
M

Michael Wojcik

IIRC that's one reason why COBOL has multiple entry points into functions.

<OT>
COBOL doesn't have multiple entry points into functions. Functions
in COBOL are a fairly restrictive language feature. I suspect you're
thinking of "programs", which is what COBOL calls the things that are
closest to what C calls "functions", or what are sometimes called
"subroutines".

Standard COBOL does not permit multiple entry points in programs, but
it's an extension in several popular implementations.
</OT>

Multiple entry points would certainly be *a* way of implementing the
design Richard describes, but I don't see much justification for
using them that way. Just using two functions would be clearer and
less error-prone, I suspect. And it strikes me as unlikely that,
conversely, this design would be used to argue for allowing multiple
entry points in a language.

I must admit, though, that I'm not fond of Richard's scheme in most
cases; unless a function is time-critical or the checks are very
expensive I don't see the advantage to skipping the checks. That
shifts the burden of validation from the program to the programmer.

--
Michael Wojcik (e-mail address removed)

Art is our chief means of breaking bread with the dead ... but the social
and political history of Europe would be exactly the same if Dante and
Shakespeare and Mozart had never lived. -- W. H. Auden
 
T

Tim Rentsch

Richard Heathfield said:
Michael B Allen said:
Should there be any preference between the following logically equivalent
statements?

while (1) {

vs.

for ( ;; ) {

I suspect the answer is "no" but I'd like to know what the consensus is
so that it doesn't blink through my mind anymore when I type it.

[snip]

Personally, I prefer neither choice! I would rather have the loop control
statement explicitly document the exit condition (unless there genuinely
isn't one, such as might be the case in an electronic appliance like a
microwave oven, where "forever" can roughly be translated as "whilst power
is being supplied to the appliance").

What Richard might be saying, but isn't really what I think
he's trying to say, is that the control expression should
redundantly express the condition for loop exit, even if the
loop is never exited by the test on the loop control. So
for example,

/* p != NULL; */
while( p != NULL ){
    ...
    ...
    p = blah_blah_blah();
    if( p == NULL ) break;
    ...
    ... code that doesn't affect p ...
}

If this is what he's saying it's an interesting idea. Just
offhand I don't remember seeing it before.

My first reaction was that he's meaning to say that loops
that look infinite (but aren't) are bad, and the code should
be reworked so that the loop control expression is really
what controls the loop body. So to rework the example
above, it might come out as

... /* these lines were at the start of the */
... /* "infinite" loop body originally */

while( (p = blah_blah_blah()) != NULL ){
    ...
    ... code that doesn't affect p ...
    ... /* these lines were at the start of the */
    ... /* "infinite" loop body originally */
}

which is often the right idea. Certainly in code review any
loop that is of the "infinite-but-not-really" variety should
get some scrutiny. In many cases the code will benefit from
being reworked so that the loop control expression is what
causes the loop to exit (or at least one of them).

However, I don't believe that all "infinite-but-not-really"
loops benefit from this kind of rewriting. Even for loops
with only one exit condition, sometimes having the only loop
exit be a break (or return) in the middle of the loop body
is the clearest expression of what the loop is supposed to
do. At least, that has been my experience.
 
R

Richard Heathfield

Tim Rentsch said:
What Richard might be saying, but isn't really what I think
he's trying to say, is that the control expression should
redundantly express the condition for loop exit, even if the
loop is never exited by the test on the loop control.

No, I'm saying I could put up with

for(;;)
{
    fetch_microwave_command(&foo);
    execute_microwave_command(&foo);
}

because it's quite evident that this is a genuinely "infinite" loop, in the
limited sense of that word applicable to a microwave oven!

My first reaction was that he's meaning to say that loops
that look infinite (but aren't) are bad,

All else being equal, yes...

and the code should
be reworked so that the loop control expression is really
what controls the loop body.

...and yes.

(This reply is purely for clarification of my earlier intent - I did read
the rest of your article, and I accept that our viewpoints differ.)
 
T

Tim Rentsch

pete said:
The warning is the tie breaker.

There is no difference in defect rate,

Is this statement just one of belief, or are you offering
some evidence?

productivity,

There is for some developers. Among other things, some
debugging techniques work better when the 'while(1)' form is
used. If the people who prefer the 'for(;;)' form don't see
any difference in productivity, that seems to be an argument
in favor of using the 'while(1)' form.

or understanding by the readers.

At least one poster in this thread said something along
the lines of using 'while' for "looping" and 'for' for
"iteration". In the non-infinite case, that's usually
my leaning also. So there is *some* difference in how
the two forms are understood.

Overall, I still believe that the direct effects on the
developers have a more significant effect than do the
effects of a warning message issued by a compiler,
especially since it's easy to get around that message being
issued. But that's just a statement of belief; if we
want to be sure, we should set up a comparison and
gather some sort of objective evidence.
 
T

Tim Rentsch

Richard Heathfield said:
Tim Rentsch said:

[snip]
My first reaction was that he's meaning to say that loops
that look infinite (but aren't) are bad,

All else being equal, yes...

and the code should
be reworked so that the loop control expression is really
what controls the loop body.

...and yes.

(This reply is purely for clarification of my earlier intent - I did read
the rest of your article, and I accept that our viewpoints differ.)

Are they really different? What I said was that some loops
are better with the exit condition in the loop body rather
than the control expression; such loops don't occur
commonly. Do you really mean to say that _no_ loop, at any
time under any conditions, should be written with a constant
control expression? Not counting of course examples like
the microwave oven driver loop, which you explained already.
 
D

Default User

Richard said:
Default User said:


You've been a very lucky chap. (And, on occasion, so have I.)

Perhaps luck in that the software here has generally been written to
Common Coding Standards (if ERT opposes it, it must be good) which
either forbid or recommend against most of the things on your list.
Naturally, debugging, testing, and maintenance are different, not
really things controlled by a standard.



Brian
 
K

Keith Thompson

Tim Rentsch said:
Is this statement just one of belief, or are you offering
some evidence?

I won't try to speak for pete, but since "while (1)" and "for (;;)"
are semantically identical, I'd be very surprised if there were any
difference in defect rate. It's something that seems so obvious to me
that I wouldn't bother trying to measure it without a very good
reason. If there were a difference, I'd tend to assume that it's a
difference in training (perhaps the books or classes that use one form
happen, by coincidence, to be better than the ones that use the other
form). Do you have some reason to think there's a significant
difference?
There is for some developers. Among other things, some
debugging techniques work better when the 'while(1)' form is
used. If the people who prefer the 'for(;;)' form don't see
any difference in productivity, that seems to be an argument
in favor of using the 'while(1)' form.

What debugging techniques are you referring to? Off the top of my
head, I'd say that any debugging technique that treats the two forms
significantly differently is broken, but I'm prepared to be
enlightened.
At least one poster in this thread said something along
the lines of using 'while' for "looping" and 'for' for
"iteration". In the non-infinite case, that's usually
my leaning also. So there is *some* difference in how
the two forms are understood.

I read both "while (1)" and "for (;;)" as "this is an infinite loop,
or at least one for which the termination condition is not specified",
and any C programmer should be familiar with both forms. I see the
point of preferring "for (;;)" when iterating over some set of
discrete entities, but I don't think it's a strong argument.

In another thread, I've railed against "if (0 = a)", even though it's
semantically identical to "if (a == 0)". And yes, I'm taking the
opposite side of this argument. The difference, which is largely in
my head, is that I don't find either "while (1)" or "for (;;)" to be
ugly or jarring.
 
K

Keith Thompson

Keith Thompson said:
In another thread, I've railed against "if (0 = a)", even though it's
semantically identical to "if (a == 0)". And yes, I'm taking the
opposite side of this argument. The difference, which is largely in
my head, is that I don't find either "while (1)" or "for (;;)" to be
ugly or jarring.

Of course, I meant "if (0 == a)", not "if (0 = a)".

Gloating over the irony of that typo is left as an exercise for the
reader.
 
A

Alan Balmer

What Richard might be saying, but isn't really what I think
he's trying to say, is that the control expression should
redundantly express the condition for loop exit, even if the
loop is never exited by the test on the loop control. So
for example,

/* p != NULL; */
while( p != NULL ){
    ...
    ...
    p = blah_blah_blah();
    if( p == NULL ) break;
    ...
    ... code that doesn't affect p ...
}

I hope that's not what he's saying. My reaction to the above would be
"What the hell? Did the writer leave something out? Is the problem I'm
currently debugging caused by missing code which should have set p?"

If it's desirable to redundantly express the exit condition, do it in
a comment.
 
C

Charlie Gordon

Keith Thompson said:
Charlie Gordon said:
[...]
You are right, but things are a bit more complicated than this: pretending to
clean up the C language is doomed.
Just look at:

#include <stdbool.h>
#include <ctype.h>
...
while (isdigit(*s) == true) {
    ... sometimes works, sometimes not? ...
}

That's solved by following a simple rule: never compare a value to a
literal true or false. Comparing to true or false is both error-prone
and useless. If an expression is a condition, just use it as a
condition.

The existence of type bool doesn't mean you can't use

while (isdigit(*s)) {
    ...
}

I just meant to stress the point, that the C language is full of traps and
pitfalls... giving it a java flavor doesn't fix them.

Chqrlie.
 
C

Charlie Gordon

Default User said:
Perhaps luck in that the software here has generally been written to
Common Coding Standards (if ERT opposes it, it must be good) which
either forbid or recommend against most of the things on your list.
Naturally, debugging, testing, and maintenance are different, not
really things controlled by a standard.

Would you mind posting these?
Coding Standards so effective would be very helpful around here.

Chqrlie.
 
T

Tim Rentsch

Keith Thompson said:
I won't try to speak for pete, but since "while (1)" and "for (;;)"
are semantically identical, I'd be very surprised if there were any
difference in defect rate.

In fact, I believe the most likely situation is that there
aren't any differences in defect rate, at least if we're
talking about a measurable difference. I asked the question
because I think it's useful to get people to be explicit
about the assumptions that they're making, especially if the
assumptions are unconscious rather than conscious. The
causes of defects are like performance bottlenecks - where
you think they are is often not where they actually are, and
there's no substitute for actual measurement to find out.

It's something that seems so obvious to me
that I wouldn't bother trying to measure it without a very good
reason. If there were a difference, I'd tend to assume that it's a
difference in training (perhaps the books or classes that use one form
happen, by coincidence, to be better than the ones that use the other
form). Do you have some reason to think there's a significant
difference?

If there is a difference, I would guess that the factors
involved are correlative rather than causative (like the
example you gave about training). I don't have any reason to
think that there's a significant difference; but not having
a reason to think there *is* a significant difference should
not, by itself, be enough to claim that there *isn't* any.
There should be some sort of evidence, even if it's just
anecdotal.

What debugging techniques are you referring to? Off the top of my
head, I'd say that any debugging technique that treats the two forms
significantly differently is broken, but I'm prepared to be
enlightened.

One technique I was thinking of is, eg,

#define while(e) while( LOOPTRACK() && (e) )

which is a technique I've found useful in the past in some
situations. You can do something similar with 'for()', but
it doesn't work quite as well because of the syntax of the
'for' control expressions; in particular, it's hard to put
the call to 'LOOPTRACK()' before the first iteration. This
technique can be especially important for loops of the "near
infinite" variety.

I read both "while (1)" and "for (;;)" as "this is an infinite loop,
or at least one for which the termination condition is not specified",
and any C programmer should be familiar with both forms. I see the
point of preferring "for (;;)" when iterating over some set of
discrete entities, but I don't think it's a strong argument.

I'm not saying it should be read that way, only that some
people do tend to read it that way. Whether, or how well,
something is understood by the readers depends on who "the
readers" are. Most people (myself included) tend to assume
unconsciously that other people will read things the same
way that they do. That assumption is at least partly untrue
for the 'while(1)/for(;;)' distinction.

Does this make a big difference? Perhaps so, perhaps not.
However I think it's important to note that the distinction
is there, even if later we decide that it isn't important
enough to worry about.

In another thread, I've railed against "if (0 == a)", even though it's
semantically identical to "if (a == 0)". And yes, I'm taking the
opposite side of this argument. The difference, which is largely in
my head, is that I don't find either "while (1)" or "for (;;)" to be
ugly or jarring.

My personal opinion is that either 'while(1)' or 'for(;;)'
is ok, and developers should be free to write either one.
For myself I find the 'while(1)' to be more natural, and I'm
sure there are other people who find the 'for(;;)' more
natural; probably it roughly balances out. However, that
conclusion is merely personal opinion not supported by any
specific evidence. An important point in most of what I've
been saying is that it helps to first state points of
evidence, and only later in a separate step draw conclusions
based on whatever evidence is being considered.
 
A

akarl

Charlie said:
Charlie Gordon said:
[...]

You are right, but things are a bit more complicated than this: pretending to
clean up the C language is doomed.
Just look at:

#include <stdbool.h>
#include <ctype.h>
...
while (isdigit(*s) == true) {
    ... sometimes works, sometimes not? ...
}

That's solved by following a simple rule: never compare a value to a
literal true or false. Comparing to true or false is both error-prone
and useless. If an expression is a condition, just use it as a
condition.

The existence of type bool doesn't mean you can't use

while (isdigit(*s)) {
    ...
}


I just meant to stress the point, that the C language is full of traps and
pitfalls... giving it a java flavor doesn't fix them.

You don't see the point. The `bool' type in stdbool.h is primarily for
(self) documentation purposes. That a lint tool such as Splint can
detect incorrect usage is an extra bonus.

August
 
