while (1) vs. for ( ;; )


Richard Heathfield

Alan Balmer said:
[avoiding multiple returns]
Only because you then have to chase after call_the_real_function(). On
the initial writing, this would probably be only a few lines below,
but that could change. Hmm... Actually, the more I think about it, the
less I like it. Each function turns into two, and the validation's
connection to the rest of the code is more tenuous than necessary.

That's a reasonable argument. Here's another, which is diametrically
opposed. :)

There are two tasks here - the validation of parameters, and the task for
which the function was actually written. Now, there may well be times when
you know perfectly well that the parameters are valid, because it's
actually impossible for them not to be valid. And there are times when
you're not sure.

In the cases where you are sure the parameters are okay, you can call the
function directly, avoiding the overhead of the parameter check. (You could
reasonably add assertion checks here, since they cannot possibly fire if
your program is correct.)

When you're not sure, you call the validation function - which either calls
the "do it" function directly or, perhaps slightly more cleanly, returns an
"everything's fine" value, after which you can proceed to call the "do it"
function yourself.
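
As a minimal sketch of the shape I have in mind (the names do_it and
checked_do_it, and the particular checks, are invented purely for
illustration):

#include <assert.h>
#include <stddef.h>

/* The "do it" function: assumes its parameters have already been
   validated.  The asserts document that assumption and cannot fire
   if the program is correct. */
static int do_it(char *dst, const char *src, size_t n)
{
    assert(dst != NULL);
    assert(src != NULL);
    /* ... the task the function was actually written for ... */
    return 0;
}

/* The validation function: callers who aren't sure go through this.
   It either reports "something's wrong" or calls the "do it"
   function on the caller's behalf. */
static int checked_do_it(char *dst, const char *src, size_t n)
{
    if (dst == NULL || src == NULL)
        return -1;                 /* validation failed           */
    return do_it(dst, src, n);     /* everything's fine - proceed */
}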
 

akarl

Mark said:
I know it's only a trivial example... but why not:

if (nothing_strange_about_the_parameters)
    status = call_the_real_function(passing_validated_parameters);
return status;

....or perhaps

if (something_strange_about_the_parameters) {
    status = 0;
} else {
    <do the real stuff>
    status = 1;
}
return status;


August
 

Walter Roberson

There are two tasks here - the validation of parameters, and the task for
which the function was actually written. Now, there may well be times when
you know perfectly well that the parameters are valid, because it's
actually impossible for them not to be valid. And there are times when
you're not sure.
In the cases where you are sure the parameters are okay, you can call the
function directly, avoiding the overhead of the parameter check. (You could
reasonably add assertion checks here, since they cannot possibly fire if
your program is correct.)
When you're not sure, you call the validation function - which either calls
the "do it" function directly or, perhaps slightly more cleanly, returns an
"everything's fine" value, after which you can proceed to call the "do it"
function yourself.

Unfortunately, most programmers "know" that they don't make mistakes,
and so will go ahead and call the functions directly. They won't
put in the assert()'s, either, as those are "a drain on performance"
and if you know you are calling with valid parameters you don't need them...
 

Chris McDonald

Then might they not also flounder with while(true) by not supplying the
standard header <stdbool.h>?


I believe that that is far less of a problem;
we encourage students learning C to compile their code with
<ot>
gcc -std=c99 -Wall -Werror -pedantic
</ot>

in the belief that not only is it more educational, because
it encourages them to track down a greater number of
compile-time errors, but that it also develops better habits.

Not supplying <stdbool.h> in this context is as meaningful as
not supplying <stdio.h> or, to follow a theme,
as useful as an ashtray on a motorbike.
 

Charlie Gordon

Chris McDonald said:
I would suggest that both while(1) and for(;;) are potentially unclear
to a person (an undergraduate student) seeing C for the first time
(assuming they haven't seen Java or C++, either).

OK, I suggest that while(true) is *clearer*.

BS

why not use the real clear thing :

WHILE (TRUE) {
}

or even

WHILE (TRUE == TRUE)
BEGIN
...
END

with obvious macro definitions for dummies ;-)
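
A sketch of those "obvious" definitions, just to spell the joke out:

#define WHILE  while
#define TRUE   1
#define BEGIN  {
#define END    }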
 

Christopher Benson-Manica

Richard Heathfield said:
I'm quite unlikely to use multiple returns

Would you suppose that such a view is common in the world of
professional programming? My work experience is still quite limited,
and it's hard to tell ubiquitous programming conventions from
idiosyncratic oddities...
 

Ben Pfaff

Christopher Benson-Manica said:
Would you suppose that such a view is common in the world of
professional programming?

It is not a view I have seen espoused at the companies where I
have worked.
 

akarl

Charlie said:
BS

why not use the real clear thing :

WHILE (TRUE) {
}

or even

WHILE (TRUE == TRUE)
BEGIN
...
END

with obvious macro definitions for dummies ;-)

In variable and function declarations the virtues of stdbool.h are clear.
Compare

int isThisAPredicate;

and

bool thisMustBeAPredicate;


August
 

Charlie Gordon

akarl said:
In variable and function declarations the virtues of stdbool.h are clear.
Compare

int isThisAPredicate;

and

bool thisMustBeAPredicate;

You are right, but things are a bit more complicated than this: pretending to
clean up the C language is doomed.
Just look at:

#include <stdbool.h>
#include <ctype.h>
....
while (isdigit(*s) == true) {
    ... sometimes works, sometimes not?...
}
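
(Why it only sometimes works: isdigit() is merely required to return some
nonzero value for a digit, not necessarily 1, so comparing its result with
true - which is 1 - tests something the standard never promised. A small
self-contained illustration:)

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    const char *s = "7";
    int r = isdigit((unsigned char)*s);  /* nonzero for a digit, not necessarily 1 */

    if (r == true)   /* compares r with 1 - may fail even for a digit */
        puts("digit, according to == true");
    if (r)           /* the reliable test */
        puts("digit, according to a plain nonzero test");
    return 0;
}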

Coming from Java, you may be used to writing while(true) { ... }, but then this
works too:

#if true
.... this code will compile:
.... It is a good thing the ISO folks made it work... as a consequence, true and
.... false are plain integer constants, not values of a distinct type.
#endif


I think it is better for people who see C for the first time not to get the
false idea that the language is easy.
The common idiom to loop forever is for (;;) { ... }. It catches the eye
instantly; why make it less obvious by burying the semantics in less
distinctive wording?

I especially don't like while(1) { ... } because it can be confused with
while (l) { ... } and vice versa.
Naming a variable l is of course a bad idea, but quite a common occurrence
in practice.
 

Tim Rentsch

Michael B Allen said:
Should there be any preference between the following logically equivalent
statements?

while (1) {

vs.

for ( ;; ) {

I suspect the answer is "no" but I'd like to know what the consensus is
so that it doesn't blink through my mind anymore when I type it.

My experience is that there is less cognitive load if the
'while(1)' form is used. That is, when reading code, I can
read either form just fine, but something in my conscious
brain notices if the 'for(;;)' form is used, which slows
down my reading just a tiny bit. The 'while(1)' form gets
read and turned into "loop forever" without losing flow or
needing any conscious brain cycles. For these reasons --
that is, higher productivity -- I prefer the 'while(1)'
form.

I was surprised at how many people reported that a compiler
they use issues a warning for 'while(1)' and gave that as a
reason for giving preference to the 'for(;;)' form. It
seems like a choice should be made on the basis of what's a
better expression (lower defect rate, higher productivity,
better understood by the readers), not on the basis of some
errant warning message. Getting a warning on 'while(1)'
(assuming it can't be separately disabled) is a sign that
the person who wrote the warning code didn't think through
what he was doing, not that the construct is suspect. Style
choices should determine what kinds of warning messages come
out of the compiler, not the other way around.

I myself would have no problem with people writing

FOREVER {
}

assuming a suitable '#define FOREVER <...>', _provided_ the
definition had made its way into a project-level include
file and was used and recognized by the team as a whole and
not just by some individuals. Doing that might be a way of
getting team consensus and simultaneously avoiding the whole
while/for discussion altogether.
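
As a concrete sketch of that (the header name is invented, and the
definition shown is just the obvious candidate):

/* project_defs.h - assumed project-wide header */
#ifndef PROJECT_DEFS_H
#define PROJECT_DEFS_H

#define FOREVER for (;;)

#endif

/* in any source file on the project */
#include "project_defs.h"

void event_loop(void)
{
    FOREVER {
        /* poll, dispatch, ... */
    }
}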
 

Keith Thompson

Charlie Gordon said:
[...]
You are right, but things are a bit more complicated than this: pretending to
clean up the C language is doomed.
Just look at :

#include <stdbool.h>
#include <ctype.h>
...
while (isdigit(*s) == true) {
    ... sometimes works, sometimes not?...
}

That's solved by following a simple rule: never compare a value to a
literal true or false. Comparing to true or false is both error-prone
and useless. If an expression is a condition, just use it as a
condition.

The existence of type bool doesn't mean you can't use

while (isdigit(*s)) {
    ...
}
 

Michael B Allen

It is not a view I have seen espoused at the companies where I
have worked.

I do this all the time. Consider something like:

int
process_input(FILE *in)
{
    for ( ;; ) {
        switch (read_tok(in)) {
        case TOK_EXIT:
            return 0;
        case TOK_ERR:
            return -1;
        case TOK_UP:
            y++;
            break;
        case TOK_DOWN:
            y--;
            break;
        case TOK_FIRE:
            if (fire() == -1) {
                return -1;
            }
            break;
        default:
            return -1;
        }
    }
}

This also happens quite a bit in little state machines like:

int state = 0;
for ( ;; ) {
    switch (state) {
    case ST_GROUND:
    case ...
        ... multiple exit conditions ...

etc.

Mike
 

Richard Bos

akarl said:
Well, in C we have the similar do-while statement. In Modula on the
other hand there is a real exit-in-the-middle loop construct:

LOOP
    ...
    IF someCondition THEN EXIT END;
    ...
END

Seems you've never heard of break.

#define ever (;;)

for ever {
    do_something();
    if (condition) break;
    do_some_more();
}

#define until(x) while (!(x))
#define friday_the_first 0

do {
    an_action();
    if (cows_dance_on_ice) break;
    another_action();
} until(friday_the_first);

Richard
 

Richard Bos

Chris F.A. Johnson said:
No, he's not. Did you mean:

while(is_the_pope_Catholic) {

I suggest he might have meant

while(is_the_pope_Roman_Catholic) {

Richard
 

Richard Heathfield

Walter Roberson said:
Unfortunately, most programmers "know" that they don't make mistakes,
and so will go ahead and call the functions directly. They won't
put in the assert()'s, either, as those are "a drain on performance"
and if you know you are calling with valid parameters you don't need
them...

Most programmers, sure. But I wasn't talking to most programmers. I was
talking to you! :)
 

Richard Heathfield

Christopher Benson-Manica said:
Would you suppose that such a view is common in the world of
professional programming?

No. In my experience, people in the world of professional programming
prefer:

* enormous functions
* multiple exit points from functions and loops
* multiple entry points, if they can get them
* spaghetti logic
* tight coupling
* vry shrt var nms
* evnshrterfnnms
* low cohesion
* lots of debugging
* hardly any testing
* lots of maintenance

My work experience is still quite limited,
and it's hard to tell ubiquitous programming conventions from
idiosyncratic oddities...

Right. I think the above list is pretty much universal. I've only worked on,
perhaps, three client sites where it wasn't observed.
 

Wolfgang Riedel

Richard said:
Alan Balmer said:
[avoiding multiple returns]
That's a reasonable argument. Here's another, which is diametrically
opposed. :)

There are two tasks here - the validation of parameters, and the task for
which the function was actually written. Now, there may well be times when
you know perfectly well that the parameters are valid, because it's
actually impossible for them not to be valid. And there are times when
you're not sure.

In the cases where you are sure the parameters are okay, you can call the
function directly, avoiding the overhead of the parameter check. (You could
reasonably add assertion checks here, since they cannot possibly fire if
your program is correct.)

When you're not sure, you call the validation function - which either calls
the "do it" function directly or, perhaps slightly more cleanly, returns an
"everything's fine" value, after which you can proceed to call the "do it"
function yourself.
IIRC that's one reason why COBOL has multiple entry points into functions.
 
