Optimizing away an endless loop

  • Thread starter Johannes Schaub (litb)

Joshua Maurice

Johannes said:
Hello all, I need some advice. Is it actually true that a C++0x
implementation can print "Hello" for the following snippet?
#include <iostream>
int main() {
  while(1)
    /* do nothing */;
  std::cout << "Hello" << std::endl;
}
C++0x seems to allow an implementation to assume that the above infinite
loop terminates, which to me seems to mean that it can entirely ignore
that this loop never wants to stop.
Why is this the way it is? Doesn't it harm a lot of programs?

Also, here is another thought I have:

int a = -1;
for(;;) {
  if(a > 100) break;
  a++;
}

int *p = new int[a];

This loop *does* terminate, does not access volatile objects, does not call
I/O functions, and performs no synchronization or atomic operations. Yet, the
implementation may assume it terminates. If an implementation is thus free to
optimize it away, would that not make this program UB?

I don't see how this example is different from the print example above. Both
examples have code after the loop that depends on the loop. Yet the rationale
that the UB arises only because the loop would not terminate is disproven by
this. Can someone please show what paragraph makes these two examples
different?

I think there is a difference. I forget the exact phrasing, but I
don't think anyone is talking about optimizing that loop. I think
we're only talking about removing loops which have no side effects per
the C++ standard's definition /and/ for which nothing after the loop
depends on objects modified in the loop.
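For illustration, a minimal sketch (the names are mine, purely
illustrative) of the kind of loop I mean, contrasted with the OP's second
example:

#include <iostream>

int main() {
    // This loop does no I/O, touches no volatile objects, performs no
    // synchronization or atomic operations, and the only object it
    // modifies (sum) is never read after the loop. Under the criteria
    // above it can be collapsed to nothing.
    long sum = 0;
    for (int i = 0; i < 1000000; ++i)
        sum += i;                      // sum is dead after the loop

    // The OP's second loop, by contrast, computes 'a' which is then read
    // by 'new int[a]', so removing it is only valid if the compiler also
    // materializes the final value (a == 101).
    std::cout << "done" << std::endl;
}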
 

Joshua Maurice

* Joshua Maurice, on 30.08.2010 09:23:


No, you claimed that "if the loop does terminate, then the entire loop should be
optimized away".

"should" is very different from "can".

It implies that the compiler should determine whether the loop terminates, and
further, if that proves to be impossible in general (as it is), that the
semantics need to be woolified sufficiently so that the loop can be optimized
away in all cases where it does terminate plus some.

No no no. What I said was that all loops
1- which terminate,
2- which have zero visible side effects to the abstract machine of the
C++ standard,
3- and which do not write to objects read outside the loop,
should be optimized into nothing.

In this context, "should" is like a moral imperative. It denotes a
desired outcome. Obviously the desired outcome is impossible in general.
I stated this explicitly immediately following this claim in my first post.
"Should" in that context does not imply that:
the compiler should determine whether the loop terminates, and
further, if that proves to be impossible in general (as it is), that the
semantics need to be woolified sufficiently so that the loop can be optimized
away in all cases where it does terminate plus some.

This is just as silly as saying "Compilers should optimize all tail
recursive functions into their iterative equivalents" implies "If a
compiler cannot prove that a function is not equivalent to a tail
recursive function, then it should replace the function with return
42".

I think because I don't agree with you entirely, you immediately cast
me into the opponent's camp, which is largely false.

Suffice to say, I see this as analogous to the copy constructor
elision rules which I tend to like.

It's not analogous; see below.
Arguably, the copy constructor elision rules "screw up the
programmer's intent [...] to produce an /incorrect result/ for correct
code".

No.

Copy constructors were from the outset defined as special, because they do a
very special and limited job, and a copy constructor call elision does not
violate those semantics: it gets the same job done more efficiently, that's all.

A loop can do anything; elision of a loop in general changes the code's effect
so as to produce /incorrect results/.
I also think that this particular thing in this particular case
is a good thing.

Copy constructor elision = good thing, yes.

The corresponding thing for a loop would be a new loop construct like
'perhaps_loop_while' or a loop attribute '[[tentative]]', whatever, with the
semantics that a provably infinite such loop can be elided.

I don't think anyone would actually use such idiot's constructs, do you?
However, unlike the copy constructor elision rules, 1- there is code
already out there which would break under this new infinite loops
rule, 2- I think that the copy constructor elision rules are a lot
more intuitive than this new infinite loops rule, and 3- I think that
the copy constructor elision rules actually have a tangible benefit as
opposed to this new infinite loops rule.
That is, I err on the side of caution when introducing allowances such
as the copy constructor elision rule. I have seen compelling arguments
for the copy constructor elision rule, but I have yet to see anything
near that level of rigor for this new infinite loops rule.

Yes.

You won't see the rigor either, since it's utterly meaningless sabotage.

Whoever proposed this, making fools of the committee?

Copy constructor elision changes the semantics of the program. Suppose
you had an instance-counting thing in a copy constructor, and you
wrote logic to trigger whenever the number of copies became X; then
copy constructor elision would break your program. Copy constructor
elision is a very narrow allowance for the compilers to translate
source not equivalently to the naive way. However, we live with it
because it "jibes" with our intuition: we're just talking about making
a couple fewer copies, and all copies "should be" equivalent,
right?
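To make that concrete, here is a sketch of my own (not anyone's real
code):

#include <iostream>

struct Counted {
    static int copies;                    // how many copy constructions ran
    Counted() {}
    Counted(const Counted&) { ++copies; }
};
int Counted::copies = 0;

Counted make() { return Counted(); }      // candidate for copy elision

int main() {
    Counted c = make();
    // With elision: copies == 0. Without: 1 or 2. Any logic that triggers
    // when the count reaches some X therefore behaves differently
    // depending on whether the compiler elided the copies.
    std::cout << Counted::copies << "\n";
}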

I think it's a little harder to sell a complete beginner on "all loops
should terminate" than on "copies of objects made by the copy
constructor should be equivalent", but I don't think we're talking
about entirely different kinds here, just a matter of degree.

However, I basically agree with your recommendations that this is a
really bad idea to standardize, and I definitely do not want my
compiler to do this. (If it is a big problem, I would want my compiler
to issue a warning, though.)
 

James Kanze

* James Kanze, on 30.08.2010 17:15:

[...]
Is this meant to be an example of something that can be
"optimized" by removing the loop?

According to the wording in the current standard, the compiler
could remove the loop, regardless of the input. (How important
it is, I don't know. It's a very artificial example.)
 

James Kanze

Also, here is another thought I have:
int a = -1;
for(;;) {
  if(a > 100) break;
  a++;
}
int *p = new int[a];
This loop *does* terminate, does not access volatile objects,
does not call I/O functions, and performs no synchronization or
atomic operations. Yet, the implementation may assume it
terminates. If an implementation is thus free to optimize it
away, would that not make this program UB?

No. In fact, an implementation is free to optimize it away; a
good optimizing compiler will produce the same code here as if
you'd just written:

int *p = new int[101];
I don't see how this example is different from the print example above.

The loop terminates.
Both examples have code after the loop that depends on the
loop. Yet the rationale that the UB arises only because the
loop would not terminate is disproven by this. Can someone
please show what paragraph makes these two examples
different?

It's not in the standard; it's in the proposed draft C++0x:
§6.5/5 in N3000 (which isn't the very latest, but is relatively
recent):

A loop that, outside of the for-init-statement in the
case of a for statement,
-- makes no calls to library I/O functions, and
-- does not access or modify volatile objects, and
-- performs no synchronization operations (1.10) or
atomic operations (Clause 29)
may be assumed by the implementation to terminate. [
Note: This is intended to allow compiler
transformations, such as removal of empty loops, even
when termination cannot be proven. —end note ]

The wording is pretty vague: there's no mention of undefined
behavior, for example. I'd say that there is no undefined
behavior, but that the standard allows alternate semantics
(which may result in undefined behavior if you haven't provided
for them).
 

Alf P. Steinbach /Usenet

* James Kanze, on 31.08.2010 12:11:
* James Kanze, on 30.08.2010 17:15:
[...]
Who said anything about depending on nothing external? And "no
side effects" means in the sense of the C++ language: no IO and
no accesses to volatile variables. The effect of one
calculation can still affect the next one. Something like:
inline int f(int in)
{
    return in % 2 == 0 ? in / 2 : in * 2;
}
int i;
std::cin >> i;
while (i != 0) {
    i = f(i);
}
std::cout << "Whatever" << std::endl;
fits the bill, for example. (Note that if i is not initially
zero, the loop never terminates. I would *not* expect an
optimizer to be able to recognize this.)
Is this meant to be an example of something that can be
"optimized" by removing the loop?

According to the wording in the current standard, the compiler
could remove the loop, regardless of the input. (How important
it is, I don't know. It's a very artificial example.)

Well it doesn't make the program faster (to see that, try to calculate how much
faster you think it makes it). So it's not an optimization on that count. And it
makes the program produce an /incorrect result/, contrary to the intent of
infinitely looping, which also means that it's not an optimization.

In short, it's not an optimization, and the importance is about how much damage
it does, not about any benefit (there's none), so I say it's pretty important.

So, no optimization example has been shown.

And it cannot be shown, because this is just inane damage to the language.

Nobody of non-negative IQ would use 'perhaps_loop_while' if they had any choice,
but, our disagreement isn't over that, but merely over whether one should
pretend that whoever proposed this idiocy wasn't actively pulling one over, i.e.
pretending that there is some likeness to actual optimization.


Cheers & hth.,

- Alf
 

James Kanze

* James Kanze, on 31.08.2010 12:11:
* James Kanze, on 30.08.2010 17:15:
[...]
Who said anything about depending on nothing external? And "no
side effects" means in the sense of the C++ language: no IO and
no accesses to volatile variables. The effect of one
calculation can still affect the next one. Something like:
inline int f(int in)
{
    return in % 2 == 0 ? in / 2 : in * 2;
}
int i;
std::cin >> i;
while (i != 0) {
    i = f(i);
}
std::cout << "Whatever" << std::endl;
fits the bill, for example. (Note that if i is not initially
zero, the loop never terminates. I would *not* expect an
optimizer to be able to recognize this.)
Is this meant to be an example of something that can be
"optimized" by removing the loop?
According to the wording in the current standard, the compiler
could remove the loop, regardless of the input. (How important
it is, I don't know. It's a very artificial example.)
Well it doesn't make the program faster (to see that, try to
calculate how much faster you think it makes it). So it's not
an optimization on that count. And it makes the program
produce an /incorrect result/, contrary to the intent of
infinitely looping, which also means that it's not an
optimization.

Removing the loop makes the program infinitely faster, for some
input. The results are incorrect, I agree, but apparently, the
standard gives the compiler the explicit right to do this.

[...]
And it cannot be shown, because this is just inane damage to
the language.

I tend to agree, but since the proposed standard allows it...
Nobody of non-negative IQ would use 'perhaps_loop_while' if
they had any choice, but, our disagreement isn't over that,
but merely over whether one should pretend that whoever
proposed this idiocy wasn't actively pulling one over, i.e.
pretending that there is some likeness to actual optimization.

The actual question is whether someone might use a
perhaps_loop_while, and what the consequences are, according to
the standard. In fact, if I understand correctly, the standard
has also invalidated true infinite loops, which are used in some
specific types of programs.
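(A typical case, sketched by me under the assumption of a bare-metal or
event-driven target, is the idle loop of a long-running program, where all
the real work happens elsewhere, e.g. in interrupt handlers:)

void idle_forever() {
    for (;;) {
        // deliberately empty: the program is never supposed to leave this
        // loop. It does no I/O, touches nothing volatile, and performs no
        // synchronization, so under the quoted wording an implementation
        // may assume it terminates.
    }
}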
 

Kai-Uwe Bux

Johannes said:
Actually, the behavior of the above program turns out to be undefined,
according to the C++0x draft as explained by
http://blog.regehr.org/archives/161 :

`Unfortunately, the words "undefined behavior" are not used. However,
anytime the standard says "the compiler may assume P," it is implied that
a program which has the property not-P has undefined semantics.`

I asked the same question on Stackoverflow, and they came to the
conclusion that this allows better optimizations for the compiler.

Ok, so now I start to wonder what the provision actually means. It reads
thus:

A loop that, outside of the for-init-statement in the case of a for
statement,
-- makes no calls to library I/O functions, and
-- does not access or modify volatile objects, and
-- performs no synchronization operations (1.10) or atomic operations
(Clause 29)
may be assumed by the implementation to terminate.

Now, this "may be assumed" allows the compiler to make a possibly false
assumption. As pointed out above, if the assumption is wrong, the behavior
is undefined. I wonder what happens if the assumption is true for some but
possibly not all inputs. Is the compiler allowed to make the (false)
assumption that the loop _always_ terminates? In this case, we would have
undefined behavior even for those values where the loop happens to
terminate.

Also, what happens if we just cannot prove that a loop terminates? Does
that mean we cannot argue that the program is well-defined?

Example:

#include <iostream>

int main ( void ) {
    unsigned long long n;
    unsigned long long m = 0;
    std::cin >> n;
    while ( n != 1 ) {
        ++ m;
        if ( n % 2 ) {
            n = 3*n + 1;
        } else {
            n /= 2;
        }
    }
    std::cout << m << "\n";
}

// end of file

If the 3n+1 conjecture should be false, we may see undefined behavior; or: a
proof that the above is well-defined entails a proof of the 3n+1 conjecture.

So, if this program prints something, does the result mean anything? or do I
have to make m volatile?
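(For what it's worth, here is a sketch of the volatile variant. If m is
volatile, the loop modifies a volatile object on every iteration, so it no
longer satisfies the bullets above and the termination assumption should
not apply:)

#include <iostream>

int main ( void ) {
    unsigned long long n;
    volatile unsigned long long m = 0;   // the loop now modifies a volatile object
    std::cin >> n;
    while ( n != 1 ) {
        ++ m;                            // volatile access on every iteration
        if ( n % 2 ) {
            n = 3*n + 1;
        } else {
            n /= 2;
        }
    }
    std::cout << m << "\n";
}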


Best

Kai-Uwe Bux
 

Armen Tsirunyan

It's a direct introduction, by the compiler, of a second bug /hiding/ the
original bug.

By allowing this rewriting rule in the standard one has introduced sufficiently
vague idiot's semantics so that the transformation can be formally regarded as
preserving semantics and thus, that it can formally be regarded as an
optimization, but it's still pure idiot's wordplay. Someone needs a real hard
kick in the ass. Or on the other side of the body at about the same elevation.

+1 for Alf :)
This is absolutely outrageous. If compilers DO actually perform this
"optimization"
the nazis will once again ride on dinosaurs.

I myself have used empty loops to test things, like, OK, let's put an
empty loop here and see if the program ends, so I'll see if the code
reaches that particular branch... I think I am not the only person who
has done so, and who is willing to do so under the 0x rule.
 

Joshua Maurice

* James Kanze, on 31.08.2010 12:11:
On Aug 30, 4:24 pm, "Alf P. Steinbach /Usenet"<alf.p.steinbach
(e-mail address removed)>  wrote:
* James Kanze, on 30.08.2010 17:15:
     [...]
Who said anything about depending on nothing external?  And "no
side effects" means in the sense of the C++ language: no IO and
no accesses to volatile variables.  The effect of one
calculation can still affect the next one.  Something like:
      inline int f(int in)
      {
          return in % 2 == 0 ? in / 2 : in * 2;
      }
      int i;
      std::cin >> i;
      while (i != 0) {
          i = f(i);
      }
      std::cout << "Whatever" << std::endl;
fits the bill, for example.  (Note that if i is not initially
zero, the loop never terminates.  I would *not* expect an
optimizer to be able to recognize this.)
Is this meant to be an example of something that can be
"optimized" by removing the loop?
According to the wording in the current standard, the compiler
could remove the loop, regardless of the input.  (How important
it is, I don't know.  It's a very artificial example.)
Well it doesn't make the program faster (to see that, try to
calculate how much faster you think it makes it). So it's not
an optimization on that count. And it makes the program
produce an /incorrect result/, contrary to the intent of
infinitely looping, which also means that it's not an
optimization.

Removing the loop makes the program infinitely faster, for some
input.  The results are incorrect, I agree, but apparently, the
standard gives the compiler the explicit right to do this.

    [...]
And it cannot be shown, because this is just inane damage to
the language.

I tend to agree, but since the proposed standard allows it...
Nobody of non-negative IQ would use 'perhaps_loop_while' if
they had any choice, but, our disagreement isn't over that,
but merely over whether one should pretend that whoever
proposed this idiocy wasn't actively pulling one over, i.e.
pretending that there is some likeness to actual optimization.

The actual question is whether someone might use a
perhaps_loop_while, and what the consequences are, according to
the standard.  In fact, if I understand correctly, the standard
has also invalidated true infinite loops, which are used in some
specific types of programs.

I found a paper on the committee's website, which at least gives a
slightly more rational explanation for this proposed new rule.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2052.htm
Semantics of some non-terminating loops

Concern has been expressed about whether it is safe and legal for a
compiler to optimize based on the assumption that a loop will
terminate. The canonical example:

for (T * p = q; p != 0; p = p->next)
    ++count;
x = 42;

Is it valid for the compiler to move the assignment to x above the
loop? If the loop terminates, clearly yes, because the overall effect
of the code doesn't change, and, in the absence of synchronization,
there is no guarantee that the assignment to x will not be visible to
a different thread before any assignments to count. But what if the
loop doesn't terminate? For example, may a user assume that a non-
terminating loop effects synchronization, and may therefore be used to
prevent a data race? Clearly, a loop that contains any explicit
synchronizations must be assumed to interact with a different thread,
and a loop that contains a volatile access or a call to an I/O
function must be assumed to interact with the environment, so
optimization opportunities for such a loop are already limited. But
what about a simple loop, as above?

If such a loop does not terminate, then clearly neither the loop
itself nor any code following the loop can have any observable
behavior. Moreover, as the "least requirements" refer to data written
to files "at program termination", the presence of a non-terminating
loop may even nullify observable behavior preceding entry to the loop
(for example, because of buffered output). For these reasons, there
are problems with concluding that a strictly-conforming program can
contain any non-terminating loop. We therefore conclude that a
compiler is free to assume that a simple loop will terminate, and to
optimize based on that assumption.
<<<<
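To spell out the transformation the paper is asking about (the node type
and names below are mine, just for illustration):

struct Node { Node* next; };

void count_then_store(Node* q, int& count, int& x) {
    // As written: the store to x happens only after the walk finishes.
    for (Node* p = q; p != 0; p = p->next)
        ++count;
    x = 42;

    // Under the "loops may be assumed to terminate" rule the compiler may
    // instead emit the equivalent of:
    //     x = 42;
    //     for (Node* p = q; p != 0; p = p->next) ++count;
    // If the list is circular and the loop never ends, another thread that
    // expects x to remain untouched until the walk completes can now
    // observe x == 42 -- the hoisted store becomes visible "too early".
}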

I believe the mandate should be to standardize existing practice, and
in limited cases standardize currently nonexistent practice.

The quote is a very standardese-y argument. It proceeds from the very
limited guarantee of the standard that the contents of a file are only
defined at the end of the execution. However, that is bogus. A lot
of us have seen servers for which these limited guarantees result in
an unworkable model. We need a long lived process which writes to a
file, and we need a guarantee that the file has X contents after a
flush (transactions, recovery, other programs reading the file, the
file is a pipe or socket, etc.). So, from this very shaky and false
foundation, the paper goes on to conclude that any C++ program which
does not terminate has no defined behavior. Again, this is just silly.
It's definitely not standardizing existing practice, and it's
definitely not standardizing any useful practice either. Instead, it's
the result of rules lawyering of the worst kind.

Maybe it's time to increase the guarantees given to the abstract
machine to match current practice? Then again, I've seen enough
discussions about file flushes not actually working, either because
the OS caches the data in memory despite your instructions, or, most
evilly, because there's a hardware cache in the hard drive unit itself
which caches it despite a request to flush, so this isn't trivial.

It is an interesting take on it. I didn't quite see the argument until
I read that paper. To quote the paper above:
For example, may a user assume that a non-terminating loop effects synchronization, and may therefore be used to prevent a data race?

My answer is a resounding "yes". In the committee papers discussing
threading and race conditions, there are examples which describe how
an implementation may not introduce a load on a branch of a switch
where there was none before. In that same line of thought, an
implementation may not introduce a new load by moving a load from
after to before a (potentially) infinite loop, any more than it could
move a load out of an "if (false)" branch or introduce a load which
was not there before in a branch of a switch. Moreover, the semantics
of the abstract machine should be changed to accommodate the standard,
well-accepted practice of programs which are meant to run for years or
forever, such as servers of all varieties.
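The switch-branch point, as a concrete sketch (the names are mine):

// g may be written by another thread; this thread only reads it on the
// case 0 path, so those writes conflict with nothing on the other paths.
extern int g;

int pick(int selector, int fallback) {
    switch (selector) {
    case 0:
        return g;          // the only path that loads g
    default:
        return fallback;   // hoisting a load of g up above the switch,
                           // "for speed", would introduce a data race
                           // that the source program does not contain
    }
}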

As a preemptive reply to the other arguments I've heard before: it
might result in more efficient code to introduce a new load in a
switch statement branch, but that can introduce a race condition, so
an implementation may not do it. It changes the semantics of the
program as written. Equivalently, an infinite loop has well defined
semantics, and they are used in practice and are well accepted, and
moving loads before the loop can introduce race conditions and change
the semantics of the program. Any lost "optimization opportunities"
are dwarfed by having a sane programming language without a multitude
of "exceptions" such as copy constructor elision and return value
optimization. I'm happy with the current exception of "copy
constructed objects, in certain situations, might be elided, so they
should be equivalent". I am not happy with "sometimes, the
implementation might introduce a race condition", and I would
definitely not be happy trying to debug that in a complex piece of
software.
 

cpp4ever

* James Kanze, on 29.08.2010 21:30:

It's not an optimization.

It's a direct introduction, by the compiler, of a second bug /hiding/
the original bug.

By allowing this rewriting rule in the standard one has introduced
sufficiently vague idiot's semantics so that the transformation can be
formally regarded as preserving semantics and thus, that it can formally
be regarded as an optimization, but it's still pure idiot's wordplay.
Someone needs a real hard kick in the ass. Or on the other side of the
body at about the same elevation.



I agree, and more (see above). <g>


Cheers,

- Alf

What can I say? I'm gob-smacked that Alf would appear to be the first
person to find this compiler suggestion insane, or at least the mental
aberration of people who should know better.

Alf, can I be second in line after yourself to deliver that ass kicking?

Regards

cpp4ever
 
