i++ + i++ questions


Julián Albo

Kai-Uwe Bux said:
statically analyzed. However, I fail to see why evaluation of an integral
expression should fall into that category. Why should this be deemed QoI?
Sometimes, I get the impression that the standard has way more undefined
behavior than necessary. This makes it unnecessarily hard to ensure the
correctness of a program.

I suppose that one reason is to keep the minimal complexity
required to make a conforming compiler relatively low.
Ok, let's have a command-line switch for that.
BTW: that legacy code would probably benefit from a little rewrite. What
happens when those folks upgrade their compiler and the UB changes
silently?

Yes, but I suspect that commercial considerations play a big role in that
type of thing. Every change that forces rewriting of some code can leave a
bunch of customers unsatisfied.
 

Kaz Kylheku

Richard said:
Jon Slaughter said:
[...]
The idea is to allow C to run on a diversity of systems
and furthermore to be optimized when possible. So the
"loose" rules allow that, while the undefined behavior
allows your possibility as one of the undefined choices.
Is this annoying? It seems so to me. However, the other
side of the coin is that C would probably not be as
popular w/o it. Some may say this is a good thing.

I didn't know that optimization was part of the C++ standard?

There's a difference between prescribing optimisation and not thwarting
it. What the Standard attempts to do is to specify what a well-formed
program is, and how such a program must behave, without restricting the
freedom of implementors to make it work efficiently on a variety of
architectures. Unnecessarily specifying things like order of evaluation
makes that task much harder.

Specifying the /abstract/ order of evaluation does not unduly constrain
optimization. The /actual/ evaluation order can still be permuted, as
long as the result is correct: externally visible effects all happen in
the right order, and all objects appear to be updated with the correct
values at the next sequence point.

And in any case, it's a much, much more sane design to have a language
feature by which the programmer can assert that over a well-defined
region of the program, the semantics are relaxed in some way. E.g.,
within this scope, I want looser floating-point semantics for greater
speed. Within this scope, I want loose evaluation order. And so forth.

If the program spends 90% of its time in 10% of the code, why does the
remaining 90% of the code have to be written in the same unsafe
language that supposedly helps to optimize that 10%?
You are entitled to your opinion.

Yeah, but it seems that deranged lunatics are /more/ than just entitled
to their opinions. Their opinions end up being designed into the tools
that we use at the end of the day.
 

Old Wolf

Kai-Uwe Bux said:
Well, let's have a look at the wording of the standard:

[5/4]
Except where noted, the order of evaluation of operands of
individual operators and subexpressions of individual
expressions, and the order in which side effects take place,
is unspecified.53) Between the previous and next sequence
point a scalar object shall have its stored value modified
at most once by the evaluation of an expression. Furthermore,
the prior value shall be accessed only to determine the value
to be stored. The requirements of this paragraph shall be met
for each allowable ordering of the subexpressions of a full
expression; otherwise the behavior is undefined.

Now, if the standard just dropped the last five words, the
clause would still specify what a well-defined program looks
like; the compiler would just be required to issue an error
when the shall-clause is violated.

To detect this reliably is not so simple:

#include <stdio.h>

int p;

void foo(int *dst, int *q)
{
    *dst = (*q)++;   /* UB when dst == q: *dst modified twice */
}

int main(void)
{
    int a = 0;

    switch( getchar() )
    {
    case 'a': foo(&p, &a); break;
    case 'b': foo(&p, &p); break;
    }
    return 0;
}

Then there is only a problem if the user types 'b'.

C99 added a partial solution called 'restrict', but I think
it does not solve the problem in this particular case.

I cannot see a simple way that the compiler can generate an
error for the above code, but still permit a good range of
legal code. It seems to me that the only solution would be
for the compiler to store 'metadata' with foo, that &a
must not be equal to p, and to trap any possible path of
execution that gives that result (including casting integers
to pointers). This information must also be checked across
compilation units and across libraries (it would certainly
require a new C ABI so that libraries such as GLIBC could
keep track of restrictions). This seems too restrictive to me
and it would cripple many of the uses that C is currently
put to.
 

Kai-Uwe Bux

Old said:
Kai-Uwe Bux said:
[...]
I cannot see a simple way that the compiler can generate an
error for the above code, but still permit a good range of
legal code. It seems to me that the only solution would be
for the compiler to store 'metadata' with foo, that &a
must not be equal to p, and to trap any possible path of
execution that gives that result (including casting integers
to pointers). This information must also be checked across
compilation units and across libraries (it would certainly
require a new C ABI so that libraries such as GLIBC could
keep track of restrictions). This seems too restrictive to me
and it would cripple many of the uses that C is currently
put to.

Rats -- it's pointers again.

You are right. Oh well.


Best

Kai-Uwe Bux
 
