parenthesis


gc

If I have double variables d, u, v, w, x, does the standard ensure that,
while assigning a value to d as
d=(u+v)-(w+x);
u+v will be added first, then w+x, and then the second result will be
subtracted from the first, and that the compiler will not evaluate the
above expression in any other order?

Just wondering, since addition of doubles is not associative.
 

Grumble

gc said:
If I have double variables d, u, v, w, x, does the standard ensure that,
while assigning a value to d as
d=(u+v)-(w+x);
u+v will be added first, then w+x, and then the second result will be
subtracted from the first, and that the compiler will not evaluate the
above expression in any other order?

Just wondering, since addition of doubles is not associative.

The subtraction will be carried out last, but AFAIK the additions
could be done in either order. If you want to force a specific
evaluation order, you could use an extra variable.
 

Russell Hanneken

gc said:
If I have double variables d, u, v, w, x, does the standard ensure that
while assigning a value to d as
d=(u+v)-(w+x);
u+v will be added first then w+x

No, neither the C standard nor the C++ standard guarantees that. (w+x)
might be evaluated before (u+v).
 

Andreas Kahari

The subtraction will be carried out last, but AFAIK the additions
could be done in either order. If you want to force a specific
evaluation order, you could use an extra variable.

double t1, t2;

t1 = u + v;
t2 = w + x;

d = t1 - t2;


AFAIK, there's nothing stopping the compiler from generating
code that calculates t2 before t1.

I'm not sure the finer details of floating-point arithmetic are an
issue here at all.
 

Richard Bos

If I have double variables d, u, v, w, x, does the standard ensure that,
while assigning a value to d as
d=(u+v)-(w+x);
u+v will be added first, then w+x, and then the second result will be
subtracted from the first, and that the compiler will not evaluate the
above expression in any other order?
No.

Just wondering, since addition of doubles is not associative.

True, but it shouldn't matter at all in which order the two
sub-expressions are evaluated. This:

- add u and v; remember the result.
- add w and x; remember the result.
- subtract the second result from the first.

should give the same answer as this:

- add w and x; remember the result.
- add u and v; remember the result.
- subtract the first result from the second.


The problem only starts when you do things like

a=b+c+d;

because then, for floating point objects, (b+c)+d can give very
different results from b+(c+d). And indeed, the Standard prohibits
implementations from optimising that kind of expression too
enthusiastically. For example,

a=x+y+z;
b=w+y+z;

cannot be optimised into

t=y+z;
a=x+t;
b=w+t;

because that might give different results. Contrariwise,

a=y+z+x;
b=y+z+w;

_can_ be optimised to

t=y+z;
a=t+x;
b=t+w;


But your example does not have this problem.

Richard
 

Irrwahn Grausewitz

Grumble said:
The subtraction will be carried out last, but AFAIK the additions
could be done in either order. If you want to force a specific
evaluation order, you could use an extra variable.

Why use an extra variable?

d = u + v;
d -= w + x;

Regards
 

Arthur J. O'Dwyer

The order of operations is not specified, except that (w+x) will be
subtracted from (u+v); the compiler can't decide to subtract w from
(u+v), and then subtract x from the result, *UNLESS* the answer would
be exactly the same (the "as if" rule) -- in which case who cares how
it was computed?
double t1, t2;

t1 = u + v;
t2 = w + x;

d = t1 - t2;

AFAIK, there's nothing stopping the compiler from generating
code that calculates t2 before t1.

Practically, that's true. However, the code *must* behave *AS IF*
t1 was calculated before t2, because the semicolons introduce
sequence points.
I'm not sure the finer details of floating-point arithmetic are an
issue here at all.

On some systems (which may or may not be conforming, I don't know;
I don't follow floating-point stuff :), we can have

double d = 3.14;
double e = 4.72;
double f = d/e;
printf("%g %g\n", f, d/e);

and get two different numbers, because the FPU registers use
slightly wider representations than the actual type 'double'
objects. That's why order-of-operations matters in general,
in practice.
And specifically about associativity, you know that

double d = 1e10, e = 5, f = 5;
double r1, r2;
r1 = (d+e)+f;
r2 = d+(e+f);

can produce (r1 != r2) (for suitable values of 5, of course).
It's plausible that something like that could be at stake
in the OP's code; I don't feel like running through all possible
values of (u,v,w,x) right now. ;-)

HTH,
-Arthur
 

Andreas Kahari

Practically, that's true. However, the code *must* behave *AS IF*
t1 was calculated before t2, because the semicolons introduce
sequence points.

In this example, there will not be any difference to the result
if u+v is evaluated before/after w+x. The OP's concern is
without foundation.

However, if the question was if there was a difference between
"u+v-w+x" and "(u+v)-(w+x)", then the answer is clearly yes,
depending on the magnitude and sign of the involved floating
point numbers.
 

Arthur J. O'Dwyer

In this example, there will not be any difference to the result
if u+v is evaluated before/after w+x. The OP's concern is
without foundation.

But some compilers like to aggressively optimize arithmetic, turning
(u+v)-(w+x) into (u+v-w-x) or similar, and in *that* case there's
definitely a difference!
However, if the question was if there was a difference between
"u+v-w+x" and "(u+v)-(w+x)", then the answer is clearly yes,
 ^^^^^^^       ^^^^^^^^^^^
depending on the magnitude and sign of the involved floating
point numbers.

In particular, if the magnitude of 'x' is non-zero. ;-)

-Arthur
 

gc

Andreas Kahari said:
if u+v is evaluated before/after w+x. The OP's concern is
without foundation.

Sorry for not making myself clear,
I wanted to know whether the compiler is free to evaluate
(u+v)-(w+x);

as u+(v-w)-x; (i.e., add u to the result of v-w and then subtract x)
or in any other order it finds suitable.



Similarly, another question that arises is whether it is assured that
the compiler will not interpret the test (1+x>1) as (x>0); if x is a
double, these two tests need not be the same.
 

gc

(e-mail address removed) (Richard Bos) wrote in message
True, but it shouldn't matter at all in which order the two
sub-expressions are evaluated.

So, if I understand you correctly, the subexpressions have to be
calculated separately, i.e., the compiler cannot treat a+(b+c) and
(a+b)+c the same way for floating point variables a,b,c. That was
something I had doubts about, I wanted to know how much leeway the
compiler had.
This:

- add u and v; remember the result.
- add w and x; remember the result.
- subtract the second result from the first.

should give the same answer as this:

- add w and x; remember the result.
- add u and v; remember the result.
- subtract the first result from the second.


The problem only starts when you do things like

a=b+c+d;

because then, for floating point objects, (b+c)+d can give very
different results from b+(c+d). And indeed, the Standard prohibits
implementations from optimising that kind of expression too
enthusiastically. For example,

a=x+y+z;
b=w+y+z;

cannot be optimised into

t=y+z;
a=x+t;
b=w+t;

because that might give different results. Contrariwise,

a=y+z+x;
b=y+z+w;

_can_ be optimised to

t=y+z;
a=t+x;
b=t+w;


But your example does not have this problem.


So, is x=a+b+c+d; treated the same as x=((a+b)+c)+d; i.e., is the
summation carried out from left to right?
 

Irrwahn Grausewitz

(e-mail address removed) (gc) wrote:

Sorry for not making myself clear,
I wanted to know whether the compiler is free to evaluate
(u+v)-(w+x);

as u+(v-w)-x; (i.e, add u to the result of v-w and then subtract x)
or in any other order it finds suitable.

Hm, I think Arthur already explained it, but anyway:

Yes, the compiler is free to do whatever, as long as the result is the
same as if (w+x) was subtracted from (u+v).

Otherwise parantheses were completely useless in algebraic expressions.
Similarly, another question that arises is whether it is assured that
the compiler will not interpret the test (1+x>1) as (x>0); if x is a
double, these two tests need not be the same.

+ has precedence over >; so, whatever code the compiler generates, it
must behave as if 1+x was evaluated prior to >.

Otherwise the operator precedence rules were completely useless.

Regards
 

Irrwahn Grausewitz

(e-mail address removed) (Richard Bos) wrote:


So, is x=a+b+c+d; considered treated the same as x=((a+b)+c)+d; i.e.,
the summation is carried out from left to right?

Yes: all binary operators except the assignment operators group from
left to right (they are left-associative), so x=a+b+c+d; means
x=((a+b)+c)+d;. (The order in which the operands themselves are
evaluated is a separate, unspecified matter.)

Regards
 

xarax

Irrwahn Grausewitz said:
(e-mail address removed) (gc) wrote: /snip/

+ has precedence over >; so, whatever code the compiler generates, it
must behave as if 1+x was evaluated prior to >.

Otherwise the operator precedence rules were completely useless.

If "x" is an unsigned int with the value 0xffffffff, then you
get different results between ((1+x)>1) versus (x > 0).

OTOH, I cannot think of an integer example where
(x>1) is different from (x>=0).

Integer comparisons with constant zero are usually faster
than non-zero compares, because most machines have fast
compare-with-zero instructions. So a compiler may want
to convert a non-zero compare to an equivalent zero-compare.
 

Joe Wright

xarax said:
If "x" is an unsigned int with the value 0xffffffff, then you
get different results between ((1+x)>1) versus (x > 0).

OTOH, I cannot think of an integer example where
(x>1) is different from (x>=0).
If x is unsigned then (x >= 0) is always 1, and there are two cases,
x == 0 and x == 1, for which (x > 1) == 0.
 

Irrwahn Grausewitz

xarax said:
If "x" is an unsigned int with the value 0xffffffff, then you
get different results between ((1+x)>1) versus (x > 0).

How does this affect operator precedence? A conforming implementation
must generate code that, when executed, behaves /AS IF/ the evaluation
took place according to the requirements of the standard.

You write ((1+x)>1), you get ((1+x)>1).
OTOH, I cannot think of an integer example where
(x>1) is different from (x>=0).

Integer comparisons with constant zero are usually faster
than non-zero compares, because most machines have fast
compare-with-zero instructions. So a compiler may want
to convert a non-zero compare to an equivalent zero-compare.

A conforming compiler may want (if a compiler can have desires at all)
to grab the executable code from the rear side of a cornflakes box, as
long as the results meet the requirements imposed by the standard.

Regards
 

CBFalconer

Irrwahn said:
(e-mail address removed) (gc) wrote:



Hm, I think Arthur already exlained it, but anyway:

Yes, the compiler is free to do whatever, as long as the result
is the same as if (w+x) was subtracted from (u+v).

Otherwise parantheses were completely useless in algebraic
expressions.

Not so, especially when dealing with floating point or possible
overflows. The compiler is only free to rearrange things when it
can detect that the results are identical. Note that it is
perfectly allowable for an intermediate result to cause an
overflow, even though the overall expression does not.

Similarly an expression such as "bigvalue - 10 * littlevalue" is
not the same as

"bigvalue - littlevalue - littlevalue ..... - littlevalue"

which _MIGHT_ well totally discard any effect from littlevalue.
The cure is to use parentheses, as in:

"bigvalue - (littlevalue + littlevalue ..... + littlevalue)"
 

Arthur J. O'Dwyer

(s/were/would be/ in both this and the snipped part, BTW.
And s/paran/paren/ .)

Not so, especially when dealing with floating point or possible
overflows. The compiler is only free to rearrange things when it
can detect that the results are identical.

Yes, that's exactly what I and Irrwahn said. :) The "as-if"
rule allows the compiler to produce whatever executable code it
likes, as long as the result is the same as if (w+x) had been
subtracted from (u+v).

Note that it is
perfectly allowable for an intermediate result to cause an
overflow, even though the overall expression does not.

True. However, if the compiler can tell ahead-of-time that
such an overflow will not occur, then it's free to optimize
in that way. Heck, it can use the built-in "subtract A+B from
C+D" FPU instruction, if it happens to have one.

Oh, and something else for the OP's question: Many compilers
will have a switch that allows you to turn off (or on) this
strict compliance with ISO C as regards floating-point "as if"s.
It is sometimes much faster, and occasionally much more
accurate (!), to produce answers that *don't* follow the C
standard. Consider my earlier example of

double d = 3.14, e = 2.55;
double f = d/e;
printf("%d\n", d/e == f);

On some compilers for x86, that will print 0 rather than 1 (for
appropriate values of 3.14 and 2.55), because the FPU registers are
wider than the variables on the program's stack.
So storing d/e into f loses precision, which is reflected in
the output. To accurately reflect the standard (unless, as
is likely, the standard leaves some loopholes for this sort
of thing), you'd need to add an instruction to store d/e into
a regular 'double' before the comparison, and that would slow
down the program. So many compilers let you turn on and off
some optimizations related to this sort of thing.

HTH,
-Arthur
 

Irrwahn Grausewitz

CBFalconer said:
Not so, especially when dealing with floating point or possible
overflows. The compiler is only free to rearrange things when it
can detect that the results are identical.

Well, that was sort of my point: if an implementation chooses to
evaluate the expression (u+v)-(w+x) like u+(v-w)-x, without assuring
that the result will be the expected one, why should one bother to
write parentheses at all, as they would be rendered useless by the
(flawed) implementation.

<absolutely correct notes snipped>

Regards
 

P.J. Plauger

Well, that was sort of my point: if an implementation chooses to
evaluate the expression (u+v)-(w+x) like u+(v-w)-x, without assuring
that the result will be the expected one, why should one bother to
write parentheses at all, as they would be rendered useless by the
(flawed) implementation.

If a compiler generates incorrect code, why should one bother to write
correct code at all, as it is rendered useless by the (flawed)
implementation?

And your point is...?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
 
