(Example)
Some floating-point hardware works internally with 80 bits, while the
precision of double is 64 bits. This can lead to inconsistencies when an
intermediate 80-bit result is written to memory as 64 bits and then loaded
again, compared with keeping the intermediate value in a register.
I was going to say that the expression b + c has type (double), but after
looking in the standard for confirmation of this, I'm confused:
6.3.1.8 Usual arithmetic conversions
"Unless explicitly stated otherwise, the common real type is also
the corresponding real type of the result"
[so the result of b + c would have type double -- MK]
but I'm confused by paragraph 2 and its footnote, which say
"The values of floating operands and of the results of floating
expressions may be represented in greater precision and range
than that required by the type; the types are not changed thereby. 52)"
and "52) The cast and assignment operators are still required to perform
their specified conversions as described in 6.3.1.4 and 6.3.1.5."
What's meant by this? If "the types are not changed thereby", does this
mean that (b + c) has type double, or not? And if the type is not changed,
what conversion would be necessary to do the assignment to a?
Furthermore, if the result of a floating expression can be "represented
in greater precision and range" than that required, what does this say
about sizeof(b + c)? What can we predict about the value of the expression
sizeof(b + c) == sizeof(double)
in conforming implementations? Can a strictly conforming program rely on
this having the value 1?
Or is this "greater range and precision" clause merely giving
implementations permission to represent intermediate results in ways
that could give different results for more complicated floating
expressions, e.g. potentially giving different results for
((double)(b + c)) - ((double)(e * f))
vs.
(b + c) - (e * f)
where b, c, e, and f are all doubles?