Floating point to integer casting


bartc

Tim Rentsch said:
Richard Heathfield said:
On pg 45 of K&R, the authors write that:

float to int causes truncation of any fractional part.
The = operator takes the value of its right operand, which is 1.0F
(float), and stores this value in its left operand. Since the left
operand is an int, a conversion is performed on the value yielded by
the operand, to coerce that value into the proper type. [snip]


A minor point -- assignment _always_ performs a conversion,
whether the types of the two sides are the same or different.


So what conversion is performed when assigning an int value to an int
destination of the same width?
 
 

Nick Keighley

On pg 45 of K&R, the authors write that:

float to int causes truncation of any fractional part.
The = operator takes the value of its right operand, which is 1.0F
(float), and stores this value in its left operand. Since the left
operand is an int, a conversion is performed on the value yielded by
the operand, to coerce that value into the proper type. [snip]

A minor point -- assignment _always_ performs a conversion,
whether the types of the two sides are the same or different.

So what conversion is performed when assigning an int value to an int
destination of the same width?


this reminds me of the maths people who tell you a quadratic equation
always has two roots (solutions). And I say "but there's only one
answer in such-and-such a case", and they reply "ah yes, in that case
the two roots are actually identical".
 

James Kuyper

6.5.16.1p2 is pretty clear about this: "the value of the right operand
is converted to the type of the assignment expression" - there's nothing
conditional about that statement.

bartc said:
So what conversion is performed when assigning an int value to an int
destination of the same width?

int=>int, an identity conversion. It's covered by 6.3p2: "Conversion of
an operand value to a compatible type causes no change to the value or
the representation." 'int' is compatible with 'int'.
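
To make the wording concrete, here is a minimal sketch (the values are
made up for illustration):

#include <stdio.h>

int main(void)
{
    int a = 42;
    /* 6.5.16.1p2: the value of 'a' is converted to the type of the
       assignment expression; since both sides are int, 6.3p2 makes
       this an identity conversion that changes nothing. */
    int b = a;
    printf("%d\n", b);   /* prints 42 */
    return 0;
}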
 

James Kuyper

Richard said:
No. Here's what's going on in int b = a:

The = operator takes the value of its right operand, which is 1.0F
(float), and stores this value in its left operand. Since the left
operand is an int, a conversion is performed on the value yielded
by the operand, to coerce that value into the proper type.
[snip]

A minor point -- assignment _always_ performs a conversion,
whether the types of the two sides are the same or different.


A minor point - what assignment?


You're right, of course - this is initialization rather than assignment.
However, his comment could be taken as simply a response to your
comments about the "= operator", without reference to the fact that the
code you were talking about did not contain an "= operator".

Luckily for both of you, "the same type constraints and conversions as
for simple assignment apply," (6.7.8p11).
 

Seebs

Ted DeLoggio said:
In a case where performance is critical, is there a way to be sure that
the compiler does not perform a conversion for same-type
assignment? ...or would that be an implementation detail?

Presumably a quality-of-implementation issue. In practice, I doubt there
have been any compilers, ever, which generated code to perform such
"conversions".

-s
 

James Kuyper

Ted said:
In a case where performance is critical, is there a way to be sure that
the compiler does not perform a conversion for same-type
assignment?

The standard requires that such a conversion must change neither the
value nor the representation, so it can be fully implemented by not
generating any code whatsoever. It's not clear to me that it's even
meaningful to talk about not performing a no-op conversion.
 

James Kuyper

Anand said:
Is there any context where this subtlety (viz., "conversion" is
performed even when the types of the operands on either side of = are
the same) is important?

I think that essentially 100% of the importance of this subtlety lies in
the simplification it allows in the standard's description of what is
required to happen.
 

Tim Rentsch

Ted DeLoggio said:
In a case where performance is critical, is there a way to be sure that
the compiler does not perform a conversion for same-type
assignment? ...or would that be an implementation detail?

For the most part such conversions don't change anything
and so generate no additional code. But see also my
other replies.
 

Tim Rentsch

Seebs said:
Presumably a quality-of-implementation issue. In practice, I doubt there
have been any compilers, ever, which generated code to perform such
"conversions".

Except in the case of floating point types, when such
conversions actually can make a difference. Notable
because some well-known compilers (gcc is the example
I'm thinking of) sometimes get this wrong.
 

Tim Rentsch

James Kuyper said:
The standard requires that such a conversion must change neither the
value nor the representation, so it can be fully implemented by not
generating any code whatsoever. It's not clear to me that it's even
meaningful to talk about not performing a no-op conversion.

Notwithstanding the assurances of 6.3p2, a same-type conversion
actually can result in a different value when the types involved
are floating-point types. I neglected to mention this section
earlier; let me correct that now -- 6.3.1.5p2. It also applies to
the complex types.
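
As a minimal sketch of where this bites, assuming an implementation
that evaluates double arithmetic with excess precision (x87-style,
FLT_EVAL_METHOD == 2) -- the values are chosen for illustration, and
whether the difference is observable depends on compiler flags (gcc
historically needed options like -ffloat-store or
-fexcess-precision=standard to honor the cast, which I believe is the
misbehavior alluded to above):

#include <stdio.h>

int main(void)
{
    /* 1 + 2^-60 is exact in an 80-bit format but rounds to 1.0
       in 64-bit double. */
    volatile double b = 1.0, c = 0x1p-60;

    /* The same-type cast must round the sum to double's range and
       precision (6.3.1.5p2 and footnote 52), even though no type
       changes. */
    if ((double)(b + c) == 1.0)
        puts("cast discarded the excess precision");

    /* Without the cast, the comparison may see the wider result. */
    if ((b + c) != 1.0)
        puts("uncast sum kept excess precision");

    return 0;
}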
 

Seebs

Tim Rentsch said:
Except in the case of floating point types, when such
conversions actually can make a difference. Notable
because some well-known compilers (gcc is the example
I'm thinking of) sometimes get this wrong.

Is this the old hassle with intermediate representations?

-s
 

Tim Rentsch

Anand Hariharan said:
Is there any context where this subtlety (viz., "conversion" is
performed even when the types of the operands on either side of = are
the same) is important? If so, is it important to the implementor or
even to the programmer? How/Why?

Yes there is, when floating-point types are involved.
Conversions in such cases are required to discard extra
precision and range (see 6.3.1.5p2). For example, in

   double a, b, c;

   ...

   a = b + c;

the plus operation can be computed in greater precision than
(double), but upon being assigned the value must be squeezed
back into a (double) again. For developers, this can matter
when deciding when to simplify expressions. For example:

   /* 1 */
   t0 = b * c;
   t1 = d * e;
   a = t0 + t1;

   /* 2 */
   a = b * c + d * e;

There's a good chance the result in /*1*/ will be
different from the result in /*2*/.

For implementors, it's important to remember to follow the
requirements, since it can be tempting not to for reasons of
performance and/or optimization. I believe gcc gets this
wrong in some cases, notably on the x86, where the processor
instruction set makes it pretty inconvenient (or so I've
heard) to do what the Standard requires.
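
A runnable sketch of the /* 1 */ versus /* 2 */ point, with made-up
values chosen so that b * c needs more than double's 53 bits; whether
a1 and a2 actually differ depends on FLT_EVAL_METHOD, the hardware,
and compiler flags:

#include <stdio.h>

int main(void)
{
    /* b * c == 1 + 2^-29 + 2^-60 exactly; the 2^-60 term does not
       fit in a double but does fit in an 80-bit intermediate. */
    volatile double b = 1.0 + 0x1p-30, c = 1.0 + 0x1p-30;
    volatile double d = -(1.0 + 0x1p-29), e = 1.0;

    /* 1: each assignment squeezes its result back into double, so
       t0 drops the 2^-60 term and a1 comes out exactly 0. */
    double t0 = b * c;
    double t1 = d * e;
    double a1 = t0 + t1;

    /* 2: with excess precision the full product can stay in a
       register, making a2 == 2^-60 rather than 0. */
    double a2 = b * c + d * e;

    printf("a1 = %g\na2 = %g\n", a1, a2);
    return 0;
}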
 

Tim Rentsch

Richard Heathfield said:
Richard Heathfield said:
In <[email protected]>, Albert wrote:

On pg 45 of K&R, the authors write that:

float to int causes truncation of any fractional part.

Right.


Then shouldn't:

#include <stdio.h>

int main(void)
{
    float a = 1.0000;
    int b = a;
    printf("%d %d\n", a, b);
    return 0;
}

give 1 1 as the output instead of 0 and garbage?

No. Here's what's going on in int b = a:

The = operator takes the value of its right operand, which is 1.0F
(float), and stores this value in its left operand. Since the left
operand is an int, a conversion is performed on the value yielded
by the operand, to coerce that value into the proper type.
[snip]


A minor point -- assignment _always_ performs a conversion,
whether the types of the two sides are the same or different.


A minor point - what assignment?



What James Kuyper said (thanks James!). I was in fact
responding to the mentions of '= operator' and 'operand'
(both left and right), which seem to be talking about
assignment.
 

Tim Rentsch

Seebs said:
Is this the old hassle with intermediate representations?

Assuming I understand your question correctly, the
answer is yes. C requires that intermediate results
be converted to the precision of the type involved
upon assignment. I believe this requirement is
there partly (mostly?) to conform to rules set for
IEEE 754 floating point. (Disclaimer: I know very
little about IEEE 754; my comment here is based
on some long-ago traded emails with the committee
chairman for IEEE 754.)
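
C99 exposes the evaluation format through FLT_EVAL_METHOD in
<float.h>, so a quick check of what an implementation claims to do is:

#include <float.h>
#include <stdio.h>

int main(void)
{
    /*  0: operations use their own type's range and precision
        1: float and double operations are evaluated as double
        2: all operations are evaluated as long double (x87-style)
       -1: indeterminable */
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}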
 

chad

Tim Rentsch said:
Yes there is, when floating-point types are involved.
Conversions in such cases are required to discard extra
precision and range (see 6.3.1.5p2).  For example, in

   double a, b, c;

   ...

   a = b + c;

the plus operation can be computed in greater precision than
(double), but upon being assigned the value must be squeezed
back into a (double) again.  For developers, this can matter
when deciding when to simplify expressions.  For example:

Okay, I'm going to take the bait here. How can the plus operation be
computed with greater precision than double?
 

bartc

chad said:
Okay, I'm going to take the bait here. How can the plus operation be
computed with greater precision than double?

(Example)

Some floating-point hardware works internally with 80 bits while the
precision of double is 64 bits, which can lead to inconsistencies when
intermediate 80-bit results are written to memory as 64 bits and loaded
again, compared with keeping the intermediate values in registers.
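
A small sketch of that store/reload effect (made-up values; on
hardware without extended intermediates, or with flags such as gcc's
-ffloat-store, the two sides compare equal):

#include <stdio.h>

int main(void)
{
    volatile double b = 1.0 + 0x1p-30, c = 1.0 + 0x1p-30;

    /* The volatile store forces the product through a 64-bit memory
       slot, discarding any extra bits an 80-bit register held. */
    volatile double stored = b * c;

    if (stored == b * c)
        puts("store/reload matched the register result");
    else
        puts("store/reload lost bits the register still had");

    return 0;
}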
 

Tim Rentsch

chad said:
Okay, I'm going to take the bait here. How can the plus operation be
computed with greater precision than double?

Do you mean how can it happen, or when will it ever make
a difference? The answer for how it can happen is,
no matter what the range and precision are for (double)
(or (long double), for that matter), the implementation
is allowed to use greater range and precision for the
results of operations. So plus could be carried out
with 1024 bits of precision, say, or with more exponent
bits to give a greater range (or both). Extra bits
may be relevant because floating-point numbers might
be in different ranges (i.e., have different exponents).

As to when will it ever make a difference, for this
simple example I think it depends on rounding modes.
Obviously for more complicated expressions, e.g.

a = b + c + d + e + f + g;

some extra precision could make a difference due to
carries when adding some small numbers and some bigger
ones. Extra range could also matter when adding
some positive numbers and some negative ones,
protecting against overflows in intermediate results.
I'm sure there must be other examples, and probably
better ones, but the ones here are just the first
ones that popped into my head.
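
A sketch of the carries point with made-up values: 2^-54 is half an
ulp of 1.0, so in pure double arithmetic each addition below rounds
back to 1.0, while a wider intermediate format can accumulate the
small terms (again, the outcome depends on FLT_EVAL_METHOD and
compiler flags):

#include <stdio.h>

int main(void)
{
    volatile double big = 1.0, tiny = 0x1p-54;

    /* One expression: excess precision may carry all three tiny
       terms, and the final assignment then rounds 1 + 3*2^-54 up
       to 1 + 2^-52. */
    double sum = big + tiny + tiny + tiny;

    /* Step by step: each assignment is required to round back to
       double, so the tiny terms are lost one at a time. */
    double s = big;
    s = s + tiny;
    s = s + tiny;
    s = s + tiny;

    printf("one expression: %.17g\nstep by step:   %.17g\n", sum, s);
    return 0;
}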
 

Morris Keesan

bartc said:
(Example)

Some floating point hardware works internally using 80-bits, when the
precision of double is 64-bits, which can lead to inconsistencies when
intermediate 80-bit results are written to memory as 64-bits then loaded
again, compared with keeping the intermediate values in the registers.

I was going to say that the expression b + c has type (double), but after
looking in the standard for confirmation of this, I'm confused:

6.3.1.8 Usual arithmetic conversions

"Unless explicitly stated otherwise, the common real type is also
the corresponding real type of the result"
[so the result of b + c would have type double -- MK]

but I'm confused by paragraph 2 and its footnote, which say

"The values of floating operands and of the results of floating
expressions may be represented in greater precision and range
than that required by the type; the types are not changed thereby.
52)"
and "52) The cast and assignment operators are still required to perform
their specified conversions as described in 6.3.1.4 and 6.3.1.5."

What's meant by this? If "the types are not changed thereby", does this
mean that (b + c) has type double, or not? And if the type is not changed,
what conversion would be necessary to do the assignment to a?

Furthermore, if the result of a floating expression can be "represented
in greater precision and range" than that required, what does this say
about sizeof(b + c)? What can we predict about the value of the expression

sizeof(b + c) == sizeof(double)

in conforming implementations? Can a strictly conforming program rely on
this having the value 1?

Or is this "greater range and precision" clause merely giving
implementations permission to represent intermediate results in ways
that could give different results for more complicated floating
expressions, e.g. potentially giving different results for

((double)(b + c)) - ((double)(e * f))
vs.
(b + c) - (e * f)

where b, c, e, and f are all doubles?
 
