Float to int type casting?


Enekajmer

Hi,

1 int main()
2 {
3 float a = 17.5;
4 printf("%d\n", a);
5 printf("%d\n", *(int *)&a);
6 return 0;
7 }

The output is:

0
1099694080

I could not understand why the first printf prints 0 and the other
1099694080...

I think there is sth different from simple type casting. Is it an
undefined behaviour?
Please let me understand.

Regards.
 

Artie Gold

Enekajmer said:
Hi,

1 int main()
2 {
3 float a = 17.5;
4 printf("%d\n", a);

Here you're trying to print a float value with a conversion specifier
for int. The results are undefined.
5 printf("%d\n", *(int *)&a);

Here, you're reinterpreting the representation of the float. The results
are implementation defined.
6 return 0;
7 }

The output is:

0
1099694080

I could not understand why the first printf prints 0 and the other
,1099694080...

I think there is sth different from simple type casting.Is it an
undefined behaviour?

[the word is `something']

See above.

HTH,
--ag
 

Enekajmer

Artie said:
Enekajmer said:
Hi,

1 int main()
2 {
3 float a = 17.5;
4 printf("%d\n", a);

Here you're trying to print a float value with a conversion specifier
for int. The results are undefined.
5 printf("%d\n", *(int *)&a);

Here, you're reinterpreting the representation of the float. The results
are implementation defined.
6 return 0;
7 }

The output is:

0
1099694080

I could not understand why the first printf prints 0 and the other
,1099694080...

I think there is sth different from simple type casting.Is it an
undefined behaviour?

[the word is `something']

See above.

HTH,
--ag
--
Artie Gold -- Austin, Texas
http://goldsays.blogspot.com
http://www.cafepress.com/goldsays
"If you have nothing to hide, you're not trying!"

Thanks, I learnt the types of behaviour, but I want to learn the causes
in more detail. To give an example, there is no problem with the code
snippet below:
....
float a=17.5;
int b=a; (type casting from float to int)
printf("b-->%d", b); // normally, this prints "b-->17"
....

According to this example, I guessed the output of the code below would
be 17 too, before I compiled. In contrast, its output is 0...

printf("a-->%d", a); // a is of type float


So, what are the causes of this case? The IEEE-754?

Regards.
 

Robin Haigh

Artie Gold said:
Here you're trying to print a float value with a conversion specifier
for int. The results are undefined.

Actually, you're trying to print a double value, because the float is
promoted when it's not governed by a formal parameter in a prototype.

If it weren't for that, you'd probably get the same output from both
printfs.
 

Vladimir S. Oka

Enekajmer said:
Thanks i learnt the types of behaviours but i want to learn causes in
more details.To give an example, there is no problem for the snippet
code below:
...
float a=17.5;
int b=a; (type casting from float to int)

This is, as you rightly say, a cast. It /converts/ a float value in `a`
to its closest representation as int. Hence, (int)17.5 == 17.
printf("b-->%d",b); // as normally, this prints "b-->17"
...

according to this example i guessed the output of the code below as 17
too before i compiled. As contrast its output is 0...

printf("a-->%d",a) //a is in float type

Artie led you off the track here by saying "conversion specifier". As a
matter of fact "%d" is a "format specifier". It tells printf()
to /interpret/ whatever is stored in a corresponding parameter (in your
case `a`) as an int. This means that printf() blindly takes whatever is
the bit representation of `a` and interprets these bits as if they were
an `int`. Them actually being a `float`, the result is undefined. In
your case, it happened to be 0, but the effect could have been much
worse. Beware the Undefined Behaviour monster -- it may make demons
come out of your nose! ;-)
So, what are the causes of this case? The IEEE-754?

No, see paragraph above...

Cheers

Vladimir
 

Jordan Abel

Hi,

1 int main()
2 {
3 float a = 17.5;
4 printf("%d\n", a);
5 printf("%d\n", *(int *)&a);
6 return 0;
7 }

The output is:

0
1099694080

I could not understand why the first printf prints 0 and the other
,1099694080...

I think there is sth different from simple type casting.Is it an
undefined behaviour?

Yes. Both printf lines invoke undefined behavior.
Please let me understand.

hint: a is promoted to a double when passed to printf as a float.
 

CBFalconer

Robin said:
Actually, you're trying to print a double value, because the float is
promoted when it's not governed by a formal parameter in a prototype.

If it weren't for that, you'd probably get the same output from both
printfs.

It's even worse than that - he is using a variadic function without
a prototype, which makes everything undefined behaviour.

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>
 

Flash Gordon

Vladimir said:
This is, as you rightly say, a cast. It /converts/ a float value in `a`
to it's closest representation as int. Hence, (int)17.5 == 17.

No, it's not a cast. It is, on the other hand, a conversion.
Artie led you off the track here by saying "conversion specifier". As a
matter of fact "%d" is a "format specifier". It tells printf()
to /interpret/ whatever is stored in a corresponding parameter (in your
case `a`) as an int. This means that printf() blindly takes whatever is
the bit representation of `a` and interprets these bits as if they were
an `int`. Them actually being a `float`, the result is undefined. In
your case, it happened to be 0, but the effect could have been much
worse. Beware the Undefined behaviour monster -- it may make demons
come out of your nose! ;-)

Actually, it does not take the bit pattern and reinterpret it. It
assumes that the parameter is an int and tries to read that and this is
undefined behaviour (i.e. anything can happen) because the standard says
it is.

There are calling conventions where even varargs parameters are passed
in appropriate registers, so in this case it could read whatever the
first integer register happens to contain which could have absolutely
nothing to do with the contents of a. The specific implementation I'm
thinking of (a real one) can actually cause even more fun. If the
parameter is a double it gets passed in a floating point register, but
also converted to an int and passed in an integer register, so you would
actually get the result the OP expected. However, when you pass an int
that is *only* passed in an integer register, so if you try to print an
int using the format specifier for a double, it will try to print
whatever happened to be in the floating point register.

The implementation happens to be the latest version of Visual Studio
when given appropriate options.

The moral of this is *never* try to say what C implementations do when
you invoke undefined behaviour, for its ways are stranger than your
imagination can conceive.

It is reasonable to wonder why something is undefined, and sometimes
useful to know how undefined behaviour can manifest, but only if you
accept that literally *anything* can cause a different behaviour,
including it deciding to show a picture of your boss in the nude.
No, see paragraph above...

Indeed.
 

Artie Gold

Vladimir said:
This is, as you rightly say, a cast. It /converts/ a float value in `a`
to it's closest representation as int. Hence, (int)17.5 == 17.




Artie led you off the track here by saying "conversion specifier". As a
matter of fact "%d" is a "format specifier". It tells printf()

A slip of the fingers. Thanks for the correction.
--ag
 

Jack Klein

Here you're trying to print a float value with a conversion specifier
for int. The results are undefined.


Here, you're reinterpreting the representation of the float. The results
are implementation defined.

[snip]

No, undefined. There are only two defined ways to look at the bits of
an object of type float, those are using an lvalue of type float, or
using an array of unsigned characters the size of a float.
 

Jack Klein

This is, as you rightly say, a cast. It /converts/ a float value in `a`
to it's closest representation as int. Hence, (int)17.5 == 17.


Artie led you off the track here by saying "conversion specifier". As a
matter of fact "%d" is a "format specifier". It tells printf()

[snip]

Actually, Artie was mostly correct. Here's some exact wording copied
from the C standard, paragraphs 3 and 4 of "7.19.6.1 The fprintf
function":

====
3 The format shall be a multibyte character sequence, beginning and
ending in its initial shift state. The format is composed of zero or
more directives: ordinary multibyte characters (not %), which are
copied unchanged to the output stream; and conversion specifications,
each of which results in fetching zero or more subsequent arguments,
converting them, if applicable, according to the corresponding
conversion specifier, and then writing the result to the output
stream.

4 Each conversion specification is introduced by the character %.
After the %, the following appear in sequence:

— Zero or more flags (in any order) that modify the meaning of the
conversion specification.

— An optional minimum field width. If the converted value has fewer
characters than the field width, it is padded with spaces (by default)
on the left (or right, if the left adjustment flag, described later,
has been given) to the field width. The field width takes the form of
an asterisk * (described later) or a decimal integer.

— An optional precision that gives the minimum number of digits to
appear for the d, i, o, u, x, and X conversions, the number of digits
to appear after the decimal-point character for a, A, e, E, f, and F
conversions, the maximum number of significant digits for the g and G
conversions, or the maximum number of bytes to be written for s
conversions. The precision takes the form of a period (.) followed
either by an asterisk * (described later) or by an optional decimal
integer; if only the period is specified, the precision is taken as
zero. If a precision appears with any other conversion specifier, the
behavior is undefined.

— An optional length modifier that specifies the size of the argument.

— A conversion specifier character that specifies the type of
conversion to be applied.
====

Notice specifically the final sentence of paragraph 4.

In C standard terminology, "%d" is a "conversion specification" of
which the 'd' is the conversion specifier.
 

Keith Thompson

Vladimir S. Oka said:
This is, as you rightly say, a cast. It /converts/ a float value in `a`
to it's closest representation as int. Hence, (int)17.5 == 17.
[...]

No, it's not a cast. A cast is an explicit unary operator consisting
of a type name in parentheses; it specifies a conversion. So:

int b = a;

involves a conversion, but no cast, whereas

int b = (int)a;

has a cast that specifies the same conversion. (Thus the phrase
"implicit cast" is meaningless.)
 

Vladimir S. Oka

Jack said:
This is, as you rightly say, a cast. It /converts/ a float value in
`a` to it's closest representation as int. Hence, (int)17.5 == 17.


Artie led you off the track here by saying "conversion specifier". As
a matter of fact "%d" is a "format specifier". It tells printf()

[snip]

Actually, Artie was mostly correct. Here's some exact wording copied
from the C standard, paragraphs 3 and 4 of "7.19.6.1 The fprintf
function":

— A conversion specifier character that specifies the type of
conversion to be applied.
====

Notice specifically the final sentence of paragraph 4.

In C standard terminology, "%d" is a "conversion specification" of
which the 'd' is the conversion specifier.

I stand corrected, apologies where due. I guess too much FORTRAN at some
point is to blame. ;-)

I still feel that "conversion" can be misleading, as some may expect
that the parameter values will be /converted/ to what one specifies as
their display format. This was, IMHO, clearly what confused the OP.
But, who's to argue with the Gospel. ;-)
 

Vladimir S. Oka

Keith said:
Vladimir S. Oka said:
This is, as you rightly say, a cast. It /converts/ a float value in
`a` to it's closest representation as int. Hence, (int)17.5 == 17.
[...]

No, it's not a cast. A cast is an explicit unary operator consisting
of a type name in parentheses; it specifies a conversion. So:

int b = a;

involves a conversion, but no cast, whereas

int b = (int)a;

has a cast that specifies the same conversion. (Thus the phrase
"implicit cast" is meaningless.)

You are absolutely right.

I can't really explain this, apart maybe from the fact that I had the
example I put at the very end (which was meant to say something
different) in my mind too early in writing the whole thing. This
possibly also explains the conversion bit I was talking about in the
middle. Quite a mess, really... :-(
 

Keith Thompson

Jack Klein said:
On Sun, 5 Feb 2006 19:27:51 +0000 (UTC), "Vladimir S. Oka"
Artie led you off the track here by saying "conversion specifier". As a
matter of fact "%d" is a "format specifier". It tells printf()

[snip]

Actually, Artie was mostly correct. Here's some exact wording copied
from the C standard, paragraphs 3 and 4 of "7.19.6.1 The fprintf
function": [snip]
Notice specifically the final sentence of paragraph 4.

In C standard terminology, "%d" is a "conversion specification" of
which the 'd' is the conversion specifier.

And, just to be clear, the standard's use of the phrase "conversion
specification" does not refer to the same kind of "conversion"
discussed in section 6.3 of the standard (such as the conversion
implied by a cast expression), though it's loosely similar.
 

John Bode

Enekajmer said:
Hi,

1 int main()
2 {
3 float a = 17.5;
4 printf("%d\n", a);

This is not a cast. If you use the %d specifier, printf expects the
corresponding argument to be of type int. a is not type int, so the
behavior is undefined. Since it's undefined, there's no point in
trying to explain *why* it prints 0 instead of some other value.
5 printf("%d\n", *(int *)&a);

This is a cast, but it isn't doing what you think it is. This tells
printf to interpret the bit pattern in a *as if* it were an int; it
does not convert the value of a to the integer equivalent. To do that,
you would write

printf("%d\n", (int) a);

Integer and floating-point types usually have completely different
representations. One reasonable hypothetical 32-bit representation for
17.5 would be

00111101100011000000000000000000

or

0x3d8c0000

where bit 31 is the sign bit, bits 30-23 are the exponent, and bits
22-0 are the mantissa, which represents values in the range [0.5, 1).
The above bit string represents +0.546875 * 2^5 == 17.5 in this format.
Compare this to a two's complement, 32-bit, signed integer
representation of 17:

00000000000000000000000000010001

or

0x00000011

Hopefully you can see why line 5 gave you such an obnoxious number. It
was taking the bit string representing 17.5 in your machine's floating
point format (0x418c0000) and interpreting it as an int, instead of
*converting* that value to the int equivalent (0x00000011).
 

SM Ryan

# Hi,
#
# 1 int main()
# 2 {
# 3 float a = 17.5;
# 4 printf("%d\n", a);
# 5 printf("%d\n", *(int *)&a);

(int)a is sufficient, and will also work if you
cannot take the address of a.

# 6 return 0;
# 7 }
#
# The output is:
#
# 0
# 1099694080
#
# I could not understand why the first printf prints 0 and the other
# ,1099694080...

The language does not require the types of arguments to be
made available from the caller to the callee. If you prototype
a function, the compiler can check the call against the
prototype and convert arguments to the expected types, but a
varargs list cannot be fully prototyped and therefore cannot
be generally checked. The caller makes a collection of bytes
available and assumes the callee interprets those bytes in the
same manner as the caller does.

In the first case the caller is making the bytes of a double
available but the callee is looking for the bytes of an int.
Wackiness ensues.

In the second case both caller and callee agree the bytes
are an int.

In printf and related functions, the callee deduces argument
types from the format string. Some compilers use what they know
of the callee to warn about argument mismatches, but that is
not generally possible with the current state of C.
 
