chad said:
Why does b print some garbage value? I mean, the type of b is int,
and the second %d in printf() is also an int. Is this because he tried
to pass a float as an int in the first argument to printf()?
Given that a is a float and b is an int, the behavior of

    printf("%d %d\n", a, b);

is undefined. That doesn't mean the implementation prints some
garbage value for a while still being required to get b and
everything else right. Once undefined behavior has occurred, all
bets are off; the standard no longer says *anything* about how the
program behaves.
As far as the standard is concerned, that's all that there is to be
said about it. But examining the behavior on some particular system
can be useful as an *example* of how undefined behavior can manifest
itself.
Here's the original program, with the variable names changed for
clarity:
    #include <stdio.h>

    int main(void)
    {
        float f = 1.0000;
        int i = f;
        printf("%d %d\n", f, i); /* undefined behavior */
        return 0;
    }
On my system, I get a compile-time warning:
c.c:7: warning: format `%d' expects type `int', but argument 2 has type `double'
The compiler isn't required to warn about this, but it's kind enough
to do so anyway.
The run-time output is:
0 1072693248
Here's what I think is happening.
On the call to printf, the float value of f is promoted (implicitly
converted) to double. The int value of i is passed as an int. On my
system, float is 4 bytes, double is 8 bytes, and int is 4 bytes.
Arguments to printf are (I think) passed consecutively on the stack;
printf then uses the <stdarg.h> mechanism, or something very much like
it, to extract the argument values as directed by the format string.
The call passes 12 bytes of data, 8 for the double value (promoted from
float) and 4 for the int value. The "%d %d\n" format instructs printf
to extract 8 bytes of data, 4 for each of two expected int arguments.
The double representation of 1.0, when interpreted as a pair of 32-bit
ints on a little-endian system, happens to look like (0, 1072693248).
(Note that 1072693248 is 0x3ff00000 in hex; the IEEE 754 64-bit double
representation of 1.0 is 0x3ff0000000000000, so the low-order 32 bits
are all zero and the high-order 32 bits are 0x3ff00000.) The passed
int value of i is quietly ignored, because it's beyond the 8 bytes of
data specified by the format string.
When I modify the program as follows:
    #include <stdio.h>

    int main(void)
    {
        float f = 1.0000;
        int i = 42;
        printf("%d %d %d\n", f, i); /* undefined behavior */
        return 0;
    }
I get even more warnings:
c2.c:7: warning: format `%d' expects type `int', but argument 2 has type `double'
c2.c:7: warning: too few arguments for format
and the following run-time output:
0 1072693248 42
I passed 12 bytes of data (a double and an int), and the format
string specifies 12 bytes of data (3 ints). So, at run time,
everything is, in some strange sense, consistent, and we see the
actual value of i.
Do not, I repeat, *DO NOT*, use techniques like this for real-world
code. I wrote this purely for the purpose of exploring the internal
workings of printf on my system. It could work entirely differently
on a different system. For example, doubles and ints might be
passed as arguments using different mechanisms.
If you really want to examine the internal contents of a
floating-point object, there are ways to do that; the safest is to
alias it with an array of unsigned char.
The format string in a call to printf should *always* match the
(promoted) arguments. If it doesn't, you might or might not get
a warning from your compiler, and the actual behavior could be
almost literally anything. The behavior I've shown on my system
is relatively benign, and it still took several paragraphs just to
describe it.
Understanding this kind of system-specific behavior can be very
useful for debugging. For example, if you see an output value
of 1072693248 where you expected a small int value, it's likely
that something other than an integer is somehow being interpreted
as if it were an integer. If a pointer value, printed with
"%p", looks like "0x00000003", it's probably a small integer being
misinterpreted as a pointer value.
But the point of such analysis is to fix the problem; it's almost
never a good idea to take advantage of it.
Here's another example of undefined behavior:
    #include <stdio.h>

    int main(void)
    {
        double f = 0.0;
        long n = f;
        printf("%d %d\n", f, n); /* undefined behavior */
        return 0;
    }
Again, I've lied to printf about the arguments I'm passing to it, and
any behavior is permitted. But here's the output I get on my system:
0 0
Types double and long just happen to be the same size, and 0.0 and 0L
just happen to have the same representation, all-bits-zero. So the
output *looks* correct even though it's garbage and even though it
might be different on a different system.
This is the worst way that undefined behavior can manifest itself.
There's a serious bug in the program, but it's very difficult to
detect. It will probably show itself at the worst possible time, when
the values of f and n have been changed, or when the program is run on
the intended target system rather than on my test system. Fortunately
gcc warns about this:
c3.c:7: warning: format `%d' expects type `int', but argument 2 has type `double'
c3.c:7: warning: format `%d' expects type `int', but argument 3 has type `long int'
but there are plenty of similar errors for which it can't or won't
issue a warning.