matt.jaffe
I'm trying to show how floating point numbers are represented internally. I thought the easiest way would be to print the same floating point number once with %f and then again with %x, but the results surprised me. I can do it with a union and bit fields, but why doesn't the simpler way work? Here's the code:
#include <stdio.h>

int main(void)
{
    union
    {
        float aFloat;
        int anInt;
        struct
        {
            unsigned int sig:23;  /* significand without the most significant 1 */
            unsigned int expo:8;  /* biased exponent */
            unsigned int sign:1;
        } fields;
    } uEx; /* abbreviation of unionExample */

    uEx.aFloat = 1.0;
    printf("\n The union as a float: %f; as an integer in hex: %x; \n"
           " the sign bit is %x; the biased exponent is %x; the significand is %x \n",
           uEx.aFloat, uEx.anInt, uEx.fields.sign, uEx.fields.expo, uEx.fields.sig);
    printf("\n 1.0 printed with %%f is %f and with %%x is 0x%x \n", 1.0, 1.0);
    return 0;
}
The size of both ints and floats on this machine is 32 bits. Here's the result:
The union as a float: 1.000000; as an integer in hex: 3f800000;
the sign bit is 0; the biased exponent is 7f; the significand is 0
1.0 printed with %f is 1.000000 and with %x is 0x0
The first two lines of output are what I was expecting per IEEE 754, but the 0x0 in the last line has me confused. Why doesn't it print as 0x3f800000?