linq936
Hi,
I have this little test program:
#include <stdio.h>
int main(){
    float f1 = 2.2;
    double d1 = 3.3;
    int j = f1 * d1;
    int i = (int) f1 * d1;
    int i2 = (int) (f1 * d1);
    printf("%d, i=%d, j=%d, i2=%d\n", f1*d1, i, j, i2);
}
I compile it with gcc on Linux; there is no warning, and the output
looks like this:
2066953011, i=1075644989, j=6, i2=7
I have several questions:
1. Why is there no compiler warning when I assign the product of a
float and a double to an int?
2. As for the results, I can understand where i2 comes from (see the
sketch below the list), but the others do not make sense to me.
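
For reference, this is roughly how I read i2. It is only a sketch, and
it assumes the float value is stored as something very close to 2.2:

#include <stdio.h>

int main(void)
{
    float f1 = 2.2;            /* stored as about 2.2000000476837158 */
    double d1 = 3.3;
    double product = f1 * d1;  /* about 7.26 */
    int i2 = (int) product;    /* truncation toward zero gives 7 */
    printf("product = %f, i2 = %d\n", product, i2);
    return 0;
}
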
Could you give me some pointers?