Christian
Hi,
I have a small problem with remote compilation and floating-point literals.
Here is my program:
#include <stdio.h>
int main(void) {
float f;
double d;
f = 0.5;
printf("f=%f\n", f);
printf("0.5=%f\n", 0.5);
printf("0.5f=%f\n", 0.5f);
printf("0.9=%f\n", 0.9);
printf("1.9=%f\n", 1.9);
d = 0.5;
printf("d0.5=%f\n", d);
printf("1.0/2.0=%f\n", 1.0 / 2.0);
return 0;
}
I compile it on HP-UX 11.11 and then run it:
#/usr/bin/cc c.c
#a.out
f=0.500000
0.5=0.500000
0.5f=0.500000
0.9=0.900000
1.9=1.900000
d0.5=0.500000
1.0/2.0=0.500000
Everything is OK.
Then I compile it again from a remote Solaris machine:
solaris#rsh machineHP /usr/bin/cc c.c
and run it again on the HP machine:
#a.out
f=0.000000
0.5=0.000000
0.5f=0.000000
0.9=0.000000
1.9=1.000000
d0.5=0.000000
1.0/2.0=0.500000
It seems that all floating-point literals are truncated to integers.
How can I avoid this?
Thank you for your help.