Bubba
Greetings to all!
I will begin with a humble apology for posting such an "impure" article,
but I was wondering - shouldn't data types be defined by the architecture,
and not by the compiler? I trust the answer is yes, but why Microsoft would
ship such a "by design" flaw escapes me.
To my surprise, I found out today (the hard way) that even in VS 2010
{
    long double x;
    size_t s = sizeof(x);
}
gives '8', while gcc returns '16' (as expected).
I did some homework and read a few articles, but could not find a sane
reason for such an implementation (given more than 20 years of 80-bit FPUs
in x86 CPUs).
Is anyone more familiar with the details of this issue?
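In case it helps anyone reproduce this, here is the quick check I ended up
using (just sizeof plus LDBL_MANT_DIG from <float.h>); my gcc prints 16 and
64, and as far as I can tell MSVC should print 8 and 53, because it maps
long double onto plain double:

#include <stdio.h>
#include <float.h>

int main (void) {
    /* Storage size versus actual precision of long double. */
    printf ("sizeof(long double) = %u\n", (unsigned) sizeof (long double));
    printf ("LDBL_MANT_DIG       = %d\n", LDBL_MANT_DIG);
    return 0;
}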
Furthermore, I was shocked to find out that the Windows calculator couldn't
correctly compute the difference between sqrt(x) and the exact square root
of that number (sqrt(4) - 2, sqrt(16) - 4, ...).
Yes, I am aware of IEEE 754, but I couldn't reproduce that error even with
GMP:
bubba@korea:~$ cat b.c
#include <gmp.h>

int main (int argc, char *argv[]) {
    mpf_t sq_me, sq_out, test, sub;

    mpf_set_default_prec (1024);       /* 1024 bits of mantissa */
    mpf_init (sq_me);
    mpf_init (sq_out);
    mpf_init (test);
    mpf_init (sub);

    mpf_set_str (sq_me, argv[1], 10);  /* number to take the root of */
    mpf_set_str (sub, "2", 10);        /* exact root of 4, for the test below */
    mpf_sqrt (sq_out, sq_me);
    mpf_sub (test, sq_out, sub);

    gmp_printf ("Input: %Ff\n\n", sq_me);
    gmp_printf ("Square root: %.1000Ff\n\n", sq_out);
    gmp_printf ("Subtraction: %.1000Ff\n\n", test);
    return 0;
}
bubba@korea:~$ gcc -g -Wall -pedantic -ansi -O3 b.c -lgmp
bubba@korea:~$ ./a.out 4
Input: 4.000000
Square root:
2.000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000
Subtraction:
0.000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000
I tried to produce a similar error with gcc but failed. Does anyone know
how it is implemented to produce such errors? I guess it is written in C,
but I can't figure out why they haven't fixed it since Windows XP...
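For reference, this is the kind of trivial check I tried with gcc (plain
double, nothing but <math.h>), and it prints exactly zero, as IEEE 754
requires for a perfect square, since sqrt must be correctly rounded:

#include <stdio.h>
#include <math.h>

int main (void) {
    /* 4.0 and 16.0 are exactly representable, and a correctly rounded
       sqrt of a perfect square is exact, so both differences are 0.0. */
    double a = sqrt (4.0)  - 2.0;
    double b = sqrt (16.0) - 4.0;
    printf ("sqrt(4) - 2  = %.20g\n", a);
    printf ("sqrt(16) - 4 = %.20g\n", b);
    return 0;
}

(Compiled with: gcc test.c -lm)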