Corne' Cornelius
Hi,
I'm experiencing some weirdness in a program when subtracting two
doubles that should give 0: instead it returns -1.11022e-16. It looks
to me as if changing the double x_step declaration to an unsigned type
might help, but the compiler complains when I try that.
Any ideas?
#include <iostream>
using namespace std;

int main(int argc, char *argv[]) {
    double x1 = -1;
    double x2 = 1;
    double nx = 6;
    int pos = 0;
    double x = 0.0;
    double x_step = 0.0;

    x_step = (x2 - x1) / nx;
    x = x1;
    for (pos = 0; pos < nx; pos++) {
        cout << x << " +\t" << x_step << " =\t " << (x + x_step) << endl;
        x = x + x_step;
    }
    return 0;
}
Output:
-1 + 0.333333 = -0.666667
-0.666667 + 0.333333 = -0.333333
-0.333333 + 0.333333 = -1.11022e-16
-1.11022e-16 + 0.333333 = 0.333333
0.333333 + 0.333333 = 0.666667
0.666667 + 0.333333 = 1