Daniel T.
The expression is:
unsigned int x = 3333702325u + 120u * 1000 / 500.0f;
With normal math, where I don't have to worry about overflow, the
answer is x == 3333702565, but when I look at the value of 'x' it
reads: 3333702656.
If I change any of the values to type double, then I get the correct
answer. So am I losing precision, or is there a compiler bug?