Christopher T King
Speaking of rounding errors, why is round() defined to round away from
zero? I've always been quite fond of the mathematical definition of
rounding of floor(n+.5). As a contrived example of why the latter is IMHO
better, I present the following:
for x in [x+.5 for x in xrange(-10,10)]:
    print int(round(x))
This prints -10, -9, ..., -2, -1, 1, 2, ..., 9, 10 (it skips zero),
probably not what you'd expect. I'm not sure how often a case like this
comes up in real usage, but I imagine it's more often than a case relying
on the current behaviour would.
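For comparison, here is a minimal sketch of the floor(n+.5) definition in
modern Python 3 (the loop above is Python 2; the helper name round_half_up
is my own). Run on the same half-integer inputs, it produces a contiguous
range that includes zero:

```python
import math

def round_half_up(n):
    # floor(n + 0.5): halves round toward +infinity, so -0.5 -> 0
    return math.floor(n + 0.5)

# the same inputs as the loop above: -9.5, -8.5, ..., 9.5
halves = [x + 0.5 for x in range(-10, 10)]
results = [round_half_up(v) for v in halves]
print(results)  # -9, -8, ..., 0, ..., 10 -- zero is present
```

(Note that Python 3's built-in round() differs from both behaviours: it
rounds halves to the nearest even integer, so round(0.5) == 0 and
round(1.5) == 2.)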