bug in modulus?

jantod

I think there might be something wrong with the implementation of
modulus.

Negative float values close to 0.0 break the identity "0 <= abs(a % b)
< abs(b)".

print 0.0 % 2.0 # => 0.0
print -1e-010 % 2.0 # => 1.9999999999

which is correct, but:

print -1e-050 % 2.0 # => 2.0
print -1e-050 % 2.0 < 2.0 # => False

This means I can't use modulus to "wrap around" before it reaches a
certain value. I'm using Python 2.4.2 on WindowsXP.

Thanks
Janto
 
Tim Peters

[[email protected]]
I think there might be something wrong with the implementation of
modulus.

Negative float values close to 0.0 break the identity "0 <= abs(a % b) < abs(b)".

While that's a mathematical identity, floating point has finite
precision. Many mathematical identities can fail when using floats.
For example,

(x+y)+z = x+(y+z)

doesn't always hold when using floats either.
print 0.0 % 2.0 # => 0.0
print -1e-010 % 2.0 # => 1.9999999999

which is correct, but:

print -1e-050 % 2.0 # => 2.0
print -1e-050 % 2.0 < 2.0 # => False

See footnote 5.2 in the Language (not Library) reference manual,
section 5.6 "Binary arithmetic operations":

While abs(x%y) < abs(y) is true mathematically, for floats it may not
be true numerically due to roundoff. For example, and assuming a
platform on which a Python float is an IEEE 754 double-precision
number, in order that -1e-100 % 1e100 have the same sign as 1e100, the
computed result is -1e-100 + 1e100, which is numerically exactly equal
to 1e100. Function fmod() in the math module returns a result whose
sign matches the sign of the first argument instead, and so returns
-1e-100 in this case. Which approach is more appropriate depends on
the application.
This means I can't use modulus to "wrap around" before it reaches a
certain value.

It's simply not possible to deliver a float result in all cases that
satisfies all desirable identities. % and math.fmod make different
tradeoffs, but neither is suitable for all applications, and it's
possible that neither is suitable for a particular application --
pick your poison, or brew your own.
I'm using Python 2.4.2 on WindowsXP.

Because it's inherent in using finite-precision approximations to real
numbers, this is a cross-platform and cross-language phenomenon. If
you can't tolerate approximations, rework your logic to use integers
instead.
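The tradeoff Tim describes can be seen side by side with the thread's own numbers (a quick sketch; math.fmod is in the standard library):

```python
import math

a, b = -1e-50, 2.0

# % returns a result with the sign of the divisor: the exact value
# 2.0 - 1e-50 is not representable as a double, so it rounds to 2.0.
print(a % b)            # 2.0

# math.fmod returns a result with the sign of the dividend, exactly.
print(math.fmod(a, b))  # -1e-50
```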
 
jantod

Hmmm. I understand. I'd suggest that someone just drop a link from the
Library reference manual as the divmod entry over there seems to
contradict it.

"""
divmod(a, b)

Take two (non complex) numbers as arguments and return a pair of
numbers consisting of their quotient and remainder when using long
division. With mixed operand types, the rules for binary arithmetic
operators apply. For plain and long integers, the result is the same as
(a / b, a % b). For floating point numbers the result is (q, a % b),
where q is usually math.floor(a / b) but may be 1 less than that. In
any case q * b + a % b is very close to a, if a % b is non-zero it has
the same sign as b, and 0 <= abs(a % b) < abs(b).
"""

But maybe I'm reading it wrong. In any case what I wanted was simply a
way to extract the angle from a complex number where the angle is
between 0 and 2*pi. I think I'll just take the modulus twice.

from math import atan, atan2, pi

def angle(complex):
    """Returns angle where 2*pi > angle >= 0

    >>> angle(1+1j) - atan(1) < 1e-3
    True
    >>> angle(-1+1j) - (atan(-1) + 3*pi) % (2*pi) < 1e-3
    True
    >>> angle(0+1j) == pi/2
    True
    >>> angle(0-1j) == 1.5*pi
    True
    >>> angle(1+0j) == 0
    True
    >>> angle(0+0j) == 0
    True
    >>> angle(1-1e-100*1j) == 0
    True
    """
    if complex.real == 0:
        if complex.imag == 0:
            return 0
        if complex.imag < 0:
            return 1.5*pi
        return pi/2
    theta = (atan2(complex.imag, complex.real) % (2*pi)) % (2*pi)
    assert 2*pi > theta >= 0, (theta, complex)
    return theta

Thanks for your help!
 
Dan Bishop

Hmmm. I understand. I'd suggest that someone just drop a link from the
Library reference manual as the divmod entry over there seems to
contradict it.

"""
divmod(a, b)

Take two (non complex) numbers as arguments and return a pair of
numbers consisting of their quotient and remainder when using long
division. With mixed operand types, the rules for binary arithmetic
operators apply. For plain and long integers, the result is the same as
(a / b, a % b). For floating point numbers the result is (q, a % b),
where q is usually math.floor(a / b) but may be 1 less than that. In
any case q * b + a % b is very close to a, if a % b is non-zero it has
the same sign as b, and 0 <= abs(a % b) < abs(b).
"""

But maybe I'm reading it wrong. In any case what I wanted was simply a
way to extract the angle from a complex number where the angle is
between 0 and 2*pi. I think I'll just take the modulus twice.

If you absolutely insist on having the angle be less than 2*pi, you
could always do something like:

theta = min(theta, pi*2 - 2**(-50))
 
Gerard Flanagan

But maybe I'm reading it wrong. In any case what I wanted was simply a
way to extract the angle from a complex number where the angle is
between 0 and 2*pi. I think I'll just take the modulus twice.

from math import atan, atan2, pi

def angle(complex):
    """Returns angle where 2*pi > angle >= 0

    >>> angle(1+1j) - atan(1) < 1e-3
    True
    >>> angle(-1+1j) - (atan(-1) + 3*pi) % (2*pi) < 1e-3
    True
    >>> angle(0+1j) == pi/2
    True
    >>> angle(0-1j) == 1.5*pi
    True
    >>> angle(1+0j) == 0
    True
    >>> angle(0+0j) == 0
    True
    >>> angle(1-1e-100*1j) == 0
    True
    """
    if complex.real == 0:
        if complex.imag == 0:
            return 0
        if complex.imag < 0:
            return 1.5*pi
        return pi/2
    theta = (atan2(complex.imag, complex.real) % (2*pi)) % (2*pi)
    assert 2*pi > theta >= 0, (theta, complex)
    return theta


from math import atan2, pi

def cangle(z):
    ret = atan2(z.imag, z.real)
    if ret < 0:
        ret += 2*pi
    return ret

assert cangle(1+1j) * 180 / pi == 45.0
assert cangle(-1+1j) * 180 / pi == 135.0
assert cangle(-1-1j) * 180 / pi == 225.0
assert cangle(1-1j) * 180 / pi == 315.0
assert cangle(1+0j) * 180 / pi == 0.0
assert cangle(-1+0j) * 180 / pi == 180.0
assert cangle(1j) * 180 / pi == 90.0
assert cangle(-1j) * 180 / pi == 270.0

Gerard
 
Christophe

(e-mail address removed) wrote:
I think there might be something wrong with the implementation of
modulus.

Negative float values close to 0.0 break the identity "0 <= abs(a % b)
< abs(b)".

print 0.0 % 2.0 # => 0.0
print -1e-010 % 2.0 # =>1.9999999999

which is correct, but:

print -1e-050 % 2.0 # => 2.0
print -1e-050 % 2.0 < 2.0 # => False

This means I can't use modulus to "wrap around" before it reaches a
certain value. I'm using Python 2.4.2 on WindowsXP.

Thanks
Janto

Consider that -1e-050 % 2.0 = 2.0 - 1e-050.

Now test that: 2.0 - 1e-050 == 2.0 gives True.

Floating point numbers just don't have the required precision to
represent 2.0 - 1e-050. For your specific problem, if you really want
the result to be < 2.0, then the best you can do is admit that floating
point operations have errors and return 0.0 when the modulus operation
equals 2.0.
 
Andrew Koenig

Christophe said:
(e-mail address removed) wrote:
Floating point numbers just don't have the required precision to represent
2.0 - 1e-050. For your specific problem, if you really want the result to
be < 2.0, then the best you can do is admit that floating point operations
have errors and return 0.0 when the modulus operation equals 2.0.

I disagree. For any two floating-point numbers a and b, with b != 0, it is
always possible to represent the exact value of a mod b as a floating-point
number--at least on every floating-point system I have ever encountered.
The implementation is not even that difficult.
 
Andrew Koenig

I disagree. For any two floating-point numbers a and b, with b != 0, it
is always possible to represent the exact value of a mod b as a
floating-point number--at least on every floating-point system I have ever
encountered. The implementation is not even that difficult.

Oops... This statement is true for the Fortran definition of modulus (result
has the sign of the dividend) but not the Python definition (result has the
sign of the divisor). In the Python world, it's true only when the dividend
and divisor have the same sign.
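Andrew's distinction can be checked directly (a small sketch; nothing here is from the thread beyond the numbers):

```python
import math

# Same signs: Python's % returns the exact remainder.
assert 7.5 % 2.0 == 1.5
assert -7.5 % -2.0 == -1.5

# Opposite signs: the exact remainder -1e-50 is representable (fmod
# returns it), but % must add the divisor to match its sign, and
# 2.0 - 1e-50 rounds to 2.0 -- exactness is lost.
assert math.fmod(-1e-50, 2.0) == -1e-50
assert -1e-50 % 2.0 == 2.0
```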
 
Tim Peters

[Andrew Koenig, on the counterintuitive -1e-050 % 2.0 == 2.0 example]
[also Andrew]
Oops... This statement is true for the Fortran definition of modulus (result
has the sign of the dividend) but not the Python definition (result has the
sign of the divisor). In the Python world, it's true only when the dividend
and divisor have the same sign.

Note that you can have it in Python too, by using math.fmod(a, b)
instead of "a % b".

IMO, it was a mistake (and partly my fault cuz I didn't whine early)
for Python to try to define % the same way for ints and floats. The
hardware realities are too different, and so are the pragmatics. For
floats, it's actually most useful most often to have both that a % b
is exact and that 0.0 <= abs(a % b) <= abs(b/2). Then the sign of a%b
bears no relationship to the signs of a and b, but for purposes of
modular reduction it yields the result with the smallest possible
absolute value. That's often valuable for floats (e.g., think of
argument reduction feeding into a series expansion, where time to
convergence typically depends on the magnitude of the input and
couldn't care less about the input's sign), but rarely useful for
ints.

I'd like to see this change in Python 3000. Note that IBM's proposed
standard for decimal arithmetic (which Python's "decimal" module
implements) requires two operations here, one that works like
math.fmod(a, b) (exact and sign of a), and the other as described
above (exact and |a%b| <= |b/2|). Those are really the only sane
definitions for a floating point modulo/remainder.
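For the record, later Python grew both of the operations Tim describes: math.fmod (exact, sign of the dividend) and, since Python 3.7, math.remainder (the IEEE 754 remainder: exact, smallest in magnitude). A quick illustration:

```python
import math

# Exact, sign of the dividend:
assert math.fmod(-1e-50, 2.0) == -1e-50

# IEEE 754 remainder: exact, with |r| <= |b/2| (Python 3.7+).
assert math.remainder(5.0, 2.0) == 1.0     # 5 = 2*2 + 1
assert math.remainder(7.0, 2.0) == -1.0    # 7 = 4*2 - 1
assert math.remainder(-1e-50, 2.0) == -1e-50
```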
 
Guido van Rossum

This is way above my head. :)

The only requirement *I* would like to see is that for floats that
exactly represent ints (or longs for that matter) the result of x%y
ought to have the same value as the same operation on the
corresponding ints (except if the result can't be represented exactly
as a float -- I don't know what's best then).

We're fixing this for / in Py3k, so passing an int into an algorithm
written for floats won't be harmful and won't require defensive
float() casting everywhere. It would be a shame if we *introduced* a
new difference between ints and floats for %.

--Guido

 
Christophe

Guido van Rossum wrote:
This is way above my head. :)

The only requirement *I* would like to see is that for floats that
exactly represent ints (or longs for that matter) the result of x%y
ought to have the same value as the same operation on the
corresponding ints (except if the result can't be represented exactly
as a float -- I don't know what's best then).

Which is exactly the case at hand.

As a proposed change, I would say that when the implementation of x%y
returns y, then we should return 0 instead, or maybe the biggest
possible number that is still < y. That way we would have a result that
respects the "x%y < y" rule and still be as good as possible with the
reduced float precision.

And besides, mathematics says that 0 ≡ y ≡ 2y ≡ 3y (mod y), so there
is no problem with returning 0 instead of y :)
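Christophe's proposal can be sketched as a small wrapper (wrap_mod is a hypothetical name, not anything from the stdlib):

```python
def wrap_mod(x, y):
    """x % y, but guaranteed to satisfy 0 <= result < y for y > 0.

    When rounding makes x % y come out equal to y (as with
    -1e-50 % 2.0), return 0.0 instead: 0 and y are congruent
    modulo y, so either is a valid representative.
    """
    r = x % y
    return 0.0 if r == y else r

print(wrap_mod(-1e-50, 2.0))        # 0.0
print(wrap_mod(-1e-10, 2.0) < 2.0)  # True
```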
 
