Unexpected results comparing float to Fraction


Terry Reedy

This is Python, and we can make __eq__ methods that do anything,
including being non-transitive, non-reflexive, and nonsensical if we like :)

Yes, Python's developers can intentionally introduce bugs, but we try
not to. The definitions of sets and dicts and containment assume that ==
means equality as mathematically defined. At one time, we had 0 == 0.0
and 0 == Decimal(0) but 0.0 != Decimal(0) (and so on for all integral
float values). That 'misfeature' was corrected because of the 'problems'
it caused. That lesson learned, one of the design requirements for the
new enum class (metaclass) was that it not re-introduce non-transitivity.
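
For anyone who wants to see the repaired behaviour, a quick sketch that
runs in any modern Python 3:

from decimal import Decimal
from fractions import Fraction

# Equality is transitive across the numeric tower, and equal values
# hash equally, so mixed-type sets and dicts behave sensibly:
assert 0 == 0.0 == Decimal(0) == Fraction(0)
assert hash(0) == hash(0.0) == hash(Decimal(0)) == hash(Fraction(0))
assert len({0, 0.0, Decimal(0), Fraction(0)}) == 1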
 

Oscar Benjamin

You may not have expected these results, but as someone who regularly
uses the fractions module, I do expect them.

Why would you do the above? You're deliberately trying to create a
float with a value that you know is not representable by the float
type. The purpose of Fractions is precisely that they can represent
all rational values, hence avoiding these problems.
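
A quick sketch of the problem being described, runnable in any Python 3
session:

from fractions import Fraction

# The binary float nearest to 1/3 is not one third; Fraction reports
# the float's exact value rather than guessing what was meant:
print(Fraction(1, 3))                   # 1/3
print(Fraction(1/3))                    # 6004799503160661/18014398509481984
print(Fraction(1/3) == Fraction(1, 3))  # False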

When I use Fractions my intention is to perform exact computation. I
am very careful to avoid allowing floating point imprecision to sneak
into my calculations. Mixing floats and fractions in computation is
not IMO a good use of duck-typing.

I would say that if type A is a strict superset of type B then the
coercion should be to type A. This is the case for float and Fraction
since any float can be represented exactly as a Fraction but the
converse is not true.

I'm surprised that Fraction(1/3) != Fraction(1, 3); after all, floats
are approximate anyway, and the float value 1/3 is more likely to be
Fraction(1, 3) than Fraction(6004799503160661, 18014398509481984).

Refuse the temptation to guess: Fraction(float) should give the exact
value of the float. It should not give one of the countably infinite
number of other possible rational numbers that would (under a
particular rounding scheme and the floating point format in question)
round to the same float. If that is the kind of equality you would
like to test for in some particular situation then you can do so by
coercing to float explicitly.
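
For instance, assuming float-level equality really is what you want,
the explicit coercion looks like this:

from fractions import Fraction

f = Fraction(1, 3)
x = 1/3

print(f == x)         # False: the comparison itself is exact
print(float(f) == x)  # True: both sides rounded to float precision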

Calling Fraction(1/3) is a misunderstanding of what the fractions
module is for and how to use it. The point is to guarantee avoiding
floating point errors; this is impossible if you use floating point
computations to initialise Fractions.

Writing Fraction(1, 3) does look a bit ugly, so my preferred way to
reduce the boilerplate in a script that uses lots of Fraction
"literals" is to do:

from fractions import Fraction as F

# 1/3 + 1/9 + 1/27 + ...
limit = F('1/3') / (1 - F('1/3'))

That's not as good as dedicated syntax but with code highlighting it's
still quite readable.
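
It is worth noting (a small aside, easily checked in the interpreter)
that the string constructor also parses decimal notation exactly,
unlike passing a float:

from fractions import Fraction as F

print(F('1/3'))  # 1/3
print(F('0.1'))  # 1/10, exact
print(F(0.1))    # 3602879701896397/36028797018963968, the float's exact value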


Oscar
 

Steven D'Aprano

Most likely.

Floats aren't precise enough to be equal to a (true) fraction.
float(1/3) is cut short somewhere by the computer; a (true) fraction of
one third is not: it goes on forever.

I know this, and that's not what surprised me. What surprised me was that
Fraction converts the float to a fraction, then compares. It surprises me
because in other operations, Fractions down-cast to float.

Adding a float to a Fraction converts the Fraction to the nearest float,
then adds:

py> 1/3 + Fraction(1, 3)
0.6666666666666666

but comparing a float to a Fraction does the conversion the other way:
the float is up-cast to an exact Fraction, then compared.
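
Both directions can be seen side by side in a short sketch:

from fractions import Fraction

# Arithmetic: the Fraction is down-cast to the nearest float first.
print(type(1/3 + Fraction(1, 3)))  # <class 'float'>

# Comparison: the float is up-cast to an exact Fraction first, so even
# ordering is exact; the float 1/3 rounds down, below the true value.
print(Fraction(1, 3) == 1/3)       # False
print(Fraction(1, 3) > 1/3)        # True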
 

Chris Angelico

I know this, and that's not what surprised me. What surprised me was that
Fraction converts the float to a fraction, then compares. It surprises me
because in other operations, Fractions down-cast to float.

Adding a float to a Fraction converts the Fraction to the nearest float,
then adds:

py> 1/3 + Fraction(1, 3)
0.6666666666666666

Hmm. This is the one that surprises me. That would be like the
addition of a float and an int resulting in an int (at least in C; in
Python, where floats have limited range and ints have arbitrary
precision, the matter's not quite so clear-cut). Perhaps this needs to
be changed?
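
For comparison, Python's int/float arithmetic already lets the inexact
type win, even though that can lose information; a small demonstration
(2**60 is chosen because it is exactly representable as a float):

print(type(2**60 + 0.5))     # <class 'float'>
print(2**60 + 0.5 == 2**60)  # True: the 0.5 is lost to float rounding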

ChrisA
 

Oscar Benjamin

Hmm. This is the one that surprises me. That would be like the
addition of a float and an int resulting in an int (at least in C; in
Python, where floats have limited range and ints have arbitrary
precision, the matter's not quite so clear-cut). Perhaps this needs to
be changed?

The Python numeric tower is here:
http://docs.python.org/3/library/numbers.html#module-numbers

Essentially it says that
Integral < Rational < Real < Complex
and that numeric coercions in mixed type arithmetic should go from
left to right which makes sense mathematically in terms of the
subset/superset relationships between the numeric fields.
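
The tower is exposed as ABCs in the numbers module, so the
classification can be queried directly (a sketch; note where Decimal
sits, which is discussed below):

import numbers
from decimal import Decimal
from fractions import Fraction

print(isinstance(1, numbers.Integral))               # True
print(isinstance(Fraction(1, 3), numbers.Rational))  # True
print(isinstance(1.0, numbers.Real))                 # True
print(isinstance(Decimal('0.1'), numbers.Real))      # False: Decimal registers
                                                     # only as numbers.Number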

When you recast this in terms of Python's builtin/stdlib types it becomes
int < Fraction < {float, Decimal} < complex
and taking account of boundedness and imprecision we find that the
only subset/superset relationships that are actually valid are
int < Fraction
and
float < complex
In fact Fraction is a superset of both float and Decimal (ignoring
inf/nan/-0 etc.). int is not a subset of float, Decimal or complex.
float is a superset of none of the types. Decimal is a superset of
float but the tower places them on the same level.
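
The float < Fraction relationship is easy to verify (a sketch, sticking
to finite values as noted above):

from fractions import Fraction

# Every finite float survives the round trip through Fraction...
x = 0.1
assert Fraction(x) == x

# ...but most Fractions do not survive the round trip through float:
f = Fraction(1, 3)
assert Fraction(float(f)) != f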

The real dividing line between {int, Fraction} and {float, Decimal,
complex} is about (in)exactness. The numeric tower ensures the
property that inexactness is contagious which I think is a good thing.
This is not explicitly documented anywhere. PEP 3141 makes a dangling
reference to an Exact ABC as a superclass of Rational but this is
unimplemented anywhere AFAICT:
http://www.python.org/dev/peps/pep-3141/

The reason contagious inexactness is a good thing is the same as
having contagious quiet NaNs. It makes it possible to rule out inexact
computations playing a role in the final computed result. In my
previous post I asked what the use case is for mixing floats and
Rationals in computation. I have always considered this to be
something that I wanted to avoid and I'm glad that contagious
inexactness helps me to avoid mixing floats into exact computations.
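
Contagion in action, as a minimal sketch:

from fractions import Fraction

result = Fraction(1, 3) + Fraction(1, 6)  # exact: 1/2
tainted = result + 0.0                    # a float has entered: inexact from here on
print(type(result).__name__)              # Fraction
print(type(tainted).__name__)             # float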


Oscar
 

Chris Angelico

The real dividing line between {int, Fraction} and {float, Decimal,
complex} is about (in)exactness. The numeric tower ensures the
property that inexactness is contagious which I think is a good thing.

*nods slowly*

That does make sense, albeit a little oddly. So when you're sorting
out different integer sizes (C's short/int/long, Py2's int/long), you
go to the "better" one, but when working with inexact types, you go to
the "worse" one. But I can see the logic in it.

ChrisA
 
