Performance of list vs. set equality operations

Gustavo Narea

Hello!

Could you please confirm whether my understanding of equality
operations on sets and lists is correct? This is how I think things
work, based partly on experimentation and the online Python
documentation:

When you compare two lists, *every* element of one list is compared
against the element at the same position in the other list; that
comparison is done by the __eq__() method (or the equivalent for
built-in types). The comparison short-circuits as soon as a result is
False.

When you compare two sets, there's a loop over all the elements of the
first set, and the hash of each element is looked up in the second
set:
- If the hash matches that of one or more elements in the second
set, the element from the first set is compared (with __eq__ or the
equivalent) against those elements of the second set that share its
hash. As soon as a result is True, nothing else is done for that
element and the loop moves on to the next element of the first set;
if all the results are False, the loop ends and the two sets are not
equal.
- If the hash doesn't match that of any element in the second set,
the loop ends and the two sets are not equal.

So this means that:
1.- When you have two collections which have the same elements, the
equality operation will *always* be faster with lists.
2.- When you have two collections with different elements, the
equality operation *may* be faster with sets.

For example, if you have two collections of 1,000 elements each and
998 of them are equivalent, comparing both collections as sets will be
slower than doing it with lists. But if you have two collections of
1,000 elements each and 998 of them are not equivalent, then comparing
both collections as lists will be slower than doing it with sets.

The performance of equality operations on sets would thus depend on
how many elements differ between the two sets, while the performance
of equality operations on lists depends simply on the length of the
collections.

In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

This is why so many people advocate the use of sets instead of lists/
tuples in similar situations, right?

Cheers,

- Gustavo.
 
Chris Colbert

the proof is in the pudding:

In [1]: a = range(10000)

In [2]: s = set(a)

In [3]: s2 = set(a)

In [5]: b = range(10000)

In [6]: a == b
Out[6]: True

In [7]: s == s2
Out[7]: True

In [8]: %timeit a == b
1000 loops, best of 3: 204 us per loop

In [9]: %timeit s == s2
10000 loops, best of 3: 124 us per loop
 
Gustavo Narea

[Chris Colbert]
the proof is in the pudding:

In [1]: a = range(10000)

In [2]: s = set(a)

In [3]: s2 = set(a)

In [5]: b = range(10000)

In [6]: a == b
Out[6]: True

In [7]: s == s2
Out[7]: True

In [8]: %timeit a == b
1000 loops, best of 3: 204 us per loop

In [9]: %timeit s == s2
10000 loops, best of 3: 124 us per loop


I think you meant to set "s2 = set(b)":
=====
In [1]: a = range(10000)

In [2]: b = range(10000)

In [3]: s1 = set(a)

In [4]: s2 = set(a)

In [5]: s3 = set(b)

In [6]: %timeit a == b
10000 loops, best of 3: 191 us per loop

In [7]: %timeit s1 == s2
10000 loops, best of 3: 118 us per loop

In [8]: %timeit s1 == s3
1000 loops, best of 3: 325 us per loop
=====

Cheers.
 
Chris Colbert

:slaps forehead:

good catch.

 
Jack Diederich

[Gustavo Narea]
Hello!

Could you please confirm whether my understanding of equality
operations in sets and lists is correct? This is how I think things
work, partially based on experimentation and the online documentation
for Python:

When you compare two lists, *every* element of one of the lists is
compared against the element at the same position in the other list;
that comparison is done by the __eq__() method (or the equivalent for
builtin types). This is interrupted when a result is False.

When you compare two sets, there's a loop over all the elements of the
first set, where the hash for that element is looked up in the second
set: [snip]
In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

Yes, but faster isn't the same thing as free. I still get bitten
occasionally by code that blows away the difference by including a
set-wise assert() in a loop. Also, lists can have mutable members so
there are times you really /do/ want to compare lists instead of
hashes.
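For example, a list of mutable elements compares fine element-wise,
but can't be converted to a set at all:

```python
# Lists compare element-wise even when their elements are mutable.
a = [[1, 2], [3, 4]]
b = [[1, 2], [3, 4]]
print(a == b)          # True

# But mutable elements are unhashable, so set(a) is not an option.
try:
    set(a)
    hashable = True
except TypeError:
    hashable = False
print(hashable)        # False
```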

-Jack
 
Lie Ryan

[Gustavo Narea]
Hello!

Could you please confirm whether my understanding of equality
operations in sets and lists is correct? This is how I think things
work, partially based on experimentation and the online documentation
for Python:

When you compare two lists, *every* element of one of the lists is
compared against the element at the same position in the other list;
that comparison is done by the __eq__() method (or the equivalent for
builtin types). This is interrupted when a result is False.

When you compare two sets, there's a loop over all the elements of the
first set, where the hash for that element is looked up in the second
set:
- If this hash matches the hash for one or more elements in the
second set, the element in the first set is compared (with __eq__ or
equivalent) against the elements in the second set which have the same
hash. When a result is True, nothing else is done on that element and
the loop takes the next element in the first set; when all the results
are False, the loop ends and the two sets are not equivalent.
- If the hash doesn't match that of an element in the second set,
then the loop ends and the two sets are not equivalent.

I have not seen Python's set implementation, but if you kept a bitmap
of the hashes already present in a set, you could compare 32 or 64
items (i.e. the computer's native word size) at a time instead of
comparing items one-by-one[1]; this could marginally improve the
performance of set operations such as comparison, difference, update,
etc. Can anyone who has seen Python's set source code confirm whether
such a thing is implemented there?

[1] The possibility of hash collisions complicates this a little; I
haven't fully thought out the consequences of the interaction of such
a bitmap with hash-collision handling (there was a PyCon 2010 talk,
"The Mighty Dictionary" by Brandon Craig Rhodes, describing collision
handling:
http://python.mirocommunity.org/video/1591/pycon-2010-the-mighty-dictiona).
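A rough sketch of what I mean (purely hypothetical, and lossy because
of the collisions mentioned in the footnote; this is not how CPython's
set is implemented):

```python
WORD = 64  # pretend native word size

def bitmap(s):
    # Fold each element's hash into a single machine word.
    mask = 0
    for x in s:
        mask |= 1 << (hash(x) % WORD)
    return mask

def maybe_equal(s1, s2):
    # A differing bitmap proves the sets differ in one word-sized
    # comparison; an equal bitmap proves nothing (hash collisions),
    # so we must fall back to the full set comparison.
    if bitmap(s1) != bitmap(s2):
        return False
    return s1 == s2

print(maybe_equal({1, 2, 3}, {3, 2, 1}))   # True
print(maybe_equal({1, 2, 3}, {1, 2, 4}))   # False (bitmaps differ)
```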
 
Raymond Hettinger

[Gustavo Narea]
In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

If two collections are equal, then comparing them as a set is always
slower than comparing them as a list. Both have to call __eq__ for
every element, but sets have to search for each element while lists
can just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.

If the two collections are unequal but have the same size, then
the comparison time is data dependent (when the first mismatch
is found).
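The data dependence is easy to demonstrate with a small timing sketch
(illustrative only; modern Python 3 syntax):

```python
import timeit

n = 10000
a = list(range(n))
early = [-1] + a[1:]   # first mismatch at index 0
late = a[:-1] + [-1]   # first mismatch at the last index

# Same lengths, one differing element each, but the position of the
# mismatch drives the cost of the list comparison.
t_early = timeit.timeit(lambda: a == early, number=200)
t_late = timeit.timeit(lambda: a == late, number=200)
print(t_early < t_late)   # True
```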


Raymond
 
Steven D'Aprano

[Gustavo Narea]
In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

If two collections are equal, then comparing them as a set is always
slower than comparing them as a list. Both have to call __eq__ for
every element, but sets have to search for each element while lists can
just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.


Perhaps I'm misinterpreting what you are saying, but I can't confirm that
behaviour, at least not for subclasses of list:
>>> class MyList(list):
...     def __len__(self):
...         return self.n
...
>>> L1 = MyList(range(10))
>>> L1.n = 10
>>> L2 = MyList(range(10))
>>> L2.n = 20
>>> L1 == L2
True
>>> len(L1) == len(L2)
False
 
Patrick Maupin

[Gustavo Narea]
In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

[Raymond Hettinger]
If two collections are equal, then comparing them as a set is always
slower than comparing them as a list. Both have to call __eq__ for
every element, but sets have to search for each element while lists can
just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.

Perhaps I'm misinterpreting what you are saying, but I can't confirm that
behaviour, at least not for subclasses of list:

[snip]

I think what he is saying is that the list __eq__ method will look at
the list lengths first. This may or may not be considered a subtle
bug for the edge case you are showing.

If I do the following:

>>> a = range(10000000)
>>> b = range(10000000)
>>> c = range(9999999)
>>> print a == b
True
>>> print a == c
False

I don't even need to run timeit -- the "True" takes a while to print
out, while the "False" prints out immediately.

Regards,
Pat
 
Raymond Hettinger

[Raymond Hettinger]
[Steven D'Aprano]
Perhaps I'm misinterpreting what you are saying, but I can't confirm that
behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?revision=78522&view=markup

if (Py_SIZE(vl) != Py_SIZE(wl) && (op == Py_EQ || op == Py_NE)) {
    /* Shortcut: if the lengths differ, the lists differ */
    PyObject *res;
    if (op == Py_EQ)
        res = Py_False;
    else
        res = Py_True;
    Py_INCREF(res);
    return res;
}

And see set_richcompare() in
http://svn.python.org/view/python/trunk/Objects/setobject.c?revision=78886&view=markup

case Py_EQ:
    if (PySet_GET_SIZE(v) != PySet_GET_SIZE(w))
        Py_RETURN_FALSE;
    if (v->hash != -1 &&
        ((PySetObject *)w)->hash != -1 &&
        v->hash != ((PySetObject *)w)->hash)
        Py_RETURN_FALSE;
    return set_issubset(v, w);
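In rough Python terms, the Py_EQ branch above amounts to this (a
paraphrase; hash_v/hash_w stand in for the cached hash slots, with
None playing the role of the C code's -1, "no cached hash"):

```python
def sets_eq(v, w, hash_v=None, hash_w=None):
    # Rough Python paraphrase of set_richcompare's Py_EQ case.
    if len(v) != len(w):
        return False                       # size shortcut
    if hash_v is not None and hash_w is not None and hash_v != hash_w:
        return False                       # cached-hash shortcut
    return v.issubset(w)                   # full element-by-element check

print(sets_eq({1, 2, 3}, {3, 2, 1}))   # True
print(sets_eq({1, 2}, {1, 3}))         # False
print(sets_eq({1}, {1, 2}))            # False (size shortcut)
```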


Raymond
 
Steven D'Aprano

[Raymond Hettinger]
[Steven D'Aprano]
Perhaps I'm misinterpreting what you are saying, but I can't confirm
that behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?
revision=78522&view=markup

So what happens in my example with a subclass that (falsely) reports a
different length even when the lists are the same?

I can guess that perhaps Py_SIZE does not call the subclass __len__
method, and therefore is not fooled by it lying. Is that the case?
 
Terry Reedy

[Raymond Hettinger]
If the two collections have unequal sizes, then both ways immediately
return unequal.

[Steven D'Aprano]
Perhaps I'm misinterpreting what you are saying, but I can't confirm
that behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?
revision=78522&view=markup

So what happens in my example with a subclass that (falsely) reports a
different length even when the lists are the same?

I can guess that perhaps Py_SIZE does not call the subclass __len__
method, and therefore is not fooled by it lying. Is that the case?

Adding a print call within __len__ should determine that.
 
Raymond Hettinger

[Steven D'Aprano]
So what happens in my example with a subclass that (falsely) reports a
different length even when the lists are the same?

I can guess that perhaps Py_SIZE does not call the subclass __len__
method, and therefore is not fooled by it lying. Is that the case?

Yes. Py_SIZE() gets the actual size of the underlying list.

The methods for most builtin containers typically access the
underlying structure directly. That makes them fast and allows
them to maintain their internal invariants.
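This is easy to observe from pure Python: in CPython, a subclass whose
__len__ lies still compares by its real contents, because list
equality reads the underlying size directly:

```python
class LyingList(list):
    # Overrides __len__ but not __eq__.
    def __len__(self):
        return 0

a = LyingList([1, 2, 3])
b = LyingList([1, 2, 3])
print(len(a))          # 0 -- the lie is visible from Python code
print(a == b)          # True -- equality uses the real C-level size
print(a == [1, 2, 3])  # True
```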


Raymond
 
Gabriel Genellina

On Thu, 08 Apr 2010 04:07:53 -0300, Steven D'Aprano wrote:
[Raymond Hettinger]
If the two collections have unequal sizes, then both ways immediately
return unequal.

[Steven D'Aprano]
Perhaps I'm misinterpreting what you are saying, but I can't confirm
that behaviour, at least not for subclasses of list:

For doubters, see list_richcompare() in
http://svn.python.org/view/python/trunk/Objects/listobject.c?
revision=78522&view=markup

So what happens in my example with a subclass that (falsely) reports a
different length even when the lists are the same?

I can guess that perhaps Py_SIZE does not call the subclass __len__
method, and therefore is not fooled by it lying. Is that the case?

Yes. Py_SIZE is a generic macro; it returns the ob_size field from the
object structure. No method is called at all.

Another example: the print statement bypasses the sys.stdout.write()
method and calls fwrite() directly at the C level when it determines
that sys.stdout is a `file` instance. This happens even if it's a
subclass of file, so overriding write() in Python code does not work.

The CPython source contains lots of shortcuts like that. Perhaps the
checks should be stricter in some cases, but I imagine it's not so easy
to fix: lots of code was written in the pre-2.2 era, assuming that
internal types were not subclassable.
 
Patrick Maupin

The CPython source contains lots of shortcuts like that. Perhaps the  
checks should be stricter in some cases, but I imagine it's not so easy to  
fix: lots of code was written in the pre-2.2 era, assuming that internal  
types were not subclassable.

I don't know if it's a good "fix" anyway. If you subclass an internal
type, you can certainly supply your own rich comparison methods, which
would (IMO) put the CPU computation burden where it belongs if you
decide to do something goofy like subclass a list and then override
__len__.

Regards,
Pat
 
Gabriel Genellina

I don't know if it's a good "fix" anyway. If you subclass an internal
type, you can certainly supply your own rich comparison methods, which
would (IMO) put the CPU computation burden where it belongs if you
decide to do something goofy like subclass a list and then override
__len__.

We're all consenting adults, that's the Python philosophy, isn't it?
If I decide to make stupid things, it's my fault. I don't see why Python
should have to prevent that.
 
Patrick Maupin

We're all consenting adults, that's the Python philosophy, isn't it?
If I decide to make stupid things, it's my fault. I don't see why Python  
should have to prevent that.

Exactly. I think we're in violent agreement on this issue ;-)
 
Raymond Hettinger

[Patrick Maupin]
I don't know if it's a good "fix" anyway. If you subclass an internal
[snip]

[Gabriel Genellina]
We're all consenting adults, that's the Python philosophy, isn't it?
If I decide to make stupid things, it's my fault. I don't see why Python
should have to prevent that.

Perhaps so for pure python classes, but the C builtins are another
story.

The C containers directly reference underlying structure and methods
for several reasons. The foremost reason is that if their internal
invariants are violated, they can segfault. A list's __getitem__
method needs to know the real length (not what you report in __len__)
if it is to avoid writing objects outside of its allocated memory
range. Another reason is efficiency -- the cost of attribute lookups
is high and would spoil the performance of the builtins if they could
not access their underlying structure and friend methods directly.
It is important to have those perform well because they are used
heavily in everyday programming.

There are also a couple of OOP design considerations. The
http://en.wikipedia.org/wiki/Open/closed_principle is one example.

Encapsulation is another. If you override __len__
in order to influence the behavior of __eq__, then you're
relying on an implementation detail, not the published interface.
Even though the length check is an obvious optimization
for list equality and set equality, there is no guarantee
that other implementations of Python use the same pattern.

my-two-cents-ly yours,

Raymond
 
Stefan Behnel

Steven D'Aprano, 08.04.2010 03:41:
[Gustavo Narea]
In other words: The more different elements two collections have, the
faster it is to compare them as sets. And as a consequence, the more
equivalent elements two collections have, the faster it is to compare
them as lists.

Is this correct?

If two collections are equal, then comparing them as a set is always
slower than comparing them as a list. Both have to call __eq__ for
every element, but sets have to search for each element while lists can
just iterate over consecutive pointers.

If the two collections have unequal sizes, then both ways immediately
return unequal.


Perhaps I'm misinterpreting what you are saying, but I can't confirm that
behaviour, at least not for subclasses of list:
[snip]

This code incorrectly assumes that overriding __len__ has an impact on
the equality of two lists. If you want to influence the equality, you
need to override __eq__. If you don't, the original implementation is
free to do whatever it likes to determine whether it is equal to
another value. Whether or not it uses __len__ for that is an
implementation detail that can't be relied upon.
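For instance, a (hypothetical) subclass that wants its own notion of
equality should spell it out in __eq__ itself:

```python
class PaddedList(list):
    # Hypothetical subclass: equality ignores None entries (a toy
    # policy). The right place for custom equality is __eq__ itself;
    # relying on list.__eq__ consulting __len__ would be leaning on
    # an implementation detail.
    def __eq__(self, other):
        def strip(xs):
            return [x for x in xs if x is not None]
        return strip(self) == strip(other)

    __hash__ = None  # mutable with custom equality: explicitly unhashable

print(PaddedList([1, 2, None]) == [1, 2])        # True
print(PaddedList([1, 2]) == PaddedList([1, 3]))  # False
```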

Stefan
 
Terry Reedy

[Stefan Behnel]

This code incorrectly assumes that overriding __len__ has an impact on
the equality of two lists. If you want to influence the equality, you
need to override __eq__. If you don't, the original implementation is
free to do whatever it likes to determine if it is equal to another
value or not. If it uses __len__ for that or not is only an
implementation detail that can't be relied upon.

After reading the responses from both you and Raymond, I realized that
(a) there is a real difference between 'checking lengths' and 'calling
__len__', which I (and apparently the example) had treated as the
same, and (b) the example shows that assuming they are the same is a
mistake. Thank you both for the clarification.

Terry Jan Reedy
 

Ask a Question

Want to reply to this thread or ask your own question?

You'll need to choose a username for the site, which only take a couple of moments. After that, you can post your question and our members will help you out.

Ask a Question

Members online

Forum statistics

Threads
473,769
Messages
2,569,582
Members
45,066
Latest member
VytoKetoReviews

Latest Threads

Top