sum for sequences?


Alf P. Steinbach

* Steve Howell:
Yep, I did mention in my post that the outer loop does not *have* to
be O(N), if you can parallelize it. Apart from reducing
intermediates, the divide-and-conquer method does not reduce overall
computation time unless you have multiple processors, correct?

With a single thread of execution it can reduce the time for large integers and
custom numerical types, with n the number of final bits, from O(n^2) to O(n log n).
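
For concreteness, a minimal sketch of that divide-and-conquer summation (dc_sum is just an illustrative name, and it assumes a non-empty sequence):

    def dc_sum(seq):
        # Sum by recursive halving: no intermediate result is much
        # bigger than half the final one, which is what cuts the
        # O(n^2) copying cost down to O(n log n).
        if len(seq) == 1:
            return seq[0]
        mid = len(seq) // 2
        return dc_sum(seq[:mid]) + dc_sum(seq[mid:])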

I don't think the parallelism here is very relevant for Python.

From a more practical point of view, the sum efficiency could be improved by
doing the first addition using '+' and the rest using '+=', without changing the
behavior.


Cheers & hth.,

- Alf
 

Steve Howell

* Steve Howell:

 From a more practical point of view, the sum efficiency could be improved by
doing the first addition using '+' and the rest using '+=', without changing the
behavior.

I like that approach, because you don't have to assume that the start
object is clonable, although perhaps you need to make that assumption
anyway for the case where the list is empty.

Here is code that might shed light on Alf's suggested approach:

    import timeit
    M = 10
    N = 8000

    def in_place(
        start = [],
        sublists = ([[None] * M]) * N
        ):
        # only macro-optimized
        i = 0
        for sublist in sublists:
            if i == 0:
                accum = start + sublist
                i += 1
            else:
                accum.extend(sublist)
        if i == 0:
            return 'whatever' # semantics here?
        return accum

    def with_intermediates(
        start = [],
        sublists = ([[None] * M]) * N
        ):
        accum = start
        for sublist in sublists:
            accum = accum + sublist
        return accum

    t1 = timeit.timeit('in_place()', 'from __main__ import in_place',
                       number=10)
    t2 = timeit.timeit('with_intermediates()',
                       'from __main__ import with_intermediates',
                       number=10)
    print M, N, t1, t2, int(t2 / t1)

For N=1000, you get a significant speedup (x84):

10 1000 0.00102496147156 0.0867249965668 84

More data:

M  N     t1               t2               t2/t1
10 1000  0.00102496147156 0.0867249965668     84
10 2000  0.00211381912231 0.172616004944      81
10 4000  0.00491786003113 0.561800956726     114
10 8000  0.0706541538239  19.9019489288      281
10 16000 0.0161759853363  9.64171791077      596
10 32000 0.0351569652557  43.98346591       1251
 

Patrick Maupin

 From a more practical point of view, the sum efficiency could be improved by
doing the first addition using '+' and the rest using '+=', without changing the
behavior.

Mod parent up!!!! :)
 

Steve Howell

    import timeit
    M = 10
    N = 8000

    def in_place(
        start = [],
        sublists = ([[None] * M]) * N
        ):
        # only macro-optimized
        i = 0
        for sublist in sublists:
            if i == 0:
               accum = start + sublist
               i += 1
            else:
                accum.extend(sublist)

FYI I later obtained similar results with the more general:

                accum += sublist
        if i == 0:
            return 'whatever' # semantics here?
        return accum

    def with_intermediates(
        start = [],
        sublists = ([[None] * M]) * N
        ):
        accum = start
        for sublist in sublists:
            accum = accum + sublist
        return accum
 

Patrick Maupin

FYI I later obtained similar results with the more general:
                  accum += sublist

Yeah, but you still have to create an object of the correct type for
accum. And for summing small lists, that will actually increase the
runtime. (Worst case, a list of a single item: you have to create the
accum object based on the type of the start value, then do two += into
it, for the start value and the first item in the list, vs just doing
a single + which automatically creates an object.)

Personally, I think the most general approach, if sum were open to
enhancements, would be to automatically apply Alf's suggestion of "+"
on summing the first item to the start value, and "+=" on subsequent
items. Also, you *could* add a parameter to sum(), for example,
default "partialsums=False" that would allow the use of partial sums
in those cases where that might give better behavior (such as floats
and tuples).
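
A minimal sketch of what such a partial-sums mode might do, assuming pairwise reduction (the name and signature here are hypothetical, not an actual sum() feature):

    def sum_pairwise(items, start=0):
        # Hypothetical "partialsums=True" behaviour: combine adjacent
        # pairs instead of accumulating left to right. For floats this
        # tends to lose less precision; for tuples it avoids the
        # ever-growing accumulator.
        items = list(items)
        if not items:
            return start
        while len(items) > 1:
            # combine adjacent pairs; a trailing odd item passes through
            items = [items[i] + items[i + 1] if i + 1 < len(items)
                     else items[i]
                     for i in range(0, len(items), 2)]
        return start + items[0]

    # e.g. sum_pairwise([0.1] * 10) or sum_pairwise([(1,), (2,)], ())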
 

Steve Howell

Yeah, but you still have to create an object of the correct type for
accum.  

I think you overlooked the surrounding code.

Here is the code again:

    def in_place(
        start = [],
        sublists = ([[None] * M]) * N
        ):
        # only macro-optimized
        i = 0
        for sublist in sublists:
            if i == 0:
                accum = start + sublist
                i += 1
            else:
                accum += sublist
        if i == 0:
            return 'whatever' # semantics here?
        return accum

No need to infer the type.
And for summing small lists, that will actually increase the
runtime.  (Worst case, a list of a single item: you have to create the
accum object based on the type of the start value, then do two += into
it, for the start value and the first item in the list, vs just doing
a single + which automatically creates an object.)

For small lists, the performance of any sane implementation would
probably be so fast as to be negligible, unless you are in a really
tight loop. If you are in a tight loop, then your use case probably
eventually sums large lists. Even if it doesn't, the slowdown is just
a constant.

For M=5, I get these results on my machine:

M N t1 t2 (t2/t1)
5 1 3.50475311279e-05 3.2901763916e-05 0.938775510204
5 2 2.00271606445e-05 1.59740447998e-05 0.797619047619
5 4 6.79492950439e-05 6.31809234619e-05 0.929824561404
5 8 0.000124931335449 0.000126123428345 1.00954198473
5 64 0.000530958175659 0.00226187705994 4.25999101931
5 1024 0.00262904167175 0.27246594429 103.636981953

t1 = time with add only first element
t2 = time with add all elements

Very small penalty for small lists, very large gain for large lists.
Personally, I think the most general approach, if sum were open to
enhancements, would be to automatically apply Alf's suggestion of "+"
on summing the first item to the start value, and "+=" on subsequent
items.  

See above. That's the approach I would use as well.
 

Paul Rubin

Steve Howell said:
The documentation is pretty clear on the intention that sum() is
intended for numbers: ...

I've never been big on the idea of duck-typing addition. Who would have
thought that (1,2,3)+(4,5,6) was something other than the vector sum?

I think itertools.chain more directly expresses what the OP was trying
to do, and is preferable for readability, independently of these
performance questions.
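
For example (with stand-in data; the point is that the flattened list is never built in memory):

    import itertools

    list_of_lists = [[1, 2], [3, 4], [5]]
    for item in itertools.chain(*list_of_lists):
        print item    # 1 2 3 4 5, one per line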
 

Steven D'Aprano

I've never been big on the idea of duck-typing addition. Who would have
thought that (1,2,3)+(4,5,6) was something other than the vector sum?

But your arguments are tuples, not vectors.

There are languages which do treat arithmetic operations on vectors as
vector operations, as do (e.g.) H-P scientific calculators. That's a fine
design choice, and it works for languages where the emphasis is on
scientific calculations. But Python is a more generalist language, so in
my mind it is more appropriate to treat lists as generic lists, and not
numeric vectors:

[1, 2, 3] + [4, "a"] => [1, 2, 3, 4, "a"]

just as

"123" + "4a" => "1234a"
 

Steven D'Aprano

From a more practical point of view, the sum efficiency could be
improved by doing the first addition using '+' and the rest using '+=',
without changing the behavior.

But that would change the behaviour. The __iadd__ method is not the same
as the __add__ method, and you can't guarantee that they will behave the
same -- or even similarly.
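
Even plain lists show the difference, through an alias to the accumulator:

    >>> a = [1]
    >>> b = a
    >>> a += [2]      # __iadd__ mutates in place; b sees the change
    >>> b
    [1, 2]
    >>> a = a + [3]   # __add__ builds a new list; b is unchanged
    >>> b
    [1, 2]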

And what about tuples? And subclasses of list/tuples? How many different
types need to be optimized?

In practical terms, does anyone actually ever use sum on more than a
handful of lists? I don't believe this is more than a hypothetical
problem.

The primary use case for sum is adding numbers when floating point
accuracy is not critical. If you need float accuracy, use math.fsum. If
you need to add more than a handful of small lists, don't use sum: just
calling extend in a loop is probably fast enough, or use itertools.chain.
Trying to turn sum into an all-singing all-dancing function optimised for
an ever-increasing number of objects is a fool's errand. The
implementation of sum in C is already complex enough: 95 lines, excluding
comments, blanks and braces, for something which is a lightweight
function with very simple semantics.
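
The extend-in-a-loop alternative is short enough to write wherever it is needed (list_of_lists standing in for whatever you would have passed to sum):

    result = []
    for sublist in list_of_lists:
        result.extend(sublist)    # amortized O(len(sublist)) per step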

But if anyone wants to submit a patch to the bug tracker, go right ahead.
Without a patch though, I'd say that Python-Dev will consider this a non-
issue.
 

Alf P. Steinbach

* Steven D'Aprano:
But that would change the behaviour. The __iadd__ method is not the same
as the __add__ method, and you can't guarantee that they will behave the
same -- or even similarly.

Hm, I don't think it's documented which one is currently used (except insofar
as the reference implementation serves as documentation).

And what about tuples? And subclasses of list/tuples? How many different
types need to be optimized?

Point. One would need to check for availability of '+='.
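
Something along these lines, perhaps (a sketch only, not the actual sum implementation; note that '+=' falls back to __add__ for types without __iadd__, so the check is about semantics rather than availability):

    def add_item(accum, item):
        # Prefer '+=' only when the type really defines __iadd__.
        if hasattr(type(accum), '__iadd__'):
            accum += item
            return accum
        return accum + item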

In practical terms, does anyone actually ever use sum on more than a
handful of lists? I don't believe this is more than a hypothetical
problem.

Agreed.


Cheers,

- Alf
 

Patrick Maupin

And what about tuples? And subclasses of list/tuples? How many different
types need to be optimized?

One of the beautiful things about Python is that, for most things,
there are few surprises for even new users. "There should be one
obvious way to do it" for the user means that, sometimes, under the
hood, there are a lot of special cases for the implementers.
In practical terms, does anyone actually ever use sum on more than a
handful of lists? I don't believe this is more than a hypothetical
problem.

Right now, it's probably not, because when somebody sums a large list
and gets thwacked on the head by the lack of efficiency, they then
come here and get thwacked because "everybody knows" they should use
itertools or something else; not sum().
The primary use case for sum is adding numbers when floating point
accuracy is not critical. If you need float accuracy, use math.fsum.

See, I think the very existence of math.fsum() already violates "there
should be one obvious way to do it."
But if anyone wants to submit a patch to the bug tracker, go right ahead.
Without a patch though, I'd say that Python-Dev will consider this a non-
issue.

Agreed. Wish I had the time to do this sort of cleanup.

Regards,
Pat
 

Steve Howell

One of the beautiful things about Python is that, for most things,
there are few surprises for even new users.  "There should be one
obvious way to do it" for the user means that, sometimes, under the
hood, there are a lot of special cases for the implementers.

If nothing else, I think it's reasonable for users to expect symmetry.

If you can use "+" to concatenate lists, then it seems reasonable
that something spelled "sum" would concatenate lists as well, and in
reasonable time.
Right now, it's probably not, because when somebody sums a large list
and gets thwacked on the head by the lack of efficiency, they then
come here and get thwacked because "everybody knows" they should use
itertools or something else; not sum().

Indeed. It would be nice if the docs for sum() at least pointed to
list(itertools.chain(*list_of_lists)), or whatever the most kosher
alternative is supposed to be.

It only takes a handful of sublists, about ten on my box, to expose
the limitation of the Shlemiel-the-Painter O(M*N*N) algorithm that's
under the hood. It only takes 200 sublists to start getting a 10x
degradation in performance.
See, I think the very existence of math.fsum() already violates "there
should be one obvious way to do it."

The nice thing about math.fsum() is that it is at least documented
from sum(), although I suspect some users try sum() without even
consulting the docs.

You could appease all users with an API where the most obvious choice,
sum(), never behaves badly, and where users can still call more
specialized versions (math.fsum() and friends) directly if they know
what they are doing. This goes back to the statement that Patrick
makes--under the hood, this means more special cases for implementers,
but fewer pitfalls for users.
 

Steven D'Aprano

One of the beautiful things about Python is that, for most things, there
are few surprises for even new users. "There should be one obvious way
to do it" for the user means that, sometimes, under the hood, there are
a lot of special cases for the implementers.

It never ceases to amaze me how often people simply don't understand this.

"There should be one obvious way to do it" is the opposite of "NO obvious
way", not of "many ways which may or may not be obvious". The complete
quote from the Zen makes that clear:

There should be one-- and preferably ONLY one --obvious way to do it.
[Emphasis added]

And don't forget the next line:

Although that way may not be obvious at first unless you're Dutch.

Python is under no compulsion to make "the obvious way" obvious to anyone
except Guido. It's a bonus if it happens to be obvious to newbies, not a
requirement.

And besides, what is "it" you're talking about?

* Adding integers, decimals or fractions, or floats with a low
requirement for precision and accuracy? Use sum.

* Adding floats with a high requirement for precision and accuracy?
Use math.fsum.

* Concatenating strings? Use ''.join.

* Joining lists? Use [].extend.

* Iterating over an arbitrary sequence of arbitrary sequences?
Use itertools.chain.

That's five different "its", and five obvious ways to do them.

Right now, it's probably not, because when somebody sums a large list
and gets thwacked on the head by the lack of efficiency, they then come
here and get thwacked because "everybody knows" they should use
itertools or something else; not sum().

Exactly. If you're adding a large number of large lists, you're already
doing it wrong. Making sum more efficient is just a band aid.

See, I think the very existence of math.fsum() already violates "there
should be one obvious way to do it."

How does the existence of math.fsum contradict the existence of sum?
 

Paul Rubin

Steven D'Aprano said:
* Iterating over an arbitrary sequence of arbitrary sequences?
Use itertools.chain.

chain is only for a finite collection of sequences, since the outer
sequence has to be unpacked into separate arguments. For an arbitrary
(possibly infinite) iterable of sequences, use chain.from_iterable.
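
The difference shows up when the outer iterable cannot be unpacked up front (lists_forever is a made-up example):

    import itertools

    def lists_forever():
        while True:          # an endless supply of small lists
            yield [1, 2]

    flat = itertools.chain.from_iterable(lists_forever())
    next(flat)   # -> 1; lazy, so the infinite outer iterable is fine
    # itertools.chain(*lists_forever()) would hang: the * unpacking
    # tries to exhaust the outer iterable before chain is even called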
 

Steven D'Aprano

If nothing else, I think it's reasonable for users to expect symmetry.

Why? What is symmetry in programming?

Since the + operator takes both numbers and lists, and the - operator
doesn't, does "symmetry" require that we make up some list operation so
that integers and lists are symmetrical?

If you can use "+" to concatenate lists, then it seems reasonable that
something spelled "sum" would concatenate lists as well, and in
reasonable time.

Where do you get the "reasonable time" from? A *single* + on lists can be
slooooow. Try this:

L = [None]*10**9
result = L+L

(assuming you've even got enough memory to do so)

With a very few exceptions (e.g. dict lookup being "usually" O(1), list
append being amortized O(1)), Python makes no promises about performance.
It's not part of the language. If you, the programmer, are making any
assumptions about performance that aren't clearly and explicitly
documented in the official docs, then YOU are at fault, not Python.

Indeed. It would be nice if the docs for sum() at least pointed to
list(itertools.chain(*list_of_lists)), or whatever the most kosher
alternative is supposed to be.

It only takes a handful of sublists, about ten on my box, to expose the
limitation of the Shlemiel-the-Painter O(M*N*N) algorithm that's under
the hood. It only takes 200 sublists to start getting a 10x degradation
in performance.

And how many times have you needed to add 200 sublists?

How many times have you needed to add 10 sublists?

Nobody has demonstrated that this is anything more than a hypothetical
problem.

The nice thing about math.fsum() is that it is at least documented from
sum(), although I suspect some users try sum() without even consulting
the docs.

You could appease all users with an API where the most obvious choice,
sum(), never behaves badly, and where users can still call more
specialized versions (math.fsum() and friends) directly if they know
what they are doing. This goes back to the statement that Patrick
makes--under the hood, this means more special cases for implementers,
but fewer pitfalls for users.

More special cases for implementers means more bugs in the language,
which means I end up writing my own code and ignoring the built-in
version anyway.

More special cases means I have to pay the cost of high accuracy float
summation even when I don't need, or want, it.

More special cases means I'm fooled into paying the cost of summing lists
when I don't need to, because it's easier than importing itertools:

for item in sum(lots_of_lists):
    pass

needlessly creates a large list out of the smaller ones. Even if I don't
fall for the temptation and write bad code, I still pay the cost in the
libraries and applications written by others.


More special cases isn't free. It's MORE costly than teaching users to
use list.extend or itertools.chain.
 

Steve Howell

Why? What is symmetry in programming?

"Symmetry" is best shown by example.
>>> 3 - 2
1
>>> set([1,2,3]) - set([2,3])
set([1])
>>> 5 * 3
15
>>> [1, 2, 3, 4, 5] * 3
[1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
>>> 1 + 2 + 3 + 4
10
>>> sum([1,2,3,4])
10
>>> [1,2] + [3,4]
[1, 2, 3, 4]
>>> sum([[1,2], [3,4]], [])
[1, 2, 3, 4]
Since the + operator takes both numbers and lists, and the - operator
doesn't, does "symmetry" require that we make up some list operation so
that integers and lists are symmetrical?

No. Nobody is advocating for list subtraction.
Where do you get the "reasonable time" from?

From common sense. Concatenation of lists should be in O(M*N) time,
not O(M*N*N), because there is no need to build intermediate lists.
With a very few exceptions (e.g. dict lookup being "usually" O(1), list
append being amortized O(1)), Python makes no promises about performance.
It's not part of the language. If you, the programmer, are making any
assumptions about performance that aren't clearly and explicitly
documented in the official docs, then YOU are at fault, not Python.

I'm not making any assumptions here. I know that sum() is needlessly
slow for lists, even though it is not explicitly documented.
And how many times have you needed to add 200 sublists?

How many times have you needed to add 10 sublists?

Aggregating lists is a very common task in programming. An example of
a list of lists would be a list of departments, where each department
is a list of employees. If you want to sort the employees, you will
want to aggregate to a list.
More special cases for implementers means more bugs in the language,
which means I end up writing my own code and ignoring the built-in
version anyway.

Special cases do not have to introduce bugs in the language.
More special cases means I have to pay the cost of high accuracy float
summation even when I don't need, or want, it.

Nothing about making sum() work for the general cases precludes more
specific, optimized functions.
More special cases means I'm fooled into paying the cost of summing lists
when I don't need to, because it's easier than importing itertools:

You don't have to be fooled.
for item in sum(lots_of_lists):
    pass

needlessly creates a large list out of the smaller ones.

Although this is a mostly harmless example, developers with common
sense would not sum lots of lists unless they expected to keep the
resulting list around for multiple operations, or for one operation,
like sort(), where you need to create a list for the subsequent
operation.
Even if I don't
fall for the temptation, and write bad code, I still pay the cost in the
libraries and applications written by others.

You already pay the small price for polymorphism when you use Python.
More special cases isn't free.

Nobody said they were.
It's MORE costly than teaching users to
use list.extend or itertools.chain.

Which the docs for sum() don't do.
 

Patrick Maupin

How does the existence of math.fsum contradict the existence of sum?

You're exceptionally good at (probably deliberately) mis-interpreting
what people write.

Regards,
Pat
 

Patrick Maupin

With a very few exceptions (e.g. dict lookup being "usually" O(1), list
append being amortized O(1)), Python makes no promises about performance.
It's not part of the language. If you, the programmer, are making any
assumptions about performance that aren't clearly and explicitly
documented in the official docs, then YOU are at fault, not Python.

It's not about promises, guarantees, quid-pro-quo, etc.

It's about a lack of surprises. Which, 99% of the time, Python excels
at. This is why many of us program in Python. This is why some of us
who would never use sum() on lists, EVEN IF IT WERE FIXED TO NOT BE SO
OBNOXIOUSLY SLOW, advocate that it, in fact, be fixed to not be so
obnoxiously slow.

BTW, it's also not about "fault". There is no shame in writing a
Python program, seeing that it doesn't go fast enough, and then
hammering on it until it does. There is also no shame in not reading
the docs before you write the program, although arguably (and you
obviously work very hard to help see to this) a great deal of shame
attaches to posting to the newsgroup before reading the docs.

Regards,
Pat
 

Steve Howell

One of the beautiful things about Python is that, for most things, there
are few surprises for even new users.  "There should be one obvious way
to do it" for the user means that, sometimes, under the hood, there are
a lot of special cases for the implementers.

It never ceases to amaze me how often people simply don't understand this.

"There should be one obvious way to do it" is the opposite of "NO obvious
way", not of "many ways which may or may not be obvious". The complete
quote from the Zen makes that clear:

There should be one-- and preferably ONLY one --obvious way to do it.
[Emphasis added]

And don't forget the next line:

Although that way may not be obvious at first unless you're Dutch.

Python is under no compulsion to make "the obvious way" obvious to anyone
except Guido. It's a bonus if it happens to be obvious to newbies, not a
requirement.

And besides, what is "it" you're talking about?

* Adding integers, decimals or fractions, or floats with a low
  requirement for precision and accuracy? Use sum.

* Adding floats with a high requirement for precision and accuracy?
  Use math.fsum.

* Concatenating strings? Use ''.join.

* Joining lists? Use [].extend.

* Iterating over an arbitrary sequence of arbitrary sequences?
  Use itertools.chain.

That's five different "its", and five obvious ways to do them.

Let's go through them...
* Adding integers, decimals or fractions, or floats with a low
requirement for precision and accuracy? Use sum.

Pretty obvious.

>>> sum([1, 2, 3])
6


* Adding floats with a high requirement for precision and accuracy?
Use math.fsum.

Mostly obvious.
>>> fsum([1.234534665989, 2.987, 3])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'fsum' is not defined
>>> import math
>>> math.fsum([1.234534665989, 2.987, 3])
7.2215346659890001

* Concatenating strings? Use ''.join.


Common pitfall:
>>> ['abc', 'def', 'ghi'].join()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'join'
>>> ''.join(['abc', 'def', 'ghi'])
'abcdefghi'

* Joining lists? Use [].extend.

Obvious, yes. Convenient? Not really.
>>> start = []
>>> for list in [[1, 2], [3, 4]]:
...     start.extend(list)
...
>>> start
[1, 2, 3, 4]
* Iterating over an arbitrary sequence of arbitrary sequences?
Use itertools.chain.
>>> group1 = ['al', 'bob']
>>> group2 = ['cal']
>>> groups = [group1, group2]

Obvious if you are Dutch...

>>> chain(groups)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'chain' is not defined
>>> import itertools
>>> len(itertools.chain(*groups))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'itertools.chain' has no len()
>>> len(list(itertools.chain(*groups)))
3

So you have sum, fsum, join, extend, and chain.

Sum is builtin, but you have to import fsum from math and chain from
itertools.

Join is actually a method on strings, not sequences.

Chain can take multiple lists, or you can use the splat operator on a
list of lists.

Extend is actually a method on lists, and it only takes one list, not
multiple ones.
>>> [].extend([1, 2], [3, 4])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: extend() takes exactly one argument (2 given)

Just commit all that to memory, and enjoy the productivity of using a
high level language! ;)
 

Steven D'Aprano

Why? What is symmetry in programming?

"Symmetry" is best shown by example.
>>> 3 - 2
1
>>> set([1,2,3]) - set([2,3])
set([1])


That's a useless example.

>>> 42 - 10
32
>>> set([42]) - set([10])
set([42])


You don't define symmetry. You don't even give a sensible example of
symmetry. Consequently I reject your argument that because sum is the
obvious way to sum a lot of integers, "symmetry" implies that it should
be the obvious way to concatenate a lot of lists.

>>> 1 + 2
3
>>> [1] + [2]
[1, 2]
>>> set([1]) + set([2])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'set' and 'set'

From common sense. Concatenation of lists should be in O(M*N) time, not
O(M*N*N), because there is no need to build intermediate lists.

You are correct that building intermediate lists isn't *compulsory*,
there are alternatives, but the alternatives themselves have costs.
Complexity itself is a cost. sum currently has nice simple semantics,
which means you can reason about it: sum(sequence, start) is the same as

total = start
for item in sequence:
    total = total + item
return total

You don't have to care what the items in sequence are, you don't have to
make assumptions about what methods sequence and start have (beyond
supporting iteration and addition). Adding special cases to sum means it
becomes more complex and harder to reason about. If you pass some other
sequence type in the middle of a bunch of lists, what will happen? Will
sum suddenly break, or perhaps continue to work but inefficiently?

You still need to ask these questions with existing sum, but it is
comparatively easy to answer them: you only need to consider how the
alternative behaves when added to a list. You don't have to think about
the technicalities of the sum algorithm itself -- sometimes it calls +,
sometimes extend, sometimes +=, sometimes something else... which of the
various different optimized branches will I fall into this time? Who
knows? sum already has two branches. In my opinion, three branches is one
too many.


[...]
Aggregating lists is a very common task in programming.

"Aggregating" lists? Not summing them? I think you've just undercut your
argument that sum is the "obvious" way of concatenating lists.

In natural language, we don't talk about "summing" lists, we talk about
joining, concatenating or aggregating them. You have just done it
yourself, and made my point for me. And this very thread started because
somebody wanted to know what the equivalent of sum was for sequences.

If sum was the obvious way to concatenate sequences, this thread wouldn't
even exist.

An example of a
list of lists would be a list of departments, where each department is a
list of employees. If you want to sort the employees, you will want to
aggregate to a list.

I grant you that 10 departments is likely to be a borderline case, but if
you have 200 departments, then you will most likely be querying a
database and getting back a single list of employees. And if you're not,
you probably should be.

Special cases do not have to introduce bugs in the language.

They don't *have* to, but since even Donald Knuth has written code with
bugs in it, it is a safe bet that the number of bugs in code written by
mere mortals will be roughly proportional to the complexity: more complex
means more bugs. You can go to the bank with that.

Nothing about making sum() work for the general cases precludes more
specific, optimized functions.

You have missed my point. If I call your version of sum with all the
special cases, I pay the overhead of the complexity regardless of whether
I need it or not. The existence of other functions which duplicate the
special cases in sum is irrelevant.



[...]
Which the docs for sum() don't do.

Patches are always welcome.
 
