reduce() anomaly?

  • Thread starter Stephen C. Waterbury

Isaac To

David> That sounds dishonest. But Fibonacci can be a good example of
David> both memoization and dynamic programming, and I would expect the
David> iterative linear version to be faster (by a constant factor) than
David> the memoized recursive one (also linear due to the memoization).

Fibonacci can be computed in logarithmic time with repeated squaring of an
appropriate 2x2 matrix. And this, as Terry pointed out, can be done by
either recursion or iteration. Actually the phrase "can be done by either
recursion or iteration" can be omitted, since every iteration can be turned
into a tail recursion without losing efficiency (given the appropriate
compiler optimization), and every recursion can be turned into an iteration
using a stack; and the primary difference is how they look (and people
disagree even on whether a tail recursion is easier or harder to read than
an iteration).
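
For concreteness, a sketch of the matrix approach just described (repeated
squaring of [[1, 1], [1, 0]]); this is an added illustration, not code from
the original post:

def mat_mul(a, b):
    # product of two 2x2 matrices given as nested lists
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def fib(n):
    # F(n) via repeated squaring: O(log n) matrix multiplications,
    # since [[1,1],[1,0]]**n == [[F(n+1), F(n)], [F(n), F(n-1)]]
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]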

Regards,
Isaac.
 

Francis Avila

Douglas Alan said:
I agree: Down with bloat! Get rid of sum() -- it's redundant with
reduce(), which I use all the time, like so:

def longer(x, y):
    if len(y) > len(x): return y
    else: return x

def longest(seq):
    return reduce(longer, seq)

print longest(("abc", "yumyum!", "hello", "goodbye", "?"))

=> yumyum!

|>oug

Oh, guys...

def ireduce(func, seq):
    seq = iter(seq)
    try:
        i = seq.next()
        while True:
            i = func(i, seq.next())
    except StopIteration:
        return i

def cond(boolean, iftrue, iffalse):
    if boolean: return iftrue
    else: return iffalse
peace a chance!".split(' '))
'peace'

Can't we all just....get along?
 

David Eppstein

Isaac To said:
Fibonacci can be computed in logarithmic time with repeated squaring of an
appropriate 2x2 matrix. And this, as Terry pointed out, can be done by
either recursion or iteration. Actually the phrase "can be done by either
recursion or iteration" can be omitted, since every iteration can be turned
into a tail recursion without losing efficiency (given the appropriate
compiler optimization), and every recursion can be turned into an iteration
using a stack; and the primary difference is how they look (and people
disagree even on whether a tail recursion is easier or harder to read than
an iteration).

Perhaps, for Fibonacci, they are almost the same. For the recursive vs
iterative versions of 0-1 knapsack which I posted earlier in this
thread, the same values are computed in the same places by the same
formula; it's not tail-recursive, because there are two subproblem values
used in each instance of the formula (anyway, we're in a Python group and
Python doesn't optimize out tail recursions), and the recursive version
computes them in a different order than the iterative one and doesn't
compute as many of them.
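
As a concrete stand-in for that contrast (the 0-1 knapsack code posted
earlier in the thread is not reproduced here), a small added sketch of
memoized recursion next to bottom-up iteration, using Fibonacci:

memo = {0: 0, 1: 1}
def fib_memo(n):
    # top-down: computes only the subproblems the recursion actually reaches,
    # in the order it reaches them
    if n not in memo:
        memo[n] = fib_memo(n-1) + fib_memo(n-2)
    return memo[n]

def fib_iter(n):
    # bottom-up: walks through every subproblem 0..n in increasing order
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a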
 

Bengt Richter

Of course you can make it iterative with auxiliary stacks.
Any compiler for a compiled language would do that.
I don't think of that as removing the recursion, just hiding it.

I thought your question wasn't about semantic games, I thought it was
about the relation between memoization and dynamic programming. Both
compute and store the same subproblem values, but in different orders;
usually they are the same in complexity but not always.


Not to mention the logarithmic (in number of arithmetic operations)
algorithms... http://www.cs.utexas.edu/users/EWD/ewd06xx/EWD654.PDF


That sounds dishonest. But Fibonacci can be a good example of both
memoization and dynamic programming, and I would expect the iterative
linear version to be faster (by a constant factor) than the memoized
recursive one (also linear due to the memoization).
I played with fibonacci and was quite proud of myself for having
probably rediscovered a fast recursion algorithm for it
(about this time 1996 it seems, sheesh time flies) :

in scheme

http://groups.google.com/[email protected]&output=gplain

and later I translated it to python

http://groups.google.com/[email protected]&output=gplain

Perhaps you know of a similar predecessor?

BTW, do you mean dynamic programming above as in Bellman? Recursion can be used in
implementing that, but for fibonacci how would it apply?

Regards,
Bengt Richter
 

David Eppstein

[email protected] (Bengt Richter) said:
I played with fibonacci and was quite proud of myself for having
probably rediscovered a fast recursion algorithm for it
(about this time 1996 it seems, sheesh time flies) :

in scheme

http://groups.google.com/[email protected]&output=gplain

and later I translated it to python

http://groups.google.com/[email protected]&output=gplain

Perhaps you know of a similar predecessor?

You're in good company, this looks pretty similar to or the same as the
one rediscovered by Dijkstra in
http://www.cs.utexas.edu/users/EWD/ewd06xx/EWD654.PDF
(bottom of page 1). He called it "well-known" in 1978 but I'm not
familiar with earlier references.
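
For reference, a sketch of doubling identities of the kind EWD654 derives
(F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)**2 + F(k+1)**2); this is
an added illustration and not necessarily the exact form used in the posts
linked above:

def fib_pair(n):
    # returns (F(n), F(n+1)) in O(log n) arithmetic operations
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1), with k = n // 2
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_pair(n)[0]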
 

Isaac To

David> Perhaps, for Fibonacci, they are almost the same. For the
David> recursive vs iterative versions of 0-1 knapsack which I posted
David> earlier in this thread, the same values are computed in the same
David> places by the same formula, it's not tail-recursive because there
David> are two subproblem values used in each instance of the formula
David> (anyway we're in a Python group and Python doesn't optimize out
David> tail recursions), and the recursive version computes them in a
David> different order than the iterative one and doesn't compute as
David> many of them.

Hmm... don't keep your eyes closed. I mean an *iterative* version can always
be turned into a tail recursion, while you are not talking about the
iterative version. You are talking about the recursive version, which can
be turned into an iteration using a stack. This is what I said exactly:

Isaac> since every iteration can be turned into a tail recursion without
Isaac> losing efficiency (given the appropriate compiler optimization),
Isaac> and every recursion can be turned into an iteration using a
Isaac> stack

To do this, basically you turn your program into an interpreter performing
the instruction cycle as a loop; and the stack needed is for the low-level
implementation of the recursion. So instead of

def f(n):
    if n == 0:           # statement 1
        return 1         # statement 2
    else:
        return f(n-1)*n  # statement 3/4, recursion

You'd write (untested code; it depends on a Stack class with push, pop and
empty operations...)

def f(n):
    s = Stack()
    state = [1, n, 0]   # [statement nr, value of n, return value slot]
    s.push(state)
    while True:         # the loop implementing the instruction cycle
        state = s.pop()
        if state[0] == 1:                # "if n == 0"
            if state[1] == 0:            # if n == 0
                state[0] = 2             # then goto statement 2
            else:
                state[0] = 3             # else goto statement 3
            s.push(state)
        elif state[0] == 2:              # "return 1"
            if s.empty():                # if no more recursion
                return 1                 # then return 1
            laststate = s.pop()
            laststate[2] = 1             # else write 1 to return value slot
            s.push(laststate)
        elif state[0] == 3:              # "f(n-1)"
            s.push([4, state[1], 0])     # prepare for return
            s.push([1, state[1]-1, 0])   # restart execution, n=n-1
        else:                            # "return retval*n"
            if s.empty():                # if no more recursion
                return state[2]*state[1] # then return retval*n
            laststate = s.pop()
            laststate[2] = state[2]*state[1]  # else write that to return value slot
            s.push(laststate)

You can see it involves no recursion, and is completely mechanical. It
works basically for all recursions, including the one you mentioned.
Basically one goes back to the principle on which a computer works. Of
course, depending on circumstances it can be optimized a bit. The above
only shows that recursion is not absolutely necessary.
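
(To make the sketch above self-contained: it assumes some Stack class with
push, pop and empty operations; any minimal implementation will do, e.g.

class Stack:
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def empty(self):
        return len(self.items) == 0

With that in place, f(5) returns 120, just as the plain recursive version
does.)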

Regards,
Isaac.
 

Alex Martelli

Anybody who can claim (without a smilie) that "sum is redundant with
reduce" has clearly lost all touch with reality. "sum" accounts for
about 80-90% of the uses of reduce found on large existing codebases,
and even more when only _good_ uses of reduce are considered. "sum"
is way faster than "reduce(operator.add, ..." -- and the advantage in
speed *AND* simplicity grows even more when one considers how often
that typical reduce is miscoded as "reduce(lambda x, y: x+y, ..." or
"reduce(int.__add__, ..." and the like.

"Add up this bunch of numbers" is a VERY frequent task in many kinds of
programs. reduce LOOKS like a decent solution to that task, but it just
*ISN'T*. Consider the simple, obvious, elementary solution:

[alex@lancelot test]$ timeit.py -c -s'import operator' 'x=0' 'for i in
xrange(999): x+=i'
1000 loops, best of 3: 290 usec per loop

versus progressively-fancier reduce-based ones:

[alex@lancelot test]$ timeit.py -c -s'import operator' 'reduce(operator.add,
xrange(999))'
1000 loops, best of 3: 300 usec per loop

[alex@lancelot test]$ timeit.py -c -s'import operator' 'reduce(int.__add__,
xrange(999))'
1000 loops, best of 3: 540 usec per loop

[alex@lancelot test]$ timeit.py -c -s'import operator' 'reduce(lambda x,y:
x+y, xrange(999))'
1000 loops, best of 3: 660 usec per loop

The complications of going to a higher-order function and learning about
module operator "pay off" in a SLOWDOWN compared to elementary code...!

The "fancy" ways reduce is often coded slow things down further by about
two times. This is clearly unacceptable. And the right solution is,
quite obviously:

[alex@lancelot test]$ timeit.py -c 'sum(xrange(999))'
10000 loops, best of 3: 129 usec per loop

giving speed, simplicity, and code that is clean, high-level, concise, and
immediately obvious to any reader or maintainer.


The paladins of reduce don't help their case by posting reduce "use
cases" that are complicated and rarely optimal:

In 2.4, the much simpler expression:
list.sorted(seq, key=len)[-1]
is about 3 times faster on a typical long seq (seq=map(str, xrange(2000))).
We still don't have (in the current pre-alpha 2.4) the "one obvious way",
max(seq, key=len), because max & other functions haven't yet been upgraded
to take the optional parameter key= as lists' sort and sorted methods, but
I trust they will be.
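
(For readers following this in later Pythons: the pre-alpha list.sorted
classmethod became the plain sorted() builtin in the released 2.4, and
max/min did later grow the key= argument as well, so the idioms read, for a
non-empty sequence of strings seq:

longest = sorted(seq, key=len)[-1]
longest = max(seq, key=len)
)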

And if one doesn't care for the concision of these new idioms, the most
elementary programming is still best:

[alex@lancelot test]$ timeit.py -c -s'xs=map(str,xrange(2000))' -s'''
def longest(seq):
    seq=iter(seq)
    best = seq.next()
    blen = len(best)
    for item in seq:
        ilen = len(item)
        if ilen>blen:
            best = item
            blen = ilen
    return best
''' 'longest(xs)'
1000 loops, best of 3: 980 usec per loop

versus:

[alex@lancelot test]$ timeit.py -c -s'xs=map(str,xrange(2000))' -s'''
def longer(x, y):
    if len(y) > len(x): return y
    else: return x
def longest(seq):
    return reduce(longer, seq)
''' 'longest(xs)'
100 loops, best of 3: 2.4e+03 usec per loop

Once again, reduce proves to be an "attractive nuisance": it "looks" kind
of cool, but in fact it "repays" attention and study by _slowing things
down_ even in comparison to simple, elementary programming. Higher order
programming, such as the "max(seq, key=len)" which I hope will make it
into 2.4, is of course way simpler, faster, and more concise -- but the
"neither fish nor fowl" level at which most uses of reduce fall just
doesn't cut it, IMHO.

And btw, if one wants concision rather than speed, even in 2.3, the
typical DSU-like idiom:
def longest(seq):
    aux = [ (len(x), -i, x) for i, x in enumerate(seq) ]
    return max(aux)[-1]
still handily beats the reduce-coded alternative (even though
having the len= parameter instead of explicit decoration will
of course make it even smoother, simpler, and faster, in 2.4).

Oh, guys...

def ireduce(func, seq):
    seq = iter(seq)
    try:
        i = seq.next()
        while True:
            i = func(i, seq.next())
    except StopIteration:
        return i

I have some trouble with this snippet's indentation (have you used
tabs...?), but, apart from that, this is surely a close equivalent
to reduce (it differs from reduce when seq is empty, raising an
UnboundLocalError rather than a TypeError, etc, but such trifles
could of course be easily adjusted). It IS, however, marginally
slower than reduce. I clock 'reduce(lambda x, y: y, xs)' at 1110
millisecs, but 'ireduce(lambda x, y: y, xs)' at 1270 millisecs,
when "xs=map(str, xrange(2000))". [[ Now, of course, reduce's fans
will claim that this 15%-or-so slowdown destroys ireduce's usefulness,
while at the same time claiming that the many-times speedups versus
reduce that I have repeatedly shown for many other idioms, both lower
and higher level, don't matter -- just watch it, it'll be fun:) ]]

def cond(boolean, iftrue, iffalse):
    if boolean: return iftrue
    else: return iffalse

heh -- trying to sneak in a ternary, are we?-)

peace a chance!".split(' '))
'peace'

Unfortunately, what this suggests to me is that this ireduce (not
surprisingly, as it has basically reduce's interface:) and cond (ditto)
encourage obscure and best-avoided idioms.

For the task "get the first item in seq that has a length of 5", both
the simplest, lowest-level coding:
def firstlen5(seq):
    for item in seq:
        if len(item) == 5: return item
    else: raise ValueError, "No item has length 5"
or the concise, higher-level coding
[ x for x in seq if len(x)==5 ] [0]
or better in 2.4
( x for x in seq if len(x)==5 ).next()
appear far preferable to me. I suspect they'd also be way faster, but I'm
getting a bit tired of posting oodles of measurements that are just going
to get ignored by the frantic fans of reduce, since if they got off their
high horse and just looked at reality they'd be left with no case;-).

Can't we all just....get along?

Sure we can -- unless you count 'reduce' itself as a part of 'us':).


Alex
 

Georgy Pruss

| <...>
| is way faster than "reduce(operator.add, ..." -- and the advantage in
| speed *AND* simplicity grows even more when one considers how often
| that typical reduce is miscoded as "reduce(lambda x, y: x+y, ..." or
| "reduce(int.__add__, ..." and the like.


What's the difference between operator.add and int.__add__?
Why is operator.add faster and "better" than int.__add__ if we deal
with integers?

Georgy


| <...>
|
| Alex
|
 

Alex Martelli

Georgy said:
| <...>
| is way faster than "reduce(operator.add, ..." -- and the advantage in
| speed *AND* simplicity grows even more when one considers how often
| that typical reduce is miscoded as "reduce(lambda x, y: x+y, ..." or
| "reduce(int.__add__, ..." and the like.

What's the difference between operator.add and int.__add__?

operator.add just adds anything, not caring about the type of what
it's adding. Internally, it just reaches for the "addition function"
slot and calls that function, period.
Why is operator.add faster and "better" than int.__add__ if we deal
with integers?

int.__add__ must also check the type of its argument, and only call
the addition function if the argument is indeed integer, because
otherwise it must raise a TypeError. The type-checking is normally
needless overhead.
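
A quick interactive illustration of that difference (note the mixed-type
case: int.__add__ itself signals the mismatch by returning NotImplemented,
and it is the surrounding binary-operator machinery that turns this into
the TypeError):

>>> import operator
>>> operator.add(1, 2.5)    # type-agnostic: just calls the addition slot
3.5
>>> int.__add__(1, 2)       # type-checked: fine for two ints
3
>>> int.__add__(1, 2.5)     # but it declines a non-int right operand
NotImplemented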

So, we can measure, for example:

[alex@lancelot src]$ timeit.py -c -s'import operator' 'map(int.__add__,
xrange(999), xrange(999))'
1000 loops, best of 3: 730 usec per loop

[alex@lancelot src]$ timeit.py -c -s'import operator' 'map(operator.__add__,
xrange(999), xrange(999))'
1000 loops, best of 3: 460 usec per loop

the 999 (useless) type-checks cost 270 microseconds, for a net of
about 270 nanoseconds per check; the 999 additions and the building
of the 999-elements result list cost another 460 microseconds, or
about 460 nanoseconds per operation-and-addition-to-list-of-results.


Alex
 

David C. Fox

Alex said:
Erik Max Francis wrote:

However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.

How about

import operator
seq_of_flag_integers = [01020, 02300, 00132] #etc.
reduce(operator.or_, seq_of_integers)

My simple timing tests (cumulative time for 1000 trials, each with a
sequence of 1000 integers), show no significant difference between the
above and an alternative using a for loop and |=, but the above is
clearer to read.

I'm not claiming that the use case above is common, or particularly
useful. I'm just pointing out sum doesn't replace reduce unless the
function being applied is addition. Addition may be the most common
case, and in that case, sum is both clearer and faster. However, that
doesn't detract from the clarity and usefulness of reduce in the
remaining cases.

David

P. S. I've seen a lot of talk about removing old features from Python,
or specifically old built-ins, because of bloat. Does this "bloat"
reduce performance, or does it refer to the extra burden on someone
learning the language or reading someone else's code?
 

Alex Martelli

David C. Fox wrote:
...
How about

....a typical example of *made-up* case follows...:
import operator
seq_of_flag_integers = [01020, 02300, 00132] #etc.
reduce(operator.or_, seq_of_integers)

My simple timing tests (cumulative time for 1000 trials, each with a
sequence of 1000 integers), show no significant difference between the
above and an alternative using a for loop and |=, but the above is
clearer to read.

You may consider it "clearer to read", and I might opine differently
(apart from the typo/bug that you're reducing a sequence you never
defined, of course:). That (necessary) trailing underscore in the
name of operator.or_ is quite unpleasant, for example. But I think
the real point is another.

If you're doing lots of bit-twiddling in Python, you surely need to learn
such Python constructs as for and |= anyway.

With these can't-avoid-learning-them constructs, you can obviously
code a simple, elementary loop such as:
ored_flags = 0
for flags in all_the_flags:
    ored_flags |= flags

Alternatively, you may learn _two more_ things -- that 'reduce' thingy
_plus_ the fact that module operator has an or_ function that does the
same job as | -- all in order to get an *equivalent* way to code the
same task?! Not to mention that what you end up coding this way is most
definitely NOT going to be any clearer than the simple, elementary loop
to any future maintainer of your code -- if your bit-twiddling code is
going to be maintained by somebody who doesn't understand for or | you
are in trouble anyway, friend, while -- given that reduce and operator.or_
give NO real advantages! -- it WOULD be perfectly possible for your future
maintainer to NOT be familiar with them.

I'm not claiming that the use case above is common, or particularly
useful. I'm just pointing out sum doesn't replace reduce unless the
function being applied is addition. Addition may be the most common
case, and in that case, sum is both clearer and faster. However, that
doesn't detract from the clarity and usefulness of reduce in the
remaining cases.

It's hard to detract from the clarity and usefulness of a construct
that never had much usefulness OR clarity in the first place. When
we've taken away just about all of its reasonably frequent use cases,
what remains? Essentially only a case of "more than one way to do-itis"
where in the "BEST" case the existence of 'reduce' and module operator
let you "break even" compared to elementary, simplest coding -- and
more often than not you can fall into traps such as those exemplified by
most reduce-defenders' posts on this thread.

P. S. I've seen a lot of talk about removing old features from Python,
or specifically old built-ins, because of bloat. Does this "bloat"
reduce performance, or does it refer to the extra burden on someone
learning the language or reading someone else's code?

A slightly larger memory footprint, and larger built-ins dictionaries,
can only reduce runtime performance very marginally. The "cognitive
burden" of having built-ins that "don't carry their weight", on the
other hand, is considered an issue *in Python*, because it doesn't fit
in with Python's general philosophy and worldview. Python is meant to
be a LEAN language (with a not-lean standard library of modules that
are specifically imported when needed); certain "legacy" features are
sub-optimal (and best removed, when the strict constraint of backwards
compatibility can be relaxed) because they interfere with that (and
built-ins are close enough to "the core language" that they _do_ need
to be weighed severely in terms of "how often will they be useful").

It's a conceptual wart that the only sincere response to this thread
subject can be "not much, really -- basically backwards compatibility,
and some people's preference for constructs they're used to, often in
preference to simpler or better performing ones, some of which didn't
happen to have been introduced yet when they learned Python"...:).


Alex
 

Anton Vredegoor

On 9 Nov 2003 22:40:57 GMT, (e-mail address removed) (Bengt Richter) wrote:

[Ackermann function generates very long integers]
Is there a fast formula for computing results, ignoring the representation problem?

This seems to be an easy way to generate them faster:

http://www.kosara.net/thoughts/ackermann.html

... back to the topic of this thread ...

From what I get of the main discussion (recursion vs iteration) it is
reasonable to not make a distinction based on the type of algorithm
but to only look at the order in which the nodes of the search space
are visited. It seems possible to turn recursive functions into
iterative ones by pushing and popping a stack of argument values.

This opens up possibilities of continuing from arbitrary positions of
the stack instead of just by pushing and popping ...
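
A minimal sketch of that idea (an added illustration, not code from the
thread): flattening a nested list iteratively, with the pending positions
kept on an explicit stack, so the traversal could in principle be suspended
and resumed from any saved stack state:

def flatten(nested):
    result = []
    stack = [iter(nested)]          # explicit stack of positions
    while stack:
        it = stack[-1]
        for item in it:
            if isinstance(item, list):
                stack.append(iter(item))   # "recursive call": push a position
                break
            result.append(item)
        else:
            stack.pop()                    # level exhausted: "return"
    return result

print(flatten([1, [2, [3, 4]], 5]))        # prints [1, 2, 3, 4, 5]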

Anton
 

David C. Fox

Alex said:
A slightly larger memory footprint, and larger built-ins dictionaries,
can only reduce runtime performance very marginally. The "cognitive
burden" of having built-ins that "don't carry their weight", on the
other hand, is considered an issue *in Python*, because it doesn't fit
in with Python's general philosophy and worldview. Python is meant to
be a LEAN language (with a not-lean standard library of modules that
are specifically imported when needed); certain "legacy" features are
sub-optimal (and best removed, when the strict constraint of backwards
compatibility can be relaxed) because they interfere with that (and
built-ins are close enough to "the core language" that they _do_ need
to be weighed severely in terms of "how often will they be useful").

Thanks, that's what I was trying to understand.

David
 

Francis Avila

Alex Martelli said:
Francis said:
[a reduce() clone]
[a long and thorough critique]

*Sigh* My only objective was to combine functional/recursive/iterative
programming styles into an olive branch to extend to both parties of this
silly war.

But oh well. Like I said, I never found a use for reduce, but alas, I am a
ham-fisted programmer entirely lacking in subtlety and art....
 

Douglas Alan

Alex Martelli said:
Anybody who can claim (without a smilie) that "sum is redundant with
reduce" has clearly lost all touch with reality.

*All* touch? My girlfriend still feels real to me when I touch
her....
"sum" accounts for about 80-90% of the uses of reduce found on large
existing codebases,

I have *never* used sum() (or a reduce() that amounts to sum()). And
I use reduce() fairly often, in more or less the way that I described.
and even more when only _good_ uses of reduce are considered. "sum"
is way faster than "reduce(operator.add, ..." -- and the advantage
in speed *AND* simplicity

I don't care about speed all that much when using Python. If I did, I
probably wouldn't be using Python. And to me, "simple" means
providing a small number of orthogonal features that combine easily,
rather than a lot of special purpose features, that all have to be
learned, remembered, and documented separately. sum() is a
special-purpose feature, while reduce() is a very general feature that
is very easily adapted to a myriad of situations. So what if 80-90%
of the uses of reduce in your experience is to support sum()? That's
not my experience, and even if it were, if sum() is used a lot, then
10-20% of that is still a lot. max(), like sum(), also has a built-in
reduce. So the current approach for the future of Python seems to be
to build reduce() into everything that might need it? I'd prefer to
see reduce() taken out of sum() and out of max() and then no one needs
to document or learn that sum() and max() duplicate the functionality
of reduce(). Just learn, remember, and document one thing, and you're
all set.
grows even more when one considers how
often that typical reduce is miscoded as "reduce(lambda x, y: x+y,
..." or "reduce(int.__add__, ..." and the like.

People who do this, no doubt, are more interested in coding than
wading through a big manual to figure out to make use of Python's
idiosyncracities to get their code to run as fast as possible in this
one particular implementation of Python.

Such people are surely not going to want to wade through the manual to
find sum().
The complications of going to a higher-order function and learning about
module operator "pay off" in a SLOWDOWN compared to elementary code...!

All of 3.4% according to your calculations. I'm not losing any sleep
over it.

Learning about module operator is not a chore, since I'd want to know
how to access these operators anyway.

Higher order functions are not complicated -- they are beautiful and
elegant.
The "fancy" ways reduce is often coded slow things down further by about
two times. This is clearly unacceptable.

There's nothing fancy about reduce() -- it's taught in every CS 101
class!
And the right solution is, quite obviously:
[alex@lancelot test]$ timeit.py -c 'sum(xrange(999))'
10000 loops, best of 3: 129 usec per loop

Why? Because it saves having to type "operator.add,"? Because it's a
bit faster? Who cares about either one of these things. Certainly
not I.
giving speed, simplicity, and code that is clean, high-level, concise, and
immediately obvious to any reader or maintainer.

reduce() is immediately obvious to anyone who has ever taken CS
101. sum() is going in the direction of Perl, where there's a separate
idiomatic feature for everything that anyone might ever want to do.

I like Python because it is a very simple, generic, easy-to-learn
programming language, where I didn't have to learn much of anything new.
It was just like the myriad of other programming languages I have
programmed in, only less cluttered than most, and
"light-weight" in its implementation and accessibility.

I have nothing against learning new stuff, by the way, but when I
want to learn new stuff, I'm not particularly interested in the
idiosyncrasies of Python -- I'd spend my effort learning something a
bit more exotic, like OCaml, or Haskell, that might stretch my brain a
bit. Learning a bunch of idiosyncratic detail is just tedious. This
is one of the most important reasons that Perl sucks so badly.
The paladins of reduce don't help their case by posting reduce "use
cases" that are complicated and rarely optimal:

There's nothing complicated about my example. It's the epitome of
clarity and elegance.
In 2.4, the much simpler expression:
list.sorted(seq, key=len)[-1]
is about 3 times faster on a typical long seq (seq=map(str, xrange(2000))).

There's nothing simpler about that (in fact, conceptually, it's much
more complex) and it's only faster because of Python implementation
details. In most languages, or an alternate implementation of Python,
sorting a list would typically be significantly slower than a linear
search on it.
We still don't have (in the current pre-alpha 2.4) the "one obvious way",
max(seq, key=len), because max & other functions haven't yet been upgraded
to take the optional parameter key= as lists' sort and sorted methods, but
I trust they will be.

More annoying bloat. Instead of pumping up max() with bloat, tell
people to use reduce(). It's already there and simple and elegant.
And if one doesn't care for the concision of these new idioms, the most
elementary programming is still best:
[alex@lancelot test]$ timeit.py -c -s'xs=map(str,xrange(2000))' -s'''
def longest(seq):
    seq=iter(seq)
    best = seq.next()
    blen = len(best)
    for item in seq:
        ilen = len(item)
        if ilen>blen:
            best = item
            blen = ilen
    return best
''' 'longest(xs)'
1000 loops, best of 3: 980 usec per loop

That's supposed to be best? It would take me 100 times as long to
figure out that code.
Once again, reduce proves to be an "attractive nuisance":

With the emphasis on "attractive". It results in easy-to-read, easy
to remember coding techniques.
Higher order programming, such as the "max(seq, key=len)" which I
hope will make it into 2.4, is of course way simpler, faster, and
more concise -- but the "neither fish nor fowl" level at which most
uses of reduce fall just doesn't cut it, IMHO.

Except for that I'd never use it because I'd never even know it was
there because I prefer to spend my time writing software than wading
through manuals.
And btw, if one wants concision rather than speed, even in 2.3, the
typical DSU-like idiom:
def longest(seq):
    aux = [ (len(x), -i, x) for i, x in enumerate(seq) ]
    return max(aux)[-1]
still handily beats the reduce-coded alternative (even though
having the len= parameter instead of explicit decoration will
of course make it even smoother, simpler, and faster, in 2.4).

The above is particularly ugly.

|>oug
 

Robin Becker

Francis Avila said:
Alex Martelli said:
Francis said:
[a reduce() clone]
[a long and thorough critique]

*Sigh* My only objective was to combine functional/recursive/iterative
programming styles into an olive branch to extend to both parties of this
silly war.

But oh well. Like I said, I never found a use for reduce, but alas, I am a
ham-fisted programmer entirely lacking in subtlety and art....
This whole thread is reminiscent of vi vs emacs or an os war or similar.
It's a pity that people with a preferred style should be so dogmatic
that they want to remove language features to prevent others using them.

The whole 'only one way to do it' concept is almost certainly wrong.
There should be maximal freedom to express algorithms. As others have
stated, min, max,... sum et al are useful specialisations, but the fact that
they cover 99% of the space doesn't mean that reduce is redundant.
-Eliminate reducespeak and control the future-ly yrs-
Robin Becker
 

Alex Martelli

Robin Becker wrote:
...
This whole thread is reminiscent of vi vs emacs or an os war or similar.
It's a pity that people with a preferred style should be so dogmatic
that they want to remove language features to prevent others using them.

Python's essence is simplicity and uniformity. Having extra features
in the language and built-ins runs directly counter to that.

The whole 'only one way to do it' concept is almost certainly wrong.

Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.

The world is *chock full* of languages designed with the philosophy
of "the more, the merrier". Why can't you let us have just _*ONE*_
higher-level language designed with full appreciation for "the spirit
of C" (as per the C standard's Rationale) in its key principles:

"""
(c) Keep the language small and simple.
(d) Provide only one way to do an operation.
"""

Maybe "the spirit of C" is as wrong as you claim it is. Personally,
I _cherish_ it, and in Python I found a language that cherishes it
just as much, while working at a much higher semantic level and thus
affording me vastly higher productivity. I will not stand idly by,
and let you or anybody else attack its foundations and thus try to
destroy its key strengths, without a fight.

There should be maximal freedom to express algorithms. As others have

Want "maximal freedom to express algorithms"? You can choose among
a BAZILLION languages designed according to your philosophy -- even
sticking just to higher-level languages with dynamic typing and
generally infix/algoloid syntax, Perl is the obvious example, and if
you just can't stand its core syntax, in full respect of this
"maximal freedom" principle, you can of course change it, too -- see
http://www.csse.monash.edu.au/~damian/papers/HTML/Perligata.html and
revel in that _truly_ maximal freedom. Or, for a language that's
somewhat in-between, but still philosophically accepts the "maximal
freedom" tenet for which you crave, try Ruby -- if I _did_ want such
freedom, then Ruby is what I'd most likely use.

But can't you let us have *ONE* language that's designed according
to the concept you consider "almost certainly wrong"?! Why do YOU
use this language, even while you condemn its foundation and try to
undermine them, rather than using any one of the myriad that already
DO follow your philosophy?!

Trying to make Python into (e.g.) a Ruby with slightly different
cosmetics and "fine points" is truly absurd: if you want Ruby, you
know where to find it. Let Python stay Python, and leave those of
us who find ourselves best served by its design principles in peace!


Alex
 

Robin Becker

Alex Martelli said:
Robin Becker wrote:
...

Python's essence is simplicity and uniformity. Having extra features
in the language and built-ins runs directly counter to that.

no disagreement; reduce is in line with that philosophy. sum is a
shortcut and, as others have said, is less general.
Bingo! You disagree with the keystone of Python's philosophy. Every
other disagreement, quite consequently, follows from this one.

not so, I agree that there ought to be at least one way to do it.
Want "maximal freedom to express algorithms"? You can choose among

.... you may be right, but I object to attempts to restrict my existing
freedoms at the expense of stability of Python as a whole.
But can't you let us have *ONE* language that's designed according

I am not attempting to restrict anyone or change anyone's programming
style. I just prefer to have a stable language.
 

G.A.

... you may be right, but I object to attempts to restrict my existing
freedoms at the expense of stability of Python as a whole.

To paraphrase a famous saying (at least in the context of US law), your
freedom ends where another person's maintenance headache begins.

Gary
 
