reduce() anomaly?

  • Thread starter Stephen C. Waterbury

Alex Martelli

Georgy said:
seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] )  # nearly twice as fast as ....zip.... for long arrays

If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?

[alex@lancelot xine-lib-1-rc2]$ python a.py
reduce: 100 loops, best of 3: 13.8 msec per loop
zip: 100 loops, best of 3: 18 msec per loop
izip: 100 loops, best of 3: 7.6 msec per loop
w2: 100 loops, best of 3: 7.1 msec per loop
wib: 100 loops, best of 3: 12.7 msec per loop
loop: 100 loops, best of 3: 8.9 msec per loop
map: 100 loops, best of 3: 7.6 msec per loop

itertools.w2 is an experimental addition to itertools which I
doubt I'll be allowed to put in (gaining less than 10% wrt the
more general izip is hardly worth a new itertool, sigh). But,
apart from that, map and izip are head to head, and the plain
good old Python-coded loop is next best...! reduce is slowest.

My code...:


if __name__ != '__main__':
    def difs_reduce(seq):
        differences = []
        def neighborDifference(left, right, accum=differences.append):
            accum(right - left)
            return right
        reduce(neighborDifference, seq)
        return differences

    def difs_zip(seq):
        return [ b-a for a, b in zip(seq,seq[1:]) ]

    import itertools
    def difs_izip(seq):
        return [ b-a for a, b in itertools.izip(seq,seq[1:]) ]

    def difs_w2(seq, wib=itertools.w2):
        return [ b-a for a, b in wib(seq) ]

    def window_by_two(iterable):
        it = iter(iterable)
        last = it.next()
        for elem in it:
            yield last, elem
            last = elem

    def difs_wib(seq, wib=window_by_two):
        return [ b-a for a, b in wib(seq) ]

    def difs_loop(seq):
        differences = []
        it = iter(seq)
        a = it.next()
        for b in it:
            differences.append(b-a)
            a = b
        return differences

    def difs_map(seq):
        return map(int.__sub__, seq[1:], seq[:-1])

if __name__ == '__main__':
    import timeit
    bargs = ['-c', '-simport a', '-sx=range(9999)']
    funs = 'reduce zip izip w2 wib loop map'.split()
    for fun in funs:
        args = bargs + ['a.difs_%s(x)' % fun]
        print '%8s:' % fun,
        timeit.main(args)


Alex
 

Erik Max Francis

Alex said:
However, so many of reduce's practical use cases are eaten up by sum,
that reduce is left without real use cases to justify its existence.

Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is
already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.
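A minimal sketch of the flattening reduction Erik means (a hedged illustration, not his code; Alex benchmarks this very pattern below):

import operator

lists = [['a list'], ['of', 'non'], ['recursive', 'lists']]
print reduce(operator.add, lists, [])
# -> ['a list', 'of', 'non', 'recursive', 'lists']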
But comparing plain Python code to a built-in that's almost bereft of
good use cases, and finding the plain Python code _faster_ on such a
regular basis, is IMHO perfectly legitimate.

reduce will be at least as fast as writing an explicit loop.
Potentially more if the function object used is itself a builtin
function.
 

Alex Martelli

Erik said:
Any reduction that doesn't involve summing won't be handled by sum.
Flattening a list of (non-recursive) lists is a good example. reduce is

If this is a good example, what's a bad one?

>>> sum([ ['a list'], ['of', 'non'], ['recursive', 'lists'] ], [])
['a list', 'of', 'non', 'recursive', 'lists']

sum, exactly like reduce(operator.add ... , has bad performance for this
kind of one-level flattening, by the way. A plain loop on .extend is much
better than either (basically, whenever a loop of x+=y has much better
performance than one of x = x + y -- since the latter is what both sum
and reduce(operator.add... are condemned to do -- you should consider
choosing a simple loop if performance has any importance at all).
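To make that x = x + y versus x += y distinction concrete, a small hedged sketch (the function names are made up):

def flatten_quadratic(listoflists):
    # x = x + y builds a brand-new list every time, copying everything
    # accumulated so far: O(N**2) overall, which is what sum and
    # reduce(operator.add, ...) are stuck doing.
    x = []
    for y in listoflists:
        x = x + y
    return x

def flatten_linear(listoflists):
    # x += y extends in place, touching only the new items: O(N) overall.
    x = []
    for y in listoflists:
        x += y
    return x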
already in the language; removing an existing, builtin function seems
totally inappropriate given that it's there for a reason and there will
be no replacement.

Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both Europython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks. I much prefer to "lose" the "legacy only real use" built-ins
(presumably to some kind of legacy.py module whence they can easily
be installed to keep some old and venerable piece of code working) than
to choose either of those tragic alternatives.

3.0 is years away, but functions that are clearly headed for obsolescence
then should, IMHO, already be better avoided in new code;
particularly because the obsolescence is due to the existence of better
alternatives. You claim "there will be no replacement", but in fact I have
posted almost a dozen possible replacements for 'reduce' for a case that
was being suggested as a good one for it and _every_ one of them is
better than reduce; I have posted superior replacements for ALL uses of
reduce in the standard library (except those that are testing reduce itself,
but if that's the only good use of reduce -- testing itself -- well, you see
my point...:). I don't intend to spend much more time pointing out how
reduce can be replaced by better code in real use cases, by the way:).

reduce will be at least as fast as writing an explicit loop.

You are wrong: see the almost-a-dozen cases I posted to try and demolish
one suggested "good use" of reduce.
Potentially more if the function object used is itself a builtin
function.

In one of the uses of reduce in the standard library, for which I posted
superior replacements today, the function object was int.__mul__ -- builtin
enough for you? Yet I showed how using recursion and memoization instead of
reduce would speed up that case by many, many times.

Another classic example of reduce being hopelessly slow, despite using
a built-in function, is exactly the "flatten a 1-level list" case mentioned
above. Try:
x = reduce(operator.add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter if need be
can get another nice little multiplicative speedup by hoisting the x.extend
lookup, but the key issue is O(N**2) reduce vs O(N) loop...).

[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]' 'x=reduce(operator.add, xs, x)'
100 loops, best of 3: 8.7e+03 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]' 'for L in xs: x.extend(L)'
1000 loops, best of 3: 860 usec per loop
[alex@lancelot src]$ timeit.py -c -s'xs=[[x] for x in range(999)]' -s'import operator' 'x=[]; xex=x.extend' 'for L in xs: xex(L)'
1000 loops, best of 3: 560 usec per loop

See what I mean? Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce, and if that's not quite enough and
you want it 15 times faster instead, that's trivial to get, too.


Alex
 

Alex Martelli

JCM said:
if one is in a hurry, recursion and
memoization are obviously preferable:
def facto(n, _memo={1:1}):
    try: return _memo[n]
    except KeyError:
        result = _memo[n] = n * facto(n-1)
        return result
...
[alex@lancelot bo]$ timeit.py -c -s'import facs' 'facs.facto(13)'
1000000 loops, best of 3: 1.26 usec per loop

I'm going off topic, but it's really not fair to compare a memoized
function to non-memoized implementations using a "best of 3" timing
test.

The best-of-3 is irrelevant, it's the million loops that help:).

Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.

Once you run timeit.py, with its million loops (and the best-of-3, but
that's not crucial:), this effect disappears. But on smaller tests it is
more easily seen. You can for example define the memoized functions
by a def nested inside another, so each call of the outer function will undo
the memoization, and exercise them that way even with timeit.py. E.g.:

import operator

def wireduce(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = reduce(operator.mul, xrange(2,x+1), 1)
            return result
    for x in range(N, 0, -1): factorial(x)

def wirecurse(N=23):
    def factorial(x, _memo={0:1, 1:1}):
        try: return _memo[x]
        except KeyError:
            result = _memo[x] = x * factorial(x-1)
            return result
    for x in range(N, 0, -1): factorial(x)

[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wireduce()'
1000 loops, best of 3: 710 usec per loop
[alex@lancelot bo]$ timeit.py -c -s'import aa' 'aa.wirecurse()'
1000 loops, best of 3: 280 usec per loop


Alex
 

Georgy Pruss

Alex Martelli said:
Georgy said:
seq=[1,3,4,7]
map( int.__sub__, seq[1:], seq[:-1] )  # nearly twice as fast as ....zip.... for long arrays

If this is a race, then let's measure things properly and consider
a few more alternatives, shall we...?

:) No, it's not a race. I just found the map expression to be clear and elegant.
Fortunately, it's one of the fastest solutions too.

G-:
[alex@lancelot xine-lib-1-rc2]$ python a.py
reduce: 100 loops, best of 3: 13.8 msec per loop
zip: 100 loops, best of 3: 18 msec per loop
izip: 100 loops, best of 3: 7.6 msec per loop
w2: 100 loops, best of 3: 7.1 msec per loop
wib: 100 loops, best of 3: 12.7 msec per loop
loop: 100 loops, best of 3: 8.9 msec per loop
map: 100 loops, best of 3: 7.6 msec per loop

itertools.w2 is an experimental addition to itertools which I
doubt I'll be allowed to put in (gaining less than 10% wrt the
more general izip is hardly worth a new itertool, sigh). But,
apart from that, map and izip are head to head, and the plain
good old Python-coded loop is next best...! reduce is slowest.

My code...:
<...>

Alex
 

Terry Reedy

Alex Martelli said:
above. Try:
x = reduce(operator.add, listoflists, x)
vs:
for L in listoflists: x.extend(L)
for a sufficiently big listoflists, and you'll see... (the latter if need be
can get another nice little multiplicative speedup by hoisting the x.extend
lookup, but the key issue is O(N**2) reduce vs O(N) loop...).

Right: however that issue and the possibility of hoisting x.extend
have *nothing* to do with reduce vs. for. For a fair comparison of
the latter pair, try the following, which is algorithmically equivalent
to your sped-up for loop.
>>> xs = [[i] for i in range(10)]
>>> x = []
>>> xtend = x.extend
>>> reduce(lambda dum, L: xtend(L), xs, x)
>>> x
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]


[timeits snipped]...
See what I mean?

That you hate reduce?

I am sure that the O(N) reduce would be similarly faster than an
O(N**2) operator.add for loop: what would *that* mean?
Already for a mere 999 1-item lists, the plain Python
code is 10 times faster than reduce,

No, you have only shown that for N=999, O(N) can be O(N**2)/10, and
that smart programmers who understand that can write better (faster)
code than those who do not.

Terry J. Reedy

PS: for practical rather than didactic programming, I am pretty sure I
would have written a for loop 'reduced' with xtend.
 

Dave Benjamin

Nope -- apply beats it, given that in the last few years apply(f, args) is
best spelled f(*args) ...!-)

Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)
 

Aahz

Yeah, and speaking of which, where was the debate on explicit vs. implicit
and avoiding Perl-style line-noise syntax-sugaring when that feature was
snuck in? ;)

There wouldn't be any; it's a straightforward integration with the
long-standing ability to use *args and **kwargs when defining a
function.
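A minimal illustration of the two sides Aahz describes (a hedged sketch; f and g are made-up names): star-unpacking at the call site, which replaces apply, mirrors the packing form long allowed in a def.

def f(a, b, c):
    return a + b + c

args = (1, 2, 3)
kwargs = {'c': 30}

print apply(f, args)       # old spelling
print f(*args)             # same call, newer spelling
print f(1, 2, **kwargs)    # keyword unpacking at the call site

def g(*args, **kwargs):    # the long-standing packing form in a def
    return f(*args, **kwargs)

print g(1, 2, c=3)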
 

Douglas Alan

Alex Martelli said:
Existing, builtin functions _will_ be removed in 3.0: Guido is on record
as stating that (at both Europython and OSCON -- I don't recall if he
had already matured that determination at PythonUK time). They
exist for a reason, but when that reason is: "once upon a time, we
thought (perhaps correctly, given the way the rest of the language and
library was at the time) that they were worth having", that's not
sufficient reason to weigh down the language forever with their
not-useful-enough weight. The alternatives to removing those parts that
aren't useful enough any more are, either to stop Python's development
forever, or to make Python _bloated_ with several ways to perform the
same tasks.

I agree: Down with bloat! Get rid of sum() -- it's redundant with
reduce(), which I use all the time, like so:

def longer(x, y):
    if len(y) > len(x): return y
    else: return x

def longest(seq):
    return reduce(longer, seq)

print longest(("abc", "yumyum!", "hello", "goodbye", "?"))

=> yumyum!
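For comparison with the kind of rewrites Alex advocates elsewhere in the thread, a plain-loop version of the same idea (a hedged sketch, not Douglas's code):

def longest_loop(seq):
    it = iter(seq)
    best = it.next()
    for item in it:
        # keep the first of equally long items, matching the reduce version
        if len(item) > len(best):
            best = item
    return best

print longest_loop(("abc", "yumyum!", "hello", "goodbye", "?"))   # => yumyum!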

|>oug
 

Anton Vredegoor

Alex Martelli said:
Of course you can memoize any pure function of hashable args. But
memoizing a recursive implementation of factorial has a nice property,
shared by other int functions implemented recursively in terms of their
values on other ints, such as fibonacci numbers: the memoization you do for
any value _helps_ the speed of computation for other values. This nice
property doesn't apply to non-recursive implementations.

Maybe we need a better example because it is possible to use reduce on
a function which has side effects.

Anton.

class _Memo:
    biggest = 1
    facdict = {0 : 1, 1 : 1}

def fac(n):
    def mymul(x,y):
        res = x*y
        _Memo.facdict[y] = res
        _Memo.biggest = y
        return res
    def factorial(x):
        b = _Memo.biggest
        if x > b: return reduce(mymul, xrange(b+1,x+1), b)
        return _Memo.facdict[x]
    return factorial(n)

def test():
    print fac(5)

if __name__=='__main__':
    test()
 

Anton Vredegoor

def factorial(x):
    b = _Memo.biggest
    if x > b: return reduce(mymul, xrange(b+1,x+1), b)
    return _Memo.facdict[x]

Sorry, this part should be:

def factorial(x):
    b = _Memo.biggest
    if x > b:
        start = _Memo.facdict[b]
        return reduce(mymul, xrange(b+1,x+1), start)
    return _Memo.facdict[x]
 

Alex Martelli

Anton said:
def factorial(x):
    b = _Memo.biggest
    if x > b: return reduce(mymul, xrange(b+1,x+1), b)
    return _Memo.facdict[x]

Sorry, this part should be:

def factorial(x):
    b = _Memo.biggest
    if x > b:
        start = _Memo.facdict[b]
        return reduce(mymul, xrange(b+1,x+1), start)
    return _Memo.facdict[x]


The horrid complication of this example -- including the global side
effects, and the trickiness that made even you, its author, fall into such
a horrid bug -- is, I think, a _great_ testimonial to why reduce should go.
I would also include in this harrowing complexity the utter frailty of the
(textually separated) _Memo / mymul coupling, needed to maintain
_Memo's unstated invariant. This code is as close to unmaintainable,
due to complexity and brittleness, as Python will allow.

I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.

Which is exactly why I'll be enormously glad to see the back of it, come
3.0 time, and in the meantime I heartily suggest to all readers that (like,
say, apply) it is best (_way_ best) avoided in all new code.


Alex
 

Robin Becker

I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.

Which is exactly why I'll be enormously glad to see the back of it, come
3.0 time, and in the meantime I heartily suggest to all readers that (like,
say, apply) it is best (_way_ best) avoided in all new code.
.....

I don't understand why reduce makes the function definition brittle. The
code was brittle independently of the usage. Would a similar brittle
usage of sum make sum bad?

I really don't think reduce, map etc are bad. Some just don't like that
style. I like short code and if reduce, filter et al make it short I'll
use that.

As for code robustness/fragility how should it be measured? Certainly I
can make any code fragile if I keep randomly changing the language
translator. So code using reduce is fragile if we remove reduce.
Similarly sum is fragile when we intend to remove that. Code is made
robust by using features that stay in the language. Keep reduce and
friends and make Python more robust.
 

Anton Vredegoor

Alex Martelli said:
I will gladly concede that reduce IS hard to beat if your purpose is to
write complex, brittle, hard-to-maintain code.

Answering in the same vein I could now concede that you had
successfully distracted from my suggestion to find a better example in
order to demonstrate the superiority of recursive techniques over
iterative solutions in programming memoization functions.

However I will not do that, and instead concede that my code is often
"complex, brittle, hard-to-maintain". This probably results from me
being Dutch, and so being incapable of being wrong, one has to mangle
code (and speech!) in mysterious ways in order to simulate mistakes.

I'd like to distance myself from the insinuation that such things
happen on purpose. It was not necessary to use reduce to show an
iterative memoization technique, but since the thread title included
reduce I felt compelled to use it in order to induce the reader to
find a better example of recursive functions that memoize and which
cannot be made iterative.

This would generate an important precedent for me because I have
believed for a long time that every recursive function can be made
iterative, and gain efficiency as a side effect.

By the way, there is also some other association of reduce besides the
one with functional programming. Reductionism has become politically
incorrect in certain circles because of its association with
neopositivism and behavioristic psychology.

I would like to mention that there is also a phenomenological
reduction, which contrary to neopositivistic reduction does not try to
express some kind of transcendental idealism, in which the world would
be made transparent in some light of universal reason. Instead it
tries to lead us back to the world and the concrete individual
properties of the subjects under discussion.

Anton
 

Alex Martelli

Anton Vredegoor wrote:
...
find a better example of recursive functions that memoize and which
cannot be made iterative.

This would generate an important precedent for me because I have
believed for a long time that every recursive function can be made
iterative, and gain efficiency as a side effect.

Well, without stopping to ponder the issue deeply, I'd start with:

def Ack(M, N, _memo={}):
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

M>=0 and N>=0 (and M and N both integers) are preconditions of the Ack(M, N)
call.

There is a substantial body of work on this function in computer science
literature, including work on a technique called "incrementalization" which,
I believe, includes partial but not total iterativization (but I am not
familiar with the details). I would be curious to examine a totally
iterativized and memoized version, and comparing its complexity, and
performance on a few typical (M,N) pairs, to both this almost-purest
recursive version, and an incrementalized one.
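Purely as a sketch of what a totally iterativized version might look like (not from the thread, and making no claims about the incrementalization literature): the same memoization, driven by an explicit stack of pending (M, N) frames instead of the call stack.

def ack_iter(M, N, _memo={}):
    stack = [(M, N)]
    while stack:
        m, n = stack[-1]
        if (m, n) in _memo:
            stack.pop()
        elif m == 0:
            _memo[m, n] = n + 1
            stack.pop()
        elif n == 0:
            if (m - 1, 1) in _memo:
                _memo[m, n] = _memo[m - 1, 1]
                stack.pop()
            else:
                stack.append((m - 1, 1))      # compute the dependency first
        elif (m, n - 1) not in _memo:
            stack.append((m, n - 1))          # inner call Ack(m, n-1)
        else:
            inner = _memo[m, n - 1]
            if (m - 1, inner) in _memo:
                _memo[m, n] = _memo[m - 1, inner]
                stack.pop()
            else:
                stack.append((m - 1, inner))  # outer call Ack(m-1, inner)
    return _memo[M, N]

Each frame stays on the stack until its dependencies appear in the memo, so the same pairs get evaluated as in the recursive version, just bookkept by hand.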


Alex
 

David Eppstein

Recursive memoization can be better than iteration when the recursion
can avoid evaluating a large fraction of the possible subproblems.

An example would be the 0-1 knapsack code in
http://www.ics.uci.edu/~eppstein/161/python/knapsack.py

If there are n items and size limit L, the iterative versions (pack4 and
pack5) take time O(nL), while the recursive version (pack3) takes time
O(min(2^n, nL)). So the recursion can be better when L is really large.
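A hedged illustration of the shape of that recursion (a generic memoized 0-1 knapsack, not the pack3/pack4/pack5 code from the linked file; the item data in the last line is made up):

def knapsack(sizes, values, L):
    memo = {}
    def best(i, room):
        # best total value using items i.. with 'room' capacity left
        if i == len(sizes):
            return 0
        if (i, room) not in memo:
            result = best(i + 1, room)                 # skip item i
            if sizes[i] <= room:                       # or take it, if it fits
                result = max(result, values[i] + best(i + 1, room - sizes[i]))
            memo[i, room] = result
        return memo[i, room]
    return best(0, L)

print knapsack([3, 4, 5], [4, 5, 6], 8)   # -> 10

Only the (i, room) pairs actually reached from (0, L) are ever stored, which is where the O(min(2^n, nL)) bound comes from.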

An example of this came up in one of my research papers some years ago,
http://www.ics.uci.edu/~eppstein/pubs/p-3lp.html
which involved a randomized recursive memoization technique with runtime
significantly faster than that for iterating through all possible
subproblems.
 

Anton Vredegoor

Alex Martelli said:
def Ack(M, N, _memo={}):
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

M>=0 and N>=0 (and M and N both integers) are preconditions of the Ack(M, N)
call.

Defined as above the number of recursions is equal to the return
value, because there is only one increment per call.

Have a look at the paper about the ackerman function at:

http://www.dur.ac.uk/martin.ward/martin/papers/

(the 1993 paper, halfway down the list, BTW, there's also something
there about automatically translating assembler to C-code, maybe it
would also be possible to automatically translate C-code to Python?
Start yet another subthread :)

Another thing is that long integers cannot be used to represent the
result values because the numbers are just too big.

It seems possible to make an iterative version that computes the
values more efficiently, but it suffers from the same number
representation problem.

Maybe Bengt can write a class for representing very long integers as
formulas. For example an old post by François Pinard suggests that:

ackerman(4, 4) == 2 ** (2 ** (2 ** (2 ** (2 ** (2 ** 2))))) - 3
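For the smaller rows the values still fit comfortably in Python's longs, so the pattern behind this kind of power-tower formula can be checked directly; a hedged aside (not from the thread), reusing the memoized Ack Alex posted:

def Ack(M, N, _memo={}):
    # same memoized definition Alex posted earlier in the thread
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

for n in range(6):
    assert Ack(3, n) == 2 ** (n + 3) - 3   # the M=3 row is already exponential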

Anton
 

Bengt Richter

Alex Martelli said:
def Ack(M, N, _memo={}):
    try: return _memo[M,N]
    except KeyError: pass
    if not M:
        result = N + 1
    elif not N:
        result = Ack(M-1, 1)
    else:
        result = Ack(M-1, Ack(M, N-1))
    _memo[M,N] = result
    return result

M>=0 and N>=0 (and M and N both integers) are preconditions of the Ack(M, N)
call.

Defined as above the number of recursions is equal to the return
value, because there is only one increment per call.

Have a look at the paper about the ackerman function at:

http://www.dur.ac.uk/martin.ward/martin/papers/

(the 1993 paper, halfway down the list, BTW, there's also something
there about automatically translating assembler to C-code, maybe it
would also be possible to automatically translate C-code to Python?
Start yet another subthread :)

Another thing is that long integers cannot be used to represent the
result values because the numbers are just too big.

It seems possible to make an iterative version that computes the
values more efficiently, but it suffers from the same number
representation problem.

Maybe Bengt can write a class for representing very long integers as
formulas. For example an old post by François Pinard suggests that:

ackerman(4, 4) == 2 ** (2 ** (2 ** (2 ** (2 ** (2 ** 2))))) - 3

Thanks Anton ;-)
If they are too big to represent, they are probably also too big to arrive at
in practical time counting by one ;-)

It is interesting to trace the composition of the args to successive calls and
label which recursive calls they were due to, though I don't know
what to make of it ;-) A quick hack (I may have goofed) shows:
2 2 root: M N
2 1 M&N argev: M (N-1)
2 0 M&N argev: M ((N-1)-1)
1 1 not N: (M-1) 1
1 0 M&N argev: (M-1) (1-1)
0 1 not N: ((M-1)-1) 1
0 2 M&N: ((M-1)-1) (1+1)
1 3 M&N: (M-1) ((1+1)+1)
1 2 M&N argev: (M-1) (((1+1)+1)-1)
1 1 M&N argev: (M-1) ((((1+1)+1)-1)-1)
1 0 M&N argev: (M-1) (((((1+1)+1)-1)-1)-1)
0 1 not N: ((M-1)-1) 1
0 2 M&N: ((M-1)-1) (1+1)
0 3 M&N: ((M-1)-1) ((1+1)+1)
0 4 M&N: ((M-1)-1) (((1+1)+1)+1)
1 5 M&N: (M-1) ((((1+1)+1)+1)+1)
1 4 M&N argev: (M-1) (((((1+1)+1)+1)+1)-1)
1 3 M&N argev: (M-1) ((((((1+1)+1)+1)+1)-1)-1)
1 2 M&N argev: (M-1) (((((((1+1)+1)+1)+1)-1)-1)-1)
1 1 M&N argev: (M-1) ((((((((1+1)+1)+1)+1)-1)-1)-1)-1)
1 0 M&N argev: (M-1) (((((((((1+1)+1)+1)+1)-1)-1)-1)-1)-1)
0 1 not N: ((M-1)-1) 1
0 2 M&N: ((M-1)-1) (1+1)
0 3 M&N: ((M-1)-1) ((1+1)+1)
0 4 M&N: ((M-1)-1) (((1+1)+1)+1)
0 5 M&N: ((M-1)-1) ((((1+1)+1)+1)+1)
0 6 M&N: ((M-1)-1) (((((1+1)+1)+1)+1)+1)
(7, '((((((1+1)+1)+1)+1)+1)+1)')

Is there a fast formula for computing results, ignoring the representation problem?

Regards,
Bengt Richter
 

Terry Reedy

David Eppstein said:
Recursive memoization can be better than iteration when the recursion
can avoid evaluating a large fraction of the possible subproblems.

An example would be the 0-1 knapsack code in
http://www.ics.uci.edu/~eppstein/161/python/knapsack.py

If there are n items and size limit L, the iterative versions (pack4 and
pack5) take time O(nL), while the recursive version (pack3) takes time
O(min(2^n, nL)). So the recursion can be better when L is really
large.

Are you claiming that one *cannot* write an iterative version (with
auxiliary stacks) of the same algorithm (which evaluates once each the
same restricted subset of subproblems) -- or merely that it would be
more difficult, and more difficult to recognize correctness (without
having mechanically translated the recursive version)?
An example of this came up in one of my research papers some years ago,
http://www.ics.uci.edu/~eppstein/pubs/p-3lp.html
which involved a randomized recursive memoization technique with runtime
significantly faster than that for iterating through all possible
subproblems.

And it is surely also faster than recursing through all possible
subproblems ;-).

It seems to me that the issue of algorithm efficiency is one of
avoiding unnecessary and redundant computation and that iterative
versus recursive syntax has little to do, per se, with such avoidance.

Standard example: the fibonacci function has at least two
non-constant-time, non-memoized algorithms: one exponential (due to
gross redundancy) and the other linear. Either can be expressed with
either recursion or iteration. Too often, people present recursive
exponential and iterative linear algorithms and falsely claim
'iteration is better (faster) than recursion'. I could just as well
present iterative exponential and recursive linear algorithms and make the
opposite false claim.
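Terry's point that either algorithm can be written in either style is easy to make concrete; a minimal sketch (not from the thread), in the thread's Python 2 idiom:

def fib_rec_exp(n):
    # recursive AND exponential: the gross redundancy Terry mentions
    if n < 2:
        return n
    return fib_rec_exp(n - 1) + fib_rec_exp(n - 2)

def fib_rec_lin(n, a=0, b=1):
    # recursive AND linear: carry the last two values along
    if n == 0:
        return a
    return fib_rec_lin(n - 1, b, a + b)

def fib_iter_lin(n):
    # iterative AND linear: the usual loop
    a, b = 0, 1
    for _ in xrange(n):
        a, b = b, a + b
    return a

Comparing fib_rec_exp against fib_iter_lin says nothing about recursion versus iteration as such; comparing fib_rec_lin against fib_iter_lin is the fair test.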

Having said all this, I quite agree that recursive expression is
sometime far better for getting a clear, visibly correct
implementation, which is why I consider iteration-only algorithm books
to be somewhat incomplete.

Terry J. Reedy
 

David Eppstein

"Terry Reedy said:
Are you claiming that one *cannot* write an iterative version (with
auxiliary stacks) of the same algorithm (which evaluates once each the
same restricted subset of subproblems) -- or merely that it would be
more difficult, and more difficult to recognize correctness (without
having mechanically translated the recursive version)?

Of course you can make it iterative with auxiliary stacks.
Any compiler for a compiled language would do that.
I don't think of that as removing the recursion, just hiding it.

I thought your question wasn't about semantic games, I thought it was
about the relation between memoization and dynamic programming. Both
compute and store the same subproblem values, but in different orders;
usually they are the same in complexity but not always.
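A hedged sketch of the order-of-evaluation contrast David describes, using binomial coefficients as the recurrence (neither function is from the thread):

def binom_memo(n, k, _memo={}):
    # top-down: subproblems are filled in as the recursion reaches them
    if k == 0 or k == n:
        return 1
    if (n, k) not in _memo:
        _memo[n, k] = binom_memo(n - 1, k - 1) + binom_memo(n - 1, k)
    return _memo[n, k]

def binom_dp(n, k):
    # bottom-up: the same recurrence, filled in row by row (Pascal's triangle)
    table = [[1] * (i + 1) for i in range(n + 1)]
    for i in range(2, n + 1):
        for j in range(1, i):
            table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
    return table[n][k]

The bottom-up table fills in every entry up to row n, while the memoized version only touches the C(i, j) it actually needs -- which is David's point about the orders, and occasionally the complexities, differing.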
Standard example: the fibonacci function has at least two
non-constant-time, non-memoized algorithms: one exponential (due to
gross redundancy) and the other linear.

Not to mention the logarithmic (in number of arithmetic operations)
algorithms... http://www.cs.utexas.edu/users/EWD/ewd06xx/EWD654.PDF
Too often, people present recursive exponential and iterative linear
algorithms and falsely claim 'iteration is better (faster) than
recursion'.

That sounds dishonest. But Fibonacci can be a good example of both
memoization and dynamic programming, and I would expect the iterative
linear version to be faster (by a constant factor) than the memoized
recursive one (also linear due to the memoization).
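For the curious, one of the logarithmic schemes David alludes to, in sketch form (fast doubling; a generic illustration, not the contents of the EWD note he links):

def fib_fast(n):
    def pair(k):
        # returns (F(k), F(k+1)) using the identities
        # F(2m) = F(m)*(2*F(m+1) - F(m)),  F(2m+1) = F(m)**2 + F(m+1)**2
        if k == 0:
            return (0, 1)
        a, b = pair(k // 2)
        c = a * (2 * b - a)
        d = a * a + b * b
        if k % 2:
            return (d, c + d)
        return (c, d)
    return pair(n)[0]

print fib_fast(10)   # -> 55

Only O(log n) doubling steps are needed, though each step multiplies ever-larger numbers.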
 
