Why is recursion so slow?


slix

Recursion is awesome for writing some functions, like searching trees
etc., but wow, how can it be THAT much slower for computing Fibonacci
numbers?

Is the recursive definition computing fib(1) through fib(x-1) for every x?
Is that what lazy evaluation in functional languages avoids, thus making
recursive versions much faster?
Is recursive Fibonacci in Haskell as fast as an imperative solution in
a procedural language?

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr-1) + fibr(nbr-2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0

def fibf(n):
    sum = 0
    a = 1
    b = 1
    if n <= 2: return 1
    for i in range(3, n+1):
        sum = a + b
        a = b
        b = sum
    return sum


I found a version in Clojure that is superfast:
(def fib-seq
  (concat
   [0 1]
   ((fn rfib [a b]
      (lazy-cons (+ a b) (rfib b (+ a b)))) 0 1)))

(defn fibx [x]
  (last (take (+ x 1) fib-seq)))

(fibx 12000) returns instantly. Is it using lazy evaluation?
 

Luis Zarrabeitia

slix said:
Recursion is awesome for writing some functions, like searching trees
etc., but wow, how can it be THAT much slower for computing Fibonacci
numbers?

The problem is not with 'recursion' itself, but with the algorithm:

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr-1) + fibr(nbr-2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0

If you trace, say, fibr(5), you'll find that your code needs to compute fibr(4)
and fibr(3), and to compute fibr(4), it needs to compute fibr(3) and fibr(2). As
you can see, fibr(3), and its whole subtree, is computed twice. That is enough to
make it an exponential algorithm, and thus intractable. Luckily, the iterative
form is pretty readable and efficient. If you must insist on recursion (say,
perhaps the problem you are solving cannot be solved iteratively with ease), I'd
suggest you take a look at 'dynamic programming', or (easier but not
necessarily better) the 'memoize' design pattern.

Is the recursive definition computing fib(1) through fib(x-1) for every x?

Yes - that's what the algorithm says. (Well, actually, the algorithm says to
count more than once, hence the exponential behaviour.) The memoize pattern
could help in this case.
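The memoize pattern mentioned above can be sketched with functools.lru_cache (a minimal illustration, not from the original posts):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and cached, so the exponential call
    # tree collapses to a linear number of distinct computations.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(200))  # instant; the naive recursive version would never finish
```

The recursive definition is unchanged; only the caching decorator is new.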
Is that what lazy evaluation in functional languages avoids, thus making
recursive versions much faster?

Not exactly... Functional languages are (or should be) optimized for recursion,
but if the algorithm you write is still exponential, it will still take a long time.
Is recursive Fibonacci in Haskell as fast as an imperative solution in
a procedural language?
[...]
I found a version in Clojure that is superfast:

I've seen the Haskell implementation (quite impressive). I don't know Clojure
(is it a dialect of Lisp?), but that code seems similar to the Haskell one. If
you look closely, there is no recursion in that code (no function calls). The
Haskell code works by defining a list 'fib' as 'the list that starts with 0, 1,
and from there on, each element is the sum of the corresponding elements of
fib and of tail fib'. The lazy evaluation there means that you can define a
list in terms of itself, but there is no recursive function call.
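The lazy-sequence idea can be sketched in Python with a generator (a rough analogue, since Python generators are also demand-driven; the names here are hypothetical):

```python
from itertools import islice

def fib_gen():
    """Lazily yield Fibonacci numbers: each value is produced only
    when the consumer asks for it, like a lazy list."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def fibx(x):
    # Take element number x, like (last (take (+ x 1) fib-seq)) above.
    return next(islice(fib_gen(), x, x + 1))
```

Each call to fibx walks the sequence once, so it is linear in x, with no exponential call tree.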

Cheers,

(I'm sleepy... I hope I made some sense)
 

Terry Reedy

slix said:
Recursion is awesome for writing some functions, like searching trees
etc., but wow, how can it be THAT much slower for computing Fibonacci
numbers?

The comparison below has nothing to do with recursion versus iteration.
(It is a common myth.) You (as have others) are comparing an
exponential, O(1.6**n), algorithm with a linear, O(n), algorithm.

def fibr(nbr):
    if nbr > 1:
        return fibr(nbr-1) + fibr(nbr-2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0

This is exponential due to calculating fib(n-2) twice, fib(n-3) thrice,
fib(n-4) 5 times, fib(n-5) 8 times, etc. (If you notice an interesting
pattern, you are probably correct!) (It returns None for n < 0.)
If rewritten iteratively, with a stack, it is still exponential.
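The repeated-subproblem counts are easy to see by instrumenting the function (a quick sketch, not from the thread; the `calls` counter is added for illustration):

```python
from collections import Counter

calls = Counter()

def fibr(nbr):
    calls[nbr] += 1  # record every time fib(nbr) is recomputed
    if nbr > 1:
        return fibr(nbr-1) + fibr(nbr-2)
    if nbr == 1:
        return 1
    if nbr == 0:
        return 0

fibr(10)
# calls[8] == 2, calls[7] == 3, calls[6] == 5, calls[5] == 8:
# the call counts themselves follow the Fibonacci sequence.
```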
def fibf(n):
    sum = 0
    a = 1
    b = 1
    if n <= 2: return 1
    for i in range(3, n+1):
        sum = a + b
        a = b
        b = sum
    return sum

This is a different, linear algorithm. fib(i), 0<=i<n, is calculated
just once. (It returns 1 for n < 0.) If rewritten (tail) recursively,
it is still linear.

In Python, an algorithm written with iteration is faster than the same
algorithm written with recursion because of the cost of function calls.
But the difference should be a multiplicative factor that is nearly
constant for different n. (I plan to do experiments to pin this down
better.) Consequently, algorithms that can easily be written
iteratively, especially using for loops, usually are in Python programs.

Terry Jan Reedy
 

Dan Upton

The comparison below has nothing to do with recursion versus iteration. (It
is a common myth.) You (as have others) are comparing an exponential,
O(1.6**n), algorithm with a linear, O(n), algorithm.

FWIW, though, it's entirely possible for a recursive algorithm with
the same asymptotic runtime to be wall-clock slower, just because of
all the extra work involved in setting up and tearing down stack
frames and executing call/return instructions. (If the function is
tail-recursive you can get around this, though I don't know exactly
how CPython is implemented and whether it optimizes that case.)
 

Terry Reedy

Dan said:
FWIW, though, it's entirely possible for a recursive algorithm with
the same asymptotic runtime to be wall-clock slower, just because of
all the extra work involved in setting up and tearing down stack
frames and executing call/return instructions.

Which is exactly why I continued with "In Python, an algorithm written
with iteration is faster than the same algorithm written with recursion
because of the cost of function calls. But the difference should be a
multiplicative factor that is nearly constant for different n. (I plan
to do experiments to pin this down better.) Consequently, algorithms
that can easily be written iteratively, especially using for loops,
usually are in Python programs."

People should read posts to the end before replying, in case it actually
says what one thinks it should, but just in a different order than one
expected.

If each call does only a small amount of work, as with fib(), I would
guess that time difference might be a factor of 2. As I said, I might
do some measurement sometime in order to get a better handle on when
rewriting recursion as iteration is worthwhile.
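Such a measurement might look like the following sketch, which times the same linear algorithm written both ways (the function names and repetition count are illustrative assumptions):

```python
import timeit

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_tailrec(n, a=0, b=1):
    # Same linear algorithm written recursively; CPython does not
    # eliminate the tail call, so each step pays a function-call cost.
    if n == 0:
        return a
    return fib_tailrec(n - 1, b, a + b)

t_iter = timeit.timeit(lambda: fib_iter(300), number=2000)
t_rec = timeit.timeit(lambda: fib_tailrec(300), number=2000)
print(f"iterative: {t_iter:.3f}s  recursive: {t_rec:.3f}s")
```

Both are O(n); only the constant factor from the function-call overhead differs.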

tjr
 

Dan Upton

People should read posts to the end before replying, in case it actually
says what one thinks it should, but just in a different order than one
expected.

Well, pardon me.
 

cokofreedom

In case anyone is interested...

# Retrieved from: http://en.literateprograms.org/Fibonacci_numbers_(Python)?oldid=10746

# Recursion with memoization
memo = {0: 0, 1: 1}
def fib(n):
    if n not in memo:
        memo[n] = fib(n-1) + fib(n-2)
    return memo[n]

# Quick exact computation of large individual Fibonacci numbers
# (fast doubling via Lucas numbers; assumes n >= 1)

def powLF(n):
    if n == 1: return (1, 1)
    L, F = powLF(n // 2)  # integer division (the original's n/2 is Python 2 style)
    L, F = (L**2 + 5*F**2) >> 1, L*F
    if n & 1:
        return ((L + 5*F) >> 1, (L + F) >> 1)
    else:
        return (L, F)

def fib(n):
    return powLF(n)[1]
 

Bruno Desthuilliers

Dan Upton wrote:
FWIW, though, it's entirely possible for a recursive algorithm with
the same asymptotic runtime to be wall-clock slower, just because of
all the extra work involved in setting up and tearing down stack
frames and executing call/return instructions. (If the function is
tail-recursive you can get around this, though I don't know exactly
how CPython is implemented and whether it optimizes that case.)

By decision of the BDFL, based on the argument that it makes debugging
harder, CPython doesn't optimize tail-recursive calls.
 

Antoon Pardon

Recursion is awesome for writing some functions, like searching trees
etc., but wow, how can it be THAT much slower for computing Fibonacci
numbers?

Try the following recursive function:

def fib(n):

    def gfib(a, b, n):
        if n == 0:
            return a
        else:
            return gfib(b, a + b, n - 1)

    return gfib(0, 1, n)
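One caveat worth noting (not mentioned in the post): CPython does not eliminate tail calls, so even this linear recursion consumes one stack frame per step. A runnable restatement of the function above with the failure mode shown:

```python
def fib(n):
    # Accumulator-style tail recursion: a linear number of calls,
    # one stack frame per step.
    def gfib(a, b, n):
        if n == 0:
            return a
        return gfib(b, a + b, n - 1)
    return gfib(0, 1, n)

print(fib(30))  # 832040

# Large n exceeds the interpreter's recursion limit (default ~1000):
try:
    fib(100000)
except RecursionError:
    print("RecursionError: still one stack frame per step")
```

Raising the limit with sys.setrecursionlimit only postpones the problem; the iterative form has no such ceiling.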
 

bruno.desthuilliers

By definition any function in a functional language will
always produce the same result if given the same arguments,

This is only true for pure functional languages.

I know you know it, but someone might think it also applies to impure
FPLs like Common Lisp.

(snip)
 

Rich Harkins

Nick Craig-Wood wrote:
[snip]
By definition any function in a functional language will
always produce the same result if given the same arguments, so you can
memoize any function.

Ah, so that's why time.time() seems to be stuck... ;)

Rich
 
