why memoizing is faster

Andrea Crotti

I was showing a nice memoize decorator to a friend, using the classic
Fibonacci problem.

--8<---------------cut here---------------start------------->8---
def memoize(f, cache={}):
    def g(*args, **kwargs):
        # first we must build a key for the arguments of the call:
        # it's formed by the function itself, *args and **kwargs
        key = (f, args, frozenset(kwargs.items()))
        # if the result is not there yet, compute it, store it and return it
        if key not in cache:
            cache[key] = f(*args, **kwargs)

        return cache[key]

    return g

# without the caching, already for n = 100 it would be unfeasible
@memoize
def fib(n):
    if n <= 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)
--8<---------------cut here---------------end--------------->8---
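With the decorator in place, even large inputs come back instantly, for
example (the value follows the fib(0) == fib(1) == 1 convention used
above):

--8<---------------cut here---------------start------------->8---
>>> fib(100)
573147844013817084101
--8<---------------cut here---------------end--------------->8---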


It works very well, but he said that the iterative version would be
faster anyway.

But I tried this:

--8<---------------cut here---------------start------------->8---
def fib_iter(n):
    ls = {0: 1, 1: 1}
    for i in range(2, n+1):
        ls[i] = ls[i-1] + ls[i-2]

    return ls[max(ls)]
--8<---------------cut here---------------end--------------->8---

and what happens is surprising: this version is 20 times slower than the
recursive memoized version.
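A comparison along these lines can be timed with the standard timeit
module (a minimal sketch, assuming both definitions above live in the
same script; absolute numbers will of course vary by machine):

--8<---------------cut here---------------start------------->8---
import timeit

# rough timing sketch; note that the memoize cache is shared between
# repetitions, so only the first fib(100) call does real work
print(timeit.timeit("fib(100)",
                    setup="from __main__ import fib", number=1000))
print(timeit.timeit("fib_iter(100)",
                    setup="from __main__ import fib_iter", number=1000))
--8<---------------cut here---------------end--------------->8---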

I first had a list holding just the last two elements, and I thought that
maybe the swaps were expensive; now with a dictionary it should be very
fast.
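Such a two-variable version would look something like this (a minimal
sketch for illustration, with the hypothetical name fib_iter_swap,
assuming the same fib(0) == fib(1) == 1 convention as above):

--8<---------------cut here---------------start------------->8---
def fib_iter_swap(n):
    # keep only the last two values, swapping them at each step
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b
--8<---------------cut here---------------end--------------->8---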

Am I missing something?
Thanks,
Andrea