Fibonacci series recursion error


Chris Angelico

But the recursive solution has time complexity of O(logn).  The iterative
solution has time complexity of O(n).  That's a significant difference for
large n - a significant benefit of the recursive version.

Are you sure? It will produce n function calls and n multiplications;
how is that O(log n)?

Chris Angelico
 

Ian Kelly

But the recursive solution has time complexity of O(logn).  The iterative
solution has time complexity of O(n).  That's a significant difference for
large n - a significant benefit of the recursive version.


It's linear as written. I think you're talking about an
implementation like this one:

def powR(x,n):
    if n < 0:
        return 1.0 / powR(x, -n)
    elif n == 0:
        return 1
    else:
        half_n = n // 2
        root = powR(x, half_n)
        result = root * root
        if half_n + half_n == n - 1:
            result = result * x
        return result

That's O(log n), but it was harder to write, and it could just as
easily be iterative. In fact it might actually have been a bit
simpler to write it iteratively.
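For instance, something like this (a rough sketch; powI is just an
illustrative name, same exponentiation-by-squaring idea):

def powI(x, n):
    if n < 0:
        x, n = 1.0 / x, -n   # fold the negative case in up front
    result = 1
    while n:
        if n & 1:            # low bit of n set: multiply this square in
            result *= x
        x *= x               # square
        n >>= 1              # next bit of the exponent
    return result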
 

Hans Georg Schaathun

: > often recursively.  The compiler should generate code the way the CPU
: > thinks (most optimally), i.e. iteratively.
:
: The CPU can function iteratively or recursively.

I should have said 'hardware' rather than CPU. Recursively it is not as
optimal, as the call contexts have to be stored.

: But in my opinion, the programmer and the interpreter/compiler are
: teammates. If you allow programmers to be stupid, you will get a lot
: of stupid programmers writing code. (I think that's one of PHP's
: biggest downsides.) If the human and the machine know each other well
: enough, the resulting code can be orders of magnitude more efficient
: to run, *without taking any more time to code* because the programmer
: already knew the right things to do.

There are situations where that team is essential. However, in many
more situations, the programmer has to team primarily with domain
experts and work with a language in which the domain expert is fluent.
The compiler and hardware are slaves of the engineers, and extra staff
to team up with the compiler is an added cost and delay.

: Perhaps it will take you four hours to read through something, study
: it, maybe eyeball the compiler's source code, maybe do some
: disassembly. If you have a few years of career ahead of you, you'll
: benefit many times over from those few hours, and without spending
: extra time, you'll produce better code. THAT is what distinguishes the
: master from the novice.

That depends on /what/ your career is, and what you need to master.
 

rusi

Usually when someone gives a recursive solution to this problem, it's
exponential, but not this time.
Here's a logn one:

:) Ok so you beat me to it :D

I was trying to arrive at an answer to the question (by M Harris)
about why anyone would do something like fib recursively. I used power
since that illustrates the case more easily, viz:
A trivial power -- recursive or iterative -- is linear time -- O(n)
A more clever power can be O(log(n))
This more clever power can be trivially derived from the recursive one
as follows:

The recursive O(n) function:
def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

is just python syntax for the school algebra (or Haskell) equations:

x^0 = 1
x^(n+1) = x * x^n

Adding one more equation:
x^(2*n) = (x^2)^n = (x*x)^n
Re-python-ifying we get:


def pow(x,n):
    return 1 if (n==0) else pow(x*x, n // 2) if even(n) else x*pow(x,n-1)

def even(n): return n % 2 == 0

Can this be done iteratively? Of course, but it's not so trivial.

[If you believe it is, then try writing a log(n) fib iteratively :D ]
 

Dave Angel

Usually when someone gives a recursive solution to this problem, it's
exponential, but not this time.
Here's a logn one:

:) Ok so you beat me to it :D

I was trying to arrive at an answer to the question (by M Harris)
about why anyone would do something like fib recursively. I used power
since that illustrates the case more easily, viz:
A trivial power -- recursive or iterative -- is linear time -- O(n)
A more clever power can be O(log(n))
This more clever power can be trivially derived from the recursive one
as follows:

The recursive O(n) function:
def powR(x,n): return 1 if (n==0) else (x * powR(x, n-1))

is just python syntax for the school algebra (or Haskell) equations:

x^0 = 1
x^(n+1) = x * x^n

Adding one more equation:
x^(2*n) = (x^2)^n = (x*x)^n
Re-python-ifying we get:


def pow(x,n):
    return 1 if (n==0) else pow(x*x, n // 2) if even(n) else x*pow(x,n-1)

def even(n): return n % 2 == 0

Can this be done iteratively? Of course, but it's not so trivial.

[If you believe it is, then try writing a log(n) fib iteratively :D ]

What I'm surprised at is that nobody has pointed out that the logn
version is also generally more accurate, given traditional floats.
Usually getting the answer accurate (given the constraints of finite
precision intermediates) is more important than performance, but in this
case they go hand in hand.
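One quick way to eyeball the effect (a sketch; it compares many
sequential float multiplications against repeated squaring, using
math.pow as the reference point, and the exact deltas will vary by
platform):

import math

def pow_linear(x, n):
    # n sequential multiplications: roundoff accumulates n times
    result = 1.0
    for _ in range(n):
        result *= x
    return result

def pow_squaring(x, n):
    # O(log n) multiplications: far fewer roundings
    result = 1.0
    while n:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

x, n = 1.0000001, 10**6
ref = math.pow(x, n)
print(pow_linear(x, n) - ref)
print(pow_squaring(x, n) - ref)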

DaveA
 

rusi

What I'm surprised at is that nobody has pointed out that the logn
version is also generally more accurate, given traditional floats.
Usually getting the answer accurate (given the constraints of finite
precision intermediates) is more important than performance, but in this
case they go hand in hand.

Well!!
That's a slant that I was not aware of -- though fairly obvious on
second thought.

Thanks Dave.
 

Chris Angelico

What I'm surprised at is that nobody has pointed out that the logn version
is also generally more accurate, given traditional floats. Usually getting
the answer accurate (given the constraints of finite precision
intermediates) is more important than performance, but in this case they go
hand in hand.

And that, Your Honour, is why I prefer bignums (especially for
integers) to floating point. Precision rather than performance.

Chris Angelico
 

Steven D'Aprano

And that, Your Honour, is why I prefer bignums (especially for integers)
to floating point. Precision rather than performance.

I'm intrigued by your comment "especially for integers", which implies
that you might use bignums for non-integers. Out of curiosity, how would
you calculate:

sin(sqrt(7)*pi/31)

using bignums?
 

Chris Angelico

I'm intrigued by your comment "especially for integers", which implies
that you might use bignums for non-integers. Out of curiosity, how would
you calculate:

sin(sqrt(7)*pi/31)

using bignums?

REXX uses decimal bignums, although I don't have a high-performance
math library (for sin) that uses anything more than IEEE double
precision. If I coded my own sin algorithm in REXX, I could go to
whatever precision memory (and patience) would allow.
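In Python the decimal module can play the same game; here's a sketch
along the lines of the pi() and sin() recipes in the decimal module's
documentation:

from decimal import Decimal, getcontext

def pi():
    # Compute pi to the current precision (recipe from the decimal docs).
    getcontext().prec += 2          # extra guard digits during the sum
    three = Decimal(3)
    lasts, t, s, n, na, d, da = 0, three, 3, 1, 0, 0, 24
    while s != lasts:
        lasts = s
        n, na = n + na, na + 8
        d, da = d + da, da + 32
        t = (t * n) / d
        s += t
    getcontext().prec -= 2
    return +s                       # unary plus rounds to the restored precision

def sin(x):
    # Taylor series for sine, summed until the total stops changing.
    getcontext().prec += 2
    i, lasts, s, fact, num, sign = 1, 0, x, 1, x, 1
    while s != lasts:
        lasts = s
        i += 2
        fact *= i * (i - 1)
        num *= x * x
        sign *= -1
        s += num / fact * sign
    getcontext().prec -= 2
    return +s

getcontext().prec = 50
print(sin(Decimal(7).sqrt() * pi() / 31))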

Chris Angelico
 

harrismh777

Chris said:
The recursion is in the last block. Note that it calls a function,
then keeps going. It doesn't fork. There are two CALL_FUNCTION
opcodes, called *sequentially*. In terms of the call stack, there is
only ever one of those two calls active at any given time.

RuntimeError: maximum recursion depth exceeded in comparison
[36355 refs]

can anyone help?

[ @ Ian, Chris, Thomas ]

Thanks, guys, but I think y'all are still missing my point/question, as
interesting as these discussions are... something is wrong (might be my
understanding)...

.... CPython must be making function calls by placing stack-frames on the
call stack... which has a relatively small limit. Python doesn't do
tail-call optimization (one of the reasons to avoid recursion in Python) and
yes, this is an infinite recursion... doh... but the main point for this
part of the discussion is that the "maximum recursion depth exceeded in
comparison" runtime error is posted because there are more stack-frames
being posted to the call stack than there is call-stack to hold them!
We might change this with sys.setrecursionlimit, but that's dangerous;
the bottom line is that in order for the recursion depth runtime
error to be posted, somebody placed too many stack-frames on the call
stack... in this case about 36,355 of them... yes, the base-case is
coded wrong and the process is infinite, the point is that tail
processing doesn't even get to run... the silly thing pummels the call
stack with stack-frames (obviously exceeding 20) and the runtime error
is posted. If your point is that the infinite process is the problem, I
agree. But my point is that the cpu crunch and the rate at which the
call stack is filled has to do with the double call (which never finds
tail processing).

What am I missing here?
 

Chris Angelico

If your point is that the infinite process is the problem, I agree. But my
point is that the cpu crunch and the rate at which the call stack is filled
has to do with the double call (which never finds tail processing).

The double call _does not_ affect it. Failing to terminate recursion
_does_. I don't know what you mean by "cpu crunch" but the call stack
is going to get n entries. On the Python 2.6 on this system,
sys.getrecursionlimit() returns 1000, meaning that you can calculate
fib(1000) safely (okay, probably not quite as there'll be a few used
for other things, but fib(900) would be safe).
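A quick sketch to see that the depth is linear (instrumenting the
naive version with a made-up depth counter):

import sys

depth = max_depth = 0

def fib(n):
    # naive double-recursive fib, tracking the deepest active call chain
    global depth, max_depth
    depth += 1
    max_depth = max(max_depth, depth)
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    depth -= 1
    return result

print(fib(20), max_depth)        # 6765 20 -- depth is n, not 2**n
print(sys.getrecursionlimit())   # typically 1000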

Chris Angelico
 

Ian Kelly

The double call _does not_ affect it. Failing to terminate recursion
_does_. I don't know what you mean by "cpu crunch" but the call stack
is going to get n entries. On the Python 2.6 on this system,
sys.getrecursionlimit() returns 1000, meaning that you can calculate
fib(1000) safely (okay, probably not quite as there'll be a few used
for other things, but fib(900) would be safe).

Depends what you mean by "safe". A back-of-the-envelope calculation
shows that with modern technology it would take more than 10 ** 257
years to complete. That puts it well into the Dark Era of the
universe, long after the black holes will have decayed, when I suspect
it will be difficult to find a continued power source for the computer
to run. And even if it somehow is still running, the process memory
will have been so thoroughly muddled by cosmic rays that the final
result of the calculation will be utterly worthless.
 

Chris Angelico

A back-of-the-envelope calculation
shows that with modern technology it would take more than 10 ** 257
years to complete.

Then I propose that Python be extended to allow for underlying
hardware upgrades without terminating a script. This will be essential
to the finding of answers to these vital questions.

Chris Angelico
 

Gregory Ewing

Chris said:
There's definitely something attractive about that letter. Lots of
programming languages' names start with P.

Including one I invented some years ago, that (in the spirit
of C and its derivatives) was called simply "P".

(I was playing around with Sun's NeWS window server, which
was programmed in PostScript, and I got fed up with Forth-like
syntax, so I wrote a translator that took an infix language and
generated PostScript from it.)
 

Steven D'Aprano

I don't understand why you place Lisp and Forth in the same category as
Pascal, C, and Java. Lisp and Forth generally have highly interactive
development environments, while the other languages generally require an
edit, compile, run it again debugging cycle.

Good point. Perhaps I need to rethink where Lisp and Forth sit in the
development vs runtime trade-off continuum.

Python requires me to rewrite the slow bits of my program in C
to get good performance.

Python doesn't require you to re-write anything in C. If you want to make
a different trade-off (faster runtime, slower development time) then you
use a language that has made the appropriate trade-off. This applies
whether you are talking about the entire program, or just one subroutine.

Why is that an efficient use of developer time?

Because for any non-trivial program, it is faster and more
developer-friendly to just write one or three performance-critical
routines in C, and the rest in Python, than it is to write everything in C.
 

Teemu Likonen

* 2011-05-08T12:59:02Z * Steven D'Aprano said:
Good point. Perhaps I need to rethink where Lisp and Forth sit in the
development vs runtime trade-off continuum.


Python doesn't require you to re-write anything in C. If you want to
make a different trade-off (faster runtime, slower development time)
then you use a language that has made the appropriate trade-off.

I believe that Robert Brown meant that Common Lisp does quite well on
both sides: it supports dynamic interactive development, and yet it has
implementations with very efficient compilers. The trade-off
is not necessarily between the two.

But of course "development time" is a nicely vague concept. Depending on
the argument it can include just the features of language and
implementation. Other times it could include all the available resources
such as documentation, library archives and community mailing lists. All
these can reduce development time.
 

Chris Angelico

But of course "development time" is a nicely vague concept. Depending on
the argument it can include just the features of language and
implementation. Other times it could include all the available resources
such as documentation, library archives and community mailing lists. All
these can reduce development time.

And it also includes a developer's personal experience level. A good
programmer will have a mental "algorithmic library", if you like;
effectively, it's a gigantic hash table of "if I need to do this, I
can do it this way". A mediocre programmer will both take longer and
produce less efficient code than an awesome programmer. This is
primarily not a language feature, but a language that usually has one
obvious way to do things will likely be easier to grok than a language
that has myriad "gotchas" to work around.

Chris Angelico
 

Hans Mulder

[If you believe it is, then try writing a log(n) fib iteratively :D ]

It took me a while, but this one seems to work:

from collections import namedtuple

Triple = namedtuple('Triple', 'hi mid lo')
# A Triple(hi, mid, lo) encodes the symmetric 2x2 matrix [[hi, mid], [mid, lo]].
# All the matrices here are powers of one symmetric matrix, so they commute
# and their products stay symmetric; three numbers are enough.
Triple.__mul__ = lambda self, other: Triple(
    self.hi * other.hi + self.mid * other.mid,
    self.hi * other.mid + self.mid * other.lo,
    self.mid * other.mid + self.lo * other.lo,
)

def fib(n):
    f = Triple(1, 1, 0)   # the Fibonacci matrix [[1, 1], [1, 0]]
    a = Triple(1, 0, 1)   # the identity matrix
    while n:              # exponentiation by squaring on the matrix
        if n & 1:
            a *= f
        f *= f
        n >>= 1
    return a.mid          # off-diagonal entry of [[1,1],[1,0]]**n is F(n)


-- HansM
 

rusi

[If you believe it is, then try writing a log(n) fib iteratively :D ]

It took me a while, but this one seems to work:

from collections import namedtuple

Triple = namedtuple('Triple', 'hi mid lo')
Triple.__mul__ = lambda self, other: Triple(
     self.hi * other.hi + self.mid * other.mid,
     self.hi * other.mid + self.mid * other.lo,
     self.mid * other.mid + self.lo * other.lo,
)

def fib(n):
     f = Triple(1, 1, 0)
     a = Triple(1, 0, 1)
     while n:
         if n & 1:
             a *= f
         f *= f
         n >>= 1
     return a.mid

-- HansM

Bravo! Can you explain this?

The tightest way I knew so far was this:
The 2x2 matrix
0 1
1 1
raised to the nth power gives the nth Fibonacci number. [And then use
a logarithmic matrix mult]
Your version is probably tighter than this.

Yet one could argue that this is 'cheating' because you (and I) are
still solving the power problem.

What I had in mind was to use fib results like:
f_(2n+1) = f_n^2 + f_(n+1)^2  (and f_(2n) = f_n * (2*f_(n+1) - f_n))
and use these in the same way (from first principles) like we use the
equation
x^2n = (x*x)^n
to arrive at a logarithmic power algo.
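For the record, a sketch of that doubling approach from those identities
(fib2 is just an illustrative name; it returns the pair (f_n, f_(n+1))
and halves n at each step, so it's O(log n) multiplications):

def fib2(n):
    # Returns (F(n), F(n+1)), using
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)**2 + F(k+1)**2
    if n == 0:
        return (0, 1)
    a, b = fib2(n >> 1)      # a, b = F(k), F(k+1) with k = n // 2
    c = a * (2 * b - a)      # F(2k)
    d = a * a + b * b        # F(2k+1)
    if n & 1:
        return (d, c + d)    # (F(2k+1), F(2k+2))
    return (c, d)            # (F(2k), F(2k+1))

def fib(n):
    return fib2(n)[0]        # e.g. fib(10) == 55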
 

Ian Kelly

The tightest way I knew so far was this:
The 2x2 matrix
0 1
1 1
raised to the nth power gives the nth fibonacci number. [And then use
a logarithmic matrix mult]
Your version is probably tighter than this.

Oh, nice! I did it this way once:

V = [0 1]

M =
[0 1]
[1 1]

fib(n) = (V * M ** n)[0]

Since I viewed M as operating on V, it never occurred to me that
multiplying by V is actually unnecessary, but it is obvious in
retrospect. I think it's just a fortuitous coincidence that it works,
since V sets up the base case and M describes the recursive case. For
a Fibonacci sequence starting with any other pair of numbers, V would
change, but M would not, and so V would no longer happen to be
identical to the top row of M.

Ian
 
