Recursion Usage and Concepts - Newbie Question


Roedy Green

I'm pretty
certain factorial() was the first recursive example I saw, but the first
useful examples of recursion I met were connected with building and
walking ordered binary trees - quite probably they were in Niklaus
Wirth's book "Algorithms + Data Structures = Programs".

Recursion is not big in my toolbox. I have used it for:

1. traversing directory trees.

2. traversing tree structures in RAM.

3. parsing.

4. generating nonsense sentences.

5. QuickSort.

I can't think of anything else offhand.

It is not nearly as useful as induction is in mathematics. Your depth
is pretty seriously limited.
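A minimal sketch of case 1, the recursive directory walk, for anyone
following along (plain java.io; the println is just a placeholder action):

import java.io.File;

public class DirWalk {
    // Recurse into each subdirectory; the call stack remembers where we are.
    public static void walk( File dir ) {
        File[] entries = dir.listFiles();
        if ( entries == null )       // not a directory, or an I/O error
            return;
        for ( File f : entries ) {
            System.out.println( f.getPath() );
            if ( f.isDirectory() )
                walk( f );
        }
    }
}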
 

Mark Space

Martin said:
Yes. Tree walking.

I actually might have to disagree with you here. CPUs have a physical
limit regarding stack use. Several use local stacks (internal cache) to
implement call and return stacks. Go beyond that limit and you incur a
significant performance penalty.

AMD explicitly recommends implementing tree walks with a loop and a
local stack data structure, not using recursion.

I think the aforementioned directory tree walk should probably be
implemented the same way. Hmmm, although maybe the I/O involved would
make stack performance a moot issue.
 

Lew

Mark said:
I actually might have to disagree with you here. CPUs have a physical
limit regarding stack use. Several use local stacks (internal cache) to
implement call and return stacks. Go beyond that limit and you incur a
significant performance penalty.

AMD explicitly recommends implementing tree walks with a loop and a
local stack data structure, not using recursion.

Brian Goetz's superb /Java Concurrency in Practice/ has an example of a
recursive, sequential tree-walk algorithm (Listing 8.15), which is stack
bound, transformed to a thread-based concurrent algorithm (Listing 8.16). The
concurrent version is also recursive, but puts its intermediate state on the
heap, not the stack, so goes much, much deeper. This is an interesting case
of a parallel algorithm providing benefits even on a single-processor system.

It shows that part of the problem is not with recursion, but with use of the
stack to support recursion.
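To sketch the flavor of that transformation without reproducing Goetz's
listings (the Node interface and names here are mine, and the real
listings differ in detail):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

interface Node<T> {
    T compute();
    List<Node<T>> children();
}

class TreeWalk {
    // Sequential version: all pending work lives in stack frames.
    static <T> void sequentialWalk( Node<T> node, Consumer<T> action ) {
        action.accept( node.compute() );
        for ( Node<T> child : node.children() )
            sequentialWalk( child, action );
    }

    // Concurrent version: still recursive, but each compute() becomes a
    // task, so the bulk of the pending state sits in the executor's
    // heap-based work queue rather than on the call stack.
    static <T> void parallelWalk( ExecutorService exec, Node<T> node,
                                  Consumer<T> action ) {
        exec.execute( () -> action.accept( node.compute() ) );
        for ( Node<T> child : node.children() )
            parallelWalk( exec, child, action );
    }
}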

Of course, heap is finite, too. Recursion must maintain state proportionate
to problem size, versus a loop's summary state like a running total, with no
intermediate state to unwind. The state required by recursive algorithms
supports a certain economy of algorithmic expression, but often is too high a
price to pay.
I think the aforementioned directory tree walk should probably be
implemented the same way. Hmmm, although maybe the I/O involved would
make stack performance a moot issue.

CPUs use cache for heap memory, too. Locality of reference is something to
think about, but for a cross-platform (and evolving-platform) world like
Java's, I suggest we not get too hung up on the specifics of how a CPU
implements program stack vs. program heap. The real issue is the depth of
that memory. Java's execution model has logically separate stack and heap,
and heap is (nearly?) always much larger. (I'm explicitly disclaiming JME here.)

The parallel algorithm suggested in /Concurrency/ not only shifts state to the
heap from the stack, thus buying larger capacity, it is inherently scalable to
more processor threads.

I assess that the recursive expression of Goetz's example directly lends
itself to parallelization. A loop idiom might have been more difficult to
render as multithreaded.
 

Martin Gregorie

Roedy said:
Recursion is not big in my toolbox. I have used it for:

1. traversing directory trees.

2. traversing tree structures in RAM.

3. parsing.

4. generating nonsense sentences.

5. QuickSort.

I can't think of anything else offhand.

It is not nearly as useful as induction is in mathematics. Your depth
is pretty seriously limited.
I agree that its usefulness is limited, but I can add one further case:

6. Handling Bill Of Materials structures in databases.

These are usually represented in a database with the following
entity-relationship diagram where 'part' describes a product,
subassembly or indivisible item and 'component' serves both to implement
the many:many relationship between parts and to say how many components
of a given type there are in something.

                 --------
                (  part  )
                 --------
   is_made_from |        | is_a_subassembly_of
               /|\      /|\
              -------------
             ( component   )
              -------------

This is a self-referential many-to-many structure and as such is
inherently recursive: You walk the 'is_made_from' relationship to find
which components a part (which can be a subassembly or unitary, such as
a washer) is made from and you walk the 'is_a_subassembly_of'
relationship to discover where a part is used. IOW 'is_made_from' can be
used for costing products and 'is_a_subassembly_of' would be used in
stock control.
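In Java terms (hypothetical Part and Component classes standing in for
the two tables), the 'is_made_from' walk used for costing might look like:

import java.util.ArrayList;
import java.util.List;

class Component {
    Part child;     // the part this component row points at
    int quantity;   // how many of it go into the parent part
}

class Part {
    double unitCost;                              // meaningful for unitary parts
    List<Component> madeFrom = new ArrayList<>(); // the is_made_from relationship

    // A unitary part costs its unit cost; a subassembly costs the sum of
    // its components, each weighted by quantity. The recursion bottoms out
    // at indivisible items such as washers.
    double cost() {
        if ( madeFrom.isEmpty() )
            return unitCost;
        double total = 0.0;
        for ( Component c : madeFrom )
            total += c.quantity * c.child.cost();
        return total;
    }
}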

I've also seen these structures used in financial systems to define
financial products, e.g. a current account is assembled from lower level
components such as a balance sheet, interest rate, a direct debit
facility, a cheque drawing facility and an overdraft facility. A balance
sheet consists of a balance and a transaction list. A transaction list
consists of.....

I've tried both iterative and recursive methods of handling these
structures and recursion was both more elegant and faster (at least when
implemented in a VAX/VMS Rdb environment).
 

lscharen

As an aside, I have always been appalled at the use
of Fibonacci numbers to illustrate recursion, because
recursion is probably the worst[*] way to compute them.
The obvious iterative solution executes in O(n) time, and
there's also an O(1) method, albeit with some drawbacks.
Why would anyone in his right mind choose the O(e^n)
recursive method? Oh, yeah: Because he's a classroom
drudge trying to teach recursion, and it's the only
problem that comes to mind ...

You might be interested to know that there is also an O(log n) method
for computing the Fibonacci sequence by matrix exponentiation. I've
used this in job interviews before. If a candidate knows the O(2^n),
O(n), O(log n) and O(1) methods of computing a Fibonacci sequence and
can explain the different approaches, then I am almost 100% confident
that they have an extremely solid computer science foundation. If
they can derive the O(1) solution on the fly, they're hired. ;)

For the curious, let the matrix A be defined as

A = [1 1]    so that    A^n = [F(n+1)  F(n)  ]
    [1 0]                     [F(n)    F(n-1)]

Since computing the power of a matrix can be efficiently done by
recursion(!), the Fibonacci sequence is efficiently computed as well.
Here is the recursive exponentiation in pseudo-code since this is a
discussion about recursive examples.

public static Matrix matrixPower( Matrix matrix, int exponent )
{
    // Matrix and IdentityMatrix are assumed helper definitions.
    if ( exponent == 0 )
        return IdentityMatrix;

    if ( exponent == 1 )
        return matrix;

    if ( exponent % 2 == 1 ) {
        return matrix.times( matrixPower( matrix, exponent - 1 ));
    } else {
        // This value must be cached to make the recursion
        // efficient! Taria, can you explain why?
        Matrix halfPower = matrixPower( matrix, exponent / 2 );
        return halfPower.times( halfPower );
    }
}

See http://www.ics.uci.edu/~eppstein/161/960109.html

Note that this decomposition implies the following Fibonacci
relationship

[F(2n+1)  F(2n)  ]   [F(n+1)  F(n)  ] [F(n+1)  F(n)  ]
[F(2n)    F(2n-1)] = [F(n)    F(n-1)] [F(n)    F(n-1)]

                     [--    F(n+1)F(n) + F(n)F(n-1)]
                   = [--    --                     ]

Thus

F(2n) = F(n) * (F(n+1) + F(n-1))

You could use this new identity to write a new recursive function that
makes three recursive calls when n is even and the standard recursion
when n is odd. This would be slightly more efficient than using the
matrix multiplications directly since you would only be computing with
the non-zero entries of the matrix.

public static int fib( int n )
{
    if ( n < 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        return fib(h) * ( fib(h+1) + fib(h-1) );
    }
    else
    {
        return fib(n-1) + fib(n-2);
    }
}

Aren't algorithms fun? :)

-Lucas
 

Mark Space

Lew said:
Brian Goetz's superb /Java Concurrency in Practice/ has an example of a
recursive, sequential tree-walk algorithm (Listing 8.15), which is stack
bound, transformed to a thread-based concurrent algorithm (Listing
8.16).

Thanks, I added this to my Amazon wish list. I've been meaning to for a
while and you just reminded me.

I have a couple of algorithms I fall back on when I need faster tree
walks. One is Michael Abrash's algorithm from The Zen of Graphics
Programming. It uses a fixed-length array of pointers allocated locally
(on the stack). Good if your tree can publish its max depth in
advance. A simple alteration of Abrash's approach can use a regular Java
stack structure to provide tree traversal up to the depth allowed by heap
memory.

(Actually, I have no idea how the speeds of these two compare. Hmmm,
gonna need to test this.)

The other is J. M. Morris's simple tree walk. (Information Processing
Letters, Dec. 1979.) It uses no extra memory at all, but does transform
the tree as it walks. Good for limited memory devices where multiple
threads aren't an issue, but I've never had a chance to try it out in
practice.
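For the curious, a minimal sketch of the Morris-style in-order walk (my
own bare Node class, not Morris's original notation); it temporarily
threads the tree's right pointers instead of keeping a stack, and
restores them as it goes:

class MorrisWalk {
    static class Node {
        int value;
        Node left, right;
    }

    // In-order traversal in O(1) extra space. Each left subtree's rightmost
    // node is temporarily pointed back at the subtree's root ('threaded'),
    // then the link is removed on the second visit.
    static void inOrder( Node root ) {
        Node current = root;
        while ( current != null ) {
            if ( current.left == null ) {
                System.out.println( current.value );    // visit
                current = current.right;
            } else {
                Node pred = current.left;               // find in-order predecessor
                while ( pred.right != null && pred.right != current )
                    pred = pred.right;
                if ( pred.right == null ) {
                    pred.right = current;               // create the thread
                    current = current.left;
                } else {
                    pred.right = null;                  // remove the thread
                    System.out.println( current.value ); // visit
                    current = current.right;
                }
            }
        }
    }
}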



You make good points. I hadn't thought about using multiple threads to
speed up a tree walk. It's something I should investigate.
 

John W. Kennedy

Patricia said:
I like that one. It does meet both the requirements I suggested earlier.

As a thought experiment only, I've always liked "Implement the #include
directive" (or %INCLUDE or COPY or whatever) -- but that's no good for
someone starting with Java, and, as I say, works only as a thought
experiment.

Walking the directory tree may indeed be the best example. It's a
concept that modern beginners will normally have an intuitive grasp of,
even if they don't know about other sorts of tree walking. We old farts,
of course, may not have been exposed to computers at all before our
first programming courses, but that's not normal today.

--
John W. Kennedy
"The whole modern world has divided itself into Conservatives and
Progressives. The business of Progressives is to go on making mistakes.
The business of the Conservatives is to prevent the mistakes from being
corrected."
-- G. K. Chesterton
 

Taria

Aren't algorithms fun? :)

-Lucas

It's a lot of fun when I finally understand what's being said. :p My
class is an intro to algorithms, as a matter of fact, and if we were
graded on a standard curve, we'd all be failing. By
this, at least I know I'm not the only person struggling in the class.

I am not sure why I'm having such a hard time with the O(n) concept
and everything else I'm learning about; sometimes it seems so simple
until someone asks me how to compute O(1) for a fib series. lol. I'm
going to have to look that up, I have no clue about that one. And I
didn't know there were different time complexities attached to the fib
series (other than if it was computed using iteration vs. recursion.)

Interesting web page, Lucas; I bookmarked it so I can read it for a
different perspective on concepts when I get stuck trying to
understand them. :)

-t

And no, I can't answer the cache question either. I'll think
about it for a bit, though, but whether I get any usable answers is
another matter. :p
 

Lasse Reichstein Nielsen

You might be interested to know that there is also an O(log n) method
for computing the Fibonacci sequence by matrix exponentiation. ....
If they can derive the O(1) solution on the fly, they're hired. ;)
For the curious, let the matrix A be defined as

A = [1 1]    so that    A^n = [F(n+1)  F(n)  ]
    [1 0]                     [F(n)    F(n-1)]

I'm guessing the O(1) solution is the one that uses the eigenvalues of
that matrix.
I'm hard pressed to call it O(1), though. Calculating the power of a
real number also takes logarithmic time as soon as you leave the range
of the machine-supported doubles. But if you limit yourself to a fixed
range of output values, then there is an O(1) algorithm: table lookup.
For an int result, the table only needs about 47 entries :)

Actually, if you need arbitrarily large results, the simple operations
like addition and multiplication won't be O(1) any more, but probably
log(n) and log(n)^2 where n is the size of the numbers added or
multiplied.
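For completeness, the closed form those eigenvalues give you (Binet's
formula), with Lasse's caveat baked in as a comment -- a sketch, assuming
double arithmetic is acceptable:

public class ClosedFormFib {
    // F(n) = (phi^n - psi^n) / sqrt(5), with phi = (1 + sqrt 5)/2.
    // |psi| < 1, so psi^n vanishes and rounding phi^n / sqrt(5) suffices.
    // Only exact while F(n) fits in a double's 53-bit mantissa (n <= ~70),
    // which is the point above: constant time only over a fixed output range.
    public static long fib( int n ) {
        double sqrt5 = Math.sqrt( 5.0 );
        double phi = (1.0 + sqrt5) / 2.0;
        return Math.round( Math.pow( phi, n ) / sqrt5 );
    }
}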

/L
 

Christian

Taria said:
Hello all,

I'm struggling with an assignment that involves using a 'quicksort'
search for a user-defined kth element within a list of numbers. The
use of recursion is a requirement of this assignment.

[...] tasks until it solves the problem. My question is this: (I hope I
make sense in asking this) what if I was only interested in the result
at the very end of the recursion, how would I pass that value I found
to the first calling statement?

I figured out (and I'm sure my program is clumsy) how to recursively
solve the assignment but I don't know how to pass back the value I
find (after I split the list into smaller bits.) I could put a print
statement right there to indicate my answer, but after looking at a
few examples of other ppl's recursive Java routines, I feel
uncomfortable with this solution. I feel like I should be able to pass
back a value at the end of a recursive call without it being changed
when it's returning to the main routine (yes, that's what happened,
i.e. I HAD a statement that looked similar to this:

return (aValue);

and it changed the value of aValue to something else. Conceptually,
should I be able to use that statement above?

Also, because I'm so intimidated by the idea of recursion and still
somewhat afraid of objects, I resorted to using static methods instead
of objects. Am I hurting myself by doing this? Aren't the concepts of
the behavior of recursion the same whether it be by object or static?
All the examples I'm looking at use objects, hence I've begun to
question my pursuit of using static methods. (Really, I was going
to rewrite this program using object-oriented code after I figured out
the recursive part, I know I can do it! :p)

Any advice is appreciated and I apologize in advance for anything that
sounds 'wrong.'

-t

Recursion is just a function calling itself. What you are describing
is one of the typical algorithm design paradigms for solving a problem:
"Divide and Conquer".
There are some other algorithm paradigms that may use recursion to work.
Another paradigm that uses recursion is Dynamic Programming.
While you can do divide and conquer with static method calls, dynamic
programming needs some kind of object that holds the results of earlier
computed values. So either create an object that does the computation
and holds these values .. or you have to pass these computed values with
each method call..

In other parts of the thread we had the fibonacci series..
With only divide and conquer you get an exponential number of method calls,
while if you use dynamic programming and store solved subproblems
you stay in O(n) (which is quite bad for fibonacci but better than
nothing..)
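On the concrete question of how the found value gets back to the first
caller: each level simply returns what the level below returned. A sketch
of a quickselect-style recursion (the partition helper and all names here
are mine, not from the assignment; k is 0-based):

public class KthElement {
    // Returns the k-th smallest element in list[left..right]. The answer
    // found at the bottom is handed back unchanged: every level just
    // returns the result of its own recursive call.
    static int select( int[] list, int left, int right, int k ) {
        if ( left == right )
            return list[left];
        int p = partition( list, left, right );  // pivot ends up at index p
        if ( k == p )
            return list[p];
        else if ( k < p )
            return select( list, left, p - 1, k );
        else
            return select( list, p + 1, right, k );
    }

    // Lomuto partition around the last element, for completeness.
    static int partition( int[] list, int left, int right ) {
        int pivot = list[right];
        int i = left;
        for ( int j = left; j < right; j++ ) {
            if ( list[j] < pivot ) {
                int t = list[i]; list[i] = list[j]; list[j] = t;
                i++;
            }
        }
        int t = list[i]; list[i] = list[right]; list[right] = t;
        return i;
    }
}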

Christian
 

Taria

Recursion is just a function calling itself. What you are describing
is one of the typical algorithm design paradigms for solving a problem:
"Divide and Conquer".
There are some other algorithm paradigms that may use recursion to work.
Another paradigm that uses recursion is Dynamic Programming.

Dynamic Programming sounds like the start of smart programs that
'learn' solutions by remembering things they've done in the past. This
sounds like an interesting subject!
While you can do divide and conquer with static method calls, dynamic
programming needs some kind of object that holds the results of earlier
computed values. So either create an object that does the computation
and holds these values .. or you have to pass these computed values with
each method call..

I did a combination of what you suggested: in a static method, I
passed computed parameters (left and right ArrayList indexes and the
value of k) using recursion and stored the list values in objects.
That was kinda fun... I enjoyed doing it once I had the recursive
concept intact. I can see why object-oriented languages became so
popular, they're really easy to use. (Once you get the hang of it,
anyway.) I have yet to use computational statements in an
object.. that's new to me and sounds fun!
In other parts of the thread we had the fibonacci series..
With only divide and conquer you get an exponential number of method calls,
while if you use dynamic programming and store solved subproblems
you stay in O(n) (which is quite bad for fibonacci but better than
nothing..)

Christian

Reading different perspectives of this subject (or any other subject)
helps tremendously in expanding 'beyond the textbook' stuff that is
presented in class. I'm enjoying this thread. :)

-t
 

Ingo Menger

Are trees usually introduced before recursion?

This would be quite impossible, wouldn't it?
Since trees are recursive data structures, when you introduce them,
you also introduce recursion.
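Indeed, the data definition recurses before any algorithm touches it,
e.g. a bare hypothetical node class:

class Tree {
    int value;
    Tree left, right;  // each subtree is itself a Tree (or null): the
                       // recursion is in the data definition itself
}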
 

Roedy Green

Dynamic programming needs some kind of object that holds the results
of earlier computed values

I did all kinds of dynamic programming back in the days of Fortran
where you don't have recursion. It never dawned on me until now that
you could handle tracking the history with recursion.
 

Steve Wampler

public static int fib( int n )
{
    if ( n < 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        return fib(h) * ( fib(h+1) + fib(h-1) );
    }
    else
    {
        return fib(n-1) + fib(n-2);
    }
}

Aren't algorithms fun? :)

Sure are! (What happens when this is called with n == 2?)
 

Christian

Roedy said:
I did all kinds of dynamic programming back in the days of Fortran
where you don't have recursion. It never dawned on me until now that
you could handle tracking the history with recursion.
I meant that your dynamic-programming-based algorithm must somehow
provide the already calculated subproblems to each call of the function..

so either you could pass these as some object, e.g.

public static Integer calc( Integer what, Map<Integer,Integer> subresults ) { ... }

or you could use a calc object:

class CalcObj {
    private Map<Integer,Integer> subresults ...
    public Integer calc( Integer what ) { ... }
}

I don't see any reason why a dynamic programming algorithm shouldn't
also use recursion, if the problem is predestined for recursion but too
hard to solve without dynamic programming .. e.g. fibonacci (again a bad
example, as O(1) is possible)..
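To make the map-passing style concrete with the thread's running
example, a sketch of a memoized fibonacci (the names are mine, not from
any library):

import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    // Recursive, but each subproblem is computed once and cached,
    // so the call count stays linear instead of exponential.
    public static long fib( int n, Map<Integer, Long> memo ) {
        if ( n < 2 )
            return n;
        Long cached = memo.get( n );
        if ( cached != null )
            return cached;
        long result = fib( n - 1, memo ) + fib( n - 2, memo );
        memo.put( n, result );
        return result;
    }

    public static void main( String[] args ) {
        System.out.println( fib( 50, new HashMap<Integer, Long>() ) ); // 12586269025
    }
}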
 

lscharen

Sure are! (What happens when this is called with n == 2?)

Yeah, I noticed that it would go into an infinite recursion after I
posted it. I wasn't careful in stating whether I was counting from
zero or one. Let me be explicit:

Count the fibonacci numbers starting from 1:


{ 0 n < 1
F(n) = { 1 n = 1
{ 1 n = 2
{ F(n-1) + F(n-2) else

The relation still holds,

F(2) = F(1) * (F(0) + F(2))
= 1 * (0 + F(2))
= F(2)

but we now have a base case that returns F(2) immediately. Here is
the correct implementation. I hope. It's always dangerous posting
code for everyone to see. :)

public static int fib( int n )
{
    if ( n < 1 )
        return 0;

    if ( n == 1 || n == 2 )
        return 1;

    if ( isEven( n ))
    {
        int h = n / 2;
        return fib(h) * ( fib(h+1) + fib(h-1) );
    }
    else
    {
        return fib(n-1) + fib(n-2);
    }
}

-Lucas
 

lscharen

I am not sure why I'm having such a hard time with the O(n) concept
and everything else I'm learning about; sometimes it seems so simple
until someone asks me how to compute O(1) for a fib series. lol. I'm
going to have to look that up, I have no clue about that one. And I
didn't know there were different time complexities attached to the fib
series (other than if it was computed using iteration vs. recursion.)

Well, to be fair, the time complexity of an algorithm is totally
independent of whether one computes via recursion or not.
Also, the O(1) solution to the fibonacci sequence requires analysis
techniques beyond an introductory course.

There are subtle issues here, too, in terms of accurately
characterizing the big-O running time -- one can get different
asymptotic results assuming a RAM computer (like most every computer
is these days) versus a Turing Machine model. If you take an analysis
of algorithms course you will be exposed to these topics.

If you want to look up the closed-form solution to the nth Fibonacci
number, please do. There is some neat math in there, and it shows how
the Golden Ratio is related to the Fibonacci sequence. Since the
golden ratio is considered the "most aesthetic" ratio and many natural
processes follow the Fibonacci sequence, some conjecture that this is
a reason humans find natural forms and structures so visually pleasing,
and why the golden ratio is used by practicing architects.

http://en.wikipedia.org/wiki/Golden_ratio#Architecture
http://www.mcs.surrey.ac.uk/Personal/R.Knott/Fibonacci/fibnat.html

But it is probably best that you just treat this as a bit of "trivia"
for the moment. You know such a solution exists, so if it is
presented to you later on, the analysis may be more relevant. When
you have the background to understand "why" a particular algorithm or
analysis technique can be useful, it's often much easier to
internalize that knowledge.

Interesting web page, Lucas; I bookmarked it so I can read it for a
different perspective on concepts when I get stuck trying to
understand them. :)

And no, I can't answer the cache question either. I'll think
about it for a bit, though, but whether I get any usable answers is
another matter. :p

Hint: This is (loosely) related to dynamic programming. You have
solved a subproblem of size n/2 and can recycle that answer to solve a
problem of size n. Consider how much extra computation would be
required if you recomputed the n/2 solution. Remember, this is
repeated at each step of the recursion on sub-problems of size n/2,
n/4, n/8, ...
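Concretely, under the usual unit-cost assumption, caching halfPower gives

T(n) = T(n/2) + O(1)      =>  T(n) = O(log n)

while recomputing it at every level gives

T(n) = 2 T(n/2) + O(1)    =>  T(n) = O(n)

so forgetting the cache silently turns the O(log n) algorithm back into
a linear-time one.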

-Lucas
 

Steve Wampler

Yeah, I noticed that it would go into an infinite recursion after I
posted it. I wasn't careful in stating whether I was counting from
zero or one.

Interestingly, because in the end it makes little difference
whether one starts counting at 0 or 1, the original solution
works if:

if ( isEven(n) )

is simply replaced with:

if ( !isEven(n) ) // i.e. isOdd(n)
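A quick sanity check of the swapped version, using the code's convention
that fib(0) = fib(1) = 1:

fib(2) -> even now takes the plain branch: fib(1) + fib(0) = 2
fib(3) -> odd, h = 1: fib(1) * (fib(2) + fib(0)) = 1 * (2 + 1) = 3
fib(5) -> odd, h = 2: fib(2) * (fib(3) + fib(1)) = 2 * (3 + 1) = 8

That is the Fibonacci sequence shifted by one, and the troublesome
fib(2) call no longer reaches the identity branch, so the infinite
recursion is gone.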
 

Taria

Hint: This is (loosely) related to dynamic programming. You have
solved a subproblem of size n/2 and can recycle that answer to solve a
problem of size n. Consider how much extra computation would be
required if you recomputed the n/2 solution. Remember, this is
repeated at each step of the recursion on sub-problems of size n/2,
n/4, n/8, ...

-Lucas

I see the answer now. It must be cached for 'instant' access
to the already computed subparts for it to be efficient.
The dynamic programming approach is actually a good example
for the fib series because it demonstrates the weakness, and
once you compensate for that by using an if statement:

if the O(1) lookup succeeds then
    here's the answer
else
    perform the recursive solve of fib(n-1), fib(n-2)

it is a good replacement for the classical recursive solution
(and it's also a good contrast.)

Anyway, those are my thoughts for this morning. :)

-t
Why is it I understand everything so much better in the morning
than at 2am? :p
 
