Fibonacci series program required

Eric Sosman

Karl said:
And if you want to compute f(100) in less than 5 minutes, consider how you
might use a Map to improve efficiency.

"Five minutes?" You must have a Really Fast Machine!

Just for fun I tried the recursive solution using BigInteger
on my 3GHz Pentium IV, which took not quite one minute to compute
f(42). At that rate, f(100) would take about two and a quarter
million years.

The obvious iterative solution, again using BigInteger, computed
f(100) in less than one millisecond.
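A minimal sketch of such an iterative BigInteger version (my illustration; Eric's actual code isn't shown in the thread):

```java
import java.math.BigInteger;

public class IterFib {
    // Iterative Fibonacci: f(1) = f(2) = 1, f(n) = f(n-1) + f(n-2).
    // BigInteger avoids the overflow that kills long at f(93).
    static BigInteger f(int n) {
        BigInteger a = BigInteger.ZERO, b = BigInteger.ONE;
        for (int i = 1; i < n; i++) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return b;
    }

    public static void main(String[] args) {
        System.out.println(f(100)); // 354224848179261915075
    }
}
```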

I didn't test the closed-form non-iterative solution, because
BigInteger doesn't mix well with powers of irrational numbers.
 
Chris Uppal

Simon said:
On the contrary, it is a beautifully simple piece of recursion. In a
proper grown up language

If one is to ignore minor details of syntax, then I don't think the recursive
expression in Java is one whit less elegant than the Lisp version. If we /are/
to take such details into account then they both suck (IMO). Worse, they both
hide the underlying recurrence relation. Daniel's formulation shows how a real
functional programming language can expose the structure transparently.

But the OP might prefer a formulation less taxing to the beginner's brain -- so
I'll append one.

BTW, for anyone interested: I should probably have used a table lookup -- even
though that means there has to be code to populate the table. Javac
converts the switch statement into the obvious tableswitch bytecode. But then,
using JDK 1.5.0 on this 32-bit Intel box, the java -client JITer compiles that
into a sequence of 92 cmp/je instruction pairs, comparing n against 1..92 in
order! Java -server is doing something much weirder, and I haven't yet
figured it out (my IA32 assembler skillz are sub-embryonic). It produces an
almost equally long sequence of test-and-jump pairs, plus some arithmetic, but
the values are less than obvious -- I'm guessing it's some sort of hashing
scheme. Neither of them uses a jump table, which I would consider to be the
obvious technique for a big switch like this. I know that the unguessable
indirection involved in an indirect jump is very pipeline-unfriendly, but
compared with 50-odd compare-and-branches, which are likely to include an
unpredicted jump anyway?!?

-- chris

========= Fibonacci.java =========
public class Fibonacci
{
    public static long
    value(int n)
    {
        // <shrug/>
        switch (n)
        {
        case 1: return 1L;
        case 2: return 1L;
        case 3: return 2L;
        case 4: return 3L;
        case 5: return 5L;
        case 6: return 8L;
        case 7: return 13L;
        case 8: return 21L;
        case 9: return 34L;
        case 10: return 55L;
        case 11: return 89L;
        case 12: return 144L;
        case 13: return 233L;
        case 14: return 377L;
        case 15: return 610L;
        case 16: return 987L;
        case 17: return 1597L;
        case 18: return 2584L;
        case 19: return 4181L;
        case 20: return 6765L;
        case 21: return 10946L;
        case 22: return 17711L;
        case 23: return 28657L;
        case 24: return 46368L;
        case 25: return 75025L;
        case 26: return 121393L;
        case 27: return 196418L;
        case 28: return 317811L;
        case 29: return 514229L;
        case 30: return 832040L;
        case 31: return 1346269L;
        case 32: return 2178309L;
        case 33: return 3524578L;
        case 34: return 5702887L;
        case 35: return 9227465L;
        case 36: return 14930352L;
        case 37: return 24157817L;
        case 38: return 39088169L;
        case 39: return 63245986L;
        case 40: return 102334155L;
        case 41: return 165580141L;
        case 42: return 267914296L;
        case 43: return 433494437L;
        case 44: return 701408733L;
        case 45: return 1134903170L;
        case 46: return 1836311903L;
        case 47: return 2971215073L;
        case 48: return 4807526976L;
        case 49: return 7778742049L;
        case 50: return 12586269025L;
        case 51: return 20365011074L;
        case 52: return 32951280099L;
        case 53: return 53316291173L;
        case 54: return 86267571272L;
        case 55: return 139583862445L;
        case 56: return 225851433717L;
        case 57: return 365435296162L;
        case 58: return 591286729879L;
        case 59: return 956722026041L;
        case 60: return 1548008755920L;
        case 61: return 2504730781961L;
        case 62: return 4052739537881L;
        case 63: return 6557470319842L;
        case 64: return 10610209857723L;
        case 65: return 17167680177565L;
        case 66: return 27777890035288L;
        case 67: return 44945570212853L;
        case 68: return 72723460248141L;
        case 69: return 117669030460994L;
        case 70: return 190392490709135L;
        case 71: return 308061521170129L;
        case 72: return 498454011879264L;
        case 73: return 806515533049393L;
        case 74: return 1304969544928657L;
        case 75: return 2111485077978050L;
        case 76: return 3416454622906707L;
        case 77: return 5527939700884757L;
        case 78: return 8944394323791464L;
        case 79: return 14472334024676221L;
        case 80: return 23416728348467685L;
        case 81: return 37889062373143906L;
        case 82: return 61305790721611591L;
        case 83: return 99194853094755497L;
        case 84: return 160500643816367088L;
        case 85: return 259695496911122585L;
        case 86: return 420196140727489673L;
        case 87: return 679891637638612258L;
        case 88: return 1100087778366101931L;
        case 89: return 1779979416004714189L;
        case 90: return 2880067194370816120L;
        case 91: return 4660046610375530309L;
        case 92: return 7540113804746346429L;
        }
        if (n < 1)
            throw new IllegalArgumentException("N must be >= 1");
        else
            throw new IllegalArgumentException(
                "answer cannot be expressed as a long");
    }
}
==================
 
Thomas Hawtin

Daniel said:
I prefer the Haskell/Miranda formulation (not non-recursive though):

fib 0 = 0
fib 1 = 1
fib (n + 2) = fib (n) + fib (n + 1)

The recursive method is not necessarily slow. It depends upon the quality of
the language implementation. For example, I found something
like this on the web:

#include <iostream>

template<unsigned long i> struct Fibonacci {
    static const unsigned long result =
        Fibonacci<i-1>::result + Fibonacci<i-2>::result;
};
template<> struct Fibonacci<0> {
    static const unsigned long result = 0;
};
template<> struct Fibonacci<1> {
    static const unsigned long result = 1;
};
template<unsigned long i> struct Loop {
    static void run() {
        Loop<i-1>::run();
        std::cout << Fibonacci<i>::result << std::endl;
    }
};
template<> struct Loop<0> {
    static void run() { }
};

int main() {
    Loop<100>::run();
}

In fact ISO C++ only guarantees seventeen levels of recursively nested
template instantiations, IIRC. My compiler bottles after Loop<64>. The
evaluation is done at compile time, so the run time performance should be
excellent. The compile time performance is none too shabby either. You
would expect template instantiations to be memoised (it might indeed be
mandated).

So we have good compile time performance, excellent run time performance
and only need to use the naive algorithm. Pity the syntax, the error
messages, the lack of bounds checking and the rest of the language suck. :)

Tom Hawtin
 
Stefan Ram

Thomas Hawtin said:
The evaluation is done at compile time,

The evaluation might be done at compile time, even when
templates are not used: Some C++-compilers can evaluate
function calls and loops at compile time (for gcc the options
are »-O3« and »-funroll-loops«).

By the rule that a program only needs to behave as if the
language semantics were applied, a compiler is also free
to transform a programmer's recursive wording into an
iterative implementation (tail-call optimization is a step
in this direction), although this is not always possible.

However, I regard source code as a design and specification
artefact and expect the compiler to do the actual
implementation. Thus, recursive notation in the source code
does not need to mean actual recursive execution to me.
 
John W. Kennedy

Furious said:
I see you are a vicious and cruel bastard. It is doubtless you torture
puppies in your spare time.

You gave the OP a program that was so close to being correct that even
I did not see the error at first. You expected the OP to blindly copy
your program and lose all face and all marks when the teacher exposes
the OP's stupidity.

You are fiendishly clever. Your program gives the correct fibonacci
sequence for the first 44 numbers and then starting with the 45th
number it produces nothing but incorrect results. Obviously, you
anticipated that the OP would not check much beyond the 10th number.

Hint to OP: To save face, figure out why this heartless bastard's
program fails for numbers larger than 44.

That's not the only problem. Have you actually tried running it?
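For what it's worth, the failure Furious hints at is 32-bit int overflow, assuming the posted program used int arithmetic (which the quoted failure point suggests). Once the next value passes Integer.MAX_VALUE, the sum wraps around; this illustration is mine, not code from the thread:

```java
public class Overflow {
    public static void main(String[] args) {
        int a = 1134903170;  // fib(45) -- still fits in 32 bits
        int b = 1836311903;  // fib(46) -- still fits in 32 bits
        int c = a + b;       // fib(47) = 2971215073 does not fit
        System.out.println(c);            // wraps to a negative number
        System.out.println(a + (long) b); // 2971215073, the true value
    }
}
```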
 
Arne Vajhøj

Stefan said:
The evaluation might be done at compile time, even when
templates are not used: Some C++-compilers can evaluate
function calls and loops at compile time (for gcc the options
are »-O3« and »-funroll-loops«).

I don't think loop unrolling quite fits with the compile time
evaluation described here.

Arne
 
Stefan Ram

Arne Vajhøj said:
I don't think loop unrolling quite fits with the compile time
evaluation described here.

public class Main
{ static int alpha( final int m )
{ int i = 3; for( int j = 0; j < m; ++j )i <<= j;
return i; }
public static void main
( final java.lang.String[] args )
{ java.lang.System.out.println( alpha( 3 )); }}

gcj -O3 -funroll-loops -S Main.java

(...)
movl __ZN4java4lang6System3outE, %eax
movl $24, %ecx
movl (%eax), %edx
movl %ecx, 4(%esp)
movl %eax, (%esp)
call *100(%edx)
(...)

This is equivalent to:

public class Main
{ public static void main
( final java.lang.String[] args )
{ java.lang.System.out.println( 24 ); }}
 
Arne Vajhøj

Stefan said:
public class Main ....
gcj -O3 -funroll-loops -S Main.java ....
This is equivalent to:

public class Main
....

And ?

Interesting example of compiler optimization.

Still completely unrelated to the compile time
evaluation done via templates that started this
subthread.

Arne
 
Stefan Ram

Arne Vajhøj said:
Still completely unrelated to the compile time
evaluation done via templates that started this
subthread.

The relation has already been explained to you.

The example was given in order to show that

»The evaluation might be done at compile time,
even when templates are not used«
 
Furious George

John said:
That's not the only problem. Have you actually tried running it?

Yes, I did. I was very disappointed with my compiler. I would have
thought it would have transformed it into an iterative algorithm (at
the byte code level). Is this a reasonable expectation?

If it is reasonable, then your program (using BigInteger) would be
perfectly acceptable for any situation.

Regardless of the compiler, your program would probably be perfectly
acceptable as is, for an introductory programming class. Have you
taught one? Most students will submit buggy programs that do not
produce the correct sequence. Correctness is the most important thing,
efficiency can be learned later on.
 
John W. Kennedy

Furious said:
Yes, I did. I was very disappointed with my compiler. I would have
thought it would have transformed it into an iterative algorithm (at
the byte code level). Is this a reasonable expectation?

A /double/ recursion? I doubt it.
If it is reasonable, then your program (using BigInteger) would be
perfectly acceptable for any situation.

Regardless of the compiler, your program would probably be perfectly
acceptable as is, for an introductory programming class. Have you
taught one? Most students will submit buggy programs that do not
produce the correct sequence. Correctness is the most important thing,
efficiency can be learned later on.

There's "efficiency" and there's "running to completion within a human
lifetime". This algorithm is O(2^N).
 
JanTheKing

Hi Karl,

When using an iterative solution (referring to code I posted
earlier), I am wondering how a map will improve efficiency?
 
Karl Uppiano

JanTheKing said:
Hi Karl,

When using an iterative solution (referring to code I posted
earlier), I am wondering how a map will improve efficiency?

My apologies to everyone for posting a solution to a homework assignment. I
assume by now the OP has either submitted his solution or received zero
credit. I adapted this implementation from an example written in Python,
which I Googled using the query string "fibonacci algorithm". Just basic
Internet research.

This implementation can generate the first 1000 fibonacci numbers in less
than 5 seconds. The recursion stops as soon as a map lookup yields a result.
I verified the first 20 numbers were correct, and then spot checked
f(n)/f(n - 1) = 1.618 (approx) at a couple of places within the dynamic
range of Windows Calculator, with expected results. This might not be the
optimum implementation, but I think it is a decent one.

package fibonacci;

import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class Main {
    static Map<Integer, BigInteger> map = new HashMap<Integer, BigInteger>();
    static {
        map.put(0, BigInteger.ZERO);
        map.put(1, BigInteger.ONE);
    }

    public static void main(String[] args) {
        for (int nn = 0; nn <= 1000; nn++) {
            System.out.println("F(" + nn + ") = " + fibonacci(nn));
        }
    }

    static BigInteger fibonacci(int nn) {
        BigInteger fn = map.get(nn);

        if (fn == null) {
            fn = fibonacci(nn - 1).add(fibonacci(nn - 2));
            map.put(nn, fn);
        }
        return fn;
    }
}
 
Chris Uppal

JanTheKing said:
When using an iterative solution (referring to code I posted
earlier), I am wondering how a map will improve efficiency?

It won't. What the map (or an array in its place) does is allow you to use the
same kind of elegant code as in the recursive formulation, but with an
efficiency that approaches that of the iterative one.

-- chris
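The array variant Chris mentions might be sketched like this (the class name, array size, and bounds are my assumptions, not code from the thread):

```java
import java.math.BigInteger;

public class ArrayMemoFib {
    // Memo table; size 1001 is an arbitrary cap for this sketch.
    static final BigInteger[] memo = new BigInteger[1001];
    static {
        memo[0] = BigInteger.ZERO;
        memo[1] = BigInteger.ONE;
    }

    // Still the recursive formulation, but each value is computed only
    // once, so the cost approaches that of the iterative loop.
    static BigInteger fib(int n) {
        if (memo[n] == null)
            memo[n] = fib(n - 1).add(fib(n - 2));
        return memo[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(90)); // 2880067194370816120
    }
}
```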
 
Simon Brooke

Chris Uppal said:
It won't. What the map (or use an array instead) does is allow you to
use the same kind of elegant code as in the recursive formulation, but
with an efficiency which approaches that of the iterative one.

It will on second and subsequent invocations of the computation, if you
cache partial results. Iterating up you can start with the last partial
results posted, even if they are lower than the value you're looking for;
recursing down, obviously, they help even more.
 
Simon Brooke

Furious said:
Yes, I did. I was very disappointed with my compiler. I would have
thought it would have transformed it into an iterative algorithm (at
the byte code level). Is this a reasonable expectation?

No, not for the algorithm as posted. It's not tail recursive (it is, in
fact, doubly recursive). I posted another algorithm (in LISP, so as not to
do the OP's homework) which while technically still recursive /is/ tail
recursive and so /can/ have the recursion optimised out.
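In Java, an accumulator-passing version of the kind Simon describes might look like this (a sketch of the technique; note the JVM does not actually promise to eliminate the tail call):

```java
import java.math.BigInteger;

public class TailFib {
    // Accumulator-passing style: the recursive call is in tail position,
    // so a compiler that performed tail-call elimination could turn it
    // into a loop.
    static BigInteger fib(int n, BigInteger a, BigInteger b) {
        if (n == 0) return a;
        return fib(n - 1, b, a.add(b));
    }

    static BigInteger fib(int n) {
        return fib(n, BigInteger.ZERO, BigInteger.ONE);
    }

    public static void main(String[] args) {
        System.out.println(fib(100)); // 354224848179261915075
    }
}
```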
 
Furious George

Simon said:
No, not for the algorithm as posted. It's not tail recursive (it is, in
fact, doubly recursive). I posted another algorithm (in LISP, so as not to
do the OP's homework) which while technically still recursive /is/ tail
recursive and so /can/ have the recursion optimised out.

Now, I am curious. If I understand correctly, it is unreasonable to
expect my compiler to iron out the doubly recursive O(2^n) algorithm,
but it is reasonable to expect my compiler to iron out the tail
recursive O(n) algorithm. I would like to examine whether my compiler
does in fact iron out the tail recursive algorithm.

It was easy for me to verify that the compiler was not ironing out the
doubly recursive algorithm (because it is so painfully slow compared to
the iterative O(n) algorithm). But how would I verify that the
compiler is ironing out the tail recursive algorithm? I am not clever
enough to examine the byte code.
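One crude way to check without reading bytecode: recurse deep enough that an unoptimised version must overflow the stack. This sketch (mine, assuming default JVM stack sizes) uses a trivially tail-recursive counter:

```java
public class TailCheck {
    // Tail-recursive counter: if the compiler or JIT eliminated the tail
    // call, any depth would succeed; on a stock HotSpot JVM a large depth
    // overflows the stack instead, showing the recursion was kept as-is.
    static long count(long n, long acc) {
        if (n == 0) return acc;
        return count(n - 1, acc + 1);
    }

    public static void main(String[] args) {
        try {
            System.out.println(count(100000000L, 0L));
            System.out.println("tail call was eliminated");
        } catch (StackOverflowError e) {
            System.out.println("recursion kept: StackOverflowError");
        }
    }
}
```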
 
Mark Jeffcoat

Furious George said:
Now, I am curious. If I understand correctly, it is unreasonable to
expect my compiler to iron out the doubly recursive O(2^n) algorithm,
but it is reasonable to expect my compiler to iron out the tail
recursive O(n) algorithm. I would like to examine whether my compiler
does in fact iron out the tail recursive algorithm.

It was easy for me to verify that the compiler was not ironing out the
doubly recursive algorithm (because it is so painfully slow compared to
the iterative O(n) algorithm). But how would I verify that the
compiler is ironing out the tail recursive algorithm? I am not clever
enough to examine the byte code.


I'm suspicious of the claim that a Java compiler could,
in principle, convert a tail recursion into a loop.

(That is, a Java compiler specifically. I'm aware that things
like the Scheme standard require compilers to do this.)

The problem I'm thinking of is with exceptions. Consider
the following class (which I'm not going to bother to test,
or even compile; I trust you to deal with it):

class SmallNumber {

    private int i;

    public SmallNumber(Integer i) {
        if ((i > 1000) || (i < 0)) {
            throw new RuntimeException("Out of bounds");
        }
        this.i = i;
    }

    public SmallNumber add(SmallNumber n) {
        return addHelper(n, this);
    }

    private SmallNumber addHelper(SmallNumber n, SmallNumber accumulator) {
        if (n.i == 0) {
            return accumulator;
        }
        return addHelper(new SmallNumber(n.i - 1),
                         new SmallNumber(accumulator.i + 1));
    }
}

So, add() -- or, at least, addHelper() -- seems to be
as tail recursive as they come. But what's the stack
trace supposed to be when you trigger the Exception
by adding, say, 500 and 501?

If you object that add() isn't really tail recursive
because it may have to change the call stack on an
exceptional return (and so it doesn't meet the basic
requirement that a tail call doesn't have to do
anything but return a value), it seems to me that you've
made the whole discussion nearly pointless; you've
defined away the possibility of a tail call appearing
anywhere.


I can conceive of a situation where the compiler
(maybe the JIT compiler) can prove that there's no
possibility of throwing an Exception in a block of
code, but that seems like quite a lot to ask to optimize
a technique that would only occur to a very small
percentage of Java programmers.
 
Ingo Menger

Karl said:
My apologies to everyone for posting a solution to a homework assignment. I
assume by now the OP has either submitted his solution or received zero
credit. I adapted this implementation from an example written in Python,
which I Googled using the query string "fibonacci algorithm". Just basic
Internet research.

This implementation can generate the first 1000 fibonacci numbers in less
than 5 seconds.

Really?

This might not be the
optimum implementation, but I think it is a decent one.

Way too fat, like most Java programs.

How about the following algorithm? (You may want to convert it to real
Java)

bigint fib(bigint n) {
    if (n < 2) return 1;
    bigint f1 = 1, f2 = 1, i = 2, f3;
    while (i < n) {
        i++; f3 = f1; f1 = f2; f2 += f3;
    }
    return f1 + f2;
}

This function should compute the first 1000 numbers in no time. Even
the first 100000.
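A direct translation to real Java using BigInteger, as Ingo suggests, might be (my sketch; it keeps his shifted indexing, where fib(0) = fib(1) = 1 and fib(2) = 2):

```java
import java.math.BigInteger;

public class IngoFib {
    // Faithful translation of the pseudocode above. Note the indexing is
    // shifted by one from the usual 0, 1, 1, 2, ... sequence.
    static BigInteger fib(int n) {
        if (n < 2) return BigInteger.ONE;
        BigInteger f1 = BigInteger.ONE, f2 = BigInteger.ONE;
        for (int i = 2; i < n; i++) {
            BigInteger f3 = f1;
            f1 = f2;
            f2 = f2.add(f3);
        }
        return f1.add(f2);
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 10; n++)
            System.out.print(fib(n) + " ");
        // 1 1 2 3 5 8 13 21 34 55 89
    }
}
```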
 
