Fibonacci series program required


Karl Uppiano

Ingo Menger said:

Yes, including all of the console I/O: four seconds and change on my rather
old laptop.
Way too fat, like most Java programs.

That depends on what your features and requirements are. I was adapting
the original algorithm, which is modeled after the actual mathematical
definition as classically written, but with the enhancement that it does not
have to recurse down to zero every time a new value is needed. One could use
an array of big integers, but an array does not grow dynamically (as a map
can) if someone suddenly requires a larger F(n).
How about the following algorithm? (You may want to convert it to real
Java)

bigint fib(bigint n) {
    if (n<2) then return 1;
    bigint f1 = 1, f2 = 1, i = 2, f3;
    while (i < n) {
        i++; f3 = f1; f1 = f2; f2 += f3;
    }
    return f1+f2;
}

This function should compute the first 1000 numbers in no time. Even
the first 100000.

No time? That *is* impressive. Still, it needs to loop n times for any
arbitrary value of n, which is approximately what the algorithm I presented
has to do if it has never seen an n that large before. I agree there is the
issue of storing all n values in a map with my approach; however, once a
value is calculated, it never needs to be calculated again, just looked up.
In an application where the algorithm is used more than once, there is no
iteration at all unless n > any previous n. All engineering involves
compromise, doesn't it? BTW, isn't F(0) = 0?
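
A minimal sketch of the map-based memoisation described above (the class and
method names here are illustrative, not taken from the actual program):

import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class MemoFib {
    // Map-backed cache: grows on demand, unlike a fixed-size array.
    private final Map<Integer, BigInteger> cache =
            new HashMap<Integer, BigInteger>();

    public MemoFib() {
        cache.put(0, BigInteger.ZERO);   // F(0) = 0
        cache.put(1, BigInteger.ONE);    // F(1) = 1
    }

    public BigInteger fib(int n) {
        BigInteger known = cache.get(n);
        if (known != null) {
            return known;                // already computed: just a lookup
        }
        BigInteger result = fib(n - 1).add(fib(n - 2));
        cache.put(n, result);            // remember it for any later request
        return result;
    }
}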
 

Chris Uppal

Furious said:
Now, I am curious. If I understand correctly, it is unreasonable to
expect my compiler to iron out the doubly recursive O(2^n) algorithm,
but it is reasonable to expect my compiler to iron out the tail
recursive O(n) algorithm. I would like to examine whether my compiler
does in fact iron out the tail recursive algorithm.

No, it is not reasonable to /expect/ it to do that. That optimisation is at
the option of the JIT implementation, and even if the JVM you are using is able
to do tail-call elimination at all, there is nothing to guarantee that it will
happen to do so in any particular case (it might, at least in theory, vary from
run to run of the same program in the same JVM).

So if your program depends on tail-call elimination for correctness (which
usually includes completing in a reasonable time), then you have to do it
yourself.
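
As an illustrative sketch of that point (not code from any post in this
thread): the accumulator-style tail recursion below is logically iterative,
but on a JVM that does not eliminate tail calls it still burns one stack
frame per step, so the hand-converted loop next to it is the version to rely
on.

import java.math.BigInteger;

public class TailFib {
    // Tail-recursive form: fibRec(n, F(0), F(1)) returns F(n), but each step
    // still takes a stack frame unless the JVM eliminates the tail call
    // (which, as noted above, it is not required to do).
    static BigInteger fibRec(int n, BigInteger a, BigInteger b) {
        return (n == 0) ? a : fibRec(n - 1, b, a.add(b));
    }

    // The same computation with the tail call eliminated by hand.
    static BigInteger fibLoop(int n) {
        BigInteger a = BigInteger.ZERO;   // F(0)
        BigInteger b = BigInteger.ONE;    // F(1)
        while (n-- > 0) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }
}

fibRec(n, BigInteger.ZERO, BigInteger.ONE) and fibLoop(n) compute the same
F(n); only the loop survives large n on a stock JVM.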

It was easy for me to verify that the compiler was not ironing out the
doubly recursive algorithm (because it is so painfully slow compared to
the iterative O(n) algorithm). But how would I verify that the
compiler is ironing out the tail recursive algorithm?

Very difficult to do. The java->bytecode translator (javac) is unlikely to do
that (even in the rare cases where it would be legal for it to do so), so
reading bytecode is not going to help much. But getting details of what the
JIT has done to generate (say) IA32 instructions from the bytecode is difficult
(as far as I know the only way is to use JVMTI and a suitable disassembler).

-- chris
 

Chris Uppal

Simon said:
It will on second and subsequent invocations of the computation,

Yes, it will if you use a long-lived cache. I was (unconsciously) only
thinking of the case where the cache is allocated and used only for one
top-level computation, and then discarded. "Transient memoisation" as it
were...

-- chris
 

Eric Sosman

Ingo said:
Way too fat, like most Java programs.

How about the following algorithm? (You may want to convert it to real
Java)

bigint fib(bigint n) {
    if (n<2) then return 1;
    bigint f1 = 1, f2 = 1, i = 2, f3;
    while (i < n) {
        i++; f3 = f1; f1 = f2; f2 += f3;
    }
    return f1+f2;
}

I believe you're off by one here: fib(2) should return 1,
not 2, and fib(3) should return 2, not 3, and so on. And to get
F(0) = 0 the base case would need to be something like

    if (n < 2)
        return n <= 0 ? 0 : 1;

This function should compute the first 1000 numbers in no time. Even
the first 100000.

fib(100000) is a 69424-bit number, and adding numbers of
that size is likely to take more than "no time."

Still, the iterative solution beats the pants off recursion.
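
For reference, that bit count follows from Binet's formula,
F(n) ≈ φ^n / √5 with φ = (1 + √5)/2:

\log_2 F(100000) \approx 100000 \log_2 \varphi - \log_2 \sqrt{5}
                 \approx 69424.2 - 1.2 \approx 69423.0,

so F(100000) lies between 2^69423 and 2^69424 and indeed occupies 69424 bits.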
 

Chris Uppal

Eric said:
fib(100000) is a 69424-bit number, and adding numbers of
that size is likely to take more than "no time."

Still, the iterative solution beats the pants off recursion.

On my machine the iterative form takes ~ 0.024 seconds for fib(10,000). The
memoised recursive form takes around 0.09 seconds.

Not really a trouser-ripping difference in speed. What matters more is that
the recursive form blows the default stack at around fib(8,000), and the
default heap somewhere before fib(40,000).

I assume that the increased GC load of the memoised version explains the
difference in speeds -- which otherwise seems implausibly great (the BigInteger
adds should /swamp/ the array accesses and null-tests).

-- chris
 

Chris Uppal

I said:
On my machine the iterative form takes ~ 0.024 seconds for fib(10,000).
The memoised recursive form takes around 0.09 seconds.

Forgot to mention that the iterative form does fib(100,000) in about 2 seconds.
I think the memoised recursive version would require more heap space (~500MB)
than I can give it on this machine.

-- chris
 

Eric Sosman

Chris Uppal wrote on 11/07/06 10:47:
Forgot to mention that the iterative form does fib(100,000) in about 2 seconds.
I think the memoised recursive version would require more heap space (~500MB)
than I can give it on this machine.

By "memoized" I guess you mean caching the result of an
already-computed fib(), and caching ~100000 BigInteger results
(many of them rather large) would indeed chew up a lot of RAM.

So we don our thinking caps, and we note that a cached
value will be used at most twice (in any one fib(n) computation),
and never again: the cache entries for greater argument values
will "intercept" the recursion before it gets down to low
numbers for a third time. That is, only a small part of the
cache is active, and the rest can be discarded after just a
few references.

So we decide to implement a cache that discards entries
when they're no longer needed. A little thought convinces us
that the cache never needs more than two entries: one for
fib(k-1) and one for fib(k-2); once fib(k) is known, fib(k-2)
won't be needed again and fib(k) can supplant it in the cache.

The next thing we notice (we're very observant, we are)
is that the searches in this two-element cache are very simple.
They are always successful except for the very first time, and
in fact we always know what the search key will be: k-1 for
one term and k-2 for the other. Since those are the only two
entries in the cache, we can actually eliminate the keys and
do no searching at all.

So: What's the simplest data structure you can think of
that caches exactly two values and retrieves them in a purely
mechanical pattern without searching?

BigInteger fSubKMinus1;
BigInteger fSubKMinus2;

... and lo! "Memoization" transforms to iteration!
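
Spelled out in Java, that line of reasoning lands on the familiar loop; a
sketch using the two field names above (the surrounding class and method are
illustrative):

import java.math.BigInteger;

public class TwoEntryCache {
    static BigInteger fib(int n) {
        if (n < 2) {
            return n <= 0 ? BigInteger.ZERO : BigInteger.ONE;
        }
        BigInteger fSubKMinus2 = BigInteger.ZERO;  // F(k-2), initially F(0)
        BigInteger fSubKMinus1 = BigInteger.ONE;   // F(k-1), initially F(1)
        for (int k = 2; k <= n; k++) {
            BigInteger fSubK = fSubKMinus1.add(fSubKMinus2);  // F(k)
            fSubKMinus2 = fSubKMinus1;  // F(k-2) will never be needed again,
            fSubKMinus1 = fSubK;        // so F(k) supplants it in the "cache"
        }
        return fSubKMinus1;
    }
}

The "cache" has shrunk to two locals updated in a fixed pattern, which is
exactly the iterative algorithm from earlier in the thread.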
 

Piotr Kobzda

Eric said:
Ingo Menger wrote:
[...]
bigint fib(bigint n) {
    if (n<2) then return 1;

if (n < 2)
    return n <= 0 ? 0 : 1;

Or, handling negative arguments as well:

if (n < 2)
    return n >= 0 ? n :
        n % 2 == 0 ? -fib(-n) : fib(-n);

I believe you're off by one here: fib(2) should return 1,
not 2, and fib(3) should return 2, not 3, and so on.

Returning f2 instead of f1+f2 would fix that.

fib(100000) is a 69424-bit number, and adding numbers of
that size is likely to take more than "no time."

My PM 1.8G needs 1.448 sec. to compute that.

Still, the iterative solution beats the pants off recursion.

Yup. So then, in comparison with definitely more than "two and a
quarter million years", the iterative result could be seen as "no time",
couldn't it? :)


piotr
 

Arne Vajhøj

Stefan said:
The relation has already been explained to you.

No - actually it was not.
The example was given in order to show that

»The evaluation might be done at compile time,
even when templates are not used«

Your example shows that compilers can do code
optimization.

Which is something rather well known.

The example starting this subthread was about
implementing a recursive algorithm using C++ templates.

The only thing they have in common, as far as I can see, is that a
compiler does something.

Which is not enough to make it relevant in any way.

Arne
 

Chris Uppal

Eric said:
... and lo! "Memoization" transforms to iteration!

That's a nice line of reasoning.


"And for my next trick..." <drum roll/>

"Ackermann's !"

;-)

-- chris
 

Ingo Menger

Eric said:
Ingo Menger wrote:
I believe you're off by one here: fib(2) should return 1,
not 2, and fib(3) should return 2, not 3, and so on.

Ok, just initialize f1 to the desired result of fib(0), then.

fib(100000) is a 69424-bit number, and adding numbers of
that size is likely to take more than "no time."

O(n) is nothing compared to O(n²) :)
 

Daniel Pitts

Ingo said:
Ok, just initialize f1 to the desired result of fib(0), then.



O(n) is nothing compared to O(n²) :)

There is an O(log n) solution, which is even less than O(n).
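
One such O(log n) scheme (counting arithmetic operations) is "fast doubling",
based on the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and
F(2k+1) = F(k)² + F(k+1)². A minimal BigInteger sketch, not taken from any
post here:

import java.math.BigInteger;

public class FastDoubling {
    // Returns { F(n), F(n+1) }.
    static BigInteger[] fibPair(long n) {
        if (n == 0) {
            return new BigInteger[] { BigInteger.ZERO, BigInteger.ONE };
        }
        BigInteger[] half = fibPair(n / 2);      // { F(k), F(k+1) }, k = n/2
        BigInteger fk = half[0], fk1 = half[1];
        BigInteger f2k  = fk.multiply(fk1.shiftLeft(1).subtract(fk));  // F(2k)
        BigInteger f2k1 = fk.multiply(fk).add(fk1.multiply(fk1));      // F(2k+1)
        return (n % 2 == 0)
                ? new BigInteger[] { f2k, f2k1 }
                : new BigInteger[] { f2k1, f2k.add(f2k1) };
    }

    static BigInteger fib(long n) {
        return fibPair(n)[0];
    }
}

Only about log2(n) doubling steps are needed, although the BigInteger
multiplications themselves still get more expensive as the numbers grow.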
 

Chris Uppal

Piotr said:
Yup. So then, in comparison with definitely more than "two and a
quarter million years", the iterative result could be seen as "no time",
couldn't it? :)

<grin/>

-- chris

P.S. Piotr, I've tried to reply to your email, but my replies have bounced.
Do you have a different email address I could use ? (Send it privately if you
don't want to mention it here). Or if you can translate this rejection
message from the bounced response:
550 Brak rekordu SPF/MX nadawcy lub bledna autoryzacja SMTP!
then maybe I could get my ISP's support dept to look at it.

-- c
 

Piotr Kobzda

Chris said:
P.S. Piotr, I've tried to reply to your email, but my replies have bounced.

Hmmm... Strange...
Do you have a different email address I could use ? (Send it privately if you
don't want to mention it here).

Try this one: pikob (at) op (dot) pl
Or if you can translate this rejection
message from the bounced response:
550 Brak rekordu SPF/MX nadawcy lub bledna autoryzacja SMTP!

550 Missing SPF/MX record of the sender or invalid SMTP authorization!
then maybe I could get my ISP's support dept to look at it.

If you could, that would be nice. I'll also send it to my e-mail
service provider's support. Thank you.

piotr
 

Karl Uppiano

Ingo said:
O(n) is nothing compared to O(n²) :)

But the algorithm that was being discussed was not O(n²) either... :p
 

Chris Smith

Chris Uppal said:
Very difficult to do. The java->bytecode translator (javac) is unlikely to do
that (even in the rare cases where it would be legal for it to do so), so
reading bytecode is not going to help much. But getting details of what the
JIT has done to generate (say) IA32 instructions from the bytecode is difficult
(as far as I know the only way is to use JVMTI and a suitable disassembler).

I'm interested in whether the Hotspot JIT compiler ever actually does
this. As an added snafu, note that the compiler would need to check
that no code inside the recursion ever tries to access the current
runtime stack (e.g., by creating an exception object, performing a
security check, etc.) before it performs that particular optimization.
 

仁者无敌

I think this is better

static long[][] getMatrix(long n) {
    // Computes y^n, where y is the Fibonacci Q-matrix {{0,1},{1,1}},
    // by binary (square-and-multiply) exponentiation.
    long[][] r = {{1, 0}, {0, 1}};   // result accumulator, starts as identity
    long[][] y = {{0, 1}, {1, 1}};
    long a, b, c, d;

    while (n > 0) {
        if (n % 2 == 1) {
            // r = r * y
            a=r[0][0]*y[0][0]+r[0][1]*y[1][0];
            b=r[0][0]*y[0][1]+r[0][1]*y[1][1];
            c=r[1][0]*y[0][0]+r[1][1]*y[1][0];
            d=r[1][0]*y[0][1]+r[1][1]*y[1][1];
            r[0][0]=a;
            r[0][1]=b;
            r[1][0]=c;
            r[1][1]=d;
        }
        // y = y * y
        a=y[0][0]*y[0][0]+y[0][1]*y[1][0];
        b=y[0][0]*y[0][1]+y[0][1]*y[1][1];
        c=y[1][0]*y[0][0]+y[1][1]*y[1][0];
        d=y[1][0]*y[0][1]+y[1][1]*y[1][1];
        y[0][0]=a;
        y[0][1]=b;
        y[1][0]=c;
        y[1][1]=d;

        n >>>= 1;
    }
    return r;
}

static long fibo(long i) {
    if (i < 1) throw new IllegalArgumentException("Error Input");
    if (i == 1L) return 1;
    if (i == 2L) return 1;
    // For i >= 3, getMatrix(i-2) = {{F(i-3), F(i-2)}, {F(i-2), F(i-1)}},
    // so the sum of its bottom row is F(i).
    long[][] temp = getMatrix(i - 2);
    return temp[1][0] + temp[1][1];
}
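
One caveat with the long-based version above: the result only fits in a
signed 64-bit long up to fibo(92), so arguments like the fib(100000)
discussed earlier would need a BigInteger variant of the same idea. A quick
illustrative use (the main method is hypothetical):

public static void main(String[] args) {
    System.out.println(fibo(10));   // 55
    System.out.println(fibo(50));   // 12586269025
}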
 

Chris Uppal

仁者无敌 said:
a=y[0][0]*y[0][0]+y[0][1]*y[1][0];
b=y[0][0]*y[0][1]+y[0][1]*y[1][1];
c=y[1][0]*y[0][0]+y[1][1]*y[1][0];
d=y[1][0]*y[0][1]+y[1][1]*y[1][1];

Very pretty !

Not entirely sane, perhaps, but certainly pretty ;-)

-- chris
 
