# Re: Factorials

Discussion in 'Python' started by Phil Weldon, Sep 20, 2003.

1. ### Phil WeldonGuest

Even faster is a look-up table with, what, seventy entries? That would go
up to an integer of roughly 100 digits, probably more than any application would
need, and certainly the fastest possible method.
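In modern Python 3 terms, such a table is cheap to build once at start-up (a sketch; the roughly-seventy-entry cutoff follows the estimate above, since 69! is the largest factorial below 10**100):

```python
# Precompute 0! .. 69! once; 70! already exceeds 100 digits.
FACT_TABLE = [1]
for i in range(1, 70):
    FACT_TABLE.append(FACT_TABLE[-1] * i)

def fact_lookup(n):
    """O(1) factorial for 0 <= n <= 69, read straight from the table."""
    return FACT_TABLE[n]

print(fact_lookup(10))  # 3628800
```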

Phil Weldon,

"Andrew Wilkinson" <> wrote in message
news:3ee1fa98$0$45181$...
> Rogue9 wrote:
>
> > Pardon me but could anyone enlighten me if python2.2 has a math function
> > to call to do factorials of integers?If not then is there a code snippet
> > for doing such a thing available?I tried to write some code for this
> > myself but with little success.
> > Thanks
> > Lol

>
> Hi,
>
> The other replies give a much more readable way of getting the factorial,
> but if you're looking for the ultimate in speed then...
>
> def fac(n):
>     return reduce(long.__mul__, range(1,n+1), 1L)
>
> ...would probably be a bit quicker. I'm not sure how much quicker though.
>
> Anyway, hope this is of some use,
> Regards,
> Andrew Wilkinson

Phil Weldon, Sep 20, 2003

2. ### M-a-SGuest

> "Andrew Wilkinson" <> wrote in message
> >
> > def fac(n):
> >     return reduce(long.__mul__, range(1,n+1), 1L)
> >
> > ...would probably be a bit quicker. I'm not sure how much quicker though.
> >
> > Anyway, hope this is of some use,
> > Regards,
> > Andrew Wilkinson

def factorial_rec(n):
    if n <= 1:
        return 1
    else:
        return n*factorial_rec(n-1)

def factorial_iter(n):
    s = 1
    for i in range(2,n+1):
        s *= i
    return s

def factorial_rdc(n):
    return reduce(long.__mul__, range(1,n+1), 1L)
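For reference, here is how the same three variants might read in Python 3, where reduce moved to functools, plain int is arbitrary-precision (so the 1L literal is gone), and operator.mul stands in for long.__mul__ (a sketch, not part of the original post):

```python
from functools import reduce
from operator import mul

def factorial_rec(n):
    # Recursive variant: n! = n * (n-1)!
    return 1 if n <= 1 else n * factorial_rec(n - 1)

def factorial_iter(n):
    # Iterative variant: accumulate the product in a loop
    s = 1
    for i in range(2, n + 1):
        s *= i
    return s

def factorial_rdc(n):
    # reduce-based variant, seeded with 1 so that 0! == 1
    return reduce(mul, range(1, n + 1), 1)

assert factorial_rec(10) == factorial_iter(10) == factorial_rdc(10) == 3628800
```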

Best times (seconds) on Compaq 5430US, Pentium 4, 1.8GHz, 512MB
Windows XP, Python 2.3 (#46, Jul 29 2003, 18:54:32) [MSC v.1200 32 bit (Intel)] on win32

Calculations of 950!, 1000 times each (xrange(1000)) --

recursion ..... 8.01
reduction ..... 4.20
iteration ..... 3.79

M-a-S

M-a-S, Sep 20, 2003

3. ### Bengt RichterGuest

On Sat, 20 Sep 2003 07:13:54 GMT, "M-a-S" <> wrote:

>> "Andrew Wilkinson" <> wrote in message
>> >
>> > def fac(n):
>> >     return reduce(long.__mul__, range(1,n+1), 1L)
>> >
>> > ...would probably be a bit quicker. I'm not sure how much quicker though.
>> >
>> > Anyway, hope this is of some use,
>> > Regards,
>> > Andrew Wilkinson

>
>def factorial_rec(n):
>    if n <= 1:
>        return 1
>    else:
>        return n*factorial_rec(n-1)
>
>def factorial_iter(n):
>    s = 1
>    for i in range(2,n+1):
>        s *= i
>    return s
>
>def factorial_rdc(n):
>    return reduce(long.__mul__, range(1,n+1), 1L)
>
>
>Best times (seconds) on Compaq 5430US, Pentium 4, 1.8GHz, 512MB
>Windows XP, Python 2.3 (#46, Jul 29 2003, 18:54:32) [MSC v.1200 32 bit (Intel)] on win32
>
>Calculations of 950!, 1000 times each (xrange(1000)) --
>
>recursion ..... 8.01
>reduction ..... 4.20
>iteration ..... 3.79
>

You could also let the factorial benefit from its own experience, e.g.,

>>> def factorial(n, cache={}):
...     fn = cache.get(n)
...     if fn: print 'got cached %s!'%n; return fn
...     if n<2: cache[n]=1; return 1
...     return cache.setdefault(n, n*factorial(n-1))
...
>>> factorial(4)
24
>>> factorial(4)
got cached 4!
24
>>> factorial(3)
got cached 3!
6
>>> factorial(6)
got cached 4!
720
>>> factorial(7)
got cached 6!
5040
>>> factorial(7)
got cached 7!
5040
>>> factorial(10)
got cached 7!
3628800
>>> factorial(9)
got cached 9!
362880
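The same let-the-function-remember trick later became a one-liner via functools.lru_cache (Python 3.2 and later; a sketch, not part of the original post):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def factorial(n):
    # Each distinct n is computed once; repeat calls hit the cache.
    return 1 if n < 2 else n * factorial(n - 1)

print(factorial(6))   # 720
print(factorial(10))  # 3628800; reuses the cached 6! on the way up
```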

Or more concisely:

>>> def fact(n, cache={}):
...     return cache.get(n) or cache.setdefault(n, n<2 and 1 or n*fact(n-1))
...
>>> for i in range(10): print i, fact(i)
...
0 1
1 1
2 2
3 6
4 24
5 120
6 720
7 5040
8 40320
9 362880
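The `or`-shortcut above relies on every cached value being truthy, which happens to hold for factorials but not in general; an explicit membership test (a modern Python 3 sketch) avoids that trap:

```python
def fact(n, cache={}):
    # Explicit "in" test instead of truthiness, so a cached falsy
    # value would not be silently recomputed.
    if n not in cache:
        cache[n] = 1 if n < 2 else n * fact(n - 1)
    return cache[n]

print(fact(9))  # 362880
```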

Regards,
Bengt Richter

Bengt Richter, Sep 20, 2003
4. ### Julian TibbleGuest

In article <S6Tab.2$>, M-a-S wrote:
> def factorial_iter(n):
>     s = 1
>     for i in range(2,n+1):
>         s *= i
>     return s

<snip>

> Calculations of 950!, 1000 times each (xrange(1000)) --
>
> recursion ..... 8.01
> reduction ..... 4.20
> iteration ..... 3.79

It would be even faster if you used xrange in the factorial function.
However, reduce apparently builds the list anyway, and so doesn't
benefit from the use of xrange:

recursion - 4.7
xrecursion - 0.5
reduction - 5.2
xreduction - 5.2

Can reduce be "fixed"?

Julian

Code I used:
def recursion(n):
    s = 1
    for i in range(2, n+1):
        s *= i
    return s

def xrecursion(n):
    s = 1
    for i in xrange(2, n+1):
        s *= 1
    return s

def reduction(n):
return reduce(long.__mul__, range(2, n+1), 1L)

def xreduction(n):
return reduce(long.__mul__, xrange(2, n+1), 1L)

methods = [ recursion, xrecursion, reduction, xreduction]

import time

for method in methods:
    a = time.time()
    for i in xrange(1000):
        method(950)
    b = time.time()
    print "%s - %.1f" % (method.__name__, b - a)
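A present-day version of this harness would lean on the timeit module (which was new in 2.3) rather than raw time.time() differences; a Python 3 sketch, with operator.mul standing in for long.__mul__:

```python
import timeit
from functools import reduce
from operator import mul

def iteration(n):
    # Plain loop accumulation
    s = 1
    for i in range(2, n + 1):
        s *= i
    return s

def reduction(n):
    # reduce-based product over the same range
    return reduce(mul, range(2, n + 1), 1)

for method in (iteration, reduction):
    # timeit handles the repeat loop and uses the best available clock
    t = timeit.timeit(lambda: method(950), number=100)
    print("%s - %.2f" % (method.__name__, t))
```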

Julian Tibble, Sep 21, 2003
5. ### Julian TibbleGuest

In article <>, Julian Tibble wrote:
> recursion - 4.7
> xrecursion - 0.5
> reduction - 5.2
> xreduction - 5.2

Whoops!!!! Make that iteration, not recursion.

I do know the difference...honestly

Julian

Julian Tibble, Sep 21, 2003
6. ### M-a-SGuest

No difference on my computer (see my note at the end):

times:
recursion 6.66
reduction 3.50
iteration 3.16 (for i in range(2,n+1))
xteration 3.16 (for i in xrange(2,n+1))

But when I added another function (it's a bit slower than its range/xrange partners)

def factorial_niter(n):
    s = 1
    i = 2
    while i<=n:
        s *= i
        i += 1
    return s

ALL the times increased, probably because of the grown dictionary.

best times:
recursion 6.77
reduction 3.51
iteration 3.20
xteration 3.17
nteration 3.36

M-a-S

"Julian Tibble" <> wrote in message news:...
> <...>
> (code I used
> def recursion(n):
>     s = 1
>     for i in range(2, n+1):
>         s *= i
>     return s
>
> def xrecursion(n):
>     s = 1
>     for i in xrange(2, n+1):
>         s *= 1
>     return s

s *= 1 <--------------------------------- maybe this is your difference!

M-a-S, Sep 22, 2003
7. ### Julian TibbleGuest

In article <fEJbb.61$>, M-a-S wrote:
> No difference on my computer (see my note at the end):

<snip>

> s *= 1 <--------------------------------- maybe this is your difference!

oh dear
damn typos

Julian Tibble, Sep 23, 2003
8. ### M-a-SGuest

"Julian Tibble" <> wrote in message news:...
> In article <fEJbb.61$>, M-a-S wrote:
> > No difference on my computer (see my note at the end):

>
> <snip>
>
> > s *= 1 <--------------------------------- maybe this is your difference!

>
> oh dear
> damn typos

That's why I had this in my code:

assert f_rec == f_iter == f_xter == f_nter == f_rdc

for all the factorial values.
And I like Python's way of expressing multiple relations!
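A chained comparison like that expands to pairwise tests joined with `and`, each operand evaluated only once; for instance:

```python
f_rec = f_iter = f_rdc = 120

# a == b == c means (a == b) and (b == c)
assert f_rec == f_iter == f_rdc
assert not (3 == 3 == 4)  # second link fails, so the chain is False
```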

M-a-S

M-a-S, Sep 23, 2003
9. ### Jacek GenerowiczGuest

(Bengt Richter) writes:

> You could also let the factorial benefit from its own experience, e.g.,
>
> >>> def factorial(n, cache={}):
> ...     fn = cache.get(n)
> ...     if fn: print 'got cached %s!'%n; return fn
> ...     if n<2: cache[n]=1; return 1
> ...     return cache.setdefault(n, n*factorial(n-1))
> ...

Or, you could let _any_ (keywordless) function benefit from its own
experience:

# General memoizer: takes a function and returns another function which
# has the same semantics as the original, but caches previously
# calculated values
def memoize(callable):
    cache = {}
    def proxy(*args):
        try: return cache[args]
        except KeyError: return cache.setdefault(args, callable(*args))
    return proxy

# A quick and dirty timer utility (timeit module in 2.3 ?), just to
# help us see how we're doing.
import time
def timer(fn, *args):
    start = time.time()
    val = fn(*args)
    stop = time.time()
    print "Took", stop-start, "seconds ..."
    return val

# The function we want to use
def factorial_rec(n):
    if n <= 1:
        return 1
    else:
        return n*factorial_rec(n-1)

# Now, make a cache-enabled version of your function
fact = memoize(factorial_rec)
# ... and use that, instead of the original ...

# See how long the bare function takes
print timer(factorial_rec, 100)

# The first call to the proxy will be slightly slower than the bare function
print timer(fact,100)

# But the second call with the same argument will be much faster
print timer(fact,100)

==================================================
Output:

Took 0.000393986701965 seconds ...
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
Took 0.000407934188843 seconds ...
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
Took 4.88758087158e-06 seconds ...
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000

Jacek Generowicz, Sep 23, 2003
10. ### Alex MartelliGuest

Jacek Generowicz wrote:
...
> Or, you could let _any_ (keywordless) function benefit from its own
> experience:

This "_any_ (keywordless) function" is a substantial overbid. The
following implementation:

> # General memoizer: takes a function and returns another function which
> # has the same semantics as the original, but caches previously
> # calculated values
> def memoize(callable):
>     cache = {}
>     def proxy(*args):
>         try: return cache[args]
>         except KeyError: return cache.setdefault(args, callable(*args))
>     return proxy

only works for a "PURE" function (e.g., not random.random() nor time.time(),
even though both ARE obviously part of the "any keywordless function"
set!-) *WITHOUT* unhashable arguments (e.g., not the new built-in 'sum',
even though it IS pure when called with a list argument -- because when
it tries to index cache with the list as part of the key, you'll get a
TypeError). Basically, it's fine if arguments (if any) are numbers,
strings, or hashable tuples, but it would be pushing your luck A LOT to
use it in any other case -- it's particularly dangerous, because you
will get a silent misbehavior, in a case such as:

class Foo:
    def __init__(self, x=0, y=0):
        self.x=x
        self.y=y

def xplusy(somefoo):
    return somefoo.x + somefoo.y

f = Foo()
print xplusy(f)
f.x = 23
print xplusy(f)

# so far so good, but now...:
xplusy = memoize(xplusy)
print xplusy(f)
# still LOOKS fine, but, check this out...:
f.y = 100
print xplusy(f)
# OOPS! xplusy's "memory" is now subtly incorrect...!!!

Unfortunately f "is hashable" (because it doesn't define a __cmp__)
but is NOT immutable, and xplusy, while pure, DOES depend on some
attributes of its (mutable) argument. The resulting behavior may
be rather unexpected, and a subtle bug indeed to find...
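Of the two failure modes described here, the unhashable-argument one is at least loud rather than silent, as a quick Python 3 sketch of the same memoizer shows:

```python
def memoize(fn):
    cache = {}
    def proxy(*args):
        try:
            return cache[args]
        except KeyError:
            return cache.setdefault(args, fn(*args))
    return proxy

msum = memoize(sum)
try:
    msum([1, 2, 3])       # a list can't be a dict key: TypeError, not a wrong answer
except TypeError:
    print("unhashable argument rejected loudly")
print(msum((1, 2, 3)))    # tuples are hashable: prints 6
```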

Alex

Alex Martelli, Sep 23, 2003
11. ### Jacek GenerowiczGuest

Alex Martelli <> writes:

> Jacek Generowicz wrote:
> ...
> > Or, you could let _any_ (keywordless) function benefit from its own
> > experience:

>
> This "_any_ (keywordless) function" is a substantial overbid.

Of course.

Sentence written in haste, and therefore hopelessly inaccurate.

Mea culpa.

Jacek Generowicz, Sep 23, 2003