"pow" (power) function

Russ

I have a couple of questions for the number crunchers out there:

Does "pow(x,2)" simply square x, or does it first compute logarithms
(as would be necessary if the exponent were not an integer)?

Does "x**0.5" use the same algorithm as "sqrt(x)", or does it use some
other (perhaps less efficient) algorithm based on logarithms?

Thanks,
Russ
 
sam

I'm not sure which algorithm, but I assume that all Python does is
call the underlying C pow() function.

Sam
 
Paul Rubin

Schüle Daniel said:
4.2945041656494141

interesting result
seems that ** computes faster

Maybe "111**0.3" parses faster than pow(111,0.3), if timeit uses eval.
Also, pow() may incur more subroutine call overhead--better check
the bytecode for both versions.
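
For instance (a sketch; the function names are just for illustration), dis will show the difference:

import dis

def via_operator(x):
    return x ** 0.3       # compiles to two loads plus a BINARY_POWER opcode

def via_pow(x):
    return pow(x, 0.3)    # compiles to LOAD_GLOBAL pow plus CALL_FUNCTION

dis.dis(via_operator)
dis.dis(via_pow)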
 
Ben Cartwright

Russ said:
I have a couple of questions for the number crunchers out there:


Sure, but the answers depend on the underlying Python implementation.
And if we're talking CPython, they also depend on the underlying C
implementation of libm (i.e., math.h).

Does "pow(x,2)" simply square x, or does it first compute logarithms
(as would be necessary if the exponent were not an integer)?


The former, using binary exponentiation (quite fast), assuming x is an
int or long.
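
For reference, a pure-Python sketch of the square-and-multiply idea (not the actual int_pow()/long_pow() code, just the algorithm they use):

def binary_pow(x, n):
    # Repeated squaring: about log2(n) squarings instead of n-1 multiplies.
    result = 1
    while n > 0:
        if n & 1:          # low bit of the exponent set: fold x into the result
            result *= x
        x *= x             # square the base
        n >>= 1            # move to the next bit of the exponent
    return result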

If x is a float, Python coerces the 2 to 2.0, and CPython's float_pow()
function is called. This function calls libm's pow(), which in turn
uses logarithms.

Does "x**0.5" use the same algorithm as "sqrt(x)", or does it use some
other (perhaps less efficient) algorithm based on logarithms?


The latter, and that algorithm is libm's pow(). Except for a few
special cases that Python handles, all floating point exponentiation is
left to libm. Checking to see if the exponent is 0.5 is not one of
those special cases.

If you're curious, download the Python source, open up
Objects/floatobject.c, and check out float_pow(). The binary
exponentiation algorithms are in Objects/intobject.c:int_pow() and
Objects/longobject.c:long_pow().

The 0.5 special check (and any other special case optimizations) could,
in theory, be performed in the platform's libm. I'm not familiar
enough with any libm implementations to comment on whether this is ever
done, or if it's even worth doing... though I suspect that the 0.5 case
is not.

Hope that helps,
--Ben
 
Schüle Daniel

Russ said:
I have a couple of questions for the number crunchers out there:

Does "pow(x,2)" simply square x, or does it first compute logarithms
(as would be necessary if the exponent were not an integer)?

Does "x**0.5" use the same algorithm as "sqrt(x)", or does it use some
other (perhaps less efficient) algorithm based on logarithms?

you can try it with timeit:
4.2945041656494141

interesting result
seems that ** computes faster
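
A comparison along these lines (the exact statements may have differed), using the 111**111 example mentioned later in the thread:

>>> from timeit import Timer
>>> Timer("111**111").timeit()        # the ** operator
>>> Timer("pow(111, 111)").timeit()   # the pow() builtin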
 
Russ

Ben said:
Russ wrote:


The former, using binary exponentiation (quite fast), assuming x is an
int or long.

If x is a float, Python coerces the 2 to 2.0, and CPython's float_pow()
function is called. This function calls libm's pow(), which in turn
uses logarithms.

I just did a little time test (which I should have done *before* my
original post!), and 2.0**2 seems to be about twice as fast as
pow(2.0,2). That seems consistent with your claim above.

I'm a bit surprised that pow() would use logarithms even if the
exponent is an integer. I suppose that just checking for an integer
exponent could blow away the gain that would be achieved by avoiding
logarithms. On the other hand, I would think that using logarithms
could introduce a tiny error (e.g., pow(2.0,2) = 3.9999999996 <- made
up result) that wouldn't occur with multiplication.

The latter, and that algorithm is libm's pow(). Except for a few
special cases that Python handles, all floating point exponentiation is
left to libm. Checking to see if the exponent is 0.5 is not one of
those special cases.

I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
Surprisingly, 2.0**0.5 seems to take around a third less time.

None of these differences are really significant unless one is doing
super-heavy-duty number crunching, of course, but I was just curious.
Thanks for the information.
 
Ben Cartwright

Russ said:
I just did a little time test (which I should have done *before* my
original post!), and 2.0**2 seems to be about twice as fast as
pow(2.0,2). That seems consistent with your claim above.


Actually, the fact that x**y is faster than pow(x, y) has nothing to do
with the int vs. float issue. It's due to the way Python
parses operators versus builtin functions. Paul Rubin hit the nail on
the head when he suggested you check the bytecode:
1 0 LOAD_FAST 0 (x)
3 LOAD_FAST 1 (y)
6 BINARY_POWER
7 RETURN_VALUE

1 0 LOAD_GLOBAL 0 (pow)
3 LOAD_FAST 0 (x)
6 LOAD_FAST 1 (y)
9 CALL_FUNCTION 2
12 RETURN_VALUE

LOAD_GLOBAL + CALL_FUNCTION is more expensive than LOAD_FAST,
especially when you're doing it a million times (which, coincidentally,
timeit does).

Anyway, if you want to see the int vs. float issue in action, try this:
0.37949818370600497

As you can see, the int version is much faster than the float version.
The last two cases, which also use the float version, have an
additional performance hit due to type coercion. The relative speed
differences are similar when using pow():
0.55198819605811877
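
Presumably the timed statements were along these lines, covering the pure-int, pure-float, and mixed cases:

>>> from timeit import Timer
>>> Timer("2**2").timeit()        # int ** int: int_pow()
>>> Timer("2.0**2.0").timeit()    # float ** float: float_pow() -> libm pow()
>>> Timer("2.0**2").timeit()      # mixed: the 2 is coerced to 2.0 first
>>> Timer("2**2.0").timeit()      # mixed the other way
>>> Timer("pow(2, 2)").timeit()   # the same cases via the builtin
>>> Timer("pow(2.0, 2.0)").timeit()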

I'm a bit surprised that pow() would use logarithms even if the
exponent is an integer. I suppose that just checking for an integer
exponent could blow away the gain that would be achieved by avoiding
logarithms. On the other hand, I would think that using logarithms
could introduce a tiny error (e.g., pow(2.0,2) = 3.9999999996 <- made
up result) that wouldn't occur with multiplication.


These are good questions to ask an expert in floating point arithmetic.
Which I'm not. :)

I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
Surprisingly, 2.0**0.5 seems to take around a third less time.


Again, this is because of the operator vs. function lookup issue.
pow(2.0, 0.5) vs. sqrt(2.0) is a better comparison:
0.46649096722239847
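
Presumably something like:

>>> from timeit import Timer
>>> Timer("pow(2.0, 0.5)").timeit()
>>> Timer("sqrt(2.0)", "from math import sqrt").timeit()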

None of these differences are really significant unless one is doing
super-heavy-duty number crunching, of course, but I was just curious.
Thanks for the information.


Welcome. :)

--Ben
 
Mike Ressler

Anyway, if you want to see the int vs. float issue in action, try this:

0.37949818370600497

As you can see, the int version is much faster than the float version.

I have a counterexample. In the original timeit example, 111**111 was
used. When I run that
0.36576294898986816

The pow and ** on integers take 10 seconds, but the float ** takes only
0.36 seconds. (The pow with floats takes ~ 0.7 seconds). Clearly
typecasting to floats is coming in here somewhere. (Python 2.4.1 on
Linux FC4.)

Mike
 
Dennis Lee Bieber

0.36576294898986816

The pow and ** on integers take 10 seconds, but the float ** takes only
0.36 seconds. (The pow with floats takes ~ 0.7 seconds). Clearly

The integer-only case is performing long-int computations, which are
software algorithms... The float case is using hardware floating point.

 
Ben Cartwright

Mike said:
0.36576294898986816

The pow and ** on integers take 10 seconds, but the float ** takes only
0.36 seconds. (The pow with floats takes ~ 0.7 seconds). Clearly
typecasting to floats is coming in here somewhere. (Python 2.4.1 on
Linux FC4.)


No, there is no floating point math going on when the operands to **
are both int or long. If there were, the following two commands would
have identical output:
107362012888474225801214565046695501959850723994224804804775911
175625076195783347022491226170093634621466103743092986967777786
330067310159463303558666910091026017785587295539622142057315437
069730229375357546494103400699864397711L

107362012888474224720018046104893130890742038145054486592605938
348914231670972887594279283213585412743799339280552157756096410
839752020853099983680499334815422669184408961411319810030383904
886446681757296875373689157536249282560L

The first result is accurate. Work it out by hand if you don't believe
me. ;-) The second suffers from inaccuracies due to floating point's
limited precision.

Of course, getting exact results with huge numbers isn't cheap,
computationally. Because there's no type in C to represent arbitrarily
huge numbers, Python implements its own, called "long". There's a fair
amount of memory allocation, bit shifting, and other monkey business
going on behind the scenes in longobject.c.

Whenever possible, Python uses C's built-in signed long int type (known
simply as "int" on the Python side, and implemented in intobject.c).
On my platform, C's signed long int is 32 bits, so values range from
-2147483648 to 2147483647. I.e., -(2**31) to (2**31)-1.

As long as your exponentiation result is in this range, Python uses
int_pow(). When it overflows, long_pow() takes over. Both functions
use the binary exponentiation algorithm, but long_pow() is naturally
slower:
2.4067177773399635

Floating point is a whole 'nother ball game:
0.33437446769630697
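
Something along these lines shows the three regimes:

>>> from timeit import Timer
>>> Timer("2**30").timeit()      # result fits in a C long: int_pow()
>>> Timer("2**100").timeit()     # result overflows: long_pow() takes over
>>> Timer("2.0**100").timeit()   # float: float_pow() -> libm's pow()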

--Ben
 
Terry Reedy

Mike Ressler said:
I have a counterexample. In the original timeit example, 111**111 was
used. When I run that

0.36576294898986816

The pow and ** on integers take 10 seconds, but the float ** takes only
0.36 seconds. (The pow with floats takes ~ 0.7 seconds). Clearly
typecasting to floats is coming in here somewhere. (Python 2.4.1 on
Linux FC4.)

For floats, f**g == exp(log(f**g)) == exp(g*log(f)) (with maybe further
algebraic manipulation, depending on the implementation). The time for
this should only be mildly dependent on the magnitudes of f and g.

The time for i**j, on the other hand, grows at least as fast as log(j). So
I should expect comparisons to depend on magnitudes, as you discovered.
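
For a sense of scale, the exact integer result in the 111**111 example is already large, so each multiplication in the square-and-multiply loop works on ever-bigger longs:

>>> len(str(111**111))   # digits in the exact integer result
228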

Terry Jan Reedy
 
Paul Rubin

Russ said:
I just did a little time test (which I should have done *before* my
original post!), and 2.0**2 seems to be about twice as fast as
pow(2.0,2). That seems consistent with your claim above...
I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
Surprisingly, 2.0**0.5 seems to take around a third less time.

I think the explanation is likely here:

Python 2.3.4 (#1, Feb 2 2005, 12:11:53)
1 0 LOAD_FAST 0 (x)
3 LOAD_CONST 1 (0.5)
6 BINARY_POWER
7 RETURN_VALUE
8 LOAD_CONST 0 (None)
11 RETURN_VALUE

See, x**.5 does two immediate loads and an inline BINARY_POWER bytecode.
1 0 LOAD_GLOBAL 0 (sqrt)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 RETURN_VALUE
10 LOAD_CONST 0 (None)
13 RETURN_VALUE

sqrt(x), on the other hand, does a lookup of 'sqrt' in the global
namespace, then does a Python function call, both of which likely
are almost as expensive as the C library pow(...) call.

If you do something like

from math import sqrt

def h(x, sqrt=sqrt):
    return sqrt(x)

you replace the LOAD_GLOBAL with a LOAD_FAST and that might give a
slight speedup:
2 0 LOAD_FAST 1 (sqrt)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 RETURN_VALUE
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
 
David M. Cooke

Russ said:
I just did a little time test (which I should have done *before* my
original post!), and 2.0**2 seems to be about twice as fast as
pow(2.0,2). That seems consistent with your claim above.

I'm a bit surprised that pow() would use logarithms even if the
exponent is an integer. I suppose that just checking for an integer
exponent could blow away the gain that would be achieved by avoiding
logarithms. On the other hand, I would think that using logarithms
could introduce a tiny error (e.g., pow(2.0,2) = 3.9999999996 <- made
up result) that wouldn't occur with multiplication.

It depends on the libm implementation of pow() whether logarithms are
used for integer exponents. I'm looking at glibc's (the libc used on
Linux) implementation for Intel processors, and it does optimize
integers. That routine is written in assembly language, btw.
I just did another little time test comparing 2.0**0.5 with sqrt(2.0).
Surprisingly, 2.0**0.5 seems to take around a third less time.

None of these differences are really significant unless one is doing
super-heavy-duty number crunching, of course, but I was just curious.
Thanks for the information.

And if you are, you'd likely be doing it on more than one number, in
which case you'd probably want to use numpy. We've optimized x**n so
that it does handle n=0.5 and integers specially; it makes more sense
to do this for an array of numbers where you can do the special
manipulation of the exponent, and then apply that to all the numbers
in the array at once.
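
For example (a sketch, assuming NumPy is available):

import numpy as np

a = np.linspace(1.0, 1000.0, 1000000)   # a million floats
b = a ** 0.5        # the 0.5 exponent is handled specially, per the note above
c = np.sqrt(a)      # explicit elementwise square root
assert np.allclose(b, c)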
 
