An exponentiation function for int?

Discussion in 'C++' started by Steven T. Hatton, Oct 13, 2004.

  1. Did I miss something along the way, or is there no function in Standard C++
    to raise an (unsigned) int to a power of (unsigned) int returning
    (unsigned) int? I never gave this a second thought until today. I tried to
    do it, and discovered <cmath> std::pow() only takes floating-point types
    for the first argument. Sure, I could write one. I could have written at
    least 3 fundamentally different versions in the time it took to discover
    there isn't such a function.
    --
    "If our hypothesis is about anything and not about some one or more
    particular things, then our deductions constitute mathematics. Thus
    mathematics may be defined as the subject in which we never know what we
    are talking about, nor whether what we are saying is true." - Bertrand
    Russell
     
    Steven T. Hatton, Oct 13, 2004
    #1

  2. * Steven T. Hatton:
    > Did I miss something along the way, or is there no function in Standard C++
    > to raise an (unsigned) int to a power of (unsigned) int returning
    > (unsigned) int? I never gave this a second thought until today. I tried to
    > do it, and discovered <cmath> std::pow() only takes floating-point types
    > for the first argument. Sure, I could write one. I could have written at
    > least 3 fundamentally different versions in the time it took to discover
    > there isn't such a function.


    There isn't such a function. There is the C library pow, the C++ library
    valarray::pow and the C++ library complex::pow. Writing an unsigned integer
    version should, as you state, be fairly trivial; why not just wrap the C pow?
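    A minimal sketch of such a wrapper (the name upow and the rounding are
    illustrative additions, not part of any standard):

    ```cpp
    #include <cmath>

    // Minimal sketch of wrapping the C pow() for unsigned ints, as suggested
    // above. Rounding with +0.5 guards against the FPU returning a value such
    // as 124.999...; the result is only trustworthy while the true power fits
    // in a double's 53-bit significand.
    unsigned upow(unsigned base, unsigned exp)
    {
        return static_cast<unsigned>(
            std::pow(static_cast<double>(base), static_cast<double>(exp)) + 0.5);
    }
    ```

    For example, upow(2, 10) gives 1024.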

    --
    A: Because it messes up the order in which people normally read text.
    Q: Why is it such a bad thing?
    A: Top-posting.
    Q: What is the most annoying thing on usenet and in e-mail?
     
    Alf P. Steinbach, Oct 13, 2004
    #2

  3. Steven T. Hatton

    Ron Natalie Guest

    Steven T. Hatton wrote:
    > Did I miss something along the way, or is there no function in Standard C++
    > to raise an (unsigned) int to a power of (unsigned) int returning
    > (unsigned) int? I never gave this a second thought until today. I tried to
    > do it, and discovered <cmath> std::pow() only takes floating-point types
    > for the first argument. Sure, I could write one. I could have written at
    > least 3 fundamentally different versions in the time it took to discover
    > there isn't such a function.


    Correct, there isn't one. There just isn't that much call for it,
    I would suspect. You don't have to raise a number to a very high
    power before it starts overflowing.

    You either code fast, or you need to learn how to read the docs faster.
    It only took me about 20 seconds to load up the PDF of the Standard and
    find in 26.5 where it says that these are the overloads for pow:
    pow(float, float);
    pow(float, int);
    pow(double, double);
    pow(double, int);
    pow(long double, long double);
    pow(long double, int);

    I suspect your "integer pow" just iterates to do the exponentiation.
    Pow typically uses a log (which is the only real way to do a fractional
    exponent anyhow), which gives more nearly constant time.
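    The exp/log identity referred to here can be sketched as follows; real
    library implementations are far more careful about accuracy and edge
    cases, so this is purely illustrative:

    ```cpp
    #include <cmath>

    // Sketch of the identity b^e = exp(e * log(b)) that a typical pow()
    // builds on. Its cost does not grow with the size of the exponent, and
    // it handles fractional exponents. Only valid for base > 0.
    double pow_via_log(double base, double exp)
    {
        return std::exp(exp * std::log(base));
    }
    ```

    For instance, pow_via_log(2.0, 10.0) comes out as 1024 to within
    floating-point rounding.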
     
    Ron Natalie, Oct 13, 2004
    #3
  4. Ron Natalie wrote:

    > Steven T. Hatton wrote:
    >> Did I miss something along the way, or is there no function in Standard
    >> C++ to raise an (unsigned) int to a power of (unsigned) int returning
    >> (unsigned) int? I never gave this a second thought until today. I tried
    >> to do it, and discovered <cmath> std::pow() only takes floating-point
    >> types for the first argument. Sure, I could write one. I could have
    >> written at least 3 fundamentally different versions in the time it took
    >> to discover there isn't such a function.

    >
    > Correct, there isn't one. There just isn't that much call for it
    > I would suspect. You don't have to raise a number to a very high
    > power before it starts overflowing.


    What I'm doing is strictly integer math. I seriously doubt I will get into
    tensors of sufficient rank and order to overflow long int with the index
    range.

    > You either code fast, or you need to learn how to read the docs faster.
    > It only took me about 20 seconds to load up the PDF of the Standard and
    > find in 26.5 where it says that these are the overloads for pow:
    > pow(float, float);
    > pow(float, int)
    > pow(double, double)
    > pow(double, int)
    > pow(long double, long double)
    > pow(long double, int)


    I read that from the error output. I didn't need to grep the standard, but
    I did look it up there as well. It tells me what /is/ there, but not what
    isn't. Sometimes the Standard is a useful reference, and other times I run
    into the parts where it redefines 'definition' and proceeds with the
    redefined meaning in a less than consistent manner.

    > I suspect your "integer pow" just iterates to do the exponentiation.


    No, I actually used compile time recursion.

    > Pow typically uses a log (which is the only real way to do a fractional
    > exponent anyhow) which gives a more constant time.


    That's because you can add exponents in one operation rather than performing
    n + m individual operations needed to do the brute force calculation.
    Floating point numbers are represented in hardware in terms of significand
    (base) and exponent. Addition and subtraction are actually more expensive
    than multiplication and division.

    I'm not sure what the cost of converting between floating point and 2's
    complement is, but if I use pow() to do my integer math, I suspect it can
    add up. I really have to test it before I can make a determination. This
    is probably hardware sensitive more so than compiler or OS sensitive.
     
    Steven T. Hatton, Oct 14, 2004
    #4
  5. > > I suspect your "integer pow" just iterates to do the exponentiation.
    >
    > No, I actually used compile time recursion.


    Really? How interesting, can you please post the code for
    the function?

    Keith
     
    Keith H Duggar, Oct 14, 2004
    #5
  6. Keith H Duggar wrote:

    >> > I suspect your "integer pow" just iterates to do the exponentiation.

    >>
    >> No, I actually used compile time recursion.

    >
    > Really? How interesting, can you please post the code for
    > the function?
    >
    > Keith


    This is based on ideas from _C++_Templates:_The_Complete_Guide_

    http://www.josuttis.com/tmplbook/

    I really don't know if this has any advantages, nor do I know if it is even
    correct. I ran it through a few simple tests, but haven't revisited it
    since, as I am working on my testing infrastructure.
    /***************************************************************************
    * Copyright (C) 2004 by Steven T. Hatton *
    * *
    * *
    * This program is free software; you can redistribute it and/or modify *
    * it under the terms of the GNU General Public License as published by *
    * the Free Software Foundation; either version 2 of the License, or *
    * (at your option) any later version. *
    * *
    * This program is distributed in the hope that it will be useful, *
    * but WITHOUT ANY WARRANTY; without even the implied warranty of *
    * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
    * GNU General Public License for more details. *
    * *
    * You should have received a copy of the GNU General Public License *
    * along with this program; if not, write to the *
    * Free Software Foundation, Inc., *
    * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
    ***************************************************************************/
    #ifndef STH_TMATHPOWER_H
    #define STH_TMATHPOWER_H

    #include <cstddef>   // for std::size_t
    #include <stdexcept>

    namespace sth {
    namespace tmath {

    /**
    @author Steven T. Hatton
    */
    template <std::size_t Exponent_S, typename T>
    class PowerOf
    {
    public:
        static T eval(const T& base)
        {
            return base * PowerOf<Exponent_S - 1, T>::eval(base);
        }
    };

    template <typename T>
    class PowerOf<1, T>
    {
    public:
        static T eval(const T& base)
        {
            return base;
        }
    };

    template <typename T>
    class PowerOf<0, T>
    {
    public:
        static T eval(const T& base)
        {
            if (!base)
            {
                throw std::logic_error(
                    "sth::tmath::power<0>(0) is an indeterminate form.");
            }
            return 1;
        }
    };

    template <std::size_t Exponent_S, typename T>
    T power(const T& base)
    {
        return PowerOf<Exponent_S, T>::eval(base);
    }

    } // namespace tmath
    } // namespace sth

    #endif

     
    Steven T. Hatton, Oct 15, 2004
    #6
  7. ADDENDA: Re: An exponentiation function for int?

    Steven T. Hatton wrote:

    This is for the user to call instead of the member functions:

    template <std::size_t Exponent_S, typename T>
    T power(const T& base)
    {
        return PowerOf<Exponent_S, T>::eval(base);
    }

    It's used like this:

    size_t thirtytwo = power<5>(2);

    It has the advantage of deducing the type parameter T.

    NOTE: I should probably force the return type to be size_t. It may not be
    as general, but for my purposes, it's more reasonable.
     
    Steven T. Hatton, Oct 15, 2004
    #7
  8. > Floating point numbers are represented in hardware in terms of significand
    > (base) and exponent.


    I'm sure he appreciated that most elementary lesson. I'm
    sure anyone who knows exponentiation is done using
    logarithms knows how floating point numbers are stored.

    Also, I forgot to point out earlier that

    > Addition and subtraction are actually more expensive
    > than multiplication and division.


    is not correct. I know of no platform on which floating
    point addition is slower than multiplication. The fastest
    multiplications still take about 1.3 times the time of
    addition for operations in isolation.

    Now, under some caching conditions multiplication can
    approach and even equal the speed of addition. That is why
    many programmers simply consider floating addition and
    multiplication to be about the same speed.

    However, it is simply false to claim that addition is flatly
    more expensive.

    If however I'm wrong, and you have evidence to the contrary
    please pass it along (along with your code for a compile
    time recursive integer power function).
     
    Keith H Duggar, Oct 15, 2004
    #8
  9. Keith H Duggar wrote:

    >> Floating point numbers are represented in hardware in terms of
    >> significand (base) and exponent.

    >
    > I'm sure he appreciated that most elementary lesson. I'm
    > sure anyone who knows exponentiation is done using
    > logarithms knows how floating point numbers are stored.


    I was making the distinction clear.

    > Also, I forgot to point out earlier that
    >
    >> Addition and subtraction are actually more expensive
    >> than multiplication and division.

    >
    > is not correct. I know of no platform on which floating
    > point addition is slower than multiplication. The fastest
    > multiplications still take about 1.3 times the time of
    > addition for operations in isolation.
    >
    > Now, under some caching conditions multiplication can
    > approach and even equal the speed of addition. That is why
    > many programmers simply consider floating addition and
    > multiplication to be about the same speed.
    >
    > However, it is simply false to claim that addition is flatly
    > more expensive.


    My reason for saying that was based on what I learned a decade ago. Perhaps
    I should have worded my statement more clearly. The way Stallings put it
    in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
    multiplication and division are much simpler processes than addition and
    subtraction, as the following discussion indicates. ..."

    > If however I'm wrong, and you have evidence to the contrary
    > please pass it along (along with your code for a compile
    > time recursive integer power function).


    I already posted that.
     
    Steven T. Hatton, Oct 15, 2004
    #9
  10. "Steven T. Hatton" wrote:
    >
    >
    > My reason for saying that was based on what I learned a decade ago.


    ???
    A decade ago?

    A decade ago on most (if not all) CPUs the situation was:
    multiplication was much more expensive than addition.

    > Perhaps
    > I should have worded my statement more clearly. The way Stallings put it
    > in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
    > multiplication and division are much simpler processes than addition and
    > subtraction, as the following discussion indicates. ..."


    That doesn't sound right. It has the smell of some nasty and weird
    argumentation.
    For one: all schemes I know for multiplication require some additions.
    So in a sense addition is a building block for multiplication. How can
    addition then be slower than multiplication?
    On the other hand: I might not know all possible schemes for multiplication.

    Hmm. Is there a way to study the entire section?
    If not, what is the main point the above is based on?

    The only thing I can think of is:
    'simpler' is not used to mean: in terms of runtime efficiency,
    but is meant in terms of: creates fewer problems.
    And the reason for this is: while in add/subtract one has to
    first bring the exponents to the same value (which creates a
    problem if there is a large difference), no such thing is necessary
    in multiplication.
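    This alignment problem can be demonstrated directly: before two doubles
    can be added, the smaller operand's significand must be shifted until the
    exponents match, and with a large enough difference the smaller operand
    is shifted away entirely. A tiny illustrative helper (the name is made
    up for this example):

    ```cpp
    // Returns true when `small` is completely lost to exponent alignment:
    // its significand is shifted out before the significands are added.
    bool addition_is_lost(double big, double small)
    {
        return big + small == big;
    }
    ```

    For instance, addition_is_lost(1e16, 1.0) is true, because the exponents
    differ by about the full 53-bit significand width, while
    addition_is_lost(2.0, 1.0) is false.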


    --
    Karl Heinz Buchegger
     
    Karl Heinz Buchegger, Oct 15, 2004
    #10
  11. > > Really? How interesting, can you please post the code for
    > > the function?
    > >
    > > Keith

    >
    > This is based on ideas from _C++_Templates_The_Complet_Guide_
    >
    > http://www.josuttis.com/tmplbook/
    >

    ....

    Not to quibble, but this isn't really a "function", is it? It is
    a class implementation that happens to have a function you
    call. That's why I asked you to post the code, because had you
    built a compile-time recursive function that would have been
    most unique. However, had you said you had a compile-time
    recursive class that implements pow I would have pointed you
    to search the Google Groups archives for

    "pow() with integer exponents"

    which would take you to

    http://groups.google.com/groups?hl=...G=Search&meta=group%3Dcomp.lang.c%252B%252B.*

    where they discuss a few solutions and the tradeoffs. In
    addition, Cy Edmunds posted a generic solution which is
    actually faster (in terms of multiplication count) than the
    one you posted. Your solution does not utilize repeated
    squaring (an often-used technique) to reduce the number of
    multiplications from linear (as yours is) to something closer
    to logarithmic. Cy's solution doesn't handle the general
    case of repeated squaring, but he does optimize for some of
    the small powers.

    The iterative solution posted by Nejat Aydin implements
    repeated squaring and will be far faster than either generic
    solution for large powers (and probably even for small powers).
    You could implement a similar scheme using compile-time
    recursion if you wish, but for large powers this will lead to
    serious code bloat, and of course it will be limited to
    compile-time powers only.
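    For reference, the iterative repeated-squaring scheme described above can
    be sketched like this (I have not seen Nejat Aydin's actual post, so this
    is just the textbook binary-exponentiation algorithm):

    ```cpp
    #include <cstddef>

    // Standard binary exponentiation: O(log exp) multiplications, with the
    // exponent supplied at run time, so no template instantiation bloat.
    // Note this version returns 1 for ipow(0, 0); callers who consider 0^0
    // indeterminate must check for that themselves.
    std::size_t ipow(std::size_t base, std::size_t exp)
    {
        std::size_t result = 1;
        while (exp) {
            if (exp & 1)      // this bit of the exponent contributes base^(2^k)
                result *= base;
            base *= base;     // square for the next bit
            exp >>= 1;
        }
        return result;
    }
    ```

    For example, ipow(2, 10) reaches 1024 in four loop iterations, versus the
    nine multiplies of the naive loop.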
     
    Keith H Duggar, Oct 15, 2004
    #11
  12. Karl Heinz Buchegger wrote:

    > "Steven T. Hatton" wrote:


    >> Perhaps
    >> I should have worded my statement more clearly. The way Stallings put it
    >> in his _Computer_Organization_and_Architecture_3rd_Ed_ is "Floating-point
    >> multiplication and division are much simpler processes than addition and
    >> subtraction, as the following discussion indicates. ..."

    >
    > That doesn't sound right. It has the smell of some nasty and weird
    > argumentation.
    > For one: all schemes I know for multiplication require some additions.
    > So in a sense addition is a building block for multiplication. How can
    > addition then be slower than multiplication?


    With fp multiplication you're adding the exponents with 2's complement (or
    similar) addition. With fp addition, you have to play around with the
    exponents and the significands (more).

     
    Steven T. Hatton, Oct 15, 2004
    #12
  13. Steven T. Hatton

    Howard Guest

    "Steven T. Hatton" <> wrote in message
    news:...
    > Karl Heinz Buchegger wrote:
    >
    >> "Steven T. Hatton" wrote:

    >
    >>> Perhaps I should have worded my statement more clearly. The way
    >>> Stallings put it in his _Computer_Organization_and_Architecture_3rd_Ed_
    >>> is "Floating-point multiplication and division are much simpler
    >>> processes than addition and subtraction, as the following discussion
    >>> indicates. ..."

    >>
    >> That doesn't sound right. It has the smell of some nasty and weird
    >> argumentation.
    >> For one: all schemes I know for multiplication require some additions.
    >> So in a sense addition is a building block for multiplication. How can
    >> addition then be slower than multiplication?

    >
    > With fp multiplication you're adding the exponents with 2's complement (or
    > similar) addition.


    But that's not *all* you're doing! That's just how the adjustment for
    differing exponents is made. You still have to multiply the mantissas,
    which in general involves a series of additions. Assuming for simplicity
    that the operations were done in decimal instead of binary, consider
    multiplying 3.0 (3e0) and 5.0 (5e0) versus 3.0 (3e0) and 0.5 (5e-1). The
    exponent addition trick is simply how you determine the placement of the
    final decimal point (resulting in 15.0 (15e0) versus 1.5 (15e-1)). The FPU
    still has to multiply the 3 and the 5, regardless.

    Addition only requires the one addition for the mantissa, although it may
    have to first adjust the exponents.

    Even though the multiplication is likely done in the FPU hardware, it
    probably still involves a loop of the output back to the input of the
    registers, and repeating of additions of the contents. And that repeated
    addition is what will eat your clock cycles.

    Granted, for some specific values, multiplying two values may (possibly) end
    up faster than adding those same two values. But in the general case,
    that's not going to be true.


    -Howard
     
    Howard, Oct 15, 2004
    #13
  14. Keith H Duggar wrote:

    >> > Really? How interesting, can you please post the code for
    >> > the function?
    >> >
    >> > Keith

    >>
    >> This is based on ideas from _C++_Templates:_The_Complete_Guide_
    >>
    >> http://www.josuttis.com/tmplbook/
    >>

    > ...
    >
    > Not to quibble, but this isn't really a "function", is it? It is
    > a class implementation that happens to have a function you
    > call. That's why I asked you to post the code, because had you
    > built a compile-time recursive function that would have been
    > most unique.


    I never said it was a function. But I believe what you end up with is
    pretty much a function built at compile time using recursion. It only uses
    static members, and I suspect it's pretty much loaded and executed in one
    chunk. I haven't read that discussion yet.

    In this version I basically used a binary tree, but reused the calculations
    which would have been identical. If you compile and run the program, you
    will have done as much testing as I have.



    #include <cstddef>   // for std::size_t
    #include <iostream>
    #include <iomanip>
    #include <stdexcept>

    using std::cout;
    using std::logic_error;
    using std::size_t;

    template<size_t S>
    struct Pwr{
        inline static size_t pwr(const size_t& b)
        {
            size_t p = Pwr<S/2>::pwr(b);
            return (S & 1) ? p * p * b : p * p;
        }
    };

    template<>
    struct Pwr<1>{
        inline static size_t pwr(const size_t& b) { return b; }
    };

    // NOTE: I'm not handling Pwr<0>, but it's trivial.

    template<size_t S>
    struct Powers{
        inline static size_t power(const size_t& b)
        {
            cout << b << "^" << S << " = " << std::setw(8)
                 << Pwr<S>::pwr(b) << "\n";
            return Powers<S-1>::power(b);
        }
    };

    template<>
    struct Powers<0>{
        inline static size_t power(const size_t& b)
        {
            if(!b){ throw logic_error("ERROR: indeterminate form 0^0 "); }
            cout << "\n";
            return 0;
        }
    };

    int main()
    {
        for(size_t s = 1; s < 10; s++)
        {
            Powers<5>::power(s);
        }

        return 0;
    }

     
    Steven T. Hatton, Oct 15, 2004
    #14
  15. Steven T. Hatton

    JXStern Guest

    On Wed, 13 Oct 2004 11:38:45 -0400, "Steven T. Hatton"
    <> wrote:
    >Did I miss something along the way, or is there no function in Standard C++
    >to raise an (unsigned) int to a power of (unsigned) int returning
    >(unsigned) int? I never gave this a second thought until today. I tried to
    >do it, and discovered <cmath> std::pow() only takes floating-point types
    >for the first argument. Sure, I could write one. I could have written at
    >least 3 fundamentally different versions in the time it took to discover
    >there isn't such a function.


    Are you pressed for execution speed of this function?

    J.
     
    JXStern, Oct 16, 2004
    #15
