negative powers of two


stip

Hello,

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

Thanks a lot,
Alex
 

SG

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

There is no portable way to do it as far as I know. You can certainly
write code that exploits the binary representation of doubles. You
could include a test in your build script that checks whether the
assumptions about implementation details hold and defines a
preprocessor name accordingly, so you can use #ifdef / #else / #endif to
select your "pow2" function. You might also want to look into the
numeric_limits traits class in <limits>.
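
For illustration, here is a rough sketch of that idea. It assumes the
implementation uses IEEE 754 binary64 doubles (which you can check with
numeric_limits) and that the result is a normal number; the function name
is made up:

#include <cassert>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <limits>

// Builds 2^n by writing the biased exponent of an IEEE 754 binary64
// double directly. Only valid for normal results, i.e. -1022 <= n <= 1023.
double pow2_bits(int n)
{
    assert(std::numeric_limits<double>::is_iec559);
    assert(n >= -1022 && n <= 1023);
    std::uint64_t bits = static_cast<std::uint64_t>(n + 1023) << 52;
    double result;
    std::memcpy(&result, &bits, sizeof result);
    return result;
}

int main()
{
    std::cout << pow2_bits(10) << ' ' << pow2_bits(-3) << '\n';  // 1024 0.125
}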

Cheers!
SG
 

Juha Nieminen

stip said:
Hello,

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

I'm not completely sure what you mean by "negative integer powers of
two". Do you mean that you have an integral value (which might be
negative) and you want to raise it to the power of 2? That would be
rather trivial: just multiply the integer by itself, and you have
raised it to the power of 2.

By your question, however, I deduce that you mean something else. Do
you mean, by any chance, that you want to raise 2 to some integral power
(which might be negative)? Or more generally, to calculate a*pow(2,b)
where a and b are integers?

For positive values of b the solution is simple: a*(1<<b). (But take
into account that, rather obviously, this will only work for values of b
smaller than the number of bits in your integral type.)

Since raising to a negative power is the same as the inverse of
raising to the positive power, the solution for negative values of b is
equally simple: a/(1<<(-b)). (Of course this gives you only the integral
part of the result. If you need a floating point result, use pow().)
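
In code, a minimal sketch of that (the function name is made up, and it
assumes |b| is smaller than the number of bits in long long):

#include <iostream>

// a * 2^b for integers; truncates toward zero when b is negative.
long long mul_pow2(long long a, int b)
{
    return b >= 0 ? a * (1LL << b)    // a * 2^b
                  : a / (1LL << -b);  // integral part of a / 2^|b|
}

int main()
{
    std::cout << mul_pow2(5, 3) << ' ' << mul_pow2(40, -3) << '\n';  // 40 5
}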
 

Alf P. Steinbach

* stip:
given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

Don't know if it helps, but the standard library has two functions: one
which, for a given double d, gives you mantissa and exponent in d = m*2^E,
and a second which goes the other way. But even if they help for the
not-too-small integer exponents, for powers in the neighborhood of zero
you're probably better off doing some simple multiplication. But have you
*measured*? It's likely that pow() does this already.

Cheers & hth.,

- Alf
 

Bill Davy

stip said:
Hello,

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

Thanks a lot,
Alex


The ldexp(x, exp) function returns the value of x * 2^exp if successful.

You can, if you like, play around with using products of 2^OneBit, where
OneBit is taken from exp one bit at a time, so (for example) 2^9 = 2^8 * 2^1,
but I am not sure it is worth it.
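
So for the original question, 2 raised to a signed integer power is just
(a small sketch):

#include <cmath>
#include <iostream>

int main()
{
    // ldexp(x, exp) computes x * 2^exp, so 2^n for signed n is:
    std::cout << std::ldexp(1.0, 9)  << ' '    // 512
              << std::ldexp(1.0, -5) << '\n';  // 0.03125
}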
 

Jonathan Lee

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?

Not a C++ solution, but if you happen to be working on x86 I think the
fscale assembly instruction does exactly that.

--Jonathan
 

Jerry Coffin

Hello,

given the exponent as integer, what's the fastest way to calculate
negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

If the base is an integer, then you just use bit shifts.

If the base is a floating point, you can use something like:

int exponent;

double mantissa = frexp(base, &exponent);   // base == mantissa * 2^exponent
exponent += added_exponent;
double result = ldexp(mantissa, exponent);  // == base * 2^added_exponent

Whether this will be faster than pow() will depend -- for a lot of
cases where the floating point is stored as a significand and a power
of two, frexp will be something like a mask, shift and an integer
subtraction to remove a bias in the exponent. As you'd expect, ldexp
will pretty much reverse that: add the bias, shift to the right
position, OR with the significand. Generally pretty fast. OTOH, if
the native floating point representation is drastically different
from that, they could end up pretty slow -- but machines with such
strange floating point representations are pretty unusual.
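
For reference, a self-contained version of that fragment; the function name
and the spot check against pow() are just illustrative:

#include <cmath>
#include <iostream>

// base * 2^added_exponent via frexp/ldexp, as sketched above.
double scale_by_pow2(double base, int added_exponent)
{
    int exponent;
    double mantissa = std::frexp(base, &exponent);  // base == mantissa * 2^exponent
    return std::ldexp(mantissa, exponent + added_exponent);
}

int main()
{
    std::cout << scale_by_pow2(3.0, -4) << ' '     // 0.1875
              << 3.0 * std::pow(2.0, -4) << '\n';  // same value
}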
 

Jerry Coffin

[ ... ]
Strange floating-point representations like decimal? <g> Decimal
floating-point is specified (along with binary) in IEEE 754-2008, and
provided as extensions to C by TR 24732 and to C++ by DTR 24733
(approved for balloting at the Frankfurt meeting).

Yes, that's probably the most obvious anyway. At least AFAIK, about
the only machines that use it are IBM z-series. I believe somewhere
along the line IBM also added decimal FP hardware to the POWER
processors (around POWER 5 or 6) but I believe they use binary
floating point by default, only switching to decimal if its
explicitly enabled. I'd guess most C and C++ compilers leave it set
for binary FP, though it wouldn't surprise me if COBOL compilers (for
one example) normally switched it to use decimal FP instead.
 

James Kanze

given the exponent as integer, what's the fastest way to
calculate negative (and positive) integer powers of two?
std::pow() is quite slow, because it's designed for the general case.
Do I have to do some bit manipulations on my own or are there already
functions?

ldexp.
 

James Kanze

If the base is an integer, then you just use bit shifts.
If the base is a floating point, you can use something like:
int exponent;
double mantissa = frexp(base, &exponent);
exponent += added_exponent;
double result = ldexp(mantissa, exponent);
Whether this will be faster than pow() will depend -- for a
lot of cases where the floating point is stored as a
significand and a power of two, frexp will be something like a
mask, shift and an integer subtraction to remove a bias in the
exponent. As you'd expect, ldexp will pretty much reverse
that: add the bias, shift to the right position, OR with the
significand. Generally pretty fast. OTOH, if the native
floating point representation is drastically different from
that, they could end up pretty slow -- but machines with such
strange floating point representations are pretty unusual.

Not really. I don't know of any mainframe which uses base 2 for
its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of two
means that the operation can still be done with masking and
shifting, but it requires a bit more than a base 2
representation would.
 

Jerry Coffin

On Jul 31, 6:01 pm, Jerry Coffin <[email protected]> wrote:

[ ... ]
Not really. I don't know of any mainframe which uses base 2 for
its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of two
means that the operation can still be done with masking and
shifting, but it requires a bit more than a base 2
representation would.

You start by saying "not really", but from what I can see, you then
go on to pretty much confirm what I said -- that while these machines
aren't binary, they still work in a power of two, which means that
frexp/ldexp will probably be faster than pow() for all of them.

I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...
 

Keith H Duggar

You start by saying "not really", but from what I can see, you then
go on to pretty much confirm what I said -- that while these machines
aren't binary, they still work in a power of two, which means that
frexp/ldexp will probably be faster than pow() for all of them.

You didn't claim anything about systems that "work in a power
of two". You specifically made a claim about systems that do not
store floating point "as a significand and a power of two". The
most obvious interpretation of that does not include systems
that store as a significand and a power of 8 or 16 etc. Though
it is possible your words also meant to include such powers of
powers of 2 as well, the operations you sketched for frexp seem
to indicate you did not mean to include them. So either James
correctly called you on your faulty assumptions and you are
backpedaling (as you have done elsewhere), or it was a totally
understandable miscommunication.
I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...

You are surprised that you were wrong? I'm not ...

By the way, are you just going to slither quietly away from the
goto debate after you were trounced or are you going to man up
and admit goto was faster and that your "analysis" (to use the
word extremely loosely) of modern machines that was filled with
numerous faulty assumptions (such as the possibility above) was
flawed?

http://groups.google.com/group/comp.lang.c++.moderated/msg/3ac2368e485e740d
http://groups.google.com/group/comp.lang.c++.moderated/msg/5d471364e76392d9

(First link presents hard data proving you wrong and the second
calls you out on your flawed understanding and assumptions and
challenges you to explain the data or provide new measurements
that correct the proven problems in your initial testing.)

You know, it really is quite liberating to admit when you are
wrong; there is no shame in learning. You should try doing it
sometime.

KHD
 

Jerry Coffin

You didn't claim anything about systems that "work in a power
of two". You specifically made a claim about systems that do not
store floating point "as a significand and a power of two".

Obviously you don't even know how to read. What I said was: "if the
native floating point representation is drastically different from
that, they could end up pretty slow"

Since you apparently don't know English very well, and are just too
damned lazy to look it up, "drastically" is defined as: "Extreme,
severe; Acting rapidly or violently; Extremely severe or extensive"

In case that didn't make it obvious, NOT EVERY POSSIBLE DIFFERENCE IS
DRASTIC!
The
most obvious interpretation of that does not include systems
that store as a significand and a power of 8 or 16 etc. Though
it is possible your words also meant to include such powers of
powers of 2 as well, the operations you sketched for frexp seem
to indicate you did not mean to include them. So either James
correctly called you on your faulty assumptions and you are
backpedaling (as you have done elsewhere), or it was a totally
understandable miscommunication.

Nonsense! What I said was that when/if the representation is
drastically different, that frexp/ldexp might be quite slow. When the
base isn't binary, but is still a power of two, there's no real
reason to believe that frexp or ldexp will be particularly slow.
Specifically, they will almost certainly still be substantially
faster than pow().
By the way, are you just going to slither quietly away from the
goto debate after you were trounced or are you going to man up
and admit goto was faster and that your "analysis" (to use the
word extremely loosely) of modern machines that was filled with
numerous faulty assumptions (such as the possibility above) was
flawed?

There's no "slithering" involved. There was a conscious decision to
ignore you because you're clearly a liar who simply started making
up nonsense rather than admitting that he was dead wrong from
beginning to end.
http://groups.google.com/group/comp.lang.c++.moderated/msg/3ac2368e485e740d
http://groups.google.com/group/comp.lang.c++.moderated/msg/5d471364e76392d9

(First link presents hard data proving you wrong and the second
calls you out on your flawed understanding and assumptions and
challenges you to explain the data or provide new measurements
that correct the proven problems in your initial testing.)

All you presented was a bunch of unsupported claims that didn't prove
anything.
You know, it really is quite liberating to admit when you are
wrong; there is no shame in learning. You should try doing it
sometime.

I've done so many times, and when I'm wrong again, I'll do so again.

In this case, you're the one who was wrong, and you're the one who
has failed to admit it.

In this thread, you've not only been wrong, but clearly lied through
your teeth, not even attempting to quote me accurately at all. At
least previously you attempted to post your lies in a way that it was
difficult to prove them wrong. Now you're not only lying, but doing
it in an incredibly STUPID manner that was trivial to prove.

Go away and grow up you pathetic worm!
 

Jerry Coffin

[ ... ]
Not really. I don't know of any mainframe which uses base 2 for
its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of two
means that the operation can still be done with masking and
shifting, but it requires a bit more than a base 2
representation would.

Since Keith decided to jump in with his usual unprovoked and
inaccurate attack, I decided to do a bit of fact checking.

At: http://www.ibm.com/developerworks/library/pa-bigiron1/

IBM says: "The System/390 implemented the IEEE 754 BFP formats in
addition to HFP for the first time in 1995" (using "BFP" as an
abbreviation for "binary floating point"), so their mainframes have
had binary floating point for about 14 years now.

At:
http://portal.acm.org/citation.cfm?id=1241826&dl=GUIDE&coll=GUIDE&CFID=46010576&CFTOKEN=73022622

is an article from the IBM Journal of Research and Development. The
full article is not free, but the abstract says:

[ ... ] Use of the newly defined decimal floating-point
(DFP) format instead of binary floating-point is
expected to significantly improve the performance of
such applications. System z9™ is the first IBM machine
to support the DFP instructions.

To summarize, then: current IBM mainframes support floating point in
three bases: hexadecimal, binary, and decimal. Support for the bases
was added in that order. Hexadecimal has been supported for decades,
binary for a decade and a half, and decimal for something like a year
and a half.

As far as the original question goes, the only time you're likely to
run into a substantial difference in speed when using frexp/ldexp
would be with decimal -- for either binary or hexadecimal (or octal),
ldexp() and frexp() will usually be pretty simple and fast, though
differences in (or lack of) normalization will have _some_ effect as
well.
 

Keith H Duggar

Obviously you don't even know how to read. What I said was: "if the
native floating point representation is drastically different from
that, they could end up pretty slow"

Obviously you don't know how to reasonably interpret what you read.
James' comment wasn't about them being slow or not. It was about your
usual faulty claims about implementations you are ignorant of being
"pretty unusual".
Since you apparently don't know English very well, and are just too
damned lazy to look it up, "drastically" is defined as: "Extreme,
severe; Acting rapidly or violently; Extremely severe or extensive"

In case that didn't make it obvious, NOT EVERY POSSIBLE DIFFERENCE IS
DRASTIC!

That's it, clutch to your "drastically" straw to cover up your faulty
assumptions.
Nonsense! What I said was that when/if the representation is
drastically different, that frexp/ldexp might be quite slow. When the
base isn't binary, but is still a power of two, there's no real
reason to believe that frexp or ldexp will be particularly slow.
Specifically, they will almost certainly still be substantially
faster than pow().

Again, this wasn't about your performance claims, it was about your
"pretty unusual" claims which you are now trying to link to your vague
"drastically" straw. One must wonder now what "drastic" even means to
you if it is to filter implementations in any meaningful way at all.
Maybe drastic to you means base != 2^n so maybe decimal is "drastic".
Maybe drastic to you means "anything that makes Jerry's performance
claims right". But then who can knows what you meant by such vague
words in a technical context.
There's no "slithering" involved. There was a conscious decision to
ignore you because you're clearly a liar who simply started making
up nonsense rather than admitting that he was dead wrong from
beginning to end.

LOL, prove it! All the evidence you need should be there: public
code, public timing data, public postings, etc. So back it up!
Demonstrate that I have lied or made something up. So far, about
the only thing you demonstrated in that forum is that you do not
know how to accurately profile, that you speculate wildly beyond
your technical knowledge, that you throw around completely bogus
assumptions about areas you are completely ignorant of (such as
digital geometry), and that when confronted with hard evidence
proving you wrong you either try to quietly slither away or
accuse your opponent of being a "liar".
All you presented was a bunch of unsupported claims that didn't prove
anything.

LOL, I think that is the first time I have ever seen someone try
to dismiss source code and timing data as "unsupported claims".
You, on the other hand, provided little beyond personal attacks,
broken profiling, speculation, and lack of comprehension.
I've done so many times, and when I'm wrong again, I'll do so again.

In this case, you're the one who was wrong, and you're the one who
has failed to admit it.

The evidence shows otherwise and you know it. That's why you are
ignoring it.
In this thread, you've not only been wrong, but clearly lied through
your teeth, not even attempting to quote me accurately at all. At

These "liar" claims are, honestly, truly amusing given how
ridiculous they are.
least previously you attempted to post your lies in a way that it was
difficult to prove them wrong.

In fact, so far they have been *impossible* for you to prove
wrong. Probably because they are, in fact, not lies but truths.
It is easy to try; just run the provided test suites and post
your results. That would speak far louder than your bluster.
Now you're not only lying, but doing
it in an incredibly STUPID manner that was trivial to prove.

Go away and grow up you pathetic worm!

That's it, keep revealing your nasty disposition.

KHD
 

Keith H Duggar

Since Keith decided to jump in with his usual unprovoked and
inaccurate attack, I decided to do a bit of fact checking.

...

To summarize, then: current IBM mainframes support floating point in
three bases: hexadecimal, binary, and decimal. Support for the bases
was added in that order. Hexadecimal has been supported for decades,
binary for a decade and a half, and decimal for something like a year
and a half.

As far as the original question goes, the only time you're likely to
run into a substantial difference in speed when using frexp/ldexp
would be with decimal -- for either binary or hexadecimal (or octal),
ldexp() and frexp() will usually be pretty simple and fast, though
differences in (or lack of) normalization will have _some_ effect as
well.

Great. Now tell us whether you 1) consider decimal to be
"drastically" different 2) consider IBM mainframes to be
"pretty usual" 3) knew this about IBM mainframes before
your "Fact checking" wikipeducation trip.

KHD
 

James Kanze

[ ... ]
Not really. I don't know of any mainframe which uses base 2
for its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of
two means that the operation can still be done with masking
and shifting, but it requires a bit more than a base 2
representation would.
You start by saying "not really", but from what I can see, you
then go on to pretty much confirm what I said -- that while
these machines aren't binary, they still work in a power of
two, which means that frexp/ldexp will probably be faster than
pow() for all of them.

The algorithm you described only applies to machines with base 2
floating point, not base 8 or 16. And IBM mainframes aren't
"pretty unusual", they're the most common mainframe around. (I
think that Unisys is second in the market, and the Unisys MCP
does have a somewhat exotic representation, normalizing with the
decimal to the right, and not to the left, and not requiring
normalization at all---the idea is that an integer and a
floating point have the same bitwise representation.)

As for speed compared to pow(), I don't know. Depending on the
speed of hardware floating point and how much support the
hardware has for pow, it could vary. A lot of 32 bit processors
had one clock hardware support for the four basic operations,
but it was fairly expensive to shift 64 bits. (On at least one
processor I worked on, all shift operations were done with a
loop in microcode.) Of course, that's not the case of IBM
mainframes, which support 64 bit shifts directly as well. (But
IIRC, its 64 bit shifts were optimized for, and maybe only
supported, multiples of four bits---for normalizing BCD values,
etc.)
I have to admit I'm a bit surprised though -- I thought IBM's
mainframes used decimal floating point, not hexadecimal...

They may have both, but the floating point I've always seen was
base 16. And the IBM compatibles from Siemens and Fujitsu, back
when I worked on them, didn't have a single instruction for
decimal floating point (although the instruction set did allow
implementing it in five or six instructions, without any loops,
even for division, since the machines did have fixed point
decimal instructions, and 64 bit shifts).

Of course, this was all a fairly long time ago---I've not had
the occasion to look at the IBM instruction set since then---,
so the situation could easily have changed.
 

James Kanze

IBM says: "The System/390 implemented the IEEE 754 BFP formats
in addition to HFP for the first time in 1995" (using "BFP" as
an abbreviation for "binary floating point"), so their
mainframes have had binary floating point for about 14 years
now.

I'm not sure, but I think that this support was optional. On
the machines we were running in 1999, it had just been added,
and it was significantly slower than the traditional IBM base 16
format. (More importantly for us, it wasn't supported in IBM
Cobol, which meant that floating point data coming from the
mainframe still had to be translated in our Java-based
applications running under AIX.)

[...]
As far as the original question goes, the only time you're
likely to run into a substantial difference in speed when
using frexp/ldexp would be with decimal -- for either binary
or hexadecimal (or octal), ldexp() and frexp() will usually be
pretty simple and fast, though differences in (or lack of)
normalization will have _some_ effect as well.

As I mentioned in another posting, this depends very much on
the speed of the various operations. A good implementation of
pow will not take that many floating point operations, and on
some machines (*not* the IBM mainframes), 64 bit shifts can be
very expensive.

Of course, the only reasonable course is to use pow, and if the
profiler shows that to be a problem, measure ldexp, and use it,
if it is faster (likely, but not certain). But especially: when
you get to that point, don't speculate like we're doing, but
measure. That way, you know the answer is correct for your
machine, regardless.
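
Something along these lines would do as a rough first measurement (the
iteration count and the volatile sink, used to keep the optimizer from
discarding the loops, are arbitrary choices; a real benchmark needs more
care):

#include <chrono>
#include <cmath>
#include <iostream>

int main()
{
    const int iterations = 10000000;
    volatile double sink = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = std::pow(2.0, i % 64 - 32);
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        sink = std::ldexp(1.0, i % 64 - 32);
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "pow:   " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "ldexp: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}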
 

Keith H Duggar

The algorithm you described only applies to machines with base 2
floating point, not base 8 or 16. And IBM mainframes aren't
"pretty unusual", they're the most common mainframe around. (I
think that Unisys is second in the market, and the Unisys MCP
does have a somewhat exotic representation, normalizing with the
decimal to the right, and not to the left, and not requiring
normalization at all---the idea is that an integer and a
floating point have the same bitwise representation.)

Don't forget about Jerry's "trump card": the word "drastically".
If some variation of his shifting method is slow then the float
representation is "drastically" different and he is right. If some
variation of his shifting method is fast then the representation
is not "drastically" different and he is *still* right.

Furthermore, even if he determines that base-16 is "drastically"
different, since IBM mainframes support both base-16 and base-2
(even if it is optional) he is *still* right because it either
works with his method (base-2) or the users are running the IBM
mainframe in a "drastically" different configuration (even if
it is the default and preferred configuration).

Finally, similar "trump reasoning" will apply to Unisys perhaps
with some market share speculations or buzzwords like "modern"
thrown into the mix as well.

In other words, there is no defense against Jerry's ignorant
speculations. No matter what hard evidence and solid reasoning
you supply he will always be right. And even if by miracle you
convince him, we will probably never know because he likely
will not admit he was wrong here publicly (preferring to
slither quietly away instead).

KHD
 

Jerry Coffin

[ ... ]
Whether this will be faster than pow() will depend -- for
a lot of cases where the floating point is stored as a
significand and a power of two, frexp will be something
like a mask, shift and an integer subtraction to remove a
bias in the exponent. As you'd expect, ldexp will pretty
much reverse that: add the bias, shift to the right
position, OR with the significand. Generally pretty fast.
OTOH, if the native floating point representation is
drastically different from that, they could end up pretty
slow -- but machines with such strange floating point
representations are pretty unusual.
Not really. I don't know of any mainframe which uses base 2
for its floating point: IBM uses base 16, and both Unisys
architectures use base 8; Unisys MCP also normalizes in a
somewhat strange way. The fact that the bases are powers of
two means that the operation can still be done with masking
and shifting, but it requires a bit more than a base 2
representation would.
You start by saying "not really", but from what I can see, you
then go on to pretty much confirm what I said -- that while
these machines aren't binary, they still work in a power of
two, which means that frexp/ldexp will probably be faster than
pow() for all of them.

The algorithm you described only applies to machines with base 2
floating point, not base 8 or 16. And IBM mainframes aren't
"pretty unusual", they're the most common mainframe around. (I
think that Unisys is second in the market, and the Unisys MCP
does have a somewhat exotic representation, normalizing with the
decimal to the right, and not to the left, and not requiring
normalization at all---the idea is that an integer and a
floating point have the same bitwise representation.)

Ah, I can finally see the source of the misunderstanding.

The algorithm I mentioned was for machines at one end of the spectrum
-- basically IEEE 754 or something very similar.

I also mentioned machines with "weird" representations, for which
frexp/ldexp could be quite slow -- quite possibly slower than pow,
being the important point under the circumstances. Though not stated
directly, that was more or less a hint that the only way to be sure
about things was to profile, not depend on the "fact" that frexp and
ldexp would necessarily be the fastest approach.

I did not say, or mean to imply, that there was no middle ground
between those extremes. I guess since I didn't mention the middle
ground, I can understand how somebody could infer that I didn't
intend for there to be any.
 
