Compare sign in C++

Andrey Tarasevich

Claude Gagnon said:
...
How can we compare the sign of numbers in C++?
...

What do you mean by "sign"? The traditional 'signum' function
(-1, 0, +1) of an arithmetic value 'v' can be calculated in C++ as follows

int signum = (v > 0) - (v < 0);

which means that the signs of values 'v1' and 'v2' can be compared in the
following manner

bool signs_equal = ((v1 > 0) - (v1 < 0)) == ((v2 > 0) - (v2 < 0));
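
For illustration, a minimal self-contained version of this approach (the
same_sign helper name is mine, not anything standard):

#include <iostream>

// Branchless signum: -1 for negative, 0 for zero, +1 for positive.
inline int signum(int v) { return (v > 0) - (v < 0); }

inline bool same_sign(int a, int b) { return signum(a) == signum(b); }

int main()
{
    std::cout << std::boolalpha
              << same_sign(-45, -4) << '\n'  // true
              << same_sign(-45, 4) << '\n'   // false
              << same_sign(0, 4) << '\n';    // false: signum(0) is 0
}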
 
Ron Natalie

Claude Gagnon said:
How can we compare the sign of numbers in C++?
What do you consider the sign of zero to be?

inline int signof(int a) { return (a == 0) ? 0 : (a < 0 ? -1 : 1); }

if (signof(-45) == signof(-4)) ...
 
Victor Bazarov

Claude Gagnon said:
How can we compare the sign of numbers in C++?

Depending on what you need, something like

template<class T> bool signTheSame(T t1, T t2)
{
    return (t1 < 0) == (t2 < 0);
}

Victor
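
One caveat not raised in the thread: for an unsigned T, 't1 < 0' is always
false, so this template calls any two unsigned values "same sign" (and most
compilers warn about the comparison). A quick sketch:

#include <iostream>

template<class T> bool signTheSame(T t1, T t2)
{
    return (t1 < 0) == (t2 < 0);
}

int main()
{
    std::cout << std::boolalpha
              << signTheSame(-3, 4) << '\n'    // false, as expected
              << signTheSame(1u, 2u) << '\n';  // always true for unsigned T
}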
 
Niklas Borson

Claude Gagnon said:
Hi,

How can we compare the sign of numbers in C++?

Is this what you mean?

inline bool SameSign(int a, int b)
{
    return (a < 0) == (b < 0);
}

or

template<class T>
inline bool SameSign(T a, T b)
{
    return (a < 0) == (b < 0);
}
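
Note that this definition, unlike the signum-based one above, groups zero
with the positives; whether that matters depends on what you need. A small
check (the values are my own examples):

#include <cassert>

template<class T>
inline bool SameSign(T a, T b)
{
    return (a < 0) == (b < 0);
}

int main()
{
    assert(SameSign(0, 7));         // zero counts as non-negative
    assert(!SameSign(-1, 0));
    assert(SameSign(-3.5, -0.25));  // works for floating-point too
}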
 
Pierre Maurette

Claude Gagnon said:
Hi,

How can we compare the sign of numbers in C++?
Numbers?
You can try:

same_sign = ((a*b) > 0)

#define same_sign(a, b) (((a)*(b)) > 0)

You have to decide what should happen when one or both of a and b are 0,
and be careful with floating-point types.
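
An additional caveat, not raised in the thread: for built-in integer types
the product can overflow, which is undefined behavior and can silently give
the wrong answer. A sketch (the values are made up for illustration):

#include <iostream>

#define same_sign(a, b) (((a)*(b)) > 0)

int main()
{
    int a = 65536, b = 65536;
    // 65536 * 65536 overflows a 32-bit int: undefined behavior. On
    // platforms where the product wraps to 0, same_sign wrongly
    // reports false for two positive numbers.
    std::cout << std::boolalpha << same_sign(a, b) << '\n';
}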
Happy new year,
Pierre
 
dwrayment

a*b is inefficient

if you know your gonna be using say 32 bit numbers all the time then
same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the
negation of a xor b will be 1 at mbs.


or you can do some kind of bit comparison with 0x8000 0000 to test status
of msb.
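
For what it's worth, here is what the poster appears to be driving at,
written out as compilable C++ (my reconstruction; the negation-and-shift as
posted doesn't work as written):

#include <cstdint>
#include <iostream>

// The sign bits match iff the XOR of the two values has its MSB clear.
// Note this groups 0 with the positives, unlike the a*b test.
inline bool same_sign_bit(std::int32_t a, std::int32_t b)
{
    return ((static_cast<std::uint32_t>(a) ^ static_cast<std::uint32_t>(b))
            & 0x80000000u) == 0;
}

int main()
{
    std::cout << std::boolalpha
              << same_sign_bit(-45, -4) << '\n'  // true
              << same_sign_bit(-45, 4) << '\n'   // false
              << same_sign_bit(0, 4) << '\n';    // true: 0's sign bit is clear
}

For plain signed ints on two's-complement hardware this is equivalent to the
simpler (a ^ b) >= 0.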
 
Jeff Schwab

a*b is inefficient

On what architecture?
if you know your gonna be using say 32 bit numbers all the time then
same sign = (! (a xor b)) >> 31 if mbs(a) == msb(b) then the
negation of a xor b will be 1 at mbs.

Do you mean the same thing by "msb" and "mbs?" I thought mbs was a typo
at first, but you did it twice...
or you can do some kind of bit comparison with 0x8000 0000 to test status
of msb.

Without the space, right? And still (of course) assuming 32 bits.
 
dwrayment

Jeff Schwab said:
On what architecture?
on any architecture. multiplication is much less efficient than adding and
bit ops.

Do you mean the same thing by "msb" and "mbs?" I thought mbs was a typo
at first, but you did it twice...

yes of course i cant spell msb.
Without the space, right? And still (of course) assuming 32 bits.
naturally no space.
 
Andrey Tarasevich

dwrayment said:
on any architecture. multiplication is much less efficient than adding and
bit ops.
...

What is your source of information? Can you provide any evidence to
support this statement?
 
dwrayment

i cant give you a direct source, as i dont keep track of books i read. it
is common sense that multiplying requires more work and relative to adding
and bit ops is inefficient (by computer standards), so any book about
optimizing code probably has a section on this. thats not to say dont ever
multiply as its still pretty dang quick by human standards, but if you can
do something without multiplying do it.
 
Martijn Lievaart

On Fri, 02 Jan 2004 06:53:47 +0000, dwrayment wrote:

[ Please don't top post, thanx, M4 ]
i cant give you a direct source, as i dont keep track of books i read. it
is common sense that multiplying requires more work and relative to adding
and bit ops is inefficient (by computer standards), so any book about
optimizing code probably has a section on this. thats not to say dont ever
multiply as its still pretty dang quick by human standards, but if you can
do something without multiplying do it.

On modern CPUs, multiply is a one-clock-cycle instruction and just as fast
as addition or bitwise manipulation. This has been true for a while now.

Besides, compilers are pretty good at optimizing. Shifting left by 4 bits
and multiplying by 16 translate to the same opcodes on any sane compiler.
The occasions are very rare where hand-optimizing multiplication into
something else would give a better result than the compiler can.
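
This is easy to check for yourself; a sketch (the exact assembly depends on
the compiler and flags, e.g. g++ -O2 -S):

int times16(int x) { return x * 16; }
int shift4(int x)  { return x << 4; }
// With optimization enabled, mainstream compilers emit the same
// shift instruction for both functions.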

Even if it would make a difference, are you going to notice? Not many
programs nowadays on modern hardware (and my main server is a P90!) will
make you want to do this kind of "optimization" (which it isn't).

HTH,
M4
 
Tom Plunket

dwrayment said:
i cant give you a direct source, as i dont keep track of books i
read. it is common sense that multiplying requires more work and
relative to adding and bit ops is inefficient (by computer
standards)...

"Common sense" has no place in performance measurements.

Indeed, multiplication has historically been faster than addition
in floating point operations.

Stop trying to use "common sense" and start actually measuring
these things.
-tom!
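
In that spirit, a minimal timing harness one could start from (my sketch;
real benchmarking needs more care, e.g. warm-up runs and checking what the
optimizer did to the loops):

#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<std::int32_t> xs(1 << 20, -3), ys(1 << 20, 7);
    volatile bool sink = false;  // keeps the loops from being optimized away

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < xs.size(); ++i)
        sink = (xs[i] ^ ys[i]) >= 0;             // xor sign test
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < xs.size(); ++i)
        sink = std::int64_t{xs[i]} * ys[i] > 0;  // multiply, widened to avoid overflow
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::cout << "xor: " << ms(t1 - t0).count() << " ms\n"
              << "mul: " << ms(t2 - t1).count() << " ms\n";
}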
 
Ron Natalie

dwrayment said:
i cant give you a direct source, as i dont keep track of books i read. it
is common sense that multiplying requires more work and relative to adding
and bit ops is inefficient (by computer standards), so any book about
optimizing code probably has a section on this. thats not to say dont ever
multiply as its still pretty dang quick by human standards, but if you can
do something without multiplying do it.

This hasn't been true since the 70's, if not earlier. Most processors now do
integer operations (multiply or bitwise) in the same amount of time. Multiply
hasn't taken longer since the days when processors had to simulate it with
additions.
 
dwrayment

Martijn Lievaart said:
On Fri, 02 Jan 2004 06:53:47 +0000, dwrayment wrote:

[ Please don't top post, thanx, M4 ]

i cant give you a direct source, as i dont keep track of books i read. it
is common sense that multiplying requires more work and relative to adding
and bit ops is inefficient (by computer standards), so any book about
optimizing code probably has a section on this. thats not to say dont ever
multiply as its still pretty dang quick by human standards, but if you can
do something without multiplying do it.

On modern CPUs, multiply is a one-clock-cycle instruction and just as fast
as addition or bitwise manipulation. This has been true for a while now.

Besides, compilers are pretty good at optimizing. Shifting left by 4 bits
and multiplying by 16 translate to the same opcodes on any sane compiler.
The occasions are very rare where hand-optimizing multiplication into
something else would give a better result than the compiler can.

Even if it would make a difference, are you going to notice? Not many
programs nowadays on modern hardware (and my main server is a P90!) will
make you want to do this kind of "optimization" (which it isn't).

HTH,
M4


faster processors are no excuse for lazy programming. if you can program
something to do the same task faster with less work a good programmer will
do so.

as for the replies below i suggest you try doing it yourself. and dont
multiply by a power of 2 as that is equivalent to bit shifting.
mult still is less efficient than doing other ops.
 
Jeff Schwab

dwrayment said:
faster processors are no excuse for lazy programming. if you can program
something to do the same task faster with less work a good programmer will
do so.

as for the replies below i suggest you try doing it yourself. and dont
multiply by a power of 2 as that is equivalent to bit shifting.
mult still is less efficient than doing other ops.

On some architectures. Can you name one?
 
Tom Plunket

dwrayment said:
faster processors are no excuse for lazy programming. if you can
program something to do the same task faster with less work a
good programmer will do so.

LOL ok dude.
as for the replies below i suggest you try doing it yourself.
and dont multiply by a power of 2 as that is equivalent to bit
shifting. mult still is less efficient than doing other ops.

You should probably compile some code yourself and look at the
generated (and optimized) output. You won't though, because you
are lazy and think you already know the answer. Then you should
look through the processor info book and see what the various
costs of doing operations are.

so sad,
-tom!
 
dwrayment

ok, last post for me. if some of you dont like it theres nothing more i can
say.

first off, im not trying to reinvent the multiply op by simulating it with
adding and bit shifts. the question posed was how can we compare the sign
of numbers.

solution 1 was a*b .... i posed a different solution, simply (! (a xor b)) >>
31. and now i think about it id rather do
a xor b & 0x80000000 == 0. here multiplying is inefficient because doing a
simple xor and bitwise and is less work than doing a multiply (on even the
best of the best of machines).

of course if i were trying to reinvent the wheel and do multiplication by
using adding and bit shifts i wouldnt be able to match the work done by
todays best engineers. im sure theyre doing an excellent job. thats all i
can say.
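
One more pitfall worth flagging (my note, not the poster's): written without
parentheses, that last expression does not parse the way it reads, because
== binds tighter than & and ^ in C++. A sketch:

#include <iostream>

int main()
{
    int a = -45, b = 4;  // opposite signs
    // == binds tighter than & and ^, so the unparenthesized form
    // parses as a ^ (b & (0x80000000 == 0)) -- not a sign test at all.
    bool wrong = a ^ b & 0x80000000 == 0;      // true here, but meaningless
    bool right = ((a ^ b) & 0x80000000) == 0;  // false: sign bits differ
    std::cout << std::boolalpha << wrong << ' ' << right << '\n';
}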
 
