On Fri, 02 Jan 2004 06:53:47 +0000, dwrayment wrote:
[ Please don't top post, thanx, M4 ]
> I can't give you a direct source, as I don't keep track of the books I
> read. It is common sense that multiplying requires more work and,
> relative to adding and bit ops, is inefficient (by computer standards),
> so any book about optimizing code probably has a section on this. That's
> not to say don't ever multiply, as it's still pretty dang quick by human
> standards, but if you can do something without multiplying, do it.
On modern CPUs, integer multiply is fully pipelined: a few cycles of
latency, one result per clock cycle, so in practice it is about as fast as
addition or bitwise manipulation. This has been true for a while now.
Besides, compilers are pretty good at optimizing. Shifting left by 4 bits
and multiplying by 16 translate to the same opcode on any sane compiler.
The occasions where hand-optimizing a multiplication into something else
gives a better result than the compiler are very rare.
Even if it did make a difference, would you notice? Not many programs
nowadays on modern hardware (and my main server is a P90!) will make you
want to do this kind of "optimization" (which it isn't).
HTH,
M4