Hallvard B Furuseth
Malcolm said:
> Signed ints can trap, whilst unsigned ones cannot. So it is easy to
> think of an architecture on which signed int is slower, hard to think
> of one on which it is faster.
I've seen loops that gcc warned it could not optimize well because the
loop variables were unsigned. I don't remember what kind of loop caused
that, though. I'm guessing it's because unsigned arithmetic is fully
defined, while signed arithmetic has undefined behavior on overflow, so
the compiler sometimes has more latitude with signed arithmetic. The
compiler may do whatever it wants with potential undefined behavior,
such as assuming that a particular trouble case will not happen.
Of course, that can work the other way too if the CPU does nasty things
like trapping on overflow: the compiler must then make sure not to
introduce a signed overflow where the program had none.
Anyway, just take it as another case of how intuition can be unreliable
about optimization.
Before optimizing, I hope you've been through the normal checklist:
1. Don't do it.
2. Don't do it yet.
3. Profile to find the hot spots where optimization will make a
   difference, and the cold spots where it won't.
4. Optimize algorithms and data structures before doing
   micro-optimizations and code tweaks.
Regarding the latter, there are useful collections of techniques, e.g.
Paul Hsieh's pages:
http://www.azillionmonkeys.com/qed/tech.shtml
Google(optimization techniques C) has some things to say too.