Jim said:
[...]
As long as you're unrolling the loop and using constant indexes, you
might as well go all the way:
long xx=0;
char[] s = Str.toCharArray();
if (s[50]!='-') xx+=1;
if (s[49]!='-') xx+=2;
if (s[48]!='-') xx+=3;
if (s[47]!='-') xx+=4;
// etc...
long xx = 50 * 51 / 2; // sum of all positions
int i = Str.indexOf('-');
while (i != -1) {
    xx -= 51 - i; // subtract the dashed positions
    i = Str.indexOf('-', i+1);
}
Hmm. Your first suggestion, I get. If the OP really thinks it's worth
unrolling the loop, they should certainly also believe that embedding
known constants into the logic is worthwhile.
But your second proposal? You still have a loop (which the OP
superstitiously wishes to avoid), _and_ you've introduced additional
overhead with the calls to indexOf(), which are likely to be more
costly than charAt(): each call embeds a new loop inside your outer
one, and even though you still essentially visit each character once,
in order, two nested loops are harder for the JIT compiler and the CPU
to deal with than one.
It's also much harder to read than the simple, unoptimized loop I
proposed earlier. How is it better to initialize an accumulator to the
sum of the series and then subtract the hyphen positions, as opposed
to initializing the accumulator to 0 and just adding the other
positions as they are found? (If we had some prior knowledge that
there are always many fewer hyphens than other characters in the
string, I could see a theoretical, though still impractical,
benefit...but that's not part of the problem statement).
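To make the comparison concrete, here is a minimal sketch of both accumulator strategies side by side. The method names, the general-length handling, and the weighting of (length - index) are my assumptions, chosen to match Jim's "51 - i" for a 51-character input; the original loop isn't quoted here.

```java
// Strategy 1 (the simple version): start at zero and add the weight
// of every non-hyphen character as it is found.
static long addNonHyphens(String str) {
    long xx = 0;
    int n = str.length();
    for (int i = 0; i < n; i++) {
        if (str.charAt(i) != '-') {
            xx += n - i;  // assumed weight: rightmost char counts 1
        }
    }
    return xx;
}

// Strategy 2 (Jim's subtraction version): start at the sum of all
// weights and subtract the weight of each hyphen position.
static long subtractHyphens(String str) {
    int n = str.length();
    long xx = (long) n * (n + 1) / 2;  // sum of weights n, n-1, ..., 1
    for (int i = str.indexOf('-'); i != -1; i = str.indexOf('-', i + 1)) {
        xx -= n - i;
    }
    return xx;
}
```

Both scan the whole string once, so the subtraction version buys nothing; the only real difference is that the additive loop is easier to read and verify.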
The best advice the OP has received so far remains: start with
correct, well-designed code, then measure, and optimize only where
doing so is demonstrably needed to meet specific, unambiguous
performance criteria.
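If a measurement really is wanted, even a crude one makes the point. The sketch below is illustrative only: the sample string is hypothetical, and for trustworthy numbers a proper harness such as JMH is needed, since naive timing like this ignores warmup, JIT compilation, and dead-code elimination.

```java
// Naive timing sketch: count hyphens two ways and time each pass.
String sample = "2024-01-15-rev-3".repeat(1000);  // hypothetical input

long t0 = System.nanoTime();
int viaCharAt = 0;
for (int i = 0; i < sample.length(); i++) {
    if (sample.charAt(i) == '-') viaCharAt++;
}
long charAtNanos = System.nanoTime() - t0;

long t1 = System.nanoTime();
int viaIndexOf = 0;
for (int i = sample.indexOf('-'); i != -1; i = sample.indexOf('-', i + 1)) {
    viaIndexOf++;
}
long indexOfNanos = System.nanoTime() - t1;

// The two versions must agree before any timing comparison means anything.
assert viaCharAt == viaIndexOf;
```

The first lesson such a measurement usually teaches is that the difference is lost in the noise, which is exactly why it should be taken before any "optimization" is attempted.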
(It was also pointed out that the compiler and CPU designers should be
trusted first. To some extent it's not even that they are so much
smarter and better at optimization, though that is actually very
likely in most cases. It's that the compiler and CPU _do_ perform
optimizations, those optimizations generally assume certain commonly
used patterns, and code that attempts to optimize based on
assumptions rather than real-world measurements is likely to thwart
the compiler and CPU by failing to use those known, commonly used
patterns. "Optimizing" outside the context of actual measurements can
be, and often is, counter-productive).