Eric said:
Jussi said:
Patricia Shanahan writes: ....
How about:
long result;
do {
    result = r.nextLong() >>> 1;
} while (result == 0);
[Based on one of Eric's earlier suggestions]
How about just:
long result;
do {
result = r.nextLong();
} while (result <= 0);
It's a "micro-pessimization." Roughly speaking, Patricia's
code will average one nextLong() call per delivered result, while
Jussi's will average two. (Patricia accepts the nextLong() value
with probability (2^63-1)/2^63 = 1-2^(-63), while Jussi accepts
each value with probability (2^63-1)/2^64 = 0.5-2^(-64).)
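A quick simulation makes the call counts concrete. This is a sketch, not anyone's posted code: the class and method names are mine, and it just wraps the two loops above and counts how many nextLong() calls each needs per delivered result. The number of calls per result is geometric with mean 1/p, so Patricia's loop should measure essentially 1.0 and Jussi's essentially 2.0.

```java
import java.util.Random;

public class CallCount {
    // Patricia's loop: shift out the sign bit, reject only the value 0.
    static long patriciaCalls(Random r, int trials) {
        long calls = 0;
        for (int i = 0; i < trials; i++) {
            long result;
            do {
                result = r.nextLong() >>> 1;
                calls++;
            } while (result == 0);
        }
        return calls;
    }

    // Jussi's loop: reject every non-positive value (about half of them).
    static long jussiCalls(Random r, int trials) {
        long calls = 0;
        for (int i = 0; i < trials; i++) {
            long result;
            do {
                result = r.nextLong();
                calls++;
            } while (result <= 0);
        }
        return calls;
    }

    public static void main(String[] args) {
        Random r = new Random(1);
        int trials = 1_000_000;
        double p = (double) patriciaCalls(r, trials) / trials;
        double j = (double) jussiCalls(r, trials) / trials;
        // Expect p very close to 1.0 and j very close to 2.0.
        System.out.printf("Patricia: %.4f calls/result, Jussi: %.4f calls/result%n", p, j);
    }
}
```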
Thanks. I didn't realize her code was _that_ efficient.
I should explain why I asked about the alternative at all. The reason
is that my version is rather obviously still uniform (all the accepted
numbers are equally probable in the underlying generator) while with
your/Patricia's version something more subtle is going on, and I get
nervous when there is something subtle going on.
Let me see. Assume we have a uniform distribution for the 2^64 bit
patterns before the shift. When we lop off the rightmost bit, we are
left with 2^63 bit patterns, each of which occurred once with 0 and
once with 1 to its right, so they are all equally probable. And we
reject exactly one pattern, the 63 zeroes -- which accounts for both of
its source values, 0 and 1.
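The same argument can be checked exhaustively at a toy width (my own illustration, not from the thread): take all sixteen 4-bit patterns as a stand-in for the 2^64 longs, shift each right by one, and count how often each 3-bit result appears. Every result is hit by exactly two source patterns, the ones differing only in the dropped low bit, so the shifted values are uniform.

```java
import java.util.Arrays;

public class ShiftUniformity {
    public static void main(String[] args) {
        // 4-bit analogue of the 64-bit case: v >>> 1 maps two source
        // patterns (low bit 0 or 1) onto each 3-bit result.
        int[] counts = new int[8];
        for (int v = 0; v < 16; v++) {
            counts[v >>> 1]++;
        }
        // Every count is 2, so the shifted distribution is uniform.
        System.out.println(Arrays.toString(counts));
        // prints [2, 2, 2, 2, 2, 2, 2, 2]
    }
}
```

Rejecting the single all-zero result then removes one equally-likely outcome, leaving the rest uniform, just as in the 64-bit case.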
I think I see it now. Thanks again.