Rafael Cunha de Almeida
Hi,
I've found several sites on Google telling me that I shouldn't use
rand() % range + 1
and that I should instead use something like:
lowest + int(range*rand()/(RAND_MAX + 1.0))
They fail to make it clear why. There seems to be less randomness in the
lower bits, or something like that. I don't know what "less randomness"
would mean. Would it not be normally distributed or something like that?