Pete Becker said:
Generate a number between 0 and 1, and multiply it by (1-x).
But there are some boundary conditions that you have to watch for. In
particular, (float)rand()/RAND_MAX can be 1, and in that case you
can't generate a non-negative value that you can add to x to give you
a value that's < 1. You really need to divide by (RAND_MAX + 1), to
generate a value that's in the range 0 <= x < 1. But you have to be
careful there, because if RAND_MAX is equal to UINT_MAX, adding 1 to
it will give you 0.
I don't think this is quite right. rand() returns int, and in the (rare)
case where INT_MAX == UINT_MAX, adding 1 to RAND_MAX overflows a signed
int, which is undefined behaviour. A result of 0 is presumably possible,
but it seems unlikely in practice.
Converting to a wider integer type (if there is one) or to unsigned
(if UINT_MAX > INT_MAX) may help, but even so, float often does not
have enough precision:
(float)rand() / ((unsigned long)RAND_MAX + 1)
can be exactly == 1.0. I would avoid float for this purpose
altogether.
[Aside: this can happen even using (double)rand() when the int
returned by rand() is 64 bits. There is a lot of code that relies on
this division being strictly less than 1 (to generate array indexes,
for example) that will break with a 64-bit rand() function!]