random number

Roman Töngi

I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

double rnd;
int integerRnd;

srand(static_cast<unsigned>(time(NULL)));
for (int i = 0; i < 9; ++i) {
    // rand() returns a value from 0 to 32767
    rnd = 10000. / rand();
    // integer part of quotient
    integerRnd = rnd;
    // every random number in the form of 0.xxx
    rnd -= integerRnd;
    cout << rnd << endl;
}
 
Pete Becker

Roman said:
I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

The usual way to reduce values in the range [0, n] to the range [0, 1)
is to divide by n+1. If you want 1 to be in the final range, divide by n
instead.
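
For rand(), whose values run from 0 to RAND_MAX, that means dividing by
RAND_MAX + 1.0 (or by RAND_MAX). A minimal sketch:

#include <cstdlib>    // rand, srand, RAND_MAX
#include <ctime>      // time
#include <iostream>

int main()
{
    std::srand(static_cast<unsigned>(std::time(NULL)));
    for (int i = 0; i < 9; ++i) {
        // rand() yields an int in [0, RAND_MAX]; doing the addition in
        // floating point avoids overflow when RAND_MAX == INT_MAX.
        double rnd = std::rand() / (RAND_MAX + 1.0);   // in [0, 1)
        std::cout << rnd << std::endl;
    }
    return 0;
}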
 
Victor Bazarov

Roman said:
I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

There are many solutions out there. Have you tried the web?
You don't say what distribution you need. For a uniform one
you can simply scale the numbers you get from the 'rand' function
(see the comp.lang.c FAQ for recommendations). For other
distributions there are other approaches. Again, the web is
your friend in that case.
 
Pete Becker

Pete said:
Roman said:
I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

The usual way to reduce values in the range [0, n] to the range [0, 1)
is to divide by n+1. If you want 1 to be in the final range, divide by n
instead.

Forgot to mention, the following comment:

// rand() returns a value from 0 to 32767

is incorrect. Some implementations of rand do that, others don't. Look
it up.
 
Roman Töngi

Forgot to mention, the following comment:
// rand() returns a value from 0 to 32767

is incorrect. Some implementations of rand do that, others don't. Look it
up.

I did. In my C++ implementation it is as noted.
 
Kai-Uwe Bux

Roman said:
I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

double rnd;
int integerRnd;

srand(static_cast<unsigned>(time(NULL)));
for (int i = 0; i < 9; ++i) {
    // rand() returns a value from 0 to 32767
    rnd = 10000. / rand();
    // integer part of quotient
    integerRnd = rnd;
    // every random number in the form of 0.xxx
    rnd -= integerRnd;
    cout << rnd << endl;
}

If you want a uniform distribution, this code is broken. About 57.7%
of your random numbers are below 0.5 and about 42.3% are above.

You might want to look into the random number generator library from
boost.org.
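
For example, a first cut with the Boost.Random classes might look
roughly like this (assuming Boost is installed; check the header and
class names against the version you have):

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_real.hpp>
#include <boost/random/variate_generator.hpp>
#include <ctime>
#include <iostream>

int main()
{
    boost::mt19937 engine(static_cast<unsigned>(std::time(NULL)));
    boost::uniform_real<> zero_to_one(0.0, 1.0);      // uniform in [0, 1)
    boost::variate_generator<boost::mt19937&, boost::uniform_real<> >
        next(engine, zero_to_one);

    for (int i = 0; i < 9; ++i)
        std::cout << next() << std::endl;

    return 0;
}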


Best

Kai-Uwe Bux
 
Larry I Smith

Roman said:
I want to get a random number between 0 and 1.
The following code works, but it seems to me a little
awkward. Is there a "better" solution?

double rnd;
int integerRnd;

srand(static_cast<unsigned>(time(NULL)));
for (int i = 0; i < 9; ++i) {
    // rand() returns a value from 0 to 32767
    rnd = 10000. / rand();
    // integer part of quotient
    integerRnd = rnd;
    // every random number in the form of 0.xxx
    rnd -= integerRnd;
    cout << rnd << endl;
}


From the rand() manpage:

<quote>

#include <stdlib.h>

[...]

If you want to generate a random integer between 1 and 10, you
should always do it by using high-order bits, as in

j = 1 + (int) (10.0 * rand() / (RAND_MAX + 1.0));

and never by anything resembling

j = 1 + (rand() % 10);

(which uses lower-order bits).

</quote>

My suggestion would be:

double d = rand() / ((double)RAND_MAX);
 
Pete Becker

Roman said:
I did. In my C++ implementation it is as noted.

So you know your code will work as long as you don't use any other
implementation. <g> Use RAND_MAX. It expands to the right value on all
implementations.
 
Roman Töngi

If you want a uniform distribution, this code is broken. About 57.7%
of your random numbers are below 0.5 and about 42.3% are above.

How did you arrive at those percentages?
 
Pete Becker

Larry said:
From the rand() manpage:

[...]
If you want to generate a random integer between 1 and 10, you
should always do it by using high-order bits, as in

j = 1 + (int) (10.0 * rand() / (RAND_MAX + 1.0));

However, this version introduces a different problem: it's not uniform,
even if rand is. For small ranges like 1..10 the nonuniformity isn't
noticeable, but for larger ones (e.g. 1..RAND_MAX/100) it's a definite
problem.

When producing floating point ranges this approach is okay, because the
values in the target range are dense enough that they're nearly
continuous. For integral ranges, though, you need a more sophisticated
technique. TR1 will do this, with std::tr1::uniform_int.
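
A rough sketch of how that might look with a TR1-enabled compiler (the
header location varies; gcc ships it as <tr1/random>, other
implementations may use plain <random>):

#include <tr1/random>   // location varies by implementation
#include <ctime>
#include <iostream>

int main()
{
    std::tr1::mt19937 engine(static_cast<unsigned long>(std::time(NULL)));
    std::tr1::uniform_int<int> one_to_ten(1, 10);   // closed range [1, 10]

    for (int i = 0; i < 9; ++i)
        std::cout << one_to_ten(engine) << std::endl;

    return 0;
}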
 
Larry I Smith

Pete said:
However, this version introduces a different problem: it's not uniform,
even if rand is. For small ranges like 1..10 the nonuniformity isn't
noticeable, but for larger ones (e.g. 1..RAND_MAX/100) it's a definite
problem.

When producing floating point ranges this approach is okay, because the
values in the target range are dense enough that they're nearly
continuous. For integral ranges, though, you need a more sophisticated
technique. TR1 will do this, with std::tr1::uniform_int.

Yes, I understand the above, but does my suggestion

double d = rand() / ((double)RAND_MAX);

meet the OP's original need for a number between 0.0
and 1.0?

Regards,
Larry
 
Ali Çehreli

Pete Becker said:
However, this version introduces a different problem: it's not uniform,
even if rand is. For small ranges like 1..10 the nonuniformity isn't
noticeable, but for larger ones (e.g. 1..RAND_MAX/100) it's a definite
problem.

When producing floating point ranges this approach is okay, because the
values in the target range are dense enough that they're nearly
continuous. For integral ranges, though, you need a more sophisticated
technique. TR1 will do this, with std::tr1::uniform_int.

For integral ranges, here is a function that takes care of the
non-uniformity issue by discarding the extra values:

int randN(int n)
{
    // number of raw rand() values that map to each result in [0, n)
    const unsigned range = ((unsigned)(RAND_MAX) + 1) / n;
    int r;

    // values falling into the incomplete last bucket are discarded,
    // which keeps the distribution uniform
    do r = rand() / range;
    while (r >= n);

    return r;
}
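
For example, the 1..10 case discussed earlier in the thread then becomes
simply:

int j = 1 + randN(10);   // uniformly distributed over 1..10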

I first heard about this method in a post by Andrew Koenig. The above
corrected version came in further discussions.

Note: Purists may want to use static_cast<unsigned> when casting :)

Ali
 
Kai-Uwe Bux

Roman said:
How did you arrive at those percentages?

Well, right off hand, there is no reason to expect a uniform distribution:
essentially you are starting with a uniformly distributed random variable
X in some interval [a,b], and then you turn this into

fractional part of ( 1/X )

Note that 1/X is not uniformly distributed in [1/b,1/a], and even if it
were, taking fractional parts would not be, unless 1/b and 1/a are
an integer distance apart. Moreover, it is apparent that the resulting
distribution will depend heavily on the choice of the initial interval
[a,b]. Thus, I expected a skewed distribution. Very likely it has several
bumps (I would expect one for each unit interval in [1/b,1/a]). Also, the
distribution is probably not easy to describe and analyze theoretically.

As for the percentages, I just ran several experiments (drawing about
1,000,000 instances each) and counted. The numbers I gave are just the
results of these experiments.
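
Something along these lines would do it (a quick sketch of such an
experiment, not the exact code used):

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    std::srand(static_cast<unsigned>(std::time(NULL)));

    const long trials = 1000000;
    long below_half = 0;
    long samples = 0;

    while (samples < trials) {
        int r = std::rand();
        if (r == 0)
            continue;                         // avoid dividing by zero
        double rnd = 10000. / r;              // the original posting's method
        rnd -= static_cast<long>(rnd);        // keep the fractional part
        if (rnd < 0.5)
            ++below_half;
        ++samples;
    }

    std::cout << "fraction below 0.5: "
              << static_cast<double>(below_half) / trials << std::endl;
    return 0;
}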


Best

Kai-Uwe Bux
 
__PPS__

Pete said:
When producing floating point ranges this approach is okay, because the
values in the target range are dense enough that they're nearly
continuous. For integral ranges, though, you need a more sophisticated
technique. TR1 will do this, with std::tr1::uniform_int.


Or if you don't have std::tr1::uniform_int, use boost::uniform_int. For
a better random number library in general, check out Boost.Random.
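
Roughly the same pattern as the TR1 sketch above, spelled with Boost
(again assuming Boost is installed):

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_int.hpp>
#include <boost/random/variate_generator.hpp>
#include <ctime>
#include <iostream>

int main()
{
    boost::mt19937 engine(static_cast<unsigned>(std::time(NULL)));
    boost::uniform_int<> one_to_ten(1, 10);            // closed range [1, 10]
    boost::variate_generator<boost::mt19937&, boost::uniform_int<> >
        roll(engine, one_to_ten);

    std::cout << roll() << std::endl;                  // one draw from 1..10
    return 0;
}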
 
Pete Becker

Larry said:
Yes, I understand the above, but does my suggestion

double d = rand() / ((double)RAND_MAX);

meet the OP's original need for a number between 0.0
and 1.0?

As I said, "When producing floating point ranges this approach is okay..."
 
Lionel B

Larry I Smith said:
If you want to generate a random integer between 1 and 10, you
should always do it by using high-order bits, as in

j = 1 + (int) (10.0 * rand() / (RAND_MAX + 1.0));

and never by anything resembling

j = 1 + (rand() % 10);

(which uses lower-order bits).

This is certainly true of linear congruential PRNGs (such as [usually] the standard library rand()), where lower-order
bits are known to be rather non-random. However, to what extent does this hold for other PRNGs such as the Mersenne
Twister, lagged Fibonacci, etc. which one would expect to have more random low bits?
 
