KISS4691, a potentially top-ranked RNG.

orz

So for the put= values in Fortran, you need a vector of pseudorandom
integers, which is as good as it gets without a truly random device,
giving - one hopes - a period that is large with respect to the interval
you're interested in.

It doesn't seem like a problem of epistemology so much as a mathematical
ceiling on how much randomness you can create from a handful of values.


I think you have part of it backwards:
elseif (x < 0) then
tLTx = 1
elseif (t < 0) then
tLTx = 0
endif

should be:
elseif (x < 0) then
tLTx = 0
elseif (t < 0) then
tLTx = 1
endif

...if you want to produce a sequence of numbers identical to his C
code. Though the way you have it written produces significantly
higher-quality output according to some statistical tests.

Another issue to worry about is that the right shift must be an unsigned
(logical) right shift, but a quick googling suggests that Fortran's
ISHFT is a logical shift anyway.
 
orz

Whoops, I quoted the wrong previous post in my reply... that was
intended to be a reply to Gib Bogle's most recent post.
 
Gib Bogle

orz said:
I think you have part of it backwards:
elseif (x < 0) then
tLTx = 1
elseif (t < 0) then
tLTx = 0
endif

should be:
elseif (x < 0) then
tLTx = 0
elseif (t < 0) then
tLTx = 1
endif

...if you want to produce a sequence of numbers identical to his C
code. Though the way you have it written produces significantly
higher-quality output according to some statistical tests.

Another issue to worry about is that the right shift must be an unsigned
(logical) right shift, but a quick googling suggests that Fortran's
ISHFT is a logical shift anyway.

I think you are wrong. The code as I have it is correct, as far as I'm
concerned, and it produces identical results to the C code, when the signed
values are treated as unsigned (or vice versa). Remember, a signed negative
(e.g. x < 0) is representing an unsigned value greater than 2^31 - 1. In the
conditional code you refer to, signed x < 0 and signed t > 0 => unsigned x >=
2^31 and unsigned t < 2^31. Q.E.D.

As Glen pointed out, this all assumes a two's complement number representation,
such as is used by my Intel CPU/Intel compiler combination.
 
orz

I think you are wrong.  The code as I have it is correct, as far as I'm
concerned, and it produces identical results to the C code, when the signed
values are treated as unsigned (or vice versa).  Remember, a signed negative
(e.g. x < 0) is representing an unsigned value greater than 2^31 - 1.  In the
conditional code you refer to, signed x < 0 and signed t > 0 => unsigned x >=
2^31 and unsigned t < 2^31.  Q.E.D.

As Glen pointed out, this all assumes a two's complement number representation,
such as is used by my Intel CPU/Intel compiler combination.

Whoops, you are correct, I somehow got the two mixed up even when
testing them in code.

But in that case I don't know what you meant in your previous post
when you asked about differences between your output and the original
output. If the information produced is identical, then the
information produced is identical.
 
Gib Bogle

orz said:
Whoops, you are correct, I somehow got the two mixed up even when
testing them in code.

But in that case I don't know what you meant in your previous post
when you asked about differences between your output and the original
output. If the information produced is identical, then the
information produced is identical.

Please look at the code. You'll see that tLTx is used only to compute
c_this_works. The code as I posted it contains the method of computing c
suggested by George. This generates results different from the C code. If you
uncomment the line c = c_this_works you get identical results to the C code.
I'm sure I already said this.
 
orz

Please look at the code.  You'll see that tLTx is used only to compute
c_this_works.  The code as I posted it contains the method of computing c
suggested by George.  This generates results different from the C code.  If you
uncomment the line c = c_this_works you get identical results to the C code.
I'm sure I already said this.

Yes. Sorry. I was reading backwards from your last post and ended up
missing the point. And getting confused on the sign.

Anyway, the issue is that George's code uses a different definition of
sign than your implementation of it - his code is actually correct if
sign(x) is 1 if x is positive and 0 if x is negative. Since your sign
function returns -1 on negative, using it produces the wrong results.

Side note: the incorrect results produced that way appear to have
vaguely similar statistical properties to the original C code's output,
passing and failing the same tests that the original C code does in my
brief tests.
 
Gib Bogle

orz said:
Yes. Sorry. I was reading backwards from your last post and ended up
missing the point. And getting confused on the sign.

Anyway, the issue is that George's code uses a different definition of
sign than your implementation of it - his code is actually correct if
sign(x) is 1 if x is positive and 0 if x is negative. Since your sign
function returns -1 on negative, using it produces the wrong results.

Side note: the incorrect results produced that way appear to have
vaguely similar statistical properties to the original C code's output,
passing and failing the same tests that the original C code does in my
brief tests.

Interesting, who would have guessed that there is a language in which sign(-1) = 0.
 
Ilmari Karonen

["Followup-To:" header set to sci.math.]
No, no, a thousand times, NO! That is NOT enough, though FAR too many
Web pages, published papers and books claim that it is. Disjointness
isn't even a poor relation of randomness.
[snip]
Parallel random number generation is not easy, and 99% of the stuff
published on it is somewhere between grossly misleading and complete
nonsense.

I think in the parallel case, one would want to be able to generate
a seed to produce values that are guaranteed not to overlap with any
other node. Maybe something like

call RANDOM_NEW_SEED( old_seed, n, new_seed, my_node )

would be a sufficient interface. new_seed(:) would depend on
my_node in such a way that the generated sequence would not overlap
with that produced by any other possible value of my_node (again,
assuming the cycle length is long enough to satisfy that request).

Let me try to demonstrate, using a simplified (and admittedly somewhat
artificial) counterexample, why lack of overlap is not a sufficient
condition for independence:

Assume we have a "random oracle" R: Z -> [0,1) which takes as its
input an integer and returns a uniformly distributed random real
number between 0 and 1, such that the same input always produces the
same output, but that the outputs for different inputs are completely
independent.

Given such an oracle, we can construct a perfect PRNG P: Z -> [0,1)^N
which takes as its input an integer k, and returns the sequence <R(k),
R(k+1), R(k+2), ...>. Obviously, the sequence generated by P would be
indistinguishable from random (since it is indeed, by definition,
random) and non-repeating.

Now, what if we wanted several independent streams of random numbers?
Obviously, we can't just pass different seed values to P, since we
know that the streams would eventually overlap. We could solve the
problem by modifying P, e.g. to make it return the sequence
<R(f(k,0)), R(f(k,1)), R(f(k,2)), ...> instead, where f is an
injective function from Z^2 to Z. But what if we wanted to do it
_without_ modifying P?

One _bad_ solution would be to define a new generator Q: Z -> [0,1)^N
as Q(k)_i = frac(P(0)_i + ak), where a is some irrational number and
frac: R -> [0,1) returns the part of a real number "after the decimal
point" (i.e. frac(x) = x - floor(x)).

Clearly, the sequences returned by Q for different seed values would
never overlap, and, individually, they would each still be perfectly
random. Yet, just as clearly, all those sequences would be related by
a very trivial linear relation that would be blatantly obvious if you,
say, plotted two of them against each other.

The _right_ solution, as I suggested above, would've been to redesign
the underlying generator so that the different streams would be not
just non-overlapping but actually statistically independent. The same
general advice holds for practical PRNGs too, not just for my
idealized example.

You can't just take any arbitrary PRNG, designed for generating a
_single_ stream of random-seeming numbers, and expect the output to
still look random if you get to compare several distinct output
streams generated from related seeds. It might turn out that it does
look so, if the designer of the PRNG was careful or just lucky. But
designing a PRNG to satisfy such a requirement is a strictly harder
problem than simply designing it to generate a single random-looking
stream.
 
Ron Shepard

orz said:
If an RNG has a large number of possible states (say, 2^192) then it's
unlikely for any two runs to EVER reach identical states by any means
other than starting from the same seed. This RNG has a period vastly
in excess of "large", so for practical purposes it just won't
happen.

If you think about it for a second, you can see that this is not true
for a PRNG with the standard fortran interface. This interface allows
the seed values to be both input and output. The purpose (presumably,
although this is not required by the standard) for this is to allow the
seed to be extracted at the end of one run, and then used to initiate
the seed for the next run. This eliminates the possibility of the two
runs having overlapping sequences (provided the PRNG cycle is long
enough, of course).

But, with the ability to read the internal state of the PRNG, this would
allow the user to generate the exact same sequence twice (useful for
debugging, for example). Or the user could generate two sequences that
have arbitrary numbers of overlapping values (the sequences offset by 1,
offset by 2, offset by 100, or whatever). I can't think of any
practical reason for doing this. However, since it can be done with
100% certainty, it is not "unlikely" for it to occur, assuming the user
wants it to happen.

In a previous post, I suggested a new (intrinsic fortran) subroutine of
the form

call RANDOM_NEW_SEED( old_seed, n, new_seed )

One way to implement this subroutine would be for new_seed(:) to be the
internal state corresponding to the (n+1)-th element of the sequence
that results (or would result) from initialization with old_seed(:). Of
course, for this to be practical, especially for large values of n, this
would require the PRNG algorithm to allow the internal state for
arbitrary elements within a sequence to be determined in this way. That
would place practical restrictions on which PRNG algorithms can be used
with this interface, especially if additional features such as
independence of the separate sequences are also required.

$.02 -Ron Shepard
 
sturlamolden

I did a speed comparison on my laptop of KISS4691 against Mersenne
Twister 19937. KISS4691 produced about 110 million random numbers per
second, whereas MT19937 produced 118 million. If the compiler was
allowed to inline KISS4691, the performance increased to 148 million
per second. The function call overhead is important, and without taking
it into consideration, MT19937 will actually be faster than KISS4691.

Also note that a SIMD oriented version of MT19937 is twice as fast as
the one used in my tests. There is also a version of Mersenne Twister
suitable for parallel processing and GPUs. So a parallel or SIMD
version of Mersenne Twister is likely to be a faster PRNG than
KISS4691.

Then for the question of numerical quality: Is KISS4691 better than
MT19937? I don't feel qualified to answer this. Marsaglia is the
world's foremost authority on random number generators, so I trust his
advice, but MT19937 is claimed to give impeccable quality except for
crypto.

For those who need details:

The speed test was basically running the PRNGs for five seconds or
more, querying Windows' performance counter every ten million calls,
and finally subtracting an estimate of the timing overhead. The C
compiler was Microsoft C/C++ version 15.00.30729.01 (the only
optimization flag I used was /O2, i.e. maximize for speed). The
processor is an AMD Phenom N930 @ 2.0 GHz.



Sturla Molden
 
glen herrmannsfeldt

(snip)
In a previous post, I suggested a new (intrinsic fortran) subroutine of
the form
call RANDOM_NEW_SEED( old_seed, n, new_seed )
One way to implement this subroutine would be for new_seed(:) to be the
internal state corresponding to the (n+1)-th element of the sequence
that results (or would result) from initialization with old_seed(:). Of
course, for this to be practical, especially for large values of n, this
would require the PRNG algorithm to allow the internal state for
arbitrary elements within a sequence to be determined in this way. That
would place practical restrictions on which PRNG algorithms can be used
with this interface, especially if additional features such as
independence of the separate sequences are also required.

The standard mostly does not specify what is, or is not, practical.

DO I=1,INT(1000000,SELECTED_INT_KIND(20))**3
ENDDO

For many generators, one should not specify too large a value
for the above RANDOM_NEW_SEED routine.

-- glen
 
Uno

sturlamolden said:
I did a speed comparison on my laptop of KISS4691 against Mersenne
Twister 19937. KISS4691 produced about 110 million random numbers per
second, whereas MT19937 produced 118 million. If the compiler was
allowed to inline KISS4691, the performance increased to 148 million
per second. The function call overhead is important, and without taking
it into consideration, MT19937 will actually be faster than KISS4691.

Also note that a SIMD oriented version of MT19937 is twice as fast as
the one used in my tests. There is also a version of Mersenne Twister
suitable for parallel processing and GPUs. So a parallel or SIMD
version of Mersenne Twister is likely to be a faster PRNG than
KISS4691.

Then for the question of numerical quality: Is KISS4691 better than
MT19937? I don't feel qualified to answer this. Marsaglia is the
world's foremost authority on random number generators, so I trust his
advice, but MT19937 is claimed to give impeccable quality except for
crypto.

For those who need details:

The speed test was basically running the PRNGs for five seconds or
more, querying Windows' performance counter every ten million calls,
and finally subtracting an estimate of the timing overhead. The C
compiler was Microsoft C/C++ version 15.00.30729.01 (the only
optimization flag I used was /O2, i.e. maximize for speed). The
processor is an AMD Phenom N930 @ 2.0 GHz.


I'd be curious to see the source you used for this purpose. This is
very slightly adapted from Dann Corbit in ch. 13 of C Unleashed.

$ gcc -Wall -Wextra cokus2.c -o out
$ ./out
3510405877 4290933890 2191955339 564929546 152112058 4262624192
2687398418
268830360 1763988213 578848526 4212814465 3596577449 4146913070
950422373
1908844540 1452005258 3029421110 142578355 1583761762 1816660702
2530498888
1339965000 3874409922 3044234909 1962617717 2324289180 310281170
981016607
908202274 3371937721 2244849493 675678546 3196822098 1040470160
3059612017
3055400130 2826830282 2884538137 3090587696 2262235068 3506294894
2080537739
1636797501 4292933080 2037904983 2465694618 1249751105 30084166
112252926
1333718913 880414402 334691897 3337628481 17084333 1070118630

....

3009858040 3815089086 2493949982 3668001592 1185949870 2768980234
3004703555
1411869256 2625868727 3108166073 3689645521 4191339889 1933496174
1218198213
3716194408 1148391246 1345939134 3517135224 3320201329 4292973312
3428972922
1172742736 275920387 617064233 3754308093 842677508 529120787
1121641339
$ cat cokus2.c


#include <stdio.h>
#include <stdlib.h>
#include "mtrand.h"
/*
uint32 must be an unsigned integer type capable of holding at least 32
bits; exactly 32 should be fastest, but 64 is better on an Alpha with
GCC at -O3 optimization so try your options and see what's best for you
*/

typedef unsigned long uint32;

/* length of state vector */
#define N (624)

/* a period parameter */
#define M (397)

/* a magic constant */
#define K (0x9908B0DFU)

/* mask all but highest bit of u */
#define hiBit(u) ((u) & 0x80000000U)

/* mask all but lowest bit of u */
#define loBit(u) ((u) & 0x00000001U)

/* mask all but the lowest 31 bits of u */
#define loBits(u) ((u) & 0x7FFFFFFFU)

/* move hi bit of u to hi bit of v */
#define mixBits(u, v) (hiBit(u)|loBits(v))

/* state vector + 1 extra to not violate ANSI C */
static uint32 state[N + 1];

/* next random value is computed from here */
static uint32 *next;

/* can *next++ this many times before reloading */
static int left = -1;


/*
**
** We initialize state[0..(N-1)] via the generator
**
** x_new = (69069 * x_old) mod 2^32
**
** from Line 15 of Table 1, p. 106, Sec. 3.3.4 of Knuth's
** _The Art of Computer Programming_, Volume 2, 3rd ed.
**
** Notes (SJC): I do not know what the initial state requirements
** of the Mersenne Twister are, but it seems this seeding generator
** could be better. It achieves the maximum period for its modulus
** (2^30) iff x_initial is odd (p. 20-21, Sec. 3.2.1.2, Knuth); if
** x_initial can be even, you have sequences like 0, 0, 0, ...;
** 2^31, 2^31, 2^31, ...; 2^30, 2^30, 2^30, ...; 2^29, 2^29 + 2^31,
** 2^29, 2^29 + 2^31, ..., etc. so I force seed to be odd below.
**
** Even if x_initial is odd, if x_initial is 1 mod 4 then
**
** the lowest bit of x is always 1,
** the next-to-lowest bit of x is always 0,
** the 2nd-from-lowest bit of x alternates ... 0 1 0 1 0 1 0 1 ... ,
** the 3rd-from-lowest bit of x 4-cycles ... 0 1 1 0 0 1 1 0 ... ,
** the 4th-from-lowest bit of x has the 8-cycle ... 0 0 0 1 1 1 1 0 ... ,
** ...
**
** and if x_initial is 3 mod 4 then
**
** the lowest bit of x is always 1,
** the next-to-lowest bit of x is always 1,
** the 2nd-from-lowest bit of x alternates ... 0 1 0 1 0 1 0 1 ... ,
** the 3rd-from-lowest bit of x 4-cycles ... 0 0 1 1 0 0 1 1 ... ,
** the 4th-from-lowest bit of x has the 8-cycle ... 0 0 1 1 1 1 0 0 ... ,
** ...
**
** The generator's potency (min. s>=0 with (69069-1)^s = 0 mod 2^32) is
** 16, which seems to be alright by p. 25, Sec. 3.2.1.3 of Knuth. It
** also does well in the dimension 2..5 spectral tests, but it could be
** better in dimension 6 (Line 15, Table 1, p. 106, Sec. 3.3.4, Knuth).
**
** Note that the random number user does not see the values generated
** here directly since reloadMT() will always munge them first, so maybe
** none of all of this matters. In fact, the seed values made here could
** even be extra-special desirable if the Mersenne Twister theory says
** so-- that's why the only change I made is to restrict to odd seeds.
*/

void mtsrand(uint32 seed)
{
    register uint32 x = (seed | 1U) & 0xFFFFFFFFU, *s = state;
    register int j;

    for (left = 0, *s++ = x, j = N; --j;
         *s++ = (x *= 69069U) & 0xFFFFFFFFU);
}


uint32 reloadMT(void)
{
    register uint32 *p0 = state, *p2 = state + 2, *pM = state + M, s0, s1;
    register int j;

    if (left < -1)
        mtsrand(4357U);

    left = N - 1, next = state + 1;

    for (s0 = state[0], s1 = state[1], j = N - M + 1; --j; s0 = s1, s1 = *p2++)
        *p0++ = *pM++ ^ (mixBits(s0, s1) >> 1) ^ (loBit(s1) ? K : 0U);

    for (pM = state, j = M; --j; s0 = s1, s1 = *p2++)
        *p0++ = *pM++ ^ (mixBits(s0, s1) >> 1) ^ (loBit(s1) ? K : 0U);

    s1 = state[0], *p0 = *pM ^ (mixBits(s0, s1) >> 1) ^ (loBit(s1) ? K : 0U);
    s1 ^= (s1 >> 11);
    s1 ^= (s1 << 7) & 0x9D2C5680U;
    s1 ^= (s1 << 15) & 0xEFC60000U;
    return (s1 ^ (s1 >> 18));
}


uint32 mtrand(void)
{
    uint32 y;

    if (--left < 0)
        return (reloadMT());

    y = *next++;
    y ^= (y >> 11);
    y ^= (y << 7) & 0x9D2C5680U;
    y ^= (y << 15) & 0xEFC60000U;
    return (y ^ (y >> 18));
}
#define UNIT_TEST
#ifdef UNIT_TEST
int main(void)
{
    int j;

    /* you can seed with any uint32, but the best are odds in 0..(2^32 - 1) */
    mtsrand(4357U);

    /* print the first 2,002 random numbers seven to a line as an example */
    for (j = 0; j < 2002; j++)
        printf(" %10lu%s", (unsigned long) mtrand(), (j % 7) == 6 ? "\n" : "");

    return (EXIT_SUCCESS);
}
#endif

// gcc -Wall -Wextra cokus2.c -o out
$ cat mtrand.h
/*
** The proper usage and copyright information for
** this software is covered in DSCRLic.TXT
** This code is Copyright 1999 by Dann Corbit
*/


/*
** Header files for Mersenne Twister pseudo-random number generator.
*/
extern void mtsrand(unsigned long seed);
extern unsigned long reloadMT(void);
extern unsigned long mtrand(void);
$
 
robin

| robin wrote:

| > I have already posted a PL/I version using unsigned arithmetic.
| >
| > Here is another version, this time using signed arithmetic :--
| >
| > (NOSIZE, NOFOFL):
| > RNG: PROCEDURE OPTIONS (MAIN, REORDER);
| >
| > declare (xs initial (521288629), xcng initial (362436069),
| > Q(0:4690) ) static fixed binary (31);
| >
| > MWC: procedure () returns (fixed binary (31));
| > declare (t,x,i) fixed binary (31);
| > declare (c initial (0), j initial (4691) ) fixed binary (31) static;
| > declare (t1, t2, t3) fixed binary (31);
| >
| > if j < hbound(Q,1) then j = j + 1; else j = 0;
| > x = Q(j);
| > t = isll(x,13)+c+x;
| > t1 = iand(x, 3) - iand(t, 3);
| > t2 = isrl(x, 2) - isrl(t, 2);
| > if t2 = 0 then t2 = t1;
| > if t2 > 0 then t3 = 1; else t3 = 0;
| > c = t3 + isrl(x, 19);
| > Q(j)=t;
| > return (t);
| > end MWC;
| >
| > CNG: procedure returns (fixed binary (31));
| > xcng=bin(69069)*xcng+bin(123);
| > return (xcng);
| > end CNG;
| >
| > XXS: procedure returns (fixed binary (31));
| > xs = ieor (xs, isll(xs, 13) );
| > xs = ieor (xs, isrl(xs, 17) );
| > xs = ieor (xs, isll(xs, 5) );
| > return (xs);
| > end XXS;
| >
| > KISS: procedure returns (fixed binary (31));
| > return ( MWC()+CNG+XXS );
| > end KISS;
| >
| > declare (i,x) fixed binary (31);
| > declare y fixed decimal (11);
| >
| > Q = CNG+XXS; /* Initialize. */
| > do i = 1 to 1000000000; x=MWC(); end;
| > put skip edit (" Expected MWC result = 3740121002", 'computed =', x)
| > (a, skip, x(12), a, f(11));
| > y = iand(x, 2147483647);
| > if x < 0 then y = y + 2147483648;
| > put skip edit (y) (x(11), f(22)); put skip;
| > do i = 1 to 1000000000; x=KISS; end;
| > put skip edit ("Expected KISS result = 2224631993", 'computed =', x)
| > (a, skip, x(12), a, f(11));
| > y = iand(x, 2147483647);
| > if x < 0 then y = y + 2147483648;
| > put skip edit (y) (x(11), f(22));
| >
| > end RNG;

| If you were to comment out the PL/I command line that compiled this,
| what would it be?

???
 
sturlamolden

Mersenne Twister 19937 speed, single (Hz): 1.63278e+008
Mersenne Twister 19937 speed, array (Hz):  1.3697e+008
KISS 4691 speed, single (Hz):              1.86338e+008
KISS 4691 speed, array (Hz):               1.87675e+008

Those numbers are on the order of 100 million samples per second (10^8),
so you have the same order of magnitude I reported. The array version of
KISS is inlined (by macro expansion).

These numbers are likely dependent on compiler and hardware.

In your case, KISS4691 is always faster than MT19937. That is what I
expected to see on my laptop as well, but did not. The speed difference
is not very substantial, though - less than a factor of 2.

I am more concerned about numerical quality. Which one should we use
based on that?


Sturla
 
orz

Those numbers are in 100 million samples per second (10^8), so you
have the same order of magnitude I reported. The array version of KISS
is inlined (by macro expansion).

These numbers are likely dependent on compiler and hardware.

In your case, KISS4691 is always faster than MT19937. That is what I
expected to see on my laptop as well, but did not. The speed difference
is not very substantial, though - less than a factor of 2.

I am more concerned about numerical quality. Which one should we use
based on that?

Sturla

MT19937 fails only a few relatively obscure empirical tests. There's
a widespread belief that it's a good RNG and its few failed tests have
little real world consequence. Reduced strength versions of MT19937
have a strong tendency to fail additional tests.

KISS4691 fails no single-seed empirical tests, but it does fail some
empirical tests for correlation between different seeds. Reduced
strength versions tend to do better than reduced strength versions of
MT19937, but not by much.

If you want to go all out for quality over speed, the standard method
is to simply encrypt the output of some RNG with AES or some
comparable algorithm. That's much much slower than MT19937 or
KISS4691, but pretty much perfect quality. Or you could xor the
output of KISS4691 and MT19937.

If you're willing to go with unknown-shortest-cycle-length RNGs then
there are other options, many of which do perfectly on all empirical
tests, and some of those also offer other advantages (such as being
substantially faster than either KISS4691 or MT19937, or offering some
degree of cryptographic security, or passing all empirical tests even
in drastically reduced strength versions). But the famous people in
RNG theory generally seem to think that unknown-shortest-cycle-length
RNGs should not be trusted.
 
orz

Interesting, who would have guessed that there is a language in which sign(-1) = 0.

I have to correct myself for swapping 0 and 1 *again*. And I'm not
even dyslexic, so far as I know.

His code assumed sign returned 1 on negative, and 0 otherwise, as in a
simple unsigned 31-bit right shift - the exact opposite of what I said.
 
Uno

orz said:
I have to correct myself for swapping 0 and 1 *again*. And I'm not
even dyslexic, so far as I know.

His code assumed sign returned 1 on negative, and 0 otherwise, as in a
simple unsigned 31 bit rightshift. The exact opposite of what I
said.

Zero: the other one.

Zero: One-Lite.

Telling left from right is sometimes the hardest thing.
 
