Sebastian Garth
I'd like to present a new variation of the standard Shrinking LFSR Generator (SLG) that, with just a few slight modifications, achieves both non-periodicity and a significantly improved level of security. An SLG is, of course, currently considered impervious to cryptanalysis except for the case where the characteristic polynomial is known (i.e. public). So the first modification proposed is to have the polynomial change somehow over time. In this case we'll simply increment the polynomial's binary representation whenever the LFSR's internal state has completed a full cycle (which will of course be some length on [2, (2^N)-1], where N is the degree of the polynomial). The next modification is to dynamically increase the size of the LFSR whenever an overflow of the polynomial's binary representation is imminent. As an arbitrary convention (and better suggestions are certainly welcome here), the internal state of the LFSR is set to the seed and the binary representation of the polynomial is set to seed + 1. That's basically it!
In summary, the new design should demonstrate the following properties:
1) Non-periodicity;
2) Security; Implied by the current consensus that the only weakness of the SLG is the "known polynomial predicament".
3) Memory efficiency; This is implied by the fact that we are merely incrementing the polynomial's binary representation, and that for any given N the number of characteristic polynomials with maximal period ((2^N)-1) is phi((2^N)-1)/N, where phi is Euler's totient function, so the result is a very slow rate of growth indeed. To illustrate the point, if we were to imagine running the generator non-stop for a trillion years on today's computers, the amount of working memory needed would be far less than what would be required to store the information contained in this very sentence!
4) Low algorithmic complexity; This is of course somewhat of a subjective statement, but as an example, a very lightly optimized proof-of-concept program written by the author was capable of producing about 3 million output bits per second. A highly optimized version might well produce two or three times as much.
5) Capable of producing "sufficient" randomness; Implied by what research has been done on SLGs, as well as the empirical evidence (albeit meager) gathered thus far by the author. Up to this point, the testing I have conducted essentially amounts to running the output of the generator through the ENT tool and several file compressors. In the latter case, compression levels consistently amounted to 2% or less for streams up to 3000 bytes and 0% for anything larger, suggesting a high degree of entropy. A fair representation of the type of results obtained from the ENT tool is as follows:
Entropy = 7.999800 bits per byte.
Optimum compression would reduce the size
of this 1000000 byte file by 0 percent.
Chi square distribution for 1000000 samples is 277.12, and randomly
would exceed this value 16.31 percent of the times.
Arithmetic mean value of data bytes is 127.4699 (127.5 = random).
Monte Carlo value for Pi is 3.146724587 (error 0.16 percent).
Serial correlation coefficient is 0.000509 (totally uncorrelated = 0.0).
6) Bit-width neutrality; There is no need for carefully chosen "magic constants" which depend so delicately on machine word size and other such factors, and the input seed can be any arbitrary value whatsoever, even one whose magnitude is several thousand bits or what have you.
7) Simplicity; The algorithm is arguably trivial to understand, easy to implement, and overall analysis of the algorithm should prove to be a relatively straightforward task.
A proof-of-concept implementation of the algorithm written in the C programming language can be found here (http://tinyurl.com/moar-zip). I would sincerely appreciate any comments relating to the algorithm, the source code, or anything else that may be relevant to the discussion here.