## Çok boyutlu kaotik sistemler ile şifreleme (Encryption with Multidimensional Chaotic Systems)

##### Files

##### Date

1997

##### Authors

Yiğit, Asiye

##### Journal Title

##### Journal ISSN

##### Volume Title

##### Publisher

Fen Bilimleri Enstitüsü (Institute of Science and Technology)

##### Abstract

Chaos, now studied in astronomy, biology, biophysics, chemistry, engineering, geology, mathematics, medicine, meteorology, plasma physics, and even the social sciences, is very difficult to analyze. Nevertheless, a particular model, the logistic map, offers a very good route to explaining chaotic behavior. Chaos has three characteristics: sensitive dependence on initial conditions, topological transitivity, and dense periodic orbits. The kneading of dough provides an intuitive picture of all of these mathematical properties. Every secure communication system rests on mixing the message signal with a cipher that is known only to the intended receivers and is hard for other receivers to break. Linear and nonlinear feedback shift registers are examples of hardware used to produce random enciphering signals. Chaotic systems, although defined by very few parameters, behave in a completely predictable way only when their initial conditions are known exactly; otherwise they exhibit random behavior. This sensitive dependence on initial conditions has led many researchers to use chaotic signals as enciphering signals. However, chaotic signals, which arise as sequences of real numbers, degenerate into short-period signals in digital implementations because of quantization errors. Even so, it has been shown that the cycle lengths obtained when chaotic maps are realized on high-precision computers are adequate for practical applications. Alongside work proposing one-dimensional chaotic maps for encryption, there are also studies proposing multidimensional chaotic systems. In those studies, however, the chaotic structure of the systems was destroyed by solving them numerically with a large time step Δt. This thesis claims that the Lorenz and Chua systems can generate random numbers while remaining in the chaotic regime.
To that end, statistical randomness tests were applied to sequences built from the least significant digits of the numerical solutions obtained with small time steps, in order to verify that they are random. The results support the claim above. It is thus shown that, with this method, the Lorenz and Chua systems can be used as ciphers in a digital secure communication system.
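The extraction scheme described in the abstract (bits taken from the least significant digits of small-time-step numerical solutions of the Lorenz system) can be sketched as follows. The forward-Euler integration, the step size, and the parity-of-a-digit extraction rule are illustrative assumptions of this sketch, not the thesis's exact procedure.

```python
# Sketch: derive a bit sequence from the least significant digits of a
# numerically integrated Lorenz system (illustrative parameters and rule).

def lorenz_bits(n_bits, dt=0.001, x=1.0, y=1.0, z=1.0,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler integration of the Lorenz equations; each step
    contributes one bit: the parity of a low-order decimal digit of the
    x-coordinate (an assumed extraction rule)."""
    bits = []
    for _ in range(n_bits):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        # Low-order digit of x at a fixed precision, reduced to one bit.
        digit = int(abs(x) * 1e6) % 10
        bits.append(digit & 1)
    return bits

stream = lorenz_bits(10000)
```

A sequence produced this way would then be subjected to the statistical randomness tests described later in the summary.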

In the last thirty-five years there has been a great deal of interest in the study of nonlinear dynamical systems, and the result has been a better qualitative understanding of previously intractable systems. The apparently random behavior of solutions of deterministic systems is now easily comprehended in many cases. Generally speaking, the analysis of chaos is extremely difficult. However, in the specific model case of the logistic map there is a beautiful and illuminating way to understand chaotic behavior through and through. There are many possible definitions of chaos, ranging from measure-theoretic notions of randomness in ergodic theory to the topological approach adopted here. Definition 1: f: J → J is said to be topologically transitive if for any pair of open sets U, V ⊂ J there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Definition 2: f: J → J has sensitive dependence on initial conditions if there exists δ > 0 such that, for any x ∈ J and any neighborhood N of x, there exist y ∈ N and n ≥ 0 such that |f^n(x) − f^n(y)| > δ. Definition 3: let V be a set; f: V → V is said to be chaotic on V if (1) f has sensitive dependence on initial conditions, (2) f is topologically transitive, and (3) periodic points are dense in V. To summarize, a chaotic map possesses three ingredients: unpredictability, indecomposability, and an element of regularity. A chaotic system is unpredictable because of its sensitive dependence on initial conditions. Because of topological transitivity, it cannot be broken down or decomposed into two subsystems (two invariant open subsets) that do not interact under f. And in the midst of this random behavior there is nevertheless an element of regularity, namely the periodic points, which are dense. Having discussed the phenomenon of chaos and the routes leading to it in 'simple' one-dimensional settings, we continue with the exposition of chaos in dynamical systems of two or more dimensions.
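The sensitive dependence of Definition 2 is easy to observe numerically in the logistic map at its standard chaotic parameter value; a minimal sketch:

```python
# Sensitive dependence on initial conditions for the logistic map
# f(x) = 4x(1 - x), the standard fully chaotic parameter value.

def logistic_orbit(x0, n):
    """Return the orbit x0, f(x0), f(f(x0)), ... of length n + 1."""
    xs = [x0]
    for _ in range(n):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.123456789, 60)
b = logistic_orbit(0.123456790, 60)   # seed differs by only 1e-9
gap = [abs(u - v) for u, v in zip(a, b)]
```

The two orbits start 10⁻⁹ apart; since nearby orbits separate roughly by a factor of two per iteration, the gap reaches order one within a few dozen steps, which is exactly the unpredictability the definition captures.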
This is the relevant case for models in the natural sciences, since processes can very rarely be described by a single state variable. A new object appears in higher dimensions: the strange attractor. Roughly speaking, an attractor is an invariant set to which all nearby orbits converge. The first strange attractor ever recognized as such in the natural sciences is the Lorenz attractor, discovered in 1963. The Lorenz system is a model of thermal convection that includes not only a description of the motion of a viscous fluid or atmosphere but also information about the distribution of heat, the driving force of thermal convection. Chaos is also seen in electrical circuits. Chua's circuit is the simplest electronic circuit that satisfies the mathematical properties of chaos. In addition, this remarkable circuit is the only physical system for which the presence of chaos has been proved mathematically. The circuit is readily constructed at low cost using standard electronic components and exhibits a rich variety of bifurcations and chaos. The focus of this research is that the Chua attractor and the Lorenz attractor can be used in cryptography as secure pseudorandom generators. Cryptography is the set of techniques used to ensure the secrecy of messages when hostile personnel have the technical capability to intercept and correctly interpret an unprotected message, which is called the plaintext. After transformation to secret form, a message is called the ciphertext or a cryptogram. The process of transforming the plaintext into ciphertext is called encryption. Cryptanalysis is the unauthorized extraction of information from the ciphertext. A practical cryptographic communication system must produce ciphertext that not only resists cryptanalysis but can also be efficiently deciphered by authorized personnel. In a cryptographic system, a set of parameters that determines a specific cryptographic transformation or its inverse is called a key.
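Chua's circuit is commonly written in the dimensionless form x' = α(y − x − h(x)), y' = x − y + z, z' = −βy, where h is the piecewise-linear diode characteristic. The sketch below uses the widely quoted double-scroll parameter set and a forward-Euler step; both choices are assumptions of this illustration, not the thesis's integration scheme.

```python
# Sketch of the dimensionless Chua circuit equations (forward Euler).
# Parameters are the commonly cited double-scroll values; the step size
# dt is an illustrative choice.

def chua_step(x, y, z, dt=0.001, alpha=15.6, beta=28.0,
              m0=-8.0 / 7.0, m1=-5.0 / 7.0):
    # Piecewise-linear Chua diode characteristic h(x).
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
    dx = alpha * (y - x - h)
    dy = x - y + z
    dz = -beta * y
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 0.7, 0.0, 0.0
traj = []
for _ in range(20000):
    x, y, z = chua_step(x, y, z)
    traj.append(x)
```

The trajectory wanders on a bounded attractor rather than settling to a fixed point, which is the behavior the pseudorandom construction relies on.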
In most applications, a single key determines both a cryptographic transformation and its inverse. Cryptographic systems are designed so that their security depends upon the inability of the cryptanalyst to determine the key even if she knows the structure of the cryptographic system. An enciphering algorithm or transformation is called a cipher. Messages can be enciphered by a block cipher, a stream cipher, or a combination of the two. A block cipher divides the plaintext into separate blocks of m bits and associates with each block (x_i, x_{i+1}, ..., x_{i+m-1}) of plaintext a block (y_i, y_{i+1}, ..., y_{i+n-1}) of n bits of ciphertext such that y_k = f_k(x_i, x_{i+1}, ..., x_{i+m-1}), i ≤ k ≤ i+n-1, where the f_k are functions. Messages are deciphered with inverse functions. For unambiguous deciphering, n ≥ m is necessary; for ease of automation, n = m in nearly all practical block ciphers. In stream ciphers, on the other hand, the plaintext is encrypted on a bit-by-bit basis. In encrypting the data flow to be transmitted, the key is fed into an algorithm called the pseudorandom bit generator to create a long sequence of binary signals. This "keystream" is then mixed with the plaintext sequence, usually by exclusive-OR (XOR, bitwise modulo-2 addition) gates, to produce the ciphertext. Stream ciphers play an especially important role in cryptographic practice, both diplomatic and military, protecting communications in the very high frequency domain. The central problem in stream-cipher cryptography, however, is the difficulty of generating a long unpredictable sequence of binary signals from a short and random key. Unpredictable sequences are desirable in cryptography because it is impossible, given a reasonable segment of their signals and reasonable computer resources, to find out more about them. Pseudorandom bit generators have been widely used to construct such sequences.
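The XOR mixing step described above can be sketched as follows; `os.urandom` stands in here for the output of a pseudorandom bit generator, purely for illustration.

```python
# XOR keystream mixing: ciphertext = plaintext XOR keystream, and the
# same operation inverts it, since (p ^ k) ^ k = p.
import os

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """Mix data with a keystream of at least the same length."""
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"attack at dawn"
keystream = os.urandom(len(plaintext))   # stand-in for a generator's output
ciphertext = xor_stream(plaintext, keystream)
recovered = xor_stream(ciphertext, keystream)
```

Because XOR is its own inverse, encryption and decryption are the same circuit, which is why XOR gates are the usual mixing hardware.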
There are two types of stream ciphers: those for which the keystream is independent of the plaintext and those for which it is not. Independently keyed ciphers are often called synchronous ciphers because they require synchronization between the keystream and the ciphertext for successful deciphering. Stream ciphers for which the keystream is a function of the plaintext are often called auto-key ciphers. The only unconditionally secure cipher is a synchronous cipher that generates a completely random keystream as long as the message. Because the key, which is the same as the keystream in this case, cannot be reused, this cipher is called the one-time tape or one-time pad. If the key is produced by a binary symmetric source such that the probability of any enciphering signal being 1 is 1/2, independently of the other signals, the key is random. A known-plaintext attack is an attempted cryptanalysis based upon some knowledge of corresponding ciphertext and plaintext. An unconditionally secure cipher is one that is invulnerable to cryptanalysis no matter how much ciphertext is available. A computationally secure cipher is one that is vulnerable to cryptanalysis only if a prohibitively large amount of computation is performed. Most practical ciphers are at best computationally secure. The necessity of long, frequently changed keys renders the one-time tape impractical for most applications. Therefore, since the twenties, many cipher systems have operated with an approximate version of the one-time pad that uses long sequences of pseudorandom bits generated from short secret keys. Generally speaking, we can formulate three requirements for cryptographically secure keystream generators: (1) the period of the keystream must be large enough to accommodate the length of the transmitted message; (2) the output bits must be easy to generate; (3) the output bits must be hard to predict.
Given the generator and the first n output bits a(0), a(1), ..., a(n-1), it should be computationally infeasible to predict the (n+1)st bit a(n) of the sequence with better than a 50-50 chance. That is, given a portion of the output sequence, the cryptanalyst should not be able to generate other bits forward or backward. A widely discussed method for generating pseudorandom numbers, the linear congruential generator (LCG), is based on recurrences of the form

X(i+1) = aX(i) + b mod m. (1)

Here (a, b, m) are the parameters describing the generator and can be utilized as secret keys; X(0) is the seed. If the parameters are chosen judiciously, the numbers X(i) will not repeat until all integers of the interval [0, m-1] have occurred. Boyar shows that sequences generated by LCGs are not cryptographically secure [45]. Strong cryptographic sequences can be designed on the basis of shift registers, even when the feedback is linear. A feedback shift register consists of n flip-flops and a feedback function that expresses each new element a(t), for t ≥ n, of the sequence in terms of the previously generated elements a(t-n), a(t-n+1), ..., a(t-1). The feedback function must be nonsingular, that is, of the form

a(t) = g(a(t-1), a(t-2), ..., a(t-n+1)) ⊕ a(t-n), (2)

where ⊕ denotes the XOR operation. Depending on whether the function g is linear (implementable with XORs alone) or not, the generator is called a linear feedback shift register (LFSR) or a nonlinear feedback shift register (NLFSR). Each individual storage element of an FSR is called a stage, and the binary signals a(0), a(1), a(2), ..., a(n-1) are loaded into the stages as the initial condition. The period of the sequence produced by an FSR depends both on the number of stages and on the details of the feedback connection. The maximal period of a sequence that can be generated by an n-stage FSR with nonsingular feedback is 2^n, the number of possible states an n-stage shift register may have.
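Recurrence (1) can be demonstrated with a deliberately tiny modulus; the parameter values below are illustrative, chosen to satisfy the classical full-period conditions so that every residue appears once per period.

```python
# A linear congruential generator X(i+1) = (a*X(i) + b) mod m, eq. (1).
# With suitable (a, b, m), the generator attains full period m; a tiny
# m = 16 is used here so one whole period is visible.
from itertools import islice

def lcg(a, b, m, seed):
    """Yield the LCG sequence X(1), X(2), ... indefinitely."""
    x = seed
    while True:
        x = (a * x + b) % m
        yield x

out = list(islice(lcg(a=5, b=3, m=16, seed=7), 16))
# One full period visits each residue 0..15 exactly once.
```

As the text notes via Boyar's result, such sequences are statistically adequate but predictable, so they are unusable as cryptographic keystreams on their own.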
The state diagram of a nonsingular FSR may have many small cycles, and the output sequence becomes insecure when the generator falls into one of them. A countermeasure is to design n-stage shift registers that generate sequences of the largest possible period 2^n, the so-called de Bruijn sequences of degree n. The following is a de Bruijn sequence of period 2^4 = 16, generated by a four-stage NLFSR: 1011001010000111.... LFSRs are also among the most important devices in building pseudorandom bit generators. They have feedback functions of the form

a(t) = c_1 a(t-1) ⊕ c_2 a(t-2) ⊕ ... ⊕ c_{n-1} a(t-n+1) ⊕ a(t-n), (3)

with c_j ∈ {0, 1}. The feedback connection of an LFSR can be represented formally by the so-called feedback polynomial

f(x) = 1 + c_1 x + c_2 x^2 + ... + c_{n-1} x^{n-1} + x^n (4)

in the indeterminate x, which has no meaning other than as a mathematical symbol. The feedback polynomial determines the period and the statistical behavior of the output sequence. To avoid trivial output, the zero state should be excluded from the initial setting. In general, to guarantee the largest possible period 2^n - 1 for the output sequence, the feedback polynomial f(x) of the LFSR should be primitive. This means f(x) is chosen so that the least positive integer T for which x^T - 1 is divisible by f(x) is T = 2^n - 1. Maximality of the period simultaneously guarantees good statistics. Many tests have been applied to sequences in order to investigate their randomness. Here we present five statistical tests designed to determine whether a sequence appears random. We calculate a value, called a statistic, for a sequence and then compare that value with the corresponding distribution of values for random sequences. If the pseudorandom sequence exhibits a property that very few truly random sequences do, then we conclude that the pseudorandom sequence does not model a random sequence very well. Suppose we wish to test a particular feature of a sequence.
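The LFSR of eqs. (3)-(4) can be sketched with the degree-4 primitive polynomial f(x) = 1 + x + x^4, i.e. the recurrence a(t) = a(t-1) ⊕ a(t-4); any nonzero seed then yields the maximal period 2^4 - 1 = 15.

```python
# A four-stage LFSR with primitive feedback polynomial f(x) = 1 + x + x^4,
# i.e. a(t) = a(t-1) XOR a(t-4) (eq. (3) with c1 = 1, c2 = c3 = 0).

def lfsr_bits(seed, n):
    """seed: [a(t-1), a(t-2), a(t-3), a(t-4)], not all zero."""
    s = list(seed)
    out = []
    for _ in range(n):
        new = s[0] ^ s[3]        # a(t) = a(t-1) XOR a(t-4)
        out.append(s[-1])        # output the oldest stage
        s = [new] + s[:-1]       # shift the register
    return out

seq = lfsr_bits([1, 0, 0, 0], 45)
```

Because the polynomial is primitive, the register cycles through all 15 nonzero states; one period contains 8 ones and 7 zeros, the near-balance that underlies the good statistics mentioned above.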
First we define a statistic to quantify that feature, and then we determine the distribution of that statistic for random sequences. We conduct a statistical test by making a hypothesis, called the null hypothesis, and testing it against an alternative hypothesis. In our case the null hypothesis, H0, is the hypothesis that our sequence was produced by a random source. The alternative hypothesis, Ha, is the hypothesis that our sequence was produced by one of some given set of nonrandom sources. Next we choose a value α, called the significance level, which is equal to the probability that we conclude our sequence was not produced by a random source when in fact it was. That is,

α = P(reject H0 | H0 is true), (5)

so 0 ≤ α < 1. Common choices for α are 0.05 or 0.01. A random variable X has a χ²-distribution with k degrees of freedom if X is the sum of the squares of k independent random variables that have an N(0, 1) distribution. If we expect the statistic to take on a larger value under the alternative hypothesis, then we have a one-sided test. In this case we determine a cut-off value X_{k,1-α} such that

P(X > X_{k,1-α} | X is the statistic of a random sequence) = α. (6)

If we expect the statistic under the alternative hypothesis to be either larger or smaller than under the null hypothesis, instead of just larger, then we have a two-sided test. A two-sided test is usually used when the statistic of the random sequence follows a normal distribution. In this case we find two values X_{α/2} and X_{1-α/2}, and we reject H0 if X > X_{1-α/2} or X < X_{α/2}, where

α/2 = P(X < X_{α/2} | X is the statistic of a random sequence) = P(X > X_{1-α/2} | X is the statistic of a random sequence). (7)

The first statistical test used to investigate the randomness of sequences is the frequency test, which is designed to compare the number of 0's and the number of 1's in a sequence.
Since our sequences should resemble one produced by a binary symmetric source (BSS), we would expect the number of 0's and 1's to be about the same. If n is the length of the sequence we are studying, n0 is the number of 0's in the sequence, and n1 is the number of 1's, then our statistic is

X = (n0 - n1)² / n. (8)

The statistic X follows the χ²-distribution with one degree of freedom. The serial test determines whether the numbers of occurrences of each pair of bits, 00, 01, 10, and 11, are approximately the same. If we let n00 be the number of times the pair 00 appears in a sequence of length n, and define n01, n10, and n11 similarly, then our statistic is

X = (4/(n-1))(n00² + n01² + n10² + n11²) - (2/n)(n0² + n1²) + 1, (9)

where n0 and n1 are the numbers of 0's and 1's respectively. The statistic X approximately follows a χ²-distribution with two degrees of freedom. The third test, the poker test, studies the number of occurrences of each of the 2^m m-bit patterns, for some integer m. First we partition a sequence of length n into k m-bit subsequences, where k = ⌊n/m⌋, the greatest integer not exceeding n/m. Since there are 2^m possible m-bit subsequences, we let n_i be the number of occurrences of the i-th subsequence for i = 1, ..., 2^m. Our statistic for the poker test is then

X = (2^m/k) Σ n_i² - k. (10)

The statistic X follows a χ²-distribution with 2^m - 1 degrees of freedom. The fourth statistical test is the runs test. A run is defined as a subsequence consisting of a repeated single bit; a run of ones of length t is a subsequence of t consecutive ones that is not immediately preceded or succeeded by a one. To perform the runs test we need to know n, the length of the sequence, and m, the number of runs in the sequence. We also need k, the longest run length considered, chosen so that the expected number of runs of that length is at least 5; it is sufficient to estimate k as

k = ⌊log₂ m⌋. (11)

Let n_{0i} and n_{1i} be the numbers of runs of zeros and of ones of length i, for i ≤ k - 1.
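The frequency and serial statistics of eqs. (8) and (9) can be sketched directly; the alternating test sequence below is an illustrative input showing why more than one test is needed.

```python
# Frequency (eq. (8)) and serial (eq. (9)) test statistics for a list of
# bits; under the randomness hypothesis they are approximately chi-square
# with 1 and 2 degrees of freedom respectively.

def frequency_stat(bits):
    n = len(bits)
    n1 = sum(bits)
    n0 = n - n1
    return (n0 - n1) ** 2 / n

def serial_stat(bits):
    n = len(bits)
    n1 = sum(bits)
    n0 = n - n1
    pair = [0, 0, 0, 0]              # overlapping counts of 00, 01, 10, 11
    for i in range(n - 1):
        pair[2 * bits[i] + bits[i + 1]] += 1
    return (4.0 / (n - 1)) * sum(c * c for c in pair) \
        - (2.0 / n) * (n0 * n0 + n1 * n1) + 1

balanced = [0, 1] * 50               # perfectly balanced but highly regular
```

The alternating sequence passes the frequency test with statistic 0 yet produces an enormous serial statistic, since the pairs 00 and 11 never occur; a single statistic can therefore never certify randomness on its own.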
Also let n_{0k} and n_{1k} be the numbers of runs of zeros and of ones of length k or greater. Our test statistic is calculated as

X = Σ_{i=1}^{k} [(n_{0i} - y_{0i})²/y_{0i} + (n_{1i} - y_{1i})²/y_{1i}], (12)

where y_{0i} and y_{1i} are the expected numbers of runs of zeros and of ones of length i in a random sequence. This statistic approximately follows a χ²-distribution with 2k - 3 degrees of freedom. The final test is the autocorrelation test. Its purpose is to see whether there is a correlation between the bits of the sequence as given and the bits of a shifted version of the same sequence. If the sequence we wish to test is S = s_1 s_2 ... s_n, then we compare s_i with s_{i+d} when considering a shift of size d. If no correlation exists, we would expect s_i to equal s_{i+d} approximately half the time. If ⊕ denotes the XOR operator, or equivalently modulo-2 addition, then the number of bits not equal to their shifts is

A(d) = Σ_{i=1}^{n-d} s_i ⊕ s_{i+d}, d = 1, ..., ⌊n/2⌋. (13)

A(d) should follow a binomial distribution. Under certain conditions, even simple nonlinear iterative functions are capable of generating "chaotic" sequences of random numbers. The apparent ability of such functions to provide a source of random numbers is clearly of considerable interest in cryptology. The provably unbreakable one-time system, in which plaintext is encrypted by modular addition with a key sequence of random numbers that is used only once, has never gained wide acceptance because of the difficulty of securely distributing the sequences of random numbers, the "one-time pads", to the members of a communication network. If a function were used to generate the random keys instead of the pads, the key sequence could be generated when and where required, using specific values of the function's tunable parameters.
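The autocorrelation quantity A(d) of eq. (13) is a one-liner; the alternating sequence is again an illustrative input with extreme, fully predictable correlations.

```python
# Autocorrelation A(d) of eq. (13): the number of positions at which the
# sequence disagrees with its own shift by d. For a random sequence,
# A(d) follows a Binomial(n - d, 1/2) distribution.

def autocorrelation(bits, d):
    return sum(bits[i] ^ bits[i + d] for i in range(len(bits) - d))

s = [0, 1] * 50                      # alternating sequence of length 100
# A(1) = 99: every bit differs from its successor (99 compared positions).
# A(2) = 0:  the sequence agrees perfectly with a shift by two.
```

Values of A(d) far from (n - d)/2 in either direction expose a correlation between the sequence and its shift, so this is a two-sided test in the sense described earlier.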
These parameter values would then be the only numbers that have to be disseminated, a task posing a much lower security risk than the safe distribution of one - time pads. Matthews [4] has recently proposed a new method of generating random numbers for cryptographic purposes using a chaotic function.

In the last thirty five years, there has been a great deal of interest in the study of nonlinear dynamical systems. And, the result has been a better qualitative understanding of previously intractable systems. The apparent random behavior of solutions of deterministic systems is now very easily comprehended in many cases. Generally speaking, the analysis of chaos is extremely difficult. However, in the specific model case of the logistic map there is a beautiful and illuminating way to really understand chaotic behavior through and through. There are many possible definitions of chaos, ranging from measure theoretic notions of randomness in ergodic theory to the topological approach we will adopt here. Definition 1. f:J -> J is said to be topologicals transitive if for any pair of open setsU.VcJ there exists k> 0 such that fk(U)nV*0. Definition 2. f:J -» J has sensitive dependence on initial conditions if there exists 8> 0 such that, for any x e J and any neighborhood N of x, there exists y e N and n £ 0 such that |f " (x) - f n (y)| > 8. Definition 3. Let V be a set. f: V -» V is said to be chaotic on V if 1. f has sensitive dependence on initial conditions. 2. f is topologicals transitive. 3. Periodic points are dense in V. To summarize, a chaotic map possesses three ingredients: unpredictability, indecomposability, and an element of regularity. A chaotic system is unpredictable because of the sensitive dependence on initial conditions. It cannot be broken down or decomposed into two subsystems (two invariant open subsets) which do not interact under f because of topological transitivity. And, in the midst of this random behavior, we nevertheless have an element of regularity, namely the periodic points which are dense. Having discussed the phenomena of chaos and the routes leading to it in 'simple' one-dimensional settings, we continue with the exposition of chaos in dynamical systems of two or more dimensions. 
This the relevant case for models in the natural sciences since very rarely can processes be described by only one single state variable. They are strange attractors. Roughly speaking, an attractor is an invariant set to which all nearby orbits converge. The first strange attractor ever recognized as such in the natural sciences is the Lorenz attractor, discovered in 1962. The Lorenz system is a model of thermal convection which, however includes not only a description of the motion of some viscous fluid or atmosphere but also the information about distribution of heat, the driving force of thermal convection. Chaos is seen in the electrical circuits. Chua's circuit is the simplest electronic electronic circuit that satisfies the mathematical properties of chaos. In addition, this remarkable circuit is the only physical system for which the presence of xvin chaos has been proved mathematically. The circuit is readily constructed at low cost using standard electronic components and exhibits a rich variety of bifurcations and chaos. The focus of this research is that Chua attractor and the Lorenz attractor can be used in cryptography as a secure pseudorandom generator. Cryptography is set of techniques used to ensure the secrecy of messages when hostile personnel have the technical capability to intercept and correctly interpret an unprotected message, which is called the plaintext. After transformation to secret form, a message is called the ciphertext or a cryptogram. The process of transformation the plaintext into ciphertext is called encryption. Cryptanalysis is the unauthorized extraction of information from the ciphertext. A practical cryptographic communication system must produce ciphertext that only resists cryptanalysis, but also can be efficiently deciphered by authorized personnel. In a cryptographic system, a set of parameters that determines a specific cryptographic transformation or its inverse is called a key. 
In most applications, a single key determines both a cryptographic transformation and its inverse. Cryptographic systems are designed so that their security depends upon the inability of the cryptanalyst to determine the key even if she knows the structure of the cryptographic system. An enciphering algorithm or transformation is called a cipher. Messages can be enciphered by a block cipher, a stream cipher, or a combination of these ciphers. A block cipher divides the plaintext into separate blocks of m bits and associates with each block (xi,xi+, xl41B_,) of plaintext a block (yj,yi+,,...,yi+I1_i) of n bits of ciphertext such that yk =fk(xi,xi+1 xi+m_,), i^k^i+n-1 where the fk are functions. Messages are deciphered with inverse functions. For unambiguous deciphering, n 2: m is necessary; for ease of automation, n = m in nearly all practical block ciphers. In stream ciphers, on the other hand, the plaintext is encrypted on a bit - by - bit basis. In encrypting the dataflow to be transmitted, the key is fed into an algorithm called the pseudorandom bit generator to create a long sequence of binary signals. This "key - stream" is then mixed with the plaintext sequence, usually by Exclusive - OR (XOR bit - wise modulo - 2 addition) gates, to produce the ciphertext. Stream ciphers play an especially important role in cryptographic practices - both diplomatic and military - that protect communications in the very high frequency domain. The central problem in stream - cipher cryptography, however, is the difficulty of generating a long unpredictable sequence of binary signals from a short and random key. Unpredictable sequences are desirable in cryptography because it is impossible, given a reasonable segment of its signals and computer resources, to find out more about them. Pseudorandom bit generators have been widely used to construct these sequences. 
There are two types of stream ciphers: those for which the keystream is independent of the plaintext and those for which the keystream is not. Independently keyed ciphers are often called synchronous ciphers because they require synchronization between the keystream and the ciphertext for successful deciphering. Stream ciphers for which the keystream is a function of the plaintext are often called auto - key ciphers. xix The only unconditionally secure cipher is a synchronous cipher that generates a completely random keystream as long as the message. Because the key, which is the same as the keystream in this case, cannot be reused, this cipher is called the one - time tape or one - time pad. If the key is produced by a binary symmetric source such that the probability for any enciphering signal to be 1 is equal to 1/2 independently of the other signals, the key is random. A known - plaintext attack is an attempted cryptanalysis based upon some knowledge of corresponding ciphertext and plaintext. An unconditionally secure cipher is one that is invulnerable to cryptanalysis no matter how much ciphertext is available. A computationally secure cipher is one that is vulnerable to cryptanalysis only if a prohibitively large amount of computation is performed. Most practical ciphers are at best computationally secure. The necessity of long, frequently changed keys renders the one - time tape impractical for most applications. Therefore since the twenties, many cipher systems have operated with an approximate version of the one - time pad that uses long sequences of pseudorandom bits generated from short secret keys. Generally speaking, we can formulate three requirements for cryptographically secure keystream generators. (1) The period of the keystream must be large enough to accommodate the length of the transmitted message. (2) The output bits must be easy to generate. (3) The output bits must be hard to predict. 
Given the generator and the first n output bits, a(0), a(l),..., a(n-l), it should be computationally in feasible to predict the (n+l)* bit a(n) in the sequence with better than a 50 - 50 chance. That is, given a portion of the output sequence, the cryptanalyst should not generate other bits forward or backward. Linear congruence generators (LCGs) widely discussed method for generating pseudorandom numbers is based on recurrences of the form X(i + l) = aX(i) + b modm. (1) Here, (a,b,m) are the parameters describing the generator and can be utilized as secret keys. Xo, is the seed. If the parameters are chosen judiciously, the numbers X(i) will not repeat until all integers of the interval [0,m- 1] have occurred. Boyar shows that sequences generated by LCGs are not cryptographically secure, [45], Strong cryptographic sequences can be designed on the basis of shift registers even when the feedback is linear. A feedback shift register consists of n flip - flops and a feedback function that expresses each new element a(t), when t > n, of the sequence in terms of the previously generated elements a(t - n), a(t - n + 1),...,a(t -1). The feedback function must be nonsingular, that is, of the form a(t) = g(a(t-l),a(t-2),...,a(t-n+l))0a(t-n) (2) where © denotes the XOR operation. Depending on the whether the function g is linear (implementable with XORs alone) or not, the generator is called a linear feedback shift register (LFSR) or a nonlinear feedback shift register (NLFSR). Each individual storage element of FSR XX is called stage, and the binary signals a(0), a(l), a(2),..., a(n-l) are loaded into them as initial condition. The period of the sequence produced by an FSR depends both on the number of stages and on the details of the feedback connection. The maximal period of a sequence that can be generated by an n - stage FSR with nonsingular feedback is 2n, the number of possible states an n - stage shift register may have. 
The state diagram of a nonsingular FSR may have many small cycles, and the output sequence becomes insecure when the generator falls into one of them. A countermeasure is to design n - stage shift registers that generate sequences of the largest possible period 2", the so - called De Brujin sequences of degree n. The following is a De Brujin sequences of period 2, generated by the four - stage NLFSR, 1011001010000111.... LFSRs are also among the most important devices in building pseudorandom bit generators. They have feedback functions of the form a(t) = c1a(t-l)ec2a(t-2)e...ecn_,a(t-n+l)0a(t-n), (3) with a(t) = Cj ? {0,1}. The feedback connection of an LFSR can be represented formally by so - called feedback polynomial f(x) = l+CıX+c2xn-2+...+cn_1x,,", +xn (4) in the indeterminate x, which has no other meaning than as a mathematical symbol. The feedback polynomial decides the period and the statistical behavior of the output sequence. To avoid trivial output, the zero state should be excluded from initial setting. In general, to guarantee the largest possible period 2""1 for the output sequence, the feedback polynomial f(x) of the LIFERS should be primitive. This means f(x) will be chosen so that the least positive integer T that makes XT - 1 divisible by f(x) will be T = 2n -1 Maximality of the period simultaneously guarantees good statistics. There are a lot of tests that have been applied to sequences in order to investigate their randomness. Here, we will present five statistical tests that are designed to determine if a sequence appears random. We will calculate a value, called a statistic, for a sequence and then compare that value with the corresponding distribution of values for random sequences. If the pseudorandom sequence exhibits a property that very few truly random sequences do, then we conclude that the pseudorandom sequence does not model a random sequence very well. Suppose we wish to test a particular feature of a sequence. 
First we define a statistic to quantify that feature, and then we determine the distribution of that statistic for random sequences. We conduct a statistical test by making a hypothesis, called the null hypothesis, and testing it against an alternative hypothesis. In our case the null hypothesis, H0, will be the hypothesis that our sequence was produced by a random source. The alternative hypothesis, Ha, is the hypothesis that our sequence was produced by one of some given set of nonrandom sources. Next we choose a value α, called the significance level, which is equal to the probability that we conclude our sequence was not produced by a random source when in fact it was. That is,

α = P(reject H0 | H0 is true), (5)

so 0 ≤ α < 1. Common choices for α are 0.05 or 0.01. A random variable X has a χ²-distribution with k degrees of freedom if X is the sum of the squares of k independent random variables that each have an N(0,1) distribution. If we expect the statistic to take on a larger value under the alternative hypothesis, then we have a one-sided test. In this case we would determine a cut-off value X_{k,α} such that

P(X > X_{k,α} | X is the statistic of a random sequence) = α. (6)

If we expect the statistic under the alternative hypothesis to be either larger or smaller than that under the null hypothesis, instead of just larger, then we have a two-sided test. A two-sided test is usually used when the statistic of the random sequence follows a normal distribution. In this case we find two values X_{α/2} and X_{1-α/2}, and we reject H0 if X > X_{1-α/2} or X < X_{α/2}, where

P(X < X_{α/2} | X is the statistic of a random sequence) = P(X > X_{1-α/2} | X is the statistic of a random sequence) = α/2. (7)

The first statistical test used to investigate the randomness of the sequences is the frequency test. The frequency test is designed to compare the number of 0's and the number of 1's in a sequence.
Since our sequence should resemble one produced by a binary symmetric source (BSS), we would expect the numbers of 0's and 1's to be about the same. If n is the length of the sequence we are studying, n0 is the number of 0's in the sequence, and n1 is the number of 1's, then our statistic is

X = (n0 - n1)² / n. (8)

The statistic X follows the χ²-distribution with one degree of freedom. The serial test determines whether the numbers of occurrences of each pair of bits, 00, 01, 10, and 11, are approximately the same. If we let n00 be the number of times the pair 00 appears in a sequence of length n, and we define n01, n10, and n11 similarly, then our statistic is

X = (4/(n-1))(n00² + n01² + n10² + n11²) - (2/n)(n0² + n1²) + 1, (9)

where n0 and n1 are the numbers of 0's and 1's respectively. The statistic X approximately follows a χ²-distribution with two degrees of freedom. The third test, the poker test, studies the number of occurrences of each of the 2^m m-bit patterns, for some integer m. First we partition a sequence of length n into k m-bit subsequences, where k = ⌊n/m⌋, the greatest integer not exceeding n/m. Since there are 2^m possible m-bit subsequences, we let n_i be the number of occurrences of the i-th subsequence for i = 1, ..., 2^m. Our statistic for the poker test is then

X = (2^m / k) Σ_{i=1}^{2^m} n_i² - k. (10)

The statistic X follows a χ²-distribution with 2^m - 1 degrees of freedom. The fourth statistical test we will study is called the runs test. A run is defined as a subsequence consisting of a repeated single bit. So a run of ones of length t is a subsequence of t consecutive ones that is not immediately preceded or succeeded by a one. In order to perform the runs test we need to know n, the length of the sequence, and m, the number of runs in the sequence. We also need to know k, the maximum run length treated separately; k should be chosen so that the expected number of runs of each counted length is at least 5. It is sufficient to estimate k using the formula

k = ⌊log₂(n/m)⌋. (11)

Let n_{0i} and n_{1i} be the numbers of runs of zeros and ones of length i, for i ≤ k - 1.
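The frequency statistic (8) is a one-liner; a sketch follows (the function name is ours), using the standard χ²(1) cut-off of about 3.84 at α = 0.05:

```python
def frequency_test(bits):
    """Frequency-test statistic (8): X = (n0 - n1)^2 / n."""
    n = len(bits)
    n1 = sum(bits)      # number of 1's
    n0 = n - n1         # number of 0's
    return (n0 - n1) ** 2 / n

# A perfectly balanced sequence gives X = 0 and passes; a constant
# sequence gives X = n and is rejected at the 3.84 cut-off (alpha = 0.05).
assert frequency_test([0, 1] * 50) == 0.0
assert frequency_test([1] * 100) == 100.0
```

The serial, poker, and runs statistics are computed the same way, with the occurrence counts of pairs, m-bit patterns, and runs taking the place of n0 and n1.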
Also let n_{0k} and n_{1k} be the numbers of runs of zeros and ones of length k or greater. Our test statistic is calculated as

X = Σ_{i=1}^{k} [(n_{0i} - y_{0i})²/y_{0i} + (n_{1i} - y_{1i})²/y_{1i}], (12)

where y_{0i} and y_{1i} are the expected numbers of runs of zeros and ones of length i in a random sequence. This statistic approximately follows a χ²-distribution with 2k - 3 degrees of freedom. The final test we look at is called the autocorrelation test. The purpose of this test is to see whether there is a correlation between the bits of the sequence as it is given and the bits of a shifted version of the same sequence. If the sequence we wish to test is S = s_1 s_2 ... s_n, then we will compare s_i with s_{i+d} if we are considering a shift of size d. If no correlation exists, then we would expect s_i to be equal to s_{i+d} approximately half the time. If we let ⊕ denote the XOR operator, or equivalently modulo-2 addition, then the number of bits not equal to their shifts can be calculated as

A(d) = Σ_{i=1}^{n-d} s_i ⊕ s_{i+d}, d = 1, ..., ⌊n/2⌋. (13)

A(d) should follow a binomial distribution. Under certain conditions, even simple nonlinear iterative functions are capable of generating "chaotic" sequences of random numbers. The apparent ability of such functions to provide a source of random numbers is clearly of considerable interest in cryptology. The provably unbreakable one-time system, in which plaintext is encrypted by modular addition to a key sequence of random numbers that is used only once, has never gained wide acceptance because of the difficulty of securely distributing the sequences of random numbers, the "one-time pads", to the members of a communication network. If a function were used to generate the random keys instead of the pads, the key sequence could be generated when and where required, using specific values of the function's tunable parameters.
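Equation (13) translates directly into code; this small sketch (the helper name is ours) counts the disagreements between the sequence and its shift:

```python
def autocorrelation(bits, d):
    """A(d) of (13): number of positions i with s_i != s_{i+d}."""
    return sum(bits[i] ^ bits[i + d] for i in range(len(bits) - d))

# An alternating sequence disagrees with its 1-shift at every position,
# while a random-looking one should give A(d) close to (n - d)/2.
assert autocorrelation([0, 1] * 8, 1) == 15
assert autocorrelation([1, 0, 1, 1, 0, 0, 1, 0], 1) == 5
```

Under the null hypothesis each of the n - d comparisons is an independent fair coin flip, which is why A(d) is binomial with parameters (n - d, 1/2).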
These parameter values would then be the only numbers that have to be disseminated, a task posing a much lower security risk than the safe distribution of one - time pads. Matthews [4] has recently proposed a new method of generating random numbers for cryptographic purposes using a chaotic function.
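Matthews' own map is not reproduced here; purely as a toy illustration of the idea of a parameter-keyed chaotic keystream, the sketch below uses the logistic map x → r·x·(1 - x). All names and parameter values are this sketch's assumptions, and it is not claimed to be Matthews' cipher or to be secure:

```python
def logistic_keystream(x0, r=3.99, skip=100):
    """Toy byte stream from the logistic map x -> r*x*(1-x).
    (x0, r) play the role of the secret key; the first `skip` iterates
    are discarded to let transients die out. Illustrative only."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    while True:
        x = r * x * (1 - x)
        yield int(x * 256) & 0xFF   # quantize one iterate to one byte

def xor_crypt(data, x0):
    """XOR data with the chaotic keystream; applying it twice decrypts."""
    ks = logistic_keystream(x0)
    return bytes(b ^ next(ks) for b in data)

msg = b"attack at dawn"
ct = xor_crypt(msg, x0=0.31415926)
assert xor_crypt(ct, x0=0.31415926) == msg   # the same (x0, r) recovers msg
```

As the discussion above notes for digitally realized chaotic maps in general, finite-precision arithmetic eventually forces such a stream into a cycle, so the cycle length must be verified before any practical use.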

##### Description

Thesis (M.Sc.) -- Istanbul Technical University, Institute of Social Sciences, 1997

##### Keywords

Chaotic systems,
Encryption