Please use this identifier to cite or link to this item: http://hdl.handle.net/11527/16632
Title: Uyarlamalı Süzgeçler İçin Yeni Bir Stokastik Kontrol Algoritması Ve Sistem Tanılama Uygulaması (A New Stochastic Control Algorithm for Adaptive Filters and a System Identification Application)
Authors: Erimez, Enise
Koçal, Osman Hilmi
66416
Telekomünikasyon Mühendisliği
Telecommunication Engineering
Keywords: Algoritmalar
Doğrusal sistemler
Filtreler
Algorithms
Linear systems
Filters
Issue Date: 1997
Publisher: Fen Bilimleri Enstitüsü
Institute of Science and Technology
Abstract: Control algorithms used in adaptive FIR filters fall into two main classes: gradient-based algorithms and recursive algorithms. Gradient-based algorithms use the gradient vector of the cost function of interest, and their convergence rate depends on the structure of the autocorrelation matrix. A widely used gradient-based algorithm is the LMS (least-mean-square) algorithm. In recursive algorithms, the inverse of the autocorrelation matrix is computed recursively at each step, and the convergence rate is independent of the structure of the autocorrelation matrix; as a result, it is much higher than that of gradient-based algorithms. A well-known recursive algorithm is the RLS (recursive least squares) algorithm. An important criterion for comparing algorithms is the number of multiplications and divisions per iteration step, called madpr (multiplications and divisions per recursion). With M denoting the dimension of the filter coefficient vector, the madpr of the LMS algorithm grows linearly with M, while that of the RLS algorithm grows quadratically. In this study, a new stochastic control algorithm for adaptive FIR filters is proposed, built on the iterative methods used to solve linear systems of equations. The Jacobi and Gauss-Seidel algorithms are applied for the first time to the Wiener-Hopf equations, from which the optimum filter coefficients are obtained, yielding two methods that compute the filter coefficients recursively. It is shown that, for the method based on the Jacobi algorithm to be stable, a condition similar to the stability condition of gradient-based algorithms must be satisfied, whereas the method based on the Gauss-Seidel algorithm converges without any such condition. In the latter method, replacing the autocorrelation matrix and the cross-correlation vector with their sample-average estimates yields a new stochastic control algorithm.
It is shown experimentally that the filter coefficients computed with the stochastic control algorithm and the autocorrelation coefficients of the process of interest are statistically uncorrelated. Using this property, together with assumptions on the distribution parameters in the probability density function of the autocorrelation coefficients, it is shown analytically that the filter coefficient vector estimated with the new stochastic control algorithm is an unbiased estimator of the optimum coefficient vector. For the proposed algorithm, a structure that also operates in nonstationary environments is obtained with operations similar to those used in the RLS algorithm. The convergence rate and madpr of the new algorithm are compared with those of the LMS and RLS algorithms. A system identification application is realized in practice on first- and second-order linear systems.
The design of an adaptive filter requires a priori information about the statistics of the data to be processed. The filter is optimum only when the statistical characteristics of the input data match the a priori information on which the design of the filter is based. When this information is not known completely, however, it may not be possible to design the filter, or else the design may not be optimum. A straightforward approach is to estimate the statistical parameters of the input signal. This is a two-stage process whereby the filter first estimates the statistical parameters of the relevant signals and then uses them in a nonrecursive formula for computing the filter parameters. For real-time operation, this procedure has the disadvantage of requiring excessively elaborate and costly hardware. A more efficient method is to use an adaptive control algorithm to update the filter parameters. The algorithm starts from some predetermined set of initial conditions, representing complete ignorance about the environment. In a stationary environment, we find that after successive iterations the algorithm converges to the optimum solution in some statistical sense. In a nonstationary environment, the algorithm offers a tracking capability, whereby it can track time variations in the statistics of the input data, provided that the variations are sufficiently slow. As a direct consequence of applying a recursive algorithm, whereby the parameters of an adaptive filter are updated from one iteration to the next, the parameters become data dependent. The operation of an adaptive filtering algorithm involves two basic processes: a filtering process and an adaptive process. The filtering process is designed to produce an output in response to a sequence of input data. The purpose of the adaptive process is to provide a mechanism for the adaptive control of an adjustable set of parameters used in the filtering process. These two processes work interactively with each other.
Naturally, the choice of a structure for the filtering process has a profound effect on the operation of the algorithm as a whole. The well-known structures of an adaptive filter with finite memory or, equivalently, finite impulse response (FIR), are the transversal filter and the lattice filter. The transversal filter consists of three basic elements: the unit delay element, the multiplier and the adder. The number of delay elements is commonly referred to as the order of the filter. For the transversal filter, the filter output y(n) is given by

y(n) = Σ_{k=0}^{M−1} w_k x(n−k)

where x(n) is the filter input, w_k is the k-th filter parameter and M is the order of the filter. Adaptive control algorithms used in transversal adaptive filters can be classified into two main groups [1], [2]: gradient-based algorithms and recursive least squares algorithms. Gradient-based algorithms require the use of a gradient vector, the value of which depends on two quantities: the correlation matrix R of the tap inputs x(n), x(n−1), ..., x(n−M+1) in the transversal filter and the cross-correlation vector p between the desired response d(n) and the same tap inputs. Using instantaneous values for these correlations, an estimate can be obtained for the gradient vector. The resulting algorithm is widely known as the least-mean-square (LMS) or stochastic gradient algorithm [3], [4], [5]. This algorithm is simple and yet capable of achieving satisfactory performance under the right conditions. Its major limitations are a relatively slow rate of convergence and a sensitivity to variations in the condition number of the correlation matrix (the ratio of the maximum eigenvalue to the minimum eigenvalue) of the tap inputs. Nevertheless, the LMS algorithm is highly popular and widely used in a variety of applications.
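The transversal filter output above is a plain convolution sum; a minimal sketch in Python (NumPy used for convenience; the function name is illustrative, not from the thesis):

```python
import numpy as np

def transversal_fir(x, w):
    """Transversal (tapped-delay-line) FIR filter:
    y(n) = sum_{k=0}^{M-1} w_k * x(n-k), with x(n) = 0 for n < 0."""
    M = len(w)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(min(M, n + 1)):   # only taps with a valid past sample
            y[n] += w[k] * x[n - k]
    return y

x = np.arange(5.0)
w = np.array([1.0, 2.0, 3.0])
y = transversal_fir(x, w)
```

The result matches NumPy's own convolution truncated to the input length, which confirms the indexing of the delay line.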
The LMS algorithm can be written as

w(n+1) = w(n) + μ x(n) [ d(n) − x^T(n) w(n) ]

where the index n denotes the discrete time variable, x(n) is the M-dimensional filter input vector, w(n) is the filter parameter vector and μ is the step-size parameter. The LMS algorithm converges to the optimum solution w₀ = R⁻¹ p under the condition

0 < μ < 2 / λ_max

where λ_max is the largest eigenvalue of the correlation matrix R. The derivation of the recursive least squares (RLS) algorithm relies on a basic result in linear algebra known as the matrix inversion lemma [6], [7], [8]. An important feature of the RLS algorithm is that it utilizes information contained in the input data extending back to the instant of time when the algorithm is initiated. The resulting rate of convergence is therefore typically an order of magnitude faster than that of the simple LMS algorithm [9], [10], [11], [12]. This improvement in performance, however, is achieved at the expense of a large increase in computational complexity [13], [14], [15]. The RLS algorithm can be written as

w(n) = w(n−1) + k(n) α(n)

where

k(n) = R⁻¹(n−1) x(n) / ( 1 + x^T(n) R⁻¹(n−1) x(n) )
R⁻¹(n) = R⁻¹(n−1) − k(n) x^T(n) R⁻¹(n−1)
α(n) = d(n) − w^T(n−1) x(n)

The standard RLS algorithm has a computational complexity (the number of multiplications and divisions per recursion) that increases as the square of M, where M is the order of the filter, i.e. the number of adjustable weights in the algorithm. Such algorithms are often referred to as O(M²) algorithms, where O(·) denotes order of. By contrast, the computational complexity of the LMS algorithm increases linearly with M; the LMS is an O(M) algorithm. When M is large, the computational complexity of O(M²) algorithms may become objectionable from a hardware implementation point of view. There is therefore a motivation to modify the formulation of RLS algorithms in such a way that the computational complexity reduces to an O(M) form.
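The two update rules above can be sketched side by side as follows, on a toy system identification task with noiseless data. The function names, the white training signal and the common δ·I initialization of P(n) = R⁻¹(n) for RLS are illustrative choices, not prescribed by the thesis:

```python
import numpy as np

def lms(x, d, M, mu):
    """LMS: w(n+1) = w(n) + mu * x(n) * [d(n) - x(n)^T w(n)]."""
    w = np.zeros(M)
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]        # tap inputs [x(n), ..., x(n-M+1)]
        w = w + mu * xn * (d[n] - xn @ w)
    return w

def rls(x, d, M, delta=100.0):
    """RLS via the matrix inversion lemma; P(n) tracks R^{-1}(n)."""
    w = np.zeros(M)
    P = delta * np.eye(M)                    # common P(0) = delta*I start
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]
        k = P @ xn / (1.0 + xn @ P @ xn)     # gain vector k(n)
        alpha = d[n] - w @ xn                # a priori error alpha(n)
        w = w + k * alpha
        P = P - np.outer(k, xn @ P)          # P(n) = P(n-1) - k(n) x^T(n) P(n-1)
    return w

# Identify a known 3-tap system from noiseless data
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
w_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, w_true)[:len(x)]
w_lms = lms(x, d, M=3, mu=0.05)
w_rls = rls(x, d, M=3)
```

With a unit-variance white input the chosen μ = 0.05 sits comfortably inside the stability bound 0 < μ < 2/λ_max; RLS reaches the same solution in far fewer samples, at a higher cost per recursion.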
The resulting algorithms are known as fast algorithms; they combine the desirable characteristics of RLS with an O(M) computational complexity. The disadvantage of fast algorithms is that they need complex hardware or software structures. In this study, a new stochastic control algorithm is proposed. As is well known, the optimum parameters of an adaptive filter are obtained from the Wiener-Hopf equations:

R w = p

The original idea of this study is based on the iterative solution methods for linear systems [16], [17], [18], [19], [20]. The simplest iterative scheme is the Jacobi iteration. It is defined for matrices that have nonzero diagonal elements. The i-th parameter of an adaptive filter can be calculated by applying the Jacobi method to the Wiener-Hopf equations, iteratively, as follows:

w_i(k+1) = ( p_i − Σ_{j=1}^{i−1} r_{i−j} w_j(k) − Σ_{j=i+1}^{M} r_{j−i} w_j(k) ) / r_0,   i = 1, ..., M

where p_i is the i-th element of the cross-correlation vector p and r_{i−j} is the element in the i-th row and j-th column of the correlation matrix R. For the Jacobi iteration, the transition from w(k) to w(k+1) can be described in terms of the lower triangular correlation matrix R_L, the upper triangular correlation matrix R_U and the diagonal correlation matrix R_D:

w(k+1) = − R_D⁻¹ ( R_L + R_U ) w(k) + R_D⁻¹ p

The Jacobi algorithm converges to the optimum solution w₀ under the condition

0 < 1/r_0 < 2/λ_max

where r_0 is the variance of the input signal. This condition is not satisfied for every correlation matrix, so the Jacobi algorithm needs a suitable step-size parameter. The convergence rate of the Jacobi algorithm depends on the condition number of the correlation matrix. Note that the Jacobi algorithm does not use the most recently available information when computing w_i(k+1). For example, w_1(k) is used in the calculation of w_2(k+1) even though w_1(k+1) is known.
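The matrix-form Jacobi transition above can be sketched directly, assuming R and p are known exactly (the helper name and the test matrix are illustrative, not from the thesis):

```python
import numpy as np

def jacobi(R, p, iters=200):
    """Jacobi iteration for R w = p:
    w(k+1) = -R_D^{-1} (R_L + R_U) w(k) + R_D^{-1} p."""
    D = np.diag(np.diag(R))      # R_D: diagonal part
    off = R - D                  # R_L + R_U: off-diagonal part
    D_inv = np.linalg.inv(D)
    w = np.zeros_like(p)
    for _ in range(iters):
        w = D_inv @ (p - off @ w)
    return w

# A 3x3 Toeplitz correlation matrix; it is diagonally dominant,
# so the Jacobi iteration converges for this example
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
p = np.array([1.0, 2.0, 3.0])
w = jacobi(R, p)
```

For a well-conditioned correlation matrix such as this one, the iterate agrees with the direct solution of R w = p; for matrices violating the 0 < 1/r_0 < 2/λ_max condition the same loop would diverge.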
If we revise the Jacobi iteration so that we always use the most current estimate of each w_i, we obtain

w_i(k+1) = ( p_i − Σ_{j=1}^{i−1} r_{i−j} w_j(k+1) − Σ_{j=i+1}^{M} r_{j−i} w_j(k) ) / r_0,   i = 1, ..., M

This defines what is called the Gauss-Seidel algorithm for the Wiener-Hopf equations. In matrix form, using the property R_U = R_L^T, the Gauss-Seidel algorithm can be written as

w(k+1) = − ( R_L + R_D )⁻¹ R_L^T w(k) + ( R_L + R_D )⁻¹ p

It can be shown that the Gauss-Seidel algorithm always converges for positive definite symmetric matrices. The correlation matrix R is a positive definite symmetric matrix, so the Gauss-Seidel algorithm for the Wiener-Hopf equations converges to the optimum solution w₀ without any condition. When the information about the correlation matrix and the cross-correlation vector is not known completely, it may not be possible to use the Jacobi and Gauss-Seidel algorithms. A straightforward approach is to estimate the correlation matrix and the cross-correlation vector. The available estimates are

R(n) = (1/n) Σ_{k=1}^{n} x(k) x^T(k)
p(n) = (1/n) Σ_{k=1}^{n} x(k) d(k)

where n denotes the time variable. Using the estimates R(n) and p(n), the Jacobi and Gauss-Seidel algorithms can be rewritten as

w(n+1) = − R_D⁻¹(n) ( R_L(n) + R_U(n) ) w(n) + R_D⁻¹(n) p(n)
w(n+1) = − ( R_L(n) + R_D(n) )⁻¹ R_L^T(n) w(n) + ( R_L(n) + R_D(n) )⁻¹ p(n)

These algorithms may be called the stochastic Jacobi and stochastic Gauss-Seidel algorithms. The stochastic Jacobi algorithm converges in the mean sense under the same condition as the Jacobi algorithm. Using the distribution of the correlation coefficients and some assumptions on the parameters of that distribution [21], it can be shown that the stochastic Gauss-Seidel algorithm always converges in the mean sense. We can therefore say that the estimate of the filter parameter vector w(n) obtained with the stochastic Gauss-Seidel algorithm is an unbiased estimator of the optimum solution w₀:

E[ w(n) ] = w₀

where E[·] denotes the expected value. Although the stochastic Gauss-Seidel algorithm does not need a condition on the eigenvalues of the correlation matrix to converge, its convergence rate depends on the condition number of the correlation matrix. Studies on numerical examples consisting of correlation matrices with various condition numbers show that the convergence rate of the stochastic Gauss-Seidel algorithm is faster than that of the LMS algorithm. The stochastic Gauss-Seidel algorithm is an O(M²) algorithm: the number of multiplications and divisions per recursion (madpr) increases with M². In general, the relation between madpr and the filter order M in the stochastic Gauss-Seidel algorithm is

madpr = M² + 2M − 1

In the RLS algorithm, madpr is equal to 3M² + 11M + 8. It is clear that the computational complexity of the stochastic Gauss-Seidel algorithm is better than that of the RLS algorithm. In fast algorithms (Fast RLS), madpr is equal to 7M + 16 [22], [23], [24], [25], [26]. It can be seen that, although the stochastic Gauss-Seidel algorithm is an O(M²) algorithm, its computational complexity is better than that of the Fast RLS algorithm up to M = 7. In the system identification application of adaptive filters [27], [28], [29], [30], especially in the example of this study, the madpr of the stochastic Gauss-Seidel algorithm decreases: in that case madpr equals M² + M, and the stochastic Gauss-Seidel algorithm remains competitive with the Fast RLS algorithm up to M = 8.
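A minimal sketch of the stochastic Gauss-Seidel idea: grow the sample averages R(n) and p(n) from the data stream and perform one in-place Gauss-Seidel sweep on R(n) w = p(n) per sample. The small ridge term on the diagonal is a safeguard added here so the first few poorly conditioned estimates do not cause a division problem; it is an implementation choice, not part of the thesis method:

```python
import numpy as np

def stochastic_gauss_seidel(x, d, M, ridge=1e-3):
    """One in-place Gauss-Seidel sweep on R(n) w = p(n) per sample,
    where R(n) and p(n) are running sample averages."""
    S_R = np.zeros((M, M))
    S_p = np.zeros(M)
    w = np.zeros(M)
    count = 0
    for n in range(M - 1, len(x)):
        xn = x[n - M + 1:n + 1][::-1]        # tap inputs [x(n), ..., x(n-M+1)]
        S_R += np.outer(xn, xn)
        S_p += d[n] * xn
        count += 1
        R = S_R / count + ridge * np.eye(M)  # sample estimate of R(n) + ridge
        p = S_p / count                      # sample estimate of p(n)
        for i in range(M):
            # In-place update: entries j < i already hold w_j(k+1),
            # which is exactly the Gauss-Seidel sweep
            w[i] = (p[i] - R[i] @ w + R[i, i] * w[i]) / R[i, i]
    return w

# System identification of a known 3-tap system, noiseless data
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
w_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, w_true)[:len(x)]
w = stochastic_gauss_seidel(x, d, M=3)
```

Note the sweep needs no step-size parameter, in line with the unconditional convergence of Gauss-Seidel for positive definite symmetric correlation matrices.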
Description: Tez (Doktora) -- İstanbul Teknik Üniversitesi, Fen Bilimleri Enstitüsü, 1997
Thesis (Ph.D.) -- İstanbul Technical University, Institute of Science and Technology, 1997
URI: http://hdl.handle.net/11527/16632
Appears in Collections:Telekomünikasyon Mühendisliği Lisansüstü Programı - Doktora

Files in This Item:
File: 66416.pdf — 2.63 MB — Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.