## Optimal LQP Control

##### Date

1997

##### Authors

Dede, Füsun

##### Publisher

Fen Bilimleri Enstitüsü

##### Abstract

The differential equation determining the dynamics of a continuous-time linear system, with input u(t), output y(t) and time t, is

$$a_n \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \dots + a_1 \frac{dy}{dt} + a_0 y = b_m \frac{d^m u}{dt^m} + b_{m-1}\frac{d^{m-1}u}{dt^{m-1}} + \dots + b_0 u,$$

while a discrete-time system, with i an integer, is given by the linear difference equation

$$c_0 y(i) + c_1 y(i-1) + \dots + c_n y(i-n) = d_1 u(i-1) + d_2 u(i-2) + \dots$$

The state-space representation is $\dot{x} = Ax + Bu$, $y = Cx$, where A is a companion matrix. The vector x containing the state variables is called the state vector, and the space defined by these vectors the state space. The matrices A, B, C carry all the properties of the system: the dynamic properties and controllability are defined in terms of A and B, the transfer function in terms of A, B, C, and observability in terms of A and C. The transient behaviour of a dynamic system is determined by the roots of the characteristic equation. If any other state x can be reached from an arbitrary point $x_0$ of the state space by means of the inputs u, the system is said to be controllable; if, with zero input, the states x can be computed from the present and past values of the outputs y, the system is said to be observable. The necessary and sufficient condition for a minimal realization is that the state model be both controllable and observable. If the unstable states are controllable the system is said to be stabilizable, and if the unstable states are observable it is said to be detectable. In the semi-free configuration the structure of the controller is determined so that the controlled system is optimal; this is known as LQP control. The time-varying continuous-time and discrete-time LQP problems satisfy Riccati equations. Optimal control theory leads to optimal control problems in which the control is represented as a function of the changing value of the dynamic state x of the controlled object. Since it is not possible to take the states x directly as measured outputs, a feedback controller design is used: the states x of a given system are estimated by suitable dynamic systems known as observers.

LQP theory is a branch of the most general optimal theory, which prescribes how the inputs u should be chosen in order to control the state x of a dynamic system. The dynamic programming equations are known in the calculus of variations as the Hamilton-Jacobi-Bellman equation. The Pontryagin maximum principle is generalized to problems involving the calculus of variations when the optimization cannot be carried out analytically.

For a continuous-time linear system having input u(t) and output y(t), where t represents time, the dynamic equation is the ordinary linear differential equation

$$a_n \frac{d^n y}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \dots + a_1 \frac{dy}{dt} + a_0 y = b_m \frac{d^m u}{dt^m} + b_{m-1}\frac{d^{m-1}u}{dt^{m-1}} + \dots + b_1 \frac{du}{dt} + b_0 u,$$

which specifies relationships between rates of change. Difference equations, specifying relationships between point values, are conventionally used to characterize discrete-time dynamic systems:

$$c_0 y(i) + c_1 y(i-1) + \dots + c_n y(i-n) = d_1 u(i-1) + d_2 u(i-2) + \dots,$$

where i is an integer counting the discrete-time instants. When the equations are linear, matrices can be used. Continuous-time linear dynamics can be written $\dot{x} = Ax + Bu$ and discrete-time linear dynamics $x(i+1) = Ax(i) + Bu(i)$, with output y given by $y = Cx$; A is known as a companion matrix. The set of n variables introduced to translate dynamics of order n into a set of simultaneous first-order equations are known as state variables; the vector x is known as a state vector, and the n-dimensional space in which it lies as state space. The matrices A, B, C provide a complete specification of a linear dynamic system. Among the dynamical properties, the desirable property of controllability depends only on A and B, observability depends on A and C, and the overall transfer function from input u to output y depends also on C. Transient behaviour of a linear dynamic system is determined by the roots of the characteristic equation $|sI - A| = 0$, which are the eigenvalues of A. A set of dynamic state equations is said to be controllable if the input u can, in a finite time, transfer the dynamic state x between any two arbitrarily chosen points in state space. When the eigenvalues of A are distinct, controllability can be examined by diagonalizing the matrix equations. This is done using the modal matrix X of A: X is non-singular and diagonalizes A, $X^{-1}AX = S$, where S is a diagonal matrix whose only non-zero elements, on the main diagonal, are the eigenvalues of A.
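As a minimal sketch of the companion-form idea (the coefficients below are assumed for illustration, not taken from the thesis), the second-order ODE $\ddot{y} + 3\dot{y} + 2y = u$ can be written as $\dot{x} = Ax + Bu$, $y = Cx$, and the eigenvalues of the companion matrix A are exactly the roots of the characteristic polynomial:

```python
import numpy as np

# Companion state-space form of y'' + 3 y' + 2 y = u
# with state x = (y, y').
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # companion matrix of s^2 + 3 s + 2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Eigenvalues of A = roots of s^2 + 3 s + 2 = (s + 1)(s + 2),
# i.e. -1 and -2.
print(np.allclose(np.sort(np.linalg.eigvals(A).real), [-2.0, -1.0]))  # -> True
```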
A general necessary and sufficient condition for controllability of time-invariant linear systems, both continuous-time and discrete-time, is that the controllability matrix $M_c = [B \;\; AB \;\; \dots \;\; A^{n-1}B]$ must have rank n. A set of dynamic and output equations is said to be observable if, for zero input u, it is possible from a finite time history of the output y to determine any value which the state x may have had at any point in time. The general necessary and sufficient condition for observability of time-invariant linear systems, both continuous-time and discrete-time, is that the $n \times np$ observability matrix $M_o = [C^T \;\; A^T C^T \;\; \dots \;\; (A^T)^{n-1} C^T]$ must have rank n. When a linear system is specified by a triple of matrices (A an n × n matrix, B an n × m matrix, C a p × n matrix), its transfer function matrix is a p × m matrix of transfer functions relating the transforms of the p output variables y to the transforms of the m input variables u. It can be seen again that it is the eigenvalues of the A matrix which are the poles of the transfer function and the roots of the characteristic equation. A state-space model A, B, C corresponding to a given transfer function G is known as a realization of the transfer function. The possible states can be partitioned into four subsets:

* states which are both controllable and observable;
* states which are controllable but not observable;
* states which are observable but not controllable;
* states which are neither controllable nor observable.

The number of states which are both controllable and observable is the same as the order of the transfer function. The absence of additional states is the necessary and sufficient condition for a realization to be both controllable and observable. A realization which is minimal, in the sense that the number of states is the same as the order of the transfer function, is always both controllable and observable, provided the corresponding transfer function is known.
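The rank conditions above are easy to check numerically. This sketch (reusing the assumed two-state example, not the thesis's system) builds $M_c$ and $M_o$ for n = 2 and verifies that both have full rank:

```python
import numpy as np

# Hypothetical 2-state system.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

Mc = np.hstack([B, A @ B])          # controllability matrix [B  AB]
Mo = np.hstack([C.T, A.T @ C.T])    # observability matrix [C^T  A^T C^T]

print(np.linalg.matrix_rank(Mc) == n)  # -> True: controllable
print(np.linalg.matrix_rank(Mo) == n)  # -> True: observable
```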
If the order of the transfer function is not known, whether a realization is minimal or not can be checked by examining the ranks of the controllability and observability matrices $M_c$ and $M_o$. Other causes of uncontrollability and unobservability can arise when the relationships between the states and either or both of the inputs and outputs are not linearly independent. Controllability and observability are thus properties which are desirable in an object which is to be subject to feedback control. A set of dynamic state equations is said to be stabilizable if any uncontrollable states in the set are stable; a set of dynamic and output equations is said to be detectable if any unobservable states in the set are stable. When performance is measured by a scalar function, there are two classes of optimal control problem:

* fixed configuration, where the form of the controller is given and it is required to optimize the value of some parameter such as a controller gain;
* semi-free configuration, where the form of the controller is not specified but is to be determined so as to produce the optimal performance of a given controlled object.

Optimal semi-free control theory is introduced by the LQP problem. Semi-free LQP problems are posed in continuous time with realization $\dot{x} = Ax + Bu$ and performance criterion

$$I = \int_{t_1}^{t_2} (x^T P x + u^T Q u)\, dt,$$

or in discrete time with realization $x(i+1) = Ax(i) + Bu(i)$ and performance criterion

$$I = \sum_{i=i_1}^{i_2} \big(x^T P x + u^T Q u\big).$$

The matrices A, B, P, Q which specify a problem may in general be time-varying, as may the matrix K specifying the solution. The LQP result is introduced by way of simple continuous-time or discrete-time scalar LQP problems. For the discrete-time scalar LQP problem over the time interval $N = i_2 - i_1 + 1$, the optimal value $I^*_N$ of $I_N$, minimized with respect to the sequence $u(i_1), \dots, u(i_2)$, depends only on the initial state $x(i_1)$ and on the number N of terms in the sequence. The optimal value of the performance criterion is quadratic in the state, $I^*_N(x) = v_N x^2$, with

$$k_N = \frac{a b\, v_{N-1}}{q + b^2 v_{N-1}}, \qquad v_N = p + a^2 v_{N-1} - \frac{a^2 b^2 v_{N-1}^2}{q + b^2 v_{N-1}}.$$
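The scalar discrete-time recursion can be iterated directly. A minimal sketch, with the assumed (not thesis-specific) values a = b = p = q = 1, shows $v_N$ converging to a steady state, which for these values is the golden ratio $(1+\sqrt{5})/2$:

```python
# Scalar discrete-time LQP recursion:
#   k_N = a b v_{N-1} / (q + b^2 v_{N-1})
#   v_N = p + a^2 v_{N-1} - a^2 b^2 v_{N-1}^2 / (q + b^2 v_{N-1})
a, b, p, q = 1.0, 1.0, 1.0, 1.0
v = 0.0  # v_0: no cost to go at the final instant
for _ in range(50):
    k = a * b * v / (q + b * b * v)
    v = p + a * a * v - a * a * b * b * v * v / (q + b * b * v)
print(round(v, 4))  # -> 1.618  (steady-state value (1 + sqrt(5))/2)
```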
For the continuous-time scalar LQP problem the realization is $\dot{x} = ax + bu$ with performance criterion

$$I = \int_{t_1}^{t_2} (p x^2 + q u^2)\, dt,$$

where all four scalars p, q, a, b are time-invariant. The optimal value of the performance criterion is quadratic in x, $I^*(x, T) = v(T)\, x^2$, and the optimal control is $u(x, T) = -k(T)\, x$ with $k(T) = b v(T)/q$, where v(T) satisfies the non-linear differential equation of optimal control

$$\frac{dv}{dT} = p + 2 a v - \frac{b^2 v^2}{q}.$$

This equation is known as the Riccati equation. The time-to-go T is defined by $T = t_2 - t$, analogously to N. Convergence of the solution v(T) to a finite steady-state value as T goes to infinity is a desirable property. In the matrix case the optimal control depends on the current state x(t) and the time to go T according to a linear feedback law $u(t) = -K(T)x(t)$ having gain $K(T) = Q^{-1}(t) B^T(t) V(T)$, where V(T) satisfies the matrix Riccati equation; the discrete-time equations all have the same structure as the continuous-time equations. The optimal control of any LQP problem is a linear function of the current value of the dynamic state x of the controlled object: this justifies the use of linear feedback control laws in practice wherever controlled objects can be described by linear equations. Typical problems are: state regulation, where the main source of variability is the arbitrary initial state of the controlled object and the control objective is to achieve a stable equilibrium at the origin of the state space; target tracking, with the variability in the form of an exogenous target motion r to be tracked by the output c of the controlled object; and disturbance rejection, where either or both of state regulation and target tracking are to be achieved in spite of additional variability in the form of exogenous disturbances $x_2$ affecting the states of the controlled object. A continuous-time scalar example shows how the Riccati equations converge to time-invariant steady-state solutions in the limit as the run time becomes infinite. For continuous time

$$0 = P + A^T V + V A - V B Q^{-1} B^T V,$$

and for discrete time

$$V = P + A^T V A - A^T V B (Q + B^T V B)^{-1} B^T V A.$$

The matrix V measures the optimal value of the performance criterion I.
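The discrete-time steady-state equation can be reached by simple fixed-point iteration. This sketch uses an assumed double-integrator example (A, B, P, Q below are illustrative, not from the thesis): iterating the discrete Riccati equation from V = 0 converges to the steady-state V, from which the feedback gain $K = (Q + B^T V B)^{-1} B^T V A$ stabilizes the closed loop:

```python
import numpy as np

# Illustrative discrete-time double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
P = np.eye(2)          # state weighting
Q = np.array([[1.0]])  # control weighting

# Iterate V = P + A^T V A - A^T V B (Q + B^T V B)^{-1} B^T V A.
V = np.zeros((2, 2))
for _ in range(200):
    S = Q + B.T @ V @ B
    V = P + A.T @ V @ A - A.T @ V @ B @ np.linalg.solve(S, B.T @ V @ A)

# Steady-state feedback gain and closed-loop stability check:
# all eigenvalues of A - B K must lie inside the unit circle.
K = np.linalg.solve(Q + B.T @ V @ B, B.T @ V @ A)
print(np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0))  # -> True
```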
If P is positive definite, the state x is regulated to the origin of the state space. Combining the general expression for the optimal control with the original differential equation of the controlled object shows that the resulting optimal closed loop is a second-order system governed by

$$\ddot{y} + 2 \zeta \omega_0 \dot{y} + \omega_0^2 y = 0,$$

where $\omega_0$ is the natural frequency and $\zeta$ the damping ratio; the value $\zeta = 1/\sqrt{2}$, often chosen as giving an acceptable transient response, is optimal. Optimal control theory leads to solutions in which the optimal control is expressed as a function of the current value of the dynamic state x of the controlled object. With a real controlled object the states cannot usually be directly available as measured outputs; real feedback control can only be driven by $y = Cx$. Optimal control theory is exploited in designing the feedback control by what is known as an observer. This system is designed to use the real signals u, y to generate an estimate $\hat{x}$ of the state x, and the observer output $\hat{x}$ can then be used to drive the control law $u = -K\hat{x}$. The observer uses the A, B, C matrices which characterize the controlled object, and has the same number n of dynamic states as the states to be estimated, towards which its estimate converges. For a linear controlled object the observer is, in continuous time,

$$\dot{\hat{x}} = A\hat{x} + Bu + K_o (y - C\hat{x}),$$

or in discrete time

$$\hat{x}(i) = A\hat{x}(i-1) + Bu(i-1) + K_o \big(y(i) - C(A\hat{x}(i-1) + Bu(i-1))\big).$$

The right-hand side of each equation consists of a prediction term plus a correction term. The correction term can be written $K_o(y - \hat{y})$, where the predicted value $\hat{y}$ is given by $\hat{y} = C\hat{x}$ with $\hat{x}$ the predicted value of x. $K_o$ must be chosen so that the observation error $e_o = x - \hat{x}$ converges to zero. Convergence to zero of the observation error $e_o$ depends on the eigenvalues of $(A - K_o C)$ for continuous-time systems, or of $(I - K_o C)A$ for discrete-time systems. For a continuous-time system all eigenvalues must be in the left half plane.
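The discrete-time observer equation can be simulated directly. In this sketch the plant matrices and the observer gain $K_o$ are hypothetical choices (picked so that the eigenvalues of $(I - K_o C)A$ lie inside the unit circle); the estimation error then converges to zero despite a wrong initial estimate:

```python
import numpy as np

# Hypothetical stable plant and observer gain.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Ko = np.array([[0.5], [0.3]])   # chosen so eig((I - Ko C) A) are inside unit circle

x = np.array([[1.0], [-1.0]])   # true state
xh = np.zeros((2, 1))           # estimate, deliberately wrong at i = 0
for i in range(60):
    u = np.array([[0.1]])
    x = A @ x + B @ u                       # plant step
    pred = A @ xh + B @ u                   # prediction term
    xh = pred + Ko @ (C @ x - C @ pred)     # plus correction term Ko (y - yh)
print(float(np.linalg.norm(x - xh)) < 1e-6)  # -> True: error has converged
```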
For a discrete-time system all eigenvalues must be within the unit circle. The matrix $K_o$ is known as the observer gain. One way to choose $K_o$ is so as to fix the roots of the observer characteristic equation at specified values chosen to ensure good behaviour of the observer; this procedure is known as pole placement. Another way to choose $K_o$ is to take account of the causes of any discrepancy $e_o$ between the estimate $\hat{x}$ and the true value x, insofar as they are due to random noises and disturbances. The necessary condition for observer convergence is that both poles be in the left half plane, which requires that $k_1$ and $k_2$ be positive. In continuous time the controller transfer function matrix is

$$H(s) = K_c \big(sI - A + K_o C + B K_c\big)^{-1} K_o,$$

and in discrete time

$$H(z) = K_c \big(zI - (I - K_o C)(A - B K_c)\big)^{-1} K_o,$$

where $K_c$ is the control gain of the feedback controller. Methods of loop recovery have been developed which aim to recover guaranteed stability margins by making assumptions about exogenous disturbances, having the effect of tuning the observer gain $K_o$ so as to increase loop stability. The controller transfer function matrices are m × p, where m is the number of controls u and p is the number of measurements y. Integral control: if, for a continuous-time system, the values of A, B, C, $K_o$, $K_c$ are such that the denominator coefficient $d_0$ is zero, the result is integral control. PID control: if, for a continuous-time system, the numerator polynomial is of higher order than the denominator, so that the denominator coefficient $d_1$ dominates the transfer function, the result is PID control. The first assumption for the LQP controller design is that only the control error is available to drive the observer. An alternative way to design the controller gain $K_c$ would be to force the steady-state error to zero. This is achieved by $k_{c2} = 1$, which sets the denominator coefficient $d_0$ of the transfer function to zero; $k_d < 0$ ensures $T'$ faster than T. If $T_1 > T_2$, the transfer function is that of a phase-advance controller with integral action.
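A useful consequence of combining a state-feedback gain $K_c$ with an observer gain $K_o$ is the separation principle: in (state, error) coordinates the closed-loop matrix is block triangular, so its eigenvalues are exactly the union of the control eigenvalues $\operatorname{eig}(A - BK_c)$ and the observer eigenvalues $\operatorname{eig}(A - K_oC)$. The gains below are hypothetical illustrations, not values from the thesis:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Kc = np.array([[1.0, 1.0]])    # hypothetical control gain
Ko = np.array([[2.0], [1.0]])  # hypothetical observer gain

# Closed loop in (x, e) coordinates, e = x - xh:
#   x' = (A - B Kc) x + B Kc e,   e' = (A - Ko C) e.
Acl = np.block([[A - B @ Kc, B @ Kc],
                [np.zeros((2, 2)), A - Ko @ C]])
eigs = np.sort(np.linalg.eigvals(Acl))
expected = np.sort(np.concatenate([np.linalg.eigvals(A - B @ Kc),
                                   np.linalg.eigvals(A - Ko @ C)]))
print(np.allclose(eigs, expected))  # -> True: separation principle
```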
As the observer response is made very fast ($a \to \infty$), the $d_1$ denominator coefficient dominates and the controller transfer function approximates a PI rather than a PID controller of the controlled object. Closed-loop behaviour of the linear system is determined by the characteristic equation $|sI - A| = 0$, with A having the partitioned structure. The design of an ideal feedback controller thus requires the specification of two gain matrices, $K_c$ and $K_o$. Integrators, whose presence in feedback controllers was indicated earlier, model the dynamics of the class of variables which are finite polynomials in time. Derivative action, whether in the form of velocity feedback or of phase advance, can also be seen to serve the purpose of state estimation: derivative action in feedback control provides a crude, because noisy, estimate of the higher-order states.
The Hamilton-Jacobi-Bellman equation becomes

$$-\frac{\partial I^*}{\partial t} = H, \qquad \dot{p}_j = -\frac{\partial H}{\partial x_j}, \qquad \dot{x}_j = \frac{\partial H}{\partial p_j},$$

the latter two being known as Pontryagin's equations. The condition for a trajectory to be optimal is that the Hamiltonian H be a minimum with respect to the control u at every point in time. A set of saturation inequalities $u_{j1} \le u_j \le u_{j2}$, $j = 1, \dots, m$, may be imposed on the components of the control u. Applying the Pontryagin maximum principle to problems having linear dynamics with saturation constraints can lead to useful results: the optimum is to assign to every element of the control u its minimum or maximum allowed value, depending on whether the sign of its coefficient in $p^T B$ is positive or negative. This control law, which requires every control variable to take one or the other of two limiting values, is called bang-bang control. The features of dynamic programming can be summarized:

* the problem has to be described numerically;
* constraints on the range of the state x can improve the efficiency of the search by reducing the number of possibilities that need be considered;
* numerical procedures of dynamic programming share with other numerical procedures the curse of dimensionality, an inability to handle problems of high dimensionality.

The resulting functions of optimal control specify control action as a function of the absolute value of time and of the current value of the state x. This feature, that the optimal control is specified in the form of feedback, is inherent in the principle of optimality and is not shared by the techniques of the calculus of variations; it is the reason why dynamic programming is so suitable for the analysis of problems of control.
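The bang-bang sign rule can be sketched in a few lines. In this illustration (the costate p, the matrix B and the bounds are assumed values, not taken from the thesis), minimizing the Hamiltonian term $p^T B u$ assigns each control component its lower bound where its coefficient in $p^T B$ is positive, and its upper bound where it is negative:

```python
import numpy as np

def bang_bang(p, B, u_min, u_max):
    """Minimize p^T B u componentwise under u_min <= u_j <= u_max."""
    coeff = p @ B  # coefficients of u in p^T B u
    # coeff > 0: take the minimum allowed value; coeff < 0: the maximum.
    return np.where(coeff > 0, u_min, u_max)

p = np.array([1.0, -2.0])   # assumed costate
B = np.eye(2)               # assumed input matrix
print(bang_bang(p, B, u_min=-1.0, u_max=1.0))  # -> [-1.  1.]
```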

For continuos-time linear system having input u(t) and output y(t) where t represent time the dynamic equation is the ordinary linear equation d°y dn-!y dy d"^ dm_1 u du an + Vi +- + ai + aoy = bm + b^ + + fy + b0u df df-] dt dta df1 dt which specifies relationships between rates of change. Difference equations specifying relationships between point values are conventionally used to characterize discrete-time dynamic system c0y(i) + CjyO - 1) + + c"y(i - n) = dju(i - 1) + d2u(i - 2) +. where i is an integer counting the discrete-time instants. When the equation are linear, matrices can be used. Continuous-time linear dynamics can be written x = Ax + Bu and discrete-time linear dynamics can be written x(i+l) = Ax(i) + Bu(i) with output y given by y = Cx A is known as a companion matrix. vn The set of n variables introduced to translate dynamics of order n into a set of simultaneous first-order equations are known as state variables. The vector x İs known as a state vector and n-dimensional space which is known as state space. The matrices A, B, C provide a complete specification of a linear dynamic system. Dynamical properties which depend only on A and B and the overall transfer function from u to output y which depends also on C where desirable properties of controllability and observability. Transient behaviour of a linear dynamic system is determinated by roots of characteristic solution of I si - A | = 0 are the eigenvalues of A. A set of dynamic state equations is said to be controllable if the output u can is a finite time transfer the dynamic state x between any two arbitrarily chosen points in state space. When the eigenvalues of A are district controllability can be examined by diogonalizing the matrix equations. This is done using the modal matrix of A. X is non-singular and the modal matrix diagonalizes A. X4AX = S where S is a diagonal matrix whose only non zero elements on the main diagonal are the eigenvalues of A. 
A general necessary and sufficient condition for controllability of time invariant linear systems both continuous-time and discrete- time is that controllability matrix M^fB AB An_1B] must have rank n. A set of dynamic and output equations is said to be observable if for zero input u, it is possible from a finite time history of the output y to determine any value which the state x may have had at any point in time. The general necessary and sufficient condition for observability of time invariant linear systems both continuous-time and discrete-time is that the n x np observability matrix Mo = [ CT ATCT (A^C1] must have rank n. When a linear system is specified by a triple of matrices (A an n x n matrix, B an n x m matrix, C a p x n matrix) its transfer function matrix is a p x m matrix of transfer functions relating transforms of p output variables y to transforms of m input variables u. It can be seen again that it is the eigenvalues of the A matrix which are the poles of the transfer function and roots of the characteristic equation. A state space model A, B, C corresponding to a given transfer function G is known a realization of the transfer function. Possible of state can be partitioned into four subsets. * States which are both controllable and observable * States which are controllable but not observable * States which are observable but not controllable vm * States which are either controllable nor observable. The number of states which are both controllable and observable is the same as order of the transfer function. Absence of additional states is the necessary and sufficient condition for realization to be both controllable and observable. Realization which is minimal in the sense that the number of the states is the same as the order of the transfer function is always both controllable and observable. If the corresponding transfer function is known. 
If order of the transfer function is not known, it can be examining the rank of the controllability and observability matrices Me and Mo for realization is minimal or not. Other causes of uncontrollability and unobservability can arise that the relationship between states and either or both inputs and outputs may not be linearly independent. Controllability and observability are, as a properties which are desirable in a object which is to be subject to feedback control. A set of dynamic state equation is said to be stabilizable if any uncontrollable states in the set are stable. A set of dynamic and output equations is said to be detectable if any unobservable states in the set are stable. When the performance is measured by a scalar function there two classes of optimal control problem: * Fixed configuration; where the form of a controller is given and it is required to optimize the value of some parameter such as a controller gain. * Semi-free configuration; where the form of the controller is not specified but is to be determination to produce whatever will be the optimal performance of a given controlled object. Optimal semi-free control theory is introduced LQP semi-free LQP problems are in continuous-time with realization x = Ax + Bu and performance criterion 1= j(xTPx + uTQu)dt ti or discrete-time with realization x(i+ l) = Ax + Bu and performance criterion EX h I = S (xTPx + uTQu) ii The matrices A, B, P, Q which specify a problem may in general be time-varying as also may matrix K specifying the solution. The LQP result is by way of simple continuous-time or discrete-time scalar LQP problems. For discrete-time scalar LQP problem time interval N = İ2 - ii + 1 optimal value I*N of IN rmnimized with respect to sequence u(ii) u(i2) depends only on the initial state x(ii) and on the number N of term in a sequence. Optimal value of the performances criterion which are quadratic in the state I*Nj(x) = -IcnX where kN = - vN = p + a2vN.r q + b2vN.! 8Wn.i q + b2vN_! 
For continuous scalar LQP problems x = ax + bu and performance criterion t2 1= J(px2+qu2)dt All four scalars p, q, a, b are time-invariant. Optimal values of performance criterion which is quadratic in x P(x,T) = v(T)x2 optimal control bv(T) u(x,T) = -k(T)x where k(T) a non-linear difference equation of optimal control dv bV - = p + 2av dT q2 This equation is known as Riccati Equations. Time interval T is defined by T = t2 - 1 like N. Convergence of the solution v(T) to a finite steady state value v. when T goes to a infinity is a desirable property. The optimal control depends on the current state x(t) and the time to go to T according to a linear feedback law u(t) = -K(T)x(t) having gain K(T) = Q_1(t)BT(t)VCr) where V(T) satisfies the Riccati equations. For discrete-time equations are all of them as the same continuous-time equations. Of any LQP problem is a linear function of the current value of dynamic state x of the controlled object: This provides use of linear feedback control law in the practice where controlled objects can be described by linear equations. State regulation, where main source of the variability is the arbitrary initial state of controlled object and the control objective is to achieve a stable equilibrium is at the origin of the state space. Target tracing, with the variability in the form of exogenous target motion r to be tracked by output c of the controlled object. Disturbance rejection, where either or both state regulation or target tracing is to be achieved in spite of the action of additional variability in the form of exogenous disturbances x2 affecting states of the controlled object. Continuous-time scalar example shows how Riccati equations converge to a time invariant steady state solutions in the limit as run time becomes infinite. For continuous-time 0 = P + ATV + VA - VBQ/WV For discrete-time XI V = P + ATVA - ATVB(Q + BTVB)1BTVA The matrix V measures optimal value in performance criteria I. 
If P is positive definite the state x is regulated to the origin of the state space. General expression for optimal control u(t) = (Va^o + l/q'-aoMt) - (\/a\ + 2a2(Vaio+l/q La<>) -aOy(t) This equation for the optimal control with the original differential equation of the controlled object show that resulting optimal control is a second order system governed the second equation. y + 2Çö)o y +©0 2 =0 ©o natural frequency, £ damping ratio and B, = I/aS"' value which is often chosen as given acceptable transient response is optimal. Optimal control theory leads to solution where optimal control is expressed as a function of the current value of dynamic states x of the controlled object. With real controlled object cannot usually be directly available as measured outputs. Real feedback control can only be driven by y = Cx. Optimal control theory is exploited in designing the feedback control what is known as an observer. This system is designed using the real signal u, y to generate an estimate x of the state x. The observer output x can be used to drive control law u = - Kx. Observer uses A, B, C matrices which are characterized of the controlled object. The observer has the same number n of the dynamic states as to be estimated convergence towards the estimated states x. For linear controlled object in continuous-time x = Ax + Bu + K0(y-Cx) or in discrete-time x (i) =Ax(i - 1) + Bu(i - 1) +K"(y(i) - C(Ax(i - 1)+Bu(i - 1))) Right hand side of the each equation consist of prediction term plus correction term. The correction term can be written Ko(y- y) when y predicted value y is given y = Cx with x the predicted value of x. Ko must be chosen sothat the observation xn error e0; e0 = x - x converges to zero. Convergence to zero of the observation error e0 depends on eigenvalues of (A - K"C) for continuous-time or of (I - K<,C)A for discrete-time systems. For continuous-time system all eigenvalues must be in the left half plane. 
For discrete-time system all eigenvalues must be within the unit circle. Matrix Ko is known as the observer gain. One way is to choose K<, so as to fix the roots of observer characteristic equation or at specified values chosen to ensure good behavior of observer. This procedure is known as pole placement. Another way to choose Ko is take account of causing any discrepancy eo between the estimate x and true value x. Because they are due to random noises and disturbances. The necessary condition for observer convergence that both poles be in the left half plane and also ki, k2 must be positive. In continuous-time transfer function matrix is H(s) s Kc(sl - A + KoC + BKJ "X and in discrete-time H(s) s K^zl - (I - KoCXA - BIQ)-1 K* Ko is the control gain of the feedback controller. Methods of loop recovery has been developed which aim to recover guaranteed stability margins by making assumptions about exogenous disturbances which have the effected of tuning the observer gain K<, so as to increase loop stability. The controller transfer function matrices are m x p where m is the number of the control u and p is the number of measurement y. Integral control; If a continuous-time system value of A, B, C, Ko, Kc are such the denominator coefficient do is zero, this is an integral control. PID control; If a continuous-time system its numerator polynomial is of higher order then its denominator, so the denominator coefficient di were so dominant in the transfer function, this is the PID control. For the LQP controller design first assumption that only the control error is available to drive the observer. An alternative way to design controller gain K<. would be to force steady state error to zero. This is achieved by kc2 = 1 which set do in the denominator of the transfer function. kd< 0 ensures T' faster then T. If Ti > T2 the transfer function is that of a phase-advance controller with integral action. 
Very fast observer response (a-> °°) value the di denominator coefficient dominates and the controller transfer function approximation PI rather then PID of the controlled object. Closed-loop behaviour of linear system is determinated by the characteristic equation I si - A I =0 where A having the partitioned structure. The design of an ideal feedback controller requires specification of two gain matrices Kc, Ko Integrators; whose in the feedback controllers was indicated as model of the dynamics of the class of variables which are finite polynomials in time. Derivative action; whether in the form of velocity feedback or of phase-advance can also be seen to serve the purpose of state estimation. Derivative action in feedback control provides a crude because noisy, estimate of the order state. xm error e0; e0 = x - x converges to zero. Convergence to zero of the observation error e0 depends on eigenvalues of (A - K"C) for continuous-time or of (I - K<,C)A for discrete-time systems. For continuous-time system all eigenvalues must be in the left half plane. For discrete-time system all eigenvalues must be within the unit circle. Matrix Ko is known as the observer gain. One way is to choose K<, so as to fix the roots of observer characteristic equation or at specified values chosen to ensure good behavior of observer. This procedure is known as pole placement. Another way to choose Ko is take account of causing any discrepancy eo between the estimate x and true value x. Because they are due to random noises and disturbances. The necessary condition for observer convergence that both poles be in the left half plane and also ki, k2 must be positive. In continuous-time transfer function matrix is H(s) s Kc(sl - A + KoC + BKJ "X and in discrete-time H(s) s K^zl - (I - KoCXA - BIQ)-1 K* Ko is the control gain of the feedback controller. 
Methods of loop recovery has been developed which aim to recover guaranteed stability margins by making assumptions about exogenous disturbances which have the effected of tuning the observer gain K<, so as to increase loop stability. The controller transfer function matrices are m x p where m is the number of the control u and p is the number of measurement y. Integral control; If a continuous-time system value of A, B, C, Ko, Kc are such the denominator coefficient do is zero, this is an integral control. PID control; If a continuous-time system its numerator polynomial is of higher order then its denominator, so the denominator coefficient di were so dominant in the transfer function, this is the PID control. For the LQP controller design first assumption that only the control error is available to drive the observer. An alternative way to design controller gain K<. would be to force steady state error to zero. This is achieved by kc2 = 1 which set do in the denominator of the transfer function. kd< 0 ensures T' faster then T. If Ti > T2 the transfer function is that of a phase-advance controller with integral action. Very fast observer response (a-> °°) value the di denominator coefficient dominates and the controller transfer function approximation PI rather then PID of the controlled object. Closed-loop behaviour of linear system is determinated by the characteristic equation I si - A I =0 where A having the partitioned structure. The design of an ideal feedback controller requires specification of two gain matrices Kc, Ko Integrators; whose in the feedback controllers was indicated as model of the dynamics of the class of variables which are finite polynomials in time. Derivative action; whether in the form of velocity feedback or of phase-advance can also be seen to serve the purpose of state estimation. Derivative action in feedback control provides a crude because noisy, estimate of the order state. 
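The observer-based controller transfer function given earlier, H(s) = Kc(sI − A + KoC + BKc)^(−1)Ko, can be evaluated numerically at any complex frequency. The matrices below are hypothetical placeholders chosen only to illustrate the m × p shape of the result (one control, one measurement, so H is 1 × 1).

```python
import numpy as np

# Hypothetical example system and gains.
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B  = np.array([[0.0], [1.0]])
C  = np.array([[1.0, 0.0]])
Kc = np.array([[2.0, 3.0]])    # control gain, m x n
Ko = np.array([[3.0], [2.0]])  # observer gain, n x p

def H(s):
    """Controller transfer function H(s) = Kc (sI - A + Ko C + B Kc)^(-1) Ko."""
    n = A.shape[0]
    return Kc @ np.linalg.inv(s * np.eye(n) - A + Ko @ C + B @ Kc) @ Ko

# H(s) is m x p: here 1 x 1, since m = 1 control and p = 1 measurement.
```

Evaluating H(0) gives the controller's DC gain; a finite value indicates the denominator coefficient d0 is nonzero, i.e. no pure integral action for this particular choice of gains.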
The Hamilton-Jacobi-Bellman equation leads to dp_j/dt = −∂H/∂x_j and dx_j/dt = ∂H/∂p_j, known as Pontryagin's equations. The condition for a trajectory to be optimal is that the Hamiltonian H be a minimum with respect to the control u at every point in time. A set of saturation inequalities u_j1 ≤ u_j ≤ u_j2, j = 1, ..., m, is imposed on the components of the control u. Applying Pontryagin's maximum principle to problems having linear dynamics with saturation constraints can lead to useful results. The optimal policy assigns to every element of the control u its minimum or maximum allowed value, depending on whether the sign of its coefficient in pTB is positive or negative. This control law, which requires every control variable to take one or the other of two limiting values, is called bang-bang control. The features of dynamic programming can be summarized as follows: * The problem has to be described numerically. * Constraints on the range of the state x can improve the efficiency of the search by reducing the number of possibilities that need to be considered. * Numerical procedures of dynamic programming share with other numerical procedures the curse of dimensionality, an inability to handle problems of high dimensionality. The resulting optimal control functions specify the control action as a function of the absolute value of time and of the current value of the state x. This feature, that the optimal control is specified in feedback form, follows from the principle of optimality and is not shared by the techniques of the calculus of variations. It is the reason why dynamic programming is well suited to the analysis of control problems.
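The bang-bang rule can be written down directly: when the Hamiltonian is linear in u, each control component takes its lower or upper saturation limit according to the sign of its coefficient in pTB. A minimal numpy sketch, with hypothetical costate, input matrix, and limits:

```python
import numpy as np

def bang_bang(p, B, u_min, u_max):
    """Minimize the u-dependent part of the Hamiltonian, p^T B u, subject to
    u_min <= u <= u_max: where the coefficient (p^T B)_j is positive, take the
    minimum allowed value; where it is negative, take the maximum."""
    coeff = p @ B  # coefficients of u in the Hamiltonian
    return np.where(coeff > 0, u_min, u_max)

# Hypothetical two-control example.
p = np.array([1.0, -2.0])          # costate
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
u = bang_bang(p, B,
              u_min=np.array([-1.0, -1.0]),
              u_max=np.array([1.0, 1.0]))
```

Here the first coefficient of pTB is positive and the second negative, so the rule drives u1 to its lower limit and u2 to its upper limit, switching whenever a coefficient changes sign along the trajectory.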

##### Description

Thesis (Master's) -- İstanbul Teknik Üniversitesi, Sosyal Bilimler Enstitüsü, Füsun Dede

##### Keywords

Optimum denetim,
Optimum control