Academic year: 2022

SNU MAE Multivariable Control

15 Linear Matrix Inequality (LMI)

A linear matrix inequality is an expression of the form

F(x) := F_0 + x_1 F_1 + · · · + x_m F_m > 0      (1)

where

x = (x_1, · · · , x_m) is a vector of real numbers,

F_0, · · · , F_m are real symmetric matrices, i.e., F_i = F_i^T ∈ R^{n×n}, i = 0, · · · , m, for some n ∈ Z_+, and

the inequality > 0 in (1) means positive definite, i.e., u^T F(x) u > 0 for all u ∈ R^n, u ≠ 0. Equivalently, the smallest eigenvalue of F(x) is positive.

Definition 15.1 (Linear matrix inequality (LMI)) A linear matrix inequality is

F(x) > 0      (2)

where F is an affine function mapping a finite dimensional vector space V to the set S^n := {M : M = M^T ∈ R^{n×n}}, n > 0, of real symmetric matrices.

"Affine matrix inequality" would have been a better name. Note that the constraints F(x) < 0 and A(x) < B(x) are special cases of (2).
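As a quick numerical illustration of the definition, positive definiteness of F(x) can be tested through the smallest eigenvalue. A minimal NumPy sketch (the helper names and the matrices F0, F1, F2 are made-up examples, not from the text):

```python
import numpy as np

def lmi_eval(F0, Fs, x):
    """Evaluate the affine map F(x) = F0 + x_1 F_1 + ... + x_m F_m."""
    F = F0.astype(float).copy()
    for xi, Fi in zip(x, Fs):
        F += xi * Fi
    return F

def is_positive_definite(F):
    """F > 0 iff the smallest eigenvalue of the symmetric matrix F is positive."""
    return np.linalg.eigvalsh(F).min() > 0

# Hypothetical data: F0, F1, F2 are real symmetric 2x2 matrices.
F0 = np.array([[1.0, 0.0], [0.0, -1.0]])
F1 = np.array([[0.0, 0.0], [0.0, 1.0]])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])

print(is_positive_definite(lmi_eval(F0, [F1, F2], [2.0, 0.0])))  # F(x) = diag(1, 1): True
print(is_positive_definite(lmi_eval(F0, [F1, F2], [0.0, 0.0])))  # F(x) = diag(1, -1): False
```

`eigvalsh` is used because each F_i is symmetric, so F(x) is symmetric and its eigenvalues are real.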

Remark. Recall from the definition that an affine mapping F : V → S^n necessarily takes the form F(x) = F_0 + T(x), where F_0 ∈ S^n and T : V → S^n is a linear transformation. Thus if V is finite dimensional, say of dimension m, and {e_1, · · · , e_m} constitutes a basis for V, then we can write

T(x) = Σ_{j=1}^m x_j F_j

where the elements {x_1, · · · , x_m} are such that x = Σ_{j=1}^m x_j e_j and F_j = T(e_j) for j = 1, · · · , m. Hence we obtain (1) as a special case.


Remark. The same remark applies to mappings F : R^{m1×m2} → S^n where m1, m2 ∈ Z_+. A simple example with m1 = m2 = m is the Lyapunov inequality

F(X) = A^T X + X A + Q > 0 .

Here A, Q ∈ R^{m×m} are assumed to be given and X ∈ R^{m×m} is the unknown. The unknown variable is therefore a matrix. Note that this defines an LMI only if Q is symmetric. In this case the domain V of F in Definition 15.1 is equal to S^m. We can view this LMI as a special case of (1) by defining a basis E_1, · · · , E_M of S^m (with M = m(m+1)/2) and writing X = Σ_{j=1}^M x_j E_j:

F(X) = F(Σ_{j=1}^M x_j E_j) = F_0 + Σ_{j=1}^M x_j T(E_j) = F_0 + Σ_{j=1}^M x_j F_j ,

which is of the form (1).

The LMI

F(x) = F_0 + x_1 F_1 + · · · + x_m F_m > 0

defines a convex constraint on x = (x_1, · · · , x_m), i.e., the set F := {x : F(x) > 0} is convex. Indeed, if x_1, x_2 ∈ F and α ∈ (0, 1), then by affinity of F

F(α x_1 + (1 − α) x_2) = α F(x_1) + (1 − α) F(x_2) > 0 .

Convexity has an important consequence: even though the LMI has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Although the LMI may seem special, it turns out that many convex sets can be represented in this way.
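The affinity identity underlying the convexity argument can be checked numerically. A small NumPy sketch (random symmetric example data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_F(F0, Fs, x):
    # F(x) = F0 + sum_j x_j F_j, affine in x
    return F0 + sum(xj * Fj for xj, Fj in zip(x, Fs))

# Hypothetical symmetric data F0, F1, F2.
F0 = np.eye(3)
Fs = [(M + M.T) / 2 for M in (rng.standard_normal((3, 3)) for _ in range(2))]

x1, x2, alpha = np.array([0.3, -0.1]), np.array([-0.5, 0.8]), 0.4

lhs = affine_F(F0, Fs, alpha * x1 + (1 - alpha) * x2)
rhs = alpha * affine_F(F0, Fs, x1) + (1 - alpha) * affine_F(F0, Fs, x2)
print(np.allclose(lhs, rhs))  # True: F is affine, hence {x : F(x) > 0} is convex
```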

1. Note that a system of LMIs (i.e., a finite set of LMIs)

F_1(x) < 0 , · · · , F_K(x) < 0

is equivalent to the single LMI

F(x) := diag[F_1(x), · · · , F_K(x)] < 0 .
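The block-diagonal stacking is easy to verify numerically, since the eigenvalues of a block-diagonal matrix are the union of the eigenvalues of its blocks. A NumPy sketch (example blocks made up; the helper is ours):

```python
import numpy as np

def stack_lmis(blocks):
    """Form diag[F_1(x), ..., F_K(x)] from K LMI values."""
    n = sum(B.shape[0] for B in blocks)
    F = np.zeros((n, n))
    i = 0
    for B in blocks:
        k = B.shape[0]
        F[i:i+k, i:i+k] = B
        i += k
    return F

# Two hypothetical LMI values, both negative definite.
F1 = np.array([[-1.0, 0.2], [0.2, -2.0]])
F2 = np.array([[-3.0]])
F = stack_lmis([F1, F2])

# F < 0 iff every block F_i < 0.
print(np.linalg.eigvalsh(F).max() < 0)  # True
```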

2. Combined constraints (in the unknown x) of the form

{ F(x) > 0 , Ax = b }    or    { F(x) > 0 , x = Ay + b for some y }

where the affine function F : R^m → S^n and matrices A ∈ R^{n×m} and b ∈ R^n are given, can be lumped into one LMI. More generally, the combined equations

{ F(x) > 0 , x ∈ M }      (3)

where M is an affine subset of R^n, i.e.,

M = x_0 + M_0 = {x_0 + m | m ∈ M_0}

with x_0 ∈ R^n and M_0 a linear subspace of R^n, can be written as one single LMI. To see this, let e_1, · · · , e_k ∈ R^n be a basis of M_0 and let F(x) = F_0 + T(x) be decomposed as in the remark above.

Then (3) can be rewritten as

0 < F(x) = F_0 + T(x_0 + Σ_{j=1}^k x̄_j e_j)
         = [F_0 + T(x_0)] + Σ_{j=1}^k x̄_j T(e_j)      (constant part + linear part)
         = F̄_0 + x̄_1 F̄_1 + · · · + x̄_k F̄_k =: F̄(x̄)

where F̄_0 = F_0 + T(x_0), F̄_j = T(e_j) and x̄ = (x̄_1, · · · , x̄_k). This implies that x ∈ R^n satisfies (3) if and only if F̄(x̄) > 0. Note that the dimension k of x̄ is smaller than the dimension n of x.

3. (Schur complement) Let F : V → S^n be an affine function, partitioned as

F(x) = [ F_11(x)   F_12(x) ]
       [ F_21(x)   F_22(x) ]

where F_11(x) is square. Then F(x) > 0 if and only if

{ F_11(x) > 0
{ F_22(x) − F_21(x) F_11(x)^{-1} F_12(x) > 0 .      (4)

Note that the second inequality in (4) is a nonlinear matrix inequality in x. It follows that nonlinear matrix inequalities of the form (4) can be converted to LMIs, and that the nonlinear inequalities (4) define a convex constraint on x.


15.1 Types of LMI problems

Suppose that F, G : V → S^{n1} and H : V → S^{n2} are affine functions. There are three generic problems related to the study of linear matrix inequalities:

Feasibility: The test whether or not there exist solutions x of F(x) > 0 is called a feasibility problem. The LMI is called infeasible if no solution exists.

Optimization: Let f : S → R and suppose that S = {x | F(x) > 0}. The problem of determining V_opt = inf_{x∈S} f(x) is called an optimization problem with an LMI constraint.

Generalized eigenvalue problem: Minimize a scalar λ ∈ R subject to

{ λF(x) − G(x) > 0
{ F(x) > 0
{ H(x) > 0 .

15.2 What are LMIs good for?

Many optimization problems in control design, identification, and signal processing can be formulated using LMIs.

Example. Consider asymptotic stability of the LTI system

ẋ = Ax ,   A ∈ R^{n×n} .      (5)

By Lyapunov's theorem, (5) is asymptotically stable if and only if there exists X ∈ S^n such that X > 0 and A^T X + X A < 0, i.e., if and only if the LMI

[ X   0              ]
[ 0   −A^T X − X A   ]  > 0

is feasible.
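For a given stable A, a certificate X can be computed by solving the Lyapunov equation A^T X + X A = −Q. A self-contained NumPy sketch using the Kronecker/vec identity (the solver name and the example A are ours):

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A^T X + X A = -Q for X via the vec identity
    vec(A^T X + X A) = (I (x) A^T + A^T (x) I) vec(X)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    X = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
    return (X + X.T) / 2

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1, -2: asymptotically stable
X = solve_lyapunov(A, np.eye(2))

# X certifies feasibility of the LMI: X > 0 and -A^T X - X A > 0.
print(np.linalg.eigvalsh(X).min() > 0)
print(np.linalg.eigvalsh(-(A.T @ X + X @ A)).min() > 0)
```

For this A and Q = I, X works out to [[1.25, 0.25], [0.25, 0.25]], which is positive definite.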

Example. Determine a diagonal matrix D such that ||D M D^{-1}|| < 1, where M is some given matrix. Since

||D M D^{-1}|| < 1 ⇔ D^{-T} M^T D^T D M D^{-1} < I
                   ⇔ M^T D^T D M < D^T D
                   ⇔ X − M^T X M > 0

where X := D^T D > 0, the existence of such a matrix D is equivalent to the feasibility of an LMI in X.
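The chain of equivalences can be checked on a concrete example. Here M has spectral norm above 1, but the (made-up) diagonal scaling D = diag(1, 4) brings the scaled norm below 1, and the corresponding X = D^T D satisfies the LMI:

```python
import numpy as np

# Hypothetical data: M with spectral norm >= 1, and a diagonal scaling D.
M = np.array([[0.5, 2.0], [0.0, 0.5]])
D = np.diag([1.0, 4.0])

X = D.T @ D   # X := D^T D > 0

scaled_norm = np.linalg.norm(D @ M @ np.linalg.inv(D), 2)
lmi_holds = np.linalg.eigvalsh(X - M.T @ X @ M).min() > 0

print(np.linalg.norm(M, 2) >= 1)     # the unscaled norm is not < 1
print(scaled_norm < 1, lmi_holds)    # the scaling works, and X - M^T X M > 0
```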


Example. Let F be an affine function and consider the problem of minimizing f(x) := λ_max(F^T(x) F(x)) over x. Since, by the Schur complement,

λ_max(F^T(x) F(x)) < γ ⇔ γI − F^T(x) F(x) > 0
                       ⇔ [ γI     F^T(x) ]
                         [ F(x)   I      ]  > 0 ,

if we define

x̄ := (x, γ) ,   F̄(x̄) := [ γI     F^T(x) ]
                          [ F(x)   I      ] ,   f̄(x̄) := γ ,

then F̄ is an affine function of x̄, and minimizing f over x is equivalent to determining inf f̄(x̄) subject to the LMI F̄(x̄) > 0. Hence this is an optimization problem with a linear objective function f̄ and an LMI constraint.
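In the scalar direction γ this epigraph trick can be exercised by bisection: the smallest feasible γ in the LMI equals λ_max(F^T F). A NumPy sketch with x held fixed (example data made up):

```python
import numpy as np

def Fbar_pd(x, gamma, F0, Fs):
    """Check the LMI  [gamma*I  F(x)^T ; F(x)  I] > 0."""
    F = F0 + sum(xj * Fj for xj, Fj in zip(x, Fs))
    n = F.shape[0]
    Fbar = np.block([[gamma * np.eye(n), F.T], [F, np.eye(n)]])
    return np.linalg.eigvalsh(Fbar).min() > 0

# Hypothetical affine data; x is fixed, so we bisect on gamma only.
F0 = np.array([[1.0, 2.0], [0.0, 1.0]])
Fs = [np.eye(2)]
x = [0.5]

lo, hi = 0.0, 100.0
for _ in range(60):                 # bisection on the scalar gamma
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if Fbar_pd(x, mid, F0, Fs) else (mid, hi)

F = F0 + 0.5 * Fs[0]
print(abs(hi - np.linalg.eigvalsh(F.T @ F).max()) < 1e-6)  # matches lambda_max(F^T F)
```

A real LMI solver would of course optimize over x and γ jointly; the bisection here only illustrates the epigraph reformulation.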

Example (Simultaneous stabilization). Consider k LTI systems with n-dimensional state space and m-dimensional input space:

ẋ = A_i x + B_i u

where A_i ∈ R^{n×n} and B_i ∈ R^{n×m}, i = 1, · · · , k. We would like to find a state feedback law u = F x, F ∈ R^{m×n}, such that the eigenvalues λ(A_i + B_i F) lie in the open left half-plane for i = 1, · · · , k. From the example above, this is solved when we find matrices F and X_i, i = 1, · · · , k, such that

{ X_i > 0
{ (A_i + B_i F)^T X_i + X_i (A_i + B_i F) < 0 .      (6)

Note that (6) is not a system of LMIs in X_i and F jointly. Assume a joint Lyapunov function, i.e. X_1 = · · · = X_k = X, introduce Y = X^{-1} and K = F Y, and multiply (6) on both sides by Y; then (6) becomes

{ Y > 0
{ A_i Y + Y A_i^T + B_i K + K^T B_i^T < 0 ,   i = 1, · · · , k ,

which is a system of LMIs in Y and K. The simultaneous stabilization problem (with a joint Lyapunov function) has a solution if this system of LMIs is feasible; the feedback gain is then recovered as F = K Y^{-1}.
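The congruence transformation behind the change of variables can be verified numerically for a single system (k = 1): pre- and post-multiplying the inequality in (6) by Y = X^{-1} produces exactly the LMI form in (Y, K). The data below are a hand-made example, not from the text:

```python
import numpy as np

def neg_def(M):
    return np.linalg.eigvalsh((M + M.T) / 2).max() < 0

# Hypothetical single-system data: a stabilizing F and its Lyapunov matrix X.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-2.0, -3.0]])                 # A + B F has eigenvalues -1, -2
X = np.array([[1.25, 0.25], [0.25, 0.25]])   # solves (A+BF)^T X + X (A+BF) = -I

Acl = A + B @ F
bmi_in_X = Acl.T @ X + X @ Acl               # the form in (6): bilinear in (X, F)

Y = np.linalg.inv(X)
K = F @ Y
lmi_in_YK = A @ Y + Y @ A.T + B @ K + K.T @ B.T   # linear in (Y, K)

# Congruence: Y (Acl^T X + X Acl) Y = A Y + Y A^T + B K + K^T B^T.
print(np.allclose(Y @ bmi_in_X @ Y, lmi_in_YK))
print(neg_def(bmi_in_X), neg_def(lmi_in_YK))
```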


15.3 Nominal Performance

Consider

ẋ = Ax + Bu      (7)
y = Cx + Du      (8)

with state space X = R^n, input space U = R^m and output space Y = R^p.

15.3.1 H∞ nominal performance

Proposition 15.2 If the system (7) is asymptotically stable, then ||G||∞ < γ whenever there exists a solution K = K^T > 0 to the LMI

[ A^T K + K A + C^T C   K B + C^T D   ]
[ B^T K + D^T C         D^T D − γ² I  ]  < 0 .      (9)

The proof requires a discussion of linear dissipative systems; see Scherer if interested.

Therefore, we can compute the smallest possible upper bound on the L2-induced gain of the system (which is the H∞ norm of the transfer function) by minimizing γ > 0 over all variables γ and K > 0 that satisfy the LMI (9).
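The LMI (9) can be checked directly on a scalar example G(s) = c/(s + a), whose H∞ norm is c/a (the DC gain). The certificate K = 1 below is hand-picked for these particular numbers, not produced by a solver:

```python
import numpy as np

# Scalar system G(s) = c/(s + a): ||G||_inf = c/a, attained at omega = 0.
a, c = 1.0, 1.0
A, B, C, D = np.array([[-a]]), np.array([[1.0]]), np.array([[c]]), np.array([[0.0]])

gamma = 1.1            # any gamma > c/a = 1 should be certifiable
K = np.array([[1.0]])  # hand-picked Lyapunov certificate for this (a, c, gamma)

lmi = np.block([
    [A.T @ K + K @ A + C.T @ C, K @ B + C.T @ D],
    [B.T @ K + D.T @ C,         D.T @ D - gamma**2 * np.eye(1)],
])
print(np.linalg.eigvalsh(lmi).max() < 0)   # LMI (9) holds, so ||G||_inf < 1.1

# Cross-check against the frequency response magnitude.
w = np.linspace(0.0, 100.0, 100001)
hinf_est = np.abs(c / (1j * w + a)).max()
print(hinf_est < gamma)
```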

15.3.2 H2 nominal performance

Suppose that we are interested only in the impulse responses of this system.

This means that we take impulsive inputs of the form u(t) = δ(t) e_i with e_i the i-th standard basis vector of the input space R^m (i runs from 1 to m). With zero initial conditions, the corresponding output y_i belongs to L2 and is given by

         { C exp(At) B e_i   for t > 0
y_i(t) = { D e_i δ(t)        for t = 0
         { 0                 for t < 0.

Only if D = 0 is the sum of the squared norms of all such impulse responses, Σ_{i=1}^m ||y_i||₂², well defined; it is given by

Σ_{i=1}^m ||y_i||₂² = trace ∫₀^∞ B^T exp(A^T t) C^T C exp(At) B dt
                    = trace ∫₀^∞ C exp(At) B B^T exp(A^T t) C^T dt
                    = (1/2π) trace ∫_{−∞}^{∞} G(jω) G^*(jω) dω


where G is the transfer function of the system.
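The middle integral above equals trace(C W_c C^T), where W_c is the controllability gramian solving A W_c + W_c A^T + B B^T = 0 (this identity is used again in the proof of Proposition 15.3). A NumPy sketch on a scalar example, where W_c = 1/(2a) and ||G||₂² = c²/(2a) in closed form:

```python
import numpy as np

def controllability_gramian(A, B):
    """Solve A Wc + Wc A^T + B B^T = 0 via the Kronecker/vec identity."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    vec = np.linalg.solve(K, -(B @ B.T).flatten(order="F"))
    return vec.reshape((n, n), order="F")

# Scalar system G(s) = c/(s + a) with D = 0 (example numbers made up).
a, c = 2.0, 3.0
A, B, C = np.array([[-a]]), np.array([[1.0]]), np.array([[c]])

Wc = controllability_gramian(A, B)
h2_sq = np.trace(C @ Wc @ C.T)
print(np.isclose(h2_sq, c**2 / (2 * a)))   # True
```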

Proposition 15.3 Suppose that the system (7) is asymptotically stable (and D = 0). Then the following statements are equivalent.

(a) ||G||₂ < γ

(b) there exist K = K^T > 0 and Z such that

[ A^T K + K A   K B ]
[ B^T K         −I  ]  < 0 ,   [ K   C^T ]
                               [ C   Z   ]  > 0 ,   trace(Z) < γ²      (10)

(c) there exist K = K^T > 0 and Z such that

[ A K + K A^T   K C^T ]
[ C K           −I    ]  < 0 ,   [ K     B ]
                                 [ B^T   Z ]  > 0 ,   trace(Z) < γ²      (11)

Proof. First note that ||G||₂ < γ is equivalent to requiring that the controllability gramian

W_c := ∫₀^∞ exp(At) B B^T exp(A^T t) dt

satisfies trace(C W_c C^T) < γ². Since the controllability gramian is the unique positive definite solution of the Lyapunov equation A W + W A^T + B B^T = 0, this is equivalent to saying that there exists X > 0 such that

A X + X A^T + B B^T < 0 ,   trace(C X C^T) < γ² .

With the change of variables K := X^{-1}, this is equivalent to the existence of K > 0 and Z such that

A^T K + K A + K B B^T K < 0 ,   C K^{-1} C^T < Z ,   trace(Z) < γ² .

Using Schur complements for the first two inequalities now yields that ||G||₂ < γ is equivalent to the existence of K > 0 and Z such that

[ A^T K + K A   K B ]
[ B^T K         −I  ]  < 0 ,   [ K   C^T ]
                               [ C   Z   ]  > 0 ,   trace(Z) < γ² .

The equivalence with (11) is obtained by a direct dualization and the observation that ||G||₂ = ||G^T||₂.

Therefore, the smallest possible upper bound of the H2-norm of the transfer function can be calculated by minimizing the criterion trace(Z) over the variables K > 0 and Z that satisfy the LMIs defined by the first two inequalities in (10) or (11).
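The recipe in the proof can be exercised end to end on the scalar example G(s) = c/(s + a): perturb the gramian to get a strictly feasible X, set K = X^{-1}, and check the conditions in (10). The perturbation sizes below are arbitrary choices of ours:

```python
import numpy as np

# Scalar example, G(s) = c/(s + a) with D = 0; ||G||_2^2 = c^2/(2a).
a, c = 2.0, 3.0
A, B, C = np.array([[-a]]), np.array([[1.0]]), np.array([[c]])

# Strictly feasible X: the gramian Wc = 1/(2a) plus a margin; then K = X^{-1}.
X = np.array([[1.0 / (2 * a) + 0.01]])
K = np.linalg.inv(X)
Z = C @ X @ C.T + 0.01 * np.eye(1)       # Z > C K^{-1} C^T
gamma = np.sqrt(np.trace(Z)) + 0.01      # so that trace(Z) < gamma^2

lmi1 = np.block([[A.T @ K + K @ A, K @ B], [B.T @ K, -np.eye(1)]])
lmi2 = np.block([[K, C.T], [C, Z]])

print(np.linalg.eigvalsh(lmi1).max() < 0)          # first LMI in (10)
print(np.linalg.eigvalsh(lmi2).min() > 0)          # second LMI in (10)
print(np.trace(Z) < gamma**2)                      # third condition
print(np.sqrt(np.trace(Z)) > c / np.sqrt(2 * a))   # gamma upper-bounds ||G||_2
```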


15.4 Controller Synthesis

Let

ẋ  = A x + B1 w + B2 u
z∞ = C∞ x + D∞1 w + D∞2 u
z2 = C2 x + D21 w + D22 u
y  = Cy x + Dy1 w

and

ẋK = AK xK + BK y
u  = CK xK + DK y

be state-space realizations of the plant P(s) and the controller K(s), respectively.

Denoting by T∞(s) and T2(s) the closed-loop transfer functions from w to z∞ and z2, respectively, we consider the following multi-objective synthesis problem.

Design an output feedback controller u = K(s)y that achieves

H∞ performance: maintain the H∞ norm of T∞ below γ0.

H2 performance: maintain the H2 norm of T2 below ν0.

Multi-objective H2/H∞ controller design: minimize a trade-off criterion of the form α||T∞||∞² + β||T2||₂² for some α, β ≥ 0.

Pole placement: place the closed-loop poles in some prescribed LMI region D.

Let

ẋ_cl = A_cl x_cl + B_cl w
z∞   = C_cl1 x_cl + D_cl1 w
z2   = C_cl2 x_cl + D_cl2 w

denote the corresponding closed-loop state-space equations. Then our design objectives can be expressed as follows:

H∞ performance: the closed-loop RMS gain from w to z∞ does not exceed γ if and only if there exists a symmetric matrix X∞ such that

[ A_cl X∞ + X∞ A_cl^T   B_cl    X∞ C_cl1^T ]
[ B_cl^T                −I      D_cl1^T    ]  < 0 ,   X∞ > 0 .
[ C_cl1 X∞              D_cl1   −γ² I      ]


H2 performance: the LQG cost from w to z2 does not exceed ν if and only if D_cl2 = 0 and there exist symmetric matrices X2 and Q such that

[ A_cl X2 + X2 A_cl^T   B_cl ]
[ B_cl^T                −I   ]  < 0 ,

[ Q             C_cl2 X2 ]
[ X2 C_cl2^T    X2       ]  > 0 ,   trace(Q) < ν² .

Pole placement: the closed-loop poles lie in the LMI region

D := {z ∈ C : L + M z + M^T z̄ < 0}

with L = L^T = [λij]_{1≤i,j≤m} and M = [μij]_{1≤i,j≤m} if and only if there exists a symmetric matrix Xpol such that

[ λij Xpol + μij A_cl Xpol + μji Xpol A_cl^T ]_{1≤i,j≤m} < 0 ,   Xpol > 0 .

For tractability, we seek a single Lyapunov matrix X := X∞ = X2 = Xpol that enforces all three sets of constraints. Factorizing X as

X = [ I   S   ] [ R     I ]^{-1}
    [ 0   N^T ] [ M^T   0 ]

and introducing the transformed controller variables

B̂K := N BK + S B2 DK
ĈK := CK M^T + DK Cy R
ÂK := N AK M^T + N BK Cy R + S B2 CK M^T + S (A + B2 DK Cy) R ,

the inequality constraints on X are turned into LMI constraints in the variables R, S, Q, ÂK, B̂K, ĈK and DK. We thus obtain the following suboptimal LMI formulation of our multi-objective synthesis problem:

Minimize α γ² + β trace(Q) over R, S, Q, ÂK, B̂K, ĈK, DK and γ² satisfying:



[ A R + R A^T + B2 ĈK + ĈK^T B2^T   ÂK^T + A + B2 DK Cy                B1 + B2 DK Dy1      ★     ]
[ ★                                 A^T S + S A + B̂K Cy + Cy^T B̂K^T   S B1 + B̂K Dy1       ★     ]
[ ★                                 ★                                  −I                  ★     ]
[ C∞ R + D∞2 ĈK                     C∞ + D∞2 DK Cy                     D∞1 + D∞2 DK Dy1    −γ² I ]
 < 0

[ Q    C2 R + D22 ĈK   C2 + D22 DK Cy ]
[ ★    R               I              ]
[ ★    ★               S              ]
 > 0

[ λij [ R   I ]  + μij [ A R + B2 ĈK   A + B2 DK Cy ]  + μji [ A R + B2 ĈK   A + B2 DK Cy ]^T ]
[     [ I   S ]        [ ÂK            S A + B̂K Cy  ]        [ ÂK            S A + B̂K Cy  ]   ]
                                                                                     1≤i,j≤m
 < 0

trace(Q) < ν0² ,   γ² < γ0² ,   D21 + D22 DK Dy1 = 0 ,

where ★ denotes blocks determined by symmetry. Given optimal solutions γ and Q of this LMI problem, the closed-loop performances are bounded by

||T∞||∞ ≤ γ ,   ||T2||₂ ≤ √trace(Q) .

This is implemented by the MATLAB command hinfmix.

15.5 Affine combinations of linear systems

Often, model uncertainty about specific parameters is reflected as uncertainty in specific entries of the state-space matrices A, B, C, D. Let p = (p1, ..., pn) denote the parameter vector which expresses the uncertain quantities in the system, and suppose that this parameter vector belongs to some subset P ⊂ R^n. Then the uncertain model can be thought of as being parameterized by p ∈ P through its state-space representation

ẋ = A(p)x + B(p)u      (12)
y = C(p)x + D(p)u .    (13)

One way to think of equations of this sort is to view them as a set of linear time-invariant systems parameterized by p ∈ P. However, if p varies with time, then (12) defines a linear time-varying dynamical system, and it can therefore


also be viewed as such. If components of p are time varying and coincide with state components then (12) is better viewed as a nonlinear system.

Of particular interest will be those systems in which the system matrices depend affinely on p. This means that

A(p) = A0 + p1 A1 + · · · + pn An      (14)
B(p) = B0 + p1 B1 + · · · + pn Bn      (15)
C(p) = C0 + p1 C1 + · · · + pn Cn      (16)
D(p) = D0 + p1 D1 + · · · + pn Dn .    (17)

Or, written in a more compact form,

S(p) = S0 + p1 S1 + · · · + pn Sn ,

where

S(p) = [ A(p)   B(p) ]
       [ C(p)   D(p) ]

is the system matrix associated with (12). We call these models affine parameter-dependent models. In MATLAB such a system is represented with the routines psys and pvec. For n = 2 and a parameter box

P := {(p1, p2) | p1 ∈ [p1min, p1max], p2 ∈ [p2min, p2max]}

the syntax is

p = pvec('box', [p1min p1max ; p2min p2max])
affsys = psys(p, [s0, s1, s2])

where p is the parameter vector whose i-th component ranges between pimin and pimax. Bounds on the rate of variation ṗ_i(t) can be specified by adding a third argument 'rate' when calling pvec. See also the following routines:

pdsimul for time simulations of affine parameter models,

aff2pol to convert an affine model to an equivalent polytopic model,

pvinfo to inquire about the parameter vector.

References

[1] C. Scherer and S. Weiland, Lecture Notes on Linear Matrix Inequalities in Control.

[2] P. Gahinet et al., LMI Lab: MATLAB Toolbox for Control Analysis and Design.

[3] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, Philadelphia, 1994.
