
Page 1

Review

■ Theorem 6.24: V is an inner product space; T : V → V is linear.

Then T is an orthogonal projection ⇔ T² = T = T*.

Proof:

"⇐": Assume T² = T = T* ⇒ T is a projection.

Let x ∈ R(T) and y ∈ N(T). ⇒ T(x) = x, T(y) = 0.

⇒ ⟨x, y⟩ = ⟨T(x), y⟩ = ⟨x, T*(y)⟩ = ⟨x, T(y)⟩ = ⟨x, 0⟩ = 0 ⇒ x ∈ N(T)⊥ ⇒ R(T) ⊆ N(T)⊥   (1)

Let x ∈ N(T)⊥

⇒ x = x1 + x2, x1 ∈ R(T), x2 ∈ N(T)   [projection]

⇒ 0 = ⟨x, x2⟩ = ⟨x1, x2⟩ + ⟨x2, x2⟩ = ||x2||²   [∵ x1 ∈ R(T) ⊆ N(T)⊥ by (1)]

⇒ x2 = 0 ⇒ x = x1 ∈ R(T) ⇒ N(T)⊥ ⊆ R(T) ⇒ N(T)⊥ = R(T)   [by (1)]
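As a quick numerical sanity check (not part of the notes; the matrix B below is a made-up example), an orthogonal projection onto the column space of B can be built as P = B(B*B)⁻¹B*, and it satisfies both conditions of Theorem 6.24:

```python
import numpy as np

# Hypothetical example: P projects orthogonally onto the column space of B.
B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
P = B @ np.linalg.inv(B.T @ B) @ B.T

# Theorem 6.24: orthogonal projection  <=>  P^2 = P = P*
print(np.allclose(P @ P, P))   # True: P is a projection
print(np.allclose(P, P.T))     # True: P is self-adjoint (real case)
```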

Page 2

■ Theorem 6.25 (The spectral theorem): V is an inner product space over F; dim(V) < ∞; T : V → V is a linear operator with

distinct eigenvalues (the spectrum) λ1, · · · , λk,
corresponding eigenspaces W1, · · · , Wk, and
orthogonal projections T1, · · · , Tk on W1, · · · , Wk;

and T is normal if F = C and self-adjoint if F = R. Then the following statements are true.

1. V = W1 ⊕ · · · ⊕ Wk.

2. Wi⊥ = ⊕(j≠i) Wj.

3. TiTj = T0 for i ≠ j.

4. T1 + · · · + Tk = I.

5. T = λ1T1 + · · · + λkTk: the spectral decomposition.
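A minimal numerical sketch of statements 1-5 (a hypothetical real symmetric, hence self-adjoint, 2×2 matrix with distinct eigenvalues, so each eigenspace is one-dimensional):

```python
import numpy as np

# Hypothetical self-adjoint operator on R^2.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)            # orthonormal eigenvectors in the columns of Q

# Orthogonal projections T_i onto the eigenspaces W_i (rank one here).
P = [np.outer(Q[:, i], Q[:, i]) for i in range(len(lam))]

print(np.allclose(P[0] @ P[1], 0))                           # 3. T_i T_j = T_0 for i != j
print(np.allclose(sum(P), np.eye(2)))                        # 4. T_1 + ... + T_k = I
print(np.allclose(sum(l * Pi for l, Pi in zip(lam, P)), A))  # 5. A = sum_i lambda_i T_i
```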

Page 3

■ Corollary 6.25.2: V is an inner product space over C; dim(V) < ∞; T : V → V is linear. Then T is unitary ⇔ T is normal and |λ| = 1 for every eigenvalue λ of T.

Proof.

"⇒": If T is unitary, then T is normal and every eigenvalue of T has absolute value 1 (∵ ||T(x)|| = ||x||).

■ Corollary 6.25.3: V is an inner product space over C; dim(V) < ∞; T : V → V is normal. Then T is self-adjoint ⇔ every eigenvalue of T is real.

Proof.

"⇒": T = T* ⇒ λi vi = T(vi) = T*(vi) = λ̄i vi ⇒ λi = λ̄i ⇒ λi is real.

Page 4

■ Reflection is an example that is both self-adjoint and unitary:

T = [ cos 2θ    sin 2θ ]
    [ sin 2θ   −cos 2θ ]
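A small check, for an arbitrary angle θ, that this reflection matrix is self-adjoint and unitary, and that its eigenvalues are real with absolute value 1 (consistent with Corollaries 6.25.2 and 6.25.3):

```python
import numpy as np

theta = 0.3                              # arbitrary angle
T = np.array([[np.cos(2*theta),  np.sin(2*theta)],
              [np.sin(2*theta), -np.cos(2*theta)]])

print(np.allclose(T, T.T))               # self-adjoint
print(np.allclose(T.T @ T, np.eye(2)))   # unitary (orthogonal over R)
print(np.linalg.eigvalsh(T))             # eigenvalues are -1 and +1
```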

[End of Review ]

Page 5

II. Singular value decomposition and pseudoinverse

■ generalization:

normal/self-adjoint operator ⇒ linear transformation
T : V → V ⇒ T : V → W
[T]β ∈ Mn×n ⇒ [T]γβ ∈ Mm×n
eigenvalue ⇒ singular value
[T]β = A = QDQ* ⇒ [T]γβ = A = UΣV*

■ For convenience, we assume F = R or C, and by unitary we mean either unitary (C) or orthogonal (R).

■ adjoint of a linear transformation T : V → W : T* : W → V such that ⟨T(x), y⟩W = ⟨x, T*(y)⟩V

Page 6

■ Theorem 6.26 (singular value theorem): V and W are inner product spaces; dim(V), dim(W) < ∞; T : V → W is linear; rank(T) = r.

Then ∃ scalars σ1 ≥ · · · ≥ σr > 0 such that

1. ∃ an orthonormal basis {v1, · · · , vn} for V with T*T(vi) = { σi² vi, 1 ≤ i ≤ r ;  0, i > r }.

2. ∃ an orthonormal basis {u1, · · · , um} for W with T(vi) = { σi ui, 1 ≤ i ≤ r ;  0, i > r }.

proof: T*T is self-adjoint.

⇒ ∃ an orthonormal basis β = {v1, · · · , vn} for V consisting of eigenvectors of T*T with corresponding eigenvalues λi. [Thm 6.17]

⇒ ⟨T(vi), T(vi)⟩W = ⟨T*T(vi), vi⟩V = ⟨λi vi, vi⟩V = λi ⟨vi, vi⟩V

⇒ λi ≥ 0

Page 7

rank(T*T) = rank([T*T]β) = rank([T*]βγ [T]γβ) for some orthonormal bases β, γ

= rank(([T]γβ)* [T]γβ)   [exercise 6.3.15]

= rank([T]γβ)   [Lemma 6.12.2]

= rank(T) = r

⇒ λ1 ≥ · · · ≥ λr > 0 and λi = 0 for i > r, after reordering. We have proven "1" if we set σi = √λi.

Let ui = (1/σi) T(vi), i = 1, · · · , r.

⇒ ⟨ui, uj⟩W = ⟨(1/σi) T(vi), (1/σj) T(vj)⟩W

= (1/(σi σj)) ⟨T*T(vi), vj⟩V

= (1/(σi σj)) ⟨λi vi, vj⟩V

= (σi²/(σi σj)) ⟨vi, vj⟩V = δij

⇒ {u1, · · · , ur} is orthonormal.

⇒ It extends to an orthonormal basis {u1, · · · , um} for W.

Page 8

⇒ T(vi) = { σi ui, 1 ≤ i ≤ r ;  0, i > r }   [σi := 0 for i > r]

■ singular values of T: σ1, · · · , σk, where k = min(m, n), in the theorem.

■ The singular values are unique to T, but the bases are not.

■ The singular values of T and T* are identical.

■ The theorem is symmetric, i.e., the roles of T and T* can be interchanged.
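These remarks can be checked numerically; the matrix below is a random, hypothetical example (not from the notes). The singular values of A are the square roots of the eigenvalues of A*A, and they coincide with the singular values of A*:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))                    # hypothetical 4x3 matrix

sv = np.linalg.svd(A, compute_uv=False)            # singular values, descending
ev = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]    # eigenvalues of A*A, descending

print(np.allclose(sv, np.sqrt(ev)))                           # sigma_i = sqrt(lambda_i)
print(np.allclose(sv, np.linalg.svd(A.T, compute_uv=False)))  # same singular values as A*
```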

Page 9

■ example: consider T : P2(R) → P1(R) such that T(f) = f′ and ⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx.

To find the bases in the theorem, we need to work with the matrix representations relative to "orthonormal bases".

Let β = { √(1/2), √(3/2)·x, √(5/8)·(3x² − 1) } and γ = { √(1/2), √(3/2)·x }: an arbitrary choice.

⇒ A = [T]γβ =
  [ 0   √3    0  ]
  [ 0    0   √15 ]

⇒ A*A =
  [ 0    0  ]   [ 0   √3    0  ]   [ 0   0    0 ]
  [ √3   0  ] · [ 0    0   √15 ] = [ 0   3    0 ]
  [ 0   √15 ]                      [ 0   0   15 ]

⇒ λ1 = 15, λ2 = 3, λ3 = 0: eigenvalues in decreasing order.

⇒ z1 = (0, 0, 1)ᵗ, z2 = (0, 1, 0)ᵗ, z3 = (1, 0, 0)ᵗ: corresponding eigenvectors of A*A in R³.

⇒ {v1, v2, v3}: the orthonormal basis for V in the theorem (vi is the polynomial whose coordinate vector relative to β is zi).


Page 10

⇒ σ1 = √15, σ2 = √3: singular values.

⇒ u1 = (1/σ1) T(v1) = √(3/2)·x, u2 = (1/σ2) T(v2) = √(1/2)

⇒ {u1, u2}: the orthonormal basis for W in the theorem.
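The computation above can be reproduced numerically from the matrix representation A = [T]γβ:

```python
import numpy as np

# Matrix of T(f) = f' relative to the orthonormal bases chosen above.
A = np.array([[0.0, np.sqrt(3.0), 0.0],
              [0.0, 0.0, np.sqrt(15.0)]])

print(A.T @ A)                               # diag(0, 3, 15): eigenvalues 15, 3, 0
print(np.linalg.svd(A, compute_uv=False))    # [sqrt(15), sqrt(3)]
```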

■ example: T : R² → R² is linear and invertible;

T has singular values σ1 ≥ σ2 > 0;

{v1, v2} and {u1, u2} are orthonormal bases for R² such that T(v1) = σ1 u1 and T(v2) = σ2 u2.

Then the unit circle is mapped to an ellipse, with semi-axes of lengths σ1 and σ2 along u1 and u2 (figure omitted).

Page 11

■ singular value of a matrix A: a singular value of LA

■ Theorem 6.27 (singular value decomposition theorem):

A ∈ Mm×n; rank(A) = r; A has singular values σ1 ≥ · · · ≥ σr;
Σ ∈ Mm×n is such that Σij = { σi, i = j ≤ r ;  0, else }.
Then ∃ U ∈ Mm×m and V ∈ Mn×n, both unitary, such that A = UΣV* (a singular value decomposition of A).

proof: Let T = LA : Fⁿ → Fᵐ, and apply theorem 6.26 to get orthonormal bases β = {v1, · · · , vn} and γ = {u1, · · · , um}

such that T(vi) = { σi ui, 1 ≤ i ≤ r ;  0, i > r }.

Let U = (u1, · · · , um) and V = (v1, · · · , vn).

⇒ AV = (Av1, · · · , Avn) = (σ1 u1, · · · , σr ur, 0, · · · , 0) = UΣ

⇒ A = UΣV*


Page 12

■ example:

A =
  [  3  1  1 ]
  [ -1  3  1 ]

A*A =
  [ 10   0   2 ]
  [  0  10   4 ]
  [  2   4   2 ]

AA* =
  [ 11   1 ]
  [  1  11 ]

eigenvalues of A*A: λ1 = 12, λ2 = 10, λ3 = 0

eigenvectors of A*A, normalized:

v1 = (1/√6)(1, 2, 1)ᵗ, v2 = (1/√5)(2, -1, 0)ᵗ, v3 = (1/√30)(1, 2, -5)ᵗ

σ1 = √12, σ2 = √10

u1 = (1/σ1) LA(v1) = (1/√2)(1, 1)ᵗ, u2 = (1/σ2) LA(v2) = (1/√2)(1, -1)ᵗ

Note also that the eigenvalues of AA* are λ1 = 12, λ2 = 10, and the eigenvectors of AA*, normalized, are

u1 = (1/√2)(1, 1)ᵗ, u2 = (1/√2)(1, -1)ᵗ,

so v1 = (1/σ1) LA*(u1), v2 = (1/σ2) LA*(u2); v3 cannot be computed this way (extend {v1, v2} to an orthonormal basis for R³ instead).

Page 13

V = (v1, v2, v3) =
  [ 1/√6    2/√5    1/√30 ]
  [ 2/√6   -1/√5    2/√30 ]
  [ 1/√6     0     -5/√30 ]

Σ =
  [ √12    0    0 ]
  [  0    √10   0 ]

U =
  [ 1/√2    1/√2 ]
  [ 1/√2   -1/√2 ]

A = UΣV* =
  [ 1/√2    1/√2 ]   [ √12    0    0 ]   [ 1/√6    2/√6    1/√6  ]
  [ 1/√2   -1/√2 ] · [  0    √10   0 ] · [ 2/√5   -1/√5     0    ]
                                          [ 1/√30   2/√30  -5/√30 ]
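For comparison, NumPy reproduces the singular values and the factorization of this A (the columns of U and V are unique only up to sign, so they may differ from the hand computation):

```python
import numpy as np

A = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vh = np.linalg.svd(A)              # NumPy convention: A = U @ Sigma @ Vh, Vh = V*
print(s**2)                              # [12. 10.]  ->  sigma_1 = sqrt(12), sigma_2 = sqrt(10)

Sigma = np.zeros((2, 3))
Sigma[[0, 1], [0, 1]] = s
print(np.allclose(U @ Sigma @ Vh, A))    # True
```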

■ V and W are inner product spaces; dim(V), dim(W) < ∞; T : V → W is linear; rank(T) = r. Then define the "partly invertible" restriction

L : N(T)⊥ → R(T) such that ∀x ∈ N(T)⊥, L(x) = T(x).


Page 14

■ pseudoinverse T† of T:

T† : W → V such that T†(y) = { L⁻¹(y), y ∈ R(T) ;  0, y ∈ R(T)⊥ }

T† is linear.

T† exists even when T⁻¹ does not.

T is invertible ⇒ T† = T⁻¹.

T = T0 (V → W) ⇒ T† = T0 (W → V).

T T† T = T

T† T T† = T†

T†T and TT† are self-adjoint.
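These properties can be checked numerically with NumPy's pseudoinverse on a made-up, rank-deficient matrix (a hypothetical example, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
A[:, 2] = A[:, 0] + A[:, 1]             # force rank 2, so A has no inverse

Ad = np.linalg.pinv(A)                  # A-dagger

print(np.allclose(A @ Ad @ A, A))       # T T-dagger T = T
print(np.allclose(Ad @ A @ Ad, Ad))     # T-dagger T T-dagger = T-dagger
print(np.allclose(A @ Ad, (A @ Ad).T))  # T T-dagger is self-adjoint
print(np.allclose(Ad @ A, (Ad @ A).T))  # T-dagger T is self-adjoint
```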

Page 15

■ In theorem 6.26,

{v1, · · · , vr} is a basis for N(T)⊥,
{vr+1, · · · , vn} is a basis for N(T),
{u1, · · · , ur} is a basis for R(T), and
{ur+1, · · · , um} is a basis for R(T)⊥.

Let L be the restriction of T to N(T)⊥.

⇒ L⁻¹(ui) = (1/σi) vi, 1 ≤ i ≤ r

⇒ T†(ui) = { (1/σi) vi, 1 ≤ i ≤ r ;  0, r < i ≤ m }

⇒ TT†(ui) = { ui, 1 ≤ i ≤ r ;  0, r < i ≤ m },   T†T(vi) = { vi, 1 ≤ i ≤ r ;  0, r < i ≤ n }

[handwritten note: V = N(T) ⊕ N(T)⊥, W = R(T) ⊕ R(T)⊥]

Page 16

■ example: continuing the earlier example, T : P2(R) → P1(R) such that T(f) = f′ and ⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx.

singular values: σ1 = √15, σ2 = √3

v1 = √(5/8)·(3x² − 1), v2 = √(3/2)·x, v3 = √(1/2);  u1 = √(3/2)·x, u2 = √(1/2)

T†(u1) = (1/√15) v1 = √(1/24)·(3x² − 1)

T†(u2) = (1/√3) v2 = (1/√2)·x

⇒ T†(a + bx) = T†( a√2·u2 + b√(2/3)·u1 )

= a√2·( (1/√2)·x ) + b√(2/3)·( √(1/24)·(3x² − 1) )

= −b/6 + ax + (b/2)x²
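A symbolic check of this formula: the derivative of T†(a + bx) should return a + bx, and T†(a + bx) should lie in N(T)⊥, i.e., be orthogonal to the constant functions under ⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx:

```python
import sympy as sp

a, b, x = sp.symbols('a b x', real=True)
p = -b/6 + a*x + b*x**2/2                       # claimed T-dagger(a + b x)

print(sp.simplify(sp.diff(p, x) - (a + b*x)))   # 0: T(T-dagger(a + bx)) = a + bx
print(sp.integrate(p, (x, -1, 1)))              # 0: p is orthogonal to N(T) (the constants)
```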


Page 1

Review

Singular value decomposition and pseudoinverse

■ generalization:

normal/self-adjoint operator ⇒ linear transformation
T : V → V ⇒ T : V → W
[T]β ∈ Mn×n ⇒ [T]γβ ∈ Mm×n
eigenvalue ⇒ singular value
[T]β = A = QDQ* ⇒ [T]γβ = A = UΣV*

■ For convenience, we assume F = R or C, and by unitary we mean either unitary (C) or orthogonal (R).

■ adjoint of a linear transformation T : V → W : T* : W → V such that ⟨T(x), y⟩W = ⟨x, T*(y)⟩V

Page 2

■ Theorem 6.26 (singular value theorem): V and W are inner product spaces; dim(V), dim(W) < ∞; T : V → W is linear; rank(T) = r.

Then ∃ scalars σ1 ≥ · · · ≥ σr > 0 such that

1. ∃ an orthonormal basis {v1, · · · , vn} for V with T*T(vi) = { σi² vi, 1 ≤ i ≤ r ;  0, i > r }.

2. ∃ an orthonormal basis {u1, · · · , um} for W with T(vi) = { σi ui, 1 ≤ i ≤ r ;  0, i > r }.

■ singular values of T: σ1, · · · , σk, where k = min(m, n).

■ The singular values are unique to T, but the bases are not.

■ The singular values of T and T* are identical.

■ The roles of T and T* can be interchanged.

Page 3

■ singular value of A: that of LA; σi = √λi(A*A) = √λi(AA*).

■ Theorem 6.27 (singular value decomposition theorem):

A ∈ Mm×n; rank(A) = r; A has singular values σ1 ≥ · · · ≥ σr;
Σ ∈ Mm×n is such that Σij = { σi, i = j ≤ r ;  0, else }. Then

∃ U ∈ Mm×m and V ∈ Mn×n, both unitary, such that A = UΣV* (a singular value decomposition of A).

proof: Let T = LA : Fⁿ → Fᵐ, and apply theorem 6.26 to get orthonormal bases β = {v1, · · · , vn} and γ = {u1, · · · , um}

such that T(vi) = { σi ui, 1 ≤ i ≤ r ;  0, i > r }.

Let U = (u1, · · · , um) and V = (v1, · · · , vn).

⇒ AV = (Av1, · · · , Avn) = (σ1 u1, · · · , σr ur, 0, · · · , 0) = UΣ

⇒ A = UΣV*

Page 4

■ T : V → W is linear; rank(T) = r; then define the "partly invertible" restriction

L : N(T)⊥ → R(T) such that ∀x ∈ N(T)⊥, L(x) = T(x).

■ pseudoinverse T† of T:

T† : W → V such that T†(y) = { L⁻¹(y), y ∈ R(T) ;  0, y ∈ R(T)⊥ }

■ T† is linear.

■ T† exists even when T⁻¹ does not.

■ T is invertible ⇒ T† = T⁻¹.

■ T = T0 (V → W) ⇒ T† = T0 (W → V).

■ T T† T = T,  T† T T† = T†

■ T†T and TT† are self-adjoint.

Page 5

■ In theorem 6.26,

{v1, · · · , vr} is a basis for N(T)⊥,
{vr+1, · · · , vn} is a basis for N(T),
{u1, · · · , ur} is a basis for R(T), and
{ur+1, · · · , um} is a basis for R(T)⊥.

Let L be the restriction of T to N(T)⊥.

⇒ L⁻¹(ui) = (1/σi) vi, 1 ≤ i ≤ r

⇒ T†(ui) = { (1/σi) vi, 1 ≤ i ≤ r ;  0, r < i ≤ m }

⇒ TT†(ui) = { ui, 1 ≤ i ≤ r ;  0, r < i ≤ m },   T†T(vi) = { vi, 1 ≤ i ≤ r ;  0, r < i ≤ n }

■ [End of Review]

Page 6

■ pseudoinverse A† of a matrix A: defined by LA† = (LA)†

■ Theorem 6.29: A ∈ Mm×n; rank(A) = r; A = UΣV*; σ1 ≥ · · · ≥ σr > 0 are the singular values of A;

Σ† ∈ Mn×m is such that (Σ†)ij = { 1/σi, i = j ≤ r ;  0, else }. Then

A† = VΣ†U*, and this is a singular value decomposition of A†.

■ A ∈ Mm×n ⇒ A† ∈ Mn×m

■ Σ† is the pseudoinverse of Σ

■ AA†A = UΣV* VΣ†U* UΣV* = UΣΣ†ΣV* = UΣV* = A

■ AA† = UΣΣ†U* and A†A = VΣ†ΣV* are self-adjoint.

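A numerical sketch of Theorem 6.29 for the running example: build A† = VΣ†U* from an SVD of A and compare it with NumPy's built-in pseudoinverse (and with the A† computed on the next page):

```python
import numpy as np

A = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vh = np.linalg.svd(A)                    # A = U Sigma V*
Sigma_dag = np.zeros((3, 2))
Sigma_dag[[0, 1], [0, 1]] = 1.0 / s            # reciprocals of the nonzero singular values

A_dag = Vh.T @ Sigma_dag @ U.T                 # V Sigma-dagger U* (real case)
print(np.allclose(A_dag, np.linalg.pinv(A)))   # True
print(60 * A_dag)                              # [[17 -7] [4 16] [5 5]], as on the next page
```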

Page 7

■ example: Continuing the earlier example,

A =
  [  3  1  1 ]
  [ -1  3  1 ]

A*A =
  [ 10   0   2 ]
  [  0  10   4 ]
  [  2   4   2 ]

AA* =
  [ 11   1 ]
  [  1  11 ]

A = UΣV* =
  [ 1/√2    1/√2 ]   [ √12    0    0 ]   [ 1/√6    2/√6    1/√6  ]
  [ 1/√2   -1/√2 ] · [  0    √10   0 ] · [ 2/√5   -1/√5     0    ]
                                          [ 1/√30   2/√30  -5/√30 ]

A† = VΣ†U* =
  [ 1/√6    2/√5    1/√30 ]   [ 1/√12     0   ]   [ 1/√2    1/√2 ]
  [ 2/√6   -1/√5    2/√30 ] · [   0    1/√10  ] · [ 1/√2   -1/√2 ]
  [ 1/√6     0     -5/√30 ]   [   0       0   ]

A† = (1/60) ·
  [ 17  -7 ]
  [  4  16 ]
  [  5   5 ]

AA† =
  [ 1  0 ]
  [ 0  1 ]

A†A = (1/60) ·
  [ 58  -4  10 ]
  [ -4  52  20 ]
  [ 10  20  10 ]

Page 8

■ Lemma 6.30: V and W are inner product spaces;

dim(V), dim(W) < ∞; T : V → W is linear. Then

1. T†T is the orthogonal projection of V on N(T)⊥.

2. TT† is the orthogonal projection of W on R(T).

proof. Let

L : N(T)⊥ → R(T) be such that ∀x ∈ N(T)⊥, L(x) = T(x).

"1": T†T(x) = { L⁻¹L(x) = x, x ∈ N(T)⊥ ;  T†(0) = 0, x ∈ N(T) }.

"2" is similar.

Page 9

■ Theorem 6.30: A ∈ Mm×n; b ∈ Fᵐ; and z = A†b. Then

1. If Ax = b has a solution, then z is the unique minimal solution (the solution with minimum norm).

2. If Ax = b has no solution, then

∀y ∈ Fⁿ, ||Az − b|| ≤ ||Ay − b|| (best approximate solution), with equality if and only if Az = Ay.

Furthermore, Az = Ay ⇒ ||z|| ≤ ||y|| (minimum norm), with equality if and only if z = y (uniqueness).

proof:

"1": Assume Ax = b has a solution y.

⇒ Az = AA†b = LALA†(b) = b   [∵ b ∈ R(LA); Lemma 6.30]

⇒ z is a solution to Ax = b.

⇒ for every solution y, LA†LA(y) = LA†(b) = A†b = z

⇒ z is the orthogonal projection of y on N(LA)⊥   [∵ Lemma 6.30]

Page 10

⇒ z is the unique minimal solution.

[N(LA)⊥ = R(LA*) ⇒ z = A*u with AA*u = b; see Thm 6.13]

"2": Assume Ax = b has no solution, i.e., b ∉ R(LA).

⇒ Az = AA†b = LALA†(b) ≠ b

⇒ Az is the orthogonal projection of b on R(LA).   [∵ Lemma 6.30]

⇒ ∀y ∈ Fⁿ, ||Az − b|| ≤ ||Ay − b||,

with equality if and only if Az = Ay.   [orthogonal projection]

Now assume Az = Ay = c.

⇒ A†c = A†Az = A†AA†b = A†b = z

⇒ z is the unique minimal solution to Ax = c.   [by "1"]
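A small numerical illustration of case 1 with a hypothetical underdetermined system (the A and b below are made up): z = A†b solves Ax = b and has the smallest norm among all solutions.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

z = np.linalg.pinv(A) @ b                       # z = A-dagger b
print(np.allclose(A @ z, b))                    # True: z solves Ax = b

y = z + np.array([1.0, -1.0, 1.0])              # another solution: (1, -1, 1) spans N(A)
print(np.allclose(A @ y, b))                    # True
print(np.linalg.norm(z) < np.linalg.norm(y))    # True: z has minimum norm
```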


Page 11

Ch. 7 Jordan canonical form

■ V is a vector space; dim(V) < ∞; T : V → V is linear but "not diagonalizable"; fT(t) splits. Then ∃ a basis β for V, called a Jordan canonical basis for T, such that

[T]β =
  [ A1  O  · · ·  O  ]
  [ O   A2 · · ·  O  ]
  [ ...     ...      ]
  [ O   O  · · ·  Ak ]

with each block of the form

Ai =
  [ λi  1   0  · · ·  0   0  ]
  [ 0   λi  1  · · ·  0   0  ]
  [ 0   0   λi · · ·  0   0  ]
  [ ...          ...         ]
  [ 0   0   0  · · ·  λi  1  ]
  [ 0   0   0  · · ·  0   λi ]

■ [T]β is called a Jordan canonical form of T.

■ Ai is called a Jordan canonical block.

■ If T is diagonalizable, then all the Jordan canonical blocks become 1 × 1, making the Jordan canonical form diagonal.

Page 12

■ example: T : C⁸ → C⁸; β = {v1, · · · , v8} is a Jordan canonical basis for T. Then

[T]β =
  [ 2 1 0 0 0 0 0 0 ]
  [ 0 2 1 0 0 0 0 0 ]
  [ 0 0 2 0 0 0 0 0 ]
  [ 0 0 0 2 0 0 0 0 ]
  [ 0 0 0 0 3 1 0 0 ]
  [ 0 0 0 0 0 3 0 0 ]
  [ 0 0 0 0 0 0 0 1 ]
  [ 0 0 0 0 0 0 0 0 ]

is a Jordan canonical form of T.

fT(t) = (t − 2)⁴(t − 3)² t²

v1, v4, v5 and v7 are eigenvectors; v2, v3, v6 and v8 are generalized eigenvectors.

Page 13

■ generalized eigenvector x of T corresponding to λ:

(T − λI)ᵖ(x) = 0 for some positive integer p

■ Theorem 7.4 (portion): dim(V) < ∞; T : V → V is linear; fT(t) splits. Then there exists a basis for V consisting of generalized eigenvectors of T.

■ Corollary 7.7.1: dim(V) < ∞; T : V → V is linear; fT(t) splits.

Then T has a Jordan canonical form.

■ Corollary 7.7.2: A ∈ Mn×n; fA(t) splits. Then A is similar to a Jordan canonical form.

Page 14

■ example:

A =
  [  3   1  -2 ]
  [ -1   0   5 ]
  [ -1  -1   4 ]

⇒ f(t) = det(A − tI) = −(t − 3)(t − 2)²

λ1 = 3 : (A − 3I)v1 = 0 ⇒ v1 = (−1, 2, 1)ᵗ

λ2 = 2 : (A − 2I)v2 = 0 ⇒ v2 = (1, −3, −1)ᵗ, the only independent eigenvector

(A − 2I)²v3 = 0 with (A − 2I)v3 = v2 ⇒ v3 = (−1, 2, 0)ᵗ, a generalized eigenvector

⇒ β = {v1, v2, v3} ⇒ J = [LA]β =
  [ 3  0  0 ]
  [ 0  2  1 ]
  [ 0  0  2 ]

Q = (v1, v2, v3) =
  [ -1   1  -1 ]
  [  2  -3   2 ]
  [  1  -1   0 ]

⇒ A = QJQ⁻¹, J = Q⁻¹AQ

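The example can be reproduced with SymPy's jordan_form (the transition matrix it returns may order or scale the Jordan basis vectors differently from Q above, but the block structure is the same):

```python
import sympy as sp

A = sp.Matrix([[ 3,  1, -2],
               [-1,  0,  5],
               [-1, -1,  4]])

P, J = A.jordan_form()                  # A = P J P**(-1)
print(J)                                # a 1x1 block for lambda = 3, a 2x2 block for lambda = 2
print(sp.simplify(A - P*J*P.inv()))     # zero matrix
```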

Page 15

Conclusion

■ Linear systems, linear control systems, system applications (communication, energy, mechanical, electrical power, signal, ...)

■ Optimization, machine learning
