Page 1
Review
Theorem 6.24: V is an inner product space; T : V → V is linear.
Then T is an orthogonal projection ⇔ T² = T = T∗.
Proof:
"⇐": Assume T² = T = T∗ ⇒ T is a projection.
Let x ∈ R(T) and y ∈ N(T) ⇒ T(x) = x, T(y) = 0.
⇒ ⟨x, y⟩ = ⟨T(x), y⟩ = ⟨x, T∗(y)⟩ = ⟨x, T(y)⟩ = ⟨x, 0⟩ = 0 ⇒ x ∈ N(T)^⊥ ⇒ R(T) ⊆ N(T)^⊥ (1)
Let x ∈ N(T)^⊥.
⇒ x = x₁ + x₂, x₁ ∈ R(T), x₂ ∈ N(T) [projection]
⇒ 0 = ⟨x, x₂⟩ = ⟨x₁, x₂⟩ + ⟨x₂, x₂⟩ = ‖x₂‖² [∵ x₁ ∈ N(T)^⊥ by (1)]
⇒ x₂ = 0 ⇒ x = x₁ ∈ R(T) ⇒ N(T)^⊥ ⊆ R(T) ⇒ N(T)^⊥ = R(T) [(1)]
Page 2
Theorem 6.25 (the spectral theorem): V is an inner product space over F; dim(V) < ∞; T : V → V is a linear operator with
distinct eigenvalues (the spectrum) λ₁, · · · , λ_k,
corresponding eigenspaces W₁, · · · , W_k, and orthogonal projections T₁, · · · , T_k on W₁, · · · , W_k;
T is normal if F = C and self-adjoint if F = R. Then the following statements are true.
1. V = W₁ ⊕ · · · ⊕ W_k.
2. W_i^⊥ = ⊕_{j≠i} W_j.
3. T_iT_j = T₀ (the zero operator) for i ≠ j.
4. T₁ + · · · + T_k = I.
5. T = λ₁T₁ + · · · + λ_kT_k: spectral decomposition
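Statements 3-5 can be checked numerically. A minimal NumPy sketch; the 2×2 symmetric matrix is an arbitrary choice, not from the lecture:

```python
import numpy as np

# A real symmetric (self-adjoint) matrix with distinct eigenvalues 1 and 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])

# eigh returns eigenvalues in ascending order with orthonormal eigenvectors.
lam, Q = np.linalg.eigh(A)

# Orthogonal projection onto each one-dimensional eigenspace: P_i = v_i v_i^t.
P = [np.outer(Q[:, i], Q[:, i]) for i in range(len(lam))]

assert np.allclose(P[0] @ P[1], 0)                    # 3. T_i T_j = T_0, i != j
assert np.allclose(P[0] + P[1], np.eye(2))            # 4. T_1 + ... + T_k = I
assert np.allclose(lam[0] * P[0] + lam[1] * P[1], A)  # 5. spectral decomposition
```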
Page 3
Corollary 6.25.2: V is an inner product space over C; dim(V) < ∞; T : V → V is linear. Then T is unitary ⇔ T is normal and |λ| = 1 for every eigenvalue λ of T.
Proof.
"⇒" If T is unitary, then T is normal and every eigenvalue of T has absolute value 1 (∵ ‖T(x)‖ = ‖x‖).
Corollary 6.25.3: V is an inner product space over C; dim(V) < ∞; T : V → V is normal. Then T is self-adjoint ⇔ every eigenvalue of T is real.
Proof.
"⇒" T∗ = T ⇒ T∗(v_i) = λ̄_i v_i = T(v_i) = λ_i v_i ⇒ λ_i is real.
Page 4
Reflection is an example that is both self-adjoint and unitary:
T =
( cos 2θ   sin 2θ
  sin 2θ  −cos 2θ )
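A quick numerical check of this claim; the angle is arbitrary and NumPy is assumed:

```python
import numpy as np

theta = 0.3  # arbitrary angle
c, s = np.cos(2 * theta), np.sin(2 * theta)
T = np.array([[c, s], [s, -c]])  # reflection about the line at angle theta

assert np.allclose(T, T.T)                              # self-adjoint: T = T*
assert np.allclose(T @ T.T, np.eye(2))                  # unitary (orthogonal): TT* = I
assert np.allclose(np.linalg.eigvalsh(T), [-1.0, 1.0])  # real eigenvalues of modulus 1
```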
[End of Review]
II.
Page 5
Singular value decomposition and pseudoinverse
generalization:
normal/self-adjoint operator ⇒ linear transformation T : V → V ⇒ T : V → W
[T]_β ∈ M_{n×n} ⇒ [T]^γ_β ∈ M_{m×n}; eigenvalue ⇒ singular value
[T]_β = A = QDQ∗ ⇒ [T]^γ_β = A = UΣV∗
For convenience, we assume F = R or C, and by unitary we mean either unitary (C) or orthogonal (R).
adjoint of a linear transformation T : V → W: T∗ : W → V such that ⟨T(x), y⟩_W = ⟨x, T∗(y)⟩_V
Page 6
Theorem 6.26 (singular value theorem): V and W are inner product spaces; dim(V), dim(W) < ∞; T : V → W is linear; rank(T) = r.
Then ∃ scalars σ₁ ≥ · · · ≥ σ_r > 0 such that
1. ∃ an orthonormal basis {v₁, · · · , v_n} for V such that T∗T(v_i) = σ_i²v_i for 1 ≤ i ≤ r and T∗T(v_i) = 0 for i > r.
2. ∃ an orthonormal basis {u₁, · · · , u_m} for W such that T(v_i) = σ_iu_i for 1 ≤ i ≤ r and T(v_i) = 0 for i > r.
proof: T∗T is self-adjoint.
⇒ ∃ an orthonormal basis β = {v₁, · · · , v_n} for V consisting of eigenvectors of T∗T with corresponding eigenvalues λ_i. [Thm 6.17]
⇒ ⟨T(v_i), T(v_i)⟩_W = ⟨T∗T(v_i), v_i⟩_V = ⟨λ_iv_i, v_i⟩_V = λ_i⟨v_i, v_i⟩_V = λ_i
⇒ λ_i ≥ 0
Page 7
rank(T∗T) = rank([T∗T]_β) = rank([T∗]^β_γ [T]^γ_β) for some orthonormal basis γ of W
= rank(([T]^γ_β)∗[T]^γ_β) [exercise 6.3.15]
= rank([T]^γ_β) [Lemma 6.12.2]
= rank(T)
⇒ λ₁ ≥ · · · ≥ λ_r > 0 and λ_i = 0 for i > r, after reordering. We have proven "1" if we set σ_i = √λ_i.
Let u_i = (1/σ_i)T(v_i), i = 1, · · · , r.
⇒ ⟨u_i, u_j⟩_W = ⟨(1/σ_i)T(v_i), (1/σ_j)T(v_j)⟩_W
= (1/(σ_iσ_j))⟨T∗T(v_i), v_j⟩_V
= (1/(σ_iσ_j))⟨λ_iv_i, v_j⟩_V
= (σ_i²/(σ_iσ_j))⟨v_i, v_j⟩_V = δ_ij
⇒ {u₁, · · · , u_r} is orthonormal.
⇒ It extends to an orthonormal basis {u₁, · · · , u_m} for W.
Page 8
⇒ T(v_i) = σ_iu_i for 1 ≤ i ≤ r, and T(v_i) = 0 for i > r [∵ λ_i = 0]
singular values of T: σ₁, · · · , σ_k, where k = min(m, n), in the theorem.
The singular values are unique to T, but the bases are not.
The singular values of T and T∗ are identical.
The theorem is symmetric, i.e., the roles of T and T∗ can be interchanged.
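In matrix terms, these facts say the singular values of A are the square roots of the eigenvalues of A∗A (equivalently of AA∗). A sketch with a random matrix, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # a generic real 4x3 matrix

sv = np.linalg.svd(A, compute_uv=False)  # singular values, descending
lam = np.linalg.eigvalsh(A.T @ A)[::-1]  # eigenvalues of A*A, descending

assert np.allclose(sv, np.sqrt(np.clip(lam, 0.0, None)))
# A and A* (here A^t) have the same singular values.
assert np.allclose(sv, np.linalg.svd(A.T, compute_uv=False))
```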
Page 9
example: consider T : P₂(R) → P₁(R) such that T(f) = f′ and ⟨f, g⟩ = ∫₋₁¹ f(x)g(x)dx.
To find the bases in the theorem, we need to work with the matrix representations relative to "orthonormal bases".
Let β = {√(1/2), √(3/2)x, √(5/8)(3x² − 1)}, γ = {√(1/2), √(3/2)x}, an arbitrary choice.
⇒ A = [T]^γ_β =
( 0  √3  0
  0  0   √15 )
⇒ A∗A =
( 0   0
  √3  0
  0   √15 )
( 0  √3  0
  0  0   √15 )
=
( 0  0  0
  0  3  0
  0  0  15 )
⇒ λ₁ = 15, λ₂ = 3, λ₃ = 0: eigenvalues in decreasing order.
⇒ z₁ = (0, 0, 1)ᵗ, z₂ = (0, 1, 0)ᵗ, z₃ = (1, 0, 0)ᵗ: corresponding eigenvectors of A∗A in R³
⇒ {v₁, v₂, v₃} = {√(5/8)(3x² − 1), √(3/2)x, √(1/2)}: the orthonormal basis for V in the theorem.
Page 10
⇒ σ₁ = √15, σ₂ = √3: singular values.
⇒ u₁ = (1/σ₁)T(v₁) = √(3/2)x, u₂ = (1/σ₂)T(v₂) = √(1/2)
⇒ {u₁, u₂}: the orthonormal basis for W in the theorem.
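The hand computation above can be confirmed numerically from the matrix A = [T]^γ_β (NumPy assumed):

```python
import numpy as np

# [T]^gamma_beta for T(f) = f' relative to the orthonormal bases above
A = np.array([[0.0, np.sqrt(3.0), 0.0],
              [0.0, 0.0, np.sqrt(15.0)]])

sv = np.linalg.svd(A, compute_uv=False)
assert np.allclose(sv, [np.sqrt(15.0), np.sqrt(3.0)])  # sigma_1 = sqrt(15), sigma_2 = sqrt(3)
```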
example: T : R² → R² is linear and invertible;
T has singular values σ₁ ≥ σ₂ > 0;
{v₁, v₂} and {u₁, u₂} are orthonormal bases for R² such that T(v₁) = σ₁u₁ and T(v₂) = σ₂u₂.
Then the unit circle is mapped to an ellipse whose semi-axes have lengths σ₁ and σ₂ and point along u₁ and u₂.
Page 11
singular value of a matrix A: singular value of L_A
Theorem 6.27 (singular value decomposition theorem):
A ∈ M_{m×n}; rank(A) = r; A has singular values σ₁ ≥ · · · ≥ σ_r;
Σ ∈ M_{m×n} is such that Σ_ij = σ_i if i = j ≤ r, and 0 otherwise. Then ∃U ∈ M_{m×m} and V ∈ M_{n×n}, both unitary, such that A = UΣV∗ (a singular value decomposition of A).
proof: Let T = L_A : Fⁿ → Fᵐ, and apply Theorem 6.26 to get orthonormal bases β = {v₁, · · · , v_n} and γ = {u₁, · · · , u_m}
such that T(v_i) = σ_iu_i for 1 ≤ i ≤ r and T(v_i) = 0 for i > r.
Let U = (u₁, · · · , u_m) and V = (v₁, · · · , v_n).
⇒ AV = (Av₁, · · · , Av_n) = (σ₁u₁, · · · , σ_ru_r, 0, · · · , 0) = UΣ
⇒ A = UΣV∗
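In practice the decomposition of Theorem 6.27 is computed numerically; a sketch with NumPy's built-in SVD (random matrix, my own example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))

# full_matrices=True returns square unitary U (m x m) and Vh = V* (n x n).
U, s, Vh = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((2, 3))
Sigma[:2, :2] = np.diag(s)  # singular values on the diagonal of an m x n matrix

assert np.allclose(U @ U.T, np.eye(2))    # U unitary
assert np.allclose(Vh @ Vh.T, np.eye(3))  # V unitary
assert np.allclose(U @ Sigma @ Vh, A)     # A = U Sigma V*
```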
Page 12
example: A =
( 3  1  1
 −1  3  1 ),
A∗A =
( 10  0  2
  0  10  4
  2   4  2 ),
AA∗ =
( 11  1
  1  11 )
eigenvalues of A∗A: λ₁ = 12, λ₂ = 10, λ₃ = 0
eigenvectors of A∗A, normalized:
v₁ = (1/√6)(1, 2, 1)ᵗ, v₂ = (1/√5)(2, −1, 0)ᵗ, v₃ = (1/√30)(1, 2, −5)ᵗ
σ₁ = √12, σ₂ = √10
u₁ = (1/σ₁)L_A(v₁) = (1/√2)(1, 1)ᵗ, u₂ = (1/σ₂)L_A(v₂) = (1/√2)(1, −1)ᵗ
Note also that the eigenvalues of AA∗ are λ₁ = 12, λ₂ = 10, with normalized eigenvectors
u₁ = (1/√2)(1, 1)ᵗ, u₂ = (1/√2)(1, −1)ᵗ;
then v₁ = (1/σ₁)L_{A∗}(u₁), v₂ = (1/σ₂)L_{A∗}(u₂), and v₃ cannot be computed this way.
Page 13
V =
( 1/√6   2/√5    1/√30
  2/√6  −1/√5    2/√30
  1/√6   0      −5/√30 ),
Σ =
( √12  0    0
  0    √10  0 ),
U =
( 1/√2   1/√2
  1/√2  −1/√2 )
A = UΣV∗ =
( 1/√2   1/√2
  1/√2  −1/√2 )
( √12  0    0
  0    √10  0 )
( 1/√6   2/√6    1/√6
  2/√5  −1/√5    0
  1/√30  2/√30  −5/√30 )
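The factors written out above can be checked against NumPy:

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0], [-1.0, 3.0, 1.0]])

U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Sigma = np.array([[np.sqrt(12.0), 0.0, 0.0], [0.0, np.sqrt(10.0), 0.0]])
V = np.column_stack([np.array([1.0, 2.0, 1.0]) / np.sqrt(6.0),
                     np.array([2.0, -1.0, 0.0]) / np.sqrt(5.0),
                     np.array([1.0, 2.0, -5.0]) / np.sqrt(30.0)])

assert np.allclose(U @ Sigma @ V.T, A)  # A = U Sigma V*
assert np.allclose(np.linalg.svd(A, compute_uv=False), [np.sqrt(12.0), np.sqrt(10.0)])
```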
V and W are inner product spaces; dim(V), dim(W) < ∞;
T : V → W is linear; rank(T) = r; then define the "partly invertible" restriction
L : N(T)^⊥ → R(T) such that ∀x ∈ N(T)^⊥, L(x) = T(x).
Page 14
pseudoinverse T† of T:
T† : W → V such that T†(y) = L⁻¹(y) for y ∈ R(T), and T†(y) = 0 for y ∈ R(T)^⊥
T† is linear.
T† exists even when T⁻¹ does not.
T is invertible ⇒ T† = T⁻¹.
T = T₀ (V → W) ⇒ T† = T₀ (W → V).
TT†T = T
T†TT† = T†
TT† and T†T are self-adjoint.
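For matrices, these last four properties are exactly the Moore-Penrose conditions, and `np.linalg.pinv` computes the pseudoinverse; a sketch with a random rank-deficient matrix (my own example):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # 4x5, rank <= 3, not invertible

P = np.linalg.pinv(A)  # matrix version of T-dagger

assert np.allclose(A @ P @ A, A)      # T T-dagger T = T
assert np.allclose(P @ A @ P, P)      # T-dagger T T-dagger = T-dagger
assert np.allclose(A @ P, (A @ P).T)  # T T-dagger self-adjoint
assert np.allclose(P @ A, (P @ A).T)  # T-dagger T self-adjoint
```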
Page 15
In Theorem 6.26,
{v₁, · · · , v_r} is a basis for N(T)^⊥,
{v_{r+1}, · · · , v_n} is a basis for N(T),
{u₁, · · · , u_r} is a basis for R(T), and
{u_{r+1}, · · · , u_m} is a basis for R(T)^⊥.
[V = N(T)^⊥ ⊕ N(T); W = R(T) ⊕ R(T)^⊥]
Let L be the restriction of T to N(T)^⊥.
⇒ L⁻¹(u_i) = (1/σ_i)v_i, 1 ≤ i ≤ r
⇒ T†(u_i) = (1/σ_i)v_i for 1 ≤ i ≤ r, and T†(u_i) = 0 for r < i ≤ m
⇒ TT†(u_i) = u_i for 1 ≤ i ≤ r, 0 for r < i ≤ m; T†T(v_i) = v_i for 1 ≤ i ≤ r, 0 for r < i ≤ n
Page 16
example: continuing the earlier example, T : P₂(R) → P₁(R) such that T(f) = f′ and ⟨f, g⟩ = ∫₋₁¹ f(x)g(x)dx.
singular values: σ₁ = √15, σ₂ = √3
v₁ = √(5/8)(3x² − 1), v₂ = √(3/2)x, v₃ = √(1/2); u₁ = √(3/2)x, u₂ = √(1/2)
T†(u₁) = (1/√15)v₁ = (1/√24)(3x² − 1)
T†(u₂) = (1/√3)v₂ = (1/√2)x
⇒ T†(a + bx) = T†(a√2·u₂ + b√(2/3)·u₁)
= a√2·((1/√2)x) + b√(2/3)·((1/√24)(3x² − 1))
= −b/6 + ax + (b/2)x²
Page 1
Review
Singular value decomposition and pseudoinverse
generalization:
normal/selfadjoint operator ⇒ linear transformation T : V → V ⇒ T : V → W
[T ]_{β} ∈ M_{n×n} ⇒ [T ]^{γ}_{β} ∈ M_{m×n} eigenvalue ⇒ singular value
[T ]_{β} = A = QDQ^{∗} ⇒ [T ]^{γ}_{β} = A = U ΣV ^{∗}
For convenience, we assume F = R or C and by unitary we mean either unitary (C) or orthogonal (R).
adjoint of a linear transformation T : V → W: T∗ : W → V such that ⟨T(x), y⟩_W = ⟨x, T∗(y)⟩_V
Page 2
Theorem 6.26 (singular value theorem): V and W are inner product spaces; dim(V), dim(W) < ∞; T : V → W is linear; rank(T) = r.
Then ∃ scalars σ_1 ≥ · · · ≥ σ_r > 0 such that
1. ∃ an orthonormal basis {v_1, · · · , v_n} for V such that T∗T(v_i) = σ_i²v_i for 1 ≤ i ≤ r and T∗T(v_i) = 0 for i > r.
2. ∃ an orthonormal basis {u_1, · · · , u_m} for W such that T(v_i) = σ_iu_i for 1 ≤ i ≤ r and T(v_i) = 0 for i > r.
Singular value of T : σ_{1}, · · · , σ_{k}, where k =min(m, n).
The singular values are unique to T , but the bases are not.
The singular values of T and T^{∗} are identical.
The roles of T and T^{∗} can be interchanged
Page 3
singular value of A: that of L_A; σ_i = √(λ_i(A∗A)) = √(λ_i(AA∗)).
Theorem 6.27 (singular value decomposition theorem):
A ∈ M_{m×n}; rank(A) = r; A has singular values σ_1 ≥ · · · ≥ σ_r; Σ ∈ M_{m×n} is such that Σ_ij = σ_i if i = j ≤ r, and 0 otherwise. Then
∃U ∈ M_{m×m} and V ∈ M_{n×n}, both unitary, such that A = UΣV∗ (a singular value decomposition of A).
proof: Let T = L_A : Fⁿ → Fᵐ, and apply Theorem 6.26 to get orthonormal bases β = {v_1, · · · , v_n} and γ = {u_1, · · · , u_m}
such that T(v_i) = σ_iu_i for 1 ≤ i ≤ r and T(v_i) = 0 for i > r.
Let U = (u_{1}, · · · , u_{m}) and V = (v_{1}, · · · , v_{n}).
⇒ AV = (Av_1, · · · , Av_n) = (σ_1u_1, · · · , σ_ru_r, 0, · · · , 0) = U Σ
⇒ A = U ΣV ^{∗}
Page 4
V and W are inner product spaces; dim(V), dim(W) < ∞;
T : V → W is linear; rank(T) = r; then define the "partly invertible" restriction
L : N(T)^⊥ → R(T) such that ∀x ∈ N(T)^⊥, L(x) = T(x).
pseudoinverse T^{†} of T :
T† : W → V such that T†(y) = L⁻¹(y) for y ∈ R(T), and T†(y) = 0 for y ∈ R(T)^⊥
T ^{†} is linear.
T ^{†} exists when T^{−1} does not.
T is invertible ⇒ T ^{†} = T^{−1}.
T = T_0 (V → W) ⇒ T† = T_0 (W → V).
T T^{†}T = T , T^{†}T T^{†} = T ^{†}
T T^{†} and T^{†}T are selfadjoint.
Page 5
In theorem 6.26,
{v_1, · · · , v_r} is a basis for N(T)^⊥, {v_{r+1}, · · · , v_n} is a basis for N(T), {u_1, · · · , u_r} is a basis for R(T), and {u_{r+1}, · · · , u_m} is a basis for R(T)^⊥. Let L be the restriction of T to N(T)^⊥.
⇒ L⁻¹(u_i) = (1/σ_i)v_i, 1 ≤ i ≤ r
⇒ T†(u_i) = (1/σ_i)v_i for 1 ≤ i ≤ r, and T†(u_i) = 0 for r < i ≤ m
⇒ TT†(u_i) = u_i for 1 ≤ i ≤ r, 0 for r < i ≤ m; T†T(v_i) = v_i for 1 ≤ i ≤ r, 0 for r < i ≤ n
[End of Review]
Page 6
pseudoinverse A† of a matrix A: L_{A†} = (L_A)†
Theorem 6.29: A ∈ M_{m×n}; rank(A) = r, A = U ΣV ^{∗}; σ_{1} ≥ · · · ≥ σ_{r} > 0 are singular values of A;
Σ† ∈ M_{n×m} is such that Σ†_ij = 1/σ_i if i = j ≤ r, and 0 otherwise. Then
A^{†} = V Σ^{†}U^{∗}, and this is a singular value decomposition of A^{†}.
A ∈ M_{m×n} ⇒ A^{†} ∈ M_{n×m}
Σ^{†} is the pseudoinverse of Σ
AA^{†}A = U ΣV ^{∗}V Σ^{†}U^{∗}U ΣV ^{∗} = U ΣΣ^{†}ΣV ^{∗} = A
AA^{†} = U ΣΣ^{†}U^{∗} and A^{†}A = V Σ^{†}ΣV ^{∗} are selfadjoint.
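Theorem 6.29 translates directly into code: build Σ† from the SVD and multiply. A sketch in the real case with a random full-row-rank matrix (my own example, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 5))  # full row rank (r = 3) almost surely

U, s, Vh = np.linalg.svd(A, full_matrices=True)
Sigma_dag = np.zeros((5, 3))                    # n x m
Sigma_dag[:len(s), :len(s)] = np.diag(1.0 / s)  # 1/sigma_i on the diagonal

A_dag = Vh.T @ Sigma_dag @ U.T                  # A-dagger = V Sigma-dagger U* (real case)
assert np.allclose(A_dag, np.linalg.pinv(A))
```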
Page 7
example: Continuing the earlier example,
A =
( 3  1  1
 −1  3  1 ),
A∗A =
( 10  0  2
  0  10  4
  2   4  2 ),
AA∗ =
( 11  1
  1  11 )
A = UΣV∗ =
( 1/√2   1/√2
  1/√2  −1/√2 )
( √12  0    0
  0    √10  0 )
( 1/√6   2/√6    1/√6
  2/√5  −1/√5    0
  1/√30  2/√30  −5/√30 )
A† = VΣ†U∗ =
( 1/√6   2/√5    1/√30
  2/√6  −1/√5    2/√30
  1/√6   0      −5/√30 )
( 1/√12  0
  0      1/√10
  0      0 )
( 1/√2   1/√2
  1/√2  −1/√2 )
A† = (1/60)
( 17  −7
  4   16
  5    5 ),
AA† =
( 1  0
  0  1 ),
A†A = (1/60)
( 58  −4  10
 −4   52  20
  10  20  10 )
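The final matrices agree with NumPy's pseudoinverse:

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0], [-1.0, 3.0, 1.0]])
A_dag = np.linalg.pinv(A)

assert np.allclose(60.0 * A_dag, [[17.0, -7.0], [4.0, 16.0], [5.0, 5.0]])
assert np.allclose(A @ A_dag, np.eye(2))  # A has full row rank, so A A-dagger = I
```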
Page 8
Lemma 6.30: V and W are inner product spaces;
dim(V ), dim(W ) < ∞; T : V → W is linear. Then 1. T^{†}T is the orthogonal projection of V on N (T )^{⊥}.
2. T T^{†} is the orthogonal projection of W on R(T ).
proof. Let
L : N (T )^{⊥} → R(T ) be such that ∀x ∈ N (T )^{⊥}, L(x) = T (x).
"1": T†T(x) = L⁻¹L(x) = x for x ∈ N(T)^⊥, and T†T(x) = T†(0) = 0 for x ∈ N(T).
”2” is similar.
Page 9
Theorem 6.30: A ∈ M_{m×n}; b ∈ F^{m}; and z = A^{†}b. Then
1. If Ax = b has a solution, then z is the unique minimal solution (solution with minimum norm).
2. If Ax = b has no solution, then
∀y ∈ Fⁿ, ‖Az − b‖ ≤ ‖Ay − b‖ (best approximate solution), with equality if and only if Az = Ay.
Furthermore, Az = Ay ⇒ ‖z‖ ≤ ‖y‖ (minimum norm), with equality if and only if z = y (uniqueness).
proof:
”1”: Assume Ax = b has solution y.
⇒ Az = AA†b = L_AL†_A(b) = b [∵ b ∈ R(L_A); Lemma 6.30]
⇒ z is a solution to Ax = b.
⇒ ∀ solution y, L^{†}_{A}L_{A}(y) = L^{†}_{A}(b) = A^{†}b = z
⇒ z is the orthogonal projection of y on N (L_{A})^{⊥} [∵ Lemma 6.30].
Page 10
⇒ z is the unique minimal solution.
[N (L_{A})^{⊥} = R(L_{A}^{∗}) ⇒ z = A^{∗}u, AA^{∗}u = b, see Thm 6.13]
"2": Assume Ax = b has no solution, i.e., b ∉ R(L_A).
⇒ Az = AA†b = L_AL†_A(b) ≠ b
⇒ Az is the orthogonal projection of b on R(L_A) [∵ Lemma 6.30]
⇒ ∀y ∈ Fⁿ, ‖Az − b‖ ≤ ‖Ay − b‖,
with equality if and only if Az = Ay. [orthogonal projection]
Now assume Az = Ay = c.
⇒ A^{†}c = A^{†}Az = A^{†}AA^{†}b = A^{†}b = z
⇒ z is the unique minimal solution to Ax = c. [1]
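Part 1 of the theorem can be illustrated with a small underdetermined system (my own example, NumPy assumed): z = A†b solves Ax = b and has the smallest norm among all solutions.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])  # consistent, infinitely many solutions
b = np.array([1.0, 1.0])

z = np.linalg.pinv(A) @ b
assert np.allclose(A @ z, b)  # z is a solution

# Shift z by a null-space vector: still a solution, but with a larger norm.
y = z + np.array([1.0, -1.0, 1.0])  # (1, -1, 1) spans N(A)
assert np.allclose(A @ y, b)
assert np.linalg.norm(z) < np.linalg.norm(y)
```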
Page 11
Ch. 7 Jordan canonical form
V is a vector space; dim(V) < ∞; T : V → V is linear but "not diagonalizable"; f_T(t) splits. Then ∃ a basis β for V, called a Jordan canonical basis for T, such that
[T]_β =
( A_1  O   · · ·  O
  O    A_2 · · ·  O
  ⋮    ⋮    ⋱     ⋮
  O    O   · · ·  A_k ),
A_i =
( λ_i  1    0   · · ·  0    0
  0    λ_i  1   · · ·  0    0
  0    0    λ_i · · ·  0    0
  ⋮    ⋮    ⋮     ⋱    ⋮    ⋮
  0    0    0   · · ·  λ_i  1
  0    0    0   · · ·  0    λ_i ).
[T ]_{β} is called a Jordan canonical form of T .
A_{i} is called a Jordan canonical block.
If T is diagonalizable, then all the Jordan canonical blocks become 1 × 1, making the Jordan canonical form diagonal.
Page 12
example: T : C^{8} → C^{8}; β = {v_{1}, · · · , v_{8}} is a Jordan canonical basis for T . Then
[T ]_{β} =
( 2 1 0 0 0 0 0 0
  0 2 1 0 0 0 0 0
  0 0 2 0 0 0 0 0
  0 0 0 2 0 0 0 0
  0 0 0 0 3 1 0 0
  0 0 0 0 0 3 0 0
  0 0 0 0 0 0 0 1
  0 0 0 0 0 0 0 0 )
is a Jordan canonical form of T .
f_{T}(t) = (t − 2)^{4}(t − 3)^{2}t^{2}
v_{1}, v_{4}, v_{5} and v_{7} are eigenvectors. v_{2}, v_{3}, v_{6} and v_{8} are generalized eigenvectors.
Page 13
generalized eigenvector x of T corresponding to λ:
(T − λI)^p(x) = 0 for some positive integer p
Theorem 7.4 (portion) : dim(V ) < ∞; T : V → V is linear; f_{T}(t) splits. Then there exists a basis for V consisting of generalized eigenvectors of T .
Corollary 7.7.1: dim(V ) < ∞; T : V → V is linear; f_{T}(t) splits.
Then T has a Jordan canonical form.
Corollary 7.7.2: A ∈ M_{n×n}; f_{A}(t) splits. Then A is similar to a Jordan canonical form.
Page 14
example:
A =
( 3   1  −2
 −1   0   5
 −1  −1   4 )
⇒ f(t) = det(A − tI) = −(t − 3)(t − 2)²
λ_1 = 3: (A − 3I)v_1 = 0 ⇒ v_1 = (−1, 2, 1)ᵗ
λ_2 = 2: (A − 2I)v_2 = 0 ⇒ v_2 = (1, −3, −1)ᵗ, only one independent eigenvector
(A − 2I)²v_3 = 0 ⇒ v_3 = (−1, 2, 0)ᵗ, a generalized eigenvector chosen so that (A − 2I)v_3 = v_2
⇒ β = {v_1, v_2, v_3} ⇒ J = [L_A]_β =
( 3  0  0
  0  2  1
  0  0  2 )
Q = (v_1, v_2, v_3) =
( −1   1  −1
   2  −3   2
   1  −1   0 )
⇒ A = QJQ^{−1}, J = Q^{−1}AQ
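The claimed factorization can be verified numerically (NumPy assumed):

```python
import numpy as np

A = np.array([[3.0, 1.0, -2.0], [-1.0, 0.0, 5.0], [-1.0, -1.0, 4.0]])
J = np.array([[3.0, 0.0, 0.0], [0.0, 2.0, 1.0], [0.0, 0.0, 2.0]])
Q = np.array([[-1.0, 1.0, -1.0], [2.0, -3.0, 2.0], [1.0, -1.0, 0.0]])

assert np.allclose(A @ Q, Q @ J)                 # AQ = QJ
assert np.allclose(np.linalg.inv(Q) @ A @ Q, J)  # J = Q^{-1} A Q
```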
Page 15
Conclusion
Linear systems, linear control systems, system applications (communication, energy, mechanical, electrical power, signal, ...)
Optimization, machine learning