Page 1
Review
Gram-Schmidt orthogonalization and orthogonal complement
■■ Theorem 6.3: V is an inner product space; S = {v_1, …, v_k} ⊆ V is orthogonal (nonzero vectors). Then
y = ∑_{i=1}^{k} a_i v_i ⇒ a_j = ⟨y, v_j⟩ / ‖v_j‖², j = 1, …, k.
■■ Corollary 6.3.1: If S is orthonormal, y = ∑_{i=1}^{k} a_i v_i ⇒ a_i = ⟨y, v_i⟩ ⇒ y = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i.
■■ Corollary 6.3.2: Orthogonality (of nonzero vectors) implies linear independence.
Page 2
■■ Gram-Schmidt orthogonalization:
v_1 = w_1
v_2 = w_2 − (⟨w_2, v_1⟩ / ‖v_1‖²) v_1
v_3 = w_3 − (⟨w_3, v_1⟩ / ‖v_1‖²) v_1 − (⟨w_3, v_2⟩ / ‖v_2‖²) v_2
⋯
■■ Theorem 6.4: V is an inner product space; S = {w_1, …, w_n} ⊆ V is linearly independent. Then letting v_1 = w_1 and
v_i = w_i − ∑_{j=1}^{i−1} (⟨w_i, v_j⟩ / ‖v_j‖²) v_j, i = 2, …, n,
makes S′ = {v_1, …, v_n} orthogonal and span(S′) = span(S).
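The recursion in Theorem 6.4 translates almost line by line into code. A minimal sketch for ℝⁿ with the standard dot product (the function name and the test vectors are my own, not from the notes):

```python
def gram_schmidt(ws):
    """Orthogonalize a linearly independent list of R^n vectors (Theorem 6.4)."""
    vs = []
    for w in ws:
        # v_i = w_i - sum_j <w_i, v_j> / ||v_j||^2 * v_j
        v = list(w)
        for u in vs:
            coef = sum(a * b for a, b in zip(w, u)) / sum(b * b for b in u)
            v = [vi - coef * ui for vi, ui in zip(v, u)]
        vs.append(v)
    return vs

vs = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
dot = sum(a * b for a, b in zip(vs[0], vs[1]))
print(abs(dot) < 1e-12)  # True: the output vectors are orthogonal
```

Note that v_1 = w_1 is kept unnormalized, exactly as in the theorem; dividing each v_i by ‖v_i‖ afterwards gives an orthonormal set.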
Page 3
■■ Theorem 6.5: V is an inner product space; dim(V) < ∞. Then
1. V has an orthonormal basis β = {v_1, …, v_n}.
2. ∀x ∈ V, x = ∑_{i=1}^{n} ⟨x, v_i⟩ v_i.
■ That is, [x]_β = (⟨x, v_1⟩, …, ⟨x, v_n⟩)ᵗ.
■ 2 is called a Fourier series expansion.
■ ⟨x, v_i⟩ are called Fourier coefficients.
■ ‖x‖² = ⟨∑_{i=1}^{n} ⟨x, v_i⟩ v_i, ∑_{j=1}^{n} ⟨x, v_j⟩ v_j⟩
= ∑_{i=1}^{n} ∑_{j=1}^{n} ⟨x, v_i⟩ conj(⟨x, v_j⟩) ⟨v_i, v_j⟩ = ∑_{i=1}^{n} |⟨x, v_i⟩|² : Parseval's relation
■ example: C([0, 2π), ℂ) with ⟨f, g⟩ = (1/2π) ∫_0^{2π} f(t) conj(g(t)) dt.
β = {f_k(t) = e^{ikt} : k ∈ ℤ} is an orthonormal basis.
⟨f, f_k⟩ = (1/2π) ∫_0^{2π} f(t) e^{−ikt} dt : k-th Fourier coefficient of f
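Parseval's relation is easy to spot-check in ℂⁿ. A sketch assuming NumPy is available: a QR factorization of a random complex matrix supplies an orthonormal basis (the columns of Q), and the sum of squared Fourier coefficients matches ‖x‖²:

```python
import numpy as np

rng = np.random.default_rng(0)
# Orthonormal basis of C^3: columns q_i of Q from a QR factorization
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(M)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)

coeffs = Q.conj().T @ x               # Fourier coefficients <x, q_i> = q_i* x
print(np.allclose(Q @ coeffs, x))     # True: x = sum <x, q_i> q_i (Thm 6.5)
lhs = np.sum(np.abs(coeffs) ** 2)     # sum |<x, q_i>|^2
rhs = np.linalg.norm(x) ** 2          # ||x||^2
print(np.isclose(lhs, rhs))           # True: Parseval's relation
```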
Page 4
■■ Corollary 6.5: V is an inner product space; dim(V) < ∞; β = {v_1, …, v_n} is an orthonormal basis for V; T is a linear operator on V; A = [T]_β. Then A_ij = ⟨T(v_j), v_i⟩.
■■ orthogonal complement S⊥ of S: S⊥ = {x ∈ V : ⟨x, y⟩ = 0, ∀y ∈ S}
■ Do not confuse this with the set complement.
■ S⊥ is a subspace for any set S.
■ S ⊆ (S⊥)⊥, but if S is a subspace, then S = (S⊥)⊥.
■ {0}⊥ = V, V⊥ = {0}
■ [End of Review]
Page 5
■■ Theorem 6.6 and Corollary 6.6: V is an inner product space; W is a subspace; dim(W) < ∞; {v_1, …, v_k} is an orthonormal basis for W; y ∈ V. Then
1. y = u + z, where u ∈ W is unique and closest to y, and if dim(V) < ∞, z ∈ W⊥ is unique and closest to y.
2. u = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i.
■ proof:
Let u = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i and z = y − u.
⇒ u ∈ W
To show z ∈ W⊥, we only need to show ⟨z, v_j⟩ = 0, j = 1, …, k.
Page 6
⟨z, v_j⟩ = ⟨y − ∑_{i=1}^{k} ⟨y, v_i⟩ v_i, v_j⟩
= ⟨y, v_j⟩ − ⟨∑_{i=1}^{k} ⟨y, v_i⟩ v_i, v_j⟩ = ⟨y, v_j⟩ − ∑_{i=1}^{k} ⟨y, v_i⟩ ⟨v_i, v_j⟩
= 0 [⟨v_i, v_j⟩ = δ_ij] ⇒ z ∈ W⊥
"unique": Let y = u′ + z′, u′ ∈ W, z′ ∈ W⊥.
⇒ u − u′ = z′ − z, u − u′ ∈ W, z′ − z ∈ W⊥ [y = u + z]
⇒ u − u′ = z′ − z ∈ W ∩ W⊥ = {0}
⇒ u′ = u and z′ = z
"closest": For any x in W,
‖y − x‖² = ‖(y − u) + (u − x)‖²
= ⟨(y − u) + (u − x), (y − u) + (u − x)⟩
= ‖y − u‖² + ‖u − x‖² + ⟨y − u, u − x⟩ + ⟨u − x, y − u⟩
Page 7
⇒ ‖y − x‖² = ‖y − u‖² + ‖u − x‖² [y − u ∈ W⊥, u − x ∈ W]
⇒ ‖y − x‖² ≥ ‖y − u‖²
A similar proof shows that z is the unique element of W⊥ closest to y.
■■ orthogonal projection of y on W: P_W(y) = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i, where {v_1, …, v_k} is an orthonormal basis for W.
■ (Figure omitted: a non-orthogonal projection, along a general direct sum V = W_1 ⊕ W_2.)
■■ example: Consider the inner product space P_2(ℝ) with ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx.
We found earlier the orthonormal basis by the G-S process:
β = {v_1, v_2, v_3} = { √(1/2), √(3/2) x, √(5/8)(3x² − 1) }.
Page 8
Let f(x) = a + bx + cx².
⟨f, v_1⟩ = ∫_{−1}^{1} (a + bx + cx²) √(1/2) dx = (√2/3)(3a + c)
⟨f, v_2⟩ = ∫_{−1}^{1} (a + bx + cx²) √(3/2) x dx = √(2/3) b
⟨f, v_3⟩ = ∫_{−1}^{1} (a + bx + cx²) √(5/8)(3x² − 1) dx = (2/3)√(2/5) c
f(x) = ⟨f, v_1⟩v_1 + ⟨f, v_2⟩v_2 + ⟨f, v_3⟩v_3
= (√2/3)(3a + c) · √(1/2) + √(2/3) b · √(3/2) x + (2/3)√(2/5) c · √(5/8)(3x² − 1) = a + bx + cx²
Let W = span({v_1, v_3}) = span({√(1/2), √(5/8)(3x² − 1)}) = span({1, x²})
P_W(f) = ⟨f, v_1⟩v_1 + ⟨f, v_3⟩v_3
= (√2/3)(3a + c) · √(1/2) + (2/3)√(2/5) c · √(5/8)(3x² − 1) = a + cx²
P_{W⊥}(f) = ⟨f, v_2⟩v_2 = bx
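The projection computed above can be verified numerically for concrete coefficients. A sketch using only the standard library: polynomials are coefficient lists, and the inner product uses ∫_{−1}^{1} x^k dx = 2/(k+1) for even k and 0 for odd k, so the integration is exact up to rounding (the helper names and the choice a = 2, b = −1, c = 4 are mine):

```python
import math

# <f, g> = integral of f(x) g(x) over [-1, 1]; polynomials as coefficient
# lists [c0, c1, c2, ...] meaning c0 + c1 x + c2 x^2 + ...
def inner(f, g):
    total = 0.0
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if (i + j) % 2 == 0:          # odd powers integrate to 0
                total += fi * gj * 2.0 / (i + j + 1)
    return total

v1 = [math.sqrt(1 / 2)]
v2 = [0.0, math.sqrt(3 / 2)]
v3 = [-math.sqrt(5 / 8), 0.0, 3 * math.sqrt(5 / 8)]

a, b, c = 2.0, -1.0, 4.0                  # f(x) = 2 - x + 4x^2
f = [a, b, c]
# P_W(f) for W = span({v1, v3}) should come out as a + c x^2
proj = [0.0, 0.0, 0.0]
for v in (v1, v3):
    coef = inner(f, v)
    for k in range(len(v)):
        proj[k] += coef * v[k]
print([round(p, 10) for p in proj])       # [2.0, 0.0, 4.0], i.e. a + c x^2
```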
Page 9
If instead we let W̃ = span({v_1, v_2}) = span({√(1/2), √(3/2) x}) = span({1, x})
P_{W̃}(f) = ⟨f, v_1⟩v_1 + ⟨f, v_2⟩v_2 = (a + c/3) + bx
P_{W̃⊥}(f) = ⟨f, v_3⟩v_3 = c(x² − 1/3)
So (a + c/3) + bx is the polynomial in W̃ that is "closest" to f, and c(x² − 1/3) is the polynomial in W̃⊥ that is "closest" to f.
Note that span({1, x}) and span({x²}) are not orthogonal complements of each other, while span({(1, 0, 0), (0, 1, 0)}) and span({(0, 0, 1)}) are. Why?
Page 10
■■ Theorem 6.7: V is an inner product space; dim(V) = n; S = {v_1, …, v_k} is an orthonormal set. Then
1. S can be extended to an orthonormal basis {v_1, …, v_k, v_{k+1}, …, v_n} for V.
2. W = span(S) ⇒ {v_{k+1}, …, v_n} is a basis for W⊥.
3. For any subspace W of V, dim(V) = dim(W) + dim(W⊥).
■ proof of 2: ∀x ∈ W⊥, x = ∑_{i=1}^{n} a_i v_i [basis for V]
⇒ ⟨x, v_i⟩ = a_i = 0, i = 1, …, k [orthogonal complement]
⇒ x = ∑_{i=k+1}^{n} a_i v_i
Page 11
Adjoint
■■ The adjoint T* of a linear operator T on a finite-dimensional inner product space V is another operator on V such that for any x and y in V, ⟨T(x), y⟩ = ⟨x, T*(y)⟩.
■ On Fⁿ, this corresponds to the conjugate transpose A* = conj(A)ᵗ of the matrix A. A* is called the adjoint of A.
⟨x, y⟩ = ∑_{i=1}^{n} x_i conj(y_i) = conj(y)ᵗ x = y* x
⟨Ax, y⟩ = y*(Ax) = (A*y)* x = ⟨x, A*y⟩. That is, (L_A)* = L_{A*}.
■ In ℝ², if T is rotation by θ, then T* = T⁻¹ is rotation by −θ;
and if T is projection onto the x axis, then T* = T.
[ cos θ  −sin θ ]
[ sin θ   cos θ ] : rotation by θ
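The defining identity ⟨Ax, y⟩ = ⟨x, A*y⟩ on Fⁿ can be spot-checked numerically. A sketch assuming NumPy is available; note that np.vdot conjugates its first argument, so the inner product ⟨u, v⟩ = v* u is written np.vdot(v, u):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

A_star = A.conj().T                 # adjoint = conjugate transpose
lhs = np.vdot(y, A @ x)             # <Ax, y> = y* (A x)
rhs = np.vdot(A_star @ y, x)        # <x, A* y> = (A* y)* x
print(np.isclose(lhs, rhs))         # True: (L_A)* = L_{A*}
```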
Page 12
■■ For the derivation of the adjoint, consider a scalar-valued linear transformation g : V → F, which is also called a functional.
■■ Theorem 6.8: V is an inner product space; dim(V) < ∞; g : V → F is a linear transformation. Then ∃y, unique in V, such that ∀x ∈ V, g(x) = ⟨x, y⟩.
proof: Let β = {v_1, …, v_n} be an orthonormal basis for V, and, for the given g, let y = ∑_{i=1}^{n} conj(g(v_i)) v_i. Then ∀x ∈ V,
g(x) = g(∑_{i=1}^{n} ⟨x, v_i⟩ v_i) [orthonormal basis]
= ∑_{i=1}^{n} ⟨x, v_i⟩ g(v_i) [linear]
= ∑_{i=1}^{n} ⟨x, conj(g(v_i)) v_i⟩ = ⟨x, ∑_{i=1}^{n} conj(g(v_i)) v_i⟩ = ⟨x, y⟩
"uniqueness": ∀x ∈ V, g(x) = ⟨x, y⟩ = ⟨x, y′⟩ ⇒ y = y′ [Thm 6.1-5]
■ Note that this y is not affected by the choice of the basis.
Page 13
■■ example: g : ℝ³ → ℝ such that g((a_1, a_2, a_3)) = 2a_1 − a_2 + 4a_3.
y = ∑_{i=1}^{3} g(e_i) e_i = g(e_1)e_1 + g(e_2)e_2 + g(e_3)e_3 = (2, −1, 4)
g((a_1, a_2, a_3)) = ⟨(a_1, a_2, a_3), (2, −1, 4)⟩ = 2a_1 − a_2 + 4a_3
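The construction in Theorem 6.8 translates directly into code. A sketch for ℝⁿ with an orthonormal basis given as a list of vectors (over ℝ the conjugate is a no-op; function names are mine):

```python
def riesz_vector(g, basis):
    """y = sum_i conj(g(v_i)) v_i for an orthonormal basis (Theorem 6.8).

    Over R the conjugate does nothing, so it is omitted."""
    n = len(basis[0])
    y = [0.0] * n
    for v in basis:
        gv = g(v)
        for k in range(n):
            y[k] += gv * v[k]
    return y

g = lambda v: 2 * v[0] - v[1] + 4 * v[2]
e = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = riesz_vector(g, e)
print(y)                                              # [2.0, -1.0, 4.0]
x = [0.5, -3.0, 2.0]
print(g(x) == sum(xi * yi for xi, yi in zip(x, y)))   # True: g(x) = <x, y>
```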
■■ Theorem 6.9: V is an inner product space; dim(V) < ∞; T is a linear operator on V. Then ∃T*, a unique operator on V, such that
1. ∀x, y ∈ V, ⟨T(x), y⟩ = ⟨x, T*(y)⟩.
2. T* is linear.
■ proof: Fix y and let g(x) = ⟨T(x), y⟩.
⇒ g is linear. [T is linear; ⟨·, y⟩ is linear.]
⇒ ∀x ∈ V, g(x) = ⟨x, y′⟩ for some y′, which is unique in V. [Thm 6.8]
Define T* such that T*(y) = y′.
⇒ ⟨T(x), y⟩ = ⟨x, y′⟩ = ⟨x, T*(y)⟩ : "1"
Page 14
"linearity": ∀x ∈ V,
⟨x, T*(c_1 y_1 + c_2 y_2)⟩ = ⟨T(x), c_1 y_1 + c_2 y_2⟩ [1]
= conj(c_1) ⟨T(x), y_1⟩ + conj(c_2) ⟨T(x), y_2⟩
= conj(c_1) ⟨x, T*(y_1)⟩ + conj(c_2) ⟨x, T*(y_2)⟩ [1]
= ⟨x, c_1 T*(y_1) + c_2 T*(y_2)⟩
⇒ T*(c_1 y_1 + c_2 y_2) = c_1 T*(y_1) + c_2 T*(y_2) [Thm 6.1-5]
■■ Theorem 6.10: V is an inner product space; dim(V) < ∞; β = {v_1, …, v_n} is an orthonormal basis for V; T is a linear operator on V.
Then [T*]_β = ([T]_β)*.
proof: Let A = [T]_β and B = [T*]_β. ⇒ B_ij = ⟨T*(v_j), v_i⟩ [Corol 6.5]
= conj(⟨v_i, T*(v_j)⟩) = conj(⟨T(v_i), v_j⟩) = conj(A_ji) = (A*)_ij
Page 1
Review
■■ Theorem 6.6 and Corollary 6.6: V is an inner product space; W is a subspace; dim(W) < ∞; {v_1, …, v_k} is an orthonormal basis for W; y ∈ V. Then
1. y = u + z, where u ∈ W is unique and closest to y, and if dim(V) < ∞, z ∈ W⊥ is unique and closest to y.
2. u = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i.
■■ orthogonal projection of y on W: P_W(y) = ∑_{i=1}^{k} ⟨y, v_i⟩ v_i, where {v_1, …, v_k} is an orthonormal basis for W.
Page 2
■■ Theorem 6.7: V is an inner product space; dim(V) = n; S = {v_1, …, v_k} is an orthonormal set. Then
1. S can be extended to an orthonormal basis {v_1, …, v_k, v_{k+1}, …, v_n} for V.
2. W = span(S) ⇒ {v_{k+1}, …, v_n} is a basis for W⊥.
3. For any subspace W of V, dim(V) = dim(W) + dim(W⊥).
■■ The adjoint T* of a linear operator T on a finite-dimensional inner product space V is another operator on V such that for any x and y in V, ⟨T(x), y⟩ = ⟨x, T*(y)⟩.
■ On Fⁿ, this corresponds to the conjugate transpose A* = conj(A)ᵗ of the matrix A. A* is called the adjoint of A.
⟨Ax, y⟩ = y*(Ax) = (A*y)*x = ⟨x, A*y⟩. That is, (L_A)* = L_{A*}.
Page 3
■■ Theorem 6.8: V is an inner product space; dim(V) < ∞; g : V → F is a linear transformation (functional). Then ∃y, unique in V, such that ∀x ∈ V, g(x) = ⟨x, y⟩, where y = ∑_{i=1}^{n} conj(g(v_i)) v_i.
■■ example: g : ℝ³ → ℝ such that g((a_1, a_2, a_3)) = 2a_1 − a_2 + 4a_3.
y = ∑_{i=1}^{3} g(e_i) e_i = g(e_1)e_1 + g(e_2)e_2 + g(e_3)e_3 = (2, −1, 4)
■■ Theorem 6.9: V is an inner product space; dim(V) < ∞; T is a linear operator on V. Then ∃T*, a unique operator on V, such that
1. ∀x, y ∈ V, ⟨T(x), y⟩ = ⟨x, T*(y)⟩.
2. T* is linear.
■■ Theorem 6.10: V is an inner product space; dim(V) < ∞; β = {v_1, …, v_n} is an orthonormal basis for V; T is a linear operator on V. Then [T*]_β = ([T]_β)*.
■■ [End of Review]
Page 4
■■ Corollary 6.10: A is an n × n matrix. Then L_{A*} = (L_A)*.
proof: Let β be the standard basis for Fⁿ.
[(L_A)*]_β = ([L_A]_β)* [Thm 6.10] = A* = [L_{A*}]_β.
Then the uniqueness of the representation completes the proof.
■ This can also be shown directly by taking the conjugate transpose of the matrix:
⟨L_A(x), y⟩ = ⟨Ax, y⟩ = y*(Ax)
= (A*y)*x = ⟨x, A*y⟩ = ⟨x, L_{A*}(y)⟩ ⇒ (L_A)* = L_{A*}.
■■ example: T : ℂ³ → ℂ³; β is the standard basis.
T(a, b, c) = (2a + ib, b − 5ic, a + (1 − i)b + 3c)
[T]_β =
[ 2    i     0  ]
[ 0    1   −5i  ]
[ 1  1−i     3  ]
[T*]_β = ([T]_β)* =
[  2    0     1  ]
[ −i    1   1+i  ]
[  0   5i     3  ]
T*(a, b, c) = (2a + c, −ia + b + (1 + i)c, 5ib + 3c)
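The two matrices above can be cross-checked numerically (a sketch assuming NumPy; np.vdot conjugates its first argument, so ⟨u, v⟩ = v* u is np.vdot(v, u)):

```python
import numpy as np

# [T] for T(a, b, c) = (2a + ib, b - 5ic, a + (1-i)b + 3c)
T = np.array([[2, 1j, 0],
              [0, 1, -5j],
              [1, 1 - 1j, 3]])
T_star = T.conj().T                       # [T*] = [T]*
expected = np.array([[2, 0, 1],
                     [-1j, 1, 1 + 1j],
                     [0, 5j, 3]])
print(np.array_equal(T_star, expected))   # True

# <T x, y> = <x, T* y> for random x, y in C^3
rng = np.random.default_rng(2)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.vdot(y, T @ x), np.vdot(T_star @ y, x)))  # True
```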
Page 5
■■ example: Consider the inner product space P_2(ℝ) with ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx.
Let f(x) = a + bx + cx² and g(x) = p + qx + rx².
When T(f) = f′ = b + 2cx, what would T*(g) look like?
β = { √(1/2), √(3/2) x, √(5/8)(3x² − 1) } [orthonormal basis]
[g]_β = ( √2 p + (√2/3) r, (√6/3) q, (2√10/15) r )ᵗ
[T]_β =
[ 0  √3    0  ]
[ 0   0  √15  ]
[ 0   0    0  ]
[T*]_β = ([T]_β)* =
[  0    0  0 ]
[ √3    0  0 ]
[  0  √15  0 ]
[T*(g)]_β = [T*]_β [g]_β
Page 6
[T*(g)]_β =
[  0    0  0 ] [ √2 p + (√2/3) r ]   [ 0            ]
[ √3    0  0 ] [ (√6/3) q        ] = [ √6 (p + r/3) ]
[  0  √15  0 ] [ (2√10/15) r     ]   [ √10 q        ]
T*(p + qx + rx²) = √6 (p + r/3) · √(3/2) x + √10 q · √(5/8)(3x² − 1)
= −(5/2)q + (3p + r)x + (15/2)q x²
We can confirm this result by computing
⟨T(f), g⟩ = ∫_{−1}^{1} (b + 2cx)(p + qx + rx²) dx = 2bp + (2/3)br + (4/3)cq
= ∫_{−1}^{1} (a + bx + cx²)(−(5/2)q + (3p + r)x + (15/2)q x²) dx = ⟨f, T*(g)⟩.
Page 8
■■ Theorem 6.11 and Corollary 6.11: V is an inner product space; T and U are linear operators on V; A and B are n × n matrices; c is a scalar. Then the following hold.
(a) (T + U)* = T* + U*;  (A + B)* = A* + B*
(b) (cT)* = conj(c) T*;  (cA)* = conj(c) A*
(c) (TU)* = U*T*;  (AB)* = B*A*
(d) (T*)* = T;  (A*)* = A
(e) I* = I;  I_n* = I_n
proof of (c): ∀x, y ∈ V,
⟨x, (TU)*(y)⟩ = ⟨(TU)(x), y⟩ = ⟨T(U(x)), y⟩
= ⟨U(x), T*(y)⟩ = ⟨x, U*(T*(y))⟩
= ⟨x, (U*T*)(y)⟩.
L_{(AB)*} = (L_{AB})* = (L_A L_B)* [Corol 6.10, Thm 2.15-6]
= (L_B)*(L_A)* = L_{B*} L_{A*} = L_{B*A*}
Page 9
proof of (d):
⟨T(x), y⟩ = ⟨x, T*(y)⟩ = conj(⟨T*(y), x⟩) = conj(⟨y, (T*)*(x)⟩)
= ⟨(T*)*(x), y⟩ ⇒ (T*)* = T
■■ least squares approximation:
measurements: (t_1, y_1), (t_2, y_2), …, (t_m, y_m)
approximation: Find a and b such that y = at + b, or
find a, b and c such that y = at² + bt + c.
criterion: Minimize E = ∑_{i=1}^{m} [y_i − (a t_i + b)]², or E = ∑_{i=1}^{m} [y_i − (a t_i² + b t_i + c)]².
Page 10
⇒ E = ‖y − Ax‖², where y = (y_1, …, y_m)ᵗ, and
A =
[ t_1  1 ]
[  ⋮   ⋮ ] , x = (a, b)ᵗ, or A =
[ t_m  1 ]
[ t_1²  t_1  1 ]
[  ⋮     ⋮   ⋮ ] , x = (a, b, c)ᵗ
[ t_m²  t_m  1 ]
⇒ Find x that minimizes ‖y − Ax‖².
Page 11
■■ orthogonality principle:
The minimizing x_0 satisfies ∀x, ⟨Ax, y − Ax_0⟩ = 0.
W = R(L_A) = {Ax : x ∈ Fⁿ} = {∑_i x_i a_i : x ∈ Fⁿ},
where a_i are the column vectors of A.
So W is the column space of A.
The principle is quite general.
■■ Lemma 6.12.1: A ∈ M_{m×n}(F); x ∈ Fⁿ; and y ∈ Fᵐ. Then ⟨Ax, y⟩_m = ⟨x, A*y⟩_n.
proof: ⟨Ax, y⟩_m = y*(Ax) = (y*A)x = (A*y)*x = ⟨x, A*y⟩_n
■■ Lemma 6.12.2: A ∈ M_{m×n}(F). Then rank(A*A) = rank(A).
Page 12
proof: (Ax = 0 ⇒ A*Ax = 0) ⇒ N(L_A) ⊆ N(L_{A*A})
⇒ nullity(L_A) ≤ nullity(L_{A*A})
⇒ rank(L_{A*A}) ≤ rank(L_A) [dim thm: nullity + rank = n]
⇒ rank(A*A) ≤ rank(A) [also from Thm 3.7]
So we need only to show that N(L_{A*A}) ⊆ N(L_A).
x ∈ N(L_{A*A}) ⇒ A*Ax = 0
⇒ ⟨A*Ax, x⟩_n = ⟨Ax, Ax⟩_m = 0 ⇒ Ax = 0 ⇒ x ∈ N(L_A)
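Lemma 6.12.2 is easy to check on a concrete matrix. A sketch assuming NumPy: build a tall matrix whose third column is the sum of the first two, so rank(A) = 2, and compare with rank(A*A):

```python
import numpy as np

rng = np.random.default_rng(3)
# 6x3 matrix with a dependent third column, so rank(A) = 2 < n = 3
A = rng.standard_normal((6, 3))
A = np.column_stack([A[:, 0], A[:, 1], A[:, 0] + A[:, 1]])

print(np.linalg.matrix_rank(A))               # 2
print(np.linalg.matrix_rank(A.conj().T @ A))  # 2 = rank(A) (Lemma 6.12.2)
```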
Page 13
■■ Theorem 6.12: A ∈ M_{m×n}(F); y ∈ Fᵐ. Then ∃x_0 ∈ Fⁿ such that
1. A*Ax_0 = A*y
2. ∀x ∈ Fⁿ, ‖Ax_0 − y‖ ≤ ‖Ax − y‖
3. rank(A) = n ⇒ x_0 = (A*A)⁻¹A*y.
■ proof: Let W = {Ax : x ∈ Fⁿ} = R(L_A) [column space of A]
⇒ y = u + z, u ∈ W, z ∈ W⊥, where u ∈ W is unique and closest to y, and
z ∈ W⊥ is unique and closest to y. [Thm 6.6]
⇒ ∃x_0 such that u = Ax_0. : "2"
⇒ z = y − Ax_0 ∈ W⊥ ⇒ ∀x ∈ Fⁿ, ⟨Ax, y − Ax_0⟩ = 0 ⇒ ∀x ∈ Fⁿ, ⟨x, A*(y − Ax_0)⟩ = 0
⇒ A*(y − Ax_0) = 0 : "1", orthogonality
"3" follows from Lemma 6.12.2 and "1".
■■ Corollary 6.12: A ∈ M_{m×n}(F); rank(A) = n. Then A*A is invertible.
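Putting the pieces together, the normal equation A*Ax_0 = A*y solves the line-fitting problem from the least squares setup. A sketch assuming NumPy, with measurements that lie exactly on y = 2t + 1 (the data values are mine):

```python
import numpy as np

# Measurements (t_i, y_i); fit y = a t + b by minimizing ||y - A x||^2
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])          # exactly on y = 2t + 1

A = np.column_stack([t, np.ones_like(t)])   # rows (t_i, 1)
x0 = np.linalg.solve(A.T @ A, A.T @ y)      # normal equations A*A x0 = A* y
print(np.round(x0, 10))                     # [2. 1.] -> a = 2, b = 1

# Orthogonality principle: the residual is orthogonal to the columns of A
print(np.allclose(A.T @ (y - A @ x0), 0))   # True
```

Since rank(A) = n = 2 here, A.T @ A is invertible (Corollary 6.12) and np.linalg.solve succeeds; for rank-deficient A one would fall back on np.linalg.lstsq.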