Corollary 1.10.1: Every basis for V has the same number of vectors.
dimension: the number of vectors in a basis
Corollary 1.10.2: Let dim(V )=n.
1. S generates V ⇒ |S| ≥ n; S can be reduced to a basis.
2. S generates V and |S| = n ⇒ S is a basis for V .
3. S is LI ⇒ |S| ≤ n; S can be extended to a basis.
Theorem 1.11: dim(V ) < ∞; W is a subspace of V . Then
1. dim(W ) ≤ dim(V )
2. dim(W ) = dim(V ) ⇒ W = V .
Corollary 1.11: dim(V ) < ∞; W is a subspace of V . Then any basis for W can be extended to a basis for V .
Chapter 2 Linear Transformations and Matrices
linear transformation T : V → W for vector spaces V (F ) and W (F ): T (ax + by) = aT (x) + bT (y).
function f : X → Y : ∀x ∈ X, ∃ unique f (x) ∈ Y
domain of f : X
codomain of f : Y
range of f : f (X) = {f (x) : x ∈ X}
image of A under f : f (A) = {f (x) : x ∈ A}
preimage of B under f : f−1(B) = {x : f (x) ∈ B};
also called the inverse image
onto: f (X) = Y
one-to-one: f (u) = f (v) ⇒ u = v
inverse of f : f−1 : Y → X such that
∀x ∈ X, f−1(f (x)) = x; and ∀y ∈ Y, f (f−1(y)) = y
invertible f : f−1 exists (⇔ one-to-one and onto)
restriction of f to A: fA : A → Y such that
∀x ∈ A, fA(x) = f (x)
composite or composition of f : X → Y and g : Y → Z:
g ◦ f : X → Z such that ∀x ∈ X, (g ◦ f )(x) = g(f (x)) (We will use the notation gf in place of g ◦ f .)
examples
Tθ : R2 → R2, rotation by θ:
Tθ((a1, a2)) = (a1 cos θ − a2 sin θ, a1 sin θ + a2 cos θ)
T : R2 → R2, projection on the y-axis: T ((a1, a2)) = (0, a2)
linearity: T (c(a1, a2) + d(b1, b2)) = T ((ca1 + db1, ca2 + db2)) = (0, ca2 + db2) = (0, ca2) + (0, db2) = c(0, a2) + d(0, b2) = cT ((a1, a2)) + dT ((b1, b2))
V = C(R, R): space of continuous functions from R to R
T : V → R, integration over [a, b]: T (f ) = ∫_a^b f (t) dt
linearity: T (cf + dg) = ∫_a^b (cf (t) + dg(t)) dt = c ∫_a^b f (t) dt + d ∫_a^b g(t) dt = cT (f ) + dT (g)
IV : V → V , identity transformation: IV (v) = v
T0 : V → W , zero transformation: T0(v) = 0
null space and range
null space of T : T : V → W : N (T ) = {x ∈ V : T (x) = 0}
range space of T : T : V → W : R(T ) = {T (x) : x ∈ V }
The null space and range of a linear transformation are never empty, because T (0) = 0; that is, 0 ∈ N (T ) and 0 ∈ R(T ).
Another name for the null space is kernel.
Another name for the range is image.
example:
T : R2 → R2, projection on the y-axis: T ((a1, a2)) = (0, a2)
N (T ) = {(a1, 0) : a1 ∈ R}, R(T ) = {(0, a2) : a2 ∈ R}
V = C(R, R): space of continuous functions
T : V → R, integration over [a, b]: T (f ) = ∫_a^b f (t) dt
N (T ) = {f : ∫_a^b f (t) dt = 0}, R(T ) = R
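The linearity of the integration map above can be sanity-checked numerically. This is only an illustration, not a proof; the grid, the functions f and g, and the scalars c, d are arbitrary choices.

```python
import numpy as np

# Numerical sanity check that T(f) = integral of f over [a, b] is linear:
# T(c*f + d*g) = c*T(f) + d*T(g). Grid and functions are illustrative.
a, b = 0.0, 1.0
t = np.linspace(a, b, 1001)
f = np.sin(t)
g = t**2
c, d = 3.0, -2.0

def T(h):
    # composite trapezoidal rule approximating the integral over [a, b]
    return np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))

lhs = T(c * f + d * g)
rhs = c * T(f) + d * T(g)
print(abs(lhs - rhs) < 1e-9)   # the quadrature itself is linear in h
```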
Theorem 2.1: V and W are vector spaces; T : V → W is a linear transformation. Then N (T ) and R(T ) are subspaces of V and W , respectively.
proof for N (T ): x, y ∈ N (T ), a, b ∈ F
⇒ T (x) = T (y) = 0
⇒ T (ax + by) = aT (x) + bT (y) = 0 [linearity]
⇒ ax + by ∈ N (T )
Theorem 2.2: T : V → W is a linear transformation; β = {v1, · · · , vn} is a basis for V . Then, R(T ) = span(T (β)) = span({T (v1), · · · , T (vn)}).
proof:
(i) T (β) ⊆ R(T ) [range]
⇒ span(T (β)) ⊆ R(T ) [R(T ) is a subspace; Theorem 1.5]
(ii) w ∈ R(T ) ⇒ ∃v such that T (v) = w [range]
⇒ v = Σaivi [basis]
⇒ w = T (v) = T (Σaivi) = ΣaiT (vi) ∈ span(T (β))
⇒ R(T ) ⊆ span(T (β))
(i), (ii) ⇒ R(T ) = span(T (β))
Is {T (v1), · · · , T (vn)} a basis for R(T )?
nullity and rank
nullity of T : nullity(T ) = dim(N (T ))
rank of T : rank(T ) = dim(R(T ))
Theorem 2.3 (dimension theorem): T : V → W is a linear transformation; dim(V ) < ∞. Then, nullity(T ) + rank(T ) = dim(V ).
proof: Assume the following:
dim(V ) = n, nullity(T ) = k
{v1, · · · , vk} is a basis for N (T ), and it extends to a basis {v1, · · · , vk, vk+1, · · · , vn} for V .
S = {T (vk+1), · · · , T (vn)}
We will then show that S is a basis for R(T ).
“S generates R(T )”: Let w ∈ R(T ).
⇒ ∃v ∈ V such that T (v) = w. [range]
⇒ v = Σ_{i=1}^n aivi [basis for V ]
T (v) = w = Σ_{i=1}^n aiT (vi) = Σ_{i=k+1}^n aiT (vi) [null space]
⇒ w ∈ span(S)
“S is linearly independent”: Let Σ_{i=k+1}^n biT (vi) = 0.
⇒ T (Σ_{i=k+1}^n bivi) = 0 [linearity]
⇒ Σ_{i=k+1}^n bivi ∈ N (T ) [null space]
⇒ Σ_{i=k+1}^n bivi = Σ_{i=1}^k civi for some ci’s [basis for N (T )]
⇒ Σ_{i=1}^k civi + Σ_{i=k+1}^n (−bi)vi = 0
⇒ c1 = · · · = ck = bk+1 = · · · = bn = 0 [basis for V ]
This dimension theorem holds only for finite-dimensional vector spaces.
example:
Tθ : R2 → R2, rotation by θ:
Tθ((a1, a2)) = (a1 cos θ − a2 sin θ, a1 sin θ + a2 cos θ)
dim(V ) = 2 = 2 + 0 = rank(Tθ) + nullity(Tθ)
T : R2 → R2, projection on the y-axis: T ((a1, a2)) = (0, a2)
dim(V ) = 2 = 1 + 1 = rank(T ) + nullity(T )
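The two examples above can be checked numerically by viewing each map as its 2×2 matrix in the standard basis (a quick illustration, with an arbitrary θ):

```python
import numpy as np

# Dimension theorem check: rank + nullity = dim of the domain = 2.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta
P = np.array([[0.0, 0.0],
              [0.0, 1.0]])                        # projection on the y-axis

rank_R = np.linalg.matrix_rank(R)   # 2, so nullity = 0
rank_P = np.linalg.matrix_rank(P)   # 1, so nullity = 1
print(rank_R, 2 - rank_R)
print(rank_P, 2 - rank_P)
```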
Theorem 2.4: T : V → W is a linear transformation. Then the following holds.
T is one-to-one ⇔ N (T ) = {0}.
proof: “⇒”: Assume T is one-to-one, v ∈ N (T ), and u ∈ V .
⇒ T (u + v) = T (u) + T (v) = T (u) [null space]
⇒ u + v = u [one-to-one]
⇒ v = 0 [cancellation law]
“⇐”: Assume N (T ) = {0} and T (u) = T (v).
⇒ T (u) − T (v) = T (u − v) = 0 [linear]
⇒ u − v ∈ N (T ) ⇒ u − v = 0 [assumption] ⇒ u = v
Theorem 2.5: T : V → W is a linear transformation; dim(V ) = dim(W ) < ∞.
Then the following are equivalent.
1. T is one-to-one.
2. T is onto.
3. rank(T ) = dim(V )
intuitive proof: T is one-to-one ⇔ nullity(T ) = 0 [Thm 2.4] ⇔ rank(T ) = dim(V ) [dim thm] ⇔ rank(T ) = dim(W ) ⇔ R(T ) = W ⇔ T is onto.
Rotation by θ is an example for the last two theorems.
example: T : P2(R) → P3(R), T (f )(x) = 2f ′(x) + ∫_0^x 3f (t) dt
by theorem 2.2,
R(T ) = span({T (1), T (x), T (x2)}) = span({3x, 2 + (3/2)x2, 4x + x3})
{3x, 2 + (3/2)x2, 4x + x3} is linearly independent.
⇒ It is a basis for R(T ) ⇒ rank(T ) = 3
⇒ nullity(T ) = 0 [dim thm] ⇒ T is one-to-one. [Thm 2.4]
But T is not onto. [rank(T ) = 3 ≠ dim(W ) = 4]
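The rank computation above can be double-checked by stacking the coordinate vectors of T (1), T (x), T (x2) (relative to the standard basis {1, x, x2, x3} of P3(R)) as columns and computing the matrix rank:

```python
import numpy as np

# Columns: coordinates of T(1) = 3x, T(x) = 2 + (3/2)x^2, T(x^2) = 4x + x^3
# relative to {1, x, x^2, x^3}.
cols = np.array([[0, 2,   0],
                 [3, 0,   4],
                 [0, 1.5, 0],
                 [0, 0,   1]])

rank = np.linalg.matrix_rank(cols)
print(rank)   # 3 = rank(T), so nullity(T) = 0 and T is one-to-one
```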
example: T : P2(R) → P2(R), T (f )(x) = 2f ′(x) − 3f (x)
by theorem 2.2,
R(T ) = span({T (1), T (x), T (x2)}) = span({−3, 2 − 3x, 4x − 3x2})
⇒ rank(T ) = 3 = dim(P2(R))
⇒ T is onto; T is one-to-one. [Theorem 2.5]
Theorem 2.6: {v1, · · · , vn} is a basis for V ; w1, · · · , wn ∈ W . Then there exists only one linear transformation T : V → W such that T (vi) = wi, i = 1, · · · , n.
proof: Let x = Σaivi [basis], and define T as follows:
∀x ∈ V , T (x) = T (Σaivi) = Σaiwi. [linear? unique?]
“T (vi) = wi”: clear by letting ai = 1 and aj = 0 for j ≠ i.
“linearity”: T (cx + dy) = T (cΣaivi + dΣbivi) [y = Σbivi]
= T (Σ(cai + dbi)vi) = Σ(cai + dbi)wi [def of T ]
= cΣaiwi + dΣbiwi = cT (Σaivi) + dT (Σbivi) [def of T ]
= cT (x) + dT (y)
“uniqueness”: Let a linear transformation U satisfy U (vi) = wi.
∀x ∈ V , U (x) = U (Σaivi) = ΣaiU (vi) = Σaiwi = T (x)
⇒ U = T
Note that the wi’s can be arbitrary vectors in W , possibly linearly dependent or even repeated.
This theorem says that in order to specify a linear transformation, you only need to specify it on a basis.
It also implies that R(T ) = span({w1, · · · , wn}) by theorem 2.2, whereby you can design R(T ) as you wish.
This theorem is analogous to:
that for two real numbers v and w, there is only one linear function f : R → R such that f (v) = w. [straight line]
that for six real numbers v11, v12, v21, v22, w1, w2, there is only one linear function f : R2 → R such that f ((v11, v12)) = w1 and f ((v21, v22)) = w2. [plane]
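Theorem 2.6 can be made concrete in coordinates: if the vi form a basis of R^n, the unique T with T (vi) = wi has matrix A = W V^{-1}, where V and W have the vi and wi as columns. A small sketch with illustrative values (the particular vectors below are assumptions, not from the notes):

```python
import numpy as np

# A basis {v1, v2} of R^2 and arbitrary targets {w1, w2} in R^3
# determine a unique linear map, with matrix A = W @ inv(V).
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns v1, v2: a basis of R^2
W = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])            # columns w1, w2: any vectors in R^3

A = W @ np.linalg.inv(V)              # the unique T with T(vi) = wi
ok = np.allclose(A @ V, W)
print(ok)                             # T(vi) = wi for each i
```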
Matrix representation of a linear transformation
Recall that given a basis β = {v1, · · · , vn} for V , we can write x = Σcivi for any x ∈ V and that [x]_β = (c1, · · · , cn)t is unique by theorem 1.8; it is called the (n-tuple) representation of x in β or relative to β.
[vi]_β = (0, · · · , 0, 1, 0, · · · , 0)t, in which the 1 is the i-th element.
ordered basis: basis with a given order of the elements
If the basis is considered as a set, i.e. not ordered, the representation is unique only up to a permutation.
From now on all the bases are assumed ordered.
example: β = {e1, · · · , en}, the standard basis for Fn, where
e1 = (1, 0, · · · , 0)t, e2 = (0, 1, · · · , 0)t, · · · , en = (0, 0, · · · , 1)t
n = 3 ⇒ [(2, 3, 1)]_β = (2, 3, 1)t
example: V = P2(R), f (x) = 4 + 6x − 7x2
β = {1, x, x2}, the standard basis: [f ]_β = (4, 6, −7)t
β = {1, 1 + x, 1 + x + x2}: [f ]_β = (c1, c2, c3)t
⇒ f = c1 + c2(1 + x) + c3(1 + x + x2)
⇒ 4 + 6x − 7x2 = (c1 + c2 + c3) + (c2 + c3)x + c3x2
⇒ [f ]_β = (−2, 13, −7)t
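The coordinates relative to a non-standard basis can be found by solving a triangular linear system: put the standard-basis coordinates of each basis vector into the columns of a matrix B and solve B c = [f ]_std.

```python
import numpy as np

# [f]_beta for f(x) = 4 + 6x - 7x^2 relative to beta = {1, 1+x, 1+x+x^2}.
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)   # columns: 1, 1+x, 1+x+x^2
f_std = np.array([4.0, 6.0, -7.0])       # coefficients of 1, x, x^2

c = np.linalg.solve(B, f_std)
print(c)   # [-2. 13. -7.]
```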
Consider T : V → W , a basis β for V , and a basis γ for W . Then what is the relationship between [x]_β and [T (x)]_γ?
matrix multiplication rule:
A: m×n; B: n×p; C = AB ⇒ C: m×p; Cij = Σ_{k=1}^n AikBkj
example:
[ a b ] · [ c ] = ac + bd
          [ d ]

[ a ] · [ c d ] = [ ac ad ]
[ b ]             [ bc bd ]

[ a b c ]   [ g h ]   [ ag + bi + ck  ah + bj + cl ]
[ d e f ] · [ i j ] = [ dg + ei + fk  dh + ej + fl ]
            [ k l ]
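The entrywise rule can be checked against numpy's built-in matrix product on random integer matrices (sizes and seed are arbitrary):

```python
import numpy as np

# Check C_ij = sum_k A_ik B_kj (k running over n) against numpy's @.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))

C = np.zeros((2, 4), dtype=int)
for i in range(2):
    for j in range(4):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(3))   # k over n = 3

ok = np.array_equal(C, A @ B)
print(ok)
```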
matrix representation of a linear transformation:
β = {v1, · · · , vn} is a basis for V ;
γ = {w1, · · · , wm} is a basis for W ; and T : V → W is a linear transformation.
∀x ∈ V , x = Σ_{j=1}^n cjvj
⇒ [x]_β = (c1, · · · , cn)t [representation of x in β]
⇒ T (x) = T (Σ_{j=1}^n cjvj) = Σ_{j=1}^n cjT (vj)
T (vj) ∈ W ⇒ T (vj) = Σ_{i=1}^m aijwi for some aij
⇒ T (x) = Σ_{j=1}^n cjT (vj) = Σ_{j=1}^n cj Σ_{i=1}^m aijwi
⇒ T (x) = c1(a11w1 + a21w2 + · · · + am1wm)
        + · · ·
        + cn(a1nw1 + a2nw2 + · · · + amnwm)
⇒ T (x) = Σ_{i=1}^m (Σ_{j=1}^n aijcj)wi
⇒ [T (x)]_γ = (Σ_{j=1}^n a1jcj, · · · , Σ_{j=1}^n amjcj)t
            = [ a11 · · · a1n ]   [ c1 ]
              [ ...       ... ] · [ .. ] = A[x]_β
              [ am1 · · · amn ]   [ cn ]
A = [T ]^γ_β is the matrix representation of T in β and γ.
⇒ [T (x)]_γ = [T ]^γ_β [x]_β
[T ]^γ_β = ([T (v1)]_γ, · · · , [T (vn)]_γ) =
[ a11 · · · a1n ]
[ ...       ... ]
[ am1 · · · amn ] = A
The matrix representation [T ]^γ_β of T is unique by Theorem 1.8.
T (x) = y ⇒ [T ]^γ_β [x]_β = [y]_γ
You can remember this notation by relating [T (x)]_γ = [T ]^γ_β [x]_β to the pattern γ = (γ β) · β: the β’s “cancel”.
If V = W and β = γ, the notation [T ]^β_β simplifies to [T ]_β.
⇒ [T (x)]_β = [T ]_β [x]_β
That is, [T ]_β is the matrix representation of T in β for both the domain and the codomain.
⇒ [T ]_β = ([T (v1)]_β, · · · , [T (vn)]_β) =
[ a11 · · · a1n ]
[ ...       ... ]
[ an1 · · · ann ], where β = {v1, · · · , vn}.
When the domain and the codomain are the same vector space, the linear transformation is called a linear operator.
example: T : P3(R) → P2(R), T (f ) = f ′;
β = {1, 1 + x, 1 + x + x2, 1 + x + x2 + x3} is a basis for P3(R);
and γ = {1, x, x2} is a basis for P2(R).
T (1) = 0 = 0 · 1 + 0 · x + 0 · x2
T (1 + x) = 1 = 1 · 1 + 0 · x + 0 · x2
T (1 + x + x2) = 1 + 2x = 1 · 1 + 2 · x + 0 · x2
T (1 + x + x2 + x3) = 1 + 2x + 3x2 = 1 · 1 + 2 · x + 3 · x2
[T ]^γ_β =
[ 0 1 1 1 ]
[ 0 0 2 2 ]
[ 0 0 0 3 ], [4 − 6x + 3x3]_β = (10, −6, −3, 3)t
T (4 − 6x + 3x3) = −6 + 9x2 ⇒
[ 0 1 1 1 ]   ( 10 )   ( −6 )
[ 0 0 2 2 ] · ( −6 ) = (  0 )
[ 0 0 0 3 ]   ( −3 )   (  9 )
              (  3 )
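The matrix-times-coordinates computation above is easy to replay in numpy:

```python
import numpy as np

# [T]^gamma_beta acting on [f]_beta for T(f) = f',
# beta = {1, 1+x, 1+x+x^2, 1+x+x^2+x^3}, gamma = {1, x, x^2}.
T_mat = np.array([[0, 1, 1, 1],
                  [0, 0, 2, 2],
                  [0, 0, 0, 3]])
f_beta = np.array([10, -6, -3, 3])   # [4 - 6x + 3x^3]_beta

result = T_mat @ f_beta
print(result)   # [-6  0  9] = [T(f)]_gamma, i.e. -6 + 9x^2
```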
addition and scalar multiplication of linear transformations:
Definition. Let T , U : V (F ) → W (F ) be arbitrary functions and a ∈ F . We define
1. T + U : V → W by (T + U )(x) = T (x) + U (x);
2. aT : V → W by (aT )(x) = aT (x)
Theorem 2.7: Let T , U : V (F ) → W (F ) be linear transformations. Then the following are true.
1. ∀a, b ∈ F , aT + bU is a linear transformation.
2. The set of all linear transformations from V to W forms a vector space over F .
T0, the zero transformation, plays the role of the zero vector in this vector space.
space of linear transformations:
L(V, W ): vector space of all linear transformations from V to W
L(V ): vector space of all linear transformations from V to V
Theorem 2.8: T , U : V (F ) → W (F ) are linear transformations; dim(V ), dim(W ) < ∞; a ∈ F . Then the following are true.
1. [T + U ]^γ_β = [T ]^γ_β + [U ]^γ_β
2. [aT ]^γ_β = a[T ]^γ_β
By this theorem, given bases β for V and γ for W with dim(V ) = n and dim(W ) = m, the vector space L(V, W ) can be identified with the vector space Mm×n(F ).
example: T , U : P2(R) → P2(R), T (f ) = f ′, U (f ) = f − 2f ′;
β = {1, 1 + x, 1 + x + x2}; γ = {1, x, x2}.
(4T + 2U )(f ) = 4f ′ + 2(f − 2f ′) = 2f
[T ]^γ_β =
[ 0 1 1 ]
[ 0 0 2 ]
[ 0 0 0 ]; [U ]^γ_β =
[ 1 −1 −1 ]
[ 0  1 −3 ]
[ 0  0  1 ];
[4T + 2U ]^γ_β =
[ 2 2 2 ]
[ 0 2 2 ]
[ 0 0 2 ] = 4[T ]^γ_β + 2[U ]^γ_β
[T ]_γ =
[ 0 1 0 ]
[ 0 0 2 ]
[ 0 0 0 ]; [U ]_γ =
[ 1 −2  0 ]
[ 0  1 −4 ]
[ 0  0  1 ];
[4T + 2U ]_γ =
[ 2 0 0 ]
[ 0 2 0 ]
[ 0 0 2 ].
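The second computation above, checked with numpy: since (4T + 2U )(f ) = 2f , the representation must be twice the identity.

```python
import numpy as np

# Theorem 2.8 on the example: [4T + 2U]_gamma = 4[T]_gamma + 2[U]_gamma,
# for T(f) = f', U(f) = f - 2f', gamma = {1, x, x^2}.
T_mat = np.array([[0, 1, 0],
                  [0, 0, 2],
                  [0, 0, 0]])          # [T]_gamma
U_mat = np.array([[1, -2, 0],
                  [0, 1, -4],
                  [0, 0, 1]])          # [U]_gamma

ok = np.array_equal(4 * T_mat + 2 * U_mat, 2 * np.eye(3))
print(ok)   # (4T + 2U)(f) = 2f, so its matrix is 2I
```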
Composition of linear transformations
The composition UT of T : V → W and U : W → Z is defined by ∀x ∈ V , (U T )(x) = U (T (x)).
This composition is a linear transformation from V to Z.
example: T , U : P2(R) → P2(R), T (f ) = f ′, U (f ) = f − 2f ′.
U (T (f )) = T (f ) − 2(T (f ))′ = f ′ − 2f ′′
T (U (f )) = (U (f ))′ = f ′ − 2f ′′
It happens that U (T (f )) = T (U (f )), but this is not always the case.
example: T , U : P2(R) → P2(R), T (f ) = xf ′, U (f ) = f ′.
U (T (f )) = (T (f ))′ = (xf ′)′ = f ′ + xf ′′
T (U (f )) = x(U (f ))′ = x(f ′)′ = xf ′′
⇒ U (T (f )) ≠ T (U (f ))
Theorem 2.9 and 2.10: S, T , T1, T2, U , U1, and U2 are linear transformations with appropriate domains and codomains; I is an identity transformation; a ∈ F . Then we have the following.
1. U T is a linear transformation.
2. distributivity(1): U (T1 + T2) = U T1 + U T2
3. distributivity(2): (U1 + U2)T = U1T + U2T
4. associativity: S(U T ) = (SU )T
5. identity: IT = T I = T (Note that the two I’s are different.)
6. a(U T ) = (aU )T = U (aT )
matrix representation of composition:
T : V → W and U : W → Z are linear transformations; α = {v1, · · · , vn}, β = {w1, · · · , wm}, and γ = {z1, · · · , zp} are bases for V , W , and Z, respectively; B = [T ]^β_α, A = [U ]^γ_β, and C = [U T ]^γ_α.
For j = 1, · · · , n,
(U T )(vj) = U (T (vj)) = U (Σ_{k=1}^m Bkjwk) = Σ_{k=1}^m BkjU (wk)
= Σ_{k=1}^m Bkj (Σ_{i=1}^p Aikzi) = Σ_{i=1}^p (Σ_{k=1}^m AikBkj)zi
= Σ_{i=1}^p Cijzi → jth column of C.
Theorem 2.11: [U T ]^γ_α = C = AB = [U ]^γ_β [T ]^β_α
This equation is related to the pattern (γ α) = (γ β) · (β α).
example:
U : P3(R) → P2(R), U (f ) = f ′
T : P2(R) → P3(R), T (f ) = ∫_0^x f (t) dt
α = {1, x, x2, x3} and β = {1, x, x2}, the standard bases.
[U T ]_β = [U ]^β_α [T ]^α_β =
[ 0 1 0 0 ]   [ 0  0   0  ]   [ 1 0 0 ]
[ 0 0 2 0 ] · [ 1  0   0  ] = [ 0 1 0 ] = [I_{P2}]_β
[ 0 0 0 3 ]   [ 0 1/2  0  ]   [ 0 0 1 ]
              [ 0  0  1/3 ]
[T U ]_α = [T ]^α_β [U ]^β_α =
[ 0  0   0  ]   [ 0 1 0 0 ]   [ 0 0 0 0 ]
[ 1  0   0  ] · [ 0 0 2 0 ] = [ 0 1 0 0 ] ≠ [I_{P3}]_α
[ 0 1/2  0  ]   [ 0 0 0 3 ]   [ 0 0 1 0 ]
[ 0  0  1/3 ]                 [ 0 0 0 1 ]
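Both products above can be verified in numpy: differentiation after integration is the identity on P2(R), but integration after differentiation loses the constant term on P3(R).

```python
import numpy as np

# [U]^beta_alpha: differentiation P3 -> P2 in the standard bases.
U_mat = np.array([[0, 1, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 3]])
# [T]^alpha_beta: integration from 0 to x, P2 -> P3.
T_mat = np.array([[0, 0,   0],
                  [1, 0,   0],
                  [0, 0.5, 0],
                  [0, 0,   1/3]])

ut_is_identity = np.allclose(U_mat @ T_mat, np.eye(3))
tu_is_identity = np.allclose(T_mat @ U_mat, np.eye(4))
print(ut_is_identity)   # True:  UT = I on P2
print(tu_is_identity)   # False: TU kills the constant term of P3
```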
identity matrix I: Iij = δij = 1 if i = j, 0 else (i ≠ j), where δij is the Kronecker delta.
Theorem 2.12: A, A1, A2 are m×n matrices; B, B1, B2 are n×p matrices; C is a p×q matrix; Im and In are identity matrices of the respective sizes; a ∈ F . Then we have the following.
1. AB is a matrix of the size m × p.
2. distributivity(1): A(B1 + B2) = AB1 + AB2
3. distributivity(2): (A1 + A2)B = A1B + A2B
4. associativity: (AB)C = A(BC)
5. identity: ImA = AIn = A
6. a(AB) = (aA)B = A(aB)
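A spot-check of these identities on random matrices of compatible sizes (sizes and seed are arbitrary; this illustrates, rather than proves, the theorem):

```python
import numpy as np

# Theorem 2.12: distributivity, associativity, and identity for
# matrices of compatible sizes.
rng = np.random.default_rng(1)
A  = rng.standard_normal((2, 3))
B1 = rng.standard_normal((3, 4))
B2 = rng.standard_normal((3, 4))
C  = rng.standard_normal((4, 5))

dist  = np.allclose(A @ (B1 + B2), A @ B1 + A @ B2)   # distributivity(1)
assoc = np.allclose((A @ B1) @ C, A @ (B1 @ C))       # associativity
ident = np.allclose(np.eye(2) @ A, A @ np.eye(3))     # Im A = A In = A
print(dist, assoc, ident)
```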