Corollary 1.10.1: Every basis for V has the same number of vectors.
dimension: the number of vectors in a basis
Corollary 1.10.2: Let dim(V )=n.
1. S generates V ⇒ |S| ≥ n, and S can be reduced to a basis.
2. S generates V and |S| = n ⇒ S is a basis for V.
3. S is LI ⇒ |S| ≤ n, and S can be extended to a basis.
Theorem 1.11: dim(V) < ∞; W is a subspace of V. Then
1. dim(W) ≤ dim(V)
2. dim(W) = dim(V) ⇒ W = V.
Corollary 1.11: dim(V ) < ∞; W is a subspace of V . Then any basis for W can be extended to a basis for V .
Chapter 2: Linear Transformations and Matrices
linear transformation T : V → W for vector spaces V(F) and W(F): T(ax + by) = aT(x) + bT(y) for all x, y ∈ V and a, b ∈ F.
function f : X → Y : ∀x ∈ X, ∃ unique f (x) ∈ Y
domain of f : X
codomain of f : Y
range of f : f (X) = {f (x) : x ∈ X}
image of A under f : f (A) = {f (x) : x ∈ A}
preimage of B under f : f^{−1}(B) = {x : f (x) ∈ B};
also called the inverse image
onto: f (X) = Y
one-to-one: f(u) = f(v) ⇒ u = v
inverse of f : f^{−1} : Y → X such that
∀x ∈ X, f^{−1}(f (x)) = x; and ∀y ∈ Y, f (f^{−1}(y)) = y
invertible f : f^{−1} exists (⇔ one-to-one and onto)
restriction of f to A: f_A : A → Y such that
∀x ∈ A, f_A(x) = f(x)
composite or composition of f : X → Y and g : Y → Z:
g ◦ f : X → Z such that ∀x ∈ X, (g ◦ f)(x) = g(f(x)) (We will use the notation gf in place of g ◦ f.)
examples
T_{θ} : R^{2} → R^{2}, rotation by θ:
T_{θ}((a_{1}, a_{2})) = (a_{1} cos θ − a_{2} sin θ, a_{1} sin θ + a_{2} cos θ)
T : R^2 → R^2, projection on the y-axis: T((a_1, a_2)) = (0, a_2)
linearity: T(c(a_1, a_2) + d(b_1, b_2)) = T((ca_1 + db_1, ca_2 + db_2)) = (0, ca_2 + db_2) = (0, ca_2) + (0, db_2) = c(0, a_2) + d(0, b_2) = cT((a_1, a_2)) + dT((b_1, b_2))
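Both examples above act on R^2 as 2×2 matrices, so their linearity can be spot-checked numerically. The sketch below (NumPy; the angle, test vectors, and scalars are arbitrary choices, not from the notes) verifies T(cx + dy) = cT(x) + dT(y) for both the rotation and the projection.

```python
import numpy as np

theta = 0.7  # arbitrary angle
# rotation by theta and projection onto the y-axis, as 2x2 matrices
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[0.0, 0.0],
              [0.0, 1.0]])

x = np.array([1.0, 2.0])   # arbitrary test vectors
y = np.array([-3.0, 0.5])
c, d = 2.0, -1.5           # arbitrary scalars

for T in (R, P):
    # linearity: T(cx + dy) == c T(x) + d T(y)
    assert np.allclose(T @ (c * x + d * y), c * (T @ x) + d * (T @ y))
```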
V = C(R, R): space of continuous functions from R to R
T : V → R, integration over [a, b]: T(f) = ∫_a^b f(t) dt
linearity: T(cf + dg) = ∫_a^b (cf(t) + dg(t)) dt = c ∫_a^b f(t) dt + d ∫_a^b g(t) dt = cT(f) + dT(g)
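The linearity of integration can also be checked numerically. This is a sketch only: the integral is approximated by a midpoint Riemann sum, and f, g, a, b, c, d are arbitrary choices, not from the notes.

```python
import math

def integrate(f, a, b, n=10000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = math.sin
g = lambda t: t * t
a, b = 0.0, 2.0
c, d = 3.0, -1.0

# T(cf + dg) == c T(f) + d T(g), up to floating-point rounding
lhs = integrate(lambda t: c * f(t) + d * g(t), a, b)
rhs = c * integrate(f, a, b) + d * integrate(g, a, b)
assert abs(lhs - rhs) < 1e-9
```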
I_{V} : V → V , identity transformation: I_{V} (v) = v
T_{0} : V → W , zero transformation: T_{0}(v) = 0
null space and range
null space of T : T : V → W : N (T ) = {x ∈ V : T (x) = 0}
range space of T : T : V → W : R(T ) = {T (x) : x ∈ V }
The null space and range of a linear transformation are never empty, because T(0) = 0; that is, 0 ∈ N(T) and 0 ∈ R(T).
Another name for the null space is kernel.
Another name for the range is image.
example:
T : R^2 → R^2, projection on the y-axis: T((a_1, a_2)) = (0, a_2)
N(T) = {(a_1, 0) : a_1 ∈ R}, R(T) = {(0, a_2) : a_2 ∈ R}
V = C(R, R): space of continuous functions
T : V → R, integration over [a, b]: T(f) = ∫_a^b f(t) dt
N(T) = {f : ∫_a^b f(t) dt = 0}, R(T) = R
Theorem 2.1: V and W are vector spaces; T : V → W is a linear transformation. Then N (T ) and R(T ) are subspaces of V and W , respectively.
proof for N (T ): x, y ∈ N (T ), a, b ∈ F
⇒ T (x) = T (y) = 0
⇒ T (ax + by) = aT (x) + bT (y) = 0 [linearity]
⇒ ax + by ∈ N (T )
Theorem 2.2: T : V → W is a linear transformation; β = {v_{1}, · · · , v_{n}} is a basis for V . Then, R(T ) = span(T (β)) = span({T (v_{1}), · · · , T (v_{n})}).
proof:
(i) T (β) ⊆ R(T ) [range]
⇒ span(T (β)) ⊆ R(T ) [R(T ) is a subspace; Theorem 1.5]
(ii) w ∈ R(T ) ⇒ ∃v such that T (v) = w [range]
⇒ v = Σa_{i}v_{i} [basis]
⇒ w = T(v) = T(Σa_i v_i) = Σa_i T(v_i) ∈ span(T(β))
⇒ R(T ) ⊆ span(T (β))
(i), (ii) ⇒ R(T ) = span(T (β))
Is {T (v_{1}), · · · , T (v_{n})} a basis for R(T )?
nullity and rank
nullity of T : nullity(T ) = dim(N (T ))
rank of T : rank(T ) = dim(R(T ))
Theorem 2.3 (dimension theorem): T : V → W is a linear transformation; dim(V) < ∞. Then, nullity(T) + rank(T) = dim(V).
proof: Assume the following:
dim(V ) = n, nullity(T ) = k
{v_{1}, · · · , v_{k}} is a basis for N (T ), and it extends to a basis {v_{1}, · · · , v_{k}, v_{k+1}, · · · , v_{n}} for V .
S = {T (v_{k+1}), · · · , T (v_{n})}
We will then show that S is a basis for R(T ).
“S generates R(T)”: Let w ∈ R(T).
⇒ ∃v ∈ V such that T(v) = w. [range]
⇒ v = Σ_{i=1}^{n} a_i v_i. [basis for V]
⇒ w = T(v) = Σ_{i=1}^{n} a_i T(v_i) = Σ_{i=k+1}^{n} a_i T(v_i) [T(v_1) = · · · = T(v_k) = 0; null space]
⇒ w ∈ span(S)
“S is linearly independent”: Let Σ_{i=k+1}^{n} b_i T(v_i) = 0.
⇒ T(Σ_{i=k+1}^{n} b_i v_i) = 0 [linearity]
⇒ Σ_{i=k+1}^{n} b_i v_i ∈ N(T) [null space]
⇒ Σ_{i=k+1}^{n} b_i v_i = Σ_{i=1}^{k} c_i v_i for some c_i’s [basis for N(T)]
⇒ Σ_{i=1}^{k} c_i v_i + Σ_{i=k+1}^{n} (−b_i) v_i = 0
⇒ c_1 = · · · = c_k = b_{k+1} = · · · = b_n = 0 [basis for V]
Hence S is a basis for R(T), so rank(T) = n − k = dim(V) − nullity(T).
This dimension theorem holds only for finite-dimensional vector spaces.
example:
T_{θ}: R^{2} → R^{2}, rotation by θ:
T_θ((a_1, a_2)) = (a_1 cos θ − a_2 sin θ, a_1 sin θ + a_2 cos θ)
dim(V) = 2 = 2 + 0 = rank(T_θ) + nullity(T_θ)
T : R^2 → R^2, projection on the y-axis: T((a_1, a_2)) = (0, a_2)
dim(V) = 2 = 1 + 1 = rank(T) + nullity(T)
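The dimension theorem can be illustrated numerically for these two examples. The helper below is a sketch (NumPy, not part of the notes): it counts the nullity directly, as the number of orthonormal right-singular vectors that the matrix sends to (approximately) zero, rather than computing it from the theorem itself.

```python
import numpy as np

def rank_and_nullity(A, tol=1e-10):
    """Return (rank, nullity) of A; the nullity is counted directly
    as the number of right-singular vectors that A sends to ~0."""
    _, s, Vt = np.linalg.svd(A)
    null_vecs = [v for v in Vt if np.linalg.norm(A @ v) < tol]
    rank = int((s > tol).sum())
    return rank, len(null_vecs)

# projection onto the y-axis in R^2: rank 1, nullity 1
P = np.array([[0.0, 0.0], [0.0, 1.0]])
r, k = rank_and_nullity(P)
assert (r, k) == (1, 1) and r + k == 2

# rotation by theta: rank 2, nullity 0
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
r, k = rank_and_nullity(R)
assert (r, k) == (2, 0) and r + k == 2
```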
Theorem 2.4: T : V → W is a linear transformation. Then the following holds.
T is one-to-one ⇔ N(T) = {0}.
proof: “⇒”: Assume T is one-to-one, v ∈ N(T), and let u ∈ V.
⇒ T(u + v) = T(u) + T(v) = T(u) [null space]
⇒ u + v = u [one-to-one]
⇒ v = 0 [cancellation law]
“⇐”: Assume N (T ) = {0} and T (u) = T (v).
⇒ T (u) − T (v) = T (u − v) = 0 [linear]
⇒ u − v ∈ N (T ) ⇒ u − v = 0 [assumption] ⇒ u = v
Theorem 2.5: T : V → W is a linear transformation; dim(V ) = dim(W ) < ∞.
Then the following are equivalent.
1. T is one-to-one.
2. T is onto.
3. rank(T ) = dim(V )
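For a square matrix this equivalence can be checked concretely. Below is a sketch with an arbitrarily chosen invertible 3×3 matrix (not from the notes): full rank implies the only solution of Ax = 0 is x = 0 (one-to-one), and every y has a preimage (onto).

```python
import numpy as np

# T: R^3 -> R^3 given by an arbitrary invertible matrix (det = 5)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

assert np.linalg.matrix_rank(A) == 3     # rank(T) = dim(V)

# one-to-one: the only solution of A x = 0 is x = 0
x = np.linalg.solve(A, np.zeros(3))
assert np.allclose(x, 0.0)

# onto: every y has a preimage, found by solving A x = y
y = np.array([1.0, -2.0, 0.5])
x = np.linalg.solve(A, y)
assert np.allclose(A @ x, y)
```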
An intuitive proof: nullity(T) = 0 ⇔ rank(T) = dim(V) = dim(W) ⇔ R(T) = W. [Theorem 1.11]
Rotation by θ is an example for the last two theorems.
example: T : P_2(R) → P_3(R), T(f)(x) = 2f′(x) + ∫_0^x 3f(t) dt
By theorem 2.2,
R(T) = span({T(1), T(x), T(x^2)}) = span({3x, 2 + (3/2)x^2, 4x + x^3})
{3x, 2 + (3/2)x^2, 4x + x^3} is linearly independent.
⇒ It is a basis for R(T) ⇒ rank(T) = 3
⇒ nullity(T) = 0 [dim thm] ⇒ T is one-to-one. [Thm 2.4]
But T is not onto. [rank(T) = 3 < dim(P_3(R)) = 4]
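The rank computation above can be checked by writing T as a matrix in the standard bases {1, x, x^2} and {1, x, x^2, x^3} (a NumPy sketch, not part of the notes; the columns are the coordinates of T(1) = 3x, T(x) = 2 + (3/2)x^2, and T(x^2) = 4x + x^3):

```python
import numpy as np

# matrix of T(f)(x) = 2f'(x) + ∫_0^x 3f(t) dt : P_2(R) -> P_3(R),
# columns = coordinates of T(1), T(x), T(x^2) in {1, x, x^2, x^3}
A = np.array([[0.0, 2.0, 0.0],   # constant term
              [3.0, 0.0, 4.0],   # x
              [0.0, 1.5, 0.0],   # x^2
              [0.0, 0.0, 1.0]])  # x^3

assert A.shape == (4, 3)
assert np.linalg.matrix_rank(A) == 3   # rank(T) = 3, so nullity(T) = 0
```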
example: T : P_2(R) → P_2(R), T(f)(x) = 2f′(x) − 3f(x)
By theorem 2.2,
R(T) = span({T(1), T(x), T(x^2)}) = span({−3, 2 − 3x, 4x − 3x^2})
⇒ rank(T) = 3 = dim(P_2(R))
⇒ T is onto; T is one-to-one. [Theorem 2.5]
Theorem 2.6: {v_{1}, · · · , v_{n}} is a basis for V ; w_{1}, · · · , w_{n} ∈ W . Then there exists only one linear transformation T : V → W such that T (v_{i}) = w_{i}, i = 1, · · · , n.
proof: Let x = Σa_i v_i (the a_i are unique by theorem 1.8), and define T as follows:
∀x ∈ V, T(x) = T(Σa_i v_i) = Σa_i w_i. [linear? unique?]
“T(v_i) = w_i”: clear by letting a_i = 1 and a_j = 0 for j ≠ i.
“linearity”: T (cx + dy) = T (cΣa_{i}v_{i} + dΣb_{i}v_{i}) [y = Σb_{i}v_{i}]
= T (Σ(ca_{i} + db_{i})v_{i}) = Σ(ca_{i} + db_{i})w_{i} [def of T ]
= cΣa_{i}w_{i} + dΣb_{i}w_{i} = cT (Σa_{i}v_{i}) + dT (Σb_{i}v_{i}) [def of T ]
= cT (x) + dT (y)
“uniqueness”: Let a linear transformation U satisfy U (v_{i}) = w_{i}.
∀x ∈ V , U (x) = U (Σa_{i}v_{i}) = Σa_{i}U (v_{i}) = Σa_{i}w_{i} = T (x)
⇒ U = T
Note that the w_i’s can be arbitrary vectors in W, possibly linearly dependent or even repeated.
This theorem says that in order to specify a linear transformation, you only need to specify it on a basis.
It also implies that R(T ) = span({w_{1}, · · · , w_{n}}) by theorem 2.2, whereby you can design R(T ) as you wish.
This theorem is analogous to:
that for two real numbers v ≠ 0 and w, there is only one linear function f : R → R such that f(v) = w. [straight line]
that for six real numbers v_11, v_12, v_21, v_22, w_1, w_2, with {(v_11, v_12), (v_21, v_22)} a basis for R^2, there is only one linear function f : R^2 → R such that f((v_11, v_12)) = w_1 and f((v_21, v_22)) = w_2. [plane]
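Theorem 2.6 is constructive: if the basis vectors are the columns of an invertible matrix V, the unique T with T(v_i) = w_i has matrix W V^{-1}. A NumPy sketch (the particular vectors are arbitrary choices, not from the notes):

```python
import numpy as np

# a basis for R^2 (columns of V) and arbitrary targets in R^3 (columns of W)
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # v1 = (1,0), v2 = (1,1): a basis
W = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 3.0]])          # w1, w2: any vectors, even dependent

# the unique linear T with T(v_i) = w_i has matrix A = W V^{-1}
A = W @ np.linalg.inv(V)

assert np.allclose(A @ V[:, 0], W[:, 0])
assert np.allclose(A @ V[:, 1], W[:, 1])
```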
Matrix representation of a linear transformation
Recall that given a basis β = {v_1, · · · , v_n} for V, we can write x = Σc_i v_i for any x ∈ V, and that [x]_β = (c_1, · · · , c_n)^t is unique by theorem 1.8; it is called the (n-tuple) representation of x in (or relative to) β.
[v_i]_β = (0, · · · , 0, 1, 0, · · · , 0)^t, in which 1 is the i-th element.
ordered basis: basis with a given order of the elements
If the basis is considered as a set, i.e. not ordered, the representation is unique only up to a permutation.
From now on all the bases are assumed ordered.
example: β = {e_1, · · · , e_n}, standard basis for F^n, where e_1 = (1, 0, · · · , 0)^t, e_2 = (0, 1, · · · , 0)^t, · · · , e_n = (0, 0, · · · , 1)^t
n = 3 ⇒ [(2, 3, 1)^t]_β = (2, 3, 1)^t
example: V = P_2(R), f(x) = 4 + 6x − 7x^2
β = {1, x, x^2}, standard basis: [f]_β = (4, 6, −7)^t
γ = {1, 1 + x, 1 + x + x^2}: [f]_γ = (c_1, c_2, c_3)^t
⇒ f = c_1 + c_2(1 + x) + c_3(1 + x + x^2)
⇒ 4 + 6x − 7x^2 = (c_1 + c_2 + c_3) + (c_2 + c_3)x + c_3 x^2
⇒ c_3 = −7, c_2 = 13, c_1 = −2, i.e. [f]_γ = (−2, 13, −7)^t
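Coordinates relative to a non-standard basis amount to solving a linear system: the columns of B are the new basis vectors written in the standard basis, and [f] in the new basis solves B c = [f]_std. A NumPy sketch for the example above:

```python
import numpy as np

# coefficients of f(x) = 4 + 6x - 7x^2 in the standard basis {1, x, x^2}
f = np.array([4.0, 6.0, -7.0])

# columns = the basis {1, 1+x, 1+x+x^2} written in the standard basis
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# solve B c = f for the coordinate vector in the new basis
c = np.linalg.solve(B, f)
assert np.allclose(c, [-2.0, 13.0, -7.0])
```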
Consider T : V → W, a basis β for V, and a basis γ for W. Then what is the relationship between [x]_β and [T(x)]_γ?
matrix multiplication rule:
A: m×n; B: n×p; C = AB ⇒ C: m×p; C_ij = Σ_{k=1}^{n} A_ik B_kj
example (rows separated by semicolons):
[a b] · [c; d] = ac + bd
[a; b] · [c d] = [ac ad; bc bd]
[a b c; d e f] · [g h; i j; k l] = [ag + bi + ck, ah + bj + cl; dg + ei + fk, dh + ej + fl]
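The rule can be checked with a small numerical example (NumPy; the entries are arbitrary):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2x3
B = np.array([[7,  8],
              [9, 10],
              [11, 12]])         # 3x2

C = A @ B                        # C is 2x2
assert C.shape == (2, 2)
# entry (0,0) = sum over k of A[0,k] * B[k,0]
assert C[0, 0] == 1*7 + 2*9 + 3*11
assert np.array_equal(C, [[58, 64], [139, 154]])
```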
matrix representation of a linear transform:
β = {v_1, · · · , v_n} is a basis for V; γ = {w_1, · · · , w_m} is a basis for W; and T : V → W is a linear transformation.
∀x ∈ V, x = Σ_{j=1}^{n} c_j v_j
⇒ [x]_β = (c_1, · · · , c_n)^t, the representation of x in β
⇒ T(x) = T(Σ_{j=1}^{n} c_j v_j) = Σ_{j=1}^{n} c_j T(v_j)
T(v_j) ∈ W ⇒ T(v_j) = Σ_{i=1}^{m} a_ij w_i for some a_ij
⇒ T(x) = Σ_{j=1}^{n} c_j T(v_j) = Σ_{j=1}^{n} c_j Σ_{i=1}^{m} a_ij w_i
⇒ T(x) = c_1(a_11 w_1 + a_21 w_2 + · · · + a_m1 w_m) + · · · + c_n(a_1n w_1 + a_2n w_2 + · · · + a_mn w_m)
⇒ T(x) = Σ_{i=1}^{m} (Σ_{j=1}^{n} a_ij c_j) w_i
⇒ [T(x)]_γ = (Σ_{j=1}^{n} a_1j c_j, · · · , Σ_{j=1}^{n} a_mj c_j)^t = [a_11 · · · a_1n; · · · ; a_m1 · · · a_mn] · (c_1, · · · , c_n)^t = A[x]_β
A = [T]_γ^β is the matrix representation of T in β and γ.
⇒ [T(x)]_γ = [T]_γ^β [x]_β
[T]_γ^β = ([T(v_1)]_γ, · · · , [T(v_n)]_γ) = A, i.e. the j-th column of A is [T(v_j)]_γ.
The matrix representation of T is unique by Theorem 1.8.
T(x) = y ⇒ [T]_γ^β [x]_β = [y]_γ
You can remember this notation by relating [T]_γ^β · [x]_β = [T(x)]_γ to “γ←β · β = γ”: the β’s cancel.
If V = W and β = γ, the notation [T]_β^β simplifies to [T]_β.
⇒ [T(x)]_β = [T]_β [x]_β
That is, [T]_β is the matrix representation of T in β and β.
⇒ [T]_β = ([T(v_1)]_β, · · · , [T(v_n)]_β) = [a_11 · · · a_1n; · · · ; a_n1 · · · a_nn], where β = {v_1, · · · , v_n}.
When the domain and the codomain are the same vector space, the linear transformation is called a linear operator.
example: T : P_3(R) → P_2(R), T(f) = f′;
β = {1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3} is a basis for P_3(R); and γ = {1, x, x^2} is a basis for P_2(R).
T(1) = 0 = 0 · 1 + 0 · x + 0 · x^2
T(1 + x) = 1 = 1 · 1 + 0 · x + 0 · x^2
T(1 + x + x^2) = 1 + 2x = 1 · 1 + 2 · x + 0 · x^2
T(1 + x + x^2 + x^3) = 1 + 2x + 3x^2 = 1 · 1 + 2 · x + 3 · x^2
[T]_γ^β = [0 1 1 1; 0 0 2 2; 0 0 0 3]
[4 − 6x + 3x^3]_β = (10, −6, −3, 3)^t
T(4 − 6x + 3x^3) = −6 + 9x^2, and indeed
[0 1 1 1; 0 0 2 2; 0 0 0 3] · (10, −6, −3, 3)^t = (−6, 0, 9)^t
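The identity [T(x)]_γ = [T]_γ^β [x]_β for this derivative example can be checked numerically (a NumPy sketch, not part of the notes):

```python
import numpy as np

# [T] for T(f) = f' with basis {1, 1+x, 1+x+x^2, 1+x+x^2+x^3} for P_3(R)
# and {1, x, x^2} for P_2(R)
A = np.array([[0, 1, 1, 1],
              [0, 0, 2, 2],
              [0, 0, 0, 3]], dtype=float)

# coordinates of 4 - 6x + 3x^3 in the first basis
x_beta = np.array([10.0, -6.0, -3.0, 3.0])

# [T(x)] should be the coordinates of -6 + 9x^2 in {1, x, x^2}
assert np.allclose(A @ x_beta, [-6.0, 0.0, 9.0])
```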
addition and scalar multiplication of linear transformations:
Definition. Let T, U : V(F) → W(F) be arbitrary functions and a ∈ F. We define
1. T + U : V → W by (T + U)(x) = T(x) + U(x);
2. aT : V → W by (aT)(x) = aT(x)
Theorem 2.7: Let T, U : V(F) → W(F) be linear transformations. Then the following are true.
1. ∀a, b ∈ F, aT + bU is a linear transformation.
2. The set of all linear transformations from V to W becomes a vector space over F.
T_0, the zero transformation, plays the role of the zero vector in this vector space.
space of linear transformations:
L(V, W): vector space of all linear transformations from V to W
L(V): vector space of all linear transformations from V to V
Theorem 2.8: T, U : V(F) → W(F) are linear transformations; dim(V), dim(W) < ∞; a ∈ F. Then the following are true.
1. [T + U]_γ^β = [T]_γ^β + [U]_γ^β
2. [aT]_γ^β = a[T]_γ^β
By this theorem, given a basis β for V and a basis γ for W with dim(V) = n and dim(W) = m, the vector space L(V, W) can be identified with the vector space M_{m×n}.
example: T, U : P_2(R) → P_2(R), T(f) = f′, U(f) = f − 2f′;
β = {1, 1 + x, 1 + x + x^2}; γ = {1, x, x^2}.
(4T + 2U)(f) = 4f′ + 2(f − 2f′) = 2f
[T]_γ^β = [0 1 1; 0 0 2; 0 0 0]; [U]_γ^β = [1 −1 −1; 0 1 −3; 0 0 1];
[4T + 2U]_γ^β = [2 2 2; 0 2 2; 0 0 2] = 4[T]_γ^β + 2[U]_γ^β
With γ as the basis for both the domain and the codomain:
[T]_γ = [0 1 0; 0 0 2; 0 0 0]; [U]_γ = [1 −2 0; 0 1 −4; 0 0 1];
[4T + 2U]_γ = [2 0 0; 0 2 0; 0 0 2] = 4[T]_γ + 2[U]_γ.
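Theorem 2.8 can be checked on this example: since (4T + 2U)(f) = 2f, the matrix of 4T + 2U in the basis {1, x, x^2} should be 2I (a NumPy sketch, not part of the notes):

```python
import numpy as np

# matrix representations in the basis {1, x, x^2}:
# T(f) = f', U(f) = f - 2f'
T = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]], dtype=float)
U = np.array([[1, -2, 0],
              [0, 1, -4],
              [0, 0, 1]], dtype=float)

# (4T + 2U)(f) = 2f, so its matrix should be 2I  (Theorem 2.8)
assert np.allclose(4 * T + 2 * U, 2 * np.eye(3))
```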
Composition of linear transformations
The composition UT of T : V → W and U : W → Z is the function such that ∀x ∈ V, (UT)(x) = U(T(x)).
This composition is a linear transformation from V to Z.
example: T, U : P_2(R) → P_2(R), T(f) = f′, U(f) = f − 2f′.
U(T(f)) = T(f) − 2(T(f))′ = f′ − 2f″
T(U(f)) = (U(f))′ = (f − 2f′)′ = f′ − 2f″
It happens that U(T(f)) = T(U(f)), but this is not always the case.
example: T, U : P_2(R) → P_2(R), T(f) = xf′, U(f) = f′.
U(T(f)) = (T(f))′ = (xf′)′ = f′ + xf″
T(U(f)) = x(U(f))′ = x(f′)′ = xf″
⇒ U(T(f)) ≠ T(U(f))
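The non-commuting example can be seen in matrix form: in the basis {1, x, x^2}, composition becomes matrix multiplication, and the two products differ (a NumPy sketch, not part of the notes):

```python
import numpy as np

# matrices of T(f) = x f' and U(f) = f' on P_2(R), basis {1, x, x^2}
T = np.diag([0.0, 1.0, 2.0])          # x * d/dx scales x^k by k
U = np.array([[0, 1, 0],
              [0, 0, 2],
              [0, 0, 0]], dtype=float)

# composition corresponds to matrix product: [UT] = [U][T]
UT = U @ T
TU = T @ U
assert not np.allclose(UT, TU)         # UT != TU here
```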
Theorem 2.9 and 2.10: S, T, T_1, T_2, U, U_1, and U_2 are linear transformations with appropriate domains and codomains; I is an identity transformation; a ∈ F. Then we have the following.
1. UT is a linear transformation.
2. distributivity (1): U(T_1 + T_2) = UT_1 + UT_2
3. distributivity (2): (U_1 + U_2)T = U_1T + U_2T
4. associativity: S(UT) = (SU)T
5. identity: IT = TI = T (Note that the two I’s are different.)
6. a(UT) = (aU)T = U(aT)
matrix representation of composition
T : V → W and U : W → Z are linear transformations; α = {v_1, · · · , v_n}, β = {w_1, · · · , w_m}, and γ = {z_1, · · · , z_p} are bases for V, W, and Z; B = [T]_β^α; A = [U]_γ^β.
For j = 1, · · · , n,
(UT)(v_j) = U(T(v_j)) = U(Σ_{k=1}^{m} B_kj w_k) = Σ_{k=1}^{m} B_kj U(w_k)
= Σ_{k=1}^{m} B_kj (Σ_{i=1}^{p} A_ik z_i) = Σ_{i=1}^{p} (Σ_{k=1}^{m} A_ik B_kj) z_i
= Σ_{i=1}^{p} C_ij z_i → j-th column of C = AB.
Theorem 2.11: [UT]_γ^α = C = AB = [U]_γ^β [T]_β^α
This equation is related to “γ←α = γ←β · β←α”: the inner β’s cancel.
example:
U : P_3(R) → P_2(R), U(f) = f′; T : P_2(R) → P_3(R), T(f) = ∫_0^x f(t) dt;
α = {1, x, x^2} and β = {1, x, x^2, x^3} are the standard bases for P_2(R) and P_3(R).
[UT]_α = [U]_α^β [T]_β^α = [0 1 0 0; 0 0 2 0; 0 0 0 3] · [0 0 0; 1 0 0; 0 1/2 0; 0 0 1/3] = [1 0 0; 0 1 0; 0 0 1] = [I_{P_2(R)}]_α
[TU]_β = [T]_β^α [U]_α^β = [0 0 0; 1 0 0; 0 1/2 0; 0 0 1/3] · [0 1 0 0; 0 0 2 0; 0 0 0 3] = [0 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1] ≠ [I_{P_3(R)}]_β
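Both products can be verified numerically (a NumPy sketch, not part of the notes): differentiating after integrating gives the identity on P_2(R), but integrating after differentiating loses the constant term.

```python
import numpy as np

# U(f) = f' : P_3(R) -> P_2(R), and T(f) = ∫_0^x f : P_2(R) -> P_3(R),
# in the standard bases {1, x, x^2, x^3} and {1, x, x^2}
U = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]], dtype=float)
T = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 0.5, 0],
              [0, 0, 1/3]])

assert np.allclose(U @ T, np.eye(3))       # UT is the identity on P_2(R)
assert not np.allclose(T @ U, np.eye(4))   # TU is not the identity on P_3(R)
```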
identity matrix I: I_ij = δ_ij = 1 if i = j, 0 if i ≠ j, where δ_ij is the Kronecker delta.
Theorem 2.12: A, A_1, A_2 are m×n matrices; B, B_1, B_2 are n×p matrices; C is a p×q matrix; I_m and I_n are identity matrices of the respective sizes; a ∈ F. Then we have the following.
1. AB is a matrix of size m × p.
2. distributivity (1): A(B_1 + B_2) = AB_1 + AB_2
3. distributivity (2): (A_1 + A_2)B = A_1B + A_2B
4. associativity: (AB)C = A(BC)
5. identity: I_m A = A I_n = A
6. a(AB) = (aA)B = A(aB)
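These matrix-algebra laws can be spot-checked on random matrices of compatible sizes (a NumPy sketch; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))    # m x n
B1 = rng.standard_normal((3, 4))   # n x p
B2 = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))    # p x q

assert (A @ B1).shape == (2, 4)                          # size m x p
assert np.allclose(A @ (B1 + B2), A @ B1 + A @ B2)       # distributivity (1)
assert np.allclose((A @ B1) @ C, A @ (B1 @ C))           # associativity
assert np.allclose(np.eye(2) @ A, A) and np.allclose(A @ np.eye(3), A)
```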