
(1)

 Theorem 1.1(cancellation law):

x, y, z ∈ V ; x + z = y + z ⇒ x = y

 Theorem 1.3: V is a vector space and W ⊆ V . Then W is a subspace of V if and only if

1. x, y ∈ W ⇒ x + y ∈ W and 2. a ∈ F, x ∈ W ⇒ ax ∈ W .

 V = P (F ), W = Pn(F ) = {polynomials of degree n or less}

 V = {all sequences in R}, W = {convergent sequences}

(2)

 Theorem 1.4: Any intersection of subspaces of a vector space V is a subspace of V .

 The union of two subspaces of a vector space V is not a subspace in general.

 example: V = R3,

U1 = {(a1, 0, a3) : a1, a3 ∈ R}, U2 = {(a1, a2, 0) : a1, a2 ∈ R}

are subspaces.

 U1 ∩ U2 = {(a1, 0, 0) : a1 ∈ R} is a subspace.

 U1 ∪ U2 is not a subspace: for a3 ≠ 0 and b2 ≠ 0, (a1, 0, a3) + (b1, b2, 0) = (a1 + b1, b2, a3) ∉ U1 ∪ U2.
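
A quick numerical check of this counterexample, applying the two closure conditions of Theorem 1.3 (a sketch assuming numpy; the membership helpers in_U1, in_U2 are illustrative names, not from the text):

```python
import numpy as np

# Membership tests for the two subspaces of R^3 (illustrative helper names).
def in_U1(v):          # U1 = {(a1, 0, a3)}
    return np.isclose(v[1], 0.0)

def in_U2(v):          # U2 = {(a1, a2, 0)}
    return np.isclose(v[2], 0.0)

x = np.array([1.0, 0.0, 2.0])   # x in U1
y = np.array([3.0, 4.0, 0.0])   # y in U2
s = x + y                       # (4, 4, 2)

# The union fails closure under addition (condition 1 of Theorem 1.3).
print(in_U1(s) or in_U2(s))     # False -> x + y is not in U1 ∪ U2

# The intersection stays closed: coordinates 2 and 3 remain zero.
u = np.array([1.0, 0.0, 0.0]); w = np.array([5.0, 0.0, 0.0])
print(in_U1(u + w) and in_U2(u + w))   # True -> u + w is in U1 ∩ U2
```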

(3)

Linear combination

 linear combination of u1, · · · , uk ∈ V : v = a1u1 + · · · + akuk, ai ∈ F

 span of S: the set of all linear combinations of the vectors in S

 notation: span(S)

 span(∅) = {0} for convenience

 example

 V = R3,

span({(1, 0, 0), (0, 1, 0)})= {(a1, a2, 0) : a1, a2 ∈ R}
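
Membership in a span can be tested numerically by asking whether the vector is a linear combination of the given vectors; a minimal least-squares sketch (assuming numpy; in_span is an illustrative name):

```python
import numpy as np

def in_span(S, v, tol=1e-10):
    """Return True if v is (numerically) a linear combination of the vectors in S."""
    A = np.column_stack(S)                          # spanning vectors as columns
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)  # best-fitting coefficients
    return np.allclose(A @ coeffs, v, atol=tol)

S = [np.array([1.0, 0, 0]), np.array([0.0, 1, 0])]
print(in_span(S, np.array([3.0, -2, 0])))   # True:  (3, -2, 0) = 3e1 - 2e2
print(in_span(S, np.array([0.0, 0, 1])))    # False: third coordinate cannot be produced
```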

(4)

 Theorem 1.5: V is a vector space; W is its subspace; and S ⊆ V . Then

1. span(S) is a subspace of V , and 2. S ⊆ W ⇒ span(S)⊆ W .

 generating set S for V : span(S)= V

 R3 = span({(1, 0, 0), (0, 1, 0), (0, 0, 1)})

 R3 = span({(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)})

 A smallest generating set for a vector space V turns out to be linearly independent; such a set is called a basis. (smallest → linearly independent → basis)
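
For finitely many vectors in R^n, "S generates R^n" can be checked by a rank computation (rank n ⇔ span(S) = R^n); a sketch for the two generating sets listed above (assuming numpy):

```python
import numpy as np

def generates_R_n(vectors, n):
    """A finite set spans R^n exactly when the matrix of the vectors has rank n."""
    return np.linalg.matrix_rank(np.array(vectors, dtype=float)) == n

G1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
G2 = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(generates_R_n(G1, 3), generates_R_n(G2, 3))   # True True
```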

(5)

Linear dependence and linear independence

 linearly dependent S ⊆ V (F ):

there exist distinct u1, · · · , uk ∈ S and a1, · · · , ak ∈ F , not all zero, such that a1u1 + · · · + akuk = 0.

– ai ≠ 0 ⇒ ui = ai⁻¹(−a1)u1 + · · · + ai⁻¹(−ai−1)ui−1 + ai⁻¹(−ai+1)ui+1 + · · · + ai⁻¹(−ak)uk

– 0 ∈ S ⇒ S is linearly dependent. [1 · 0 = 0 with coefficient 1 ≠ 0]

 linearly independent S ⊆ V (F ): not linearly dependent

 equivalent definition: ∀u1, · · · , uk ∈ S,

a1u1 + · · · + akuk = 0 ⇒ a1 = · · · = ak = 0

 ∅ is linearly independent. [convention]

 A set containing a single nonzero vector is linearly indep.
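
For vectors in F^n, the equivalent definition translates into a rank test: u1, · · · , uk are linearly independent exactly when the k × n matrix formed from them has rank k. A sketch (assuming numpy):

```python
import numpy as np

def is_linearly_independent(vectors):
    """a1u1 + ... + akuk = 0 forces all ai = 0 iff the k x n matrix has rank k."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([(1, 0, 0), (0, 1, 0)]))   # True
print(is_linearly_independent([(1, 2, 3), (2, 4, 6)]))   # False: second = 2 * first
print(is_linearly_independent([(0, 0, 0)]))              # False: any set containing 0 is dependent
```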

(6)

Basis and dimension

 basis for V : linearly independent generating set for V

 A basis is a ‘smallest’ generating set for V .

 {(1, 0, 0), (0, 1, 0), (0, 0, 1)}:

a basis for R3.

 {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)}: not a basis for R3.

 {(1, 1, 0), (1, 0, 1)}: not a basis for R3 (linearly independent, but it does not generate R3).

(7)

 example:

 {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is the “standard basis” for R3.

 {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)} is not a basis for R3.

 {(1, 1, 0), (1, 0, 1), (0, 1, 1)} is a basis for R3.

 {1, x, x2} is the “standard basis” for P2(R).

 {x2 + 3x − 2, 2x2 − 3, 5x, x + 1, −x2 − 4x + 4} is not a basis for P2(R).

 {x2 + 3x − 2, 2x2 − 3, x + 1} is a basis for P2(R).

 Lagrange polynomials fi(x) = Π_{j=0, j≠i}^{n} (x − cj)/(ci − cj), i = 0, · · · , n, where ci ≠ cj for i ≠ j, form a basis for Pn(R).

[determined by the n + 1 distinct points ci, i = 0, · · · , n]
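
A numerical sketch of the Lagrange polynomials for n = 2 with the arbitrarily chosen points c0, c1, c2 = 0, 1, 2, verifying the defining property fi(cj) = 1 if i = j and 0 otherwise (assuming numpy):

```python
import numpy as np

c = np.array([0.0, 1.0, 2.0])     # distinct points c_0, c_1, c_2 (an arbitrary choice)
n = len(c) - 1

def lagrange(i, x):
    """f_i(x) = product over j != i of (x - c_j) / (c_i - c_j)."""
    factors = [(x - c[j]) / (c[i] - c[j]) for j in range(n + 1) if j != i]
    return np.prod(factors)

# Defining property: f_i(c_j) = 1 if i == j, else 0.
E = np.array([[lagrange(i, cj) for cj in c] for i in range(n + 1)])
print(E)   # identity matrix (up to rounding)

# So a_0 f_0 + a_1 f_1 + a_2 f_2 = 0 evaluated at c_j gives a_j = 0:
# the f_i are linearly independent, consistent with their being a basis for P_2(R).
```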

(8)

 Theorem 1.8: representation theorem β = {u1, · · · , un} is a basis for V

⇔ ∀v ∈ V , ∃ unique (a1, · · · , an) such that v = a1u1 + · · · + anun.

proof: “⇒”: Let β be a basis for V .

⇒ V =span(β) [basis]

⇒ ∀v ∈ V , v = a1u1 + · · · + anun for some a1, · · · , an.

Show uniqueness:

Let also v = b1u1 + · · · + bnun.

⇒ 0 = (a1 − b1)u1 + · · · + (an − bn)un

(9)

⇒ a1 = b1, · · · , an = bn [β is lin indep]

“⇐”: Assume ∀v ∈ V , ∃ unique a1, · · · , an such that v = a1u1 + · · · + anun

⇒ β = {u1, · · · , un} generates V .

Show linear independence:

Let 0 = c1u1 + · · · + cnun.

⇒ c1 = 0, · · · , cn = 0 [0u1 + · · · + 0un = 0, uniqueness]

 This theorem means that given a vector space V (F ) and its basis β, whatever kind of vector space it may be, each vector v in V is uniquely represented by (a1, · · · , an).

 Given a basis, there is a one-to-one correspondence between V and Fn.

 [v]β = (a1, · · · , an)t is called the (n-tuple) representation of v in β or relative to β.

(10)

 example:

 β = {1, x, x2}, a0 + a1x + a2x2 → (a0, a1, a2)

 β = {1, 1 + x, 1 + x + x2},

a0 + a1x + a2x2 → (a0 − a1, a1 − a2, a2)
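
The second representation can be verified by solving the change-of-basis system whose columns are the coefficient vectors of 1, 1 + x, 1 + x + x2 in the standard basis; a sketch with an arbitrary example polynomial (assuming numpy):

```python
import numpy as np

# Columns: coefficient vectors (in the standard basis {1, x, x^2}) of 1, 1+x, 1+x+x^2.
B = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

a0, a1, a2 = 4.0, -3.0, 2.0          # arbitrary polynomial a0 + a1 x + a2 x^2
v = np.array([a0, a1, a2])

coords = np.linalg.solve(B, v)       # [v]_beta
print(coords)                        # [ 7. -5.  2.] = (a0 - a1, a1 - a2, a2)
```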

(11)

 β = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} (the standard basis for M2×2(R)),

[a b; c d] → (a, b, c, d)

 β = {e^{i2πkf0t} : k = · · · , −1, 0, 1, · · · },

f (t) = Σ_{k=−∞}^{∞} ak e^{i2πkf0t} → (· · · , a−1, a0, a1, · · · )

periodic function with frequency f0 → Fourier coefficients
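
A numerical sketch of this correspondence (assuming numpy), approximating the standard coefficient formula ak = (1/T) ∫ over one period of f(t) e^{−i2πkf0t} dt by a Riemann sum, for an arbitrarily chosen periodic signal:

```python
import numpy as np

f0 = 2.0                               # fundamental frequency (arbitrary choice)
T = 1.0 / f0
N = 1024
t = np.arange(N) * T / N               # one period, uniformly sampled

f = np.cos(2 * np.pi * f0 * t) + 0.5   # example periodic signal

def coeff(k):
    """a_k = (1/T) * integral over one period of f(t) e^{-i 2 pi k f0 t} dt (Riemann sum)."""
    return np.mean(f * np.exp(-2j * np.pi * k * f0 * t))

print(np.round([coeff(k) for k in (-1, 0, 1)], 6))   # approx [0.5, 0.5, 0.5]: coordinates of f in beta
```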

(12)

 Theorem 1.9: A finite generating set S for V can be reduced to a basis for V .

proof:

(i) If S = ∅ or S = {0}, then V = span(S) = {0}.

Hence β = ∅ is considered a basis for V .

(ii) If S has non-zero vectors, let β = {u1, · · · , uk} be a largest linearly independent subset of S.

⇒ span(β) ⊆ span(S) = V [subset, generating set]

To show V =span(S) ⊆ span(β), we show S ⊆ span(β).

v ∈ S ⇒ v ∈ β or v ∉ β. [If v ∈ β then v ∈ span(β), done.]

v ∉ β ⇒ β ∪ {v} is lin dep. [β is a largest lin indep subset]

⇒ v ∈ span(β) [Thm 1.7]

⇒ S ⊆ span(β) ⇒ span(S) ⊆ span(β) [Thm 1.5]
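
For vectors in F^n, one concrete reading of this reduction is: walk through S and keep each vector that is not already in the span of the vectors kept so far, i.e. that increases the rank. A sketch under that reading (assuming numpy; reduce_to_basis is an illustrative name):

```python
import numpy as np

def reduce_to_basis(S):
    """Keep a maximal linearly independent subset of S; it spans span(S)."""
    basis = []
    for v in S:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.array(candidate, dtype=float)) == len(candidate):
            basis = candidate            # v is independent of the vectors kept so far
    return basis

S = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1)]
print(reduce_to_basis(S))                # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
```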

(13)

 example:

 {x2 + 3x − 2, 2x2 − 3, 5x, x + 1, −x2 − 4x + 4} spans P2(R).

{x2 + 3x − 2, 2x2 − 3, 5x} is a basis for P2(R).

 {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)} spans R3. {(0, 1, 0), (0, 0, 1), (1, 1, 1)} is a basis for R3.

 Theorem 1.10(replacement theorem):

G ⊆ V is a generating set for V and has n elements;

L ⊆ V is a linearly independent set and has m elements. Then 1. m ≤ n, and

2. L can be extended to a generating set for V by adding n − m elements from G.

(14)

Proof by induction:

Let the set of the n − m elements be H.

(i) If |L| = m = 0, then 1. 0 ≤ n and

2. H = G generates V .

(ii) Assume the theorem holds for |L| = m. That is, assume 1. m ≤ n, and

2. ∃H ⊆ G such that |H| = n − m and L ∪ H generates V .

(iii) To show that the theorem also holds for |L′| = m + 1,

(15)

let L′ = {v1, · · · , vm+1} be linearly independent, and let L = {v1, · · · , vm}.

⇒ L is linearly independent. [Thm 1.6]

⇒ m ≤ n and ∃H = {u1, · · · , un−m} such that L ∪ H generates V . [(ii)]

⇒ vm+1 = a1v1 + · · · + amvm + b1u1 + · · · + bn−mun−m

If m = n

⇒ vm+1 = a1v1 + · · · + amvm: contradiction [L′ is lin indep]

⇒ m < n [(ii): m ≤ n]

⇒ m + 1 ≤ n : “1”

bi ≠ 0 for some i [L′ is lin indep], and by rearranging we can let i = 1 so that b1 ≠ 0.

⇒ u1 = (−b1⁻¹a1)v1 + · · · + (−b1⁻¹am)vm + (b1⁻¹)vm+1 + (−b1⁻¹b2)u2 + · · · + (−b1⁻¹bn−m)un−m : (1)

Let H′ = {u2, · · · , un−m}.

(16)

⇒ L′ ∪ H′ = {v1, · · · , vm, vm+1, u2, · · · , un−m} and L ∪ H = {v1, · · · , vm, u1, u2, · · · , un−m}

⇒ V = span(L ∪ H) ⊆ span(L′ ∪ H) = span(L′ ∪ H′) ⊆ V . [Thm 1.5; u1 ∈ span(L′ ∪ H′) by (1)]

⇒ L′ ∪ H′ generates V with |H′| = n − (m + 1) : “2”

 example:

 G = {x2 + 3x − 2, 2x2 − 3, 5x, x + 1, −x2 − 4x + 4} ⊆ P2(R).

L = {1, x} ⇒ H = {5x, x + 1, −x2 − 4x + 4}

 G = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)} generates R3.

L = {(1, 1, 0)} ⇒ H = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

 In these examples, H can be any three elements from G.
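
The same rank test gives a concrete extension for the R3 example: adjoin vectors from G that enlarge the span of L, stopping once R3 is reached (a sketch assuming numpy; extend_to_generating is an illustrative name):

```python
import numpy as np

def extend_to_generating(L, G, n):
    """Adjoin vectors from G to the linearly independent set L until it spans R^n."""
    result = list(L)
    rank = np.linalg.matrix_rank(np.array(result, dtype=float))
    for u in G:
        if rank == n:
            break                                   # already generates R^n
        new_rank = np.linalg.matrix_rank(np.array(result + [u], dtype=float))
        if new_rank > rank:                         # u is not already in span(result)
            result, rank = result + [u], new_rank
    return result

L = [(1, 1, 0)]
G = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(extend_to_generating(L, G, 3))                # [(1, 1, 0), (1, 0, 0), (0, 0, 1)]
```

Note that this greedy version stops as soon as the set spans R3 and so produces a basis, whereas the theorem as stated adds n − m vectors from G and yields a (possibly larger) generating set, as in the example above.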

(17)

 basis for V : linearly independent generating set for V

 {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is the “standard basis” for R3.

 {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)} is not a basis for R3.

 {(1, 1, 0), (1, 0, 1)} is not a basis for R3.

 Theorem 1.8: representation theorem β = {u1, · · · , un} is a basis for V

⇔ ∀v ∈ V , ∃ unique (a1, · · · , an) such that v = a1u1 + · · · + anun.

⇒ [v]β = (a1, · · · , an)t is called the (n-tuple) representation of v in β or relative to β.

(18)

 Theorem 1.9: A finite generating set G for V can be reduced to a basis for V , that is, a basis β is a subset of G.

 Theorem 1.10(replacement theorem):

(19)

 Corollary 1.10.1: If V has a finite basis, then every basis for V contains the same number of vectors.

proof: Let β and γ be bases for V .

⇒ |γ| ≤ |β|. [γ is lin indep; β spans V ; Thm 1.10]

⇒ |β| ≤ |γ|. [β is lin indep; γ spans V ; Thm 1.10]

 dimension: the number of vectors in a basis

 The dimension can be either finite or infinite.

 This definition of dimension applies to all spaces with algebraic structure: vector, normed linear, inner product, Banach, Hilbert, and Euclidean spaces.

 There are other kinds of dimension, e.g. topological or fractal, but they are not our concern.

(20)

 example:

 dim({0}) = 0

 dim(Fn) = n

 dim(Mm×n) = mn

 dim(Pn) = n + 1

 The space of complex numbers can be

1. a vector space of dimension 1 when the field is complex.

2. a vector space of dimension 2 when the field is real.

(21)

 Corollary 1.10.2, expanded: Let dim(V ) = n.

1. S generates V .

 ⇒ |S| ≥ n

 ⇒ S can be reduced to a basis for V . [Thm 1.9]

2. S generates V and |S| = n.

⇒ S is a basis for V .

3. S is linearly independent.

 ⇒ |S| ≤ n

 ⇒ S can be extended to a basis for V .

4. S is linearly independent and |S| = n.

⇒ S is a basis for V .

(22)

 Note that

 |S| ≥ n ⇏ S generates V .

 |S| ≤ n ⇏ S is linearly independent.

 |S| = n ⇏ S is a basis for V .
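
A single concrete set shows all three failures at once: S = {(1, 0, 0), (2, 0, 0), (3, 0, 0)} has |S| = 3 = dim(R3) yet rank 1, so it does not generate R3, is not linearly independent, and is not a basis (a sketch assuming numpy):

```python
import numpy as np

# |S| = 3 = dim(R^3), yet S is neither generating, nor independent, nor a basis.
S = [(1, 0, 0), (2, 0, 0), (3, 0, 0)]
print(np.linalg.matrix_rank(np.array(S, dtype=float)))   # 1 < 3
```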

(23)

 Theorem 1.11: dim(V ) < ∞; W is a subspace of V . Then 1. dim(W ) ≤ dim(V )

2. dim(W ) = dim(V ) ⇒ W = V .

proof: Let α be a basis for W .

“1”: dim(W ) = |α| ≤ dim(V ). [α is lin indep; Corol 1.10.2]

“2”: If |α| = dim(V ),

⇒ α is a basis for V [α is lin indep and |α| = dim(V ); Corol 1.10.2]

⇒ W = span(α) = V .

 Corollary 1.11: dim(V ) < ∞; W is a subspace of V . Then any basis for W can be extended to a basis for V .

proof: Let α be a basis for W .

⇒ α is linearly independent.

⇒ α can be extended to a basis for V . [Corol 1.10.2]

(24)

 So the best way to describe a vector space V and its subspace W is to find a basis {u1, · · · , um} for W and extend it to a basis {u1, · · · , um, um+1, · · · , un} for V .

 example:

 Though our discussion considers mainly finite-dimensional vector spaces, the discussion can be generalized to infinite-dimensional vector spaces.

(25)

Chapter 2: Linear Transformations and Matrices

 Let us now consider a function from one vector space to another that satisfies linearity, and call it a linear transformation.

 linear transformation T : V → W for vector spaces V (F ) and W (F ) : ∀x, y ∈ V and ∀a ∈ F ,

1. T (x + y) = T (x) + T (y) and 2. T (ax) = aT (x)

 The two linearity conditions can be replaced by one:

∀x, y ∈ V and ∀a, b ∈ F , T (ax + by) = aT (x) + bT (y)

 T is linear ⇒ T (0) = 0

Note that the first 0 is the zero vector of V and the second is that of W .
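
A quick numerical check of the combined condition T (ax + by) = aT (x) + bT (y) for one linear map and one non-linear map (a sketch assuming numpy; the two maps are arbitrary illustrations):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T_lin(v):                        # a matrix map x -> Ax: linear
    return A @ v

def T_shift(v):                      # a translation: not linear
    return v + np.array([1.0, 0.0])

x, y = np.array([1.0, -2.0]), np.array([0.5, 4.0])
a, b = 3.0, -1.5

for T in (T_lin, T_shift):
    print(np.allclose(T(a * x + b * y), a * T(x) + b * T(y)))   # True, then False

print(T_shift(np.zeros(2)))          # [1. 0.] != 0, consistent with "T linear => T(0) = 0"
```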

(26)

 Why linear? Linearity of the transformation matches linearity of vector spaces (closed under linear combination).

 A subspace is mapped to a subspace.

 A linear transformation has a matrix representation (will be explained).

 It allows simpler computation, systematic analysis, and more applications (including those requiring linearization in a local region of space or time).

 terms related to linear transformation:

 function f : X → Y : ∀x ∈ X, ∃ unique f (x) ∈ Y

 domain of f : X

 codomain of f : Y

 range of f : f (X) = {f (x) : x ∈ X}

(27)

 image of A under f : f (A) = {f (x) : x ∈ A}

 preimage of B under f : f⁻¹(B) = {x : f (x) ∈ B};

also called the inverse image

 onto: f (X) = Y

 one-to-one: f (u) = f (v) ⇒ u = v

 inverse of f : f⁻¹ : Y → X such that

∀x ∈ X, f⁻¹(f (x)) = x; and ∀y ∈ Y, f (f⁻¹(y)) = y

(28)

 invertible f : f⁻¹ exists (⇔ one-to-one and onto)

 restriction of f to A ⊆ X: fA : A → Y such that

∀x ∈ A, fA(x) = f (x)

 composite or composition of f : X → Y and g : Y → Z:

g ◦ f : X → Z such that ∀x ∈ X, (g ◦ f )(x) = g(f (x)) (We will use the notation gf in place of g ◦ f .)
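
A small sketch of composition (assuming numpy; compose, f, g are illustrative names), also previewing that composing two matrix maps x → Ax and x → Bx matches the matrix product BA:

```python
import numpy as np

def compose(g, f):
    """(g o f)(x) = g(f(x)); written gf in the text."""
    return lambda x: g(f(x))

A = np.array([[1.0, 2.0], [0.0, 1.0]])     # f(x) = A x : R^2 -> R^2
B = np.array([[0.0, -1.0], [1.0, 0.0]])    # g(x) = B x : R^2 -> R^2

f = lambda x: A @ x
g = lambda x: B @ x
x = np.array([3.0, -1.0])

print(compose(g, f)(x))          # g(f(x))
print(B @ A @ x)                 # same vector: composition matches the matrix product B A
```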
