2012 Spring semester
Linear Algebra
Make-up class
(Section 3.4 - Section 3.6)
(1) Subspaces of Rn
Definition
W (≠ ∅) ⊂ Rn is called a subspace of Rn if it is closed under addition and scalar multiplication.
Examples (Subspaces of Rn)
• {o} : the zero space
• lines and planes through o in Rn
• Rn itself
• For v1, . . . , vs ∈ Rn, {t1v1 + · · · + tsvs | t1, . . . , ts ∈ R} is a subspace of Rn.
(2) Span
Definition
Let v1, . . . , vs ∈ Rn. span{v1, . . . , vs} := {t1v1 + · · · + tsvs | t1, . . . , ts ∈ R} : the span of v1, . . . , vs
Examples
{o} = span{o}, Rn = span{e1, . . . , en}
span{v} : line through o in Rn (for v ≠ o)
span{v1, v2} : plane through o in Rn (for non-collinear v1, v2)
(3) Solution space of a linear system
Theorem
Let A be an m × n matrix. Then {x ∈ Rn | Ax = o} is a subspace of Rn, which is called the solution space of the system.

Possible solution spaces of a homogeneous linear system:
• In R2: {o}, a line through o, or R2 itself.
• In R3: {o}, a line through o, a plane through o, or R3 itself.
(3) Solution space of a linear system (continued)
Theorem
(a) For an m × n matrix A,
{x ∈ Rn|Ax = o} = Rn ⇔ A = O.
(b) For m × n matrices A, B,
A = B ⇔ Ax = Bx, ∀x ∈ Rn. (Proof: apply (a) to A − B.)
(4) Linear independence
The geometric properties of the subspace {t1v1 + · · · + tsvs | t1, . . . , ts ∈ R} are affected by the interrelationships among the vectors v1, . . . , vs.
Definition
S (≠ ∅) = {v1, . . . , vs} ⊂ Rn is said to be linearly independent if
c1v1 + · · · + csvs = o ⇒ c1 = · · · = cs = 0.
(4) Linear independence (continued)
Examples
v is linearly dependent if v = o; v is linearly independent if v ≠ o.
For S (≠ ∅) ⊂ Rn, if o ∈ S then S is linearly dependent.
Theorem
S = {v1, . . . , vs} ⊂ Rn (s ≥ 2) is linearly dependent
⇔ at least one of the vectors in S is expressible as a linear combination of the other vectors in S.
(5) Linear independence and homogeneous linear systems
Theorem
Ax = o has only the trivial solution
⇔ the column vectors of A are linearly independent.
Examples
Determine whether the given vectors are linearly independent or not.
v1 = (1, 2, 1), v2 = (2, 5, 0), v3 = (3, 3, 8)
v1 = (1, 2, −1), v2 = (6, 4, 2), v3 = (4, −1, 5)
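A quick way to check such examples is to place the vectors in the columns of a matrix A and test whether Ax = o has only the trivial solution. A minimal sketch (not part of the original slides), assuming Python with the sympy library:

from sympy import Matrix

# Columns are the given vectors; by the theorem above,
# independence <=> Ax = o has only the trivial solution <=> rank(A) = number of columns.
A1 = Matrix([[1, 2, 3],
             [2, 5, 3],
             [1, 0, 8]])   # columns: v1 = (1,2,1), v2 = (2,5,0), v3 = (3,3,8)
A2 = Matrix([[1, 6, 4],
             [2, 4, -1],
             [-1, 2, 5]])  # columns: v1 = (1,2,-1), v2 = (6,4,2), v3 = (4,-1,5)

print(A1.rank() == A1.cols)  # True: the first set is linearly independent
print(A2.rank() == A2.cols)  # False: the second set is linearly dependent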
(5) Linear independence and homogeneous linear systems (continued)
Example
v1 = (2, −4, 6), v2 = (0, 7, −5), v3 = (6, 9, 8), v4 = (5, 0, 1)
Theorem
If S = {v1, . . . , vs} ⊂ Rn with s > n, then S is linearly dependent. (Here s = 4 > 3 = n, so v1, v2, v3, v4 are linearly dependent.)
(6) A unifying theorem
Theorem
For an n × n matrix A, TFAE:
(a) RREF(A) = In.
(b) A = E1 · · · Ek, where each Ei is an elementary matrix.
(c) A is invertible.
(d) Ax = o has only the trivial solution.
(e) Ax = b is consistent for every b ∈ Rn.
(f) Ax = b has exactly one solution for every b ∈ Rn.
(g) The column vectors of A are linearly independent.
(h) The row vectors of A are linearly independent.
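Several of these equivalent conditions can be checked side by side on a concrete matrix. A small sketch, assuming sympy (the matrix A below is an arbitrary invertible example chosen for illustration):

from sympy import Matrix, eye

A = Matrix([[1, 2],
            [3, 5]])               # det(A) = -1, so A is invertible

print(A.rref()[0] == eye(2))       # (a) RREF(A) = I_n
print(A.det() != 0)                # (c) A is invertible
print(A.nullspace() == [])         # (d) Ax = o has only the trivial solution
print(A.rank() == A.cols)          # (g) the column vectors are linearly independent
# All four print True, as the theorem predicts.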
(7) The relationship between Ax = b and Ax = o
For x0, v1, . . . , vs ∈ Rn, let W = span{v1, . . . , vs}. Then
x0 + W = {x0 + w | w ∈ W}
is called the translation of W by x0.
Theorem
If Ax = b (b ≠ o) is consistent with particular solution x0 and W = {x | Ax = o}, then
{x | Ax = b} = x0 + W = {x0 + xh | Axh = o}.
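The theorem can be illustrated numerically: take a particular solution x0 and a homogeneous solution xh, and check that x0 + t·xh again solves Ax = b. A sketch assuming sympy (the matrix and vectors below are an arbitrary example, not from the slides):

from sympy import Matrix, Rational

A = Matrix([[1, 1, 2],
            [2, 4, -3]])
b = Matrix([3, 1])

x0 = Matrix([Rational(11, 2), Rational(-5, 2), 0])  # a particular solution of Ax = b
print(A * x0 == b)                                  # True

xh = Matrix([-11, 7, 2])                            # a solution of Ax = o
print(A * xh == Matrix([0, 0]))                     # True

# x0 + t*xh solves Ax = b for every t, so the solution set is x0 + W:
print(A * (x0 + 5 * xh) == b)                       # True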
(7) The relationship between Ax = b and Ax = o (continued)

Possible solution sets of a consistent system:
• In R2: a point, a line, or R2.
• In R3: a point, a line, a plane, or R3.

Theorem
Let A be an m × n matrix. Then Ax = o has only the trivial solution
⇔ Ax = b has at most one solution for every b ∈ Rm.
A nonhomogeneous linear system with more unknowns than equations is either inconsistent or has infinitely many solutions.
(8) Consistency of a linear system from the vector point of view
Theorem
For an m × n matrix A,
Ax = b is consistent ⇔ b ∈ col(A) : column space of A.
Examples
Is w = (9, 1, 0) a linear combination of v1 = (1, 2, 3), v2 = (1, 4, 6), v3 = (2, −3, −5)?
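Deciding this is exactly a consistency question: w ∈ span{v1, v2, v3} ⇔ the system whose coefficient matrix has columns v1, v2, v3 and right-hand side w is consistent. A sketch assuming sympy (the symbols c1, c2, c3 for the coefficients are my notation):

from sympy import Matrix, linsolve, symbols

c1, c2, c3 = symbols('c1 c2 c3')
v1, v2, v3 = Matrix([1, 2, 3]), Matrix([1, 4, 6]), Matrix([2, -3, -5])
w = Matrix([9, 1, 0])

A = v1.row_join(v2).row_join(v3)       # matrix with columns v1, v2, v3
print(linsolve((A, w), [c1, c2, c3]))  # {(1, 2, 3)}: w = v1 + 2 v2 + 3 v3, so w ∈ col(A)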
(9) Hyperplanes, geometric interpretation of solution spaces
Definition
For a1, . . . , an, b ∈ R with a = (a1, . . . , an) ≠ o,
{(x1, . . . , xn) ∈ Rn | a1x1 + · · · + anxn = b} = {x ∈ Rn | a · x = b} : a hyperplane in Rn
a⊥ := {x ∈ Rn | a · x = 0} : the orthogonal complement of a
Theorem
Let A be an m × n matrix with row vectors r1, . . . , rm. Then the solution space of Ax = o is the intersection of the hyperplanes through o determined by the rows:
{x ∈ Rn | Ax = o} = r1⊥ ∩ · · · ∩ rm⊥.
(10) Diagonal matrix
D = diag(d1, . . . , dn), D−1 = diag(1/d1, . . . , 1/dn), Dk = diag(d1^k, . . . , dn^k) (true for k < 0 if D is invertible).
A diagonal matrix is invertible ⇔ its main diagonal entries are all nonzero.
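These diagonal formulas are easy to confirm computationally. A brief sketch assuming sympy (the diagonal entries 2, 3, 5 are an arbitrary example):

from sympy import diag

D = diag(2, 3, 5)
print(D.inv())   # diagonal matrix with entries 1/2, 1/3, 1/5: reciprocals of the diagonal
print(D**3)      # diagonal matrix with entries 8, 27, 125: cubes of the diagonal
print(D**-2)     # entries 1/4, 1/9, 1/25: negative powers work since D is invertible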
(11) Triangular matrix
A square matrix in which all entries above (below, resp.) the main diagonal are zero is said to be lower (upper, resp.) triangular.
A is upper (lower, resp.) triangular ⇒ AT is lower (upper, resp.) triangular.
(upper triangular)(upper triangular) = upper triangular,
(lower triangular)(lower triangular) = lower triangular.
A triangular matrix is invertible ⇔ its main diagonal entries are all nonzero.
A is invertible upper (lower, resp.) triangular ⇒ A−1 is upper (lower, resp.) triangular.
(12) Symmetric and Skew-symmetric matrices
A square matrix A is said to be symmetric if AT = A, and skew-symmetric if AT = −A.
Let A, B be n × n symmetric matrices. Then AT, A + B, A − B, cA are symmetric. AB is symmetric ⇔ AB = BA
If A is an invertible symmetric matrix then A−1 is symmetric. [Note] For any m × n matrix A, AAT and ATA are symmetric.
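The [Note] is quick to check on a non-square example. A sketch assuming sympy (the 2 × 3 matrix below is arbitrary):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])
print((A * A.T).is_symmetric())   # True: A A^T is a symmetric 2 x 2 matrix
print((A.T * A).is_symmetric())   # True: A^T A is a symmetric 3 x 3 matrix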
(13) Inverting I − A when A is nilpotent
Let A be an n × n matrix. x ∈ Rn is called a fixed point of A if Ax = x, i.e., (I − A)x = o.
Theorem
For a square matrix A, if Ak = O for some positive integer k then I − A is invertible and
(I − A)−1 = I + A + · · · + Ak−1.
(Proof: (I − A)(I + A + · · · + Ak−1) = I − Ak = I − O = I.)
A square matrix A is said to be nilpotent if Ak = O for some positive integer k.
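A strictly triangular matrix gives a concrete nilpotent example on which the theorem can be tested directly. A sketch assuming sympy:

from sympy import Matrix, eye, zeros

A = Matrix([[0, 1, 2],
            [0, 0, 3],
            [0, 0, 0]])                 # strictly upper triangular, hence nilpotent
print(A**3 == zeros(3))                 # True: A^3 = O, i.e. k = 3

B = eye(3) + A + A**2                   # I + A + ... + A^{k-1}
print((eye(3) - A) * B == eye(3))       # True: B = (I - A)^{-1}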
(14) Inverting I − A by power series
[Observation] For 0 < x < 1, lim_{k→+∞} (1 − x)(1 + x + x^2 + · · · + x^k) = 1.
For a square matrix A,
(I − A)(I + A + A2+· · · ) = I.
Theorem
For an n × n matrix A = [aij], if |a1j| + · · · + |anj| < 1 for each j = 1, . . . , n (or |ai1| + · · · + |ain| < 1 for each i = 1, . . . , n), then I − A is invertible and
(I − A)−1 = I + A + A2 + · · · .
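The series can be tested numerically: when the row (or column) sums of |aij| stay below 1, the partial sums I + A + · · · + Ak approach (I − A)−1. A sketch assuming Python with numpy (the matrix A is an arbitrary example satisfying the row-sum condition):

import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])              # every row sum of |a_ij| is < 1

exact = np.linalg.inv(np.eye(2) - A)

S = np.eye(2)                           # running partial sum I + A + ... + A^k
P = np.eye(2)                           # current power A^k
for _ in range(50):
    P = P @ A
    S += P

print(np.allclose(S, exact))            # True: the partial sums converge to (I - A)^{-1}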