Page 1
Chapter 3 (2)
Review: Elementary matrix operations and elementary matrices
■ w = T(v) turns into y = Ax, a system of linear equations.
type 1: \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
type 2: \begin{pmatrix} -2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
type 3: \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -2 \\ 0 & 0 & 1 \end{pmatrix}
Page 2
Review: Rank and inverse of matrix
■ rank of a matrix A, rank(A): defined as rank(L_A)
■ Theorem 3.3: rank(T) = rank([T]_β^γ)
■ Theorem 3.4: A ∈ M_{m×n}; P ∈ M_{m×m}; Q ∈ M_{n×n}; P and Q are invertible. Then
rank(A) = rank(PA) = rank(AQ) = rank(PAQ)
■ Theorem 3.5: rank(A) = dim(column space of A). In other words, the rank of a matrix A is the maximum number of linearly independent columns of A.
■ Theorem 3.6: An m × n matrix A of rank r can be transformed into
D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix}
by elementary row and column operations, where the O_i are zero matrices.
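The rank invariance in Theorem 3.4 is easy to spot-check numerically. A minimal sketch, assuming numpy is available; the matrices A, P, Q below are illustrative choices, not from the text:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])   # rank 2: row 2 = 2 * row 1
P = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 2.]])   # invertible (upper triangular, nonzero diagonal)
Q = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])   # invertible (lower triangular, nonzero diagonal)

r = np.linalg.matrix_rank(A)
# Multiplying by an invertible P on the left or Q on the right preserves rank.
assert np.linalg.matrix_rank(P @ A) == r
assert np.linalg.matrix_rank(A @ Q) == r
assert np.linalg.matrix_rank(P @ A @ Q) == r
print(r)   # 2
```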
Page 3
■ Corollary 3.6.1: D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix} = BAC for some invertible matrices B and C. This follows from D = E_p E_{p−1} ⋯ E_1 A G_1 G_2 ⋯ G_q, where the E_i and G_j are elementary matrices.
■ matrix inversion by Gaussian elimination:
(A|I_n) → (I_n|B) by eros (elementary row operations) [not by ecos (elementary column operations)]
⇒ A = B^{−1}
■ We can find the inverse of a linear transformation T : V → V by computing the inverse of its matrix representation in a basis β:
T → [T]_β → ([T]_β)^{−1} = [T^{−1}]_β → T^{−1}
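The (A|I_n) → (I_n|B) recipe above can be sketched directly in code. A minimal Gauss-Jordan sketch in plain Python, using exact arithmetic via `Fraction`; the 2×2 test matrix is an illustrative choice, not from the text:

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by reducing (A|I) to (I|B) with eros only; B = A^(-1)."""
    n = len(A)
    # Build the augmented matrix (A | I_n) with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Type-1 ero: swap in a row with a nonzero pivot (fails if A is singular).
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Type-2 ero: scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Type-3 eros: clear the pivot column in every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

B = invert([[2, 1], [1, 1]])
print(B)   # [[1, -1], [-1, 2]], the inverse of [[2, 1], [1, 1]]
```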
Page 4
System of linear equations
■ system of linear equations:
a_{11}x_1 + a_{12}x_2 + ⋯ + a_{1n}x_n = b_1
a_{21}x_1 + a_{22}x_2 + ⋯ + a_{2n}x_n = b_2
⋯
a_{m1}x_1 + a_{m2}x_2 + ⋯ + a_{mn}x_n = b_m
A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}, x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix} ⇒ Ax = b
■ Recall that this can be considered an n-tuple–matrix representation of w = T(v) for a linear transformation T.
Page 5
■ solution to the system: s such that As = b
■ solution set to the system: {s ∈ F^n : As = b}
■ example: x_1 + x_2 + x_3 = 0, x_1 + x_2 − x_3 = 2
⇒ \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}
⇒ x_1 + x_2 = 1, x_3 = −1 ⇒ solution set = {(x_1, x_2, −1)^t : x_1 + x_2 = 1}
= {(t, 1 − t, −1)^t : t ∈ ℝ}
= \left\{ \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + t \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} : t ∈ ℝ \right\}
The last two are parametric expressions.
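The parametric solution set can be spot-checked numerically, assuming numpy is available: every sampled point of {(t, 1−t, −1)^t} should satisfy Ax = b.

```python
import numpy as np

# System from the example: x1 + x2 + x3 = 0, x1 + x2 - x3 = 2.
A = np.array([[1., 1., 1.],
              [1., 1., -1.]])
b = np.array([0., 2.])

# Claimed solution set: {(t, 1-t, -1)^t : t in R}; check several values of t.
residuals = [np.abs(A @ np.array([t, 1.0 - t, -1.0]) - b).max()
             for t in (-2.0, 0.0, 0.5, 3.0)]
print(max(residuals))   # every sampled point solves the system exactly
```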
Page 6
■ homogeneous system: Ax = 0
■ nonhomogeneous system: Ax = b, b ≠ 0
■ A homogeneous system Ax = 0 always has the solution x = 0, called the trivial solution.
■ Theorem 3.8: Ax = 0, A ∈ M_{m×n}, and K is the solution set.
Then K = N(L_A)
Page 7
■ The solution set K of an m × n homogeneous system is a subspace of F^n of dimension
n − rank(L_A) = n − rank(A).
■ Corollary 3.8: A ∈ M_{m×n}, m < n ⇒ Ax = 0 has a nonzero solution.
proof: L_A : F^n → F^m
⇒ rank(L_A) = rank(A) ≤ m
⇒ dim(K) = nullity(L_A) = n − rank(L_A) ≥ n − m > 0 ∎
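Corollary 3.8 can be illustrated numerically: for any 2×3 system (m = 2 < n = 3) a nonzero null vector exists. A sketch assuming numpy is available; the matrix is an illustrative choice, and the null vector is read off from the SVD, since the last right-singular vector is orthogonal to the row space:

```python
import numpy as np

A = np.array([[1., 2., -1.],
              [3., 0., 1.]])    # m = 2 < n = 3, so N(L_A) is nontrivial

_, s, Vt = np.linalg.svd(A)     # Vt is 3x3; its last row is orthogonal
x = Vt[-1]                      # to every row of A, hence Ax = 0

print(np.linalg.norm(x))        # 1.0: a unit vector, certainly nonzero
print(np.abs(A @ x).max())      # ~0: it solves the homogeneous system
```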
Page 8
■ Theorem 3.9: K is the solution set of Ax = b; K_H is the solution set of Ax = 0; and s ∈ K.
Then K = {s} + K_H = {s + k : k ∈ K_H}.
proof: (i) w ∈ K
⇒ A(w − s) = Aw − As = b − b = 0 ⇒ w − s ∈ K_H
⇒ w = s + (w − s) ∈ {s} + K_H
⇒ K ⊆ {s} + K_H
(ii) w ∈ {s} + K_H ⇒ w = s + k, k ∈ K_H
⇒ Aw = As + Ak = b + 0 = b ⇒ w ∈ K ⇒ {s} + K_H ⊆ K ∎
Page 9
■ example: x_1 + x_2 + x_3 = 0, x_1 + x_2 − x_3 = 2
(0, 1, −1)^t is a solution to the nonhomogeneous system.
homogeneous system: x_1 + x_2 + x_3 = 0, x_1 + x_2 − x_3 = 0 ⇒ x_1 + x_2 = 0, x_3 = 0
⇒ K_H = {(t, −t, 0)^t : t ∈ ℝ} = {t(1, −1, 0)^t : t ∈ ℝ}
⇒ K = {(0, 1, −1)^t} + K_H = \left\{ \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + t \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} : t ∈ ℝ \right\}
(3, −2, −1)^t is another solution to the nonhomogeneous system.
⇒ K = {(3, −2, −1)^t} + K_H = \left\{ \begin{pmatrix} 3 \\ -2 \\ -1 \end{pmatrix} + t \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} : t ∈ ℝ \right\}
Page 10
■ example: x_1 − 2x_2 + x_3 = 4
(4, 0, 0)^t is a solution to the nonhomogeneous system.
⇒ homogeneous system: x_1 − 2x_2 + x_3 = 0 ⇒ x_3 = −x_1 + 2x_2
⇒ x_1 = t_1, x_2 = t_2, x_3 = −t_1 + 2t_2: parameterization ⇒ K_H = {t_1(1, 0, −1)^t + t_2(0, 1, 2)^t : t_1, t_2 ∈ ℝ}
⇒ K = {(4, 0, 0)^t} + K_H
= {(4, 0, 0)^t + t_1(1, 0, −1)^t + t_2(0, 1, 2)^t : t_1, t_2 ∈ ℝ}
Page 11
■ Theorem 3.10: A is invertible ⇔ Ax = b has exactly one solution, A^{−1}b.
■ Theorem 3.11: Ax = b has a solution ⇔ rank(A) = rank(A|b).
proof: Ax = b has a solution s = (s_1, ⋯, s_n)^t
⇔ b ∈ R(L_A) [b = As = L_A(s)]
⇔ b ∈ span({a_1, ⋯, a_n}) = column space of A [a_i is the i-th column of A; a_1 s_1 + ⋯ + a_n s_n = b]
⇔ span({a_1, ⋯, a_n}) = span({a_1, ⋯, a_n, b}) ⇔ rank(A) = rank(A|b) ∎
■ example: x_1 + x_2 = 0, x_1 + x_2 = 1
A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, (A|b) = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}
rank(A) = 1 ≠ rank(A|b) = 2 ⇒ no solution
Page 12
■ example: x_1 + x_2 = 0, x_1 − x_2 = 1
A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, (A|b) = \begin{pmatrix} 1 & 1 & 0 \\ 1 & -1 & 1 \end{pmatrix}
rank(A) = rank(A|b) = 2 ⇒ a solution exists
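The rank test of Theorem 3.11 is easy to run on both examples, assuming numpy is available:

```python
import numpy as np

def consistent(A, b):
    """Theorem 3.11: Ax = b is solvable iff rank(A) == rank(A|b)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A1 = np.array([[1., 1.], [1., 1.]]);  b1 = np.array([0., 1.])   # x1+x2=0, x1+x2=1
A2 = np.array([[1., 1.], [1., -1.]]); b2 = np.array([0., 1.])   # x1+x2=0, x1-x2=1

print(consistent(A1, b1))   # False: rank(A) = 1 but rank(A|b) = 2
print(consistent(A2, b2))   # True: both ranks equal 2
```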
Page 13
■ equivalent systems of linear equations: systems with the same solution set
■ Theorem 3.13: C is invertible
⇒ CAx = Cb is equivalent to Ax = b.
proof: K is the solution set for Ax = b;
K′ is the solution set for CAx = Cb.
(i) w ∈ K ⇒ Aw = b ⇒ CAw = Cb ⇒ w ∈ K′ ⇒ K ⊆ K′
(ii) w ∈ K′ ⇒ CAw = Cb ⇒ C^{−1}CAw = C^{−1}Cb ⇒ Aw = b ⇒ w ∈ K ⇒ K′ ⊆ K ∎
Page 14
■ Corollary 3.13: (A|b) → (A′|b′) by eros ⇒ Ax = b and A′x = b′ are equivalent.
proof: C = E_p ⋯ E_1, where each E_i is an elementary matrix.
(A′|b′) = C(A|b) = (CA|Cb) ⇒ A′ = CA, b′ = Cb
⇒ CAx = Cb is equivalent to Ax = b. [Thm 3.13] ∎
■ example:
x_1 + 2x_2 − x_3 = −1          x_1 = 4
2x_1 + 2x_2 + x_3 = 1     ⇒    x_2 = −3
3x_1 + 5x_2 − 2x_3 = −1        x_3 = −1
\begin{pmatrix} 1 & 2 & -1 & -1 \\ 2 & 2 & 1 & 1 \\ 3 & 5 & -2 & -1 \end{pmatrix} →_{3,3} \begin{pmatrix} 1 & 2 & -1 & -1 \\ 0 & -2 & 3 & 3 \\ 0 & -1 & 1 & 2 \end{pmatrix} →_1 \begin{pmatrix} 1 & 2 & -1 & -1 \\ 0 & -1 & 1 & 2 \\ 0 & -2 & 3 & 3 \end{pmatrix}
[the arrow subscripts indicate the types of the eros applied]
Page 15
→_2 \begin{pmatrix} 1 & 2 & -1 & -1 \\ 0 & 1 & -1 & -2 \\ 0 & -2 & 3 & 3 \end{pmatrix} →_{3,3} \begin{pmatrix} 1 & 0 & 1 & 3 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & 1 & -1 \end{pmatrix} →_{3,3} \begin{pmatrix} 1 & 0 & 0 & 4 \\ 0 & 1 & 0 & -3 \\ 0 & 0 & 1 & -1 \end{pmatrix}
⇒ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} x = \begin{pmatrix} 4 \\ -3 \\ -1 \end{pmatrix} (⇐ EAx = Eb)
⇒ x = \begin{pmatrix} 4 \\ -3 \\ -1 \end{pmatrix}
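The elimination above can be cross-checked numerically, assuming numpy is available:

```python
import numpy as np

# Coefficient matrix and right-hand side of the worked example.
A = np.array([[1., 2., -1.],
              [2., 2., 1.],
              [3., 5., -2.]])
b = np.array([-1., 1., -1.])

x = np.linalg.solve(A, b)   # unique solution since A is invertible
print(x)   # ≈ [ 4. -3. -1.]
```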
Page 16
■ example:
3x_1 + 2x_2 + 3x_3 − 2x_4 = 1        x_1 + x_3 = 1        x_1 = t, x_3 = 1 − t
x_1 + x_2 + x_3 = 3             →    x_2 = 2         →    x_2 = 2
x_1 + 2x_2 + x_3 − x_4 = 2           x_4 = 3              x_4 = 3
⇒ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 1 \\ 3 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix} ∈ \{(0, 2, 1, 3)^t\} + K_H
\begin{pmatrix} 3 & 2 & 3 & -2 & 1 \\ 1 & 1 & 1 & 0 & 3 \\ 1 & 2 & 1 & -1 & 2 \end{pmatrix} →_1 \begin{pmatrix} 1 & 1 & 1 & 0 & 3 \\ 3 & 2 & 3 & -2 & 1 \\ 1 & 2 & 1 & -1 & 2 \end{pmatrix} →_{3,3} \begin{pmatrix} 1 & 1 & 1 & 0 & 3 \\ 0 & -1 & 0 & -2 & -8 \\ 0 & 1 & 0 & -1 & -1 \end{pmatrix}
→_2 \begin{pmatrix} 1 & 1 & 1 & 0 & 3 \\ 0 & 1 & 0 & 2 & 8 \\ 0 & 1 & 0 & -1 & -1 \end{pmatrix} →_{3,3} \begin{pmatrix} 1 & 0 & 1 & -2 & -5 \\ 0 & 1 & 0 & 2 & 8 \\ 0 & 0 & 0 & -3 & -9 \end{pmatrix} →_2 \begin{pmatrix} 1 & 0 & 1 & -2 & -5 \\ 0 & 1 & 0 & 2 & 8 \\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}
Page 17
→_{3,3} \begin{pmatrix} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}
⇒ \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} (⇐ EAx = Eb)
⇒ x_1 + x_3 = 1 ⇒ x_1 = t, x_3 = 1 − t; x_2 = 2; x_4 = 3
⇒ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 1 \\ 3 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix} ∈ \{(0, 2, 1, 3)^t\} + K_H
Page 18
■ reduced row echelon form, rref:
1. Any row containing a nonzero entry precedes every all-zero row.
2. The first nonzero entry in each row is the only nonzero entry in its column (key column, pivot column).
3. The first nonzero entry in each row is 1 and moves to the right as we go down the rows.
■ It is obtained by elementary row operations only.
■ If a matrix is invertible, its rref is the identity matrix.
Page 19
■ Gaussian elimination: conversion of a matrix into its rref by eros.
■ finding the rref: A →_{Ge} B, B = rref(A)
■ inverting a matrix: (A|I) →_{Ge} (I|A^{−1}) [A invertible]
■ solving a system of linear equations:
(A|b) →_{Ge} (I|s) [unique solution]
(A|b) →_{Ge} (B|s) [general solution using null vectors]
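The rref computation itself can be sketched directly. A minimal plain-Python implementation using only the three types of eros (exact arithmetic via `Fraction`), run here on the augmented matrix of the earlier 4-unknown example:

```python
from fractions import Fraction

def rref(M):
    """Reduce M to reduced row echelon form with eros only;
    also return the key (pivot) columns."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                          # no pivot in this column
        M[r], M[piv] = M[piv], M[r]           # type 1: swap rows
        M[r] = [x / M[r][c] for x in M[r]]    # type 2: make the pivot 1
        for i in range(rows):                 # type 3: clear the column
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

B, piv = rref([[3, 2, 3, -2, 1],
               [1, 1, 1, 0, 3],
               [1, 2, 1, -1, 2]])
print(piv)   # [0, 1, 3]: the key columns
```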
Page 20
■ Theorem 3.15: Ax = b, A ∈ M_{m×n}, with solution set K;
rank(A) = rank(A|b); (A|b) is in rref; and (A|b) has r nonzero rows. Then
1. rank(A) = r
2. The general solution has the form s = s_0 + t_1 u_1 + ⋯ + t_{n−r} u_{n−r},
where s_0 ∈ K and {u_1, ⋯, u_{n−r}} is a basis for K_H.
proof: "1": rref ⇒ the nonzero rows are linearly independent
⇒ r = rank(A|b) = rank(A)
"2": Let t_1 = ⋯ = t_{n−r} = 0 ⇒ s = s_0 ⇒ s_0 ∈ K ⇒ K_H = {−s_0} + K [Thm 3.9]
= {t_1 u_1 + ⋯ + t_{n−r} u_{n−r} : t_i ∈ F} = span({u_1, ⋯, u_{n−r}}) ⇒ {u_1, ⋯, u_{n−r}} is a basis for K_H (∵ rank(A) = r and so dim(K_H) = n − r). ∎
Page 21
■ Theorem 3.16: A ∈ M_{m×n}; rank(A) = r > 0; and B = rref(A).
Then
1. B has r nonzero rows.
2. B has r key columns, which are e_1, ⋯, e_r.
3. The key columns of A are linearly independent.
4. If column k of B is a linear combination of the key columns of B, then column k of A is the same linear combination of the key columns of A:
b_k = d_1 b_{j_1} + ⋯ + d_r b_{j_r} ⇒ a_k = d_1 a_{j_1} + ⋯ + d_r a_{j_r}
Page 22
■ Corollary 3.16: The rref of a matrix is unique.
proof: If A has two rrefs B and C, then either
(1) they have different sets of key columns, or (2) they have the same set of key columns.
(1) ⇒ contradicts Thm 3.16-3, because
by Thm 3.16-3 the key columns of A are linearly independent columns, and
the set of linearly independent columns with the lowest indices is unique.
(2) ⇒ b_k ≠ c_k for some k [non-key column k]
⇒ b_k and c_k are the same linear combination of the key columns
⇒ contradiction [Thm 3.16-4] ∎
Page 23
■ example: finding linearly independent vectors using rref:
v_1 = 2 + x + 2x² + 3x³
v_2 = 4 + 2x + 4x² + 6x³
v_3 = 6 + 3x + 8x² + 7x³
v_4 = 2 + x + 5x³
v_5 = 4 + x + 9x³
→ \begin{pmatrix} 2 & 4 & 6 & 2 & 4 \\ 1 & 2 & 3 & 1 & 1 \\ 2 & 4 & 8 & 0 & 0 \\ 3 & 6 & 7 & 5 & 9 \end{pmatrix} → \begin{pmatrix} 1 & 2 & 0 & 4 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
⇒ v_1, v_3, v_5 are linearly independent. [key columns]
Of course there are other linearly independent sets, such as {v_1, v_4, v_5} and {v_2, v_3, v_5}.
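The key-column conclusion can be spot-checked with numpy's rank function (numpy has no rref, but ranks of column subsets suffice):

```python
import numpy as np

# Columns: coefficient vectors of v1..v5 in the basis {1, x, x^2, x^3}.
V = np.array([[2., 4., 6., 2., 4.],
              [1., 2., 3., 1., 1.],
              [2., 4., 8., 0., 0.],
              [3., 6., 7., 5., 9.]])

rank_all = np.linalg.matrix_rank(V)                  # 3 independent vectors in total
rank_key = np.linalg.matrix_rank(V[:, [0, 2, 4]])    # {v1, v3, v5} is independent
rank_v12 = np.linalg.matrix_rank(V[:, [0, 1]])       # v2 = 2*v1, so {v1, v2} is not
print(rank_all, rank_key, rank_v12)   # 3 3 1
```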
Page 24
■ example: extending linearly independent vectors into a basis using rref:
\begin{pmatrix} 2 & 6 & 0 \\ 1 & 3 & 2 \\ 2 & 8 & 4 \\ 3 & 7 & 6 \end{pmatrix} → \begin{pmatrix} 2 & 6 & 0 & 1 & 0 & 0 & 0 \\ 1 & 3 & 2 & 0 & 1 & 0 & 0 \\ 2 & 8 & 4 & 0 & 0 & 1 & 0 \\ 3 & 7 & 6 & 0 & 0 & 0 & 1 \end{pmatrix} →_1 \begin{pmatrix} 1 & 3 & 2 & 0 & 1 & 0 & 0 \\ 2 & 6 & 0 & 1 & 0 & 0 & 0 \\ 2 & 8 & 4 & 0 & 0 & 1 & 0 \\ 3 & 7 & 6 & 0 & 0 & 0 & 1 \end{pmatrix}
→_{3,3,3} \begin{pmatrix} 1 & 3 & 2 & 0 & 1 & 0 & 0 \\ 0 & 0 & -4 & 1 & -2 & 0 & 0 \\ 0 & 2 & 0 & 0 & -2 & 1 & 0 \\ 0 & -2 & 0 & 0 & -3 & 0 & 1 \end{pmatrix} →_{1,2} \begin{pmatrix} 1 & 3 & 2 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1/2 & 0 \\ 0 & 0 & -4 & 1 & -2 & 0 & 0 \\ 0 & -2 & 0 & 0 & -3 & 0 & 1 \end{pmatrix}
Page 25
→_{3,3} \begin{pmatrix} 1 & 0 & 2 & 0 & 4 & -3/2 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1/2 & 0 \\ 0 & 0 & -4 & 1 & -2 & 0 & 0 \\ 0 & 0 & 0 & 0 & -5 & 1 & 1 \end{pmatrix} →_2 \begin{pmatrix} 1 & 0 & 2 & 0 & 4 & -3/2 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1/2 & 0 \\ 0 & 0 & 1 & -1/4 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & -5 & 1 & 1 \end{pmatrix}
→_{3,2} \begin{pmatrix} 1 & 0 & 0 & 1/2 & 3 & -3/2 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1/2 & 0 \\ 0 & 0 & 1 & -1/4 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -1/5 & -1/5 \end{pmatrix} →_{3,3,3} \begin{pmatrix} 1 & 0 & 0 & 1/2 & 0 & -9/10 & 3/5 \\ 0 & 1 & 0 & 0 & 0 & 3/10 & -1/5 \\ 0 & 0 & 1 & -1/4 & 0 & 1/10 & 1/10 \\ 0 & 0 & 0 & 0 & 1 & -1/5 & -1/5 \end{pmatrix}
⇒ (0, 1, 0, 0)^t can be added to form a basis. [the 4th key column is the e_2 column]
■ This method of extending a set into a basis is especially useful for finding a basis with a computer program.
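The same extend-to-a-basis computation can be sketched with numpy. Instead of rref, the sketch below greedily keeps each column of (v_1 v_2 v_3 | I_4) that raises the rank; this selects exactly the key columns:

```python
import numpy as np

# The three given vectors (as columns), augmented with e1..e4.
v = np.array([[2., 6., 0.],
              [1., 3., 2.],
              [2., 8., 4.],
              [3., 7., 6.]])
M = np.hstack([v, np.eye(4)])

# Greedy pivot selection (what rref's key columns compute): keep a column
# iff it is independent of the columns chosen so far.
chosen = []
for j in range(M.shape[1]):
    trial = chosen + [j]
    if np.linalg.matrix_rank(M[:, trial]) == len(trial):
        chosen = trial
print(chosen)   # [0, 1, 2, 4]: column 4 is e2, so add (0, 1, 0, 0)^t
```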
Page 1
Chapter 4 Review
■ Corollary 3.6.1: D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix} = BAC for some invertible matrices B and C. This follows from D = E_p E_{p−1} ⋯ E_1 A G_1 G_2 ⋯ G_q, where the E_i and G_j are elementary matrices.
■ matrix inversion by Gaussian elimination:
(A|I_n) → (I_n|B) by eros [not by ecos]
⇒ A = B^{−1}, since E(A|I_n) = (EA|E) = (I_n|B) gives EA = I_n and E = B.
Question: what do ecos compute? Apply them to A stacked over I:
\begin{pmatrix} A \\ I \end{pmatrix} G = \begin{pmatrix} AG \\ G \end{pmatrix} = \begin{pmatrix} I \\ B \end{pmatrix} ⇒ G = B, AG = I ⇒ AB = I ⇒ A = B^{−1}
Page 2
■ homogeneous system: Ax = 0
■ nonhomogeneous system: Ax = b, b ≠ 0
■ Theorem 3.8: Ax = 0, A ∈ M_{m×n}, and K is the solution set.
Then K = N(L_A).
■ Theorem 3.9: K is the solution set of Ax = b; K_H is the solution set of Ax = 0; and s ∈ K.
Then K = {s} + K_H = {s + k : k ∈ K_H}.
■ example: x_1 − 2x_2 + x_3 = 4
(4, 0, 0)^t is a solution to the nonhomogeneous system.
⇒ homogeneous system: x_1 − 2x_2 + x_3 = 0 ⇒ x_3 = −x_1 + 2x_2
⇒ x_1 = t_1, x_2 = t_2, x_3 = −t_1 + 2t_2: parameterization ⇒ K_H = {t_1(1, 0, −1)^t + t_2(0, 1, 2)^t : t_1, t_2 ∈ ℝ}
⇒ K = {(4, 0, 0)^t} + K_H
= {(4, 0, 0)^t + t_1(1, 0, −1)^t + t_2(0, 1, 2)^t : t_1, t_2 ∈ ℝ}
Page 3
■ Theorem 3.10: A is invertible ⇔ Ax = b has exactly one solution, A^{−1}b.
■ Theorem 3.11: Ax = b has a solution ⇔ rank(A) = rank(A|b).
■ Theorem 3.13: C is invertible ⇒ CAx = Cb is equivalent to Ax = b.
■ Corollary 3.13: (A|b) → (A′|b′) by eros ⇒ Ax = b and A′x = b′ are equivalent.
■ reduced row echelon form, rref:
v_1 = 2 + x + 2x² + 3x³, v_2 = 4 + 2x + 4x² + 6x³, v_3 = 6 + 3x + 8x² + 7x³, v_4 = 2 + x + 5x³,
v_5 = 4 + x + 9x³
→ \begin{pmatrix} 2 & 4 & 6 & 2 & 4 \\ 1 & 2 & 3 & 1 & 1 \\ 2 & 4 & 8 & 0 & 0 \\ 3 & 6 & 7 & 5 & 9 \end{pmatrix} → \begin{pmatrix} 1 & 2 & 0 & 4 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}
⇒ v_1, v_3, v_5 are linearly independent. [key columns]
Page 4
■ Gaussian elimination: conversion of a matrix into its rref by eros.
■ finding the rref: A →_{Ge} B, B = rref(A)
■ inverting a matrix: (A|I) →_{Ge} (I|A^{−1}) [A invertible]
■ solving a system of linear equations:
(A|b) →_{Ge} (I|s) [unique solution]
(A|b) →_{Ge} (B|s) [general solution using null vectors]
■ example:
\begin{pmatrix} 3 & 2 & 3 & -2 \\ 1 & 1 & 1 & 0 \\ 1 & 2 & 1 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}
Page 5
\begin{pmatrix} 3 & 2 & 3 & -2 & 1 \\ 1 & 1 & 1 & 0 & 3 \\ 1 & 2 & 1 & -1 & 2 \end{pmatrix} →_{eros} \begin{pmatrix} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}
⇒ \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} ((rref) ⇐ EAx = Eb)
⇒ x_1 + x_3 = 1 ⇒ x_1 = t, x_3 = 1 − t; x_2 = 2; x_4 = 3
⇒ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 1 \\ 3 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ -1 \\ 0 \end{pmatrix} ∈ \{(0, 2, 1, 3)^t\} + K_H
Page 6
■ Theorem 3.15: Ax = b, A ∈ M_{m×n}, with solution set K;
rank(A) = rank(A|b); (A′|b′) is in rref; and (A′|b′) has r nonzero rows. Then
1. rank(A) = rank(A′) = r
2. The general solution has the form s = s_0 + t_1 u_1 + ⋯ + t_{n−r} u_{n−r}
■ Theorem 3.16: A ∈ M_{m×n}; rank(A) = r > 0; and B = rref(A). Then
1. B has r nonzero rows.
2. B has r key columns, which are e_1, ⋯, e_r.
3. The key columns of A are linearly independent.
4. If column k of B is a linear combination of the key columns of B, then column k of A is the same linear combination of the key columns of A:
b_k = d_1 b_{j_1} + ⋯ + d_r b_{j_r} ⇒ a_k = d_1 a_{j_1} + ⋯ + d_r a_{j_r}
Page 7
Determinant
■ The determinant is a number that represents a square matrix.
■ determinant of a 2×2 matrix: det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad − bc.
■ det(A + B) ≠ det(A) + det(B), e.g., det\left( \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) = 4 ≠ 2.
det(cA) ≠ c·det(A), e.g., det\left( 3 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) = 9 ≠ 3.
⇒ The determinant is not linear.
■ The determinant is, however, a linear function of one row when the other rows are held fixed, as stated in Theorem 4.1. This is true for any square matrix; Theorem 4.1 illustrates the 2×2 case.
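Both failures of linearity, and the row-wise linearity of Theorem 4.1, can be checked numerically, assuming numpy is available (the vectors u_1, u_2, v and scalars below are illustrative choices):

```python
import numpy as np

I = np.eye(2)
d_sum = np.linalg.det(I + I)     # det(2I) = 4, not det(I) + det(I) = 2
d_scale = np.linalg.det(3 * I)   # det(3I) = 9, not 3 * det(I) = 3

# But det IS linear in a single row, the other row held fixed (Thm 4.1):
u1, u2, v = np.array([1., 2.]), np.array([3., -1.]), np.array([0., 1.])
k1, k2 = 2.0, -3.0
lhs = np.linalg.det(np.vstack([k1 * u1 + k2 * u2, v]))
rhs = k1 * np.linalg.det(np.vstack([u1, v])) + k2 * np.linalg.det(np.vstack([u2, v]))
print(d_sum, d_scale, lhs, rhs)   # lhs and rhs agree
```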
Page 8
■ Theorem 4.1: u_1, u_2, v are row vectors of 2×2 matrices; k_1, k_2 are scalars. Then
det \begin{pmatrix} k_1 u_1 + k_2 u_2 \\ v \end{pmatrix} = k_1 det \begin{pmatrix} u_1 \\ v \end{pmatrix} + k_2 det \begin{pmatrix} u_2 \\ v \end{pmatrix}
det \begin{pmatrix} v \\ k_1 u_1 + k_2 u_2 \end{pmatrix} = k_1 det \begin{pmatrix} v \\ u_1 \end{pmatrix} + k_2 det \begin{pmatrix} v \\ u_2 \end{pmatrix} ∎
■ Theorem 4.2: For a 2×2 matrix A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},
1. det(A) ≠ 0 ⇔ A is invertible.
2. A is invertible ⇒ A^{−1} = \frac{1}{\det(A)} \begin{pmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{pmatrix}
■ proof: "2" can be verified by multiplying A^{−1} by A.
"1": "⇒": det(A) ≠ 0 ⇒ A^{−1} is given by "2".
Page 9
"⇐": A is invertible ⇒ rank(A) = 2 ⇒ A_{11} ≠ 0 or A_{21} ≠ 0
(i) A_{11} ≠ 0 ⇒ A →_3 \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} - \frac{A_{12}A_{21}}{A_{11}} \end{pmatrix}
⇒ A_{22} − \frac{A_{12}A_{21}}{A_{11}} ≠ 0 ⇒ det(A) ≠ 0
(ii) A_{21} ≠ 0 ⇒ A →_3 \begin{pmatrix} 0 & A_{12} - \frac{A_{11}A_{22}}{A_{21}} \\ A_{21} & A_{22} \end{pmatrix}
⇒ A_{12} − \frac{A_{11}A_{22}}{A_{21}} ≠ 0 ⇒ det(A) ≠ 0 ∎
Page 10
■ The area of the parallelogram determined by the 2-dimensional vectors \begin{pmatrix} a \\ b \end{pmatrix} and \begin{pmatrix} c \\ d \end{pmatrix} is \left| det \begin{pmatrix} a & b \\ c & d \end{pmatrix} \right| = \left| det \begin{pmatrix} a & c \\ b & d \end{pmatrix} \right|.
■ determinant of A ∈ M_{n×n}, direct definition:
det(A) = \sum_{\pi} sgn(π) A_{1π(1)} A_{2π(2)} ⋯ A_{nπ(n)}, where
■ the sum is taken over all permutations π of {1, ⋯, n}
■ k(π): the total number of inversions in π
■ sgn(π) = (−1)^{k(π)} ∎
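The direct definition translates almost verbatim into code. A minimal plain-Python sketch (exponentially slow in n, for illustration only):

```python
from itertools import permutations

def sgn(pi):
    """Sign of a permutation: (-1)^k where k is the number of inversions."""
    k = sum(1 for i in range(len(pi))
              for j in range(i + 1, len(pi)) if pi[i] > pi[j])
    return -1 if k % 2 else 1

def det_perm(A):
    """Direct definition: sum over all permutations pi of
    sgn(pi) * A[0][pi(0)] * ... * A[n-1][pi(n-1)] (0-based indices)."""
    n = len(A)
    total = 0
    for pi in permutations(range(n)):
        prod = sgn(pi)
        for i in range(n):
            prod *= A[i][pi[i]]
        total += prod
    return total

print(det_perm([[1, 2], [3, 4]]))                   # ad - bc = -2
print(det_perm([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```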
Page 11
■ example: A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} ⇒
det(A) = A_{11}A_{22} − A_{12}A_{21}
■ example: A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} ⇒
det(A) = A_{11}A_{22}A_{33} − A_{11}A_{23}A_{32} − A_{12}A_{21}A_{33} + A_{12}A_{23}A_{31} + A_{13}A_{21}A_{32} − A_{13}A_{22}A_{31}
Page 12
■ determinant of A ∈ M_{n×n}, recursive definition:
1. n = 1 ⇒ A = (A_{11}) ⇒ det(A) = A_{11}
2. n ≥ 2 ⇒ det(A) = \sum_{j=1}^{n} A_{1j} c_{1j} [expansion along the 1st row], where c_{ij} is the cofactor given by c_{ij} = (−1)^{i+j} det(Ã_{ij}), and Ã_{ij} is the (n−1)×(n−1) matrix obtained from A by removing the i-th row and j-th column.
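The recursive definition likewise translates directly; a minimal sketch (0-based indices, so the sign (−1)^{1+j} becomes (−1)^j):

```python
def det_rec(A):
    """Recursive definition: cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A~_{1j}: delete row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det_rec(minor)
    return total

print(det_rec([[5, 6], [8, 9]]))                   # 45 - 48 = -3
print(det_rec([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0
```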
Page 13
■ example:
det \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = 1·(−1)^{1+1}·det \begin{pmatrix} 5 & 6 \\ 8 & 9 \end{pmatrix} + 2·(−1)^{1+2}·det \begin{pmatrix} 4 & 6 \\ 7 & 9 \end{pmatrix} + 3·(−1)^{1+3}·det \begin{pmatrix} 4 & 5 \\ 7 & 8 \end{pmatrix}
= (45 − 48) − 2(36 − 42) + 3(32 − 35) = 0
■ det(I) = 1
proof: det(I_n) = (−1)^{1+1} det(I_{n−1}) = ⋯ = (−1)^{1+1} det(I_2) = 1
Page 14
■ The volume of the parallelepiped determined by the 3-dimensional vectors \begin{pmatrix} a \\ b \\ c \end{pmatrix}, \begin{pmatrix} d \\ e \\ f \end{pmatrix}, and \begin{pmatrix} g \\ h \\ i \end{pmatrix} is
\left| det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \right| = \left| det \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & i \end{pmatrix} \right|.
■ Theorem 4.3: The determinant is a linear function of one row when the other rows are held fixed. That is,
det \begin{pmatrix} a_1 \\ \vdots \\ ku + lv \\ \vdots \\ a_n \end{pmatrix} = k det \begin{pmatrix} a_1 \\ \vdots \\ u \\ \vdots \\ a_n \end{pmatrix} + l det \begin{pmatrix} a_1 \\ \vdots \\ v \\ \vdots \\ a_n \end{pmatrix} ∎
Page 15
proof: (i) If i = 1,
det \begin{pmatrix} ku + lv \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \sum_{j=1}^{n} (k u_j + l v_j) c_{1j} [def]
= k \sum_{j=1}^{n} u_j c_{1j} + l \sum_{j=1}^{n} v_j c_{1j}
= k det \begin{pmatrix} u \\ a_2 \\ \vdots \\ a_n \end{pmatrix} + l det \begin{pmatrix} v \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.
(ii) If i > 1, we use induction on n. The theorem has already been proved for n = 1, 2.
(iii) Assume that it holds for (n−1)×(n−1) matrices.
Page 16
(iv) det \begin{pmatrix} a_1 \\ \vdots \\ ku + lv \\ \vdots \\ a_n \end{pmatrix} = \sum_{j=1}^{n} A_{1j} c_{1j}
c_{1j} = (−1)^{1+j} det(Ã_{1j})
= (−1)^{1+j} [k det(Ã^{(u)}_{1j}) + l det(Ã^{(v)}_{1j})] [assumption]
= k c^{(u)}_{1j} + l c^{(v)}_{1j}
det \begin{pmatrix} a_1 \\ \vdots \\ ku + lv \\ \vdots \\ a_n \end{pmatrix} = \sum_{j=1}^{n} A_{1j} c_{1j} = k \sum_{j=1}^{n} A_{1j} c^{(u)}_{1j} + l \sum_{j=1}^{n} A_{1j} c^{(v)}_{1j}
Page 17
= k det \begin{pmatrix} a_1 \\ \vdots \\ u \\ \vdots \\ a_n \end{pmatrix} + l det \begin{pmatrix} a_1 \\ \vdots \\ v \\ \vdots \\ a_n \end{pmatrix} ∎
■ Corollary 4.3: If A has an all-zero row, then det(A) = 0.
proof: Set k = l = 0 in Theorem 4.3. ∎
Page 18
■ Lemma 4.4: If the i-th row of A is e_k = (0, ⋯, 0, 1, 0, ⋯, 0), with the 1 in the k-th place, then
det(A) = det \begin{pmatrix} a_1 \\ \vdots \\ 0 \cdots 0\ 1\ 0 \cdots 0 \\ \vdots \\ a_n \end{pmatrix} = c_{ik} = (−1)^{i+k} det(Ã_{ik}) ∎
proof:
(i) If i = 1, det \begin{pmatrix} 0 \cdots 0\ 1\ 0 \cdots 0 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = c_{1k} = (−1)^{1+k} det(Ã_{1k}) [def]
(ii) If i > 1, we use induction on n. The lemma holds for n = 2.
(iii) Assume that it holds for (n−1)×(n−1) matrices.
Page 19
(iv) det(A) = \sum_{j=1}^{n} A_{1j} (−1)^{1+j} det(Ã_{1j}) [def]
If j = k, Ã_{1j} has an all-zero row, so det(Ã_{1j}) = 0. [Corol 4.3]
If j ≠ k, either j < k or j > k:
det(Ã_{1j}) = \begin{cases} (−1)^{(i−1)+(k−1)} det((\tilde{\tilde{A}}_{1j})_{(i−1)(k−1)}), & j < k \\ (−1)^{(i−1)+k} det((\tilde{\tilde{A}}_{1j})_{(i−1)k}), & j > k \end{cases} [(iii)]
det(A) = \sum_{j<k} A_{1j}(−1)^{1+j}(−1)^{(i−1)+(k−1)} det((\tilde{\tilde{A}}_{1j})_{(i−1)(k−1)}) + \sum_{j>k} A_{1j}(−1)^{1+j}(−1)^{(i−1)+k} det((\tilde{\tilde{A}}_{1j})_{(i−1)k})
Page 20
= (−1)^{i+k} \left[ \sum_{j<k} A_{1j}(−1)^{1+j} det((\tilde{\tilde{A}}_{1j})_{(i−1)(k−1)}) + \sum_{j>k} A_{1j}(−1)^{1+(j−1)} det((\tilde{\tilde{A}}_{1j})_{(i−1)k}) \right]
= (−1)^{i+k} \left[ \sum_{j<k} A_{1j}(−1)^{1+j} det((\tilde{\tilde{A}}_{ik})_{1j}) + \sum_{j>k} A_{1j}(−1)^{1+(j−1)} det((\tilde{\tilde{A}}_{ik})_{1(j−1)}) \right]
= (−1)^{i+k} det(Ã_{ik}) [def] ∎
Page 21
■ Theorem 4.4: det(A) = \sum_{j=1}^{n} (−1)^{i+j} A_{ij} det(Ã_{ij}) [cofactor expansion along any row i] ∎
proof: The i-th row is a_i = A_{i1}e_1 + A_{i2}e_2 + ⋯ + A_{in}e_n, where e_j = (0, ⋯, 0, 1, 0, ⋯, 0) with the 1 in the j-th place.
By Thm 4.3,
det(A) = A_{i1} det \begin{pmatrix} a_1 \\ \vdots \\ 1\ 0 \cdots 0 \\ \vdots \\ a_n \end{pmatrix} + ⋯ + A_{in} det \begin{pmatrix} a_1 \\ \vdots \\ 0 \cdots 0\ 1 \\ \vdots \\ a_n \end{pmatrix}
By Lemma 4.4,
det(A) = A_{i1}(−1)^{i+1} det(Ã_{i1}) + ⋯ + A_{in}(−1)^{i+n} det(Ã_{in}) ∎
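Theorem 4.4 says the choice of expansion row does not matter. A small sketch checking this on an illustrative 3×3 matrix (0-based row indices):

```python
def det_row(A, i):
    """Cofactor expansion of det(A) along row i (0-based), per Theorem 4.4."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Delete row i and column j to form the minor.
        minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** (i + j) * A[i][j] * det_row(minor, 0)
    return total

A = [[2, -1, 0], [1, 3, 4], [0, 5, -2]]
values = [det_row(A, i) for i in range(3)]
print(values)   # [-54, -54, -54]: every row gives the same determinant
```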