How to solve Ax=b

Academic year: 2021

(1)

How to solve Ax=b

Wanho Choi (wanochoi.com)

(2)

Many engineering problems result in

• A system of linear equations (n unknowns, n eqns.)

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots

(3)

Many engineering problems result in

• A system of linear equations (n unknowns, n eqns.)

\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots  & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}

(4)

A Linear System

• A system of linear equations (n unknowns, n eqns.)

Ax = b

(5)

Exceptional Cases

Singular case: det(A) = 0

‣ Infinitely many solutions (indeterminate)

‣ No solution (inconsistent)

Under-determined case: fewer eqns. than unknowns, i.e. (# of rows) < (# of columns)

‣ Infinitely many solutions (indeterminate)

Over-determined case: more eqns. than unknowns, i.e. (# of rows) > (# of columns)

‣ No solution (inconsistent), in general

(6)

Solutions: Ax=b

• Invert A

‣ Very expensive

• Direct methods

‣ Gaussian elimination

‣ LU factorization

‣ QR factorization

‣ Cholesky factorization

‣ etc.

• Iterative methods

‣ Jacobi method

‣ Gauss-Seidel method

‣ Successive Over-Relaxation (SOR)

‣ Steepest descent, (preconditioned) conjugate gradient

‣ etc.

(7)

Triangular Matrix

L =
\begin{bmatrix}
a_{1,1} & 0 & \cdots & 0 & 0 \\
a_{2,1} & a_{2,2} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n-1} & 0 \\
a_{n,1} & a_{n,2} & \cdots & a_{n,n-1} & a_{n,n}
\end{bmatrix}
(lower triangular matrix)

U =
\begin{bmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n-1} & a_{1,n} \\
0 & a_{2,2} & \cdots & a_{2,n-1} & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\
0 & 0 & \cdots & 0 & a_{n,n}
\end{bmatrix}
(upper triangular matrix)

(8)

A technique to solve Ax=b

when A=L with non-zero diagonal elements

Forward Substitution

Solving Lx = b row by row, from the top down:

x_1 = b_1 / a_{1,1}
x_2 = (b_2 - a_{2,1} x_1) / a_{2,2}
x_3 = (b_3 - a_{3,1} x_1 - a_{3,2} x_2) / a_{3,3}
\vdots
x_n = (b_n - a_{n,1} x_1 - a_{n,2} x_2 - \cdots - a_{n,n-1} x_{n-1}) / a_{n,n}

cost: 1 + 3 + 5 + \cdots + (2n - 1) = n^2 flops
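The recurrence above maps directly to code. A minimal NumPy sketch (the function name and the 2×2 example are illustrative, not from the slides):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L with non-zero diagonal (n^2 flops)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i = (b_i - sum_{j<i} L[i,j] x_j) / L[i,i]
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([4.0, 5.0])
x = forward_substitution(L, b)   # x1 = 4/2 = 2, x2 = (5 - 1*2)/3 = 1
```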

(9)

A technique to solve Ax=b

when A=U with non-zero diagonal elements

Backward Substitution

Solving Ux = b row by row, from the bottom up:

x_n = b_n / a_{n,n}
x_{n-1} = (b_{n-1} - a_{n-1,n} x_n) / a_{n-1,n-1}
x_{n-2} = (b_{n-2} - a_{n-2,n-1} x_{n-1} - a_{n-2,n} x_n) / a_{n-2,n-2}
\vdots
x_1 = (b_1 - a_{1,2} x_2 - a_{1,3} x_3 - \cdots - a_{1,n} x_n) / a_{1,1}

cost: 1 + 3 + 5 + \cdots + (2n - 1) = n^2 flops
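The mirror image of forward substitution, again as an illustrative NumPy sketch:

```python
import numpy as np

def backward_substitution(U, b):
    """Solve U x = b for upper-triangular U with non-zero diagonal (n^2 flops)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} U[i,j] x_j) / U[i,i]
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0], [0.0, 4.0]])
b = np.array([5.0, 8.0])
x = backward_substitution(U, b)  # x2 = 8/4 = 2, x1 = (5 - 1*2)/2 = 1.5
```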

(10)

Gaussian Elimination

• Works for any non-singular square matrix (rows may need to be swapped to avoid a zero pivot)

\begin{bmatrix} 2 & 4 & -2 \\ 4 & 9 & -3 \\ -2 & -3 & 7 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} 2 \\ 8 \\ 10 \end{bmatrix}

 2x + 4y - 2z = 2          2x + 4y - 2z = 2          2x + 4y - 2z = 2
 4x + 9y - 3z = 8    ⇒          y +  z = 4     ⇒          y +  z = 4
-2x - 3y + 7z = 10              y + 5z = 12                    4z = 8
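The elimination steps above can be sketched as code. Partial pivoting (swapping in the largest pivot) is added here for numerical stability, although the slide's example does not need it:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce [A|b] to upper-triangular form, then back-substitute."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n):
        # partial pivoting: bring the largest remaining entry in column k to row k
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution on the triangularized system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2, 4, -2], [4, 9, -3], [-2, -3, 7]])
b = np.array([2, 8, 10])
x = gaussian_elimination(A, b)   # the slide's example gives x = -1, y = 2, z = 2
```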

(11)

LU Factorization

• Exists for any non-singular square matrix (possibly after a row permutation: PA = LU)

Ax = b  ⇒  LUx = b

Let y = Ux. Then:

‣ Ly = b : solve for y by forward substitution

‣ Ux = y : solve for x by backward substitution

L =
\begin{bmatrix}
* & 0 & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & 0 \\
* & * & * & *
\end{bmatrix},
\quad
U =
\begin{bmatrix}
* & * & * & * \\
0 & * & * & * \\
0 & 0 & * & * \\
0 & 0 & 0 & *
\end{bmatrix}
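A minimal Doolittle-style sketch of the two-stage solve, assuming no pivoting is needed; the unit diagonal in L is a common convention, not something stated on the slide:

```python
import numpy as np

def lu_solve(A, b):
    """Factor A = L U (no pivoting, unit diagonal in L), then solve
    L y = b by forward substitution and U x = y by backward substitution."""
    n = len(b)
    L = np.eye(n)
    U = A.astype(float)
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    y = np.zeros(n)
    for i in range(n):                       # L y = b (forward)
        y[i] = b[i] - L[i, :i] @ y[:i]       # L has unit diagonal
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # U x = y (backward)
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 4.0, -2.0], [4.0, 9.0, -3.0], [-2.0, -3.0, 7.0]])
b = np.array([2.0, 8.0, 10.0])
x = lu_solve(A, b)   # same system as the Gaussian-elimination slide: x = [-1, 2, 2]
```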

(12)

QR Factorization

• Exists for any non-singular square matrix

Ax = b  ⇒  QUx = b

Let y = Ux. Then:

‣ Qy = b : y = Q^T b, since Q^{-1} = Q^T

‣ Ux = y : solve for x by backward substitution

U =
\begin{bmatrix}
* & * & * & * \\
0 & * & * & * \\
0 & 0 & * & * \\
0 & 0 & 0 & *
\end{bmatrix}
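A sketch using NumPy's built-in factorization (numpy.linalg.qr names the triangular factor R; the slide calls it U):

```python
import numpy as np

def qr_solve(A, b):
    """Solve A x = b via A = Q U: y = Q^T b (since Q^{-1} = Q^T),
    then back-substitute U x = y."""
    Q, U = np.linalg.qr(A)
    y = Q.T @ b
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 4.0, -2.0], [4.0, 9.0, -3.0], [-2.0, -3.0, 7.0]])
b = np.array([2.0, 8.0, 10.0])
x = qr_solve(A, b)   # x = [-1, 2, 2]
```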

(13)

Cholesky Factorization

• Exists for a square symmetric positive definite matrix

Ax = b  ⇒  LL^T x = b

Let y = L^T x. Then:

‣ Ly = b : solve for y by forward substitution

‣ L^T x = y : solve for x by backward substitution

L =
\begin{bmatrix}
* & 0 & 0 & 0 \\
* & * & 0 & 0 \\
* & * & * & 0 \\
* & * & * & *
\end{bmatrix},
\quad
L^T =
\begin{bmatrix}
* & * & * & * \\
0 & * & * & * \\
0 & 0 & * & * \\
0 & 0 & 0 & *
\end{bmatrix}

A = ΦΛΦ^T = ΦΛ^{1/2}Λ^{1/2}Φ^T = (ΦΛ^{1/2})(ΦΛ^{1/2})^T = LL^T

(14)

Cholesky Factorization

A = LL^T

\begin{bmatrix} a_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix}
= \begin{bmatrix} l_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix}
  \begin{bmatrix} l_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix}^T
= \begin{bmatrix} l_{11} & 0 \\ L_{21} & L_{22} \end{bmatrix}
  \begin{bmatrix} l_{11} & L_{21}^T \\ 0 & L_{22}^T \end{bmatrix}

∴ \begin{bmatrix} a_{11} & A_{21}^T \\ A_{21} & A_{22} \end{bmatrix}
= \begin{bmatrix} l_{11}^2 & l_{11} L_{21}^T \\ l_{11} L_{21} & L_{21} L_{21}^T + L_{22} L_{22}^T \end{bmatrix}

a_{11} = l_{11}^2                          ⇔ l_{11} = \sqrt{a_{11}}
A_{21} = l_{11} L_{21}                     ⇔ L_{21} = A_{21} / l_{11}
A_{22} = L_{21} L_{21}^T + L_{22} L_{22}^T ⇔ L_{22} L_{22}^T = A_{22} - L_{21} L_{21}^T
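The block recursion above, unrolled column by column. A sketch assuming A is symmetric positive-definite (the 2×2 example is mine):

```python
import numpy as np

def cholesky(A):
    """Compute lower-triangular L with A = L L^T, following the slide's
    recursion: l11 = sqrt(a11), L21 = A21/l11, then recurse on
    A22 - L21 L21^T."""
    n = A.shape[0]
    L = np.zeros((n, n))
    S = A.astype(float)              # shrinking trailing block
    for k in range(n):
        L[k, k] = np.sqrt(S[k, k])
        L[k+1:, k] = S[k+1:, k] / L[k, k]
        # update the trailing block: A22 - L21 L21^T
        S[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L

A = np.array([[4.0, 2.0], [2.0, 5.0]])
L = cholesky(A)    # L = [[2, 0], [1, 2]]
```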

(15)

Jacobi Iteration

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

Starting from an initial guess x^0 = [x_1^0, x_2^0, \ldots, x_n^0]^T:

x_1^1 = (1/a_{11}) (b_1 - a_{12} x_2^0 - \cdots - a_{1n} x_n^0)
x_2^1 = (1/a_{22}) (b_2 - a_{21} x_1^0 - \cdots - a_{2n} x_n^0)
\vdots
x_n^1 = (1/a_{nn}) (b_n - a_{n1} x_1^0 - \cdots - a_{n,n-1} x_{n-1}^0)

In general:

x_i^{k+1} = (1/a_{ii}) \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^k - \sum_{j=i+1}^{n} a_{ij} x_j^k \right)
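A vectorized sketch of the update x^{k+1} = D^{-1}(b - (L+U)x^k); the diagonally dominant 2×2 example is illustrative, not from the slides:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration: every x_i^{k+1} is computed from the full
    previous iterate x^k, so the whole update can run in parallel."""
    x = np.zeros(len(b))
    D = np.diag(A)                   # diagonal entries a_ii
    R = A - np.diag(D)               # off-diagonal part (L + U)
    for _ in range(iters):
        x = (b - R @ x) / D          # x^{k+1} = D^{-1} (b - (L+U) x^k)
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant => converges
b = np.array([6.0, 12.0])
x = jacobi(A, b)                          # converges to the solution [1, 2]
```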

(16)

Gauss-Seidel Iteration

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
\vdots
a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n

Starting from an initial guess x^0 = [x_1^0, x_2^0, \ldots, x_n^0]^T, but reusing the already-updated values within the same sweep:

x_1^1 = (1/a_{11}) (b_1 - a_{12} x_2^0 - \cdots - a_{1n} x_n^0)
x_2^1 = (1/a_{22}) (b_2 - a_{21} x_1^1 - \cdots - a_{2n} x_n^0)
\vdots
x_n^1 = (1/a_{nn}) (b_n - a_{n1} x_1^1 - \cdots - a_{n,n-1} x_{n-1}^1)

In general:

x_i^{k+1} = (1/a_{ii}) \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k \right)
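The same system solved with in-place updates. A sketch; the only change from Jacobi is that x[:i] already holds the k+1 values when row i is processed:

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel: reuse the already-updated entries x_j^{k+1} (j < i)
    inside the same sweep, which typically converges faster than Jacobi."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s_new = A[i, :i] @ x[:i]       # uses updated x_j^{k+1}
            s_old = A[i, i+1:] @ x[i+1:]   # uses old x_j^k
            x[i] = (b[i] - s_new - s_old) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 12.0])
x = gauss_seidel(A, b)   # same fixed point as Jacobi: [1, 2]
```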

(17)

Jacobi vs Gauss-Seidel

Jacobi:       x_i^{k+1} = (1/a_{ii}) \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^k - \sum_{j=i+1}^{n} a_{ij} x_j^k \right)

Gauss-Seidel: x_i^{k+1} = (1/a_{ii}) \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k \right)

Splitting A into its strictly lower, diagonal, and strictly upper parts:

Ax = b  ⇒  (L + D + U) x = b

Jacobi:       D x^{k+1} = -(L + U) x^k + b

Gauss-Seidel: (D + L) x^{k+1} = -U x^k + b

(18)

Jacobi vs Gauss-Seidel

• Gauss-Seidel converges more rapidly than Jacobi.

• Both methods are:

‣ Simple to implement

‣ No need for a sparse data structure

‣ Low memory consumption: O(n)

‣ Easy to parallelize (Jacobi especially, since each update uses only the previous iterate)

• Convergence depends on the diagonal dominance of the matrix.

• Fortunately, the Laplacian operator is diagonally dominant.

(19)

Steepest Descent Method

• If A is symmetric positive-definite, solving Ax = b is equivalent to minimizing

f(x) = (1/2) x^T A x - b^T x + c

λ_i > 0 (i = 1, 2, \ldots, n) ⇒ x^T A x = x^T (VΛV^T) x = (V^T x)^T Λ (V^T x) = d^T Λ d > 0   (A^T = A ∵ symmetric)

⇒ ∇f(x) = (1/2) A^T x + (1/2) A x - b = Ax - b = 0   ∴ concave up

(20)

Steepest Descent Method

• A specific example

f(x) = (1/2) x^T A x - b^T x + c,
\quad A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix},
\quad b = \begin{bmatrix} 2 \\ -8 \end{bmatrix},
\quad c = 0
\quad ⇒ \quad x = \begin{bmatrix} 2 \\ -2 \end{bmatrix}

(figure: energy function contours)

(21)

Steepest Descent Method

• Given: x_k

• Where is x_{k+1}? (LINE SEARCH ALGORITHM)

• Let's go in the direction in which f(x) decreases most quickly:

‣ Direction: d_k = -∇f(x_k) = -(A x_k - b) = -r_k   (r_k : residual)

‣ Amount: α_k = ?

d/dα_k f(x_k - α_k r_k) = 0  →  ∇f(x_k - α_k r_k)^T r_k = 0  ∴ α_k = (r_k^T r_k) / (r_k^T A r_k)

x_{k+1} = x_k + α_k d_k
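The line-search loop above, as a sketch; the stopping threshold is an illustrative choice, and A, b are the slide's 2×2 example:

```python
import numpy as np

def steepest_descent(A, b, iters=200):
    """Steepest descent for SPD A: step along d_k = -r_k where
    r_k = A x_k - b, with exact line-search step
    alpha_k = (r^T r) / (r^T A r)."""
    x = np.zeros(len(b))
    for _ in range(iters):
        r = A @ x - b                 # residual (slide's sign convention)
        rr = r @ r
        if rr < 1e-20:                # converged
            break
        alpha = rr / (r @ A @ r)
        x -= alpha * r                # x_{k+1} = x_k + alpha_k d_k, d_k = -r_k
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])   # the slide's example
b = np.array([2.0, -8.0])
x = steepest_descent(A, b)               # converges to [2, -2]
```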

(22)
(23)

Conjugate Gradient Method

• Better directions than the "Steepest Descent Method"

‣ It improves "Steepest Descent" by avoiding repetitious steps.

‣ Only walking once in each direction

‣ Directions are orthogonal w.r.t. the system matrix A

‣ It can reach the solution x in n steps.

• Ideal directions are the eigenvectors of A.

• Instead of eigenvectors, however, which are too hard to compute, use "conjugate" or "A-orthogonal" directions.

(24)

Conjugate Gradient Method
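A minimal CG sketch for SPD systems. Note it uses the common convention r = b - Ax (opposite sign to the residual on the steepest-descent slides); in exact arithmetic it reaches x in at most n steps:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Plain CG for SPD A: successive search directions are A-orthogonal
    (conjugate), so exact arithmetic finishes in at most n steps."""
    x = np.zeros(len(b))
    r = b - A @ x                  # residual, r = b - A x
    d = r.copy()                   # first direction = steepest descent
    rr = r @ r
    for _ in range(len(b)):
        Ad = A @ d
        alpha = rr / (d @ Ad)      # exact line search along d
        x += alpha * d
        r -= alpha * Ad
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        d = r + (rr_new / rr) * d  # beta keeps d A-orthogonal to previous d
        rr = rr_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
x = conjugate_gradient(A, b)       # exact in <= 2 steps: [2, -2]
```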

(25)

Preconditioning

• To improve the condition number of a matrix

‣ M: a symmetric, positive-definite matrix that approximates A but is easier to invert

‣ Solve the preconditioned system M^{-1} A x = M^{-1} b

‣ The eigenvalues of M^{-1} A are better clustered than those of A.
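A preconditioned-CG sketch where M^{-1} is applied as a function; the simple diagonal (Jacobi) choice M = diag(A) below is illustrative, not a recommendation from the slides:

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, iters=100):
    """PCG: M is SPD, approximates A, and is easy to invert.
    M_inv(r) applies M^{-1} to a vector."""
    x = np.zeros(len(b))
    r = b - A @ x
    z = M_inv(r)                   # preconditioned residual
    d = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x

A = np.array([[3.0, 2.0], [2.0, 6.0]])
b = np.array([2.0, -8.0])
jacobi_Minv = lambda r: r / np.diag(A)     # M = diag(A), trivially inverted
x = preconditioned_cg(A, b, jacobi_Minv)   # [2, -2]
```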
