INTRODUCTION TO NUMERICAL ANALYSIS

Academic year: 2022

(1)

Cho, Hyoung Kyu

Department of Nuclear Engineering Seoul National University

INTRODUCTION TO

NUMERICAL ANALYSIS

(2)

5. EIGENVALUES AND EIGENVECTORS

5.1 Background

5.2 The Characteristic Equation
5.3 The Basic Power Method
5.4 The Inverse Power Method
5.5 The Shifted Power Method
5.6 The QR Factorization and Iteration Method
5.7 Use of MATLAB Built-In Functions

(3)

5.1 Background

Eigenvalue

The word eigenvalue is derived from the German word Eigenwert, which means "proper or characteristic value."

[𝑎][𝑢] = 𝜆[𝑢]

 Generalized form 𝐿𝑢 = 𝜆𝑢

𝐿 : operator that can represent multiplication by a matrix, differentiation, integration, and so on.

Ex) 𝐿 : second differentiation of 𝑦 with respect to 𝑥

𝑑²𝑦/𝑑𝑥² = 𝑘²𝑦

𝜆: eigenvalue associated with the operator 𝐿

𝑢: eigenvector or eigenfunction corresponding to the eigenvalue 𝜆 and operator 𝐿
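The matrix form [𝑎][𝑢] = 𝜆[𝑢] can be checked numerically. A minimal sketch (NumPy here in place of the course's MATLAB; the 2×2 matrix is my own illustration):

```python
import numpy as np

# Verify [a][u] = lambda [u] for a small example matrix.
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, u = np.linalg.eig(a)  # eigenvalues lam, eigenvectors in the columns of u

# Each eigenpair satisfies the defining equation.
for k in range(2):
    assert np.allclose(a @ u[:, k], lam[k] * u[:, k])
```

For this symmetric matrix the two eigenvalues are 1 and 3.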

(4)

5.1 Background

Importance of eigenvalues and eigenvectors in science and engineering

 Ex) vibration

Eigenvalues represent the natural frequencies of a system or component.

Eigenvectors represent the modes of these vibrations.

Important to identify these natural frequencies

If periodic external loads are applied at or near these frequencies, resonance can cause the motion of the structure to be amplified, leading to failure of the component.

 Mechanics of materials

The principal stresses are the eigenvalues of the stress matrix.

The principal directions are the directions of the associated eigenvectors.

(5)

5.1 Background

Importance of eigenvalues and eigenvectors in science and engineering

 Quantum mechanics

In Heisenberg’s formulation of quantum mechanics: 𝐿Ψ = 𝑐Ψ

𝐿 : operator for any quantity that can be measured or inferred experimentally, such as position, velocity, or energy

Ψ : wave function

𝑐 : eigenvalue

Ex) the energy operator, the momentum operator

(6)

5.1 Background

Importance of eigenvalues and eigenvectors in science and engineering

 Link between eigenvalue problems involving differential equations and eigenvalue problems involving matrices ?

 Numerical solution of eigenvalue problems involving ODEs

Results in systems of simultaneous equations

 Eigenvalues of a matrix can also provide useful information

Jacobi and Gauss-Seidel iterative methods

It turns out that whether or not these iterative methods converge to a solution depends on the eigenvalues of the matrix [𝑎].

How quickly the iterations converge depends on the magnitudes of the eigenvalues of [𝑎].

[𝑎][𝑢] = 𝜆[𝑢]

(7)

5.2 The Characteristic Equation

Eigenvalues of a matrix

 [𝑎 − 𝜆𝐼] : non-singular (it has an inverse) ⇒ [𝑢] = 0 (only the trivial solution)

 [𝑎 − 𝜆𝐼] : singular (it does not have an inverse) ⇒ [𝑢] ≠ 0 (a non-trivial solution is possible)

Characteristic equation

 Polynomial equation for 𝜆

 For a small matrix [𝑎]

The eigenvalues can be determined directly by calculating the determinant and solving for the roots of the characteristic equation.

 For a large matrix

Difficult to determine

Various numerical methods: power method and QR factorization method
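For a small matrix the direct route is easy to carry out: for a 2×2 matrix, det([𝑎] − 𝜆[𝐼]) = 𝜆² − tr([𝑎])𝜆 + det([𝑎]) = 0. A sketch with an illustrative matrix of my own (NumPy in place of the course's MATLAB):

```python
import numpy as np

# Characteristic equation of a 2x2 matrix: lambda^2 - trace*lambda + det = 0.
a = np.array([[4.0, 1.0],
              [2.0, 3.0]])
coeffs = [1.0, -np.trace(a), np.linalg.det(a)]  # polynomial coefficients
lam = np.sort(np.roots(coeffs))                 # roots = eigenvalues

# Same result as a direct eigenvalue computation.
assert np.allclose(lam, np.sort(np.linalg.eig(a)[0]))
```

Here the characteristic equation is 𝜆² − 7𝜆 + 10 = 0, with roots 2 and 5.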

(8)

5.3 Basic Power Method

Power method

 Iterative procedure for determining the largest real eigenvalue and the corresponding eigenvector of a matrix

 For an (𝑛 × 𝑛) matrix [𝑎] with 𝑛 distinct real eigenvalues and 𝑛 associated eigenvectors

 Eigenvectors are linearly independent!

Any vector [𝑥] can be written as a linear combination of these basis vectors:

[𝑥] = 𝑐1[𝑢]1 + 𝑐2[𝑢]2 + ⋯ + 𝑐𝑛[𝑢]𝑛

[𝑎][𝑥] = 𝑐1𝜆1[𝑢]1 + 𝑐2𝜆2[𝑢]2 + ⋯ + 𝑐𝑛𝜆𝑛[𝑢]𝑛

(9)

5.3 Basic Power Method

Power method

 Successive iteration

(10)

5.3 Basic Power Method

Power method

 Successive iteration

𝜆1 : largest eigenvalue

When 𝑘 is sufficiently large, [𝑎]^𝑘[𝑥] ≈ 𝑐1𝜆1^𝑘[𝑢]1, so the iterates align with the eigenvector [𝑢]1.

(11)

5.3 Basic Power Method

Power method

 Algorithm of the power method

Start with an arbitrary column vector [𝑥]𝑖 of length 𝑛 (any nonzero vector will do).

Multiply the vector [𝑥]𝑖 by the matrix [𝑎] ⇒ [𝑥]𝑖+1

Normalize [𝑥]𝑖+1.

Assign the normalized vector to [𝑥]𝑖 and go back to the first step.

 Convergence criteria
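The steps above can be sketched as follows (NumPy in place of the course's MATLAB; normalization by the largest-magnitude element and the convergence check on successive eigenvalue estimates follow the algorithm above; the test matrix is my own):

```python
import numpy as np

def power_method(a, x, tol=1e-10, maxit=1000):
    """Basic power method: dominant eigenvalue and eigenvector of a."""
    lam_old = 0.0
    for _ in range(maxit):
        x_new = a @ x                           # multiply the vector by [a]
        lam = x_new[np.argmax(np.abs(x_new))]   # largest-magnitude element (sign kept)
        x = x_new / lam                         # normalize
        if abs(lam - lam_old) < tol:            # convergence check on the estimate
            break
        lam_old = lam
    return lam, x

a = np.array([[4.0, 1.0],
              [2.0, 3.0]])                      # eigenvalues 5 and 2
lam, u = power_method(a, np.array([1.0, 0.0]))
```

Starting from [1, 0]ᵀ the iteration converges to the largest eigenvalue 5, with eigenvector proportional to [1, 1]ᵀ.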

(12)

5.3 Basic Power Method

Power method

 Example

(13)

5.3 Basic Power Method

Power method

 Convergence of the power method

Converges very slowly unless the starting vector [𝑥] is close to the eigenvector 𝑢

A problem can arise when 𝑐1 is zero.

 When can the power method be used?

Only the largest eigenvalue is desired.

The largest eigenvalue cannot be a repeated root of the characteristic equation.

No other eigenvalue may have the same magnitude.

The largest eigenvalue must be real.

(A complex eigenvalue would come as a conjugate pair: two eigenvalues with the same magnitude.)

(14)

5.4 Inverse Power Method

Inverse Power Method

 To determine the smallest eigenvalue

 Power method: for the largest eigenvalue

 Apply the power method to the inverse of the given matrix [𝑎]

This works because the eigenvalues of the inverse matrix [𝑎]^−1 are the reciprocals of the eigenvalues of [𝑎]:

[𝑎][𝑥] = 𝜆[𝑥] ⇒ [𝑎]^−1[𝑎][𝑥] = [𝑎]^−1𝜆[𝑥] = 𝜆[𝑎]^−1[𝑥]

[𝑎]^−1[𝑎] = [𝐼] ⇒ [𝑥] = 𝜆[𝑎]^−1[𝑥] ⇒ [𝑎]^−1[𝑥] = (1/𝜆)[𝑥]

𝟏/𝝀 : eigenvalue of the inverse matrix [𝒂]^−𝟏

 Application of the power method ⇒ the largest value of 1/𝜆

the smallest value of 𝝀

(15)

5.4 Inverse Power Method

Procedure

 Starting vector 𝑥 𝑖

 Multiplied by 𝑎 −1

 Normalization

 The inverse matrix [𝑎]^−1 would have to be calculated before the iterations!

Calculating the inverse of a matrix is computationally inefficient and not desirable.

Instead, [𝑥]𝑖+1 can be obtained by solving the system of linear equations [𝑎][𝑥]𝑖+1 = [𝑥]𝑖.

This is best done with the LU decomposition method, since the factorization is computed once and reused in every iteration.

 With the power method and inverse power method, the largest and the smallest eigenvalues of a matrix can be found.

 In some instances, it is necessary to find all the eigenvalues.

Shifted power method, QR factorization method
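The procedure above can be sketched as follows (NumPy in place of the course's MATLAB; for brevity each pass calls a dense solve where, as noted above, the LU factors of [𝑎] would in practice be computed once and reused; the test matrix is my own):

```python
import numpy as np

def inverse_power_method(a, x, tol=1e-10, maxit=1000):
    """Inverse power method: smallest-magnitude eigenvalue of a.

    Solves a @ x_new = x each pass instead of forming inv(a) explicitly.
    """
    lam_old = 0.0
    for _ in range(maxit):
        x_new = np.linalg.solve(a, x)            # same effect as inv(a) @ x
        lam = x_new[np.argmax(np.abs(x_new))]    # dominant eigenvalue of inv(a)
        x = x_new / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return 1.0 / lam, x                          # reciprocal = smallest eigenvalue of [a]

a = np.array([[4.0, 1.0],
              [2.0, 3.0]])                       # eigenvalues 5 and 2
lam_min, u = inverse_power_method(a, np.array([1.0, 0.0]))
```

The dominant eigenvalue of [𝑎]⁻¹ is 1/2, so the method returns the smallest eigenvalue of [𝑎], which is 2.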

(16)

5.5 Shifted Power Method

Shifted Power Method

 Once the largest or the smallest eigenvalue is known, the shifted power method can be used for finding the other eigenvalues.

𝜆1 : the largest or smallest eigenvalue obtained by using the power method or inverse power method

 Shifted matrix [𝑎 − 𝜆1𝐼]

𝛼 : eigenvalues of the shifted matrix, 𝛼 = 𝜆 − 𝜆1 for 𝜆 = 𝜆1, 𝜆2, 𝜆3, … , 𝜆𝑛

[𝑎][𝑥] = 𝜆[𝑥] ⇒ [𝑎 − 𝜆1𝐼][𝑥] = (𝜆 − 𝜆1)[𝑥] = 𝛼[𝑥]

(17)

5.5 Shifted Power Method

Shifted Power Method

 Shifted matrix [𝑎 − 𝜆1𝐼]

Apply the basic power method !

The largest eigenvalue of the shifted matrix, 𝛼𝑘, can be determined.

The eigenvalue 𝜆𝑘 = 𝛼𝑘 + 𝜆1 can then be recovered from 𝛼𝑘 = 𝜆𝑘 − 𝜆1.

Repeat this process together with shifted inverse power method!

 Comment

Tedious and inefficient process!

QR factorization

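One shifted-power pass can be sketched as follows (NumPy in place of the course's MATLAB; the helper repeats the basic power method, and the 2×2 matrix is my own illustration):

```python
import numpy as np

def power_method(a, x, tol=1e-12, maxit=2000):
    """Basic power method (largest-magnitude eigenvalue)."""
    lam_old = 0.0
    for _ in range(maxit):
        x_new = a @ x
        lam = x_new[np.argmax(np.abs(x_new))]   # largest-magnitude element, sign kept
        x = x_new / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

a = np.array([[4.0, 1.0],
              [2.0, 3.0]])                           # eigenvalues 5 and 2
lam1, _ = power_method(a, np.array([1.0, 0.0]))      # largest eigenvalue: 5
# Power method on the shifted matrix [a - lam1*I] gives the dominant alpha_k.
alpha, _ = power_method(a - lam1 * np.eye(2), np.array([1.0, 0.0]))
lam2 = alpha + lam1                                  # shift back: alpha_k + lam1
```

Here the shifted matrix has eigenvalues 0 and −3; shifting back gives the second eigenvalue, 2.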

(18)

5.6 QR Factorization and Iteration Method

QR factorization and iteration method

 Popular means for finding all the eigenvalues of a matrix

 Based on the facts

Similar matrices have the same eigenvalues and associated eigenvectors.

The eigenvalues of an upper triangular matrix are the elements along the diagonal.

 Strategy

Transform the matrix into a similar matrix that is upper triangular.

Iterative process is required.

 The QR factorization method finds all the eigenvalues of a matrix, but cannot find the corresponding eigenvectors.

 For real eigenvalues

QR factorization method eventually factors the given matrix into an orthogonal matrix and an upper triangular matrix

 For complex eigenvalues (not covered in this course)

(19)

5.6 QR Factorization and Iteration Method

Similar matrices

 Two matrices [𝑎] and [𝑏] are similar if

[𝑎] = [𝑐]^−1[𝑏][𝑐] : similarity transformation

Similar matrices have the same eigenvalues and associated eigenvectors.

Orthogonal matrix

 Whose inverse is the same as its transpose

QR factorization and iteration procedure

 Start with the matrix [𝑎]1 and factor it:

[𝑎]1 = [𝑄]1[𝑅]1

[𝑄]1 : orthogonal matrix, [𝑅]1 : upper triangular matrix

[𝑄]^−1 = [𝑄]^𝑇 ⇒ [𝑄]^𝑇[𝑄] = [𝑄]^−1[𝑄] = [𝐼]

(20)

5.6 QR Factorization and Iteration Method

QR factorization and iteration procedure

 The first iteration

[𝑎]1 = [𝑄]1[𝑅]1 ⇒ [𝑅]1 = [𝑄]1^−1[𝑎]1

[𝑅]1 is multiplied by [𝑄]1 from the right:

[𝑎]2 = [𝑅]1[𝑄]1 = [𝑄]1^−1[𝑎]1[𝑄]1 = [𝑄]1^𝑇[𝑎]1[𝑄]1

This has the form of a similarity transformation [𝑎] = [𝑐]^−1[𝑏][𝑐], so [𝑎]1 and [𝑎]2 are similar; they have the same eigenvalues.

 The second iteration

[𝑎]2 = [𝑄]2[𝑅]2

[𝑎]3 = [𝑅]2[𝑄]2 = [𝑄]2^𝑇[𝑎]2[𝑄]2

[𝑎]2 and [𝑎]3 are similar; they have the same eigenvalues.

 Iterations continue until the sequence converges to an upper triangular matrix.

The eigenvalues of an upper triangular matrix are the elements along the diagonal.

(21)

5.6 QR Factorization and Iteration Method

QR factorization

 Householder matrix [𝐻] for factorization

[𝐻] = [𝐼] − (2/([𝑣]^𝑇[𝑣]))[𝑣][𝑣]^𝑇

[𝑣] : n-element column vector

[𝑣]^𝑇 : row vector

[𝑣]^𝑇[𝑣] : scalar

[𝑣][𝑣]^𝑇 : (𝑛 × 𝑛) matrix

Special properties of [𝐻]

Symmetric and orthogonal, so [𝐻]^−1 = [𝐻]

[𝐻][𝑎][𝐻] : similar to [𝑎], since with [𝑐] = [𝐻] in [𝑎] = [𝑐]^−1[𝑏][𝑐] we get [𝐻]^−1([𝐻][𝑎][𝐻])[𝐻] = [𝑎]

[𝑎] = [𝑄][𝑅]

(22)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 1: identify the vectors [𝑐] and [𝑒]

[𝑐] : the first column of [𝑎]

In [𝑒], the first element is

+1 : if the first element of [𝑐] (𝑎11) is positive.

−1 : if the first element of [𝑐] (𝑎11) is negative.

[𝐻]^(1) can be constructed.

[𝑎] = [𝑄][𝑅]

(23)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 2: identify the vectors [𝑐] and [𝑒]

[𝑐] : the second column of [𝑅]^(1) with its first element set to zero

In [𝑒], the second element is

+1 : if the second element of [𝑐] (𝑅22^(1)) is positive.

−1 : if the second element of [𝑐] (𝑅22^(1)) is negative.

[𝐻]^(2) can be constructed.

[𝑎] = [𝐻]^(1)[𝑅]^(1) = [𝑄]^(1)[𝑅]^(1), [𝑅]^(1) = [𝐻]^(2)[𝑅]^(2)

[𝑎] = [𝑄]^(1)[𝐻]^(2)[𝑅]^(2) = [𝑄]^(2)[𝑅]^(2)

(24)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 3: identify the vectors [𝑐] and [𝑒]

[𝑐] : the third column of [𝑅]^(2) with its first two elements set to zero

In [𝑒], the third element is

+1 : if the third element of [𝑐] (𝑅33^(2)) is positive.

−1 : if the third element of [𝑐] (𝑅33^(2)) is negative.

[𝐻]^(3) can be constructed.

[𝑎] = [𝑄][𝑅]

(25)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 4: the last step (step 𝑛 − 1)

After 𝑛 − 1 steps the factorization is complete:

[𝑎] = [𝑄][𝑅], [𝑅] : upper triangular matrix, [𝑄] : orthogonal matrix

Eigenvalues? Repeat the factorization and iteration:

[𝑎]𝑛 = [𝑅]𝑛−1[𝑄]𝑛−1, [𝑎]𝑛 = [𝑄]𝑛[𝑅]𝑛, [𝑎]𝑛+1 = [𝑅]𝑛[𝑄]𝑛

(26)

5.6 QR Factorization and Iteration Method

QR factorization

 Example

(27)

5.6 QR Factorization and Iteration Method

QR factorization

 Example

(28)

5.6 QR Factorization and Iteration Method

QR factorization

 Example

(29)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 2

(30)

5.6 QR Factorization and Iteration Method

QR factorization

 Step 2

(31)

5.6 QR Factorization and Iteration Method

QR factorization

 Example

(32)

5.6 QR Factorization and Iteration Method

Iteration

 Repeat the factorization until the last matrix in the sequence is upper triangular.

[𝐴]𝑛 = [𝑅]𝑛−1[𝑄]𝑛−1

Example

(33)

5.6 QR Factorization and Iteration Method

Example

function [Q, R] = QRFactorization(R)
% The function factors a matrix [A] into an orthogonal matrix [Q]
% and an upper-triangular matrix [R].
% Input variables:
% A  The (square) matrix to be factored.
% Output variables:
% Q  Orthogonal matrix.
% R  Upper-triangular matrix.
nmatrix = size(R);
n = nmatrix(1);
I = eye(n);
Q = I;
for j = 1:n-1
    c = R(:,j);
    c(1:j-1) = 0;
    e(1:n,1) = 0;
    if c(j) > 0
        e(j) = 1;
    else
        e(j) = -1;
    end
    clength = sqrt(c'*c);
    v = c + clength*e;
    H = I - 2/(v'*v)*v*v';   % Householder matrix
    Q = Q*H;
    R = H*R;
end

Script applying 100 QR iterations and reading the eigenvalues off the diagonal:

clear all
A = [45 30 -25; 30 -24 68; -25 68 80]
for i = 1:100
    [q, R] = QRFactorization(A);
    A = R*q;
end
A
e = diag(A)
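For comparison, a NumPy transcription of the MATLAB function above (same Householder construction and sign choice; offered as a sketch, not part of the original example):

```python
import numpy as np

def qr_factorization(a):
    """Householder QR factorization, mirroring QRFactorization above."""
    r = np.array(a, dtype=float)
    n = r.shape[0]
    q = np.eye(n)
    for j in range(n - 1):
        c = r[:, j].copy()
        c[:j] = 0.0                                       # zero the part above row j
        e = np.zeros(n)
        e[j] = 1.0 if c[j] > 0 else -1.0                  # sign matches c(j)
        v = c + np.sqrt(c @ c) * e
        h = np.eye(n) - (2.0 / (v @ v)) * np.outer(v, v)  # Householder matrix [H]
        q = q @ h
        r = h @ r
    return q, r

a = np.array([[45.0, 30.0, -25.0],
              [30.0, -24.0, 68.0],
              [-25.0, 68.0, 80.0]])
q, r = qr_factorization(a)
```

The result satisfies [𝑎] = [𝑄][𝑅] with [𝑄] orthogonal and [𝑅] upper triangular.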

(34)

5.6 QR Factorization and Iteration Method

QR factorization

 Householder matrix transformation

[𝐻] = [𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇

 Characteristics

𝑢 is an eigenvector of the Householder matrix, with eigenvalue −1:

𝐻𝑢 = (𝐼 − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇)𝑢 = 𝑢 − (2/(𝑢^𝑇𝑢))𝑢(𝑢^𝑇𝑢) = 𝑢 − 2𝑢 = −𝑢

Any vector 𝑣 orthogonal to 𝑢 (𝑢^𝑇𝑣 = 0) is an eigenvector with eigenvalue +1:

𝐻𝑣 = (𝐼 − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇)𝑣 = 𝑣 − (2/(𝑢^𝑇𝑢))𝑢(𝑢^𝑇𝑣) = 𝑣

(35)

5.6 QR Factorization and Iteration Method

QR factorization

 Characteristics

Symmetric:

[𝐻]^𝑇 = ([𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇)^𝑇 = [𝐼] − (2/(𝑢^𝑇𝑢))(𝑢𝑢^𝑇)^𝑇 = [𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇 = [𝐻]

Orthogonal:

[𝐻][𝐻]^𝑇 = ([𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇)([𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇) = [𝐼] − (4/(𝑢^𝑇𝑢))𝑢𝑢^𝑇 + (4/(𝑢^𝑇𝑢)^2)𝑢(𝑢^𝑇𝑢)𝑢^𝑇 = [𝐼]

(36)

5.6 QR Factorization and Iteration Method

QR factorization

 Characteristics

Choose [𝑣] = 𝑥 + 𝛼𝑒 with 𝛼 = ±(𝑥^𝑇𝑥)^(1/2) and 𝑒^𝑇 = [±1, 0, …, 0], so that 𝛼^2 = 𝑥^𝑇𝑥 and 𝑒^𝑇𝑒 = 1. Then:

𝑣^𝑇𝑥 = (𝑥 + 𝛼𝑒)^𝑇𝑥 = 𝑥^𝑇𝑥 + 𝛼𝑒^𝑇𝑥

𝑣^𝑇𝑣 = (𝑥 + 𝛼𝑒)^𝑇(𝑥 + 𝛼𝑒) = 𝑥^𝑇𝑥 + 2𝛼𝑒^𝑇𝑥 + 𝛼^2𝑒^𝑇𝑒 = 2(𝑥^𝑇𝑥 + 𝛼𝑒^𝑇𝑥) = 2𝑣^𝑇𝑥

𝐻𝑥 = (𝐼 − (2/(𝑣^𝑇𝑣))𝑣𝑣^𝑇)𝑥 = 𝑥 − (2𝑣^𝑇𝑥/(𝑣^𝑇𝑣))𝑣 = 𝑥 − 𝑣 = −𝛼𝑒

𝐻𝑥 = [∓𝛼, 0, …, 0]^𝑇

(37)

5.6 QR Factorization and Iteration Method

QR factorization

 Characteristics

𝐻𝑥 = ([𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇)𝑥 = [∓(𝑥^𝑇𝑥)^(1/2), 0, …, 0]^𝑇

Example: 𝑥 = [0, 4, 1]^𝑇

𝛼 = (𝑥^𝑇𝑥)^(1/2) = √17 = 4.123

𝑢 = 𝑥 + 𝛼𝑒 = [0, 4, 1]^𝑇 + 4.123 [1, 0, 0]^𝑇 = [4.123, 4, 1]^𝑇

𝐻 = [  0      −0.970   −0.243
      −0.970   0.059   −0.235
      −0.243  −0.235    0.941 ]

𝐻𝑥 = [−4.123, 0, 0]^𝑇
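The numbers in the worked example above can be reproduced directly (NumPy check; the construction follows the formulas on this slide):

```python
import numpy as np

# Worked example: x = [0, 4, 1]^T, alpha = sqrt(x^T x) = sqrt(17) = 4.123.
x = np.array([0.0, 4.0, 1.0])
alpha = np.sqrt(x @ x)
u = x + alpha * np.array([1.0, 0.0, 0.0])           # u = [4.123, 4, 1]^T
h = np.eye(3) - (2.0 / (u @ u)) * np.outer(u, u)    # Householder matrix [H]
hx = h @ x                                          # -> [-4.123, 0, 0]^T
```

The computed [𝐻] is symmetric and orthogonal, and 𝐻𝑥 has zeros below its first element, as claimed.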

(38)

5.6 QR Factorization and Iteration Method

QR factorization with Householder matrix

 Step 1

𝑥 = [𝑎11, 𝑎21, …, 𝑎𝑛1]^𝑇 (the first column of [𝐴]), 𝑢 = 𝑥 + 𝛼𝑒, 𝛼 = (𝑥^𝑇𝑥)^(1/2)

[𝐻]1 = [𝐼] − (2/(𝑢^𝑇𝑢))𝑢𝑢^𝑇

[𝐻]1[𝐴] has zeros below the diagonal in the first column:

[𝐻]1[𝐴] = [ ∓𝛼   𝑎12′  ⋯  𝑎1𝑛′
             0    𝑎22′  ⋯  𝑎2𝑛′
             ⋮     ⋮          ⋮
             0    𝑎𝑛2′  ⋯  𝑎𝑛𝑛′ ]

 Step 2

[𝐻]2 = [ 1   0  ⋯  0
         0
         ⋮     [𝐻′]
         0          ]

where [𝐻′] is the (𝑛−1) × (𝑛−1) Householder matrix built from the second column of [𝐻]1[𝐴] below the first row. [𝐻]2[𝐻]1[𝐴] then has zeros below the diagonal in the first two columns, and the remaining columns are treated in the same way.

(39)

5.7 MATLAB Built-in Functions

Eigenvalues and eigenvectors
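The MATLAB built-in here is eig (d = eig(A) for the eigenvalues, [V, D] = eig(A) for the eigenvectors as well). As a sketch, the NumPy counterpart applied to the matrix from the QR iteration example:

```python
import numpy as np

# NumPy counterpart of MATLAB's [V, D] = eig(A).
a = np.array([[45.0, 30.0, -25.0],
              [30.0, -24.0, 68.0],
              [-25.0, 68.0, 80.0]])
d, v = np.linalg.eig(a)           # eigenvalues d, eigenvectors as columns of v
assert np.allclose(a @ v, v * d)  # each column satisfies [a][u] = lambda [u]
```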

(40)

5.7 MATLAB Built-in Functions

QR factorization
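The MATLAB built-in here is qr ([Q, R] = qr(A)). As a sketch, the NumPy counterpart on the same example matrix:

```python
import numpy as np

# NumPy counterpart of MATLAB's [Q, R] = qr(A).
a = np.array([[45.0, 30.0, -25.0],
              [30.0, -24.0, 68.0],
              [-25.0, 68.0, 80.0]])
q, r = np.linalg.qr(a)
assert np.allclose(q @ r, a)             # a = QR
assert np.allclose(q.T @ q, np.eye(3))   # Q is orthogonal
```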
