(1)

Chapter 1

Vectors and Tensors

School of Mechanical and Aerospace Engineering
Seoul National University

(2)

Vectors, Vector Additions, etc.

Convention:

$\mathbf{a} = a_x \mathbf{i} + a_y \mathbf{j} + a_z \mathbf{k}$

Base vectors: $\mathbf{i}, \mathbf{j}, \mathbf{k}$ or $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$

Indicial notation:

$a_i = \mathbf{a} \cdot \mathbf{e}_i$

$\mathbf{a} = a_i \mathbf{e}_i = a_1 \mathbf{e}_1 + a_2 \mathbf{e}_2 + a_3 \mathbf{e}_3$

(3)

Vectors, Vector Additions, etc.

Summation Convention: a repeated index implies summation over the values 1, 2, 3.

$a_i a_i = a_1 a_1 + a_2 a_2 + a_3 a_3$

$a_{kk} = a_{11} + a_{22} + a_{33}$

$a_{ij} a_{ij} = a_{11} a_{11} + a_{12} a_{12} + a_{13} a_{13}$
$\quad + a_{21} a_{21} + a_{22} a_{22} + a_{23} a_{23}$
$\quad + a_{31} a_{31} + a_{32} a_{32} + a_{33} a_{33}$
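As a quick numerical illustration (a minimal NumPy sketch; the example arrays are arbitrary), the summation convention maps directly onto `np.einsum`, where a repeated index letter is summed:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])            # a_i
A = np.arange(1.0, 10.0).reshape(3, 3)   # a_ij (example values)

# a_i a_i = a_1 a_1 + a_2 a_2 + a_3 a_3
print(np.einsum('i,i->', a, a))          # 14.0

# a_kk = a_11 + a_22 + a_33
print(np.einsum('kk->', A))              # 15.0

# a_ij a_ij : sum over both repeated indices
print(np.einsum('ij,ij->', A, A))        # 285.0
```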

(4)

Scalar Product and Vector Product

Scalar Product

The scalar product is defined as the product of the two magnitudes times the cosine of the angle between the vectors:

$\mathbf{a} \cdot \mathbf{b} = ab \cos\theta \qquad (1\text{-}1)$

From the definition of eq. (1-1), we immediately have

$\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}$

$(m\mathbf{a}) \cdot (n\mathbf{b}) = mn\,(\mathbf{a} \cdot \mathbf{b})$

$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}$

The scalar product of two different unit base vectors defined above is zero, since $\cos(90^\circ) = 0$; that is,

$\mathbf{i} \cdot \mathbf{j} = \mathbf{j} \cdot \mathbf{k} = \mathbf{k} \cdot \mathbf{i} = 0$

(5)

Scalar Product and Vector Product

Then the scalar product becomes

$\mathbf{a} \cdot \mathbf{b} = (a_x \mathbf{i} + a_y \mathbf{j} + a_z \mathbf{k}) \cdot (b_x \mathbf{i} + b_y \mathbf{j} + b_z \mathbf{k}) = a_x b_x + a_y b_y + a_z b_z$

or, in indicial notation,

$\mathbf{a} \cdot \mathbf{b} = a_i b_i$

Vector Product (or cross product)

$\mathbf{c} = \mathbf{a} \times \mathbf{b}$

is defined as a vector $\mathbf{c}$ perpendicular to both $\mathbf{a}$ and $\mathbf{b}$ in the sense that makes $\mathbf{a}, \mathbf{b}, \mathbf{c}$ a right-handed system. The magnitude of $\mathbf{c}$ is given by

$c = ab \sin\theta$
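For a quick check of the component and indicial forms of the scalar product, here is a minimal NumPy sketch (the example vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot = np.dot(a, b)                        # a_x b_x + a_y b_y + a_z b_z
dot_indicial = np.einsum('i,i->', a, b)   # a_i b_i, the same number

# recover the angle from a . b = a b cos(theta)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
print(dot, dot_indicial, np.degrees(np.arccos(cos_theta)))
```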

(6)

Scalar Product and Vector Product

In terms of components, the vector product can be written as

$\mathbf{a} \times \mathbf{b} = (a_x \mathbf{i} + a_y \mathbf{j} + a_z \mathbf{k}) \times (b_x \mathbf{i} + b_y \mathbf{j} + b_z \mathbf{k}) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_x & a_y & a_z \\ b_x & b_y & b_z \end{vmatrix}$

$= (a_y b_z - a_z b_y)\,\mathbf{i} + (a_z b_x - a_x b_z)\,\mathbf{j} + (a_x b_y - a_y b_x)\,\mathbf{k}$

[Fig. 1.1 Definition of vector product]

The vector product is distributive,

$\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) + (\mathbf{a} \times \mathbf{c})$

but it is not associative,

$(\mathbf{a} \times \mathbf{b}) \times \mathbf{c} \neq \mathbf{a} \times (\mathbf{b} \times \mathbf{c})$
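A minimal NumPy sketch (arbitrary example vectors) that evaluates the component formula and checks the distributive and non-associative properties numerically:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])

# component formula: (a_y b_z - a_z b_y, a_z b_x - a_x b_z, a_x b_y - a_y b_x)
print(np.cross(a, b))                                  # [-3.  6. -3.]

# distributive: a x (b + c) = a x b + a x c
print(np.allclose(np.cross(a, b + c),
                  np.cross(a, b) + np.cross(a, c)))    # True

# not associative: (a x b) x c != a x (b x c) in general
print(np.allclose(np.cross(np.cross(a, b), c),
                  np.cross(a, np.cross(b, c))))        # False
```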

(7)

Scalar Product and Vector Product

Scalar Triple Product

$(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{c} = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \begin{vmatrix} a_x & a_y & a_z \\ b_x & b_y & b_z \\ c_x & c_y & c_z \end{vmatrix}$

Permutation Symbol $e_{mnr}$

$e_{mnr} = \begin{cases} +1 & \text{when } (m,n,r) \text{ is an even permutation of } (1,2,3) \\ 0 & \text{when any two indices are equal} \\ -1 & \text{when } (m,n,r) \text{ is an odd permutation of } (1,2,3) \end{cases}$

where the even permutations are (1,2,3), (2,3,1), and (3,1,2), and the odd permutations are (1,3,2), (2,1,3), and (3,2,1).

Using the permutation symbol, the vector product can be represented by

$\mathbf{a} \times \mathbf{b} = e_{pqr}\, a_q b_r\, \mathbf{i}_p$,  i.e.,  $(\mathbf{a} \times \mathbf{b})_p = e_{pqr}\, a_q b_r$
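The permutation symbol is easy to tabulate directly. A minimal NumPy sketch (arbitrary example vectors) that builds $e_{pqr}$ and reproduces the cross product from the indicial formula:

```python
import numpy as np

# permutation symbol e_pqr: +1 for even permutations, -1 for odd, 0 otherwise
e = np.zeros((3, 3, 3))
for p, q, r in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[p, q, r] = 1.0
for p, q, r in [(0, 2, 1), (1, 0, 2), (2, 1, 0)]:
    e[p, q, r] = -1.0

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_p = e_pqr a_q b_r
cross_indicial = np.einsum('pqr,q,r->p', e, a, b)
print(cross_indicial)          # [-3.  6. -3.]
print(np.cross(a, b))          # same result
```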

(8)

Scalar Product and Vector Product

Kronecker Delta

$\delta_{pq} = \begin{cases} 1 & \text{if } p = q \\ 0 & \text{if } p \neq q \end{cases}$

Some examples are given below:

$\delta_{ii} = 3$

$\delta_{ij}\,\delta_{ij} = \delta_{ii} = 3$

$u_i\,\delta_{ij} = u_j$

$T_{ij}\,\delta_{ij} = T_{ii}$
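Since the Kronecker delta has the same components as the identity array, these examples can be verified with a minimal NumPy sketch (arbitrary example values):

```python
import numpy as np

delta = np.eye(3)                           # Kronecker delta as a 3x3 array
u = np.array([1.0, 2.0, 3.0])
T = np.arange(1.0, 10.0).reshape(3, 3)

print(np.einsum('ii->', delta))             # delta_ii = 3
print(np.einsum('ij,ij->', delta, delta))   # delta_ij delta_ij = 3
print(np.einsum('i,ij->j', u, delta))       # u_i delta_ij = u_j -> [1. 2. 3.]
print(np.einsum('ij,ij->', T, delta))       # T_ij delta_ij = T_ii = 15
```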

(9)

Rotation of Axes, etc.

Change of Orthogonal Basis

- A vector is independent of a coordinate system.
- However, the components of a vector change when the coordinate system changes.

Let $x_i$ and $x_i'$ be two coordinate systems, as shown in Fig. 1-2, which have the same origin. Also, let the orientation of the two coordinate systems be given by the direction cosines arranged as

$\begin{array}{c|ccc} & \mathbf{i}_1 & \mathbf{i}_2 & \mathbf{i}_3 \\ \hline \mathbf{i}_1' & a_{11} & a_{12} & a_{13} \\ \mathbf{i}_2' & a_{21} & a_{22} & a_{23} \\ \mathbf{i}_3' & a_{31} & a_{32} & a_{33} \end{array}$

[Fig. 1.2 Coordinate transformation]

(10)

Rotation of Axes, etc.

Under this system,

$\mathbf{i}_r' = a_{rs}\,\mathbf{i}_s$  and  $\mathbf{i}_s = a_{rs}\,\mathbf{i}_r'$,  where  $a_{rs} = \mathbf{i}_r' \cdot \mathbf{i}_s$

Since the unit vectors are orthogonal, we have

$\mathbf{i}_p \cdot \mathbf{i}_q = \delta_{pq}$  and  $\mathbf{i}_p' \cdot \mathbf{i}_q' = \delta_{pq}$

From these,

$\mathbf{i}_p' \cdot \mathbf{i}_q' = a_{ps}\,\mathbf{i}_s \cdot a_{qr}\,\mathbf{i}_r = a_{ps}\,a_{qr}\,\delta_{sr} = a_{ps}\,a_{qs} = \delta_{pq}$

i.e.,

$a_{ps}\,a_{qs} = \delta_{pq}$

Similarly, we get

$a_{rm}\,a_{rn} = \delta_{mn}$

(11)

2D coordinate transformation

$\begin{bmatrix} u_x' \\ u_y' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} u_x \\ u_y \end{bmatrix}$  or  $\mathbf{u}' = \mathbf{A}\,\mathbf{u}$

The inverse of the above transformation equation becomes

$\begin{bmatrix} u_x \\ u_y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} u_x' \\ u_y' \end{bmatrix}$  or  $\mathbf{u} = \mathbf{A}^{-1}\mathbf{u}' = \mathbf{A}^{T}\mathbf{u}'$

[Fig.: the $x'$-$y'$ axes rotated by an angle $\theta$ from the $x$-$y$ axes]
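A minimal NumPy sketch of the 2D case, assuming the axis-rotation convention written above (the angle and components are arbitrary example values); it also confirms that the inverse is simply the transpose:

```python
import numpy as np

theta = np.radians(30.0)
c, s = np.cos(theta), np.sin(theta)

# direction-cosine matrix for axes rotated by theta
A = np.array([[ c, s],
              [-s, c]])

u = np.array([1.0, 2.0])      # components in the x-y frame
u_p = A @ u                   # components in the rotated x'-y' frame

print(np.allclose(A.T @ u_p, u))           # inverse transform recovers u
print(np.allclose(A.T @ A, np.eye(2)))     # orthogonality: A^T A = I
```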

(12)

Coordinate Transformation of a Vector

Let

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$

Then,

$\mathbf{v}' = \mathbf{A}\,\mathbf{v}$  or  $\mathbf{v} = \mathbf{A}^{T}\mathbf{v}'$

Note that

$\mathbf{A} = [\,\mathbf{a}_1\;\;\mathbf{a}_2\;\;\mathbf{a}_3\,]$  and  $\mathbf{A}^{T}\mathbf{A} = \mathbf{A}\mathbf{A}^{T} = \mathbf{1}$

Second Order Tensors

The second order tensor may be expressed by the tensor product, or open product, of two vectors. Let

$\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3$  and  $\mathbf{b} = b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + b_3\mathbf{e}_3$

(13)

$\mathbf{T} = \mathbf{a}\mathbf{b} = \mathbf{a} \otimes \mathbf{b} = (a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3) \otimes (b_1\mathbf{e}_1 + b_2\mathbf{e}_2 + b_3\mathbf{e}_3)$

$= a_1 b_1\,\mathbf{e}_1 \otimes \mathbf{e}_1 + a_1 b_2\,\mathbf{e}_1 \otimes \mathbf{e}_2 + a_1 b_3\,\mathbf{e}_1 \otimes \mathbf{e}_3$
$+ a_2 b_1\,\mathbf{e}_2 \otimes \mathbf{e}_1 + a_2 b_2\,\mathbf{e}_2 \otimes \mathbf{e}_2 + a_2 b_3\,\mathbf{e}_2 \otimes \mathbf{e}_3$
$+ a_3 b_1\,\mathbf{e}_3 \otimes \mathbf{e}_1 + a_3 b_2\,\mathbf{e}_3 \otimes \mathbf{e}_2 + a_3 b_3\,\mathbf{e}_3 \otimes \mathbf{e}_3$

Or

$\mathbf{T} = T_{11}\,\mathbf{e}_1 \otimes \mathbf{e}_1 + T_{12}\,\mathbf{e}_1 \otimes \mathbf{e}_2 + T_{13}\,\mathbf{e}_1 \otimes \mathbf{e}_3$
$+ T_{21}\,\mathbf{e}_2 \otimes \mathbf{e}_1 + T_{22}\,\mathbf{e}_2 \otimes \mathbf{e}_2 + T_{23}\,\mathbf{e}_2 \otimes \mathbf{e}_3$
$+ T_{31}\,\mathbf{e}_3 \otimes \mathbf{e}_1 + T_{32}\,\mathbf{e}_3 \otimes \mathbf{e}_2 + T_{33}\,\mathbf{e}_3 \otimes \mathbf{e}_3$

Here, we may consider $\mathbf{e}_i \otimes \mathbf{e}_j$ as a base of the second order tensor, and $T_{ij}$ is a component of the second order tensor $\mathbf{T}$. Note that $\mathbf{e}_i \otimes \mathbf{e}_j \neq \mathbf{e}_j \otimes \mathbf{e}_i$ for $i \neq j$.
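In components, the open product is simply the outer product of the two component arrays. A minimal NumPy sketch (arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# components of T = a (x) b are T_ij = a_i b_j
T = np.outer(a, b)
print(T)

# the open product is not symmetric: a (x) b != b (x) a in general
print(np.allclose(T, np.outer(b, a)))   # False
```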

(14)

It can be seen that the second order tensor maps a vector to another vector, that is,

$\mathbf{u} = \mathbf{T} \cdot \mathbf{v} = (T_{11}\,\mathbf{e}_1 \otimes \mathbf{e}_1 + T_{12}\,\mathbf{e}_1 \otimes \mathbf{e}_2 + T_{13}\,\mathbf{e}_1 \otimes \mathbf{e}_3$
$\quad + T_{21}\,\mathbf{e}_2 \otimes \mathbf{e}_1 + T_{22}\,\mathbf{e}_2 \otimes \mathbf{e}_2 + T_{23}\,\mathbf{e}_2 \otimes \mathbf{e}_3$
$\quad + T_{31}\,\mathbf{e}_3 \otimes \mathbf{e}_1 + T_{32}\,\mathbf{e}_3 \otimes \mathbf{e}_2 + T_{33}\,\mathbf{e}_3 \otimes \mathbf{e}_3) \cdot (v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3)$

$= (T_{11} v_1 + T_{12} v_2 + T_{13} v_3)\,\mathbf{e}_1 + (T_{21} v_1 + T_{22} v_2 + T_{23} v_3)\,\mathbf{e}_2 + (T_{31} v_1 + T_{32} v_2 + T_{33} v_3)\,\mathbf{e}_3$

$= T_{ij} v_j\,\mathbf{e}_i$

Symmetric Tensors and Skew Tensors

$T_{ij} = T_{ji}$ : symmetric tensor

$T_{ij} = -T_{ji}$ : skew or antisymmetric tensor
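A minimal NumPy sketch of $u_i = T_{ij} v_j$ (arbitrary example components); the last lines use the standard split $\mathbf{T} = \tfrac{1}{2}(\mathbf{T} + \mathbf{T}^T) + \tfrac{1}{2}(\mathbf{T} - \mathbf{T}^T)$ into symmetric and skew parts, added here only as an illustration of the two definitions above:

```python
import numpy as np

T = np.arange(1.0, 10.0).reshape(3, 3)
v = np.array([1.0, 0.0, 2.0])

# u_i = T_ij v_j : the tensor maps a vector to another vector
u = np.einsum('ij,j->i', T, v)
print(u, np.allclose(u, T @ v))          # same as the matrix-vector product

# symmetric part satisfies T_ij = T_ji, skew part satisfies T_ij = -T_ji
T_sym  = 0.5 * (T + T.T)
T_skew = 0.5 * (T - T.T)
print(np.allclose(T, T_sym + T_skew))    # True
```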

(15)

Rotation of axes, change of tensor components

Let  $u_i' = T_{ip}' v_p'$  and  $u_i = T_{ip} v_p$

where $u_i'$ and $u_i$ are the same vector but decomposed into the two different coordinate systems $x_i'$ and $x_i$. The same applies to $v_i'$ and $v_i$. Then, by the transformation of vectors, we get

$u_i' = a_{ij} u_j = a_{ij} T_{jq} v_q = a_{ij} T_{jq} a_{pq} v_p'$

Therefore,

$T_{ip}' = a_{ij}\,a_{pq}\,T_{jq}$

In matrix form, we have

$\mathbf{T}' = \mathbf{A}\,\mathbf{T}\,\mathbf{A}^{T}$  or  $\mathbf{T} = \mathbf{A}^{T}\,\mathbf{T}'\,\mathbf{A}$
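A minimal NumPy check of $\mathbf{T}' = \mathbf{A}\mathbf{T}\mathbf{A}^{T}$, assuming for illustration a rotation about the $x_3$ axis and arbitrary tensor and vector components; the relation $u = Tv$ holds in both frames:

```python
import numpy as np

theta = np.radians(40.0)
c, s = np.cos(theta), np.sin(theta)

# direction cosines a_ij for a rotation about the x3 axis
A = np.array([[ c,  s, 0.0],
              [-s,  c, 0.0],
              [0.0, 0.0, 1.0]])

T = np.arange(1.0, 10.0).reshape(3, 3)   # tensor components in the x_i frame
v = np.array([1.0, 2.0, 3.0])

T_p = A @ T @ A.T                        # T'_ip = a_ij a_pq T_jq
u_p = A @ (T @ v)                        # transform u = T v into the primed frame
print(np.allclose(T_p @ (A @ v), u_p))   # u'_i = T'_ip v'_p, as expected
```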

(16)

Rotation of Axes, etc.

Scalar product of two tensors

$\mathbf{T} : \mathbf{U} = T_{ij}\,U_{ij}$   and   $\mathbf{T}\cdot\cdot\,\mathbf{U} = T_{ji}\,U_{ij}$

If we list all the terms of $\mathbf{T} : \mathbf{U} = T_{ij} U_{ij}$, we get

$\mathbf{T} : \mathbf{U} = T_{11} U_{11} + T_{12} U_{12} + T_{13} U_{13}$
$\quad + T_{21} U_{21} + T_{22} U_{22} + T_{23} U_{23}$
$\quad + T_{31} U_{31} + T_{32} U_{32} + T_{33} U_{33}$

The product of two second-order tensors

The product $\mathbf{T} \cdot \mathbf{U}$ is defined by

$(\mathbf{T} \cdot \mathbf{U}) \cdot \mathbf{v} = \mathbf{T} \cdot (\mathbf{U} \cdot \mathbf{v})$

If $\mathbf{P} = \mathbf{T} \cdot \mathbf{U}$, then $P_{ij} = T_{ik}\,U_{kj}$.
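A minimal NumPy sketch (arbitrary example components) of the two scalar products and the tensor product $P_{ij} = T_{ik} U_{kj}$:

```python
import numpy as np

T = np.arange(1.0, 10.0).reshape(3, 3)
U = np.arange(2.0, 11.0).reshape(3, 3)

double_dot = np.einsum('ij,ij->', T, U)    # T : U  = T_ij U_ij
dot_dot    = np.einsum('ji,ij->', T, U)    # T .. U = T_ji U_ij

P = T @ U                                  # P_ij = T_ik U_kj
print(double_dot, dot_dot)
print(np.allclose(P, np.einsum('ik,kj->ij', T, U)))   # True
```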

(17)

Rotation of Axes, etc.

The Trace

Definition:  $\mathrm{tr}(\mathbf{T}) = T_{kk}$

Note that

(1) $\mathbf{A}\cdot\cdot\,\mathbf{B} = \mathrm{tr}(\mathbf{A} \cdot \mathbf{B})$
(2) $\mathbf{A} : \mathbf{B} = \mathrm{tr}(\mathbf{A} \cdot \mathbf{B}^{T}) = \mathrm{tr}(\mathbf{A}^{T} \cdot \mathbf{B})$
(3) $\mathrm{tr}(\mathbf{A} \cdot \mathbf{B}) = \mathrm{tr}(\mathbf{B} \cdot \mathbf{A})$
(4) Cyclic property of the trace: $\mathrm{tr}(\mathbf{A} \cdot \mathbf{B} \cdot \mathbf{C}) = \mathrm{tr}(\mathbf{C} \cdot \mathbf{A} \cdot \mathbf{B}) = \mathrm{tr}(\mathbf{B} \cdot \mathbf{C} \cdot \mathbf{A})$

Proof >

(1) Let $\mathbf{P} = \mathbf{A} \cdot \mathbf{B}$, i.e., $P_{ij} = A_{ik} B_{kj}$.
Then, $\mathrm{tr}(\mathbf{P}) = P_{ii} = A_{ik} B_{ki} = \mathbf{A}\cdot\cdot\,\mathbf{B}$

(2) Let $\mathbf{P} = \mathbf{A} \cdot \mathbf{B}^{T}$, i.e., $P_{ij} = A_{ik} B_{jk}$.
Then, $\mathrm{tr}(\mathbf{A} \cdot \mathbf{B}^{T}) = P_{ii} = A_{ik} B_{ik} = \mathbf{A} : \mathbf{B}$
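The four identities can also be spot-checked numerically. A minimal NumPy sketch using random example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3))

tr = np.trace
print(np.isclose(np.einsum('ji,ij->', A, B), tr(A @ B)))      # A..B = tr(A.B)
print(np.isclose(np.einsum('ij,ij->', A, B), tr(A @ B.T)))    # A:B  = tr(A.B^T)
print(np.isclose(tr(A @ B), tr(B @ A)))                       # tr(A.B) = tr(B.A)
print(np.isclose(tr(A @ B @ C), tr(C @ A @ B)))               # cyclic property
```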

(18)

Review of Elementary Matrix Concepts

Eigenvectors and eigenvalues of a matrix

The linear transformation $\mathbf{y} = \mathbf{M}\mathbf{x}$ associates, to each point $P(x_1, x_2, x_3)$, another point $Q(y_1, y_2, y_3)$. It also associates, to any other point $(rx_1, rx_2, rx_3)$ on the line OP, another point $(ry_1, ry_2, ry_3)$ on the line OQ. So we may consider the transformation to be a transformation of the line OP into the line OQ.

Now, any line transformed into itself is called an eigenvector of the matrix M. That is,

$\mathbf{M}\mathbf{x} = \lambda\mathbf{x} = \lambda\mathbf{I}\mathbf{x}$

A nontrivial solution will exist if and only if the determinant vanishes:

$\det(\mathbf{M} - \lambda\mathbf{I}) = 0$

Note that when the 3x3 determinant is expanded, it yields a cubic polynomial equation with real coefficients. The roots of this equation are called the eigenvalues of the matrix. Upon solving the equation, some of the roots could be complex numbers.
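A minimal NumPy sketch (the matrix is an arbitrary example, not from the notes) showing that a real but non-symmetric matrix can indeed have complex roots of its characteristic equation:

```python
import numpy as np

# det(M - lambda*I) = 0 is a cubic with real coefficients,
# but for a non-symmetric M some roots can be complex
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(M)
print(eigvals)    # two complex conjugate eigenvalues (+-1j) and one real root (2)
```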

(19)

[Fig.: the linear transformation $\mathbf{y} = \mathbf{M}\mathbf{x}$ maps $P(x_1, x_2)$ to $Q(y_1, y_2)$ and $P'(rx_1, rx_2)$ to $Q'(ry_1, ry_2)$ on the line OQ; for an eigenvector $\mathbf{v}$, the linear transformation M maps $\mathbf{v}$ into the same direction, $\lambda\mathbf{v} = \mathbf{M}\mathbf{v}$.]

(20)

Review of Elementary Matrix Concepts

A real symmetric matrix has only real eigenvalues.

If there were a complex root $\lambda$, then its complex conjugate $\bar{\lambda}$ would also be a root. Therefore,

$\mathbf{M}\mathbf{x} = \lambda\mathbf{x}$
$\mathbf{M}\bar{\mathbf{x}} = \bar{\lambda}\,\bar{\mathbf{x}}$

These equations can be written as

$\bar{\mathbf{x}}^{T}\mathbf{M}\mathbf{x} = \lambda\,\bar{\mathbf{x}}^{T}\mathbf{x}$
$\mathbf{x}^{T}\mathbf{M}\bar{\mathbf{x}} = \bar{\lambda}\,\mathbf{x}^{T}\bar{\mathbf{x}}$

Note that $\bar{\mathbf{x}}^{T}\mathbf{x} = \bar{x}_k x_k = \mathbf{x}^{T}\bar{\mathbf{x}}$, and since M is symmetric, we get

$\bar{\mathbf{x}}^{T}\mathbf{M}\mathbf{x} = M_{ij}\,\bar{x}_i x_j = M_{ji}\,\bar{x}_j x_i$  (by interchanging the dummy indices)
$\qquad = M_{ij}\,x_i \bar{x}_j$  (by symmetry of M)
$\qquad = \mathbf{x}^{T}\mathbf{M}\bar{\mathbf{x}}$

(21)

Review of Elementary Matrix Concepts

Subtracting, we get

$(\lambda - \bar{\lambda})\,\bar{\mathbf{x}}^{T}\mathbf{x} = 0$

Since $\mathbf{x}$ is nontrivial, $\bar{\mathbf{x}}^{T}\mathbf{x} \neq 0$. Therefore, we should have

$\lambda = \bar{\lambda}$

so that $\lambda$ must be real.

We can obtain the eigenvector associated with each eigenvalue by substituting that eigenvalue into the matrix equation.

When the eigenvalues are all distinct : ??
Two of the eigenvalues are equal : ??
All of the eigenvalues are equal : ??
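A minimal NumPy sketch for the symmetric case (the example matrix is arbitrary and chosen so that two eigenvalues coincide); `np.linalg.eigh` assumes a symmetric matrix and returns real eigenvalues with orthonormal eigenvectors:

```python
import numpy as np

# a real symmetric matrix: all eigenvalues are real
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(M)
print(eigvals)                                        # [1. 3. 3.], two equal roots
print(np.allclose(eigvecs.T @ eigvecs, np.eye(3)))    # orthonormal eigenvectors
```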
