
https://doi.org/10.4134/BKMS.b181065 pISSN: 1015-8634 / eISSN: 2234-3016

ON WIELANDT-MIRSKY’S CONJECTURE FOR MATRIX POLYNOMIALS

Công-Trình Lê

Abstract. In matrix analysis, the Wielandt-Mirsky conjecture states that

dist(σ(A), σ(B)) ≤ ‖A − B‖

for any normal matrices A, B ∈ C^{n×n} and any operator norm ‖ · ‖ on C^{n×n}. Here dist(σ(A), σ(B)) denotes the optimal matching distance between the spectra of the matrices A and B. It was proved by J. A. Holbrook (1992) that this conjecture is false in general. However, it is true for the Frobenius distance and the Frobenius norm (the Hoffman-Wielandt inequality). The main aim of this paper is to study the Hoffman-Wielandt inequality and some weaker versions of the Wielandt-Mirsky conjecture for matrix polynomials.

1. Introduction

Let C^{n×n} denote the set of all n × n matrices whose entries are in C. Let A, B ∈ C^{n×n} be complex matrices whose spectra are σ(A) = {α_1, . . . , α_n} and σ(B) = {β_1, . . . , β_n}, respectively. The optimal matching distance between σ(A) and σ(B) is defined by

dist(σ(A), σ(B)) := min_θ max_{j=1,...,n} |α_j − β_{θ(j)}|,

where the minimum is taken over all permutations θ on the set {1, . . . , n}.
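This minimum can be computed directly from the definition by enumerating all permutations; the short Python sketch below (illustrative only, with made-up spectra) does exactly that.

```python
# Brute-force optimal matching distance (illustrative; spectra are made up).
from itertools import permutations

def matching_dist(alphas, betas):
    # dist(sigma(A), sigma(B)) = min over permutations theta of
    # max_j |alpha_j - beta_{theta(j)}|
    n = len(alphas)
    return min(
        max(abs(alphas[j] - betas[theta[j]]) for j in range(n))
        for theta in permutations(range(n))
    )

d = matching_dist([0, 1], [0.1, 1.2])  # identity matching gives max(0.1, 0.2)
assert abs(d - 0.2) < 1e-9
```

The factorial cost is harmless for the small examples used here; it is only a sketch of the definition, not a practical algorithm.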

One of the interesting conjectures in matrix analysis is the Wielandt-Mirsky conjecture [2], which states that for any normal matrices A, B ∈ C^{n×n},

(1) dist(σ(A), σ(B)) ≤ ‖A − B‖,

where ‖ · ‖ denotes an arbitrary operator norm.

This conjecture has been proved to be true in the following special cases (cf. [10]):

(1) A and B are Hermitian (Weyl, 1912);

(2) A, B and A − B are normal (Bhatia, 1982);

Received November 6, 2018; Revised January 27, 2019; Accepted February 27, 2019.

2010 Mathematics Subject Classification. Primary 15A18, 15A42, 15A60, 65F15.

Key words and phrases. Wielandt-Mirsky’s conjecture, Hoffman-Wielandt’s inequality, Wielandt’s inequality, matrix polynomial, spectral variation.

© 2019 Korean Mathematical Society


(3) A is Hermitian and B is skew-Hermitian (Sunder, 1982);

(4) A and B are constant multiples of unitaries (Bhatia and Holbrook, 1985).

It has been proven by Holbrook (1992, [10]) that this conjecture is false in general. However, if we replace the optimal matching distance dist(σ(A), σ(B)) by the Frobenius distance between the spectra of the matrices A and B,

dist_F(σ(A), σ(B)) := min_θ ( ∑_{j=1}^{n} |α_j − β_{θ(j)}|² )^{1/2},

then we have the following Hoffman-Wielandt inequality [9]:

(2) dist_F(σ(A), σ(B)) ≤ |A − B|_F

for any normal matrices A and B. Here |A − B|_F denotes the Frobenius norm¹ of the matrix A − B.

It is clear that

dist(σ(A), σ(B)) ≤ dist_F(σ(A), σ(B)) ≤ √n · dist(σ(A), σ(B)).

Therefore it follows from the Hoffman-Wielandt inequality that the Wielandt-Mirsky conjecture is true for the Frobenius norm.
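Both distances, and the chain of inequalities relating them, are easy to check numerically. The sketch below (made-up spectra) computes dist and dist_F by brute force over permutations and verifies dist ≤ dist_F ≤ √n · dist.

```python
# Brute-force check (made-up spectra) of dist <= dist_F <= sqrt(n) * dist.
from itertools import permutations
from math import sqrt

def matching_dist(a, b):
    # min over permutations of the maximal gap
    return min(max(abs(a[j] - b[t[j]]) for j in range(len(a)))
               for t in permutations(range(len(a))))

def frobenius_dist(a, b):
    # min over permutations of the l2 norm of the gaps
    return min(sqrt(sum(abs(a[j] - b[t[j]]) ** 2 for j in range(len(a))))
               for t in permutations(range(len(a))))

a = [1 + 0j, 2j, -1 + 0j]
b = [1.1 + 0j, 1.9j, -0.8 + 0j]
d, dF = matching_dist(a, b), frobenius_dist(a, b)
assert d <= dF <= sqrt(len(a)) * d
```

Note that the optimal permutation for the two distances may differ in general; the chain of inequalities holds regardless, since the max of the gaps is below their ℓ² norm, which is below √n times the max.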

A weaker version of the Wielandt-Mirsky conjecture was proved by R. Bhatia, C. Davis and A. McIntosh (1983, [3]): there exists a universal constant c such that for any normal matrices A, B ∈ C^{n×n} we have

(3) dist(σ(A), σ(B)) ≤ c‖A − B‖.

If we do not require the universality of the constant c in the above inequality, then for any normal matrix A ∈ C^{n×n} and any B ∈ C^{n×n} we have (cf. [2, p. 4])

(4) dist(σ(A), σ(B)) ≤ (2n − 1)‖A − B‖.

If A is Hermitian and B is arbitrary, we have the following inequality due to W. Kahan ([11, p. 166]):

(5) dist(σ(A), σ(B)) ≤ (γ_n + 2)‖A − B‖,

where γ_n is a constant depending on the size n of the matrices.

By a matrix polynomial we mean a matrix-valued function of a complex variable of the form

(6) P(z) = A_m z^m + · · · + A_1 z + A_0,

where A_i ∈ C^{n×n} for all i = 0, . . . , m. If A_m ≠ 0, P(z) is called a matrix polynomial of degree m. When A_m = I, the identity matrix in C^{n×n}, the matrix polynomial P(z) is called monic.

¹For a matrix A = (a_ij) ∈ C^{n×n}, the Frobenius norm of A is defined by

|A|_F := √(trace(A^*A)) = ( ∑_{i,j=1}^{n} |a_ij|² )^{1/2}.

It is easy to see that |A|_F² = trace(A^*A).


A number λ ∈ C is called an eigenvalue of the matrix polynomial P(z) if there exists a nonzero vector x ∈ C^n such that P(λ)x = 0. Then the vector x is called, as usual, an eigenvector associated to the eigenvalue λ. Note that each eigenvalue of P(z) is a root of the characteristic polynomial det(P(z)).

For an (m + 1)-tuple A = (A_0, . . . , A_m) of matrices A_i ∈ C^{n×n}, the matrix polynomial

P_A(z) := A_m z^m + · · · + A_1 z + A_0

is called the matrix polynomial associated to A.

The spectrum of the matrix polynomial P_A(z) is defined by

σ(A) := σ(P_A(z)) = {λ ∈ C | det(P_A(λ)) = 0},

which is the set of all its eigenvalues. We should observe that for a matrix A ∈ C^{n×n}, its usual spectrum σ(A) is actually the spectrum of the monic matrix polynomial Iz − A. Interested readers may refer to the book of I. Gohberg, P. Lancaster and L. Rodman [5] for the theory of matrix polynomials and applications.
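For a concrete illustration (not taken from the paper), consider the monic matrix polynomial P(z) = Iz² + A_0 with A_0 = diag(−1, −4): here det(P(z)) = (z² − 1)(z² − 4), so the eigenvalues of the matrix polynomial are ±1 and ±2. A small Python check:

```python
# Check (illustrative) that +-1, +-2 are eigenvalues of P(z) = I*z^2 + A0,
# with A0 = diag(-1, -4): det(P(lambda)) must vanish at each of them.

def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def P(z):
    # P(z) = I*z^2 + A0 with A0 = diag(-1, -4)
    return [[z * z - 1, 0], [0, z * z - 4]]

for lam in (1, -1, 2, -2):
    assert det2(P(lam)) == 0   # each lam is an eigenvalue of P
```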

The main goal of this paper is to give some versions of the Wielandt-Mirsky conjecture for matrix polynomials.

The paper is organized as follows. In Section 2 we establish some Wielandt’s inequalities for matrix polynomials. In Section 3 we give some weaker versions of the Wielandt-Mirsky conjecture for monic matrix polynomials and for matrix polynomials whose leading coefficients are non-singular.

Notation and conventions. Throughout this paper, by a positive integer p we mean either an integer p ≥ 1 or p = ∞.

For a matrix A = (a_ij) ∈ C^{n×n}, a positive integer p, and a vector p-norm | · |_p on C^n, the matrix p-norm of A is defined by

|A|_p := ( ∑_{i,j=1}^{n} |a_ij|^p )^{1/p}  (1 ≤ p < ∞),
|A|_∞ := max_{i,j=1,...,n} |a_ij|  (p = ∞).

In particular, |A|_2 = |A|_F, the Frobenius norm.

The operator p-norm of A is defined by

‖A‖_p := max{ |Ax|_p : |x|_p = 1 }.

Note that

‖A‖_1 = max_{j=1,...,n} ∑_{i=1}^{n} |a_ij|,   ‖A‖_∞ = max_{i=1,...,n} ∑_{j=1}^{n} |a_ij|.

There are many relations between operator and matrix p-norms. Interested readers may refer to the paper of A. Tonge [13] and the references therein for more details.
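The norms above can be sketched in a few lines of Python (illustrative, with a made-up 2 × 2 matrix):

```python
# Matrix p-norm vs operator 1-/infinity-norms for a made-up 2x2 matrix.

def matrix_p_norm(A, p):
    entries = [abs(x) for row in A for x in row]
    if p == float("inf"):
        return max(entries)                    # |A|_inf: largest entry
    return sum(e ** p for e in entries) ** (1.0 / p)

def op_norm_1(A):
    # operator 1-norm: maximal absolute column sum
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def op_norm_inf(A):
    # operator infinity-norm: maximal absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

A = [[1, -2], [3, 4]]
assert matrix_p_norm(A, 2) == 30 ** 0.5        # |A|_2 = |A|_F
assert op_norm_1(A) == 6                       # column sums 4 and 6
assert op_norm_inf(A) == 7                     # row sums 3 and 7
```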


2. Some Wielandt's inequalities for matrix polynomials

In the first part of this section we give some versions of the Hoffman-Wielandt inequality (2) for monic matrix polynomials.

For a monic matrix polynomial

P_A(z) = I·z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0

with A_i ∈ C^{n×n}, the (mn × mn)-matrix

C_A :=
[   0      I      0    · · ·    0
    0      0      I    · · ·    0
    ⋮      ⋮      ⋮     ⋱       ⋮
    0      0      0    · · ·    I
  −A_0   −A_1   −A_2   · · ·  −A_{m−1} ]

is called the companion matrix of the matrix polynomial P_A(z) or of the tuple (A_0, . . . , A_{m−1}, I).

Note that the spectrum σ(A) of P_A(z) coincides with the spectrum σ(C_A) of C_A (cf. [5]).

For two (m + 1)-tuples A = (A_0, . . . , A_{m−1}, I) and Ā = (Ā_0, . . . , Ā_{m−1}, I), the relation between the norms of their difference and those of the difference of their companion matrices is given in the following key lemma.

Lemma 2.1. Let A = (A_0, . . . , A_{m−1}, I) and Ā = (Ā_0, . . . , Ā_{m−1}, I) be (m + 1)-tuples. Then for any positive integer p, we have

(1) |C_A − C_Ā|_p = |A − Ā|_p = ( ∑_{i=0}^{m−1} |A_i − Ā_i|_p^p )^{1/p} for 1 ≤ p < ∞, and
|C_A − C_Ā|_∞ = |A − Ā|_∞ = max_{i=0,...,m−1} |A_i − Ā_i|_∞ for p = ∞;

(2) ‖C_A − C_Ā‖_p = ‖A − Ā‖_p ≤ ∑_{i=0}^{m−1} ‖A_i − Ā_i‖_p;

(3) ‖C_A − C_Ā‖_1 = max_{i=0,...,m−1} ‖A_i − Ā_i‖_1.

Proof. We have the following expression for the difference of the companion matrices:

C_A − C_Ā =
[      0           0           0      · · ·          0
       0           0           0      · · ·          0
       ⋮           ⋮           ⋮       ⋱             ⋮
       0           0           0      · · ·          0
  Ā_0 − A_0   Ā_1 − A_1   Ā_2 − A_2   · · ·   Ā_{m−1} − A_{m−1} ].

This implies that the matrix (resp. operator) norm of C_A − C_Ā is the same as that of the m-tuple (Ā_0 − A_0, . . . , Ā_{m−1} − A_{m−1}), i.e., we have the first equalities in (1) and (2).

The inequality in (2) follows from the subadditivity of the operator p-norm. On the other hand, for an (m + 1)-tuple A = (A_0, . . . , A_m) of matrices in C^{n×n}, by a direct computation, we have

(7) |A|_p = ( ∑_{i=0}^{m} |A_i|_p^p )^{1/p}  (1 ≤ p < ∞),  |A|_∞ = max_{i=0,...,m} |A_i|_∞  (p = ∞),

thus we get the second equality of (1). Moreover, we have

(8) ‖A‖_1 = max_{i=0,...,m} ‖A_i‖_1.

Thus, we get (3). □
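The identities of Lemma 2.1 are easy to verify numerically for small m and n. The sketch below (made-up coefficients, m = n = 2) checks the Frobenius case of (1) and the operator 1-norm identity (3).

```python
# Numerical check of Lemma 2.1 for m = n = 2 (made-up coefficients):
# the difference of two companion matrices sits in the last block row,
# so its norms reduce to norms of the coefficient differences.

def companion(A0, A1):
    # companion matrix of I*z^2 + A1*z + A0: block form [[0, I], [-A0, -A1]]
    n = len(A0)
    C = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        C[i][n + i] = 1.0
        for j in range(n):
            C[n + i][j] = -A0[i][j]
            C[n + i][n + j] = -A1[i][j]
    return C

def frob(M):
    return sum(abs(x) ** 2 for row in M for x in row) ** 0.5

def op1(M):
    # operator 1-norm: maximal absolute column sum
    return max(sum(abs(M[i][j]) for i in range(len(M))) for j in range(len(M[0])))

A0, A1 = [[1, 2], [0, 1]], [[0, 0], [1, 3]]
B0, B1 = [[1, 0], [0, 0]], [[2, 0], [1, 1]]
CA, CB = companion(A0, A1), companion(B0, B1)
D = [[CA[i][j] - CB[i][j] for j in range(4)] for i in range(4)]
d0 = [[A0[i][j] - B0[i][j] for j in range(2)] for i in range(2)]
d1 = [[A1[i][j] - B1[i][j] for j in range(2)] for i in range(2)]

# (1) with p = 2: |C_A - C_B|_F = (|A0 - B0|_F^2 + |A1 - B1|_F^2)^(1/2)
assert abs(frob(D) - (frob(d0) ** 2 + frob(d1) ** 2) ** 0.5) < 1e-12
# (3): ||C_A - C_B||_1 = max_i ||A_i - B_i||_1
assert op1(D) == max(op1(d0), op1(d1))
```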

As a consequence of Lemma 2.1, we obtain the following Hoffman-Wielandt inequality for matrix polynomials.

Proposition 2.2. Let P_A(z) = I·z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0 and P_Ā(z) = I·z^m + Ā_{m−1}z^{m−1} + · · · + Ā_1 z + Ā_0 be monic matrix polynomials whose corresponding companion matrices C_A and C_Ā are normal. Then we have

dist_F(σ(A), σ(Ā)) ≤ |A − Ā|_F.

Proof. Applying the Hoffman-Wielandt inequality (2) to the two normal matrices C_A and C_Ā we get

dist_F(σ(C_A), σ(C_Ā)) ≤ |C_A − C_Ā|_F.

Then the proposition follows from Lemma 2.1 (applied with p = 2) together with the observation that σ(A) = σ(C_A) and σ(Ā) = σ(C_Ā). □

We should observe that

(9) C_A is normal if and only if A_0 is unitary and A_1 = · · · = A_{m−1} = 0.

Therefore, Proposition 2.2 yields the following consequence.

Corollary 2.3. Let P_A(z) = I·z^m + A_0 and P_Ā(z) = I·z^m + Ā_0 be monic matrix polynomials with A_0 and Ā_0 unitary. Then we have

dist_F(σ(A), σ(Ā)) ≤ |A_0 − Ā_0|_F.
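The observation (9) behind this corollary can be spot-checked numerically. The sketch below (m = 2, n = 2, with a made-up rotation matrix as A_0) confirms that the companion matrix of I·z² + A_0 is normal for unitary A_0 and non-normal otherwise.

```python
# Spot check of observation (9) for m = 2, n = 2: the companion matrix of
# I*z^2 + A0 is normal when A0 is unitary (a made-up rotation here) and
# non-normal for a non-unitary A0. Real matrices, so A^* = A^T.
from math import cos, sin

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(col) for col in zip(*X)]

def is_normal(X, tol=1e-12):
    L, R = matmul(transpose(X), X), matmul(X, transpose(X))
    return all(abs(L[i][j] - R[i][j]) < tol
               for i in range(len(X)) for j in range(len(X)))

def companion(A0):
    # companion matrix of I*z^2 + A0: block form [[0, I], [-A0, 0]]
    return [[0, 0, 1, 0],
            [0, 0, 0, 1],
            [-A0[0][0], -A0[0][1], 0, 0],
            [-A0[1][0], -A0[1][1], 0, 0]]

t = 0.7
rot = [[cos(t), -sin(t)], [sin(t), cos(t)]]        # orthogonal, hence unitary
assert is_normal(companion(rot))
assert not is_normal(companion([[2, 0], [0, 1]]))  # A0 not unitary
```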

One more interesting inequality was established by Wielandt for scalar matrices, which is stated as follows.

Theorem 2.4 (cf. [8, Theorem 1.45]). Let A ∈ C^{n×n} be a Hermitian matrix such that for some numbers a, b > 0 the inequalities bI ≤ A ≤ aI hold. Then for any orthogonal unit vectors x, y ∈ C^n, the inequality

(10) |x^*Ay|² ≤ ((a − b)/(a + b))² (x^*Ax) · (y^*Ay)

holds.

In the following we establish a version of this inequality for matrix polynomials.


Theorem 2.5. Let P_A(λ) = ∑_{i=0}^{d} A_i λ^i be a matrix polynomial whose coefficient matrices satisfy

bI ≤ A_i ≤ aI

for some numbers a, b > 0 and for all i = 0, . . . , d. Then for any orthogonal unit vectors x, y ∈ C^n,

|x^*P_A(λ)y|² ≤ ((a − b)/(a + b))² (x^*P_A(|λ|)x) · (y^*P_A(|λ|)y).

Proof. It follows from (10) that for each i = 0, . . . , d,

|x^*A_i y| ≤ ((a − b)/(a + b)) (x^*A_i x)^{1/2} · (y^*A_i y)^{1/2}.

Then

|x^*P_A(λ)y|² = | ∑_{i=0}^{d} (x^*A_i y) λ^i |²
≤ ( ∑_{i=0}^{d} |x^*A_i y| · |λ|^i )²
≤ ((a − b)/(a + b))² ( ∑_{i=0}^{d} ((x^*A_i x)^{1/2} |λ|^{i/2}) · ((y^*A_i y)^{1/2} |λ|^{i/2}) )²
≤ ((a − b)/(a + b))² ( ∑_{i=0}^{d} (x^*A_i x)|λ|^i ) ( ∑_{i=0}^{d} (y^*A_i y)|λ|^i )
= ((a − b)/(a + b))² ( x^*( ∑_{i=0}^{d} A_i|λ|^i )x ) ( y^*( ∑_{i=0}^{d} A_i|λ|^i )y )
= ((a − b)/(a + b))² (x^*P_A(|λ|)x) · (y^*P_A(|λ|)y),

where the last inequality follows from the classical Cauchy-Schwarz inequality. The proof is complete. □
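A quick numerical sanity check of Theorem 2.5 (the symmetric coefficient matrices, the bounds a, b, the scalar λ, and the vectors x, y are all made up for illustration):

```python
# Sanity check of Theorem 2.5; the spectra of A0 and A1 lie inside
# [b, a] = [1.5, 3.5], and x, y are orthonormal. All values are made up.

A0 = [[2.0, 0.5], [0.5, 2.0]]     # symmetric, eigenvalues 1.5 and 2.5
A1 = [[3.0, -0.5], [-0.5, 3.0]]   # symmetric, eigenvalues 2.5 and 3.5
a, b = 3.5, 1.5
x, y = (1.0, 0.0), (0.0, 1.0)
lam = 1 + 1j

def quad(u, M, v):
    # u^* M v for real vectors u, v
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

def bilinear_P(u, v, z):
    # u^* P(z) v for P(z) = A1*z + A0
    return quad(u, A0, v) + quad(u, A1, v) * z

lhs = abs(bilinear_P(x, y, lam)) ** 2
ratio = ((a - b) / (a + b)) ** 2
rhs = ratio * bilinear_P(x, x, abs(lam)) * bilinear_P(y, y, abs(lam))
assert lhs <= rhs   # the Wielandt-type bound of Theorem 2.5 holds here
```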

3. Some weaker versions of the Wielandt-Mirsky conjecture for matrix polynomials

In this section we give some estimates for the optimal matching distance between the spectra of matrix polynomials.

Proposition 3.1. There exists a constant c > 0 such that for every positive integer p and all monic matrix polynomials P_A(z) = I·z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0 and P_Ā(z) = I·z^m + Ā_{m−1}z^{m−1} + · · · + Ā_1 z + Ā_0 whose corresponding companion matrices C_A and C_Ā are normal, we have

dist(σ(A), σ(Ā)) ≤ c‖A − Ā‖_p ≤ c ∑_{i=0}^{m−1} ‖A_i − Ā_i‖_p.


Proof. It follows from the result of R. Bhatia, C. Davis and A. McIntosh (see the inequality (3)) that there exists a constant c > 0 such that for all monic matrix polynomials P_A(z) and P_Ā(z) whose corresponding companion matrices C_A and C_Ā are normal, we have

dist(σ(C_A), σ(C_Ā)) ≤ c‖C_A − C_Ā‖_p.

Then the proposition follows from Lemma 2.1(2) together with the observation that σ(A) = σ(C_A) and σ(Ā) = σ(C_Ā). □

Using again the observation (9) we obtain the following consequence.

Corollary 3.2. There exists a constant c > 0 such that for every positive integer p and all monic matrix polynomials P_A(z) = I·z^m + A_0 and P_Ā(z) = I·z^m + Ā_0 with A_0 and Ā_0 unitary, we have

dist(σ(A), σ(Ā)) ≤ c‖A_0 − Ā_0‖_p.

Similarly, applying the inequality (4) and Lemma 2.1(2) with the observation (9) we obtain the following results.

Corollary 3.3. Let P_A(z) = I·z^m + A_0 and P_Ā(z) = I·z^m + Ā_0 be matrix polynomials with A_0 and Ā_0 unitary. Then for every positive integer p we have

dist(σ(A), σ(Ā)) ≤ (2mn − 1)‖A_0 − Ā_0‖_p.

A similar version of (5) for monic matrix polynomials is given as follows, whose proof is similar to that of Proposition 2.2.

Proposition 3.4. Let P_A(z) = I·z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0 and P_Ā(z) = I·z^m + Ā_{m−1}z^{m−1} + · · · + Ā_1 z + Ā_0 be monic matrix polynomials. Assume that C_A is Hermitian. Then for every positive integer p we have

dist(σ(A), σ(Ā)) ≤ (γ_{m,n} + 2)‖A − Ā‖_p ≤ (γ_{m,n} + 2) ∑_{i=0}^{m−1} ‖A_i − Ā_i‖_p,

where γ_{m,n} is a constant depending on m and n.

Remark 3.5. The constant γ_{m,n} has the following properties (cf. [2, p. 3]):

(1) (2/π) ln(mn) − O(1) ≤ γ_{m,n} ≤ log₂(mn) + 0.038.

(2) A. Pokrzywa (1981, [12]) proved that

γ_{m,n} = (2/(mn)) ∑_{j=1}^{[mn/2]} cot( (2j − 1)π / (2mn) ).

A condition for the companion matrix C_A to be Hermitian is given as follows.

Corollary 3.6. Let P_A(z) = I·z^m + A_0 and P_Ā(z) = I·z^m + Ā_0 be matrix polynomials with A_0 unitary. Assume that P_A(z) has only real eigenvalues. Then for every positive integer p we have

dist(σ(A), σ(Ā)) ≤ (γ_{m,n} + 2)‖A_0 − Ā_0‖_p.


Proof. It is well-known that a normal matrix A ∈ C^{n×n} is Hermitian if and only if it has only real eigenvalues. Therefore, the normal matrix C_A is Hermitian if and only if it, and hence the matrix polynomial P_A(z), has only real eigenvalues. Note also that in this case, C_A is normal if and only if A_0 is unitary. Then the result follows from Proposition 3.4. □

Remark 3.7. There are some characterizations of matrix polynomials having only real eigenvalues. These kinds of matrix polynomials are sometimes called weakly hyperbolic. Readers may refer to the work of M. Al-Ammari and F. Tisseur (2012, [1, Theorem 3.4]) and the references therein for more detailed characterizations.

In the following we will establish an estimate for the optimal matching distance between the spectra of two arbitrary monic matrix polynomials. For the proof, we need the following estimate given by Bhatia and Friedland (1981) and Elsner (1982, 1985).

Lemma 3.8 ([2, Theorem 20.4]). Let A and B be any k × k matrices. Then the optimal matching distance between their eigenvalues is bounded as

dist(σ(A), σ(B)) ≤ c(k) · k^{1/k} · (2M)^{1−1/k} · ‖A − B‖^{1/k},

where M = max{‖A‖, ‖B‖}, ‖ · ‖ is any operator norm, and c(k) = k or k − 1 according to whether k is odd or even.
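For diagonal matrices every quantity in Lemma 3.8 is computable by hand, which gives a cheap sanity check (made-up diagonal entries; k = 2, so c(k) = k − 1 = 1):

```python
# Sanity check of Lemma 3.8 with made-up diagonal matrices: the spectra
# are the diagonals, and the operator 1-norm of a diagonal matrix is its
# largest absolute entry. Here k = 2 is even, so c(k) = k - 1.

A_diag, B_diag = [0.0, 1.0], [0.1, 0.9]
k = 2
dist = max(abs(a - b) for a, b in zip(A_diag, B_diag))       # identity matching is optimal here
M = max(max(map(abs, A_diag)), max(map(abs, B_diag)))        # M = max{||A||, ||B||}
diff_norm = max(abs(a - b) for a, b in zip(A_diag, B_diag))  # ||A - B||_1, also diagonal
c_k = k - 1                                                  # k even
bound = c_k * k ** (1 / k) * (2 * M) ** (1 - 1 / k) * diff_norm ** (1 / k)
assert dist <= bound   # 0.1 <= 2 * sqrt(0.1), as Lemma 3.8 predicts
```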

Theorem 3.9. Let N be any positive number and p any positive integer. Let P_A(z) = I·z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0 and P_Ā(z) = I·z^m + Ā_{m−1}z^{m−1} + · · · + Ā_1 z + Ā_0 be monic matrix polynomials such that |A|_p ≤ N and |Ā|_p ≤ N. Then there exists a constant c such that

dist(σ(A), σ(Ā)) ≤ c‖A − Ā‖_p^{1/(mn)}.

Proof. Applying Lemma 3.8 to the companion matrices C_A and C_Ā with the operator norm ‖ · ‖_p we obtain

dist(σ(C_A), σ(C_Ā)) ≤ c(mn) · (mn)^{1/(mn)} · (2M)^{1−1/(mn)} · ‖C_A − C_Ā‖_p^{1/(mn)},

where M = max{‖C_A‖_p, ‖C_Ā‖_p}. Then it follows from Lemma 2.1(2) and the equalities σ(A) = σ(C_A) and σ(Ā) = σ(C_Ā) that

dist(σ(A), σ(Ā)) ≤ c(mn) · (mn)^{1/(mn)} · (2M)^{1−1/(mn)} · ‖A − Ā‖_p^{1/(mn)}.

Now we need to estimate ‖C_A‖_p and ‖C_Ā‖_p. By a comparison of the operator p-norm ‖C_A‖_p and the matrix p-norm |C_A|_p given by M. Goldberg [6] (see also [13]), we have

(11) ‖C_A‖_p ≤ (mn)^{max{1/p′−1/p, 0}} |C_A|_p,

where p′ denotes the conjugate exponent of p (1/p + 1/p′ = 1). It is easy to see that for 1 ≤ p < ∞,

(12) |C_A|_p^p = ∑_{i=0}^{m−1} |A_i|_p^p + (m − 1)n = |A|_p^p + (m − 1)n ≤ N^p + (m − 1)n,

and for p = ∞,

(13) |C_A|_∞ = max{|A|_∞, 1} ≤ max{N, 1}.

It follows from the inequalities (11), (12) and (13) that

‖C_A‖_p ≤ M₀ := (mn)^{max{1/p′−1/p, 0}} ( N^p + (m − 1)n )^{1/p}  (1 ≤ p < ∞),
‖C_A‖_∞ ≤ M₀ := mn · max{N, 1}  (p = ∞).

Similarly, ‖C_Ā‖_p ≤ M₀. Then the number

c := c(mn) · (mn)^{1/(mn)} · (2M₀)^{1−1/(mn)}

satisfies the requirement. □
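The counting identity (12) can be spot-checked by building the companion matrix explicitly (illustrative sketch; the coefficient blocks are made up):

```python
# Spot check of identity (12): the companion matrix of a monic polynomial
# consists of the blocks -A_0, ..., -A_{m-1} plus (m - 1) identity blocks,
# so |C_A|_p^p = |A|_p^p + (m - 1)*n. Coefficients below are made up.

def companion(blocks):
    # companion matrix of I*z^m + A_{m-1} z^{m-1} + ... + A_0,
    # given blocks = [A_0, ..., A_{m-1}], each n x n
    m, n = len(blocks), len(blocks[0])
    C = [[0.0] * (m * n) for _ in range(m * n)]
    for bi in range(m - 1):              # identity blocks above the last block row
        for i in range(n):
            C[bi * n + i][(bi + 1) * n + i] = 1.0
    for bi, Ai in enumerate(blocks):     # last block row: -A_0, ..., -A_{m-1}
        for i in range(n):
            for j in range(n):
                C[(m - 1) * n + i][bi * n + j] = -Ai[i][j]
    return C

m, n, p = 3, 2, 3
blocks = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 2], [2, 2]]]
C = companion(blocks)
lhs = sum(abs(x) ** p for row in C for x in row)   # |C_A|_p^p
rhs = sum(abs(x) ** p for Ai in blocks for row in Ai for x in row) + (m - 1) * n
assert lhs == rhs
```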

For matrix polynomials which are not necessarily monic, we have the following estimate, only for the operator 1-norm.

Theorem 3.10. Let Ā = (Ā_0, . . . , Ā_m) be a fixed (m + 1)-tuple such that Ā_m is non-singular. Then there exists a constant c > 0 such that for every pair of matrix polynomials P_A(z) = A_m z^m + A_{m−1}z^{m−1} + · · · + A_1 z + A_0 and P_{A′}(z) = A′_m z^m + A′_{m−1}z^{m−1} + · · · + A′_1 z + A′_0 with

‖A − Ā‖_1 < 1/‖Ā_m^{−1}‖_1  and  ‖A′ − Ā‖_1 < 1/‖Ā_m^{−1}‖_1,

we have

dist(σ(A), σ(A′)) < c‖A − A′‖_1^{1/(mn)}.

Proof. It follows from the sub-multiplicativity of the operator 1-norm and the formula (8) that

‖Ā_m^{−1}(A_m − Ā_m)‖_1 ≤ ‖Ā_m^{−1}‖_1 · ‖A_m − Ā_m‖_1 ≤ ‖Ā_m^{−1}‖_1 · ‖A − Ā‖_1 < 1.

Then A_m is also non-singular (cf. [7, Theorem 2.3.4]), thus each element of σ(A) is finite. Similarly, each element of σ(A′) is also finite.

Since σ(A′) is the set of roots of the polynomial det(P_{A′}(z)) ∈ C[z], whose degree is mn, it is easy to see that

dist(x, σ(A′))^{mn} ≤ |det(P_{A′}(x))| for all x ∈ C.

In particular, for any λ ∈ σ(A), we have

(14) dist(λ, σ(A′))^{mn} ≤ |det(P_{A′}(λ))|.

Since det(P_A(λ)) = 0, we have

(15) |det(P_{A′}(λ))| = |det(P_{A′}(λ)) − det(P_A(λ))|.

The following inequality is useful for the later estimation (cf. [2, 2007, p. 107]): for any matrices A, B ∈ C^{n×n} and any integer p > 0, we have

(16) |det(A) − det(B)| ≤ n max{‖A‖_p, ‖B‖_p}^{n−1} · ‖A − B‖_p.

Applying the inequality (16), we get

(17) |det(P_{A′}(λ)) − det(P_A(λ))| ≤ n max{‖P_{A′}(λ)‖_1, ‖P_A(λ)‖_1}^{n−1} · ‖P_A(λ) − P_{A′}(λ)‖_1.

Using again the sub-multiplicativity of the operator 1-norm and the formula (8), we have

‖P_A(λ) − P_{A′}(λ)‖_1 = ‖ ∑_{i=0}^{m} (A_i − A′_i)λ^i ‖_1 ≤ ∑_{i=0}^{m} |λ|^i · ‖A − A′‖_1.

It follows from a matrix version of Cauchy's theorem (cf. [4, Theorem 3.4]) that for λ ∈ σ(A), we have

|λ| < 1 + ‖A_m^{−1}‖_1 · max{‖A_i‖_1 : i = 0, . . . , m − 1} ≤ 1 + ‖A_m^{−1}‖_1 · ‖A‖_1.

On the other hand, by the sub-additivity of the operator norm, we have

‖A‖_1 ≤ ‖A − Ā‖_1 + ‖Ā‖_1 < 1/‖Ā_m^{−1}‖_1 + ‖Ā‖_1.

Hence for each λ ∈ σ(A) we have

|λ| < 2 + ‖Ā_m^{−1}‖_1 · ‖Ā‖_1.

Then

∑_{i=0}^{m} |λ|^i < L := ( (2 + ‖Ā_m^{−1}‖_1 · ‖Ā‖_1)^{m+1} − 1 ) / ( 1 + ‖Ā_m^{−1}‖_1 · ‖Ā‖_1 ).

This yields

(18) ‖P_A(λ) − P_{A′}(λ)‖_1 < L · ‖A − A′‖_1.

By a similar estimation, we get

(19) ‖P_A(λ)‖_1 < L · ‖A‖_1 ≤ L · ( 1/‖Ā_m^{−1}‖_1 + ‖Ā‖_1 ),
(20) ‖P_{A′}(λ)‖_1 < L · ‖A′‖_1 ≤ L · ( 1/‖Ā_m^{−1}‖_1 + ‖Ā‖_1 ).

It follows from the inequalities (14), (15), (17), (18), (19) and (20) that

dist(λ, σ(A′)) < c‖A − A′‖_1^{1/(mn)},

where

c := ( n · L^n · ( 1/‖Ā_m^{−1}‖_1 + ‖Ā‖_1 )^{n−1} )^{1/(mn)}.

Similarly, for every λ′ ∈ σ(A′), we also have

dist(λ′, σ(A)) < c‖A − A′‖_1^{1/(mn)}.

It follows that

dist(σ(A), σ(A′)) < c‖A − A′‖_1^{1/(mn)}. □


Acknowledgements. The author would like to thank Dr. Minh Toan Ho for his suggestion to establish a Wielandt's inequality for matrix polynomials in Section 2. He would also like to express his sincere gratitude to the anonymous referees for their useful comments and suggestions to improve the results in the original version of this paper. The final version of this paper was finished during the visit of the author at the Vietnam Institute for Advanced Study in Mathematics (VIASM). He thanks VIASM for financial support and hospitality.

References

[1] M. Al-Ammari and F. Tisseur, Hermitian matrix polynomials with real eigenvalues of definite type. Part I: Classification, Linear Algebra Appl. 436 (2012), no. 10, 3954–3973.

https://doi.org/10.1016/j.laa.2010.08.035

[2] R. Bhatia, Perturbation bounds for matrix eigenvalues, Pitman Research Notes in Mathematics Series, 162, Longman Scientific & Technical, Harlow, 1987.

[3] R. Bhatia, C. Davis, and A. McIntosh, Perturbation of spectral subspaces and solution of linear operator equations, Linear Algebra Appl. 52/53 (1983), 45–67. https://doi.org/10.1016/0024-3795(83)80007-X

[4] T. H. B. Du, C.-T. Le, and T. D. Nguyen, On the location of eigenvalues of matrix polynomials. Preprint, 2017.

[5] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, Inc., New York, 1982.

[6] M. Goldberg, Equivalence constants for lp norms of matrices, Linear and Multilinear Algebra 21 (1987), no. 2, 173–179. https://doi.org/10.1080/03081088708817789

[7] G. H. Golub and C. F. Van Loan, Matrix Computations, fourth edition, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, MD, 2013.

[8] F. Hiai and D. Petz, Introduction to Matrix Analysis and Applications, Universitext, Springer, Cham, 2014. https://doi.org/10.1007/978-3-319-04150-6

[9] A. J. Hoffman and H. W. Wielandt, The variation of the spectrum of a normal matrix, Duke Math. J. 20 (1953), 37–39. http://projecteuclid.org/euclid.dmj/1077465062

[10] J. A. Holbrook, Spectral variation of normal matrices, Linear Algebra Appl. 174 (1992), 131–141. https://doi.org/10.1016/0024-3795(92)90047-E

[11] W. Kahan, Inclusion theorems for clusters of eigenvalues of Hermitian matrices, Technical Report, Computer Science Department, University of Toronto, 1967.

[12] A. Pokrzywa, Spectra of operators with fixed imaginary parts, Proc. Amer. Math. Soc. 81 (1981), no. 3, 359–364. https://doi.org/10.2307/2043464

[13] A. Tonge, Equivalence constants for matrix norms: a problem of Goldberg, Linear Alge- bra Appl. 306 (2000), no. 1-3, 1–13. https://doi.org/10.1016/S0024-3795(99)00155-X

Công-Trình Lê

Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
and
Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam

Email address: lecongtrinh@tdtu.edu.vn
