
An Efficient MILU Preconditioning for Solving the 2D Poisson equation with Neumann boundary condition

Yesom Park, Jeongho Kim, Jinwook Jung, Euntaek Lee, Chohong Min

January 27, 2018

Abstract

MILU preconditioning is known to be the optimal one among all the ILU-type preconditionings for solving the Poisson equation with Dirichlet boundary condition. It is optimal in the sense that it reduces the condition number from $O(h^{-2})$, which is obtained with the other ILU-type preconditioners, to $O(h^{-1})$. However, with Neumann boundary condition, the conventional MILU cannot be used since it is not invertible, and existing MILU preconditionings achieve the order $O(h^{-1})$ only in rectangular domains.

In this article, we consider a standard finite volume method for solving the Poisson equation with Neumann boundary condition in general smooth domains, and introduce a new and efficient MILU preconditioning for the method in two-dimensional general smooth domains. Our new MILU preconditioning achieved the order $O(h^{-1})$ in all our empirical tests. In addition, in a circular domain with a fine grid, the CG method preconditioned with the proposed MILU runs about two times faster than the CG with ILU.

1 Introduction

The Poisson equation is of primal importance in many areas of science and engineering, such as incompressible fluid flows [12, 14], electro-magnetic waves [9, 2], and image processing [16, 21]. Its solution is known to exist in a well-posed problem, but cannot be explicitly formulated except for simplistic cases. Instead of providing an explicit formulation of the solution, many successful numerical methods have been developed to approximate the solution. To list a few among them, finite difference methods [20, 7], finite element methods [4, 17], and spectral methods [19, 13] are devised to solve the Poisson equation numerically.

The inverse Laplacian is a self-adjoint and compact operator [18], and has countably many eigenvalues that are real and have a subsequence converging to zero. Thus, the Laplace operator has a spectrum that ranges from a negative value to $-\infty$. So if we obtain an associated matrix of the Laplace operator from any of the numerical methods, it has a large condition number that grows to $\infty$ as the step size of the grid decreases to zero. This is because the matrix is a discrete analogue of an unbounded operator. The large condition number not only delays the convergence of an iterative method for solving the associated linear system, but also causes round-off errors that lead to the loss of significant digits [8].

A basic example is the standard 5-point finite difference method on rectangular domains with Dirichlet boundary condition. The condition number of the matrix is given as $4h^{-2}\pi^{-2}$ on a unit square domain, where $h$ is the uniform grid step size. To obtain an accurate approximation, $h$ needs to be small, and then the condition number $4h^{-2}\pi^{-2}$ becomes very large. For an efficient calculation of an accurate approximation, one therefore needs to utilize a preconditioning that reduces the condition number, such as the Jacobi, Symmetric Gauss-Seidel (SGS), Incomplete LU (ILU), and Modified Incomplete LU (MILU) preconditioners.
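As a quick numerical illustration (ours, not from the paper), one can assemble the 5-point matrix on the unit square and compare its condition number with the asymptotic value $4h^{-2}\pi^{-2}$; the sketch below does exactly that.

```python
# Sanity check (not from the paper): condition number of the 5-point Dirichlet
# Laplacian on the unit square versus the asymptotic value 4/(pi^2 h^2).
import numpy as np

def five_point_laplacian(n):
    """Unscaled 5-point matrix on an n-by-n interior grid, h = 1/(n+1)."""
    T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D second difference
    return np.kron(np.eye(n), T) + np.kron(T, np.eye(n))     # 2D Kronecker sum

for n in (10, 20, 40):
    h = 1.0 / (n + 1)
    eigs = np.linalg.eigvalsh(five_point_laplacian(n))       # ascending eigenvalues
    print(f"h = {h:.4f}: kappa = {eigs[-1]/eigs[0]:9.1f},  "
          f"4/(pi^2 h^2) = {4.0/(np.pi**2*h**2):9.1f}")
```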

In a seminal paper [6], Gustafsson showed that only the MILU preconditioning results in a condition number of size $O(h^{-1})$, while all the other ILU-type preconditionings result in $O(h^{-2})$. The same result was proved in [11] to hold true for the finite difference method by Gibou et al. applied to the cases of general smooth domains. Also, the same result was observed in [22] for the finite difference method by Shortley and Weller [20].

While MILU preconditioning is the optimal choice among all ILU-type ones for Dirichlet boundary condition, the choice of optimal preconditioner is unclear for Neumann boundary condition.

The conventional MILU cannot be defined in the case of Neumann boundary condition. From several papers [23, 24, 25], we find that many MILU-type preconditioners can be devised from the general definition of MILU. Moreover, it is mentioned in [24, 25] that these can be applied to solving Neumann boundary problems numerically. In addition, actual applications to the rectangular domain case were given, resulting in $O(h^{-1})$. However, to the best of our knowledge, there is no appropriate preconditioner achieving $O(h^{-1})$ on general smooth domains. The unpreconditioned matrix has a large condition number of size $O(h^{-2})$. Any of the Jacobi, SGS, and ILU preconditioners keeps the order $-2$ and merely reduces the constant in $C\cdot h^{-2} \simeq O(h^{-2})$.

The Purvis-Burkhalter method [3] is a standard finite volume method for solving the Poisson equation with Neumann boundary condition. The method plays a central role in solving the incompressible Navier-Stokes equations in general smooth domains [5] and the interaction between fluid and solid [10]. Most of the computational cost for solving incompressible fluid flows is occupied by the Poisson solver, so it is very important to find a good preconditioner for the method.

In order to obtain a preconditioner that results in $O(h^{-1})$ on general smooth domains, even for the Poisson equation with Neumann boundary condition, we take a practical guide from [15] that suggests a mixture of more than 97% MILU and less than 3% ILU, while increasing the ratio of MILU as the step size decreases to 0. The mixture provides a well-defined preconditioner and turns out to perform well. Due to the large weight ($\geq 97\%$) of MILU, the mixture behaves much like MILU, while one important role of the small portion of ILU is to make the preconditioner invertible. In this setting, one naturally becomes curious about the optimal ratio of MILU to ILU resulting in $O(h^{-1})$, which is exactly the theme of this article.

This article begins with some basic analysis of MILU preconditioning applied to the Purvis-Burkhalter method. The analysis shows why MILU preconditioning breaks down and explains why the MILU-ILU mixture is a well-defined preconditioner. Then we report intensive numerical tests that search for the optimal ratio for various step sizes $h$. From the numerical results, we introduce a conjecture that the MILU-ILU preconditioning with a certain ratio reduces the condition number from $O(h^{-2})$ to $O(h^{-1})$. We provide empirical evidence that supports the conjecture.

2 Basic analysis for MILU

In this section, we provide the basic setting for the domain and review the MILU preconditioning. More precisely, let $\Omega$ be an open subset of $\mathbb{R}^2$. Then, we discretize the Poisson equation with pure Neumann boundary condition:
$$
\begin{cases}
-\Delta u = f & \text{in } \Omega,\\[2pt]
\dfrac{\partial u}{\partial n} = g & \text{on } \partial\Omega.
\end{cases}
\tag{2.1}
$$

2.1 Domain setting

In this subsection, we use the Purvis-Burkhalter method [3, 5] to define the discrete domain $\Omega^h$ and the Heaviside function which will be used in the later analysis. Let $h\mathbb{Z}^2$ denote the uniform grid in $\mathbb{R}^2$ with grid size $h$. Then, for each node $(x_i, y_j) \in h\mathbb{Z}^2$, we define the finite volume $C_{i,j}$ and its edges $E_{i\pm\frac12,j}$ and $E_{i,j\pm\frac12}$ as follows:
$$
C_{ij} := [x_{i-\frac12}, x_{i+\frac12}] \times [y_{j-\frac12}, y_{j+\frac12}], \quad
E_{i\pm\frac12,j} := \{x_{i\pm\frac12}\} \times [y_{j-\frac12}, y_{j+\frac12}], \quad
E_{i,j\pm\frac12} := [x_{i-\frac12}, x_{i+\frac12}] \times \{y_{j\pm\frac12}\}.
\tag{2.2}
$$

Based on these grid points and edges, we define node and edge sets.

Definition 2.1 (Node and edge sets). By $\Omega^h := \{(x_i, y_j) \in h\mathbb{Z}^2 \mid C_{ij}\cap\Omega \neq \emptyset\}$, we denote the set of all nodes whose control volumes intersect the domain. Moreover, we define the edge sets $E_x^h := \{(x_{i+\frac12}, y_j) \mid E_{i+\frac12,j}\cap\Omega \neq \emptyset\}$ and $E_y^h := \{(x_i, y_{j+\frac12}) \mid E_{i,j+\frac12}\cap\Omega \neq \emptyset\}$ in the same way.


We also define the total edge set $E^h := E_x^h \cup E_y^h$. Finally, we denote the total number of nodes $|\Omega^h|$ by $K$.

Now, we define the Heaviside function, which measures how large a portion of each edge belongs to the domain.

Definition 2.2 (Heaviside function). For each edge $E_{i+\frac12,j}$ and $E_{i,j+\frac12}$, we define the Heaviside functions $H_{i+\frac12,j}$ and $H_{i,j+\frac12}$ by
$$
H_{i+\frac12,j} = \frac{\mathrm{length}\,(E_{i+\frac12,j}\cap\Omega)}{\mathrm{length}\,(E_{i+\frac12,j})}, \qquad
H_{i,j+\frac12} = \frac{\mathrm{length}\,(E_{i,j+\frac12}\cap\Omega)}{\mathrm{length}\,(E_{i,j+\frac12})},
$$
respectively.
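For a concrete instance (our own sketch, not the authors' code), the Heaviside value of a vertical edge $E_{i+\frac12,j}$ can be computed by clipping the segment against the domain; the helper below assumes the unit disc, and the function name and arguments are illustrative only.

```python
# Heaviside fraction H = length(E ∩ Ω) / length(E) for a vertical edge of
# Definition 2.2, assuming Ω is the unit disc (an assumption for illustration).
import numpy as np

def heaviside_vertical(x, y_lo, y_hi, radius=1.0):
    """Fraction of the segment {x} x [y_lo, y_hi] lying inside the disc."""
    if abs(x) >= radius:
        return 0.0                               # the whole edge misses the disc
    half = np.sqrt(radius**2 - x**2)             # inside the disc iff |y| < half
    lo, hi = max(y_lo, -half), min(y_hi, half)   # clip the segment to (-half, half)
    return max(hi - lo, 0.0) / (y_hi - y_lo)

print(heaviside_vertical(0.875, 0.375, 0.625))   # edge cut by the boundary (~0.44)
print(heaviside_vertical(0.125, 0.375, 0.625))   # interior edge: 1.0
```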

With these definitions, we can now discretize the Poisson equation (2.1). Let $u^h_{i,j}$ be the average value of $u$ over $C_{ij}$. Then, by the divergence theorem, we have
$$
\iint_{C_{ij}\cap\Omega} -\Delta u \,dx\,dy = \iint_{C_{ij}\cap\Omega} f \,dx\,dy,
$$
and
$$
\iint_{C_{ij}\cap\Omega} -\Delta u \,dx\,dy
= \int_{\partial(C_{ij}\cap\Omega)} -\frac{\partial u}{\partial n}\,ds
= -\int_{\partial C_{ij}\cap\Omega} \frac{\partial u}{\partial n}\,ds
 - \int_{C_{ij}\cap\partial\Omega} g\,ds
=: I_1 + I_2.
$$
We approximate $\partial u/\partial x$ and $\partial u/\partial y$ on $\partial C_{ij}$ by central differences of the $u^h_{i,j}$'s. As a result, $I_1$ can be approximated by the discrete Laplace operator as follows:
$$
(L^h u^h)_{ij} = H_{i+\frac12,j}\,(u^h_{i+1,j} - u^h_{i,j}) - H_{i-\frac12,j}\,(u^h_{i,j} - u^h_{i-1,j})
 + H_{i,j+\frac12}\,(u^h_{i,j+1} - u^h_{i,j}) - H_{i,j-\frac12}\,(u^h_{i,j} - u^h_{i,j-1}),
\tag{2.3}
$$
where $u^h = (u^h_{i,j}) \in \mathbb{R}^K$, $L^h \in \mathbb{R}^{K\times K}$ is the discrete Laplace operator and $K = |\Omega^h|$. Let $b^h_{i,j}$ be the approximated value of $I_2 + \iint_{C_{ij}\cap\Omega} f\,dx\,dy$. Then, to solve the Poisson equation (2.1) discretely, it suffices to solve the following linear equation:
$$
L^h u^h = b^h, \qquad L^h \in \mathbb{R}^{K\times K}, \quad u^h, b^h \in \mathbb{R}^K.
\tag{2.4}
$$
Here, to vectorize the values on the grid points, we use the lexicographical ordering [1] starting from the lower-left part of the domain, in the same way as in [11]. That is, we align the nodes in $\Omega^h$ as $x_1 := (x_{i_1}, y_{j_1}) \leq x_2 := (x_{i_2}, y_{j_2}) \leq \cdots \leq x_K := (x_{i_K}, y_{j_K})$, where
$$
(x_i, y_j) \leq (x_{i'}, y_{j'}) \iff j < j' \ \vee\ (j = j' \wedge i \leq i').
$$

For simplicity, we use the matrix $A$ instead of $L^h$ and omit all $h$-superscripts. Note that $A$ is symmetric positive semi-definite and $N(A) = \mathrm{span}\{\mathbf{1}\}$, where $\mathbf{1}$ denotes the vector in $\mathbb{R}^K$ all of whose components are 1. From now on, we focus on the linear equation of the form
$$
Au = b, \qquad A \in \mathbb{R}^{K\times K}, \quad u, b \in \mathbb{R}^K,
\tag{2.5}
$$
where $A = (a_{rs})$ satisfies the following:
$$
a_{rs} =
\begin{cases}
H_{i_r+\frac12,j_r} + H_{i_r-\frac12,j_r} + H_{i_r,j_r+\frac12} + H_{i_r,j_r-\frac12} & \text{if } s = r,\\
-H_{i_r\pm\frac12,j_r} & \text{if } j_s = j_r \text{ and } i_s = i_r \pm 1,\\
-H_{i_r,j_r\pm\frac12} & \text{if } i_s = i_r \text{ and } j_s = j_r \pm 1,\\
0 & \text{otherwise}.
\end{cases}
\tag{2.6}
$$
Here, $a_{rs}$ shows how the nodes $x_r$ and $x_s$ are connected.
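As an illustration of (2.6) (our own sketch, not the authors' code), the matrix $A$ can be assembled edge by edge from the Heaviside values. The sketch assumes the nodes of $\Omega^h$ form a full $n_x \times n_y$ rectangle in the lexicographic ordering $k = j\,n_x + i$, with arrays `Hx[i, j]` $= H_{i+\frac12,j}$ and `Hy[i, j]` $= H_{i,j+\frac12}$ for interior edges and $H = 0$ on edges leaving the rectangle; these names and the dense layout are assumptions for illustration.

```python
# Assemble A of (2.6) from Heaviside values: each edge adds +H to the diagonals
# of its two endpoints and -H to the corresponding off-diagonal entries.
import numpy as np
import scipy.sparse as sp

def assemble_A(Hx, Hy):
    nx, ny = Hx.shape[0] + 1, Hx.shape[1]        # Hx: (nx-1, ny), Hy: (nx, ny-1)
    idx = lambda i, j: j * nx + i                # lexicographic ordering of Section 2.1
    A = sp.lil_matrix((nx * ny, nx * ny))
    for j in range(ny):
        for i in range(nx):
            if i + 1 < nx:                       # vertical edge E_{i+1/2, j}
                H, r, s = Hx[i, j], idx(i, j), idx(i + 1, j)
                A[r, r] += H; A[s, s] += H; A[r, s] -= H; A[s, r] -= H
            if j + 1 < ny:                       # horizontal edge E_{i, j+1/2}
                H, r, s = Hy[i, j], idx(i, j), idx(i, j + 1)
                A[r, r] += H; A[s, s] += H; A[r, s] -= H; A[s, r] -= H
    return A.tocsr()

# Zero row sums reflect the pure Neumann condition: constants lie in the null space.
A = assemble_A(np.random.rand(3, 3), np.random.rand(4, 2))
print(abs(A.sum(axis=1)).max())                  # ~1e-16
```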

(4)

2.2 Analysis on the preconditioner

In this subsection, we review the classical definitions of the Incomplete LU decomposition (ILU) and the Modified Incomplete LU decomposition (MILU) and their analysis. To solve (2.5) by an iterative method, the condition number $\kappa(A)$ has a significant influence on the convergence rate. Therefore, instead of solving (2.5) directly, one can solve the modified equation
$$
M^{-1}Au = M^{-1}b, \qquad A, M \in \mathbb{R}^{K\times K}, \quad u, b \in \mathbb{R}^K,
\tag{2.7}
$$
with $\kappa(M^{-1}A) < \kappa(A)$. There are two requirements for the preconditioner $M$ to minimize the condition number $\kappa(M^{-1}A)$:

• $M$ should be similar to $A$ in some sense, to make $M^{-1}A$ closer to $I$ and hence make the condition number small.

• The inverse operation of $M$ should be easy to compute.

Note that the two extreme choices for $M$ are $M = A$ and $M = I$. For the case $M = A$, we have $\kappa(M^{-1}A) = 1$; however, in this case it is expensive to apply the inverse of $M = A$. For the other extreme case $M = I$, the inverse operation costs nothing, but there is no effect on the condition number. Therefore, one has to choose the preconditioner $M$ between these two extreme cases. Among many preconditioners, we focus on two standard ones, namely ILU and the conventional MILU.

2.2.1 Incomplete LU decomposition

The ILU preconditioner is defined by the following procedure. Let $A = L + D + U$, where $L$, $D$ and $U$ denote the strictly lower triangular, diagonal and strictly upper triangular parts of $A$, respectively. Then, define the preconditioner $M$ as
$$
M = (L + E)\,E^{+}\,(E + U),
$$
for some appropriately chosen diagonal matrix $E$, where $E^{+}$ denotes the Penrose pseudoinverse of $E$. Note that $M$ is the product of triangular matrices and a diagonal matrix. Hence, if $E^{-1}$ exists, the inverse of $M$ can be applied easily and $M$ can be used as a preconditioner. So let us temporarily assume that $E$ is invertible. Then we have
$$
M = (L + E)E^{-1}(E + U) = LE^{-1}U + E + L + U.
$$
To make $M$ similar to $A$, the ILU preconditioner takes $E$ so that $LE^{-1}U + E$ has the same diagonal as $D$. So the diagonal matrix $E$ is defined by
$$
(E + LE^{-1}U)_{ii} = D_{ii}, \qquad i = 1, \ldots, K.
$$
For our case, let $E_{i_k,j_k}$ be the diagonal element of $E$ corresponding to the node $x_k = (x_{i_k}, y_{j_k})$ for $k = 1, \ldots, K$. From the above equation, we obtain a recursive definition of ILU, which is well defined even when $E$ is not invertible:
$$
E_{i_1,j_1} = a_{1,1}, \qquad
E_{i_k,j_k} = a_{k,k} - H_{i_k-\frac12,j_k}^2\, E^{+}_{i_k-1,j_k} - H_{i_k,j_k-\frac12}^2\, E^{+}_{i_k,j_k-1},
\tag{2.8}
$$
where $a^{+}$ is defined by
$$
a^{+} = \begin{cases} a^{-1} & \text{if } a \neq 0,\\ 0 & \text{if } a = 0, \end{cases}
$$
and $E_{i_k-1,j_k}$, $E_{i_k,j_k-1}$ are the diagonal elements of $E$ corresponding to the nodes on the left/bottom of $x_k$, respectively.
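A direct transcription of the recursion (2.8) (our sketch, in the same assumed rectangular-grid layout and with the same `Hx`/`Hy` arrays as the assembly sketch above):

```python
# ILU diagonal of (2.8); pinv implements the scalar pseudoinverse a^+.
import numpy as np

pinv = lambda a: 1.0 / a if a != 0.0 else 0.0

def ilu_diagonal(Hx, Hy):
    nx, ny = Hx.shape[0] + 1, Hx.shape[1]
    E = np.zeros((nx, ny))
    for j in range(ny):                          # sweep in lexicographic order
        for i in range(nx):
            # a_kk: sum of the four incident Heaviside values (outside edges are 0)
            E[i, j] = ((Hx[i, j] if i + 1 < nx else 0.0) + (Hx[i - 1, j] if i > 0 else 0.0)
                       + (Hy[i, j] if j + 1 < ny else 0.0) + (Hy[i, j - 1] if j > 0 else 0.0))
            if i > 0:                            # left neighbour term
                E[i, j] -= Hx[i - 1, j] ** 2 * pinv(E[i - 1, j])
            if j > 0:                            # bottom neighbour term
                E[i, j] -= Hy[i, j - 1] ** 2 * pinv(E[i, j - 1])
    return E
```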

(5)

Figure 2.1: Example domain

2.2.2 Conventional Modified Incomplete LU decomposition

From the general definition of the (conventional) MILU preconditioner [23, 24, 25], there are many choices of MILU preconditioners. Here we present the one used in [11], as follows. Similarly to the ILU preconditioner, let $A = L + D + U$, where $L$, $D$ and $U$ denote the strictly lower triangular, diagonal and strictly upper triangular parts of $A$, respectively. Then, define the preconditioner $M$ as
$$
M = (L + E)\,E^{+}\,(E + U),
$$
for some diagonal matrix $E$. Again, we temporarily assume that $E$ is invertible. In this case, to make $M$ similar to $A$, the MILU preconditioner takes $E$ so that $LE^{-1}U + E$ has the same row sums as $D$. So the diagonal matrix $E$ is defined by
$$
\sum_{j=1}^{K} (E + LE^{-1}U)_{ij} = D_{ii}, \qquad i = 1, \ldots, K.
$$
From the above equation, we obtain a recursive definition of MILU, which is well defined even when $E$ is not invertible:
$$
E_{i_1,j_1} = a_{1,1}, \qquad
E_{i_k,j_k} = a_{k,k}
- H_{i_k-\frac12,j_k}\, E^{+}_{i_k-1,j_k}\!\left(H_{i_k-\frac12,j_k} + l_{i_k-1,j_k+\frac12}\right)
- H_{i_k,j_k-\frac12}\, E^{+}_{i_k,j_k-1}\!\left(H_{i_k,j_k-\frac12} + l_{i_k+\frac12,j_k-1}\right).
\tag{2.9}
$$
Here $l_{i_k-1,j_k+\frac12}$ and $l_{i_k+\frac12,j_k-1}$ are defined as
$$
l_{i_k-1,j_k+\frac12} = \begin{cases} H_{i_p,j_p+\frac12} & \text{if } x_p := (x_{i_k} - h, y_{j_k}) \in \Omega^h,\\ 0 & \text{otherwise}, \end{cases}
\qquad
l_{i_k+\frac12,j_k-1} = \begin{cases} H_{i_q+\frac12,j_q} & \text{if } x_q := (x_{i_k}, y_{j_k} - h) \in \Omega^h,\\ 0 & \text{otherwise}, \end{cases}
$$
and $E_{i_k-1,j_k}$, $E_{i_k,j_k-1}$ are the diagonal elements of $E$ corresponding to the nodes on the left/bottom of $x_k$, respectively. If those nodes are not in $\Omega^h$, then they are ignored in the calculation.
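In the same assumed setting, the MILU recursion (2.9) differs from the ILU one only through the extra $l$-terms, namely the Heaviside value of the upper edge of the left neighbour and of the right edge of the bottom neighbour (a sketch, not the authors' code):

```python
# MILU diagonal of (2.9) on an assumed full rectangular grid; Hxf[i, j] = H_{i-1/2, j}
# and Hyf[i, j] = H_{i, j-1/2} are the interior Heaviside arrays padded with zeros
# so that edges leaving the rectangle carry H = 0.
import numpy as np

pinv = lambda a: 1.0 / a if a != 0.0 else 0.0

def milu_diagonal(Hx, Hy):
    nx, ny = Hx.shape[0] + 1, Hx.shape[1]
    Hxf = np.zeros((nx + 1, ny)); Hxf[1:nx, :] = Hx
    Hyf = np.zeros((nx, ny + 1)); Hyf[:, 1:ny] = Hy
    E = np.zeros((nx, ny))
    for j in range(ny):
        for i in range(nx):
            E[i, j] = Hxf[i, j] + Hxf[i + 1, j] + Hyf[i, j] + Hyf[i, j + 1]   # a_kk
            if i > 0:     # left neighbour: l = Heaviside of its upper edge
                Hl, l = Hxf[i, j], Hyf[i - 1, j + 1]
                E[i, j] -= Hl * pinv(E[i - 1, j]) * (Hl + l)
            if j > 0:     # bottom neighbour: l = Heaviside of its right edge
                Hb, l = Hyf[i, j], Hxf[i + 1, j - 1]
                E[i, j] -= Hb * pinv(E[i, j - 1]) * (Hb + l)
    return E
```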

Example 2.1. For the points in Figure 2.1, we use the lexicographical ordering and construct $A^h$ when $h = 1$ as follows:
$$
A^h = \begin{pmatrix}
1 & -\tfrac12 & & -\tfrac12 & & & & \\
-\tfrac12 & 2 & -\tfrac12 & & -1 & & & \\
& -\tfrac12 & 1 & & & -\tfrac12 & & \\
-\tfrac12 & & & 2 & -1 & & -\tfrac12 & \\
& -1 & & -1 & 3 & -\tfrac12 & & -\tfrac12 \\
& & -\tfrac12 & & -\tfrac12 & 1 & & \\
& & & -\tfrac12 & & & 1 & -\tfrac12 \\
& & & & -\tfrac12 & & -\tfrac12 & 1
\end{pmatrix}.
$$
As a result, the diagonal entries of $E$ corresponding to the MILU preconditioner are, in this order,
$$
\left(1,\ \tfrac32,\ \tfrac12,\ \tfrac32,\ 1,\ 0,\ \tfrac12,\ 0\right).
$$
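As an illustration (our own arithmetic, writing $E_k$ for the entry at the $k$-th node and reading the Heaviside values off the entries of $A^h$ above), evaluating the recursion (2.9) at the fifth node reproduces the listed value: its left neighbour is the fourth node ($H_{\text{left}} = 1$, upper-edge value $l = \tfrac12$, $E_4 = \tfrac32$) and its bottom neighbour is the second node ($H_{\text{bottom}} = 1$, right-edge value $l' = \tfrac12$, $E_2 = \tfrac32$), so
$$
E_5 = a_{5,5} - \frac{H_{\text{left}}\left(H_{\text{left}} + l\right)}{E_4} - \frac{H_{\text{bottom}}\left(H_{\text{bottom}} + l'\right)}{E_2}
= 3 - \frac{1\cdot\left(1 + \tfrac12\right)}{3/2} - \frac{1\cdot\left(1 + \tfrac12\right)}{3/2} = 1.
$$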

The following proposition provides a simple formula for the matrix $E$.

Proposition 2.1. Let $E$ be the diagonal matrix defined in (2.9). Then we have
$$
E_{i_k,j_k} = H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12}.
$$

Proof. We use strong induction on $k$. For the initial step, $E_{i_1,j_1}$ corresponds to the point $x_1$, which is at the left-bottom corner of the grid. This implies
$$
H_{i_1-\frac12,j_1} = H_{i_1,j_1-\frac12} = 0.
$$
Hence we have
$$
E_{i_1,j_1} = a_{1,1} = H_{i_1+\frac12,j_1} + H_{i_1,j_1+\frac12}.
$$
Now, suppose the statement holds for all $k < n + 1$. We denote by $x_p$ and $x_q$ the nodes on the left/bottom of $x_{n+1}$, respectively. There are four possible cases:
$$
(1)\ x_p \in \Omega^h \text{ and } x_q \notin \Omega^h, \quad
(2)\ x_p \notin \Omega^h \text{ and } x_q \in \Omega^h, \quad
(3)\ x_p \in \Omega^h \text{ and } x_q \in \Omega^h, \quad
(4)\ x_p \notin \Omega^h \text{ and } x_q \notin \Omega^h.
$$
We prove only case (1), since the others are analogous. For case (1), we have
$$
E_{i_{n+1},j_{n+1}} = a_{n+1,n+1} - \frac{H_{i_{n+1}-\frac12,j_{n+1}}}{E_{i_{n+1}-1,j_{n+1}}}\left(H_{i_{n+1}-\frac12,j_{n+1}} + l_{i_{n+1}-1,j_{n+1}+\frac12}\right),
$$
and
$$
l_{i_{n+1}-1,j_{n+1}+\frac12} = H_{i_p,j_p+\frac12}, \qquad
E_{i_{n+1}-1,j_{n+1}} = H_{i_{n+1}-\frac12,j_{n+1}} + H_{i_p,j_p+\frac12},
$$
which follow from $x_p \in \Omega^h$, $x_q \notin \Omega^h$ and the induction hypothesis. In particular, $x_q \notin \Omega^h$ implies that its four neighboring edges do not intersect $\Omega$, which means $H_{i_{n+1},j_{n+1}-\frac12} = 0$. Thus we have
$$
a_{n+1,n+1} = H_{i_{n+1}+\frac12,j_{n+1}} + H_{i_{n+1},j_{n+1}+\frac12} + H_{i_{n+1}-\frac12,j_{n+1}}.
$$
Therefore, we get
$$
E_{i_{n+1},j_{n+1}} = a_{n+1,n+1} - \frac{H_{i_{n+1}-\frac12,j_{n+1}}}{E_{i_{n+1}-1,j_{n+1}}}\left(H_{i_{n+1}-\frac12,j_{n+1}} + l_{i_{n+1}-1,j_{n+1}+\frac12}\right)
= H_{i_{n+1}+\frac12,j_{n+1}} + H_{i_{n+1},j_{n+1}+\frac12} + H_{i_{n+1}-\frac12,j_{n+1}} - H_{i_{n+1}-\frac12,j_{n+1}}
= H_{i_{n+1}+\frac12,j_{n+1}} + H_{i_{n+1},j_{n+1}+\frac12},
$$
which completes the induction.¹

¹ Here we did not use the $^{+}$ notation, but each fraction $a/b$ actually means $a\,b^{+}$.


Corollary 2.1. We say that a node $(x_{i_k}, y_{j_k}) \in \Omega^h$ is at the (numerical) right-top corner if the right and upper edges of its control volume do not intersect $\Omega$, i.e.
$$
H_{i_k+\frac12,j_k} = H_{i_k,j_k+\frac12} = 0.
$$
Then we have $E_{i_k,j_k} = 0$ if and only if $(x_{i_k}, y_{j_k})$ is at the numerical right-top corner of $\Omega^h$.

Proof. It follows immediately from Proposition 2.1.

Remark 2.1. For a node to be at the right-top corner in the real picture, there must be no nodes of $\Omega^h$ on its upper or right side; hence the Heaviside values of its upper and right edges must be zero. However, a node at the numerical right-top corner may not be at the right-top corner in the real picture, i.e. for a node $(x_{i_k}, y_{j_k}) \in \Omega^h$ with $E_{i_k,j_k} = 0$, there might still exist some node $(x, y) \in \Omega^h$ such that
$$
(x, y) = (x_{i_k+1}, y_{j_k}) \quad \text{or} \quad (x, y) = (x_{i_k}, y_{j_k+1}).
$$
This is possible when $\Omega$ is non-convex.

Corollary 2.2. The inverse of the MILU preconditioner does not exist.

Proof. Consider
$$
M = (L + E)\,E^{+}\,(U + E).
$$
Since there is at least one node at the numerical right-top corner in $\Omega^h$, Corollary 2.1 shows that at least one diagonal entry of $E$ is zero. Then $L + E$ is a lower triangular matrix with a zero diagonal entry, hence singular, and therefore $M$ is singular.

2.3 Mixture of MILU and ILU preconditioner

As in Example 2.1, the number of zero diagonal entries of the matrix $E$ can be larger than one, and Corollary 2.1 tells us that it varies with the shape of the domain. So the MILU preconditioner for $A$ presented in Section 2.2 cannot be used, since it is not invertible, whereas the ILU preconditioner can be used, since it is invertible. Hence our idea is to mix the ILU and MILU preconditioners in a certain ratio, which yields an invertible preconditioner.

Definition 2.3 (Mixture of MILU and ILU). For $r \in (0, 1)$, we define MILU-ILU($r$) to be the matrix $M = (L + \tilde{E})\tilde{E}^{-1}(\tilde{E} + U)$, where $\tilde{E}$ is defined by the formula
$$
\tilde{E}_{i_1,j_1} = a_{1,1}, \qquad
\tilde{E}_{i_k,j_k} = a_{k,k}
- H_{i_k-\frac12,j_k}\, \tilde{E}^{+}_{i_k-1,j_k}\!\left(H_{i_k-\frac12,j_k} + (1 - r)\, l_{i_k-1,j_k+\frac12}\right)
- H_{i_k,j_k-\frac12}\, \tilde{E}^{+}_{i_k,j_k-1}\!\left(H_{i_k,j_k-\frac12} + (1 - r)\, l_{i_k+\frac12,j_k-1}\right).
\tag{2.10}
$$
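A sketch of the mixed diagonal (2.10), again under the rectangular-grid assumptions used in the earlier sketches; note that $r = 1$ drops the $l$-terms and recovers the ILU recursion (2.8), while $r \to 0$ approaches the conventional MILU recursion (2.9).

```python
# MILU-ILU(r) diagonal of (2.10): identical to the MILU recursion except for the
# factor (1 - r) multiplying the l-terms.
import numpy as np

pinv = lambda a: 1.0 / a if a != 0.0 else 0.0

def milu_ilu_diagonal(Hx, Hy, r):
    nx, ny = Hx.shape[0] + 1, Hx.shape[1]
    Hxf = np.zeros((nx + 1, ny)); Hxf[1:nx, :] = Hx     # Hxf[i, j] = H_{i-1/2, j}
    Hyf = np.zeros((nx, ny + 1)); Hyf[:, 1:ny] = Hy     # Hyf[i, j] = H_{i, j-1/2}
    E = np.zeros((nx, ny))
    for j in range(ny):
        for i in range(nx):
            E[i, j] = Hxf[i, j] + Hxf[i + 1, j] + Hyf[i, j] + Hyf[i, j + 1]
            if i > 0:
                Hl, l = Hxf[i, j], Hyf[i - 1, j + 1]
                E[i, j] -= Hl * pinv(E[i - 1, j]) * (Hl + (1.0 - r) * l)
            if j > 0:
                Hb, l = Hyf[i, j], Hxf[i + 1, j - 1]
                E[i, j] -= Hb * pinv(E[i, j - 1]) * (Hb + (1.0 - r) * l)
    return E
```

By the proposition that follows, every diagonal entry is nonzero for $0 < r < 1$, so the resulting $M = (L + \tilde{E})\tilde{E}^{-1}(\tilde{E} + U)$ is invertible.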

Proposition 2.2. For any $r \in (0, 1)$, the MILU-ILU($r$) mixture is invertible.

Proof. It suffices to show that every diagonal element of $\tilde{E}$ is nonzero. Write $\tilde{E} = \mathrm{diag}(\tilde{E}_{i_1,j_1}, \ldots, \tilde{E}_{i_K,j_K})$ and let $l := \inf_k (i_k + j_k)$. We prove the following by induction on $n$:
$$
i_k + j_k = n \ \Longrightarrow\ \tilde{E}_{i_k,j_k} > \max\left\{(1 - r)H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12},\ H_{i_k+\frac12,j_k} + (1 - r)H_{i_k,j_k+\frac12}\right\}.
\tag{2.11}
$$
For $i_k + j_k = l$, the node is at the left-bottom, i.e. $(x_{i_k-1}, y_{j_k})$ and $(x_{i_k}, y_{j_k-1})$ are not in $\Omega^h$. Thus
$$
\tilde{E}^{+}_{i_k-1,j_k} = 0, \qquad \tilde{E}^{+}_{i_k,j_k-1} = 0,
$$
and for such nodes
$$
a_{k,k} \geq H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12}.
$$

Thus
$$
\tilde{E}_{i_k,j_k} \geq H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12}
> \max\left\{(1 - r)H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12},\ H_{i_k+\frac12,j_k} + (1 - r)H_{i_k,j_k+\frac12}\right\}.
$$
So the initial step is proved. For the induction step, assume that the conclusion holds for $n = m$, and choose any node satisfying $i_k + j_k = m + 1$. If the node is at the left-bottom, the argument is the same as in the initial step. If not,
$$
\begin{aligned}
\tilde{E}_{i_k,j_k} &= a_{k,k}
- H_{i_k-\frac12,j_k}\, \tilde{E}^{+}_{i_k-1,j_k}\!\left(H_{i_k-\frac12,j_k} + (1 - r)\, l_{i_k-1,j_k+\frac12}\right)
- H_{i_k,j_k-\frac12}\, \tilde{E}^{+}_{i_k,j_k-1}\!\left(H_{i_k,j_k-\frac12} + (1 - r)\, l_{i_k+\frac12,j_k-1}\right)\\
&> a_{k,k} - H_{i_k-\frac12,j_k} - H_{i_k,j_k-\frac12}\\
&\geq \max\left\{(1 - r)H_{i_k+\frac12,j_k} + H_{i_k,j_k+\frac12},\ H_{i_k+\frac12,j_k} + (1 - r)H_{i_k,j_k+\frac12}\right\},
\end{aligned}
\tag{2.12}
$$
which completes the proof.

Figure 3.1: The performance of the ILU and MILU-ILU(0.03) preconditioners in test #1: the condition number $\kappa = \kappa(M^{-1}A)$ with respect to the grid step size $h$ in log scales, compared with the unpreconditioned system.

3 Empirical tests on MILU-ILU preconditioning

In this section, we consider the mixture preconditioner of MILU and ILU. It was suggested in [15] to mix in more MILU than the default 97% as the grid step size decreases. This section is devoted to seeking the optimal mixing ratio and to finding out the performance of the MILU-ILU preconditioning with the optimal ratio.

3.1 Test #1 : MILU-ILU(0.03)

We first check the performance of the MILU-ILU preconditioning with $r = 0.03$. The Purvis-Burkhalter method is applied on the unit disc of center $(0, 0)$ with uniform grid step size $h$. The numerical results are reported in Figure 3.1. In regard to the magnitude of the condition number, MILU-ILU(0.03) achieves better results than ILU. However, the condition number grows as $O(h^{-2})$ in all cases: in regard to the growth order of the condition number, MILU-ILU(0.03) and ILU are just as good as the unpreconditioned linear system.


Figure 3.2: The graph of the condition number $\kappa = \kappa(M^{-1}A)$ with respect to the MILU-ILU ratio for various $h$. It is remarkable that the graph just translates by the same amount $(-\log 4, \log 2)$ when $h$ is reduced by half.

3.2 Test #2 : Optimal ratio to mix

It is known to be advantageous to take a smaller ratio $r$ for a smaller grid step size $h$. A question in practice is how to give specific quantities to such qualitative statements. In the same setting as in test #1, a brute-force search is carried out to quantify the optimal ratio $r$ with respect to $h$.

Figure 3.2 depicts the graph of the condition number $\kappa$ with respect to the ratio $r$ of MILU-ILU for each step size $h$. It is remarkable that the graph in the log scales just translates to the left by $\log 4$ and upward by $\log 2$ when $h$ is reduced by half. In addition, note in the figure that $\kappa$ in the log scale increases by $\log 4$ when $r$ is fixed and greater than 0.01, and $h$ is reduced by half.

Hence the results in test #1 can be explained by the above paragraph. When the ratio $r = 3\%$ is fixed and $\log h$ is reduced by $\log 2$, then $\log \kappa$ increases by $\log 4$. We can deduce that $\log \kappa + 2 \log h$ is constant, and that $\kappa$ grows as $O(h^{-2})$ as $h$ becomes smaller.

Now, let us utilize the translation property to decide the ratio. When $h$ is reduced by half (or $\log h \leftarrow \log h - \log 2$), let us take the ratio $r$ to be a quarter of its value ($\log r \leftarrow \log r - \log 4$); then $\log \kappa$ will increase by $\log 2$ ($\log \kappa \leftarrow \log \kappa + \log 2$), according to the translation property. This observation leads to the following conjecture: if $C = r \cdot h^{-2}$ is a fixed constant independent of $h$, then $\log \kappa + \log h$ is kept approximately constant, and $\kappa$ grows as $O(h^{-1})$.
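The argument can be restated compactly (our own rephrasing of the observation above): writing $\kappa(r, h)$ for the condition number at ratio $r$ and step size $h$, the translation property reads $\kappa(r/4, h/2) \approx 2\,\kappa(r, h)$, so for $f(h) := \kappa(Ch^2, h)$ we get
$$
f\!\left(\tfrac{h}{2}\right) = \kappa\!\left(\tfrac{Ch^2}{4}, \tfrac{h}{2}\right) \approx 2 f(h)
\quad\Longrightarrow\quad
f\!\left(2^{-m} h_0\right) \approx 2^{m} f(h_0) = \frac{h_0}{h}\, f(h_0) = O\!\left(h^{-1}\right).
$$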

Conjecture. Let $A$ be the matrix associated with the Purvis-Burkhalter method [3, 5] for solving the Poisson equation with Neumann boundary condition in a domain $\Omega$, and let $M$ be the MILU-ILU preconditioner with mixing ratio $r$. When $r = C \cdot h^{2}$ for some moderate constant $C > 0$ independent of $h$, then we have
$$
\kappa\left(M^{-1}A\right) = O\left(h^{-1}\right),
$$
for any smooth domain $\Omega \subset \mathbb{R}^2$. In practice, we may set $C = 1$.


Figure 4.1: Various test domains with $N = 20$: (a) circular domain; (b) elliptic domain with long axis tilted to the upper right corner; (c) elliptic domain with short axis tilted to the upper right corner; (d) round square domain; (e) flower-shaped domain; (f) stone-shaped domain.

4 Numerical support of the conjecture

In this section, we provide numerical evidence to support the conjecture. Numerical tests are conducted on the various domains depicted in Figure 4.1.

Figure 4.2 shows how the condition number changes as $h$ gets smaller in every domain depicted in Figure 4.1. Here, $r = h^2$ and $r = 3 \cdot h^2$ are used for the ratio $r$. For each domain $\Omega$ and constant $C = r/h^2$, the numerical results in Figure 4.2 indicate that the condition number grows as $O(h^{-1})$, which follows the conjecture. The conjecture was based upon the empirical observations on the disc. It states that the condition number $\kappa = \kappa(M^{-1}A)$ is of size $O(h^{-1})$ for any domain $\Omega$ and any constant $C = r/h^2 > 0$ independent of $h$, when $M$ is the combination of MILU and ILU at the ratio of $1 - r$ to $r$.

More specifically, we checked the plausibility of the conjecture for the six different domains depicted in Figure 4.1 and two different values $C = 1, 3$, making 12 combinations $(\Omega, C)$.

For each set $(\Omega, C)$ and each $h$, we perturb the domain by a random vector uniformly distributed in $(-h, h) \times (-h, h)$, and form the matrices $A$ and $M$. Then, the maximum eigenvalue of $M^{-1}A$ is calculated by the power iteration and the minimum eigenvalue by the inverse iteration, and the condition number is taken to be their ratio. Due to the singularity of the matrix $A$, the inverse iteration is run on the orthogonal complement of $\mathbf{1}$.
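The paper obtains the extreme eigenvalues by power and inverse iteration on $\mathbf{1}^{\perp}$; for a small grid the same condition number can be read off from a dense generalized eigensolve of $Av = \lambda M v$, as in this sketch (our own, not the authors' code; it assumes dense arrays and a positive diagonal `Ediag` such as the MILU-ILU($r$) one):

```python
# kappa(M^{-1} A) for the singular Neumann matrix A and an SPD preconditioner M:
# the zero eigenvalue carried by the constant vector is discarded.
import numpy as np
from scipy.linalg import eigh

def build_M(A, Ediag):
    """M = (L + E) E^{-1} (E + U) from A = L + D + U and a diagonal vector Ediag."""
    L, U, E = np.tril(A, -1), np.triu(A, 1), np.diag(Ediag)
    return (L + E) @ np.diag(1.0 / Ediag) @ (E + U)

def preconditioned_condition_number(A, M):
    lam = eigh(A, M, eigvals_only=True)          # generalized eigenvalues, ascending
    lam = lam[lam > 1e-10 * lam[-1]]             # drop the zero mode spanned by 1
    return lam[-1] / lam[0]
```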

Figure 4.2 shows the graph of $\kappa$ with respect to $h$ in log scales for each domain (a)-(f) in Figure 4.1 and for each constant $C = 1$ and $C = 3$. In each graph, $\kappa$ is observed to behave as $h^{-1}$ as the step size $h$ decreases, regardless of the choice of the constant $C$. This supports the conjecture.

Now, we present an example that shows the practical importance of the conjecture. While the conventional ILU and MILU-ILU(3%) generate condition numbers of size $O(h^{-2})$, the conjecture states that MILU-ILU($Ch^2$) generates one of size $O(h^{-1})$. When $h$ is small enough, $h^{-2} \gg h^{-1}$ and MILU-ILU($h^2$) is expected to outperform the others by a large margin. One measure of the excellence is the number of iterations until convergence of the preconditioned conjugate gradient method. This number of iterations is proportional to the condition number and is directly related to the computation time. We take an example with $h = 0.005$ and $\Omega = \{(x, y) \mid x^2 + y^2 < 1\}$. Figure 4.3 plots the residual norm with respect to the iteration number for each of the unpreconditioned system, ILU, MILU-ILU(3%), and MILU-ILU($h^2$). The computational time of MILU-ILU($h^2$) is merely about 67% of that of MILU-ILU(3%), and about 51% of that of ILU.


Figure 4.2: The graph of the condition number $\kappa$ with respect to the grid step size $h$ for each domain (a)-(f) of Figure 4.1 and each MILU-ILU ratio $r = h^2$ or $r = 3 \cdot h^2$. We conducted the same numerical test 20 times by translating the same domains by a vector $(e_{1,i}, e_{2,i})$, $i = 1, \ldots, N$, where $e_{1,i}$ and $e_{2,i}$ are uniformly distributed random variables in $(-h, h)$. The graphs represent the average value of $\kappa$ together with


Figure 4.3: The residual norm with respect to the iteration of the preconditioned conjugate gradient method on the linear system with $\Omega = \{(x, y) \mid x^2 + y^2 < 1\}$. In regard to the iteration number for convergence, the computational time of MILU-ILU($h^2$) is merely about 67% and 51% of the computational times of MILU-ILU(3%) and ILU, respectively.
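A sketch of such a preconditioned CG solve (ours, not the authors' code) using SciPy: $M^{-1}$ is applied through one lower and one upper triangular solve, since $M^{-1} = (\tilde{E} + U)^{-1}\tilde{E}(L + \tilde{E})^{-1}$. The function and variable names are illustrative; `Ediag` is the MILU-ILU($r$) diagonal in the same ordering as $A$, and the right-hand side is projected onto $\mathbf{1}^{\perp}$ because the pure Neumann system is singular.

```python
# Preconditioned CG with the MILU-ILU(r) preconditioner M = (L + E) E^{-1} (E + U).
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator, spsolve_triangular

def milu_ilu_pcg(A, b, Ediag):
    K = A.shape[0]
    lower = (sp.tril(A, -1) + sp.diags(Ediag)).tocsr()            # L + E
    upper = (sp.triu(A, 1) + sp.diags(Ediag)).tocsr()             # E + U

    def apply_Minv(res):
        y = spsolve_triangular(lower, res, lower=True)            # (L + E) y = res
        return spsolve_triangular(upper, Ediag * y, lower=False)  # (E + U) x = E y

    M = LinearOperator((K, K), matvec=apply_Minv)
    b = b - b.mean()                     # compatibility: b must be orthogonal to 1
    iters = []
    u, info = cg(A, b, M=M, callback=lambda xk: iters.append(1))
    return u, len(iters)
```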

Remark 4.1 (Lexicographical ordering).

Although the tested domains are the same geometric domains, they show different performances. This difference is due to the lexicographical ordering of the cells introduced in Section 2.1. More precisely, the reason that classical MILU preconditioning cannot be applied in the pure Neumann boundary condition case is that the diagonal matrix $E$ in the MILU preconditioner has singularities (i.e., $E_{i,j} = 0$). Proposition 2.1 implies that the original MILU preconditioner has singularities exactly at the cells where both $H_{i+\frac12,j}$ and $H_{i,j+\frac12}$ are 0; those cells correspond to the right-top corner cells of the domain. Since the MILU-ILU($r$) preconditioning was intended to remedy those singularities, it is natural that the number of cells at the right-top corner impacts the performance of the MILU-ILU($r$) preconditioning. Therefore, the worse performance in Figure 4.2 (c) compared to 4.2 (b) is probably due to the fact that those domains have different numbers of cells at the right-top corner, and correspondingly different numbers of singularities.

We note that there are four choices of lexicographical ordering for a 2D domain (eight choices for a 3D domain), and the performance of the MILU-ILU($r$) preconditioning can change according to this choice. For example, if we order the cells using the lexicographical ordering starting from the left-top corner, then the results in Figure 4.2 (b) and (c) will be reversed. When the domain is given and the direction of alignment of the domain is clear, the lexicographical ordering along that direction seems to show the best performance among the lexicographical orderings. When the direction of alignment of the domain is not clear, we have to find a way to detect it, or to make the preconditioner ordering-independent.


Figure 4.4: The graph of the condition number $\kappa = \kappa(M^{-1}A)$ with respect to the MILU-ILU ratio for various $h$ in the three-dimensional domain. The graph is quite different from the two-dimensional cases in Figure 3.2. Therefore, the conjecture does not seem valid for three-dimensional domains.

Remarks for the three-dimensional case.

We performed the numerical test on the unit sphere of center $(0, 0, 0)$ with uniform grid step size, to see whether the numerical results still follow the conjecture in three dimensions. Figure 4.4 depicts the graph of the condition number $\kappa$ with respect to the MILU-ILU ratio $r$ for each step size. As shown in Figure 4.4, the behavior in the three-dimensional domain is quite different from the two-dimensional case. Unlike the two-dimensional case, the graph in log scales is not convex and is shifted in a more complex way.

The graph translates to the left and upward when $h$ is reduced by half, but the differences between the graphs are not uniform. Also, the positions of the extreme values are irregular. However, it is hopeful that the tendency of the graph to shift as the step size decreases is similar to that of the two-dimensional case. Moreover, when we track the optimal ratio of MILU-ILU in Figure 4.4, it still gives a condition number of order $O(h^{-1})$ to some extent. Therefore, it seems possible to find an optimal ratio of MILU and ILU which gives a condition number of order $O(h^{-1})$ even in the three-dimensional case. Since we only conducted numerical tests and analyzed only the two-dimensional case, we cannot fully understand this result in this paper at this point, and a theoretically rigorous analysis is needed. After a more careful analysis of the two-dimensional case, we expect to be able to deal with the more complex three-dimensional case.

5 Conclusion

In this paper, we presented a new mixing method between ILU and MILU to solve the Poisson equation with Neumann boundary condition. We varied the mixing ratio of ILU and MILU to find the optimal ratio, which gives the smallest condition number. We found that the optimal ratio $r$ of ILU preconditioning should change as the step size $h$ changes. Moreover, we found that the optimal ratio is $r = C \cdot h^2$, where $C$ can be any constant as long as it does not depend on $h$. For this optimal ratio, the condition number behaves as $h^{-1}$, which is significantly improved compared to the previous performance $O(h^{-2})$. There are several ways to develop this result. First, since we only conducted numerical tests, a theoretical proof of the conjecture is still an open problem.

Moreover, we performed the numerical test on the unit sphere of center $(0, 0, 0)$ with uniform grid step size to check the plausibility of the conjecture for three-dimensional domains, and we found that the behavior in the three-dimensional domain is quite different from the two-dimensional case. Since we only conducted numerical tests in two-dimensional domains and did not analyze them theoretically, it is not easy to understand the results for the three-dimensional case at this time.

Furthermore, as discussed in Remark 4.1, when the direction of alignment of the domain is not clear, we need to detect the direction of alignment of the domain before we order the cells, or make the preconditioner more versatile and ordering-independent. These interesting questions are left for future work.

Acknowledgement

The research of C. Min and Y. Park was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017-006688), and the research of J. Kim, J. Jung and E. Lee was supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA1301-03.

References

[1] I. S. Duff, G. A. Meurant. The effect of ordering on preconditioned conjugate gradients. BIT, 29(4):635–657, 1989.

[2] M. Bonnet. Boundary integral equation methods for solids and fluids. Meccanica, 34(4):301–302, 1999.

[3] J. W. Purvis, J. E. Burkhalter. Prediction of critical Mach number for store configurations. AIAA Journal, 17(11):1170–1177, 1979.

[4] P. G. Ciarlet. The finite element method for elliptic problems. SIAM, 2002.

[5] Y. T. Ng, C. Min, F. Gibou. An efficient fluid–solid coupling algorithm for single-phase flows. J. Comput. Phys., 228(23):8807–8829, 2009.

[6] I. Gustafsson. A class of first order factorization methods. BIT, 18:142–156, 1978.

[7] F. Gibou, R. Fedkiw, L.-T. Cheng, M. Kang. A second-order-accurate symmetric discretization of the Poisson equation on irregular domains. J. Comput. Phys., 176:205–227, 2002.

[8] E. Cheney, D. Kincaid. Numerical mathematics and computing. Nelson Education, 2012, page 97.

[9] K. J. Marfurt. Accuracy of finite-difference and finite-element modeling of the scalar and elastic wave equations. Geophysics, 49(5):533–549, 1984.

[10] F. Gibou, C. Min. Efficient symmetric positive definite second-order accurate monolithic solver for fluid/solid interactions. J. Comput. Phys., 231:3245–3263, 2012.

[11] G. Yoon, C. Min. Analyses on the finite difference method by Gibou et al. for Poisson equation. J. Comput. Phys., 280:184–194, 2015.

[12] D. Brown, R. Cortez, M. Minion. Accurate projection methods for the incompressible Navier-Stokes equations. J. Comput. Phys., 168:464–499, 2001.

[13] D. Gottlieb, S. Orszag. Numerical analysis of spectral methods: theory and applications. SIAM, 1977.

[14] M. Sussman, P. Smereka, S. Osher. A level set approach for computing solutions to incompressible two-phase flow. J. Comput. Phys., 114:146–159, 1994.

[15] R. Bridson. Fluid simulation for computer graphics. CRC Press, 2015.

[16] E. Fatemi, S. Osher, L. I. Rudin. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259–268, 1992.

[17] B. Cockburn, G. E. Karniadakis, C.-W. Shu. The development of discontinuous Galerkin methods. In Discontinuous Galerkin Methods, pages 3–50. Springer, 2000.

[18] M. S. Birman, M. Z. Solomjak. Spectral theory of self-adjoint operators in Hilbert space, volume 5. Springer Science & Business Media, 2012.

[19] H. P. Pfeiffer, L. E. Kidder, M. A. Scheel, S. A. Teukolsky. A multidomain spectral method for solving elliptic equations. Comput. Phys. Commun., 152(3):253–273, 2003.

[20] G. H. Shortley, R. Weller. The numerical solution of Laplace's equation. J. Appl. Phys., 9(5):334–348, 1938.

[21] S. Osher, M. Burger, D. Goldfarb, J. Xu, W. Yin. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul., 4(2):460–489, 2005.

[22] C. Min, G. Yoon. Comparison of eigenvalue ratios in artificial boundary perturbation and Jacobi preconditioning for solving Poisson equation. J. Comput. Phys., 349:1–10, 2017.

[23] R. Beauwens. Modified incomplete factorization strategies. In Preconditioned Conjugate Gradient Methods, Lecture Notes in Mathematics 1457, pages 1–16. Springer, Berlin, 1990.

[24] I. Gustafsson. A class of preconditioned conjugate gradient methods applied to the finite element equations. In Preconditioned Conjugate Gradient Methods, Lecture Notes in Mathematics 1457, pages 44–57. Springer, Berlin, 1990.

[25] Y. Notay. Solving positive (semi)definite linear systems by preconditioned iterative methods. In Preconditioned Conjugate Gradient Methods, Lecture Notes in Mathematics 1457, pages 105–125. Springer, Berlin, 1990.
