
Volume 23, No. 3, September 2010

REMARK FOR CALCULATING THE BEST SIMULTANEOUS APPROXIMATIONS

Hyang Joo Rhee*

Abstract. This paper is concerned with an algorithm for calculating one-sided best simultaneous approximations in the case of continuous functions. We apply a subgradient algorithm; in other words, we consider the algorithms from a mathematical, rather than computational, point of view.

1. Introduction

We assume that W is a normed linear space and K is a nonempty subset of W. For any subset F of W, we define

d(F, K) := inf_{k∈K} sup_{f∈F} ||f − k||

and the elements in K which attain the above infimum are called a best simultaneous approximation for F from K.

Lemma 1.1. [6] Suppose that K is a nonempty closed convex subset of a finite-dimensional subspace of a normed linear space W . For any compact subset F ⊂ W , there exists a best simultaneous approximation for F from K.

Throughout this article, we assume that X is a compact subset of R^m which is the closure of its interior, and S is an n-dimensional subspace of C(X) with the L_1(X, µ)-norm, where µ is an admissible measure on X.

Received March 08, 2010; Accepted August 12, 2010.

2010 Mathematics Subject Classification: Primary 41A28, 41A65.

Key words and phrases: one-sided simultaneous approximation, linear programming.

This work was completed with the support of a fund of Duksung Women's University in 2009.


Suppose that we are given an ℓ-tuple F = {f_1, · · · , f_ℓ} in C(X) such that

S(F) = ⋂_{i=1}^{ℓ} {s ∈ S | s ≤ f_i}

is non-empty. We define

d(F, S(F)) := inf_{s∈S(F)} sup_{f∈F} ||f − s||_1

and the elements in S(F) which attain the above infimum are called a one-sided best simultaneous approximation for F from S(F). In the case ℓ = 1, we speak of a one-sided best approximation for f from S(f) [5].

Finding a one-sided best simultaneous approximation for F from S(F) is equivalent to finding an s ∈ S(F) attaining

sup{ ∫_X s dµ | s ∈ S(F) }.

Since S(F) is closed and convex, we have that S(F) ≠ ∅ for every ℓ-tuple F = {f_1, · · · , f_ℓ} in C(X) if and only if S contains a strictly positive function. By Lemma 1.1, if S(F) is nonempty, then there exists a one-sided best simultaneous approximation for F from S(F). We choose and fix a basis s_1, · · · , s_n for S where s_1 is strictly positive and

∫_X s_1 dµ = 1,   while   ∫_X s_i dµ = 0, i = 2, · · · , n.

Thus our problem can be reformulated as

max{ a_1 | ∑_{i=1}^{n} a_i s_i ≤ f_j, j = 1, · · · , ℓ }.

For each â = (a_2, · · · , a_n) ∈ R^{n−1}, define

h_j(â) = min{ (f_j(x) − ∑_{i=2}^{n} a_i s_i(x)) / s_1(x) : x ∈ X }

and H(â) = min_{1≤j≤ℓ} h_j(â).

Remark 1.2. Since s_1 is strictly positive,

max{ a_1 | ∑_{i=1}^{n} a_i s_i ≤ f_j, j = 1, · · · , ℓ } = max_{â∈R^{n−1}} min_{1≤j≤ℓ} h_j(â) = max_{â∈R^{n−1}} H(â).
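As an illustration (a toy setup of my own, not from the paper), the functions h_j and H can be evaluated directly once X is discretized: take X = [0, 1] on a uniform grid, s_1(x) = 1 and s_2(x) = x − 1/2 (so ∫ s_1 dµ = 1 and ∫ s_2 dµ = 0 for Lebesgue measure), and a pair F = {f_1, f_2} of constant functions.

```python
# Minimal sketch (toy setup, not the paper's data): evaluate
#   h_j(a_hat) = min_x (f_j(x) - sum_i a_i s_i(x)) / s_1(x)   and
#   H(a_hat)   = min_j h_j(a_hat)
# on a discretized X = [0, 1].

GRID = [i / 100 for i in range(101)]   # uniform grid on X = [0, 1]

def s1(x):
    return 1.0                         # strictly positive, integral 1

def s2(x):
    return x - 0.5                     # integral 0 over [0, 1]

def h(f, a_hat):
    """h_j(a_hat) for one member f of the l-tuple F."""
    (a2,) = a_hat
    return min((f(x) - a2 * s2(x)) / s1(x) for x in GRID)

def H(fs, a_hat):
    """H(a_hat) = min over j of h_j(a_hat)."""
    return min(h(f, a_hat) for f in fs)

f1 = lambda x: 1.0
f2 = lambda x: 2.0
print(H([f1, f2], (0.0,)))   # 1.0 -- f1 is the binding constraint
print(H([f1, f2], (1.0,)))   # 0.5 -- tilting by s2 lowers the worst case
```

Here H(â) is exactly the largest admissible leading coefficient a_1 for the given â, matching the reformulation in Remark 1.2.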


2. A subgradient to H at â

In this section, we shall consider vectors in R^{n−1} as being indexed 2 to n. The algorithm is based on ideas from gradients and subgradients.

Corollary 2.1. The function H is a continuous concave function on R^{n−1}.

Proof. For each h_j (1 ≤ j ≤ ℓ), the continuity and concavity may be proved as follows. The continuity is obvious. Let â, b̂ ∈ R^{n−1}. By definition,

f_j ≥ h_j(â)s_1 + ∑_{i=2}^{n} a_i s_i   and   f_j ≥ h_j(b̂)s_1 + ∑_{i=2}^{n} b_i s_i

on all of X. Thus, for any λ ∈ [0, 1],

f_j ≥ (λh_j(â) + (1 − λ)h_j(b̂))s_1 + ∑_{i=2}^{n} (λa_i + (1 − λ)b_i)s_i

on X, which implies that

h_j(λâ + (1 − λ)b̂) ≥ λh_j(â) + (1 − λ)h_j(b̂).

Thus h_j is concave. Since H(â) = min_{1≤j≤ℓ} h_j(â), the function H is a continuous concave function on R^{n−1}.

We also have by definition that H is finite on R^{n−1}. We claim that

lim_{||â||→∞} H(â) = −∞,

where || · || is any norm on R^{n−1}. To see this, set T = span{s_2, · · · , s_n}. Since T is a finite-dimensional subspace of C(X), and ∫_X t dµ = 0 for all t ∈ T, we necessarily have

lim_{||â||→∞} max{ ∑_{i=2}^{n} a_i s_i(x) : x ∈ X } = ∞.

Thus

lim_{||â||→∞} min{ (f_j(x) − ∑_{i=2}^{n} a_i s_i(x)) / s_1(x) : x ∈ X } = −∞,

i.e., lim_{||â||→∞} h_j(â) = −∞, so lim_{||â||→∞} H(â) = −∞.


Maximizing H over R^{n−1} is therefore a problem of maximizing a concave function.

Definition 2.2. Let H be as above and â ∈ R^{n−1}. A vector g ∈ R^{n−1} is said to be a subgradient to H at â if

H(b̂) − H(â) ≤ (g, b̂ − â)

for all b̂ ∈ R^{n−1}. We let G(â) denote the set of subgradients to H at â.

Definition 2.3. Let H be as above. If G(â) is a singleton, then this singleton is called the gradient to H at â.

By definition, a gradient to H exists at â if and only if there is a unique supporting hyperplane to H at â. We also remark that â is a maximum point of H if and only if 0 ∈ G(â).

Since G(â) is a compact convex set, it is uniquely determined by its extreme points. These extreme points are related to one-sided directional derivatives as follows.

Proposition 2.4. [4] Let H be as above and â ∈ R^{n−1}. For each d ∈ R^{n−1},

lim_{t→0+} (H(â + td) − H(â))/t = H′_d(â)

exists. Furthermore,

H′_d(â) = min{ (g, d) : g ∈ G(â) }.

As a consequence of this proposition, the above definition of a gradient implies the existence of the partial derivatives of H at â.

For any â ∈ R^{n−1}, set

Z(â) = { x : (f_{j_0} − H(â)s_1 − ∑_{i=2}^{n} a_i s_i)(x) = 0 },   where h_{j_0}(â) = H(â).

By definition Z(â) ≠ ∅ for each â ∈ R^{n−1}. For each x ∈ Z(â), set

g_x = (−s_2(x)/s_1(x), · · · , −s_n(x)/s_1(x)).

Let G̃(â) denote the convex hull of the set of vectors {g_x : x ∈ Z(â)}. Then the set G̃(â) is closed since Z(â) is closed.
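To make the construction concrete, here is a small numerical sketch (a toy setup of my own, not the paper's): with X = [0, 1] discretized, s_1 = 1, s_2(x) = x − 1/2 and a single f (ℓ = 1), it finds the active set Z(â) and the associated vectors g_x.

```python
# Sketch (toy setup): compute Z(a_hat) and the candidate subgradients
#   g_x = (-s_2(x)/s_1(x), ..., -s_n(x)/s_1(x))
# on a discretized X = [0, 1] with n = 2 and a single f (l = 1).
GRID = [i / 100 for i in range(101)]

def s1(x):
    return 1.0                        # strictly positive

def s2(x):
    return x - 0.5

def f(x):
    return x * x                      # the single member of F

def active_subgradients(a_hat, tol=1e-12):
    """Return H(a_hat) and {g_x : x in Z(a_hat)} for the toy problem."""
    (a2,) = a_hat
    vals = [(f(x) - a2 * s2(x)) / s1(x) for x in GRID]
    H_val = min(vals)                             # here H = h_1
    Z = [x for x, v in zip(GRID, vals) if v - H_val <= tol]
    return H_val, [(-s2(x) / s1(x),) for x in Z]

H_val, G = active_subgradients((1.0,))
print(H_val)   # 0.25, attained at the single active point x = 0.5
print(G)       # the only subgradient is 0
```

Since the only subgradient here is 0, the remark above (â is a maximum point of H if and only if 0 ∈ G(â)) identifies â = (1,) as optimal for this toy problem.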

Theorem 2.5. The set G̃(â) is the set of subgradients to H at â.

Proof. For each x ∈ Z(â), there exists j ∈ {1, · · · , ℓ} such that

f_j(x) = H(â)s_1(x) + ∑_{i=2}^{n} a_i s_i(x) = h_j(â)s_1(x) + ∑_{i=2}^{n} a_i s_i(x).

By definition, f_j ≥ h_j(b̂)s_1 + ∑_{i=2}^{n} b_i s_i for each b̂ ∈ R^{n−1}, so

f_j(x) ≥ h_j(b̂)s_1(x) + ∑_{i=2}^{n} b_i s_i(x) ≥ H(b̂)s_1(x) + ∑_{i=2}^{n} b_i s_i(x).

Since s_1(x) > 0,

H(b̂) − H(â) ≤ ∑_{i=2}^{n} a_i s_i(x)/s_1(x) − ∑_{i=2}^{n} b_i s_i(x)/s_1(x)
= ∑_{i=2}^{n} (a_i − b_i)s_i(x)/s_1(x)
= (g_x, b̂ − â).

So g_x is a subgradient to H at â, that is, G̃(â) ⊂ G(â).

It remains to prove that all subgradients to H at â are in G̃(â).

Suppose that G̃(â) ≠ G(â). Then G̃(â) ⊊ G(â), and since G̃(â) and G(â) are both convex and compact, there exist a ḡ ∈ G(â) and d ∈ R^{n−1} for which

(ḡ, d) < (g, d) for all g ∈ G̃(â).

Thus

min{ (g, d) : g ∈ G̃(â) } > min{ (g, d) : g ∈ G(â) } = H′_d(â).

We shall show that this strict inequality cannot occur, i.e., that equality must hold, which proves our result.

It suffices to prove that, for each d ∈ R^{n−1}, there exists a g ∈ G̃(â) for which

H′_d(â) = lim_{t→0+} (H(â + td) − H(â))/t = (g, d).

We therefore calculate H′_d(â). Set

g(x) = (f_{j_0}(x) − ∑_{i=2}^{n} a_i s_i(x))/s_1(x) − H(â)

and v_i(x) = s_i(x)/s_1(x), i = 2, · · · , n, where h_{j_0}(â) = H(â). Note that min{g(x) : x ∈ X} = 0.


Now, for each d ∈ R^{n−1},

(H(â + td) − H(â))/t
= (1/t)[ h_{j_0}(â + td) − h_{j_0}(â) ]
= (1/t)[ min_{x∈X} (f_{j_0}(x) − ∑_{i=2}^{n} (a_i + td_i)s_i(x))/s_1(x) − min_{x∈X} (f_{j_0}(x) − ∑_{i=2}^{n} a_i s_i(x))/s_1(x) ]
= (1/t)[ min_{x∈X} (f_{j_0}(x) − ∑_{i=2}^{n} a_i s_i(x) − t∑_{i=2}^{n} d_i s_i(x))/s_1(x) − H(â) ]
= (1/t) min_{x∈X} [ (f_{j_0}(x) − ∑_{i=2}^{n} a_i s_i(x))/s_1(x) − t∑_{i=2}^{n} d_i v_i(x) − H(â) ]
= (1/t) min_{x∈X} { g(x) − t∑_{i=2}^{n} d_i v_i(x) }.

For each x ∈ Z(â),

(1/t){ g(x) − t∑_{i=2}^{n} d_i v_i(x) } = −∑_{i=2}^{n} d_i v_i(x).

Thus

H′_d(â) = lim_{t→0+} (H(â + td) − H(â))/t = lim_{t→0+} (1/t) min_{x∈X} { g(x) − t∑_{i=2}^{n} d_i v_i(x) }
≤ min_{x∈Z(â)} { −∑_{i=2}^{n} d_i v_i(x) }
= min_{x∈Z(â)} (g_x, d).

If equality holds, that is, if for each d ∈ R^{n−1} there exists a g ∈ G̃(â) for which

H′_d(â) = min{ (g_x, d) : g_x ∈ G(â) } = min{ (g_x, d) : g_x ∈ G̃(â) },

then by Proposition 2.4 the proof is complete, that is, G̃(â) = G(â).

We assume that equality does not hold. Set

c = min_{x∈Z(â)} { −∑_{i=2}^{n} d_i v_i(x) } = − max_{x∈Z(â)} ∑_{i=2}^{n} d_i v_i(x).

Then there exists a δ > 0 such that H′_d(â) ≤ c − δ.


Let t_k → 0+ and x_k ∈ X satisfy

H′_d(â) = lim_{k→∞} (1/t_k){ g(x_k) − t_k ∑_{i=2}^{n} d_i v_i(x_k) } ≤ c − δ.

Because the limit exists, it follows that

lim_{k→∞} g(x_k) = 0.

Since X is compact, there exists a subsequence of {x_k}, again denoted by {x_k}, converging to some x. Since g is continuous, g(x) = 0, i.e., x ∈ Z(â).

The function ∑_{i=2}^{n} d_i v_i is continuous, so there exists a K_1 such that for all k ≥ K_1,

| ∑_{i=2}^{n} d_i v_i(x_k) − ∑_{i=2}^{n} d_i v_i(x) | < δ/2.

Thus for k ≥ K_1,

c + ∑_{i=2}^{n} d_i v_i(x_k) = − max_{y∈Z(â)} ∑_{i=2}^{n} d_i v_i(y) + ∑_{i=2}^{n} d_i v_i(x_k)
≤ − ∑_{i=2}^{n} d_i v_i(x) + ∑_{i=2}^{n} d_i v_i(x_k) < δ/2.

There exists a K_2 such that for all k ≥ K_2,

g(x_k) ≤ t_k{ c − δ/2 + ∑_{i=2}^{n} d_i v_i(x_k) }.

Thus, for all k ≥ max{K_1, K_2},

g(x_k) ≤ t_k{ c − δ/2 + ∑_{i=2}^{n} d_i v_i(x_k) } < 0.

But g(x) ≥ 0 for all x ∈ X; this contradiction proves the theorem.

As we have shown, the subgradients to H at â are easily determined theoretically, and we can apply any subgradient algorithm. Suffice it to say that convergence is generally very slow, at least in theory. This paper is concerned with algorithms for calculating best one-sided simultaneous approximations, and these algorithms can be extended to algorithms for calculating best two-sided simultaneous approximations.
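To illustrate, a plain subgradient ascent on H can be sketched as follows (a toy setup of my own, not the paper's algorithm as such): X = [0, 1] discretized, s_1 = 1, s_2(x) = x − 1/2, and one function f(x) = x²; analytically H(â) = −a_2²/4 + a_2/2 here, maximized at a_2 = 1 with H = 1/4.

```python
# Sketch (toy setup): subgradient ascent on the concave function H.
# At each step, pick a minimizing point x* (an element of Z(a_hat)) and
# ascend along the subgradient g = -s_2(x*)/s_1(x*), with s_1 = 1.
GRID = [i / 100 for i in range(101)]

def s2(x):
    return x - 0.5

def f(x):
    return x * x

def subgradient_ascent(a2=0.0, steps=50, t=1.0):
    """Constant-step ascent; real subgradient methods use decaying steps."""
    for _ in range(steps):
        x_star = min(GRID, key=lambda x: f(x) - a2 * s2(x))  # point of Z(a_hat)
        a2 += t * (-s2(x_star))       # subgradient step
    return a2

a2 = subgradient_ascent()
H_val = min(f(x) - a2 * s2(x) for x in GRID)
print(a2, H_val)   # near the analytic optimum a2 = 1, H = 0.25
```

With a constant step this converges quickly only because the toy H is smooth near its maximum; as noted above, general subgradient methods converge very slowly and need a decaying step size t_k.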


References

[1] L. Elsner, I. Koltracht and M. Neumann, Convergence of sequential and asynchronous nonlinear paracontractions, Numerische Mathematik 62 (1992), 305-319.

[2] S. H. Park and H. J. Rhee, Characterization of best simultaneous approximation for a compact set, Bull. Korean Math. Soc. 33 (1996), 435-444.

[3] S. H. Park and H. J. Rhee, One-sided best simultaneous L1-approximation, J. Korean Math. Soc. 33 (1996), 155-167.

[4] A. Pinkus, On L1-approximation, Cambridge Tracts in Mathematics 93, Cambridge University Press, Cambridge-New York, 1989.

[5] H. J. Rhee, An algorithm of the one-sided best simultaneous approximation, Korean Ann. Math. 24 (2007), 69-79.

[6] I. Singer, Best Approximation in Normed Linear Spaces by Elements of Linear Subspaces, Springer-Verlag, New York, 1970.

*
Department of Mathematics
Duksung Women's University
Seoul 132-714, Republic of Korea
E-mail: rhj@duksung.ac.kr
