
Full asymptotics of spline Petrov–Galerkin methods for some periodic pseudodifferential equations

Víctor Domínguez
Dep. Matemática Aplicada, Universidad de Zaragoza
Edificio de Matemáticas, Campus Universitario, 50009 Zaragoza (Spain)
e-mail: [email protected]

Francisco-Javier Sayas
Dep. Matemática Aplicada, Universidad de Zaragoza
Centro Politécnico Superior, María de Luna, s/n, 50015 Zaragoza (Spain)
e-mail: [email protected]

January 7, 2002

Abstract. In this paper we study the existence of a formal series expansion of the error of spline Petrov–Galerkin methods applied to a class of periodic pseudodifferential equations. From this expansion we derive some new superconvergence results as well as alternative proofs of already known weak norm optimal convergence results. As part of the analysis, the approximation of integrals of smooth functions multiplied by splines by rectangular rules is analyzed in detail. Finally, some numerical experiments are given to illustrate the applicability of Richardson extrapolation as a means of accelerating the convergence of the methods.

Keywords. Splines, Petrov–Galerkin methods, boundary integral equations, Richardson extrapolation. AMS Subject classification. 65R20

1

Introduction

This paper concerns itself with the numerical solution of a class of periodic pseudodifferential equations by means of Petrov–Galerkin methods with smoothest splines as test and trial functions. The importance of pseudodifferential equations lies in the fact that boundary integral operators on smooth curves of the plane can be transformed into operators of this kind once a parameterization of the curve is given. The main result of this work is the proof of existence of an asymptotic expansion (in integral powers of the discretization parameter $h$) of the error obtained by comparison of

the numerical solution with an optimal order projection onto the trial space. This first expansion is a false asymptotic series in the sense that all the coefficients depend again on the parameter. However, as a simple byproduct we obtain a proper expansion of the error of linear postprocessings of the solution, as happens when we construct the solution of the original boundary value problem (by a representation formula or a potential) once the associated boundary integral equation is solved. It was already known that spline Petrov–Galerkin methods exhibit superconvergence properties in weak norms, depending on the balance between the degrees of the test and trial spline spaces and the order of the operator. The asymptotic series obtained in this work gives an alternative proof of this fact by showing that weakest norm superconvergence is tantamount to finding a non-zero term in the expansion. On the other hand, we obtain some pointwise superconvergence results at points whose position relative to the discretization grid is related to the roots of Bernoulli polynomials. The key to these results lies in the appropriate comparison of the numerical solution with a Fourier spline projection $D_h^d u$. The operator $D_h^d$ is a particular case of the trigonometric–spline Petrov–Galerkin discretizations already studied in [2]; it is used here for the sake of asymptotic comparison, because of its optimal approximation properties. To do this we apply some recent expansions obtained in [7]. The second key fact leading to our analysis is the possibility of expanding some pseudodifferential operators in a formal series of simpler operators. The main difficulties are then the study of commutation properties of the auxiliary discrete operator $D_h^d$ with simple pseudodifferential operators and the coupling of theoretical (operator) and numerical expansions.
We have to remark that the scope of application of these results is a priori limited to operators that can be expanded in the above mentioned formal series of operators of integral order. If one demands only a limited expansion, the numerical asymptotic series turns out to be a sum of a finite number of error terms. However, we insist on the fact that for the full asymptotics of a method the whole of the formal series is relevant, and not only the principal part, on which stability relies. Compared to [13], where the collocation method is worked out, this article presents some novelties. Because we do not compare with the exact solution but with a projection onto the discrete space, we are allowed to deal with the error series itself and not with the action of a smoothing functional applied to the error. In addition to the advantages drawn from this, the inclusion of the effect of numerical integration will possibly be simpler in this framework. This work also widens the preceding analysis of [6] in several ways. First, we include a much wider class of operators and methods ([6] only studies logarithmic operators with 'constant coefficients' and Galerkin methods). Also, the comparison with a better discrete projection than interpolation gives advantages when working with weak norms. Finally, since qualocation methods [4, 17] can be considered as a partial application of numerical integration to Galerkin methods, the results of this paper can be taken as a first step towards an asymptotic analysis of that family of numerical schemes. The practical applications of having asymptotic expansions of the error are manifold: the use of Richardson extrapolation for acceleration of convergence and for a posteriori error estimation, an alternative viewpoint on superconvergence, and the new results on pointwise convergence. The paper is structured as follows. Section 2 presents the operator equations and

numerical methods. The Fourier spline projector $D_h^d$ is then introduced in Section 3, where we also derive some asymptotic properties of the action of the basic monomial operators on the error of this discrete operator. Following these results, the main work of Section 4 is the full asymptotic expansion of the consistency error of the whole family of spline Petrov–Galerkin methods, which is then applied in Section 5 to give (under stability hypotheses) the results of this paper. Some technical questions used in Section 4 are treated separately in Section 6, where we study the approximation of the integral of the product of a smooth function with a spline by displaced rectangular sums. In fact, what we obtain is a new kind of Euler–Maclaurin expansion when splines, placed on the same grid on which the composite integration rule is defined, act as a class of weights in the integrals. The possibilities of applying Richardson extrapolation to accelerate the convergence of the method, as well as the pointwise convergence, are illustrated by two different numerical examples. The first of them is a very simple oddly elliptic operator equation of negative order. The second one is a strongly elliptic equation of positive order, requiring some additional numerical integration. The article is finished with an appendix where we show that two important classes of operators belong to the set of pseudodifferential operators for which the preceding analysis is valid. Throughout the paper $C$, $C'$, $C''$ denote constants independent of the discretization parameter and of any other quantity that is multiplied by them, possibly different at each occurrence. The Landau symbol $O$ is used in the standard way.

2

Operators and methods

We briefly introduce the space of periodic distributions and the scale of Sobolev spaces. Let $\mathcal D$ be the set of 1–periodic infinitely differentiable functions
$$\mathcal D := \{ \varphi : \mathbb R\to\mathbb C \mid \varphi\in C^\infty(\mathbb R),\ \varphi = \varphi(\cdot+1) \},$$
endowed with the metrizable locally convex topology defined by the norms
$$\|\varphi\|_{\infty,k} := \max_{0\le l\le k}\ \sup_{x\in\mathbb R} |\varphi^{(l)}(x)|, \qquad k\ge 0.$$
The elements of its dual space $\mathcal D'$ are referred to as periodic distributions. Given $T\in\mathcal D'$ we can define its Fourier coefficients
$$\widehat T(m) := \langle T, \phi_{-m}\rangle, \qquad m\in\mathbb Z,$$
where $\langle\cdot,\cdot\rangle$ denotes the duality product and $\phi_m := \exp(2\pi m\imath\,\cdot)$. It is well known that any periodic distribution is uniquely determined by its Fourier coefficients. Integrable functions $f\in L^1(0,1)$ define periodic distributions in the usual sense,
$$\langle f,\varphi\rangle := \int_0^1 f(t)\,\varphi(t)\, dt, \qquad \varphi\in\mathcal D.$$

In this case, functional and distributional Fourier coefficients coincide.

We define the periodic Sobolev spaces
$$H^s := \Big\{ u\in\mathcal D' \ \Big|\ \sum_{m\ne 0} |m|^{2s}\, |\widehat u(m)|^2 < \infty \Big\},$$
for arbitrary $s\in\mathbb R$. Then $H^s$ is a Hilbert space with the inner product
$$(u,v)_s = \widehat u(0)\,\overline{\widehat v(0)} + \sum_{m\ne 0} |m|^{2s}\,\widehat u(m)\,\overline{\widehat v(m)}.$$
Clearly $H^0 = L^2(0,1)$ and $(\cdot,\cdot)_0$ is the $L^2$–product. For $r>s$, $H^r$ is compactly embedded into $H^s$. A linear map $A:\mathcal D'\to\mathcal D'$ such that $A : H^s \to H^{s-n}$ is bounded for all $s\in\mathbb R$, for a given value $n\in\mathbb Z$, is called a pseudodifferential operator ($\Psi$do for short) of order (at most) $n$. We will be interested in a particular set of pseudodifferential operators, defined in terms of some basic operators which we specify now. For $j\in\mathbb Z$ we define the operator
$$D^j u := \sum_{m\ne 0} (2 m\pi\imath)^j\,\widehat u(m)\,\phi_m, \qquad (1)$$
which is a $\Psi$do of order $j$. Notice that $D^j D^k = D^{j+k}$ for all $j$ and $k$. Moreover, $D^1 = D$ is the differentiation operator, $D^0 u = u - \widehat u(0)$ for all $u$, and $D^{-1}$ is a sort of inverse differentiation operator, since
$$D^{-1} u(t) = \int_0^t \big( u(x) - \widehat u(0) \big)\, dx.$$
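The action of the operators $D^j$ is easy to visualize numerically. The following sketch (our own illustration, not part of the paper; the helper name `Dj` is ours) applies $D^j$ as a Fourier multiplier with the FFT on trigonometric polynomials:

```python
import numpy as np

# Sketch: apply D^j u = sum_{m != 0} (2*pi*i*m)^j u_hat(m) phi_m to samples of a
# 1-periodic trigonometric polynomial on the uniform grid t_k = k/N.
def Dj(u_vals, order):
    N = len(u_vals)
    u_hat = np.fft.fft(u_vals) / N            # Fourier coefficients
    m = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
    factor = np.zeros(N, dtype=complex)
    nz = m != 0                               # the zero mode is dropped
    factor[nz] = (2j * np.pi * m[nz]) ** order
    return np.fft.ifft(factor * u_hat) * N

N = 64
t = np.arange(N) / N
u = np.sin(2 * np.pi * t)

# D^1 = D is differentiation: D sin(2 pi t) = 2 pi cos(2 pi t)
assert np.allclose(Dj(u, 1).real, 2 * np.pi * np.cos(2 * np.pi * t))
# D^{-1} is the zero-mean antiderivative: here -cos(2 pi t) / (2 pi)
assert np.allclose(Dj(u, -1).real, -np.cos(2 * np.pi * t) / (2 * np.pi))
```

The sketch assumes $u$ is resolved by the grid, so the FFT coefficients coincide with the Fourier coefficients.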

On the other hand we consider the Hilbert transform
$$\mathcal H u := \sum_{m\ne 0} \operatorname{sign}(m)\,\widehat u(m)\,\phi_m,$$
where
$$\operatorname{sign}(m) = \begin{cases} 1, & \text{if } m>0,\\ -1, & \text{if } m<0. \end{cases}$$
Obviously, $\mathcal H$ is a $\Psi$do of order $0$ and $\mathcal H^2 u = u - \widehat u(0)$ for all $u$. We say that $A$ is an expandable pseudodifferential operator of order $n$, and we write $A\in\mathcal E(n)$, if there exist two sequences of functions $(a_j)_{j=-\infty}^{n}$, $(b_j)_{j=-\infty}^{n} \subset \mathcal D$ such that for every integer $M$
$$A = \sum_{j=-M}^{n} a_j\, D^j + \sum_{j=-M}^{n} b_j\,\mathcal H D^j + K_M \qquad (2)$$
with $K_M$ a $\Psi$do of order $-M-1$. The set of all expandable operators will be denoted by $\mathcal E$. Algebraic properties of this set, which are not needed in what follows, can be found in [8]. The aim of this work is the numerical solution of equations of the form
$$A u = f, \qquad (3)$$

where $A\in\mathcal E(n)$. This will be done by means of a standard Petrov–Galerkin scheme with smoothest splines on a uniform grid as test and trial spaces. Let $N$ be a positive integer, $h := 1/N$ and $x_i := ih$ for all $i\in\mathbb Z$. The space of 1–periodic smoothest splines of degree $d\ge 1$ over the grid $\{x_i\}_{i\in\mathbb Z}$ is defined by
$$S_h^d := \{ u\in C^{d-1} \mid u|_{[x_i,x_{i+1}]} \in \mathbb P_d \},$$
where $\mathbb P_d$ is the set of polynomials of degree at most $d$. The space of 1–periodic piecewise constant functions will consequently be denoted $S_h^0$. It is well known that $S_h^d \subset H^s$ for all $s < d+1/2$. The Petrov–Galerkin method for solving (3) is
$$u_h \in S_h^d, \qquad (A u_h, r_h)_0 = (f, r_h)_0, \quad \forall r_h \in S_h^{d'}. \qquad (4)$$
We restrict ourselves to the case $d\ge n$, so that $A u_h\in H^0$ for all $u_h\in S_h^d$. The method could also be extended to cover the more general case where $d+d'\ge n$, understanding the $H^0$ product as a duality product. However, for our analysis it will be simpler to avoid this generality. The stability analysis of these methods has been studied in [12] and some previous works. We return to the subject of stability and convergence in Section 5.
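The operators $D^j$ and $\mathcal H$ are Fourier multipliers, so their algebraic identities can be verified numerically. A minimal sketch (our own, with hypothetical helper names) checking $\mathcal H^2 u = u - \widehat u(0)$ and $D^j D^k = D^{j+k}$:

```python
import numpy as np

N = 32
t = np.arange(N) / N

def multiplier(u, symbol):
    """Apply a Fourier multiplier operator: coefficient m is scaled by symbol(m)."""
    u_hat = np.fft.fft(u) / N
    freqs = np.fft.fftfreq(N, d=1.0 / N).astype(int)
    return np.fft.ifft(np.array([symbol(m) for m in freqs], dtype=complex) * u_hat) * N

def H(u):
    # Hilbert transform: sign(m) multiplier (the m = 0 mode is dropped)
    return multiplier(u, lambda m: float(np.sign(m)))

def D(u, order):
    # D^j: multiplier (2*pi*i*m)^j on the nonzero modes
    return multiplier(u, lambda m: (2j * np.pi * m) ** order if m != 0 else 0.0)

u = 3.0 + np.cos(2 * np.pi * t) + np.sin(4 * np.pi * t)   # mean value 3

# H^2 u = u - u_hat(0)
assert np.allclose(H(H(u)).real, u - u.mean())
# D^j D^k = D^{j+k}  (here with j = 1, k = -1, so the product is D^0)
assert np.allclose(D(D(u, 1), -1).real, D(u, 0).real)
# D^0 u = u - u_hat(0)
assert np.allclose(D(u, 0).real, u - u.mean())
```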

3

A Fourier spline projection

In this section we define a family of projections onto the spline spaces by means of a Fourier coefficient interpolation. We define the set of indices
$$\Lambda_N := \{ \mu\in\mathbb Z \mid -N/2 < \mu \le N/2 \}.$$

We say that $A$ is strongly elliptic if there exists $\varphi\in\mathcal D$ such that
$$\mathrm{Re}\big( \varphi(x)\,( a_n(x) + b_n(x) ) \big) > 0, \qquad \forall x\in\mathbb R,$$
$$\mathrm{Re}\big( (-1)^n\,\varphi(x)\,( a_n(x) - b_n(x) ) \big) > 0, \qquad \forall x\in\mathbb R.$$
Equivalent definitions of strong ellipticity, including one in Gårding's inequality form, are given in [?]. Following already classical notations (see [17] for instance), we say that $A$ is oddly elliptic if there exists $\varphi\in\mathcal D$ such that
$$\mathrm{Re}\big( \varphi(x)\,( a_n(x) + b_n(x) ) \big) > 0, \qquad \forall x\in\mathbb R,$$
$$\mathrm{Re}\big( (-1)^n\,\varphi(x)\,( a_n(x) - b_n(x) ) \big) < 0, \qquad \forall x\in\mathbb R,$$
which is equivalent to $\mathcal H A$ being strongly elliptic. We now give necessary and sufficient conditions for the Petrov–Galerkin method (PG in the sequel) to converge. The natural norm in which to study convergence and stability of the PG scheme is that of $H^{(n+d-d')/2}$. To simplify some forthcoming expressions we denote $\alpha := d - d'$. Stability of the method means the existence of a positive constant $C$ independent of $h$ such that
$$\|u_h\|_{(n+\alpha)/2} \le C\, \|u\|_{(n+\alpha)/2}, \qquad \forall u\in H^{(n+\alpha)/2},$$
being $u_h$ the numerical solution (4) with $f = Au$ as right hand side. It is well known that stability is equivalent to convergence in the same norm and to a classical Céa estimate (cf. [10]). The Galerkin case ($d = d'$) is well known [16, 18]: stability is then equivalent to strong ellipticity. For the more general PG case, one can always transform the numerical scheme into a Galerkin method, to which one applies the theory. We summarize these results in the following proposition.

Proposition 5 The PG method is stable if and only if one of the two following conditions holds:

(a) $A$ is strongly elliptic and $d$ and $d'$ have the same parity;

(b) $A$ is oddly elliptic and $d$ and $d'$ are of opposite parity.

In that case there exists $\beta > 0$ such that for $h$ small enough the Babuška–Brezzi condition is satisfied:
$$\inf_{u_h\in S_h^d}\ \sup_{r_h\in S_h^{d'}} \frac{|(A u_h, r_h)_0|}{\|u_h\|_{(n+\alpha)/2}\,\|r_h\|_{(n-\alpha)/2}} > \beta. \qquad (13)$$

In the sequel $G_h^{d,d'} u$ denotes the numerical solution of the Petrov–Galerkin method (4) when $f = Au$. We first give an asymptotic expansion of the comparison between $G_h^{d,d'} u$ and $D_h^d u$. From this point to the end of this section we will not explicitly indicate the regularity assumptions on the exact solution, since doing so would demand the distinction of several cases and the notation would become cumbersome. Consequently, we assume that $u\in\mathcal D$. Asymptotic series will be denoted in the usual way: we write that in some norm
$$a_h \sim \sum_{k=k_0}^{\infty} h^k\, a_k$$
when for all $M$
$$\Big\| a_h - \sum_{k=k_0}^{M} h^k\, a_k \Big\| \le C\, h^{M+1}.$$

Proposition 6 For all $s < d+1/2$ the expansion
$$D_h^d u - G_h^{d,d'} u \sim \sum_{k=d+d'+2-n}^{\infty} h^k\, G_h^{d,d'} A^{-1} F_k u$$
holds in $H^s$. Moreover, the expansion is also valid in the $L^\infty$ norm of the $d$th order derivatives.

Proof. We first show the result for $s = (n+\alpha)/2$, i.e., in the natural stability norm. We distinguish two cases: $n-\alpha\ge 0$ and $n-\alpha<0$. In the first case, since $\|r_h\|_0 \le \|r_h\|_{(n-\alpha)/2}$ for all $r_h$, by (13) and Theorem 4 we obtain
$$\Big\| D_h^d u - G_h^{d,d'} u - \sum_{k=d+d'+2-n}^{M} h^k\, G_h^{d,d'} A^{-1} F_k u \Big\|_{(n+\alpha)/2} \le C\, h^{M+1}\, \|u\|_{M+n+1}. \qquad (14)$$

If $n-\alpha < 0$, we have the inverse inequality
$$\|r_h\|_0 \le C\, h^{(n-\alpha)/2}\, \|r_h\|_{(n-\alpha)/2}, \qquad \forall r_h\in S_h^{d'}. \qquad (15)$$
We apply Theorem 4 with $M'$ such that $M \le M' + \frac{n-\alpha}{2} < M+1$. Proceeding as before and applying (15), we obtain
$$\Big\| D_h^d u - G_h^{d,d'} u - \sum_{k=d+d'+2-n}^{M} h^k\, G_h^{d,d'} A^{-1} F_k u - \sum_{k=M+1}^{M'} h^k\, G_h^{d,d'} A^{-1} F_k u \Big\|_{(n+\alpha)/2} \le C\, h^{M+1}\, \|u\|_{M'+n+1}. \qquad (16)$$

To bound the addenda corresponding to indices $M+1 \le k \le M'$ we apply stability and obtain
$$\|G_h^{d,d'} A^{-1} F_k u\|_{(n+\alpha)/2} \le C\, \|A^{-1} F_k u\|_{(n+\alpha)/2} \le C'\, \|u\|_{(n+\alpha)/2 + k} \le C''\, \|u\|_{M+\alpha+1},$$
since $A^{-1} F_k$ is a $\Psi$do of order $k$. In case $n-\alpha$ is even, this bound is attainable for $u\in H^{M+\alpha}$. These bounds and (16) prove the result. The validity of the expansion in stronger norms can be derived with standard techniques from the inverse inequalities for splines. See [6], Theorem 2, for a similar result. □

If $s \le n - d' - 1$ and $u_h := G_h^{d,d'} u$, Proposition 6 and (5) readily show that
$$\|u_h - u\|_s \le C\, h^{d+d'+2-n}.$$
This is the weak norm superconvergence phenomenon of PG methods, which had already been observed in [12]. In fact the result is optimal since, writing $k = d+d'+2-n$, the nontrivial term $A^{-1} F_k u$ appears with the $k$th power of $h$ in the expansion. In stronger norms, the error is simply an approximation error, which cannot be improved since the approximation of $u$ by $D_h^d u$ is optimal. Furthermore, by induction we can derive the following asymptotic series.

Corollary 7 For all $u\in\mathcal D$, there exists a sequence of functions $g_k$ such that
$$D_h^d u - u_h \sim \sum_{k=d+d'+2-n}^{\infty} h^k\, D_h^d g_k.$$

Moreover, by comparison of $D_h^d u$ and $u$ at some special points (see [7], Corollary 3.2) we can straightforwardly prove the following nodal superconvergence property.

Corollary 8 Let $u$ be smooth enough. If $d$ is even and $d'\ge n$, then
$$\max_i |u_h(x_i) - u(x_i)| \le C\, h^{d+2}. \qquad (17)$$
If $d$ is odd, $d'\ge n$ and $\epsilon$ is the only root of $B_{d+1}$ in $(0,1/2)$, then
$$\max_i |u_h(x_i \pm \epsilon h) - u(x_i \pm \epsilon h)| \le C\, h^{d+2}. \qquad (18)$$
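For $d=1$, the relevant polynomial in Corollary 8 is $B_2(x) = x^2 - x + 1/6$. A quick check (our own) that its root in $(0,1/2)$ is $1/2-\sqrt3/6 \approx 0.2113$, the offset $\zeta_1$ used later in the numerical experiments:

```python
import numpy as np

# Roots of B_2(x) = x^2 - x + 1/6, the Bernoulli polynomial entering
# Corollary 8 for splines of degree d = 1.
roots = np.sort(np.roots([1.0, -1.0, 1.0 / 6.0]).real)
zeta1, zeta2 = roots
assert abs(zeta1 - (0.5 - np.sqrt(3) / 6)) < 1e-12
assert abs(zeta2 - (0.5 + np.sqrt(3) / 6)) < 1e-12
assert 0 < zeta1 < 0.5     # the superconvergence offset of Corollary 8
```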

To finish this section we show how the action of a linear regularizing operator with values in a normed space affects Proposition 6, producing a proper error expansion where all the coefficients are independent of $h$. Operators of this kind appear in postprocessings of the solution of boundary integral equations, such as the evaluation of potentials.

Corollary 9 Let $X$ be a normed space and let $T : H^\nu \to X$ be linear and bounded for all $\nu$. Then for every smooth $u$ there exists a sequence of functions $(v_k)_k$ such that in $X$
$$T(u - u_h) \sim \sum_{k=d+d'+2-n}^{\infty} h^k\, T v_k.$$

Proof. This is a straightforward consequence of Proposition 6 and the fact that $\|T(D_h^d u - u)\|_X \le C h^M$ for all $M$, because of (5). □


6

Approximation of some inner products

In this section we state and prove some technical results, already used in Section 4, concerning the asymptotic behavior of the following numerical quadrature: given $f$, $r_h\in S_h^d$ and $t\in\mathbb R$,
$$\int_0^1 f(s)\, r_h(s)\, ds \approx h \sum_{i=1}^{N} f(h(i+t))\, r_h(h(i+t)).$$
We have thus approximated the inner product of a function with a spline by a displaced rectangular rule. Let
$$L_h(f,t) := h \sum_{i=1}^{N} f(h(i+t)).$$
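Before turning to the spline-weighted case, note that for a smooth 1–periodic $f$ alone the displaced rectangular rule is spectrally accurate, uniformly in the shift $t$; the lemmas below quantify how a spline factor changes this. A small toy check (our own):

```python
import numpy as np

# L_h(f, t) = h * sum_{i=1}^N f(h(i+t)) for a smooth 1-periodic f is
# spectrally accurate, uniformly in the shift t.
f = lambda x: np.exp(np.sin(2 * np.pi * x))

# reference value of int_0^1 f, computed on a much finer grid
M = 1 << 16
ref = np.mean(f(np.arange(M) / M))

N, h = 40, 1.0 / 40
for t in (0.0, 0.25, 0.618):
    Lh = h * np.sum(f(h * (np.arange(1, N + 1) + t)))
    assert abs(Lh - ref) < 1e-13
```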

Notice that $L_h(f,\cdot)$ is a 1–periodic function. In the sequel we will be dealing with splines of degree $d\ge 1$. All the results remain valid for piecewise constant functions ($d=0$) provided that we take the mean value of $r_h$ at the breakpoints as $r_h(x_i)$; in that case the proofs can be carried out with classical techniques without any complication.

Lemma 10 Let $r_h\in S_h^d$ and $f\in H^\nu$ with $1/2 < \nu \le d+1$. Then, uniformly in $t\in[0,1]$,
$$L_h(f r_h, t) = \int_0^1 f(s)\, r_h(s)\, ds + O(h^\nu)\,\|f\|_\nu\,\|r_h\|_0.$$

Proof. Following [4] and [7] we write $r_h\in S_h^d$ in the form
$$r_h = \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\, \varphi_\mu^d,$$
where $\varphi_\mu^d := D_h^d \phi_\mu$. Moreover, we have that
$$\varphi_\mu^d = \Big[ 1 + \Delta_d\Big( N\cdot,\frac{\mu}{N} \Big) \Big]\,\phi_\mu, \qquad \forall \mu\in\Lambda_N,$$
being (cf. [4])
$$\Delta_d(x,y) := y^{d+1} \sum_{m\ne 0} \frac{1}{(m+y)^{d+1}}\,\phi_m(x). \qquad (19)$$
Since for all $r>1$
$$\sup_{-1/2\le y\le 1/2}\ \sum_{m\ne 0} \frac{1}{|m+y|^r} < \infty, \qquad (20)$$
we have
$$|\Delta_d(x,y)| \le C\,|y|^{d+1}, \qquad \forall (x,y)\in\mathbb R\times[-1/2,1/2]. \qquad (21)$$
Notice first that
$$r_h(h(i+t)) = \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\Big[ 1 + \Delta_d\Big( t,\frac{\mu}{N} \Big) \Big]\,\phi_\mu(h(i+t)).$$

Therefore we can decompose
$$L_h(f r_h, t) = \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\Big[ 1 + \Delta_d\Big( t,\frac{\mu}{N} \Big) \Big]\,\widehat f(-\mu) + \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\Big[ 1 + \Delta_d\Big( t,\frac{\mu}{N} \Big) \Big]\Big\{ \sum_{m\ne 0} \widehat f(-\mu+mN)\,\phi_m(t) \Big\}, \qquad (22)$$
where we have used the easily verifiable identity
$$h \sum_{i=1}^{N} f(h(i+t))\,\phi_\mu(h(i+t)) = \widehat f(-\mu) + \sum_{m\ne 0} \widehat f(-\mu+mN)\,\phi_m(t).$$
We remark that all the series involved converge absolutely and uniformly in $t$. On the one hand,
$$\sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,\widehat f(-\mu) = \int_0^1 f(s)\, r_h(s)\, ds + O(h^\nu)\,\|f\|_\nu\,\|r_h\|_0, \qquad (23)$$
since
$$\Big\| f - \sum_{-\mu\in\Lambda_N} \widehat f(\mu)\,\phi_\mu \Big\|_0 \le C\, h^\nu\,\|f\|_\nu.$$
Secondly, we can use (21) and the fact that $\nu\le d+1$ to bound
$$\Big| \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,\Delta_d\Big( t,\frac{\mu}{N} \Big)\,\widehat f(-\mu) \Big| \le C \sum_{\mu\in\Lambda_N} |\widehat r_h(\mu)|\,\Big|\frac{\mu}{N}\Big|^{d+1} |\widehat f(-\mu)| \le C\, h^\nu \sum_{\mu\in\Lambda_N} |\widehat r_h(\mu)|\,|\mu|^\nu\, |\widehat f(-\mu)| \le C\, h^\nu\,\|r_h\|_0\,\|f\|_\nu. \qquad (24)$$

Finally, by (20),
$$\Big| \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\Big[ 1 + \Delta_d\Big( t,\frac{\mu}{N} \Big) \Big]\Big\{ \sum_{m\ne 0} \widehat f(-\mu+mN)\,\phi_m(t) \Big\} \Big| \le C\,\|r_h\|_0 \Big[ \sum_{\mu\in\Lambda_N} \Big| \sum_{m\ne 0} \widehat f(-\mu+mN)\,\phi_m(t) \Big|^2 \Big]^{1/2}$$
$$\le C\,\|r_h\|_0 \Big[ \sum_{\mu\in\Lambda_N} \Big\{ \sum_{m\ne 0} |-\mu+mN|^{2\nu}\,|\widehat f(-\mu+mN)|^2 \Big\}\Big\{ \sum_{m\ne 0} \frac{h^{2\nu}}{|m - \frac{\mu}{N}|^{2\nu}} \Big\} \Big]^{1/2} \le C\, h^\nu\,\|r_h\|_0\,\|f\|_\nu. \qquad (25)$$
Gathering all the bounds we have obtained so far, the result follows readily. □

Lemma 11 Let $f\in H^{M+1}$ and $r_h\in S_h^d$. Then, uniformly in $t\in\mathbb R$,
$$L_h(f r_h, t) = \int_0^1 f(s)\, r_h(s)\, ds + \sum_{j=d+1}^{M} (-1)^j\, h^j\,\gamma_j^d\, B_j(t) \int_0^1 f^{(j)}(s)\, r_h(s)\, ds + O(h^{M+1})\,\|r_h\|_0\,\|f\|_{M+1}.$$

Proof. Consider again decomposition (22) with $\Delta_d$ defined by (19). We first remark that the bounds (23) and (25) hold for $\nu = M+1$. Thus
$$L_h(f r_h, t) = \int_0^1 f(s)\, r_h(s)\, ds + \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,\Delta_d\Big( t,\frac{\mu}{N} \Big)\,\widehat f(-\mu) + O(h^{M+1})\,\|r_h\|_0\,\|f\|_{M+1}.$$
We now recall the expansion ([7], Lemma 3.1)
$$\Delta_d(x,y) = \sum_{j=d+1}^{M} \gamma_j^d\,(2\pi\imath)^j\, B_j(x)\, y^j + G_M(x,y),$$
where the remainder satisfies
$$|G_M(x,y)| \le C\,|y|^{M+1}, \qquad \forall (x,y)\in\mathbb R\times[-1/2,1/2].$$
Then
$$\sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,\Delta_d\Big( t,\frac{\mu}{N} \Big)\,\widehat f(-\mu) = \sum_{j=d+1}^{M} h^j\,\gamma_j^d\, B_j(t)\,\Big\{ \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,(2\pi\imath\mu)^j\,\widehat f(-\mu) \Big\} + \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\, G_M\Big( t,\frac{\mu}{N} \Big)\,\widehat f(-\mu)$$
$$= \sum_{j=d+1}^{M} h^j\,\gamma_j^d\, B_j(t)\,(-1)^j\,\Big\{ \sum_{\mu\in\Lambda_N} \widehat r_h(\mu)\,\widehat{f^{(j)}}(-\mu) \Big\} + O(h^{M+1})\,\|f\|_{M+1}\,\|r_h\|_0,$$
proceeding as in (24). Finally, applying (23) to $f^{(j)}$ with $\nu = M+1-j$, the result follows. □

7

Numerical experiments

In this section we present two examples corroborating our theoretical results. The first one is a Petrov–Galerkin method for an oddly elliptic operator equation. The second one is a Galerkin method for a strongly elliptic hypersingular equation. In this last case, numerical integration is necessary in order to fully discretize the problem; this is done in a way that preserves the theoretical properties of the method.

An oddly elliptic operator

Let $A := D^{-1} + J$, where $J u := \widehat u(0)$. Then $A$ is an invertible, oddly elliptic $\Psi$do of order $-1$. Moreover, $D^{-1}$ can be written as convolution with $B_1$:
$$D^{-1} u = -\int_0^1 B_1(\cdot - t)\, u(t)\, dt,$$
as can easily be derived from the Fourier expansion of the first Bernoulli polynomial. We want to solve numerically the equation $Au = f$ with a PG method with $S_h^1$ and $S_h^0$ as trial and test space respectively:
$$u_h\in S_h^1, \qquad -\int_0^1\!\!\int_0^1 B_1(s-t)\, u_h(t)\, r_h(s)\, dt\, ds + \int_0^1 J u_h\, J r_h = \int_0^1 f(s)\, r_h(s)\, ds, \qquad \forall r_h\in S_h^0. \qquad (26)$$
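The convolution representation of $D^{-1}$ can be verified numerically. The following sketch (our own toy check) tests $D^{-1} u = -\int_0^1 B_1(\cdot - t)\,u(t)\,dt$ for $u(t) = \sin 2\pi t$, whose zero-mean antiderivative is $-\cos(2\pi t)/(2\pi)$:

```python
import numpy as np

M = 20000                       # fine quadrature grid (midpoint rule)
s = (np.arange(M) + 0.5) / M    # midpoints avoid the jump of B_1 at 0
B1 = lambda x: (x % 1.0) - 0.5  # 1-periodic extension of the first Bernoulli polynomial

for t in (0.0, 0.3, 0.77):
    approx = -np.sum(B1(t - s) * np.sin(2 * np.pi * s)) / M
    exact = -np.cos(2 * np.pi * t) / (2 * np.pi)
    assert abs(approx - exact) < 1e-3
```

The tolerance is loose because the integrand has a jump at $s=t$, so the midpoint rule is only first-order accurate here.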

Methods where $S_h^0$ acts as test space are called region methods (cf. [11], Chapter 12). We take
$$f(t) := \exp(\sin 2\pi t)\cos 2\pi t + \sin 2\pi t,$$
so that the exact solution of the problem is
$$u(t) = D f(t) = \pi \exp(\sin 2\pi t)\,( 1 + \cos 4\pi t - 2\sin 2\pi t ) + 2\pi\cos 2\pi t.$$
The discrete scheme (26) is equivalent to a linear system whose coefficients can all be calculated exactly. For our test we have chosen $N = 8, 16, 32, 64, 128, 256$ and $512$. If $u_h$ denotes the numerical solution for $h = 1/N$, and $\zeta_1 = 1/2 - \sqrt3/6$, $\zeta_2 = 1/2 + \sqrt3/6$ are the two roots of $B_2$, we compute the errors
$$e_h^0 := \max_{i=0,\dots,N-1} |u_h(x_i) - u(x_i)| = O(h^2), \qquad (27)$$
$$e_h^j := \max_{i=0,\dots,N-1} |u_h(x_i + \zeta_j h) - u(x_i + \zeta_j h)| = O(h^3), \qquad j=1,2. \qquad (28)$$

The bound in (28) follows from (18), whereas the expected convergence rate in (27) can be inferred from the comparison with $D_h^1 u$ at the knots (see [7], Corollary 3.2). Table 1 shows these errors together with estimated convergence rates (e.c.r.) obtained by comparing consecutive errors in the table.

Table 1: Nodal errors and e.c.r. for the first example

N      e_h^0      e.c.r.   e_h^1      e.c.r.   e_h^2      e.c.r.
8      2.983               0.947               0.964
16     0.902      1.726    0.159      2.569    0.171      2.498
32     0.222      2.025    1.925E-2   3.052    1.979E-2   3.109
64     5.500E-2   2.010    2.462E-3   2.968    2.449E-3   3.014
128    1.373E-2   2.003    3.066E-4   3.054    3.059E-4   3.001
256    3.430E-3   2.001    3.827E-5   3.001    3.830E-5   2.998
512    8.578E-4   1.999    4.786E-6   3.000    4.786E-6   3.000

Given $\varphi\in\mathcal D$, the functional
$$T : H^\nu \to \mathbb C, \qquad u \mapsto (u,\varphi)_0,$$
is bounded for all $\nu$. Since in this example $F_{2k+1}\equiv 0$ for all $k$, the expansion of Proposition 6 contains only even powers of $h$. Therefore, Corollary 9 now reads
$$T u - T u_h = \sum_{k=2}^{M} h^{2k}\, T u_k + O(h^{M+2}).$$
We take $\varphi(t) = \cos 4\pi t$ and calculate $T u_h$ for the same values of $h$. We then apply Richardson extrapolation (cf. [?]) to obtain new approximations to $T u$ as follows:
$$\gamma_h^0 := T u_h, \qquad \gamma_h^j := \frac{4^j\,\gamma_h^{j-1} - \gamma_{2h}^{j-1}}{4^j - 1}, \qquad j = 1,\dots,4.$$
Table 2 shows the errors $\epsilon_h^j := |T u - \gamma_h^j|$, whereas Table 3 gives the corresponding estimated convergence rates. Expected values of the order of the method are indicated on top of that table.

Table 2: Errors after extrapolation

N      ε_h^0      ε_h^1      ε_h^2      ε_h^3      ε_h^4
8      4.88E-02
16     2.08E-03   1.03E-03
32     1.17E-04   1.41E-05   2.08E-06
64     7.11E-06   2.10E-07   1.02E-08   2.03E-09
128    4.41E-07   3.25E-09   3.90E-11   6.50E-13   2.63E-12
256    2.75E-08   5.06E-11   1.52E-13   6.96E-16   6.08E-17
512    1.72E-09   7.89E-13   5.92E-16   6.93E-19   1.33E-20

Table 3: E.c.r. for the errors in Table 2

4        6        8        10       12
4.551
4.154    6.196
4.039    6.067    7.680
4.010    6.017    8.023    11.06
4.002    6.004    8.007    9.867    15.40
4.001    6.001    8.002    9.972    12.16
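The extrapolation mechanism used above can be sketched generically. The toy quantity below (ours, not the operator equation of this section) is $a(h) = \sin(h)/h$, which has an even-power error expansion like the one furnished by Corollary 9:

```python
import numpy as np

# Richardson extrapolation for a quantity with an even-power error expansion,
#   a(h) = L + c_1 h^2 + c_2 h^4 + ...,
# mirroring gamma_h^j = (4^j gamma_h^{j-1} - gamma_{2h}^{j-1}) / (4^j - 1).
# Toy model: a(h) = sin(h)/h, with limit L = 1.
def a(h):
    return np.sin(h) / h

hs = [0.8 / 2**i for i in range(6)]           # h, h/2, h/4, ...
table = [[a(h) for h in hs]]                  # gamma^0 column
for j in range(1, 5):
    prev = table[-1]
    # combine values at h and 2h to cancel the h^(2j) error term
    table.append([(4**j * prev[i] - prev[i - 1]) / (4**j - 1)
                  for i in range(1, len(prev))])

err0 = abs(table[0][-1] - 1.0)                # plain approximation error
err4 = abs(table[4][-1] - 1.0)                # after four extrapolation steps
assert err4 < 1e-12 < err0
```

Each extrapolation level cancels one more even power of $h$, which is exactly the behavior reported in Table 3.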

A hypersingular equation

Let $\Omega\subset\mathbb R^2$ be a bounded domain with smooth boundary $\Gamma$, given by a regular 1–periodic parametric representation $x:\mathbb R\to\Gamma$. Consider the operator
$$\mathcal W u(s) := \mathrm{f.p.}\,\frac{1}{2\pi} \int_0^1 \partial_{n_{x(s)}}\,\partial_{n_{x(t)}} \log|x(s)-x(t)|\, u(t)\, dt,$$
where f.p. stands for the Hadamard finite part of the integral and $\partial_{n_{x(s)}}$ denotes the outer normal derivative at the point $x(s)$. Let $g:\Gamma\to\mathbb R$ and $f(s) := |x'(s)|\, g(x(s))$. Then any solution of
$$\mathcal W u = f \qquad (29)$$
gives, by means of the representation formula
$$\omega(y) := \frac{1}{2\pi} \int_0^1 \partial_{n_{x(s)}} \log|y - x(s)|\, u(s)\, ds =: \int_0^1 \kappa(s,y)\, u(s)\, ds, \qquad y\in\mathbb R^2\setminus\Gamma, \qquad (30)$$
a solution to the Neumann problem for the Laplace equation
$$\Delta\omega = 0 \quad \text{in } \mathbb R^2\setminus\Gamma, \qquad \partial_n\omega = g. \qquad (31)$$
From a well-known property (cf. [5], Section 6.6) it follows that
$$\mathcal W u(s) = \frac{1}{2\pi}\,\frac{d}{ds}\Big[ \int_0^1 \log|x(s)-x(t)|\, u'(t)\, dt \Big] = D\mathcal V D u(s), \qquad (32)$$
where we have denoted
$$\mathcal V u := \frac{1}{2\pi} \int_0^1 \log|x(\cdot)-x(t)|\, u(t)\, dt.$$
Also we can write $\mathcal V = -\frac{1}{2\pi}\,\mathcal H D^{-1} + K_1$, being $K_1$ an integral operator with infinitely differentiable periodic kernel, and therefore a $\Psi$do of order $-\infty$. Hence
$$\mathcal W = -\frac{1}{2\pi}\, D\mathcal H + K,$$
for a new integral operator $K$ with the same properties as $K_1$. Thus $\mathcal W$ is a strongly elliptic operator of order 1. Notice that
$$D\mathcal H u = -\pi\imath \int_0^1 \frac{u(s)}{\sin^2 \pi(\cdot - s)}\, ds.$$
We point out that (29) is solvable if and only if $\widehat f(0) = 0$, the solution being unique modulo constant functions (notice that $\mathcal W 1 = 0$). We aim to calculate approximately the solution with null integral, by applying the Galerkin scheme with $S_{h,0}^1 := \{ u_h\in S_h^1 \mid \widehat u_h(0) = 0 \}$ as trial–test space. It can easily be checked that the theory given in the previous sections holds in this new situation. Using integration by parts and (32), the Galerkin scheme can be formulated in the following way:
$$u_h\in S_{h,0}^1, \qquad -\frac{1}{2\pi}\int_0^1\!\!\int_0^1 \log|x(s)-x(t)|\, u_h'(t)\, r_h'(s)\, dt\, ds = \int_0^1 f(s)\, r_h(s)\, ds, \qquad \forall r_h\in S_{h,0}^1. \qquad (33)$$

To assemble a linear system for this problem once a basis of $S_{h,0}^1$ is chosen, the use of numerical integration is required. We briefly explain how this can be done in a way that preserves all the theoretical properties of the method. Let $S_{h,0}^0$ be the space of elements of $S_h^0$ with null integral. Clearly, $D S_{h,0}^1 = S_{h,0}^0$. A B-spline basis for both discrete spaces can be constructed as follows. Let
$$\mu_0(x) := \begin{cases} 1, & x\in[-1/2,1/2],\\ 0, & \text{otherwise}, \end{cases} \qquad \mu_1(x) := \begin{cases} x+1, & x\in[-1,0],\\ 1-x, & x\in[0,1],\\ 0, & \text{otherwise}. \end{cases}$$

For $i = 1,\dots,N$ we consider the 1–periodic functions such that
$$\psi_i^0 = \mu_0\Big( \frac{\cdot - x_i - h/2}{h} \Big), \qquad \psi_i^1 = \mu_1\Big( \frac{\cdot - x_i}{h} \Big) \qquad \text{in } (x_i - 1/2,\ x_i + 1/2).$$
Obviously $\{ \psi_2^j - \psi_1^j,\ \psi_3^j - \psi_1^j,\ \dots,\ \psi_N^j - \psi_1^j \}$ is a basis of $S_{h,0}^j$ for $j=0,1$. Once the coefficients
$$\beta_i := \int_0^1 f(t)\,\psi_i^1(t)\, dt, \qquad i=1,\dots,N,$$
$$\alpha_{i,j} := -\int_0^1\!\!\int_0^1 \log|x(s)-x(t)|\,\psi_i^0(s)\,\psi_j^0(t)\, ds\, dt, \qquad i,j=1,\dots,N,$$
are computed, the system corresponding to (33) is easily constructed. We first approximate
$$\beta_i = h \int_{-1}^1 f(x_i + ht)\,\mu_1(t)\, dt \approx h\, L^1\big( f(x_i + h\,\cdot) \big),$$
where $L^1$ is the quadrature rule
$$L^1 g = \frac{1}{12}\, g(-1) + \frac{10}{12}\, g(0) + \frac{1}{12}\, g(1) \approx \int_{-1}^1 g(t)\,\mu_1(t)\, dt,$$

which is exact for polynomials of degree 3. Consider now the two-dimensional quadrature formula L2 g :=

1 1 X X l=−1 k=−1

Z

1/2

Z

1/2

cl ck g(l, k) ≈

g(s, t) dtds −1/2

−1/2

20  |x(s) − x(t)|   ,  log |s − t| F (s, t) :=    log |x0 (t)|, and Z

1/2

Z

s 6= t, s = t,

1/2

log |k + t − s| dsdt.

ρk := −1/2

−1/2
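The degree-3 exactness claim for $L^1$ can be checked directly on the weighted monomial moments (a small sketch of ours):

```python
def L1(g):
    """Quadrature L1 g = (1/12) g(-1) + (10/12) g(0) + (1/12) g(1),
    approximating int_{-1}^{1} g(t) mu1(t) dt, where mu1 is the hat
    function 1 - |t| on [-1, 1]."""
    return g(-1) / 12 + 10 * g(0) / 12 + g(1) / 12

# Exact weighted moments int t^k mu1(t) dt are 1, 0, 1/6, 0 for k = 0..3
# (odd powers vanish by symmetry).
exact = [1.0, 0.0, 1 / 6, 0.0]
for k, val in enumerate(exact):
    assert abs(L1(lambda t: float(t) ** k) - val) < 1e-14
# Degree 4 is not integrated exactly: L1 gives 1/6 while the true moment is 1/15.
assert abs(L1(lambda t: float(t) ** 4) - 1 / 15) > 0.09
```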

This scheme arises from [15], where full discretizations of logarithmic kernel equations are studied, and is itself an extension of the Galerkin collocation method [9]. The results from [15] can be adapted to show that the numerical solution of the modified system, say $u_h^*$, satisfies $\|u - u_h^*\|_{-1} \le C h^3$, and the order of the method is thus preserved. Moreover, if $u_h^* = \sum_{i=1}^{N} u_i^*\,\psi_i^1$ and we define (see (30))
$$\omega_h(y) := \sum_{i=1}^{N} u_i^*\, L^1\big( \kappa(x_i + h\,\cdot,\, y) \big) \approx \int_0^1 \kappa(s,y)\, u_h^*(s)\, ds, \qquad y\in\mathbb R^2\setminus\Gamma,$$

then we obtain an asymptotic expansion of the error
$$\omega(y) - \omega_h(y) = \sum_{k=3}^{M} h^k\, v_k(y) + O(h^{M+1}),$$
uniformly on compact subsets of $\mathbb R^2\setminus\Gamma$. Finally, the pointwise superconvergence is still valid, namely (27) and (28) hold. In our experiments we have taken the ellipse
$$x(t) = (x_1(t), x_2(t)) := (R\cos 2\pi t,\ \sin 2\pi t)$$
and
$$f(t) := 3 x_1(t)\big[ x_1^2(t) - x_2^2(t) \big] - 6\, x_1(t)\, x_2(t)\, x_1'(t).$$
Then the exact solution of the hypersingular equation (29) satisfying $\widehat u(0) = 0$ is
$$u(t) = \frac{1}{2}(1+R)^2\big[ -1 + 2R + (1+R)\cos 4\pi t \big]\sin 2\pi t.$$
In Table 4 we show $e_h^0$ and $e_h^1$ for different values of $N$ and estimate the orders of convergence. On the other hand, we take $y_0 = (3,4)$ and define a new set of approximations by extrapolation for the same values of $N$:
$$\omega_h^0 := \omega_h(y_0), \qquad \omega_h^j := \frac{(3/2)^j\,\omega_h^{j-1} - \omega_{3h/2}^{j-1}}{(3/2)^j - 1}, \qquad j > 0.$$
Notice that the ratio between consecutive discretization levels is $3/2$. Tables 5 and 6 show the errors $\epsilon_h^j = |\omega(y_0) - \omega_h^j|$. The exact potential has been computed using the composite trapezoidal rule with error estimation, to thirty digits of the exact value.


Table 4: Pointwise errors and e.c.r.

N      e_h^0      e.c.r.   e_h^1      e.c.r.
64     4.987E-2            3.196E-3
96     2.248E-2   1.965    9.590E-4   2.969
144    1.005E-2   1.987    2.860E-4   2.984
216    4.482E-3   1.990    8.520E-5   2.987
324    1.998E-3   1.992    2.533E-5   2.991
486    8.899E-4   1.994    7.524E-6   2.994

Table 5: Errors after extrapolation

N      ε_h^0     ε_h^1     ε_h^2     ε_h^3     ε_h^4     ε_h^5
64     1.13E-5
96     3.56E-6   2.97E-7
144    1.10E-6   5.97E-8   1.23E-9
216    3.34E-7   1.19E-8   1.72E-10  1.17E-11
324    1.01E-7   2.37E-9   2.36E-11  1.11E-12  9.02E-14
486    3.01E-8   4.71E-10  3.20E-12  1.02E-13  5.71E-15  4.60E-16

Appendix: some expandable operators

In this section we prove that some boundary integral operators can be written as expandable pseudodifferential operators. We introduce the following notation: if $A\in\mathcal E(n)$ has the expansion given by (2), we write the formal series
$$A^{\mathrm{exp}} = \sum_{j=-\infty}^{n} a_j\, D^j + \sum_{j=-\infty}^{n} b_j\,\mathcal H D^j.$$

Table 6: E.c.r. for the errors in Table 5

3        4        5        6        7
2.851
2.904    3.959
2.937    3.971    4.850
2.959    3.980    4.898    5.810
2.973    3.986    4.936    5.871    6.806

Notice first that if $K$ is an integral operator with infinitely differentiable kernel, then $K\in\mathcal E$ and $K^{\mathrm{exp}} = 0$. Moreover, if $g\in\mathcal D'$ satisfies $|\widehat g(m)| \le C\,|m|^n$ for $m\ne 0$, with $n\in\mathbb Z$, then convolution with $g$, i.e.,
$$g * u := \sum_{m\in\mathbb Z} \widehat g(m)\,\widehat u(m)\,\phi_m,$$
is a $\Psi$do of order $n$.

7.1 Logarithmic operators

We now show that integral operators of the form
$$\mathcal V u := \int_0^1 A(\cdot,t)\,\log\big( \sin^2 \pi(\cdot - t) \big)\, u(t)\, dt,$$
where $A\in C^\infty(\mathbb R^2)$ is 1–periodic in both variables, are expandable. We first investigate the particular case
$$\mathcal V_k u := \int_0^1 \sin^k\big( 2\pi(\cdot - t) \big)\,\log\big( \sin^2 \pi(\cdot - t) \big)\, u(t)\, dt.$$

Lemma 12 Let $k\ge 0$. Then $\mathcal V_k\in\mathcal E(-k-1)$ and there exists a sequence of complex numbers $(\xi_{k,j})$ such that
$$\mathcal V_k^{\mathrm{exp}} = \sum_{j=-\infty}^{-k-1} \xi_{k,j}\,\mathcal H D^j.$$
Moreover, $\mathcal V_0^{\mathrm{exp}} = -2\pi\imath\,\mathcal H D^{-1}$.

Proof. If $f_k(t) := \sin^k(2\pi t)\,\log(\sin^2 \pi t)$, then $\mathcal V_k$ is the operator of convolution with $f_k$. Since
$$\widehat f_k(m) = -\frac{\operatorname{sign}(m)\, k!\,\imath^k}{m^{k+1}} \prod_{j=0}^{k} \frac{1}{1 - \frac{k-2j}{m}}, \qquad \text{if } |m|\ge k+1,$$
it follows that $|\widehat f_k(m)| \le C\,|m|^{-k-1}$ for $m\ne 0$, and hence $\mathcal V_k$ is a $\Psi$do of order $-k-1$. Simple expansions show that
$$\widehat f_k(m) = -\frac{\operatorname{sign}(m)\, k!\,\imath^k}{m^{k+1}} \prod_{j=0}^{k}\Bigg[ \sum_{r=0}^{M-k-1} \Big( \frac{k-2j}{m} \Big)^r + \frac{\big( \frac{k-2j}{m} \big)^{M-k}}{1 - \frac{k-2j}{m}} \Bigg] = \sum_{r=0}^{M-k-1} \frac{\operatorname{sign}(m)\,\eta_{k,r}}{m^{r+k+1}} + R_{M+1}^k(m),$$
with $|R_{M+1}^k(m)| \le C\,|m|^{-M-1}$, for some $\eta_{k,r}\in\mathbb C$. Then the operator of convolution with the function
$$\sum_{|m|>k} R_{M+1}^k(m)\,\phi_m$$
is a $\Psi$do of order $-M-1$. This proves the result with $\xi_{k,j} := (2\pi\imath)^{-j}\,\eta_{k,-j-k-1}$. □

23 Proposition 13 In the hypotheses given above, V ∈ E(−1) and exp

V =

−1 X

bk HDk .

k=−∞

Proof. Consider first the following periodic version of the Taylor expansion. For all k ≥ 1 we can write ∞ X sink (2πt) = ρj,k tj j=k

(notice that ρk,k 6= 0 and that ρk,j = 0 if j + k is odd). Then we can construct an infinite upper triangular matrix such that    1 ρ1,1 ρ1,2 . . . σ1,1 σ1,2 . . .      1/2! ρ2,2 . . .  =  σ2,2 . . .    ... ... ... 

  .

Obviously, we obtain σj,k = 0 if j + k is odd. We can then define a sequence of differential operators k X L0 := I, Lk := σj,k Dj , k ≥ 1 j=1

which serves to create a periodic Taylor expansion: for $f \in \mathcal{D}$,
$$f(t) = \sum_{k=0}^{2M-1} L_k f(s)\,\sin^k(2\pi(t-s)) + R_{2M}(t,s)\,\sin^{2M}(\pi(t-s)), \qquad (34)$$
where $R_{2M} \in C^\infty$. If we apply (34) to $A(s,\cdot)$ and define $c_k(s) := (-1)^k L_k[A(s,\cdot)](s)$, we can split $V$ in the form
$$Vu = \sum_{k=0}^{2M-1} c_k\,V_k u + \int_0^1 R_{2M}(\cdot,t)\,\sin^{2M}(\pi(\cdot-t))\,\log(\sin^2\pi(\cdot-t))\,u(t)\,dt.$$
The remainder is an integral operator whose kernel has $2M-1$ continuous derivatives plus weakly singular derivatives of order $2M$. Hence this operator is bounded from $H^s$ to $H^{s+2M}$ for all $s \in [-2M,0]$. The result then follows readily from Lemma 12. □

It is well known that logarithmic kernel integral operators play a central role in the theory of boundary integral equations. For example, see [3, 14] and references therein.
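The triangular construction of the coefficients $\sigma_{j,k}$ in the proof of Proposition 13, and the order of the remainder in the periodic Taylor expansion (34), can be checked numerically. The Python sketch below builds the triangular matrix of Taylor coefficients of $\sin^k(2\pi\tau)$, obtains the $\sigma_{j,k}$ from the normalization $\sum_k \sigma_{j,k}\rho_{k,i} = \delta_{ji}/i!$, which is what matching the classical Taylor series requires, and then verifies the parity property and the remainder order on the test function $f(t) = \sin(2\pi t)$ (truncation degree, test point and step sizes are our own choices):

```python
import math
import numpy as np

J = 10  # truncation degree of all Taylor series

# Taylor coefficients of sin(u): nonzero only for odd powers
sin_u = np.zeros(J + 1)
for j in range(1, J + 1, 2):
    sin_u[j] = (-1) ** ((j - 1) // 2) / math.factorial(j)

# Rtil[k-1, j-1] = coefficient of u^j in sin^k(u); unit upper triangular
Rtil = np.zeros((J, J))
power = np.zeros(J + 1)
power[0] = 1.0
for k in range(1, J + 1):
    power = np.convolve(power, sin_u)[: J + 1]  # next power, truncated
    Rtil[k - 1, :] = power[1:]

Stil = np.linalg.inv(Rtil)  # well conditioned: unit triangular, O(1) entries

# sigma_{j,k}: restore the 2*pi scaling (u = 2*pi*tau) and the weights 1/j!
Sigma = np.array([[Stil[j - 1, k - 1] / (math.factorial(j) * (2 * np.pi) ** j)
                   for k in range(1, J + 1)] for j in range(1, J + 1)])

# parity: sigma_{j,k} = 0 whenever j + k is odd
parity_ok = all(abs(Stil[j - 1, k - 1]) < 1e-12
                for j in range(1, J + 1) for k in range(1, J + 1)
                if (j + k) % 2 == 1)

# periodic Taylor expansion truncated after K terms, for f(t) = sin(2*pi*t);
# d(j, t) is the j-th derivative of f
d = lambda j, t: (2 * np.pi) ** j * np.sin(2 * np.pi * t + j * np.pi / 2)

def expansion(s, tau, K):
    out = d(0, s)  # L_0 f(s) = f(s)
    for k in range(1, K):
        Lk = sum(Sigma[j - 1, k - 1] * d(j, s) for j in range(1, k + 1))
        out += Lk * np.sin(2 * np.pi * tau) ** k
    return out

s0 = 0.3
err = lambda tau: abs(d(0, s0 + tau) - expansion(s0, tau, 6))
# with 6 terms kept the remainder is O(tau^6):
# halving tau should reduce the error by roughly 2^6 = 64
```

In this test $L_1 f(s) = \cos(2\pi s)$ and $L_2 f(s) = -\sin(2\pi s)/2$, so the truncated expansion reproduces the sine addition formula up to the stated remainder order.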


7.2 Integro–differential operators with Cauchy kernels

Let $x \in \mathcal{D}$ be such that $x(t)$ defines a simple closed curve in $\mathbb{C}$, with $|x'(t)| \ne 0$ for all $t$ and $x(t) \ne x(s)$ if $t-s \notin \mathbb{Z}$. We consider operators of the form
$$Cu(s) := \sum_{j=0}^{n} a_j(s)\,D^j u(s) + \frac{1}{\pi\imath}\,\mathrm{p.v.}\int_0^1 \frac{x'(t)}{x(t)-x(s)}\,\sum_{j=0}^{n} C_j(s,t)\,D^j u(t)\,dt,$$
where $a_j$, $C_j$ are 1-periodic and infinitely differentiable and p.v. stands for the Cauchy principal value of the integral. For simplicity we denote $D^0 u = u$ instead of its definition given in (1). Operators of this kind have been considered in [3] and [14] and include a wide variety of boundary integral operators.

Proposition 14 Under the hypotheses given above, $C \in E(n)$ and
$$C \stackrel{\exp}{=} \sum_{j=0}^{n} a_j\,D^j + \sum_{j=0}^{n} b_j\,HD^j.$$
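The proof below rests on the standard fact that, for a smooth 1-periodic parametrization of a simple closed curve, the Cauchy kernel behaves near the diagonal like the cotangent kernel: $x'(t)/(x(t)-x(s)) - \pi\cot(\pi(t-s))$ stays bounded as $t \to s$, while the quotient of the two singular kernels tends to $1/x'(s)$, the diagonal value assigned to the function $H$ below. A quick numerical check in Python (the ellipse, base point and step sizes are our own choices for illustration):

```python
import numpy as np

# a smooth 1-periodic simple closed curve: an ellipse, chosen for illustration
x  = lambda t: np.cos(2 * np.pi * t) + 2j * np.sin(2 * np.pi * t)
dx = lambda t: 2 * np.pi * (-np.sin(2 * np.pi * t) + 2j * np.cos(2 * np.pi * t))

s = 0.17  # arbitrary base point on the parameter interval

def cauchy_minus_cot(tau):
    """x'(t)/(x(t)-x(s)) - pi*cot(pi*(t-s)) at t = s + tau; bounded as tau -> 0."""
    t = s + tau
    return dx(t) / (x(t) - x(s)) - np.pi / np.tan(np.pi * tau)

def kernel_ratio(tau):
    """Quotient of the cotangent and Cauchy kernels; tends to 1/x'(s)."""
    return np.tan(np.pi * tau) / (np.pi * (x(s + tau) - x(s)))
```

Each kernel separately blows up like $(t-s)^{-1}$; their difference stabilizes at a finite value and their quotient approaches $1/x'(s)$, which is what makes the splitting in the proof possible.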

Proof. Let
$$H(s,t) := \begin{cases} \dfrac{1}{\pi\,\cot(\pi(t-s))\,(x(t)-x(s))}, & t-s \notin \mathbb{Z},\\[2ex] \dfrac{1}{x'(s)}, & t-s \in \mathbb{Z}. \end{cases}$$
Then $H$ is smooth and 1-periodic. Moreover, we can write
$$\frac{1}{\pi\imath}\,\mathrm{p.v.}\int_0^1 \frac{x'(t)}{x(t)-x(\cdot)}\,C_j(\cdot,t)\,v(t)\,dt = -\imath\,\mathrm{p.v.}\int_0^1 \overline{C}_j(\cdot,t)\,v(t)\,\cot(\pi(t-\cdot))\,dt =: H_j v,$$
with $\overline{C}_j(s,t) := H(s,t)\,C_j(s,t)\,x'(t)$. It is straightforward that $H_j \stackrel{\exp}{=} b_j H$, where $b_j(s) := \overline{C}_j(s,s)$. Hence $C$ is expandable. □

Acknowledgments. Both authors are partially supported by DGES Project PB97/1013. The first author is supported by a scholarship of DGA, reference B48/98.

References

[1] M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, 1970.

[2] D.N. Arnold, A spline-trigonometric Galerkin method and an exponentially convergent boundary integral method, Math. Comp. 41 (1983), 383-397.

[3] D.N. Arnold and W.L. Wendland, On the asymptotic convergence of collocation methods, Math. Comp. 41 (1983), 349-381.

[4] G.A. Chandler and I.H. Sloan, Spline qualocation methods for boundary integral equations, Numer. Math. 58 (1990), 537-567.

[5] G. Chen and J. Zhou, Boundary Element Methods, Academic Press, 1992.

[6] M. Crouzeix and F.-J. Sayas, Asymptotic expansions of the error of spline Galerkin boundary element methods, Numer. Math. 78 (1998), 523-547.

[7] V. Domínguez and F.-J. Sayas, Local expansions of periodic spline interpolation with some applications, to appear in Math. Nachr.

[8] V. Domínguez, Asymptotic analysis of projection methods for boundary integral equations, Ph.D. Thesis, University of Zaragoza, in preparation.

[9] G.C. Hsiao, P. Kopp and W.L. Wendland, A Galerkin-collocation method for some integral equations of the first kind, Computing 25 (1980), 89-130.

[10] R. Kress, Linear Integral Equations, 2nd edition, Springer-Verlag, 1999.

[11] S. Prössdorf and B. Silbermann, Numerical Analysis for Integral and Related Operator Equations, Akademie Verlag, 1990.

[12] J. Saranen, Local error estimates for some Petrov-Galerkin methods applied to strongly elliptic equations on curves, Math. Comp. 48 (1987), 485-502.

[13] J. Saranen, Extrapolation methods for spline collocation solutions of pseudodifferential equations on curves, Numer. Math. 56 (1989), 385-407.

[14] J. Saranen and W.L. Wendland, On the asymptotic convergence of collocation methods with spline functions of even degree, Math. Comp. 45 (1985), 91-108.

[15] F.-J. Sayas, Fully discrete Galerkin methods for systems of boundary integral equations, J. Comput. Appl. Math. 81 (1997), 311-331.

[16] G. Schmidt, The convergence of Galerkin and collocation methods with splines for pseudodifferential equations on closed curves, Z. Anal. Anwendungen 3 (1984), 183-196.

[17] I.H. Sloan and W.L. Wendland, Spline qualocation methods for variable-coefficient elliptic equations on curves, Numer. Math. 83 (1999), 497-533.

[18] W.L. Wendland, Boundary Element Methods for Elliptic Problems, in: Mathematical Theory of Finite and Boundary Element Methods, eds. A.H. Schatz, V. Thomée and W.L. Wendland, Birkhäuser, 1990, pp. 219-276.