SIAM J. NUMER. ANAL. Vol. 47, No. 4, pp. 3158–3179

c 2009 Society for Industrial and Applied Mathematics 

DIVERGENCE-FREE KERNEL METHODS FOR APPROXIMATING THE STOKES PROBLEM∗

HOLGER WENDLAND†

Abstract. We develop a new discretization scheme for the Stokes equations using analytically divergence-free approximation spaces. Our scheme is based upon a meshfree method using matrix-valued kernels. The scheme works in arbitrary space dimension and can produce arbitrarily smooth and hence high order approximations. After deriving the scheme, we analyze the discretization error in Sobolev spaces and give a numerical example.

Key words. partial differential equations, fractional order Sobolev spaces, convergence rates, meshfree methods

AMS subject classifications. 65N15, 65N35, 42A82

DOI. 10.1137/080730299

1. Introduction. We are interested in numerical solutions to the classical, stationary Stokes problem

$$-\nu\Delta u + \nabla p = f \quad\text{in } \Omega,\qquad \nabla\cdot u = 0 \quad\text{in } \Omega,\qquad u = g \quad\text{on } \partial\Omega,$$

where the region Ω ⊆ R^d, the viscosity ν, and the right-hand sides f : Ω → R^d, g : ∂Ω → R^d are given and the velocity u : Ω → R^d and the pressure p : Ω → R are sought. Classical numerical techniques based on finite elements for solving these equations first rewrite the equations in weak form, resulting in a saddle point problem for the velocity and pressure. The discretization then employs matching finite dimensional spaces consisting of piecewise polynomials. The discretization spaces for the velocity and the pressure have to match in the sense of fulfilling the so-called inf-sup condition. In particular, the velocity and the pressure field usually cannot be discretized with the same type of elements, although there are also stabilized approaches in which the elements have the same degree. More details on the finite element approach can, for example, be found in [6]. In general, these methods have the problem that the approximate solutions are not analytically divergence-free. Other methods like finite volumes and finite differences are also widely used (see, for example, [28, Chapter 8]) but have to deal with similar problems. Our numerical scheme is designed using a discretization space for the combined unknown v = (u, p) such that the velocity part s_u of the approximant s_v = (s_u, s_p) is analytically divergence-free. To avoid problems with numerical integration, which could destroy the property of being divergence-free, we employ a collocation scheme to compute the approximant s_v. The scheme involves matrix-valued, positive definite kernels, as they have been introduced in [21] and recently been further investigated

∗Received by the editors July 15, 2008; accepted for publication (in revised form) July 16, 2009; published electronically October 1, 2009.
http://www.siam.org/journals/sinum/47-4/73029.html
†Department of Mathematics, University of Sussex, Brighton, BN1 9RF, UK (h.wendland@sussex.ac.uk).


in [9, 10, 19, 17, 18]. However, while those papers were primarily concerned with the approximation properties of standard interpolants, this paper is the first to establish applications to partial differential equations which exploit the fact that the approximation space is analytically divergence-free. In this sense, this paper has to be seen as a framework model and a first mathematically sound step toward developing such high order, meshfree methods for, for example, fluid-dynamical problems. Rather direct consequences will be the application to time dependent Stokes and Navier–Stokes problems. But since similar constructions are possible with approximation spaces consisting of curl-free functions, other applications lie, for example, in the direction of the Maxwell equations. Earlier work on applying radial basis functions to fluid-dynamical problems can be found, for example, in [15, 16].

The paper is organized as follows. In the next section we collect necessary results on function spaces, as they usually appear in this context. We also extend results on reproducing kernel Hilbert spaces to the new setting. In the third section we discuss the necessary results on meshfree, kernel-based approximation. In the fourth section we introduce and analyze our discretization scheme. The final section gives an illustrative example.

2. Standard function spaces. We consider the usual Sobolev spaces W_p^τ(Ω) consisting of scalar-valued functions u : Ω ⊆ R^d → R with norms

$$|u|_{W_p^k(\Omega)} = \Big(\sum_{|\alpha|=k} \|D^\alpha u\|_{L_p(\Omega)}^p\Big)^{1/p} \qquad\text{and}\qquad \|u\|_{W_p^k(\Omega)} = \Big(\sum_{|\alpha|\le k} \|D^\alpha u\|_{L_p(\Omega)}^p\Big)^{1/p}$$

if τ = k ∈ N₀. In the case of fractional order Sobolev spaces with τ = k + s, k ∈ N₀, 0 < s < 1, we use

$$|u|_{W_p^{k+s}(\Omega)} := \Big(\sum_{|\alpha|=k} \int_\Omega\int_\Omega \frac{|D^\alpha u(x)-D^\alpha u(y)|^p}{\|x-y\|_2^{d+ps}}\,dx\,dy\Big)^{1/p}, \qquad \|u\|_{W_p^{k+s}(\Omega)} := \Big(\|u\|_{W_p^k(\Omega)}^p + |u|_{W_p^{k+s}(\Omega)}^p\Big)^{1/p}.$$

This is the definition for 1 ≤ p < ∞. The case p = ∞ is dealt with in the usual way by taking the essential supremum over all derivatives. We set H^τ(Ω) = W_2^τ(Ω) and use the standard notation L_p(Ω) = W_p^0(Ω). From this, we define vector-valued function spaces in the usual way. However, since we are interested not only in vector-valued Sobolev spaces, we have to be more detailed here. Let F_1(Ω), ..., F_n(Ω) be normed linear function spaces. Then, a tensor product function space is defined via

$$\mathcal F(\Omega) := F_1(\Omega)\times\cdots\times F_n(\Omega),$$

whose elements are vector-valued functions x ↦ f(x) = (f_1(x), ..., f_n(x)), and it can be equipped with the norms

$$\|f\|_r := \begin{cases} \Big(\sum_{j=1}^n \|f_j\|_{F_j(\Omega)}^r\Big)^{1/r} & \text{if } 1\le r<\infty,\\ \max_{1\le j\le n} \|f_j\|_{F_j(\Omega)} & \text{if } r=\infty.\end{cases}$$


In the case of Sobolev spaces, we will particularly use the following notation. Let τ = (τ_1, ..., τ_n)^T ∈ R^n, τ_j ≥ 0, p = (p_1, ..., p_n) ∈ [1, ∞]^n, and r ∈ [1, ∞]. Then, the algebraic space W_p^τ(Ω) := W_{p_1}^{τ_1}(Ω) × ··· × W_{p_n}^{τ_n}(Ω) can be equipped with the norms

$$\|f\|_{W_p^\tau(\Omega),r} = \begin{cases} \Big(\sum_{j=1}^n \|f_j\|^r_{W_{p_j}^{\tau_j}(\Omega)}\Big)^{1/r} & \text{if } 1\le r<\infty,\\ \max_{1\le j\le n} \|f_j\|_{W_{p_j}^{\tau_j}(\Omega)} & \text{if } r=\infty.\end{cases}$$

The most important case for us will be the case of p_j = r = p for 1 ≤ j ≤ n, which will be denoted by W_p^τ(Ω). We will also use the notation L_p(Ω) = W_p^0(Ω). In the case of p = 2 we will write H^τ(Ω) = W_2^τ(Ω). In the latter case the norm is given by

$$\|f\|_{H^\tau(\Omega)} := \Big(\sum_{j=1}^n \|f_j\|^2_{H^{\tau_j}(\Omega)}\Big)^{1/2}.$$

If Ω = R^d, this can equivalently be expressed using the Fourier transform by

(2.1)  $$\|f\|^2_{H^\tau(\mathbb R^d)} = (2\pi)^{-d/2}\sum_{j=1}^n \int_{\mathbb R^d} |\hat f_j(\omega)|^2\,(1+\|\omega\|_2^2)^{\tau_j}\,d\omega,$$

where the Fourier transform f̂ of f : R^d → R is defined as

$$\hat f(\omega) := (2\pi)^{-d/2}\int_{\mathbb R^d} f(x)\,e^{-i x^T\omega}\,dx.$$

This all can be generalized in the following way. Suppose φ_j : R^d → R are even and integrable functions with a positive Fourier transform. Then, we can set

$$\|f\|^2_{H_\phi(\mathbb R^d)} = (2\pi)^{-d/2}\sum_{j=1}^n \int_{\mathbb R^d} \frac{|\hat f_j(\omega)|^2}{\hat\phi_j(\omega)}\,d\omega,$$

which leads to a norm equivalent to (2.1) if φ̂_j satisfies

(2.2)  $$c_1\,(1+\|\omega\|_2^2)^{-\tau_j} \le \hat\phi_j(\omega) \le c_2\,(1+\|\omega\|_2^2)^{-\tau_j}, \qquad \omega\in\mathbb R^d.$$

Obviously, these norms stem from an inner product

$$(f,u)_{H_\phi(\mathbb R^d)} := (2\pi)^{-d/2}\sum_{j=1}^n \int_{\mathbb R^d} \frac{\hat f_j(\omega)\,\overline{\hat u_j(\omega)}}{\hat\phi_j(\omega)}\,d\omega.$$

Finally, if τ_1 = ··· = τ_n = τ or φ_1 = ··· = φ_n = φ, we will use the simplified notation H^τ(Ω) and H_φ(Ω) for these product spaces. Our main interest will be in the situation of d = n = 3, but we will often be able to formulate things in arbitrary dimensions. We are interested in subspaces which are divergence-free, in particular in

$$H^\tau(\Omega;\operatorname{div}) := \{u\in H^\tau(\Omega) : \nabla\cdot u = 0\},$$


where the divergence of u : Ω → R^d is defined to be ∇ · u := Σ_{j=1}^d ∂_j u_j. In the same sense we define, for example, H_φ(R^d; div). We also recall the definition of the curl of a function, which is given by ∇ × u = ∂_1 u_2 − ∂_2 u_1 in two dimensions and by

$$\nabla\times u := \begin{pmatrix}\partial_2 u_3 - \partial_3 u_2\\ \partial_3 u_1 - \partial_1 u_3\\ \partial_1 u_2 - \partial_2 u_1\end{pmatrix}$$

in three dimensions. It is our goal to derive dense subspaces of H^τ(Ω; div) which are well-suited for interpolation and approximation in scattered points and hence for the discretization of partial differential equations with the requirement ∇ · u = 0. Finally, to measure the pressure, which is determined only up to a constant, we will use the norm on W_r^τ(Ω)/R defined by

$$\|p\|_{W_r^\tau(\Omega)/\mathbb R} := \inf_{c\in\mathbb R}\|p + c\|_{W_r^\tau(\Omega)}.$$
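The identity ∇ · (∇ × w) = 0, which underlies all of the divergence-free constructions below, can be checked symbolically. The following sketch is illustrative only (not part of the paper) and verifies the identity for an arbitrary smooth vector field in three dimensions:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
# arbitrary smooth components of a vector potential w
w1, w2, w3 = [sp.Function(name)(x, y, z) for name in ("w1", "w2", "w3")]

# curl in three dimensions, componentwise
curl = (sp.diff(w3, y) - sp.diff(w2, z),
        sp.diff(w1, z) - sp.diff(w3, x),
        sp.diff(w2, x) - sp.diff(w1, y))

# divergence of the curl: mixed partial derivatives cancel pairwise
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))
```

The cancellation relies only on the equality of mixed partial derivatives, which is why any curl field (and hence any approximant built as a curl, as in section 3.2) is automatically divergence-free.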

3. Generalized interpolation via matrix-valued kernels. In this section, we will introduce the necessary material on matrix-valued kernels and how they can be employed to solve generalized interpolation problems. We will mainly rely on material from [10, 9]. However, in particular when we discuss the associated reproducing kernel Hilbert spaces, we will give new, shorter proofs which extend to the case of product type kernels later on.

3.1. Positive definite matrix-valued kernels. We will use scalar-valued positive definite functions to build matrix-valued ones.

Definition 3.1. A function φ : R^d → R is positive definite if for all N ∈ N, all pairwise distinct x_1, ..., x_N ∈ R^d, and all α ∈ R^N \ {0}, the quadratic form

$$\sum_{j,k=1}^N \alpha_j\,\alpha_k\,\phi(x_j-x_k)$$

is positive. More generally, a function Φ : R^d → R^{n×n} is said to be positive definite if it is even, Φ(−x) = Φ(x), and symmetric, Φ(x) = Φ(x)^T, and satisfies

$$\sum_{j,k=1}^N \alpha_j^T\,\Phi(x_j-x_k)\,\alpha_k > 0$$

for all pairwise distinct x_j ∈ R^d and all α_j ∈ R^n such that not all α_j are vanishing.

For such a positive definite matrix-valued function we can introduce the function space

$$\mathcal F_\Phi(\Omega) := \Big\{\sum_{j=1}^N \Phi(\cdot-x_j)\alpha_j : x_j\in\Omega,\ \alpha_j\in\mathbb R^n\Big\},$$

which can be equipped with an inner product

$$\Big(\sum_{j=1}^N \Phi(\cdot-x_j)\alpha_j,\ \sum_{k=1}^M \Phi(\cdot-y_k)\beta_k\Big)_\Phi := \sum_{j=1}^N\sum_{k=1}^M \alpha_j^T\,\Phi(x_j-y_k)\,\beta_k.$$


The closure of F_Φ(Ω) with respect to the norm associated with this inner product is, in the scalar-valued case, often called the native space of Φ.

Definition 3.2. The native space of a positive definite, matrix-valued kernel Φ is defined to be the closure of F_Φ(Ω) with respect to ‖·‖_{N_Φ(Ω)} := ‖·‖_Φ and will be denoted by N_Φ(Ω).

Note that we have for f ∈ N_Φ(Ω) and α ∈ R^n the relation

(3.1)  $$(f,\Phi(\cdot-x)\alpha)_\Phi = f(x)^T\alpha,$$

which generalizes the reproducing kernel property of the scalar-valued case. Moreover, if Φ is defined on all of R^d and can be recovered from its Fourier transform Φ̂, we can, as in the scalar-valued case, represent the norm of a function f = Σ_j Φ(· − x_j)α_j by

(3.2)  $$\|f\|^2_{\mathcal N_\Phi(\mathbb R^d)} = (2\pi)^{-d/2}\int_{\mathbb R^d}\sum_{j,k} e^{-i(x_j-x_k)^T\omega}\,\alpha_j^T\,\hat\Phi(\omega)\,\alpha_k\,d\omega.$$

Such vector-valued functions allow us to interpolate arbitrary scattered data. To be more precise, for every set X = {x_1, ..., x_N} ⊆ R^d and data values f_1, ..., f_N ∈ R^n, there exists exactly one interpolant of the form

(3.3)  $$I_X f := \sum_{j=1}^N \Phi(\cdot-x_j)\alpha_j,$$

with I_X f(x_j) = f_j for 1 ≤ j ≤ N. Moreover, the interpolants are stable in the sense that

$$\|f - I_X f\|_{\mathcal N_\Phi(\Omega)} \le \|f\|_{\mathcal N_\Phi(\Omega)}, \qquad \|I_X f\|_{\mathcal N_\Phi(\Omega)} \le \|f\|_{\mathcal N_\Phi(\Omega)}.$$

This immediately follows since (f, Φ(· − x)α)_{N_Φ(Ω)} = α^T f(x) implies

$$(f - I_X f,\ I_X f)_{\mathcal N_\Phi(\Omega)} = \sum_{j=1}^N (f - I_X f,\ \Phi(\cdot-x_j)\alpha_j)_{\mathcal N_\Phi(\Omega)} = \sum_{j=1}^N \alpha_j^T\,[f(x_j) - I_X f(x_j)] = 0.$$

In particular, this orthogonality gives ‖f‖²_{N_Φ(Ω)} = ‖f − I_X f‖²_{N_Φ(Ω)} + ‖I_X f‖²_{N_Φ(Ω)} by the Pythagorean theorem, which yields both inequalities.

3.2. Divergence-free functions. Now, by picking a suitable matrix-valued kernel Φ, we are going to use this approach to build kernels which lead to divergence-free interpolants. A motivation is given by the following observation. Assume we have a divergence-free vector field u : R^3 → R^3 which vanishes at infinity. Then, it is well-known that this vector field has a vector potential; i.e., it can be written in the form u = ∇ × w with a vector potential w : R^3 → R^3 which is also divergence-free. We could approximate this vector potential by a linear combination of the form

$$w(x) = \nabla\times\sum_{j=1}^N \phi(x-x_j)\,\Gamma_j,$$


where φ : R^3 → R is a positive definite function and Γ_j = (α_j, β_j, γ_j)^T ∈ R^3 are to be determined. This definition gives for the velocity field u the representation

$$u = \nabla\times\nabla\times\sum_{j=1}^N \phi(\cdot-x_j)\Gamma_j = (-\Delta I + \nabla\nabla^T)\sum_{j=1}^N \phi(\cdot-x_j)\Gamma_j = \sum_{j=1}^N \Phi(\cdot-x_j)\Gamma_j,$$

where Δ is the usual Laplace operator and I the identity matrix and we have defined

(3.4)  $$\Phi = (-\Delta I + \nabla\nabla^T)\,\phi.$$

This new matrix-valued function is also positive definite in the sense of Definition 3.1. Moreover, the form of Φ in (3.4) is not restricted to the three-dimensional setting.

Lemma 3.3. Let φ : R^d → R be even, positive definite, and in W_1^2(R^d) ∩ C^2(R^d). Then, the matrix-valued function Φ : R^d → R^{d×d} defined by (3.4) is also positive definite.

This lemma has been proven in [21]. For our purposes it is important to note that Φ has the Fourier transform

(3.5)  $$\hat\Phi(\omega) = (\|\omega\|_2^2\,I - \omega\omega^T)\,\hat\phi(\omega).$$
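The construction (3.4) can be carried out symbolically. The sketch below is illustrative only (the Gaussian generator, the dimension d = 2, and all names are assumptions, not taken from the paper); it builds Φ = (−ΔI + ∇∇^T)φ entrywise and confirms that every column of Φ is divergence-free, which is what makes interpolants built from Φ divergence-free:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
xs = (x1, x2)
phi = sp.exp(-(x1**2 + x2**2))            # scalar positive definite generator (Gaussian)

lap_phi = sp.diff(phi, x1, 2) + sp.diff(phi, x2, 2)
# entrywise: Φ_ij = −Δφ δ_ij + ∂_i ∂_j φ, i.e., Φ = (−ΔI + ∇∇^T)φ as in (3.4)
Phi = sp.Matrix(2, 2, lambda i, j: -lap_phi * (1 if i == j else 0)
                                   + sp.diff(phi, xs[i], xs[j]))

# each column of Φ(· − x_j) is divergence-free
div_of_columns = [sp.simplify(sum(sp.diff(Phi[i, k], xs[i]) for i in range(2)))
                  for k in range(2)]
print(div_of_columns)
```

The divergence of the kth column is −∂_k Δφ + ∂_k Δφ = 0 identically, which the symbolic computation reproduces.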

The following theorem has essentially been proven in [9]. Its proof follows ideas from the scalar-valued case as outlined in [30], employing the Moore–Penrose inverse of Φ̂. However, it is possible to avoid this and to give a shorter, straightforward proof, which will be useful to us later on in the context of “combined” kernels.

Theorem 3.4. Suppose φ ∈ W_1^2(R^d) ∩ C^2(R^d) is a positive definite function. Define ψ := −Δφ ∈ L_1(R^d) ∩ C(R^d) and Φ according to (3.4). Then, the following relation holds: N_Φ(R^d) = H_ψ(R^d; div) with identical norms. In particular, the norm on N_Φ(R^d) can be expressed as

$$\|f\|^2_{\mathcal N_\Phi(\mathbb R^d)} = (2\pi)^{-d/2}\int_{\mathbb R^d} \frac{\|\hat f(\omega)\|_2^2}{\|\omega\|_2^2\,\hat\phi(\omega)}\,d\omega.$$

Proof. Let s = Σ_{j=1}^N Φ(· − x_j)α_j be given. Then, by (3.2) and (3.5), we have

$$(2\pi)^{d/2}\|s\|^2_{\mathcal N_\Phi(\mathbb R^d)} = \int_{\mathbb R^d}\sum_{j,k} e^{-i(x_j-x_k)^T\omega}\,\alpha_j^T\hat\Phi(\omega)\alpha_k\,d\omega = \int_{\mathbb R^d}\sum_{j,k} e^{-i(x_j-x_k)^T\omega}\big[\|\omega\|_2^2\,\alpha_j^T\alpha_k - (\alpha_j^T\omega)(\omega^T\alpha_k)\big]\hat\phi(\omega)\,d\omega.$$

Then, using ψ̂(ω) = ‖ω‖_2^2 φ̂(ω) and the Fourier representation

$$\hat s(\omega) = \sum_{j=1}^N e^{-i x_j^T\omega}\,\hat\Phi(\omega)\,\alpha_j = \sum_{j=1}^N e^{-i x_j^T\omega}\big[\|\omega\|_2^2\,\alpha_j - (\omega^T\alpha_j)\,\omega\big]\hat\phi(\omega)$$


allows us to compute

$$\|\hat s(\omega)\|_2^2 = \sum_{j,k=1}^N e^{-i(x_j-x_k)^T\omega}\,|\hat\phi(\omega)|^2\,\|\omega\|_2^2\big[\|\omega\|_2^2\,\alpha_j^T\alpha_k - (\omega^T\alpha_j)(\omega^T\alpha_k)\big] = \hat\psi(\omega)\,\hat\phi(\omega)\sum_{j,k=1}^N e^{-i(x_j-x_k)^T\omega}\big[\|\omega\|_2^2\,\alpha_j^T\alpha_k - (\omega^T\alpha_j)(\omega^T\alpha_k)\big]$$

such that

$$(2\pi)^{d/2}\|s\|^2_{H_\psi(\mathbb R^d)} = \int_{\mathbb R^d}\frac{\|\hat s(\omega)\|_2^2}{\hat\psi(\omega)}\,d\omega = \int_{\mathbb R^d}\hat\phi(\omega)\sum_{j,k=1}^N e^{-i(x_j-x_k)^T\omega}\big[\|\omega\|_2^2\,\alpha_j^T\alpha_k - (\omega^T\alpha_j)(\omega^T\alpha_k)\big]\,d\omega = (2\pi)^{d/2}\|s\|^2_{\mathcal N_\Phi(\mathbb R^d)}.$$

Hence, on F_Φ(R^d) ⊆ H_ψ(R^d; div) both norms are equal. This means in particular that also N_Φ(R^d) ⊆ H_ψ(R^d; div) with equal norms. Suppose finally that we have an f ∈ H_ψ(R^d; div) which is orthogonal to N_Φ(R^d), meaning in particular that

$$0 = (2\pi)^{d/2}(f,\Phi(\cdot-x)\alpha)_{H_\psi(\mathbb R^d)} = \int_{\mathbb R^d} \hat f(\omega)^T\hat\Phi(\omega)\alpha\,e^{i x^T\omega}\,\frac{d\omega}{\hat\psi(\omega)} = \int_{\mathbb R^d} e^{i x^T\omega}\Big[\hat f(\omega)^T\alpha - \frac{(\hat f(\omega)^T\omega)(\omega^T\alpha)}{\|\omega\|_2^2}\Big]d\omega = (2\pi)^{d/2}\,f(x)^T\alpha$$

for arbitrary α, x ∈ R^d. In the last step we used the fact that f is divergence-free, giving

$$0 = (\nabla\cdot f)^\wedge(\omega) = i\,\hat f(\omega)^T\omega.$$

This shows that f is identically zero and hence H_ψ(R^d; div) = N_Φ(R^d).

We are now going to apply this result to the situation of Sobolev spaces. To this end, we introduce the space

$$\tilde H^\tau(\mathbb R^d;\operatorname{div}) := \Big\{f\in H^\tau(\mathbb R^d;\operatorname{div}) : \int_{\mathbb R^d}\frac{\|\hat f(\omega)\|_2^2}{\|\omega\|_2^2}\,(1+\|\omega\|_2^2)^{\tau+1}\,d\omega < \infty\Big\}$$

and equip this space with the norm

$$\|f\|^2_{\tilde H^\tau(\mathbb R^d;\operatorname{div})} := (2\pi)^{-d/2}\int_{\mathbb R^d}\frac{\|\hat f(\omega)\|_2^2}{\|\omega\|_2^2}\,(1+\|\omega\|_2^2)^{\tau+1}\,d\omega.$$

Since ‖f‖_{H^τ(R^d)} ≤ ‖f‖_{H̃^τ(R^d;div)} for all f ∈ H̃^τ(R^d; div), we see that H̃^τ(R^d; div) is indeed a subspace of H^τ(R^d; div).

Corollary 3.5. Let τ > d/2. Suppose φ̂ satisfies (2.2) with τ_j = τ + 1, 1 ≤ j ≤ d. Define Φ := (−ΔI + ∇∇^T)φ. Then,

$$\tilde H^\tau(\mathbb R^d;\operatorname{div}) = \mathcal N_\Phi(\mathbb R^d).$$


We now want to extend functions from H^τ(Ω; div) to H̃^τ(R^d; div). To prove such a result, we will use interpolation theory in Sobolev spaces. For two Banach spaces A_1 ⊆ A_0 and θ ∈ (0, 1) and 1 ≤ p < ∞ we define the interpolation space

$$[A_0,A_1]_{\theta,p} := \{u\in A_0 : \|u\|_{[A_0,A_1]_{\theta,p}} < \infty\},$$

where

$$\|u\|_{[A_0,A_1]_{\theta,p}} = \Big(\int_0^\infty t^{-\theta p}\,K_{A_0,A_1}(t,u)^p\,\frac{dt}{t}\Big)^{1/p}, \qquad K_{A_0,A_1}(t,u) := \inf_{v\in A_1}\big(\|u-v\|_{A_0} + t\,\|v\|_{A_1}\big).$$

If Ω has a Lipschitz boundary, then it is well-known (see, for example, [4, equation (12.2.5)]) that [H^k(Ω), H^{k+1}(Ω)]_{θ,2} = H^{k+θ}(Ω), which immediately carries over to the vector-valued Sobolev spaces. This interpolation property is also true in the case of the divergence-free functions. To see this, we need another general result about interpolation spaces; see [27, section 1.17]. For the convenience of the reader, we include its proof.

Lemma 3.6. Let (A_0, A_1) and (B_0, B_1) denote two interpolation couples, where B_i is a subspace of A_i. Let 1 ≤ p < ∞ and 0 < θ < 1. Suppose there is a bounded linear operator P : A_i → B_i such that P(A_i) = B_i and P|_{B_i} = I for i = 0, 1, where I denotes the identity. Then, [B_0, B_1]_{θ,p} = [A_0, A_1]_{θ,p} ∩ B_0.

Proof. Suppose first that u ∈ [B_0, B_1]_{θ,p}. This means u ∈ B_0 = P(A_0). Furthermore, since B_1 ⊆ A_1 and since the norms on B_i are those from A_i, we also have

$$K_{A_0,A_1}(t,u) = \inf_{v\in A_1}\big(\|u-v\|_{A_0} + t\|v\|_{A_1}\big) \le \inf_{v\in B_1}\big(\|u-v\|_{B_0} + t\|v\|_{B_1}\big) = K_{B_0,B_1}(t,u),$$

which implies ‖u‖_{[A_0,A_1]_{θ,p}} ≤ ‖u‖_{[B_0,B_1]_{θ,p}} and hence u ∈ [A_0, A_1]_{θ,p} ∩ B_0. If, on the other hand, u ∈ [A_0, A_1]_{θ,p} ∩ B_0, then we have u = P u and

$$K_{B_0,B_1}(t,u) = \inf_{v\in B_1}\big(\|u-v\|_{B_0} + t\|v\|_{B_1}\big) = \inf_{v\in A_1}\big(\|Pu-Pv\|_{B_0} + t\|Pv\|_{B_1}\big) \le C\inf_{v\in A_1}\big(\|u-v\|_{A_0} + t\|v\|_{A_1}\big) = C\,K_{A_0,A_1}(t,u),$$

which now implies ‖u‖_{[B_0,B_1]_{θ,p}} ≤ C‖u‖_{[A_0,A_1]_{θ,p}} and hence u ∈ [B_0, B_1]_{θ,p}.

We are now applying this lemma to the situation of divergence-free Sobolev spaces.

Proposition 3.7. Let k ∈ N_0 and 0 < θ < 1. Let Ω ⊆ R^d for d = 2, 3 be a bounded domain with a C^{k+1,1} boundary. Then,

$$[H^k(\Omega;\operatorname{div}),\ H^{k+1}(\Omega;\operatorname{div})]_{\theta,2} = H^{k+\theta}(\Omega;\operatorname{div}).$$


Proof. We give the proof only for d = 3. The proof for d = 2 is similar. We apply Lemma 3.6 to A_i = H^{k+i}(Ω) and B_i = H^{k+i}(Ω; div). To construct the bounded operator P, we will use a variation of the classical Helmholtz–Hodge decomposition. Every u ∈ H^1(Ω) can be decomposed into

(3.6)  $$u = w + \nabla p,$$

where w ∈ H^1(Ω; div) satisfies ∇ · w = 0 and w · n = u · n, and p ∈ H^2(Ω). Here, n denotes the outer unit normal vector of ∂Ω. The function p is determined by solving the Poisson problem Δp = ∇ · u with Neumann boundary conditions ∂p/∂n = 0. From this it follows that u ∈ H^{k+i}(Ω) implies p ∈ H^{k+i+1}(Ω) and ‖∇p‖_{H^{k+i}(Ω)} ≤ C‖u‖_{H^{k+i}(Ω)} for i = 0, 1; see [13, Theorem 1.10]. But this means also w = u − ∇p ∈ H^{k+i}(Ω; div) with

$$\|w\|_{H^{k+i}(\Omega)} \le C\,\|u\|_{H^{k+i}(\Omega)}, \qquad i = 0, 1.$$

Although the vectors w and ∇p in (3.6) are not necessarily orthogonal due to our special choice of boundary conditions w · n = u · n, they are still uniquely determined by these boundary conditions. To see this, assume that there are two decompositions u = w_1 + ∇p_1 = w_2 + ∇p_2 with ∇ · w_1 = ∇ · w_2 = 0 and w_1 · n = w_2 · n = u · n. This means we have w_1 − w_2 = ∇(p_2 − p_1) and both (w_1 − w_2) · n = 0 and ∇ · (w_1 − w_2) = 0. This gives

$$\int_\Omega \|w_1-w_2\|_2^2\,dx = \int_\Omega (w_1-w_2)\cdot\nabla(p_2-p_1)\,dx = \int_{\partial\Omega} (w_1-w_2)\cdot n\,(p_2-p_1)\,dS(x) - \int_\Omega (p_2-p_1)\,\nabla\cdot(w_1-w_2)\,dx = 0.$$

Furthermore, if u is already divergence-free, the Poisson problem Δp = ∇ · u = 0 with boundary conditions ∂p/∂n = 0 allows only a constant as its solution, such that u = w in this case. Thus, we can define a bounded operator P : H^{k+i}(Ω) → H^{k+i}(Ω; div) by setting P u = w, which is linear and the identity on H^{k+i}(Ω; div) by the uniqueness of the construction. Hence, Lemma 3.6 yields

$$[H^k(\Omega;\operatorname{div}), H^{k+1}(\Omega;\operatorname{div})]_{\theta,2} = [H^k(\Omega), H^{k+1}(\Omega)]_{\theta,2}\cap H^k(\Omega;\operatorname{div}) = H^{k+\theta}(\Omega;\operatorname{div}),$$

which is the desired result.

From this, we can now conclude that there is a continuous linear extension operator from H^τ(Ω; div) to H̃^τ(R^d; div).

Proposition 3.8. Let d = 2, 3. Let τ ≥ 0, and let Ω ⊆ R^d be a simply connected domain with C^{⌈τ⌉,1} boundary. Then, there exists a continuous operator Ẽ_div : H^τ(Ω; div) → H̃^τ(R^d; div) such that Ẽ_div u|_Ω = u for all u ∈ H^τ(Ω; div).

Proof. To prove this result, we follow [10]. The idea is to define a continuous operator T : H^k(Ω; div) → H^{k+1}(Ω) such that u = ∇ × T u for all u ∈ H^k(Ω; div). The function v = T u is the unique solution of the boundary value problem

$$\nabla\times v = u \ \text{ in } \Omega,\qquad \nabla\cdot v = 0 \ \text{ in } \Omega,\qquad v\cdot n = 0 \ \text{ on } \partial\Omega,$$


where n denotes again the outer normal vector of ∂Ω. Important to us is the fact that, due to the regularity assumption on the boundary, we can consider T also as a continuous operator T : H^{k+1}(Ω; div) → H^{k+2}(Ω). Hence, Proposition 3.7 and interpolation of operators (see [4, Proposition (14.1.5)]) yields that T is also a bounded operator T : H^τ(Ω; div) → H^{τ+1}(Ω) for τ = k + θ with θ ∈ (0, 1). Thus, we can use the classical Stein extension operator E_S : H^{τ+1}(Ω) → H^{τ+1}(R^d), which exists for Lipschitz bounded domains, and define the required extension operator via

$$\tilde E_{\operatorname{div}}(u) := \nabla\times E_S\,T u.$$

This clearly is an extension operator which produces divergence-free functions. Furthermore, we have with v = T u that

$$\|\tilde E_{\operatorname{div}} u\|^2_{\tilde H^\tau(\mathbb R^d;\operatorname{div})} = (2\pi)^{-d/2}\int_{\mathbb R^d}\frac{\|(\tilde E_{\operatorname{div}} u)^\wedge(\omega)\|_2^2}{\|\omega\|_2^2}\,(1+\|\omega\|_2^2)^{\tau+1}\,d\omega = (2\pi)^{-d/2}\int_{\mathbb R^d}\frac{\|\omega\times (E_S v)^\wedge(\omega)\|_2^2}{\|\omega\|_2^2}\,(1+\|\omega\|_2^2)^{\tau+1}\,d\omega \le C\int_{\mathbb R^d}\|(E_S v)^\wedge(\omega)\|_2^2\,(1+\|\omega\|_2^2)^{\tau+1}\,d\omega = C\,\|E_S v\|^2_{H^{\tau+1}(\mathbb R^d)} \le C\,\|v\|^2_{H^{\tau+1}(\Omega)} \le C\,\|u\|^2_{H^\tau(\Omega)},$$

which shows boundedness.

3.3. Combined kernels. For discretizing the Stokes equations it will be useful to combine the velocity vector u and the pressure p in one (d + 1)-dimensional vector v = (u, p). The approximation of such vectors with a divergence-free vector-valued and a general scalar-valued part can best be achieved using combined matrix-valued kernels. Hence, we define a matrix-valued kernel

(3.7)  $$\Phi := \begin{pmatrix}\tilde\Phi & 0\\ 0 & \psi\end{pmatrix} : \mathbb R^d \to \mathbb R^{(d+1)\times(d+1)},$$

where Φ̃ = (−ΔI + ∇∇^T)φ : R^d → R^{d×d} and φ, ψ : R^d → R are positive definite functions. The tensor product structure of this kernel ensures its positive definiteness. For α_j = (α̃_j, β_j) ∈ R^{d+1} we obviously have

(3.8)  $$\sum_{j,k=1}^N \alpha_j^T\,\Phi(x_j-x_k)\,\alpha_k = \sum_{j,k=1}^N\Big[\tilde\alpha_j^T\,\tilde\Phi(x_j-x_k)\,\tilde\alpha_k + \beta_j\,\psi(x_j-x_k)\,\beta_k\Big] > 0.$$
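The positive definiteness in (3.8) can also be observed numerically. In the sketch below (illustrative only, not part of the paper) the velocity block Φ̃ for a Gaussian generator is written out in the closed form obtained by carrying out the differentiations in (3.4); this explicit formula, and all names and parameter values, are our own assumptions. The smallest eigenvalue of the assembled block matrix is then checked to be positive:

```python
import numpy as np

def Phi_vel(x, eps=1.0):
    """Divergence-free block (−ΔI + ∇∇^T) e^{−ε‖x‖²}, written out explicitly."""
    x = np.asarray(x, dtype=float)
    d, r2 = x.size, x @ x
    return 2.0 * eps * np.exp(-eps * r2) * (
        (d - 1.0 - 2.0 * eps * r2) * np.eye(d) + 2.0 * eps * np.outer(x, x))

def psi(x, eps=2.0):
    """Independent scalar Gaussian for the pressure block."""
    x = np.asarray(x, dtype=float)
    return np.exp(-eps * (x @ x))

rng = np.random.default_rng(1)
pts = rng.random((6, 2))                 # pairwise distinct sites in the unit square
N, d = pts.shape
A = np.zeros((N * (d + 1), N * (d + 1)))
for j in range(N):
    for k in range(N):
        blk = np.zeros((d + 1, d + 1))
        blk[:d, :d] = Phi_vel(pts[j] - pts[k])   # velocity block Φ̃
        blk[d, d] = psi(pts[j] - pts[k])         # pressure block ψ
        A[j*(d+1):(j+1)*(d+1), k*(d+1):(k+1)*(d+1)] = blk

min_eig = np.linalg.eigvalsh((A + A.T) / 2).min()
print(min_eig)  # positive for distinct sites, as (3.8) predicts
```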

Also, the native space structure follows immediately.

Proposition 3.9. Assume that φ, ψ : R^d → R are positive definite and that φ ∈ W_1^2(R^d) ∩ C^2(R^d). Then, the native space of the kernel Φ from (3.7) is given by

$$\mathcal N_\Phi(\mathbb R^d) = \mathcal N_{\tilde\Phi}(\mathbb R^d)\times \mathcal N_\psi(\mathbb R^d)$$

with norm

$$\|f\|^2_{\mathcal N_\Phi(\mathbb R^d)} = \|f_u\|^2_{\mathcal N_{\tilde\Phi}(\mathbb R^d)} + \|f_p\|^2_{\mathcal N_\psi(\mathbb R^d)} = (2\pi)^{-d/2}\int_{\mathbb R^d}\Big[\frac{\|\hat f_u(\omega)\|_2^2}{\|\omega\|_2^2\,\hat\phi(\omega)} + \frac{|\hat f_p(\omega)|^2}{\hat\psi(\omega)}\Big]\,d\omega,$$


where we have used the decomposition f = (f_u, f_p)^T with f_u : R^d → R^d and f_p : R^d → R.

Proof. For any function of the form f = Σ_j Φ(· − x_j)α_j it follows from (3.8) that f has the prescribed norm. By completion, this means indeed that N_Φ(R^d) ⊆ N_Φ̃(R^d) × N_ψ(R^d) with identical norms. Equality of these spaces follows similarly as in the proof of Theorem 3.4.

We will need an extension operator for combined Sobolev spaces as well. The operator is easily constructed using the standard Stein extension operator [25] for the “pressure” part and the extension operator from Proposition 3.8 for the “velocity” part.

Proposition 3.10. Let d = 2, 3. Let τ, β ≥ 0, and let Ω ⊆ R^d be a simply connected domain with C^{⌈τ⌉,1} boundary. Then, there exists a continuous operator E : H^τ(Ω; div) × H^β(Ω) → H̃^τ(R^d; div) × H^β(R^d) such that Ev|_Ω = v|_Ω for all v = (u, p) ∈ H^τ(Ω; div) × H^β(Ω) and

$$\|(Ev)_u\|_{\tilde H^\tau(\mathbb R^d;\operatorname{div})} + \|(Ev)_p\|_{H^\beta(\mathbb R^d)} \le C\big(\|u\|_{H^\tau(\Omega)} + \|p\|_{H^\beta(\Omega)}\big).$$

Proof. The proof follows immediately by defining Ev := (Ẽ_div u, E_S p), where Ẽ_div is the operator from Proposition 3.8 and E_S is the Stein extension operator in its version for fractional Sobolev spaces as investigated in [8].

We will use this result mainly in the situation β = τ − 1. Then, if we pick the generating functions φ and ψ such that they satisfy (2.2) with τ_j = τ + 1 and τ_j = τ − 1 for φ and ψ, respectively, we are in the situation that

$$E : H^\tau(\Omega;\operatorname{div})\times H^{\tau-1}(\Omega) \to \mathcal N_\Phi(\mathbb R^d) = \tilde H^\tau(\mathbb R^d;\operatorname{div})\times H^{\tau-1}(\mathbb R^d).$$

3.4. Generalized interpolation. The easiest way to describe the collocation scheme for the Stokes equation is by employing functionals from the dual space N_Φ(Ω)^∗ = {λ : N_Φ(Ω) → R : λ linear and continuous}. Since f = Φ(· − y)e_j belongs to N_Φ(Ω), where e_j is the jth unit vector, we see that the columns of Φ and, since Φ is symmetric, also its rows belong to N_Φ(Ω). Thus, we can define λ^y Φ(x − y) as the vector-valued function which is generated by applying λ with respect to y to every row of Φ. The resulting vector-valued function is the Riesz representer of λ in N_Φ(Ω) in the sense of λ(f) = (f, λ^y Φ(· − y))_Φ. Thus, the following result, which is well-known in the context of scalar-valued kernels (see, for example, [30, Theorem 16.1]), remains true for matrix-valued kernels.

Proposition 3.11. Suppose Φ : Ω ⊆ R^d → R^{n×n} is a positive definite, matrix-valued kernel. Suppose further that λ_1, ..., λ_N ∈ N_Φ(Ω)^∗ are linearly independent and f_1, ..., f_N ∈ R are given. Then, the problem of finding the solution of

(3.9)  $$\min\{\|s\|_{\mathcal N_\Phi(\Omega)} : \lambda_j(s) = f_j,\ 1\le j\le N\}$$

has a unique solution, which has the representation

$$s_\Lambda = \sum_{j=1}^N \alpha_j\,\lambda_j^y\,\Phi(\cdot-y).$$

The coefficients α_j are determined via the interpolation conditions λ_i(s_Λ) = f_i, 1 ≤ i ≤ N.


We will often build our functionals by starting with vector-valued functionals. Hence, if λ = (λ_1, ..., λ_n) is such a vector-valued functional, the corresponding functional from N_Φ(Ω)^∗ will be defined by

(3.10)  $$\lambda(f) := \sum_{j=1}^n \lambda_j(f_j).$$

Naturally, we have to make sure that the functional so defined is indeed a member of N_Φ(Ω)^∗. For example, if we define nN functionals λ_j^{(i)} = δ_{x_j} e_i for 1 ≤ j ≤ N and 1 ≤ i ≤ n, then it is easy to see that the solution to the minimization problem (3.9) coincides with the classical interpolant I_X f defined in (3.3). More details on this can be found in [21].

4. Approximating the Stokes equations.

4.1. The approximation scheme. We are now describing the approximation scheme for the Stokes problem

(4.1)  $$-\nu\Delta u + \nabla p = f \quad\text{in } \Omega,$$
(4.2)  $$\nabla\cdot u = 0 \quad\text{in } \Omega,$$
(4.3)  $$u = g \quad\text{on } \partial\Omega$$

using a meshfree, kernel-based collocation method with an analytically divergence-free approximation space, such that condition (4.2) is automatically satisfied. To this end, we introduce the combined velocity-pressure vector v = (u, p) : R^d → R^{d+1} such that (4.1)–(4.3) become

$$(Lv)_i := -\nu\,\Delta v_i + \partial_i v_{d+1} = -\nu\sum_{j=1}^d \partial_{jj} v_i + \partial_i v_{d+1} = f_i \quad\text{in } \Omega,$$
$$\sum_{j=1}^d \partial_j v_j = 0 \quad\text{in } \Omega,$$
$$v_i = g_i \quad\text{on } \partial\Omega,$$

where 1 ≤ i ≤ d and where we introduced the operator L, which maps d-variate (d + 1)-vector-valued functions to d-variate d-vector-valued functions. To discretize this set of equations by collocation employing divergence-free kernels, we define the matrix-valued kernel

$$\Phi := \begin{pmatrix}\tilde\Phi & 0\\ 0 & \psi\end{pmatrix} : \mathbb R^d\to\mathbb R^{(d+1)\times(d+1)},$$

where Φ̃ = (−ΔI + ∇∇^T)φ and φ, ψ : R^d → R are positive definite functions. Next, we pick point sets X = {x_1, ..., x_N} ⊆ Ω and Y = {y_1, ..., y_M} ⊆ ∂Ω to define the vector-valued functionals

$$\lambda_j^{(2i)} := -\nu\,\delta_{x_j}\circ\Delta\,e_i + \delta_{x_j}\circ\partial_i\,e_{d+1}, \qquad 1\le j\le N,\ 1\le i\le d,$$
$$\lambda_j^{(2i+1)} := \delta_{y_j}\,e_i, \qquad 1\le j\le M,\ 1\le i\le d.$$


The first set of functionals represents collocation in the interior points, while the second set of functionals represents the boundary conditions pointwise, i.e.,

$$\lambda_j^{(2i)}(v) = (Lv)_i(x_j), \qquad \lambda_j^{(2i+1)}(v) = v_i(y_j).$$

The approximation function is then defined by forming a linear combination of these functionals applied to one argument of the matrix-valued kernel Φ(x − y); i.e., we set

(4.4)  $$s_v(x) := \sum_{k=1}^{2d}\sum_{j=1}^{N_k} \alpha_j^{(k)}\,(\lambda_j^{(k)})^y\,\Phi(x-y),$$

with N_k = N for k even and N_k = M for k odd. The coefficients are determined by the collocation conditions

(4.5)  $$\lambda_j^{(2i)}(s_v) = \lambda_j^{(2i)}(v) = f_i(x_j),$$
(4.6)  $$\lambda_j^{(2i+1)}(s_v) = \lambda_j^{(2i+1)}(v) = g_i(y_j).$$
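The mechanics of assembling and solving such a symmetric collocation system can be illustrated on a simpler scalar model. The sketch below is not from the paper: it is a Poisson analogue (Δu = f in Ω, u = 0 on ∂Ω) with a Gaussian kernel, and the point sets, shape parameter, and helper names are all illustrative assumptions. It builds the matrix with entries λ_i^x λ_j^y φ(x − y) and enforces the collocation conditions exactly:

```python
import numpy as np
import sympy as sp

# kernel and its Laplacians in the difference variable t = x − y (φ is even)
t1, t2 = sp.symbols("t1 t2")
phi = sp.exp(-3 * (t1**2 + t2**2))
lap = lambda f: sp.diff(f, t1, 2) + sp.diff(f, t2, 2)
k00 = sp.lambdify((t1, t2), phi, "numpy")            # φ(x−y)
k10 = sp.lambdify((t1, t2), lap(phi), "numpy")       # Δφ applied to one argument
k11 = sp.lambdify((t1, t2), lap(lap(phi)), "numpy")  # Δ applied to both arguments

# interior sites X carry δ_x∘Δ, boundary sites Y carry point evaluation δ_y
g = np.array([1/3, 2/3])
X = np.array([(a, b) for a in g for b in g])
Y = np.array([(0, 0), (0.5, 0), (1, 0), (1, 0.5),
              (1, 1), (0.5, 1), (0, 1), (0, 0.5)], dtype=float)
f = lambda P: -2 * np.pi**2 * np.sin(np.pi * P[:, 0]) * np.sin(np.pi * P[:, 1])

def gram(kfun, P, Q):
    D = P[:, None, :] - Q[None, :, :]
    return kfun(D[..., 0], D[..., 1])

A = np.block([[gram(k11, X, X), gram(k10, X, Y)],
              [gram(k10, Y, X), gram(k00, Y, Y)]])   # symmetric by evenness of φ
rhs = np.concatenate([f(X), np.zeros(len(Y))])
coef = np.linalg.solve(A, rhs)
residual = np.abs(A @ coef - rhs).max()              # collocation conditions hold
```

The Stokes scheme has the same structure: φ is replaced by the combined matrix-valued kernel Φ and the Laplacian and evaluation functionals by the functionals λ_j^{(k)} above, but one still solves a single symmetric linear system for the coefficients.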

We will make this more precise in the example section. Here, we first show that our approach is well-defined.

Theorem 4.1. Assume that the building functions φ, ψ : R^d → R are positive definite and chosen such that N_Φ(R^d) = H̃^τ(R^d; div) × H^{τ−1}(R^d) with τ > 2 + d/2. Then, the interpolation function s_v = (s_u, s_p)^T from (4.4) is well-defined and uniquely determined by the interpolation conditions (4.5) and (4.6). It satisfies Ls_v(x_j) = f(x_j) and s_u(y_j) = g(y_j). Furthermore, we have ∇ · s_u = 0 in R^d.

Proof. Since τ + 1 > 3 + d/2, we have that φ ∈ C^6(R^d). Hence, the kernel is sufficiently smooth and the functionals indeed belong to the dual of the native space. Thus, we only have to show that the functionals are linearly independent over N_Φ(R^d) = H̃^τ(R^d; div) × H^{τ−1}(R^d). Let us assume that there are coefficients α_j^{(k)} ∈ R such that

(4.7)  $$\sum_{k=1}^{2d}\sum_{j=1}^{N_k} \alpha_j^{(k)}\,\lambda_j^{(k)}(\gamma) = 0$$

for all γ ∈ N_Φ(R^d). We will now pick a specific test function γ for every index pair (i, ℓ). First of all we choose γ to have compact support such that the only data site contained in the support of this specific γ is x_i (for ℓ even) or y_i (for ℓ odd). Hence, in the case of an even ℓ, (4.7) reduces to

$$0 = \sum_{k=1}^d \alpha_i^{(2k)}\,\lambda_i^{(2k)}(\gamma) = \sum_{k=1}^d \alpha_i^{(2k)}\,(L\gamma)_k(x_i).$$

Since we have not yet exploited the second index ℓ, we can now modify γ such that (Lγ)_k(x_i) = δ_{k,ℓ}, which gives α_i^{(2ℓ)} = 0. Since we can do the same in the case of an odd index ℓ, we see that all coefficients have to be zero, showing that the functionals are linearly independent.

We will use the notation s_v = (s_u, s_p)^T to denote the velocity and pressure parts of the approximation. However, the coefficients of these components depend on each other.


4.2. Error analysis. Crucial to our error analysis is the following “shift” theorem relating the smoothness of the domain and the given data to the smoothness of the solution. Its proof can be found in [2, 26].

Theorem 4.2. Let m ∈ N_0, 1 ≤ r < ∞, and let Ω ⊆ R^d be a C^{m+1,1} smooth domain with outer normal vector n. For each f ∈ W_r^m(Ω) and g ∈ W_r^{m+2−1/r}(∂Ω) with ∫_{∂Ω} g · n dS = 0, the nonhomogeneous Stokes problem (4.1)–(4.3) has a unique solution u ∈ W_r^{m+2}(Ω) and p ∈ W_r^{m+1}(Ω)/R and

$$\|u\|_{W_r^{m+2}(\Omega)} + \|p\|_{W_r^{m+1}(\Omega)/\mathbb R} \le C\big(\|f\|_{W_r^m(\Omega)} + \|g\|_{W_r^{m+2-1/r}(\partial\Omega)}\big).$$

Remark 4.3. According to [26, Remark 2.6], the result of Theorem 4.2 remains true for m ∈ R, m ≥ −1, provided r = 2. This means in particular that if Ω is sufficiently smooth and if f ∈ H^{τ−2}(Ω) and g ∈ H^{τ−1/2}(∂Ω) with ∫_{∂Ω} g · n dS = 0, then the solution satisfies (u, p) ∈ H^τ(Ω; div) × H^{τ−1}(Ω) and

$$\|u\|_{H^\tau(\Omega)} + \|p\|_{H^{\tau-1}(\Omega)/\mathbb R} \le C\big(\|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)}\big).$$

We will apply the estimate from Theorem 4.2 to the difference v − s_v with m = 0 to derive

(4.8)  $$\|u - s_u\|_{W_r^2(\Omega)} + \|p - s_p\|_{W_r^1(\Omega)/\mathbb R} \le C\big(\|Lv - Ls_v\|_{L_r(\Omega)} + \|g - s_u\|_{W_r^{2-1/r}(\partial\Omega)}\big).$$

To estimate the two terms on the right-hand side of the last estimate, we first observe that we have

$$(Lv - Ls_v)(x_j) = 0,\quad 1\le j\le N, \qquad (g - s_u)(y_j) = 0,\quad 1\le j\le M.$$
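The convergence rates that follow are driven by the fill distance h_{X,Ω} = sup_{x∈Ω} min_{x_j∈X} ‖x − x_j‖₂. A minimal numerical estimate of this quantity, illustrative only and not part of the paper, assuming Ω = [0, 1]² and a brute-force evaluation grid:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 2))                          # data sites in Ω = [0, 1]^2

# estimate h_{X,Ω} = sup_{x∈Ω} min_j ‖x − x_j‖_2 over a fine grid
gx = np.linspace(0.0, 1.0, 101)
grid = np.stack(np.meshgrid(gx, gx), axis=-1).reshape(-1, 2)
dist = np.linalg.norm(grid[:, None, :] - X[None, :, :], axis=-1)
h = dist.min(axis=1).max()

# adding sites can only decrease the fill distance
X2 = np.vstack([X, rng.random((40, 2))])
dist2 = np.linalg.norm(grid[:, None, :] - X2[None, :, :], axis=-1)
h2 = dist2.min(axis=1).max()
print(h, h2)
```

Refining the point set so that h → 0 is exactly what drives the factors h^{τ−m−d(1/p−1/r)_+} in the sampling inequalities below.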

Hence, we are dealing with smooth functions which have a large number of zeros. In the first case we have functions defined on a bounded region of R^d, while in the second case we are dealing with functions on a manifold. For such functions, we can apply the following “sampling inequalities.” We will first deal with the first case and hence introduce the “fill distance”

$$h_{X,\Omega} := \sup_{x\in\Omega}\ \min_{x_j\in X}\ \|x - x_j\|_2$$

to measure how well the data sites X cover the region Ω. The following result comes from [3, 22, 23].

Proposition 4.4. Let 1 ≤ p, r ≤ ∞, m ∈ N_0, and τ ∈ R with τ > d/p. Suppose Ω ⊂ R^d is a bounded domain having a Lipschitz boundary. Let X ⊆ Ω be a discrete set with mesh norm h = h_{X,Ω} sufficiently small. For each u ∈ W_p^τ(Ω) with u|_X = 0 we have for 0 ≤ m ≤ ⌈τ − d(1/p − 1/r)_+⌉ − 1 that

$$\|u\|_{W_r^m(\Omega)} \le C\,h^{\tau-m-d(1/p-1/r)_+}\,\|u\|_{W_p^\tau(\Omega)},$$

where C > 0 is a constant independent of u and h, and (x)_+ = max{x, 0}.

We will need a version of this proposition which also allows fractional order norms on the left-hand side. To prove it, we will need the following auxiliary results.

Lemma 4.5. Let Ω ⊆ R^d be bounded, having a Lipschitz boundary.


1. Let 1 < p ≤ r < ∞ and τ > d/p − d/r; then we have the continuous embedding

$$W_p^\tau(\Omega) \subseteq W_r^{\tau - d/p + d/r}(\Omega).$$

2. Let 1 ≤ r ≤ p; then we have the continuous embedding W_p^τ(Ω) ⊆ W_r^τ(Ω).

3. Let 1 < r < ∞, 0 ≤ σ ≤ τ, and u ∈ W_r^τ(Ω); then

$$\|u\|_{W_r^\sigma(\Omega)} \le C\,\|u\|_{L_r(\Omega)}^{1-\sigma/\tau}\,\|u\|_{W_r^\tau(\Omega)}^{\sigma/\tau}.$$

Proof. The first property is (1,4,4,5) in [14]. The second property is (1.4.2) Proposition in [4] for integers σ and τ, and the general case follows by interpolation. The third property is, for integers σ and τ, Theorem 4.17 in [1]. Such an inequality is usually referred to as a Gagliardo–Nirenberg inequality, and the general case including fractional orders is, for example, proven in [5, Corollary 3.2] in the case of Ω = R^d. But since we have a universal continuous extension operator E_Ω : W_r^σ(Ω) → W_r^σ(R^d) for all 0 ≤ σ ≤ τ, we can conclude that

$$\|u\|_{W_r^\sigma(\Omega)} \le \|E_\Omega u\|_{W_r^\sigma(\mathbb R^d)} \le \|E_\Omega u\|_{L_r(\mathbb R^d)}^{1-\sigma/\tau}\,\|E_\Omega u\|_{W_r^\tau(\mathbb R^d)}^{\sigma/\tau} \le C\,\|u\|_{L_r(\Omega)}^{1-\sigma/\tau}\,\|u\|_{W_r^\tau(\Omega)}^{\sigma/\tau}$$

for all u ∈ W_r^τ(Ω).

With this, we can prove a new version of Proposition 4.4.

Theorem 4.6. Let Ω ⊆ R^d be a bounded domain with Lipschitz boundary. Let 1 < p, r < ∞ and τ > d/p. Let X ⊆ Ω be a discrete set having mesh norm h = h_{X,Ω} sufficiently small. For each u ∈ W_p^τ(Ω) with u|_X = 0 we have for 0 ≤ σ ≤ τ − d(1/p − 1/r)_+ that

$$\|u\|_{W_r^\sigma(\Omega)} \le C\,h^{\tau-\sigma-d(1/p-1/r)_+}\,\|u\|_{W_p^\tau(\Omega)}.$$

Proof. For σ = 0, the standard estimate in Proposition 4.4 gives

(4.9)  $$\|u\|_{L_r(\Omega)} \le C\,h^{\tau-d(1/p-1/r)_+}\,\|u\|_{W_p^\tau(\Omega)}.$$

Let us first assume that 1 < r ≤ p. This means ( p1 − 1r )+ = 0 such that (4.9) becomes uLr (Ω) ≤ Chτ uWpτ (Ω) . The second item from Lemma 4.5 gives Wpτ (Ω) ⊆ Wrτ (Ω), i.e., uWrτ (Ω) ≤ CuWpτ (Ω) . Inserting these two estimates into the third estimate in Lemma 4.5 gives, for 0 ≤ σ ≤ τ , that 1− σ

σ

σ

1− σ

τ τ uW uWrσ (Ω) ≤ CuLr (Ω) τ (Ω) r

σ

τ ≤ Ch(1− τ )τ uW ττ(Ω) uW τ (Ω) p

p

= Chτ −σ uWpτ (Ω) , which proves the result in this case. If p < r < ∞, we have ( 1p − 1r )+ = that (4.9) becomes

1 p



uLr (Ω) ≤ Chτ −d( p − r ) uWpτ (Ω) . 1

1

d τ−d p+r

Furthermore, the first case in Lemma 4.5 yields Wpτ (Ω) ⊆ Wr u

τ− d + d p r

Wr

(Ω)

≤ CuWpτ (Ω) .

(Ω), i.e.,

1 r

such


Hence, if we set $\mu = \tau - d/p + d/r$, the last two inequalities used in the context of the third case of Lemma 4.5 lead, for $0 \le \sigma \le \mu$, to
$$\|u\|_{W_r^\sigma(\Omega)} \le C \|u\|_{L_r(\Omega)}^{1-\sigma/\mu} \|u\|_{W_r^\mu(\Omega)}^{\sigma/\mu} \le C h^{(1-\sigma/\mu)\mu} \|u\|_{W_p^\tau(\Omega)} \le C h^{\tau - \sigma - d/p + d/r} \|u\|_{W_p^\tau(\Omega)},$$
which finishes the proof.

It is easy to see that this result carries over to the case of vector-valued functions.

Corollary 4.7. Under the assumptions of Theorem 4.6 on $p, r, \tau, \sigma, X$, and $\Omega$ assume that $u \in W_p^\tau(\Omega)$ satisfies $u|_X = 0$. Then, we also have
$$\|u\|_{W_r^\sigma(\Omega)} \le C h^{\tau-\sigma-d(1/p-1/r)_+} \|u\|_{W_p^\tau(\Omega)}.$$

Now it is easy to bound the term $\|Lv - Ls_v\|_{L_r(\Omega)}$. Since our extension operator is defined only in the case of $d = 2, 3$, we will, from now on, restrict ourselves to these cases. However, it is worth pointing out that this is the only reason why we explicitly need the restriction in space dimension. Hence, if the extension operator is proven to exist in arbitrary space dimension, everything else will also hold in arbitrary dimension.

Proposition 4.8. Let $\tau > 2 + d/2$ and $1 < r < \infty$. Assume $\Omega \subseteq \mathbb{R}^d$ is a bounded, simply connected region with a $C^{\lceil\tau\rceil,1}$ boundary. Let $f \in H^{\tau-2}(\Omega)$ and $g \in H^{\tau-1/2}(\partial\Omega)$ satisfy $\int_{\partial\Omega} g \cdot n \, dS = 0$. Suppose that the kernel $\Phi$ is chosen such that $\mathcal{N}_\Phi(\mathbb{R}^d) = H^\tau(\mathbb{R}^d; \mathrm{div}) \times H^{\tau-1}(\mathbb{R}^d)$. Then,
$$\|Lv - Ls_v\|_{L_r(\Omega)} \le C(\nu)\, h^{\tau-2-d(1/2-1/r)_+} \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right).$$

Proof. First of all, we have
$$\|Lv - Ls_v\|_{L_r(\Omega)} \le C h^{\tau-2-d(1/2-1/r)_+} \|Lv - Ls_v\|_{H^{\tau-2}(\Omega)}.$$
To bound the latter norm we first extend the function $v$ to $Ev = (E_{\mathrm{div}} u, E_S p) \in H^\tau(\mathbb{R}^d;\mathrm{div}) \times H^{\tau-1}(\mathbb{R}^d)$ and note that the generalized interpolant $s_v$ coincides with $s_{Ev}$. Furthermore, if we pick the representer $p$ for the pressure such that $\|p\|_{H^{\tau-1}(\Omega)} = \|p\|_{H^{\tau-1}(\Omega)/\mathbb{R}}$, then we have
\begin{align*}
\|Lv - Ls_v\|_{H^{\tau-2}(\Omega)} &= \|LEv - Ls_{Ev}\|_{H^{\tau-2}(\Omega)} \\
&\le \nu \|\Delta E_{\mathrm{div}} u - \Delta s_{E_{\mathrm{div}} u}\|_{H^{\tau-2}(\Omega)} + \|\nabla E_S p - \nabla s_{E_S p}\|_{H^{\tau-2}(\Omega)} \\
&\le \nu \|E_{\mathrm{div}} u - s_{E_{\mathrm{div}} u}\|_{H^{\tau}(\Omega)} + \|E_S p - s_{E_S p}\|_{H^{\tau-1}(\Omega)} \\
&\le \nu \|E_{\mathrm{div}} u - s_{E_{\mathrm{div}} u}\|_{H^{\tau}(\mathbb{R}^d;\mathrm{div})} + \|E_S p - s_{E_S p}\|_{H^{\tau-1}(\mathbb{R}^d)} \\
&\le C(\nu) \|Ev - s_{Ev}\|_{\mathcal{N}_\Phi(\mathbb{R}^d)} \le C(\nu) \|Ev\|_{\mathcal{N}_\Phi(\mathbb{R}^d)} \\
&\le C(\nu) \left( \|E_{\mathrm{div}} u\|_{H^{\tau}(\mathbb{R}^d;\mathrm{div})} + \|E_S p\|_{H^{\tau-1}(\mathbb{R}^d)} \right) \\
&\le C(\nu) \left( \|u\|_{H^{\tau}(\Omega)} + \|p\|_{H^{\tau-1}(\Omega)} \right) \\
&\le C(\nu) \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right).
\end{align*}

To bound the boundary part $\|g - s_u\|_{W_r^{2-1/r}(\partial\Omega)}$ in the estimate (4.8) we have to carry the concept of Proposition 4.4, Theorem 4.6, and Corollary 4.7 over to the manifold


$\partial\Omega$. This has partially been done in [11] for the special case of $\partial\Omega$ being the sphere in $\mathbb{R}^d$ and in a more general context in [12]. With a slight abuse of terminology, we will represent the boundary $\partial\Omega$ by a finite atlas consisting of smooth diffeomorphisms. To be more precise, we assume that $\partial\Omega = \cup_{j=1}^K V_j$, where $V_j \subseteq \partial\Omega$ are relatively open sets. Moreover, the sets $V_j$ are images of $C^{k,s}$-diffeomorphisms $\varphi_j : B \to V_j$, where $B = B(0,1)$ denotes the unit ball in $\mathbb{R}^{d-1}$. Finally, suppose $\{w_j\}$ is a partition of unity with respect to $\{V_j\}$. Then, the Sobolev norms on $\partial\Omega$ can equivalently be defined via
$$\|u\|_{W_p^\mu(\partial\Omega)}^p = \sum_{j=1}^K \|(u w_j) \circ \varphi_j\|_{W_p^\mu(B)}^p.$$
It is well known that this norm is independent of the chosen atlas $\{V_j, \varphi_j\}$, but this is of less importance here, since we will assume that the atlas is fixed. We will measure the density of the points $Y$ on $\partial\Omega$ by introducing the mesh norm
$$h_{Y,\partial\Omega} := \max_{1\le j\le K} h_{T_j, B}, \qquad T_j = \varphi_j^{-1}(Y \cap V_j) \subseteq B.$$
As mentioned before, we will assume the atlas fixed and hence do not have to worry about the dependence of $h_{Y,\partial\Omega}$ on the atlas. Similarly as in the proof of [12, Theorem 3.10], we will now derive a version of Theorem 4.6 for manifolds.

Proposition 4.9. Let $1 < p, r < \infty$ and $\tau = k + s > d/p$. Let $\Omega \subseteq \mathbb{R}^d$ be a bounded domain having a $C^{k,s}$ smooth boundary. Assume that $Y \subseteq \partial\Omega$ with $h_{Y,\partial\Omega}$ sufficiently small. Then, there is a constant $C > 0$ such that for all $u \in W_p^\tau(\Omega)$ with $u|_Y = 0$ we have for $0 \le \sigma \le \tau - 1/p - (d-1)(1/p - 1/r)_+$ that

(4.10) $$\|u\|_{W_r^\sigma(\partial\Omega)} \le C h_{Y,\partial\Omega}^{\tau - 1/p - \sigma - (d-1)(1/p-1/r)_+} \|u\|_{W_p^\tau(\Omega)}.$$

Proof. We use the notation introduced in the paragraph before this proposition. In addition, let $u_j = (u w_j) \circ \varphi_j \in W_p^{\tau-1/p}(B)$. Then, Theorem 4.6 yields for $0 \le \sigma \le \tau - 1/p - (d-1)(1/p-1/r)_+$ that
\begin{align*}
\|u\|_{W_r^\sigma(\partial\Omega)}^r &= \sum_{j=1}^K \|u_j\|_{W_r^\sigma(B)}^r
\le C \sum_{j=1}^K h_{T_j,B}^{r(\tau-1/p-\sigma-(d-1)(1/p-1/r)_+)} \|u_j\|_{W_p^{\tau-1/p}(B)}^r \\
&\le C h_{Y,\partial\Omega}^{r(\tau-1/p-\sigma-(d-1)(1/p-1/r)_+)} \|u\|_{W_p^{\tau-1/p}(\partial\Omega)}^r
\le C h_{Y,\partial\Omega}^{r(\tau-1/p-\sigma-(d-1)(1/p-1/r)_+)} \|u\|_{W_p^\tau(\Omega)}^r,
\end{align*}
which gives the required result.

Clearly, this can be carried over to the vector-valued case. The same procedure as the one employed in the proof of Proposition 4.8 leads to the following result.


Proposition 4.10. Assume that $\Omega$ and $f$, $g$ satisfy the smoothness assumptions of Proposition 4.8. Suppose that the kernel $\Phi$ is chosen such that $\mathcal{N}_\Phi(\mathbb{R}^d) = H^\tau(\mathbb{R}^d;\mathrm{div}) \times H^{\tau-1}(\mathbb{R}^d)$, with $\tau > 2 + d/2$. Then, for sufficiently small $h_{Y,\partial\Omega}$ and $1 < r < \infty$, there is a constant $C = C(\nu)$ such that
$$\|g - s_u\|_{W_r^{2-1/r}(\partial\Omega)} \le C h_{Y,\partial\Omega}^{\tau-2-1/2+1/r-(d-1)(1/2-1/r)_+} \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right).$$

Proof. First of all, setting $p = 2$ and $\sigma = 2 - 1/r$ in the vector version of (4.10) gives
$$\|g - s_u\|_{W_r^{2-1/r}(\partial\Omega)} = \|u - s_u\|_{W_r^{2-1/r}(\partial\Omega)} \le C h_{Y,\partial\Omega}^{\tau-2-1/2+1/r-(d-1)(1/2-1/r)_+} \|u - s_u\|_{H^\tau(\Omega)}.$$
However, as in the proof of Proposition 4.8 we can conclude that
$$\|u - s_u\|_{H^\tau(\Omega)} \le C(\nu) \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right),$$
which completes the proof.

After estimating both parts in (4.8) we have proven our main result.

Theorem 4.11. Assume that $\Omega$ and $f$, $g$ satisfy the smoothness assumptions of Proposition 4.8. Suppose that the kernel $\Phi$ is chosen such that its native space is $\mathcal{N}_\Phi(\mathbb{R}^d) = H^\tau(\mathbb{R}^d;\mathrm{div}) \times H^{\tau-1}(\mathbb{R}^d)$, with $\tau > 2 + d/2$. Then, the error between the true solution and the collocation approximation can be bounded by
$$\|u - s_u\|_{W_r^2(\Omega)} + \|p - s_p\|_{W_r^1(\Omega)/\mathbb{R}} \le C \left( h_{X,\Omega}^{\tau-2-d(1/2-1/r)_+} + h_{Y,\partial\Omega}^{\tau-2-1/2+1/r-(d-1)(1/2-1/r)_+} \right) \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right)$$
for $1 < r < \infty$. If $r \ge 2$ and $h = h_{X,\Omega} \approx h_{Y,\partial\Omega}$, this reduces to
$$\|u - s_u\|_{W_r^2(\Omega)} + \|p - s_p\|_{W_r^1(\Omega)/\mathbb{R}} \le C h^{\tau-2-d(1/2-1/r)} \left( \|f\|_{H^{\tau-2}(\Omega)} + \|g\|_{H^{\tau-1/2}(\partial\Omega)} \right).$$

It is very likely that a similar result holds for
$$\|u - s_u\|_{W_r^1(\Omega)} + \|p - s_p\|_{L_r(\Omega)/\mathbb{R}} \le C \left( \|f\|_{W_r^{-1}(\Omega)} + \|g\|_{W_r^{1-1/r}(\partial\Omega)} \right).$$
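To see what the rates of the main theorem mean concretely, here is a worked instantiation (our illustration, not part of the original text) for the setting of the numerical example in section 5, where $d = 2$ and $\tau = 4.5$:

```latex
% Our instantiation of Theorem 4.11 with d = 2 and \tau = 4.5 (section 5).
% For r \ge 2 and h = h_{X,\Omega} \approx h_{Y,\partial\Omega} the exponent is
%   \tau - 2 - d(1/2 - 1/r) = 2.5 - 2(1/2 - 1/r).
% For r = 2 this gives the rate h^{2.5}:
\|u - s_u\|_{W_2^2(\Omega)} + \|p - s_p\|_{W_2^1(\Omega)/\mathbb{R}}
  \le C\, h^{2.5} \bigl( \|f\|_{H^{2.5}(\Omega)} + \|g\|_{H^{4}(\partial\Omega)} \bigr).
% Letting r \to \infty formally gives the exponent 2.5 - 1 = 1.5.
```

These two exponents, 2.5 and 1.5, are exactly the "estimated" orders for the second-derivative errors reported in Table 5.2 below.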

Proving such an estimate, however, would require extending the results of Proposition 4.4 to negative Sobolev indices, which has so far not been done and is not obvious, since the usual techniques from finite elements do not immediately work in our situation.

5. An example. We consider the following example for the two-dimensional Stokes problem, taken from [7]. Let $\Omega = (0,1)^2$ and $\nu = 1$. We assume that the exact solution is given by
$$u(x) = \left( 20 x_1 x_2^3, \; 5 x_1^4 - 5 x_2^4 \right)^T, \qquad p(x) = 60 x_1^2 x_2 - 20 x_2^3 + C.$$
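This pair is indeed an exact solution with homogeneous right-hand side, which can be checked symbolically. The following sketch is our own verification (not part of the paper's computations):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
nu = 1  # viscosity of the example

# exact solution of the example
u = sp.Matrix([20*x1*x2**3, 5*x1**4 - 5*x2**4])
p = 60*x1**2*x2 - 20*x2**3  # additive constant C is irrelevant

# incompressibility: div u = 0
div_u = sp.diff(u[0], x1) + sp.diff(u[1], x2)
assert sp.simplify(div_u) == 0

# momentum equation: f = -nu*Delta(u) + grad(p) vanishes identically
laplace = lambda w: sp.diff(w, x1, 2) + sp.diff(w, x2, 2)
f = sp.Matrix([-nu*laplace(u[0]) + sp.diff(p, x1),
               -nu*laplace(u[1]) + sp.diff(p, x2)])
assert sp.simplify(f[0]) == 0 and sp.simplify(f[1]) == 0
print("div u = 0 and f = 0")
```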

Figure 5.1 shows the velocity field, with vectors scaled to unit length, on the left and the contour lines of the pressure (with values for $C = 0$) on the right. The boundary function $g$ is, of course, defined by $u$ on $\partial\Omega$. Furthermore, it is easy to see that $f = 0$. As indicated in the last section, we want to describe in more detail how we form the approximant; in doing so, we also slightly change the notation. Let us assume that we have chosen $X = \{x_1, \ldots, x_N\} \subseteq \Omega$ and $Y = \{y_1, \ldots, y_M\} \subseteq \partial\Omega$. Then,

Fig. 5.1. Velocity field visualized with unit length vectors (left) and contour lines of the pressure field (right).

since we are working in two-dimensional space, we have four types of functionals: two represent the interior collocation conditions, and two represent the boundary conditions. In vector form, these functionals are given by
$$\lambda_j = \begin{pmatrix} -\nu \delta_{x_j} \circ \Delta \\ 0 \\ \delta_{x_j} \circ \partial_1 \end{pmatrix}, \quad 1 \le j \le N, \qquad
\lambda_j = \begin{pmatrix} 0 \\ -\nu \delta_{x_{j-N}} \circ \Delta \\ \delta_{x_{j-N}} \circ \partial_2 \end{pmatrix}, \quad N+1 \le j \le 2N,$$
$$\lambda_j = \begin{pmatrix} \delta_{y_{j-2N}} \\ 0 \\ 0 \end{pmatrix}, \quad 2N+1 \le j \le 2N+M, \qquad
\lambda_j = \begin{pmatrix} 0 \\ \delta_{y_{j-2N-M}} \\ 0 \end{pmatrix}, \quad 2N+M+1 \le j \le 2N+2M.$$
The interpolant is then formed by applying these functionals to the second argument of the kernel $\Phi(x-y)$, which leads to
\begin{align*}
s_v(x) = \sum_{j=1}^{2N+2M} \alpha_j \lambda_j^y \Phi(x-y)
&= \sum_{j=1}^{N} \alpha_j \begin{pmatrix} \nu \partial_{22} \Delta \phi(x-x_j) \\ -\nu \partial_{12} \Delta \phi(x-x_j) \\ -\partial_1 \psi(x-x_j) \end{pmatrix}
+ \sum_{j=N+1}^{2N} \alpha_j \begin{pmatrix} -\nu \partial_{12} \Delta \phi(x-x_{j-N}) \\ \nu \partial_{11} \Delta \phi(x-x_{j-N}) \\ -\partial_2 \psi(x-x_{j-N}) \end{pmatrix} \\
&\quad + \sum_{j=2N+1}^{2N+M} \alpha_j \begin{pmatrix} -\partial_{22} \phi(x-y_{j-2N}) \\ \partial_{12} \phi(x-y_{j-2N}) \\ 0 \end{pmatrix}
+ \sum_{j=2N+M+1}^{2N+2M} \alpha_j \begin{pmatrix} \partial_{12} \phi(x-y_{j-2N-M}) \\ -\partial_{11} \phi(x-y_{j-2N-M}) \\ 0 \end{pmatrix},
\end{align*}
where the first two components represent the velocity field and the last one represents the pressure. The interpolation matrix is then determined by applying the functionals to the interpolant.
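The velocity parts of the boundary terms are columns of the divergence-free matrix kernel $(-\Delta I + \nabla\nabla^T)\phi$, so the velocity approximant is analytically divergence-free by construction. A small symbolic sketch (our illustration; a Gaussian stands in for the compactly supported Wendland function used below) verifies this structural property:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# any sufficiently smooth scalar kernel works for this structural check;
# a Gaussian stands in for the Wendland function of the paper
phi = sp.exp(-(x1**2 + x2**2))

# velocity parts of the two boundary columns: the columns of the
# divergence-free matrix kernel (-Delta*I + grad grad^T) phi
col1 = (-sp.diff(phi, x2, 2), sp.diff(phi, x1, x2))
col2 = (sp.diff(phi, x1, x2), -sp.diff(phi, x1, 2))

for u1, u2 in (col1, col2):
    # div of each column: the mixed derivatives cancel exactly
    assert sp.simplify(sp.diff(u1, x1) + sp.diff(u2, x2)) == 0
print("both kernel columns are divergence-free")
```

The cancellation is exact: for the first column, $\partial_1(-\partial_{22}\phi) + \partial_2(\partial_{12}\phi) = 0$ for any smooth $\phi$.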


In our numerical example, we employ the compactly supported radial basis function $\phi(x) = \phi(r)$, $r = \|x\|_2$, with
$$\phi(r) = (1-r)_+^{10} \left( 429 r^4 + 450 r^3 + 210 r^2 + 50 r + 5 \right),$$
which belongs to $C^8(\mathbb{R}^2)$ and generates the Sobolev space $W_2^{5.5}(\mathbb{R}^2)$ (see [29]). Thus, we have $\tau = 4.5$. We scale the basis function by a factor of $1/10$ to compensate for the numerical support; i.e., we are using $\phi(\cdot/10)$, which is a basis function having theoretical support in $[0,10]$. However, similar results were achieved with any support radius between 1 and 10. We keep the support radius fixed throughout our computations such that, in each case, we have a full system. We also use this function for $\psi$, though technically a function generating $W_2^{3.5}(\mathbb{R}^2)$ would be sufficient. However, from classical kernel interpolation theory it is known that smoother kernels employed for approximating rougher functions lead to the approximation order of the rougher space. In our example, however, the pressure is infinitely smooth, so we do not expect a degeneration in the approximation order.

The numerical tests were run on a series of equidistant meshes, where the number of points is doubled in each new computation. The linear systems are solved by a standard LU method with pivoting. The error is analyzed on a fine $300 \times 300$ mesh. From the classical stability analysis for radial basis function interpolation (see [20, 24, 9]) we have to expect ill-conditioned matrices. Hence, to avoid any negative influence from such ill-conditioning, as a precaution, all computations were done using quad-double precision. The results are given in Table 5.1 and in Figure 5.2. In both cases, we used the notation $e_u = u - s_u$ and $e_p = p - s_p$. Table 5.2 contains the approximation orders

Table 5.1
Approximation errors for the Stokes problem using a regular grid of width h.

h    | ‖e_u‖_∞    | ‖Δe_u‖_∞   | ‖e_u‖_2    | ‖Δe_u‖_2   | ‖∇e_p‖_∞   | ‖∇e_p‖_2
1/4  | 1.9335e-01 | 5.0578e+01 | 8.4643e-02 | 1.0289e+01 | 3.8917e+01 | 1.1865e+01
1/8  | 6.8463e-03 | 1.1666e+01 | 1.4669e-03 | 8.9352e-01 | 4.7894e+00 | 6.9091e-01
1/16 | 3.3125e-04 | 2.6241e+00 | 6.0479e-05 | 1.2125e-01 | 8.5801e-01 | 8.0065e-02
1/32 | 1.8936e-05 | 5.9224e-01 | 3.2883e-06 | 1.9686e-02 | 1.6156e-01 | 9.7144e-03
1/64 | 1.1634e-06 | 1.3868e-01 | 2.0282e-07 | 3.3334e-03 | 3.0745e-02 | 1.2243e-03
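The computed orders reported in Table 5.2 follow from the errors in Table 5.1 by comparing successive grids: since $h$ is halved in each step, the order between two rows is $\log_2$ of the error ratio. A small sketch of this bookkeeping (ours), applied to the $\|e_u\|_\infty$ column:

```python
import math

# ||e_u||_inf from Table 5.1 for h = 1/4, 1/8, ..., 1/64
errors = [1.9335e-01, 6.8463e-03, 3.3125e-04, 1.8936e-05, 1.1634e-06]

# order between successive grids: halving h should shrink the error
# by a factor of roughly 2^order
orders = [math.log(errors[i] / errors[i + 1], 2) for i in range(4)]
print([round(o, 2) for o in orders])  # → [4.82, 4.37, 4.13, 4.02]
```

These values match the first column of Table 5.2 up to rounding.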

Fig. 5.2. Convergence results. Plotted are (1) ‖e_u‖_{L_∞}, (2) ‖Δe_u‖_{L_∞}, (3) ‖e_u‖_{L_2}, (4) ‖Δe_u‖_{L_2}, (5) ‖∇e_p‖_{L_∞}, and (6) ‖∇e_p‖_{L_2} in double logarithmic style.

Table 5.2
Approximation orders.

          | ‖e_u‖_∞ | ‖Δe_u‖_∞ | ‖e_u‖_2 | ‖Δe_u‖_2 | ‖∇e_p‖_∞ | ‖∇e_p‖_2
computed  | 4.8198  | 2.1162   | 5.8506  | 3.5255   | 3.0225   | 4.1020
          | 4.3693  | 2.1524   | 4.6002  | 2.8816   | 2.4808   | 3.1093
          | 4.1287  | 2.1476   | 4.2010  | 2.6227   | 2.4090   | 3.0430
          | 4.0248  | 2.0944   | 4.0191  | 2.5621   | 2.3936   | 2.9881
estimated | 3.5     | 1.5      | 4.5     | 2.5      | 1.5      | 2.5

computed from the computational results and the expected orders from Theorem 4.11. Note, however, that neither the $L_2$ orders nor any orders in the $L_\infty$-norm are actually theoretically corroborated by Theorem 4.11. In all cases, maybe with the exception of the $L_2$ error of the velocity field, the estimated approximation orders are more than matched.

Acknowledgment. I would like to thank Daniela Schräder for carefully reading drafts of this paper. I would also like to thank the anonymous referees. Their comments were extremely helpful in improving the paper substantially.

REFERENCES

[1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[2] C. Amrouche and V. Girault, On the existence and regularity of the solution of Stokes problem in arbitrary dimension, Proc. Japan Acad. Ser. A Math. Sci., 67 (1991), pp. 171–175.
[3] R. Arcangéli, M. C. L. de Silanes, and J. J. Torrens, An extension of a bound for functions in Sobolev spaces, with applications to (m, s)-spline interpolation and smoothing, Numer. Math., 107 (2007), pp. 181–211.
[4] S. Brenner and L. Scott, The Mathematical Theory of Finite Element Methods, Springer, New York, 1994.
[5] H. Brezis and P. Mironescu, Gagliardo–Nirenberg, composition and products in fractional Sobolev spaces, J. Evol. Equ., 1 (2001), pp. 387–404.
[6] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer, New York, 1991.
[7] E. Burman, Pressure projection stabilizations for Galerkin approximations of Stokes' and Darcy's problem, Numer. Methods Partial Differential Equations, 24 (2008), pp. 127–143.
[8] R. A. DeVore and R. C. Sharpley, Besov spaces on domains in R^d, Trans. Amer. Math. Soc., 335 (1993), pp. 843–864.
[9] E. Fuselier, Improved stability estimates and a characterization of the native space for matrix-valued RBFs, Adv. Comput. Math., 29 (2008), pp. 269–290.
[10] E. Fuselier, Sobolev-type approximation rates for divergence-free and curl-free RBF interpolants, Math. Comp., 77 (2008), pp. 1407–1423.
[11] Q. T. Le Gia, F. J. Narcowich, J. D. Ward, and H. Wendland, Continuous and discrete least-squares approximation by basis functions on spheres, J. Approx. Theory, 143 (2006), pp. 124–133.
[12] P. Giesl and H. Wendland, Meshless collocation: Error estimates with application to dynamical systems, SIAM J. Numer. Anal., 45 (2007), pp. 1723–1741.
[13] V. Girault and P.-A. Raviart, Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms, Springer Ser. Comput. Math. 5, Springer, Berlin, 1986.
[14] P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman, Boston, 1985.
[15] E. J. Kansa, Multiquadrics—a scattered data approximation scheme with applications to computational fluid-dynamics. II: Solutions to parabolic, hyperbolic and elliptic partial differential equations, Comput. Math. Appl., 19 (1990), pp. 147–161.
[16] S. Lowitzsch, Approximation and Interpolation Employing Divergence-Free Radial Basis Functions with Applications, Ph.D. thesis, Texas A&M University, College Station, TX, 2002.
[17] S. Lowitzsch, A density theorem for matrix-valued radial basis functions, Numer. Algorithms, 39 (2005), pp. 253–256.
[18] S. Lowitzsch, Error estimates for matrix-valued radial basis function interpolation, J. Approx. Theory, 137 (2005), pp. 234–249.
[19] S. Lowitzsch, Matrix-valued radial basis functions: Stability estimates and applications, Adv. Comput. Math., 23 (2005), pp. 299–315.
[20] F. J. Narcowich and J. D. Ward, Norm estimates for the inverse of a general class of scattered-data radial-function interpolation matrices, J. Approx. Theory, 69 (1992), pp. 84–109.
[21] F. J. Narcowich and J. D. Ward, Generalized Hermite interpolation via matrix-valued conditionally positive definite functions, Math. Comp., 63 (1994), pp. 661–687.
[22] F. J. Narcowich, J. D. Ward, and H. Wendland, Sobolev bounds on functions with scattered zeros, with applications to radial basis function surface fitting, Math. Comp., 74 (2005), pp. 743–763.
[23] F. J. Narcowich, J. D. Ward, and H. Wendland, Sobolev error estimates and a Bernstein inequality for scattered data interpolation via radial basis functions, Constr. Approx., 24 (2006), pp. 175–186.
[24] R. Schaback, Error estimates and condition number for radial basis function interpolation, Adv. Comput. Math., 3 (1995), pp. 251–264.
[25] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, Princeton, NJ, 1971.
[26] R. Temam, Navier–Stokes Equations: Theory and Numerical Analysis, North-Holland, Amsterdam, 1977.
[27] H. Triebel, Interpolation Theory, Function Spaces, Differential Operators, North-Holland, Amsterdam, 1978.
[28] U. Trottenberg, C. Oosterlee, and A. Schüller, Multigrid, Academic Press, San Diego, 2001.
[29] H. Wendland, Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree, Adv. Comput. Math., 4 (1995), pp. 389–396.
[30] H. Wendland, Scattered Data Approximation, Cambridge Monogr. Appl. Comput. Math., Cambridge University Press, Cambridge, UK, 2005.