INTERPOLATION APPROXIMATIONS BASED ON GAUSS-LOBATTO-LEGENDRE-BIRKHOFF QUADRATURE LI-LIAN WANG1

BEN-YU GUO2

Abstract. We derive in this paper the asymptotic estimates of the nodes and weights of the Gauss-Lobatto-Legendre-Birkhoff (GLLB) quadrature formula, and obtain optimal error estimates for the associated GLLB interpolation in Jacobi weighted Sobolev spaces. We also present a user-oriented implementation of the pseudospectral methods based on the GLLB quadrature nodes for Neumann problems. This approach allows an exact imposition of Neumann boundary conditions, and is as efficient as the pseudospectral methods based on Gauss-Lobatto quadrature for PDEs with Dirichlet boundary conditions.

1. Introduction

In a recent work, Ezzirani and Guessab [8] proposed a fast algorithm for computing the nodes and weights of some Gauss-Birkhoff type quadrature formulas (cf. [20, 21]) with mixed boundary conditions. One special rule of particular importance, termed the Gauss-Lobatto-Legendre-Birkhoff (GLLB) quadrature, takes the form
\[ \int_{-1}^{1}\phi(x)\,dx \approx \phi'(-1)\,\omega_- + \sum_{j=1}^{N-1}\phi(x_j)\,\omega_j + \phi'(1)\,\omega_+, \tag{1.1} \]
which is exact for all polynomials of degree $2N-1$, and whose interior nodes are zeros of the quasi-orthogonal polynomial (cf. [26]) formed by a linear combination of the Jacobi polynomials $J_{N-1}^{(2,2)}(x)$ and $J_{N-3}^{(2,2)}(x)$. Motivated by [8], our first intention is to study the asymptotic behaviors of the nodes and weights of (1.1). Two important results are
\[ x_j \cong \cos\frac{(2j+\frac12)\pi}{2N+1}, \qquad 1\le j\le N-1, \tag{1.2} \]
and
\[ \omega_j \cong \frac{\pi}{N}\sin\theta_j, \qquad \theta_j = \arccos x_j, \qquad 1\le j\le N-1. \tag{1.3} \]
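For a quick numerical feel for these estimates, the asymptotic formulas (1.2)-(1.3) are easy to tabulate. The following is a minimal sketch (illustrative code, not from the paper; the function name is ours):

```python
import numpy as np

def gllb_nodes_weights_asymptotic(N):
    """Asymptotic interior nodes and weights of the GLLB rule, per (1.2)-(1.3)."""
    j = np.arange(1, N)                          # j = 1, ..., N-1
    theta = (2 * j + 0.5) * np.pi / (2 * N + 1)  # theta_j = arccos(x_j)
    x = np.cos(theta)                            # x_j ~ cos((2j+1/2)pi/(2N+1))
    w = (np.pi / N) * np.sin(theta)              # w_j ~ (pi/N) sin(theta_j)
    return x, w

x, w = gllb_nodes_weights_asymptotic(100)
# Since phi'(+-1) = 0 for phi = 1, exactness of (1.1) forces the interior
# weights to sum to 2; the asymptotic weights already nearly do for large N.
print(abs(w.sum() - 2.0))
```

The nodes come out strictly decreasing in $(-1,1)$ and the weights positive, consistent with Theorem 2.1 below.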

1991 Mathematics Subject Classification. 65N30, 65N35, 65M60.

Key words and phrases. GLLB quadrature rule, collocation method, Neumann problems, asymptotic estimates, interpolation errors.

1 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University (NTU), 637616, Singapore ([email protected]). The work of this author is partially supported by a Startup Grant of NTU, Singapore MOE Grant T207B2202, and Singapore NRF2007IDM-IDM002-010.

2 Department of Mathematics, Shanghai Normal University, Shanghai, China, 200234; Scientific Computing Key Laboratory of Shanghai Universities; Shanghai E-institute for Computational Science. The work of this author is supported in part by Science and Technology Commission of Shanghai Municipality, Grant N.075105118, Shanghai Leading Academic Discipline Project N.T0401 and Fund for E-institute of Shanghai Universities N.E03004.


With the aid of these estimates, we are able to analyze the GLLB interpolation errors:
\[ \|I_N u - u\|_{L^2(I)} + N^{-1}\|(1-x^2)^{1/2}(I_N u - u)'\|_{L^2(I)} \le cN^{-m}\|(1-x^2)^{(m-2)/2}u^{(m)}\|_{L^2(I)}, \tag{1.4} \]
where $I = (-1,1)$ and $I_N$ is the Gauss-Birkhoff interpolation operator associated with the GLLB points. As with the approximation results on the Legendre-Gauss-Lobatto interpolation obtained in [17, 18], the estimate (1.4) is optimal.

The GLLB quadrature formula involves derivative values $\phi'(\pm1)$ at the endpoints. Hence its nodes are a natural choice of preassigned interpolation points for many important problems, such as numerical integration and integral equations involving derivative data at the endpoints. In particular, it plays an important role in spectral approximations of second-order elliptic problems with Neumann boundary conditions [8]. As is well known, the numerical solutions of such problems given by the commonly used Galerkin method with any fixed mode $N$ do not fulfill the Neumann boundary conditions exactly. We therefore often prefer collocation methods, whose solutions satisfy the physical boundary conditions. In a collocation method, however, a proper choice of the collocation points is crucial for accuracy, stability and ease of treating the boundary conditions (BCs); the treatment of BCs is especially critical due to the global nature of spectral methods [19, 22]. The GLLB collocation approximation takes the boundary conditions into account in such a way that the resulting collocation systems have diagonal mass matrices, and therefore leads to explicit time discretizations for time-dependent problems [8]. However, like the collocation methods based on Gauss-Lobatto points [12, 4, 2, 15, 3, 25, 9], the GLLB differentiation matrices are ill-conditioned, so a suitable preconditioned iterative solver is preferable in actual computations. We construct in this paper an efficient finite-difference preconditioner for one-dimensional second-order elliptic equations. We also present a user-oriented implementation and an error analysis of the GLLB pseudospectral methods based on variational formulations.
The rest of the paper is organized as follows. In Section 2, we introduce the GLLB quadrature formula, and in Section 3 we investigate the asymptotic behaviors of its nodes and weights. With the aid of the asymptotic estimates and some other approximation results, we analyze the GLLB interpolation errors in Section 4. In Section 5, we give a user-oriented description of the implementation of the GLLB collocation method and present some illustrative numerical results. The final section is for some concluding remarks.

2. The GLLB Quadrature Formula

In this section, we introduce the Gauss-Lobatto-Legendre-Birkhoff quadrature formula.

2.1. Preliminaries. We start with some notations to be used in the subsequent sections.

2.1.1. Notations.
• Let $I := (-1,1)$, and let $\chi(x)\in L^1(I)$ be a generic positive weight function defined on $I$. For any integer $r\ge0$, the weighted Sobolev space $H_\chi^r(I)$ is defined as usual, with inner product, semi-norm and norm denoted by $(u,v)_{r,\chi}$, $|v|_{r,\chi}$ and $\|v\|_{r,\chi}$, respectively. In particular, $L^2_\chi(I) = H^0_\chi(I)$, $(u,v)_\chi = (u,v)_{0,\chi}$ and $\|v\|_\chi = \|v\|_{0,\chi}$. For any real $r>0$, $H^r_\chi(I)$ and its norm are defined by space interpolation as in Adams [1]. Whenever $\chi\equiv1$, the weight $\chi$ may be dropped from the notations if no confusion arises.


• Let $\omega^\alpha(x) = (1-x^2)^\alpha$, $x\in I$ with $\alpha>-1$, be the Gegenbauer weight function. In particular, denote $\omega(x) = 1-x^2$.
• Let $\mathbb{N}$ be the set of all non-negative integers. For any $N\in\mathbb{N}$, let $P_N$ be the set of all algebraic polynomials of degree at most $N$. Without loss of generality, we assume that $N\ge4$ throughout this paper.
• Denote by $c$ a generic positive constant, independent of any function and of $N$ (the mode in series expansions).
• For two sequences $\{z_n\}$ and $\{w_n\}$ with $w_n\ne0$, the expression $z_n\sim w_n$ means $|z_n|/|w_n|\to c\,(\ne0)$ as $n\to\infty$. In particular, $z_n\cong w_n$ means $z_n/w_n\approx1$ for $n\gg1$, or $z_n/w_n\to1$ as $n\to\infty$.
• For simplicity, we sometimes denote $\partial_x^l v = \frac{d^l v}{dx^l} = v^{(l)}$ for any integer $l\ge1$.

2.1.2. Jacobi polynomials. We recall some relevant properties of the Jacobi polynomials, denoted by $J_n^{(\alpha,\beta)}(x)$, $x\in I$, with $\alpha,\beta>-1$ and $n\ge0$, which are mutually orthogonal with respect to the Jacobi weight $\omega^{\alpha,\beta}(x) = (1-x)^\alpha(1+x)^\beta$:
\[ \int_{-1}^{1} J_n^{(\alpha,\beta)}(x)\,J_m^{(\alpha,\beta)}(x)\,\omega^{\alpha,\beta}(x)\,dx = \gamma_n^{(\alpha,\beta)}\delta_{mn}, \tag{2.1} \]
where $\delta_{mn}$ is the Kronecker symbol, and
\[ \gamma_n^{(\alpha,\beta)} = \frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\,\Gamma(n+1)\,\Gamma(n+\alpha+\beta+1)}. \tag{2.2} \]

In the subsequent discussions, we will mainly use two special types of Jacobi polynomials. The first one is $J_n^{(2,2)}(x)$, defined by the three-term recurrence relation (see, e.g., Szegő [24]):
\[ n(n+4)\,J_n^{(2,2)}(x) = (n+2)(2n+3)\,x\,J_{n-1}^{(2,2)}(x) - (n+1)(n+2)\,J_{n-2}^{(2,2)}(x), \qquad x\in[-1,1], \tag{2.3} \]
with $J_0^{(2,2)}(x) = 1$ and $J_1^{(2,2)}(x) = 3x$. The leading coefficient of $J_n^{(2,2)}(x)$ is
\[ K_n^{(2,2)} = \frac{(2n+4)!}{2^n\,n!\,(n+4)!}. \tag{2.4} \]
Moreover, we have
\[ J_n^{(2,2)}(-x) = (-1)^n J_n^{(2,2)}(x), \qquad J_n^{(2,2)}(1) = \frac{(n+1)(n+2)}{2}. \tag{2.5} \]
The corresponding monic polynomial is defined by dividing by the leading coefficient in (2.4):
\[ P_n(x) := \lambda_n J_n^{(2,2)}(x) \quad\text{with}\quad \lambda_n = \big(K_n^{(2,2)}\big)^{-1}. \tag{2.6} \]
As a direct consequence of (2.3) and (2.6),
\[ P_{n+1}(x) = xP_n(x) - a_n P_{n-1}(x) \quad\text{with}\quad a_n = \frac{n(n+4)}{(2n+3)(2n+5)}, \tag{2.7} \]
where $P_{-1}=0$ and $P_0=1$. By (2.1)-(2.2), the normalized Jacobi polynomials
\[ \widetilde P_n(x) := \widetilde\lambda_n J_n^{(2,2)}(x) \quad\text{with}\quad \widetilde\lambda_n = \big(\gamma_n^{(2,2)}\big)^{-1/2}, \tag{2.8} \]
satisfy $\|\widetilde P_n\|^2_{\omega^2} = 1$.
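The recurrence (2.3) together with the endpoint value and parity relations (2.5) can be cross-checked numerically. A small sketch (illustrative code, not from the paper):

```python
def jacobi22(n, x):
    """Evaluate J_n^{(2,2)}(x) via the three-term recurrence (2.3)."""
    if n == 0:
        return 1.0
    jm2, jm1 = 1.0, 3.0 * x          # J_0 = 1, J_1 = 3x
    if n == 1:
        return jm1
    # k(k+4) J_k = (k+2)(2k+3) x J_{k-1} - (k+1)(k+2) J_{k-2}
    for k in range(2, n + 1):
        jk = ((k + 2) * (2 * k + 3) * x * jm1
              - (k + 1) * (k + 2) * jm2) / (k * (k + 4))
        jm2, jm1 = jm1, jk
    return jm1

# Endpoint value (2.5): J_n^{(2,2)}(1) = (n+1)(n+2)/2; parity: J_n(-x) = (-1)^n J_n(x)
for n in range(8):
    assert abs(jacobi22(n, 1.0) - (n + 1) * (n + 2) / 2) < 1e-12
    assert abs(jacobi22(n, -0.3) - (-1) ** n * jacobi22(n, 0.3)) < 1e-12
```

For instance, the recurrence gives $J_2^{(2,2)}(x) = 7x^2-1$, whose value at $x=1$ is $6 = 3\cdot4/2$, in agreement with (2.5).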


We will also utilize the Legendre polynomials, denoted by $L_n(x)$, which are mutually orthogonal with respect to the unit weight $\omega\equiv1$. Note that the constant of orthogonality and the leading coefficient are, respectively,
\[ \gamma_n^{(0,0)} = \|L_n\|^2 = \frac{2}{2n+1}, \qquad K_n^{(0,0)} = \frac{(2n)!}{2^n(n!)^2}. \tag{2.9} \]
Recall that $L_n(\pm1) = (\pm1)^n$ and $L_n'(\pm1) = \frac12(\pm1)^{n-1}n(n+1)$.

2.2. GLLB quadrature formula. The GLLB quadrature formula can be derived from Theorem 4.2 of Ezzirani and Guessab [8]; it belongs to the class of Birkhoff-type rules (see, e.g., [20, 21]).

Theorem 2.1. Let $\{P_n\}$ be the monic Jacobi polynomials defined in (2.6). Consider the quasi-orthogonal polynomial
\[ Q_{N-1}(x) = P_{N-1}(x) + b_N P_{N-3}(x), \qquad N\ge2, \tag{2.10} \]
where
\[ b_N = -\frac{(N-1)(N+2)(2N^2+2N+3)}{(2N-1)(2N+1)(2N^2-2N-3)} + \frac{\sqrt{12(N-1)(N+2)(N^2+N-3)}}{(2N-1)(2N^2-2N-3)}. \tag{2.11} \]
Then we have:
(i) For $N\ge4$,
\[ \frac14 - \frac1N < -b_N < \frac{(N+2)(N+3)}{(2N-1)(2N+1)}. \tag{2.12} \]
(ii) The $N-1$ zeros of $Q_{N-1}(x)$, denoted by $\{x_j\}_{j=1}^{N-1}$, are real, distinct and all located in $(-1,1)$.
(iii) There exists a unique set of weights $\{\omega_j\}_{j=0}^{N}$ such that
\[ \int_{-1}^{1}\phi(x)\,dx = \phi'(-1)\,\omega_0 + \sum_{j=1}^{N-1}\phi(x_j)\,\omega_j + \phi'(1)\,\omega_N := S_N[\phi], \qquad \forall\phi\in P_{2N-1}. \tag{2.13} \]
(iv) The interior weights are all positive and explicitly given by
\[ \omega_j = \frac{A_N}{P_{N-2}(x_j)\,Q'_{N-1}(x_j)}\,\frac{1}{(1-x_j^2)^2}, \qquad 1\le j\le N-1, \tag{2.14} \]
with
\[ A_N = \Big(1-\frac{b_N}{a_{N-2}}\Big)\frac{\lambda_{N-2}^2}{\widetilde\lambda_{N-2}^2}, \tag{2.15} \]
where the constants $a_{N-2}$, $b_N$, $\lambda_{N-2}$ and $\widetilde\lambda_{N-2}$ are defined in (2.7), (2.11), (2.6) and (2.8), respectively.

Proof. The proof is based on a reassembly and extension of some relevant results in [8]. For clarity, we first present a one-to-one correspondence between the notations of [8] and those of this paper:
\[ n \leftrightarrow N-1, \quad \beta_k \leftrightarrow a_k, \quad d\hat\sigma \leftrightarrow (1-x^2)^2dx, \quad \hat s_{N-1} \leftrightarrow b_N, \quad \hat\pi_k \leftrightarrow P_k, \]
\[ \beta_{n-1} \leftrightarrow a_{N-2}-b_N, \quad \tilde\pi_k \leftrightarrow \widetilde P_k, \quad q_{n,2} \leftrightarrow Q_{N-1}, \quad J_n^*(\hat\sigma) \leftrightarrow J_{N-1}. \]
We next derive the expression of $b_N$. By Theorems 4.2 and 4.3 of [8], we find that
\[ a_{N-2} - b_N = \beta_{N-2} = \frac{-\sqrt{12(N-1)(N+2)(N^2+N-3)} + (N+2)(2N^2-4N+3)}{(2N-1)(2N^2-2N-3)}. \tag{2.16} \]
Hence, by (2.7), a direct computation leads to $b_N = a_{N-2} - \beta_{N-2} = $ RHS of (2.11).

(i) The special values $b_j$ $(j=1,2,3)$ can be computed directly via (2.11). To obtain the bounds for $b_N$ with $N\ge4$, we first prove the upper bound
\[ -b_N < \frac{(N+2)(N+3)}{(2N-1)(2N+1)}, \tag{2.17} \]
which, after clearing denominators and squaring, reduces to the positivity of a quantity $W_1$; a direct calculation yields $W_1 = 2(N-3)(N+2)(2N+1) > 0$ for $N\ge4$. Hence (2.17) is valid for $N\ge4$, and it remains to prove
\[ -b_N > \frac14 - \frac1N. \tag{2.18} \]
Clearly, $-\sqrt{N^2+N-3} > -\sqrt{N^2+N-2} = -\sqrt{(N-1)(N+2)}$ and $2N(N-1) > 2N^2-2N-3$. Therefore,
\begin{align*}
-b_N &> \frac{(N-1)(N+2)(2N^2+2N+3) - (2N+1)\sqrt{12}\,(N-1)(N+2)}{(2N-1)(2N+1)(2N^2-2N-3)} \\
&> \frac{(N-1)(N+2)\big[(2N^2+2N+3) - 4(2N+1)\big]}{(2N-1)(2N+1)(2N^2-2N-3)} \\
&> \frac{(N-1)(N+2)(2N^2-6N-8)}{2N(N-1)(2N-1)(2N+1)} = \frac{(N+2)(N-4)(N+1)}{N(2N-1)(2N+1)} \\
&> \frac{(N+2)(N-4)}{2N(2N-1)} > \frac{N-4}{4N} = \frac14 - \frac1N.
\end{align*}
Thus, a combination of (2.17) and (2.18) leads to (2.12).

(ii) Observe from (2.12) that $-b_N > 0$ for $N\ge3$. Therefore, by Theorem 4.1 of [8], the quasi-orthogonal polynomial can be expressed as a characteristic polynomial:
\[ Q_{N-1}(x) = P_{N-1}(x) + b_N P_{N-3}(x) = \det\big(xI_{N-1} - J_{N-1}\big), \tag{2.19} \]
where $I_{N-1}$ is the identity matrix, and $J_{N-1}$ is a symmetric tridiagonal matrix of order $N-1$,
\[ J_{N-1} = \begin{pmatrix} 0 & \sqrt{a_1} & & & \\ \sqrt{a_1} & 0 & \sqrt{a_2} & & \\ & \ddots & \ddots & \ddots & \\ & & \sqrt{a_{N-3}} & 0 & \sqrt{a_{N-2}-b_N} \\ & & & \sqrt{a_{N-2}-b_N} & 0 \end{pmatrix}, \tag{2.20} \]
with $a_n$ and $b_N$ given in (2.7) and (2.11), respectively. Therefore, all the zeros of $Q_{N-1}$ are real and distinct.

(iii) Theorem 4.2 of [8] reveals that with such a choice of $b_N$, all the zeros of $Q_{N-1}$ are distributed in $(-1,1)$, and the quadrature formula (2.13) is unique. Moreover, the interior weights $\{\omega_j\}_{j=1}^{N-1}$ are all positive.


(iv) We now consider the expressions of the weights. By Formula (38) of [8],
\[ (1-x_j^2)^2\,\omega_j = \frac{A_N^*}{K_N^*(x_j)}, \qquad 1\le j\le N-1, \tag{2.21} \]
where $A_N^* = 1 - b_N/a_{N-2}$, and
\[ K_N^*(x_j) = A_N^*\sum_{k=0}^{N-3}\widetilde P_k^2(x_j) + \widetilde P_{N-2}^2(x_j). \tag{2.22} \]
To simplify $K_N^*(x_j)$, we recall the Christoffel-Darboux formula (see, e.g., Szegő [24]):
\[ \widetilde P'_{n+1}(x)\widetilde P_n(x) - \widetilde P'_n(x)\widetilde P_{n+1}(x) = \frac{d_{n+1}}{d_n}\sum_{k=0}^{n}\widetilde P_k^2(x) > 0, \qquad \forall x\in[-1,1], \tag{2.23} \]
where $d_k$ is the leading coefficient of $\widetilde P_k(x)$, and by (2.6) and (2.8),
\[ d_k = \frac{\widetilde\lambda_k}{\lambda_k}, \qquad \widetilde P_k(x) = d_k P_k(x). \tag{2.24} \]
Hence, by (2.10), (2.24) and (2.23),
\begin{align*}
Q'_{N-1}(x)\widetilde P_{N-2}(x) - \widetilde P'_{N-2}(x)Q_{N-1}(x)
&= \big[P'_{N-1}(x)\widetilde P_{N-2}(x) - \widetilde P'_{N-2}(x)P_{N-1}(x)\big] \\
&\quad - b_N\big[\widetilde P'_{N-2}(x)P_{N-3}(x) - P'_{N-3}(x)\widetilde P_{N-2}(x)\big] \\
&= \frac{1}{d_{N-1}}\big[\widetilde P'_{N-1}(x)\widetilde P_{N-2}(x) - \widetilde P'_{N-2}(x)\widetilde P_{N-1}(x)\big] \\
&\quad - \frac{b_N}{d_{N-3}}\big[\widetilde P'_{N-2}(x)\widetilde P_{N-3}(x) - \widetilde P'_{N-3}(x)\widetilde P_{N-2}(x)\big] \\
&= \frac{1}{d_{N-2}}\sum_{k=0}^{N-2}\widetilde P_k^2(x) - \frac{b_N d_{N-2}}{d_{N-3}^2}\sum_{k=0}^{N-3}\widetilde P_k^2(x) \\
&= \frac{1}{d_{N-2}}\Big\{\Big[1-\frac{b_N d_{N-2}^2}{d_{N-3}^2}\Big]\sum_{k=0}^{N-3}\widetilde P_k^2(x) + \widetilde P_{N-2}^2(x)\Big\} \\
&= \frac{1}{d_{N-2}}\Big\{\Big[1-\frac{b_N}{a_{N-2}}\Big]\sum_{k=0}^{N-3}\widetilde P_k^2(x) + \widetilde P_{N-2}^2(x)\Big\}. \tag{2.25}
\end{align*}
In the last step, we used the identity
\[ a_{N-2} = \frac{d_{N-3}^2}{d_{N-2}^2} = \frac{\widetilde\lambda_{N-3}^2\,\lambda_{N-2}^2}{\lambda_{N-3}^2\,\widetilde\lambda_{N-2}^2}, \tag{2.26} \]
which can be verified directly from the definition of the constants. Thus, taking $x = x_j$ in (2.25) and using the fact $Q_{N-1}(x_j)=0$, $1\le j\le N-1$, gives
\[ Q'_{N-1}(x_j)\widetilde P_{N-2}(x_j) = \frac{1}{d_{N-2}}\Big\{A_N^*\sum_{k=0}^{N-3}\widetilde P_k^2(x_j) + \widetilde P_{N-2}^2(x_j)\Big\} = \frac{1}{d_{N-2}}K_N^*(x_j), \tag{2.27} \]
which, by (2.24), implies that
\[ K_N^*(x_j) = d_{N-2}^2\,Q'_{N-1}(x_j)\,P_{N-2}(x_j) = \frac{\widetilde\lambda_{N-2}^2}{\lambda_{N-2}^2}\,Q'_{N-1}(x_j)\,P_{N-2}(x_j). \]
Plugging this into (2.21) leads to (2.14)-(2.15). □
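The determinant identity (2.19)-(2.20) can be exercised numerically: assembling $J_{N-1}$ from $a_n$ in (2.7) and $b_N$ in (2.11), its eigenvalues should be symmetric points inside $(-1,1)$ that are zeros of $Q_{N-1} = P_{N-1} + b_N P_{N-3}$. A self-contained sketch (illustrative code, not the authors'; uses numpy):

```python
import numpy as np

def a_coef(n):                       # a_n in (2.7)
    return n * (n + 4) / ((2 * n + 3) * (2 * n + 5))

def b_coef(N):                       # b_N in (2.11)
    return (-(N - 1) * (N + 2) * (2 * N**2 + 2 * N + 3)
            / ((2 * N - 1) * (2 * N + 1) * (2 * N**2 - 2 * N - 3))
            + np.sqrt(12 * (N - 1) * (N + 2) * (N**2 + N - 3))
            / ((2 * N - 1) * (2 * N**2 - 2 * N - 3)))

def gllb_nodes(N):
    """Interior GLLB nodes as eigenvalues of the matrix J_{N-1} in (2.20)."""
    off = np.sqrt([a_coef(n) for n in range(1, N - 2)] + [a_coef(N - 2) - b_coef(N)])
    J = np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(J))[::-1]   # descending order, as in Section 3

def monic_P(n, x):
    """Monic polynomial P_n of (2.6) via the recurrence (2.7)."""
    pm1, p = 0.0, 1.0                # P_{-1} = 0, P_0 = 1
    for k in range(n):
        pm1, p = p, x * p - a_coef(k) * pm1
    return p

N = 12
x = gllb_nodes(N)
Q = np.array([monic_P(N - 1, t) + b_coef(N) * monic_P(N - 3, t) for t in x])
print(np.abs(Q).max())               # eigenvalues are (numerically) zeros of Q_{N-1}
```

The computed nodes also exhibit the symmetry $x_j + x_{N-j} = 0$ noted in Remark 2.1 below.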

e2 λ N −2 0 Q (xj )PN −2 (xj ). λ2N −2 N −1 


Remark 2.1. (a) The choice of $b_N$ such that all zeros of $Q_{N-1}(x)$ lie in $(-1,1)$ is unique, and thereby uniquely determines the quadrature rule. It is seen from (2.12) that $b_N$ is uniformly bounded with $-b_N\cong1/4$; its behavior is depicted in Figure 3.1 (left).
(b) The formulas (2.19)-(2.20) indicate that the quadrature nodes $\{x_j\}_{j=1}^{N-1}$ are the eigenvalues of the symmetric tridiagonal matrix $J_{N-1}$, and therefore can be evaluated by standard methods (e.g., the QR algorithm), as with the classical Gauss quadrature (see, e.g., [13]). Making use of Formula (41) of [8], the weights $\{\omega_j\}_{j=1}^{N-1}$ can be computed from the first components of the orthonormal eigenvectors of $J_{N-1}$.
(c) Alternatively to the eigen-method, the nodes $\{x_j\}_{j=1}^{N-1}$ can be located by a root-finding method, say Newton-Raphson iteration, which turns out to be more efficient for quadrature rules of higher order. A good initial approximation is $\{x_j^{(0)}\}_{j=1}^{N-1}$ given in (3.24). Accordingly, the weights $\{\omega_j\}_{j=1}^{N-1}$ can be computed via the compact formulas (2.14)-(2.15).
(d) It is worthwhile to point out that the quadrature nodes and weights are symmetric:
\[ x_j + x_{N-j} = 0, \qquad \omega_j = \omega_{N-j}, \qquad j=1,2,\cdots,N-1. \tag{2.28} \]
Therefore, the computational cost can be halved.
(e) Let $h_0$ and $h_N$ be the Lagrangian polynomials given in (5.6) of this paper. The boundary weights are
\[ \omega_0 = \int_{-1}^{1}h_0(x)\,dx, \qquad \omega_N = \int_{-1}^{1}h_N(x)\,dx. \]
One verifies that $\omega_0 = -\omega_N$, and an explicit evaluation of these integrals using the relevant properties of Jacobi polynomials leads to the formula for $\omega_0$ in Theorem 4.2 of [8].

The GLLB quadrature formula (2.13) enjoys the same degree of exactness as the Gauss-Legendre-Lobatto (GLL) quadrature (with $N+1$ nodes), but in contrast to GLL, it involves the boundary derivative values $\phi'(\pm1)$, and is thereby suitable for exactly imposing Neumann boundary conditions (see Section 5).

3. Asymptotic properties of the nodes and weights

In this section, we make a quantitative asymptotic estimate of the nodes and weights of the GLLB quadrature formula (2.13). These properties will play an essential role in our subsequent analysis of the GLLB interpolation errors.

3.1. Asymptotic property of the nodes. We start with the interlacing property. Theorem 4.1 of [21] reveals that the zeros of $Q_{N-1}$ interlace with those of $Q_{N-2}$. The following theorem indicates that the zeros of $Q_{N-1}$ also interlace with those of $\widetilde P_{N-2}$; it can be proved in a fashion similar to that in [21]. For completeness, we provide the proof below.

Theorem 3.1. Let $\{x_j\}_{j=1}^{N-1}$ and $\{y_k\}_{k=1}^{N-2}$ be the zeros of $Q_{N-1}(x)$ and $\widetilde P_{N-2}(x)$, respectively. Then we have
\[ -1 = y_0 < x_1 < y_1 < x_2 < y_2 < \cdots < y_{N-2} < x_{N-1} < 1 = y_{N-1}. \tag{3.1} \]

Proof. We take three steps to complete the proof.

Step I: Show that
\[ Q_{N-1}(y_j)\,Q_{N-1}(y_{j+1}) < 0, \qquad j=1,2,\cdots,N-3, \tag{3.2} \]


that is, between two consecutive zeros of $\widetilde P_{N-2}(x)$ there exists at least one zero of $Q_{N-1}(x)$. To justify this, we first deduce that
\[ Q'_{N-1}(x)\widetilde P_{N-2}(x) - \widetilde P'_{N-2}(x)Q_{N-1}(x) > 0, \qquad \forall x\in[-1,1], \tag{3.3} \]
which is a consequence of (2.7), (2.25) and (2.12), since
\[ \frac{1}{d_{N-2}} > 0, \qquad 1-\frac{b_N}{a_{N-2}} > 1 + \Big(\frac14-\frac1N\Big)\frac{1}{a_{N-2}} \ge 1, \qquad \forall N\ge4. \]
Thanks to $\widetilde P_{N-2}(y_k)=0$, taking $x = y_j,\,y_{j+1}$ in (3.3) yields
\[ \widetilde P'_{N-2}(y_j)\,Q_{N-1}(y_j) < 0, \qquad \widetilde P'_{N-2}(y_{j+1})\,Q_{N-1}(y_{j+1}) < 0, \tag{3.4} \]
which certainly implies $\widetilde P'_{N-2}(y_j)\widetilde P'_{N-2}(y_{j+1})\cdot Q_{N-1}(y_j)Q_{N-1}(y_{j+1}) > 0$. Therefore, to prove (3.2), it suffices to check that
\[ \widetilde P'_{N-2}(y_j)\,\widetilde P'_{N-2}(y_{j+1}) < 0. \tag{3.5} \]
Notice that
\[ \widetilde P_{N-2}(x) = d_{N-2}\prod_{k=1}^{N-2}(x-y_k), \]
and consequently,
\[ \widetilde P'_{N-2}(y_j) = d_{N-2}\prod_{k=1}^{j-1}(y_j-y_k)\cdot\prod_{k=j+1}^{N-2}(y_j-y_k) = (-1)^{N-j-2}d_{N-2}\,C_1, \qquad \widetilde P'_{N-2}(y_{j+1}) = (-1)^{N-j-3}d_{N-2}\,C_2, \tag{3.6} \]
where $C_1$ and $C_2$ are two positive constants. Hence
\[ \widetilde P'_{N-2}(y_j)\,\widetilde P'_{N-2}(y_{j+1}) = (-1)^{2N-2j-5}\,d_{N-2}^2\,C_1C_2 < 0, \qquad 1\le j\le N-3. \tag{3.7} \]
This validates (3.5), and thereby (3.2) follows.

Step II: It remains to consider the possibility of zeros of $Q_{N-1}(x)$ in the subintervals $(y_{N-2},1)$ and $(-1,y_1)$. Clearly, by the first identity of (3.6), the largest zero $y_{N-2}$ satisfies $\widetilde P'_{N-2}(y_{N-2}) > 0$. Thus, by (3.4),
\[ Q_{N-1}(y_{N-2}) < 0. \tag{3.8} \]
On the other hand, we have
\[ Q_{N-1}(1) = P_{N-1}(1) + b_N P_{N-3}(1) > 0, \tag{3.9} \]
which is due to a direct calculation using (2.5)-(2.6) and (2.12):
\[ \frac{P_{N-1}(1)}{P_{N-3}(1)} + b_N = \frac{(N+2)(N+3)}{(2N-1)(2N+1)} + b_N > 0, \]
and $P_{N-3}(1) > 0$. The signs in (3.8)-(3.9) indicate that there is at least one zero of $Q_{N-1}(x)$ in the interval $(y_{N-2},1)$. Using the symmetry of the zeros (see [8] and [24]):
\[ x_j = -x_{N-j}, \quad j=1,\cdots,N-1; \qquad y_k = -y_{N-1-k}, \quad k=1,\cdots,N-2, \tag{3.10} \]
we deduce that the interval $(-1,y_1)$ also contains at least one zero of $Q_{N-1}(x)$.


Final step: A combination of the previous statements reaches the conclusion: each of the $N-1$ subintervals $\{(y_j,y_{j+1})\}_{j=0}^{N-2}$ (with $y_0=-1$, $y_{N-1}=1$) contains at least one of the $N-1$ zeros of $Q_{N-1}(x)$, and therefore exactly one in each subinterval. □

To visualize the above interlacing property, we denote
\[ \Delta_N^{\min} = \min_{1\le j\le N-1}(y_j-x_j), \qquad \Delta_N^{\max} = \max_{1\le j\le N-1}(y_j-x_j), \qquad \forall N\ge4. \tag{3.11} \]
According to Theorem 3.1, we expect to see that
\[ \Delta_N^{\max} > \Delta_N^{\min} > 0, \quad \forall N\ge4; \qquad \lim_{N\to\infty}\Delta_N^{\min} = \lim_{N\to\infty}\Delta_N^{\max} = 0, \tag{3.12} \]
which is illustrated by Figure 3.1 (right).

Figure 3.1. Left: the constant $-b_N$ and its bounds (cf. (2.10)-(2.11)) against $N$. Right: asymptotic behavior of $\Delta_N^{\min}$ and $\Delta_N^{\max}$ against $N$.

In the analysis of the interpolation error, we need more precise asymptotic estimates of the GLLB quadrature nodes. For this purpose, hereafter, we assume that the zeros $\{x_j\}_{j=1}^{N-1}$ of $Q_{N-1}(x)$ are arranged in descending order. We make the change of variables
\[ x = \cos\theta, \quad \theta\in[0,\pi]; \qquad \theta_j = \arccos x_j, \quad j=1,2,\cdots,N-1. \tag{3.13} \]
The main result is stated in the following theorem.

Theorem 3.2. Let $\{\theta_j\}_{j=1}^{N-1}$ be the same as in (3.13). Then
\[ \theta_j \in I_j := (\tilde\theta_{j-1},\tilde\theta_j) \subset (0,\pi), \qquad 1\le j\le N-1, \tag{3.14} \]
where
\[ \tilde\theta_j = \frac{(2j+\frac32)\pi}{2N+1}, \qquad 0\le j\le N-1. \tag{3.15} \]
In other words,
\[ 0 < \tilde\theta_0 < \theta_1 < \tilde\theta_1 < \theta_2 < \tilde\theta_2 < \cdots < \tilde\theta_{N-2} < \theta_{N-1} < \tilde\theta_{N-1} < \pi. \tag{3.16} \]


Proof. By the intermediate value theorem, (3.14) is equivalent to
\[ Q_{N-1}(\cos\tilde\theta_{j-1})\,Q_{N-1}(\cos\tilde\theta_j) < 0, \qquad j=1,2,\cdots,N-1. \tag{3.17} \]
To prove this result, we first recall Formula (8.21.10) of Szegő [24]:
\[ J_n^{(2,2)}(\cos\theta) = n^{-\frac12}G(\theta)\cos\Big(\big(n+\tfrac52\big)\theta+\gamma\Big) + O(n^{-\frac32}), \qquad \theta\in(0,\pi), \tag{3.18} \]
where
\[ G(\theta) = \pi^{-\frac12}\Big(\sin\frac\theta2\Big)^{-\frac52}\Big(\cos\frac\theta2\Big)^{-\frac52} = 4\sqrt{\frac2\pi}\,(\sin\theta)^{-\frac52}, \qquad \gamma = -\frac54\pi. \tag{3.19} \]
Hence, using (2.6), (2.10) and (3.18) gives
\begin{align*}
Q_{N-1}(\cos\theta) &= P_{N-1}(\cos\theta) + b_N P_{N-3}(\cos\theta) \\
&= \lambda_{N-1}J_{N-1}^{(2,2)}(\cos\theta) + b_N\lambda_{N-3}J_{N-3}^{(2,2)}(\cos\theta) \\
&= G(\theta)\Big\{(N-1)^{-\frac12}\lambda_{N-1}\cos\Big(\big(N+\tfrac32\big)\theta+\gamma\Big) + (N-3)^{-\frac12}b_N\lambda_{N-3}\cos\Big(\big(N-\tfrac12\big)\theta+\gamma\Big)\Big\} \\
&\quad + \Big\{\lambda_{N-1}O\big((N-1)^{-\frac32}\big) + b_N\lambda_{N-3}O\big((N-3)^{-\frac32}\big)\Big\} := H_N(\theta) + R_N. \tag{3.20}
\end{align*}
Notice that $\tilde\theta_j$ in (3.15) solves the equation
\[ \Big(N+\frac12\Big)\tilde\theta_j + \gamma = \Big(j-\frac12\Big)\pi, \qquad 0\le j\le N-1, \tag{3.21} \]
which implies
\[ \Big(N+\frac32\Big)\tilde\theta_j + \gamma = \Big(j-\frac12\Big)\pi + \tilde\theta_j, \qquad \Big(N-\frac12\Big)\tilde\theta_j + \gamma = \Big(j-\frac12\Big)\pi - \tilde\theta_j. \]
Therefore, taking $\theta=\tilde\theta_j$ in (3.20) and using trigonometric identities gives
\begin{align*}
H_N(\tilde\theta_j) &= G(\tilde\theta_j)\Big\{(N-1)^{-\frac12}\lambda_{N-1}\cos\Big(\big(j-\tfrac12\big)\pi+\tilde\theta_j\Big) + (N-3)^{-\frac12}b_N\lambda_{N-3}\cos\Big(\big(j-\tfrac12\big)\pi-\tilde\theta_j\Big)\Big\} \\
&= (-1)^j\sin\tilde\theta_j\,G(\tilde\theta_j)\Big\{(N-1)^{-\frac12}\lambda_{N-1} - (N-3)^{-\frac12}b_N\lambda_{N-3}\Big\} := (-1)^j S_N(\tilde\theta_j).
\end{align*}
Since $-b_N>0$, $\lambda_k>0$ and $\tilde\theta_j\in(0,\pi)$, we have $S_N(\tilde\theta_j)>0$, and thereby
\[ \mathrm{sgn}\big(H_N(\tilde\theta_j)\big) = (-1)^j, \qquad 0\le j\le N-1. \tag{3.22} \]
Moreover, by (2.12), (3.19) and (3.21),
\[ |H_N(\tilde\theta_j)| \ge 2^{\frac52}\pi^{-\frac12}\big(\sin\tilde\theta_j\big)^{-\frac32}\Big\{(N-1)^{-\frac12}\lambda_{N-1} + \Big(\frac14-\frac1N\Big)(N-3)^{-\frac12}\lambda_{N-3}\Big\} \ge cN^{-\frac12}(\lambda_{N-1}+\lambda_{N-3}), \]
which, together with the fact $-b_N\cong\frac14$, implies
\[ |R_N| \le cN^{-\frac32}(\lambda_{N-1}+\lambda_{N-3}) \le cN^{-1}|H_N(\tilde\theta_j)|. \tag{3.23} \]
Hence, a combination of (3.20)-(3.23) leads to
\[ \mathrm{sgn}\big(Q_{N-1}(\cos\tilde\theta_j)\big) = \mathrm{sgn}\big(H_N(\tilde\theta_j)\big) = (-1)^j, \qquad 0\le j\le N-1, \quad \forall N\gg1, \]
which implies (3.17); therefore there exists at least one zero of $Q_{N-1}(\cos\theta)$ in each subinterval $I_j = (\tilde\theta_{j-1},\tilde\theta_j)$, $1\le j\le N-1$. Because the number of zeros equals the number of subintervals, there is exactly one zero in each $I_j$. □

Figure 3.2. Left: the error $\widetilde\Delta_N$ (solid line) and $N^{-1.9}$ ("◦") against $N$. Right: asymptotic behavior of $\delta_N^{(1)}$ (solid line) and $\delta_N^{(2)}$ ("◦") (cf. (3.43)) against $N$.

As a consequence of Theorem 3.2, a good asymptotic approximation to the zeros $\{x_j\}_{j=1}^{N-1}$ is
\[ x_j = \cos\theta_j \cong \cos\frac{\tilde\theta_{j-1}+\tilde\theta_j}{2} = x_j^{(0)} := \cos\frac{(2j+\frac12)\pi}{2N+1}, \qquad 1\le j\le N-1. \tag{3.24} \]
To illustrate this property numerically, we denote
\[ \widetilde\Delta_N = \max_{1\le j\le N-1}\big|x_j - x_j^{(0)}\big|. \]
We plot in Figure 3.2 (left) the error $\widetilde\Delta_N$ (solid line) and the reference value $N^{-1.9}$ ("◦") against $N$, which indicates that $\widetilde\Delta_N$ decays algebraically at a rate of about $N^{-1.9}$, and predicts that
\[ x_j = \cos\frac{(2j+\frac12)\pi}{2N+1} + O(N^{-1.9}), \qquad 1\le j\le N-1. \tag{3.25} \]
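The prediction (3.24)-(3.25) is easy to reproduce: compute the nodes as eigenvalues of the tridiagonal matrix (2.20) and compare them with $x_j^{(0)}$. A self-contained sketch (illustrative code, not the authors'; the observed decay rate is taken from Figure 3.2, so the tolerances below are conservative):

```python
import numpy as np

def gllb_nodes(N):
    """Interior GLLB nodes: eigenvalues of the tridiagonal matrix (2.20)."""
    a = lambda n: n * (n + 4) / ((2 * n + 3) * (2 * n + 5))       # (2.7)
    bN = (-(N - 1) * (N + 2) * (2 * N**2 + 2 * N + 3)             # (2.11)
          / ((2 * N - 1) * (2 * N + 1) * (2 * N**2 - 2 * N - 3))
          + np.sqrt(12 * (N - 1) * (N + 2) * (N**2 + N - 3))
          / ((2 * N - 1) * (2 * N**2 - 2 * N - 3)))
    off = np.sqrt([a(n) for n in range(1, N - 2)] + [a(N - 2) - bN])
    return np.sort(np.linalg.eigvalsh(np.diag(off, 1) + np.diag(off, -1)))[::-1]

def delta_tilde(N):
    """max_j |x_j - x_j^(0)| with x_j^(0) = cos((2j+1/2)pi/(2N+1)), cf. (3.24)."""
    j = np.arange(1, N)
    x0 = np.cos((2 * j + 0.5) * np.pi / (2 * N + 1))
    return np.abs(gllb_nodes(N) - x0).max()

for N in (25, 50, 100, 200):
    print(N, delta_tilde(N))   # decays roughly like N**(-1.9), cf. Figure 3.2 (left)
```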

3.2. Asymptotic properties of the weights. We establish below an important result concerning the asymptotic behavior of the interior quadrature weights $\{\omega_j\}_{j=1}^{N-1}$ in (2.13).

Theorem 3.3. Let $\{\theta_j\}_{j=1}^{N-1}$ be the zeros of the quadrature (trigonometric) polynomial $Q_{N-1}(\cos\theta)$. Then for $N\gg1$,
\[ \omega_j \cong \frac{\pi}{N}\sin\theta_j, \qquad j=1,2,\cdots,N-1. \tag{3.26} \]


Proof. We rewrite the weights as
\[ \omega_j \overset{(2.14)}{=} \frac{A_N}{(1-x_j^2)^2\,P_{N-2}(x_j)\,Q'_{N-1}(x_j)} \overset{(2.6),(2.10)}{=} \frac{A_N}{\lambda_{N-2}\lambda_{N-3}}\,\frac{1}{(1-x_j^2)^2\,J_{N-2}^{(2,2)}(x_j)\,W'_N(x_j)}, \tag{3.27} \]
where
\[ W_N(x) := \frac{\lambda_{N-1}}{\lambda_{N-3}}J_{N-1}^{(2,2)}(x) + b_N J_{N-3}^{(2,2)}(x) = \lambda_{N-3}^{-1}Q_{N-1}(x). \tag{3.28} \]
To prove (3.26), it suffices to study the asymptotic behaviors of the constant, of $J_{N-2}^{(2,2)}(x_j)$ and of $W'_N(x_j)$ in (3.27). For clarity, we split the rest of the proof into three steps.

Step I: We first estimate the constants in (3.27)-(3.28). A direct calculation using (2.6)-(2.7) and (2.12) leads to
\[ a_{N-2} \cong \frac14, \qquad -b_N \cong \frac14, \qquad \frac{N}{\widetilde\lambda_{N-2}^2} \cong 16, \qquad \frac{\lambda_{N-2}}{\lambda_{N-3}} = \frac{(N-2)(N+2)}{N(2N-1)} \cong \frac12, \tag{3.29} \]
which, together with (2.15), implies
\[ \frac{A_N}{\lambda_{N-2}\lambda_{N-3}} = \Big(1-\frac{b_N}{a_{N-2}}\Big)\frac{\lambda_{N-2}}{\widetilde\lambda_{N-2}^2\,\lambda_{N-3}} \cong \frac{16}{N}. \tag{3.30} \]

Step II: Let
\[ \Theta_{N,j} = \Big(N+\frac12\Big)\theta_j - \frac54\pi. \tag{3.31} \]
We next show that
\[ \sin\Theta_{N,j} \cong 0, \qquad \forall N\gg1. \tag{3.32} \]
Since $Q_{N-1}(x_j)=0$, we have
\[ W_N(\cos\theta_j) \overset{(3.28)}{=} \lambda_{N-3}^{-1}Q_{N-1}(\cos\theta_j) = \lambda_{N-3}^{-1}Q_{N-1}(x_j) = 0. \]
Hence, by (3.18),
\begin{align*}
0 = W_N(\cos\theta_j) &= \frac{\lambda_{N-1}}{\lambda_{N-3}}J_{N-1}^{(2,2)}(\cos\theta_j) + b_N J_{N-3}^{(2,2)}(\cos\theta_j) \\
&= (N-3)^{-\frac12}G(\theta_j)\big\{c_N\cos(\Theta_{N,j}+\theta_j) + b_N\cos(\Theta_{N,j}-\theta_j)\big\} + O(N^{-\frac32}) \\
&= (N-3)^{-\frac12}G(\theta_j)\big\{(b_N-c_N)\sin\theta_j\sin\Theta_{N,j} + (b_N+c_N)\cos\theta_j\cos\Theta_{N,j}\big\} + O(N^{-\frac32}),
\end{align*}
where $G(\theta_j)$ is given in (3.19), and by (3.29),
\[ c_N = \frac{\lambda_{N-1}}{\lambda_{N-3}}\Big(\frac{N-3}{N-1}\Big)^{\frac12} \cong \frac{\lambda_{N-1}}{\lambda_{N-2}}\,\frac{\lambda_{N-2}}{\lambda_{N-3}} \cong \frac14. \tag{3.33} \]
Consequently,
\[ 0 = (b_N-c_N)\sin\theta_j\sin\Theta_{N,j} + (b_N+c_N)\cos\theta_j\cos\Theta_{N,j} + O\big(N^{-1}\sin^{5/2}\theta_j\big). \tag{3.34} \]


On the other hand, using (3.29) and (3.33) leads to
\[ b_N + c_N \cong 0, \qquad b_N - c_N \cong -\frac12, \qquad \text{for } N\gg1. \tag{3.35} \]
Since $\sin\theta_j \ne 0$ ($>0$), the desired result (3.32) follows from (3.34).

Step III: Applying Formula (8.8.1) of [24] (note: this formula may be derived by differentiating (3.18)) to $W_N$, and using trigonometric identities, yields
\begin{align*}
\frac{dW_N}{d\theta}(\cos\theta_j) &= (N-3)^{\frac12}G(\theta_j)\big\{-c_N\sin(\Theta_{N,j}+\theta_j) - b_N\sin(\Theta_{N,j}-\theta_j) + (N\sin\theta_j)^{-1}O(1)\big\} \\
&= (N-3)^{\frac12}G(\theta_j)\big\{(b_N-c_N)\sin\theta_j\cos\Theta_{N,j} - (b_N+c_N)\cos\theta_j\sin\Theta_{N,j} + (N\sin\theta_j)^{-1}O(1)\big\}. \tag{3.36}
\end{align*}
Hence, by (3.32) and (3.35),
\[ \frac{dW_N}{d\theta}(\cos\theta_j) \cong -\frac12\sqrt N\,G(\theta_j)\sin\theta_j\cos\Theta_{N,j}, \qquad 1\le j\le N-1, \tag{3.37} \]
which implies
\[ W'_N(x_j) = \frac{dW_N}{d\theta}\,\frac{d\theta}{dx}\Big|_{\theta=\theta_j} = -\frac{1}{\sin\theta_j}\,\frac{dW_N}{d\theta}(\cos\theta_j) \cong \frac{\sqrt N}{2}\,G(\theta_j)\cos\Theta_{N,j}. \tag{3.38} \]
On the other hand, by (3.18),
\[ J_{N-2}^{(2,2)}(\cos\theta_j) \cong N^{-\frac12}G(\theta_j)\cos\Theta_{N,j}, \qquad 1\le j\le N-1. \tag{3.39} \]
Multiplying (3.38) by (3.39) yields
\[ J_{N-2}^{(2,2)}(x_j)\,W'_N(x_j) \cong \frac12 G^2(\theta_j)\cos^2\Theta_{N,j} \cong \frac12 G^2(\theta_j), \tag{3.40} \]
where in the last step, we used the fact
\[ \cos^2\Theta_{N,j} \overset{(3.32)}{\cong} 1, \qquad \forall N\gg1. \tag{3.41} \]
Finally, thanks to
\[ \frac{1}{(1-x_j^2)^2} \overset{(3.13)}{=} \frac{1}{\sin^4\theta_j}, \qquad G^2(\theta_j) \overset{(3.19)}{=} \frac{32}{\pi}\,\frac{1}{\sin^5\theta_j}, \]
the desired result (3.26) follows from (3.27), (3.30) and (3.40). □

As a direct consequence of (3.24) and (3.26), we derive the following explicit asymptotic expression:
\[ \omega_j \cong \omega_j^{(0)} := \frac{\pi}{N}\sin\frac{(2j+\frac12)\pi}{2N+1}, \qquad 1\le j\le N-1. \tag{3.42} \]
To examine how good it is, we set
\[ \delta_N^{(1)} = \max_{1\le j\le N-1}\Big\{\frac{\pi}{N}\,\frac{\sin\theta_j}{\omega_j}\Big\}, \qquad \delta_N^{(2)} = \max_{1\le j\le N-1}\Big\{\frac{\omega_j^{(0)}}{\omega_j}\Big\}. \]
By Theorem 3.3, we expect to see that
\[ \delta_N^{(i)} \cong 1, \qquad i=1,2, \qquad \forall N\gg1, \tag{3.43} \]
which indeed can be visualized from Figure 3.2 (right).


So far, we have derived two asymptotic estimates for the GLLB nodes and weights (cf. (3.24) and (3.42)), which will be useful for the analysis of the GLLB interpolation error in the forthcoming section.

4. GLLB interpolation error estimates

This section is devoted to the analysis of the GLLB interpolation errors in Sobolev norms, which will be used for the error analysis of GLLB pseudospectral methods. We first state the main result, and then present the ingredients of the proof, including some inequalities and orthogonal projections. Finally, we give the proof of the interpolation error estimates.

4.1. The main result. We begin with the definition of the GLLB interpolation operator associated with the GLLB quadrature formula. The GLLB interpolant $I_N u \in P_N$ satisfies
\[ (I_N u)'(\pm1) = u'(\pm1); \qquad (I_N u)(x_j) = u(x_j), \quad 1\le j\le N-1. \tag{4.1} \]
Denote the weight functions by $\omega(x) = 1-x^2$ and $\omega^\alpha(x) = (1-x^2)^\alpha$. Let $\bar\alpha = \max\{0,-[\alpha]-1\}$ with $[\alpha]$ being the largest integer $\le\alpha$. To describe the error, we introduce the weighted Sobolev space
\[ B_\alpha^m(I) := \big\{u\in L^2(I) : \partial_x^k u\in L^2_{\omega^{\alpha+k}}(I),\ \bar\alpha+1\le k\le m\big\}\cap H^{\bar\alpha}(I), \qquad m\ge0, \]
with the norm
\[ \|u\|_{B_\alpha^m(I)} = \Big(\|u\|^2_{H^{\bar\alpha}(I)} + \sum_{k=\bar\alpha+1}^{m}\|\partial_x^k u\|^2_{\omega^{\alpha+k}}\Big)^{\frac12}. \]
The main result on the GLLB interpolation error is stated as follows.

Theorem 4.1. For any $u\in B_{-2}^m(I)$ and $m\ge2$,
\[ N\|I_N u - u\| + \|\partial_x(I_N u - u)\|_\omega \le cN^{1-m}\|\partial_x^m u\|_{\omega^{m-2}}. \tag{4.2} \]

4.2. Preparations for the proof. The main ingredients of the proof of Theorem 4.1 are the asymptotic estimates (cf. Theorems 3.2 and 3.3), several inequalities and the approximation property of a specific orthogonal projection, stated below.

4.2.1. Some inequalities. For notational convenience, we define the discrete inner product and discrete norm associated with the GLLB quadrature formula as
\[ \langle u,v\rangle_N = S_N[u\cdot v], \qquad \|v\|_N = \langle v,v\rangle_N^{\frac12}, \]
where $S_N[\cdot]$ represents the finite sum in (2.13). Clearly, the exactness (2.13) implies that
\[ \langle\phi,\psi\rangle_N = (\phi,\psi), \qquad \forall\,\phi\cdot\psi\in P_{2N-1}. \tag{4.3} \]
We have the following equivalence between the continuous and discrete norms over the polynomial space $P_N$, which will be useful in the analysis of the interpolation error, and of the GLLB pseudospectral methods for nonlinear problems.

Lemma 4.1. For any integer $N\ge4$,
\[ \|\phi\| \le \|\phi\|_N \le \sqrt{1+C_N}\,\|\phi\| \le 3\|\phi\|, \qquad \forall\phi\in P_N, \tag{4.4} \]
where
\[ C_N = \Big(1+\frac{3}{N-2}\Big)\Big(1+\frac{3}{N-1}\Big)\Big(1+\frac{3}{N}\Big). \]

Proof. We first prove that (4.4) holds for the Legendre polynomial $L_N$; that is,
\[ \gamma_N^{(0,0)} = \|L_N\|^2 \le \langle L_N,L_N\rangle_N \le (1+C_N)\|L_N\|^2 = (1+C_N)\gamma_N^{(0,0)}. \tag{4.5} \]

For this purpose, set
\[ \psi(x) = L_N^2(x) - \big(K_N^{(0,0)}\big)^2(1-x^2)^2\,Q_{N-1}(x)\,P_{N-3}(x). \]
Since the leading coefficient of $Q_{N-1}\cdot P_{N-3}$ is one, we deduce from (2.9) that $\psi\in P_{2N-1}$. Hence, using the fact $Q_{N-1}(x_j)=0$ and the orthogonality (2.1), gives
\begin{align*}
\langle L_N,L_N\rangle_N = \langle1,\psi\rangle_N \overset{(4.3)}{=} (1,\psi) &= \int_{-1}^{1}\psi(x)\,dx \\
&\overset{(2.10)}{=} \int_{-1}^{1}L_N^2(x)\,dx - b_N\big(K_N^{(0,0)}\big)^2\lambda_{N-3}^2\int_{-1}^{1}J_{N-3}^{(2,2)}(x)^2(1-x^2)^2\,dx \\
&= \gamma_N^{(0,0)} - b_N\,\lambda_{N-3}^2\big(K_N^{(0,0)}\big)^2\gamma_{N-3}^{(2,2)} \\
&= \Big(1 - b_N\,\lambda_{N-3}^2\big(K_N^{(0,0)}\big)^2\gamma_{N-3}^{(2,2)}\big(\gamma_N^{(0,0)}\big)^{-1}\Big)\gamma_N^{(0,0)}. \tag{4.6}
\end{align*}
Using (2.2)-(2.6) and (2.9) to work out the constants yields
\[ \lambda_{N-3}^2\big(K_N^{(0,0)}\big)^2\gamma_{N-3}^{(2,2)}\big(\gamma_N^{(0,0)}\big)^{-1} = \frac{(N+1)(2N-1)(2N+1)}{N(N-1)(N-2)}. \]
Furthermore, by (2.12), we have that for $N\ge4$,
\[ 0 < -b_N\,\lambda_{N-3}^2\big(K_N^{(0,0)}\big)^2\gamma_{N-3}^{(2,2)}\big(\gamma_N^{(0,0)}\big)^{-1} < \frac{(N+2)(N+3)}{(2N-1)(2N+1)}\cdot\frac{(N+1)(2N-1)(2N+1)}{N(N-1)(N-2)} = C_N,
\]
whence (4.5) follows.
0 and m s g± = 0. If u ∈ X ∩ B−2 (I) and f ∈ B−2 (I) with integers m, s ≥ 2, then ku − uN kµ ≤ c(N µ−m k∂xm ukωm−2 + N −s k∂xs f kωs−2 ), µ = 0, 1. (5.24)

1 Proof. For simplicity, let eN = πN u − uN and Ev (φ) = (v, φ) − v, φ N . By (4.3), (5.1) and (5.15), 

 ∂x (u − uN ), ∂x φ + b (u, φ) − uN , φ N = Ef (φ),

∀φ ∈ XN .

Thanks to (4.24), we can rewrite the above equation as



1  (∂x eN , ∂x φ) + b eN , φ N = b πN u, φ N − (u, φ) + Ef (φ),

∀φ ∈ XN .

(5.25)

Using (4.4), Theorems 4.1 and 4.2, and Corollary 4.2 leads to

1

1 | πN u, φ N − (u, φ)| ≤ | πN u − IN u, φ N | + |Eu (φ)| 1 ≤ kπN u − IN ukkφk + |Eu (φ)| ≤ cN −m k∂xm ukωm−2 kφk,

and |Ef (φ)| ≤ N −s k∂xs f kωs−2 kφk. Hence, taking φ = eN in (5.25) and using (4.4), we reach that |eN |1 + keN k ≤ c(N −m k∂xm ukωm−2 + N −s k∂xs f kωs−2 ). Finally, using Theorem 4.2 again leads to the desired result.



Remark 5.3. Theorem 4.2 of [5] presents an $L^2$-norm error estimate for a modified LGL collocation method for (5.1) (with $b=1$), with convergence order $O(N^{2-m})$. The estimate (5.24) essentially improves this existing result, and appears to be optimal (of order $O(N^{-m})$).


6. Concluding remarks

In the foregoing discussions, we restricted our attention to one-dimensional linear problems, but the ideas and techniques go beyond these equations. An interesting extension is the construction and analysis of a quadrature formula and the associated pseudospectral approximations for the problem with mixed boundary conditions
\[ -u'' + bu = f \quad\text{in } (-1,1); \qquad \alpha_\pm u'(\pm1) + \beta_\pm u(\pm1) = g_\pm, \tag{6.1} \]
where $\alpha_\pm$ and $\beta_\pm$ are constants such that (6.1)

is well-posed. We will report this topic in our future work. In summary, we have derived in this paper the asymptotic properties of the nodes and weights of the GLLB quadrature formula, and obtained optimal estimates for the GLLB interpolation errors. We also presented a detailed implementation of the the associated GLLB pseudospectral method for second-order PDEs. Appendix A. Expressions of the Lagrangian basis For notational simplicity, let q(x) := QN −1 (x) be the quadrature polynomial given in (2.10), and define 1 QN −1 (x) 1 q(x) qj (x) := 0 = 0 ∈ PN −2 , 1 ≤ j ≤ N − 1. (A.1) QN −1 (xj ) x − xj q (xj ) x − xj  N −1  N −1 We see that qj j=1 are the Lagrangian basis associated with the interior points xj j=1 of the GLLB quadrature. This matter with (5.6) implies that the corresponding basis functions can be expressed as follows. • Boundary basis functions :  0  q (1)(x − 1) − q(1) q(x) , h0 (x) = q(−1)q 0 (1) − 2q 0 (1)q 0 (−1) − q 0 (−1)q(1)  0  q (−1)(x + 1) − q(−1) q(x) hN (x) = . 0 q(1)q (−1) + 2q 0 (1)q 0 (−1) − q 0 (1)q(−1)

(A.2)
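As a sanity check, the boundary basis (A.2) can be verified numerically. The sketch below is illustrative only: it uses a hypothetical stand-in polynomial $q$ with distinct interior roots in place of the actual quadrature polynomial $Q_{N-1}$ from (2.10), since the Birkhoff conditions $h_0'(-1)=1$, $h_0'(1)=0$, $h_0(x_j)=0$ (and symmetrically for $h_N$) depend only on the structure of (A.2).

```python
import numpy as np

# Stand-in for Q_{N-1}: any polynomial with distinct roots in (-1, 1).
# These nodes are hypothetical, chosen only to exercise formula (A.2).
nodes = np.array([-0.8, -0.3, 0.2, 0.7])
q = np.poly1d(np.poly(nodes))
dq = q.deriv()

# h_0(x) = [q'(1)(x-1) - q(1)] q(x) / (q(-1)q'(1) - 2q'(1)q'(-1) - q'(-1)q(1))
h0 = (np.poly1d([dq(1.0), -dq(1.0) - q(1.0)]) * q) / (
    q(-1.0) * dq(1.0) - 2 * dq(1.0) * dq(-1.0) - dq(-1.0) * q(1.0))

# h_N(x) = [q'(-1)(x+1) - q(-1)] q(x) / (q(1)q'(-1) + 2q'(1)q'(-1) - q'(1)q(-1))
hN = (np.poly1d([dq(-1.0), dq(-1.0) - q(-1.0)]) * q) / (
    q(1.0) * dq(-1.0) + 2 * dq(1.0) * dq(-1.0) - dq(1.0) * q(-1.0))

# Birkhoff conditions: unit derivative at the "own" endpoint, zero derivative
# at the opposite endpoint, and vanishing at all interior nodes.
assert abs(h0.deriv()(-1.0) - 1.0) < 1e-8 and abs(h0.deriv()(1.0)) < 1e-8
assert abs(hN.deriv()(1.0) - 1.0) < 1e-8 and abs(hN.deriv()(-1.0)) < 1e-8
assert np.max(np.abs(h0(nodes))) < 1e-8 and np.max(np.abs(hN(nodes))) < 1e-8
```

Any choice of distinct interior nodes works here, since the endpoint conditions hold identically in $q$.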

• Interior basis functions:
$$h_j(x) = \frac{x^2 + ax + b}{x_j^2 + ax_j + b}\, q_j(x), \quad 1 \le j \le N-1, \eqno(A.3)$$
where
$$a = \frac{2q_j(1)q'_j(-1) + 2q'_j(1)q_j(-1)}{q'_j(1)q_j(-1) - 2q'_j(1)q'_j(-1) - q_j(1)q'_j(-1)}, \qquad
b = \frac{3q_j(1)q'_j(-1) - 3q'_j(1)q_j(-1) + 2q'_j(1)q'_j(-1) - 4q_j(1)q_j(-1)}{q'_j(1)q_j(-1) - 2q'_j(1)q'_j(-1) - q_j(1)q'_j(-1)}. \eqno(A.4)$$
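The interior basis can be checked in the same spirit: the constants $a, b$ in (A.4) are exactly the solution of the linear system enforcing $h_j'(\pm 1) = 0$, while the Lagrangian factor (A.1) gives $h_j(x_i) = \delta_{ij}$. The following self-contained sketch again substitutes a hypothetical polynomial with distinct interior roots for the true $Q_{N-1}$.

```python
import numpy as np

# Stand-in for Q_{N-1}: hypothetical distinct interior nodes, for illustration.
nodes = np.array([-0.8, -0.3, 0.2, 0.7])
q = np.poly1d(np.poly(nodes))
dq = q.deriv()

def interior_basis(j):
    """Build h_j from (A.1) and (A.3)-(A.4)."""
    xj = nodes[j]
    # q_j(x) = q(x) / (q'(x_j)(x - x_j)), computed by polynomial deflation.
    coef, _ = np.polydiv(q.coeffs, [1.0, -xj])
    qj = np.poly1d(coef / dq(xj))
    dqj = qj.deriv()
    A, Ap = qj(1.0), dqj(1.0)
    B, Bp = qj(-1.0), dqj(-1.0)
    den = Ap * B - 2 * Ap * Bp - A * Bp          # common denominator in (A.4)
    a = (2 * A * Bp + 2 * Ap * B) / den
    b = (3 * A * Bp - 3 * Ap * B + 2 * Ap * Bp - 4 * A * B) / den
    return (np.poly1d([1.0, a, b]) * qj) / (xj**2 + a * xj + b)

# Each h_j equals 1 at its own node, 0 at the others, and has zero
# derivative at both endpoints (the Birkhoff conditions).
for j in range(len(nodes)):
    hj = interior_basis(j)
    assert np.allclose(hj(nodes), np.eye(len(nodes))[j], atol=1e-8)
    assert abs(hj.deriv()(1.0)) < 1e-8 and abs(hj.deriv()(-1.0)) < 1e-8
```

Solving the two endpoint conditions $h_j'(\pm 1)=0$ for $a$ and $b$ by hand reproduces (A.4), which is what this check exercises numerically.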

Appendix B. The derivation of (5.20)

We first recall the formula (see, e.g., [24]):
$$\partial_x J_n^{(\alpha,\beta)}(x) = \frac{1}{2}(n + \alpha + \beta + 1)J_{n-1}^{(\alpha+1,\beta+1)}(x), \quad n \ge 1. \eqno(B.1)$$
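Formula (B.1) is easy to confirm numerically. The sketch below builds Jacobi polynomials from the standard three-term recurrence (see Szegő [24]) as numpy `poly1d` objects; the function name `jacobi_poly` and the sampled test points are our own illustrative choices, not part of the paper.

```python
import numpy as np

def jacobi_poly(n, a, b):
    """Jacobi polynomial J_n^{(a,b)} as a numpy poly1d, via the standard
    three-term recurrence (Szego, Orthogonal Polynomials)."""
    P0 = np.poly1d([1.0])
    if n == 0:
        return P0
    P1 = np.poly1d([(a + b + 2) / 2.0, (a - b) / 2.0])
    for k in range(2, n + 1):
        c = 2.0 * k * (k + a + b) * (2 * k + a + b - 2)
        A1 = (2 * k + a + b - 1) * (2 * k + a + b) * (2 * k + a + b - 2)
        A2 = (2 * k + a + b - 1) * (a * a - b * b)
        A3 = 2.0 * (k + a - 1) * (k + b - 1) * (2 * k + a + b)
        P0, P1 = P1, (np.poly1d([A1, A2]) * P1 - A3 * P0) / c
    return P1

# Check (B.1): d/dx J_n^{(a,b)} = (n+a+b+1)/2 * J_{n-1}^{(a+1,b+1)}.
n, a, b = 4, 2, 2
x = np.linspace(-1.0, 1.0, 9)
lhs = jacobi_poly(n, a, b).deriv()(x)
rhs = 0.5 * (n + a + b + 1) * jacobi_poly(n - 1, a + 1, b + 1)(x)
assert np.allclose(lhs, rhs)
```

The same check passes for other index pairs, e.g. $(\alpha,\beta)=(0,0)$ or $(1,1)$, which is the case actually used below (B.1).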

Using (B.1) and the expressions of $d_n$ and $e_n$ gives
$$\phi'_n(x) = d_n\Big[\frac{1}{2}(n+1)J_{n-1}^{(1,1)}(x) + \frac{1}{2}(n+3)e_n J_{n+1}^{(1,1)}(x)\Big] = \frac{1}{2}d_n(n+1)\Big[J_{n-1}^{(1,1)}(x) - \frac{n}{n+2}J_{n+1}^{(1,1)}(x)\Big].$$
By (B.1) and formula (4.5.5) of [24],
$$(1 - x^2)\frac{d}{dx}J_n^{(1,1)}(x) = \frac{(n+1)(n+3)}{2n+3}\Big[J_{n-1}^{(1,1)}(x) - \frac{n}{n+2}J_{n+1}^{(1,1)}(x)\Big].$$

A combination of the above two facts leads to
$$\phi'_n(x) = \frac{2n+3}{4}\, d_n (1 - x^2) J_{n-1}^{(2,2)}(x) = \lambda_{n-1} e_{n-1} (1 - x^2) J_{n-1}^{(2,2)}(x).$$
Indeed, the normalization factor $d_n$ is chosen such that $\{\phi_n\}$ is orthonormal in $L^2_{\omega^{2,2}}(I)$.

References

[1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[2] C. Bernardi and Y. Maday, Spectral Methods, in Handbook of Numerical Analysis, Vol. 5, Techniques of Scientific Computing, 209-486, edited by P. G. Ciarlet and J. L. Lions, Elsevier, Amsterdam, 1997.
[3] J. P. Boyd, Chebyshev and Fourier Spectral Methods, Dover Publications Inc., Mineola, NY, second edition, 2001.
[4] C. Canuto, M. Y. Hussaini, A. Quarteroni and T. A. Zang, Spectral Methods: Fundamentals in Single Domains, Springer, Berlin, 2006.
[5] C. Canuto, Boundary conditions in Chebyshev and Legendre methods, SIAM J. Numer. Anal., 23 (1986), 815-831.
[6] P. G. Ciarlet, Finite Element Methods for Elliptic Problems, North-Holland, Amsterdam, 1978.
[7] M. O. Deville and E. H. Mund, Finite-element preconditioning for pseudospectral solutions of elliptic problems, SIAM J. Sci. Stat. Comput., 11 (1990), 311-342.
[8] A. Ezzirani and A. Guessab, A fast algorithm for Gaussian type quadrature formulae with mixed boundary conditions and some lumped mass spectral approximations, Math. Comp., 68 (1999), 217-248.
[9] B. Fornberg, A Practical Guide to Pseudospectral Methods, Cambridge University Press, 1996.
[10] D. Funaro, A variational formulation for the Chebyshev pseudospectral approximation of Neumann problems, SIAM J. Numer. Anal., 27 (1990), 695-703.
[11] D. Funaro, Polynomial Approximation of Differential Equations, Springer-Verlag, 1992.
[12] J. Hesthaven, S. Gottlieb and D. Gottlieb, Spectral Methods for Time-Dependent Problems, Cambridge University Press, 2007.
[13] G. H. Golub and J. H. Welsch, Calculation of Gauss quadrature rules, Math. Comp., 23 (1969), 221-230.
[14] Guo Ben-yu, Difference Methods for Partial Differential Equations, Science Press, Beijing, 1988.
[15] Guo Ben-yu, Spectral Methods and Their Applications, World Scientific, Singapore, 1998.
[16] Guo Ben-yu, Jacobi approximations in certain Hilbert spaces and their applications to singular differential equations, J. Math. Anal. Appl., 243 (2000), 373-408.
[17] Guo Ben-yu and Wang Li-lian, Jacobi interpolation approximations and their applications to singular differential equations, Advances in Computational Mathematics, 14 (2001), 227-276.
[18] Guo Ben-yu and Wang Li-lian, Jacobi approximations in non-uniformly Jacobi-weighted Sobolev spaces, J. Approx. Theory, 128 (2004), 1-41.
[19] W. Z. Huang and D. M. Sloan, The pseudospectral method for third-order differential equations, SIAM J. Numer. Anal., 29 (1992), 1626-1647.
[20] K. Jetter, A new class of Gaussian quadrature formulas based on Birkhoff type data, SIAM J. Numer. Anal., 19 (1982), 1081-1089.
[21] K. Jetter, Uniqueness of Gauss-Birkhoff quadrature formulas, SIAM J. Numer. Anal., 24 (1987), 147-154.
[22] W. J. Merryfield and B. Shizgal, Properties of collocation third-derivative operators, J. Comput. Phys., 105 (1993), 182-185.


[23] J. Shen, Efficient spectral-Galerkin method I. Direct solvers for second- and fourth-order equations by using Legendre polynomials, SIAM J. Sci. Comput., 15 (1994), 1489-1505.
[24] G. Szegő, Orthogonal Polynomials, Amer. Math. Soc., Providence, RI, 1959.
[25] L. N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.
[26] Y. Xu, Quasi-orthogonal polynomials, quadrature, and interpolation, J. Math. Anal. Appl., 182 (1994), 779-799.