INTEGRATION PRECONDITIONING OF PSEUDOSPECTRAL OPERATORS. I. BASIC LINEAR OPERATORS

J. S. HESTHAVEN†

† Division of Applied Mathematics, Brown University, Box F, Providence, RI 02912. E-mail: [email protected]
This work was supported by NSF grant no. ASC-9504002 and by DoE grant no. DE-FG0295ER25239.
Abstract. This paper develops a family of preconditioners for pseudospectral operators, with emphasis on the advection and diffusion operators subject to various types of boundary conditions. The development of the preconditioners is done within the context of ultraspherical polynomial approximations, with special attention paid to methods based on Legendre and Chebyshev approximations on the Gauss-Lobatto quadrature points. It is shown that the eigenvalue spectrum of the preconditioned linear operators can be obtained in analytic form, and the spectra all share the property that a large majority of the eigenvalues are exactly unity. The preconditioned advective operator has only two non-unity eigenvalues, which are bounded independently of the order of the approximation, $N$. Likewise, the preconditioned diffusive operator has only four non-unity eigenvalues but is in general indefinite. For Dirichlet boundary conditions the spectral radius is found to grow as $\sqrt{N}$, while it grows like $N$ in the case of Neumann boundary conditions. Generalizations to higher order differential operators, general boundary conditions, and arbitrary polynomial bases and quadrature nodes are discussed.

Key words. Pseudospectral Methods, Preconditioning, Penalty Methods.

AMS subject classifications. 65M70, 65N35, 15A12, 65F10
1. Introduction. Within the last decades, pseudospectral methods have proven to be an efficient way in which to obtain accurate solutions to partial differential equations, the basic idea being to replace the exact derivatives with derivatives of interpolating polynomials at the collocation points [1, 2]. Using such an approach for solving boundary value problems involves the solution of linear systems of equations which are known to be very ill-conditioned; e.g., for methods based on orthogonal polynomials the condition number of the approximate first order operator grows like $N^2$, while the condition number for the second order operator in general scales with $N^4$. Hence, in order to apply recently developed powerful iterative methods like GMRES, see e.g. [3], for the solution of such systems, the development of efficient preconditioners becomes crucial.

In this paper we present a family of novel preconditioners for linear one-dimensional differential operators approximated using a pseudospectral approach based on ultraspherical polynomials. Previous work on preconditioners includes the use of finite difference based preconditioners for the advective operator [4] as well as for the diffusive operator [5]. More recently, the use of finite element based preconditioners has been advocated [6] and applied with significant success to a large variety of operators. However, a general property of previously proposed preconditioners is that they rely on a low order approximation of the inverse operator. In this work we take a different road and devise polynomial integration preconditioners of the same order as the polynomial approximation itself. This has the consequence of making the preconditioners full matrices which, however, is only to be expected as we are considering global methods. On the other hand, using properties of the polynomial basis as the foundation for the development of the preconditioners allows for devising efficient preconditioners for methods based on any of the classical orthogonal polynomials and for operators of
arbitrary order. Moreover, as we shall discuss, the framework presented here applies directly to pseudospectral methods based on any of the Gauss quadrature points. The pseudospectral integration preconditioners presented here are closely related to previous work on preconditioning of spectral differentiation operators as they appear in the context of spectral $\tau$-methods [7, 8]. However, the extension to the case of pseudospectral methods involves the question of how to deal with boundary conditions and how this issue affects the efficiency of the integration preconditioners. We propose to enforce the boundary conditions in a weak form only, using a penalty term [9, 10, 11]. This has the very attractive consequence that the effect of the preconditioners can be analyzed and understood in detail. In fact, for the cases discussed in the present paper we obtain the eigenvalue spectrum in analytic form. Moreover, enforcing the boundary conditions through the penalty term allows for a direct generalization to situations with arbitrary boundary conditions.

The remaining part of this paper is organized as follows. In Sec. 2 we introduce the ultraspherical polynomials and discuss their properties, with special attention being given to the Legendre and Chebyshev polynomials. We continue by introducing spectral as well as pseudospectral methods, as the connection between these two approaches lies at the heart of the motivation leading to the integration preconditioners which are developed and discussed in Sec. 3. Section 4 addresses the performance of the preconditioners for the advective operator, with special attention being given to the issue of how boundary conditions are imposed. The analysis is carried out in detail for the case of Legendre and Chebyshev pseudospectral methods on the Gauss-Lobatto quadrature points. In Sec. 5 we present a similar analysis for the diffusion operator, i.e. the Poisson equation, subject to Dirichlet as well as Neumann boundary conditions, with the details being given in the case of Legendre and Chebyshev methods. Section 6 contains a few concluding remarks and discusses generalizations of the framework developed here in order to deal with general boundary conditions, higher order operators, and methods based on general orthogonal polynomials as well as general Gauss quadrature nodal sets.

2. Ultraspherical Polynomials and Spectral Methods. In this section we shall review several subjects central to the subsequent analysis. In particular, we introduce the ultraspherical polynomials and discuss in some detail the theory underlying spectral as well as pseudospectral methods for the solution of partial differential equations, as this connection lies at the heart of the integration preconditioners introduced in the following section.

2.1. Ultraspherical Polynomials. The ultraspherical polynomials, $P_n^{(\alpha)}(x)$, appear as the solution to the singular Sturm-Liouville problem in the finite domain $x \in [-1, 1]$ [12, 13],
\[
\frac{d}{dx}\left[(1-x^2)^{\alpha+1}\,\frac{dP_n^{(\alpha)}(x)}{dx}\right] + n(n+2\alpha+1)(1-x^2)^{\alpha}\,P_n^{(\alpha)}(x) = 0 ,
\]
i.e., they are essentially the symmetric Jacobi polynomials, $P_n^{(\alpha,\alpha)}(x)$, although normalized differently since
\[
P_n^{(\alpha)}(x) = \frac{\Gamma(\alpha+1)\,\Gamma(n+2\alpha+1)}{\Gamma(2\alpha+1)\,\Gamma(n+\alpha+1)}\,P_n^{(\alpha,\alpha)}(x) ,
\]
where we have introduced the Gamma function, $\Gamma(x)$.
The first two ultraspherical polynomials are given as
\[
P_0^{(\alpha)}(x) = 1 , \qquad P_1^{(\alpha)}(x) = (2\alpha+1)x ,
\]
while the remaining polynomials may be arrived at by using the recursion formula
(1) \[ x P_n^{(\alpha)}(x) = a_{n-1,n} P_{n-1}^{(\alpha)}(x) + a_{n+1,n} P_{n+1}^{(\alpha)}(x) , \]
with the recurrence coefficients being
(2) \[ a_{n-1,n} = \frac{n+2\alpha}{2n+2\alpha+1} , \qquad a_{n+1,n} = \frac{n+1}{2n+2\alpha+1} . \]
We shall need the values of $P_n^{(\alpha)}(x)$ and its first derivative at the boundary of the domain,
(3) \[ P_n^{(\alpha)}(\pm 1) = (\pm 1)^n \binom{n+2\alpha}{n} , \]
(4) \[ \frac{dP_n^{(\alpha)}(\pm 1)}{dx} = (2\alpha+1)(\pm 1)^{n+1}\binom{n+2\alpha+1}{n-1} . \]
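As a concrete illustration of how the recursion, Eqs. (1)-(2), is used in practice, the following sketch evaluates $P_0^{(\alpha)},\ldots,P_N^{(\alpha)}$ at a set of points. Python with NumPy is assumed, the function name is ours rather than part of the paper, and $\alpha > -1/2$ is assumed so that the Chebyshev case is left to the limiting treatment of Sec. 2.1.2.

```python
import numpy as np

def ultraspherical_eval(N, alpha, x):
    """Evaluate P_0^(alpha),...,P_N^(alpha) at the points x using the
    three-term recursion of Eqs. (1)-(2); row n holds P_n at all points."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    P = np.zeros((N + 1, x.size))
    P[0] = 1.0
    if N >= 1:
        P[1] = (2 * alpha + 1) * x
    for n in range(1, N):
        a_lo = (n + 2 * alpha) / (2 * n + 2 * alpha + 1)   # a_{n-1,n}, Eq. (2)
        a_hi = (n + 1) / (2 * n + 2 * alpha + 1)           # a_{n+1,n}, Eq. (2)
        P[n + 1] = (x * P[n] - a_lo * P[n - 1]) / a_hi     # solve Eq. (1) for P_{n+1}
    return P

# alpha = 0 gives the Legendre polynomials; e.g. ultraspherical_eval(5, 0.0, 1.0)
# returns a column of ones, consistent with Eq. (3).
```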
To simplify the notation in the remaining part of the paper we express the recursion, Eqs. (1)-(2), in a compact matrix form as
\[
x\boldsymbol{P} = \boldsymbol{P}A ,
\]
where
\[
A = \begin{bmatrix}
0 & a_{0,1} & 0 & 0 & 0 & \cdots \\
a_{1,0} & 0 & a_{1,2} & 0 & 0 & \cdots \\
0 & a_{2,1} & 0 & a_{2,3} & 0 & \cdots \\
0 & 0 & a_{3,2} & 0 & a_{3,4} & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \ddots
\end{bmatrix} ,
\]
while the row vector, $\boldsymbol{P}$, is given as
\[
\boldsymbol{P} = \left[ P_0^{(\alpha)}(x), \ldots, P_N^{(\alpha)}(x) \right] .
\]
A relation between the polynomials and their derivatives is given through a recurrence relation of the form
(5) \[ P_n^{(\alpha)}(x) = b^{(1)}_{n-1,n}\,\frac{dP_{n-1}^{(\alpha)}(x)}{dx} + b^{(1)}_{n+1,n}\,\frac{dP_{n+1}^{(\alpha)}(x)}{dx} , \]
with the recurrence coefficients
(6) \[ b^{(1)}_{n-1,n} = -\frac{1}{2n+2\alpha+1} , \qquad b^{(1)}_{n+1,n} = \frac{1}{2n+2\alpha+1} . \]
Equations (5)-(6) may, as was the case for Eqs. (1)-(2), be expressed in matrix form as
\[
\boldsymbol{P} = \boldsymbol{P}_x B^{(-1)} ,
\]
where
\[
B^{(-1)} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & \cdots \\
b^{(1)}_{1,0} & 0 & b^{(1)}_{1,2} & 0 & 0 & \cdots \\
0 & b^{(1)}_{2,1} & 0 & b^{(1)}_{2,3} & 0 & \cdots \\
0 & 0 & b^{(1)}_{3,2} & 0 & b^{(1)}_{3,4} & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \ddots
\end{bmatrix} ,
\]
while $\boldsymbol{P}_x$ refers to the derivative of the row vector, $\boldsymbol{P}$. We have chosen the notation $B^{(-1)}$ since this matrix operator essentially plays the role of integration of the polynomial basis. Observe that since $P_0^{(\alpha)}(x) = 1$, its first derivative vanishes and the entries of the first row of $B^{(-1)}$ are taken to be zero.
We shall also need the following less known recurrence formula,
(7) \[ P_n^{(\alpha)}(x) = b^{(2)}_{n-2,n}\,\frac{d^2P_{n-2}^{(\alpha)}(x)}{dx^2} + b^{(2)}_{n,n}\,\frac{d^2P_{n}^{(\alpha)}(x)}{dx^2} + b^{(2)}_{n+2,n}\,\frac{d^2P_{n+2}^{(\alpha)}(x)}{dx^2} , \]
with the recurrence coefficients
(8) \[ b^{(2)}_{n-2,n} = \frac{1}{(2n+2\alpha+1)(2n+2\alpha-1)} , \quad b^{(2)}_{n,n} = -\frac{2}{(2n+2\alpha-1)(2n+2\alpha+3)} , \quad b^{(2)}_{n+2,n} = \frac{1}{(2n+2\alpha+1)(2n+2\alpha+3)} . \]
In matrix form this yields
\[
\boldsymbol{P} = \boldsymbol{P}_{xx} B^{(-2)} ,
\]
with
\[
B^{(-2)} = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
b^{(2)}_{2,0} & 0 & b^{(2)}_{2,2} & 0 & b^{(2)}_{2,4} & 0 & \cdots \\
0 & b^{(2)}_{3,1} & 0 & b^{(2)}_{3,3} & 0 & b^{(2)}_{3,5} & \cdots \\
\vdots & & \ddots & \ddots & \ddots & \ddots & \ddots
\end{bmatrix} .
\]
As for $B^{(-1)}$, we take the first two rows of the second order integration operator, $B^{(-2)}$, to be zero since the first two elements of $\boldsymbol{P}_{xx}$ are identically zero. Recurrence relations for higher order derivatives may be obtained in explicit form by repeatedly combining the relations, Eqs. (5)-(8).
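To fix ideas, the operators $B^{(-1)}$ and $B^{(-2)}$ can be assembled directly from Eqs. (6) and (8). The sketch below is ours (NumPy, $\alpha > -1/2$); it fills column $n$ with the corresponding recurrence coefficients while keeping the first row, respectively the first two rows, identically zero.

```python
import numpy as np

def integration_operators(N, alpha):
    """Assemble B^(-1) and B^(-2) for the basis P_0^(a),...,P_N^(a):
    column n carries the coefficients of Eqs. (6) and (8), and the first
    one (resp. two) rows are left identically zero."""
    B1 = np.zeros((N + 1, N + 1))
    B2 = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        d1 = 2 * n + 2 * alpha + 1
        if n - 1 >= 1:
            B1[n - 1, n] = -1.0 / d1                 # b^(1)_{n-1,n}
        if n + 1 <= N:
            B1[n + 1, n] = 1.0 / d1                  # b^(1)_{n+1,n}
        if n - 2 >= 2:
            B2[n - 2, n] = 1.0 / (d1 * (d1 - 2))     # b^(2)_{n-2,n}
        if n >= 2:
            B2[n, n] = -2.0 / ((d1 - 2) * (d1 + 2))  # b^(2)_{n,n}
        if 2 <= n + 2 <= N:
            B2[n + 2, n] = 1.0 / (d1 * (d1 + 2))     # b^(2)_{n+2,n}
    return B1, B2
```

For $\alpha = 0$ the non-zero entries coincide with the Legendre coefficients given in Sec. 2.1.1.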
The general part of the subsequent analysis in this paper shall be concerned with methods based on the ultraspherical polynomials, although, due to their extensive use, we shall pay special attention to the cases of Legendre polynomials, $L_n(x)$, and Chebyshev polynomials, $T_n(x)$. In the following we shall therefore briefly recall the most important properties of these two families of polynomials.

2.1.1. Legendre Polynomials. The Legendre polynomials, $L_n(x)$, appear from the general ultraspherical polynomial as
\[
L_n(x) = P_n^{(0)}(x) ,
\]
with the first two being
\[
L_0(x) = 1 , \qquad L_1(x) = x ,
\]
while the higher order polynomials appear from the recurrence, Eq. (1), with the recurrence coefficients
\[
a_{n-1,n} = \frac{n}{2n+1} , \qquad a_{n+1,n} = \frac{n+1}{2n+1} ,
\]
and the boundary values are given as
(9) \[ L_n(\pm 1) = (\pm 1)^n , \qquad \frac{dL_n(\pm 1)}{dx} = \frac{1}{2}(\pm 1)^{n+1}\,n(n+1) . \]
The recurrence coefficients of Eq. (5), also providing the entries of $B^{(-1)}$, become
(10) \[ b^{(1)}_{n-1,n} = -\frac{1}{2n+1} , \qquad b^{(1)}_{n+1,n} = \frac{1}{2n+1} , \]
while the entries of $B^{(-2)}$, being the recurrence coefficients for Eq. (7), are given as
(11) \[ b^{(2)}_{n-2,n} = \frac{1}{(2n+1)(2n-1)} , \quad b^{(2)}_{n,n} = -\frac{2}{(2n-1)(2n+3)} , \quad b^{(2)}_{n+2,n} = \frac{1}{(2n+1)(2n+3)} . \]
Further details on Legendre polynomials can be found in, e.g., [12, 13].

2.1.2. Chebyshev Polynomials. The Chebyshev polynomials, $T_n(x)$, appear as a special case of the ultraspherical polynomials as
\[
T_n(x) = n \lim_{\alpha\to-\frac{1}{2}} \Gamma(2\alpha+1)\,P_n^{(\alpha)}(x) = \cos(n\arccos x) ,
\]
where the limit is taken since $\Gamma(2\alpha+1)$ has a simple pole for $\alpha = -1/2$. However, the limit exists and the Chebyshev polynomials are recovered [12]. Although there exists a closed form expression for the Chebyshev polynomials, it remains illustrative to recall that
\[
T_0(x) = 1 , \qquad T_1(x) = x ,
\]
while the higher order Chebyshev polynomials are recovered from Eq. (1) with the recurrence coefficients
\[
a_{n-1,n} = \frac{1}{2} , \qquad a_{n+1,n} = \frac{c_n}{2} ,
\]
where $c_0 = 2$ and $c_n = 1$ otherwise. The Chebyshev polynomials take the following values at the boundaries of the domain,
(12) \[ T_n(\pm 1) = (\pm 1)^n , \qquad \frac{dT_n(\pm 1)}{dx} = (\pm 1)^{n+1}\,n^2 , \]
while the entries of $B^{(-1)}$, establishing a connection between $T_n(x)$ and its derivatives, are given as
(13) \[ b^{(1)}_{n-1,n} = -\frac{1}{2(n-1)} , \qquad b^{(1)}_{n+1,n} = \frac{c_n}{2(n+1)} . \]
Let us finally give the recurrence coefficients for Eq. (7), being the entries of $B^{(-2)}$, as
(14) \[ b^{(2)}_{n-2,n} = \frac{1}{4(n-1)(n-2)} , \quad b^{(2)}_{n,n} = -\frac{1}{2(n^2-1)} , \quad b^{(2)}_{n+2,n} = \frac{c_n}{4(n+1)(n+2)} . \]
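The Chebyshev case is not covered by the generic $\alpha$-formulas because of the limiting normalization, but the entries in Eqs. (13)-(14) can be coded directly; again this is a sketch with names of our choosing, building on the NumPy import above.

```python
def chebyshev_integration_operators(N):
    """Chebyshev counterparts of B^(-1) and B^(-2), Eqs. (13)-(14),
    with c_0 = 2 and c_n = 1 otherwise."""
    c = np.ones(N + 1); c[0] = 2.0
    B1 = np.zeros((N + 1, N + 1))
    B2 = np.zeros((N + 1, N + 1))
    for n in range(N + 1):                       # column n
        if n - 1 >= 1:
            B1[n - 1, n] = -1.0 / (2 * (n - 1))
        if n + 1 <= N:
            B1[n + 1, n] = c[n] / (2 * (n + 1))
        if n - 2 >= 2:
            B2[n - 2, n] = 1.0 / (4 * (n - 1) * (n - 2))
        if n >= 2:
            B2[n, n] = -1.0 / (2 * (n ** 2 - 1))
        if n + 2 <= N:
            B2[n + 2, n] = c[n] / (4 * (n + 1) * (n + 2))
    return B1, B2
```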
Much more on the properties of Chebyshev polynomials can be found in, e.g., [12, 13].

2.2. Spectral Methods. When applying spectral methods for the solution of partial differential equations, one seeks polynomial solutions
\[
\mathcal{P}_N u(x) \in Q_N^0 , \qquad Q_N^0 = \mathrm{span}\left\{ P_n^{(\alpha)} \right\}_{n=0}^{N} ,
\]
of the form
(15) \[ \mathcal{P}_N u(x) = \sum_{n=0}^{N} \hat{u}_n P_n^{(\alpha)}(x) , \]
where $\hat{u}_n$ represents the continuous expansion coefficients. Since the ultraspherical polynomials appear as eigensolutions to a Sturm-Liouville problem, they are by construction orthogonal in the weighted $L^2_w[-1,1]$ space as
\[
\gamma_n\,\delta_{nm} = \int_{-1}^{1} P_n^{(\alpha)}(x)\,P_m^{(\alpha)}(x)\,(1-x^2)^{\alpha}\,dx ,
\]
where the continuous normalization factor, $\gamma_n$, is given as
(16) \[ \gamma_n = 2^{2\alpha+1}\,\frac{\Gamma^2(\alpha+1)\,\Gamma(n+2\alpha+1)}{n!\,(2n+2\alpha+1)\,\Gamma^2(2\alpha+1)} . \]
Hence, the expansion, Eq. (15), exists for all $u(x) \in L^2_w[-1,1]$ and the continuous expansion coefficients are obtained as
\[
\hat{u}_n = \frac{1}{\gamma_n}\int_{-1}^{1} u(x)\,P_n^{(\alpha)}(x)\,(1-x^2)^{\alpha}\,dx .
\]
The evaluation of this integral naturally poses a problem in most situations, and one has to resort to approximations of the expansion coefficients by using Gauss quadrature. Since we are concerned with boundary value problems, the natural choice is the Gauss-Lobatto quadrature for the ultraspherical polynomials, which involves the introduction of the Gauss-Lobatto grid
\[
\{x_i\}_{i=0}^{N} = \left\{ x \in [-1,1] \;\Big|\; (1-x^2)\,\frac{dP_N^{(\alpha)}(x)}{dx} = 0 \right\} ,
\]
assumed ordered as
\[
-1 = x_0 < x_1 < \cdots < x_{N-1} < x_N = 1 .
\]
Using this grid we obtain the discrete expansion coefficients, $\tilde{u}_n$, through the sum
\[
\tilde{u}_n = \frac{1}{\tilde{\gamma}_n}\sum_{i=0}^{N} u(x_i)\,P_n^{(\alpha)}(x_i)\,w_i ,
\]
where the discrete normalization factor satisfies $\tilde{\gamma}_n = \gamma_n$ for $n < N$, Eq. (16), while
(17) \[ \tilde{\gamma}_N = 2^{2\alpha+1}\,\frac{\Gamma^2(\alpha+1)\,\Gamma(N+2\alpha+1)}{N\,N!\,\Gamma^2(2\alpha+1)} . \]
The Gauss-Lobatto weights, $w_i$, are given as
(18) \[ w_i = -\,\frac{2^{2\alpha+1}\,\Gamma^2(\alpha+1)\,\Gamma(N+2\alpha+1)}{(N+2\alpha+1)\,N!\,\Gamma^2(2\alpha+1)}\left[ P_N^{(\alpha)}(x_i)\,\frac{dP_{N-1}^{(\alpha)}(x_i)}{dx} \right]^{-1} , \qquad i \in [1, N-1] , \]
with a corresponding closed-form expression for the two endpoint weights, $i = 0, N$.
To simplify the notation in the remaining part of the paper, we introduce the two vectors
\[
\boldsymbol{u} = (u(-1), u(x_1), \ldots, u(x_{N-1}), u(1))^T , \qquad \tilde{\boldsymbol{u}} = (\tilde{u}_0, \ldots, \tilde{u}_N)^T ,
\]
which are related through the transformations
\[
\boldsymbol{u} = \mathrm{T}^{-1}\tilde{\boldsymbol{u}} , \qquad \tilde{\boldsymbol{u}} = \mathrm{T}\boldsymbol{u} ,
\]
where the entries of the $(N+1)\times(N+1)$ matrices, $\mathrm{T}^{-1}$ and $\mathrm{T}$, are given as
\[
\left(\mathrm{T}^{-1}\right)_{ij} = P_j^{(\alpha)}(x_i) , \qquad \mathrm{T}_{ij} = \frac{1}{\tilde{\gamma}_i}\,P_i^{(\alpha)}(x_j)\,w_j .
\]
We observe that $\mathrm{T}^{-1}$ and $\mathrm{T}$ form a pair of real, mutually inverse matrices, since $\mathrm{T}^{-1}\mathrm{T} = \mathrm{I}$, and that with an appropriate normalization, possible since $\tilde{\gamma}_n$ and $w_j$ are strictly positive, they become orthogonal with $\mathrm{T}^T = \mathrm{T}^{-1}$. An important consequence of this is that $\mathrm{T}^{-1}$ and $\mathrm{T}$ provide a similarity transform.
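For the Legendre case ($\alpha = 0$), where the nodes, weights, and normalization factors take simple closed forms ($w_i = 2/[N(N+1)L_N(x_i)^2]$ and $\tilde{\gamma}_N = 2/N$), the transformation pair $\mathrm{T}$, $\mathrm{T}^{-1}$ can be sketched as follows. The helper names are ours and the construction is meant only as an illustration of the definitions above (N >= 2 assumed).

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_gauss_lobatto(N):
    """Legendre Gauss-Lobatto nodes x_0..x_N and weights w_0..w_N."""
    cN = np.zeros(N + 1); cN[N] = 1.0                  # coefficients of L_N
    x = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(cN))), [1.0]))
    w = 2.0 / (N * (N + 1) * leg.legval(x, cN) ** 2)   # w_i = 2/(N(N+1) L_N(x_i)^2)
    return x, w

def legendre_transforms(N):
    """Nodal <-> modal transforms T (forward) and T^{-1} (backward), Sec. 2.2."""
    x, w = legendre_gauss_lobatto(N)
    Tinv = leg.legvander(x, N)                         # (T^{-1})_{ij} = L_j(x_i)
    gamma = 2.0 / (2.0 * np.arange(N + 1) + 1.0)       # gamma_n, Eq. (16) at alpha = 0
    gamma[N] = 2.0 / N                                 # discrete gamma_N, Eq. (17)
    T = (Tinv * w[:, None]).T / gamma[:, None]         # T_{ij} = L_i(x_j) w_j / gamma_i
    return T, Tinv, x, w
```

A quick consistency check is that `T @ Tinv` equals the identity matrix up to round-off.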
Since the continuous expansion coefficients, $\hat{u}_n$, are impossible to obtain in any practical situation, we shall devote the remaining part of this paper to the use of the discrete expansion coefficients, $\tilde{u}_n$. The error incurred by this, known as the aliasing error and appearing as a result of the introduction of a finite size grid, is discussed in, e.g., [1, 2].

When solving partial differential equations we must address the issue of how to compute derivatives. Indeed, in the present context we must consider the question: given the approximation
\[
\boldsymbol{u} = \mathrm{T}^{-1}\tilde{\boldsymbol{u}} ,
\]
is it possible to find the vector of expansion coefficients, $\tilde{\boldsymbol{u}}^{(1)}$, such that
\[
\frac{d}{dx}\boldsymbol{u} = \mathrm{T}^{-1}\tilde{\boldsymbol{u}}^{(1)} ,
\]
i.e., to obtain the expansion coefficients for the approximation of the derivative of $u(x)$ at the grid points. We should recall that in general we have
\[
\mathcal{P}_N\,\frac{du(x)}{dx} \neq \frac{d}{dx}\,\mathcal{P}_N u(x) ,
\]
and we approximate the former by the latter, leading to the introduction of an error as discussed in [1, 2]. However, this is the standard way of approximating derivatives in spectral methods and we shall simply assume that the introduced error is negligible. Within the present context the sought-after relation is easily obtained since
\[
\frac{d}{dx}\boldsymbol{u} = \mathrm{T}^{-1}\tilde{\boldsymbol{u}}^{(1)} = \mathrm{T}_x^{-1}B^{(-1)}\tilde{\boldsymbol{u}}^{(1)} = \mathrm{T}_x^{-1}\tilde{\boldsymbol{u}} ,
\]
where we have introduced Eqs. (5)-(6), implying that
\[
\tilde{\boldsymbol{u}} = B^{(-1)}\tilde{\boldsymbol{u}}^{(1)} .
\]
The solution of this system is easily achieved by backward recursion to obtain $\tilde{\boldsymbol{u}}^{(1)}$, with the exception of $\tilde{u}^{(1)}_N$ which, however, must have the value zero. We note that $B^{(-1)}$ plays the role of integration, such that for $\tilde{\boldsymbol{u}}^{(1)}$ given one may recover $\tilde{\boldsymbol{u}}$ up to a constant, since $\tilde{u}_0$ remains undetermined as the first row of $B^{(-1)}$ is zero. To obtain a direct relation through which we may obtain $\tilde{\boldsymbol{u}}^{(1)}$ from $\tilde{\boldsymbol{u}}$ we need to invert $B^{(-1)}$, which, however, requires some care since $B^{(-1)}$ is singular. Nevertheless, by realizing that
\[
B^{(-1)}: Q_{N-1}^{0} \rightarrow Q_{N}^{1} ,
\]
we obtain that the inverse of $B^{(-1)}$, which we shall name $B^{(1)}$, may be obtained uniquely in the restricted space as
\[
B^{(1)}: Q_{N}^{1} \rightarrow Q_{N-1}^{0} ,
\]
such that the first column and the last row of $B^{(1)}$ must be identically zero. It is well known that $B^{(1)}$ takes the form of a strictly upper triangular matrix when considering expansions based on ultraspherical polynomials, and that $B^{(1)}$ has a condition number that grows like $N^2$. Examples of the explicit forms of such spectral differentiation matrices may be found in [1, 14].
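The backward recursion for computing $\tilde{\boldsymbol{u}}^{(1)}$ from $\tilde{\boldsymbol{u}}$ mentioned above may be written out as follows (our sketch, $\alpha > -1/2$, NumPy as before); row $0$ is skipped since it only reflects the undetermined constant.

```python
def differentiate_coeffs(u_tilde, alpha):
    """Given discrete expansion coefficients u~_n, return the coefficients
    u~^(1)_n of the derivative by backward recursion on u~ = B^(-1) u~^(1),
    using rows n = N,...,1 and u~^(1)_N = 0."""
    N = len(u_tilde) - 1
    du = np.zeros(N + 1)                                   # du[N] = 0 by construction
    b = lambda i, j: (-1.0 if i == j - 1 else 1.0) / (2 * j + 2 * alpha + 1)
    # b(i, j) = b^(1)_{i,j} of Eq. (6); the column index j carries the denominator.
    for n in range(N, 0, -1):
        upper = b(n, n + 1) * du[n + 1] if n + 1 <= N else 0.0
        du[n - 1] = (u_tilde[n] - upper) / b(n, n - 1)
    return du
```

For example, with the Legendre coefficients of $u(x) = x^2$, i.e. $\tilde{\boldsymbol{u}} = (1/3, 0, 2/3)$, the routine returns $(0, 2, 0)$, the coefficients of $u'(x) = 2x$.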
Using the exact same line of arguments that led to the identification of $B^{(1)}$ as the spectral first order differentiation operator, we may also obtain the relationship
\[
\tilde{\boldsymbol{u}} = B^{(-2)}\tilde{\boldsymbol{u}}^{(2)} ,
\]
where $\tilde{\boldsymbol{u}}^{(2)}$ refers to the vector of expansion coefficients for the second order derivative of $u$ at the collocation points. Indeed, by realizing that
\[
B^{(-2)}: Q_{N-2}^{0} \rightarrow Q_{N}^{2} ,
\]
we may uniquely invert $B^{(-2)}$ in this restricted space as
\[
B^{(2)}: Q_{N}^{2} \rightarrow Q_{N-2}^{0} ,
\]
indicating that the first two columns and last two rows of $B^{(2)}$ must be identically zero. The explicit entries for special choices of the polynomial basis can be found in [1, 14], and we recall that these second order spectral differentiation matrices are strictly upper triangular matrices with a condition number proportional to $N^4$. It should be noted that although we have chosen to focus the attention on first and second order operators, the exact same line of arguments can be applied to arrive at spectral differentiation operators of arbitrary order through the use of the polynomial recursion formulas, Eqs. (5)-(8).

2.3. Pseudospectral Methods. Let us now turn the attention towards the issue of pseudospectral methods and their relation to the spectral methods based on the use of the discrete expansion coefficients. If we introduce the projection
\[
\mathcal{I}_N u(x) = \sum_{n=0}^{N} \tilde{u}_n P_n^{(\alpha)}(x) ,
\]
the key observation to make is that
\[
\forall x_i: \quad \mathcal{I}_N u(x_i) = u(x_i) ,
\]
where $x_i$ represents the Gauss-Lobatto quadrature points used to compute the discrete expansion coefficients, $\tilde{u}_n$. Hence, the discrete projection is exact at the collocation points, and therefore $\mathcal{I}_N u(x)$ is identical to the interpolating polynomial based on these nodal points. This result appears by using the Christoffel-Darboux identity [12] for orthogonal polynomials together with Eqs. (3)-(4) and the expression for the Gauss-Lobatto weights, $w_i$, given in Eq. (18). As a consequence of this property, we may equally well express the approximating polynomial using the interpolating Lagrange polynomials, $l_i(x)$, as
\[
\mathcal{I}_N u(x) = \sum_{n=0}^{N} \tilde{u}_n P_n^{(\alpha)}(x) = \sum_{i=0}^{N} u(x_i)\,l_i(x) ,
\]
where the Lagrange polynomials take the form
\[
l_i(x) = w_i \sum_{n=0}^{N} \frac{1}{\tilde{\gamma}_n}\,P_n^{(\alpha)}(x_i)\,P_n^{(\alpha)}(x) .
\]
Hence, in this formulation of the pseudospectral scheme we only consider the grid point values of the approximated function, while the discrete expansion coefficients, so vital in the spectral formulation, never appear.
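In code, the Lagrange polynomials of the formula above can be evaluated by reusing the Legendre transform sketch from Sec. 2.2; this is again a hypothetical helper, shown for the Legendre case only.

```python
def lagrange_basis(N, xeval):
    """Evaluate the Lagrange polynomials l_i at the points xeval using
    l_i(x) = w_i * sum_n P_n(x_i) P_n(x) / gamma~_n (Legendre version)."""
    _, Tinv, x, w = legendre_transforms(N)
    gamma = 2.0 / (2.0 * np.arange(N + 1) + 1.0); gamma[N] = 2.0 / N
    V = leg.legvander(np.atleast_1d(xeval), N)      # V[k, n] = L_n(xeval_k)
    return (w[:, None] * (Tinv / gamma)) @ V.T      # entry [i, k] = l_i(xeval_k)
```

Evaluating the result at the nodes themselves returns the identity matrix, reflecting the cardinal property $l_i(x_j) = \delta_{ij}$.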
Let us again return to the issue of how to compute derivatives once the initial approximation is known. Indeed, since $\mathcal{I}_N u(x)$ is nothing more than a polynomial, we may directly differentiate these polynomials such that
\[
\boldsymbol{u}^{(1)} = \mathrm{D}^{(1)}\boldsymbol{u}
\]
yields the derivative of $u(x)$ at the grid points, with
\[
\boldsymbol{u}^{(1)} = (u^{(1)}(x_0), \ldots, u^{(1)}(x_N))^T , \qquad \boldsymbol{u} = (u(x_0), \ldots, u(x_N))^T ,
\]
and the entries of the differentiation matrix, $\mathrm{D}^{(1)}$, become
(19) \[ \mathrm{D}^{(1)}_{ij} = \frac{dl_j(x_i)}{dx} = w_j \sum_{n=0}^{N} \frac{1}{\tilde{\gamma}_n}\,P_n^{(\alpha)}(x_j)\,\frac{dP_n^{(\alpha)}(x_i)}{dx} . \]
It is well known that $\mathrm{D}^{(1)}$ is a centro-antisymmetric full matrix [15] with a condition number proportional to $N^2$, much like the spectral first order differentiation operator, $B^{(1)}$. In the exact same spirit we obtain the second order derivatives of $u(x)$ at the grid points as
\[
\boldsymbol{u}^{(2)} = \mathrm{D}^{(2)}\boldsymbol{u} ,
\]
with the entries of the second order differentiation matrix, $\mathrm{D}^{(2)}$, being
(20) \[ \mathrm{D}^{(2)}_{ij} = \frac{d^2l_j(x_i)}{dx^2} = w_j \sum_{n=0}^{N} \frac{1}{\tilde{\gamma}_n}\,P_n^{(\alpha)}(x_j)\,\frac{d^2P_n^{(\alpha)}(x_i)}{dx^2} . \]
The second order differentiation matrix is known to be a full centro-symmetric matrix [15] with a condition number proportional to $N^4$, similar to the second order spectral differentiation operator, $B^{(2)}$.
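For the Legendre case, Eqs. (19)-(20) translate directly into code by combining the nodal-to-modal transform of Sec. 2.2 with the derivatives of the basis polynomials; the sketch below builds on the helpers introduced earlier and is not taken from the paper.

```python
def differentiation_matrices(N):
    """Legendre pseudospectral differentiation matrices via Eqs. (19)-(20):
    D^(k)_{ij} = w_j sum_n [d^k P_n/dx^k](x_i) P_n(x_j) / gamma~_n."""
    T, Tinv, x, w = legendre_transforms(N)
    e = np.eye(N + 1)                                   # column n = coefficients of L_n
    Tx = np.column_stack([leg.legval(x, leg.legder(e[:, n])) for n in range(N + 1)])
    Txx = np.column_stack([leg.legval(x, leg.legder(e[:, n], 2)) for n in range(N + 1)])
    return Tx @ T, Txx @ T                              # D^(1), D^(2)
```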
3. Integration Preconditioning. One of the key issues in the development of the integration preconditioners for the pseudospectral differentiation matrices is the establishment of the following result.

Theorem 3.1. Given the spectral differentiation matrices, $B^{(1)}$ and $B^{(2)}$, the transformation matrices, $\mathrm{T}$ and $\mathrm{T}^{-1}$, and the pseudospectral differentiation matrices, $\mathrm{D}^{(1)}$ and $\mathrm{D}^{(2)}$, the following relation holds:
\[
\mathrm{D}^{(1)} = \mathrm{T}^{-1}B^{(1)}\mathrm{T} , \qquad \mathrm{D}^{(2)} = \mathrm{T}^{-1}B^{(2)}\mathrm{T} .
\]
Proof. The theorem is established by expressing the pseudospectral differentiation matrices, Eq. (19) and Eq. (20), as
\[
\mathrm{D}^{(1)} = \mathrm{T}_x^{-1}\mathrm{T} , \qquad \mathrm{D}^{(2)} = \mathrm{T}_{xx}^{-1}\mathrm{T} ,
\]
where the entries of the two matrices, $\mathrm{T}_x^{-1}$ and $\mathrm{T}_{xx}^{-1}$, are given as
\[
\left(\mathrm{T}_x^{-1}\right)_{ij} = \frac{dP_j^{(\alpha)}(x_i)}{dx} , \qquad \left(\mathrm{T}_{xx}^{-1}\right)_{ij} = \frac{d^2P_j^{(\alpha)}(x_i)}{dx^2} .
\]
However, since
\[
\mathrm{T}^{-1} = \mathrm{T}_x^{-1}B^{(-1)} , \qquad \mathrm{T}^{-1} = \mathrm{T}_{xx}^{-1}B^{(-2)} ,
\]
the result follows.

This result immediately suggests the use of the following two pseudospectral integration preconditioners,
(21) \[ \tilde{\mathrm{P}}^{(-1)} = \mathrm{T}^{-1}B^{(-1)}\mathrm{T} , \qquad \tilde{\mathrm{P}}^{(-2)} = \mathrm{T}^{-1}B^{(-2)}\mathrm{T} , \]
for preconditioning of the first and second order pseudospectral differentiation matrices, respectively, since we arrive at the very attractive result that
(22) \[ \tilde{\mathrm{P}}^{(-1)}\mathrm{D}^{(1)} = \mathrm{T}^{-1}\mathrm{I}_1^N\mathrm{T} , \qquad \tilde{\mathrm{P}}^{(-2)}\mathrm{D}^{(2)} = \mathrm{T}^{-1}\mathrm{I}_2^N\mathrm{T} , \]
where we have introduced $\mathrm{I}_a^b$ as the matrix with unity along the diagonal for $i \in [a,b]$ while the remaining elements of $\mathrm{I}_a^b$ are identically zero; e.g., $\mathrm{I}_0^N$ equals the identity matrix. The problem, however, with using the preconditioners defined in Eq. (21) is that they share the rather undesirable property of being singular. We may, however, surmount this problem while maintaining the remarkable efficiency of the integration preconditioners when applied to the pseudospectral differentiation matrices, Eq. (22). Indeed, if we define the modified spectral integration operator, $\tilde{B}^{(-1)}$, as
\[
\tilde{B}^{(-1)} = \begin{bmatrix}
0 & 0 & 0 & 0 & \cdots & 0 & \sigma^{(1)}_1 \\
b^{(1)}_{1,0} & 0 & b^{(1)}_{1,2} & 0 & \cdots & 0 & 0 \\
0 & b^{(1)}_{2,1} & 0 & b^{(1)}_{2,3} & \cdots & 0 & 0 \\
0 & 0 & b^{(1)}_{3,2} & 0 & \ddots & 0 & 0 \\
\vdots & & & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & & b^{(1)}_{N-1,N-2} & 0 & b^{(1)}_{N-1,N} \\
0 & 0 & \cdots & & 0 & b^{(1)}_{N,N-1} & 0
\end{bmatrix} ,
\]
we maintain the property that $\tilde{B}^{(-1)}B^{(1)} = \mathrm{I}_1^N$, since $B^{(1)}: Q_N^1 \rightarrow Q_{N-1}^0$, while enabling $\tilde{B}^{(-1)}$ to be non-singular by properly choosing the parameter $\sigma^{(1)}_1$ - a choice, as we shall see shortly, that can also be made so as to optimize the performance of the preconditioner. This suggests that a natural choice of a pseudospectral integration preconditioner for the first order pseudospectral differentiation matrix is given as
(23) \[ \mathrm{P}^{(-1)} = \mathrm{T}^{-1}\tilde{B}^{(-1)}\mathrm{T} , \]
while we return to the exact specification of $\sigma^{(1)}_1$ shortly. Following the exact same reasoning that led to the definition of $\tilde{B}^{(-1)}$, we introduce the modified spectral second order integration operator, $\tilde{B}^{(-2)}$, as
\[
\tilde{B}^{(-2)} = \begin{bmatrix}
0 & 0 & 0 & 0 & \cdots & 0 & \sigma^{(2)}_{11} & \sigma^{(2)}_{12} \\
0 & 0 & 0 & 0 & \cdots & 0 & \sigma^{(2)}_{21} & \sigma^{(2)}_{22} \\
b^{(2)}_{2,0} & 0 & b^{(2)}_{2,2} & 0 & b^{(2)}_{2,4} & \cdots & 0 & 0 \\
0 & b^{(2)}_{3,1} & 0 & b^{(2)}_{3,3} & 0 & b^{(2)}_{3,5} & \cdots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \ddots & \ddots & b^{(2)}_{N-2,N} \\
0 & \cdots & & & b^{(2)}_{N-1,N-3} & 0 & b^{(2)}_{N-1,N-1} & 0 \\
0 & \cdots & & & 0 & b^{(2)}_{N,N-2} & 0 & b^{(2)}_{N,N}
\end{bmatrix} ,
\]
where we introduce the four parameters, $\sigma^{(2)}_{11}$, $\sigma^{(2)}_{12}$, $\sigma^{(2)}_{21}$ and $\sigma^{(2)}_{22}$, to avoid the singularity of $\tilde{B}^{(-2)}$ while maintaining the property that $\tilde{B}^{(-2)}B^{(2)} = \mathrm{I}_2^N$, so important for the efficiency of the pseudospectral integration preconditioner, which we then define as
(24) \[ \mathrm{P}^{(-2)} = \mathrm{T}^{-1}\tilde{B}^{(-2)}\mathrm{T} , \]
while we return to the exact specification of $\sigma^{(2)}_{ij}$ shortly. We note in particular that the eigenvalue spectra of the preconditioned differentiation matrices,
\[
\mathrm{P}^{(-1)}\mathrm{D}^{(1)} = \mathrm{T}^{-1}\mathrm{I}_1^N\mathrm{T} , \qquad \mathrm{P}^{(-2)}\mathrm{D}^{(2)} = \mathrm{T}^{-1}\mathrm{I}_2^N\mathrm{T} ,
\]
are known in explicit form and that, moreover, the pseudospectral integration preconditioners, Eq. (23) and Eq. (24), are also given in explicit form.

So far we have dealt with the issue of preconditioning the differentiation matrices only. However, when attempting to solve a given ordinary or partial differential equation one faces the problem of boundary conditions. The usual way of introducing boundary conditions in pseudospectral methods is by directly modifying the differentiation matrices, e.g., by deleting the appropriate rows and columns in the matrices corresponding to the boundary points in the case of Dirichlet boundary conditions [1, 2]. This has the rather unfortunate consequence that the result of Theorem 3.1, leading to the identification of the pseudospectral integration preconditioners, fails to be valid. We shall therefore follow a different road. In [9, 10], Funaro and Gottlieb introduced a penalty method which adds a correction at the boundary rather than directly modifying the matrix operators. Hence, the boundary conditions are enforced in a weak form only, but the error incurred by this vanishes at the same rate as that of the interior scheme and therefore does not destroy the overall spectral accuracy of the computed solutions. The penalty methods have been studied thoroughly for various problems in [11], and we shall rely heavily on the results quoted in that paper. The issue of boundary conditions naturally depends on the actual problem being considered, and in the following we consider the details for a variety of problems.
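Putting the pieces together for the Legendre case, the modified operators and the preconditioners of Eqs. (23)-(24) may be sketched as follows; `s1` and `s2` below are stand-ins for the as-yet unspecified parameters $\sigma^{(1)}_1$ and $\sigma^{(2)}_{ij}$, the helper names are ours, and the routine builds on the sketches of the previous sections.

```python
def preconditioners(N, s1=1.0, s2=np.eye(2)):
    """Legendre integration preconditioners P^(-1) = T^{-1} B~^(-1) T and
    P^(-2) = T^{-1} B~^(-2) T, Eqs. (23)-(24); s1 fills entry (0, N) of
    B~^(-1) and s2 fills rows 0-1, columns N-1, N of B~^(-2)."""
    T, Tinv, x, w = legendre_transforms(N)
    B1, B2 = integration_operators(N, 0.0)          # alpha = 0 (Legendre)
    B1t, B2t = B1.copy(), B2.copy()
    B1t[0, N] = s1                                  # sigma^(1)_1
    B2t[0:2, N - 1:N + 1] = s2                      # sigma^(2)_{kl}
    return Tinv @ B1t @ T, Tinv @ B2t @ T
```

Consistent with the similarity transforms above, one can check numerically that the product of the first preconditioner with $\mathrm{D}^{(1)}$ has $N$ eigenvalues equal to one and a single zero eigenvalue, independently of the value chosen for `s1`.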
4. Preconditioning of the Advective Operator. Let us first focus the attention on the solution and preconditioning of the first order problem
(25) \[ \frac{du(x)}{dx} = f(x) , \qquad u(1) = 0 , \]
and assume that we seek solutions of polynomial form, leading to a discrete approximation of Eq. (25) as
(26) \[ \mathrm{D}^{(1)}\boldsymbol{u} = \boldsymbol{f} , \qquad u(1) = 0 , \]
where $\boldsymbol{u}$ and $\boldsymbol{f}$ refer to the vectors of the two functions, $u(x)$ and $f(x)$, at the Gauss-Lobatto collocation points associated with the polynomial basis, $P_n^{(\alpha)}(x)$, chosen. Imposing the boundary condition in a conventional manner would amount to deleting the last row and column and solving the resulting matrix problem. However, since the condition number of $\mathrm{D}^{(1)}$ is proportional to $N^2$, this may be very hard to do accurately for $N$ large. Moreover, since we have destroyed the very delicate structure of $\mathrm{D}^{(1)}$, we can not rely on the effectiveness of $\mathrm{P}^{(-1)}$ for preconditioning. We therefore propose to consider an alternative scenario in which we impose the boundary condition as
(27) \[ \left[\mathrm{D}^{(1)} - \tau\,\mathrm{I}_N^N\right]\boldsymbol{u} = \boldsymbol{f} , \]
following the introduction of the penalty method analyzed in [9, 10, 11]. The penalty parameter, $\tau$, is chosen such that all eigenvalues of the operator
\[
\mathrm{D}^{(1)} - \tau\,\mathrm{I}_N^N
\]
lie entirely in the left half plane. Such choices are discussed in depth in [11] for specific choices of the polynomial basis, and we shall return to this issue shortly. For now, we simply assume that $\tau$ is a given parameter, usually depending on $N$, and consider the question of preconditioning of Eq. (27). However, since $\mathrm{D}^{(1)}$ is unaltered, it seems only natural to use $\mathrm{P}^{(-1)}$ for preconditioning to obtain
\[
\left[\mathrm{T}^{-1}\mathrm{I}_1^N\mathrm{T} - \tau\,\mathrm{P}^{(-1)}\mathrm{I}_N^N\right]\boldsymbol{u} = \mathrm{P}^{(-1)}\boldsymbol{f} ,
\]
which is valid since we have assured that $\tilde{B}^{(-1)}$, and therefore $\mathrm{P}^{(-1)}$, is non-singular. At this point we need to address the effect of the preconditioner for Eq. (27). Answering this question amounts to estimating the spectral radius of the preconditioned operator
\[
\mathrm{T}^{-1}\mathrm{I}_1^N\mathrm{T} - \tau\,\mathrm{P}^{(-1)}\mathrm{I}_N^N = \mathrm{T}^{-1}\left[\mathrm{I}_1^N - \tau\,\tilde{B}^{(-1)}\,\mathrm{T}\,\mathrm{I}_N^N\,\mathrm{T}^{-1}\right]\mathrm{T} ,
\]
where the last expression utilizes the fact that $\mathrm{T}$ and $\mathrm{T}^{-1}$ have been identified as a similarity transform. Since the eigenvalues of $\mathrm{I}_1^N$ are obvious, we shall consider the part appearing from the penalty term. The operator, $R^+ = \mathrm{T}\,\mathrm{I}_N^N\,\mathrm{T}^{-1}$, has the entries
(28) \[ R^+_{ij} = \frac{1}{\tilde{\gamma}_i}\,P_i^{(\alpha)}(1)\,P_j^{(\alpha)}(1)\,w_N , \]
such that we need to consider the entries of $S = \tilde{B}^{(-1)}R^+$, being
\[
S_{ij} = \begin{cases}
\sigma^{(1)}_1\,R^+_{N,j} , & i = 0 , \\[1mm]
b^{(1)}_{i,i-1}\,R^+_{i-1,j} + b^{(1)}_{i,i+1}\,R^+_{i+1,j} , & i \in [1, N-1] , \\[1mm]
b^{(1)}_{N,N-1}\,R^+_{N-1,j} , & i = N .
\end{cases}
\]
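Before exploiting the symmetries analytically, the structure of the preconditioned penalty operator is easy to inspect numerically with the Legendre helpers sketched earlier; `tau` and `s1` below are free parameters in the sense discussed above, and the routine is only an illustrative sketch.

```python
def advection_penalty_spectrum(N, tau, s1=1.0):
    """Eigenvalues of the preconditioned penalty operator
    T^{-1} I_1^N T - tau * P^(-1) I_N^N  (Legendre, Eq. (27) preconditioned)."""
    D1, _ = differentiation_matrices(N)
    P1, _ = preconditioners(N, s1=s1)
    Inn = np.zeros((N + 1, N + 1)); Inn[N, N] = 1.0    # the matrix I_N^N
    A = P1 @ (D1 - tau * Inn)                          # = T^{-1}I_1^N T - tau P^(-1) I_N^N
    return np.linalg.eigvals(A)

# Most of the returned eigenvalues equal one up to round-off; only a few
# depend on the choice of tau and s1, in line with the analysis that follows.
```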
To continue beyond this point of the analysis we shall rely on the remarkable symmetries of the polynomial expansions. Indeed, for $i \in [1, N-2]$ we obtain
\[
b^{(1)}_{i,i-1}\,R^+_{i-1,j} + b^{(1)}_{i,i+1}\,R^+_{i+1,j} = 0 , \qquad i \in [1, N-2] ,
\]
by using the expression for the entries of $R^+$, Eq. (28), combined with Eq. (3), Eq. (6) and Eq. (16). Since $\tilde{\gamma}_N \neq \gamma_N$, Eq. (17), we obtain a special row for $i = N-1$, such that the reduced entries of $S$ are given as
\[
S_{ij} = \begin{cases}
\sigma^{(1)}_1\,R^+_{N,j} , & i = 0 , \\[1mm]
0 , & i \in [1, N-2] , \\[1mm]
b^{(1)}_{N-1,N-2}\,R^+_{N-2,j} + b^{(1)}_{N-1,N}\,R^+_{N,j} , & i = N-1 , \\[1mm]
b^{(1)}_{N,N-1}\,R^+_{N-1,j} , & i = N .
\end{cases}
\]
If we now consider the eigenvalues of the preconditioned operator, $\mathrm{I}_1^N - \tau S$, we realize that at least $N-2$ of the $N+1$ eigenvalues are exactly one - a result that is obtained irrespective of the choice of ultraspherical polynomial basis. Moreover, using a permutation matrix, it is easily realized that the remaining three eigenvalues are obtained as the eigenvalues of a 3-by-3 matrix, $W^{(1)}$, with the entries
\[
W^{(1)} = \begin{bmatrix}
-\tau S_{0,0} & -\tau S_{0,N-1} & -\tau S_{0,N} \\
-\tau S_{N-1,0} & 1-\tau S_{N-1,N-1} & -\tau S_{N-1,N} \\
-\tau S_{N,0} & -\tau S_{N,N-1} & 1-\tau S_{N,N}
\end{bmatrix} .
\]
Hence, once the polynomial basis is chosen and $\tau$ is determined from stability considerations, the eigenvalue spectrum can be determined in explicit form and, recalling that $\sigma^{(1)}_1$ still remains undetermined, optimized to minimize the spectral radius. The results of this final stage, however, are method specific, and we shall leave the general framework and consider the outcome of the analysis for Legendre and Chebyshev methods in the following.

4.1. Analysis of the Legendre Method. Let us now consider the case of the approximation being based on the Legendre Gauss-Lobatto nodal set, i.e., we have the weights at the endpoints, $w_0$ and $w_N$, as
(29)
\[
\begin{cases}
\dfrac{2}{2i+1} , & i \;\cdots
\end{cases}
\]