MATHEMATICS OF COMPUTATION Volume 73, Number 245, Pages 211–242 S 0025-5718(03)01535-7 Article electronically published on April 22, 2003

LAGUERRE APPROXIMATION OF STABLE MANIFOLDS WITH APPLICATION TO CONNECTING ORBITS GERALD MOORE

Abstract. We present an algorithm, based on approximation by Laguerre polynomials, for computing a point on the stable manifold of a stationary solution of an autonomous system. A superconvergence phenomenon means that the accuracy of our results is much higher than the usual spectral accuracy. Both the theory and the implementation of the method are considered. Finally, as an application of the algorithm, we describe a fully spectral approximation of homo- and heteroclinic orbits.

1. Introduction

We consider an autonomous system

(1.1) $v' = F(v), \qquad F : \mathbb{R}^m \mapsto \mathbb{R}^m,$

possessing a stationary solution, which without loss of generality we assume to be the origin; i.e., $F(0) = 0$. Our aim in this paper is to introduce a new algorithm, particularly suitable for smooth $F$, for approximating a point on the local stable manifold of zero for (1.1). For the rest of this introduction, we briefly describe the elementary properties of this manifold which will be required; for more details see [14], [21], [24]. We assume that $F$ is differentiable at the origin and additionally that

(1.2) $F(x) \equiv Ax + G(x),$

where $A$ denotes the $m \times m$ Jacobian matrix and $\|G(x)\| = O(\|x\|^2)$. Note that $\|\cdot\|$ will always denote the vector 2-norm and that we shall be more specific about the structure of $G$ later. In this paper we shall also assume that the origin is hyperbolic, i.e., that $A$ has no zero or purely imaginary eigenvalues, although an extension to the general case is described in section 6. We make frequent use of the decomposition

(1.3) $\mathbb{R}^m = E^s \oplus E^u,$

where the stable subspace $E^s$ is the invariant subspace of $A$ corresponding to those eigenvalues with strictly negative real part, while the unstable subspace $E^u$ is the invariant subspace of $A$ corresponding to those eigenvalues with strictly positive real part. These subspaces have dimensions $p^s$ and $p^u \equiv m - p^s$, respectively, and we use $P^s$ and $P^u$ to denote the projections induced by the above direct sum. The local stable manifold is $p^s$-dimensional and consists of those points lying on orbits which converge to the origin as $t \to \infty$. It may be parametrised by $E^s$ near the origin and, in fact, this subspace is tangent to the manifold at the origin. Hence, given $\xi \in E^s$ sufficiently small, the unique point $y$ on the manifold satisfying $P^s y = \xi$ is obtained by solving (1.1) with boundary conditions

(1.4) $P^s v(0) = \xi \qquad\text{and}\qquad \lim_{t\to\infty} P^u v(t) = 0.$

Received by the editor February 20, 2001 and, in revised form, May 13, 2002.
2000 Mathematics Subject Classification. Primary 33C45, 37C29, 37M99, 65N35, 65P40.
Key words and phrases. Laguerre polynomials, invariant manifolds, homoclinic orbits, spectral methods.
© 2003 American Mathematical Society

If we denote this solution by $v^\star(t)$, the effect of (1.4) is to force $v^\star(t)$ to converge exponentially to $0$ as $t \to \infty$, and so $y$ is given by $v^\star(0)$.

The contents of this paper are as follows. In section 2 we construct the Galerkin (and equivalent collocation and Laguerre expansion) equations approximating (1.1) and (1.4). In section 3 we prove that these discrete equations have a unique local solution and provide a superconvergent approximation to the stable manifold. In section 4 we describe how to implement our algorithm efficiently and then, in section 5, we use it to obtain some numerical results. Section 6 shows how the scheme in this paper can be used to approximate connecting orbits accurately and also contains an extension to strongly stable manifolds. Finally, we include an appendix which collects some useful results on Laguerre polynomials.

2. Variational, collocation and expansion formulations

Our aim in this section is to set up an efficient approximation scheme for (1.1) and (1.4). The key feature here is that this is a BVP over $[0, \infty)$, and such problems may be tackled in three ways.
• Truncate the interval to $[0, T]$ and use an approximate boundary condition at $T$.
• Transform the problem to a finite interval by a suitable mapping of the independent variable.
• Discretise the problem by using a set of approximating functions appropriate to $[0, \infty)$.
In [21] we looked at methods of the first two types; here we will consider the third. (The only similar paper we are aware of is [16], but the authors use a rational spectral method rather than our approach below.)

The natural space of approximating functions includes those of the form $e^{-\frac{\gamma}{2}t} q(t)$, where $\gamma > 0$ is a constant to be chosen and $q \in X_N$ (the space of polynomials of degree $N$ or less with coefficients in $\mathbb{R}^m$). Thus the boundary condition at $t = \infty$ in (1.4) is automatically satisfied. In fact, it is convenient to consider $X_N$ approximating
$$u^\star(t) \equiv e^{\frac{\gamma}{2}t}\left( v^\star(t) - e^{At}\xi \right),$$
where $u^\star$
satisfies

(2.1) $u' - \left(\tfrac{\gamma}{2}I + A\right)u = e^{\frac{\gamma}{2}t}\, G\!\left(e^{At}\xi + e^{-\frac{\gamma}{2}t}u\right)$

with side conditions

(2.2) $P^s u(0) = 0 \qquad\text{and}\qquad \int_0^\infty e^{-\gamma t}\,\|u(t)\|^2\, dt \ \text{exists.}$

(I.e., the leading term of $v^\star$ has been removed, but we are still constructing the stable manifold, since $P^u u^\star(0) = P^u v^\star(0)$. In section 5, for comparison, we also exhibit results with this term left in.) We then introduce


• our trial space $X_N^0 \subset X_N$ to be that subspace satisfying the additional condition $P^s q(0) = 0$,
• our test space $\bar{X}_N \subset X_N$ to be that subspace satisfying the additional condition $Q^u q \in X_{N-1}$,
where $Q^u$ and $Q^s$ are the induced projections from the orthogonal analogue of (1.3), i.e., $\mathbb{R}^m = E^{u\perp} \oplus E^{s\perp}$. Hence we would like to define our approximation $u_N^\star \in X_N^0$ to satisfy the variational equations

(2.3) $\displaystyle \int_0^\infty e^{-\gamma t}\left\{ u_N' - \left(\tfrac{\gamma}{2}I + A\right)u_N \right\} \circ q\, dt = \int_0^\infty e^{-\gamma t}\left\{ e^{\frac{\gamma}{2}t} G\!\left( e^{At}\xi + e^{-\frac{\gamma}{2}t}u_N \right) \right\} \circ q\, dt$

for every $q \in \bar{X}_N$, where $x_1 \circ x_2$ denotes the Euclidean inner product on $\mathbb{R}^m$. This may be simplified by the change of variable $t \mapsto t/\gamma$ or, equivalently, the immediate change of independent variable in (1.1), replacing it with

(2.4) $v'(t) = \tfrac{1}{\gamma} F(v).$

(Note that this change of variable does not alter the value of the solution at $t = 0$, which is the only part we are interested in.) Hence, retaining the notation $u^\star$ and $u_N^\star$ for convenience, we could define $u_N^\star \in X_N^0$ to satisfy

(2.5) $\left\langle u_N' - \left(\tfrac12 I + \tfrac1\gamma A\right)u_N,\, q\right\rangle = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N \right),\, q\right\rangle$

for every $q \in \bar{X}_N$, where

(2.6) $\displaystyle \langle z_1, z_2 \rangle \equiv \int_0^\infty e^{-t}\, z_1(t) \circ z_2(t)\, dt.$

Thus, using the terminology of [5], we have a tau method for approximating the stable component of $u^\star$ and a Galerkin method for the unstable component. This scheme, however, is impractical, because we cannot evaluate the r.h.s. of (2.5), and so instead we make use of
$$x_1 \circ x_2 = P^s x_1 \circ Q^u x_2 + P^u x_1 \circ Q^s x_2$$
and we define $u_N^\star \in X_N^0$ to be the solution of the approximate variational equations

(2.7) $\left\langle u_N' - \left(\tfrac12 I + \tfrac1\gamma A\right)u_N,\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N \right),\, q\right\rangle_N$

for every $q \in \bar{X}_N$, where

(2.8) $\displaystyle \langle z_1, z_2 \rangle_N \equiv \sum_{i=1}^N w_i\, P^s z_1(t_i) \circ Q^u z_2(t_i) + \sum_{i=0}^N \tilde w_i\, P^u z_1(\tilde t_i) \circ Q^s z_2(\tilde t_i).$

Here the $t_i, w_i$ and $\tilde t_i, \tilde w_i$ are the Gauss–Laguerre and Gauss–Laguerre–Radau quadrature points/weights, which are defined in the Appendix, and we note that
$$\langle q_1, q_2 \rangle = \langle q_1, q_2 \rangle_N \qquad \forall\, q_1 \in X_N,\ q_2 \in \bar{X}_N.$$
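The quadrature data in (2.8) can be generated numerically. The following Python sketch is illustrative rather than the paper's own code: it builds the $N$-point Gauss–Laguerre rule with numpy, and a Gauss–Laguerre–Radau rule with fixed node at $t = 0$ by taking the free nodes as the roots of $L'_{N+1}$ (a standard choice, assumed here) and computing interpolatory weights by matching the moments $\int_0^\infty e^{-t} t^k\,dt = k!$.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss, Laguerre
from math import factorial

def gauss_laguerre(N):
    """N-point Gauss-Laguerre rule for integral_0^inf e^{-t} f(t) dt."""
    return laggauss(N)

def laguerre_radau(N):
    """(N+1)-point Laguerre-Radau-type rule with fixed node t~_0 = 0.

    Free nodes: roots of L'_{N+1} (assumed Radau nodes); weights: the
    interpolatory ones, from the moment system sum_i w_i t_i^k = k!."""
    interior = Laguerre.basis(N + 1).deriv().roots()
    nodes = np.concatenate(([0.0], np.sort(interior.real)))
    V = np.vander(nodes, increasing=True).T          # V[k, i] = t_i**k
    m = np.array([factorial(k) for k in range(N + 1)], dtype=float)
    weights = np.linalg.solve(V, m)
    return nodes, weights
```

By construction both rules reproduce the exact moments of the weight $e^{-t}$ up to the stated degree, which is the property used when passing between (2.5) and (2.7).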


In section 3 we shall indeed prove that (2.7) has a locally unique solution. Finally, we emphasise again that $\gamma > 0$ is a constant to be chosen by the user. We shall see, in section 5, how the accuracy of our stable manifold approximation depends on $\gamma$.

The above variational formulation is often useful theoretically. For actual computations, however, an equivalent collocation or modal formulation is more convenient.

2.1. Collocation framework. The collocation equations are obtained by inserting
$$q(t) \equiv \ell_i(t)x, \quad \forall x \in E^{u\perp}, \quad i = 1,\dots,N, \qquad\text{where}\qquad \ell_i(t) \equiv \prod_{\substack{j=1 \\ j \ne i}}^{N} \frac{t - t_j}{t_i - t_j}$$
are the Lagrange polynomials w.r.t. $\{t_1, \dots, t_N\}$, and
$$q(t) \equiv \tilde\ell_i(t)x, \quad \forall x \in E^{s\perp}, \quad i = 0,\dots,N, \qquad\text{where}\qquad \tilde\ell_i(t) \equiv \prod_{\substack{j=0 \\ j \ne i}}^{N} \frac{t - \tilde t_j}{\tilde t_i - \tilde t_j}$$
are the Lagrange polynomials w.r.t. $\{\tilde t_0, \dots, \tilde t_N\}$, in (2.7). Hence (2.7) is equivalent to

(2.9a) $P^s u_N'(t_i) - \left(\tfrac12 I + \tfrac1\gamma A\right) P^s u_N(t_i) = \tfrac1\gamma e^{t_i/2}\, P^s G\!\left( e^{\frac1\gamma A t_i}\xi + e^{-t_i/2} u_N(t_i) \right), \quad i = 1,\dots,N,$

and

(2.9b) $P^u u_N'(\tilde t_i) - \left(\tfrac12 I + \tfrac1\gamma A\right) P^u u_N(\tilde t_i) = \tfrac1\gamma e^{\tilde t_i/2}\, P^u G\!\left( e^{\frac1\gamma A \tilde t_i}\xi + e^{-\tilde t_i/2} u_N(\tilde t_i) \right), \quad i = 0,\dots,N,$

with $P^s u_N(0) = 0$. Defining the matrices
$$(D_N)_{ij} = d_{ij} \equiv \left[ \tfrac{t}{t_j}\,\ell_j \right]'(t_i), \quad i, j = 1,\dots,N,$$
(i.e., zero is added to $\ell_j$ as an extra interpolation point) and
$$(\widetilde D_N)_{ij} = \tilde d_{ij} \equiv [\tilde\ell_j]'(\tilde t_i), \quad i, j = 0,\dots,N,$$
so that
$$P^s u_N'(t_i) = \sum_{j=1}^N d_{ij}\, P^s u_N(t_j), \quad i = 1,\dots,N, \qquad\text{and}\qquad P^u u_N'(\tilde t_i) = \sum_{j=0}^N \tilde d_{ij}\, P^u u_N(\tilde t_j), \quad i = 0,\dots,N,$$
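Differentiation matrices of this Lagrange type can be assembled stably from barycentric weights. The sketch below is illustrative (function and variable names are ours): it builds the plain matrix with entries $\ell_j'(t_i)$; the paper's $D_N$ additionally rescales $\ell_j$ by $t/t_j$ after adjoining $t = 0$, which can be handled the same way.

```python
import numpy as np

def diff_matrix(nodes):
    """Differentiation matrix D with D[i, j] = l_j'(nodes[i]) for the
    Lagrange basis on distinct nodes, via the barycentric formula."""
    t = np.asarray(nodes, dtype=float)
    n = len(t)
    # Barycentric weights lambda_j = 1 / prod_{k != j} (t_j - t_k).
    diff = t[:, None] - t[None, :]
    np.fill_diagonal(diff, 1.0)
    lam = 1.0 / diff.prod(axis=1)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = (lam[j] / lam[i]) / (t[i] - t[j])
    # Diagonal from the "negative sum" trick: rows of D annihilate constants.
    np.fill_diagonal(D, -D.sum(axis=1))
    return D
```

Applied to the nodal values of any polynomial of degree less than the number of nodes, the matrix returns the exact nodal values of its derivative.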


our final discrete system satisfied by $u_N^\star$ is

(2.10a) $u_N^s(0) = 0,$

(2.10b) $\displaystyle \sum_{j=1}^N d_{ij}\, u_N^s(t_j) - \left(\tfrac12 I + \tfrac1\gamma A\right) u_N^s(t_i) = \tfrac1\gamma e^{t_i/2}\, P^s G\!\left( e^{\frac1\gamma A t_i}\xi + e^{-t_i/2} u_N(t_i) \right), \quad i = 1,\dots,N,$

(2.10c) $\displaystyle \sum_{j=0}^N \tilde d_{ij}\, u_N^u(\tilde t_j) - \left(\tfrac12 I + \tfrac1\gamma A\right) u_N^u(\tilde t_i) = \tfrac1\gamma e^{\tilde t_i/2}\, P^u G\!\left( e^{\frac1\gamma A \tilde t_i}\xi + e^{-\tilde t_i/2} u_N(\tilde t_i) \right), \quad i = 0,\dots,N,$

where $u_N^s \equiv P^s u_N$ and $u_N^u \equiv P^u u_N$. In section 4 we shall see how to solve this system of $(N+1)m$ equations and unknowns efficiently.

2.2. Laguerre expansion framework. Using the modal representation
$$u_N(t) \equiv \sum_{k=0}^N a_k L_k(t)$$
in (2.7), with $a_k^s \equiv P^s a_k$ and $a_k^u \equiv P^u a_k$, we have $u_N^s(0) = 0 \implies \sum_{k=0}^N a_k^s = 0$. Hence, inserting
$$q(t) \equiv x L_k(t), \quad x \in E^{u\perp}, \quad k = 0,\dots,N-1, \qquad\text{and}\qquad q(t) \equiv x L_k(t), \quad x \in E^{s\perp}, \quad k = 0,\dots,N,$$
in (2.7) and using (A.1b), (2.7) is equivalent to

(2.11a) $\displaystyle \sum_{k=0}^N a_k^s = 0,$

(2.11b) $\displaystyle \sum_{k=0}^{i-1} a_k^s + \left(\tfrac12 I - \tfrac1\gamma A\right) a_i^s = \tfrac1\gamma g_i^s, \quad i = 0,\dots,N-1,$

(2.11c) $\displaystyle -\sum_{k=i+1}^{N} a_k^u - \left(\tfrac12 I + \tfrac1\gamma A\right) a_i^u = \tfrac1\gamma g_i^u, \quad i = 0,\dots,N.$

Here

(2.12) $\displaystyle \sum_{k=0}^{N-1} g_k^s L_k(t) + \sum_{k=0}^{N} g_k^u L_k(t) \equiv \widehat{I}_N\!\left( e^{t/2}\, G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2} u_N \right) \right),$

where $\widehat{I}_N$ is the polynomial interpolation operator defined by:
• $\widehat{I}_N(z) \equiv \widehat{I}^{\,G}_{N-1}(P^s z) + I^R_N(P^u z)$,
• $\widehat{I}^{\,G}_{N-1}(z) \in X_{N-1}$ with $\widehat{I}^{\,G}_{N-1}(z)(t_i) = z(t_i)$, $i = 1,\dots,N$,
• $I^R_N(z) \in X_N$ with $I^R_N(z)(\tilde t_i) = z(\tilde t_i)$, $i = 0,\dots,N$.
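Working with the modal representation requires the values $L_k(t)$ at the quadrature points, which underlie the nodal-to-modal transforms of section 4. A standard way to obtain them (illustrative sketch, not the paper's code) is the three-term recurrence $(k+1)L_{k+1}(t) = (2k+1-t)L_k(t) - kL_{k-1}(t)$:

```python
import numpy as np

def laguerre_matrix(ts, N):
    """Return the (len(ts), N+1) matrix with entry [i, k] = L_k(ts[i]),
    via the recurrence (k+1) L_{k+1} = (2k+1 - t) L_k - k L_{k-1}."""
    ts = np.asarray(ts, dtype=float)
    L = np.zeros((len(ts), N + 1))
    L[:, 0] = 1.0                       # L_0(t) = 1
    if N >= 1:
        L[:, 1] = 1.0 - ts              # L_1(t) = 1 - t
    for k in range(1, N):
        L[:, k + 1] = ((2 * k + 1 - ts) * L[:, k] - k * L[:, k - 1]) / (k + 1)
    return L
```

Since $L_k(0) = 1$ for every $k$, the boundary condition (2.11a) is exactly the statement that the expansion vanishes at $t = 0$.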


Again, in section 4 we shall see how to solve this system of $(N+1)m$ equations and unknowns efficiently.

2.3. Conclusion. We have chosen the quadrature rule (2.8) so that the three frameworks (2.7), (2.10) and (2.11) are all equivalent, in the sense of having the same solution $u_N^\star$. (In fact, we shall see in section 4 that there are practical reasons for preferring the expansion formulation (2.11).) Of course, by breaking this equivalence, many slightly different discretisations are possible: e.g., we could just use Laguerre–Radau quadrature and replace (2.12) with full $N$th-degree interpolation at $\tilde t_0, \dots, \tilde t_N$, which is not a collocation method. It would be easy to adapt the analysis in the next section to this scheme.

Finally, we note that, if the boundary condition at $t = 0$ is left inhomogeneous, i.e., (2.1) and (2.2) become
$$u' - \left(\tfrac\gamma2 I + A\right) u = e^{\frac\gamma2 t}\, G\!\left( e^{-\frac\gamma2 t} u \right)$$
with
$$P^s u(0) = \xi \qquad\text{and}\qquad \int_0^\infty e^{-\gamma t}\, u(t) \circ u(t)\, dt \ \text{exists,}$$
then (2.11)–(2.12) are changed to

(2.13a) $\displaystyle \sum_{k=0}^N a_k^s = \xi,$

(2.13b) $\displaystyle -\xi + \sum_{k=0}^{i-1} a_k^s + \left(\tfrac12 I - \tfrac1\gamma A\right) a_i^s = \tfrac1\gamma g_i^s, \quad i = 0,\dots,N-1,$

(2.13c) $\displaystyle -\sum_{k=i+1}^N a_k^u - \left(\tfrac12 I + \tfrac1\gamma A\right) a_i^u = \tfrac1\gamma g_i^u, \quad i = 0,\dots,N,$

with

(2.14) $\displaystyle \sum_{k=0}^{N-1} g_k^s L_k(t) + \sum_{k=0}^N g_k^u L_k(t) \equiv \widehat{I}_N\!\left( e^{t/2}\, G\!\left( e^{-t/2} u_N \right) \right).$

In section 5 we shall compare the numerical results for (2.13)–(2.14) with those for (2.11)–(2.12).

3. Existence, uniqueness and error of approximation

We first introduce an appropriate norm on $\mathbb{R}^m$, with respect to which the matrix $A$ has suitable definiteness properties. To do this, we require the following standard lemma, where
$$0 < \lambda^s \equiv \min\{ -\mathrm{Re}(\lambda) : \lambda \text{ an eigenvalue of } A \text{ with } \mathrm{Re}(\lambda) < 0 \}$$
and
$$0 < \lambda^u \equiv \min\{ \mathrm{Re}(\lambda) : \lambda \text{ an eigenvalue of } A \text{ with } \mathrm{Re}(\lambda) > 0 \}$$
measure the hyperbolicity of the origin.


Lemma. For any $\varepsilon > 0$ there exists a symmetric positive-definite $M_\varepsilon \in \mathbb{R}^{m\times m}$, with $M_\varepsilon : E^s \mapsto E^{u\perp}$ and $M_\varepsilon : E^u \mapsto E^{s\perp}$, such that
$$-x \circ M_\varepsilon A x \ge (\lambda^s - \varepsilon)\, x \circ M_\varepsilon x \quad \forall x \in E^s, \qquad x \circ M_\varepsilon A x \ge (\lambda^u - \varepsilon)\, x \circ M_\varepsilon x \quad \forall x \in E^u.$$

Proof. See [15]. We note that $M_\varepsilon$ may be constructed using a basis of eigenvectors, if this exists. The condition of $M_\varepsilon$, however, will blow up as $\varepsilon \to 0$ if a critical eigenvalue of $A$ is defective; otherwise $\varepsilon = 0$ is also possible. □

Hence we are able to assert the following theorem, where $A^{s/u}$ denote the restrictions of $A$ to $E^s$ and $E^u$.

Theorem. There exists a symmetric positive-definite $M \in \mathbb{R}^{m\times m}$, hence defining a norm $\|x\|_M \equiv \sqrt{x \circ Mx}$, which satisfies $M : E^s \mapsto E^{u\perp}$ and $M : E^u \mapsto E^{s\perp}$, and also the following properties:
• $\exists\, c^s > 0$ such that $-x \circ MAx \ge c^s \|x\|_M^2 \quad \forall x \in E^s$;
• $\exists\, c^u > 0$ such that $x \circ MAx \ge c^u \|x\|_M^2 \quad \forall x \in E^u$;
• for each $\gamma > 0$, $\exists\, \theta_\gamma^s < 1$ such that $\left\| \left(\tfrac\gamma2 I - A^s\right)^{-1}\left(\tfrac\gamma2 I + A^s\right) \right\|_M \le \theta_\gamma^s$;
• for each $\gamma > 0$, $\exists\, \theta_\gamma^u < 1$ such that $\left\| \left(\tfrac\gamma2 I + A^u\right)^{-1}\left(\tfrac\gamma2 I - A^u\right) \right\|_M \le \theta_\gamma^u$.

Proof. The first two parts are simply the above lemma. The last two parts follow from
$$\rho\!\left( \left(\tfrac\gamma2 I - A^s\right)^{-1}\left(\tfrac\gamma2 I + A^s\right) \right) < 1 \qquad\text{and}\qquad \rho\!\left( \left(\tfrac\gamma2 I + A^u\right)^{-1}\left(\tfrac\gamma2 I - A^u\right) \right) < 1. \qquad □$$

[…] $\exists\, C_\gamma > 0$ such that

(3.15) $\displaystyle |||u_N|||_{H^1} \le \frac{C_\gamma}{1-\beta}\, \max_{t \ge 0}\left\{ e^{-\beta t/2}\, e^{t/2}\, \|f(t)\| \right\}$

for each $\beta \in [0, 1)$, where
$$|||z|||_{H^1} \equiv \left( \int_0^\infty e^{-t}\left( \|z(t)\|^2 + \|z'(t)\|^2 \right) dt \right)^{1/2}.$$

We shall also need the following Sobolev inequality from [17]:
• $\exists\, C_S \ge 1$ such that $|||z_N|||_{L^\infty} \le C_S\, |||z_N|||_{H^1}$ holds for all $z_N \in X_N$ and all $N > 0$, where
$$|||z|||_{L^\infty} \equiv \max_{t \ge 0}\left\{ e^{-t/2}\, \|z(t)\| \right\}.$$
Thus, from (3.15), we also have the bound

(3.16) $\displaystyle |||u_N|||_{L^\infty} \le \frac{C_S C_\gamma}{1-\beta}\, \max_{t \ge 0}\left\{ e^{-\beta t/2}\, e^{t/2}\, \|f(t)\| \right\}.$

Hence we have shown existence and uniqueness for the solution of the discrete linear problem (3.1) and obtained the bounds (3.15) and (3.16).

3.2. Solution of the nonlinear problem. Now we shall use the bounds of section 3.1 to prove the existence and local uniqueness of a solution for the nonlinear equation (2.7), i.e.,

(3.17) $\left\langle u_N' - \left(\tfrac12 I + \tfrac1\gamma A\right)u_N,\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N \right),\, q\right\rangle_N$

for every $q \in \bar{X}_N$. To achieve this, we must be more specific about the exponential decay of $e^{A^s t}$ and make an assumption about the behaviour of the higher-order term $G$ in (1.2).


• $\exists\, \alpha_\gamma \in (0, \tfrac12]$ and $\exists\, C_A \ge 1$ such that
$$\left\| e^{\frac1\gamma At} P^s \right\| \le C_A e^{-\alpha_\gamma t} \qquad \forall t \ge 0.$$
Note that, if a critical eigenvalue of $A^s$ is not defective, we may choose $\alpha_\gamma \equiv \min\{\lambda^s/\gamma,\, 1/2\}$.
• $K(r)$ is a nondecreasing function of $r$ such that, $\forall\, \|x_1\|, \|x_2\| \le r$,
$$\|G(x_1) - G(x_2)\| \le K(r)\, \max\{\|x_1\|, \|x_2\|\}\, \|x_1 - x_2\|.$$
Now we define $r^\star$ by
$$\frac{C_S C_\gamma}{2\alpha_\gamma}\, K(2C_A r^\star)\, 4C_A r^\star = 1,$$
and consider the mapping $u_N \mapsto z_N \equiv \mathcal{K}(u_N) \in X_N^0$ defined by

(3.18) $\left\langle z_N' - \left(\tfrac12 I + \tfrac1\gamma A\right)z_N,\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N \right),\, q\right\rangle_N$

for every $q \in \bar{X}_N$.

Theorem. If $\|\xi\| \le r^\star$, then $\mathcal{K}$ is a contraction on the set
$$S \equiv \left\{ u_N \in X_N^0 : |||u_N|||_{L^\infty} \le C_A \|\xi\| \right\}$$
with respect to $|||\cdot|||_{L^\infty}$.

Proof. We set $\beta \equiv 1 - 2\alpha_\gamma$ in (3.16). Hence $\mathcal{K}(u_N) \in S$ since
$$|||\mathcal{K}(u_N)|||_{L^\infty} \le \frac{C_S C_\gamma}{2\alpha_\gamma} \max_{t\ge 0}\left\{ e^{\alpha_\gamma t} \left\| G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N(t) \right) \right\| \right\} \le \frac{C_S C_\gamma}{2\alpha_\gamma}\, K(2C_A\|\xi\|)\, [2C_A\|\xi\|]^2 \le C_A\|\xi\|.$$
Also $\mathcal{K}$ is a contraction on $S$ since
$$|||\mathcal{K}(u_N) - \mathcal{K}(v_N)|||_{L^\infty} \le \frac{C_S C_\gamma}{2\alpha_\gamma} \max_{t\ge 0}\left\{ e^{\alpha_\gamma t} \left\| G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N(t) \right) - G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}v_N(t) \right) \right\| \right\}$$
$$\le \left\{ \frac{C_S C_\gamma}{2\alpha_\gamma}\, K(2C_A\|\xi\|)\, 2C_A\|\xi\| \right\} |||u_N - v_N|||_{L^\infty} \le \tfrac12\, |||u_N - v_N|||_{L^\infty}. \qquad □$$
Thus the Contraction Mapping Theorem guarantees a locally unique solution of (3.17), which we denote by $u_N^\star$.

3.3. Basic error bound. Let $I_N$ be the polynomial interpolation operator defined by
$$I_N(z) \equiv I_N^G(P^s z) + I_N^R(P^u z),$$
where $I_N^G(z) \in X_N$ with $I_N^G(z)(t_i) = z(t_i)$, $i = 0,\dots,N$, with $t_0 \equiv 0$; and denote $u_I^\star \equiv I_N(u^\star)$. Then $u_I^\star$ satisfies

(3.19) $\left\langle I_N(u^{\star\prime}) - \left(\tfrac12 I + \tfrac1\gamma A\right)u_I^\star,\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u^\star \right),\, q\right\rangle_N$


for every $q \in \bar{X}_N$, since
$$\left\langle I_N(u^{\star\prime}) - \left(\tfrac12 I + \tfrac1\gamma A\right)u_I^\star,\, q\right\rangle_N = \left\langle u^{\star\prime} - \left(\tfrac12 I + \tfrac1\gamma A\right)u^\star,\, q\right\rangle_N$$
for all $q \in \bar{X}_N$. Hence subtracting (3.19) from
$$\left\langle u_N^{\star\prime} - \left(\tfrac12 I + \tfrac1\gamma A\right)u_N^\star,\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2} G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N^\star \right),\, q\right\rangle_N$$
gives

(3.20) $\left\langle (u_N^\star - u_I^\star)' - \left(\tfrac12 I + \tfrac1\gamma A\right)(u_N^\star - u_I^\star),\, q\right\rangle_N = \tfrac1\gamma\left\langle e^{t/2}\left[ G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N^\star \right) - G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u^\star \right) \right],\, q\right\rangle_N + \left\langle I_N(u^{\star\prime}) - u_I^{\star\prime},\, q\right\rangle$

for every $q \in \bar{X}_N$. Now, if we assume that
$$|||u^\star|||_{L^\infty} \le C_A\|\xi\|,$$
which is certainly true for $\|\xi\|$ sufficiently small, and combine the arguments in the proof of the above theorem with those leading to (3.15), we obtain

(3.21a) $\displaystyle |||u_N^\star - u_I^\star|||_{H^1} \le \left\{ \frac{C_\gamma}{2\alpha_\gamma}\, K(2C_A\|\xi\|)\, 2C_A\|\xi\| \right\} |||u_N^\star - u^\star|||_{L^\infty} + \frac{\gamma C_\gamma}{1-\beta}\, \max_{t\ge 0}\left\{ e^{-\beta t/2}\, \|u^{\star\prime}(t) - u_I^{\star\prime}(t)\| \right\}$

and, from the Sobolev inequality,

(3.21b) $\displaystyle |||u_N^\star - u_I^\star|||_{L^\infty} \le \left\{ \frac{C_S C_\gamma}{2\alpha_\gamma}\, K(2C_A\|\xi\|)\, 2C_A\|\xi\| \right\} |||u_N^\star - u^\star|||_{L^\infty} + \frac{\gamma C_S C_\gamma}{1-\beta}\, \max_{t\ge 0}\left\{ e^{-\beta t/2}\, \|u^{\star\prime}(t) - u_I^{\star\prime}(t)\| \right\}$

for each $\beta \in [0, 1)$. Consequently, writing
$$|||u_N^\star - u^\star|||_{L^\infty} \le |||u_N^\star - u_I^\star|||_{L^\infty} + |||u_I^\star - u^\star|||_{L^\infty} \le C_S\, |||u_N^\star - u_I^\star|||_{H^1} + |||u_I^\star - u^\star|||_{L^\infty}$$
and utilising
$$\frac{C_S C_\gamma}{2\alpha_\gamma}\, K(2C_A\|\xi\|)\, 2C_A\|\xi\| \le \tfrac12$$
transforms (3.21) into the error results

(3.22a) $\displaystyle |||u_N^\star - u_I^\star|||_{H^1} \le |||u^\star - u_I^\star|||_{L^\infty} + 2\,\frac{\gamma C_\gamma}{1-\beta}\, \max_{t\ge 0}\left\{ e^{-\beta t/2}\, \|u^{\star\prime}(t) - u_I^{\star\prime}(t)\| \right\}$

and

(3.22b) $\displaystyle |||u_N^\star - u_I^\star|||_{L^\infty} \le |||u^\star - u_I^\star|||_{L^\infty} + 2\,\frac{\gamma C_S C_\gamma}{1-\beta}\, \max_{t\ge 0}\left\{ e^{-\beta t/2}\, \|u^{\star\prime}(t) - u_I^{\star\prime}(t)\| \right\}$

for each $\beta \in [0, 1)$. Hence our basic error bounds (3.22) depend on the accuracy of polynomial interpolation at Gauss-type points, which is stated in (A.19).


3.4. Superconvergence result. Since we are only interested in the error at $t = 0$, i.e., $P^u[u^\star - u_N^\star](0)$, any superconvergence property at this point is particularly important. Such a property is well known for many Galerkin-type approximations, and we develop it here for our Laguerre method. For a given $x \in E^u$, consider the linear adjoint problem

(3.23a) $z' + \tfrac1\gamma\, [A + B(t)]^T z = 0$

with boundary conditions

(3.23b) $Q^s z(0) = Mx, \qquad \lim_{t\to\infty} Q^u z(t) = 0,$

where
$$B(t) \equiv J_G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u^\star \right)$$
is the Jacobian matrix of $G$. Just as in the standard constructive existence theorems for the stable manifold of (1.1), we may use the variation of constants approach [1] to write (3.23) in fixed-point form
$$z(t) = e^{-\frac1\gamma A^T t}\, Mx - \int_0^t e^{\frac1\gamma A^T(\tau - t)}\, Q^{sT} B^T(\tau) z(\tau)\, d\tau + \int_t^\infty e^{\frac1\gamma A^T(\tau - t)}\, Q^{uT} B^T(\tau) z(\tau)\, d\tau.$$
The usual contraction mapping argument then shows that, for $\|\xi\|$ sufficiently small, (3.23) has a unique solution, which we denote by $\widehat{w}$. For us, the most important point is that this solution inherits the smoothness of $B$, and all its derivatives are bounded by $\|Mx\|$; i.e., $\exists\, C_B > 0$ and $\alpha_\gamma^u > 0$ such that

(3.24) $\|\widehat{w}^{(j)}(t)\| \le C_B\, \|Mx\|\, e^{-\alpha_\gamma^u t}, \qquad j = 0, 1, 2, \dots$

($\alpha_\gamma^u$ depends on the eigenvalues of $A^u$ and, if all critical eigenvalues are nondefective, we may set $\alpha_\gamma^u \equiv \lambda^u/\gamma$.) Now, if we define

(3.25) $w(t) \equiv e^{t/2}\, \widehat{w}(t),$

the function $w$ will have the properties that enable us to establish the superconvergence of $u_N^\star(0)$, i.e.,
(1) $w' - \left[\tfrac12 I - \tfrac1\gamma A - \tfrac1\gamma B(t)\right]^T w = 0$;
(2) $Q^s w(0) = Mx$;
(3) from (3.24) we have $|||w^{(j)}|||_{L^2}$, $j = 0, 1, 2, \dots$, bounded by simple multiples of $\|Mx\|$.
Thus, through simple integration by parts, the first two properties lead to the key formula

(3.26) $-(Mx) \circ [u_N^\star - u^\star](0) = \left\langle (u_N^\star - u_I^\star)' - \left[\tfrac12 I + \tfrac1\gamma A + \tfrac1\gamma B(t)\right](u_N^\star - u_I^\star),\, w \right\rangle.$

Now, if (3.20) and (3.26) are combined, we obtain

(3.27) $(Mx) \circ [u_N^\star - u^\star](0) = \left\langle -(u_N^\star - u_I^\star)' + \left[\tfrac12 I + \tfrac1\gamma A + \tfrac1\gamma B(t)\right](u_N^\star - u_I^\star),\, w - q \right\rangle + \tfrac1\gamma\left\langle B(t)(u_N^\star - u_I^\star),\, q \right\rangle + \left\langle u_I^{\star\prime} - I_N(u^{\star\prime}),\, q \right\rangle - \tfrac1\gamma\left\langle e^{t/2}\left[ G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N^\star \right) - G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u^\star \right) \right],\, q \right\rangle_N$


for any $q \in \bar{X}_N$. Hence, choosing $q \equiv w_N \in \bar{X}_N$ approximating $w$, we can split our error into the five terms

(3.28) $(Mx) \circ [u_N^\star - u^\star](0) = E_1 + E_2 + E_3 + E_4 + E_5,$

where

(3.29)
$$E_1 \equiv \left\langle -(u_N^\star - u_I^\star)' + \left[\tfrac12 I + \tfrac1\gamma A + \tfrac1\gamma B(t)\right](u_N^\star - u_I^\star),\, w - w_N \right\rangle,$$
$$E_2 \equiv \left\langle u_I^{\star\prime} - I_N(u^{\star\prime}),\, w_N \right\rangle,$$
$$E_3 \equiv \tfrac1\gamma \left\langle B(t)[u_N^\star - u_I^\star],\, w_N \right\rangle - \tfrac1\gamma \left\langle P^s B(t)[u_N^\star - I_N^G(u^\star)] + P^u B(t)[u_N^\star - I_N^R(u^\star)],\, w_N \right\rangle,$$
$$E_4 \equiv \tfrac1\gamma \left\langle P^s B(t)[u_N^\star - I_N^G(u^\star)] + P^u B(t)[u_N^\star - I_N^R(u^\star)],\, w_N \right\rangle - \tfrac1\gamma \left\langle P^s B(t)[u_N^\star - I_N^G(u^\star)] + P^u B(t)[u_N^\star - I_N^R(u^\star)],\, w_N \right\rangle_N,$$
$$E_5 \equiv -\tfrac1\gamma \left\langle R(t),\, w_N \right\rangle_N,$$
with
$$R(t) \equiv e^{t/2}\left[ G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u_N^\star \right) - G\!\left( e^{\frac1\gamma At}\xi + e^{-t/2}u^\star \right) \right] - P^s B(t)[u_N^\star - I_N^G(u^\star)] - P^u B(t)[u_N^\star - I_N^R(u^\star)].$$

Our choice of $w_N \in \bar{X}_N$ is based on the fact that we will need both $|||w - w_N|||_{L^2}$ bounded by $\|Mx\|$ and all derivatives of $w_N$ similarly bounded. Hence, if we take $w_N$ to be the best least-squares fit to $w$, section A.1 shows how property (3) above leads to these two requirements being satisfied. All that now remains to be done is the individual bounding of the five terms in (3.29).

• Expression $E_1$ can immediately be bounded by a constant (depending only on $\tfrac1\gamma$, $\|A\|$ and $\max_{t\ge 0}\{\|B(t)\|\}$) multiple of $|||u_N^\star - u_I^\star|||_{H^1}\, |||w - w_N|||_{L^2}$, and then we may apply (3.22) and (A.11).

• Expression $E_2$ is small because of the high accuracy of Gauss-type quadratures, and this is apparent if we write
$$\left\langle u_I^{\star\prime} - I_N(u^{\star\prime}),\, w_N \right\rangle = \left\langle u_I^{\star\prime} - u^{\star\prime},\, w_N \right\rangle + \left\langle u^{\star\prime} - I_N(u^{\star\prime}),\, w_N \right\rangle = \left\langle u_I^\star - u^\star,\, w_N - w_N' \right\rangle + \left\langle u^{\star\prime} - I_N(u^{\star\prime}),\, w_N \right\rangle.$$
Hence, using the notation
$$\eta_1 \equiv (P^s u^\star) \circ (w_N' - w_N) + (P^s u^{\star\prime}) \circ w_N \qquad\text{and}\qquad \eta_2 \equiv (P^u u^\star) \circ (w_N' - w_N) + (P^u u^{\star\prime}) \circ w_N,$$
we have
$$E_2 = \int_0^\infty e^{-t}\left\{ \eta_1 - \widehat{\phi}^{\,G}_{N-1}\eta_1 + \eta_2 - \phi^R_N \eta_2 \right\} dt,$$
which are just quadrature errors and bounded using (A.20) and (A.21).


• Expression $E_3$ appears because we are using different quadrature rules for the stable and unstable components of our equations. Using the notation
$$z(t) \equiv B^T(t) w_N(t), \qquad z^{(1)}(t) \equiv B^T(t) Q^u w_N(t) \qquad\text{and}\qquad z^{(2)}(t) \equiv B^T(t) Q^s w_N(t),$$
it can be re-written
$$\left\langle B(t)u_I^\star - P^s B(t)I_N^G(u^\star) - P^u B(t)I_N^R(u^\star),\, w_N \right\rangle = \left\langle u_I^\star - u^\star,\, z \right\rangle + \left\langle u^\star - I_N^G(u^\star),\, z^{(1)} \right\rangle + \left\langle u^\star - I_N^R(u^\star),\, z^{(2)} \right\rangle.$$
Hence, letting $z_N$ be the best least-squares fit to $z$ from $\bar{X}_N$, $z_N^{(1)}$ the best least-squares fit to $z^{(1)}$ from polynomials of degree $N-1$ or less, and $z_N^{(2)}$ the best least-squares fit to $z^{(2)}$ from polynomials of degree $N$ or less, we have
$$\left\langle B(t)u_I^\star - P^s B(t)I_N^G(u^\star) - P^u B(t)I_N^R(u^\star),\, w_N \right\rangle = \left\langle u_I^\star - u^\star,\, z - z_N \right\rangle + \left\langle u_I^\star - u^\star,\, z_N \right\rangle + \left\langle u^\star - I_N^G(u^\star),\, z^{(1)} - z_N^{(1)} \right\rangle + \left\langle u^\star - I_N^G(u^\star),\, z_N^{(1)} \right\rangle + \left\langle u^\star - I_N^R(u^\star),\, z^{(2)} - z_N^{(2)} \right\rangle + \left\langle u^\star - I_N^R(u^\star),\, z_N^{(2)} \right\rangle.$$
Finally, then, we have $E_3$ equal to $\tfrac1\gamma$ multiplying the above right-hand side; the first terms being bounded in sections A.4.1 and A.4.2, while the second terms are quadrature errors and bounded as with expression $E_2$.

• Expression $E_4$ is basically a quadrature error involving the smooth matrix $B$; i.e.,
$$\left\langle P^s B(t)[u_N^\star - I_N^G(u^\star)] + P^u B(t)[u_N^\star - I_N^R(u^\star)],\, w_N \right\rangle = \left\langle u_N^\star - I_N^G(u^\star),\, B^T(t) Q^u w_N \right\rangle + \left\langle u_N^\star - I_N^R(u^\star),\, B^T(t) Q^s w_N \right\rangle$$
and
$$\left\langle P^s B(t)[u_N^\star - I_N^G(u^\star)] + P^u B(t)[u_N^\star - I_N^R(u^\star)],\, w_N \right\rangle_N = \sum_{i=1}^N w_i\, [u_N^\star - I_N^G(u^\star)](t_i) \circ B^T(t_i) Q^u w_N(t_i) + \sum_{i=0}^N \tilde w_i\, [u_N^\star - I_N^R(u^\star)](\tilde t_i) \circ B^T(\tilde t_i) Q^s w_N(\tilde t_i).$$
Hence, using the same notation as with $E_3$ above, we have
$$E_4 = \tfrac1\gamma \left\{ \left\langle u_N^\star - I_N^G(u^\star),\, z_1 - \widehat{I}^{\,G}_{N-1}(z_1) \right\rangle + \left\langle u_N^\star - I_N^R(u^\star),\, z_2 - I_N^R(z_2) \right\rangle \right\}.$$
Our bound for $E_4$ is therefore
$$\tfrac1\gamma \left\{ |||u_N^\star - I_N^G(u^\star)|||_{L^2}\, |||z_1 - \widehat{I}^{\,G}_{N-1}(z_1)|||_{L^2} + |||u_N^\star - I_N^R(u^\star)|||_{L^2}\, |||z_2 - I_N^R(z_2)|||_{L^2} \right\},$$
which is dealt with in section A.4.2.

• Expression $E_5$ is small because it is a Taylor series remainder. To bound it, we must assume that $J_G$ is Lipschitz continuous in a neighbourhood of the origin; i.e., $\exists\, C_G > 0$ such that, $\forall\, x_1, x_2$ sufficiently small,

(†) $\|G(x_2) - G(x_1) - J_G(x_1)[x_2 - x_1]\| \le C_G\, \|x_2 - x_1\|^2.$

Then, after applying the Cauchy–Schwarz inequality to $E_5$ to obtain
$$\left| \left\langle R(t), w_N \right\rangle_N \right| \le \left( \sum_{i=1}^N w_i \|P^s R(t_i)\|^2 \right)^{1/2} \left( \sum_{i=1}^N w_i \|Q^u w_N(t_i)\|^2 \right)^{1/2} + \left( \sum_{i=0}^N \tilde w_i \|P^u R(\tilde t_i)\|^2 \right)^{1/2} \left( \sum_{i=0}^N \tilde w_i \|Q^s w_N(\tilde t_i)\|^2 \right)^{1/2},$$
we may use (†) to write
$$\|P^s R(t_i)\| \le \|P^s\|\, C_G\, e^{-t_i/2}\, \|u^\star(t_i) - u_N^\star(t_i)\|^2, \quad i = 1,\dots,N,$$
$$\|P^u R(\tilde t_i)\| \le \|P^u\|\, C_G\, e^{-\tilde t_i/2}\, \|u^\star(\tilde t_i) - u_N^\star(\tilde t_i)\|^2, \quad i = 0,\dots,N.$$
Hence, using (A.2e), $E_5$ is bounded by a constant (depending only on the norms of the projections) multiple of
$$\tfrac1\gamma\, C_G\, |||u_N^\star - u_I^\star|||_{L^2}\, |||u_N^\star - u_I^\star|||_{L^\infty}\, |||w_N|||_{L^2},$$
and then we can apply (3.22).

Combining these results we see that: (1) all terms are of high accuracy, either through products of errors or because of Gauss-type quadrature; (2) all terms involve derivatives of $w$ or $w_N$, which are bounded by simple multiples of $\|Mx\|$. Thus we can choose $x = [u_N^\star - u^\star](0)$ in (3.28) and obtain
$$\|[u^\star - u_N^\star](0)\| \le \sqrt{\|M\|\,\|M^{-1}\|} \times \text{“superconvergence”}.$$

4. Implementation

For an efficient implementation of (2.10) or (2.11) we must choose and construct matrices

and then we can apply (3.22). Combining these results we see that: (1) all terms are of high accuracy, either through products of errors or because of Gauss-type quadrature; (2) all terms involve derivatives of w or wN , which are bounded by simple multiples of kMxk. Thus we can choose x = [u?N − u? ](0) in (3.28) and obtain p k[u? − u?N ](0)k ≤ kMk kM−1k × “superconvergence”. 4. Implementation For an efficient implementation of (2.10) or (2.11) we must choose and construct matrices s

U ∈ Rm×p

(4.1)

u

and V ∈ Rm×p ,

whose columns form bases for Es and Eu , respectively. It is preferable for these bases to be orthonormal, i.e., Schur vectors [12]; if available, however, a well-conditioned eigenvector basis may be more convenient. We then have available the matrices ×ps

s

A− ∈ Rp

(4.2)

u

and A+ ∈ Rp

×pu

,

which denote the restrictions of A to Es and Eu , respectively, with respect to the two bases above. Next we need to define s

W− ∈ Rp

×m

u

and W+ ∈ Rp

×m

by x = Uu + Vv i.e.,

=⇒

W− x = u and W+ x = v,

  −1 W− . = U V W+ s s Finally, we require ξ i ∈ Rp , i = 1, . . . , N , and ξ˜i ∈ Rp , i = 0, . . . , N , such that 

1

Uξi ≡ e γ

Ati

1

ξ, i = 1, . . . , N,

or

ξi = e γ

A− ti

W− ξ, i = 1, . . . , N,

LAGUERRE APPROXIMATION OF STABLE MANIFOLDS

227

and 1 ˜ Ati

Uξ˜i ≡ e γ

1

ξ, i = 0, . . . , N,

A t˜ ξ˜i = e γ − i W− ξ, i = 0, . . . , N.

or
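The bases (4.1) and the derived matrices can be computed with an ordered Schur decomposition. The following sketch is illustrative (the function name and return convention are ours); it uses scipy's eigenvalue-sorted Schur factorisation to obtain orthonormal bases $U$, $V$, the restrictions $A_-$, $A_+$, and the blocks $W_-$, $W_+$ for a hyperbolic Jacobian $A$.

```python
import numpy as np
from scipy.linalg import schur

def invariant_bases(A):
    """Orthonormal bases U, V for the stable/unstable subspaces of A,
    the restrictions A_minus, A_plus, and the blocks W_minus, W_plus."""
    # Ordered real Schur form with left-half-plane eigenvalues leading.
    Ts, Zs, ps = schur(A, output='real', sort='lhp')
    U = Zs[:, :ps]                      # columns span E^s (Schur vectors)
    A_minus = Ts[:ps, :ps]              # restriction of A to E^s (quasi-triangular)
    Tu, Zu, pu = schur(A, output='real', sort='rhp')
    V = Zu[:, :pu]                      # columns span E^u
    A_plus = Tu[:pu, :pu]
    W = np.linalg.inv(np.hstack([U, V]))
    W_minus, W_plus = W[:ps], W[ps:]    # x = U u + V v => u = W_minus x, v = W_plus x
    return U, V, A_minus, A_plus, W_minus, W_plus
```

Since $A U = U A_-$ block-wise in the Schur form, the restriction $A_-$ comes out quasi-triangular, which is exactly the structure exploited below when forming the matrix exponentials.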

Since $A_-$ will be in Schur form or better, the matrix exponentials can be easily computed [12]. We can now form the practical implementation of (2.10) by expressing our discrete solution in the form

(4.3a) $u_N(t_i) \equiv U u_i + V v_i, \quad i = 1,\dots,N,$
(4.3b) $u_N(\tilde t_i) \equiv U \tilde u_i + V \tilde v_i, \quad i = 0,\dots,N,$

where $u_i/\tilde u_i \in \mathbb{R}^{p^s}$ and $v_i/\tilde v_i \in \mathbb{R}^{p^u}$ with $\tilde u_0 = 0$. Then our equations corresponding to (2.10b)–(2.10c) are

(4.4a) $\displaystyle \sum_{j=1}^N d_{ij} u_j - \left(\tfrac12 I + \tfrac1\gamma A_-\right) u_i = \tfrac1\gamma e^{t_i/2}\, W_- G\!\left( U\xi_i + e^{-t_i/2}[U u_i + V v_i] \right), \quad i = 1,\dots,N,$

and

(4.4b) $\displaystyle \sum_{j=0}^N \tilde d_{ij} \tilde v_j - \left(\tfrac12 I + \tfrac1\gamma A_+\right) \tilde v_i = \tfrac1\gamma e^{\tilde t_i/2}\, W_+ G\!\left( U\tilde\xi_i + e^{-\tilde t_i/2}[U \tilde u_i + V \tilde v_i] \right), \quad i = 0,\dots,N.$

Finally, we change variables in (4.4) to
$$\hat u_i \equiv \sqrt{w_i/t_i}\; u_i, \quad i = 1,\dots,N, \qquad\text{and}\qquad \hat v_i \equiv \sqrt{\tilde w_i}\; \tilde v_i, \quad i = 0,\dots,N,$$
so that, by making use of (A.6) and (A.9), we have only these $N p^s + (N+1) p^u$ unknowns; i.e.,

(4.5a) $\displaystyle \sum_{j=1}^N d'_{ij} \hat u_j - \left(\tfrac12 I + \tfrac1\gamma A_-\right) \hat u_i = \tfrac1\gamma \sqrt{\frac{w'_i}{t_i}}\; W_- G\!\left( U\xi_i + \sqrt{\frac{t_i}{w'_i}}\,\left[ U \hat u_i + V (T\hat v)_i \right] \right), \quad i = 1,\dots,N,$

and

(4.5b) $\displaystyle \sum_{j=0}^N \tilde d'_{ij} \hat v_j - \left(\tfrac12 I + \tfrac1\gamma A_+\right) \hat v_i = \tfrac1\gamma \sqrt{\tilde w'_i}\; W_+ G\!\left( U\tilde\xi_i + \frac{1}{\sqrt{\tilde w'_i}}\,\left[ U (\widetilde T\hat u)_i + V \hat v_i \right] \right), \quad i = 0,\dots,N.$


Thus the mappings $T$ and $\widetilde T$,

$T$: INPUT: $\hat v \equiv \hat v_0, \dots, \hat v_N$; OUTPUT: $T\hat v \equiv (T\hat v)_1, \dots, (T\hat v)_N$:
a) Apply $\widetilde S$ to each component of $\hat v_0, \dots, \hat v_N$ to obtain $\sum_{k=0}^N a_k L_k(t)$.
b) Drop $a_N$, since $L_N(t)$ is zero at $t_1, \dots, t_N$.
c) Apply $S^T$ to each component of $a_0, \dots, a_{N-1}$ to obtain $(T\hat v)_1, \dots, (T\hat v)_N$.

$\widetilde T$: INPUT: $\hat u \equiv \hat u_1, \dots, \hat u_N$; OUTPUT: $\widetilde T\hat u \equiv (\widetilde T\hat u)_0, \dots, (\widetilde T\hat u)_N$:
a) Apply $S$ to each component of $\hat u_1, \dots, \hat u_N$ to obtain $\sum_{k=0}^{N-1} a_k L_k(t)$.
b) Set $a_N = -\sum_{k=0}^{N-1} a_k$, thus imposing the boundary condition at $t = 0$.
c) Apply $\widetilde S^T$ to each component of $a_0, \dots, a_N$ to obtain $(\widetilde T\hat u)_0, \dots, (\widetilde T\hat u)_N$,

efficiently convert nodal values of polynomials between $t_1, \dots, t_N$ and $\tilde t_0, \dots, \tilde t_N$, using (A.6). An additional key point here is that the condition of $D'_N$ and $\widetilde D'_N$, unlike that of their unscaled counterparts $D_N$ and $\widetilde D_N$, does not deteriorate with $N$ [17]. Note that, from (A.4b), the unstable component of the point on the stable manifold is given by $\sqrt{N+1}\, V \hat v_0$.

Similarly, we can now form the practical implementation of (2.11) by setting

(4.6) $a_i^s \equiv U \hat a_i^s, \qquad g_i^s \equiv U \hat g_i^s, \qquad a_i^u \equiv V \hat a_i^u, \qquad g_i^u \equiv V \hat g_i^u,$

where $\hat a_i^s/\hat g_i^s \in \mathbb{R}^{p^s}$ and $\hat a_i^u/\hat g_i^u \in \mathbb{R}^{p^u}$. Then our equations corresponding to (2.11) are

(4.7a) $\displaystyle \sum_{k=0}^N \hat a_k^s = 0,$

(4.7b) $\displaystyle \sum_{k=0}^{i-1} \hat a_k^s + \left(\tfrac12 I - \tfrac1\gamma A_-\right) \hat a_i^s = \tfrac1\gamma \hat g_i^s, \quad i = 0,\dots,N-1,$

(4.7c) $\displaystyle -\sum_{k=i+1}^N \hat a_k^u - \left(\tfrac12 I + \tfrac1\gamma A_+\right) \hat a_i^u = \tfrac1\gamma \hat g_i^u, \quad i = 0,\dots,N,$

where

(4.8a) $\hat g^s = S \hat c^s$ with $\hat c^s \equiv (\hat c_1^s, \dots, \hat c_N^s)$, $\displaystyle \hat c_i^s \equiv \sqrt{w'_i}\; W_- G\!\left( U\xi_i + \frac{1}{\sqrt{w'_i}}\, (S^T a)_i \right),$

and

(4.8b) $\hat g^u = \widetilde S \hat c^u$ with $\hat c^u \equiv (\hat c_0^u, \dots, \hat c_N^u)$, $\displaystyle \hat c_i^u \equiv \sqrt{\tilde w'_i}\; W_+ G\!\left( U\tilde\xi_i + \frac{1}{\sqrt{\tilde w'_i}}\, (\widetilde S^T a)_i \right),$

with $a_i \equiv U \hat a_i^s + V \hat a_i^u$, $i = 0,\dots,N$. Here, $S$, $S^T$, $\widetilde S$ and $\widetilde S^T$ are just the vector forms of the matrices $S$, $S^T$, $\widetilde S$ and $\widetilde S^T$ in (A.7), so that
$$\hat g_{i-1}^s \equiv [S\hat c^s]_i = \sum_{j=1}^N s_{ij}\, \hat c_j^s, \quad i = 1,\dots,N, \qquad (S^T a)_i = \sum_{j=1}^N s_{ji}\, a_{j-1}, \quad i = 1,\dots,N,$$
and
$$\hat g_i^u \equiv [\widetilde S\hat c^u]_i = \sum_{j=0}^N \tilde s_{ij}\, \hat c_j^u, \quad i = 0,\dots,N, \qquad (\widetilde S^T a)_i = \sum_{j=0}^N \tilde s_{ji}\, a_j, \quad i = 0,\dots,N.$$

4.1. Conclusion. It is now time to make a final comparison between the collocation formulation (4.5) and the expansion formulation (4.7), which we have so far developed in tandem. It is natural to solve either of these nonlinear systems using the fixed-point approach of subsection 3.2, i.e., solving a succession of linear problems with the right-hand side of (4.5) or (4.7) known at each iteration. It is then clear that (4.7) has the enormous advantage that it may be solved in the manner of subsection 3.1, i.e.,

• (4.7b), (4.7a) is a "lower-triangular" set of equations, which can be solved by the forward recurrence
$$\hat a_i^s = \left(\tfrac\gamma2 I - A_-\right)^{-1}\left( \hat g_i^s - \gamma \sum_{k=0}^{i-1} \hat a_k^s \right), \quad i = 0,\dots,N-1, \qquad \hat a_N^s = -\sum_{k=0}^{N-1} \hat a_k^s;$$

• (4.7c) is an "upper-triangular" set of equations, which can be solved by the backward recurrence
$$\hat a_i^u = -\left(\tfrac\gamma2 I + A_+\right)^{-1}\left( \gamma \sum_{k=i+1}^{N} \hat a_k^u + \hat g_i^u \right), \quad i = N,\dots,0.$$

On the other hand, with (4.5) it is necessary to solve an $N p^s$ system to obtain $\hat u_1, \dots, \hat u_N$ and an $(N+1) p^u$ system to obtain $\hat v_0, \dots, \hat v_N$. Hence our numerical results in the next section are based on (4.7). We have been willing to derive the collocation framework, however, since the particular structure of $D'_N$ and $\widetilde D'_N$ may lead to special efficient solution methods being developed.
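For a fixed right-hand side, the two recurrences above amount to a pair of short triangular solves. The sketch below is illustrative (function names and array layout are ours; `A_minus`, `A_plus`, `gamma` and the $\hat g$ arrays are as in (4.7)); the result can be verified directly against equations (4.7a)–(4.7c).

```python
import numpy as np

def solve_stable(A_minus, g_s, gamma):
    """Forward recurrence for (4.7b) with the closure (4.7a).
    g_s has one row per equation i = 0..N-1; returns a_s[0..N]."""
    p = A_minus.shape[0]
    Nm = len(g_s)
    M = 0.5 * gamma * np.eye(p) - A_minus
    a = np.zeros((Nm + 1, p))
    partial = np.zeros(p)                   # running sum of earlier coefficients
    for i in range(Nm):
        a[i] = np.linalg.solve(M, g_s[i] - gamma * partial)
        partial += a[i]
    a[Nm] = -partial                        # boundary condition at t = 0
    return a

def solve_unstable(A_plus, g_u, gamma):
    """Backward recurrence for (4.7c). g_u has rows i = 0..N."""
    p = A_plus.shape[0]
    Np = len(g_u)
    M = 0.5 * gamma * np.eye(p) + A_plus
    a = np.zeros((Np, p))
    tail = np.zeros(p)                      # running sum of later coefficients
    for i in range(Np - 1, -1, -1):
        a[i] = -np.linalg.solve(M, gamma * tail + g_u[i])
        tail += a[i]
    return a
```

Each iteration of the outer fixed-point loop of subsection 3.2 would recompute the $\hat g$ arrays from (4.8) and repeat these two solves.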


Finally, we note that the analogue of (4.7)–(4.8) for the implementation of (2.13)–(2.14) is

(4.9a) $\displaystyle \sum_{k=0}^N \hat a_k^s = \tilde\xi_0,$

(4.9b) $\displaystyle -\tilde\xi_0 + \sum_{k=0}^{i-1} \hat a_k^s + \left(\tfrac12 I - \tfrac1\gamma A_-\right) \hat a_i^s = \tfrac1\gamma \hat g_i^s, \quad i = 0,\dots,N-1,$

(4.9c) $\displaystyle -\sum_{k=i+1}^N \hat a_k^u - \left(\tfrac12 I + \tfrac1\gamma A_+\right) \hat a_i^u = \tfrac1\gamma \hat g_i^u, \quad i = 0,\dots,N,$

where

(4.10a) $\hat g^s = S \hat c^s$ with $\hat c^s \equiv (\hat c_1^s, \dots, \hat c_N^s)$, $\displaystyle \hat c_i^s \equiv \sqrt{w'_i}\; W_- G\!\left( \frac{1}{\sqrt{w'_i}}\, (S^T a)_i \right),$

and

(4.10b) $\hat g^u = \widetilde S \hat c^u$ with $\hat c^u \equiv (\hat c_0^u, \dots, \hat c_N^u)$, $\displaystyle \hat c_i^u \equiv \sqrt{\tilde w'_i}\; W_+ G\!\left( \frac{1}{\sqrt{\tilde w'_i}}\, (\widetilde S^T a)_i \right).$

(Note that $\tilde\xi_0 = W_-\xi$, since $\tilde t_0 = 0$.)

5. Numerical results

Now we illustrate the above algorithms on two simple well-known examples.

5.1. The Lorenz equations.
$$\dot x = a(y - x), \qquad \dot y = bx - y - xz, \qquad \dot z = xy - cz.$$
We use the parameter values $(a, b, c) = (10, 14, \tfrac83)$, for which the stationary solution at the origin is hyperbolic. The Jacobian matrix there has real eigenvalues
$$-c, \qquad -\tfrac12(a+1) \pm \tfrac12\sqrt{(a+1)^2 + 4a(b-1)},$$
two of which are negative. We take $\xi = \delta(\varphi_1 + \varphi_2)$ (where the $\varphi_i$ are the normalised eigenvectors corresponding to the two negative eigenvalues) and $\delta = 4$. Figures 5.1 and 5.2 show the error in the unstable component of the stable manifold for algorithms (2.11) and (2.13), respectively, with various values of $\gamma$. The exponential convergence with respect to $N$ is clear in both cases. In Figure 5.1 the results seem to improve as $\gamma$ increases up to $\gamma = 35$, but then the error for $\gamma = 40$ is somewhat larger. Since there is no $z^2$ term in the Lorenz equations, the exponential decay rate of our solution is $e^{-20.83t}$. This is being approximated by a polynomial multiplied by $e^{-\frac\gamma2 t}$, and thus a value of $\gamma \approx 41.65$ is to be expected.
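The eigenvalue arithmetic behind this choice of $\gamma$ is easy to check numerically; the following sketch (ours, for illustration) evaluates the Lorenz Jacobian at the origin:

```python
import numpy as np

# Jacobian of the Lorenz system at the origin for (a, b, c) = (10, 14, 8/3).
a, b, c = 10.0, 14.0, 8.0 / 3.0
J = np.array([[-a, a, 0.0],
              [b, -1.0, 0.0],
              [0.0, 0.0, -c]])
lam = np.sort(np.linalg.eigvals(J).real)

# Two eigenvalues are negative: -c and -(a+1)/2 - sqrt((a+1)^2 + 4a(b-1))/2.
# With no z^2 term in the nonlinearity, the solution decays like
# exp((lam_1 + lam_2) t), i.e. roughly e^{-20.83 t}, suggesting gamma ~ 41.65.
print(lam[lam < 0].sum())
```

Running this confirms that the sum of the two stable eigenvalues is approximately $-20.83$, i.e. twice the optimal $\gamma/2 \approx 20.83$.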

[Figure 5.1. Algorithm with homogeneous initial condition: error against N (up to N = 35) for the Lorenz example with δ = 4 and γ = 15, 20, 25, 30, 35, 40.]

[Figure 5.2. Algorithm with inhomogeneous initial condition: error against N (up to N = 35) for the Lorenz example with δ = 4 and γ = 15, 20, 25, 30, 35, 40.]


In contrast, however, if the w − w_N term in (3.29) is critical, the best value of γ would be γ ≈ 14.32. Thus we are not surprised that Figure 5.1 seems to indicate an optimal value of γ somewhat less than 40. In Figure 5.2 the results seem to improve as γ increases up to γ = 30, but γ = 40 definitely gives a larger error. Although there are now components of the solution that decay like e^{−8t/3}, these are in the stable subspace and do not directly affect the accuracy of our manifold approximation. Nevertheless, it is clear that there is now a stronger influence toward smaller γ and also that the errors in Figure 5.2 are somewhat larger than those in Figure 5.1.

5.2. Chua's electronic circuit.

$$\dot x = a\Big(y + \tfrac16(x - x^3)\Big), \qquad \dot y = x - y + z, \qquad \dot z = -by.$$

We use the parameter values (a, b) = (4, 5), for which the stationary solution at the origin is again hyperbolic. The Jacobian matrix there has one positive eigenvalue and a pair of complex conjugate eigenvalues with negative real part, i.e., approximately 1.32 and −0.83 ± 1.36i. We take ξ = δ(φ_R + φ_I) (where φ_R and φ_I are the real and imaginary parts of the complex eigenvector) and δ = 1.5. Figures 5.3 and 5.4 show the error in the unstable component of the stable manifold for algorithms (2.11) and (2.13), respectively, with various values of γ.

[Figure 5.3. Algorithm with homogeneous initial condition: error against N for Chua's circuit with δ = 1.5 and γ = 2.5, 3.5, 4.5, 5.5, 6.5, 7.5.]
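The stated spectrum for Chua's circuit, and the predicted γ ≈ 4.97, can be checked in the same way (again our own illustrative sketch; note the x-derivative of a(y + (x − x³)/6) at the origin is a/6):

```python
import numpy as np

# Jacobian of Chua's circuit at the origin for (a, b) = (4, 5).
a, b = 4.0, 5.0
J = np.array([[a / 6.0,  a,  0.0],
              [1.0,    -1.0, 1.0],
              [0.0,     -b,  0.0]])

eigs = np.linalg.eigvals(J)
stable = eigs[eigs.real < 0]       # complex-conjugate pair, ~ -0.83 +/- 1.36i
unstable = eigs[eigs.real > 0]     # ~ 1.32

print(unstable.real)
print(stable)
# Cubic nonlinearity => the solution decays like exp(3 * Re(lambda_s) * t),
# i.e. e^{-2.48 t}, suggesting gamma ~ 2 * 2.48 ~ 4.97.
print(-2.0 * 3.0 * stable[0].real)
```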


[Figure 5.4. Algorithm with inhomogeneous initial condition: error against N for Chua's circuit with δ = 1.5 and γ = 2.5, 3.5, 4.5, 5.5, 6.5, 7.5.]

Again the exponential convergence with respect to N is clear in both cases. In Figure 5.3 the results seem to improve as γ increases up to γ = 6.5, and then the error for γ = 7.5 is about the same. Since there is no quadratic term in Chua's equation, the exponential decay rate of our solution is e^{−2.48t}; thus a good value of γ is expected to be γ ≈ 4.97. It is not obvious, however, why the error decreases for larger values of γ. The influence of the w − w_N term in (3.29) does not seem important here, since this would indicate a value of γ ≈ 2.64. In Figure 5.4 the error for γ = 7.5 seems definitely worse than that for γ = 6.5. Apart from this, neither algorithm seems significantly preferable to the other.

5.3. Choice of γ. In the light of the above numerical results, a simple formula for the optimal value of γ is unlikely to exist. We suggest, nevertheless, that setting γ/2 equal to the decay rate of the unstable component of the solution is always a good choice. Further improvement in the choice of γ must be empirical and problem dependent. We also have a slight preference for the algorithm with homogeneous initial conditions.

6. Concluding remarks

6.1. Homoclinic and heteroclinic orbits. Our main reason for studying Laguerre approximation of stable manifolds in this paper is so that we can eventually develop a fully spectral approximation of connecting orbits between stationary solutions [3], [9], [19]. Here we very briefly describe the basic idea. We replace (1.1) with the parameter-dependent problem

$$\mathbf v' = \mathbf F(\mathbf v, \lambda), \qquad \mathbf F : \mathbb R^m \times \mathbb R^q \mapsto \mathbb R^m, \tag{6.1}$$


and assume that $\mathbf v^\pm : \mathbb R^q \mapsto \mathbb R^m$ are two (possibly identical) hyperbolic stationary solution curves of (6.1), i.e., $\mathbf F(\mathbf v^\pm(\lambda), \lambda) = \mathbf 0$ and the Jacobian matrices $J_{\mathbf F}(\mathbf v^\pm(\lambda), \lambda)$ have no eigenvalues on the imaginary axis. (The number of free parameters, q, must be such that the sum of the dimensions of the stable manifold of $\mathbf v^+(\lambda)$ and the unstable manifold of $\mathbf v^-(\lambda)$ equals m + 1 − q [3].) Then, to approximate a connecting orbit between $\mathbf v^-(\lambda)$ and $\mathbf v^+(\lambda)$, we solve the three coupled equations

$$\mathbf v_u'(t) = \mathbf F(\mathbf v_u(t), \lambda), \qquad t \in (-\infty, 0), \tag{6.2a}$$

$$\mathbf v_c'(\tau) = T\,\mathbf F(\mathbf v_c(\tau), \lambda), \qquad \tau \in (-1, 1), \tag{6.2b}$$

$$\mathbf v_s'(t) = \mathbf F(\mathbf v_s(t), \lambda), \qquad t \in (0, \infty), \tag{6.2c}$$

with

$$\mathbf v_u(0) = \mathbf v_c(-1) \quad\text{and}\quad \mathbf v_c(1) = \mathbf v_s(0). \tag{6.3}$$

I.e., $\mathbf v_u(t)$ is a trajectory lying in the unstable manifold of $\mathbf v^-(\lambda)$, $\mathbf v_s(t)$ is a trajectory lying in the stable manifold of $\mathbf v^+(\lambda)$, while $\mathbf v_c(\tau)$ is an orbit (parametrised by scaled time; cf. [9]) connecting these two manifolds. Thus our unknowns to be determined are $\mathbf v_u$, $\mathbf v_c$, $\mathbf v_s$, λ and T. To complete a well-posed problem, we must also specify what proportions of the full connecting orbit are contained in (6.2a) and (6.2c). One possibility is to choose ε± > 0 sufficiently small and to insist that

$$\int_{-\infty}^{0} \|\mathbf F(\mathbf v_u, \lambda)\|\,dt = \varepsilon_- \quad\text{and}\quad \int_{0}^{\infty} \|\mathbf F(\mathbf v_s, \lambda)\|\,dt = \varepsilon_+; \tag{6.4}$$

cf. [2], [20]. Of course, when discretising (6.2) we would use Laguerre approximation for (6.2a) and (6.2c), and either Chebyshev or Legendre approximation for (6.2b). Further details and results for such spectral algorithms will be given elsewhere; here we just show a graph of results for the model problem in [4], i.e., m = 2, q = 1 and

$$\dot x = y, \qquad \dot y = \lambda - 2y - x^2 + xy.$$

The identical stationary solutions are

$$\mathbf v^-(\lambda) \equiv \mathbf v^+(\lambda) = \begin{pmatrix} -\sqrt{\lambda} \\ 0 \end{pmatrix}$$

and a homoclinic orbit exists for λ ≈ 6.5. We use ε± = 1 and apply Laguerre approximation with γ = 3 to (6.2a) and with γ = 20 to (6.2c). Figure 6.1 shows the error in the value of λ for the homoclinic orbit computed from different orders of polynomial approximation. Note that for fixed order Laguerre approximation, the rate of convergence is exponential in N until the Laguerre error becomes dominant. (Zero-order Laguerre corresponds to approximation of the stable/unstable manifolds by the corresponding stable/unstable subspaces, which is the standard technique in [3], [9].) We aim to produce a Matlab implementation of this approximation scheme, with (6.2b) replaced by the arclength parametrisation of [19], i.e.,

$$\mathbf v_c'(\sigma) = L\,\frac{\mathbf F(\mathbf v_c(\sigma), \lambda)}{\|\mathbf F(\mathbf v_c(\sigma), \lambda)\|}, \qquad \sigma \in (-1, 1).$$
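The stationary solutions and their hyperbolicity for this model problem can be verified directly; a small sketch (ours, for illustration only):

```python
import numpy as np

# Beyn's model problem (m = 2, q = 1): x' = y, y' = lam - 2y - x^2 + x*y.
def F(v, lam):
    x, y = v
    return np.array([y, lam - 2.0 * y - x * x + x * y])

def JF(v, lam):
    x, y = v
    return np.array([[0.0, 1.0],
                     [-2.0 * x + y, -2.0 + x]])

lam = 6.5                                 # a homoclinic orbit exists near here
v_minus = np.array([-np.sqrt(lam), 0.0])  # stationary solution (-sqrt(lam), 0)

print(F(v_minus, lam))                    # ~ (0, 0): stationary
mu = np.linalg.eigvals(JF(v_minus, lam))
print(np.sort(mu.real))                   # one negative, one positive: a saddle
```

The eigenvalues of the Jacobian at (−√λ, 0) have opposite signs, so the point is hyperbolic with one-dimensional stable and unstable manifolds, as required for a homoclinic connection with q = 1.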

[Plot: error in λ against Legendre order N = 10, …, 50, one curve per Laguerre order 0, 2, 4, 6, 8, 10 (Legendre + Laguerre approximation).]

Figure 6.1. Approximation of homoclinic orbit

Comparison of this approach with the popular HOMCONT package [6], [8] will be presented elsewhere.

6.2. Strongly stable manifolds. We can also extend the approximation scheme in this paper to strongly stable manifolds of ρ-pseudohyperbolic stationary solutions of (1.1) [23], [24]; i.e., ρ < 0 is such that none of the eigenvalues of A in (1.2) has real part equal to ρ. Instead of the changes of variable at the beginning of section 2, we apply the transformations
i) t ↦ t/γ, γ > 0,
ii) u(t) = e^{σt} v(t), σ = 1/2 − ρ/γ,
to (1.1) and obtain

$$\mathbf u' - \Big[\sigma I + \tfrac1\gamma A\Big]\mathbf u = \frac{e^{\sigma t}}{\gamma}\,\mathbf G\big(e^{-\sigma t}\mathbf u\big). \tag{6.5}$$

(Note that, for ρ = 0, (6.5) corresponds to the equation being approximated by (2.5).) This enables us to calculate any trajectory of (1.1) such that e^{−ρt} v(t) → 0 as t → ∞ and, as in the present paper, γ can be tuned to optimise the approximation of this strongly stable manifold. In particular, if our stationary solution is nonhyperbolic, we can approximate points on the stable manifold by choosing |ρ| sufficiently small. Of course, by replacing F by −F, a similar extension holds for unstable manifolds.
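For completeness, (6.5) follows in two lines: after the rescaling t ↦ t/γ the system reads v′ = (1/γ)[Av + G(v)], and differentiating u = e^{σt} v gives

```latex
\mathbf{u}' = \sigma e^{\sigma t}\mathbf{v} + e^{\sigma t}\mathbf{v}'
            = \sigma\mathbf{u} + \tfrac{1}{\gamma}A\mathbf{u}
              + \frac{e^{\sigma t}}{\gamma}\,\mathbf{G}\!\left(e^{-\sigma t}\mathbf{u}\right),
```

which rearranges to (6.5).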


Appendix. Laguerre polynomials, quadrature and approximation

In this section, for convenience, we collect together some well-known results to which we wish to refer; cf. [7], [10], [17], [18], [22]. The famous Laguerre polynomials $L_N$, N = 0, 1, 2, …, have the following properties:

(1) Differential equation
$$t L_N''(t) + [1 - t] L_N'(t) + N L_N(t) = 0.$$

(2) Orthogonality
$$\int_0^\infty e^{-t} L_{N_1}(t) L_{N_2}(t)\,dt = \delta_{N_1, N_2}.$$

(3) Recurrence relation
$$N L_N(t) = [2N - 1 - t] L_{N-1}(t) - [N - 1] L_{N-2}(t), \qquad L_0(t) = 1, \quad L_1(t) = 1 - t.$$

(As is standard, the polynomials have been scaled so that $L_N(0) = 1$.) It then follows that, if

$$p_N(t) = \sum_{k=0}^{N} a_k L_k(t) \quad\text{and}\quad p_N'(t) = \sum_{k=0}^{N-1} b_k L_k(t),$$

we have

$$a_0 = p_N(0) + b_0, \qquad a_k = b_k - b_{k-1}, \quad k = 1, \ldots, N-1, \qquad a_N = -b_{N-1} \tag{A.1a}$$

or

$$b_k = -\sum_{j=k+1}^{N} a_j \equiv -p_N(0) + \sum_{j=0}^{k} a_j. \tag{A.1b}$$

In addition, the orthogonality property implies

$$\int_0^\infty e^{-t}[p_N(t)]^2\,dt \equiv \sum_{k=0}^{N} a_k^2 \quad\text{and}\quad \int_0^\infty e^{-t}[p_N'(t)]^2\,dt \equiv \sum_{k=0}^{N-1} b_k^2. \tag{A.1c}$$
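The recurrence and the coefficient relation (A.1b) translate directly into code. The sketch below (ours) checks both against NumPy's `numpy.polynomial.laguerre` routines, which use the same $L_N(0) = 1$ normalisation:

```python
import numpy as np
from numpy.polynomial import laguerre as L

def laguerre_recurrence(n, t):
    """Evaluate L_0, ..., L_n at t via N L_N = (2N-1-t) L_{N-1} - (N-1) L_{N-2}."""
    vals = [np.ones_like(t), 1.0 - t]
    for k in range(2, n + 1):
        vals.append(((2.0 * k - 1.0 - t) * vals[k - 1]
                     - (k - 1.0) * vals[k - 2]) / k)
    return vals

def deriv_coeffs(a):
    """(A.1b): b_k = -sum_{j=k+1}^N a_j gives the Laguerre coefficients of p_N'."""
    tail = np.cumsum(a[::-1])[::-1]   # tail[k] = a_k + a_{k+1} + ... + a_N
    return -tail[1:]

t = np.linspace(0.0, 5.0, 7)
vals = laguerre_recurrence(5, t)
print(np.allclose(vals[5], L.lagval(t, [0, 0, 0, 0, 0, 1])))   # matches L_5

a = np.array([0.3, -1.0, 2.0, 0.5])
print(np.allclose(deriv_coeffs(a), L.lagder(a)))               # matches lagder
```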

A.1. Quadrature rules. In this paper we are particularly concerned with the Gauss-Laguerre quadrature rule

$$\int_0^\infty e^{-t} f(t)\,dt \approx \sum_{j=1}^{N} w_j f(t_j) \tag{A.2a}$$

and the Gauss-Laguerre-Radau rule

$$\int_0^\infty e^{-t} f(t)\,dt \approx \sum_{j=0}^{N} \tilde w_j f(\tilde t_j), \tag{A.2b}$$

where $\tilde t_0 = 0$. (The former is exact if f is a polynomial of degree 2N − 1 or less, while the latter is exact if f is a polynomial of degree 2N or less; for example,

$$p_{N-1}(t) = \sum_{k=0}^{N-1} a_k L_k(t) \implies \sum_{j=1}^{N} w_j [p_{N-1}(t_j)]^2 = \sum_{k=0}^{N-1} a_k^2 \tag{A.2c}$$

and

$$p_N(t) = \sum_{k=0}^{N} a_k L_k(t) \implies \sum_{j=0}^{N} \tilde w_j \big[p_N(\tilde t_j)\big]^2 = \sum_{k=0}^{N} a_k^2. \tag{A.2d}$$

In addition, we shall need the important result

$$\sum_{j=1}^{N} w_j [p_N(t_j)]^2 \le \int_0^\infty e^{-t}[p_N(t)]^2\,dt, \tag{A.2e}$$

which is based on an expansion in Laguerre polynomials.) The classical formulae for these nodes and weights are

$$t_1, \ldots, t_N \ \text{are the roots of}\ L_N, \tag{A.3a}$$

$$\tilde t_1, \ldots, \tilde t_N \ \text{are the roots of}\ L_{N+1}', \tag{A.3b}$$

and

$$w_j = \frac{1}{t_j}\big[L_N'(t_j)\big]^{-2}, \qquad j = 1, \ldots, N, \tag{A.4a}$$

$$\tilde w_j = \frac{1}{N+1}\big[L_{N+1}(\tilde t_j)\big]^{-2}, \qquad j = 0, \ldots, N. \tag{A.4b}$$

These quadrature weights decay exponentially; more precisely,

$$\sum_{j=1}^{N} w_j e^{\beta t_j} \le \frac{1}{1-\beta} \quad\text{and}\quad \sum_{j=0}^{N} \tilde w_j e^{\beta \tilde t_j} \le \frac{1}{1-\beta} \tag{A.5}$$

for β ∈ [0, 1). For β = 1 there is very slow growth with N (cf. [22]) and it is convenient to define

$$w_j' \equiv w_j e^{t_j} \quad\text{and}\quad \tilde w_j' \equiv \tilde w_j e^{\tilde t_j}. \tag{A.6}$$
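A quick sanity check of the exactness property of (A.2a), using NumPy's Gauss-Laguerre nodes and weights (our illustration, not the paper's implementation):

```python
import math
import numpy as np
from numpy.polynomial.laguerre import laggauss

# N-point Gauss-Laguerre rule (A.2a): exact for polynomials of degree <= 2N - 1
# against the weight e^{-t} on (0, infinity).
N = 4
t, w = laggauss(N)

# Since int_0^inf e^{-t} t^k dt = k!, exactness can be checked monomial by monomial.
for k in range(2 * N):
    approx = np.sum(w * t ** k)
    assert abs(approx - math.factorial(k)) < 1e-8 * math.factorial(k)
print("the 4-point rule integrates t^0, ..., t^7 exactly")
```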

A.2. Tridiagonal eigenproblems. Since [13], however, these values have usually been obtained from the solution of eigenproblems; see also [11]. Thus $t_1, \ldots, t_N$ are the eigenvalues of the N × N symmetric tridiagonal matrix

$$T_N = \begin{pmatrix} 1 & -1 & & & \\ -1 & 3 & -2 & & \\ & -2 & 5 & \ddots & \\ & & \ddots & \ddots & -(N-1) \\ & & & -(N-1) & 2N-1 \end{pmatrix}$$

and, if $S_N$ denotes the corresponding orthogonal matrix of eigenvectors,

$$w_j = s_{1j}^2 = (S_N)_{1j}^2, \qquad j = 1, \ldots, N. \tag{A.7a}$$

Similarly, $\tilde t_0, \ldots, \tilde t_N$ are the eigenvalues of the (N + 1) × (N + 1) symmetric tridiagonal matrix

$$\widetilde T_N = \begin{pmatrix} 1 & -1 & & & \\ -1 & 3 & -2 & & \\ & -2 & \ddots & \ddots & \\ & & \ddots & 2N-1 & -N \\ & & & -N & N \end{pmatrix}$$

and, if $\widetilde S_N$ denotes the corresponding orthogonal matrix of eigenvectors (with row/column numbering 0, …, N),

$$\tilde w_j = \tilde s_{0j}^2 = \big(\widetilde S_N\big)_{0j}^2, \qquad j = 0, \ldots, N. \tag{A.7b}$$

If they are chosen so that the first element in each column is positive, the eigenvector matrices $S_N$ and $\widetilde S_N$ also define the mappings between expansion coefficients and scaled nodal values for polynomials. Thus, if

$$p_{N-1}(t) \equiv \sum_{k=0}^{N-1} a_k L_k(t) \quad\text{and}\quad c_j \equiv \sqrt{w_j}\,p_{N-1}(t_j), \qquad j = 1, \ldots, N,$$

then

$$\mathbf a = S_N \mathbf c \quad\text{and}\quad \mathbf c = S_N^T \mathbf a, \tag{A.8a}$$

where $\mathbf a \equiv (a_0, \ldots, a_{N-1})^T$; cf. (A.3a). Similarly, if

$$p_N(t) \equiv \sum_{k=0}^{N} a_k L_k(t) \quad\text{and}\quad c_j \equiv \sqrt{\tilde w_j}\,p_N(\tilde t_j), \qquad j = 0, \ldots, N,$$

then

$$\mathbf a = \widetilde S_N \mathbf c \quad\text{and}\quad \mathbf c = \widetilde S_N^T \mathbf a. \tag{A.8b}$$
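The Golub-Welsch construction just described is a few lines in practice. This sketch (ours) builds $T_N$, takes its eigenvalues and squared first eigenvector components, and compares with library nodes and weights; note that the sign of the off-diagonal entries affects neither the eigenvalues nor the squared first components:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Golub-Welsch for Gauss-Laguerre: nodes are the eigenvalues of the symmetric
# tridiagonal T_N (diagonal 1, 3, ..., 2N-1; off-diagonal -1, -2, ..., -(N-1)),
# and w_j is the squared first component of the j-th normalised eigenvector
# (times mu_0 = int_0^inf e^{-t} dt = 1).
N = 6
T = (np.diag(2.0 * np.arange(N) + 1.0)
     - np.diag(np.arange(1.0, N), 1)
     - np.diag(np.arange(1.0, N), -1))
nodes, S = np.linalg.eigh(T)
weights = S[0, :] ** 2

t_ref, w_ref = laggauss(N)
print(np.allclose(nodes, t_ref), np.allclose(weights, w_ref))
```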

Since a fast Laguerre transform has not been discovered (although optimism was expressed in [17]), these mappings provide the most convenient transformations.

A.3. Derivative matrices. The nodes and weights also enable us to define simple linear mappings between scaled nodal values of a polynomial and the corresponding scaled nodal values of its derivative. Thus, if $p_N(0) = 0$ and

$$c_j \equiv \big[\sqrt{w_j}/\sqrt{t_j}\big]\,p_N(t_j), \qquad \hat c_j \equiv \big[\sqrt{w_j}/\sqrt{t_j}\big]\,p_N'(t_j), \qquad j = 1, \ldots, N,$$

then $\hat{\mathbf c} = D_N^0 \mathbf c$, where

$$\big(D_N^0\big)_{ij} \equiv d_{ij}^0 = \begin{cases} 1/(t_i - t_j) & i \ne j, \\ (1 + t_i)/(2t_i) & i = j. \end{cases} \tag{A.9a}$$

Similarly, if

$$c_j \equiv \sqrt{\tilde w_j}\,p_N(\tilde t_j), \qquad \hat c_j \equiv \sqrt{\tilde w_j}\,p_N'(\tilde t_j), \qquad j = 0, \ldots, N,$$

then $\hat{\mathbf c} = \widetilde D_N^0 \mathbf c$, where

$$\big(\widetilde D_N^0\big)_{ij} \equiv \tilde d_{ij}^0 = \begin{cases} 1/(\tilde t_i - \tilde t_j) & i \ne j, \\ 1/2 & i = j \ne 0, \\ -N/2 & i = j = 0. \end{cases} \tag{A.9b}$$

  1/(t˜i − t˜j ) i 6= j,   0 0 e ≡ d˜ij = 1/2 D i = j 6= 0, N  ij  −N/2 i = j = 0.

(A.9b)

A.4. Approximation errors. Finally, again for reference, we state some of the approximation error bounds derived in [17] that are relevant to the present paper. $\mathbb P_N$ is used to denote the space of polynomials of degree N or less.

A.4.1. Least squares. $\pi_N u \in \mathbb P_N$ is defined by

$$\int_0^\infty e^{-t}[\pi_N u](t)\,p_N(t)\,dt = \int_0^\infty e^{-t} u(t)\,p_N(t)\,dt \qquad \forall p_N \in \mathbb P_N.$$

It is shown in [17] that, for any ε > 0 and positive integer p, the Laguerre coefficients of u decrease spectrally, i.e.,

$$\left|\int_0^\infty e^{-t} u(t) L_k(t)\,dt\right| \le C(p,\varepsilon)\,k^{-p/2}\,\|u\|_{p,1-\varepsilon}, \tag{A.10}$$

where

$$\|z\|_{p,\beta} \equiv \left\{\sum_{j=0}^{p} \int_0^\infty e^{-\beta t}\,\big|z^{(j)}(t)\big|^2\,dt\right\}^{1/2}.$$

(Here and later C(p, ε) is a constant whose exact form can be derived from [17]. For us, it is only important that: (1) C(p, ε) grows with p like $C^{p/2}$, where C is an absolute constant, and hence is eventually dominated by a term like $N^{-p/2}$; (2) C(p, ε) grows as ε decreases and is unbounded as ε → 0.) From (A.10), it then easily follows that

$$\|u - \pi_N u\|_{0,1} \le C(p,\varepsilon)\,N^{-p/2}\,\|u\|_{p,1-\varepsilon} \tag{A.11a}$$

and

$$\|u' - (\pi_N u)'\|_{0,1} \le C(p,\varepsilon)\,N^{1-p/2}\,\|u\|_{p,1-\varepsilon}, \tag{A.11b}$$

the latter bound utilising the identity

$$(\pi_N u)' = \pi_N(u') + a_N^{(1)} L_{N+1}', \quad\text{where}\quad a_N^{(m)} \equiv \int_0^\infty e^{-t} u^{(m)}(t) L_N(t)\,dt.$$



e−t u(m) (t)LN (t) dt.

240

GERALD MOORE

In fact, provided that u is sufficiently smooth, all derivatives of πN u may be bounded by repeatedly applying this identity to obtain (A.12)

(s)

(πN u)

= πN (u

A little algebra, using L0j = − (k) LN +1

=

NX +1−k

(s)

)+

Pj−1 i=0

αj Lj

s X

(s−k+1)

aN

(k)

LN +1 ,

s = 1, . . . , N.

k=1

Li , shows that with

NX +1−k

j=0

j=0

  N +1 |αj | = ; k

and hence (A.13)

(k)

kLN +1 k0,1 ≤ 2N ,

k = 1, . . . , N.

Inserting (A.10) and (A.13) in (A.12) then gives (A.14)

kπN ukN,1 ≤ kukN,1 + C(2N, ε)

2N N −N/2 kuk2N,1−ε. 1 − 1/N

A.4.2. Interpolation. Let $\phi_N u$ denote either of the following interpolation operators:

$$\phi_N^G u \in \mathbb P_N \quad\text{and}\quad \big(\phi_N^G u\big)(t_i) = u(t_i), \quad i = 0, \ldots, N, \tag{A.15a}$$

$$\phi_N^R u \in \mathbb P_N \quad\text{and}\quad \big(\phi_N^R u\big)(\tilde t_i) = u(\tilde t_i), \quad i = 0, \ldots, N, \tag{A.15b}$$

where $t_0 \equiv 0$. Then, using Sobolev embedding and inverse inequalities, the interpolation error is $N^{1/2}$ worse than the least-squares error (A.11); i.e., for any ε > 0 and positive integer p,

$$\|u - \phi_N u\|_{0,1} \le C(p,\varepsilon)\,N^{(1-p)/2}\,\|u\|_{p,1-\varepsilon}, \qquad \|u' - (\phi_N u)'\|_{0,1} \le C(p,\varepsilon)\,N^{(3-p)/2}\,\|u\|_{p,1-\varepsilon}. \tag{A.16}$$

Similarly, if $\hat\phi_{N-1}^G u$ denotes the interpolation operator defined by

$$\hat\phi_{N-1}^G u \in \mathbb P_{N-1} \quad\text{and}\quad \big(\hat\phi_{N-1}^G u\big)(t_i) = u(t_i), \quad i = 1, \ldots, N, \tag{A.17}$$

then, for any ε > 0 and positive integer p,

$$\|u - \hat\phi_{N-1}^G u\|_{0,1} \le C(p,\varepsilon)\,(N-1)^{(1-p)/2}\,\|u\|_{p,1-\varepsilon}, \qquad \|u' - (\hat\phi_{N-1}^G u)'\|_{0,1} \le C(p,\varepsilon)\,(N-1)^{(3-p)/2}\,\|u\|_{p,1-\varepsilon}. \tag{A.18}$$

(We note that [17] only analyses interpolation at Gauss-Radau points, but the arguments are similar in the other two cases.) Analogous weighted maximum-norm results are also possible, e.g.,

$$\|u - \phi_N u\|_{0,\sigma,\infty} \le C(p,\sigma,\varepsilon)\,N^{5/4-p/2}\,\|u\|_{p,\sigma-\varepsilon}, \qquad \|u' - (\phi_N u)'\|_{0,\sigma,\infty} \le C(p,\sigma,\varepsilon)\,N^{9/4-p/2}\,\|u\|_{p,\sigma-\varepsilon}, \tag{A.19}$$

where

$$\|z\|_{p,\beta,\infty} \equiv \max_{t \ge 0,\ 0 \le k \le p} \Big\{ e^{-\beta t}\,\big|z^{(k)}(t)\big| \Big\}.$$


A.4.3. Quadrature. The error in Gauss or Gauss-Radau quadrature can be written in various ways. For us, however, it is appropriate to combine the least-squares error with Sobolev embedding and inverse inequalities. Thus for Gauss-Radau quadrature, i.e., $\phi_N^R$ defined by (A.15b), we have

$$\int_0^\infty e^{-t}\big\{u(t) - [\phi_N^R u](t)\big\}\,dt = \int_0^\infty e^{-t}\big\{u(t) - [\pi_{2N} u](t)\big\}\,dt - \int_0^\infty e^{-t}\,\phi_N^R\big[u - \pi_{2N} u\big](t)\,dt$$

and hence

$$\left|\int_0^\infty e^{-t}\big\{u(t) - [\phi_N^R u](t)\big\}\,dt\right| \le C(2N,\varepsilon)\,(2N)^{(1-p)/2}\,\|u\|_{p,1-\varepsilon}. \tag{A.20}$$

Similarly, for Gauss quadrature, i.e., $\hat\phi_{N-1}^G$ defined by (A.17), we have

$$\int_0^\infty e^{-t}\big\{u(t) - [\hat\phi_{N-1}^G u](t)\big\}\,dt = \int_0^\infty e^{-t}\big\{u(t) - [\pi_{2N-1} u](t)\big\}\,dt - \int_0^\infty e^{-t}\,\hat\phi_{N-1}^G\big[u - \pi_{2N-1} u\big](t)\,dt$$

and hence

$$\left|\int_0^\infty e^{-t}\big\{u(t) - [\hat\phi_{N-1}^G u](t)\big\}\,dt\right| \le C(2N-1,\varepsilon)\,(2N-1)^{(1-p)/2}\,\|u\|_{p,1-\varepsilon}. \tag{A.21}$$
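Returning to the least-squares bound (A.10), a concrete illustration of coefficient decay: for u(t) = e^{−t} the Laguerre generating function gives the closed form $a_k = 2^{-(k+1)}$, i.e. geometric decay, far faster than the $k^{-p/2}$ rate guaranteed for functions with only finite weighted-Sobolev smoothness. The sketch below (ours) reproduces this numerically via Gauss-Laguerre quadrature:

```python
import numpy as np
from numpy.polynomial import laguerre as L

# Laguerre coefficients a_k = int_0^inf e^{-t} u(t) L_k(t) dt of u(t) = e^{-t},
# computed with a 60-point Gauss-Laguerre rule; closed form is a_k = 2^{-(k+1)}.
t, w = L.laggauss(60)
for k in range(6):
    Lk = L.lagval(t, [0.0] * k + [1.0])
    a_k = np.sum(w * np.exp(-t) * Lk)
    print(k, a_k)          # close to 2^{-(k+1)} = 0.5, 0.25, 0.125, ...
```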

References

1. H. Amann, Ordinary Differential Equations, Studies in Mathematics 13, de Gruyter, 1990. MR 91e:34001
2. Z. Bashir-Ali, J.R. Cash and G. Moore, Numerical experiments with algorithms for connecting orbits, in preparation.
3. W.-J. Beyn, The numerical computation of connecting orbits in dynamical systems, IMA J. Numer. Anal. 9 (1990), 379–405. MR 91i:65146
4. W.-J. Beyn, Numerical methods for dynamical systems, Advances in Numerical Analysis (W. Light, ed.), Clarendon Press, Oxford, 1991, pp. 175–236. MR 92f:65005
5. C. Canuto, M.Y. Hussaini, A. Quarteroni and T.A. Zang, Spectral Methods in Fluid Dynamics, Springer, New York, 1987. MR 89m:76004
6. A.R. Champneys, Y.A. Kuznetsov and B. Sandstede, A numerical toolbox for homoclinic bifurcation analysis, Int. J. Bif. & Chaos 6 (1996), 867–887. MR 98b:65094
7. P.J. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, New York, 1975. MR 56:7119
8. E.J. Doedel, A.R. Champneys, T.F. Fairgrieve, Y.A. Kuznetsov, B. Sandstede and X.J. Wang, AUTO 97: Continuation and Bifurcation Software for Ordinary Differential Equations (with HomCont), Technical Report, California Institute of Technology, 1998.
9. M.J. Friedman and E.J. Doedel, Numerical computation and continuation of invariant manifolds connecting fixed points, SIAM J. Numer. Anal. 28 (1991), 789–808. MR 98e:34058
10. D. Funaro, Computational aspects of pseudospectral Laguerre approximations, Applied Numerical Mathematics 6 (1989/90), 447–457. MR 91m:65074
11. W. Gautschi, Orthogonal polynomials: applications and computation, Acta Numerica 5 (A. Iserles, ed.), Cambridge University Press, 1996, pp. 45–119. MR 99j:65019
12. G.H. Golub and C.F. van Loan, Matrix Computations, Johns Hopkins, Baltimore, 1989. MR 90d:65055
13. G.H. Golub and J.H. Welsch, Calculation of Gauss quadrature rules, Math. Comp. 23 (1969), 221–230. MR 39:6513
14. J. Guckenheimer and P.H. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Applied Mathematical Sciences Vol. 42, Springer, 1983. MR 85f:58002


15. M.W. Hirsch and S. Smale, Differential Equations, Dynamical Systems and Linear Algebra, Academic Press, San Diego, 1974. MR 58:6484
16. Y. Liu, L. Liu and T. Tang, The numerical computation of connecting orbits in dynamical systems: a rational spectral approach, J. Comput. Phys. 111 (1994), 373–380. MR 95a:65109
17. Y. Maday, B. Pernaud-Thomas and H. Vandeven, Reappraisal of Laguerre type spectral methods, Rech. Aerosp. (1985-86), 13–35. MR 88b:65135
18. C. Mavriplis, Laguerre polynomials for infinite-domain spectral elements, J. Comput. Phys. 80 (1989), 480–488. MR 90f:65228
19. G. Moore, Computation and parametrisation of periodic and connecting orbits, IMA J. Numer. Anal. 15 (1995), 245–263. MR 96a:34087
20. G. Moore, Geometric methods for computing invariant manifolds, Applied Numerical Mathematics 17 (1995), 319–331. MR 96g:65082
21. G. Moore and E. Hubert, Algorithms for constructing stable manifolds of stationary solutions, IMA J. Numer. Anal. 19 (1999), 375–424. MR 2000f:65066
22. P. Rabinowitz and G.R. Weiss, Tables of abscissas and weights for numerical evaluation of integrals of the form $\int_0^\infty e^{-x} x^n f(x)\,dx$, Math. Tables Aids Comp. 13 (1959), 285–294. MR 21:6713
23. D. Ruelle, Elements of Differentiable Dynamics and Bifurcation Theory, Academic Press, San Diego, 1989. MR 90f:58048
24. M. Shub, Global Stability of Dynamical Systems, Springer, New York, 1987. MR 87m:58086

Department of Mathematics, Imperial College, Queen's Gate, London SW7 2BZ, England
E-mail address: [email protected]