C¹ Approximation of Vector Fields based on the Renormalization Group Method

Hayato CHIBA*1
Department of Applied Mathematics and Physics, Kyoto University, Kyoto, 606-8501, Japan

Received June 20, 2007; Revised April 21, 2008

Abstract. The renormalization group (RG) method for differential equations is one of the perturbation methods for obtaining solutions which approximate exact solutions for a long time interval. This article shows that, for a differential equation associated with a given vector field on a manifold, a family of approximate solutions obtained by the RG method defines a vector field which is close to the original vector field in the C¹ topology under appropriate assumptions. Furthermore, some topological properties of the original vector field, such as the existence of a normally hyperbolic invariant manifold and its stability, are shown to be inherited from those of the RG equation. This fact is applied to bifurcation theory.

1 Introduction

The renormalization group (RG) method for differential equations is one of the perturbation methods for obtaining solutions which approximate exact solutions for a long time interval. In their papers [1,2], Chen, Goldenfeld, and Oono established the RG method for ordinary differential equations of the form

  dx/dt = f(t, x) + εg(t, x), x ∈ Rⁿ,    (1.1)

where ε > 0 is a small parameter. For this equation, the method for deriving approximate solutions of the form

  x(t) = x₀(t) + εx₁(t) + ε²x₂(t) + ⋯    (1.2)

is called the naive expansion or the regular perturbation method, where the xᵢ(t) are governed by inhomogeneous linear ODEs obtained by substituting Eq.(1.2) into Eq.(1.1) and equating the coefficients of εⁱ on both sides of Eq.(1.1). It is well known that approximate solutions constructed by the naive expansion are in general valid only on a time interval of O(1), since secular terms diverge as t → ∞. Many techniques for obtaining approximate solutions which are valid on a long time interval have been developed; they are collectively called singular perturbation methods. The RG method proposed by Chen et al. is one of the singular perturbation methods, resembling the variation-of-constants method, in which the secular terms included in x₁(t), x₂(t), ⋯ of Eq.(1.2) are renormalized into the integral constant of x₀(t). The ODE satisfied by the renormalized integral constant is called the RG equation. Chen et al. showed that the RG method unifies the conventional singular perturbation methods, such as the multi-scale method, the boundary layer technique, WKB analysis, and the reductive perturbation method, by giving
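The secular divergence of the naive expansion is easy to observe numerically. The following Python sketch (an illustration, not part of the original article) uses the equation ẍ + x + εx³ = 0, which is reviewed in Sec.3 below, together with its standard first-order naive expansion for the initial condition x(0) = amp, ẋ(0) = 0; the RK4 integrator and all parameter values are arbitrary choices for the demonstration.

```python
import math

def rk4_step(f, t, s, dt):
    """One classical Runge-Kutta step for s' = f(t, s)."""
    k1 = f(t, s)
    k2 = f(t + dt/2, [x + dt/2*k for x, k in zip(s, k1)])
    k3 = f(t + dt/2, [x + dt/2*k for x, k in zip(s, k2)])
    k4 = f(t + dt,   [x + dt*k   for x, k in zip(s, k3)])
    return [x + dt/6*(a + 2*b + 2*c + d) for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

eps, amp = 0.1, 1.0   # illustrative values

def duffing(t, s):
    # x'' + x + eps*x^3 = 0 as a first-order system
    return [s[1], -s[0] - eps*s[0]**3]

def naive(t):
    # first-order naive expansion x0 + eps*x1 for x(0)=amp, x'(0)=0;
    # the t*sin(t) term is the secular term
    return (amp*math.cos(t)
            - eps*(3*amp**3/8)*t*math.sin(t)
            + eps*(amp**3/32)*(math.cos(3*t) - math.cos(t)))

s, t, dt = [amp, 0.0], 0.0, 0.001
err_early = err_late = 0.0
while t < 1.0/eps:
    s = rk4_step(duffing, t, s, dt)
    t += dt
    e = abs(s[0] - naive(t))
    if t <= 5.0:
        err_early = max(err_early, e)
    err_late = max(err_late, e)
print(err_early, err_late)
```

On an O(1) interval the naive expansion and the numerical solution agree closely, while toward t = 1/ε the secular term t sin t has visibly degraded the approximation; the RG method removes precisely this divergence.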

*1 E-mail address: [email protected]


explicit examples. Though the multi-scale method occasionally requires fractional power laws or logarithmic functions of ε in the expansion of x(t), the RG method needs only a power-series expansion of x(t) in ε, and it starts with the naive expansion of x(t) to reach the same result as the multi-scale method does. Kunihiro [3],[4] interpreted the RG method as a theory of envelopes for approximate solutions constructed by the naive expansion. His insight revealed why the RG method works well. Nozaki, Oono [5] and Goto, Masutomi, Nozaki [6] proposed a proto-RG equation, or translational Lie group method, to renormalize secular terms up to arbitrary order and to obtain higher order approximate solutions. Ei, Fujii, Kunihiro [7] applied the RG method to obtain approximate center manifolds and slow manifolds. Ziane [8] and DeVille et al. [9] proved that an orbit constructed by the RG method approximates an exact solution for a long time interval. Further, DeVille et al. [9] showed that if the unperturbed part of a given ODE is linear and diagonalizable, the RG equation for the ODE is equivalent to the normal form of the vector field. Despite the active interest in the RG method, little attention has been paid to the question of whether a family of approximate solutions to exact solutions of the original ODE (vector field), obtained by varying initial values, forms a well-defined vector field. Put another way, the question is whether approximate solutions intersect one another. Further, the RG method has so far been applied only to differential equations on Euclidean space; it has not yet been extended to differential equations on manifolds.
In the present paper, it is shown that for a given vector field of the form f(t, x) + εg(t, x) on an arbitrary manifold, approximate solutions obtained by the RG method define a vector field which is close to the original vector field in the C¹ topology, under appropriate boundedness assumptions on the flow of f(t, x) and on other functions. This implies that the approximate vector field works well in investigating properties of the original vector field that are persistent under C¹ perturbation. In particular, if the approximate vector field has a normally hyperbolic invariant manifold, then the original vector field is also expected to have an invariant manifold, because the Fenichel theory assures that normally hyperbolic invariant manifolds are persistent under C¹ perturbation. In fact, it is shown that the existence of an invariant manifold and its stability are inherited from those of the RG equation, since the flow of the RG equation is proved to be conjugate to that of the approximate vector field. In view of this, it is desirable that the RG equation be easier to solve than the original equation; indeed, it will be proved that the RG equation has larger symmetry than the original equation. This method will be applied in bifurcation theory to show that a periodic orbit emerges far away from a fixed point, which is an example of a global bifurcation other than the ordinary Hopf bifurcation. In particular, the RG method is applied to a time-dependent linear equation of the form

  ẋ = F(t)x + εG(t)x, x ∈ Rⁿ,    (1.3)

where F(t) and G(t) are n × n matrix functions. Under appropriate assumptions, the stability of the trivial solution x = 0 of Eq.(1.3) is shown to coincide with that of the RG equation for Eq.(1.3), which is a time-independent linear equation. By using this result, a synchronous solution of coupled oscillators is shown to be stable.

This paper is organized as follows: Sec.2 presents basic facts and definitions in dynamical systems. Sec.3 contains a simple example of the RG method. In Sec.4, a main theorem on approximate vector fields is proved. Sec.5 gives a few properties of the RG equation in terms of symmetries. In Sec.6, an invariant manifold of a given equation is shown to be inherited from its RG equation. In Sec.7, the RG method is applied to time-dependent

linear equations (1.3). In Appendix A, we discuss the higher order RG equation to prove Thm.6.1.

2 Notations

Let f be a time-independent Cʳ vector field on a Cʳ manifold M and ϕ : R × M → M its flow. We denote by ϕ_t(x₀) ≡ x(t), t ∈ R, the solution to the ODE ẋ = f(x) through x₀ ∈ M, which satisfies ϕ_t ∘ ϕ_s = ϕ_{t+s}, ϕ₀ = id_M, where id_M denotes the identity map of M. For fixed t ∈ R, ϕ_t : M → M defines a diffeomorphism of M. We assume ϕ_t is defined for all t ∈ R.

For a time-dependent vector field, let x(t, τ, ξ) denote the solution to an ODE ẋ(t) = f(t, x) through ξ at t = τ, which defines a flow ϕ : R × R × M → M by ϕ_{t,τ}(ξ) = x(t, τ, ξ). For fixed t, τ ∈ R, ϕ_{t,τ} : M → M is a diffeomorphism of M satisfying

  ϕ_{t,s} ∘ ϕ_{s,τ} = ϕ_{t,τ},  ϕ_{t,t} = id_M.    (2.1)

Conversely, a family of diffeomorphisms ϕ_{t,τ} of M which are C¹ with respect to t and τ and satisfy the above equality for all t, τ ∈ R defines a time-dependent vector field on M through

  f(t, x) = (d/dτ)|_{τ=t} ϕ_{τ,t}(x).    (2.2)
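The composition law (2.1) can be checked numerically for a concrete nonautonomous equation. The following Python sketch (an illustration, not from the paper) integrates ẋ = (sin t)x, whose flow ϕ_{t,τ}(ξ) = ξ e^{cos τ − cos t} is known in closed form, and verifies ϕ_{t,s} ∘ ϕ_{s,τ} = ϕ_{t,τ}:

```python
import math

def flow(t, tau, xi, n=2000):
    """phi_{t,tau}(xi) for dx/dt = sin(t)*x, computed with RK4 steps."""
    dt = (t - tau) / n
    x, s = xi, tau
    for _ in range(n):
        k1 = math.sin(s) * x
        k2 = math.sin(s + dt/2) * (x + dt/2*k1)
        k3 = math.sin(s + dt/2) * (x + dt/2*k2)
        k4 = math.sin(s + dt) * (x + dt*k3)
        x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
        s += dt
    return x

xi = 1.3
composed = flow(2.0, 0.5, flow(0.5, 0.0, xi))   # phi_{2,0.5} o phi_{0.5,0}
direct = flow(2.0, 0.0, xi)                     # phi_{2,0}
exact = xi * math.exp(math.cos(0.0) - math.cos(2.0))   # closed-form flow
print(composed, direct, exact)
```

Up to the integration error, the composed flow and the direct flow coincide, as (2.1) requires.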

3 A brief review of the renormalization group method

Before describing a general theory of the RG method in the next section, we review the RG method for obtaining approximate solutions of an ODE with a simple example. Let us consider an ODE

  ẍ + x + εx³ = 0, x ∈ R, |ε| ≪ 1.

There exists ε₀ > 0 such that the following holds for all |ε| < ε₀:
(i) Suppose that the norm condition (N1) is satisfied. Then,

  Φ_{t,t₀} := α_t ∘ ϕ^{RG}_{t−t₀} ∘ α_{t₀}^{−1} : α_{t₀}(U) → Rⁿ    (4.15)

defines a flow on U_ε := {(t, x) | t ∈ R_{≥T}, x ∈ α_t(U)} associated with a time-dependent vector field

  F_ε(t, x) := (d/da)|_{a=t} Φ_{a,t}(x).    (4.16)

The integral curves of F_ε are put in the form

  X(t, t₀; ξ) := x̂(t, t; A(t, t₀, ξ)),    (4.17)

where x̂ is defined by (4.10).
(ii) Suppose that the norm conditions (N1), (N2) are satisfied. Then, there exists a non-negative constant L₁ such that the vector field F_ε defined by (4.16) satisfies the inequality

  sup_{U_ε} ||f + εg − F_ε|| < ε²L₁.    (4.18)

(iii) Suppose that the norm conditions (N1) to (N3) are satisfied. Then, there exists a non-negative constant L₂ such that the vector field F_ε satisfies the inequality

  sup_{U_ε} ||D_{t,x}f + εD_{t,x}g − D_{t,x}F_ε|| < ε²L₂,    (4.19)



where D_{t,x}f = (∂f/∂t, ∂f/∂x) and ||D_{t,x}f|| = ||∂f/∂x|| + ||∂f/∂t||. In particular, F_ε is close to f + εg in the C¹ topology if |ε| is sufficiently small.

Proof of (i). Since h(t, x) is bounded on R_{≥T} × U by the norm condition (N1), εh(t, x) can be made arbitrarily close to the null function as a C³ function of x for sufficiently small ε. Since the flow ϕ⁰_{t,0} is a C⁴ diffeomorphism and since the set of diffeomorphisms is open in the space of C¹ maps in the C¹ topology, it follows that for sufficiently small ε, the map α_t given by (4.14) is a diffeomorphism from U into Rⁿ for each t ∈ R_{≥T}. Therefore the map Φ_{t,t₀} : α_{t₀}(U) → Rⁿ defined by (4.15) is a diffeomorphism from α_{t₀}(U) into Rⁿ as well, and satisfies Φ_{t,s} ∘ Φ_{s,t₀} = Φ_{t,t₀}, Φ_{t,t} = id_{α_t(U)}. This shows that Φ_{t,t₀} is a flow associated with the vector field F_ε defined by (4.16). Then it turns out that

  Φ_{t,t₀}(α_{t₀}(ξ)) = α_t ∘ ϕ^{RG}_{t−t₀}(ξ) = α_t(A(t, t₀, ξ)) = x̂(t, t; A(t, t₀, ξ)) = X(t, t₀; ξ),

which implies that X(t, t₀; ξ) gives an integral curve of F_ε, namely,

  (dX/dt)(t, t₀; ξ) = F_ε(t, X(t, t₀; ξ)).    (4.20)

This ends the proof.

Proof of (ii),(iii). Write h(t, A) as h_t(A). The vector field F_ε(t, x) is calculated as

  F_ε(t, x) = (d/da)|_{a=t} (ϕ⁰_{a,0} + εh_a) ∘ ϕ^{RG}_{a−t} ∘ α_t^{−1}(x)
    = (d/da)|_{a=t} (ϕ⁰_{a,0} + εh_a) ∘ α_t^{−1}(x) + (Dϕ⁰_{t,0} + εDh_t)_{α_t^{−1}(x)} (d/da)|_{a=t} ϕ^{RG}_{a−t} ∘ α_t^{−1}(x)
    = f(t, x₀(t, 0, α_t^{−1}(x))) + ε(∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x))) x₁(t, t; α_t^{−1}(x)) + εg(t, x₀(t, 0, α_t^{−1}(x)))
      + ε(d/da)|_{a=t} x₁(t, a, α_t^{−1}(x)) + ε(Dϕ⁰_{t,0} + εDh_t)_{α_t^{−1}(x)} R(α_t^{−1}(x))
    = f(t, x₀(t, 0, α_t^{−1}(x))) + εg(t, x₀(t, 0, α_t^{−1}(x))) + ε(∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x))) h_t(α_t^{−1}(x)) + ε²(Dh_t)_{α_t^{−1}(x)} R(α_t^{−1}(x)).

On account of α_t(x) = x₀(t, 0, x) + εh_t(x), the above equation is expanded as

  F_ε(t, x) = f(t, x) + ε(df/dε)|_{ε=0}(t, x₀(t, 0, α_t^{−1}(x))) + (ε²/2)(d²f/dε²)|_{ε=θ₁ε}(t, x₀(t, 0, α_t^{−1}(x))) + εg(t, x)
    + ε²(dg/dε)|_{ε=θ₂ε}(t, x₀(t, 0, α_t^{−1}(x))) + ε(∂f/∂x)(t, x) h_t((ϕ⁰_{t,0})^{−1}(x)) + ε²(∂f/∂x)(t, x)(dh_t/dε)|_{ε=θ₃ε}(α_t^{−1}(x))
    + ε²(d/dε)|_{ε=θ₄ε}((∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x)))) h_t(α_t^{−1}(x)) + ε²(Dh_t)_{α_t^{−1}(x)} R(α_t^{−1}(x)),

where 0 < θ₁, θ₂, θ₃, θ₄ < 1 are the constants in Taylor's formula. The second term of the right-hand side of the above is calculated as

  (df/dε)|_{ε=0}(t, x₀(t, 0, α_t^{−1}(x))) = (∂f/∂x)(t, x)(∂x₀/∂A)(t, 0, (ϕ⁰_{t,0})^{−1}(x)) (d/dε)|_{ε=0} α_t^{−1}(x) = −(∂f/∂x)(t, x) h_t((ϕ⁰_{t,0})^{−1}(x)),

which cancels the term ε(∂f/∂x)(t, x)h_t((ϕ⁰_{t,0})^{−1}(x)). Therefore we obtain

  F_ε(t, x) − f(t, x) − εg(t, x) = (ε²/2)(d²f/dε²)|_{ε=θ₁ε}(t, x₀(t, 0, α_t^{−1}(x))) + ε²(dg/dε)|_{ε=θ₂ε}(t, x₀(t, 0, α_t^{−1}(x)))
    + ε²(∂f/∂x)(t, x)(dh_t/dε)|_{ε=θ₃ε}(α_t^{−1}(x)) + ε²(d/dε)|_{ε=θ₄ε}((∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x)))) h_t(α_t^{−1}(x))
    + ε²(Dh_t)_{α_t^{−1}(x)} R(α_t^{−1}(x)).    (4.21)

We have to estimate the norm of the right-hand side of the above equation. At first, df/dε is given by

  (df/dε)(t, x₀(t, 0, α_t^{−1}(x))) = −(∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x))) (∂x₀/∂A)(t, 0, α_t^{−1}(x)) [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} h_t(α_t^{−1}(x)).    (4.22)

Note that the equations

  α_t^{−1}(x) = (ϕ⁰_{t,0} + εh_t)^{−1}(x) = (ϕ⁰_{t,0})^{−1} ∘ (id + εh_t ∘ (ϕ⁰_{t,0})^{−1})^{−1}(x),    (4.23)

  x₀(t, 0, α_t^{−1}(x)) = ϕ⁰_{t,0}(α_t^{−1}(x)) = (id − εh_t ∘ α_t^{−1})(x),    (4.24)

  (∂x₀/∂A)(t, 0, α_t^{−1}(x)) [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} = Σ_{k=0}^{∞} (−ε)ᵏ [ (∂h_t/∂A)_{α_t^{−1}(x)} ((∂x₀/∂A)_{α_t^{−1}(x)})^{−1} ]ᵏ    (4.25)

hold, and the left-hand sides of the above three equations are bounded by the norm conditions (N1),(N2). Therefore the right-hand side of Eq.(4.22) is bounded uniformly on R_{≥T}. To show the boundedness of the first term of the right-hand side of Eq.(4.21), it is sufficient to show that the derivative of each factor of the right-hand side of Eq.(4.22) is bounded. They are calculated as

  (d/dε)(∂f/∂x)(t, x₀(t, 0, α_t^{−1}(x))) = −(∂²f/∂x²)(t, x₀(t, 0, α_t^{−1}(x))) (∂x₀/∂A)(t, 0, α_t^{−1}(x)) [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} h_t(α_t^{−1}(x)),    (4.26)

  (d/dε)(∂x₀/∂A)(t, 0, α_t^{−1}(x)) = −(∂²x₀/∂A²)(t, 0, α_t^{−1}(x)) [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} h_t(α_t^{−1}(x)),    (4.27)

  (d/dε)[(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} = −[(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} ((d/dε)[(∂/∂A)(ϕ⁰_{t,0} + εh_t)]_{α_t^{−1}(x)}) [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)},    (4.28)

  (d/dε)[(∂/∂A)(ϕ⁰_{t,0} + εh_t)]_{α_t^{−1}(x)} = (∂h_t/∂A)_{α_t^{−1}(x)} − [(∂²/∂A²)(ϕ⁰_{t,0} + εh_t)]_{α_t^{−1}(x)} [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} h_t(α_t^{−1}(x)),    (4.29)

  (d/dε) h_t(α_t^{−1}(x)) = −(∂h_t/∂A)_{α_t^{−1}(x)} [(∂/∂A)(ϕ⁰_{t,0} + εh_t)]^{−1}_{α_t^{−1}(x)} h_t(α_t^{−1}(x)).    (4.30)

By the norm conditions and Eqs.(4.23)-(4.25), these are bounded uniformly on R_{≥T}. Therefore the first term of the right-hand side of Eq.(4.21) is bounded. The boundedness of the second term of the right-hand side of Eq.(4.21) is verified from Eq.(4.22) by using g instead of f, and the boundedness of the other terms of the right-hand side of Eq.(4.21) is verified from Eqs.(4.26)-(4.30) and the norm conditions (N1),(N2). This proves Thm 4.4 (ii). Thm 4.4 (iii) is verified by differentiating both sides of Eq.(4.21) with respect to x, t and estimating the norms as above. This calculation is elementary and omitted here.

Remark 4.5. Though we have treated the vector field F_ε on an open set of Rⁿ, the vector field F_ε may be defined on an arbitrary manifold M. Let {U_i}_{i∈Λ} be an open covering of M such that each closure Ū_i is compact. We identify U_i with an open subset of Rⁿ. Suppose that U_i ∩ U_j ≠ ∅ and let ψ_{ij} : U_i ∩ U_j → U_i ∩ U_j be the coordinate transformation from U_i to U_j. Let εR_i(A) and εR_j(A) be the RG vector fields constructed on U_i and U_j, respectively, and let ϕ_t^{RG(i)}, ϕ_t^{RG(j)} be the respective flows. By Eq.(4.7), it is easy to verify that R_i(A) = (Dψ_{ij})^{−1} R_j(ψ_{ij}(A)) and ϕ_t^{RG(i)} = ψ_{ij}^{−1} ∘ ϕ_t^{RG(j)} ∘ ψ_{ij}. Let F_ε^i, F_ε^j be the approximate vector fields constructed on U_i, U_j by (4.16), respectively. Then F_ε^i is transformed by the coordinate transformation as follows:

  Dψ_{ij} F_ε^i(t, x) = Dψ_{ij} (d/da)|_{a=t} Φ_{a,t}(x)
    = (d/da)|_{a=t} ψ_{ij} ∘ α_a ∘ ϕ^{RG(i)}_{a−t} ∘ α_t^{−1}(x)
    = (d/da)|_{a=t} (ψ_{ij} ∘ (x₀ + εh) ∘ ψ_{ij}^{−1}) ∘ (ψ_{ij} ∘ ϕ^{RG(i)}_{a−t} ∘ ψ_{ij}^{−1}) ∘ (ψ_{ij} ∘ (x₀ + εh) ∘ ψ_{ij}^{−1})^{−1}(ψ_{ij}(x)),

where ψ_{ij} ∘ x₀(t, 0, ψ_{ij}^{−1}(x)) and ψ_{ij} ∘ h(t, ψ_{ij}^{−1}(x)) = ψ_{ij} ∘ x₁(t, t, ψ_{ij}^{−1}(x)) are the coordinate representations on U_j of x₀(t, 0, x) and of x₁(t, t, x), respectively, which are represented in the coordinates on U_i. This means that

  Dψ_{ij} F_ε^i(t, x) = F_ε^j(t, ψ_{ij}(x)), x ∈ U_i.    (4.31)

Let {ρ_i}_{i∈Λ} be a partition of unity subordinate to the cover {U_i}_{i∈Λ} and define F_ε(t, x) := Σ_{i∈Λ} ρ_i(x)F_ε^i(t, x); then F_ε is a well-defined vector field on M which approximates f + εg.

Remark 4.6. Now that we have the approximate vector field F_ε(t, x) = f(t, x) + εg(t, x) + O(ε²), the Gronwall inequality immediately proves the error estimate for approximate solutions: Let x(t, t₀) be a solution of Eq.(4.1) satisfying the norm conditions (N) whose initial time is t₀. Let X(t, t₀; ξ) be the curve defined by Eq.(4.17). Suppose that x(t₀, t₀) = X(t₀, t₀; ξ) ∈ α_{t₀}(U). Then there exist positive constants ε₀, T, C such that the inequality

  ||x(t, t₀) − X(t, t₀; ξ)|| < Cε, 0 < t < T/ε    (4.32)

holds for 0 < ε < ε₀. This fact was essentially proved in Ziane [8] and DeVille et al. [9]. Note that DeVille et al. also treated cases in which the norm conditions (N) are not satisfied, for example g(t, x) = x/√t. The above fact also follows by putting m = 1 and replacing e^{Ft} by (Dϕ⁰_{t,0})_A in the proof of Thm.A.8, in which the error estimate for the higher order case, using the higher order RG equation, is proved.

In the next example, the RG method is applied to a vector field whose unperturbed part is nonlinear. Application to vector fields with linear unperturbed parts will be treated in Sec.6.

Example 4.7. Consider the system on {(x, y) | x > 0, y ∈ R} ⊂ R²

  ẋ = xy + εxy², ẏ = −log x + εy,    (4.33)

where ε ∈ R is a small constant. Note that the unperturbed part is nonlinear. In order to obtain approximate solutions to (4.33), we apply the RG method. The unperturbed system for (x₀, y₀) is written as ẋ₀ = x₀y₀, ẏ₀ = −log x₀. Its general solution, with initial value (x₀(0), y₀(0)) = (A, B), is given by

  x₀(t) = e^{B sin t + (log A) cos t},  y₀(t) = B cos t − (log A) sin t.    (4.34)

The RG equation defined by Eq.(4.11) is calculated as

  (d/dt)(A, B) = (ε/2)(A log A, B),    (4.35)

which is solved as

  A(t) = exp(p e^{εt/2}),  B(t) = q e^{εt/2},    (4.36)

where p, q ∈ R are arbitrary constants. On the other hand, h(t, A, B) defined by Eq.(4.8) is given by h(t, A, B) = (Dϕ⁰_{t,0})_{(A,B)} M(t), where

  (Dϕ⁰_{t,0})_{(A,B)} = ( (cos t/A) e^{B sin t + (log A) cos t}   sin t · e^{B sin t + (log A) cos t} ;  −(sin t)/A   cos t ),    (4.37)

  M(t) = ( ((A(log A)² − AB²)/3) sin³t + ((2AB log A)/3) cos³t − (AB/2) sin²t + AB² sin t − ((A log A)/4) sin 2t ;
           (((log A)² − B²)/3) cos³t − ((2B log A)/3) sin³t − ((log A)/2) sin²t − (log A)² cos t + (B/4) sin 2t ).    (4.38)

It is easy to verify that the norm conditions (N) are satisfied. According to (4.17) with the present A(t), B(t), an approximate solution to (4.33) is given by

  (X(t), Y(t)) = ( e^{B(t) sin t + (log A(t)) cos t}, B(t) cos t − (log A(t)) sin t ) + εh(t, A(t), B(t)).    (4.39)

Note that the RG vector field (ε/2)(x log x, y) commutes with the vector field (xy, −log x), the unperturbed part of Eq.(4.33), with respect to the Lie bracket product. This fact is proved generally in the next section.
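The quality of the renormalized approximation in Example 4.7 can be checked numerically. The sketch below (illustrative, not from the paper) works in the coordinates (u, y) = (log x, y), in which (4.33) becomes u̇ = y + εy², ẏ = −u + εy, and compares a numerical solution with the zeroth-order renormalized solution u ≈ B(t) sin t + (log A(t)) cos t, dropping the εh correction; the values ε = 0.02, p = 0.3, q = 0.4 are arbitrary.

```python
import math

eps, p, q = 0.02, 0.3, 0.4   # illustrative values

def rhs(t, s):
    # Eq.(4.33) in the coordinates (u, y) = (log x, y)
    u, y = s
    return [y + eps*y*y, -u + eps*y]

def rk4_step(f, t, s, dt):
    k1 = f(t, s)
    k2 = f(t + dt/2, [v + dt/2*k for v, k in zip(s, k1)])
    k3 = f(t + dt/2, [v + dt/2*k for v, k in zip(s, k2)])
    k4 = f(t + dt,   [v + dt*k   for v, k in zip(s, k3)])
    return [v + dt/6*(a + 2*b + 2*c + d) for v, a, b, c, d in zip(s, k1, k2, k3, k4)]

def u_rg(t):
    # zeroth-order renormalized solution: log A(t) = p e^{eps t/2}, B(t) = q e^{eps t/2}
    la, B = p*math.exp(eps*t/2), q*math.exp(eps*t/2)
    return la*math.cos(t) + B*math.sin(t)

s, t, dt = [p, q], 0.0, 0.001
err_rg = err_frozen = 0.0
while t < 1.0/eps:
    s = rk4_step(rhs, t, s, dt)
    t += dt
    err_rg = max(err_rg, abs(s[0] - u_rg(t)))
    # naive zeroth-order (unrenormalized) solution, for comparison
    err_frozen = max(err_frozen, abs(s[0] - (p*math.cos(t) + q*math.sin(t))))
print(err_rg, err_frozen)
```

The renormalized approximation tracks the slow amplitude growth e^{εt/2} and stays uniformly close on 0 ≤ t ≤ 1/ε, while the unrenormalized solution drifts away, in line with the error estimate (4.32).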

5 RG vector fields with symmetry

In this section, we consider an autonomous equation on a manifold M

  ẋ = f(x) + εg(x), x ∈ M.    (5.1)

For this equation, we suppose that (Dϕ⁰_s)^{−1}_A g(ϕ⁰_s(A)) is KBM on R_{≥T} and the RG equation for f + εg

  dA/dt = εR(A) = ε lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_s)^{−1}_A g(ϕ⁰_s(A)) ds    (5.2)

is defined, where ϕ⁰ is the flow of f(x), satisfying ϕ⁰_{t+s} = ϕ⁰_t ∘ ϕ⁰_s. Assume that a Lie group G acts on the manifold M. If a vector field f on M satisfies

  (Da)_x f(x) = f(ax), ∀a ∈ G, ∀x ∈ M,    (5.3)

then f is called invariant under the action of G, where (Da)_x is the derivative at x of the map M → M determined by a.

Proposition 5.1. If vector fields f and g are invariant under the action of a Lie group G, then so is the RG vector field for f + εg.

Proof. For all a ∈ G, R(aA) is calculated as

  R(aA) = lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_s)^{−1}_{aA} g(ϕ⁰_s(aA)) ds
    = lim_{t→∞} (1/(t−T)) ∫_T^t (Da)_A (Dϕ⁰_s)^{−1}_A (Da)^{−1}_{ϕ⁰_s(A)} g(aϕ⁰_s(A)) ds
    = (Da)_A lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_s)^{−1}_A g(ϕ⁰_s(A)) ds = (Da)_A R(A).

This proves the proposition. The next proposition was proved by Ziane [8] for the case that f(t, x) is a linear vector field.

Proposition 5.2. The RG vector field εR(A) for f + εg commutes with f with respect to the Lie bracket product. Equivalently, R(A) satisfies

  (Dϕ⁰_t)_A R(A) = R(ϕ⁰_t(A)),    (5.4)

for all t ∈ R and all A ∈ M.

Proof. For all s′ ∈ R and for all A ∈ M, R(ϕ⁰_{s′}(A)) is calculated as

  R(ϕ⁰_{s′}(A)) = lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_s)^{−1}_{ϕ⁰_{s′}(A)} g(ϕ⁰_s ∘ ϕ⁰_{s′}(A)) ds
    = lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_{s′})_A ∘ (Dϕ⁰_{s+s′})^{−1}_A g(ϕ⁰_{s+s′}(A)) ds
    = (Dϕ⁰_{s′})_A lim_{t→∞} (1/(t−T)) ∫_T^t (Dϕ⁰_{s+s′})^{−1}_A g(ϕ⁰_{s+s′}(A)) ds.

Putting s + s′ = s″ provides

  R(ϕ⁰_{s′}(A)) = (Dϕ⁰_{s′})_A lim_{t→∞} (1/(t−T)) ∫_{T+s′}^{t+s′} (Dϕ⁰_{s″})^{−1}_A g(ϕ⁰_{s″}(A)) ds″
    = (Dϕ⁰_{s′})_A R(A) + (Dϕ⁰_{s′})_A lim_{t→∞} (1/(t−T)) ∫_t^{t+s′} (Dϕ⁰_{s″})^{−1}_A g(ϕ⁰_{s″}(A)) ds″
      − (Dϕ⁰_{s′})_A lim_{t→∞} (1/(t−T)) ∫_T^{T+s′} (Dϕ⁰_{s″})^{−1}_A g(ϕ⁰_{s″}(A)) ds″
    = (Dϕ⁰_{s′})_A R(A),

since the last two limits vanish. This proves the proposition.

Propositions 5.1 and 5.2 show that if vector fields f and g are invariant under the action of a Lie group G, then the RG vector field εR(A) is invariant under the action of G and the one-parameter group {ϕ⁰_t}_{t∈R}. In this sense, the RG vector field has a simpler structure than the original vector field f + εg.
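Proposition 5.2 can be illustrated numerically with the unperturbed rotation f(x, y) = (y, −x) and the perturbation g(x, y) = (x − x³, 0), which will reappear in Example 6.3. Here the flow ϕ⁰_s is a rotation matrix, the averaging in (5.2) reduces to an average over one period, and the closed form R(A) = (1/2)(1 − (3/4)|A|²)A used in the check below follows from a short direct computation; this is an illustration, not a formula taken from the paper.

```python
import math

def rot(s, v):
    """The unperturbed flow phi^0_s: rotation of the plane (flow of f = (y, -x))."""
    c, si = math.cos(s), math.sin(s)
    return (c*v[0] + si*v[1], -si*v[0] + c*v[1])

def g(v):
    x, y = v
    return (x - x**3, 0.0)

def R(A, n=20000):
    # R(A) = average over one period of (D phi^0_s)^{-1} g(phi^0_s(A));
    # for the rotation flow, (D phi^0_s)^{-1} is rotation by -s
    acc = [0.0, 0.0]
    for i in range(n):
        s = 2*math.pi*i/n
        w = rot(-s, g(rot(s, A)))
        acc[0] += w[0]/n
        acc[1] += w[1]/n
    return acc

A, theta = (0.7, 0.2), 1.1
lhs = rot(theta, R(A))     # (D phi^0_theta) R(A)
rhs = R(rot(theta, A))     # R(phi^0_theta(A))
print(lhs, rhs)            # equal, as Eq.(5.4) asserts
```

The two sides agree to integration accuracy, and the averaged field is radial, which is why the RG equation decouples into (r, θ) form in the examples of the next section.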


6 Invariant Manifolds

In this section, we consider an equation of the form

  ẋ = Fx + εg(x), x ∈ Rⁿ,    (6.1)

where F is a diagonalizable n × n constant matrix all of whose eigenvalues lie on the imaginary axis, and where g is a polynomial vector field on Rⁿ. Note that in this situation, the norm conditions (N) are satisfied.

Theorem 6.1. If the RG vector field εR(x) for Eq.(6.1) has a boundaryless compact normally hyperbolic invariant manifold N, then Eq.(6.1) also has a normally hyperbolic invariant manifold N_ε for sufficiently small ε > 0. This invariant manifold N_ε is diffeomorphic to N, and its stability coincides with that of N.

We will prove this theorem in Appendix A; a brief sketch of the proof follows. Suppose that the RG vector field has a normally hyperbolic invariant manifold N. Then the approximate vector field F_ε(t, x) defined by Eq.(4.16) has a normally hyperbolic invariant manifold Ñ which is diffeomorphic to R × N in the (t, x) space, since the flow of the approximate vector field is related to the flow of the RG vector field through Eq.(4.15). Now we need Fenichel's theorem:

Theorem (Fenichel, [10]). Let M be a Cʳ manifold (r ≥ 1), and 𝔛ʳ(M) the set of Cʳ vector fields on M with the C¹ topology. Let f be a Cʳ vector field on M and suppose that N ⊂ M is a boundaryless compact connected normally hyperbolic f-invariant manifold. Then the following holds:
(i) There is a neighborhood U ⊂ 𝔛ʳ(M) of f such that there exists a normally hyperbolic g-invariant Cʳ manifold N_g ⊂ M for all g ∈ U.
(ii) N_g is diffeomorphic to N, and the diffeomorphism h : N_g → N is close to the identity id : N → N in the C¹ topology.

See [10],[11],[12] for the proof of the theorem and the definition of normal hyperbolicity. Since the approximate vector field F_ε(t, x) is C¹ close to the original vector field Fx + εg(x), we expect Fenichel's theorem to imply that the original vector field Fx + εg(x) has an invariant manifold which is diffeomorphic to R × N in the (t, x) space. Since Eq.(6.1) is an autonomous equation, Fx + εg(x) then has an invariant manifold which is diffeomorphic to N in the x space. The above argument needs to be modified because the approximate vector field is a time-dependent vector field even if the original vector field is independent of t, while Fenichel's theorem holds for time-independent vector fields. In Appendix A, we define the higher order RG equation to refine the error estimate of the approximate vector field and prove Thm.6.1.

Note that for the case of compact normally hyperbolic invariant manifolds with boundary, Fenichel's theorem is modified as follows: If a vector field f has a compact connected normally hyperbolic invariant manifold N with boundary, then a vector field g which is C¹ close to f has a locally invariant manifold N_g which is diffeomorphic to N. In this case, an orbit of the flow of g through a point on N_g may leave N_g through its boundary.

According to this theorem, Thm.6.1 has to be modified so that N_ε is locally invariant if N has a boundary.

Example 6.2. Consider the system on R²

  ẋ = y − x³ + εx, ẏ = −x.    (6.2)

The unperturbed system ẋ = y − x³, ẏ = −x has the origin as a fixed point which is not hyperbolic. By using Thm.6.1, we show that a Hopf bifurcation occurs at ε = 0 and a stable periodic orbit appears for ε > 0. Changing coordinates by (x, y) = (εX, εY), we obtain

  Ẋ = Y + ε(X − εX³), Ẏ = −X.    (6.3)

We want to regard the term ε²X³ as a first order term with respect to ε, since at this point we have defined only the first order RG equation, while the higher order RG equation will be defined in Appendix A. To do so, define the function ε₀(t) by ε₀(t) ≡ ε and rewrite Eq.(6.3) as

  Ẋ = Y + ε(X − ε₀X³),
  Ẏ = −X,
  ε̇₀ = 0.    (6.4)

Then this system takes the form (6.1), and the RG method is applicable to (6.4). Substitute X = X₀ + εX₁, Y = Y₀ + εY₁ into (6.4) and equate the coefficients of ε⁰, ε¹ to zero, respectively. Then we get

  Ẋ₀ = Y₀, Ẏ₀ = −X₀,    Ẋ₁ = Y₁ + X₀ − ε₀X₀³, Ẏ₁ = −X₁.    (6.5)

We denote a solution to the former by

  X₀(t) = Ae^{it} + Āe^{−it}, A ∈ C.    (6.6)

With this X₀(t), the special solution to the latter defined by (4.9), whose initial time is t = τ, is written as

  X₁(t) = (1/2)(A − 3ε₀A|A|²)(t − τ)e^{it} + (3i/8)A³e^{3it} + c.c.,    (6.7)

where c.c. denotes the complex conjugate of the first two terms of the right-hand side. Therefore, the RG equation for (6.3) is given by

  dA/dt = (ε/2)(A − 3ε₀A|A|²).    (6.8)

Substituting A = re^{iθ} into the above equation provides

  ṙ = (ε/2)(r − 3ε₀r³),
  θ̇ = 0.    (6.9)

Fixed points of this system are r = 0 and r = √(1/(3ε₀)) =: r₀ when ε₀ > 0. Further, we obtain

  (d/dr)|_{r=r₀} (ε/2)(r − 3ε₀r³) = (ε/2)(1 − 9ε₀ · 1/(3ε₀)) = −ε < 0.

This means that the RG equation (6.9) has the circle {r = r₀} as a stable normally hyperbolic invariant manifold (a set of fixed points) if ε > 0. By Thm 6.1, the system (6.2) also has a stable periodic orbit if ε > 0 is sufficiently small. This proves that the Hopf bifurcation occurs for (6.2). Note that the radius of the invariant circle for the RG equation is of order O(1/√ε). In the original coordinates (x, y), the radius of the periodic orbit for the system (6.2) is of order O(√ε). Indeed, the periodic solution is approximately given by x(t) = 2√(ε/3) cos t in the (x, y) coordinates. We can show that the second order RG equation defined in Def.A.5 for Eq.(6.3) is given by ṙ = ε(r − 3εr³)/2, θ̇ = −ε²/8. Thus we can obtain the same result as above without introducing ε₀ by using the second order RG equation, although it provides a modification to the motion in the θ direction.

We have just seen in Example 6.2 that the RG method can be used on problems in which there is an ordinary Hopf bifurcation. In the next example, we show that the RG method can also be used for systems in which a limit cycle is created far away from a fixed point, namely with O(1) radius.

Example 6.3. Consider the system on R²

  ẋ = y + ε(x − x³), ẏ = −x.    (6.10)

Substituting x = x₀ + εx₁, y = y₀ + εy₁ into (6.10) and equating the coefficients of ε⁰, ε¹ to zero, respectively, we get

  ẋ₀ = y₀, ẏ₀ = −x₀,    ẋ₁ = y₁ + x₀ − x₀³, ẏ₁ = −x₁.    (6.11)

We denote a solution to the former by

  x₀(t) = Ae^{it} + Āe^{−it}, A ∈ C.    (6.12)

With this x₀(t), the special solution to the latter defined by (4.9), whose initial time is t = τ, is written as

  x₁(t) = (1/2)(A − 3A|A|²)(t − τ)e^{it} + (3i/8)A³e^{3it} + c.c.,    (6.13)

where c.c. denotes the complex conjugate of the first two terms of the right-hand side. Therefore, the RG equation for (6.10) is given by

  dA/dt = (ε/2)(A − 3A|A|²).    (6.14)

Substituting A = re^{iθ} into the above equation provides

  ṙ = (ε/2)(r − 3r³),
  θ̇ = 0.    (6.15)

Fixed points of this system are r = 0 and r = √(1/3) =: r₀ when ε > 0, and it is easy to verify that r = r₀ is the stable fixed point. Therefore the system (6.10) has a stable periodic orbit if ε > 0 is sufficiently small. Note that since the radius of the invariant circle for the RG equation is of O(1), the radius of the periodic orbit of the system (6.10) is also of O(1). This can be verified numerically. For each ε, the point y₀ > 0 at which the periodic orbit of the system (6.10) crosses the y axis is calculated numerically, which yields Fig.1 below. The radius y₀ is almost independent of ε when ε > 0 is sufficiently small.


[Figure: plot of y₀ versus ε, with y₀ ranging over roughly 1 to 1.3 for ε between 0 and 0.5.]

Fig. 1: The radius y₀ of the periodic orbit of the system (6.10) for each ε.
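The predictions of Examples 6.2 and 6.3 (and Fig.1) can be reproduced with a few lines of code. The Python sketch below (illustrative; the integrator, the initial conditions and ε = 0.05 are arbitrary choices) integrates both systems past the transient and records the value y₀ at which the orbit crosses the y axis upward: for (6.10) it should be close to 2/√3 ≈ 1.155 almost independently of ε, while for (6.2) it should scale like 2√(ε/3).

```python
import math

def rk4_step(f, t, s, dt):
    k1 = f(t, s)
    k2 = f(t + dt/2, [v + dt/2*k for v, k in zip(s, k1)])
    k3 = f(t + dt/2, [v + dt/2*k for v, k in zip(s, k2)])
    k4 = f(t + dt,   [v + dt*k   for v, k in zip(s, k3)])
    return [v + dt/6*(a + 2*b + 2*c + d) for v, a, b, c, d in zip(s, k1, k2, k3, k4)]

def y_crossing(f, s0, t_end=400.0, dt=0.01):
    """Integrate past the transient, then record y at upward crossings of the y axis."""
    s, t, y0 = s0, 0.0, 0.0
    while t < t_end:
        prev = s
        s = rk4_step(f, t, s, dt)
        t += dt
        if t > t_end - 20.0 and prev[0] < 0.0 <= s[0]:
            lam = -prev[0] / (s[0] - prev[0])    # linear interpolation of the crossing
            y0 = prev[1] + lam*(s[1] - prev[1])
    return y0

eps = 0.05
sys_610 = lambda t, s: [s[1] + eps*(s[0] - s[0]**3), -s[0]]   # Example 6.3, Eq.(6.10)
sys_62  = lambda t, s: [s[1] - s[0]**3 + eps*s[0], -s[0]]     # Example 6.2, Eq.(6.2)

y0_610 = y_crossing(sys_610, [0.5, 0.0])
y0_62  = y_crossing(sys_62,  [0.1, 0.0])
print(y0_610, 2/math.sqrt(3))        # O(1) radius, ~1.155
print(y0_62, 2*math.sqrt(eps/3))     # O(sqrt(eps)) radius, ~0.258
```

The measured crossing values match the RG predictions to within the expected O(ε) corrections.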

7 Linear Equations

We apply the RG method to a time-dependent linear equation

  ẋ = F(t)x + εG(t)x, x ∈ Rⁿ,    (7.1)

where F(t) and G(t) are n × n matrix functions which are of class C¹ with respect to t. A solution to the equation ẋ₀ = F(t)x₀ is denoted by x₀(t, 0, v) = X(t)v, where X(t) is the fundamental matrix and v ∈ Rⁿ is an initial value. We assume that X(t)^{−1}G(t)X(t) is KBM on t ≥ 0, and we define a constant matrix

  R := lim_{t→∞} (1/t) ∫₀^t X(s)^{−1}G(s)X(s) ds.    (7.2)

We call it the secular matrix for Eq.(7.1). Then the special solution to the equation ẋ₁ = F(t)x₁ + G(t)x₀(t, 0, v) defined by (4.9) is given by

  x₁(t, τ; v) = X(t)G̃(t)v + X(t)(t − τ)Rv,  G̃(t) := ∫₀^t (X(s)^{−1}G(s)X(s) − R) ds,    (7.3)

and the RG equation for (7.1) is given by the linear equation

  v̇ = εRv, v ∈ Rⁿ.    (7.4)

If X(t) and G̃(t) are bounded on t ≥ 0, then Thm 4.4 (i) holds and the flow Φ_{t,t₀} defined by (4.15) takes the form

  Φ_{t,t₀} = X(t)(I + εG̃(t)) e^{εR(t−t₀)} (I + εG̃(t₀))^{−1} X(t₀)^{−1},    (7.5)

where I is the n × n identity matrix. Accordingly, the approximate vector field F_ε defined by (4.16) is expressed as

  F_ε(t, x) = F(t)x + εG(t)X(t)(I + εG̃(t))^{−1}X(t)^{−1}x + ε²X(t)G̃(t)R(I + εG̃(t))^{−1}X(t)^{−1}x.    (7.6)

The following proposition means that the stability of X(t)^{−1}x(t) is inherited from that of the RG equation if ε > 0 is sufficiently small. In fact, the proposition shows that if the real parts of all eigenvalues of R are negative, then ||X(t)^{−1}x(t)|| → 0 as t → ∞ for an arbitrary solution x(t) of (7.1), and that if there exists an eigenvalue of R whose real part is positive, then there exists a solution x(t) of (7.1) such that ||X(t)^{−1}x(t)|| → ∞ as t → ∞.
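The secular matrix (7.2) and the stability criterion can be illustrated on a simple case. In the Python sketch below (an illustration, not from the paper), the unperturbed part is the rotation F = [[0, 1], [−1, 0]], so that X(t) is a rotation matrix, and G = diag(a, b) is constant; a short computation gives R = ((a+b)/2)I, and since a + b < 0 for the chosen values, ||X(t)^{−1}x(t)|| should decay roughly like e^{ε(a+b)t/2}:

```python
import math

a, b, eps = -1.0, 0.2, 0.05   # illustrative values; note a + b < 0

def X(t):
    # fundamental matrix of x0' = F x0 with F = [[0,1],[-1,0]] (a rotation)
    c, s = math.cos(t), math.sin(t)
    return [[c, s], [-s, c]]

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

G = [[a, 0.0], [0.0, b]]

# secular matrix (7.2): the integrand X(s)^{-1} G X(s) is 2*pi-periodic here,
# so the limit equals the average over one period
n = 10000
R = [[0.0, 0.0], [0.0, 0.0]]
for i in range(n):
    s = 2*math.pi*i/n
    M = mat_mul(mat_mul(X(-s), G), X(s))   # X(s)^{-1} = X(-s) for a rotation
    for r in range(2):
        for c in range(2):
            R[r][c] += M[r][c]/n
print(R)   # ~ [[(a+b)/2, 0], [0, (a+b)/2]]

# stability check: ||X(t)^{-1} x(t)|| should behave like e^{eps*(a+b)/2*t}
def rhs(t, v):
    x, y = v
    return [y + eps*a*x, -x + eps*b*y]   # x' = F x + eps*G x

v, t, dt = [1.0, 0.0], 0.0, 0.01
while t < 100.0:
    k1 = rhs(t, v)
    k2 = rhs(t + dt/2, [u + dt/2*k for u, k in zip(v, k1)])
    k3 = rhs(t + dt/2, [u + dt/2*k for u, k in zip(v, k2)])
    k4 = rhs(t + dt,   [u + dt*k   for u, k in zip(v, k3)])
    v = [u + dt/6*(p1 + 2*p2 + 2*p3 + p4) for u, p1, p2, p3, p4 in zip(v, k1, k2, k3, k4)]
    t += dt
ratio = math.hypot(*v)   # X(t) is orthogonal here, so ||X(t)^{-1}x(t)|| = ||x(t)||; x(0) has norm 1
print(ratio, math.exp(eps*(a + b)/2*100.0))
```

The numerically averaged R has both eigenvalues equal to (a+b)/2 < 0, and the solution norm decays at the rate the RG equation (7.4) predicts, up to a bounded oscillatory factor.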

Proposition 7.1.

 defined in (7.3) is bounded in t ≥ 0. Let R be a secular matrix Suppose that X(t) and G(t)

for (7.1) and λ1 , · · · , λn its eigenvalues. Then, for each integer k with 1 ≤ k ≤ n, there exist positive constants D1 , D2 , t0 , a positive valued function φ(ε) with φ(ε) → 0 as ε → 0, and a solution x(t) of (7.1) such that the inequality

D2 eεRe(λk )t−2εφ(ε)t ≤ ||X(t)−1 x(t)|| ≤ D1 eεRe(λk )t+2εφ(ε)t

(7.7)

holds for t ≥ t0 .   = t(X(s)−1G(s)X(s) − R)ds is bounded, (I + εG(t))  −1 is expanded into the Neumann series as Proof. Since G(t) 0  n n  −1 = ∞ (I + εG(t)) n=0 (−ε) G(t) . With this expansion inserted into (7.6), F ε (t, x) is rewritten as Fε (t, x) = F(t)x + εG(t)x + ε2 H(t, ε)x, ∞     n+1 X(t)−1 .  G(t)  n X(t)−1 − G(t)X(t)G(t) (−ε)n X(t)G(t)R H(t, ε) :=

(7.8) (7.9)

n=0

Let us rewrite the equation (7.1) as x˙ = Fε (t, x) − ε2 H(t, ε)x.

(7.10)

Introducing a new function y(t) by x(t) = X(t)y(t), we verify that y satisfies the differential equation  ε)y, ε (t)y − ε2 H(t, y˙ = F

(7.11)

ε (t) := εX(t)−1G(t)X(t) + ε2 X(t)−1 H(t, ε)X(t), F ∞     n+1 ,   G(t)  n − X(t)−1G(t)X(t)G(t) (−ε)n G(t)R H(t, ε) :=

(7.12)

where

(7.13)

n=0

ε (t)y is given by and further that the flow of the linear vector field F εR(t−t0 )  t,t0 = (I + εG(t))e   0 ))−1 . Φ (I + εG(t

(7.14)

To prove the proposition, we can suppose that the secular matrix R is put in the Jordan form. In fact, if we change the variable x in (7.1) by x → Px, where P is an arbitrary nonsingular constant matrix, then F(t), G(t) and X(t)^{-1}G(t)X(t) are brought into P^{-1}F(t)P, P^{-1}G(t)P, and P^{-1}X(t)^{-1}G(t)X(t)P, respectively. This means that R turns into P^{-1}RP. In what follows, we assume that R is of the Jordan form

    ⎛ λ_1  p_1                         ⎞
    ⎜      λ_2  p_2                    ⎟
R = ⎜           ⋱    ⋱                ⎟ ,  (7.15)
    ⎜               λ_{n−1}  p_{n−1}   ⎟
    ⎝                        λ_n       ⎠

where λ_i (i = 1, …, n) are the eigenvalues of R such that Re(λ_1) ≤ … ≤ Re(λ_n), and where p_i (i = 1, …, n−1) are either 0 or 1. Now let us fix an integer k < n such that Re(λ_{k+1}) − Re(λ_k) > 0. The case that n = k and the case that there is no such k < n are treated later. Define the upper triangular matrices Q_1(t), Q_2(t) by

        ⎛ e^{ελ_1 t}        *          ⎞           ⎛ 0                                  ⎞
        ⎜           ⋱                  ⎟           ⎜    ⋱                               ⎟
Q_1(t) = ⎜             e^{ελ_k t}      ⎟ , Q_2(t) = ⎜       0                            ⎟ ,
        ⎜                       0      ⎟           ⎜         e^{ελ_{k+1} t}       *     ⎟
        ⎜                         ⋱    ⎟           ⎜                    ⋱               ⎟
        ⎝                            0 ⎠           ⎝                       e^{ελ_n t}   ⎠

such that Q_1(t) + Q_2(t) = e^{εRt}. Then a solution y(t) to (7.11) satisfies the integral equation

y(t) = Φ̃_{t,0}e_k − ε² ∫_0^t (I + εG̃(t)) Q_1(t − s) (I + εG̃(s))^{-1} H̃(s, ε) y(s) ds
     + ε² ∫_t^∞ (I + εG̃(t)) Q_2(t − s) (I + εG̃(s))^{-1} H̃(s, ε) y(s) ds,  (7.16)

where e_1, …, e_n are the canonical basis of R^n. The first term of the right hand side of the above is written as

Φ̃_{t,0}e_k = (I + εG̃(t)) ( q_1(t)e^{ελ_k t}, …, q_{k−1}(t)e^{ελ_k t}, e^{ελ_k t}, 0, …, 0 )^t,

where q_i(t) (i = 1, …, k−1) are monomials of t whose degrees are at most k − 1. The fact that G̃(t) = ∫_0^t ( X(s)^{-1}G(s)X(s) − R ) ds is bounded uniformly in t implies that (I + εG̃(t))^{±1} and X(t)^{-1}G(t)X(t) are also bounded uniformly in t, and thereby so is H̃(t, ε). Consequently, there exist positive constants C_0, C_1 such that

||H̃(t, ε)|| ≤ C_0,  ||(I + εG̃(t))^{±1}|| ≤ C_1.  (7.17)

Further, there exist positive constants C_2, C_3 and a positive valued function φ(ε) satisfying φ(ε) → 0 as ε → 0 such that

||Q_1(t)|| ≤ (C_2/φ(ε)^n) e^{εRe(λ_k)t + εφ(ε)t},  for t ≥ 0,
||Φ̃_{t,0}e_k|| ≤ (C_1C_2/φ(ε)^n) e^{εRe(λ_k)t + εφ(ε)t},  for t ≥ 0,  (7.18)
||Q_2(t)|| ≤ (C_3/φ(ε)^n) e^{εRe(λ_{k+1})t − εφ(ε)t},  for t ≤ 0.

Indeed, if εt ≥ 1, there exists a constant C such that ||Q_1(t)|| ≤ Cε^n t^n e^{εRe(λ_k)t}. Suppose that there exists a function q(ε) such that ||Q_1(t)|| ≤ Cε^n t^n e^{εRe(λ_k)t} ≤ Cq(ε) e^{εRe(λ_k)t + εφ(ε)t}. This inequality is equivalent to the inequality εt ≤ q(ε)^{1/n} e^{εφ(ε)t/n}, and it is easy to verify that this inequality holds when q(ε) = (n/(φ(ε)e))^n. Putting C_2 = C(n/e)^n, we obtain ||Q_1(t)|| ≤ (C_2/φ(ε)^n) e^{εRe(λ_k)t + εφ(ε)t} for εt ≥ 1. This inequality also holds when 0 ≤ εt < 1 because ||Q_1(t)|| ≤ Ce^{εRe(λ_k)t} holds if 0 ≤ εt < 1. The inequalities for ||Φ̃_{t,0}e_k|| and ||Q_2(t)|| above are verified in a similar way.

We define a sequence of functions {y_m(t)}_{m≥0} by

y_0(t) = Φ̃_{t,0}e_k,
y_{m+1}(t) = y_0(t) − ε² ∫_0^t (I + εG̃(t)) Q_1(t − s) (I + εG̃(s))^{-1} H̃(s, ε) y_m(s) ds
           + ε² ∫_t^∞ (I + εG̃(t)) Q_2(t − s) (I + εG̃(s))^{-1} H̃(s, ε) y_m(s) ds.
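The elementary maximization behind the choice q(ε) = (n/(φ(ε)e))^n can be checked numerically: the maximum of (εt)^n e^{−εφt} over t ≥ 0 equals (n/(eφ))^n and is attained at εt = n/φ. A sketch with arbitrary illustrative values of n, ε, φ (not taken from the text):

```python
import numpy as np

n, eps, phi = 3, 0.1, 0.2                      # arbitrary illustrative values
t = np.linspace(0.0, 600.0, 600001)            # the maximizer is t = n/(eps*phi) = 150
g = (eps * t) ** n * np.exp(-eps * phi * t)

analytic = (n / (np.e * phi)) ** n             # claimed maximum of (eps*t)^n e^{-eps*phi*t}
assert abs(g.max() - analytic) / analytic < 1e-6
```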

We need two lemmas to prove the proposition.

Lemma 7.2. Let φ(ε) = ε^{1/(2n+2)} and fix ε > 0 small so that Re(λ_{k+1}) − Re(λ_k) − 3φ(ε) > 0. Then there exists a constant 0 < p < 1 such that

||y_m(t) − y_{m−1}(t)|| ≤ p^m e^{εRe(λ_k)t + 2εφ(ε)t},  m = 1, 2, …,  for t ≥ 0.  (7.19)

Proof. We prove (7.19) by induction. For m = 1, the quantity ||y_1(t) − y_0(t)|| is estimated as follows:

||y_1 − y_0|| ≤ ε² ∫_0^t ||I + εG̃(t)|| · ||Q_1(t − s)|| · ||(I + εG̃(s))^{-1}|| · ||H̃(s, ε)|| · ||y_0(s)|| ds
            + ε² ∫_t^∞ ||I + εG̃(t)|| · ||Q_2(t − s)|| · ||(I + εG̃(s))^{-1}|| · ||H̃(s, ε)|| · ||y_0(s)|| ds
≤ (ε²C_0C_1³C_2²/φ(ε)^{2n}) e^{εRe(λ_k)t + εφ(ε)t} ∫_0^t e^{εφ(ε)s} ds
            + (ε²C_0C_1³C_2C_3/φ(ε)^{2n}) e^{εRe(λ_{k+1})t − εφ(ε)t} ∫_t^∞ e^{−εRe(λ_{k+1})s + εRe(λ_k)s + 3εφ(ε)s} ds
≤ (εC_0C_1³C_2/φ(ε)^{2n+1}) ( C_2 + C_3φ(ε)/(Re(λ_{k+1}) − Re(λ_k) − 3φ(ε)) ) e^{εRe(λ_k)t + 2εφ(ε)t}
= ε^{1/(2n+2)} C_0C_1³C_2 ( C_2 + C_3φ(ε)/(Re(λ_{k+1}) − Re(λ_k) − 3φ(ε)) ) e^{εRe(λ_k)t + 2εφ(ε)t}.

Define

p = ε^{1/(2n+2)} C_0C_1³C_2 ( C_2 + C_3φ(ε)/(Re(λ_{k+1}) − Re(λ_k) − 3φ(ε)) );  (7.20)

then Eq.(7.19) holds for m = 1. Further, if ε is sufficiently small, the inequality 0 < p < 1 holds. With this p, if we suppose Eq.(7.19) holds for m, then we can verify that ||y_{m+1} − y_m|| ≤ p^{m+1} e^{εRe(λ_k)t + 2εφ(ε)t} by the same calculation as above. □

This lemma implies that the sequence {y_m(t)}_{m≥0} converges to a solution of (7.11).

Lemma 7.3. Under the same conditions as Lem.7.2, there exist positive constants D_1 and t_0 such that

||y_m(t)|| ≤ D_1 e^{εRe(λ_k)t + 2εφ(ε)t},  m = 0, 1, …  (7.21)

for t ≥ t_0.

Proof.

We prove the lemma by induction. When m = 0, the above inequality is clear if D_1 ≥ C_1C_2/φ(ε)^n. Suppose that the above inequality holds for m; then

||y_{m+1}|| ≤ ||y_0|| + ε² ∫_0^t ||I + εG̃(t)|| · ||Q_1(t − s)|| · ||(I + εG̃(s))^{-1}|| · ||H̃(s, ε)|| · ||y_m(s)|| ds
           + ε² ∫_t^∞ ||I + εG̃(t)|| · ||Q_2(t − s)|| · ||(I + εG̃(s))^{-1}|| · ||H̃(s, ε)|| · ||y_m(s)|| ds
≤ (C_1C_2/φ(ε)^n) e^{εRe(λ_k)t + εφ(ε)t} + (ε²C_0C_1²C_2D_1/φ(ε)^n) e^{εRe(λ_k)t + εφ(ε)t} ∫_0^t e^{εφ(ε)s} ds
           + (ε²C_0C_1²C_3D_1/φ(ε)^n) e^{εRe(λ_{k+1})t − εφ(ε)t} ∫_t^∞ e^{−εRe(λ_{k+1})s + εRe(λ_k)s + 3εφ(ε)s} ds
≤ D_1 e^{εRe(λ_k)t + εφ(ε)t} + (εC_0C_1²C_2D_1/φ(ε)^{n+1}) e^{εRe(λ_k)t + 2εφ(ε)t} + (εC_0C_1²C_3D_1/(φ(ε)^n (Re(λ_{k+1}) − Re(λ_k) − 3φ(ε)))) e^{εRe(λ_k)t + 2εφ(ε)t}
≤ D_1 e^{εRe(λ_k)t + 2εφ(ε)t} ( e^{−εφ(ε)t} + (ε^{n/(2n+2)}/(C_1C_2)) p ),

where p is defined by Eq.(7.20). Since 0 < p < 1, we can take t_0 sufficiently large and ε sufficiently small such that

0 < e^{−εφ(ε)t} + (ε^{n/(2n+2)}/(C_1C_2)) p < 1,  for t ≥ t_0,

and then ||y_{m+1}(t)|| ≤ D_1 e^{εRe(λ_k)t + 2εφ(ε)t} holds for t ≥ t_0. This proves the lemma. □

Corollary 7.4. Consider the equation ẋ = Fx + εG(t)x, where all eigenvalues of F lie on the imaginary axis, and suppose that G(t) and G̃(t) are bounded uniformly in t. If ε > 0 is sufficiently small, then the stability of a trivial solution x(t) ≡ 0 coincides with that of a trivial solution of the RG equation v̇ = εRv, where R is a secular matrix for Fx + εG(t)x.

In the above corollary, the boundedness of G(t) and G̃(t) is satisfied if G(t) is a periodic or almost periodic function in t whose Fourier exponents do not have accumulation points in R.

Example 7.5. Let us consider the Mathieu equation

ÿ = −(a + 2ε cos t)y,  (7.25)

where a and ε are positive parameters. It is well known that there exists an area in the (a, ε) plane such that the origin is an unstable fixed point for (7.25) if (a, ε) is in this area. We calculate the area approximately by the RG method. Let a = a_0 + εa_1 and y = y_0 + εy_1. Substituting them into (7.25), and comparing the coefficients of ε⁰ and ε¹ on both sides of (7.25), provides

ÿ_0 = −b²y_0,  (7.26)
ÿ_1 = −b²y_1 − a_1 y_0 − 2 cos t · y_0,  (7.27)

where a_0 = b². A general solution to the former is given by

y_0(t) = Ae^{ibt} + Āe^{−ibt},  A ∈ C.  (7.28)

With this y_0, Eq.(7.27) is rewritten as

ÿ_1 = −b²y_1 − ( a_1 Ae^{ibt} + Ae^{i(1+b)t} + Āe^{i(1−b)t} + c.c. ).  (7.29)

If b = 1/2 (i.e. a_0 = 1/4), the secular term appears for all a_1. In fact, the equation

ÿ_1 = −(1/4)y_1 − ( a_1 Ae^{it/2} + Ae^{3it/2} + Āe^{it/2} + c.c. )  (7.30)

admits a special solution given by

y_1(t, τ; A) = i(a_1 A + Ā)(t − τ)e^{it/2} + (A/2)e^{3it/2} + c.c.,  (7.31)

where the initial time has been chosen to be t = τ. Then the RG equation for (7.25) is given by

Ȧ = iε(a_1 A + Ā).  (7.32)

Putting A = B + iC, B, C ∈ R, we break up (7.32) into

Ḃ = ε(1 − a_1)C,
Ċ = ε(1 + a_1)B.  (7.33)

A general solution to this equation is given by

B(t) = p e^{ε√(1−a_1²) t} + q e^{−ε√(1−a_1²) t}   (|a_1| ≤ 1),
B(t) = p e^{iε√(a_1²−1) t} + q e^{−iε√(a_1²−1) t}   (|a_1| > 1),  (7.34)

where p, q ∈ R are arbitrary constants. This shows that the origin is an unstable fixed point for the RG equation (7.33) if |a_1| < 1. This proves the instability of the fixed point of the Mathieu equation (7.25) if a = 1/4 + εa_1 + O(ε²), |a_1| < 1.

Example 7.6. Consider the coupled Mathieu equations

ẍ = −(a + 2ε cos t)x − εp(x − y) − εq(ẋ − ẏ),
ÿ = −(a + 2ε cos t)y − εp(y − x) − εq(ẏ − ẋ),  (7.35)

where ε > 0 and a, p, q ∈ R are constants. Put u = x + y; then the equation for u(t) is the Mathieu equation (7.25). In Example 7.5, we proved that if a = 1/4, the trivial solution u = 0 of the Mathieu equation (7.25) is unstable. In what follows, we assume that a = 1/4. Put z = x − y. Then z satisfies the equation

z̈ = −(1/4)z + ε( −2qż − 2pz − 2 cos t · z ).  (7.36)

Put further z = z_0 + εz_1, where z_0 is subject to the unperturbed equation z̈_0 = −(1/4)z_0 and has a general solution of the form z_0(t) = Ae^{it/2} + Āe^{−it/2}. With this z_0(t), the equation for z_1 proves to be given by

z̈_1 = −(1/4)z_1 − iqAe^{it/2} − 2pAe^{it/2} − Ae^{3it/2} − Āe^{it/2} + c.c.,  (7.37)

where c.c. denotes the complex conjugate of the last four terms of the right hand side. A special solution of this equation, whose initial time is t = τ, is given by

z_1(t) = i(iqA + 2pA + Ā)(t − τ)e^{it/2} + (A/2)e^{3it/2} + c.c.  (7.38)

Therefore the RG equation for Eq.(7.36) is put in the form

Ȧ = iε(iqA + 2pA + Ā),  A ∈ C.  (7.39)

Put A = α + iβ, α, β ∈ R. Then the above equation is rewritten as

d/dt (α, β)^t = ε ⎛ −q   −2p + 1 ⎞ (α, β)^t.  (7.40)
               ⎝ 2p + 1   −q   ⎠

Eigenvalues of the matrix in the right hand side of the above equation are λ± = −q ± √(1 − 4p²). Therefore the stability of the trivial solution (α, β) = (0, 0) of the RG equation is as given in Figure 2.
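The stability claims of Examples 7.5 and 7.6 can be reproduced directly from the RG matrices; the sketch below (parameter values are sample choices, with (p, q) taken from the cases shown in Fig. 3) evaluates the eigenvalues of the matrices in (7.33) and (7.40) numerically:

```python
import numpy as np

# Example 7.5: matrix of system (7.33), [[0, 1 - a1], [1 + a1, 0]] (factor eps omitted)
for a1 in (0.5, 2.0):
    M = np.array([[0.0, 1.0 - a1], [1.0 + a1, 0.0]])
    lam = np.linalg.eigvals(M)
    if abs(a1) < 1:
        assert max(lam.real) > 0             # origin unstable inside the resonance tongue
    else:
        assert np.allclose(lam.real, 0.0)    # purely imaginary: no instability at this order

# Example 7.6: matrix of (7.40); eigenvalues are -q +/- sqrt(1 - 4 p^2)
for p, q, stable in ((1.0, 1.0, True), (0.2, 0.2, False)):
    M = np.array([[-q, -2.0 * p + 1.0], [2.0 * p + 1.0, -q]])
    lam = np.linalg.eigvals(M)
    assert (max(lam.real) < 0) == stable
```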

[Figure 2 omitted: stability diagram in the (q, p) plane, with the marked points (1, 1) and (0.2, 0.2), the level p = 0.5, and the regions labelled by the signs of Re(λ±).]

Fig. 2: The trivial solution (α, β) = (0, 0) is stable on the shaded area.

Corollary 7.4 shows that the stability of the trivial solution z(t) = 0 of Eq.(7.36) coincides with the stability of (α, β) = (0, 0). This proves that if Re(λ±) < 0, then |x(t) − y(t)| → 0 as t → ∞, although each of |x(t)|, |y(t)| diverges as t → ∞. A numerical solution to Eq.(7.35) for ε = 0.01, x(0) = 0.5, y(0) = 0.1 is presented in Fig.3.
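The behaviour reported in Fig. 3 can be reproduced with a basic integration of (7.35). The sketch below uses a classical RK4 scheme; the step size, the time span t ≤ 400, and the comparison of early and late envelopes of z = x − y are choices made here, not taken from the paper:

```python
import numpy as np

def rk4_step(f, t, u, dt):
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(p, q, eps=0.01, a=0.25, dt=0.05, nsteps=8000):
    # state u = (x, x', y, y'); right hand side of Eq.(7.35)
    def f(t, u):
        x, vx, y, vy = u
        c = a + 2 * eps * np.cos(t)
        return np.array([
            vx, -c * x - eps * p * (x - y) - eps * q * (vx - vy),
            vy, -c * y - eps * p * (y - x) - eps * q * (vy - vx),
        ])
    u = np.array([0.5, 0.0, 0.1, 0.0])      # x(0) = 0.5, y(0) = 0.1 as in Fig. 3
    zs = [u[0] - u[2]]
    for i in range(nsteps):
        u = rk4_step(f, i * dt, u, dt)
        zs.append(u[0] - u[2])
    return np.array(zs)

for p, q, decays in ((1.0, 1.0, True), (0.2, 0.2, False)):
    z = simulate(p, q)                      # z = x - y up to t = 400
    early = np.abs(z[:800]).max()           # envelope over t in [0, 40]
    late = np.abs(z[-800:]).max()           # envelope over t in [360, 400]
    assert (late < 0.5 * early) == decays
```

For p = q = 1 the RG prediction Re(λ±) = −1 gives |z| ~ e^{−εt}, and for p = q = 0.2 the positive eigenvalue forces growth; both are visible in the computed envelopes.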

Acknowledgment
The author would like to thank Professor Toshihiro Iwai and Professor Hiroshi Kokubu for critical reading of the manuscript and for useful comments. The author is also grateful to Assistant Professor Yoshiyuki Y. Yamaguchi for drawing his attention to the RG method.

A Higher order RG equation
In this appendix, we define the higher order RG equation for constructing an approximate vector field which is O(ε^{m+1})-close to a given original vector field. The result is used in proving Theorem 6.1.

[Figure 3 omitted: time series of x(t) and y(t) for 0 ≤ t ≤ 200, panels (a) p = q = 1 and (b) p = q = 0.2.]

Fig. 3: Numerical results for Eq.(7.35). The synchronous solution x(t) = y(t) is (a) stable if p = q = 1, (b) unstable if p = q = 0.2.

Let F be a diagonalizable n × n matrix all of whose eigenvalues lie on the imaginary axis, and let g_1(t, x), …, g_m(t, x) be C^∞ vector fields on R × R^n which are polynomial in x and periodic in t. Consider an ODE

ẋ = Fx + εg_1(t, x) + ε²g_2(t, x) + … + ε^m g_m(t, x),  x ∈ R^n,  (A.1)

where ε ∈ R is a small parameter. Put x = x_0 + εx_1 + … + ε^m x_m. Then the above equation is rewritten as

ẋ_0 + εẋ_1 + … + ε^m ẋ_m = F(x_0 + εx_1 + … + ε^m x_m) + Σ_{i=1}^{m} ε^i g_i(t, x_0 + εx_1 + … + ε^m x_m).  (A.2)

Expanding the right hand side of the above equation with respect to ε and equating the coefficients of each ε^i of the both sides, we obtain ODEs for x_0, x_1, …, x_m:

ẋ_0 = Fx_0,  (A.3)
ẋ_1 = Fx_1 + G_1(t, x_0),  (A.4)
 ⋮
ẋ_i = Fx_i + G_i(t, x_0, x_1, …, x_{i−1}),  (A.5)
 ⋮
ẋ_m = Fx_m + G_m(t, x_0, x_1, …, x_{m−1}),  (A.6)

where G_i is some smooth function of t, x_0, x_1, …, x_{i−1} which is periodic in t. For example, G_1, G_2 and G_3 are given by

G_1(t, x_0) = g_1(t, x_0),  (A.7)
G_2(t, x_0, x_1) = (∂g_1/∂x)(t, x_0)x_1 + g_2(t, x_0),  (A.8)
G_3(t, x_0, x_1, x_2) = (∂g_1/∂x)(t, x_0)x_2 + (1/2)(∂²g_1/∂x²)(t, x_0)x_1² + (∂g_2/∂x)(t, x_0)x_1 + g_3(t, x_0),  (A.9)

respectively. We have to solve the above equations. At first, we denote by x_0(t, 0, A) = X(t)A a solution of the unperturbed part ẋ_0 = Fx_0, where X(t) = e^{Ft} is the fundamental matrix and A ∈ R^n is an initial value. With this x_0, by the similar discussion to Sec.4, a solution of Eq.(A.4) is given by

x_1(t, τ; A) = h_t^{(1)}(A) + X(t)R_1(A)(t − τ),  (A.10)

where h_t^{(1)}(A) and R_1(A) are defined by

R_1(A) = lim_{t→∞} (1/t) ∫^t X(s)^{-1}G_1(s, X(s)A) ds,  (A.11)
h_t^{(1)}(A) = X(t) ∫^t ( X(s)^{-1}G_1(s, X(s)A) − R_1(A) ) ds,  (A.12)

respectively. The integral constants of the indefinite integrals in Eqs.(A.11), (A.12) and Eqs.(A.13), (A.14) below are fixed arbitrarily. By choosing these integral constants appropriately, we can reduce the RG equation. This will be done in a forthcoming paper. Note that since X(t) and G_1(t, x) are almost periodic in t, X(t)^{-1}G_1(t, X(t)A) is bounded uniformly in t ∈ R and R_1(A) is well-defined (see Lemma 4.1). With this x_0 and x_1, we solve the equation for x_2, as will be shown in Prop.A.1. This process is performed step by step until a solution x_m to Eq.(A.6) is obtained.

Proposition A.1. Define functions R_i(A) and h_t^{(i)}(A), i = 2, …, m, by

R_i(A) := lim_{t→∞} (1/t) ∫^t ( X(s)^{-1}G_i(s, X(s)A, h_s^{(1)}(A), …, h_s^{(i−1)}(A)) − X(s)^{-1} Σ_{k=1}^{i−1} (Dh_s^{(k)})_A R_{i−k}(A) ) ds,  (A.13)
h_t^{(i)}(A) := X(t) ∫^t ( X(s)^{-1}G_i(s, X(s)A, h_s^{(1)}(A), …, h_s^{(i−1)}(A)) − X(s)^{-1} Σ_{k=1}^{i−1} (Dh_s^{(k)})_A R_{i−k}(A) − R_i(A) ) ds.  (A.14)

Then, the curve defined by

x_i := x_i(t, τ; A) = h_t^{(i)}(A) + y_1^{(i)}(t, A)(t − τ) + y_2^{(i)}(t, A)(t − τ)² + … + y_i^{(i)}(t, A)(t − τ)^i  (A.15)

gives a solution to Eq.(A.5) for i = 1, 2, …, m, where y_1^{(i)}, …, y_i^{(i)} are defined by

y_1^{(i)}(t, A) = X(t)R_i(A) + Σ_{k=1}^{i−1} (Dh_t^{(k)})_A R_{i−k}(A),  (A.16)
y_j^{(i)}(t, A) = (1/j) Σ_{k=1}^{i−1} (∂y_{j−1}^{(k)}/∂A)(t, A) R_{i−k}(A),  (j = 2, 3, …, i−1),  (A.17)
y_i^{(i)}(t, A) = (1/i) Σ_{k=1}^{i−1} (∂y_{i−1}^{(k)}/∂A)(t, A) R_{i−k}(A) = (1/i) (∂y_{i−1}^{(i−1)}/∂A)(t, A) R_1(A),  (A.18)
y_j^{(i)}(t, A) = 0,  (j > i).  (A.19)

Proof. We prove Prop.A.1 by induction. Assume that x_1, …, x_{i−1} defined by Eq.(A.15) are solutions of Eq.(A.5). In order to prove that x_i defined by Eq.(A.15) is a solution of Eq.(A.5), we substitute Eq.(A.15) into Eq.(A.5) to obtain

Fh_t^{(i)}(A) + G_i(t, X(t)A, h_t^{(1)}(A), …, h_t^{(i−1)}(A)) − Σ_{k=1}^{i−1} (Dh_t^{(k)})_A R_{i−k}(A) − X(t)R_i(A)
+ Σ_{k=1}^{i} ẏ_k^{(i)}(t, A)(t − τ)^k + Σ_{k=1}^{i} y_k^{(i)}(t, A) k(t − τ)^{k−1}
= Fh_t^{(i)}(A) + F Σ_{k=1}^{i} y_k^{(i)}(t, A)(t − τ)^k + G_i(t, x_0, x_1, …, x_{i−1}).  (A.20)

It is easy to verify that G_i(t, x_0, x_1, …, x_{i−1}) with x_0, x_1, …, x_{i−1} defined by Eq.(A.15) is a polynomial in t − τ whose degree is at most i − 1. We denote it by

G_i(t, x_0, …, x_{i−1}) = Σ_{k=0}^{i−1} G̃_i^{(k)}(t, x_0, …, x_{i−1})(t − τ)^k.  (A.21)

Note that G̃_i^{(0)}(t, x_0, …, x_{i−1}) = G_i(t, X(t)A, h_t^{(1)}(A), …, h_t^{(i−1)}(A)). Equating the coefficients of (t − τ)^k of the both sides of Eq.(A.20) with Eq.(A.21), we obtain the equations

y_1^{(i)}(t, A) = Σ_{k=1}^{i−1} (Dh_t^{(k)})_A R_{i−k}(A) + X(t)R_i(A),  (A.22)
ẏ_k^{(i)}(t, A) + (k + 1)y_{k+1}^{(i)}(t, A) = Fy_k^{(i)}(t, A) + G̃_i^{(k)}(t, x_0, …, x_{i−1}),  (k = 1, …, i−1),  (A.23)
ẏ_i^{(i)}(t, A) = Fy_i^{(i)}(t, A).  (A.24)

These equations determine y_1^{(i)}, y_2^{(i)}, …, y_i^{(i)}. Eq.(A.22) gives Eq.(A.16). From Eq.(A.23) for k = 1, we obtain

2y_2^{(i)}(t, A) = Fy_1^{(i)}(t, A) − ẏ_1^{(i)}(t, A) + G̃_i^{(1)}(t, x_0, …, x_{i−1})
= Σ_{k=1}^{i−1} F(Dh_t^{(k)})_A R_{i−k}(A) + FX(t)R_i(A)
  − Σ_{k=1}^{i−1} (∂/∂t)(Dh_t^{(k)})_A R_{i−k}(A) − (∂/∂t)X(t)R_i(A) + G̃_i^{(1)}(t, x_0, …, x_{i−1})
= Σ_{k=1}^{i−1} F(Dh_t^{(k)})_A R_{i−k}(A) − Σ_{k=1}^{i−1} (∂/∂A)( Fh_t^{(k)}(A) + G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A))
  − Σ_{j=1}^{k−1} (Dh_t^{(j)})_A R_{k−j}(A) − X(t)R_k(A) ) R_{i−k}(A) + G̃_i^{(1)}(t, x_0, …, x_{i−1})
= Σ_{k=1}^{i−1} (∂/∂A)( Σ_{j=1}^{k−1} (Dh_t^{(j)})_A R_{k−j}(A) + X(t)R_k(A) ) R_{i−k}(A)
  + G̃_i^{(1)}(t, x_0, …, x_{i−1}) − Σ_{k=1}^{i−1} (∂/∂A)G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A)) R_{i−k}(A)
= Σ_{k=1}^{i−1} (∂y_1^{(k)}/∂A)(t, A) R_{i−k}(A)
  + G̃_i^{(1)}(t, x_0, …, x_{i−1}) − Σ_{k=1}^{i−1} (∂/∂A)G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A)) R_{i−k}(A).  (A.25)

If the equality

G̃_i^{(1)}(t, x_0, …, x_{i−1}) = Σ_{k=1}^{i−1} (∂/∂A)G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A)) R_{i−k}(A)  (A.26)

holds, Eq.(A.17) for j = 2 is obtained. The left hand side of the above is calculated as

G̃_i^{(1)}(t, x_0, …, x_{i−1}) = −(∂/∂τ) G_i(t, x_0, …, x_{i−1}) |_{τ=t}
= −Σ_{j=1}^{i−1} lim_{τ→t} (∂G_i/∂x_j)(t, x_0, …, x_{i−1}) (∂/∂τ)x_j(t, τ; A) |_{τ=t}
= Σ_{j=1}^{i−1} lim_{τ→t} (∂G_i/∂x_j)(t, x_0, …, x_{i−1}) ( X(t)R_j(A) + Σ_{k=1}^{j−1} (Dh_t^{(k)})_A R_{j−k}(A) ).  (A.27)

The right hand side of Eq.(A.26) is calculated as

Σ_{k=1}^{i−1} (∂/∂A)G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A)) R_{i−k}(A)
= Σ_{k=1}^{i−1} Σ_{j=1}^{k−1} lim_{τ→t} (∂G_k/∂x_j)(t, x_0, …, x_{i−1})(Dh_t^{(j)})_A R_{i−k}(A) + Σ_{k=1}^{i−1} lim_{τ→t} (∂G_k/∂x_0)(t, x_0, …, x_{i−1}) X(t)R_{i−k}(A).  (A.28)

Now we need a simple lemma.

Lemma A.2. For integers i, j with i > j, the equality

∂G_i/∂x_j = ∂G_{i−1}/∂x_{j−1} = … = ∂G_{i−j}/∂x_0  (A.29)

holds.

We will prove this lemma after the proof of Prop.A.1 is completed. According to Lemma A.2, Eq.(A.27) and Eq.(A.28) are brought into

G̃_i^{(1)}(t, x_0, …, x_{i−1}) = Σ_{j=1}^{i−1} lim_{τ→t} (∂G_{i−j}/∂x_0)(t, x_0, …, x_{i−1}) ( X(t)R_j(A) + Σ_{k=1}^{j−1} (Dh_t^{(k)})_A R_{j−k}(A) ),  (A.30)

and

Σ_{k=1}^{i−1} (∂/∂A)G_k(t, X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A)) R_{i−k}(A)
= Σ_{k=1}^{i−1} Σ_{j=1}^{k−1} lim_{τ→t} (∂G_{k−j}/∂x_0)(t, x_0, …, x_{i−1})(Dh_t^{(j)})_A R_{i−k}(A) + Σ_{k=1}^{i−1} lim_{τ→t} (∂G_k/∂x_0)(t, x_0, …, x_{i−1}) X(t)R_{i−k}(A),  (A.31)

respectively. This proves Eq.(A.26), and Eq.(A.17) for j = 2 is verified.

By using Eq.(A.23), y_3^{(i)}, …, y_{i−1}^{(i)}, y_i^{(i)} are calculated in the same way as above, and Eq.(A.17) and Eq.(A.18) are proved, but we omit the detailed calculation here. Next, we have to show that y_i^{(i)} given by Eq.(A.18) satisfies Eq.(A.24). To show this, according to y_1^{(1)}(t, A) = X(t)R_1(A), we rewrite Eq.(A.18) as

y_i^{(i)}(t, A) = (1/i) (∂y_{i−1}^{(i−1)}/∂A) R_1(A)
= (1/(i(i−1))) (∂/∂A)( (∂y_{i−2}^{(i−2)}/∂A) R_1(A) ) R_1(A)
 ⋮
= (1/i!) (∂/∂A)( … (∂/∂A)( (∂y_1^{(1)}/∂A) R_1(A) ) … ) R_1(A)
= (1/i!) X(t) (∂/∂A)( … (∂/∂A)( (∂R_1/∂A) R_1(A) ) … ) R_1(A).

Since X(t) is the fundamental matrix of the equation ẏ = Fy, y_i^{(i)} satisfies Eq.(A.24). Therefore x_i defined by Eq.(A.15) satisfies Eq.(A.5). This ends the proof of Prop.A.1. □

Proof of Lemma A.2.



By definition, G_i(t, x_0, …, x_{i−1}) is written as

G_i(t, x_0, …, x_{i−1}) = Σ_{k=1}^{i−1} (1/k!) (d^k/dε^k)|_{ε=0} g_{i−k}(t, Σ_{l=0}^{m} ε^l x_l) + g_i(t, x_0).

On the other hand, G_{i−1}(t, x_0, …, x_{i−2}) is rewritten as

G_{i−1}(t, x_0, …, x_{i−2}) = Σ_{k=0}^{i−2} (1/k!) (d^k/dε^k)|_{ε=0} g_{i−k−1}(t, Σ_{l=0}^{m} ε^l x_l)
= Σ_{k=1}^{i−1} (1/(k−1)!) (d^{k−1}/dε^{k−1})|_{ε=0} g_{i−k}(t, Σ_{l=0}^{m} ε^l x_l).

To show the equality ∂G_i/∂x_j = ∂G_{i−1}/∂x_{j−1}, it is sufficient to prove the equality

(∂/∂x_j) (1/k!) (d^k/dε^k)|_{ε=0} g_{i−k}(t, Σ_{l=0}^{m} ε^l x_l) = (∂/∂x_{j−1}) (1/(k−1)!) (d^{k−1}/dε^{k−1})|_{ε=0} g_{i−k}(t, Σ_{l=0}^{m} ε^l x_l),  (A.32)

for k = 1, 2, …, i−1. For simplicity, we denote g_{i−k}(t, x) by g(x). Consider the trivial equality

(∂/∂x_j) g(Σ_{l=0}^{m} ε^l x_l) = ε (∂/∂x_{j−1}) g(Σ_{l=0}^{m} ε^l x_l),  j = 1, …, m.  (A.33)

Expanding the both sides of the above equation with respect to ε, we obtain

(∂/∂x_j) ( Σ_{p=0}^{k} (ε^p/p!) (d^p/dε^p)|_{ε=0} g(Σ_{l=0}^{m} ε^l x_l) + R̃(ε, x_0, …, x_m) )
= ε (∂/∂x_{j−1}) ( Σ_{p=0}^{k} (ε^p/p!) (d^p/dε^p)|_{ε=0} g(Σ_{l=0}^{m} ε^l x_l) + R̃(ε, x_0, …, x_m) ),

where R̃ is some function satisfying R̃ = O(|ε|^{k+1}). Equating the coefficients of ε^k of the both sides of the above, we obtain Eq.(A.32). This ends the proof of Lemma A.2. □
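An instance of Lemma A.2 can be verified numerically. With scalar polynomial inputs g_1(x) = x³, g_2(x) = x², g_3 = 0 (arbitrary choices made for this sketch, not taken from the paper), the explicit formulas (A.8) and (A.9) give ∂G_3/∂x_1 = g_1''(x_0)x_1 + g_2'(x_0) = ∂G_2/∂x_0, which is the case i = 3, j = 1 of (A.29):

```python
# Lemma A.2 for i = 3, j = 1: dG3/dx1 == dG2/dx0, with g1(x) = x**3, g2(x) = x**2, g3 = 0.
dg1 = lambda x: 3 * x**2          # g1'
g2 = lambda x: x**2

def G2(x0, x1):
    return dg1(x0) * x1 + g2(x0)                      # Eq.(A.8)

def G3(x0, x1, x2):
    # Eq.(A.9): (dg1)*x2 + (1/2)*(d2g1)*x1^2 + (dg2)*x1, with g3 = 0
    return dg1(x0) * x2 + 0.5 * 6 * x0 * x1**2 + 2 * x0 * x1

h = 1e-6
for x0, x1, x2 in ((0.3, -1.2, 0.7), (1.5, 0.4, -0.9)):
    dG3_dx1 = (G3(x0, x1 + h, x2) - G3(x0, x1 - h, x2)) / (2 * h)
    dG2_dx0 = (G2(x0 + h, x1) - G2(x0 - h, x1)) / (2 * h)
    assert abs(dG3_dx1 - dG2_dx0) < 1e-6
```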



Remark A.3. Prop.A.1 also holds for a time-dependent matrix F(t) as long as the fundamental matrix X(t) of F(t) is periodic in t. Further, for Prop.A.1, we do not need to assume that the functions g_i in Eq.(A.1) are polynomial in x. These assumptions are used to prove the statements below.

Lemma A.4. For Eq.(A.1), the functions h_t^{(i)}(A) with i = 1, 2, …, m defined by (A.12) and (A.14) are bounded uniformly in t.

To prove this lemma, we need the theory of almost periodic functions. Indeed, we can show that the functions h_t^{(i)}(A) are almost periodic. This fact also holds even if the g_i(t, x) in Eq.(A.1) are not periodic in t but almost periodic in t, as long as the set of Fourier exponents of the g_i(t, x) does not have accumulation points in R. See Fink [13] for the definitions and basic facts of almost periodic functions.

Proof of Lem.A.4.

We prove the lemma by induction. At first, note that G_1(t, x_0) defined by Eq.(A.7) is almost periodic uniformly in x_0 because it is periodic in t and polynomial in x_0. Therefore, the function X(t)^{-1}G_1(t, X(t)A) included in Eq.(A.12) is almost periodic uniformly in A (see Thm.2.11 of Fink [13]). Each component of the vector-valued function X(t)^{-1}G_1(t, X(t)A) is of the form Σ_{k=1}^{p} b_k(t)e^{iξ_k t}, where the b_k(t) are periodic functions and the ξ_k ∈ R are constants. Since each b_k(t) can be expanded as a Fourier series in the ordinary sense, the set of Fourier exponents of Σ_{k=1}^{p} b_k(t)e^{iξ_k t} does not have accumulation points in R. Since the Fourier coefficient corresponding to the zero Fourier exponent, if it exists, is R_1(A) defined by Eq.(A.11), X(t)^{-1}G_1(t, X(t)A) − R_1(A) does not have zero as a Fourier exponent. Therefore ∫^t ( X(s)^{-1}G_1(s, X(s)A) − R_1(A) ) ds is almost periodic (we use Thm.4.12 of Fink [13]), and this proves Lemma A.4 for h_t^{(1)}(A).

Suppose that Lem.A.4 holds for h_t^{(1)}(A), …, h_t^{(i−1)}(A). As above, the integrand in Eq.(A.14) is almost periodic uniformly in A because G_i(t, x_0, …, x_{i−1}) is periodic in t and polynomial in x_0, …, x_{i−1}. Since X(s)A, h_s^{(1)}(A), …, h_s^{(i−1)}(A) are almost periodic and the sets of their Fourier exponents have no accumulation points by the assumption of induction, the set of Fourier exponents of the function

p(s, A) := X(s)^{-1}G_i(s, X(s)A, h_s^{(1)}(A), …, h_s^{(i−1)}(A)) − X(s)^{-1} Σ_{k=1}^{i−1} (Dh_s^{(k)})_A R_{i−k}(A)

included in Eq.(A.14) does not have accumulation points. Since R_i(A) defined by Eq.(A.13) gives the Fourier coefficient corresponding to the zero Fourier exponent of p(s, A), if it exists, there exists M > 0 such that every Fourier exponent λ of the integrand in Eq.(A.14) satisfies |λ| ≥ M. Then Thm.4.12 of Fink [13] proves that h_t^{(i)}(A) is almost periodic. □

Definition A.5. Along with R_1(A), …, R_m(A) defined by Eqs.(A.11) and (A.13), we define the m-th order RG equation for Eq.(A.1) by

Ȧ = εR_1(A) + ε²R_2(A) + … + ε^m R_m(A),  A ∈ R^n,  (A.34)

and we call εR_1(A) + … + ε^m R_m(A) the m-th order RG vector field for Eq.(A.1). We denote by φ_t^{(m)} the flow generated by the m-th order RG vector field. Fix an open set U ⊂ R^n whose closure Ū is compact. Define the map α_t by

α_t(A) := X(t)A + εh_t^{(1)}(A) + ε²h_t^{(2)}(A) + … + ε^m h_t^{(m)}(A),  (A.35)

for all t ∈ R. Now we are in a position to restate Thm.4.4 in the present situation.
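Definition A.5 builds the RG equation from the averages (A.11) and (A.13). As a concrete illustration, the first-order average R_1 can be computed numerically; the van der Pol equation ẍ = −x + ε(1 − x²)ẋ is a hypothetical example chosen here (not one treated in this paper), with F = [[0, 1], [−1, 0]] and g_1(x) = (0, (1 − x_1²)x_2), for which the average reduces to the well-known amplitude equation R_1(A) = (1/2)(1 − |A|²/4)A:

```python
import numpy as np

def X(t):                                   # fundamental matrix e^{Ft}, F = [[0, 1], [-1, 0]]
    return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def g1(x):                                  # van der Pol nonlinearity (hypothetical example)
    return np.array([0.0, (1.0 - x[0] ** 2) * x[1]])

def R1(A, N=4096):                          # Eq.(A.11) as an average over one period 2*pi
    s = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    return np.mean([X(-si) @ g1(X(si) @ A) for si in s], axis=0)

A = np.array([1.3, -0.4])
expected = 0.5 * (1.0 - (A @ A) / 4.0) * A  # known averaged amplitude equation
assert np.allclose(R1(A), expected, atol=1e-10)
```

Since the integrand is periodic in s, the limit in (A.11) equals the average over one period, which the trapezoidal-type mean above computes to machine precision.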

Theorem A.6. Let φ_t^{(m)} be the flow of the m-th order RG equation for Eq.(A.1) and α_t the map defined by Eq.(A.35). Then there exists ε_0 > 0 such that the following holds for all |ε| < ε_0: the map

Φ_{t,t_0} := α_t ∘ φ_{t−t_0}^{(m)} ∘ α_{t_0}^{-1} : α_{t_0}(U) → R^n  (A.36)

defines a flow on U_ε := {(t, x) | t ∈ R, x ∈ α_t(U)} associated with a time-dependent vector field

F_ε(t, x) := (d/da)|_{a=t} Φ_{a,t}(x).  (A.37)

Further, there exists a vector field F̃_ε(t, x), which is bounded in t and bounded as ε → 0, satisfying

F_ε(t, x) = Fx + εg_1(t, x) + … + ε^m g_m(t, x) + ε^{m+1}F̃_ε(t, x).  (A.38)

Proof.

The proof of the fact that the map Φ_{t,t_0} defines a flow is the same as that of Thm.4.4 (i). We prove Eq.(A.38). The vector field defined by Eq.(A.37) is calculated as

F_ε(t, x) = (d/da)|_{a=t} α_a ∘ φ_{a−t}^{(m)} ∘ α_t^{-1}(x)
= (d/da)|_{a=t} ( x_0(a, 0, α_t^{-1}(x)) + εx_1(a, a; α_t^{-1}(x)) + … + ε^m x_m(a, a; α_t^{-1}(x)) )
  + ( X(t) + ε(Dh_t^{(1)})_{α_t^{-1}(x)} + … + ε^m (Dh_t^{(m)})_{α_t^{-1}(x)} ) ∘ ( εR_1(α_t^{-1}(x)) + … + ε^m R_m(α_t^{-1}(x)) )
= (d/da)|_{a=t} ( x_0(a, 0, α_t^{-1}(x)) + εx_1(a, t; α_t^{-1}(x)) + … + ε^m x_m(a, t; α_t^{-1}(x)) )
  + (d/da)|_{a=t} ( εx_1(t, a; α_t^{-1}(x)) + … + ε^m x_m(t, a; α_t^{-1}(x)) )
  + ( X(t) + ε(Dh_t^{(1)})_{α_t^{-1}(x)} + … + ε^m (Dh_t^{(m)})_{α_t^{-1}(x)} ) ∘ ( εR_1(α_t^{-1}(x)) + … + ε^m R_m(α_t^{-1}(x)) ).  (A.39)

Since x_i(a, t; α_t^{-1}(x)) is a solution of Eq.(A.5), it satisfies

(d/da)|_{a=t} x_i(a, t; α_t^{-1}(x)) = Fx_i(t, t; α_t^{-1}(x)) + G_i(t, x_0(t, 0, α_t^{-1}(x)), …, x_{i−1}(t, t; α_t^{-1}(x)))
= Fh_t^{(i)}(α_t^{-1}(x)) + G_i(t, x_0, h_t^{(1)}(α_t^{-1}(x)), …, h_t^{(i−1)}(α_t^{-1}(x))).  (A.40)

And according to Eq.(A.15) and (A.16), the equality

(d/da)|_{a=t} x_i(t, a; α_t^{-1}(x)) = −y_1^{(i)}(t, α_t^{-1}(x)) = −X(t)R_i(α_t^{-1}(x)) − Σ_{k=1}^{i−1} (Dh_t^{(k)})_{α_t^{-1}(x)} R_{i−k}(α_t^{-1}(x))  (A.41)

holds. Substituting Eq.(A.40) and Eq.(A.41) into Eq.(A.39), we obtain

F_ε(t, x) = Fx_0(t, 0, α_t^{-1}(x)) + Σ_{k=1}^{m} ε^k ( Fh_t^{(k)}(α_t^{-1}(x)) + G_k(t, x_0, h_t^{(1)}(α_t^{-1}(x)), …, h_t^{(k−1)}(α_t^{-1}(x))) ) + O(ε^{m+1})
= Fx + εg_1(t, x) + … + ε^m g_m(t, x) + O(ε^{m+1}).  (A.42)

It is hard to write out the term O(ε^{m+1}) explicitly. However, it is easy to prove that the term O(ε^{m+1}) is bounded uniformly in t, because it consists of the almost periodic functions X(t), g_i(t, x), h_t^{(i)}, α_t^{-1}. This ends the proof of Thm.A.6. □

Theorem 6.1 follows immediately as a corollary of the next theorem.

Theorem A.7. Consider an autonomous equation

ẋ = Fx + εg_1(x) + … + ε^m g_m(x),  x ∈ R^n,  (A.43)

where F is a diagonalizable n × n constant matrix all of whose eigenvalues lie on the imaginary axis, and where g_1, …, g_m are polynomial vector fields on R^n. Suppose that its m-th order RG vector field satisfies

R_1(A) = … = R_{k−1}(A) = 0,  R_k(A) ≢ 0,  k ≤ 2m.  (A.44)

If the vector field R_k(A) has a compact normally hyperbolic invariant manifold N, then Eq.(A.43) also has a normally hyperbolic invariant manifold N_ε for sufficiently small ε > 0. The manifold N_ε is diffeomorphic to N, and its stability coincides with that of N.

Proof. Before proving the theorem, we point out that the condition k ≤ 2m is not essential because we can take m ∈ N sufficiently large. Let us denote by F_ε(t, x) the approximate vector field for Eq.(A.43) defined by (A.37). From Thm.A.6, we can rewrite Eq.(A.43) as

ẋ = F_ε(t, x) − ε^{m+1}F̃_ε(t, x).  (A.45)

On account of Eq.(A.36), the RG vector field ε^k R_k(x) + … + ε^m R_m(x) satisfies the equation

ε^k R_k(x) + … + ε^m R_m(x) = (d/da)|_{a=t} α_a^{-1} ∘ Φ_{a,t} ∘ α_t(x)
= (dα_a^{-1}/da)|_{a=t}(α_t(x)) + (Dα_t^{-1})_{α_t(x)} (d/da)|_{a=t} Φ_{a,t} ∘ α_t(x)
= −(Dα_t)_x^{-1} (dα_t/dt)(x) + (Dα_t)_x^{-1} F_ε(t, α_t(x)).  (A.46)

Introducing a new function y(t) by x(t) = α_t ∘ y(t) and substituting it into Eq.(A.45), we obtain

(dα_t/dt)(y(t)) + (Dα_t)_{y(t)} ẏ(t) = F_ε(t, α_t(y(t))) − ε^{m+1}F̃_ε(t, α_t(y(t))).

This equation is put together with (A.46) to yield

ẏ = ε^k R_k(y) + … + ε^m R_m(y) − ε^{m+1}(Dα_t)_y^{-1} ∘ F̃_ε(t, α_t(y)).  (A.47)

We introduce a new scaled time s by t = s/ε^k. Then the above equation is rewritten as

dy/ds = R_k(y) + εR_{k+1}(y) + … + ε^{m−k}R_m(y) − ε^{m−k+1}(Dα_{s/ε^k})_y^{-1} ∘ F̃_ε(s/ε^k, α_{s/ε^k}(y)).  (A.48)

Since α_t, (Dα_t)_y and F̃_ε(t, y) are bounded uniformly in t ∈ R, the term (Dα_{s/ε^k})_y^{-1} ∘ F̃_ε(s/ε^k, α_{s/ε^k}(y)) is also bounded as s → ±∞ and ε → 0. Therefore the time-dependent vector field H(s, y) defined by the right hand side of the above equation is sufficiently close to the vector field R_k(y) in the C¹ topology if ε > 0 is sufficiently small.

Now we use Fenichel's theorem. We regard the vector field R_k(y) on R^n as a vector field on R × R^n by putting R_k(t, y) := R_k(y). If R_k(y) has a normally hyperbolic invariant manifold N, then R_k(t, y) has the normally hyperbolic invariant manifold R × N in (t, y) space. Since H(s, y) is sufficiently close to R_k(t, y) as a vector field on R × R^n in the C¹ topology, H(s, y) also has a normally hyperbolic invariant manifold Ñ_ε which is diffeomorphic to R × N. Since x(t) = α_t ∘ y(t) and since Dα_t is bounded, Eq.(A.43) for x(t) has a normally hyperbolic invariant manifold N̂_ε which is diffeomorphic to R × N in (t, x) space.

Since Eq.(A.43) is autonomous, the manifold N̂_ε must be straight along the time axis (see Fig.4). Consequently, Eq.(A.43) has a normally hyperbolic invariant manifold on R^n which is diffeomorphic to N. □

[Figure 4 omitted: the flows of R_k(x), Eq.(A.48), and Eq.(A.43), shown in (t, x_1, x_2) space.]

Fig. 4: The case that R_k(x) has an invariant circle. In this case, the flows of Eqs.(A.48) and (A.43) have invariant cylinders in the (t, x) space.

Let A(t) be a solution of the m-th order RG equation (A.34) for Eq.(A.1) and define the curve x̃(t) by

x̃(t) := α_t(A(t)) = X(t)A(t) + εh_t^{(1)}(A(t)) + … + ε^m h_t^{(m)}(A(t)).  (A.49)

Then x̃(t) is an integral curve of the approximate vector field F_ε(t, x) defined by Eq.(A.37), and it gives an approximate solution for Eq.(A.1).

Theorem A.8. There exist positive constants ε_0, C, T, and a compact subset V = V(ε) ⊂ R^n including the origin such that for all |ε| < ε_0, every solution x(t) of Eq.(A.1) and x̃(t) defined by Eq.(A.49) with x(0) = x̃(0) ∈ V satisfy the inequality

||x(t) − x̃(t)|| < Cε^m,  for 0 ≤ t ≤ T/ε.  (A.50)

Proof of Thm.A.8. Suppose that ||x(0)|| < K. At first, we show that there exists T > 0 such that ||x(t)|| < 2K for 0 ≤ t ≤ T/ε. We rewrite Eq.(A.1) as the integral equation

x(t) = e^{Ft}x(0) + e^{Ft} ∫_0^t e^{−Fs} εg(s, x(s), ε) ds,  (A.51)

where g(t, x, ε) := g_1(t, x) + εg_2(t, x) + … + ε^{m−1}g_m(t, x). Choose t ≥ 0 so that ||x(s)|| < 2K if 0 ≤ s ≤ t. Then there exists a positive constant K′ > 0 such that ||g(s, x(s), ε)|| < K′ and the inequality

||x(t)|| ≤ ||x(0)|| + ∫_0^t ε||g(s, x(s), ε)|| ds ≤ K + ∫_0^t εK′ ds = K ( 1 + (K′/K)εt )

holds. When 0 ≤ t ≤ K/(K′ε), we have ||x(t)|| < 2K, so that we put T := K/K′ for the existence of T.

By Thm.A.6, an approximate solution x̃(t) satisfies the ODE

x̃′(t) = F_ε(t, x̃) = Fx̃ + εg_1(t, x̃) + … + ε^m g_m(t, x̃) + ε^{m+1}F̃_ε(t, x̃).  (A.52)

Fix a positive number K such that the closed ball B_{2K} of radius 2K centered at the origin is included in the open set α_t(U), where U is an open set on which α_t is a diffeomorphism. Then we can verify that ||x̃(t)|| < 2K if ||x̃(0)|| < K and if 0 ≤ t ≤ T/ε in the same way as above.

For x(t) and x̃(t) such that x(0) = x̃(0), ||x(0)|| < K, we put ξ(t) = α_t^{-1} ∘ x(t), η(t) = α_t^{-1} ∘ x̃(t). They satisfy the respective ODEs

ξ̇(t) = εR_1(ξ) + ε²R_2(ξ) + … + ε^m R_m(ξ) + ε^{m+1}G̃_ε(t, ξ),  (A.53)
η̇(t) = εR_1(η) + ε²R_2(η) + … + ε^m R_m(η),  (A.54)

where G̃_ε is a smooth function which is bounded uniformly in t ∈ R and bounded as ε → 0 for each ξ ∈ R^n. Let W be the image of the closed ball B_{2K} under the map α_t^{-1}. Then ξ(t) and η(t) lie in the compact set W if 0 ≤ t ≤ T/ε. Let L_1 > 0 be a Lipschitz constant of R_1(ξ) + εR_2(ξ) + … + ε^{m−1}R_m(ξ) on W, and suppose that sup_{t∈R, ξ∈W} ||G̃_ε(t, ξ)|| < L_2. Then, for 0 ≤ t ≤ T/ε, the inequality



||ξ(t) − η(t)|| ≤ εL_1 ∫_0^t ||ξ(s) − η(s)|| ds + ε^{m+1}L_2 t  (A.55)

holds. Then the Gronwall inequality implies that

||ξ(t) − η(t)|| ≤ (L_2/L_1) ε^m ( e^{εL_1 t} − 1 ) ≤ (L_2/L_1) ε^m ( e^{L_1 T} − 1 ),  0 ≤ t ≤ T/ε.  (A.56)

This shows that there exists a positive constant C such that ||x(t) − x̃(t)|| = ||α_t ∘ ξ(t) − α_t ∘ η(t)|| ≤ Cε^m holds if 0 ≤ t ≤ T/ε. □
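The Gronwall step from (A.55) to (A.56) can be sanity-checked on the equality case: the ODE u′ = εL_1 u + ε^{m+1}L_2 with u(0) = 0 has the solution u(t) = (L_2/L_1)ε^m(e^{εL_1 t} − 1), which also saturates the bound at t = T/ε. A sketch with arbitrary parameter values:

```python
import math

eps, L1, L2, m = 0.05, 0.7, 1.3, 2          # arbitrary illustrative values

def f(u):                                   # equality case of (A.55), differentiated
    return eps * L1 * u + eps ** (m + 1) * L2

u, dt, n = 0.0, 0.01, 2000                  # integrate up to t = 20 = T/eps with T = 1
for _ in range(n):
    k1 = f(u)
    k2 = f(u + dt / 2 * k1)
    k3 = f(u + dt / 2 * k2)
    k4 = f(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = n * dt
closed = (L2 / L1) * eps ** m * (math.exp(eps * L1 * t) - 1.0)
assert abs(u - closed) < 1e-10
assert u <= (L2 / L1) * eps ** m * (math.exp(L1 * 1.0) - 1.0) + 1e-12   # bound (A.56), T = 1
```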



The next theorem is a simple extension of Propositions 5.1 and 5.2.

Theorem A.9. Consider the autonomous equation (A.43).
(i) If the vector fields Fx and g_1(x), g_2(x), … are invariant under the action of a Lie group G, then the m-th order RG equation is also invariant under the action of G.
(ii) The m-th order RG equation commutes with the linear vector field Fx with respect to the Lie bracket product. Equivalently, each R_i(A), i = 1, 2, …, satisfies

X(t)R_i(A) = R_i(X(t)A),  A ∈ R^n.  (A.57)

Proof of Thm.A.9. Recall that G_i in Eq.(A.5) is independent of t since Eq.(A.43) is autonomous.

(i) We prove by induction that R_i(A) and h_t^{(i)}(A), i = 1, 2, …, are invariant under the action of the Lie group G. Since aX(t)A = X(t)aA and ag_1(x) = g_1(ax) hold for all a ∈ G, R_1(aA) is brought into the form

R_1(aA) = lim_{t→∞} (1/t) ∫^t X(s)^{-1}G_1(X(s)aA) ds = a lim_{t→∞} (1/t) ∫^t X(s)^{-1}G_1(X(s)A) ds = aR_1(A),

and the invariance of h_t^{(1)}, h_t^{(1)}(aA) = ah_t^{(1)}(A), is verified in a similar way. Suppose that R_k(aA) = aR_k(A) and h_t^{(k)}(aA) = ah_t^{(k)}(A) hold for k = 1, 2, …, i−1. Then it is easy to verify that

(Dh_t^{(k)})_{aA} = a(Dh_t^{(k)})_A a^{-1},  (A.58)
G_k(X(t)aA, h_t^{(1)}(aA), …, h_t^{(k−1)}(aA)) = aG_k(X(t)A, h_t^{(1)}(A), …, h_t^{(k−1)}(A))  (A.59)

for k = 1, 2, …, i−1. This and Eqs.(A.13), (A.14) imply that R_i(aA) = aR_i(A) and h_t^{(i)}(aA) = ah_t^{(i)}(A).

(ii) We prove by induction that R_i(X(t′)A) = X(t′)R_i(A) and h_t^{(i)}(X(t′)A) = h_{t+t′}^{(i)}(A) hold for i = 1, 2, …. For all

s′ ∈ R, R_1(X(s′)A) takes the form

R_1(X(s′)A) = lim_{t→∞} (1/t) ∫^t X(s)^{-1}G_1(X(s)X(s′)A) ds = X(s′) lim_{t→∞} (1/t) ∫^t X(s + s′)^{-1}G_1(X(s + s′)A) ds.

Putting s + s′ = s′′, we verify that

R_1(X(s′)A) = X(s′) lim_{t→∞} (1/t) ∫^{t+s′} X(s′′)^{-1}G_1(X(s′′)A) ds′′
= X(s′)R_1(A) + X(s′) lim_{t→∞} (1/t) ∫_t^{t+s′} X(s′′)^{-1}G_1(X(s′′)A) ds′′
= X(s′)R_1(A).

Next, h_t^{(1)}(X(s′)A) is calculated as

h_t^{(1)}(X(s′)A) = X(t) ∫^t ( X(s)^{-1}G_1(X(s)X(s′)A) − R_1(X(s′)A) ) ds
= X(t)X(s′) ∫^t ( X(s′)^{-1}X(s)^{-1}G_1(X(s)X(s′)A) − R_1(A) ) ds
= X(t + s′) ∫^t ( X(s + s′)^{-1}G_1(X(s + s′)A) − R_1(A) ) ds.

Putting s + s′ = s′′ provides

h_t^{(1)}(X(s′)A) = X(t + s′) ∫^{t+s′} ( X(s′′)^{-1}G_1(X(s′′)A) − R_1(A) ) ds′′ = h_{t+s′}^{(1)}(A).  (A.60)

Suppose that R_k(X(t′)A) = X(t′)R_k(A) and h_t^{(k)}(X(t′)A) = h_{t+t′}^{(k)}(A) hold for k = 1, 2, …, i−1. Then R_i(X(s′)A) is calculated as

R_i(X(s′)A) = lim_{t→∞} (1/t) ∫^t ( X(s)^{-1}G_i(X(s)X(s′)A, h_s^{(1)}(X(s′)A), …, h_s^{(i−1)}(X(s′)A)) − X(s)^{-1} Σ_{k=1}^{i−1} (Dh_s^{(k)})_{X(s′)A} R_{i−k}(X(s′)A) ) ds
= X(s′) lim_{t→∞} (1/t) ∫^t ( X(s + s′)^{-1}G_i(X(s + s′)A, h_{s+s′}^{(1)}(A), …, h_{s+s′}^{(i−1)}(A)) − X(s + s′)^{-1} Σ_{k=1}^{i−1} (Dh_{s+s′}^{(k)})_A R_{i−k}(A) ) ds.

Putting s + s′ = s′′ provides

R_i(X(s′)A) = X(s′) lim_{t→∞} (1/t) ∫^{t+s′} ( X(s′′)^{-1}G_i(X(s′′)A, h_{s′′}^{(1)}(A), …, h_{s′′}^{(i−1)}(A)) − X(s′′)^{-1} Σ_{k=1}^{i−1} (Dh_{s′′}^{(k)})_A R_{i−k}(A) ) ds′′
= X(s′)R_i(A) + X(s′) lim_{t→∞} (1/t) ∫_t^{t+s′} ( X(s′′)^{-1}G_i(X(s′′)A, h_{s′′}^{(1)}(A), …, h_{s′′}^{(i−1)}(A)) − X(s′′)^{-1} Σ_{k=1}^{i−1} (Dh_{s′′}^{(k)})_A R_{i−k}(A) ) ds′′
= X(s′)R_i(A).

We can show that h_t^{(i)}(X(t′)A) = h_{t+t′}^{(i)}(A) in a similar way. □



References
[1] L. Y. CHEN, N. GOLDENFELD, Y. OONO, Renormalization group theory for global asymptotic analysis, Phys. Rev. Lett., 73 (1994), no. 10, pp. 1311-1315.
[2] L. Y. CHEN, N. GOLDENFELD, Y. OONO, Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory, Phys. Rev. E, 54 (1996), pp. 376-394.
[3] T. KUNIHIRO, A geometrical formulation of the renormalization group method for global analysis, Progr. Theoret. Phys., 94 (1995), no. 4, pp. 503-514.
[4] T. KUNIHIRO, The renormalization-group method applied to asymptotic analysis of vector fields, Progr. Theoret. Phys., 97 (1997), no. 2, pp. 179-200.
[5] K. NOZAKI, Y. OONO, Renormalization-group theoretical reduction, Phys. Rev. E, 63 (2001), 046101.
[6] S. GOTO, Y. MASUTOMI, K. NOZAKI, Lie-group approach to perturbative renormalization group method, Progr. Theoret. Phys., 102 (1999), no. 3, pp. 471-497.
[7] S. EI, K. FUJII, T. KUNIHIRO, Renormalization-group method for reduction of evolution equations; invariant manifolds and envelopes, Ann. Physics, 280 (2000), no. 2, pp. 236-298.
[8] M. ZIANE, On a certain renormalization group method, J. Math. Phys., 41 (2000), no. 5, pp. 3290-3299.
[9] R. E. LEE DEVILLE, A. HARKIN, M. HOLZER, K. JOSIĆ, T. KAPER, Analysis of a renormalization group method and normal form theory for perturbed ordinary differential equations, Physica D, (2008).
[10] N. FENICHEL, Persistence and smoothness of invariant manifolds for flows, Indiana Univ. Math. J., 21 (1971), pp. 193-226.
[11] M. W. HIRSCH, C. C. PUGH, M. SHUB, Invariant Manifolds, Lec. Notes in Math., 583, Springer-Verlag, 1977.
[12] S. WIGGINS, Normally Hyperbolic Invariant Manifolds in Dynamical Systems, Springer-Verlag, 1994.
[13] A. M. FINK, Almost Periodic Differential Equations, Lec. Notes in Math., 377, Springer-Verlag, 1974.
[14] N. N. BOGOLIUBOV, Y. A. MITROPOLSKI, Asymptotic Methods in the Theory of Non-Linear Oscillations, Gordon and Breach, New York, 1961.
