MATHEMATICS OF COMPUTATION Volume 74, Number 250, Pages 781–804 S 0025-5718(04)01665-5 Article electronically published on July 20, 2004
COMPUTING PERIODIC SOLUTIONS OF LINEAR DIFFERENTIAL-ALGEBRAIC EQUATIONS BY WAVEFORM RELAXATION YAO-LIN JIANG AND RICHARD M. M. CHEN
Abstract. We propose an algorithm, which is based on the waveform relaxation (WR) approach, to compute the periodic solutions of a linear system described by differential-algebraic equations. For this kind of two-point boundary problems, we derive an analytic expression of the spectral set for the periodic WR operator. We show that the periodic WR algorithm is convergent if the supremum value of the spectral radii for a series of matrices derived from the system is less than 1. Numerical examples, where discrete waveforms are computed with a backward-difference formula, further illustrate the correctness of the theoretical work in this paper.
1. Introduction

In engineering applications such as circuit simulation and mechanical modelling, we often need to compute periodic solutions of a linear dynamic system under a known periodic excitation. These systems are described by linear differential-algebraic equations (DAEs) of the common form

(1)  $M\dot x(t) + Ax(t) + By(t) = f_1(t),$
     $Cx(t) + Ny(t) = f_2(t),$
     $x(0) = x(T), \quad t \in [0, T],$

where $M$ and $N$ are, respectively, $n_1 \times n_1$ and $n_2 \times n_2$ nonsingular matrices, $A$ is an $n_1 \times n_1$ matrix, $B$ is an $n_1 \times n_2$ matrix, $C$ is an $n_2 \times n_1$ matrix, $f_1(t) \in R^{n_1}$ and $f_2(t) \in R^{n_2}$ for all $t \in [0,T]$ are two known input functions with period $T$, and $x(t) \in R^{n_1}$ and $y(t) \in R^{n_2}$ for all $t \in [0,T]$ are to be computed. This is a two-point boundary problem [1]. It is also obvious that $y(0) = N^{-1}(f_2(0) - Cx(0))$ for (1). Furthermore, $y(0) = y(T)$ due to $x(0) = x(T)$ and $f_2(0) = f_2(T)$. In what follows we assume that the boundary condition on periodic solutions of (1) is consistent; that is, the condition $x(0) = x(T)$ implies $\dot x(0) = \dot x(T)$ and $y(0) = y(T)$.

Waveform relaxation (WR) is a novel splitting technique in engineering applications. Numerical algorithms incorporating WR are relaxation-based methods, and they are suitable for scientific computation of transient responses of very large

Received by the editor June 5, 2002 and, in revised form, August 25, 2003.
2000 Mathematics Subject Classification. Primary 37M05, 65F10, 65L10, 65L80, 65Y05.
Key words and phrases. Differential-algebraic equations, periodic solutions, waveform relaxation, spectra of linear operators, linear multistep methods, finite-difference, numerical analysis, scientific computing, circuit simulation.
(c) 2004 American Mathematical Society
dynamic systems. In fact, WR was first proposed to simulate MOS VLSI circuits [3, 6, 16]. It is a "divide-and-conquer" approach: at the level of the differential equations, it decouples a large system into a number of simplified subsystems in the time domain [2, 5, 8, 11, 13]. The latency and multirate behaviors of systems can therefore be exploited effectively. Based on WR, we can also directly simulate lossless transmission lines with nonlinear terminations [10].

For a periodic system, after its transient response has practically decayed to zero, the steady-state response is periodic with the same period as the excitation. Since we are interested in the steady-state periodic solution, the numerical algorithm should compute the response over one period directly, without first computing the transient response that precedes it, in order to save a great deal of computation. The periodic WR approach reported here is an effective iterative method in function space. The resulting iterative systems with periodic constraints can be solved numerically by the sophisticated public-domain codes for boundary problems of DAEs or ODEs (see, e.g., [1]).

Let $M = M_1 - M_2$, $A = A_1 - A_2$, $B = B_1 - B_2$, $C = C_1 - C_2$, $N = N_1 - N_2$, and let $[(x^{(0)})^t(\cdot), (y^{(0)})^t(\cdot)]^t$ be a given initial guess. We now present a periodic WR algorithm to compute the steady-state periodic response of (1) over one period. The periodic WR algorithm of (1) is

(2)  $M_1\dot x^{(k)}(t) + A_1 x^{(k)}(t) + B_1 y^{(k)}(t) = M_2\dot x^{(k-1)}(t) + A_2 x^{(k-1)}(t) + B_2 y^{(k-1)}(t) + f_1(t),$
     $C_1 x^{(k)}(t) + N_1 y^{(k)}(t) = C_2 x^{(k-1)}(t) + N_2 y^{(k-1)}(t) + f_2(t),$
     $x^{(k)}(0) = x^{(k)}(T), \quad t \in [0,T], \quad k = 1, 2, \ldots,$

where we suppose throughout this paper that $M_1$ and $N_1$ are nonsingular.
In order to preserve the consistency of the boundary conditions at every periodic iteration, an initial guess $[(x^{(0)})^t(t), (y^{(0)})^t(t)]^t$ in (2) should satisfy $[(x^{(0)})^t(0), (y^{(0)})^t(0)]^t = [(x^{(0)})^t(T), (y^{(0)})^t(T)]^t$ and $\dot x^{(0)}(0) = \dot x^{(0)}(T)$. For any constant guess, the required boundary conditions hold automatically.

For a linear system we often consider its WR solutions in $C([0,T], C^n)$ or $L^2([0,T], C^n)$, where $n = n_1 + n_2$. This treatment greatly simplifies the theoretical analysis of WR: the convergence behavior of WR is mainly decided by the corresponding periodic WR operators in these function spaces. The WR solutions of initial value problems for equations as in (1) were reported in [7], and the expressions of the spectra and pseudospectra of their WR operators are also clearly understood [9]. However, few papers in the WR literature theoretically analyze the spectra of the periodic WR operator for linear dynamic systems. The linear system of ordinary differential equations (ODEs) in [14] is a special case with an identity coefficient matrix for $\dot x(t)$.

In this paper we discuss the periodic WR operator derived from (2) and obtain an analytic expression for its spectrum. From this expression we can conveniently provide the convergence condition of (2): the iteration converges if the supremum of the spectral radii of a family of matrices derived from the splitting matrices of (1) is less than 1. We also present the spectral set of the discrete periodic WR operators. A finite-difference method is then used for solving the decoupled systems in (2) for all $k$ in our test examples. Numerical experiments are given to support the correctness of the expressions established here.
2. Spectra of periodic WR operators and convergent splittings

A matrix $P \in C^{n \times n}$ is called noncritical w.r.t. (with respect to) $T$ if $ip\omega \notin \sigma(P)$ for $p = 0, \pm 1, \ldots$, where $\omega = \frac{2\pi}{T}$, $i = \sqrt{-1}$, and $\sigma(P)$ is the spectral set of $P$. In other words, $P$ is noncritical if for all $p$ the matrices $ip\omega I + P$ are invertible, where $I$ is the $n \times n$ identity matrix. The existence condition of periodic solutions for a system of ODEs with an identity coefficient matrix for its derivative term depends on the following lemma [14].

Lemma 1. If $P$ is noncritical w.r.t. $T$, then the system of ODEs

(3)  $\dot w(t) + Pw(t) = h(t), \quad w(0) = w(T), \quad t \in [0,T]$

has a unique solution for any $h \in C([0,T], C^n)$ or $L^2([0,T], C^n)$ with period $T$.

2.1. The general case: DAEs. First, based on Lemma 1 and using elementary operations, we derive the existence conditions of solutions of (1) and (2) for any given $k$.

Lemma 2. The system of DAEs given in (1) has a unique periodic solution if $M^{-1}(A - BN^{-1}C)$ is noncritical w.r.t. $T$. Similarly, for any fixed $k$ the decoupled system in (2) has a unique periodic solution if $M_1^{-1}(A_1 - B_1N_1^{-1}C_1)$ is noncritical w.r.t. $T$.

In this paper we suppose that the periodic solutions of (1) and (2) always exist. We will study under what condition the iterative solutions of (2) converge to the periodic solution of (1). We denote $D_1 = A_1 - B_1N_1^{-1}C_1$, $D_2 = A_2 - B_1N_1^{-1}C_2$, $E = B_2 - B_1N_1^{-1}N_2$, and $g(t) = f_1(t) - B_1N_1^{-1}f_2(t)$ on $[0,T]$. By eliminating $y^{(k)}(t)$ from the first and second equations of (2), we have

(4)  $M_1\dot x^{(k)}(t) + D_1 x^{(k)}(t) = M_2\dot x^{(k-1)}(t) + D_2 x^{(k-1)}(t) + Ey^{(k-1)}(t) + g(t),$
     $x^{(k)}(0) = x^{(k)}(T), \quad t \in [0,T].$

The solution of (4) is

(5)  $x^{(k)}(t) = M_1^{-1}M_2 x^{(k-1)}(t) + \int_0^t e^{M_1^{-1}D_1(s-t)} M_1^{-1}(D_2 - D_1M_1^{-1}M_2)\, x^{(k-1)}(s)\,ds$
     $\qquad + e^{-M_1^{-1}D_1 t}\bigl(x^{(k)}(0) - M_1^{-1}M_2 x^{(k-1)}(0)\bigr) + \int_0^t e^{M_1^{-1}D_1(s-t)} M_1^{-1}\bigl(Ey^{(k-1)}(s) + g(s)\bigr)\,ds.$

We use the constraint condition $x^{(k)}(0) = x^{(k)}(T)$ to solve for $x^{(k)}(0)$ in (5):

(6)  $x^{(k)}(0) = M_1^{-1}M_2 x^{(k-1)}(0) + \bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)} M_1^{-1}(D_2 - D_1M_1^{-1}M_2)\, x^{(k-1)}(s)\,ds$
     $\qquad + \bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)} M_1^{-1}\bigl(Ey^{(k-1)}(s) + g(s)\bigr)\,ds.$
After substituting (6) into (5), we arrive at a formula for $x^{(k)}(t)$ on $[0,T]$ without any boundary values. It is

(7)  $x^{(k)} = K_1 x^{(k-1)} + K_2 y^{(k-1)} + \varphi_1,$

where, for $u \in C([0,T], C^{n_1})$ or $L^2([0,T], C^{n_1})$,

(8)  $K_1u(t) = M_1^{-1}M_2 u(t) + \int_0^t e^{M_1^{-1}D_1(s-t)} M_1^{-1}(D_2 - D_1M_1^{-1}M_2)u(s)\,ds$
     $\qquad + e^{-M_1^{-1}D_1 t}\bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)} M_1^{-1}(D_2 - D_1M_1^{-1}M_2)u(s)\,ds,$

for $v \in C([0,T], C^{n_2})$ or $L^2([0,T], C^{n_2})$,

(9)  $K_2v(t) = \int_0^t e^{M_1^{-1}D_1(s-t)} M_1^{-1}Ev(s)\,ds + e^{-M_1^{-1}D_1 t}\bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)} M_1^{-1}Ev(s)\,ds,$

and on $[0,T]$

(10)  $\varphi_1(t) = \int_0^t e^{M_1^{-1}D_1(s-t)} M_1^{-1}g(s)\,ds + e^{-M_1^{-1}D_1 t}\bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)} M_1^{-1}g(s)\,ds.$

On the other hand, we also have

(11)  $y^{(k)} = K_3 x^{(k-1)} + K_4 y^{(k-1)} + \varphi_2,$

where $K_3u(t) = (N_1^{-1}C_2 - N_1^{-1}C_1K_1)u(t)$ for $u \in C([0,T], C^{n_1})$ or $L^2([0,T], C^{n_1})$, $K_4v(t) = (N_1^{-1}N_2 - N_1^{-1}C_1K_2)v(t)$ for $v \in C([0,T], C^{n_2})$ or $L^2([0,T], C^{n_2})$, and $\varphi_2(t) = -N_1^{-1}C_1\varphi_1(t) + N_1^{-1}f_2(t)$ on $[0,T]$.

We can compactly write (7) and (11) together as

(12)  $z^{(k)} = Kz^{(k-1)} + \varphi,$

where $z^{(l)}(t) = [(x^{(l)})^t(t), (y^{(l)})^t(t)]^t$, $K = \begin{pmatrix} K_1 & K_2 \\ K_3 & K_4 \end{pmatrix}$, and $\varphi(t) = [\varphi_1^t(t), \varphi_2^t(t)]^t$ on $[0,T]$. The linear operator $K$ is a periodic WR operator of (1). It is the sum of a constant matrix multiplication operator and a linear periodic convolution operator, where the periodic convolution operator is compact (see [14]). The spectral set of $K$ is bounded and closed in $C([0,T], C^n)$ or $L^2([0,T], C^n)$ since $K$ is a bounded linear operator. We can write out its spectral set analytically.

Definition 1. For a periodic integrable function $w(t)$ on $[0,T]$, its Fourier coefficients are
$\tilde w_p = \frac{1}{T}\int_0^T w(t)e^{-ip\omega t}\,dt, \quad \omega = \frac{2\pi}{T}, \quad p = 0, \pm 1, \ldots.$

Let us also define the matrices $Q(ip\omega)$ for $p = 0, \pm 1, \pm 2, \ldots$ as

(13)  $Q(ip\omega) = \begin{pmatrix} M_1^{-1}M_2 & 0 \\ N_1^{-1}C_2 - N_1^{-1}C_1M_1^{-1}M_2 & N_1^{-1}N_2 \end{pmatrix}$
      $\qquad + \begin{pmatrix} (ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2) & (ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}E \\ -N_1^{-1}C_1(ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2) & -N_1^{-1}C_1(ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}E \end{pmatrix}.$
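Once a splitting is fixed, the matrices $Q(ip\omega)$ of (13) are directly computable. The following Python/NumPy sketch assembles $Q(ip\omega)$ exactly as written in (13) and evaluates the spectral radii over a truncated range of $p$; all matrices here are small hypothetical illustration data ($n_1 = 2$, $n_2 = 1$), not taken from the paper's examples.

```python
import numpy as np

# Hypothetical splitting data for (1) (illustration only).
T = 2 * np.pi
omega = 2 * np.pi / T
M1 = 2.0 * np.eye(2);            M2 = 0.5 * np.eye(2)
A1 = 3.0 * np.eye(2);            A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
B1 = np.array([[1.0], [0.0]]);   B2 = np.array([[0.0], [0.5]])
C1 = np.array([[0.0, 1.0]]);     C2 = np.array([[0.2, 0.0]])
N1 = np.array([[2.0]]);          N2 = np.array([[0.3]])

# Auxiliary matrices D1, D2, E defined in the text.
N1inv = np.linalg.inv(N1)
M1inv = np.linalg.inv(M1)
D1 = A1 - B1 @ N1inv @ C1
D2 = A2 - B1 @ N1inv @ C2
E = B2 - B1 @ N1inv @ N2

def Q(p):
    """Assemble Q(ip*omega) exactly as written in (13)."""
    R = np.linalg.inv(1j * p * omega * np.eye(2) + M1inv @ D1)
    RD = R @ M1inv @ (D2 - D1 @ M1inv @ M2)
    RE = R @ M1inv @ E
    const = np.block([[M1inv @ M2, np.zeros((2, 1))],
                      [N1inv @ C2 - N1inv @ C1 @ M1inv @ M2, N1inv @ N2]])
    pert = np.block([[RD, RE],
                     [-N1inv @ C1 @ RD, -N1inv @ C1 @ RE]])
    return const + pert

# Truncated supremum of the spectral radii of Q(ip*omega).
rho = max(np.abs(np.linalg.eigvals(Q(p))).max() for p in range(-50, 51))
print(rho < 1.0)
```

For this particular hypothetical splitting, the truncated supremum stays below 1, so the iteration (2) would converge by the theory developed next.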
Theorem 1. The spectral set of the linear operator $K$ in (12) is

(14)  $\sigma(K) = \overline{\bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots\}}.$

Proof. Because the linear operator $K$ is a compact perturbation of the constant matrix multiplication operator $\tilde Q$, where

(15)  $\tilde Q = \begin{pmatrix} M_1^{-1}M_2 & 0 \\ N_1^{-1}C_2 - N_1^{-1}C_1M_1^{-1}M_2 & N_1^{-1}N_2 \end{pmatrix},$

any element $\lambda \in \sigma(K)\setminus\sigma(\tilde Q)$ is an isolated eigenvalue of $K$ (see [12]). Let $z(t) = [x^t(t), y^t(t)]^t$ and assume $(\lambda, z)$ is an eigenpair of the operator $K$. By calculating the Fourier coefficients of the relationship $Kz = \lambda z$, we can derive that $\sigma(K)\setminus\sigma(\tilde Q) \subseteq \bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots\}$. In fact, for $p = 0, \pm 1, \ldots$,

(16)  $\widehat{Kz}_p = \frac{1}{T}\int_0^T Kz(t)e^{-ip\omega t}\,dt = \begin{pmatrix} \widehat{K_1x}_p + \widehat{K_2y}_p \\ \widehat{K_3x}_p + \widehat{K_4y}_p \end{pmatrix},$

where
$\widehat{K_1x}_p = \frac{1}{T}\int_0^T K_1x(t)e^{-ip\omega t}\,dt, \quad \widehat{K_2y}_p = \frac{1}{T}\int_0^T K_2y(t)e^{-ip\omega t}\,dt,$
$\widehat{K_3x}_p = \frac{1}{T}\int_0^T K_3x(t)e^{-ip\omega t}\,dt, \quad \widehat{K_4y}_p = \frac{1}{T}\int_0^T K_4y(t)e^{-ip\omega t}\,dt.$

Let us calculate the term $\widehat{K_1x}_p$ in (16). We denote $\tilde z_p = [\tilde x_p^t, \tilde y_p^t]^t$. From (8), by exchanging the order of integration and using the equality $e^{ip\omega T} = 1$, we have

(17)  $\widehat{K_1x}_p = \frac{1}{T}\int_0^T K_1x(t)e^{-ip\omega t}\,dt$
      $= M_1^{-1}M_2\tilde x_p + \frac{1}{T}\int_0^T \Bigl(\int_s^T e^{-(ip\omega I + M_1^{-1}D_1)t}\,dt\Bigr) e^{M_1^{-1}D_1 s}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)x(s)\,ds$
      $\quad + \frac{1}{T}\int_0^T \Bigl(\int_0^T e^{-(ip\omega I + M_1^{-1}D_1)t}\,dt\Bigr)\bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1} e^{M_1^{-1}D_1(s-T)}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)x(s)\,ds$
      $= M_1^{-1}M_2\tilde x_p + \frac{1}{T}(ip\omega I + M_1^{-1}D_1)^{-1}\int_0^T e^{-ip\omega s}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)x(s)\,ds$
      $\quad - \frac{1}{T}(ip\omega I + M_1^{-1}D_1)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)x(s)\,ds$
      $\quad + \frac{1}{T}(ip\omega I + M_1^{-1}D_1)^{-1}\int_0^T e^{M_1^{-1}D_1(s-T)}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)x(s)\,ds$
      $= [M_1^{-1}M_2 + (ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2)]\tilde x_p.$

Similarly,

(18)  $\widehat{K_2y}_p = \frac{1}{T}\int_0^T K_2y(t)e^{-ip\omega t}\,dt = (ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}E\,\tilde y_p.$

According to the expressions of $K_3x(t)$ and $K_4y(t)$, by (17) and (18) we can also calculate the terms $\widehat{K_3x}_p$ and $\widehat{K_4y}_p$ for $p = 0, \pm 1, \ldots$. Thus,

(19)  $\widehat{Kz}_p = Q(ip\omega)\tilde z_p, \quad p = 0, \pm 1, \ldots.$
Because there exists some $p$ such that $\tilde z_p \neq 0$ (since $z \neq 0$), the equality $\widehat{Kz}_p = \lambda\tilde z_p$ and (19) imply $\lambda \in \sigma(Q(ip\omega))$. It follows that
$\sigma(K) \subseteq \sigma(\tilde Q) \cup \bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots\} = \overline{\bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots\}}.$

On the other hand, for any fixed $p$ ($p \in \{0, \pm 1, \ldots\}$), let $(\lambda_p, Z_p)$, where $Z_p = [X_p^t, Y_p^t]^t$, be an eigenpair of the matrix $Q(ip\omega)$. If we define $z_p(t) = Z_pe^{ip\omega t}$ so that $z_p(t) = [x_p^t(t), y_p^t(t)]^t$ ($x_p(t) = X_pe^{ip\omega t}$, $y_p(t) = Y_pe^{ip\omega t}$), then $Kz_p = \lambda_pz_p$. Let us consider the case of $K_1x_p(t)$. By (8) and the equality $e^{ip\omega T} = 1$, we have

(20)  $K_1x_p(t) = M_1^{-1}M_2X_pe^{ip\omega t} + \Bigl(\int_0^t e^{(ip\omega I + M_1^{-1}D_1)s}\,ds\Bigr) e^{-M_1^{-1}D_1 t}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)X_p$
      $\quad + e^{-M_1^{-1}D_1 t}\bigl(I - e^{-M_1^{-1}D_1 T}\bigr)^{-1}\Bigl(\int_0^T e^{(ip\omega I + M_1^{-1}D_1)s}\,ds\Bigr) e^{-M_1^{-1}D_1 T}\, M_1^{-1}(D_2 - D_1M_1^{-1}M_2)X_p$
      $= M_1^{-1}M_2X_pe^{ip\omega t} + \bigl[(ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2)X_pe^{ip\omega t}$
      $\quad - (ip\omega I + M_1^{-1}D_1)^{-1}e^{-M_1^{-1}D_1 t}M_1^{-1}(D_2 - D_1M_1^{-1}M_2)X_p\bigr]$
      $\quad + e^{-M_1^{-1}D_1 t}(ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2)X_p$
      $= [M_1^{-1}M_2 + (ip\omega I + M_1^{-1}D_1)^{-1}M_1^{-1}(D_2 - D_1M_1^{-1}M_2)]x_p(t).$

Similarly, we may calculate the terms $K_2y_p(t)$, $K_3x_p(t)$, and $K_4y_p(t)$. These calculations result in $Kz_p = \lambda_pz_p$, namely $\sigma(Q(ip\omega)) \subseteq \sigma(K)$. It follows that $\overline{\bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots\}} \subseteq \sigma(K)$, since the spectral set $\sigma(K)$ is closed. This completes the proof of Theorem 1.

Because $\rho(K) = \sup\{\rho(Q(ip\omega)) : p = 0, \pm 1, \ldots\}$, the convergence condition (i.e., $\rho(K) < 1$) of the periodic WR algorithm (2) can be derived from the above theorem. We say that a splitting of (1), such as in (2), is convergent if the resulting algorithm converges to a periodic solution of (1).

Corollary 1. The periodic WR splitting in (2) is convergent if

(21)  $\sup\{\rho(Q(ip\omega)) : p = 0, \pm 1, \ldots\} < 1.$

Often, a splitting of (1) in (2) is called a convergent splitting if it satisfies (21). Making use of Theorem 1, we may compare $\rho(K)$ and $\rho(K_\infty)$, where $K_\infty$ is the WR operator of the initial value problem on $[0, +\infty)$ for linear DAEs as in (1) under the same splitting as before. Suppose that the eigenvalues of $M_1^{-1}D_1$ have positive real parts. In $C([0,T], C^n)$ or $L^2([0,T], C^n)$ we have established the expression (see [9, Theorem 1]):

(22)  $\sigma(K_\infty) = \overline{\bigcup\{\sigma(K(\xi)) : \mathrm{Re}(\xi) \geq 0\}},$

where $\xi \in C$ and the matrix-valued symbol $K(\xi)$ satisfies

(23)  $K(\xi) = \begin{pmatrix} M_1\xi + A_1 & B_1 \\ C_1 & N_1 \end{pmatrix}^{-1}\begin{pmatrix} M_2\xi + A_2 & B_2 \\ C_2 & N_2 \end{pmatrix}.$
On the other hand, it is elementary to show the following relationship:

(24)  $Q(ip\omega) = \begin{pmatrix} ip\omega M_1 + A_1 & B_1 \\ C_1 & N_1 \end{pmatrix}^{-1}\begin{pmatrix} ip\omega M_2 + A_2 & B_2 \\ C_2 & N_2 \end{pmatrix}.$

That is,

(25)  $Q(ip\omega) = K(ip\omega).$

In other words, we can rewrite (14) and (21) as

(26)  $\sigma(K) = \overline{\bigcup\{\sigma(K(ip\omega)) : p = 0, \pm 1, \ldots\}}$

and

(27)  $\sup\{\rho(K(ip\omega)) : p = 0, \pm 1, \ldots\} < 1.$

Based on (14), (22), and (25), it follows that $\sigma(K) \subseteq \sigma(K_\infty)$. Note also that the points $\xi$ in (22) range over the whole right half of the complex plane, while the points $ip\omega$ in (14) are only some points on the imaginary axis. That is, $\sigma(K)$ is only a subset of $\sigma(K_\infty)$.

Corollary 2. If the eigenvalues of $M_1^{-1}D_1$ have positive real parts, then

(28)  $\sigma\Bigl(\begin{pmatrix} A_1 & B_1 \\ C_1 & N_1 \end{pmatrix}^{-1}\begin{pmatrix} A_2 & B_2 \\ C_2 & N_2 \end{pmatrix}\Bigr) \subseteq \sigma(K) \subseteq \sigma(K_\infty).$

By the above corollary, we obtain a lower bound and an upper bound of $\rho(K)$ as follows:

(29)  $\rho\Bigl(\begin{pmatrix} A_1 & B_1 \\ C_1 & N_1 \end{pmatrix}^{-1}\begin{pmatrix} A_2 & B_2 \\ C_2 & N_2 \end{pmatrix}\Bigr) \leq \rho(K) \leq \rho(K_\infty).$

To achieve convergence of (2), the spectral radius of the corresponding iterative operator $K$ has to be less than 1. It is often impossible to numerically compute the spectral radius of a general operator in function space without an expression for its spectrum. The expressions (14) and (13) now provide us, in theory, with a very useful tool for this computation task. By looking at (13), we see that $\lim_{|p|\to+\infty}Q(ip\omega) = \tilde Q$, where $\sigma(\tilde Q) = \sigma(M_1^{-1}M_2) \cup \sigma(N_1^{-1}N_2)$ and $\tilde Q$ appears in (15). Hence there is a positive integer $\tilde p$ such that for all $p$ with $|p| > \tilde p$ the spectra of the matrices $Q(ip\omega)$ make no further contribution to the computed value $\rho(K)$. Thus, we can save a great deal of computation, since only the finite value $\sup\{\rho(Q(ip\omega)) : p = 0, \pm 1, \ldots, \pm\tilde p\}$ needs to be computed. We learn the convergence of (2) in advance if the computed spectral diagram of $\bigcup\{\sigma(Q(ip\omega)) : p = 0, \pm 1, \ldots, \pm\tilde p\}$ is strictly located inside the unit circle of the complex plane. If so, the corresponding WR splittings are convergent, and we may adopt these convergent splittings to compute the periodic solution of (1). Besides, for a general splitting of (1) we can also deduce the convergence of (2) if its WR solutions of the initial value problem converge on the infinite time interval $[0, +\infty)$, due to Corollary 2. Furthermore, by Corollary 2 again we see that a necessary condition for convergence of periodic WR is
$\rho\Bigl(\begin{pmatrix} A_1 & B_1 \\ C_1 & N_1 \end{pmatrix}^{-1}\begin{pmatrix} A_2 & B_2 \\ C_2 & N_2 \end{pmatrix}\Bigr) < 1.$
In other words, in practice we should choose splittings which satisfy this necessary condition before continuing the computation process.
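The finite check described above is a short computation once the splitting matrices are given. Here is a sketch using the factored form (24)/(25), with hypothetical Jacobi-type splitting data (assumed for illustration, not from the paper):

```python
import numpy as np

# Hypothetical splitting of (1): Jacobi-type splitting of A, while
# M, B, C, N are not split (illustration only).
T = 1.0
omega = 2 * np.pi / T
M1 = 2.0 * np.eye(2);          M2 = np.zeros((2, 2))
A1 = np.diag([4.0, 4.0]);      A2 = np.array([[0.0, -1.0], [-1.0, 0.0]])
B1 = np.array([[1.0], [0.0]]); B2 = np.zeros((2, 1))
C1 = np.array([[0.0, 1.0]]);   C2 = np.zeros((1, 2))
N1 = np.array([[2.0]]);        N2 = np.zeros((1, 1))

def K_symbol(xi):
    """The symbol K(xi) of (23); by (25), its values at xi = ip*omega
    are the matrices Q(ip*omega)."""
    left = np.block([[M1 * xi + A1, B1], [C1, N1]])
    right = np.block([[M2 * xi + A2, B2], [C2, N2]])
    return np.linalg.solve(left, right)

p_max = 200  # truncation level p~; the tail contributes only sigma(Q~)
radii = [np.abs(np.linalg.eigvals(K_symbol(1j * p * omega))).max()
         for p in range(-p_max, p_max + 1)]
print(max(radii) < 1.0)  # condition (27): this splitting is convergent
```

Plotting the eigenvalues collected over all $p$ in the truncated range gives exactly the "spectral diagram" whose position relative to the unit circle decides convergence.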
For some typical matrices, which usually come from semi-discrete forms of PDEs in engineering applications, we may fortunately avoid computing the unknown value $\rho(K)$ of Corollary 1 directly, and we can conveniently identify the corresponding convergent splittings. The rest of this subsection serves that purpose.

Let $\Upsilon = \begin{pmatrix} A & B \\ C & N \end{pmatrix}$ and $\Upsilon_l = \begin{pmatrix} A_l & B_l \\ C_l & N_l \end{pmatrix}$. We now state a simple and useful result on the spectral radius $\rho(K)$.

Theorem 2. Assume that $M$ and $\Upsilon$ are symmetric. If $M_1$ and $\Upsilon_1$ are symmetric positive (negative) definite matrices, then

(30)  $\rho(K) = \max\{\rho(M_1^{-1}M_2), \rho(\Upsilon_1^{-1}\Upsilon_2)\}.$

Proof. We show only the case of symmetric positive definite matrices. For any fixed $p$ ($p \in \{0, \pm 1, \ldots\}$), let $\lambda(ip\omega)$ be an eigenvalue of $Q(ip\omega)$ with eigenvector $u(ip\omega)$. According to (24), we have
$\Bigl(ip\omega\begin{pmatrix} M_2 & 0 \\ 0 & 0 \end{pmatrix} + \Upsilon_2\Bigr)u(ip\omega) = \lambda(ip\omega)\Bigl(ip\omega\begin{pmatrix} M_1 & 0 \\ 0 & 0 \end{pmatrix} + \Upsilon_1\Bigr)u(ip\omega).$
It follows that

(31)  $\lambda(ip\omega) = \frac{\langle\Upsilon_2u(ip\omega), u(ip\omega)\rangle + ip\omega\langle M_2u_1(ip\omega), u_1(ip\omega)\rangle}{\langle\Upsilon_1u(ip\omega), u(ip\omega)\rangle + ip\omega\langle M_1u_1(ip\omega), u_1(ip\omega)\rangle},$

where $u(ip\omega) = [u_1^t(ip\omega), u_2^t(ip\omega)]^t$. The denominator of (31) is not zero because $M_1$ and $\Upsilon_1$ are symmetric positive definite. Noting that the inner products in (31) are real, we obtain

(32)  $|\lambda(ip\omega)|^2 = \frac{\langle\Upsilon_2u(ip\omega), u(ip\omega)\rangle^2 + p^2\omega^2\langle M_2u_1(ip\omega), u_1(ip\omega)\rangle^2}{\langle\Upsilon_1u(ip\omega), u(ip\omega)\rangle^2 + p^2\omega^2\langle M_1u_1(ip\omega), u_1(ip\omega)\rangle^2}.$

Based on (32), it holds that

(33)  $|\lambda(ip\omega)| \leq \max\Bigl\{\dfrac{|\langle\Upsilon_2u(ip\omega), u(ip\omega)\rangle|}{\langle\Upsilon_1u(ip\omega), u(ip\omega)\rangle}, \dfrac{|\langle M_2u_1(ip\omega), u_1(ip\omega)\rangle|}{\langle M_1u_1(ip\omega), u_1(ip\omega)\rangle}\Bigr\}$ if $u_1(ip\omega) \neq 0$, and
      $|\lambda(ip\omega)| \leq \dfrac{|\langle\Upsilon_2u(ip\omega), u(ip\omega)\rangle|}{\langle\Upsilon_1u(ip\omega), u(ip\omega)\rangle}$ otherwise.

It is known that for any symmetric positive definite matrix $P_1$ there exists a symmetric positive definite matrix $\tilde P_1$ such that $\tilde P_1^2 = P_1$. Thus, for a symmetric matrix $P_2$ and $x \neq 0$ we deduce

(34)  $\frac{|\langle P_2x, x\rangle|}{\langle P_1x, x\rangle} = \frac{|\langle\tilde P_1(P_1^{-1}P_2)\tilde P_1^{-1}y, y\rangle|}{\langle y, y\rangle} \leq \rho(P_1^{-1}P_2),$

where $y = \tilde P_1x$; here we use the fact that the matrix $\tilde P_1(P_1^{-1}P_2)\tilde P_1^{-1} (= \tilde P_1^{-1}P_2\tilde P_1^{-1})$ is also symmetric. By (34), inequality (33) becomes

(35)  $|\lambda(ip\omega)| \leq \max\{\rho(M_1^{-1}M_2), \rho(\Upsilon_1^{-1}\Upsilon_2)\}.$
On the other hand, we have the inequality

(36)  $\sup\{\rho(Q(ip\omega)) : p = 0, \pm 1, \ldots\} \geq \max\{\rho(\tilde Q), \rho(\Upsilon_1^{-1}\Upsilon_2)\} = \max\{\rho(M_1^{-1}M_2), \rho(N_1^{-1}N_2), \rho(\Upsilon_1^{-1}\Upsilon_2)\} \geq \max\{\rho(M_1^{-1}M_2), \rho(\Upsilon_1^{-1}\Upsilon_2)\}$

by (13), (15), and (24). This completes the proof of Theorem 2.

Corollary 3. Assume that $M$, $M_1$, $\Upsilon$, and $\Upsilon_1$ are symmetric positive (negative) definite matrices. The condition $\rho(K) < 1$ holds if and only if (iff) $2M_1 - M$ and $2\Upsilon_1 - \Upsilon$ are symmetric positive (negative) definite matrices.

Proof. We consider only the symmetric positive definite case. By (32), it is sufficient to show that for two symmetric positive definite matrices $P$ and $P_1$ the relationship $|\langle P_2x, x\rangle| < \langle P_1x, x\rangle$, where $x \neq 0$, holds iff $2P_1 - P (= P_1 + P_2)$ is a symmetric positive definite matrix, where $P = P_1 - P_2$. Because $|\langle P_2x, x\rangle| < \langle P_1x, x\rangle$ iff $-\langle P_1x, x\rangle < \langle P_1x, x\rangle - \langle Px, x\rangle$, since $\langle Px, x\rangle > 0$, it follows that $|\langle P_2x, x\rangle| < \langle P_1x, x\rangle$ iff $0 < \langle(2P_1 - P)x, x\rangle$. This is equivalent to $2P_1 - P$ being a symmetric positive definite matrix. This completes the proof of Corollary 3.

2.2. The special case: ODEs. Let us discuss a typical and important special case of (1) as follows:

(37)  $M\dot x(t) + Ax(t) = f(t), \quad x(0) = x(T), \quad t \in [0,T],$

where the matrix $M$ is nonsingular. This is a system of ODEs, which also arises in circuit simulation and in spatially semi-discrete approximations of parabolic partial differential equations (PPDEs) by finite element methods or preconditioning techniques in the spatial variables. Its periodic WR algorithm reads

(38)  $M_1\dot x^{(k)}(t) + A_1x^{(k)}(t) = M_2\dot x^{(k-1)}(t) + A_2x^{(k-1)}(t) + f(t),$
      $x^{(k)}(0) = x^{(k)}(T), \quad t \in [0,T], \quad k = 1, 2, \ldots,$

where $M = M_1 - M_2$, in which the matrix $M_1$ is nonsingular, and $A = A_1 - A_2$. The following lemma can be deduced from Lemma 2.

Lemma 3. The system of ODEs (37) has a unique periodic solution if $M^{-1}A$ is noncritical w.r.t. $T$. Similarly, for any fixed $k$ the decoupled system in (38) has a unique periodic solution if $M_1^{-1}A_1$ is noncritical w.r.t. $T$.

We assume that the system (37) and, for any fixed $k$, the system (38) have periodic solutions. As a specialization of Theorem 1, we have

Theorem 3. The spectral set of the periodic WR operator $K$ for (37) is

(39)  $\sigma(K) = \overline{\bigcup\{\sigma(M_1^{-1}M_2 + (ip\omega I + M_1^{-1}A_1)^{-1}M_1^{-1}(A_2 - A_1M_1^{-1}M_2)) : p = 0, \pm 1, \ldots\}}.$

This theorem is a direct generalization of Theorem 2.4 in [14], where the constraint that $M_1$ is the identity matrix and $M_2$ is the zero matrix had been imposed.
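The equality (30) of Theorem 2 is easy to exercise numerically. A sketch with hypothetical symmetric positive definite data (illustration only, not from the paper's examples), comparing a truncated supremum of $\rho(Q(ip\omega))$ with the predicted maximum:

```python
import numpy as np

# Hypothetical spd data: M = M1 - M2 and Upsilon = Upsilon1 - Upsilon2,
# where Upsilon = [[A, B], [C, N]] as in Theorem 2.
T = 1.0
omega = 2 * np.pi / T
n1, n = 2, 3
M1 = 2.5 * np.eye(n1); M2 = 0.5 * np.eye(n1)   # M = 2I, spd
Ups = np.array([[4.0, 1.0, 1.0],
                [1.0, 4.0, 0.0],
                [1.0, 0.0, 3.0]])              # symmetric positive definite
Ups1 = np.diag(np.diag(Ups))                   # Jacobi-style splitting
Ups2 = Ups1 - Ups

Mb1 = np.zeros((n, n)); Mb1[:n1, :n1] = M1     # [[M1, 0], [0, 0]]
Mb2 = np.zeros((n, n)); Mb2[:n1, :n1] = M2

def rho(P):
    return np.abs(np.linalg.eigvals(P)).max()

# sup_p rho(Q(ip*omega)) in the factored form (24), truncated in p.
sup_rho = max(rho(np.linalg.solve(1j * p * omega * Mb1 + Ups1,
                                  1j * p * omega * Mb2 + Ups2))
              for p in range(-200, 201))
predicted = max(rho(np.linalg.solve(M1, M2)), rho(np.linalg.solve(Ups1, Ups2)))
print(np.isclose(sup_rho, predicted))  # the equality asserted by (30)
```

For this data the supremum is attained at $p = 0$, where $Q(0) = \Upsilon_1^{-1}\Upsilon_2$; Corollary 3 is consistent with this, since $2M_1 - M$ and $2\Upsilon_1 - \Upsilon$ are both positive definite here.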
Corollary 4. The periodic WR splitting in (38) is convergent if

(40)  $\sup\{\rho(M_1^{-1}M_2 + (ip\omega I + M_1^{-1}A_1)^{-1}M_1^{-1}(A_2 - A_1M_1^{-1}M_2)) : p = 0, \pm 1, \ldots\} < 1.$

By a special case of the relationship (24), we can also rewrite (39) and (40) as

(41)  $\sigma(K) = \overline{\bigcup\{\sigma((ip\omega M_1 + A_1)^{-1}(ip\omega M_2 + A_2)) : p = 0, \pm 1, \ldots\}}$

and

(42)  $\sup\{\rho((ip\omega M_1 + A_1)^{-1}(ip\omega M_2 + A_2)) : p = 0, \pm 1, \ldots\} < 1.$

The lower and upper bounds of $\rho(K)$ are simple in this case. We define a matrix-valued symbol $K(\xi)$ as

(43)  $K(\xi) = (M_1\xi + A_1)^{-1}(M_2\xi + A_2), \quad \xi \in C.$

Corollary 5. If the eigenvalues of $M_1^{-1}A_1$ have positive real parts, then the spectral radius of the periodic WR operator $K$ for (38) satisfies

(44)  $\rho(A_1^{-1}A_2) \leq \rho(K) \leq \sup\{\rho(K(\xi)) : \mathrm{Re}(\xi) \geq 0\}.$

Under the condition of Corollary 5, we also have

(45)  $\sigma(A_1^{-1}A_2) \subseteq \sigma(K) \subseteq \overline{\bigcup\{\sigma(K(\xi)) : \mathrm{Re}(\xi) \geq 0\}}.$

Corollaries 4 and 5 imply that $\rho(A_1^{-1}A_2) < 1$ is a necessary condition for convergence of the iterative algorithm (38) for any splitting. A sufficient condition for the convergence of (38) is presented in the following corollary.

Corollary 6. Assume that $M$, $M_1$, $A$, and $A_1$ are symmetric positive (negative) definite matrices. The periodic WR algorithm (38) is convergent if $2M_1 - M$ and $2A_1 - A$ are symmetric positive (negative) definite matrices.

Now we study another special case, namely linear second-order ODEs, whose spectral expressions for periodic WR operators can be conveniently deduced from (39). The form of the equations is

(46)  $L\ddot x(t) + S\dot x(t) + Gx(t) = f(t), \quad \dot x(0) = \dot x(T), \quad x(0) = x(T), \quad t \in [0,T],$

where the matrix $L$ is nonsingular. This system also occurs in circuit simulation and mechanical models. The periodic WR algorithm of (46) may be written as

(47)  $L_1\ddot x^{(k)}(t) + S_1\dot x^{(k)}(t) + G_1x^{(k)}(t) = L_2\ddot x^{(k-1)}(t) + S_2\dot x^{(k-1)}(t) + G_2x^{(k-1)}(t) + f(t),$
      $\dot x^{(k)}(0) = \dot x^{(k)}(T), \quad x^{(k)}(0) = x^{(k)}(T), \quad t \in [0,T], \quad k = 1, 2, \ldots,$

where $L = L_1 - L_2$, in which the matrix $L_1$ is nonsingular, $S = S_1 - S_2$, and $G = G_1 - G_2$.

Let $y(t) = \dot x(t)$ and $y^{(l)}(t) = \dot x^{(l)}(t)$. Then (46) and (47), respectively, become

(48)  $\dot x(t) - y(t) = 0, \quad L\dot y(t) + Gx(t) + Sy(t) = f(t),$
      $x(0) = x(T), \quad y(0) = y(T), \quad t \in [0,T]$

and

(49)  $\dot x^{(k)}(t) - y^{(k)}(t) = 0,$
      $L_1\dot y^{(k)}(t) + G_1x^{(k)}(t) + S_1y^{(k)}(t) = L_2\dot y^{(k-1)}(t) + G_2x^{(k-1)}(t) + S_2y^{(k-1)}(t) + f(t),$
      $x^{(k)}(0) = x^{(k)}(T), \quad y^{(k)}(0) = y^{(k)}(T), \quad t \in [0,T], \quad k = 1, 2, \ldots.$
We denote $z(t) = [x^t(t), y^t(t)]^t$ and $z^{(l)}(t) = [(x^{(l)})^t(t), (y^{(l)})^t(t)]^t$. Then we can further write (48) and (49) in the forms of (37) and (38) by

(50)  $\begin{pmatrix} I & 0 \\ 0 & L \end{pmatrix}\dot z(t) + \begin{pmatrix} 0 & -I \\ G & S \end{pmatrix}z(t) = \begin{pmatrix} 0 \\ f(t) \end{pmatrix}, \quad z(0) = z(T), \quad t \in [0,T]$

and

(51)  $\begin{pmatrix} I & 0 \\ 0 & L_1 \end{pmatrix}\dot z^{(k)}(t) + \begin{pmatrix} 0 & -I \\ G_1 & S_1 \end{pmatrix}z^{(k)}(t) = \begin{pmatrix} 0 & 0 \\ 0 & L_2 \end{pmatrix}\dot z^{(k-1)}(t) + \begin{pmatrix} 0 & 0 \\ G_2 & S_2 \end{pmatrix}z^{(k-1)}(t) + \begin{pmatrix} 0 \\ f(t) \end{pmatrix},$
      $z^{(k)}(0) = z^{(k)}(T), \quad t \in [0,T], \quad k = 1, 2, \ldots.$

The following lemma states the existence of periodic solutions of (46) and of the decoupled system in (47) for any given $k$, by use of (50) and (51) together with Lemma 3. For simplicity, we omit its proof.

Lemma 4. The system of second-order ODEs (46) has a unique periodic solution if $\begin{pmatrix} 0 & -I \\ L^{-1}G & L^{-1}S \end{pmatrix}$ is noncritical w.r.t. $T$. Similarly, for any fixed $k$ the decoupled system in (47) has a unique periodic solution if $\begin{pmatrix} 0 & -I \\ L_1^{-1}G_1 & L_1^{-1}S_1 \end{pmatrix}$ is noncritical w.r.t. $T$.

For $p = 0, \pm 1, \ldots$, we denote $\Theta(ip\omega) = -p^2\omega^2L + ip\omega S + G$ and $\Theta_1(ip\omega) = -p^2\omega^2L_1 + ip\omega S_1 + G_1$. The above lemma has a direct corollary, which we now present with a complete proof.

Corollary 7. The system of second-order ODEs (46) has a unique periodic solution if for all $p$ the matrices $\Theta(ip\omega)$ are invertible. Similarly, for any fixed $k$ the decoupled system in (47) has a unique periodic solution if for all $p$ the matrices $\Theta_1(ip\omega)$ are invertible.

Proof. We prove only the unique existence of periodic solutions of (46). By Lemma 4, it is sufficient to show that the matrix $\begin{pmatrix} 0 & -I \\ L^{-1}G & L^{-1}S \end{pmatrix}$ is noncritical w.r.t. $T$. First, we know
$ip\omega\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} + \begin{pmatrix} 0 & -I \\ L^{-1}G & L^{-1}S \end{pmatrix} = \begin{pmatrix} ip\omega I & -I \\ L^{-1}G & ip\omega I + L^{-1}S \end{pmatrix} = \begin{pmatrix} I & 0 \\ 0 & L^{-1} \end{pmatrix}\begin{pmatrix} ip\omega I & -I \\ G & ip\omega L + S \end{pmatrix}.$
Next, we need to prove that the matrices $\begin{pmatrix} ip\omega I & -I \\ G & ip\omega L + S \end{pmatrix}$ for all $p$ are invertible if the matrices $\Theta(ip\omega)$ are invertible. If the matrices $\Theta(ip\omega)$ for all $p$ are invertible, it is easy to check that

(52)  $\begin{pmatrix} ip\omega I & -I \\ G & ip\omega L + S \end{pmatrix}^{-1} = \begin{pmatrix} \Theta^{-1}(ip\omega)(ip\omega L + S) & \Theta^{-1}(ip\omega) \\ -\Theta^{-1}(ip\omega)G & ip\omega\Theta^{-1}(ip\omega) \end{pmatrix}.$

In other words, the matrix $\begin{pmatrix} 0 & -I \\ L^{-1}G & L^{-1}S \end{pmatrix}$ is noncritical w.r.t. $T$ if for all $p$ the matrices $\Theta(ip\omega)$ are invertible. This completes the proof of Corollary 7.
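Corollary 7 reduces the existence question to the invertibility of finitely many matrices in practice, since for large $|p|$ the term $-p^2\omega^2L$ dominates. A sketch of such a check, with hypothetical data (not from the paper's examples):

```python
import numpy as np

# Hypothetical second-order system data for (46).
T = 1.0
omega = 2 * np.pi / T
L = np.eye(2)
S = 0.4 * np.eye(2)
G = np.array([[5.0, 1.0], [1.0, 5.0]])

def theta(p):
    """Theta(ip*omega) = -p^2 omega^2 L + ip*omega S + G."""
    return -(p * omega) ** 2 * L + 1j * p * omega * S + G

# For large |p|, -p^2 omega^2 L dominates (L is nonsingular), so only
# finitely many p need an explicit invertibility check.
ok = all(np.linalg.svd(theta(p), compute_uv=False).min() > 1e-9
         for p in range(-100, 101))
print(ok)  # True: (46) has a unique periodic solution by Corollary 7
```

The smallest singular value is used here as a robust numerical proxy for invertibility of each $\Theta(ip\omega)$.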
Now, we assume that (46) and, for any fixed $k$, the system (47) have periodic solutions; namely, the matrices $\Theta(ip\omega)$ and $\Theta_1(ip\omega)$ are invertible for all $p$.

Theorem 4. The spectral set of the periodic WR operator $K$ for (47) is

(53)  $\sigma(K) = \overline{\bigcup\Bigl\{\sigma\Bigl(\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}\Bigr) : p = 0, \pm 1, \ldots\Bigr\}}.$

Proof. Based on (51) and referring to (38), we have
$M_1 = \begin{pmatrix} I & 0 \\ 0 & L_1 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 0 & -I \\ G_1 & S_1 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0 & 0 \\ 0 & L_2 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 0 \\ G_2 & S_2 \end{pmatrix}.$
Furthermore, we know
$ip\omega M_1 + A_1 = \begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}, \quad ip\omega M_2 + A_2 = \begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}.$
Namely, for $p = 0, \pm 1, \ldots$, it follows that
$(ip\omega M_1 + A_1)^{-1}(ip\omega M_2 + A_2) = \begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}.$
Thus, (53) is directly deduced from (41). This completes the proof of Theorem 4.

Corollary 8. The periodic WR splitting in (47) is convergent if

(54)  $\sup\Bigl\{\rho\Bigl(\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}\Bigr) : p = 0, \pm 1, \ldots\Bigr\} < 1.$

The following theorem provides another expression of $\sigma(K)$ in (53). For $p = 0, \pm 1, \ldots$, we also denote $\Theta_2(ip\omega) = -p^2\omega^2L_2 + ip\omega S_2 + G_2$.

Theorem 5. For all $p$, if the matrices $\Theta_1(ip\omega)$ are invertible, then we have

(55)  $\bigcup\Bigl\{\sigma\Bigl(\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}\Bigr) : p = 0, \pm 1, \ldots\Bigr\} = \bigcup\{\sigma(\Theta_1^{-1}(ip\omega)\Theta_2(ip\omega)) : p = 0, \pm 1, \ldots\} \cup \{0\}.$

Proof. In order to prove the theorem, it is sufficient to show that the following relationship holds for any fixed $p$:

(56)  $\sigma\Bigl(\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}\Bigr) = \sigma(\Theta_1^{-1}(ip\omega)\Theta_2(ip\omega)) \cup \{0\}.$

By (52), similarly we know
$\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1} = \begin{pmatrix} \Theta_1^{-1}(ip\omega)(ip\omega L_1 + S_1) & \Theta_1^{-1}(ip\omega) \\ -\Theta_1^{-1}(ip\omega)G_1 & ip\omega\Theta_1^{-1}(ip\omega) \end{pmatrix}.$
Furthermore, we have

(57)  $\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix} = \begin{pmatrix} \Theta_1^{-1}(ip\omega)G_2 & \Theta_1^{-1}(ip\omega)(ip\omega L_2 + S_2) \\ ip\omega\Theta_1^{-1}(ip\omega)G_2 & ip\omega\Theta_1^{-1}(ip\omega)(ip\omega L_2 + S_2) \end{pmatrix}.$
For $p = 0$, from (57) we obtain
$\begin{pmatrix} 0 & -I \\ G_1 & S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & S_2 \end{pmatrix} = \begin{pmatrix} G_1^{-1}G_2 & G_1^{-1}S_2 \\ 0 & 0 \end{pmatrix}$
since $\Theta_1(0) = G_1$. Thus, we have
$\sigma\Bigl(\begin{pmatrix} 0 & -I \\ G_1 & S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & S_2 \end{pmatrix}\Bigr) = \sigma(G_1^{-1}G_2) \cup \{0\} = \sigma(\Theta_1^{-1}(0)\Theta_2(0)) \cup \{0\}$
since $\Theta_2(0) = G_2$. This shows that (56) is valid for $p = 0$.

For $p \neq 0$, we first know $\begin{pmatrix} ip\omega I & 0 \\ -ip\omega I & I \end{pmatrix}^{-1} = \begin{pmatrix} \frac{1}{ip\omega}I & 0 \\ I & I \end{pmatrix}$. For (57), we do a basic calculation to yield
$\begin{pmatrix} ip\omega I & 0 \\ -ip\omega I & I \end{pmatrix}\begin{pmatrix} \Theta_1^{-1}(ip\omega)G_2 & \Theta_1^{-1}(ip\omega)(ip\omega L_2 + S_2) \\ ip\omega\Theta_1^{-1}(ip\omega)G_2 & ip\omega\Theta_1^{-1}(ip\omega)(ip\omega L_2 + S_2) \end{pmatrix}\begin{pmatrix} \frac{1}{ip\omega}I & 0 \\ I & I \end{pmatrix} = \begin{pmatrix} \Theta_1^{-1}(ip\omega)\Theta_2(ip\omega) & ip\omega\Theta_1^{-1}(ip\omega)(ip\omega L_2 + S_2) \\ 0 & 0 \end{pmatrix}.$
Namely, from (57) and the above similarity relationship we have
$\sigma\Bigl(\begin{pmatrix} ip\omega I & -I \\ G_1 & ip\omega L_1 + S_1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 0 \\ G_2 & ip\omega L_2 + S_2 \end{pmatrix}\Bigr) = \sigma(\Theta_1^{-1}(ip\omega)\Theta_2(ip\omega)) \cup \{0\}.$
This shows that (56) is also valid for $p \neq 0$, and completes the proof of Theorem 5.

By invoking (55), we can rewrite (53) and (54) as

(58)  $\sigma(K) = \overline{\bigcup\{\sigma((-p^2\omega^2L_1 + ip\omega S_1 + G_1)^{-1}(-p^2\omega^2L_2 + ip\omega S_2 + G_2)) : p = 0, \pm 1, \ldots\} \cup \{0\}}$

and

(59)  $\sup\{\rho((-p^2\omega^2L_1 + ip\omega S_1 + G_1)^{-1}(-p^2\omega^2L_2 + ip\omega S_2 + G_2)) : p = 0, \pm 1, \ldots\} < 1.$
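The equivalence between the $2n \times 2n$ block symbol in (53) and the reduced symbol $\Theta_1^{-1}(ip\omega)\Theta_2(ip\omega)$ in (58) can be checked numerically. The sketch below uses a hypothetical splitting with $L_1 = L$, $S_1 = S$, $G_1 = \mathrm{diag}(G)$, and $L_2 = S_2 = 0$ (illustration data only):

```python
import numpy as np

# Hypothetical second-order splitting for (47).
T = 1.0
omega = 2 * np.pi / T
L = np.eye(2); S = 0.4 * np.eye(2)
G = np.array([[5.0, 1.0], [1.0, 5.0]])
L1, L2 = L, np.zeros((2, 2))
S1, S2 = S, np.zeros((2, 2))
G1, G2 = np.diag(np.diag(G)), np.diag(np.diag(G)) - G

def theta(Lm, Sm, Gm, p):
    return -(p * omega) ** 2 * Lm + 1j * p * omega * Sm + Gm

for p in range(0, 4):
    # Spectral radius of the 2n x 2n block symbol from (53) ...
    left = np.block([[1j * p * omega * np.eye(2), -np.eye(2)],
                     [G1, 1j * p * omega * L1 + S1]])
    right = np.block([[np.zeros((2, 2)), np.zeros((2, 2))],
                      [G2, 1j * p * omega * L2 + S2]])
    rho_block = np.abs(np.linalg.eigvals(np.linalg.solve(left, right))).max()
    # ... agrees with rho(Theta1^{-1} Theta2) from (58), since the extra
    # spectrum contributed by the block form is only {0}.
    rho_theta = np.abs(np.linalg.eigvals(
        np.linalg.solve(theta(L1, S1, G1, p), theta(L2, S2, G2, p)))).max()
    assert np.isclose(rho_block, rho_theta)
print("ok")
```

For this data the supremum in (59) is attained at $p = 0$ and is well below 1, so the splitting is convergent by Corollary 8.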
3. Spectra of discrete periodic WR operators and finite-difference for solving periodic WR solutions

In this section we consider the discrete case of Section 2 and give a finite-difference formula for solving the periodic WR solution of (1).
3.1. Spectra of discrete periodic WR operators. Now we discuss the application of the linear multistep method in the periodic WR algorithm (2). For this purpose, let us fix the time increment $\tau = T/N$ and discretize (2) by a linear multistep method whose characteristic polynomials are $a(\xi)$ and $b(\xi)$, i.e., $a(\xi) = \sum_{j=0}^m \alpha_j\xi^j$ and $b(\xi) = \sum_{j=0}^m \beta_j\xi^j$, to obtain

(60)  $\frac{1}{\tau}\sum_{j=0}^m \alpha_jM_1x^{(k)}_{p-m+j} + \sum_{j=0}^m \beta_jA_1x^{(k)}_{p-m+j} + \sum_{j=0}^m \beta_jB_1y^{(k)}_{p-m+j}$
      $\qquad = \frac{1}{\tau}\sum_{j=0}^m \alpha_jM_2x^{(k-1)}_{p-m+j} + \sum_{j=0}^m \beta_jA_2x^{(k-1)}_{p-m+j} + \sum_{j=0}^m \beta_jB_2y^{(k-1)}_{p-m+j} + \sum_{j=0}^m \beta_j(f_1)_{p-m+j},$
      $C_1x^{(k)}_p + N_1y^{(k)}_p = C_2x^{(k-1)}_p + N_2y^{(k-1)}_p + (f_2)_p, \quad p = 0, \pm 1, \ldots, \quad k = 1, 2, \ldots.$

In the above algorithm we assume that $a(\xi)$ and $b(\xi)$ have no common roots, where $a(1) = 0$ and $\dot a(1) = b(1)$. In practical codes one adopts a convergent linear multistep method to solve the DAEs of (2). A special case of the linear multistep method is the backward differentiation formula (BDF), where $b(\xi) = \xi^m$. The $m$-step constant-step-size BDF method converges with order $O(\tau^m)$ for $m < 7$ [1].

Let $x^{(k)}_\tau$ and $y^{(k)}_\tau$ stand for the infinite sequences $\{x^{(k)}_p\}_{p=-\infty}^\infty$ and $\{y^{(k)}_p\}_{p=-\infty}^\infty$, and similarly for $x^{(k-1)}_\tau$, $y^{(k-1)}_\tau$, $(f_1)_\tau$, and $(f_2)_\tau$. These infinite sequences are $N$-periodic; for example, $x^{(k)}_{p+N} = x^{(k)}_p$ ($p = 0, \pm 1, \ldots$) for the sequence $\{x^{(k)}_p\}_{p=-\infty}^\infty$. Now we simply rewrite (60) as

(61)  $\frac{1}{\tau}aM_1x^{(k)}_\tau + bA_1x^{(k)}_\tau + bB_1y^{(k)}_\tau = \frac{1}{\tau}aM_2x^{(k-1)}_\tau + bA_2x^{(k-1)}_\tau + bB_2y^{(k-1)}_\tau + b(f_1)_\tau,$
      $C_1x^{(k)}_\tau + N_1y^{(k)}_\tau = C_2x^{(k-1)}_\tau + N_2y^{(k-1)}_\tau + (f_2)_\tau, \quad k = 1, 2, \ldots,$

where we denote the infinite sequences
$\Bigl\{\sum_{j=0}^m \alpha_jM_sx^{(l)}_{p-m+j}\Bigr\}_{p=-\infty}^\infty, \quad \Bigl\{\sum_{j=0}^m \beta_jA_sx^{(l)}_{p-m+j}\Bigr\}_{p=-\infty}^\infty, \quad \Bigl\{\sum_{j=0}^m \beta_jB_sy^{(l)}_{p-m+j}\Bigr\}_{p=-\infty}^\infty$
by $aM_sx^{(l)}_\tau$, $bA_sx^{(l)}_\tau$, and $bB_sy^{(l)}_\tau$, respectively.

Definition 2. For an $N$-periodic sequence $w_\tau$, its discrete Fourier coefficients are
$\tilde w_p = \frac{1}{N}\sum_{q=0}^{N-1}w_qe^{-ipq(2\pi/N)}, \quad p = 0, \pm 1, \ldots.$

By use of Definition 2, we know that
$w_\tau = \sum_{q=0}^{N-1}\tilde w_q\,\epsilon_{\tau,q},$
where $\epsilon_{\tau,q} = \{e^{ipq(2\pi/N)}\}_{p=-\infty}^\infty$.
Condition (S). For the characteristic polynomials $a(\xi)$ and $b(\xi)$, we assume that the matrix

(62)
$$\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_1+A_1 & B_1\\ C_1 & N_1\end{pmatrix}^{-1},\qquad q=0,1,\ldots,N-1,$$

exists for the splitting matrices $M_1$, $A_1$, $B_1$, $C_1$, and $N_1$, in which $\xi=e^{i(2\pi/N)}$.

Let $z_\tau^{(l)}=[(x_\tau^{(l)})^t,(y_\tau^{(l)})^t]^t$. If Condition (S) holds, the solution of (61) for any fixed k can be written as

(63) $z_\tau^{(k)}=\mathcal{K}_\tau z_\tau^{(k-1)}+\varphi_\tau,$

where
$$\mathcal{K}_\tau z_\tau=\sum_{q=0}^{N-1}\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_1+A_1 & B_1\\ C_1 & N_1\end{pmatrix}^{-1}\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_2+A_2 & B_2\\ C_2 & N_2\end{pmatrix}\tilde z_q\, e_{\tau,q}$$

and
$$\varphi_\tau=\sum_{q=0}^{N-1}\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_1+A_1 & B_1\\ C_1 & N_1\end{pmatrix}^{-1}\tilde f_q\, e_{\tau,q},$$

in which $\tilde f_q=[(\tilde f_1)_q^t,(\tilde f_2)_q^t]^t$. Using the same approach as in [14], we can show the following theorem (the proof is omitted).

Theorem 6. Under Condition (S), the spectral set of the discrete periodic WR operator $\mathcal{K}_\tau$ in (63) is

(64)
$$\sigma(\mathcal{K}_\tau)=\bigcup\left\{\sigma\!\left(\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_1+A_1 & B_1\\ C_1 & N_1\end{pmatrix}^{-1}\begin{pmatrix}\dfrac{1}{\tau}\dfrac{a}{b}(\xi^q)M_2+A_2 & B_2\\ C_2 & N_2\end{pmatrix}\right):\ q=0,1,\ldots,N-1\right\},$$

where $\xi=e^{i(2\pi/N)}$.

3.2. Finite-difference for solving periodic WR solutions. In this subsection we compute the iterative waveforms $[(x^{(k)})^t(t),(y^{(k)})^t(t)]^t$ (k = 1, 2, ...) of (2) at m + 1 time-points, $t_0=0,t_1,t_2,\ldots,t_m=T$, with a constant step-size τ. For any fixed k we take the implicit Euler method to approximate the derivatives $\dot x^{(k)}$ and $\dot x^{(k-1)}$ in (2). As a simple case of the linear multistep method presented in Subsection 3.1, we may now write out the iteration matrix for discrete waveforms without using the discrete Fourier series technique. We will make use of this form in the computations of the next section. For this purpose, we denote $X^{(l)}=[(x^{(l)})^t(t_1),\ldots,(x^{(l)})^t(t_m)]^t\in\mathbb{R}^{mn_1}$, $Y^{(l)}=[(y^{(l)})^t(t_1),\ldots,(y^{(l)})^t(t_m)]^t\in\mathbb{R}^{mn_2}$, $F_1=[f_1^t(t_1),\ldots,f_1^t(t_m)]^t\in\mathbb{R}^{mn_1}$, and $F_2=[f_2^t(t_1),\ldots,f_2^t(t_m)]^t\in\mathbb{R}^{mn_2}$. Note that the ordering of the discrete equations differs from that of Subsection 3.1 for the differential part and the algebraic part. By $x^{(l)}(t_0)=x^{(l)}(t_m)$, the discrete form of (2) is

(65)
$$H_1X^{(k)}+H_2Y^{(k)}=J_1X^{(k-1)}+J_2Y^{(k-1)}+\tau F_1,$$
$$H_3X^{(k)}+H_4Y^{(k)}=J_3X^{(k-1)}+J_4Y^{(k-1)}+F_2,\qquad k=1,2,\ldots,$$
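Theorem 6's spectral formula (64) can be evaluated numerically. Below is a minimal sketch for the one-step BDF (implicit Euler), for which $a(\xi)=\xi-1$ and $b(\xi)=\xi$, so $\frac{a}{b}(\xi^q)=1-\xi^{-q}$. The scalar splitting data are illustrative values, not from the paper; they mimic a Jacobi-style splitting in which the couplings B and C are relaxed ($B_1=C_1=0$, hence $B_2=-B$, $C_2=-C$).

```python
import numpy as np

# Illustrative 1+1-dimensional splitting data (all numbers hypothetical).
M1, A1, N1 = np.array([[2.0]]), np.array([[3.0]]), np.array([[2.0]])
M2, A2, N2 = np.zeros((1, 1)), np.zeros((1, 1)), np.zeros((1, 1))
B1, C1 = np.zeros((1, 1)), np.zeros((1, 1))
B2, C2 = -np.array([[1.0]]), -np.array([[1.0]])

T, N = 2 * np.pi, 64
tau = T / N
xi = np.exp(1j * 2 * np.pi / N)

def block(r, M, A, B, C, Nmat):
    """[[ r*M + A, B ], [ C, N ]] with r = (1/tau) * (a/b)(xi^q)."""
    return np.vstack([np.hstack([r * M + A, B]), np.hstack([C, Nmat])])

# Implicit Euler: (a/b)(xi^q) = 1 - xi^{-q}.
spectrum = []
for q in range(N):
    r = (1.0 - xi ** (-q)) / tau
    K_q = np.linalg.solve(block(r, M1, A1, B1, C1, N1),
                          block(r, M2, A2, B2, C2, N2))
    spectrum.extend(np.linalg.eigvals(K_q))

rho = max(abs(z) for z in spectrum)
print(rho < 1.0)  # True: this splitting satisfies the convergence condition
```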
where
$$H_1=\begin{pmatrix}M_1+\tau A_1 & & & -M_1\\ -M_1 & M_1+\tau A_1 & &\\ & \ddots & \ddots &\\ & & -M_1 & M_1+\tau A_1\end{pmatrix},\quad
H_2=\begin{pmatrix}\tau B_1 & &\\ & \ddots &\\ & & \tau B_1\end{pmatrix},\quad
H_3=\begin{pmatrix}C_1 & &\\ & \ddots &\\ & & C_1\end{pmatrix},\quad
H_4=\begin{pmatrix}N_1 & &\\ & \ddots &\\ & & N_1\end{pmatrix},$$

and
$$J_1=\begin{pmatrix}M_2+\tau A_2 & & & -M_2\\ -M_2 & M_2+\tau A_2 & &\\ & \ddots & \ddots &\\ & & -M_2 & M_2+\tau A_2\end{pmatrix},\quad
J_2=\begin{pmatrix}\tau B_2 & &\\ & \ddots &\\ & & \tau B_2\end{pmatrix},\quad
J_3=\begin{pmatrix}C_2 & &\\ & \ddots &\\ & & C_2\end{pmatrix},\quad
J_4=\begin{pmatrix}N_2 & &\\ & \ddots &\\ & & N_2\end{pmatrix},$$

in which the blocks $-M_1$ and $-M_2$ in the upper-right corners come from the periodicity condition $x(t_0)=x(t_m)$.
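The block matrices above can be assembled directly and the spectral-radius test applied numerically. The sketch below uses illustrative scalar splitting data (all numbers hypothetical, chosen to mimic a Jacobi-style splitting with relaxed couplings); `periodic_blocks` is a helper name introduced here, not from the paper.

```python
import numpy as np

def periodic_blocks(M, A, B, C, Nmat, tau, m):
    """Assemble the [[H1, H2], [H3, H4]]-type block matrix of (65) for
    scalar M, A, B, C, N: (M + tau*A) on the diagonal of the first block,
    -M on its subdiagonal and in the top-right corner (periodic wrap-around
    from x(t_0) = x(t_m)); tau*B, C, N fill the other block diagonals."""
    H1 = np.diag(np.full(m, M + tau * A)) + np.diag(np.full(m - 1, -M), -1)
    H1[0, m - 1] = -M
    H2 = np.eye(m) * tau * B
    H3 = np.eye(m) * C
    H4 = np.eye(m) * Nmat
    return np.block([[H1, H2], [H3, H4]])

tau, m = 0.1, 50
H = periodic_blocks(2.0, 3.0, 0.0, 0.0, 2.0, tau, m)    # M1, A1, B1, C1, N1
J = periodic_blocks(0.0, 0.0, -1.0, -1.0, 0.0, tau, m)  # M2, A2, B2, C2, N2

# Spectral-radius convergence test for the discrete iteration.
rho = max(abs(np.linalg.eigvals(np.linalg.solve(H, J))))
print(rho < 1.0)  # True for this splitting
```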
For any fixed step-size τ, the convergence condition of the above iterative algorithm is given by the following theorem.

Theorem 7. The discrete algorithm (65) is convergent if

(66)
$$\rho\!\left(\begin{pmatrix}H_1 & H_2\\ H_3 & H_4\end{pmatrix}^{-1}\begin{pmatrix}J_1 & J_2\\ J_3 & J_4\end{pmatrix}\right)<1.$$

4. Numerical experiments

Our numerical experiments are based on three examples, which have the forms of (1), (37), and (46), respectively. We define the iterative error as the sum of the squared differences of successive waveforms taken over all time-points.

4.1. Example 1. The first example is the test circuit shown in Figure 1, where n is even. This circuit is taken from [15]. It is a general form of a uniformly dissipative low-pass ladder filter circuit with a current-source input and a voltage output.
Figure 1. A linear periodic DAE circuit with n even.

The circuit equations have the form (1) with $x(t)=[i_1(t),v_3(t),i_3(t),v_5(t),\ldots,i_{n-1}(t),v_{n+1}(t)]^t\in\mathbb{R}^n$, $y(t)=[v_1(t),v_2(t),v_4(t),\ldots,v_{n-2}(t),v_n(t)]^t\in\mathbb{R}^{n/2+1}$, $f_1(t)=[0,\ldots,0,I(t)]^t\in\mathbb{R}^n$, and $f_2(t)=[0,\ldots,0]^t\in\mathbb{R}^{n/2+1}$, for any given $t\in[0,T]$.
Now M, A, B, C, and N in (1) are concrete matrices. For $M,A\in\mathbb{R}^{n\times n}$ we have $M=\mathrm{diag}(L_1,C_2,L_3,C_4,\ldots,L_{n-1},C_n)$ and $A=\mathrm{diag}(\tilde A_1,\tilde A_2,\ldots,\tilde A_{n/2})$, where
$$\tilde A_i=\begin{pmatrix}0 & -1\\ 1 & G_{2i}+\dfrac{1}{R_{2i+1}}\end{pmatrix}\ \Big(i=1,2,\ldots,\frac{n}{2}-1\Big),\qquad
\tilde A_{n/2}=\begin{pmatrix}0 & -1\\ 1 & G_n\end{pmatrix}.$$

The matrices $B\in\mathbb{R}^{n\times(n/2+1)}$ and $C\in\mathbb{R}^{(n/2+1)\times n}$ are
$$B=\begin{pmatrix}0 & 1 & & &\\ & & -\dfrac{1}{R_3} & &\\ & & 1 & &\\ & & & \ddots &\\ & & & & -\dfrac{1}{R_{n-1}}\\ & & & & 1\\ 0 & & & & 0\end{pmatrix},\qquad
C=\begin{pmatrix}0 & & & & &\\ -1 & 0 & & & &\\ & -\dfrac{1}{R_3} & -1 & & &\\ & & \ddots & \ddots & &\\ & & & -\dfrac{1}{R_{n-1}} & -1 & 0\end{pmatrix}.$$

For $N\in\mathbb{R}^{(n/2+1)\times(n/2+1)}$ we have $N=\mathrm{diag}(\tilde N_1,\tilde N_2)$, where
$$\tilde N_2=\mathrm{diag}\Big(\frac{1}{R_3},\frac{1}{R_5},\ldots,\frac{1}{R_{n-1}}\Big)\in\mathbb{R}^{(n/2-1)\times(n/2-1)}$$
and
$$\tilde N_1=\begin{pmatrix}\dfrac{1}{R_L}+\dfrac{1}{R_1} & -\dfrac{1}{R_1}\\ -\dfrac{1}{R_1} & \dfrac{1}{R_1}\end{pmatrix}.$$
We seek its periodic responses by periodic WR. In our computations we use the discrete algorithm (65). For simplicity we let n = 10 and T = 2π, and set all circuit parameters to 1. The boundary values satisfy $x(0)=x(2\pi)$ and $y(0)=-N^{-1}Cx(0)\ (=y(2\pi))$. For this example we use the Jacobi splitting for the matrices M and N; that is, in the symbols of (2), $M_1$ and $N_1$ are the diagonal parts of M and N. The
Figure 2. Sampled values of $\sigma(\mathcal{K})$ (left) and $\sigma(\mathcal{K}_\infty)$ (right) for Case I in Example 1.

matrices $B_1$ and $C_1$ are
$$B_1=\begin{pmatrix}0 & 1 & & &\\ & & & &\\ & & 1 & &\\ & & & \ddots &\\ & & & & 1\\ & & & & 0\end{pmatrix}\in\mathbb{R}^{10\times 6},\qquad
C_1=\begin{pmatrix}0 & & & &\\ -1 & 0 & & &\\ & & -1 & &\\ & & & \ddots &\\ & & & & -1\ \ 0\end{pmatrix}\in\mathbb{R}^{6\times 10}.$$

For the matrix A we treat its splitting in two ways: (a) Case I: we simply do not split it, i.e., $A_1=A$; (b) Case II:
$$A_1=\begin{pmatrix}2 & & & &\\ & 1 & & &\\ & & \ddots & &\\ & & & 1 &\\ & & & & 2\end{pmatrix}\in\mathbb{R}^{10\times 10}.$$
Figure 3. Sampled values of $\sigma(\mathcal{K})$ (left) and $\sigma(\mathcal{K}_\infty)$ (right) for Case II in Example 1.

By (14) and (22), in the present situation we let $\xi=i\zeta$ ($\zeta\in\mathbb{R}$). The spectral drawings of $\sigma(\mathcal{K})$ and $\sigma(\mathcal{K}_\infty)$, where
$$\sigma(\mathcal{K}_\infty)=\bigcup\{\sigma(K(i\zeta)):\zeta\in\mathbb{R}\},$$
are given in Figure 2 for Case I and in Figure 3 for Case II, in which $p=0,\pm 1,\ldots,\pm 50$ and $\zeta=0,\pm 0.1,\ldots,\pm 49.9,\pm 50$. The spectral set for the case p = 0 is also indicated with the symbol "o" in the right parts of Figures 2 and 3. To compute the periodic WR solution of the system, we let the input function $I(t)=I(t+2\pi)$ satisfy

(67)
$$I(t)=\begin{cases}t, & 0\le t\le 0.5\pi,\\ 0.5\pi, & 0.5\pi\le t\le 1.5\pi,\\ 2\pi-t, & 1.5\pi\le t\le 2\pi.\end{cases}$$
The time-step is 0.02π sec and the initial guess is the zero function. The convergence results and two approximate waveforms for the voltage $v_1(t)$ are shown in Figure 4.
Figure 4. Computed results for Example 1. Left: Periodic WR iterations (dashed line for Case I and solid line for Case II). Right: Approximate waveforms (k = 7 and k = 8) of the voltage $v_1(t)$ (solid line).
4.2. Example 2. If we study numerical solutions of linear PPDEs by a spatial finite element method, we meet the form of (37). Let us consider the one-dimensional heat equation

(68)
$$\frac{\partial u(x,t)}{\partial t}-\frac{\partial^2 u(x,t)}{\partial x^2}=\sin(2\pi x)\sin(t),\qquad (x,t)\in[0,1]\times[0,2\pi],$$
$$u(0,t)=u(1,t)=0,\quad t\in[0,2\pi],\qquad u(x,0)=u(x,2\pi),\quad x\in[0,1].$$

The equation is spatially discretized on a grid $\Omega_h=\{x_j=jh\,|\,j=0,1,\ldots,N_s\}$ with constant mesh size $h=1/N_s$. The sequence of linear basis functions $\{\varphi_j\}_{j=1}^{N_s-1}$ is defined as

(69)
$$\varphi_j(x)=\begin{cases}1+\dfrac{x-x_j}{h}, & x_{j-1}\le x\le x_j,\\[4pt] 1-\dfrac{x-x_j}{h}, & x_j\le x\le x_{j+1},\\[4pt] 0, & \text{otherwise}.\end{cases}$$

The solution of the weak formulation of (68) is $u_h(x,t)=\sum_{j=1}^{N_s-1}\varphi_j(x)\mu_j(t)$. The function vector to be computed, $x(t)=[\mu_1(t),\ldots,\mu_{N_s-1}(t)]^t$, should
Figure 5. Sampled values of $\sigma(\mathcal{K})$ for Example 2 under the Jacobi splitting (left) and the Gauss-Seidel splitting (right).

satisfy (37), where
$$M=\frac{h}{6}\begin{pmatrix}4 & 1 & &\\ 1 & \ddots & \ddots &\\ & \ddots & \ddots & 1\\ & & 1 & 4\end{pmatrix}\in\mathbb{R}^{(N_s-1)\times(N_s-1)},\qquad
A=\frac{1}{h}\begin{pmatrix}2 & -1 & &\\ -1 & \ddots & \ddots &\\ & \ddots & \ddots & -1\\ & & -1 & 2\end{pmatrix}\in\mathbb{R}^{(N_s-1)\times(N_s-1)},$$

and $f(t)=-\dfrac{\sin(t)}{(2\pi)^2 h}[f_1,\ldots,f_{N_s-1}]^t$, in which $f_j=\sin(2(j-1)h\pi)-2\sin(2jh\pi)+\sin(2(j+1)h\pi)$ ($j=1,\ldots,N_s-1$).

In our numerical experiments we set h = 1/11, namely $N_s=11$. We adopt the Jacobi splitting and the Gauss-Seidel splitting, respectively, to solve the periodic WR solution for this example. The spectra of $\mathcal{K}$ are pictured in Figure 5, where the left part stands for Jacobi and the right part for Gauss-Seidel, in which $p=0,\pm 1,\ldots,\pm 100$. We can see that their spectral radii nearly approach 1. To observe the convergence behavior of periodic WR, we let the time-step be 0.05π sec and the initial guess be the zero function. The periodic WR iteration stops when the residual error falls below $10^{-5}$. The iteration results and two approximate waveforms for $\mu_4(t)$ are shown in Figure 6. For this example the Jacobi iteration number is smaller than the Gauss-Seidel iteration number.
Figure 6. Computed results for Example 2. Left: Periodic WR iterations (dashed line for Gauss-Seidel and solid line for Jacobi). Right: Approximate waveforms (k = 2 and k = 3) of $\mu_4(t)$ (solid line).

4.3. Example 3. We also consider the periodic response of a linear second-order system, because second-order systems often describe mechanical models [4]. Using the symbols of (46), we let the matrix $L=\mathrm{diag}(1,2,\ldots,1,2)\in\mathbb{R}^{10\times 10}$ and the function $f(t)=[\cos(2\pi t),0,\ldots,0]^t\in\mathbb{R}^{10}$ for any given $t\in[0,1]$. The matrices $S\in\mathbb{R}^{10\times 10}$ and $G\in\mathbb{R}^{10\times 10}$ are the symmetric band matrices
$$S=\begin{pmatrix}
2 & -2 & 1 & 2 & 1 & & & & &\\
-2 & 2 & -2 & 1 & 2 & 1 & & & &\\
1 & -2 & 2 & -2 & 1 & 2 & 1 & & &\\
2 & 1 & -2 & 2 & -2 & 1 & 2 & 1 & &\\
1 & 2 & 1 & -2 & 2 & -2 & 1 & 2 & 1 &\\
& 1 & 2 & 1 & -2 & 2 & -2 & 1 & 2 & 1\\
& & 1 & 2 & 1 & -2 & 2 & -2 & 1 & 2\\
& & & 1 & 2 & 1 & -2 & 2 & -2 & 1\\
& & & & 1 & 2 & 1 & -2 & 2 & -2\\
& & & & & 1 & 2 & 1 & -2 & 2
\end{pmatrix},\qquad
G=\begin{pmatrix}2 & -0.5 & &\\ -0.5 & \ddots & \ddots &\\ & \ddots & \ddots & -0.5\\ & & -0.5 & 2\end{pmatrix}.$$

We take two splittings, Jacobi and Gauss-Seidel, to solve its periodic response. Let the time-step be 0.01 sec and the initial waveform be the zero function.
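The Jacobi and Gauss-Seidel splittings used throughout the examples can be sketched generically. The helpers below are illustrative, not from the paper; they are applied to a tridiagonal matrix whose values mirror the damping matrix G of this example, and the spectral radius of the classical iteration matrix is checked for each splitting.

```python
import numpy as np

def jacobi_split(A):
    """A = A1 - A2 with A1 the diagonal part of A (Jacobi)."""
    A1 = np.diag(np.diag(A))
    return A1, A1 - A

def gauss_seidel_split(A):
    """A = A1 - A2 with A1 the lower-triangular part of A (Gauss-Seidel)."""
    A1 = np.tril(A)
    return A1, A1 - A

# Illustrative tridiagonal matrix: 2 on the diagonal, -0.5 off it.
G = (np.diag(np.full(10, 2.0)) + np.diag(np.full(9, -0.5), 1)
     + np.diag(np.full(9, -0.5), -1))

for split in (jacobi_split, gauss_seidel_split):
    G1, G2 = split(G)
    assert np.allclose(G1 - G2, G)  # the splitting is consistent
    # Spectral radius of the classical iteration matrix G1^{-1} G2.
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(G1, G2))))
    print(rho < 1.0)  # True for both splittings of this matrix
```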
Figure 7. Computed results for Example 3. Left: Periodic WR iterations (dashed line for Gauss-Seidel and solid line for Jacobi). Right: Approximate (dashed line for k = 1 and dotted line for k = 2) and exact (solid line) phase drawings. We use the same stopping criteria as in Examples 1 and 2. Numerical results on the periodic WR convergence are presented in Figure 7. The approximate and actual phase drawings of w1 and w2 , where w1 (t) = x1 (t) and w2 (t) = x˙ 1 (t) on [0, 1], are also pictured in Figure 7. 5. Conclusions We have successfully deduced an analytic expression of the spectral set on a periodic WR operator for a linear system of DAEs under a normal periodic constraint. The convergent splittings of the periodic WR algorithm on periodic solutions can be conveniently chosen from this useful expression; namely the periodic WR algorithm converges to the exact periodic response if the supremum value of spectral radii for a series of complex matrices is less than 1. The convergent condition of the paper is necessary and sufficient for this kind of relaxation-based algorithms. The practical partitions or splittings of dynamic systems can benefit from the analytic expression of the periodic WR operator; for example we may choose convergent splittings by directly viewing the spectral drawings of these operators. The simple and powerful finite-difference method is adopted to compute the discrete waveforms for these systems of DAEs. Numerical experiments on three examples further illustrate that the spectral expression is essential for WR in transient computations of periodic responses.
Acknowledgments

This research work was supported by the Natural Science Foundation of China (NSFC 10171080), the National High Technology Research and Development Program of China 2001AA111042 (863 Program), and the Excellent Young Teachers Program of MOE 1887, People's Republic of China. We are very grateful to Professor Omar Wing for valuable suggestions on the subject. The authors also thank the anonymous reviewer for his or her meaningful comments, which enabled us to improve the paper to its present version.

References

[1] U.M. Ascher and L.R. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia, 1998. MR 99k:65052
[2] M. Bjørhus and A.M. Stuart, Waveform relaxation as a dynamical system, Mathematics of Computation 66:219 (1997), 1101-1117. MR 97i:34059
[3] G.D. Gristede, A.E. Ruehli, and C.A. Zukowski, Convergence properties of waveform relaxation circuit simulation methods, IEEE Transactions on Circuits and Systems - Part I 45:7 (1998), 726-738. MR 99c:94067
[4] C.S. Hsu, Cell-to-Cell Mapping: A Method of Global Analysis for Nonlinear Systems, Springer-Verlag, New York, 1987. MR 89d:58115
[5] J. Janssen and S. Vandewalle, Multigrid waveform relaxation on spatial finite element meshes: The continuous-time case, SIAM Journal on Numerical Analysis 33:2 (1996), 456-474. MR 97c:65158
[6] Y.L. Jiang, R.M.M. Chen, and O. Wing, Improving convergence performance of relaxation-based transient analysis by matrix splitting in circuit simulation, IEEE Transactions on Circuits and Systems - Part I 48:6 (2001), 769-780.
[7] Y.L. Jiang, W.S. Luk, and O. Wing, Convergence-theoretics of classical and Krylov waveform relaxation methods for differential-algebraic equations, IEICE Transactions on Fundamentals of Electronics, Communications, and Computer Sciences E80-A:10 (1997), 1961-1972.
[8] Y.L. Jiang and O. Wing, Monotone waveform relaxation for systems of nonlinear differential-algebraic equations, SIAM Journal on Numerical Analysis 38:1 (2000), 170-185. MR 2001i:65088
[9] Y.L. Jiang and O. Wing, A note on the spectra and pseudospectra of waveform relaxation operators for linear differential-algebraic equations, SIAM Journal on Numerical Analysis 38:1 (2000), 186-201. MR 2001h:65089
[10] Y.L. Jiang, On time-domain simulation of lossless transmission lines with nonlinear terminations, to appear in SIAM Journal on Numerical Analysis, 2004.
[11] Y.L. Jiang and O. Wing, A note on convergence conditions of waveform relaxation algorithms for nonlinear differential-algebraic equations, Applied Numerical Mathematics 36:2-3 (2001), 281-297. MR 2002e:65107
[12] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1984. MR 96a:47025
[13] U. Miekkala and O. Nevanlinna, Iterative solution of systems of linear differential equations, Acta Numerica, pp. 259-307, 1996. MR 99f:65104
[14] S. Vandewalle and R. Piessens, On dynamic iteration methods for solving time-periodic differential equations, SIAM Journal on Numerical Analysis 30:1 (1993), 286-303. MR 94b:65125
[15] L. Weinberg, Network Analysis and Synthesis, McGraw-Hill Book Company, New York, 1962.
[16] J.K. White and A. Sangiovanni-Vincentelli, Relaxation Techniques for the Simulation of VLSI Circuits, Kluwer Academic Publishers, Boston, 1986. MR 88b:94024

Department of Mathematical Sciences, Xi'an Jiaotong University, Xi'an, People's Republic of China
E-mail address: [email protected]

School of Creative Media, City University of Hong Kong, Hong Kong, People's Republic of China
E-mail address: [email protected]