Eigenvalue approximations of the wave equation with local Kelvin-Voigt damping

A PROJECT SUBMITTED TO THE FACULTY OF THE DEPARTMENT OF MATHEMATICS & STATISTICS OF THE UNIVERSITY OF MINNESOTA DULUTH BY

Christopher Boamah-Mensah

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

Advisor: Steven A. Trogdon

June, 2015

Acknowledgements

I would like to thank my advisor for all his help.

Abstract

Eigenvalue approximations of the wave equation with local Kelvin-Voigt damping are presented using the well-known Chebyshev-Tau spectral method. The problem is formulated in two ways: the first on one spatial domain, the second on two spatial domains. Several eigenvalue problems for each method were solved and compared. In general, low-frequency eigenvalues were the same for both methods. A brief discussion of inaccurate eigenvalue approximations is also given.

Contents

Acknowledgements
Abstract
List of Tables
List of Figures

1 Introduction
  1.1 Motivation
  1.2 Known results

2 Formulation
  2.1 Method I
  2.2 Method II

3 Results
  3.1 Results for method I
  3.2 Results for method II

4 Summary

References

Appendix A. Properties of Chebyshev polynomials and expansions

Appendix B. Sample MATLAB code
  B.1 Method I – p = 2
  B.2 Method II – p = 2
  B.3 Chebyshev transform function

List of Tables

4.1 θ and γ for method I.
4.2 θ and γ for method II.

List of Figures

3.1 The number of eigenvalues with positive real parts versus p.
3.2 Im λn versus Re λn for p = 2.
3.3 log(Im λn) versus −Re λn for p = 2.
3.4 Restricted Im λn versus Re λn for p = 2.
3.5 Im λn versus Re λn for p = 3.
3.6 Restricted Im λn versus Re λn for p = 3.
3.7 Im λn versus Re λn for p = 4.
3.8 Restricted Im λn versus Re λn for p = 4.
3.9 The number of eigenvalues with positive real parts versus p for method II.
3.10 Im λn versus Re λn for p = 2 in method II.
3.11 log(Im λn) versus −Re λn for p = 2 in method II.
3.12 Restricted Im λn versus Re λn for p = 2 in method II.
3.13 Im λn versus Re λn for p = 3 in method II.
3.14 Restricted Im λn versus Re λn for p = 3 in method II.
3.15 Im λn versus Re λn for p = 4 in method II.
3.16 Restricted Im λn versus Re λn for p = 4 in method II.
3.17 Comparison of µ versus ω for p = 2, p = 3, and p = 4 of method I and method II.

Chapter 1

Introduction

This chapter presents the motivation and some known results from the theoretical analysis of the problem.

1.1 Motivation

We will begin with the one-dimensional elastic wave equation with Kelvin-Voigt damping:

$$\begin{cases} \dfrac{\partial^2 u}{\partial t^2} = \dfrac{\partial}{\partial x}\left(\dfrac{\partial u}{\partial x} + \beta(x)\,\dfrac{\partial^2 u}{\partial x\,\partial t}\right), & t > 0,\ -1 < x < 1,\\[4pt] u(t,-1) = u(t,1) = 0, & t \ge 0,\\[4pt] u(0,x) = u_0(x), \quad \partial u(0,x)/\partial t = u_1(x), & -1 \le x \le 1, \end{cases} \tag{1.1}$$

where $\beta(x) \in C^1[-1,1]$. Additionally,

$$\beta(x) = \begin{cases} 0, & x \in [-1,0],\\ a(x), & x \in (0,1], \end{cases} \tag{1.2}$$

where $a(x) > 0$ for $x > 0$ and $a(x) \in C[0,1]$. Equation (1.1) models, for example, the longitudinal motion of a beam made of a non-uniform material, or the transverse motion of a string made of two different materials. Engineers at Ford have created a damping patch, made of an aluminum foil layer, a viscoelastic layer, and an adhesive layer, which they place on an elastic plate to analyze natural frequencies and other frequency modes [1]. The solution of the eigenvalue problem of (1.1) may provide analytic results that could be verified experimentally by the engineers. Another application is the use of piezoceramic patches as sensors on structures [1]. The presence of these patches introduces discontinuities in the material properties of the structures being analyzed. The displacements and frequencies of such structural systems can be analyzed theoretically with (1.1).

1.2 Known results

The main goal of this project is to approximate the eigenvalues associated with (1.1), which give insight into the dynamic stability of the problem. We first present some well-known results about stability and the eigenvalues of (1.1). The energy $E(t)$ of (1.1) is defined as

$$E(t) = \frac{1}{2}\int_{-1}^{1}\left[\left|\frac{\partial u}{\partial t}\right|^2 + \left|\frac{\partial u}{\partial x}\right|^2\right]dx. \tag{1.3}$$

For exponential stability of (1.1), $a(\cdot)$ is assumed to satisfy the following [2]:

(A) $a(\cdot) \in C^1[0,1]$, $a(0) = a'(0) = 0$, and $a(x) > 0$ for all $x \in (0,1]$.

(B) There exists $a_0 > 0$ such that $\displaystyle\int_0^x \frac{|a'(s)|^2}{a(s)}\,ds \le a_0\,|a'(x)|$ for all $x \in [0,1]$.

Theorem 1.2.1 (Zhang [2]). Suppose that the function $a(\cdot)$ satisfies (A) and (B). Then the energy of system (1.1) is exponentially stable, i.e., there exist $C, \delta > 0$ such that

$$E(t) \le C e^{-\delta t} E(0), \qquad \forall t \ge 0.$$
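The role of $\beta$ as a damper can be seen from a standard energy computation, which we add here for completeness (it is not part of the original text): multiplying the first equation of (1.1) by $\partial u/\partial t$, integrating over $(-1,1)$, and integrating by parts using the boundary conditions gives

$$\frac{dE}{dt} = -\int_{-1}^{1}\beta(x)\left|\frac{\partial^2 u}{\partial x\,\partial t}\right|^2 dx \le 0,$$

so the energy is nonincreasing, and dissipation occurs only on the subinterval where $\beta > 0$.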

The eigenvalue problem associated with (1.1) can be obtained by seeking a product solution of the form $e^{\lambda t}u(x)$, which gives

$$\lambda^2 u = u'' + \lambda\,(\beta(x)u')' \tag{1.4}$$

with boundary conditions

$$u(-1) = u(1) = 0. \tag{1.5}$$

It is also assumed [3] that at $x = 0$

$$\lim_{x\to 0^+}\frac{\beta'(x)}{x^p} = k > 0$$

for some $p > 0$.

Theorem 1.2.2 (Renardy [3]). Let $\lambda_n$, $u_n$ be eigenvalues and corresponding eigenfunctions of (1.4), (1.5). Assume that $|\lambda_n| \to \infty$. Then $\mathrm{Re}\,\lambda_n \to -\infty$.
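To see where (1.4) comes from (a standard step we reconstruct here), substitute $u(t,x) = e^{\lambda t}u(x)$ into the first equation of (1.1):

$$\lambda^2 e^{\lambda t} u = \frac{\partial}{\partial x}\left(e^{\lambda t}u' + \beta(x)\,\lambda e^{\lambda t}u'\right) = e^{\lambda t}\left[u'' + \lambda\big(\beta(x)u'\big)'\right];$$

cancelling the common factor $e^{\lambda t}$ yields (1.4), and the boundary conditions in (1.1) become (1.5).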

Chapter 2

Formulation

In this chapter we discretize (1.1) using Chebyshev polynomials and approximate the eigenvalues using the tau method. The problem is formulated in two ways: the first considers the problem on a single spatial domain; the second breaks the problem into two related problems on two spatial domains.

2.1 Method I

To solve a discrete eigenvalue problem, we start by transforming (1.1) into a system of first-order equations:

$$\frac{\partial u}{\partial t} = v, \qquad \frac{\partial v}{\partial t} = \frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x} + \beta(x)\frac{\partial v}{\partial x}\right). \tag{2.1}$$

Next we expand $u$, $\beta(x)$, and $v$ in truncated series of Chebyshev polynomials as

$$u^N = \sum_{k=0}^{N} a_k(t)\,T_k(x), \qquad \beta^N = \sum_{\ell=0}^{N} b_\ell\,T_\ell(x), \qquad v^N = \sum_{k=0}^{N} e_k(t)\,T_k(x), \tag{2.2}$$

respectively.

Substituting (2.2) into (2.1) yields

$$\frac{\partial u^N}{\partial t} = v^N, \qquad \frac{\partial v^N}{\partial t} = \frac{\partial}{\partial x}\left(\frac{\partial u^N}{\partial x} + \beta^N\frac{\partial v^N}{\partial x}\right). \tag{2.3}$$

Multiplying (2.3) by $T_\ell(x)/\sqrt{1-x^2}$ and integrating from $-1$ to $1$ with respect to $x$ results in the weak form of (2.1):

$$\int_{-1}^{1}\frac{\partial u^N}{\partial t}\,\frac{T_\ell(x)}{\sqrt{1-x^2}}\,dx = \int_{-1}^{1} v^N\,\frac{T_\ell(x)}{\sqrt{1-x^2}}\,dx,$$

$$\int_{-1}^{1}\frac{\partial v^N}{\partial t}\,\frac{T_\ell(x)}{\sqrt{1-x^2}}\,dx = \int_{-1}^{1}\frac{\partial}{\partial x}\left(\frac{\partial u^N}{\partial x} + \beta^N\frac{\partial v^N}{\partial x}\right)\frac{T_\ell(x)}{\sqrt{1-x^2}}\,dx. \tag{2.4}$$

Using (A.2) and (A.9), (2.4) reduces to

$$\frac{da_k}{dt} = e_k, \qquad \frac{de_k}{dt} = a_k^{(2)} + \frac{2}{\pi c_k}\int_{-1}^{1}\frac{\partial}{\partial x}\left(\beta^N\frac{\partial v^N}{\partial x}\right)\frac{T_k(x)}{\sqrt{1-x^2}}\,dx, \qquad k = 0, 1, 2, \dots, N-2, \tag{2.5}$$

where

$$\beta^N\frac{\partial v^N}{\partial x} = \sum_{\ell=0}^{N} b_\ell T_\ell(x)\,\frac{\partial}{\partial x}\sum_{k=0}^{N} e_k(t)T_k(x) = \sum_{\ell=0}^{N} b_\ell T_\ell(x)\sum_{k=0}^{N} e_k^{(1)}(t)T_k(x) = \frac{1}{2}\sum_{\ell=0}^{N}\sum_{k=0}^{N} b_\ell\,e_k^{(1)}(t)\left[T_{k+\ell}(x) + T_{|k-\ell|}(x)\right] = \frac{1}{2}\sum_{k=0}^{N}\left[\sum_{\ell=0}^{N} B_{k\ell}\,e_\ell^{(1)}(t)\right]T_k(x) \tag{2.6}$$

after using (A.5), (A.7), and some algebra. Here $B_{k\ell}$ denotes the elements of a matrix built from the coefficients $b_\ell$. Products of polynomials with index beyond $N$ drop out of the weak form, since the inner product of such a polynomial with any of $T_0,\dots,T_N$ is zero.
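The "some algebra" behind (2.6) can be made explicit; the index bookkeeping below is our reconstruction. Collecting, via (A.5), the coefficient of $T_m$ for $m > 0$ gives

$$\frac{1}{2}\left[\sum_{k=0}^{m} b_{m-k}\,e_k^{(1)} + \sum_{k=m}^{N} b_{k-m}\,e_k^{(1)} + \sum_{k=0}^{N-m} b_{k+m}\,e_k^{(1)}\right],$$

with products of index beyond $N$ discarded. The first sum is lower-triangular Toeplitz in the coefficients $b_\ell$, the second is upper-triangular Toeplitz, and the third is Hankel; these are exactly the matrices $B_1$, $B_2$, and $B_3$ introduced below in (2.13).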

Substituting (2.6) into the integral in (2.5) and exchanging the integral and the derivative, we get

$$\frac{2}{\pi c_k}\int_{-1}^{1}\frac{\partial}{\partial x}\left(\beta^N\frac{\partial v^N}{\partial x}\right)\frac{T_k(x)}{\sqrt{1-x^2}}\,dx = \left[\frac{1}{2}\sum_{\ell=0}^{N} B_{k\ell}\,e_\ell^{(1)}(t)\right]^{(1)}, \tag{2.7}$$

where the outer superscript $(1)$ denotes the transform (A.8) applied to the bracketed coefficient sequence. Let $R_1 \equiv [T_0(1), T_1(1), \dots, T_N(1)]^T$, $R_{-1} \equiv [T_0(-1), T_1(-1), \dots, T_N(-1)]^T$, and $\mathbf{a} \equiv [a_0, a_1, \dots, a_N]^T$ (note that $T_k(\pm 1)$ is given in (A.4)). From $u^N$ in (2.2) and the boundary conditions in (1.1), the following is obtained:

$$\mathbf{a}^T R_1 = 0, \qquad \mathbf{a}^T R_{-1} = 0. \tag{2.8}$$

Equation (2.8) may be partitioned as

$$[\,\mathbf{a}_1^T \ \ \mathbf{a}_2^T\,]\begin{bmatrix} R_{1,1} & R_{-1,1} \\ R_{1,2} & R_{-1,2} \end{bmatrix} = [\,0 \ \ 0\,], \tag{2.9}$$

where $\mathbf{a}_1$ and $\mathbf{a}_2$ are column vectors containing the first $N-1$ and the last two elements of $\mathbf{a}$, respectively, and $R_{1,1}$ and $R_{1,2}$ contain the first $N-1$ and the last two elements of $R_1$; a similar statement holds for $R_{-1,1}$ and $R_{-1,2}$ relative to $R_{-1}$. Equation (2.9) may be written as

$$\mathbf{a}_1^T Q_1 + \mathbf{a}_2^T Q_2 = 0, \tag{2.10}$$

where $Q_1 \equiv [\,R_{1,1} \ \ R_{-1,1}\,]$ and $Q_2 \equiv [\,R_{1,2} \ \ R_{-1,2}\,]$. Assuming $Q_2$ is invertible,

$$\mathbf{a}_2 = -[Q_1 Q_2^{-1}]^T \mathbf{a}_1.$$

Thus, replacing the last two elements of $\mathbf{a}$ with $\mathbf{a}_2$, we get

$$\mathbf{a} = \begin{bmatrix} \mathbf{a}_1 \\ -[Q_1 Q_2^{-1}]^T \mathbf{a}_1 \end{bmatrix} = \begin{bmatrix} I \\ -[Q_1 Q_2^{-1}]^T \end{bmatrix}\mathbf{a}_1,$$

where $I$ is the $(N-1)\times(N-1)$ identity matrix. Letting

$$S \equiv \begin{bmatrix} I \\ -[Q_1 Q_2^{-1}]^T \end{bmatrix}$$

results in

$$\mathbf{a} = S\mathbf{a}_1. \tag{2.11}$$
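As a concrete check (our addition), (A.4) makes $Q_2$ explicit: when the last two retained indices $N-1$ and $N$ are even and odd respectively (e.g. $N = 401$ for the 402-coefficient run in Appendix B.1),

$$Q_2 = \begin{bmatrix} T_{N-1}(1) & T_{N-1}(-1)\\ T_N(1) & T_N(-1) \end{bmatrix} = \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}, \qquad Q_2^{-1} = \frac{1}{2}\begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix},$$

which matches the hard-coded Q2inv in the code of Appendix B.1.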

Assuming $v$ is sufficiently smooth, we can show from (2.1) that $v$ also has homogeneous Dirichlet boundary conditions. Thus, for $v^N$ in (2.2) we obtain, in the same way as (2.11),

$$\mathbf{e} = S\mathbf{e}_1. \tag{2.12}$$

The convolution sum of (2.7) in matrix-vector form is

$$M\,(B_1 + B_2 + B_3)\,\mathbf{e}^{(1)}, \tag{2.13}$$

where $M$ is given by (A.12) and the factor $1/2$ from (2.6) has been absorbed into the matrices: $B_1$ is the lower-triangular Toeplitz matrix with entries $(B_1)_{jk} = b_{j-k}/2$ for $k \le j$, except that the main diagonal and the first column carry $b_0$ and $b_j$ unhalved; $B_2$ is the strictly upper-triangular Toeplitz matrix with entries $(B_2)_{jk} = b_{k-j}/2$ for $k > j$; and $B_3$ is the Hankel matrix with entries $(B_3)_{jk} = b_{j+k}/2$ for $j, k \ge 1$ with $j + k \le N$, and zero otherwise (cf. the construction in Appendix B.1). Since $\mathbf{e}^{(1)} = M\mathbf{e}$ by (A.11), writing (2.13) compactly yields

$$M(B_1 + B_2 + B_3)M\mathbf{e}. \tag{2.14}$$

Now (2.5) in matrix-vector notation becomes

$$\frac{d\mathbf{a}_1}{dt} = \mathbf{e}_1, \qquad \frac{d\mathbf{e}_1}{dt} = M^2 S\,\mathbf{a}_1 + M(B_1 + B_2 + B_3)MS\,\mathbf{e}_1. \tag{2.15}$$

Finally, the eigenvalue problem for (2.15) is obtained by seeking solutions of the form $e^{\lambda t}\begin{bmatrix}\tilde{\mathbf{a}}_1\\ \tilde{\mathbf{e}}_1\end{bmatrix}$, which results in

$$\lambda\begin{bmatrix}\tilde{\mathbf{a}}_1\\ \tilde{\mathbf{e}}_1\end{bmatrix} = N\begin{bmatrix}\tilde{\mathbf{a}}_1\\ \tilde{\mathbf{e}}_1\end{bmatrix}, \tag{2.16}$$

where

$$N = \begin{bmatrix} 0 & I \\ M^2 S & M(B_1 + B_2 + B_3)MS \end{bmatrix}.$$

Note that $M^2$ and $M(B_1 + B_2 + B_3)M$ are $(N-1)\times(N+1)$ matrices.
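Eliminating $\tilde{\mathbf{e}}_1$ from (2.16) shows what is being linearized (a consistency check we add here): the first block row gives $\tilde{\mathbf{e}}_1 = \lambda\tilde{\mathbf{a}}_1$, and the second block row then reads

$$\lambda^2\tilde{\mathbf{a}}_1 = M^2 S\,\tilde{\mathbf{a}}_1 + \lambda\,M(B_1 + B_2 + B_3)MS\,\tilde{\mathbf{a}}_1,$$

the discrete analogue of the quadratic eigenvalue problem (1.4).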

2.2 Method II

We begin by splitting (1.1) into two problems on two spatial domains:

$$\frac{\partial^2 u_-}{\partial t^2} - \frac{\partial^2 u_-}{\partial x^2} = 0, \qquad t > 0,\ -1 < x < 0, \tag{2.17}$$

with $u_-(-1,t) = 0$, and

$$\frac{\partial^2 u_+}{\partial t^2} - \frac{\partial}{\partial x}\left(\frac{\partial u_+}{\partial x} + a(x)\,\frac{\partial^2 u_+}{\partial x\,\partial t}\right) = 0, \qquad t > 0,\ 0 < x < 1, \tag{2.18}$$

with $u_+(1,t) = 0$. The two problems are coupled through matching conditions on $u_-$ and $u_+$ at $x = 0$. Each subdomain is then mapped to $[-1,1]$ and both problems are discretized with the Chebyshev-Tau method as in Method I, leading to the coupled discrete eigenvalue problem (2.21)–(2.25); the damping coefficient used on the mapped damped subdomain is the $\alpha(\eta)$ given in Chapter 4.

Chapter 3

Results

We numerically approximated all of the coefficients that were computed. For $p = 2$ we also computed the exact coefficients, but there was no significant difference from the approximated coefficients. All eigenvalues were computed with MATLAB's eigs function, with the option set to compute the eigenvalues smallest in magnitude. According to MATLAB's documentation for eigs, the eigenvalues were computed with machine-precision accuracy.
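A minimal sketch of the computation just described (not the author's script; it condenses the Method I driver in Appendix B.1, and N denotes the matrix from (2.16)):

% Compute the 600 eigenvalues of N smallest in magnitude and split
% them by the sign of the real part.
d = eigs(N, 600, 'sm');
neg = d(real(d) < 0);   % decaying modes, consistent with Theorem 1.2.2
pos = d(real(d) > 0);   % spurious growing modes, discussed below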

3.1 Results for method I

[Figure 3.1: The number of eigenvalues with positive real parts versus p.]

In all our computations we obtained some eigenvalues with real parts greater than zero. As seen from Fig. 3.1, the number of eigenvalues with positive real parts decreased as p increased. This is because a larger p makes β(x) smoother, so its derivatives are approximated better. However, this does not explain why eigenvalues with positive real parts exist at all; we address this issue later.

Motivated by Theorem 1.2.2 and work done by Embree [4], we investigated the behavior of the eigenvalues for p = 2, p = 3, and p = 4. The eigenvalues shown in Fig. 3.2 for p = 2 were computed using 402 nodes, for which 600 eigenvalues were computed; the number (n) of eigenvalues shown in Fig. 3.2 is 501. At low frequencies the real and imaginary parts of the complex eigenvalues varied linearly. The real parts decayed less rapidly as the frequency increased. We believe that the eigenvalues at high frequencies close to the imaginary axis are poor approximations, because their real parts head toward 0 instead of −∞.

[Figure 3.2: Im λn versus Re λn for p = 2 (n = 501).]

Motivated by work done by Embree [4], we considered the eigenvalues in the rectangular region of Fig. 3.2 to determine any relationship between the real and imaginary parts. From Fig. 3.3 it is seen that the eigenvalues (λ = ω + iµ) in the rectangular region of Fig. 3.2 obey

$$\mu \approx \exp(\theta\omega + \gamma). \tag{3.6}$$

[Figure 3.3: log(Im λn) versus −Re λn for p = 2 (n = 43).]

The parameters θ and γ were obtained using a least-squares fit as θ ≈ −2.1042 and γ ≈ −2.3757. The curve µ(ω) = exp(θω + γ) is plotted in Fig. 3.4; it approximates the eigenvalues at low frequencies very well. We will address the issue of poor approximations at higher frequencies later.
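The fit itself is a small least-squares problem; a sketch under the assumption that lam holds the eigenvalues inside the rectangular region (the variable names are ours):

om = real(lam);  mu = imag(lam);                % lambda = omega + i*mu
c = [om(:), ones(numel(om),1)] \ log(mu(:));    % fit log(mu) = theta*om + gam
theta = c(1);
gam = c(2);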

[Figure 3.4: Restricted Im λn versus Re λn for p = 2 (n = 349).]

[Figure 3.5: Im λn versus Re λn for p = 3 (n = 515).]

From Fig. 3.5, for p = 3, we see that the eigenvalues follow a more structured pattern, mainly because of the smoothness of β(x). Restricting our attention to the eigenvalues in the rectangular region, we see that (3.6) holds with θ ≈ −0.9467 and γ ≈ −0.5786. The curve µ(ω) = exp(θω + γ) is plotted in Fig. 3.6.

[Figure 3.6: Restricted Im λn versus Re λn for p = 3 (n = 347).]

From Fig. 3.7, for p = 4, we see that the eigenvalues follow a more structured pattern than those computed for p = 3. Again restricting our attention to the eigenvalues in the rectangular region, we see that (3.6) also holds, with θ ≈ −0.6769 and γ ≈ −0.4001. The curve µ(ω) = exp(θω + γ) is shown in Fig. 3.8.

[Figure 3.7: Im λn versus Re λn for p = 4 (n = 526).]

[Figure 3.8: Restricted Im λn versus Re λn for p = 4 (n = 344).]

3.2 Results for method II

For these results we also computed 600 eigenvalues. Fig. 3.9 shows that the number of eigenvalues with positive real parts increases with increasing p for 2 ≤ p ≤ 9 and then decreases for p > 9. We suspect that the reason for this was poor approximation in the quadrature used to compute the coefficients d_k.

[Figure 3.9: The number of eigenvalues with positive real parts versus p for method II.]

[Figure 3.10: Im λn versus Re λn for p = 2 in method II (n = 599).]

[Figure 3.11: log(Im λn) versus −Re λn for p = 2 in method II (n = 107).]

From Fig. 3.10, the eigenvalues followed a more structured pattern than those computed for p = 2 in method I. There were also fewer eigenvalues with real parts approaching zero, which suggests that the eigenvalues computed with method II are overall more accurate than those computed with method I. Considering the eigenvalues in the rectangular region of Fig. 3.10, we found that they obeyed (3.6), as shown in Fig. 3.11. The parameters θ and γ were obtained from a least-squares fit as θ ≈ −4.0052 and γ ≈ −2.7824. The curve µ(ω) = exp(θω + γ) is plotted in Fig. 3.12.

[Figure 3.12: Restricted Im λn versus Re λn for p = 2 in method II (n = 382).]

For p = 3, θ ≈ −1.9986 and γ ≈ −1.5275.

[Figure 3.13: Im λn versus Re λn for p = 3 in method II (n = 599).]

[Figure 3.14: Restricted Im λn versus Re λn for p = 3 in method II (n = 371).]

For p = 4, θ ≈ −1.3497 and γ ≈ −1.0821.

[Figure 3.15: Im λn versus Re λn for p = 4 in method II (n = 599).]

[Figure 3.16: Restricted Im λn versus Re λn for p = 4 in method II (n = 363).]

The main reason we obtained eigenvalues with positive real parts and poor eigenvalue approximations was that the matrices were generally ill-conditioned. By ill-conditioned we mean that small perturbations in some of the matrix elements resulted in large differences in some of the computed eigenvalues. We investigated the sensitivity of the eigenvalues by computing the condition number of each eigenvalue using MATLAB's condeig function. In all cases (p = 2, p = 3, and p = 4), for the results of both methods presented previously, the minimum condition number has order of magnitude 1. For p = 2, p = 3, and p = 4 of method I, the order of magnitude of the maximum condition number was 10^7, 10^10, and 10^10, respectively. Likewise, for method II the order of magnitude of the maximum condition number was 10^10, 10^11, and 10^11, respectively.
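A sketch of this sensitivity computation (our reconstruction, not the author's script; N denotes the matrix whose eigenvalues are examined):

[V, D, s] = condeig(N);          % s(i) is the condition number of the
lam = diag(D);                   % eigenvalue lam(i)
fprintf('min cond = %.1e, max cond = %.1e\n', min(s), max(s));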

[Figure 3.17: Comparison of µ versus ω for p = 2, p = 3, and p = 4 of method I and method II.]

From Fig. 3.17 it is seen that there is a scaling relationship between the graphs of µ versus ω for p = 2 of the two methods. A similar relationship also exists between the graphs for p = 3 and for p = 4. It turns out that

$$\theta \approx \frac{\theta'}{2} \qquad \text{and} \qquad \gamma \approx \log 2 + \gamma',$$

where θ and γ are the parameters in µ(ω) for method I and θ′ and γ′ are the corresponding parameters for method II. For example, for p = 4, θ′/2 = −0.6749 ≈ −0.6769 = θ and log 2 + γ′ ≈ −0.3890 ≈ −0.4001 = γ. These results confirm that, on the curves in Fig. 3.17, the eigenvalues computed from method I are twice as large in magnitude as those computed from method II. Motivation for these results is drawn from the fact that we analytically verified that the eigenvalues of (2.21)–(2.25) with a = 0 were half those of (1.1) with β = 0.

Chapter 4

Summary

The discrete eigenvalue problem associated with (1.1) was formulated on a single and on a double spatial domain. Both discrete eigenvalue problems were solved using the Chebyshev-Tau method. Eigenvalues were computed using

$$\beta(x) = \begin{cases} 0, & x \in [-1,0],\\ x^p, & x \in (0,1], \end{cases} \qquad p = 2, 3, 4,$$

and

$$\alpha(\eta) = 2\left(\frac{\eta+1}{2}\right)^p, \qquad p = 2, 3, 4.$$

More than half of the eigenvalues computed from both methods had negative real parts. At some low frequencies, the real part (ω) and imaginary part (µ) were related by µ = exp(θω + γ), where θ and γ were computed using a least-squares fit. A summary of θ and γ is given in Table 4.1 and Table 4.2. Poor eigenvalue approximations and eigenvalues with positive real parts were caused by ill-conditioned matrices.

p        θ          γ
2     -2.1042    -2.3757
3     -0.9467    -0.5786
4     -0.6769    -0.4001

Table 4.1: θ and γ for method I.

p        θ          γ
2     -4.0052    -2.7824
3     -1.9986    -1.5275
4     -1.3497    -1.0821

Table 4.2: θ and γ for method II.

References

[1] Shuping Chen, Kangsheng Liu, and Zhuangyi Liu. Spectrum and stability for elastic systems with global or local Kelvin-Voigt damping. SIAM J. Appl. Math., 59(2):651–668, 1998.

[2] Qiong Zhang. Exponential stability of an elastic string with Kelvin-Voigt damping. Z. Angew. Math. Phys., 61:1009–1015, March 2010.

[3] Michael Renardy. On localized Kelvin-Voigt damping. ZAMM Z. Angew. Math. Mech., 84(4):280–283, February 2004.

[4] Mark Embree. Eigenvalues and resolvent norm estimates for Kelvin-Voigt damping. Unpublished, November 2008.

[5] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang. Spectral Methods: Fundamentals in Single Domains. Springer, 2010.

[6] David Gottlieb and Steven A. Orszag. Numerical Analysis of Spectral Methods: Theory and Applications. SIAM, 1977.

Appendix A

Properties of Chebyshev polynomials and expansions

Chebyshev polynomials of the first kind are polynomial functions obtained from a singular Sturm-Liouville problem. They are given by

$$T_k(x) = \cos[k \arccos(x)]. \tag{A.1}$$

If $T_k$ is normalized so that $T_k(1) = 1$, then $T_k(x) = \cos k\theta$ with $x = \cos\theta$. From a trigonometric identity substitution and induction, the following recurrence relation is obtained:

$$T_{k+1} = 2xT_k - T_{k-1},$$

where $T_0(x) \equiv 1$ and $T_1(x) \equiv x$. Chebyshev polynomials are orthogonal in the inner product with respect to the weight $(1-x^2)^{-1/2}$, i.e.,

$$\langle T_k, T_\ell\rangle = \int_{-1}^{1} T_k(x)\,T_\ell(x)\,(1-x^2)^{-1/2}\,dx = \begin{cases} 0, & k \ne \ell,\\ \pi c_k/2, & \text{otherwise}, \end{cases} \tag{A.2}$$

where

$$c_k = \begin{cases} 2, & k = 0,\\ 1, & k = 1, 2, 3, \dots \end{cases} \tag{A.3}$$

Using trigonometric identities, (A.1), and some manipulations we can show that

$$T_k(\pm 1) = (\pm 1)^k, \tag{A.4}$$

$$T_k(x)\,T_\ell(x) = \frac{T_{k+\ell}(x) + T_{|k-\ell|}(x)}{2}, \tag{A.5}$$

and

$$T_k'(\pm 1) = (\pm 1)^{k+1}k^2. \tag{A.6}$$

Furthermore, if a known function $f(x)$ is expanded in a series of Chebyshev polynomials as $f = \sum_{k=0}^{\infty}\hat f_k T_k(x)$, then

$$f' = \sum_{k=0}^{\infty}\hat f_k^{(1)} T_k(x), \tag{A.7}$$

where

$$\hat f_k^{(1)} = \frac{2}{c_k}\sum_{\substack{p=k+1\\ p+k\ \text{odd}}}^{\infty} p\,\hat f_p, \qquad k \ge 0. \tag{A.8}$$

In a similar way,

$$f'' = \sum_{k=0}^{\infty}\hat f_k^{(2)} T_k(x), \tag{A.9}$$

where

$$\hat f_k^{(2)} = \frac{1}{c_k}\sum_{\substack{p=k+2\\ p+k\ \text{even}}}^{\infty} p\,(p^2 - k^2)\,\hat f_p, \qquad k \ge 0. \tag{A.10}$$

The truncated form of (A.8) in matrix-vector notation is given as

$$\hat{\mathbf{f}}^{(1)} = M\hat{\mathbf{f}}, \tag{A.11}$$

where $M$ is the upper-triangular $(N+1)\times(N+1)$ matrix with entries

$$M_{kp} = \begin{cases} \dfrac{2p}{c_k}, & p > k \ \text{and}\ p + k\ \text{odd},\\[4pt] 0, & \text{otherwise}, \end{cases} \qquad 0 \le k, p \le N. \tag{A.12}$$

Similarly, the truncated form of (A.10) in matrix-vector notation is given as

$$\hat{\mathbf{f}}^{(2)} = M^2\hat{\mathbf{f}}. \tag{A.13}$$

In (A.8) and (A.10), $c_k$ is given by (A.3). The results (A.8) and (A.10) are obtained from (A.1), a trigonometric identity, and some manipulations. They can also be generalized to the $k$th derivative.
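A direct, loop-based construction of the truncated $M$ in (A.12) can serve as a check on the vectorized version in Appendix B.1 (a sketch; MATLAB indices are shifted by one relative to the coefficient indices):

Np = 8;                        % number of retained coefficients, T_0..T_{Np-1}
c = [2, ones(1, Np-1)];        % c_k from (A.3)
M = zeros(Np);
for k = 0:Np-1
    for p = k+1:Np-1
        if mod(p + k, 2) == 1  % p + k odd, as in (A.8)
            M(k+1, p+1) = 2*p/c(k+1);
        end
    end
end
% fh1 = M*fh maps the Chebyshev coefficients of f to those of f'.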

Appendix B

Sample MATLAB code

B.1 Method I – p = 2

clear all
close all
nn = 402;                            % This has to be even
x = cos(pi.*(0:nn-1)./(nn-1));       % Chebyshev-Gauss-Lobatto nodes
beta = x.^2;
beta(nn/2+1:nn) = 0;
b = cbt(beta);
% exact coefficients for x^2
% b = [(pi/4)*(2/(pi*2)),(2/3)*(2/pi),(pi/8)*(2/pi), ...
%      2*(-2*sin((3:nn-1)*pi/2)./(-4*(3:nn-1)+(3:nn-1).^3))./pi];
% preallocated matrices
M = zeros(nn,nn);
Q1 = [ones(nn-2,1) ((-1).^(0:nn-3))'];
Q2inv = [ones(2,1)./2 ((-1).^(0:1))'./2];
S = [eye(nn-2,nn-2); -(Q1*Q2inv)'];
% constructing B1
B1 = tril(toeplitz(b(1:end)));
B1 = .5*B1;
B1(2:end,1) = 2*B1(2:end,1);
B1(1:nn+1:nn*nn) = 2*B1(1:nn+1:nn*nn);
% constructing B2
B2 = triu(toeplitz(b(1:end)),1);
B2 = .5*B2;
% constructing B3
B3 = zeros(nn,nn);
B3(2:end-1,2:end-1) = fliplr(triu(toeplitz(flipud(b(3:end)))));
B3 = .5*B3;
% constructing M
k = 1:nn+1:nn*nn;
for i = nn-1:-2:1
    M(k(1:i)+nn-i) = nn-i:nn-1;
end
M = M';
M(2:end,:) = 2*M(2:end,:);
M2 = M*M;
MBM = M*(B1*M + B2*M + B3*M);
N = [zeros(nn-2,nn-2), eye(nn-2,nn-2); ...
     M2(1:nn-2,:)*S, MBM(1:nn-2,:)*S];
d = eigs(N,3*length(N)/4,'sm');
e = d(real(d) < 0);
f = d(real(d) > 0);
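The code in B.1 calls a Chebyshev transform function cbt (listed as B.3 in the contents). Here is a minimal FFT-based sketch of such a transform, consistent with how cbt is called above and with the normalization in (A.2)-(A.3); it is an assumption, not the author's original listing:

function fh = cbt(f)
% Discrete Chebyshev transform at the Gauss-Lobatto nodes
% x_j = cos(pi*j/(n-1)), j = 0..n-1, so that f is represented as
% f(x) ~ sum_k fh(k+1)*T_k(x).
n = length(f);
f = f(:).';                                % work with a row vector
g = real(fft([f, f(n-1:-1:2)]))/(n-1);     % cosine transform via even extension
fh = [g(1)/2, g(2:n-1), g(n)/2];           % halve the k = 0 and k = n-1 terms
end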