Legendre Spectral Projection Methods for Urysohn Integral Equations

Payel Das¹, Mitali Madhumita Sahani², Gnaneshwar Nelakanti³
Department of Mathematics, Indian Institute of Technology, Kharagpur - 721 302, India
Abstract. In this paper, we consider the Legendre spectral Galerkin and Legendre spectral collocation methods to approximate the solution of the Urysohn integral equation. We prove that the approximate solutions of the Legendre Galerkin and Legendre collocation methods converge to the exact solution with the same orders, O(n^{−r}) in the L²-norm and O(n^{1/2−r}) in the infinity norm, and that the iterated Legendre Galerkin solution converges with the order O(n^{−2r}) in both the L²-norm and the infinity norm, whereas the iterated Legendre collocation solution converges with the order O(n^{−r}) in both norms, n being the highest degree of the Legendre polynomials employed in the approximation and r being the smoothness of the kernel. We thus obtain, for the iterated Galerkin solution of Urysohn integral equations with smooth kernels, superconvergence rates similar to those known for piecewise polynomial basis functions.

Keywords: Urysohn integral equations, smooth kernels, spectral method, Galerkin method, collocation method, Legendre polynomials, superconvergence rates
1. Introduction

In this paper, we consider the following Urysohn integral equation

    x(t) − ∫_{−1}^{1} k(t, s, x(s)) ds = f(t),  −1 ≤ t ≤ 1,    (1.1)

where k and f are known functions and x is the unknown solution to be found in a Banach space X. Several numerical methods for approximating the solution of Fredholm integral equations of the type (1.1) are known. Numerical methods which use properties of a classical Schauder basis in the Banach space C([a, b] × [a, b]) for the approximation of the fixed point of the Fredholm integral operator are given in [17], [18]. Various spectral methods are available in the literature for solving integral equations (see [13], [16], [19], [20], [21], [22], [24], [25]). The Galerkin, collocation, Petrov-Galerkin and Nyström methods are the most commonly used projection methods for finding numerical solutions of equations of the type (1.1) (see [3], [4], [5], [6]). In [3], [4], Atkinson developed a general framework for Galerkin and collocation methods for the solution of Urysohn integral equations (1.1) using piecewise polynomial basis functions and obtained superconvergence results using the iterated versions of the approximate solutions. Discretized versions of the collocation and Galerkin methods for Urysohn integral equations were introduced in [5] and [6].

Let X_n be a sequence of piecewise polynomial subspaces of X of degree ≤ r − 1 on [−1, 1] and let P_n be either orthogonal or interpolatory bounded projections from X onto X_n. Then in the Galerkin or collocation method, the Urysohn integral equation (1.1) is approximated by

    x_n − P_n K(x_n) = P_n f,    (1.2)

and its iterated solution is defined by x̃_n = f + K(x_n). Under suitable conditions on the kernel k and the right-hand side function f of equation (1.1), it is known that the orders of convergence for the Galerkin and collocation solutions are O(h^r), and for the iterated Galerkin and iterated collocation solutions O(h^{2r}), where h denotes the norm of the partition (see [3], [4]). However, to obtain more accurate solutions using piecewise polynomial basis functions, one has to increase the number of partition points, so that a large system of nonlinear equations has to be solved, which is computationally expensive.

In this paper, we consider the Galerkin and collocation methods and their iterated versions to approximate the solutions of the Urysohn integral equation (1.1) with a smooth kernel, using global polynomial basis functions. The use of global polynomials leads to much smaller nonlinear systems, which is highly desirable in practical computations; hence we use global polynomials rather than piecewise polynomial basis functions. In particular, we use Legendre polynomials, which can be generated recursively with ease and possess the desirable property of orthogonality; they are also less expensive computationally than piecewise polynomial basis functions. We obtain almost the same convergence rates using Legendre polynomial bases as in the case of piecewise polynomial bases.

We organize this paper as follows. In Section 2, we discuss the Legendre spectral Galerkin and Legendre spectral collocation methods and obtain superconvergence results. In Section 3, numerical results are given to illustrate the theoretical results. Throughout this paper, c denotes a generic constant.

¹E-mail: [email protected]
²E-mail: [email protected]
³E-mail: [email protected], Telephone No.: +91-3222-283680.

Preprint submitted to Elsevier, April 22, 2014
2. Legendre Spectral Galerkin and Collocation Methods: Urysohn Integral Equations with Smooth Kernel

In this section, we describe the Galerkin and collocation methods for solving Urysohn integral equations using Legendre polynomial basis functions. Let X = L²[−1, 1] or C[−1, 1] and consider the following Urysohn integral equation

    x(t) − ∫_{−1}^{1} k(t, s, x(s)) ds = f(t),  −1 ≤ t ≤ 1,    (2.1)

where k(·, ·, ·) and f are known functions and x is the unknown function to be determined. Let

    K(x)(t) = ∫_{−1}^{1} k(t, s, x(s)) ds,  x ∈ X.

Then equation (2.1) can be written as

    x − K(x) = f.    (2.2)
The Fréchet derivative K′(x) is the linear integral operator defined by

    (K′(x)h)(t) = ∫_{−1}^{1} (∂/∂u) k(t, s, x(s)) h(s) ds = ∫_{−1}^{1} k_u(t, s, x(s)) h(s) ds.
Throughout the paper, the following assumptions are made on f and k(t, s, u):

(i) f ∈ C[−1, 1];
(ii) M = sup_{t,s∈[−1,1]} |k_u(t, s, x(s))| < ∞;
(iii) the kernel k(t, s, u) and its derivatives k_u(t, s, u) and (∂/∂t) k_u(t, s, u) satisfy Lipschitz conditions in the third variable u, i.e., for any u₁, u₂ ∈ ℝ there exist c₁, c₂, c₃ > 0 such that

    |k(t, s, u₁) − k(t, s, u₂)| ≤ c₁ |u₁ − u₂|,
    |k_u(t, s, u₁) − k_u(t, s, u₂)| ≤ c₂ |u₁ − u₂|,
    |(∂/∂t) k_u(t, s, u₁) − (∂/∂t) k_u(t, s, u₂)| ≤ c₃ |u₁ − u₂|.
Next, we define the operator T on X by T u := K(u) + f, u ∈ X; then equation (2.2) can be written as

    x = T x.    (2.3)

The following theorem gives conditions for the existence of a unique solution of equation (2.3) in X.

Theorem 2.1. Let X = L²[−1, 1] or C[−1, 1] and f ∈ X. Assume k(·, ·, ·) ∈ C([−1, 1] × [−1, 1] × ℝ) satisfies the Lipschitz condition in the third variable, i.e.,

    |k(t, s, u₁) − k(t, s, u₂)| ≤ c₁ |u₁ − u₂|,  u₁, u₂ ∈ ℝ,

with 2c₁ < 1. Then the operator equation x = T x has a unique solution x₀ ∈ X, i.e., x₀ = T x₀.
The proof of the above theorem follows by the technique used in Theorem 2.4 of [15]. For the rest of the paper we assume that the kernel k(·, ·, ·) ∈ C^r([−1, 1] × [−1, 1] × ℝ). For j = 0, 1, 2, ..., r, we have

    |[K′(x₀)x]^{(j)}(t)| = |∫_{−1}^{1} (∂^j/∂t^j) k_u(t, s, x₀(s)) x(s) ds|
                         ≤ sup_{t,s∈[−1,1]} |(∂^j/∂t^j) k_u(t, s, x₀(s))| ∫_{−1}^{1} |x(s)| ds
                         ≤ √2 ||k||_{j,∞} ||x||_{L²} ≤ 2 ||k||_{j,∞} ||x||_∞,

where ||k||_{r,∞} = max_{0≤i,j≤r} sup_{t,s∈[−1,1]} |(∂^{i+j}/∂t^i ∂s^j) k_u(t, s, x₀(s))|. Hence for j = 0, 1, 2, ..., r, we have

    ||[K′(x₀)x]^{(j)}||_∞ ≤ √2 ||k||_{j,∞} ||x||_{L²} ≤ 2 ||k||_{j,∞} ||x||_∞    (2.4)

and

    ||[K′(x₀)x]^{(j)}||_{L²} ≤ √2 ||[K′(x₀)x]^{(j)}||_∞ ≤ 2 ||k||_{j,∞} ||x||_{L²}.    (2.5)
Next we prove the following lemma, which we need in our convergence analysis.

Lemma 2.1. For any x, y ∈ L²[−1, 1] or C[−1, 1], the following hold:

    ||(K(x₀) − K(x))y||_∞ ≤ c₁ ||x₀ − x||_{L²} ||y||_{L²},
    ||(K′(x₀) − K′(x))y||_∞ ≤ c₂ ||x₀ − x||_{L²} ||y||_{L²}.

Proof. Using the Lipschitz continuity of k(t, s, u) and the Cauchy-Schwarz inequality, we have

    ||(K(x₀) − K(x))y||_∞ = max_{t∈[−1,1]} |(K(x₀) − K(x))y(t)|
                          = max_{t∈[−1,1]} |∫_{−1}^{1} [k(t, s, x₀(s)) − k(t, s, x(s))] y(s) ds|
                          ≤ c₁ ∫_{−1}^{1} |x₀(s) − x(s)| |y(s)| ds
                          ≤ c₁ ||x₀ − x||_{L²} ||y||_{L²}.    (2.6)

Along similar lines, using the Lipschitz continuity of k_u(t, s, u) and the Cauchy-Schwarz inequality, we obtain

    ||(K′(x₀) − K′(x))y||_∞ ≤ c₂ ||x₀ − x||_{L²} ||y||_{L²}.    (2.7)

Hence the proof follows. □

Next we apply the Legendre Galerkin and Legendre collocation methods to equation (2.1). To do this, we let X_n be the sequence of polynomial subspaces of X of degree ≤ n and
we choose the Legendre polynomials {φ₀, φ₁, φ₂, ..., φ_n} as an orthonormal basis for the subspace X_n, where φ₀(x) = 1, φ₁(x) = x, x ∈ [−1, 1], and for i = 1, 2, ..., n − 1,

    (i + 1) φ_{i+1}(x) = (2i + 1) x φ_i(x) − i φ_{i−1}(x),  x ∈ [−1, 1].    (2.8)
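The three-term recurrence (2.8) can be implemented directly. The paper's computations were carried out in Matlab; the following NumPy sketch (the function name is ours, not the paper's) evaluates φ₀, ..., φ_n at given points:

```python
import numpy as np

def legendre_basis(n, x):
    """Evaluate the Legendre polynomials phi_0, ..., phi_n at the points x
    via the three-term recurrence (2.8)."""
    x = np.asarray(x, dtype=float)
    phi = np.empty((n + 1,) + x.shape)
    phi[0] = 1.0                      # phi_0(x) = 1
    if n >= 1:
        phi[1] = x                    # phi_1(x) = x
    for i in range(1, n):
        # (i+1) phi_{i+1}(x) = (2i+1) x phi_i(x) - i phi_{i-1}(x)
        phi[i + 1] = ((2 * i + 1) * x * phi[i] - i * phi[i - 1]) / (i + 1)
    return phi
```

The recurrence is numerically stable, so high degrees n pose no difficulty, in contrast to evaluating monomial-basis coefficients.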
Orthogonal projection operator: Let X = L²[−1, 1] or C[−1, 1] and let the operator P_n^G : X → X_n be the orthogonal projection defined by

    P_n^G x = Σ_{j=0}^{n} ⟨x, φ_j⟩ φ_j,  x ∈ X,    (2.9)

where ⟨x, φ_j⟩ = ∫_{−1}^{1} x(t) φ_j(t) dt. We quote the following lemmas, which follow from (Canuto et al. [8], pp. 283-287).

Lemma 2.2. Let P_n^G : X → X_n denote the orthogonal projection defined by (2.9). Then the projection P_n^G satisfies the following properties:

(i) {P_n^G : n ∈ ℕ} is uniformly bounded in the L²-norm;
(ii) there exists a constant c > 0 such that for any n ∈ ℕ and u ∈ X,

    ||P_n^G u − u||_{L²} ≤ c inf_{φ∈X_n} ||u − φ||_{L²}.
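In practice the coefficients ⟨x, φ_j⟩ in (2.9) can be computed accurately by Gauss-Legendre quadrature when x is smooth. The following Python sketch is our own illustration (not the paper's Matlab code); it uses the L²-normalized polynomials φ_j = √((2j+1)/2) P_j, so that ⟨φ_i, φ_j⟩ = δ_ij:

```python
import numpy as np

def legendre_coeffs(f, n, quad_pts=64):
    """Coefficients <f, phi_j> of the orthogonal projection (2.9), with
    phi_j the L2-normalized Legendre polynomials, computed by
    Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(quad_pts)
    coeffs = []
    for j in range(n + 1):
        # phi_j = sqrt((2j+1)/2) * P_j is orthonormal on [-1, 1]
        phi_j = np.sqrt((2 * j + 1) / 2.0) * np.polynomial.legendre.Legendre.basis(j)(t)
        coeffs.append(np.sum(w * f(t) * phi_j))
    return np.array(coeffs)

def project(f, n, t):
    """Evaluate the orthogonal projection P_n^G f at the points t."""
    c = legendre_coeffs(f, n)
    t = np.asarray(t, dtype=float)
    vals = np.zeros_like(t)
    for j, cj in enumerate(c):
        vals += cj * np.sqrt((2 * j + 1) / 2.0) \
                   * np.polynomial.legendre.Legendre.basis(j)(t)
    return vals
```

For a polynomial f of degree ≤ n the projection reproduces f exactly, and for smooth f the error decays spectrally, consistent with Lemma 2.3 below.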
Lemma 2.3. Let P_n^G be the orthogonal projection defined by (2.9). Then for any u ∈ C^r[−1, 1], there hold

    ||u − P_n^G u||_{L²} ≤ c n^{−r} ||u^{(r)}||_{L²},    (2.10)
    ||u − P_n^G u||_∞ ≤ c n^{3/4−r} ||u^{(r)}||_{L²},    (2.11)
    ||u − P_n^G u||_∞ ≤ c n^{1/2−r} V(u^{(r)}),    (2.12)
where c is a constant independent of n and V(u^{(r)}) denotes the total variation of u^{(r)}.

Interpolatory projection operator: Let {τ₀, τ₁, ..., τ_n} be the zeros of the Legendre polynomial of degree n + 1 and define the interpolatory projection P_n^C : X → X_n by

    P_n^C u ∈ X_n,  P_n^C u(τ_i) = u(τ_i),  i = 0, 1, ..., n,  u ∈ X.    (2.13)

According to the analysis of (Canuto et al. [8], p. 289), P_n^C satisfies the following lemmas.

Lemma 2.4. Let P_n^C : X → X_n be the interpolatory projection defined by (2.13). Then there hold:

(i) {P_n^C : n ∈ ℕ} is uniformly bounded in the L²-norm;
(ii) there exists a constant c > 0 such that for any n ∈ ℕ and u ∈ X,

    ||P_n^C u − u||_{L²} ≤ c inf_{φ∈X_n} ||u − φ||_{L²} → 0,  n → ∞.

Lemma 2.5. Let P_n^C : X → X_n be the interpolatory projection defined by (2.13). Then for any u ∈ C^r[−1, 1], there exists a constant c independent of n such that

    ||u − P_n^C u||_{L²} ≤ c n^{−r} ||u^{(r)}||_{L²}.    (2.14)

Noting that

    ||P_n^C||_∞ = 1 + (2^{3/2}/√π) n^{1/2} + B₀ + O(n^{−1/2}),

where B₀ is a bounded constant (see Tang et al. [22]), we have

    ||(I − P_n^C)u||_∞ ≤ (1 + ||P_n^C||_∞) inf_{χ∈X_n} ||u − χ||_∞
                      ≤ c n^{1/2} n^{−r} ||u^{(r)}||_∞ ≤ c n^{1/2−r} ||u^{(r)}||_∞.    (2.15)
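The interpolatory projection (2.13) is simply polynomial interpolation at the Gauss-Legendre nodes. A minimal Python sketch (our own illustration; the function name is hypothetical):

```python
import numpy as np

def collocation_interpolant(f, n):
    """Interpolatory projection P_n^C f of (2.13): the degree-n polynomial
    matching f at the n+1 zeros of the Legendre polynomial of degree n+1."""
    tau, _ = np.polynomial.legendre.leggauss(n + 1)   # zeros of P_{n+1}
    # With n+1 nodes and deg=n, the least-squares fit is exact interpolation;
    # fitting in the Legendre basis is numerically stable.
    return np.polynomial.legendre.Legendre.fit(tau, f(tau), deg=n,
                                               domain=[-1, 1])
```

Interpolating in the Legendre basis rather than the monomial basis avoids the ill-conditioning of Vandermonde systems at higher degrees.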
Throughout this paper, we assume that the projection operator P_n : X → X_n is either the orthogonal projection P_n^G defined by (2.9) or the interpolatory projection P_n^C defined by (2.13). By Lemmas 2.2 and 2.4, ||P_n||_{L²} is uniformly bounded; we write ||P_n||_{L²} ≤ p for all n ∈ ℕ and ||P_n x||_{L²} ≤ p₁ ||x||_∞, where p and p₁ are constants independent of n. Further, we have from Lemmas 2.3 and 2.5 and the estimate (2.15) that

    ||u − P_n u||_{L²} ≤ c n^{−r} ||u^{(r)}||_{L²},    (2.16)
    ||u − P_n u||_∞ ≤ c n^{β−r} ||u^{(r)}||_∞,  0 < β < 1,  r = 0, 1, 2, ...,    (2.17)
where c is a constant independent of n, β = 3/4 for the orthogonal projection and β = 1/2 for the interpolatory projection. Note that ||P_n u − u||_∞ need not tend to 0 as n → ∞ for u ∈ C[−1, 1].

The projection method for equation (2.2) seeks an approximate solution x_n ∈ X_n such that

    x_n − P_n K(x_n) = P_n f.    (2.18)

If P_n is chosen to be P_n^G, the scheme (2.18) leads to the Legendre Galerkin method, whereas if P_n is replaced by P_n^C we obtain the Legendre collocation method. Let T_n be the operator defined by

    T_n u := P_n K(u) + P_n f.    (2.19)

Then equation (2.18) can be written as

    x_n = T_n x_n.    (2.20)
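Concretely, once K is replaced by a quadrature rule, (2.18) with P_n = P_n^C becomes a finite nonlinear system for the coefficients of x_n. The sketch below is our own illustration (the paper solves the systems in Matlab); it uses simple successive approximation, which converges under the contraction condition of Theorem 2.1, whereas in practice a Newton-type solver would typically be used:

```python
import numpy as np

def solve_collocation(kernel, f, n, iters=100):
    """Sketch of the Legendre collocation scheme (2.18): find x_n of
    degree n whose residual vanishes at the n+1 Gauss-Legendre collocation
    points.  `kernel(t, s, u)` and `f(t)` are user-supplied; the integral
    operator K is discretized by Gauss-Legendre quadrature."""
    tau, _ = np.polynomial.legendre.leggauss(n + 1)    # collocation points
    s, w = np.polynomial.legendre.leggauss(2 * n + 2)  # quadrature for K

    xn = np.polynomial.legendre.Legendre(np.zeros(n + 1))
    for _ in range(iters):
        xs = xn(s)
        # values of K(x_n) + f at the collocation points
        vals = np.array([np.sum(w * kernel(ti, s, xs)) for ti in tau]) + f(tau)
        # interpolate these values: x_n^{new} = P_n^C (K x_n + f)
        xn = np.polynomial.legendre.Legendre.fit(tau, vals, deg=n,
                                                 domain=[-1, 1])
    return xn
```

As a check, for the hypothetical kernel k(t, s, u) = (1/4) t s u and f(t) = (5/6) t, the exact solution of (2.1) is x(t) = t, which the iteration recovers to machine precision.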
In order to obtain a more accurate approximate solution, we further consider the iterated projection method for (2.2). To this end, we define the iterated solution as

    x̃_n = f + K(x_n).    (2.21)

Applying P_n to both sides of equation (2.21), we obtain

    P_n x̃_n = P_n f + P_n K(x_n).    (2.22)

From equations (2.18) and (2.22), it follows that P_n x̃_n = x_n. Using this, we see that the iterated solution x̃_n satisfies the equation

    x̃_n − K(P_n x̃_n) = f.    (2.23)
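Once x_n is available, the iterated solution (2.21) is an inexpensive post-processing step: one further application of the discretized integral operator. An illustrative sketch under the same conventions as above (names are ours):

```python
import numpy as np

def iterated_solution(kernel, f, xn, t, quad_pts=64):
    """One-step iteration (2.21): evaluate x~_n(t) = f(t) + (K x_n)(t) at
    the points t, with K discretized by Gauss-Legendre quadrature."""
    s, w = np.polynomial.legendre.leggauss(quad_pts)
    xs = xn(s)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    Kxn = np.array([np.sum(w * kernel(ti, s, xs)) for ti in t])
    return f(t) + Kxn
```

Note that x̃_n can be evaluated at arbitrary points t, not just at the collocation nodes, which is what makes the superconvergence results below usable in practice.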
Letting T̃_n(u) := f + K(P_n u), u ∈ X, equation (2.23) can be written as x̃_n = T̃_n x̃_n. We quote the following theorem from Vainikko [23], which gives the conditions under which the solvability of one equation leads to the solvability of the other.

Theorem 2.2. Let T̂ and T̃ be continuous operators over an open set Ω in a Banach space X. Let the equation x = T̃x have an isolated solution x̃₀ ∈ Ω and let the following conditions be satisfied:

(a) The operator T̂ is Fréchet differentiable in some neighbourhood of the point x̃₀, while the linear operator I − T̂′(x̃₀) is continuously invertible.
(b) For some δ > 0 and 0 < q < 1, the following inequalities are valid (the number δ is assumed to be so small that the sphere ||x − x̃₀|| ≤ δ is contained within Ω):

    sup_{||x−x̃₀||≤δ} ||(I − T̂′(x̃₀))^{−1} (T̂′(x) − T̂′(x̃₀))|| ≤ q,    (2.24)
    α = ||(I − T̂′(x̃₀))^{−1} (T̂(x̃₀) − T̃(x̃₀))|| ≤ δ(1 − q).    (2.25)

Then the equation x = T̂x has a unique solution x̂₀ in the sphere ||x − x̃₀|| ≤ δ. Moreover, the inequality

    α/(1 + q) ≤ ||x̂₀ − x̃₀|| ≤ α/(1 − q)    (2.26)

is valid.

Next we recall the definition of collectively compact approximation and some results from Anselone [2] and Ahues et al. [1].

Definition 2.1. A sequence {K_n} in B(X, Y) is said to be a collectively compact approximation of K ∈ B(X, Y), written K_n →cc K, if
(i) K_n x → K x for each x ∈ X,
(ii) for some positive integer n₀, the set ⋃_{n≥n₀} {(K_n − K)x : ||x|| ≤ 1, x ∈ X} is relatively compact.
Lemma 2.6. Let {K_n} be a uniformly bounded sequence in B(X) and let K_n →cc K or K_n →n K (i.e., in norm). If (I − K) is invertible, then (I − K_n)^{−1} exists and is uniformly bounded on X for sufficiently large n.

Now we discuss the existence of the approximate solution x_n and its convergence to x₀.

Theorem 2.3. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.2). Assume that 1 is not an eigenvalue of the linear operator K′(x₀), where K′(x₀) denotes the Fréchet derivative of K(x) at x₀. Let P_n : X → X_n be either the orthogonal or the interpolatory projection operator defined by (2.9) or (2.13), respectively. Then equation (2.18) has a unique solution x_n ∈ B(x₀, δ) = {x : ||x − x₀||_{L²} < δ} for some δ > 0 and for sufficiently large n. Moreover, there exists a constant 0 < q < 1, independent of n, such that

    α_n/(1 + q) ≤ ||x_n − x₀||_{L²} ≤ α_n/(1 − q),

where α_n = ||(I − T_n′(x₀))^{−1} (T_n(x₀) − T(x₀))||_{L²}.

Proof. Using the estimates (2.5) and (2.16), we have

    ||(T_n′(x₀) − T′(x₀))x||_{L²} = ||(P_n K′(x₀) − K′(x₀))x||_{L²} = ||(P_n − I) K′(x₀) x||_{L²}
                                  ≤ c n^{−r} ||[K′(x₀)x]^{(r)}||_{L²} ≤ 2c n^{−r} ||k||_{r,∞} ||x||_{L²}.

This implies ||T_n′(x₀) − T′(x₀)||_{L²} → 0 as n → ∞, i.e., T_n′(x₀) is norm convergent to T′(x₀). Hence by Lemma 2.6, (I − T_n′(x₀))^{−1} exists and is uniformly bounded on X for sufficiently large n, i.e., there exists some A₁ > 0 such that ||(I − T_n′(x₀))^{−1}||_{L²} ≤ A₁ < ∞. Now from the estimate (2.7), we have for any x ∈ B(x₀, δ),

    ||(T_n′(x₀) − T_n′(x))y||_{L²} = ||(P_n K′(x₀) − P_n K′(x))y||_{L²}
                                   ≤ ||P_n||_{L²} ||(K′(x₀) − K′(x))y||_{L²}
                                   ≤ √2 ||P_n||_{L²} ||(K′(x₀) − K′(x))y||_∞
                                   ≤ √2 p c₂ ||x₀ − x||_{L²} ||y||_{L²} ≤ √2 p c₂ δ ||y||_{L²}.

This implies

    ||T_n′(x₀) − T_n′(x)||_{L²} ≤ √2 p c₂ δ.    (2.27)

Hence, we have

    sup_{||x−x₀||_{L²}≤δ} ||(I − T_n′(x₀))^{−1} (T_n′(x₀) − T_n′(x))||_{L²} ≤ √2 A₁ p c₂ δ ≤ q (say),

where 0 < q < 1, which verifies condition (2.24) of Theorem 2.2. Making use of (2.16), we have

    α_n = ||(I − T_n′(x₀))^{−1} (T_n(x₀) − T(x₀))||_{L²}
        ≤ A₁ ||T_n(x₀) − T(x₀)||_{L²}
        = A₁ ||P_n(K x₀ + f) − (K x₀ + f)||_{L²}
        = A₁ ||(P_n − I)(K x₀ + f)||_{L²} = A₁ ||(P_n − I) x₀||_{L²}
        ≤ A₁ c n^{−r} ||x₀^{(r)}||_{L²} → 0, as n → ∞.    (2.28)

By choosing n large enough that α_n ≤ δ(1 − q), condition (2.25) of Theorem 2.2 is satisfied. Hence by applying Theorem 2.2, we obtain

    α_n/(1 + q) ≤ ||x_n − x₀||_{L²} ≤ α_n/(1 − q).

This completes the proof. □
Theorem 2.4. Let x₀ ∈ C^r[−1, 1], r ≥ 1, be an isolated solution of equation (2.2). Assume that 1 is not an eigenvalue of the linear operator K′(x₀), where K′(x₀) denotes the Fréchet derivative of K(x) at x₀. Let P_n : X → X_n be either the orthogonal or the interpolatory projection operator defined by (2.9) or (2.13), respectively. Then equation (2.18) has a unique solution x_n ∈ B(x₀, δ) = {x : ||x − x₀||_∞ < δ} for some δ > 0 and for sufficiently large n. Moreover, there exists a constant 0 < q < 1, independent of n, such that

    α_n/(1 + q) ≤ ||x_n − x₀||_∞ ≤ α_n/(1 − q),

where α_n = ||(I − T_n′(x₀))^{−1} (T_n(x₀) − T(x₀))||_∞.

Proof. Using the estimates (2.4) and (2.17), we have

    ||(T_n′(x₀) − T′(x₀))x||_∞ = ||(P_n K′(x₀) − K′(x₀))x||_∞ = ||(P_n − I) K′(x₀) x||_∞
                               ≤ c n^{β−r} ||[K′(x₀)x]^{(r)}||_∞ ≤ 2c n^{β−r} ||k||_{r,∞} ||x||_∞.    (2.29)

Since 0 < β < 1 and β < r = 1, 2, 3, ..., it follows that ||T_n′(x₀) − T′(x₀)||_∞ = O(n^{β−r}) → 0 as n → ∞. Hence by applying Lemma 2.6, we see that (I − T_n′(x₀))^{−1} exists and is uniformly bounded on X for sufficiently large n, i.e., there exists some A₂ > 0 such that ||(I − T_n′(x₀))^{−1}||_∞ ≤ A₂ < ∞. Now, for any x ∈ B(x₀, δ),

    ||(T_n′(x₀) − T_n′(x))y||_∞ ≤ ||(P_n K′(x₀) − P_n K′(x))y||_∞
                                ≤ ||(P_n − I){(K′(x₀) − K′(x))y}||_∞ + ||(K′(x₀) − K′(x))y||_∞.    (2.30)

Using the estimate (2.7), we have

    ||(K′(x₀) − K′(x))y||_∞ ≤ c₂ ||x₀ − x||_{L²} ||y||_{L²} ≤ 2c₂ ||x₀ − x||_∞ ||y||_∞ ≤ 2c₂ δ ||y||_∞.    (2.31)

Putting r = 1 in the estimate (2.17) and using assumption (iii), we have

    ||(P_n − I){(K′(x₀) − K′(x))y}||_∞ ≤ c n^{β−1} ||[(K′(x₀) − K′(x))y]^{(1)}||_∞
        = c n^{β−1} sup_{t∈[−1,1]} |∫_{−1}^{1} (∂/∂t)[k_u(t, s, x₀(s)) − k_u(t, s, x(s))] y(s) ds|
        ≤ c n^{β−1} c₃ ∫_{−1}^{1} |(x₀ − x)(s)| |y(s)| ds
        ≤ 2c c₃ n^{β−1} ||x − x₀||_∞ ||y||_∞ ≤ 2c c₃ n^{β−1} δ ||y||_∞.    (2.32)

Hence, combining the estimates (2.30), (2.31) and (2.32), we have

    ||T_n′(x₀) − T_n′(x)||_∞ ≤ (2c c₃ n^{β−1} + 2c₂) δ.

Since β < 1, we have

    sup_{||x−x₀||_∞≤δ} ||(I − T_n′(x₀))^{−1} (T_n′(x₀) − T_n′(x))||_∞ ≤ A₂ (2c c₃ n^{β−1} + 2c₂) δ ≤ q (say),

where 0 < q < 1, which verifies condition (2.24) of Theorem 2.2. Since β < r = 1, 2, ..., using the estimate (2.17), we have

    α_n = ||(I − T_n′(x₀))^{−1} (T_n(x₀) − T(x₀))||_∞
        ≤ A₂ ||T_n(x₀) − T(x₀)||_∞
        = A₂ ||P_n(K x₀ + f) − (K x₀ + f)||_∞
        = A₂ ||(P_n − I)(K x₀ + f)||_∞ = A₂ ||(P_n − I) x₀||_∞
        ≤ c A₂ n^{β−r} ||x₀^{(r)}||_∞ → 0, as n → ∞.    (2.33)

By choosing n large enough that α_n ≤ δ(1 − q), condition (2.25) of Theorem 2.2 is satisfied. Hence by applying Theorem 2.2, we obtain

    α_n/(1 + q) ≤ ||x_n − x₀||_∞ ≤ α_n/(1 − q).
This completes the proof. □

Next we discuss the existence of the iterated approximate solution x̃_n and its convergence to x₀.

Theorem 2.5. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.2). Assume that 1 is not an eigenvalue of K′(x₀). Then for sufficiently large n, the operator I − T̃_n′(x₀) is invertible on C[−1, 1] and there exist constants L, L₁ > 0 independent of n such that ||(I − T̃_n′(x₀))^{−1}||_∞ ≤ L and ||(I − T̃_n′(x₀))^{−1}||_{L²} ≤ L₁.

Proof. Consider

    |T̃_n′(x₀)x(t)| = |K′(P_n x₀) P_n x(t)| ≤ |(K′(P_n x₀) − K′(x₀)) P_n x(t)| + |K′(x₀) P_n x(t)|.    (2.34)

Now using the estimates (2.7), (2.16) and the fact that ||P_n x||_{L²} ≤ p₁ ||x||_∞, we have

    ||{K′(P_n x₀) − K′(x₀)} P_n x||_∞ ≤ c₂ ||P_n x₀ − x₀||_{L²} ||P_n x||_{L²}
                                      ≤ c c₂ p₁ n^{−r} ||x₀^{(r)}||_{L²} ||x||_∞ → 0, as n → ∞.    (2.35)

Again using the Cauchy-Schwarz inequality, we have

    ||K′(x₀) P_n x||_∞ = max_{t∈[−1,1]} |K′(x₀) P_n x(t)|
                       = max_{t∈[−1,1]} |∫_{−1}^{1} k_u(t, s, x₀(s)) P_n x(s) ds|
                       ≤ max_{t,s∈[−1,1]} |k_u(t, s, x₀(s))| ∫_{−1}^{1} |P_n x(s)| ds
                       ≤ √2 M ||P_n x||_{L²} ≤ √2 M p₁ ||x||_∞.    (2.36)

Now combining the estimates (2.34), (2.35) and (2.36), we obtain

    ||T̃_n′(x₀)||_∞ ≤ (c c₂ p₁ n^{−r} ||x₀^{(r)}||_{L²} + √2 M p₁) < ∞.

This shows that ||T̃_n′(x₀)||_∞ is uniformly bounded. Now

    |T̃_n′(x₀)x(t) − T̃_n′(x₀)x(t′)| = |K′(P_n x₀) P_n x(t) − K′(P_n x₀) P_n x(t′)| ≤ T₁ + T₂ + T₃,    (2.37)

where

    T₁ = |K′(P_n x₀) P_n x(t) − K′(x₀) P_n x(t)|,    (2.38)
    T₂ = |K′(x₀) P_n x(t) − K′(x₀) P_n x(t′)|,    (2.39)
    T₃ = |K′(x₀) P_n x(t′) − K′(P_n x₀) P_n x(t′)|.    (2.40)

Now using the estimate (2.35), we have

    T₁ = |(K′(P_n x₀) − K′(x₀)) P_n x(t)| ≤ ||(K′(P_n x₀) − K′(x₀)) P_n x||_∞
       ≤ c c₂ p₁ n^{−r} ||x₀^{(r)}||_{L²} ||x||_∞ → 0, as n → ∞,    (2.41)

and

    T₃ = |(K′(P_n x₀) − K′(x₀)) P_n x(t′)| ≤ ||(K′(P_n x₀) − K′(x₀)) P_n x||_∞
       ≤ c c₂ p₁ n^{−r} ||x₀^{(r)}||_{L²} ||x||_∞ → 0, as n → ∞.    (2.42)

Since k_u(t, s, u) ∈ C([−1, 1] × [−1, 1] × ℝ), k_u(t, s, u) is uniformly continuous in the first variable t. Hence for any ε > 0, however small, there exists some δ > 0 such that |k_u(t, s, u) − k_u(t′, s, u)| < ε whenever |t − t′| < δ. Hence

    T₂ = |∫_{−1}^{1} [k_u(t, s, x₀(s)) − k_u(t′, s, x₀(s))] P_n x(s) ds|
       ≤ sup_{−1≤s≤1} |k_u(t, s, x₀(s)) − k_u(t′, s, x₀(s))| ∫_{−1}^{1} |P_n x(s)| ds
       ≤ √2 ε ||P_n x||_{L²} ≤ √2 ε p₁ ||x||_∞ → 0, as t → t′.    (2.43)

Hence, combining the estimates (2.37), (2.41), (2.42) and (2.43), we have

    |T̃_n′(x₀)x(t) − T̃_n′(x₀)x(t′)| → 0, as t → t′ and n → ∞.    (2.44)

This implies that {T̃_n′(x₀)}_{n=1}^{∞} is collectively compact. Hence using Lemma 2.6, we conclude that (I − T̃_n′(x₀)) is invertible on C[−1, 1] and there exists a constant L > 0 independent of n such that ||(I − T̃_n′(x₀))^{−1}||_∞ ≤ L. Along similar lines it can be shown that the result holds in the L²-norm, i.e., there exists a constant L₁ > 0 independent of n such that ||(I − T̃_n′(x₀))^{−1}||_{L²} ≤ L₁. □

Theorem 2.6. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.2). Let P_n : X → X_n be either the orthogonal or the interpolatory projection operator defined by (2.9) or (2.13), respectively. Assume that 1 is not an eigenvalue of K′(x₀). Then for sufficiently large n, the iterated solution x̃_n defined by (2.23) is the unique solution in the sphere B(x₀, δ) = {x : ||x − x₀||_∞ < δ}. Moreover, there exists a constant 0 < q < 1, independent of n, such that

    β_n/(1 + q) ≤ ||x̃_n − x₀||_∞ ≤ β_n/(1 − q),

where β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_∞.
Proof. From Theorem 2.5, there exists a constant L > 0 such that ||(I − T̃_n′(x₀))^{−1}||_∞ ≤ L for sufficiently large n. Using the estimate (2.7) of Lemma 2.1 and the fact that ||P_n x||_{L²} ≤ p₁ ||x||_∞, we have for any x ∈ B(x₀, δ),

    ||{T̃_n′(x) − T̃_n′(x₀)}y||_∞ = ||{K′(P_n x) P_n − K′(P_n x₀) P_n}y||_∞
                                  = ||(K′(P_n x) − K′(P_n x₀)) P_n y||_∞
                                  ≤ c₂ ||P_n(x − x₀)||_{L²} ||P_n y||_{L²}
                                  ≤ c₂ p₁² ||x − x₀||_∞ ||y||_∞ ≤ c₂ p₁² δ ||y||_∞.    (2.45)

This implies

    sup_{||x−x₀||_∞≤δ} ||(I − T̃_n′(x₀))^{−1} (T̃_n′(x) − T̃_n′(x₀))||_∞ ≤ L c₂ p₁² δ ≤ q (say),

where 0 < q < 1, which verifies condition (2.24) of Theorem 2.2. Now using the estimate (2.6), we have

    ||{T̃_n(x₀) − T(x₀)}y||_∞ ≤ ||{K(P_n x₀) − K(x₀)}y||_∞
                              ≤ c₁ ||(I − P_n)x₀||_{L²} ||y||_{L²}
                              ≤ √2 c₁ ||(I − P_n)x₀||_{L²} ||y||_∞.

Hence, using the estimate (2.16), we have

    ||T̃_n(x₀) − T(x₀)||_∞ ≤ √2 c₁ c n^{−r} ||x₀^{(r)}||_{L²} → 0, as n → ∞.    (2.46)

Hence

    β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_∞ ≤ √2 L c₁ c n^{−r} ||x₀^{(r)}||_{L²} → 0, as n → ∞.

Choose n large enough that β_n ≤ δ(1 − q). Then condition (2.25) of Theorem 2.2 is satisfied. Thus by applying Theorem 2.2, we obtain

    β_n/(1 + q) ≤ ||x̃_n − x₀||_∞ ≤ β_n/(1 − q),

where β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_∞. This completes the proof. □
Theorem 2.7. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.2). Let P_n : X → X_n be either the orthogonal or the interpolatory projection operator defined by (2.9) or (2.13), respectively. Assume that 1 is not an eigenvalue of K′(x₀). Then for sufficiently large n, the iterated solution x̃_n defined by (2.23) is the unique solution in the sphere B(x₀, δ) = {x : ||x − x₀||_{L²} < δ}. Moreover, there exists a constant 0 < q < 1, independent of n, such that

    β_n/(1 + q) ≤ ||x̃_n − x₀||_{L²} ≤ β_n/(1 − q),

where β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_{L²}.

Proof. From Theorem 2.5, (I − T̃_n′(x₀))^{−1} exists and is uniformly bounded in the L²-norm on C[−1, 1], i.e., there exists a constant L₁ > 0 such that ||(I − T̃_n′(x₀))^{−1}||_{L²} ≤ L₁. Now using the estimate (2.7) and the fact that ||P_n||_{L²} ≤ p, we have for any x ∈ B(x₀, δ),

    ||{T̃_n′(x) − T̃_n′(x₀)}y||_{L²} ≤ √2 ||{T̃_n′(x) − T̃_n′(x₀)}y||_∞
                                    = √2 ||(K′(P_n x) − K′(P_n x₀)) P_n y||_∞
                                    ≤ √2 c₂ ||P_n(x − x₀)||_{L²} ||P_n y||_{L²}
                                    ≤ √2 c₂ p² ||x − x₀||_{L²} ||y||_{L²} ≤ √2 c₂ p² δ ||y||_{L²}.

Thus we obtain

    sup_{||x−x₀||_{L²}≤δ} ||(I − T̃_n′(x₀))^{−1} (T̃_n′(x) − T̃_n′(x₀))||_{L²} ≤ √2 L₁ c₂ p² δ ≤ q (say),

where 0 < q < 1, which verifies condition (2.24) of Theorem 2.2. Now using the estimate (2.46), we have

    β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_{L²}
        ≤ ||(I − T̃_n′(x₀))^{−1}||_{L²} ||T̃_n(x₀) − T(x₀)||_{L²}
        ≤ √2 L₁ ||T̃_n(x₀) − T(x₀)||_∞ → 0, as n → ∞.

Choose n large enough that β_n ≤ δ(1 − q). Then condition (2.25) of Theorem 2.2 is satisfied. Applying Theorem 2.2, we get

    β_n/(1 + q) ≤ ||x̃_n − x₀||_{L²} ≤ β_n/(1 − q),

where β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_{L²}. This completes the proof. □
Theorem 2.8. Let x₀ ∈ C[−1, 1] be an isolated solution of equation (2.2) and let x̃_n be defined by the iterated scheme (2.23). Then the following hold:

    ||x̃_n − x₀||_∞ ≤ c{||x₀ − P_n x₀||²_{L²} + |⟨g_t, (I − P_n)x₀⟩|}    (2.47)

and

    ||x̃_n − x₀||_{L²} ≤ c{||x₀ − P_n x₀||²_{L²} + |⟨g_t, (I − P_n)x₀⟩|},    (2.48)

where g_t(s) = k_u(t, s, x₀(s)) and c is a constant independent of n.

Proof. From Theorem 2.6, we have

    β_n/(1 + q) ≤ ||x̃_n − x₀||_∞ ≤ β_n/(1 − q),

where β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_∞. Hence using Theorem 2.5, we get

    ||x̃_n − x₀||_∞ ≤ β_n = ||(I − T̃_n′(x₀))^{−1} (T̃_n(x₀) − T(x₀))||_∞
                   ≤ ||(I − T̃_n′(x₀))^{−1}||_∞ ||T̃_n(x₀) − T(x₀)||_∞
                   ≤ L ||K(P_n x₀) − K(x₀)||_∞.    (2.49)

We denote g(t, s, x₀, x, θ) = k_u(t, s, x₀ + θ(x − x₀)) and g_t(s) = k_u(t, s, x₀(s)). Now, by the mean value theorem,

    |K(P_n x₀)(t) − K(x₀)(t)| = |∫_{−1}^{1} [k(t, s, P_n x₀(s)) − k(t, s, x₀(s))] ds|
        = |∫_{−1}^{1} k_u(t, s, x₀(s) + θ(P_n x₀(s) − x₀(s))) (x₀ − P_n x₀)(s) ds|
        = |∫_{−1}^{1} g(t, s, x₀, P_n x₀, θ)(x₀ − P_n x₀)(s) ds|
        = |∫_{−1}^{1} [g(t, s, x₀, P_n x₀, θ) − g_t(s) + g_t(s)](x₀ − P_n x₀)(s) ds|
        ≤ |∫_{−1}^{1} [g(t, s, x₀, P_n x₀, θ) − g_t(s)](x₀ − P_n x₀)(s) ds|
          + |∫_{−1}^{1} g_t(s)(x₀ − P_n x₀)(s) ds|.    (2.50)

For the first term of the above estimate (2.50), using the Lipschitz continuity of k_u and 0 ≤ θ ≤ 1, we have

    |∫_{−1}^{1} [g(t, s, x₀, P_n x₀, θ) − g_t(s)](x₀ − P_n x₀)(s) ds|
        = |∫_{−1}^{1} [k_u(t, s, x₀(s) + θ(P_n x₀(s) − x₀(s))) − k_u(t, s, x₀(s))](x₀ − P_n x₀)(s) ds|
        ≤ c₂ ∫_{−1}^{1} |(x₀ − P_n x₀)(s)|² ds ≤ c₂ ||x₀ − P_n x₀||²_{L²}.    (2.51)

For the second term of (2.50), we have

    |∫_{−1}^{1} g_t(s)(x₀ − P_n x₀)(s) ds| = |⟨g_t(·), (I − P_n)x₀⟩|.    (2.52)

Hence, combining the estimates (2.50), (2.51) and (2.52), we have

    ||K(P_n x₀) − K(x₀)||_∞ ≤ c₂ ||x₀ − P_n x₀||²_{L²} + |⟨g_t(·), (I − P_n)x₀⟩|.    (2.53)

Therefore, from the estimates (2.49) and (2.53), we have

    ||x̃_n − x₀||_∞ ≤ L ||K(P_n x₀) − K(x₀)||_∞ ≤ c{||x₀ − P_n x₀||²_{L²} + |⟨g_t(·), (I − P_n)x₀⟩|},    (2.54)

where c is a constant independent of n; this proves the estimate (2.47). Similarly, for the L²-norm we can show that

    ||x̃_n − x₀||_{L²} ≤ √2 ||x̃_n − x₀||_∞ ≤ c{||x₀ − P_n x₀||²_{L²} + |⟨g_t(·), (I − P_n)x₀⟩|},    (2.55)

which proves the estimate (2.48). This completes the proof. □

Now we discuss the convergence rates of the approximate and iterated approximate solutions. To distinguish between the Legendre Galerkin and Legendre collocation solutions, we set the following notation. For the Legendre Galerkin method, we denote the approximate and iterated approximate solutions by x_n = x_n^G and x̃_n = x̃_n^G, respectively; for the Legendre collocation method, we write x_n = x_n^C and x̃_n = x̃_n^C, respectively.

Theorem 2.9. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.1), and let x_n = x_n^G be the Legendre Galerkin solution or x_n = x_n^C be the Legendre collocation approximation of x₀. Then we have the following convergence rates:

    ||x₀ − x_n^G||_{L²}, ||x₀ − x_n^C||_{L²} = O(n^{−r})

and

    ||x₀ − x_n^G||_∞, ||x₀ − x_n^C||_∞ = O(n^{1/2−r}).
Proof. From Theorem 2.3, we have

    α_n/(1 + q) ≤ ||x_n − x₀||_{L²} ≤ α_n/(1 − q),

where α_n = ||(I − T_n′(x₀))^{−1} (T_n(x₀) − T(x₀))||_{L²}. Hence we have from the estimate (2.28)

    ||x_n − x₀||_{L²} ≤ α_n ≤ A₁ ||(P_n − I) x₀||_{L²} ≤ A₁ c n^{−r} ||x₀^{(r)}||_{L²} = O(n^{−r}),

where c is a constant independent of n. Now, for the error in the infinity norm, using the estimate (2.33) we have

    ||x_n − x₀||_∞ ≤ α_n ≤ A₂ ||(P_n − I) x₀||_∞.

Hence for the Legendre Galerkin solution x_n = x_n^G, using the estimate (2.12) of Lemma 2.3, we have

    ||x_n^G − x₀||_∞ ≤ A₂ ||(P_n^G − I) x₀||_∞ ≤ A₂ c n^{1/2−r} V(x₀^{(r)}) = O(n^{1/2−r}),    (2.56)

and for the Legendre collocation solution x_n^C, using the estimate (2.15), we have

    ||x_n^C − x₀||_∞ ≤ A₂ ||(P_n^C − I) x₀||_∞ ≤ A₂ c n^{1/2−r} ||x₀^{(r)}||_∞ = O(n^{1/2−r}).    (2.57)

Hence the proof follows. □

Next we discuss the error bounds for the iterated Legendre Galerkin and iterated Legendre collocation solutions separately.
Theorem 2.10. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.1) and let x̃_n^G be the iterated Legendre Galerkin approximation of x₀. Then we have the following superconvergence rates:

    ||x₀ − x̃_n^G||_{L²} = O(n^{−2r}),
    ||x₀ − x̃_n^G||_∞ = O(n^{−2r}).

Proof. From Theorem 2.8, we have

    ||x̃_n^G − x₀||_∞ ≤ c{||x₀ − P_n^G x₀||²_{L²} + |⟨g_t(·), (I − P_n^G)x₀⟩|},    (2.58)

where c is a constant independent of n. Using the orthogonality of the projection P_n^G and the Cauchy-Schwarz inequality, we obtain

    |⟨g_t(·), (I − P_n^G)x₀⟩| = |⟨(I − P_n^G)g_t(·), (I − P_n^G)x₀⟩|
                              ≤ ||(I − P_n^G)g_t(·)||_{L²} ||x₀ − P_n^G x₀||_{L²}.    (2.59)

Hence, using the estimates (2.58), (2.59) and (2.10), we have

    ||x̃_n^G − x₀||_∞ ≤ c{||x₀ − P_n^G x₀||²_{L²} + ||(I − P_n^G)g_t(·)||_{L²} ||x₀ − P_n^G x₀||_{L²}}
                     ≤ c n^{−2r} ||x₀^{(r)}||²_{L²} + c n^{−2r} ||x₀^{(r)}||_{L²} ||(g_t(·))^{(r)}||_{L²} = O(n^{−2r}).    (2.60)

Also,

    ||x̃_n^G − x₀||_{L²} ≤ √2 ||x̃_n^G − x₀||_∞ = O(n^{−2r}).    (2.61)

Hence the proof follows. □
Theorem 2.11. Let x₀ ∈ C^r[−1, 1] be an isolated solution of equation (2.1) and let x̃_n^C be the iterated Legendre collocation approximation of x₀. Then we have the following convergence rates:

    ||x₀ − x̃_n^C||_{L²} = O(n^{−r}),
    ||x₀ − x̃_n^C||_∞ = O(n^{−r}).

Proof. Using Theorem 2.8 and Lemma 2.5, we have for the interpolatory projection P_n^C

    ||x̃_n^C − x₀||_∞ ≤ c{||x₀ − P_n^C x₀||²_{L²} + |⟨g_t(·), (I − P_n^C)x₀⟩|}
                     ≤ c{||x₀ − P_n^C x₀||²_{L²} + ||g_t||_{L²} ||x₀ − P_n^C x₀||_{L²}}
                     ≤ c{n^{−2r} ||x₀^{(r)}||²_{L²} + n^{−r} ||g_t||_{L²} ||x₀^{(r)}||_{L²}} = O(n^{−r}),    (2.62)

and

    ||x̃_n^C − x₀||_{L²} ≤ √2 ||x̃_n^C − x₀||_∞ = O(n^{−r}).    (2.63)

Hence the proof follows. □

Remark. From Theorems 2.9, 2.10 and 2.11 we observe that the Legendre Galerkin and Legendre collocation solutions of the Urysohn integral equation have the same orders of convergence, O(n^{−r}) in the L²-norm and O(n^{1/2−r}) in the infinity norm. The iterated Legendre Galerkin solution converges with the order O(n^{−2r}) in both the L²-norm and the infinity norm, whereas the iterated Legendre collocation solution converges with the order O(n^{−r}) in both norms. This shows that the iterated Legendre Galerkin method improves upon the iterated Legendre collocation method.

3. Numerical Examples

In this section we present the numerical results. To apply the Legendre Galerkin and Legendre collocation methods, we choose the approximating subspaces X_n to be the Legendre polynomial subspaces of degree ≤ n. The Legendre polynomials can be generated by the three-term recurrence relation

    φ₀(s) = 1,  φ₁(s) = s,  s ∈ [−1, 1],

and

    (i + 1) φ_{i+1}(x) = (2i + 1) x φ_i(x) − i φ_{i−1}(x),  x ∈ [−1, 1],  i = 1, 2, ..., n − 1.    (3.1)
We denote the Galerkin and iterated Galerkin solutions by x_n^G and x̃_n^G, and the collocation and iterated collocation solutions by x_n^C and x̃_n^C, respectively, in the following tables. We present the errors of the approximate and iterated approximate solutions in both the L²-norm and the infinity norm. In Tables 1 and 2, n represents the highest degree of the Legendre polynomials employed in the computation. The numerical algorithms are implemented in Matlab.

Example 3.1. We consider the integral equation

    x(t) − ∫_{−1}^{1} k(t, s, x(s)) ds = f(t),  −1 ≤ t ≤ 1,    (3.2)

with the kernel function k(t, s, x(s)) = (√3 π/162) cos(π|s − t|/4) [x(s)]² and the function f(t) = ((√2 − 1)/4) cos(πt/4), where the exact solution is given by x(t) = cos(πt/4).
Table 1: Legendre Galerkin method

n   ‖x − x_n^G‖_{L2}     ‖x − x_n^G‖_∞       ‖x − x̃_n^G‖_{L2}    ‖x − x̃_n^G‖_∞
2   0.16612096278e-02    0.3471163964e-02    0.27140926077e-04    0.21215376302e-04
4   0.00867599248e-03    0.00218586143e-02   0.00003171918e-04    0.00002479408e-04
5   0.00867599248e-03    0.00218586143e-02   0.00003171918e-04    0.00002479407e-04
7   0.00024104785e-04    0.00069092030e-04   0.00000997521e-08    0.00000820455e-08
8   0.00000041535e-04    0.00000131351e-04   0.00000023431e-08    0.00000066613e-08
Table 2: Legendre Collocation method

n   ‖x − x_n^C‖_{L2}     ‖x − x_n^C‖_∞       ‖x − x̃_n^C‖_{L2}    ‖x − x̃_n^C‖_∞
2   0.23728226126e-02    0.63959128045e-02   0.50686226008e-03    0.39620142716e-03
4   0.01227979480e-03    0.00400877378e-02   0.01552173562e-05    0.00012132949e-03
5   0.00867602639e-03    0.00218077988e-02   0.00036041468e-05    0.00028172709e-05
7   0.00024104407e-04    0.00068986980e-04   0.00001183440e-08    0.00000096811e-07
8   0.00000058748e-04    0.00002505430e-05   0.00000026700e-08    0.00000006217e-07
From Tables 1 and 2, we see that the numerical results agree with the theoretical results.

Example 3.2. We consider the integral equation

x(t) − ∫_{0}^{1} k(t, s, x(s)) ds = f(t),  0 ≤ t ≤ 1,    (3.3)

with the kernel function k(t, s, x(s)) = (1/5) cos(πt) sin(πs)[x(s)]^3 and the function f(t) = sin(πt), for which the exact solution is x(t) = sin(πt) + (1/3)(20 − √391) cos(πt). For this example, we compare our results with the piecewise polynomial based Galerkin and collocation methods proposed in [3] and [4]. To do so, we consider a uniform partition of [0, 1], 0 = t_1 < t_2 < ... < t_{n+1} = 1, where t_i = (i − 1)/n, i = 1, 2, ..., n + 1. We choose the approximating subspace to be the space of piecewise constant functions with respect to this partition, which has dimension n. The collocation points are taken to be the roots of the Legendre polynomial of degree 1 on [0, 1], shifted to the subintervals (t_i, t_{i+1}); these are the midpoints of the subintervals, i.e., we choose the collocation points

s_i = (2i − 1)/(2n),  i = 1, 2, ..., n.

In Tables 3 and 4 we present the errors in the Legendre Galerkin and Legendre collocation methods, and in Tables 5 and 6 we give the errors for the Galerkin and collocation methods with the space of piecewise constant functions as the approximating subspace. In Tables 3 and 4, n denotes the highest degree of the Legendre polynomial employed in the computation, and in Tables 5 and 6, n denotes the dimension of the approximating subspace.
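Because the kernel of Example 3.2 is separable, the integral ∫_0^1 sin(πs)[x(s)]^3 ds does not depend on t, and the coefficient c_0 = (20 − √391)/3 of the exact solution is the root of 3c^2 − 40c + 3 = 0 produced by substituting x(t) = sin(πt) + c cos(πt) into the equation. A short numerical sketch of this consistency check:

```python
import numpy as np

# Data of Example 3.2: separable kernel, closed-form exact solution
c0 = (20 - np.sqrt(391)) / 3
x = lambda t: np.sin(np.pi * t) + c0 * np.cos(np.pi * t)
f = lambda t: np.sin(np.pi * t)

# Gauss-Legendre nodes mapped from [-1, 1] to [0, 1]
nodes, weights = np.polynomial.legendre.leggauss(30)
s = 0.5 * (nodes + 1.0)
w = 0.5 * weights

# int_0^1 sin(pi s) x(s)^3 ds = (3/8)(1 + c0^2), independent of t
integral = np.sum(w * np.sin(np.pi * s) * x(s)**3)
print(abs(integral - 0.375 * (1 + c0**2)))  # essentially zero

# Residual of (3.3): x(t) - (1/5) cos(pi t) * integral - f(t)
res = max(abs(x(t) - 0.2 * np.cos(np.pi * t) * integral - f(t))
          for t in np.linspace(0.0, 1.0, 9))
print(res)  # essentially zero
```

The identity c_0 = (3/40)(1 + c_0^2) is exactly the quadratic above, which is why both printed residuals vanish to machine precision.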
Table 3: Legendre Galerkin method

n   ‖x − x_n^G‖_{L2}   ‖x − x_n^G‖_∞     ‖x − x̃_n^G‖_{L2}   ‖x − x̃_n^G‖_∞
2   0.18447865e-01     0.64441433e-01    0.10769066e-02     0.15229748e-02
3   0.17294403e-01     0.52172170e-01    0.10103846e-02     0.14288986e-02
4   0.42424720e-03     0.19748104e-02    0.63371262e-05     0.89620430e-05
5   0.36907755e-03     0.12964794e-02    0.53242459e-05     0.75296151e-05
6   0.52214001e-05     0.28375175e-04    0.96363220e-08     0.13627807e-07
7   0.41553615e-05     0.16651422e-04    0.70737260e-08     0.10003751e-07
8   0.39931165e-07     0.24426326e-06    0.05210533e-11     0.73688416e-11
9   0.28892391e-07     0.12912277e-06    0.32443617e-11     0.45881077e-11
Table 4: Legendre Collocation method

n   ‖x − x_n^C‖_{L2}   ‖x − x_n^C‖_∞     ‖x − x̃_n^C‖_{L2}   ‖x − x̃_n^C‖_∞
2   0.25334382e-01     0.1066739308      0.14589879e-02     0.20633190e-02
3   0.17300974e-01     0.51950086e-01    0.10608995e-02     0.15003373e-02
4   0.56284025e-03     0.30247078e-02    0.16697387e-04     0.23613654e-04
5   0.36911801e-03     0.12940851e-02    0.55660729e-05     0.78716100e-05
6   0.66747628e-05     0.43031650e-04    0.52125995e-07     0.73717234e-07
7   0.41555544e-05     0.16654219e-04    0.74765899e-08     0.10573487e-07
8   0.49285798e-07     0.35599727e-06    0.45669483e-10     0.64586392e-10
9   0.28887128e-07     0.12690817e-06    0.36173459e-11     0.51156718e-11
Table 5: Piecewise polynomial based Galerkin method

n     ‖x − x_n^G‖_{L2}  ‖x − x_n^G‖_∞    ‖x − x̃_n^G‖_{L2}  ‖x − x̃_n^G‖_∞
2     0.30981458760     0.68983512701    0.30029759e-01    0.42468461e-01
4     0.15921432377     0.38594513556    0.52877651e-02    0.74780235e-02
8     0.8019028e-01     0.19641976481    0.13699122e-02    0.19373469e-02
16    0.40166351e-01    0.96868820e-01   0.34553767e-03    0.48866369e-03
32    0.20088412e-01    0.48426304e-01   0.86545529e-04    0.12239377e-03
64    0.10035378e-01    0.23558627e-01   0.21605623e-04    0.30554943e-04
128   0.50117181e-02    0.11860178e-01   0.53890086e-05    0.76212033e-05
256   0.24743334e-02    0.60119542e-02   0.13135942e-05    0.18577013e-05
Table 6: Piecewise polynomial based collocation method

n     ‖x − x_n^C‖_{L2}  ‖x − x_n^C‖_∞    ‖x − x̃_n^C‖_{L2}  ‖x − x̃_n^C‖_∞
2     0.31698305338     0.74927708817    0.21307836e-01    0.30133808e-01
4     0.16014959283     0.38899230331    0.13752503e-02    0.19448960e-02
8     0.80306852e-01    0.19578990964    0.34590201e-03    0.48917894e-03
16    0.40184230e-01    0.97272747e-01   0.86614210e-04    0.12249090e-03
32    0.20095233e-01    0.48687700e-01   0.21660648e-04    0.30632759e-04
64    0.10048001e-01    0.24177809e-01   0.54156002e-05    0.76588094e-05
128   0.50241483e-02    0.12241549e-01   0.13539812e-05    0.19148171e-05
256   0.25120249e-02    0.61006930e-02   0.33848211e-06    0.47868563e-06
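The theoretical rates can be read off the tables numerically: since the errors behave like e_n ≈ C n^{-p}, the empirical order is p ≈ log2(e_n / e_{2n}) whenever n doubles. A small sketch using the L2 errors of the piecewise constant Galerkin method from Table 5 (where r = 1, so the theory predicts O(n^{-1}) for the Galerkin solution and O(n^{-2}) for its iterate):

```python
import numpy as np

# L2 errors of the piecewise constant Galerkin solution and of its
# iterate, for n = 16, 32, 64, 128 (values taken from Table 5)
e_galerkin = np.array([0.40166351e-01, 0.20088412e-01, 0.10035378e-01, 0.50117181e-02])
e_iterated = np.array([0.34553767e-03, 0.86545529e-04, 0.21605623e-04, 0.53890086e-05])

# Empirical order between consecutive rows: p = log2(e_n / e_{2n})
p_gal = np.log2(e_galerkin[:-1] / e_galerkin[1:])
p_it = np.log2(e_iterated[:-1] / e_iterated[1:])
print(np.round(p_gal, 2))  # close to 1: O(n^-1) for the Galerkin solution
print(np.round(p_it, 2))   # close to 2: O(n^-2) superconvergence of the iterate
```

The same computation applied to Tables 3 and 4 shows errors dropping by several orders of magnitude per unit increase of n, the spectral behaviour expected for a smooth kernel.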
From Tables 3, 4, 5 and 6, we see that the Legendre Galerkin and Legendre collocation methods converge more rapidly than the piecewise polynomial based Galerkin and collocation methods. It is clear from the numerical results that, in the case of the Legendre Galerkin and Legendre collocation methods, we obtain better errors by solving a much smaller nonlinear system of equations. For example, in the iterated Legendre Galerkin and iterated Legendre collocation methods, a system of size 5 × 5 suffices to obtain an error of order 10^{-5}, whereas in the piecewise polynomial based iterated Galerkin and iterated collocation methods we need to solve a system of size 128 × 128.

References

[1] Ahues, M., Largillier, A. and Limaye, B. V. Spectral computations for bounded operators, Chapman and Hall/CRC, New York (2001).
[2] Anselone, P. M. Collectively compact operator approximation theory and application to integral equations, Prentice Hall, Englewood Cliffs, NJ (1971).
[3] Atkinson, K. A survey of numerical methods for solving nonlinear integral equations, J. Integral Equations Appl. 4.1 (1992): 15-46.
[4] Atkinson, K. E. and Potra, F. A. Projection and iterated projection methods for nonlinear integral equations, SIAM J. Numer. Anal. 24.6 (1987): 1352-1373.
[5] Atkinson, K. and Flores, J. The discrete collocation method for nonlinear integral equations, IMA J. Numer. Anal. 13.2 (1993): 195-213.
[6] Atkinson, K. E. and Potra, F. A. The discrete Galerkin method for nonlinear integral equations, J. Integral Equations Appl. 1 (1988): 17-54.
[7] Atkinson, K. E. The numerical solution of integral equations of the second kind, Cambridge University Press, Cambridge (1997).
[8] Canuto, C., Hussaini, M. Y., Quarteroni, A. and Zang, T. A. Spectral methods: fundamentals in single domains, Springer-Verlag, Berlin (2006).
[9] Chen, Z., Wu, B. and Xu, Y. Fast multilevel augmentation methods for solving Hammerstein equations, SIAM J. Numer. Anal. 47.3 (2009): 2321-2346.
[10] Chen, Z., Li, J. and Zhang, Y. A fast multiscale solver for modified Hammerstein equations, Appl. Math. Comput. 218.7 (2011): 3057-3067.
[11] Ganesh, M. and Joshi, M. C. Numerical solvability of Hammerstein integral equations of mixed type, (1991).
[12] Ganesh, M. and Joshi, M. C. Discrete numerical solvability of Hammerstein integral equations of mixed type, J. Integral Equations Appl. 2.1 (1989): 107-124.
[13] Guo, B.-y. Spectral methods and their applications, World Scientific, Singapore (1998).
[14] Delves, L. M. and Mohamed, J. L. Computational methods for integral equations, Cambridge University Press (1985).
[15] Kaneko, H., Noren, R. D. and Xu, Y. Regularity of the solution of Hammerstein equations with weakly singular kernels, Integral Equations Operator Theory 13.5 (1990): 660-670.
[16] Long, G., Sahani, M. M. and Nelakanti, G. Polynomially based multi-projection methods for Fredholm integral equations of the second kind, Appl. Math. Comput. 215.1 (2009): 147-155.
[17] Berenguer, M. I., Fernandez Munoz, M. V., Garralda-Guillem, A. I. and Ruiz Galan, M. Numerical treatment of fixed point applied to the nonlinear Fredholm integral equation, Fixed Point Theory Appl. 2009 (2009).
[18] Palomares, A. and Ruiz Galan, M. Isomorphisms, Schauder bases in Banach spaces, and numerical solution of integral and differential equations, Numer. Funct. Anal. Optim. 26.1 (2005): 129-137.
[19] Panigrahi, B. L. and Nelakanti, G. Superconvergence of Legendre projection methods for the eigenvalue problem of a compact integral operator, J. Comput. Appl. Math. 235.8 (2011): 2380-2391.
[20] Shen, J. and Tang, T. Spectral and high-order methods with applications, Science Press, Beijing (2006).
[21] Shen, J., Tang, T. and Wang, L.-L. Spectral methods: algorithms, analysis and applications, Springer Series in Computational Mathematics, Springer, New York (2011).
[22] Tang, T., Xu, X. and Cheng, J. On spectral methods for Volterra type integral equations and the convergence analysis, J. Comput. Math. 26.6 (2008): 825-837.
[23] Vainikko, G. M. A perturbed Galerkin method and the general theory of approximate methods for nonlinear equations, USSR Comput. Math. Math. Phys. 7.4 (1967): 1-41.
[24] Wan, Z., Chen, Y. and Huang, Y. Legendre spectral Galerkin method for second-kind Volterra integral equations, Front. Math. China 4.1 (2009): 181-193.
[25] Xie, Z., Li, X. and Tang, T. Convergence analysis of spectral Galerkin methods for Volterra type integral equations, J. Sci. Comput. 53.2 (2012): 414-434.