Adv Comput Math (2010) 33:305–330 DOI 10.1007/s10444-009-9137-4
Numerical methods for Fredholm integral equations with singular right-hand sides L. Fermo · M. G. Russo
Received: 5 November 2008 / Accepted: 8 July 2009 / Published online: 22 July 2009 © Springer Science + Business Media, LLC 2009
Abstract Fredholm integral equations with the right-hand side having singularities at the endpoints are considered. The singularities are moved into the kernel that is subsequently regularized by a suitable one-to-one map. The Nyström method is applied to the regularized equation. The convergence, stability and well conditioning of the method is proved in spaces of weighted continuous functions. The special case of the weakly singular and symmetric kernel is also investigated. Several numerical tests are included. Keywords Fredholm integral equations · Projection methods · Nyström method Mathematics Subject Classifications (2000) 65R20 · 45E05
Communicated by Yuesheng Xu. Dedicated to Prof. Giuseppe Mastroianni on the occasion of his 70th birthday. Work supported by the research project PRIN 2006 of the Italian Ministry for the University and Research.

L. Fermo · M. G. Russo (B)
Department of Mathematics and Computer Science, University of Basilicata, v.le dell'Ateneo Lucano 10, 85100 Potenza, Italy
e-mail: [email protected]
L. Fermo
e-mail: [email protected]
1 Introduction

In this paper we deal with Fredholm integral equations of the following type

f(y) − μ ∫_{−1}^{1} k(x, y) f(x) v^{α,β}(x) dx = g(y),   |y| < 1,   (1.1)
where v^{α,β}(x) = (1 − x)^α (1 + x)^β, α, β > −1, denotes a Jacobi weight, μ ∈ R is a constant, f is the unknown function, while k and g are known. The case when the right-hand side and the kernel are smooth functions has been investigated extensively, and there is a wide literature on efficient numerical methods based on the polynomial approximation of the solution and on Nyström methods (see for instance [1] and the bibliography therein and, more recently, [6]). In this paper we consider a right-hand side singular at ±1 and a kernel that can be singular at the same points and/or weakly singular and symmetric. We remark that the integral equation under consideration is, for instance, equivalent to a so-called linear integral equation of the third kind. Such equations arise in several physical problems (elasticity theory, neutron transport; see e.g. [2] and the references therein). There exist numerical procedures that work also in the proposed case (see again [6]), but the convergence can be very slow. The strategy proposed in this paper is to regularize the equation and then to use a global method. In particular we propose to "move" the singularities into the kernel and then regularize the equation by means of a suitable polynomial-type transformation (see e.g. [10, 15] and their bibliographies). This approach is particularly suitable in the case when the kernel is weakly singular and symmetric. Indeed in this situation, if the right-hand side is smooth, then the solution can have singularities, whose behavior is known, only at the endpoints [23]. Moreover for this type of equation a wide literature is available concerning fast algorithms (see, for instance, [3–5, 8, 9] and [21]). We use the Nyström method based on the Gaussian quadrature formula in the case when the kernel is regular, and based on a product quadrature rule when the kernel is symmetric.
We prove that these methods are stable and give sharp error estimates in suitable spaces of continuous functions equipped with weighted norms. The paper is structured as follows. Since we use global methods, we need some tools of Approximation Theory in order to characterize the spaces of weighted continuous functions; Section 2 is devoted to this aim. In Section 3 the main results are described. In particular we describe our regularization approach and show the convergence of the Nyström method in Subsection 3.1. In Subsection 3.2 the special case of the symmetric and weakly singular kernel is discussed. Section 4 is devoted to the proofs of the main results. Finally, in Section 5 several numerical tests are given, confirming the theoretical results of Section 3.
2 Spaces of functions

In this section we define the spaces of functions in which we will study the equations under consideration. Fixed the Jacobi weight v^{γ,δ}, γ, δ ≥ 0, we will consider the space of weighted continuous functions C_{v^{γ,δ}}, defined as

C_{v^{γ,δ}} = { f ∈ C((−1, 1)) : lim_{x→±1} (f v^{γ,δ})(x) = 0 }

in the case γ, δ > 0, while in the case γ = 0 (respectively δ = 0), C_{v^{γ,δ}} consists of all functions which are continuous on (−1, 1] (respectively on [−1, 1)) and such that lim_{x→−1} (f v^{γ,δ})(x) = 0 (respectively lim_{x→1} (f v^{γ,δ})(x) = 0). Moreover if γ = δ = 0 we set C_{v^{0,0}} = C([−1, 1]). Equipped with the weighted uniform norm ‖f‖_{C_{v^{γ,δ}}} = ‖f v^{γ,δ}‖_∞ = sup_{|x|≤1} |f(x) v^{γ,δ}(x)|, the space C_{v^{γ,δ}} is a Banach space. In order to introduce some suitable subspaces of C_{v^{γ,δ}}, we recall the definition of the main part of the modulus of smoothness of Ditzian and Totik [7]

Ω^k_φ(f, τ)_{v^{γ,δ}} := sup_{0<h≤τ} ‖ v^{γ,δ} Δ^k_{hφ} f ‖_{I_{hk}}.   (2.1)
If r > 0, then system (3.2) has a unique solution and cond(A_m) = O(log m). Moreover the following error estimate holds

‖ (f* − f*_m) v^{γ,δ} ‖_∞ ≤ C (log m / m^r) ‖g‖_{Z_r(v^{γ,δ})}   (3.7)

where C ≠ C(m, f*). Therefore, from the theoretical point of view, the proposed method is stable and convergent even if the value of the Zygmund parameter r is small, that is, when the kernel and/or the right-hand side have low smoothness or are weakly singular at ±1. Nevertheless, in these cases the convergence estimate (3.7) leads to very poor numerical results.
The following numerical test shows what happens in the case when the right-hand side is singular, for instance at −1.

Example 3.1 Consider the equation

f(y) − (1/7) ∫_{−1}^{1} [ (x² + y² − 1) / (y² + 3y − 10) ] f(x) dx = sin y / √(1 + y).   (3.8)
It has a unique solution in C_{v^{3/4,3/4}}. We solve system (3.2) and construct f*_m. According to estimate (3.7), we expect an order of convergence O(log m / m^{1/2}). The next table shows the values of the weighted approximating polynomial f*_m v^{3/4,3/4} at the point y = 0.6 for increasing values of m. In particular, the digits that are correct with respect to the value obtained for m = 512 are shown. The table also includes the condition number of the matrix of the system for the values of m under consideration. Thus it is clear
m      f*_m v^{3/4,3/4}(0.6)    cond
8      0.3                      1.038270190556079
16     0.3                      1.045268715514449
32     0.3                      1.048331076633055
64     0.320                    1.049763514821298
128    0.320                    1.050536269242601
256    0.3206                   1.050915973341755
that the procedure is stable (and well conditioned) while the speed of convergence is very slow. In this paper we will consider the above-mentioned case, when the right-hand side and/or k_x are singular (or of low smoothness) at the endpoints. We will show that with a suitable regularization we can successfully apply the Nyström method, obtaining a satisfactory order of convergence.

3.1 The numerical method

We consider the following integral equation

f(y) − (μ / G₁(y)) ∫_{−1}^{1} k*(x, y) f(x) v^{α,β}(x) dx = g*(y) / G₂(y)   (3.9)

where g* and k*_x are assumed to be smooth, and G₁ and G₂ are functions having zeros at −1, with sub-exponential growth or decrease, and such that either G₁ ≡ G₂ or

lim_{x→−1} G₂(x)/G₁(x) = 0.
We multiply both sides of (3.9) by G₂. Denoting by F(y) = f(y)G₂(y), we have

F(y) − μ (G₂(y)/G₁(y)) ∫_{−1}^{1} [ k*(x, y) / G₂(x) ] F(x) v^{α,β}(x) dx = g*(y),   (3.10)
i.e. an integral equation in which the kernel is singular in x at −1 and can have a zero in y at −1, but with a smooth right-hand side. In order to regularize the kernel we introduce the one-to-one map

φ_q(t) = 2^{1−q}(1 + t)^q − 1,   q ∈ N.

We note that φ'_q(t) ≥ 0, t ∈ [−1, 1], and that for each Jacobi weight v^{α,β} we have

v^{α,β}(φ_q(t)) = (1 / 2^{(q−1)β}) ξ(t) v^{α,qβ}(t)

where we set

ξ(t) := [ 1 + (1+t)/2 + ... + ((1+t)/2)^{q−2} + ((1+t)/2)^{q−1} ]^α.
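The weight identity above is elementary but easy to get wrong; the following self-contained Python sketch (all function names are ours, not the paper's) checks it numerically for a sample choice of α, β and q.

```python
import numpy as np

def phi(t, q):
    # one-to-one map phi_q(t) = 2^(1-q) (1+t)^q - 1
    return 2.0**(1 - q) * (1 + t)**q - 1

def jacobi_weight(x, a, b):
    # v^{a,b}(x) = (1-x)^a (1+x)^b
    return (1 - x)**a * (1 + x)**b

def xi(t, q, a):
    # xi(t) = ( sum_{j=0}^{q-1} ((1+t)/2)^j )^a
    s = sum(((1 + t) / 2)**j for j in range(q))
    return s**a

a, b, q = 0.5, -0.3, 3
t = np.linspace(-0.999, 0.999, 1001)
lhs = jacobi_weight(phi(t, q), a, b)
rhs = 2.0**((1 - q) * b) * xi(t, q, a) * jacobi_weight(t, a, q * b)
assert np.allclose(lhs, rhs)
```

The check also makes visible why the map helps: φ_q fixes ±1 but flattens the grid near −1, so an algebraic singularity at −1 is weakened by the factor q in the exponent.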
Now we make the changes of variable x = φ_q(t) and y = φ_q(s) in Eq. 3.10 and we get

F(s) − μ ∫_{−1}^{1} h(t, s) F(t) v^{α,β}(t) dt = G(s),   s ∈ [−1, 1]   (3.11)

where

F(s) = F(φ_q(s)) = f(φ_q(s)) G₂(φ_q(s)),   (3.12)

G(s) = g*(φ_q(s)),   (3.13)

h(t, s) = (q / 2^{(β+1)(q−1)}) · [ G₂(φ_q(s)) / G₁(φ_q(s)) ] · [ k*(φ_q(t), φ_q(s)) / G₂(φ_q(t)) ] · (1 + t)^{(β+1)(q−1)} ξ(t).   (3.14)
The transformation φ_q(t) has been used extensively in numerical methods for solving integral equations (see for instance [12, 18, 22]). We remark that for suitable values of q > 1 the new kernel h(t, s) turns out to be smooth.
In order to simplify the description of the numerical procedure and to make the proofs less technical, but without loss of generality, we will assume that

G₁(y) = G₂(y) = [ (1 + y) / log(e/(1 + y)) ]^σ,   0 < σ < β + 1.   (3.15)

We remark that in this case h(t, s) has the following expression

h(t, s) = q 2^{(1−q)(1−σ+β)} k*(φ_q(t), φ_q(s)) log^σ( e / (2^{1−q}(1 + t)^q) ) ξ(t) (1 + t)^{q(β+1−σ)−(β+1)}.   (3.16)

Hence we will consider Eq. 3.11 in C_{v^{γ,δ}} with γ, δ such that

0 ≤ γ < α + 1,   0 ≤ δ < β + 1.   (3.17)
We get the following result about the smoothness classes of the known functions in the regularized Eq. 3.11.

Proposition 3.1 If the known functions in (3.9) satisfy the following smoothness properties

‖g*‖_{Z_r(v^{γ,δ})} < ∞,   (3.18)

sup_{|x|≤1} ‖k*_x‖_{Z_r(v^{γ,δ})} < ∞,   (3.19)

sup_{|y|≤1} v^{γ,δ}(y) ‖k*_y‖_{Z_r} < ∞,   (3.20)

where r > 0, then the known functions of the regularized Eq. 3.11 are in the following smoothness classes:

‖G‖_{Z_r(v^{γ,δ})} < ∞,   (3.21)

sup_{|t|≤1} ‖h_t‖_{Z_r(v^{γ,δ})} < ∞,   (3.22)

sup_{|s|≤1} v^{γ,δ}(s) ‖h_s‖_{Z_η} < ∞,   (3.23)

where η = min{ 2(q(1 − σ + β) − (β + 1)), r }.

We immediately remark that, since we are assuming (3.15), for the functions g = g*/G₂ and k_x = k*_x/G₂ we get, if δ is chosen such that δ > σ,

Ω^k_φ(g, τ)_{v^{γ,δ}} ∼ Ω^k_φ(k_x, τ)_{v^{γ,δ}} ∼ τ^{2(δ−σ)} log τ^{−1},

and hence g, k_x ∈ Z_{2(δ−σ)}(v^{γ,δ}).
Hence the previous proposition shows that the known functions G, h_t of the regularized equation are smoother than the corresponding functions of the original equation. On the other hand, if r is sufficiently large, h_s can also be made as smooth as we wish for suitable values of q > 1. Moreover we point out that the proposition holds true for any value of r > 0. Nevertheless it is clear that our regularization strategy is more efficient when r is sufficiently large. Otherwise it would be necessary to combine our approach with another regularization technique (in order to regularize the known functions at some internal point where, for instance, the derivatives could be singular). In order to approximate the solution of (3.11) we consider the Nyström method. Setting

(AF)(s) = μ ∫_{−1}^{1} h(t, s) F(t) v^{α,β}(t) dt,   (3.24)
Equation 3.11 can be rewritten in operator form as

(I − A)F = G,   (3.25)
where I denotes the identity operator. In order to approximate AF we use the Gaussian quadrature formula related to the weight v^{α,β}. Hence we define the operator

K_m F(s) = μ Σ_{k=1}^{m} λ_k(v^{α,β}) h(x_k, s) [F v^{γ,δ}](x_k) / v^{γ,δ}(x_k)   (3.26)

where x_k, k = 1, ..., m, are the zeros of the m-th orthonormal polynomial p_m(v^{α,β}) w.r.t. the weight v^{α,β} and λ_k(v^{α,β}) are the Christoffel numbers. Then we consider the operator equation

(I − K_m)F_m = G   (3.27)

where F_m is unknown. Multiplying this equation by the weight v^{γ,δ} and collocating at the points x_i, i = 1, ..., m, we have that a_k = F_m(x_k)v^{γ,δ}(x_k), k = 1, ..., m, are the unknowns of the linear system

Σ_{k=1}^{m} [ δ_{i,k} − μ λ_k(v^{α,β}) (v^{γ,δ}(x_i)/v^{γ,δ}(x_k)) h(x_k, x_i) ] a_k = (G v^{γ,δ})(x_i),   i = 1, ..., m.   (3.28)

Therefore the solution [a*_1, ..., a*_m]^T of (3.28) (if it exists) allows us to construct the weighted Nyström interpolant

F_m(s) v^{γ,δ}(s) = μ Σ_{k=1}^{m} λ_k(v^{α,β}) (v^{γ,δ}(s)/v^{γ,δ}(x_k)) h(x_k, s) a*_k + G(s) v^{γ,δ}(s).   (3.29)
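To fix ideas, here is a minimal Python sketch of the scheme (3.26)–(3.29). The kernel h(t,s) = st, the value μ = 0.1 and the Chebyshev weight α = β = −1/2 (with trivial γ = δ = 0) are our own illustrative assumptions, chosen so that the exact solution F(s) = s is known in closed form.

```python
import numpy as np
from scipy.special import roots_jacobi

alpha = beta = -0.5                 # Jacobi weight v^{alpha,beta} (Chebyshev, an assumption)
mu = 0.1
h = lambda t, s: s * t              # toy separable kernel (ours, not from the paper)
# with exact solution F(s) = s:  G(s) = s - mu*s*int_{-1}^{1} t^2 v(t) dt = s*(1 - mu*pi/2)
G = lambda s: s * (1 - mu * np.pi / 2)

m = 10
x, lam = roots_jacobi(m, alpha, beta)    # nodes x_k and Christoffel numbers lambda_k

# linear system (3.28); with gamma = delta = 0 the weight ratios are 1
A = np.eye(m) - mu * lam[None, :] * h(x[None, :], x[:, None])
a = np.linalg.solve(A, G(x))

# Nystrom interpolant (3.29) evaluated at s = 0.3
s = 0.3
Fm = mu * np.sum(lam * h(x, s) * a) + G(s)
assert abs(Fm - s) < 1e-10               # the exact solution is recovered
```

Since the Gaussian rule is exact on polynomials of degree ≤ 2m−1, the interpolant reproduces the polynomial solution up to rounding; for general smooth data one observes the convergence rate predicted by the theory.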
Now we state the theorem on the convergence and stability of this Nyström method. Denote by Ã_m the matrix of the coefficients of system (3.28) and by cond(Ã_m) its condition number in the infinity norm. We get the following result.
Theorem 3.1 Assume that Ker{I − A} = {0} in C_{v^{γ,δ}} with γ and δ as in (3.17) and let F* be the unique solution of (3.11) for a given G ∈ C_{v^{γ,δ}}. Moreover assume that (3.18)–(3.20) hold true. Then, for m sufficiently large, system (3.28) is uniquely solvable and well conditioned too,

cond(Ã_m) ≤ C   (3.30)

where C does not depend on m. Moreover there results

‖ [F* − F_m] v^{γ,δ} ‖_∞ ≤ (C / m^η) ‖F*‖_{Z_η(v^{γ,δ})}   (3.31)

where C ≠ C(m, F*) and η = min{ 2(q(1 − σ + β) − (β + 1)), r }.

Remark 3.1

1. Since the solution of (3.9) can be written as

f(y) = F(φ_q^{−1}(y)) / G₂(y),

by means of F_m we can construct a function approximating f as

f_m(y) = F_m(φ_q^{−1}(y)) / G₂(y).   (3.32)

2. We note that the matrix of the coefficients A_m of system (3.2) is of the type A_m = I − M_m, where I is the identity matrix of order m and

M_m = [ μ λ_k(v^{α,β}) (v^{γ,δ}(x_i)/v^{γ,δ}(x_k)) k(x_k, x_i) ]_{i,k=1,...,m}.

If we replace in the definition of M_m the values k(x_k, x_i) with k(φ_q(x_k), φ_q(x_i)), we get the matrix

M̃_m = [ μ λ_k(v^{α,β}) (v^{γ,δ}(x_i)/v^{γ,δ}(x_k)) k(φ_q(x_k), φ_q(x_i)) ]_{i,k=1,...,m}.

Hence, the matrix Ã_m of system (3.28) can be obtained as

Ã_m = I − M̃_m C,

where C is a diagonal matrix whose entries are given by

C_ii = q 2^{(1−q)(1−σ+β)} ξ(x_i) log^σ( e / (2^{1−q}(1 + x_i)^q) ) (1 + x_i)^{q(β+1−σ)−(β+1)}

for any i = 1, ..., m.
Moreover, if we replace in the right-hand side of system (3.2) the values g(x_i) with g(φ_q(x_i)), we obtain the right-hand side of system (3.28).

3.2 A special case

Until now we have considered the case when the kernel can have some (weak) singularity at the endpoints. Now we take into account the case of a kernel that can also be weakly singular along the diagonal of [−1, 1] × [−1, 1]. Thus we consider equations of the type

f(y) − μ ∫_{−1}^{1} k(x, y) f(x) dx = g(y),   (3.33)
where, for instance, it could be k(x, y) = |x − y|^ρ (1 + x)^λ, ρ, λ > −1. In this special case we have some additional information about the smoothness of the solution, deriving from the known properties of the kernel and from the smoothness of the right-hand side, according to some results by Vainikko and Pedas [20, 23]. Hence assume that the kernel of (3.33) has a singular behaviour along the diagonal of [−1, 1] × [−1, 1] and at the fixed point x = −1, and that the homogeneous equation corresponding to (3.33) has only the trivial solution in C([−1, 1]). If, with fixed ρ, λ > −1 such that λ > −ρ − 1, the known functions g and k satisfy, for all j, ℓ ∈ N₀ with j + ℓ ≤ n, the assumptions

g^{(n)} ∈ C([−1, 1]),   | (∂/∂y)^j (∂/∂y + ∂/∂x)^ℓ k(x, y) | ≤ C |x − y|^{ρ−j} (1 + x)^{λ−ℓ},   (3.34)

with x, y ∈ (−1, 1) and C ≠ C(n, k), then Eq. 3.33 has a unique solution f* ∈ C([−1, 1]) that for any i = 0, 1, ..., n and ∀y ∈ (−1, 1) satisfies

| f*^{(i)}(y) | ≤ C (1 − y)^{1+ρ−i} (1 + y)^{1+ρ+λ−i},   C ≠ C(n).   (3.35)
The previous result says that if the right-hand side is smooth then the solution is smooth inside the interval, while it can have a singular behaviour at the endpoints. This singularity depends on all the singularities of the kernel (the diagonal ones as well as the boundary ones). We want to apply this result to our case. For the sake of simplicity we assume that α = β = 0 in Eq. 3.9, i.e. we consider the equation

f(y) − (μ / G₁(y)) ∫_{−1}^{1} k*(|x − y|) f(x) dx = g*(y) / G₂(y)   (3.36)

with G₁ and G₂ as in (3.15) and where k* is a symmetric kernel. Moreover we consider (3.36) in C_v with v := v^{γ,γ}, 1/4 ≤ γ < 3/4.
Multiplying by G₂ and setting F(y) = f(y)G₂(y), we get the equation

F(y) − μ ∫_{−1}^{1} k*(|x − y|) (1 + x)^{−σ} log^σ(e/(1 + x)) F(x) dx = g*(y).   (3.37)

Therefore, using (3.35) and (2.4), it is possible to deduce the following result.

Theorem 3.2 If g* and the kernel of (3.37) satisfy (3.34) with n ∈ N, λ = −σ, σ < 1 + ρ, then the solution F* of (3.37) belongs to Z_{2(1+ρ−σ)}.

Now we regularize the equation with the same regularization technique of the previous section. For the sake of simplicity we regularize the solution only at y = −1, but it is possible to apply a similar strategy also at y = 1. Hence, setting x = φ_q(t) and y = φ_q(s), we obtain

F(s) − μ ∫_{−1}^{1} h(t, s) F(t) dt = G(s)   (3.38)

where F(s) = F(φ_q(s)), G(s) = g*(φ_q(s)) and

h(t, s) = k*( |φ_q(t) − φ_q(s)| ) (1 + φ_q(t))^{−σ} log^σ( e/(1 + φ_q(t)) ) φ'_q(t).   (3.39)
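The substitution behind (3.38) can be sanity-checked numerically. In the sketch below the smooth stand-in k*(u) = e^{−u} and the simplified endpoint weight w(x) = (1 + x)^{−σ} (both our assumptions, replacing the actual weakly singular data) are used; the point is only that the change of variable x = φ_q(t) leaves the value of the integral unchanged while smoothing the endpoint factor.

```python
import numpy as np
from scipy.integrate import quad

q, sigma = 3, 0.4
kstar = lambda u: np.exp(-u)       # smooth stand-in for the weakly singular kernel (assumption)
w = lambda x: (1 + x)**(-sigma)    # simplified endpoint weight (assumption)
F = lambda x: np.cos(x)            # arbitrary smooth test function

phi = lambda t: 2.0**(1 - q) * (1 + t)**q - 1
dphi = lambda t: q * 2.0**(1 - q) * (1 + t)**(q - 1)   # phi_q'(t)

s = 0.2
y = phi(s)

# original variable: the weight w is singular at x = -1
lhs, _ = quad(lambda x: kstar(abs(x - y)) * w(x) * F(x), -1, 1)

# transformed variable: w(phi(t)) * phi'(t) ~ (1+t)^{q-1-q*sigma} is now smooth at t = -1
rhs, _ = quad(lambda t: kstar(abs(phi(t) - phi(s))) * w(phi(t)) * dphi(t) * F(phi(t)), -1, 1)

assert abs(lhs - rhs) < 1e-6
```

With q = 3 and σ = 0.4 the transformed endpoint factor has exponent q − 1 − qσ = 0.8 > 0, so the new integrand is continuous at t = −1.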
Using the properties of the kernel of Eq. 3.37 it is possible to prove that (3.38) has a unique solution F* in C([−1, 1]) (see for instance [19, Lemma 4.1]) satisfying, for all i = 0, 1, ..., n,

| F*^{(i)}(s) | ≤ C (1 − s)^{1+ρ−i} (1 + s)^{q(1+ρ−σ)−i}   (3.40)

where C ≠ C(n). In order to describe the numerical method we rewrite the kernel h(t, s) as

h(t, s) = M(t, s) k*(|t − s|)   (3.41)

where

M(t, s) = q 2^{(1−q)(1−σ)} (1 + t)^{q−1−qσ} log^σ( e 2^{q−1} / (1 + t)^q ) · k*( 2^{1−q} |(1 + t)^q − (1 + s)^q| ) / k*(|t − s|).
Now we apply a Nyström-type method to Eq. 3.38. Thus let L_m(F, t) ≡ L_m(v^{0,0}, F, t) be the Lagrange polynomial interpolating a continuous function F at the zeros {x_k}_{k=1,...,m} of the Legendre orthogonal polynomial p_m ≡ p_m(v^{0,0}), i.e. L_m(F, t) = Σ_{k=1}^{m} l_k(t) F(x_k), where l_k(t), k = 1, ..., m, are the fundamental Lagrange polynomials. In the integral in (3.38) we approximate F(t)M(t, s) by means of L_m(F M(·, s), t). Hence we define the following operator

K̃_m F(s) = μ Σ_{k=1}^{m} M(x_k, s) F(x_k) ∫_{−1}^{1} l_k(t) k*(|t − s|) dt

= μ Σ_{k=1}^{m} (λ_k / v(x_k)) M(x_k, s) S_m( k*(|· − s|), x_k ) [F v](x_k)   (3.42)
where λ_k are the Christoffel numbers related to the Legendre weight and

S_m( k*(|· − s|), t ) = Σ_{ν=0}^{m−1} c_ν(s) p_ν(t),   c_ν(s) = ∫_{−1}^{1} p_ν(ξ) k*(|ξ − s|) dξ,

is the m-th Fourier sum of k*(|t − s|) w.r.t. the Legendre orthonormal system. Hence we consider the operator equation

(I − K̃_m) F̃_m = G   (3.43)

where F̃_m is unknown. By multiplying (3.43) by v and collocating at the knots x_i, i = 1, ..., m, we have the following linear system

Σ_{k=1}^{m} [ δ_{i,k} − μ λ_k (v(x_i)/v(x_k)) M(x_k, x_i) S_m( k*(|· − x_i|), x_k ) ] a_k = (Gv)(x_i)   (3.44)

with i = 1, ..., m and where a_k = F̃_m(x_k)v(x_k) are the unknowns. If the matrix of the coefficients B_m of the above system is invertible, its solution [a*_1, ..., a*_m]^T allows us to construct the weighted Nyström interpolant

F̃_m(s) v(s) = μ Σ_{k=1}^{m} λ_k (v(s)/v(x_k)) M(x_k, s) S_m( k*(|· − s|), x_k ) a*_k + G(s) v(s).   (3.45)
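The second equality in (3.42) rests on the identity ∫ l_k(t) κ(t) dt = λ_k S_m(κ, x_k), a consequence of l_k(t) = λ_k Σ_{ν=0}^{m−1} p_ν(x_k) p_ν(t) at Gaussian nodes. The following sketch verifies it in Python for a smooth stand-in κ (our choice; a weakly singular κ only changes how the Fourier coefficients c_ν must be computed):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

m = 8
x, lam = leggauss(m)                      # Legendre nodes and Christoffel numbers
kappa = lambda t: np.cos(2 * t - 0.3)     # smooth stand-in for k*(|t - s|) (assumption)

def p(nu, t):
    # orthonormal Legendre polynomial p_nu = sqrt((2 nu + 1)/2) P_nu
    return np.sqrt((2 * nu + 1) / 2) * Legendre.basis(nu)(t)

# high-order rule used as the "exact" integral
X, W = leggauss(60)

k = 2                                     # any node index
# fundamental Lagrange polynomial l_k(t) = prod_{j != k} (t - x_j)/(x_k - x_j)
lk = lambda t: np.prod([(t - x[j]) / (x[k] - x[j]) for j in range(m) if j != k], axis=0)

lhs = np.sum(W * lk(X) * kappa(X))                            # int l_k * kappa
c = [np.sum(W * p(nu, X) * kappa(X)) for nu in range(m)]      # Fourier-Legendre coefficients
rhs = lam[k] * sum(c[nu] * p(nu, x[k]) for nu in range(m))    # lam_k * S_m(kappa, x_k)
assert abs(lhs - rhs) < 1e-10
```

The identity is exact for any κ ∈ L²: the Fourier modes of κ beyond degree m − 1 are orthogonal to l_k, while the lower modes are integrated exactly by the m-point Gaussian rule.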
The convergence and stability of the proposed method is stated by the following theorem. Let A be the operator defined in (3.24) with α = β = 0 and with h(t, s) defined in (3.41).

Theorem 3.3 Under the same assumptions of Theorem 3.2 with ρ > −3/4, if Ker(I − A) = {0} in C_v with 1/4 ≤ γ < min{3/4, ρ + 1} and F* denotes the solution of (3.38), then, for m sufficiently large, system (3.44) has a unique solution and

cond(B_m) ≤ C   (3.46)

where C does not depend on m. Moreover, for any integer q > (1 + ρ)/(1 + ρ − σ), there results

‖ [F* − F̃_m] v ‖_∞ = O( 1 / m^{2(1+ρ+γ)} ),   (3.47)

where the constant in "O" is independent of m. Finally note that the modified moments

c_ν(s) = ∫_{−1}^{1} k*(|t − s|) p_ν(t) dt,

for the most frequently used kernels, are given by recurrence relations available in the literature (see for instance [14]).
4 Proofs of the main results

Before proving the main results we note that, for any f₁, f₂ ∈ C_w, where w is a Jacobi weight with nonnegative exponents, there holds

E_{2m}(f₁ f₂)_w ≤ ‖f₁ w‖_∞ E_m(f₂) + 2 ‖f₂‖_∞ E_m(f₁)_w.   (4.1)
Proof of Proposition 3.1 Concerning the smoothness of G and h_t, we note that these functions are essentially given by the composition of functions in Z_r(v^{γ,δ}) with a polynomial of degree q. Therefore the smoothness class is still Z_r(v^{γ,δ}). We have only to check the class of h_s. We recall that

h_s(t) = C(q) k*(φ_q(t), φ_q(s)) log^σ( e / (2^{1−q}(1 + t)^q) ) ξ(t) (1 + t)^{q(β+1−σ)−(β+1)} =: N(t, s) (1 + t)^{q(β+1−σ)−(β+1)}.

In order to estimate Ω^k_φ(h_s, τ) using (2.3), we need to estimate E_m(h_s). By (4.1) with w = 1 we get

E_{2m}(h_s) ≤ ‖N_s‖_∞ E_m( (1 + ·)^{q(β+1−σ)−(β+1)} ) + 2 ‖ (1 + ·)^{q(β+1−σ)−(β+1)} ‖_∞ E_m(N_s).

Therefore, under the assumption (3.20) and using (2.2), we get

E_{2m}(h_s) ≤ C [ 1 / m^{2(q(β+1−σ)−(β+1))} + 1 / m^r ]

where C is independent of m. Hence, setting η = min{ 2(q(β+1−σ)−(β+1)), r }, by (2.3) we have

Ω^k_φ(h_s, τ) ≤ C τ^k Σ_{i=0}^{[1/τ]} (i + 1)^{k−1−η} ≤ C τ^η

and the Proposition follows. ∎
Before proving Theorem 3.1, we recall that if we apply the Gaussian quadrature formula w.r.t. the weight v^{α,β} to a function f ∈ C_{v^{γ,δ}}, with γ < α + 1 and δ < β + 1, then

|e_m(f)| = | ∫_{−1}^{1} f(t) v^{α,β}(t) dt − Σ_{k=1}^{m} λ_k(v^{α,β}) f(x_k) | ≤ C E_{2m−1}(f)_{v^{γ,δ}}   (4.2)

where C ≠ C(m, f).
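Estimate (4.2) predicts that the Gaussian rule converges at the rate of the best polynomial approximation even for integrands with algebraic endpoint singularities. A quick Python illustration with the Legendre weight and the test function f(t) = (1 + t)^{0.6} (our choice, not from the paper):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

f = lambda t: (1 + t)**0.6          # derivative singular at t = -1 (our test function)
exact = 2.0**1.6 / 1.6              # int_{-1}^{1} (1+t)^{0.6} dt

def em(m):
    x, lam = leggauss(m)            # Gauss-Legendre nodes and weights
    return abs(np.sum(lam * f(x)) - exact)

errs = [em(m) for m in (10, 20, 40, 80)]
# the error decreases steadily with m, as the bound via E_{2m-1}(f) predicts
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-3
```

The decay is algebraic rather than exponential, which is exactly the phenomenon the regularization of Section 3 is designed to overcome.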
Proof of Theorem 3.1 For the sake of simplicity we will omit the subscript C_{v^{γ,δ}} → C_{v^{γ,δ}} in the operator norms, since all the operators involved in the discussion are maps between these spaces. In order to prove the stability and convergence of the Nyström method we will prove that:

1. ‖[AF − K_m F] v^{γ,δ}‖_∞ tends to zero for any F ∈ C_{v^{γ,δ}};
2. ‖(A − K_m)K_m‖ tends to zero.

In fact by Step 1, by virtue of the principle of uniform boundedness, we can deduce

sup_m ‖K_m‖ < +∞.

Hence (see for instance [1]), under these assumptions and for m sufficiently large, I − K_m is invertible in C_{v^{γ,δ}} and is uniformly bounded, since there results

‖(I − K_m)^{−1}‖ ≤ (1 + ‖(I − A)^{−1}‖ ‖K_m‖) / (1 − ‖(I − A)^{−1}‖ ‖(A − K_m)K_m‖)   (4.3)

and in addition

‖[F* − F_m] v^{γ,δ}‖_∞ ∼ ‖[AF* − K_m F*] v^{γ,δ}‖_∞,   (4.4)

where the constants in "∼" are independent of m and F*. Step 1 can be proved using (4.1) and (4.2). Indeed

‖[AF − K_m F] v^{γ,δ}‖_∞ ≤ C sup_{|s|≤1} v^{γ,δ}(s) E_{2m−1}(h_s F)_{v^{γ,δ}}
≤ C [ 2 sup_{|s|≤1} v^{γ,δ}(s) ‖h_s‖_∞ E_{m−1}(F)_{v^{γ,δ}} + ‖F v^{γ,δ}‖_∞ sup_{|s|≤1} v^{γ,δ}(s) E_{m−1}(h_s) ].   (4.5)

Now we note that

Ω_φ(K_m F, τ)_{v^{γ,δ}} = sup_{h≤τ} ‖ v^{γ,δ} Δ_{hφ} K_m F ‖_{I_h} ≤ C ∫_{−1}^{1} v^{α−γ,β−δ}(t) dt ‖F v^{γ,δ}‖_∞ sup_{|t|≤1} Ω_φ(h_t, τ)_{v^{γ,δ}},

with C ≠ C(τ, F). Hence, since we are assuming (3.22), by (2.2) there results, for any fixed m,

E_n(K_m F)_{v^{γ,δ}} ≤ (C / n^r) ‖F v^{γ,δ}‖_∞,   C ≠ C(n, m, F).   (4.6)
By applying (4.5) to the function K_m F and using (4.6) with n = m − 1 we get

‖[A − K_m] K_m F v^{γ,δ}‖_∞ ≤ (C / m^η) ‖F v^{γ,δ}‖_∞,

since we are assuming (3.23), and Step 2 follows. About the convergence estimate (3.31), we note that by (4.4) and (4.5) we deduce

‖[F* − F_m] v^{γ,δ}‖_∞ ≤ (C / m^η) ‖F*‖_{Z_η(v^{γ,δ})} sup_{|s|≤1} v^{γ,δ}(s) ‖h_s‖_{Z_η},

since, by the assumptions on G and h(s, t), F* is at least in Z_η(v^{γ,δ}). Hence (3.31) is proved. Finally, in order to prove (3.30), it is sufficient to prove that

cond(Ã_m) ≤ cond(I − K_m) = ‖I − K_m‖ ‖(I − K_m)^{−1}‖.
To this end we can use the same arguments of [1, p. 113], only replacing the usual infinity norm with the weighted uniform norm of the space C_{v^{γ,δ}}. ∎

Proof of Theorem 3.2 By (3.35) it immediately follows that

‖ F*^{(n)} φ^n ‖_{I_{hn}} ≤ C h^{2(1+ρ−σ)−n}

where I_{hn} = [−1 + 4h²n², 1 − 4h²n²] and C is independent of F* and h. Hence by (2.4) we get

Ω^n_φ(F*, τ) ≤ C τ^{2(1+ρ−σ)}

and the Theorem follows. ∎
In order to prove Theorem 3.3 we need some auxiliary tools. We recall that, if u is a Jacobi weight, a function f belongs to L^p_u iff ‖f u‖_p = ( ∫_{−1}^{1} |f u|^p )^{1/p} < ∞. The following result can be found in [16].

Theorem B (Nevai) Let w = v^{α,β}, u = v^{γ,δ}, α, β > −1, γ, δ ≥ 0, 1 ≤ p < ∞, and let k(x, y) be a kernel satisfying the condition

sup_y ‖ (k(·, y)/u) ( 1 + log⁺ (k(·, y)/u) ) ‖_p < ∞,   1 ≤ p < ∞.   (4.7)

Then, ∀ f ∈ C_u, we have

sup_{|y|≤1} ‖ L_m(w, f) k(·, y) ‖_p ≤ C ‖f u‖_∞,   (4.8)

or, equivalently,

sup_{|y|≤1} ( ∫_{−1}^{1} | k(x, y) [ f(x) − L_m(w, f, x) ] |^p dx )^{1/p} ≤ C E_{m−1}(f)_u,   (4.9)

where C ≠ C(m, f), if and only if

sup_y ∫_{−1}^{1} | k(x, y) / √(w(x) φ(x)) |^p dx < ∞   and   √(wφ)/u ∈ L¹.   (4.10)
We recall now a result about the estimate of the Fourier operator in the weighted L¹ space. Thus, if w is a Jacobi weight, the Fourier operator is defined as S_m(w, f) = Σ_{k=0}^{m} c_k(f) p_k(w), where c_k(f) = ∫_{−1}^{1} f(t) p_k(w, t) dt are the Fourier coefficients of f w.r.t. the orthonormal system {p_m(w)}_m.

Theorem C Let w and u be two Jacobi weights. Then

u/√(wφ) ∈ L¹,   w/u, (1/u)√(w/φ) ∈ L^∞   (4.11)

are necessary and sufficient conditions in order that for any f ∈ L¹_u it results

‖S_m(w, f) u‖₁ ≤ C log m ‖f u‖₁,   C ≠ C(m, f),   (4.12)

while

‖S_m(w, f) u‖₁ ≤ C ‖ f u ( 1 + log⁺ |f u| + log(e/φ²) ) ‖₁,   C ≠ C(m, f),   (4.13)
for all the functions f such that the right-hand side exists. Estimate (4.12) can be found in [11], while (4.13) can be deduced from the results in [17].

Proof of Theorem 3.3 We want to proceed as in the proof of Theorem 3.1; hence we have to prove Steps 1–2 for the operator sequence {K̃_m}. By the assumptions on k*(|t − s|), ρ and γ, applying (4.9) with u = v, w ≡ 1, p = 1, we get

‖[A − K̃_m] F v‖_∞ ≤ C sup_{|s|≤1} v(s) E_{m−1}(M_s F)_v
≤ C sup_{|s|≤1} v(s) E_{[(m−1)/2]}(M_s) ‖F v‖_∞ + C sup_{|s|≤1} v(s) ‖M_s‖_∞ E_{[(m−1)/2]}(F)_v   (4.14)

having used also (4.1), and where C is independent of m and [a] denotes the integer part of a ∈ R⁺. Therefore Step 1 is proved, F being in C_v and M_s in C([−1, 1]).
Finally we have to prove Step 2. By (4.14) applied to K̃_m F we get

‖[A − K̃_m] K̃_m F v‖_∞ ≤ C sup_{|s|≤1} v(s) E_{[(m−1)/2]}(M_s) ‖K̃_m F v‖_∞ + C sup_{|s|≤1} v(s) ‖M_s‖_∞ E_{[(m−1)/2]}(K̃_m F)_v =: A₁ + A₂.   (4.15)

About A₁: since q > (1 + ρ)/(1 + ρ − σ), M_s belongs to Z_{2(q−1−qσ+(q−1)ρ)} uniformly with respect to s and, since sup_m ‖K̃_m‖ < ∞, it follows

A₁ ≤ C ‖F v‖_∞ / m^{2(q−1−qσ+(q−1)ρ)}.

Hence the estimate of A₂ is left. In order to estimate E_{[(m−1)/2]}(K̃_m F)_v for all m, we can estimate Ω_φ(K̃_m F, τ)_v and use (2.3). Thus assume h ≤ τ < 1/m. We recall that

Δ_{hφ(s)}(fg)(s) = f( s + (h/2)φ(s) ) Δ_{hφ(s)} g(s) + Δ_{hφ(s)} f(s) g( s − (h/2)φ(s) ),

for all continuous functions f, g. Hence there results

v(s) |Δ_{hφ(s)} K̃_m F(s)| ≤ C ‖F v‖_∞ v(s) [ Σ_{k=1}^{m} (λ_k/v(x_k)) |Δ_{hφ(s)} M_{x_k}(s)| | S_m( k*(|· − (s + (h/2)φ(s))|), x_k ) | + Σ_{k=1}^{m} (λ_k/v(x_k)) | M_{x_k}( s − (h/2)φ(s) ) | | S_m( Δ_{hφ(s)} k*(|· − s|), x_k ) | ].

In order to evaluate the first sum at the right-hand side, we first note that M_{x_k}(s) is uniformly bounded with respect to s, ∀k = 1, ..., m, and in addition max_k |Δ_{hφ(s)} M_{x_k}(s)| behaves, at least, like h for s ∈ I_h = [−1 + h², 1 − h²]. Then we use the Marcinkiewicz inequality [13] and (4.13) (which holds true under the assumptions on γ and ρ). In the second sum we also apply the Marcinkiewicz inequality and then (4.12). Hence we finally get

v(s) |Δ_{hφ(s)} K̃_m F(s)| ≤ C ‖F v‖_∞ v(s) [ h + log(1/h) ∫_{−1}^{1} |Δ_{hφ(s)} k*(|t − s|)| v^{−1}(t) dt ].   (4.16)

Thus the evaluation of the integral at the right-hand side is left. This integral (or similar ones) has been evaluated several times and in different contexts (see for instance [14]). For the convenience of the reader we give here a sketch of the proof.
The interesting case is when k* satisfies (3.34) with ρ < 0; indeed in the case ρ > 0 it is possible to proceed by using the Lagrange theorem. Hence we assume ρ < 0 and, recalling that we have to take |s| ≤ 1 − 4h², we split the integral as follows:

∫_{−1}^{1} |Δ_{hφ(s)} k*(|t − s|)| v^{−1}(t) dt = ( ∫_{−1}^{s−2hφ(s)} + ∫_{s−2hφ(s)}^{s} + ∫_{s}^{s+2hφ(s)} + ∫_{s+2hφ(s)}^{1} ) |Δ_{hφ(s)} k*(|t − s|)| v^{−1}(t) dt =: Σ_{i=1}^{4} J_i.

In evaluating J₂ and J₃ we can use (3.34) and bound k*(|t − ·|) by means of |t − ·|^ρ. Hence we get

J₂ + J₃ ≤ C v^{−1}(s) h^{1+ρ}.

In order to evaluate J₁ and J₄ we remark that we are assuming k* in C^{n−1}, n > 1, far from 0, and hence we can use the Lagrange theorem and get

J₁ + J₄ ≤ C v^{−1}(s) h.

Therefore, since ρ < 0, we finally have that

∫_{−1}^{1} |Δ_{hφ(s)} k*(|t − s|)| v^{−1}(t) dt ≤ C v^{−1}(s) h^{1+ρ},

where C is independent of h. Hence, using this estimate in (4.16), taking the sup over |s| ≤ 1 − 4h² and then the sup over h ≤ τ, we finally have

Ω_φ(K̃_m F, τ)_v ≤ C ‖F v‖_∞ [ τ + log(1/τ) τ^{1+ρ} ],

where C is a positive constant independent of F and τ. Therefore, by using (2.2), we have

E_{[(m−1)/2]}(K̃_m F)_v ≤ C (log m / m^{1+ρ}) ‖F v‖_∞

where C is independent of m. Thus we found that A₂ ≤ C (log m / m^{1+ρ}) ‖F v‖_∞, which together with the estimate derived for A₁ leads to ‖(A − K̃_m)K̃_m‖ → 0, and hence Step 2 is proved.
About the conditioning of the matrix B_m, we can follow the same arguments used in the proof of Theorem 3.1. Finally we prove estimate (3.47). Working as in the proof of Theorem 3.2 and using (3.40), we get that F* ∈ Z_{2(1+ρ+γ)}. Hence by (4.14) we get

‖[F* − F̃_m] v‖_∞ ∼ ‖[A − K̃_m] F* v‖_∞ ≤ C [ 1/m^{2(q−1−qσ+(q−1)ρ)} + 1/m^{2(1+ρ+γ)} ]

where C is a positive constant independent of m. Since q can be chosen arbitrarily large, the dominating term is the second one, and the Theorem is completely proved. ∎
5 Numerical examples

In this section we show by some examples that our theoretical results are confirmed by the numerical tests. We regard as exact the approximate solutions obtained for m = 600, and in all the tables we show only the digits which are correct with respect to them. All the computations were performed in 16-digit arithmetic. In the first three examples we compute the Nyström interpolant F_m given by (3.29) and then we construct the function f_m approximating the solution of (3.9) according to (3.32). In the tables we report the solution at a point y ∈ [−1, 1] arbitrarily chosen; if we evaluate the solution at any other point y, we obtain the same accuracy, even if y is close to the singularity. In the last example we work similarly, computing F̃_m as in (3.45) and then constructing f_m = F̃_m/G₂. In every table, cond denotes the condition number in the infinity norm of the matrix of the system used.

Example 5.1 We consider Eq. 3.8 shown in Section 3.1. Multiplying by √(1 + y) and applying the transformation, Eq. 3.8 becomes

F(s) − (1/7) ∫_{−1}^{1} h(t, s) F(t) dt = G(s)

where G(s) = sin( 2^{1−q}(1 + s)^q − 1 ) and

h(t, s) = q 2^{1−q} k( φ_q(t), φ_q(s) ) (1 + s)^{q/2} (1 + t)^{q/2−1}.

With α = β = 0 we choose δ = γ = 3/4.
Fig. 1 f₅₁₂(y) v^{3/4,3/4}(y)
We note that in this particular case, when q is an even integer, both the right-hand side and the kernel are analytic functions; hence in these cases the convergence is very fast. Take for instance q = 2.

m     F_m v^{3/4,3/4}( φ_q^{−1}(0.6) )    f_m v^{3/4,3/4}(0.6)
8     0.27313899                          0.32069404
16    0.27313899230716                    0.32069404501206
32    0.2731389923071604                  0.3206940450120637
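For the reader who wants to reproduce Example 5.1, here is a hedged Python sketch of the regularized solve with q = 2. It follows our reading of the example (namely G₁ = G₂ = √(1 + y); all variable names are ours) and only checks internal consistency: two discretization levels must agree.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

q = 2
k = lambda x, y: (x**2 + y**2 - 1) / (y**2 + 3*y - 10)    # kernel of Eq. (3.8)
phi = lambda t: 2.0**(1 - q) * (1 + t)**q - 1
G = lambda s: np.sin(phi(s))                               # regularized right-hand side
h = lambda t, s: q * 2.0**(1 - q) * k(phi(t), phi(s)) * (1 + s)**(q/2) * (1 + t)**(q/2 - 1)

def nystrom(m, s):
    # assemble and solve the collocation system, then evaluate the Nystrom interpolant at s
    x, lam = leggauss(m)
    A = np.eye(m) - (1.0/7) * lam[None, :] * h(x[None, :], x[:, None])
    a = np.linalg.solve(A, G(x))
    return (1.0/7) * np.sum(lam * h(x, s) * a) + G(s)

v24, v48 = nystrom(24, 0.0), nystrom(48, 0.0)
# for even q the regularized data are analytic, so the two levels agree closely
assert abs(v24 - v48) < 1e-8
```

The operator norm of the integral term is well below 1 here, so the discrete systems are uniquely solvable and well conditioned, in agreement with Theorem 3.1.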
The graph of the weighted approximating polynomial f₅₁₂(y) v^{3/4,3/4}(y) is given in Fig. 1.

Example 5.2 We consider

f(y) − (1/20) ∫_{−1}^{1} log( 2 + √(1 + x) + y² ) f(x) dx/∛(1 − x²) = e^y √(y² + 2y + 5)   (5.1)

in C_{v^{1/2,1/2}}. The right-hand side is an analytic function, while the kernel has its first derivative singular in x at −1. Using system (3.2), we can construct the polynomial sequence f*_m v^{1/2,1/2} that converges in C_{v^{1/2,1/2}} to the exact solution with order O(log m / m²), as shown by the following numerical results.

m     f*_m v^{1/2,1/2}(−0.8)    cond
8     2.0368                    1.560262818365483
16    2.0368                    1.657121815336339
32    2.03680                   1.727343702367332
64    2.036805                  1.779838412013231
128   2.036805                  1.819921911898767
256   2.036805                  1.850976265817392
512   2.0368050                 1.875267822920051
Since both the right-hand side and the kernel, with respect to the variable y, are not singular, we apply only the regularizing transformation. In this way Eq. 5.1 is equivalent to

F(s) − (1/20) ∫_{−1}^{1} h(t, s) F(t) dt/∛(1 − t²) = g(φ_q(s)),   (5.2)

where

h(t, s) = q 2^{2(1−q)/3} k( φ_q(t), φ_q(s) ) ξ(t) (1 + t)^{2(q−1)/3}.
Note that if q = 4 both the kernel and the right-hand side of (5.2) are analytic functions, and the convergence is very fast, as the following results show.

m     F_m v^{1/2,1/2}( φ_q^{−1}(−0.8) )    f_m v^{1/2,1/2}(−0.8)
8     3.3681                               2.036
16    3.3681853798                         2.03680508530
32    3.368185379900                       2.0368050853065
64    3.368185379900416                    2.036805085306559
The graph of the weighted approximating polynomial f₅₁₂(y) v^{1/2,1/2}(y) is given in Fig. 2.

Example 5.3 We consider

f(y) − (1/8) ∫_{−1}^{1} |sin(x + y)|^{7/2} f(x) (1 + x)^{1/3} dx = e^{|y − 1/2|^{9/4}} / (1 + y)^{1/4}

in C_{v^{3/5,4/5}}. By multiplying by (1 + y)^{1/4} and then regularizing, we get a new equation whose right-hand side is in Z_{9/4}(v^{3/5,4/5}).
Fig. 2 f₅₁₂(y) v^{1/2,1/2}(y)
If q = 2, according to (3.31) the order of convergence is O(1/m^{5/3}). The following table shows the numerical results at the fixed point φ_q^{−1}(0.7):

m     F_m v^{3/5,4/5}( φ_q^{−1}(0.7) )    f_m v^{3/5,4/5}(0.7)    cond
8     0.775                               0.9423                  1.327651498424709
16    0.77587                             0.94230                 1.367660373139216
32    0.775875                            0.942308                1.389921833461915
64    0.77587542                          0.9423086               1.403661309128700
128   0.775875427                         0.9423086               1.411720211895127
256   0.7758754274                        0.942308679             1.416345027778765
512   0.77587542745                       0.9423086797            1.419003818239734
Moreover, if we evaluate $F_m v^{3/5,4/5}$ at 20 equispaced points $\{y_i\}_{i=1}^{20}$ of the interval $[-1,1]$, we find that the discrete infinity norm $\max_i |F_{512}v^{3/5,4/5}(y_i) - F_{256}v^{3/5,4/5}(y_i)|$ is of the order $4.65\cdot 10^{-10}$. For values of the parameter $q \geq 3$ the order of convergence is given by the smoothness of the right-hand side (in other words, it is the case when $\eta = r$ in (3.31)). Hence for all $q \geq 3$ the order of convergence is $\mathcal{O}\left(m^{-9/4}\right)$. The following table concerns the case $q = 3$:

m     $F_m v^{3/5,4/5}(\varphi_q^{-1}(0.7))$   $f_m v^{3/5,4/5}(0.7)$   cond
8     0.6266          0.942          1.404097379903444
16    0.6266676       0.942308       1.472600778504364
32    0.6266676       0.942308       1.507252712580243
64    0.6266676       0.9423086      1.527665943129061
128   0.626667655     0.942308679    1.540186051621951
256   0.626667655     0.942308679    1.547390002134695
512   0.6266676557    0.9423086797   1.551491815298267
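Convergence rates like those quoted above can be checked empirically from such tables: if the error at $m$ behaves like $C m^{-p}$, then the base-2 logarithm of the ratio of errors at $m$ and $2m$ tends to $p$. A small helper (illustrative only, not from the paper), exercised on a synthetic error sequence decaying at the rate $m^{-9/4}$:

```python
import math

def eoc(errors):
    """Empirical order of convergence from errors measured at m, 2m, 4m, ...:
    if e_m ~ C * m**(-p), then log2(e_m / e_{2m}) -> p."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# Synthetic errors decaying like m**(-9/4), the rate quoted for q >= 3.
errors = [m ** -2.25 for m in (8, 16, 32, 64, 128)]
print(eoc(errors))  # each entry is approximately 2.25
```

In practice the exact solution is unknown, so one applies the same idea to the differences between successive approximations, exactly as done above with $\max_i |F_{512}v^{3/5,4/5}(y_i) - F_{256}v^{3/5,4/5}(y_i)|$.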
The graph of the weighted approximating polynomial $f_{512}(y)v^{3/5,4/5}(y)$ is given in Fig. 3.

Fig. 3 $f_{512}(y)v^{3/5,4/5}(y)$
If we compute also in this case $\max_i |F_{512}v^{3/5,4/5}(y_i) - F_{256}v^{3/5,4/5}(y_i)|$, we get an error of the same order as in the case $q = 2$, namely about $8.27\cdot 10^{-10}$.

Example 5.4 Consider the equation

$$f(y) - \frac{1}{5}\int_{-1}^{1} |x-y|^{-2/5}\, f(x)\,dx = \frac{\cos\left(y^2+1\right)}{\sqrt[3]{1+y}}$$
in the space $C_{v^{1/2,1/2}}$. If we apply to this equation the Nyström method based on the product quadrature formula, we get the following numerical results ($f_m^*$ denotes the Nyström interpolant):

m    $f_m^* v^{1/2,1/2}(0.25)$   $f_m^* v^{1/2,1/2}(0.75)$
8    0.8      0.2
16   0.87     0.21
32   0.87     0.21
64   0.87     0.217
128  0.876    0.217

Now, by multiplying by $\sqrt[3]{1+y}$ and regularizing, the transformed equation turns out to be
$$F(s) - \frac{1}{5}\int_{-1}^{1} h(t,s)\,F(t)\,dt = G(s)$$

where $G(s) = \cos\left(\left(2^{1-q}(1+s)^q - 1\right)^2 + 1\right)$ and $h(t,s) = M(t,s)\,|t-s|^{-2/5}$, with
$$M(t,s) = q\,2^{1-q}\,(1+t)^{\frac{2}{3}q-1}\left(\frac{\left|2^{1-q}\left[(1+t)^q-(1+s)^q\right]\right|}{|t-s|}\right)^{-2/5}.$$

Fig. 4 $f_{512}(y)v^{1/2,1/2}(y)$
Therefore if we choose $q = 3$, the assumptions of Theorem 3.3 are fulfilled and the convergence estimate of our Nyström method, according to (3.47), is $\mathcal{O}\left(m^{-22/10}\right)$. The following table shows the values of the approximation of the solution of the original equation at the points $0.25$ and $0.75$. The graph of $f_{512}\,v^{1/2,1/2}$ is in Fig. 4.

m     $f_m v^{1/2,1/2}(0.25)$   $f_m v^{1/2,1/2}(0.75)$   cond
8     0.876          0.217          2.808483446192598
16    0.876          0.21724        2.829837341054557
32    0.87650        0.217247       2.839989106188874
64    0.87650        0.217247       2.845158406481834
128   0.8765020      0.217247       2.847785495858179
256   0.876502053    0.217247602    2.849110635199911
512   0.8765020536   0.2172476027   2.849776037741677
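The benefit of the regularizing change of variable can already be seen at the quadrature level. As an illustration (not taken from the paper, and assuming again a map of the form $\varphi_q(t)=2^{1-q}(1+t)^q-1$), the sketch below integrates the endpoint-singular function $(1+x)^{-2/5}$, whose exact integral over $[-1,1]$ is $\frac{5}{3}2^{3/5}$, with a plain Gauss–Legendre rule both directly and after the substitution $x=\varphi_5(t)$; for $q = 5$ the transformed integrand is a polynomial of degree 2 and the rule is exact up to rounding.

```python
import numpy as np

def phi(q, t):
    # Smoothing map of the assumed form phi_q(t) = 2**(1-q) * (1+t)**q - 1.
    return 2.0 ** (1 - q) * (1 + t) ** q - 1

def dphi(q, t):
    # Its derivative, needed for the change of variable in the integral.
    return q * 2.0 ** (1 - q) * (1 + t) ** (q - 1)

f = lambda x: (1 + x) ** -0.4        # endpoint-singular integrand
exact = (5 / 3) * 2 ** 0.6           # int_{-1}^{1} (1+x)^{-2/5} dx

x, w = np.polynomial.legendre.leggauss(16)
direct = np.sum(w * f(x))                            # slow convergence
transformed = np.sum(w * f(phi(5, x)) * dphi(5, x))  # polynomial integrand
print(abs(direct - exact), abs(transformed - exact))
```

With 16 nodes the direct rule still carries a visible error from the singularity at $-1$, while the transformed rule is accurate to machine precision; this is the same mechanism the product quadrature and the transformation exploit for the kernel $|x-y|^{-2/5}$.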
Acknowledgements We are very grateful to Professor Giuseppe Mastroianni for the helpful suggestions and remarks. In particular we thank him for providing us with estimate (4.13). We also thank the referees for the accurate reading of the paper and their pertinent comments.
References

1. Atkinson, K.E.: The Numerical Solution of Integral Equations of the Second Kind. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge (1997)
2. Bart, G.R., Warnock, R.L.: Linear integral equations of the third kind. SIAM J. Math. Anal. 4, 609–622 (1973)
3. Beylkin, G., Coifman, R., Rokhlin, V.: Fast wavelet transforms and numerical algorithms I. Commun. Pure Appl. Math. 44, 141–183 (1991)
4. Cao, Y., Huang, M., Liu, L., Xu, Y.: Hybrid collocation methods for Fredholm integral equations with weakly singular kernels. Appl. Numer. Math. 57, 549–561 (2007)
5. Chen, Z., Micchelli, C.A., Xu, Y.: Fast collocation methods for second kind integral equations. SIAM J. Numer. Anal. 40, 344–375 (2002)
6. De Bonis, M.C., Mastroianni, G.: Projection methods and condition numbers in uniform norm for Fredholm and Cauchy singular integral equations. SIAM J. Numer. Anal. 44(4), 1351–1374 (2006)
7. Ditzian, Z., Totik, V.: Moduli of Smoothness. Springer Series in Computational Mathematics, vol. 9. Springer, New York (1987)
8. Graham, I.: Singularity expansions for the solution of the second kind Fredholm integral equations with weakly singular convolution kernels. J. Integr. Equ. 4, 1–30 (1982)
9. Greengard, L., Rokhlin, V.: A fast algorithm for particle simulations. J. Comput. Phys. 135, 279–292 (1997)
10. Kress, R.: A Nyström method for boundary integral equations in domains with corners. Numer. Math. 58(2), 145–161 (1990)
11. Luther, U., Mastroianni, G.: Fourier projections in weighted L∞ spaces. In: Problems and Methods in Mathematical Physics (Chemnitz, 1999), Operator Theory: Advances and Applications, vol. 121, pp. 327–351. Birkhäuser, Basel (2001)
12. Mastroianni, G., Frammartino, C., Rathsfeld, A.: On polynomial collocation for second kind integral equations with fixed singularities of Mellin type. Numer. Math. 94(2), 333–365 (2003)
13. Mastroianni, G., Russo, M.G.: Lagrange interpolation in weighted Besov spaces. Constr. Approx. 15, 257–289 (1999)
14. Mastroianni, G., Russo, M.G., Themistoclakis, W.: Numerical methods for Cauchy singular integral equations in spaces of weighted continuous functions. In: Operator Theory: Advances and Applications, vol. 160, pp. 311–336. Birkhäuser, Basel (2005)
15. Monegato, G., Scuderi, L.: High order methods for weakly singular integral equations with nonsmooth input functions. Math. Comput. 67(224), 1493–1515 (1998)
16. Nevai, P.: Mean convergence of Lagrange interpolation. III. Trans. Am. Math. Soc. 282, 669–698 (1984)
17. Nevai, P.: Hilbert transforms and Lagrange interpolation. Addendum to: "Mean convergence of Lagrange interpolation. III" (Trans. Am. Math. Soc. 282(2), 669–698 (1984)). J. Approx. Theory 60(3), 360–363 (1990)
18. Pedas, A., Vainikko, G.: Smoothing transformation and piecewise polynomial collocation for weakly singular Volterra integral equations. Computing 73, 271–293 (2004)
19. Pedas, A., Vainikko, G.: Smoothing transformation and piecewise polynomial projection methods for weakly singular Fredholm integral equations. Commun. Pure Appl. Anal. 5(2), 395–413 (2006)
20. Pedas, A., Vainikko, G.: Integral equations with diagonal and boundary singularities of the kernel. Z. Anal. Anwend. 25(4), 487–516 (2006)
21. Rathsfeld, A.: A wavelet algorithm for the solution of a singular integral equation over a smooth two-dimensional manifold. J. Integral Equ. Appl. 10, 445–501 (1998)
22. Scuderi, L.: A collocation method for the generalized airfoil equation for an airfoil with a flap. SIAM J. Numer. Anal. 35(5), 1725–1739 (1998)
23. Vainikko, G., Pedas, A.: The properties of solutions of weakly singular integral equations. J. Aust. Math. Soc. 22, 419–430 (1981)