Convergence Analysis of the Gaussian Regularized Shannon Sampling Formula
arXiv:1601.01363v1 [cs.IT] 7 Jan 2016
Rongrong Lin and Haizhang Zhang
Abstract—We consider the reconstruction of a bandlimited function from its finite localized sample data. Truncating the classical Shannon sampling series results in an unsatisfactory convergence rate due to the slow decay of the sinc function. To overcome this drawback, a simple and highly effective method, called the Gaussian regularization of the Shannon series, was proposed in engineering and has received remarkable attention. It works by multiplying the sinc function in the Shannon series with a Gaussian regularization function. L. Qian (Proc. Amer. Math. Soc., 2003) established the convergence rate of $O(\sqrt{n}\exp(-\frac{\pi-\delta}{2}n))$ for this method, where $\delta<\pi$ is the bandwidth and $n$ is the number of sample data. C. Micchelli et al. (J. Complexity, 2009) proposed a different regularization method and obtained the corresponding convergence rate of $O(\frac{1}{\sqrt{n}}\exp(-\frac{\pi-\delta}{2}n))$. This latter rate is by far the best among all regularization methods for the Shannon series. However, their regularization function involves solving a linear system and is implicit and more complicated. The main objective of this note is to show that the Gaussian regularized Shannon series can also achieve the same best convergence rate as that of C. Micchelli et al. We also show that the Gaussian regularization method can improve the convergence rate for the useful average sampling. Numerical experiments are presented to justify the obtained results.

Index Terms—Bandlimited functions, Gaussian regularization, oversampling, Shannon's sampling theorem, average sampling.
I. INTRODUCTION
The main purpose of this paper is to show that the Gaussian regularized Shannon sampling formula to reconstruct a bandlimited function can achieve by far the best convergence rate among all regularization methods for the Shannon sampling series in the literature. As a result, we improve L. Qian's error estimate [1] for this highly successful method in engineering. We first introduce the Paley-Wiener space $B_\delta(\mathbb{R}^d)$ with the bandwidth $\delta := (\delta_1,\delta_2,\ldots,\delta_d)\in(0,+\infty)^d$, defined as
$$B_\delta(\mathbb{R}^d) := \{f\in L^2(\mathbb{R}^d)\cap C(\mathbb{R}^d) : \operatorname{supp}\hat f \subseteq [-\delta,\delta]\},$$
where $[-\delta,\delta] := \prod_{k=1}^{d}[-\delta_k,\delta_k]$. In this paper, the Fourier transform of $f\in L^1(\mathbb{R}^d)$ takes the form
$$\hat f(\xi) := \frac{1}{(\sqrt{2\pi})^d}\int_{\mathbb{R}^d} f(t)\,e^{-it\cdot\xi}\,dt,\quad \xi\in\mathbb{R}^d,$$
This work was supported in part by the Natural Science Foundation of China under grants 11222103 and 11101438.

R. Lin is with the School of Mathematics and Computational Science, Sun Yat-sen University, Guangzhou 510275, P. R. China (e-mail: [email protected]).

H. Zhang (corresponding author) is with the School of Mathematics and Computational Science and the Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou 510275, P. R. China (e-mail: [email protected]).
where $t\cdot\xi$ denotes the standard inner product of $t$ and $\xi$ in $\mathbb{R}^d$. The classical Shannon sampling theorem [2], [3] states that each $f\in B_\pi(\mathbb{R})$ can be completely reconstructed from its infinite sampling data $\{f(j) : j\in\mathbb{Z}\}$. Specifically, it holds
$$f(t) = \sum_{j\in\mathbb{Z}} f(j)\,\mathrm{sinc}(t-j),\quad t\in\mathbb{R},\ f\in B_\pi(\mathbb{R}), \tag{1}$$
where $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$. Many generalizations of Shannon's sampling theorem have been established (see, for example, [4]–[12] and the references therein). In practice, we can only sum over finite sample data "near" $t$ in (1) to approximate $f(t)$. Truncating the series (1) results in a convergence rate of the order $O(\frac{1}{\sqrt{n}})$ due to the slow decay of the sinc function [13]–[16]. Moreover, this truncated series was proved to be the optimal reconstruction method in the worst-case scenario in $B_\pi(\mathbb{R})$. A dramatic improvement of the convergence rate can be achieved when $f\in B_\delta(\mathbb{R})$, $\delta<\pi$. In this case, $\{f(j) : j\in\mathbb{Z}\}$ turns out to be a set of oversampling data, where oversampling means to sample at a rate strictly larger than the Nyquist rate, $\frac{\delta}{\pi}<1$. Three explicit methods [1], [14], [17] have been proposed in order to reconstruct a univariate bandlimited function $f\in B_\delta(\mathbb{R})$ ($\delta<\pi$) from its finite oversampling data $\{f(j) : -n+1\le j\le n\}$ with an exponentially decaying approximation error. They work by multiplying the sinc function with a rapidly decaying regularization function. Jagerman [14] used a power of the sinc function as the regularizer and obtained the convergence rate
$$O\Big(\frac{1}{n}\exp\big(-\tfrac{\pi-\delta}{e}\,n\big)\Big).$$
Micchelli et al. [17] chose a spline function as the regularizer and attained the convergence rate
$$O\Big(\frac{1}{\sqrt{n}}\exp\big(-\tfrac{\pi-\delta}{2}\,n\big)\Big), \tag{2}$$
which is by far the best convergence rate among all regularization methods for the Shannon sampling series in reconstructing a bandlimited function. However, the spline regularization function in [17] is implicit and involves solving a linear system. A third method, using a Gaussian function as the regularizer, was first proposed by Wei [18]. Precisely, the Gaussian regularized Shannon sampling series in [18] is defined as
$$(S_{n,r}f)(t) := \sum_{j=-n+1}^{n} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{(t-j)^2}{2r^2}},\quad t\in(0,1),\ f\in B_\delta(\mathbb{R}). \tag{3}$$
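To make (3) concrete, the following Python sketch (our illustration, not part of the original papers) evaluates the Gaussian regularized Shannon series at a point; the test function $f(t)=\sin(\delta t)/(\delta t)\in B_\delta(\mathbb{R})$ and all parameter values are illustrative assumptions.

```python
import numpy as np

def gauss_shannon(f_samples, j, t, r):
    # Gaussian regularized Shannon series (3):
    # sum over j of f(j) * sinc(t - j) * exp(-(t - j)^2 / (2 r^2)).
    # np.sinc(x) = sin(pi x)/(pi x), matching the paper's convention.
    u = t - j
    return np.sum(f_samples * np.sinc(u) * np.exp(-u**2 / (2.0 * r**2)))

# Illustrative test function in B_delta(R): f(t) = sin(delta t)/(delta t).
delta = np.pi / 2
f = lambda t: np.sinc(delta * t / np.pi)

n = 20
j = np.arange(-n + 1, n + 1)            # the 2n localized samples f(j), -n+1 <= j <= n
r = np.sqrt((n - 1) / (np.pi - delta))  # the variance choice of Theorem 1 below
t = 0.4
print(abs(f(t) - gauss_shannon(f(j), j, t, r)))
```

The printed error is already far below what truncation of (1) with the same $2n$ samples achieves, which is the phenomenon quantified in Section II.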
A convergence analysis of the method was presented by Qian [1], [8]. Taking into account a small computational mistake in (2.33) of [1], and optimizing over the variance $r$ of the Gaussian function as done in [17], Qian actually established the following convergence rate for (3):
$$O\Big(\sqrt{n}\,\exp\big(-\tfrac{\pi-\delta}{2}\,n\big)\Big). \tag{4}$$
Due to its simplicity and high accuracy (4), the Gaussian regularized Shannon sampling series (3) has been widely applied in scientific and engineering computations. In fact, more than a hundred such papers have appeared (see http://www.math.msu.edu/~wei/pub-sec.html for the list, and [18]–[23] for comments and discussion). We note that although the exponential term in (4) is as good as that in (2) for the spline regularization [17], the first factor $\sqrt{n}$ is much worse than the $\frac{1}{\sqrt{n}}$ in (2). The first purpose of this paper is to show that the convergence rate of the Gaussian regularized Shannon sampling series can be improved to (2). Thus, this method enjoys both simplicity and the by far best convergence rate. This will be done in Section II, where we also improve the convergence rates of the Gaussian regularized Shannon sampling series for derivatives and for multivariate bandlimited functions. In Section III, we show that the Gaussian regularization method can also improve the convergence rate for the useful average sampling. In the last section, we demonstrate our results via several numerical experiments.

II. IMPROVED CONVERGENCE RATES FOR THE GAUSSIAN REGULARIZATION
In this section, we improve the convergence rate analysis for the Gaussian regularized Shannon sampling series. Specifically, we shall show that it can achieve the by far best rate (2) for univariate bandlimited functions and their derivatives, and for multivariate bandlimited functions. We treat the three cases in separate subsections.

A. Univariate Bandlimited Functions

Let $f\in B_\delta(\mathbb{R})$ with $\delta\in(0,\pi)$ throughout this subsection. We shall use the Gaussian regularized Shannon sampling series (3) to reconstruct the values of $f$ at $t\in(0,1)$ from the finite localized oversampling data $\{f(j) : -n+1\le j\le n\}$. We shall need a few technical facts in order to improve the convergence rate established in [1]. They were also frequently used in the estimates in [1], [8]. Firstly, it is well known that $\mathrm{sinc}(\cdot-j)$, $j\in\mathbb{Z}$, form an orthonormal basis for $B_\pi(\mathbb{R})$. As $f\in B_\delta(\mathbb{R})\subseteq B_\pi(\mathbb{R})$, we have by the Parseval identity
$$\|f\|_{L^2(\mathbb{R})}^2 = \sum_{j\in\mathbb{Z}}|f(j)|^2,\quad f\in B_\delta(\mathbb{R}). \tag{5}$$
The second result needed is the following upper bound estimate for Mills' ratio [24] of the standard normal law:
$$\int_x^{+\infty} e^{-t^2}\,dt < \frac{e^{-x^2}}{2x},\quad x>0. \tag{6}$$
We shall also need a variant of its discrete version:
$$\sum_{j\notin(-n,n]} e^{-\frac{(t-j)^2}{r^2}} < \frac{r^2}{n-1}\,e^{-\frac{(n-1)^2}{r^2}},\quad r>0,\ n\ge2,\ t\in(0,1). \tag{7}$$
The next two are a useful computation of the Fourier transform
$$\Big(\mathrm{sinc}(\cdot-j)\,e^{-\frac{(\cdot-j)^2}{2r^2}}\Big)^{\wedge}(\xi) = \frac{r\,e^{-ij\xi}}{2\pi}\int_{\xi-\pi}^{\xi+\pi} e^{-\frac{r^2\eta^2}{2}}\,d\eta,\quad \xi\in\mathbb{R}, \tag{8}$$
and an associated estimate
$$1 - \frac{1}{\sqrt{\pi}}\int_{\frac{(\xi-\pi)r}{\sqrt{2}}}^{\frac{(\xi+\pi)r}{\sqrt{2}}} e^{-\tau^2}\,d\tau \le \sqrt{\frac{2}{\pi}}\,\frac{e^{-\frac{(\pi-\delta)^2r^2}{2}}}{(\pi-\delta)r}\quad\text{for all }\xi\in[-\delta,\delta]. \tag{9}$$
The last one is the upper bound estimate
$$|H_k(x)| \le (2|x|)^k,\quad |x|\ge\frac{k}{2}, \tag{10}$$
for the $k$-th order Hermite polynomial, defined as
$$H_k(x) := k!\sum_{i=0}^{\lfloor k/2\rfloor} (-1)^i\,\frac{(2x)^{k-2i}}{i!\,(k-2i)!},\quad x\in\mathbb{R}.$$
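The three estimates (6), (7) and (10) are easy to probe numerically. The following sketch (our illustration, using numpy and scipy, with arbitrary parameter choices) checks each of them at one point; the truncation of the infinite sum in (7) at $|j|=200$ is a numerical assumption.

```python
import math
import numpy as np
from scipy.special import erfc
from numpy.polynomial.hermite import hermval

# (6): Mills' ratio bound; the Gaussian tail integral equals (sqrt(pi)/2) * erfc(x).
x = 1.5
assert 0.5 * math.sqrt(math.pi) * erfc(x) < math.exp(-x**2) / (2 * x)

# (7): discrete Gaussian tail over j outside (-n, n] (sum truncated at |j| = 200).
n, r, t = 10, 2.0, 0.3
j = np.concatenate([np.arange(-200, -n + 1), np.arange(n + 1, 201)])
assert np.sum(np.exp(-(t - j)**2 / r**2)) < r**2 / (n - 1) * math.exp(-(n - 1)**2 / r**2)

# (10): |H_k(x)| <= (2|x|)^k for |x| >= k/2 (physicists' Hermite polynomial).
k, x = 4, 3.0
assert abs(hermval(x, [0] * k + [1])) <= (2 * abs(x))**k
```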
We are now in a position to present an improved convergence rate analysis for the Gaussian regularized Shannon sampling series. We follow the methods in [1] and emphasize that the improvement is achieved by applying the Cauchy-Schwarz inequality to obtain a better estimate for (2.5) in [1].

Theorem 1: Let $\delta\in(0,\pi)$, $n\ge2$, and choose $r:=\sqrt{\frac{n-1}{\pi-\delta}}$. The Gaussian regularized Shannon sampling series (3) satisfies
$$\sup_{f\in B_\delta(\mathbb{R}),\,\|f\|_{L^2(\mathbb{R})}\le1}\|f-S_{n,r}f\|_{L^\infty((0,1))} \le \Big(\sqrt{2\delta}+\frac{1}{\sqrt{n}}\Big)\,\frac{e^{-\frac{(\pi-\delta)(n-1)}{2}}}{\pi\sqrt{(\pi-\delta)(n-1)}}. \tag{11}$$
Proof: Let $f\in B_\delta(\mathbb{R})$ with $\|f\|_{L^2(\mathbb{R})}\le1$. Set $(f-S_{n,r}f)(t) := E_1(t)+E_2(t)$, $t\in(0,1)$, where
$$E_1(t) := f(t) - \sum_{j\in\mathbb{Z}} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{(t-j)^2}{2r^2}},\qquad E_2(t) := \sum_{j\notin(-n,n]} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{(t-j)^2}{2r^2}}.$$
Bound $E_1(t)$ by its Fourier transform as follows:
$$|E_1(t)| \le \frac{1}{\sqrt{2\pi}}\,\|\hat E_1\|_{L^1(\mathbb{R})},\quad t\in(0,1).$$
Computing and bounding $\hat E_1$ by (8), (9) and similar arguments as those in [1], we have
$$|E_1(t)| \le \frac{\sqrt{2\delta}\,e^{-\frac{(\pi-\delta)^2r^2}{2}}}{\pi(\pi-\delta)r},\quad t\in(0,1). \tag{12}$$
Observe that
$$|\mathrm{sinc}(t-j)| \le \frac{1}{\pi n},\quad t\in(0,1),\ j\notin(-n,n].$$
Thus,
$$|E_2(t)| \le \sum_{j\notin(-n,n]} \Big|\frac{\sin\pi(t-j)}{\pi(t-j)}\Big|\,e^{-\frac{(t-j)^2}{2r^2}}\,|f(j)| \le \frac{1}{\pi n}\sum_{j\notin(-n,n]} |f(j)|\,e^{-\frac{(t-j)^2}{2r^2}},\quad t\in(0,1).$$
Applying the Cauchy-Schwarz inequality, we get by (5) and (7)
$$|E_2(t)| \le \frac{1}{\pi n}\Big(\sum_{j\notin(-n,n]}|f(j)|^2\Big)^{\frac12}\Big(\sum_{j\notin(-n,n]} e^{-\frac{(t-j)^2}{r^2}}\Big)^{\frac12} \le \frac{r\,e^{-\frac{(n-1)^2}{2r^2}}}{\pi n\sqrt{n-1}},\quad t\in(0,1). \tag{13}$$
Combining (12) with (13) and optimally choosing $r=\sqrt{\frac{n-1}{\pi-\delta}}$ completes the proof.

We remark that the estimate (11) is of the same order as the by far best convergence rate (2) in the literature. A second remark concerns the degenerate case $\delta=\pi$. In this case, the estimate in [1] or the above (11) is apparently meaningless. To make up for this drawback, a more delicate upper bound estimate is needed for $E_1$. Specifically, we have
$$|E_1(t)| \le \frac{1}{\sqrt{2\pi}}\int_{[-\pi,\pi]\setminus[-\pi+n^{-3/4},\,\pi-n^{-3/4}]}|\hat E_1(\xi)|\,d\xi + \frac{1}{\sqrt{2\pi}}\int_{-\pi+n^{-3/4}}^{\pi-n^{-3/4}}|\hat E_1(\xi)|\,d\xi \le \frac{1}{\sqrt{\pi}}\,\frac{1}{n^{3/8}} + \sqrt{\frac{2}{\pi}}\,\frac{e^{-\frac{n^{3/4}}{2}}}{n^{3/8}}.$$
Taking $r=n^{9/8}$ above and using (13), we obtain the convergence rate $3n^{-3/8}$ for $f\in B_\pi(\mathbb{R})$.
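The error bound (11) is straightforward to probe numerically. The sketch below (an illustration under our own choice of test function, normalized so that $\|f\|_{L^2(\mathbb{R})}=1$) compares the measured worst-case error on a grid in $(0,1)$ against the right-hand side of (11) as $n$ grows.

```python
import numpy as np

delta = np.pi / 2
nrm = np.sqrt(np.pi / delta)                     # L2 norm of sin(delta t)/(delta t)
f = lambda t: np.sinc(delta * t / np.pi) / nrm   # now ||f||_{L2} = 1

t = np.linspace(0.01, 0.99, 197)
for n in (5, 10, 20, 40):
    r = np.sqrt((n - 1) / (np.pi - delta))       # the choice of r in Theorem 1
    j = np.arange(-n + 1, n + 1)
    u = t[:, None] - j[None, :]
    approx = np.sum(f(j) * np.sinc(u) * np.exp(-u**2 / (2 * r**2)), axis=1)
    err = np.max(np.abs(f(t) - approx))
    bound = ((np.sqrt(2 * delta) + 1 / np.sqrt(n))
             * np.exp(-(np.pi - delta) * (n - 1) / 2)
             / (np.pi * np.sqrt((np.pi - delta) * (n - 1))))
    print(f"n={n:3d}  error={err:.3e}  bound (11)={bound:.3e}")
```

Both columns should exhibit the rate $\frac{1}{\sqrt{n}}e^{-\frac{\pi-\delta}{2}n}$ of (2), roughly halving the number of correct digits' deficit each time $n$ doubles.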
B. Derivatives of Univariate Bandlimited Functions

By the Paley-Wiener theorem, each function in $B_\delta(\mathbb{R})$ is infinitely differentiable. In this subsection, we are concerned with the reconstruction of the derivatives of $f\in B_\delta(\mathbb{R})$ by the Gaussian regularized Shannon sampling series (3). The convergence rate obtained in [1], [8] is also of the order (4). We shall improve the estimate.

Theorem 2: Let $s\in\mathbb{N}$, $n\ge\max\{3,\frac{s^2}{2(\pi-\delta)}\}$, and $r:=\sqrt{\frac{n-2}{\pi-\delta}}$. It holds
$$\sup_{f\in B_\delta(\mathbb{R}),\,\|f\|_{L^2(\mathbb{R})}\le1}\|f^{(s)}-(S_{n,r}f)^{(s)}\|_{L^\infty((0,1))} \le \Big(\sqrt{\frac{2}{2s+1}}\,\delta^{s+\frac12}+\frac{24(s+2)!}{\sqrt{n}}\Big)\frac{e^{-\frac{(\pi-\delta)(n-2)}{2}}}{\pi\sqrt{(\pi-\delta)(n-2)}}.$$

Proof: We may suppose that $f\in B_\delta(\mathbb{R})$ with $\|f\|_{L^2(\mathbb{R})}\le1$. Set $f^{(s)}(t)-(S_{n,r}f)^{(s)}(t) := E_1(t)+E_2(t)$, $t\in(0,1)$, where
$$E_1(t) := f^{(s)}(t) - \frac{1}{\sqrt{2\pi}}\Big(\sum_{j\in\mathbb{Z}} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{(t-j)^2}{2r^2}}\Big)^{(s)},\qquad E_2(t) := \frac{1}{\sqrt{2\pi}}\Big(\sum_{j\notin(-n,n]} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{(t-j)^2}{2r^2}}\Big)^{(s)}.$$
It has been estimated in [1] that
$$\|E_1\|_{L^\infty((0,1))} \le \sqrt{\frac{2}{2s+1}}\,\frac{\delta^{s+\frac12}\,e^{-\frac{(\pi-\delta)^2r^2}{2}}}{\pi(\pi-\delta)r}. \tag{14}$$
For each $t\in(0,1)$, we calculate
$$E_2(t) = \frac{1}{\sqrt{2\pi}}\sum_{j\notin(-n,n]} f(j)\sum_{k=0}^{s}\frac{s!}{k!(s-k)!}\Big(\frac{\sin\pi(t-j)}{\pi(t-j)}\Big)^{(s-k)}\Big(e^{-\frac{(t-j)^2}{2r^2}}\Big)^{(k)}$$
$$= \frac{1}{\sqrt{2\pi}}\sum_{j\notin(-n,n]} f(j)\Bigg[\sum_{k=0}^{s}\frac{s!}{k!}\Bigg(\sum_{l=0}^{s-k}\frac{(-1)^l\,l!\,\pi^{s-k-l-1}\,\sin\big(\pi(t-j+\frac{s-k-l}{2})\big)}{l!\,(s-k-l)!\,(t-j)^{l+1}}\Bigg)\cdot\frac{(-1)^k\,H_k\big(\frac{t-j}{\sqrt{2}r}\big)}{(\sqrt{2}r)^k}\,e^{-\frac{(t-j)^2}{2r^2}}\Bigg]. \tag{15}$$
Noticing the simple fact
$$\frac{1}{|t-j|^{l+1}} \le \frac{1}{n}\quad\text{for } j\notin(-n,n],\ t\in(0,1),\ l\in\mathbb{Z}_+,$$
we obtain by (10)
$$|E_2(t)| \le \frac{s!}{\pi\sqrt{2\pi}}\sum_{j\notin(-n,n]}|f(j)|\sum_{k=0}^{s}\frac{1}{k!}\,\frac{|t-j|^k}{r^{2k}}\Bigg(\sum_{l=0}^{s-k}\frac{\pi^{s-k-l}}{(s-k-l)!}\,\frac{1}{|t-j|^{l+1}}\Bigg)e^{-\frac{(t-j)^2}{2r^2}}$$
$$\le \frac{s!}{n\pi\sqrt{2\pi}}\sum_{|j|\ge n}|f(j+1)|\sum_{k=0}^{s}\frac{|j|^k}{k!\,r^{2k}}\Bigg(\sum_{l=0}^{s-k}\frac{\pi^{s-k-l}}{(s-k-l)!}\Bigg)e^{-\frac{j^2}{2r^2}}.$$
Since $e^x > \frac{x^j}{j!}$ for all $x>0$ and $j\in\mathbb{Z}_+$, we have
$$|E_2(t)| \le \frac{s!}{n\pi\sqrt{2\pi}}\sum_{|j|\ge n}|f(j+1)|\,e^{\frac{|j|}{r^2}}\sum_{k=0}^{s}(s-k+1)\,e^{\pi}\,e^{-\frac{j^2}{2r^2}} \le \frac{e^{\pi}(s+2)!}{n(2\pi)^{\frac32}}\sum_{|j|\ge n}|f(j+1)|\,e^{\frac{-j^2+2|j|}{2r^2}}.$$
Note that the assumptions on $n$ and $r=\sqrt{\frac{n-2}{\pi-\delta}}$ imply $n\ge\frac{sr}{\sqrt{2}}$, which justifies the use of (10) above. We thus get by (5), (7) and the Cauchy-Schwarz inequality
$$\|E_2\|_{L^\infty((0,1))} \le \frac{e^{\pi}(s+2)!\,e^{\frac{1}{2r^2}}}{n(2\pi)^{\frac32}}\Big(2\sum_{j\ge n-1}e^{-\frac{j^2}{r^2}}\Big)^{\frac12} \le \frac{e^{\pi}(s+2)!\,e^{\frac{1}{2r^2}}}{(2\pi)^{\frac32}}\,\frac{r\,e^{-\frac{(n-2)^2}{2r^2}}}{n\sqrt{n-2}}. \tag{16}$$
Substituting $r=\sqrt{\frac{n-2}{\pi-\delta}}$ into (14) and (16), we have
$$\|E_1\|_{L^\infty((0,1))}+\|E_2\|_{L^\infty((0,1))} = \Bigg(\sqrt{\frac{2}{2s+1}}\,\frac{\delta^{s+\frac12}}{\pi\sqrt{(\pi-\delta)(n-2)}}+\frac{e^{\pi}(s+2)!\,e^{\frac{\pi-\delta}{2(n-2)}}}{(2\pi)^{\frac32}\sqrt{\pi-\delta}\,n}\Bigg)e^{-\frac{(\pi-\delta)(n-2)}{2}}$$
$$< \Bigg(\sqrt{\frac{2}{2s+1}}\,\delta^{s+\frac12}+\frac{24(s+2)!}{\sqrt{n}}\Bigg)\frac{e^{-\frac{(\pi-\delta)(n-2)}{2}}}{\pi\sqrt{(\pi-\delta)(n-2)}}.$$
In the last inequality, we use $r=\sqrt{\frac{n-2}{\pi-\delta}}$, $e^{\frac{1}{2r^2}}\le e^{\frac{\pi}{2}}$, and $e^{\frac{3\pi}{2}}\le 2\sqrt{2\pi}\cdot24$. The proof is complete.
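Theorem 2 can be tested in the same spirit by differentiating the regularized series term by term. The sketch below (our illustration) does this for $s=1$; the closed-form derivative of each summand and the test function are our own choices.

```python
import numpy as np

delta = np.pi / 2
f  = lambda t: np.sinc(delta * t / np.pi)        # f(t) = sin(delta t)/(delta t)
fp = lambda t: (np.cos(delta * t) - f(t)) / t    # its derivative, valid for t != 0

def sinc_prime(u):
    # derivative of sinc(u) = sin(pi u)/(pi u), valid for u != 0
    return (np.cos(np.pi * u) - np.sinc(u)) / u

n = 20
r = np.sqrt((n - 2) / (np.pi - delta))           # the variance choice of Theorem 2
j = np.arange(-n + 1, n + 1)
t = 0.37
u = t - j
# d/dt [ sinc(u) e^{-u^2/(2 r^2)} ] = ( sinc'(u) - (u / r^2) sinc(u) ) e^{-u^2/(2 r^2)}
series_prime = np.sum(f(j) * (sinc_prime(u) - u / r**2 * np.sinc(u))
                      * np.exp(-u**2 / (2 * r**2)))
print(abs(fp(t) - series_prime))
```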
C. Multivariate Bandlimited Functions

In this subsection, we show that the best convergence rate (2) can also be achieved in the reconstruction of a multivariate bandlimited function by the Gaussian regularized Shannon sampling series. The techniques used for this case are quite different from those in the univariate case, and also differ much from those in [8]. Let $\delta:=(\delta_1,\delta_2,\ldots,\delta_d)\in(0,\pi)^d$ and $J_n:=\{j\in\mathbb{Z}^d : j\in(-n,n]^d\}$ in this subsection. The Gaussian regularized Shannon sampling series [8] to reconstruct a multivariate function $f\in B_\delta(\mathbb{R}^d)$ from its finite sample data $f(J_n)$ is defined as
$$(S_{n,r}f)(t) := \sum_{j\in J_n} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{\|t-j\|^2}{2r^2}},\quad t\in(0,1)^d, \tag{17}$$
where
$$\mathrm{sinc}(x) := \prod_{k=1}^{d}\frac{\sin(\pi x_k)}{\pi x_k},\quad x\in\mathbb{R}^d,$$
and $\|x\|$ denotes the standard Euclidean norm of $x$ in $\mathbb{R}^d$. Two lemmas are needed to present an improved convergence analysis for (17).

Lemma 3: Let $r>0$ and $n\ge2$. It holds for all $t\in(0,1)^d$
$$\sum_{j\notin J_n}\mathrm{sinc}^2(t-j)\,e^{-\frac{\|t-j\|^2}{r^2}} \le \frac{d\,r^2}{\pi^2n^2(n-1)}\,e^{-\frac{(n-1)^2}{r^2}}.$$

Proof: Observe that
$$\{j\in\mathbb{Z}^d : j\notin J_n\} \subseteq \bigcup_{k=1}^{d}\{j\in\mathbb{Z}^d : j_k\notin(-n,n]\}. \tag{18}$$
Note that $\sum_{m\in\mathbb{Z}}\mathrm{sinc}^2(x-m)=1$, $x\in\mathbb{R}$. Thus, we have by (18), for each $t\in(0,1)^d$,
$$\sum_{j\notin J_n}\mathrm{sinc}^2(t-j)\,e^{-\frac{\|t-j\|^2}{r^2}} \le \sum_{k=1}^{d}\sum_{j_k\notin(-n,n]}\mathrm{sinc}^2(t_k-j_k)\,e^{-\frac{(t_k-j_k)^2}{r^2}}\prod_{l\ne k}\sum_{j_l\in\mathbb{Z}}\mathrm{sinc}^2(t_l-j_l)\,e^{-\frac{(t_l-j_l)^2}{r^2}} \le \sum_{k=1}^{d}\sum_{j_k\notin(-n,n]}\mathrm{sinc}^2(t_k-j_k)\,e^{-\frac{(t_k-j_k)^2}{r^2}}.$$
Since $\mathrm{sinc}^2(t_k-j_k)\le\frac{1}{\pi^2n^2}$ for $j_k\notin(-n,n]$ and $t_k\in(0,1)$, applying (7) to the last inequality gives the desired result.

Introduce the important constant
$$\Delta := \max\{\delta_k : k=1,2,\ldots,d\}.$$

Lemma 4: Let $\delta\in(0,\pi)^d$. It holds for $\xi\in\prod_{k=1}^{d}[-\delta_k,\delta_k]$
$$1-\prod_{k=1}^{d}\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi_k-\pi)r}{\sqrt{2}}}^{\frac{(\xi_k+\pi)r}{\sqrt{2}}}e^{-\tau^2}\,d\tau \le \sqrt{\frac{2}{\pi}}\,\frac{d\,e^{-\frac{(\pi-\Delta)^2r^2}{2}}}{(\pi-\Delta)r}.$$

Proof: By (6), we have for $\xi\in\prod_{k=1}^{d}[-\delta_k,\delta_k]$
$$\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi_k-\pi)r}{\sqrt{2}}}^{\frac{(\xi_k+\pi)r}{\sqrt{2}}}e^{-\tau^2}\,d\tau = 1-\frac{1}{\sqrt{\pi}}\int_{\frac{(\pi-\xi_k)r}{\sqrt{2}}}^{+\infty}e^{-\tau^2}\,d\tau-\frac{1}{\sqrt{\pi}}\int_{\frac{(\pi+\xi_k)r}{\sqrt{2}}}^{+\infty}e^{-\tau^2}\,d\tau \ge 1-\frac{1}{\sqrt{\pi}}\Bigg(\frac{e^{-\frac{(\pi-\xi_k)^2r^2}{2}}}{\sqrt{2}(\pi-\xi_k)r}+\frac{e^{-\frac{(\pi+\xi_k)^2r^2}{2}}}{\sqrt{2}(\pi+\xi_k)r}\Bigg).$$
We then apply the elementary fact that for constants $\tau_k>0$, $1\le k\le d$, with $\sum_{k=1}^{d}\tau_k<1$, it holds
$$1-\prod_{k=1}^{d}(1-\tau_k) \le \sum_{k=1}^{d}\tau_k.$$
As a consequence,
$$1-\prod_{k=1}^{d}\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi_k-\pi)r}{\sqrt{2}}}^{\frac{(\xi_k+\pi)r}{\sqrt{2}}}e^{-\tau^2}\,d\tau \le \frac{1}{\sqrt{\pi}}\sum_{k=1}^{d}\Bigg(\frac{e^{-\frac{(\pi-\xi_k)^2r^2}{2}}}{\sqrt{2}(\pi-\xi_k)r}+\frac{e^{-\frac{(\pi+\xi_k)^2r^2}{2}}}{\sqrt{2}(\pi+\xi_k)r}\Bigg) \le \sqrt{\frac{2}{\pi}}\sum_{k=1}^{d}\frac{e^{-\frac{(\pi-\xi_k)^2r^2}{2}}}{(\pi-\xi_k)r}.$$
The proof is completed by noting that $\frac{1}{x}e^{-\frac{r^2x^2}{2}}$ is decreasing on $(0,+\infty)$.

With the above preparation, we have the following main theorem.

Theorem 5: Let $\delta\in(0,\pi)^d$, $n\ge2$, and $r:=\sqrt{\frac{n-1}{\pi-\Delta}}$. The multivariate Gaussian regularized Shannon sampling series (17) satisfies
$$\sup_{f\in B_\delta(\mathbb{R}^d),\,\|f\|_{L^2(\mathbb{R}^d)}\le1}\|f-S_{n,r}f\|_{L^\infty((0,1)^d)} \le \Big(d\,(2\Delta)^{\frac{d}{2}}+\sqrt{\frac{d}{n}}\Big)\frac{e^{-\frac{(\pi-\Delta)(n-1)}{2}}}{\pi\sqrt{(\pi-\Delta)(n-1)}}.$$
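Before turning to the proof, we note that (17) admits a direct, if naive, $O((2n)^d)$ implementation. The following sketch (our illustration for $d=2$, with a tensor-product test function of our own choosing) exercises Theorem 5's choice of $r$:

```python
import numpy as np
from itertools import product

def gauss_shannon_md(f, t, n, r, d):
    # Multivariate Gaussian regularized Shannon series (17) at a point t in (0,1)^d.
    val = 0.0
    for jt in product(range(-n + 1, n + 1), repeat=d):
        u = t - np.array(jt, dtype=float)
        val += (f(np.array(jt, dtype=float)) * np.prod(np.sinc(u))
                * np.exp(-np.dot(u, u) / (2 * r**2)))
    return val

d, delta = 2, np.pi / 2                            # delta_k = pi/2 for each k, so Delta = pi/2
f = lambda x: np.prod(np.sinc(delta * x / np.pi))  # tensor-product function in B_delta(R^2)
n = 8
r = np.sqrt((n - 1) / (np.pi - delta))             # the variance choice of Theorem 5
t = np.array([0.3, 0.7])
print(abs(f(t) - gauss_shannon_md(f, t, n, r, d)))
```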
Proof: Let $f\in B_\delta(\mathbb{R}^d)$ with $\|f\|_{L^2(\mathbb{R}^d)}\le1$. Set $f(t)-(S_{n,r}f)(t) := E_1(t)+E_2(t)$, $t\in(0,1)^d$, where
$$E_1(t) := f(t)-\sum_{j\in\mathbb{Z}^d} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{\|t-j\|^2}{2r^2}},\qquad E_2(t) := \sum_{j\notin J_n} f(j)\,\mathrm{sinc}(t-j)\,e^{-\frac{\|t-j\|^2}{2r^2}}.$$
By Lemma 4 and similar arguments as those in [1] for the univariate case, we have
$$|\hat E_1(\xi)| = |\hat f(\xi)|\Bigg(1-\prod_{k=1}^{d}\frac{1}{\sqrt{\pi}}\int_{\frac{(\xi_k-\pi)r}{\sqrt{2}}}^{\frac{(\xi_k+\pi)r}{\sqrt{2}}}e^{-\tau^2}\,d\tau\Bigg) \le |\hat f(\xi)|\,\sqrt{\frac{2}{\pi}}\,\frac{d\,e^{-\frac{(\pi-\Delta)^2r^2}{2}}}{(\pi-\Delta)r},\quad \xi\in[-\delta,\delta].$$
Bounding $E_1$ by its Fourier transform and then applying the Cauchy-Schwarz inequality, we get
$$\|E_1\|_{L^\infty((0,1)^d)} \le \frac{1}{\sqrt{2\pi}}\int_{[-\delta,\delta]}|\hat E_1(\xi)|\,d\xi \le \sqrt{\frac{2}{\pi}}\,\frac{d\,e^{-\frac{(\pi-\Delta)^2r^2}{2}}}{(\pi-\Delta)r}\int_{[-\delta,\delta]}|\hat f(\xi)|\,d\xi \le \frac{d\,(2\Delta)^{\frac{d}{2}}\,e^{-\frac{(\pi-\Delta)^2r^2}{2}}}{\pi(\pi-\Delta)r}. \tag{19}$$
Recall that
$$\|f\|_{L^2(\mathbb{R}^d)}^2 = \sum_{j\in\mathbb{Z}^d}|f(j)|^2,\quad f\in B_\delta(\mathbb{R}^d).$$
By this, Lemma 3, and the Cauchy-Schwarz inequality, we obtain
$$\|E_2\|_{L^\infty((0,1)^d)} \le \Bigg(\sum_{j\notin J_n}|f(j)|^2\Bigg)^{\frac12}\Bigg(\sum_{j\notin J_n}\mathrm{sinc}^2(t-j)\,e^{-\frac{\|t-j\|^2}{r^2}}\Bigg)^{\frac12} \le \frac{\sqrt{d}\,r\,e^{-\frac{(n-1)^2}{2r^2}}}{\pi n\sqrt{n-1}}. \tag{20}$$
The proof is completed by taking $r=\sqrt{\frac{n-1}{\pi-\Delta}}$ in (19) and (20).

III. GAUSSIAN REGULARIZATION FOR AVERAGE SAMPLING

In this section, we apply the method of Gaussian regularization to the useful average sampling. A main purpose is again to improve the convergence rate. We first introduce some basic facts about average sampling. Sampling a function $f$ at $j\in\mathbb{Z}$ can be viewed as applying the delta distribution to the function $f(j+\cdot)$. The delta distribution is convenient theoretically but hard to implement physically. Hence, a practical way is to approximate the delta distribution by an averaging measure with small support around the origin. For this sake, we consider the following average sampling strategy:
$$\tilde f(j) := \int_{-\sigma/2}^{\sigma/2} f(j+x)\,d\nu(x),\quad j\in\mathbb{Z},\ f\in B_\delta(\mathbb{R}),$$
where $0<\sigma<\frac{\pi}{\delta}$ and $\nu$ is a symmetric positive Borel probability measure on $[-\sigma/2,\sigma/2]$. It was observed in [25] that
$$\cos\frac{\sigma\delta}{2}\,\|f\|_{L^2(\mathbb{R})}^2 \le \sum_{j\in\mathbb{Z}}|\tilde f(j)|^2 \le \|f\|_{L^2(\mathbb{R})}^2. \tag{21}$$
Thus, $\{\tilde f(j) : j\in\mathbb{Z}\}$ is in fact induced by a frame of $B_\delta(\mathbb{R})$. By standard frame theory, we are able to completely reconstruct $f$ from the infinite sample data $\{\tilde f(j) : j\in\mathbb{Z}\}$ through a dual frame. For studies along this direction, see, for example, [4], [5], [9], [10], [26], [27]. Motivated by the Shannon sampling theorem, we desire a dual frame that is generated by the shifts of a single function. In other words, we prefer a complete reconstruction formula of the form
$$f = \frac{1}{\sqrt{2\pi}}\sum_{j\in\mathbb{Z}}\tilde f(j)\,\phi(\cdot-j),\quad f\in B_\delta(\mathbb{R}), \tag{22}$$
where $\phi$ is to be chosen. It has been proved in [25] that if $\phi\in C(\mathbb{R})\cap L^2(\mathbb{R})$ with $\operatorname{supp}\hat\phi\subseteq I_\delta := [-2\pi+\delta,\,2\pi-\delta]$, then (22) holds if and only if
$$\hat\phi(\xi)\,W(\xi) = 1\quad\text{for almost every }\xi\in[-\delta,\delta], \tag{23}$$
where
$$W(\xi) := \int_{-\sigma/2}^{\sigma/2} e^{it\xi}\,d\nu(t),\quad \xi\in[-\delta,\delta]. \tag{24}$$
Under the above assumptions on $\nu$, $W(\xi)$ is an even function on $[-\delta,\delta]$ and satisfies
$$0 < \gamma := \cos\frac{\sigma\delta}{2} \le W(\xi) \le 1,\quad |W'(\xi)| \le \frac{\sigma}{2},\quad |W''(\xi)| \le \frac{\sigma^2}{4},\quad \xi\in[-\delta,\delta]. \tag{25}$$
In practice, we only have the finite sample data $\tilde f(j)$, $-n<j\le n$. A natural reconstruction method is
$$\frac{1}{\sqrt{2\pi}}\sum_{j=-n+1}^{n}\tilde f(j)\,\phi(\cdot-j). \tag{26}$$
The main purpose of this section is to illustrate by an explicit example that, compared to the above direct truncation, regularization by a Gaussian function as follows:
$$(S_{n,r}f)(t) := \frac{1}{\sqrt{2\pi}}\sum_{j\in(-n,n]}\tilde f(j)\,\phi(t-j)\,e^{-\frac{(t-j)^2}{2r^2}},\quad t\in(0,1),\ f\in B_\delta(\mathbb{R}), \tag{27}$$
may lead to a better convergence rate. The function $\phi$ satisfying (23) in our example is specified by
$$\hat\phi(\xi) := \begin{cases}\dfrac{1}{W(\xi)}, & \xi\in[-\delta,\delta],\\[2mm] \left(\dfrac{|\xi|-(2\pi-\delta)}{2\pi-2\delta}\right)^2 P(\xi), & \delta\le|\xi|\le2\pi-\delta,\\[2mm] 0, & \text{elsewhere},\end{cases} \tag{28}$$
where
$$P(\xi) := \left(\frac{1}{(\pi-\delta)W(\delta)}-\frac{W'(\delta)}{W^2(\delta)}\right)|\xi| + \frac{\pi-2\delta}{(\pi-\delta)W(\delta)} + \frac{\delta W'(\delta)}{W^2(\delta)}.$$
Then $\operatorname{supp}\hat\phi\subseteq[-2\pi+\delta,\,2\pi-\delta]$, $\hat\phi\in C^1(\mathbb{R})$, and $\hat\phi'$ is absolutely continuous on $\mathbb{R}$. By (25), we have
$$\|\hat\phi\|_{L^\infty(I_\delta)} \le \frac{3\pi-\delta}{(\pi-\delta)\gamma}+\frac{\pi\sigma}{\gamma^2},\qquad \|\hat\phi''\|_{L^\infty(I_\delta)} \le \frac{\sigma^2}{\gamma^3}+\frac{8}{(\pi-\delta)^3\gamma}+\frac{4\sigma}{(\pi-\delta)^2\gamma^2},\qquad \|\hat\phi''\|_{L^1(I_\delta)} \le \frac{2\delta\sigma^2}{\gamma^3}+\frac{32}{(\pi-\delta)^2\gamma}+\frac{16\sigma}{(\pi-\delta)\gamma^2}. \tag{29}$$
Finally, we shall use an elementary fact from Fourier analysis [28]:
$$|\phi(t-j)| \le \frac{1}{\sqrt{2\pi}}\,\frac{\|\hat\phi''\|_{L^1(I_\delta)}}{|t-j|^2},\quad j\ge2,\ t\in(0,1). \tag{30}$$
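For a concrete instance of the above setup, take $\nu$ to be the uniform probability measure on $[-\sigma/2,\sigma/2]$; the paper only requires a symmetric probability measure, so this choice is our illustrative assumption. Then (24) gives $W(\xi)=\sin(\sigma\xi/2)/(\sigma\xi/2)$ in closed form, and the averaged samples and the bounds in (25) can be checked numerically:

```python
import numpy as np

sigma, delta = 0.2, np.pi / 2   # 0 < sigma < pi/delta; nu uniform on [-sigma/2, sigma/2]

def f_tilde(f, j, m=400):
    # local average sample: midpoint-rule approximation of the integral against nu
    x = (np.arange(m) + 0.5) / m * sigma - sigma / 2
    return np.mean(f(j + x))

def W(xi):
    # (24) for uniform nu: W(xi) = sin(sigma xi / 2) / (sigma xi / 2)
    return np.sinc(sigma * xi / (2 * np.pi))

f = lambda t: np.sinc(delta * t / np.pi)
gamma = np.cos(sigma * delta / 2)
xi = np.linspace(-delta, delta, 101)
print(np.all((gamma <= W(xi)) & (W(xi) <= 1)))   # the first bounds in (25)
print(f_tilde(f, 3), f(3))                       # averaged vs. pointwise sample at j = 3
```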
The main result of this section is as follows.

Theorem 6: Let $\delta\in(0,\pi)$, $n\ge2$, $\gamma=\cos(\frac{\sigma\delta}{2})$, $r=n^{5/6}$, and let $\hat\phi$ be given by (28). Then the convergence rate of (27) is
$$\sup_{f\in B_\delta(\mathbb{R}),\,\|f\|_{L^2(\mathbb{R})}\le1}\|f-S_{n,r}f\|_{L^\infty((0,1))} \le c\,n^{-\frac53},$$
where
$$c = \frac{1}{\gamma^3}\Big((\sqrt{\delta}+\sqrt{2\delta})\,\sigma^2+\frac{16\sigma}{(\pi-\delta)^2}+\frac{4\sigma\sqrt{\delta}}{\pi-\delta}\Big)+\frac{\pi\sigma\sqrt{\delta}}{\gamma^2}+\frac{1}{(\pi-\delta)\gamma}\Big(\frac{32}{(\pi-\delta)^2}+\frac{8\sqrt{\delta}}{\pi-\delta}+10\sqrt{\delta}\Big).$$

Proof: Let $f\in B_\delta(\mathbb{R})$ with $\|f\|_{L^2(\mathbb{R})}\le1$. Set $f(t)-(S_{n,r}f)(t) := F_1(t)+F_2(t)$, $t\in(0,1)$, where
$$F_1(t) := f(t)-\frac{1}{\sqrt{2\pi}}\sum_{j\in\mathbb{Z}}\tilde f(j)\,\phi(t-j)\,e^{-\frac{(t-j)^2}{2r^2}},\qquad F_2(t) := \frac{1}{\sqrt{2\pi}}\sum_{j\notin(-n,n]}\tilde f(j)\,\phi(t-j)\,e^{-\frac{(t-j)^2}{2r^2}}.$$
Estimate of $F_1$. By (22), we have $\hat f(\xi) = \Big(\frac{1}{\sqrt{2\pi}}\sum_{j\in\mathbb{Z}}\tilde f(j)\,\hat\phi(\xi)\,e^{-ij\xi}\Big)\chi_{[-\delta,\delta]}(\xi)$, $\xi\in\mathbb{R}$. Then, for $\xi\in[-\delta,\delta]$, by (28) we have
$$|\hat F_1(\xi)| \le \frac{|\hat f(\xi)|}{\sqrt{2\pi}\,|\hat\phi(\xi)|}\Bigg[\int_{|\eta|<\cdots}\big|\hat\phi(\xi)-\hat\phi(\xi-\eta)\big|\,r\,e^{-\frac{r^2\eta^2}{2}}\,d\eta\;\cdots$$