SHARP ERROR ESTIMATES FOR INTERPOLATORY APPROXIMATION ON CONVEX POLYTOPES

ALLAL GUESSAB* AND GERHARD SCHMEISSER†

Abstract. Let $P$ be a convex polytope in the $d$-dimensional Euclidean space. We consider an interpolation of a function $f$ at the vertices of $P$ and compare it with the interpolation of $f$ and its derivative at a fixed point $y \in P$. The two methods may be seen as multivariate analogues of an interpolation by secants and tangents, respectively. For twice continuously differentiable functions, we establish sharp error estimates with respect to a generalized $L^p$ norm for $1 \le p \le \infty$. The case $p = 1$ is of special interest since it provides analogues of the midpoint rule and the trapezoidal rule for approximate integration over the polytope $P$. In the case where $P$ is a simplex and $p > 1$, this investigation covers recent results by S. Waldron [8] and by M. Stämpfle [6].

Key words. Interpolation on convex polytopes, sharp error estimates, approximation of functions, approximate integration, approximation of functionals

AMS subject classifications. 41A05, 41A20, 41A44, 41A63, 41A80, 65D05, 65D30
1. Introduction and Notation. Denote by $P_1$ the class of all polynomials in $d$ real variables of degree at most 1, also called the class of affine functions on $\mathbb{R}^d$. Let $P \subset \mathbb{R}^d$ be a convex polytope of positive measure with vertices $v_1, \dots, v_n$, and let $B_1, \dots, B_n$ be an associated system of continuous functions on $P$ with the following properties:

Non-negativity. For $i = 1, \dots, n$, we have

(1.1)    $B_i(x) \ge 0 \qquad (x \in P)$.

Linear precision. For every $\lambda \in P_1$, we have

(1.2)    $\lambda(x) = \sum_{i=1}^n \lambda(v_i) B_i(x)$.

Warren [10] showed that $B_1, \dots, B_n$ can be chosen as rational functions, which are uniquely determined if one requires that each $B_i$ have minimal degree. Furthermore, for an arbitrary convex polytope, he presented an algorithm for constructing these functions $B_1, \dots, B_n$ in a finite number of steps. Since the vertices of a convex polytope are extreme points, it is easily deduced from linear precision that

(1.3)    $B_i(v_j) = \delta_{ij} \qquad (i, j \in \{1, \dots, n\})$,

where $\delta_{ij}$ is Kronecker's delta. As a consequence of (1.2) and (1.3), the functions $B_1, \dots, B_n$ are linearly independent and span an $n$-dimensional linear space $R_n$ which contains $P_1$ as a subspace.

By $C(P)$, $C^1(P)$, and $C^2(P)$, we denote the spaces of functions which are defined on $P$ and are continuous, continuously differentiable, and twice continuously differentiable, respectively.

*Department of Applied Mathematics, University of Pau, 64000 Pau, France ([email protected]).
†Mathematical Institute, University of Erlangen-Nuremberg, 91054 Erlangen, Germany ([email protected]).
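When $P$ is a simplex, the minimal-degree basis functions are the barycentric coordinates, and properties (1.1)–(1.3) can then be verified numerically. The following sketch is our own illustration (not part of the paper), using the standard triangle in $\mathbb{R}^2$:

```python
import numpy as np

# Vertices of the standard triangle in R^2 (an illustrative choice).
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def barycentric(x):
    """Coordinates B_i(x) solving sum_i B_i(x) v_i = x, sum_i B_i(x) = 1."""
    A = np.vstack([V.T, np.ones(len(V))])   # 3x3 linear system
    return np.linalg.solve(A, np.append(np.asarray(x, float), 1.0))

x = np.array([0.2, 0.3])                    # an interior point
B = barycentric(x)

assert np.all(B >= 0)                                        # (1.1)
lam = lambda p: 2.0 + 3.0 * p[0] - p[1]                      # an affine function
assert np.isclose(lam(x), np.dot([lam(v) for v in V], B))    # (1.2)
for j in range(3):                                           # (1.3)
    assert np.allclose(barycentric(V[j]), np.eye(3)[j], atol=1e-12)
```

For a general polytope the $B_i$ are rational rather than affine, and Warren's construction [10] would be needed; the simplex case above only illustrates the three defining properties.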
Next, let $L$ be a positive linear functional on $C(P)$; positivity means that $L(f) > 0$ for every nontrivial non-negative function $f \in C(P)$. Examples of such functionals are weighted integrals

(1.4)    $L(f) := \int_P w(x) f(x)\,dx \qquad (f \in C(P))$,

where $w$ is integrable and positive on $P$ except for a set of measure zero. For $f \in C(P)$, we introduce

(1.5)    $\|f\|_p := L(|f|^p)^{1/p} \qquad (1 \le p < \infty)$

and

(1.6)    $\|f\|_\infty := \sup_{x \in P} |f(x)|$,

which define norms on $C(P)$. When $L$ is given by (1.4) and $w \equiv 1$, then $\|\cdot\|_p$ is the familiar $L^p$ norm. For general $L$, we may think of $P$ as being equipped with a mass distribution such that $L(1)$ is the total mass of $P$. The possibility of having an arbitrary $L$ is of interest mainly in our applications of the case $p = 1$ (see Section 4). For this reason, we do not use a weighted supremum norm. By $\|\cdot\|$, without any subscript, and by $\langle\cdot,\cdot\rangle$, we denote the Euclidean norm and the standard inner product in $\mathbb{R}^d$.

In this paper, we shall study the linear interpolation operator $\Lambda^v$, defined by

(1.7)    $\Lambda^v[f] := \sum_{i=1}^n f(v_i) B_i \qquad (f \in C(P))$,

which interpolates $f$ at the vertices of $P$, and shall compare it with

(1.8)    $\Lambda_y[f] := f(y) + Df(y)(\cdot - y) \qquad (f \in C^1(P))$,

where $y \in P$. Clearly, $\Lambda_y[f]$ interpolates $f$ at $y$, and the same holds for the first derivative. As regards our notation, we follow the convention that a superscript in roman type indicates an abbreviation for a word, while a subscript in italic type is a mathematical quantity. In particular, the superscript $v$ always refers to interpolation at the vertices. Similarly, we shall use the superscripts sb for smallest ball and cm for center of mass.

2. Auxiliary Results. For convenient reference, we first state some properties of the operators $\Lambda_y$ and $\Lambda^v$ as lemmas.

Lemma 2.1. For $y \in P$, the operator $\Lambda_y$ has the following properties.
(i) It maps $C^1(P)$ into $P_1$.
(ii) It reproduces functions from $P_1$.
(iii) It approximates convex functions from below.

Proof. Properties (i) and (ii) are obvious. Property (iii) is a well-known fact about differentiable convex functions; see [5, p. 98, Theorem A].

Lemma 2.2. The operator $\Lambda^v$ has the following properties.
(i) It maps $C(P)$ into $R_n$.
(ii) It reproduces functions from $R_n$.
(iii) It approximates convex functions from above.
(iv) If $f, g \in C(P)$ and $f(v_i) \le g(v_i)$ for $i = 1, \dots, n$, then $\Lambda^v[f] \le \Lambda^v[g]$.

Proof. Since $\{B_1, \dots, B_n\}$ is a basis of $R_n$, properties (i) and (ii) are obvious consequences of the definition of $\Lambda^v$. Next, it follows from (1.2) that

$x = \sum_{i=1}^n v_i B_i(x) \qquad (x \in P)$,

which is a representation of $x$ as a convex combination of the vertices of $P$. Hence, for a convex function $f$,

$f(x) = f\left(\sum_{i=1}^n v_i B_i(x)\right) \le \sum_{i=1}^n f(v_i) B_i(x) = \Lambda^v[f](x) \qquad (x \in P)$,

and so statement (iii) is verified. Finally, recalling (1.1), we see that, under the hypothesis of statement (iv),

$\Lambda^v[f] = \sum_{i=1}^n f(v_i) B_i \le \sum_{i=1}^n g(v_i) B_i = \Lambda^v[g]$.
This completes the proof.

It will turn out that the constants in our error estimates are determined by the interpolation error of the quadratic function $\|\cdot\|^2$. We therefore introduce the (non-negative) functions

(2.1)    $e_y := \|\cdot\|^2 - \Lambda_y\big[\|\cdot\|^2\big]$, where $y \in P$,

and

(2.2)    $e^v := \Lambda^v\big[\|\cdot\|^2\big] - \|\cdot\|^2 = \sum_{i=1}^n \|v_i\|^2 B_i - \|\cdot\|^2$.

Representations, interrelations, and estimates for these functions are stated in the following lemma.

Lemma 2.3. The functions $e_y$ and $e^v$ are non-negative and vanish at the interpolation points of $\Lambda_y$ and $\Lambda^v$, respectively. They satisfy the equations

(2.3)    $e_y = \|\cdot - y\|^2$,

(2.4)    $e^v = \sum_{i=1}^n \|\cdot - v_i\|^2 B_i$,

(2.5)    $e^v + e_y = \sum_{i=1}^n e_y(v_i) B_i$.

Furthermore, denoting by

(2.6)    $B^{sb} =: \{x \in \mathbb{R}^d : \|x - x^{sb}\| \le r^{sb}\}$
the smallest ball that contains $P$, we have

(2.7)    $e^v(x) \le (r^{sb})^2 - \|x - x^{sb}\|^2 \le (r^{sb})^2$

for all $x \in P$. For notational simplicity, we write

(2.8)    $\Lambda^{sb} := \Lambda_y$ and $e^{sb} := e_y$ if $y = x^{sb}$.

Proof. From the definition of the functions $e_y$ and $e^v$, it is clear that they vanish at the interpolation points of $\Lambda_y$ and $\Lambda^v$, respectively. Since $\|\cdot\|^2$ is a convex function, statements (iii) of Lemmas 2.1 and 2.2 show that $e_y$ and $e^v$ are non-negative. Next, from the definition of $e_y$, we deduce that

$e_y(x) = \|x\|^2 - \big(\|y\|^2 + 2\langle y, x - y\rangle\big) = \|x\|^2 + \|y\|^2 - 2\langle y, x\rangle = \|x - y\|^2$,

which is (2.3). Since $e^v + e_y$ belongs to $R_n$, statement (ii) of Lemma 2.2 shows that, for any $x \in P$, we have

(2.9)    $e^v(x) + e_y(x) = \sum_{i=1}^n \big(e^v(v_i) + e_y(v_i)\big) B_i(x) = \sum_{i=1}^n e_y(v_i) B_i(x)$,

which is (2.5). Substituting $y = x$ in (2.9) and using (2.3), we obtain (2.4). For a proof of (2.7), we first note that $x^{sb} \in P$, as a consequence of the convexity of $P$. Since

(2.10)    $h^{sb} := (r^{sb})^2 - \|\cdot - x^{sb}\|^2$

is non-negative on $P$, while $e^v$ vanishes at all the vertices of $P$, we clearly have

$h^{sb}(v_i) - e^v(v_i) \ge 0 \qquad (i = 1, \dots, n)$.

Therefore statement (iv) of Lemma 2.2 implies that $\Lambda^v[h^{sb} - e^v] \ge 0$. Furthermore, using (2.3), (2.5), and the notation (2.8), we find that

(2.11)    $h^{sb} - e^v = (r^{sb})^2 - e^{sb} - e^v = (r^{sb})^2 - \sum_{i=1}^n e^{sb}(v_i) B_i$,

which obviously belongs to $R_n$. Hence statement (ii) of Lemma 2.2 allows us to conclude that

(2.12)    $h^{sb} - e^v = \Lambda^v[h^{sb} - e^v] \ge 0$,

which gives (2.7) immediately.

Remark 2.4. Inequality (2.7) is of interest for the following reason. As we shall see, the best constants in our error estimates for $\Lambda^v[f]$ are determined by norms of $e^v$. If $e^v$ is complicated, then we may use the simpler function (2.10) instead and obtain a constant which is possibly somewhat worse, but which may still be good enough for practical applications. In the case where $P$ is a simplex, it can even be shown that

$\sup_{x \in P} e^v(x) = \sup_{x \in P} h^{sb}(x) = (r^{sb})^2$;

see [6, Lemma 4.2].
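Inequality (2.7) can be probed numerically. The sketch below (our own example, not taken from the paper) uses the unit square with its bilinear basis functions; its smallest enclosing ball has center $(1/2, 1/2)$ and squared radius $1/2$.

```python
import numpy as np

# Vertices of the unit square and its bilinear basis functions
# (an illustrative example of ours).
V = [(0, 0), (1, 0), (0, 1), (1, 1)]

def B(i, x, y):
    vx, vy = V[i]
    return (x if vx else 1.0 - x) * (y if vy else 1.0 - y)

def e_v(x, y):
    """e^v = Lambda^v[|.|^2] - |.|^2, as in (2.2)."""
    s = sum((vx**2 + vy**2) * B(i, x, y) for i, (vx, vy) in enumerate(V))
    return s - (x**2 + y**2)

# Smallest ball containing the square: center (1/2, 1/2), radius^2 = 1/2.
cx, cy, r2 = 0.5, 0.5, 0.5
for x in np.linspace(0.0, 1.0, 11):
    for y in np.linspace(0.0, 1.0, 11):
        h_sb = r2 - (x - cx)**2 - (y - cy)**2
        assert e_v(x, y) <= h_sb + 1e-12        # inequality (2.7)
```

For the square all four vertices lie on the boundary of $B^{sb}$, and a short computation shows that here $e^v(x) = x_1(1 - x_1) + x_2(1 - x_2)$ actually coincides with $h^{sb}$, so the first inequality in (2.7) holds with equality in this particular example.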
3. Approximation of Functions. We are mainly interested in the approximation of functions from $C^2(P)$. However, in the case where $P$ is a simplex, Stämpfle [6, Theorem 4.1, statements (i)–(iv)] also presented results for functions belonging to lower regularity classes. These statements extend to $\Lambda^v$ by exactly the same arguments as in [6]. We only mention a result for a Lipschitz class which is more general than the one considered in [6]. For $\alpha \in (0, 1]$ and $L > 0$, we write $f \in \mathrm{Lip}_L(\alpha, P)$ and say that $f$ satisfies a Lipschitz condition of order $\alpha$ with Lipschitz constant $L$ on $P$ if $f \in C(P)$ and

$|f(x) - f(y)| \le L\|x - y\|^\alpha \qquad (x, y \in P)$.

Theorem 3.1. Let $f \in \mathrm{Lip}_L(\alpha, P)$. Then

(3.1)    $|f(x) - \Lambda^v[f](x)| \le L\big(e^v(x)\big)^{\alpha/2} \qquad (x \in P)$

and, for each $p \in [1, \infty]$,

(3.2)    $\big\|f - \Lambda^v[f]\big\|_p \le L\,\big\|(e^v)^{\alpha/2}\big\|_p$.

Proof. From (1.2) and the definition of $\Lambda^v$, it is clear that

$f(x) - \Lambda^v[f](x) = \sum_{i=1}^n \big(f(x) - f(v_i)\big) B_i(x)$,

and so, by the triangle inequality and the Lipschitz condition,

(3.3)    $|f(x) - \Lambda^v[f](x)| \le L \sum_{i=1}^n \|x - v_i\|^\alpha B_i(x)$.

Next, using Hölder's inequality with $p := 2/\alpha$ and $q := 2/(2 - \alpha)$, which is an admissible pair of exponents, and recalling (1.2) and (2.4), we find that

$\sum_{i=1}^n \|x - v_i\|^\alpha B_i(x) = \sum_{i=1}^n \|x - v_i\|^\alpha B_i(x)^{1/p} \cdot B_i(x)^{1/q} \le \left(\sum_{i=1}^n \|x - v_i\|^{\alpha p} B_i(x)\right)^{1/p} \left(\sum_{i=1}^n B_i(x)\right)^{1/q} = \left(\sum_{i=1}^n \|x - v_i\|^2 B_i(x)\right)^{\alpha/2} = \big(e^v(x)\big)^{\alpha/2}$.

Combining this with (3.3), we obtain (3.1). Clearly, (3.2) is an immediate consequence of (3.1).

For twice differentiable functions $f : P \to \mathbb{R}$, we denote by

$H[f](x) := \left(\frac{\partial^2 f(x)}{\partial x_i \partial x_j}\right)_{i,j=1,\dots,d}$
the Hessian matrix of $f$ at $x$ and introduce

(3.4)    $\|D^2 f\| := \sup_{x \in P}\ \sup_{y \in \mathbb{R}^d,\ \|y\| = 1} \big|y^\top H[f](x)\, y\big|$,

agreeing that the elements of $\mathbb{R}^d$ are column vectors, so that $y^\top$, which denotes the transpose of $y$, becomes a row vector. Clearly, $\|D^2 f\| = 0$ for $f \in P_1$ and $\|D^2 f\| = 2|c|$ for $f = c\|\cdot\|^2$. Subsequently, we shall often refer to the space

(3.5)    $F_2 := \big\{f := \lambda + c\|\cdot\|^2 : \lambda \in P_1,\ c \in \mathbb{R}\big\}$.

The following theorem for $\Lambda_y$ is no more than an easy exercise in calculus. We formulate it as a theorem only in order to compare it with the corresponding result for $\Lambda^v$.

Theorem 3.2. Let $f \in C^2(P)$. Then

(3.6)    $|f(x) - \Lambda_y[f](x)| \le \frac{1}{2}\|x - y\|^2 \|D^2 f\| \qquad (x, y \in P)$.

Furthermore, for each $p \in [1, \infty]$,

(3.7)    $\big\|f - \Lambda_y[f]\big\|_p \le c_{y,p} \|D^2 f\|$,

where

(3.8)    $c_{y,p} := \frac{1}{2}\|e_y\|_p$.

Both inequalities are sharp. Equality is attained for every $f \in F_2$.

Proof. By the Taylor formula of order two, we have

$f(x) - \Lambda_y[f](x) = \frac{1}{2}(x - y)^\top H[f]\big(y + \theta(x - y)\big)(x - y)$

for some $\theta \in (0, 1)$. Now the definition of $\|D^2 f\|$, given in (3.4), shows that (3.6) holds. Inequality (3.7) is an immediate consequence of (3.6). Finally, the case of equality is easily verified.
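A quick numerical sanity check of (3.6) in one dimension (our own sketch, with $f = \exp$ on $[0, 1]$, so that $\|D^2 f\| = e$):

```python
import numpy as np

a, b = 0.0, 1.0
f, df = np.exp, np.exp
D2f = np.e                              # sup of |f''| = exp on [0, 1]

xs = np.linspace(a, b, 201)
for y in (0.0, 0.3, 0.5, 1.0):
    tangent = f(y) + df(y) * (xs - y)   # Lambda_y[f], cf. (1.8)
    err = np.abs(f(xs) - tangent)
    assert np.all(err <= 0.5 * (xs - y)**2 * D2f + 1e-12)   # bound (3.6)
```

The tangent-line error is quadratic in the distance from the interpolation point, exactly as the pointwise bound (3.6) predicts.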
Theorem 3.3. Let $f \in C^2(P)$. Then

(3.9)    $|f(x) - \Lambda^v[f](x)| \le \frac{1}{2} e^v(x) \|D^2 f\| \qquad (x \in P)$.

Furthermore, for each $p \in [1, \infty]$,

(3.10)    $\big\|f - \Lambda^v[f]\big\|_p \le c^v_p \|D^2 f\|$,

where

(3.11)    $c^v_p := \frac{1}{2}\|e^v\|_p$.

Both inequalities are sharp. Equality is attained for every $f \in F_2$.
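Before turning to the proof, a one-dimensional sanity check of (3.9) (our own sketch): on an interval $[a, b]$ one computes from (2.4) that $e^v(x) = (x - a)(b - x)$, and we test the bound for $f = \sin$.

```python
import numpy as np

a, b = 0.0, 2.0
f = np.sin
D2f = 1.0                                   # sup |f''| = sup |sin| <= 1 on [0, 2]

xs = np.linspace(a, b, 201)
secant = ((b - xs) * f(a) + (xs - a) * f(b)) / (b - a)   # Lambda^v[f]
e_v = (xs - a) * (b - xs)                   # e^v for an interval
assert np.all(np.abs(f(xs) - secant) <= 0.5 * e_v * D2f + 1e-12)   # (3.9)
```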
Proof. Inequality (3.6) may be rewritten as

(3.12)    $-\frac{1}{2}\|x - y\|^2 \|D^2 f\| \le f(x) - \Lambda_y[f](x) \le \frac{1}{2}\|x - y\|^2 \|D^2 f\| \qquad (x, y \in P)$.

Next, from statement (iv) of Lemma 2.2, it follows that inequalities between continuous functions on $P$ are preserved when the operator $\Lambda^v$ is applied on both sides. Moreover, statement (i) of Lemma 2.1 together with statement (ii) of Lemma 2.2 shows that $\Lambda^v\big[\Lambda_y[f]\big] = \Lambda_y[f]$. Hence (3.12) implies that

$-\frac{1}{2}\Lambda^v\big[\|\cdot - y\|^2\big](x) \|D^2 f\| \le \Lambda^v[f](x) - \Lambda_y[f](x) \le \frac{1}{2}\Lambda^v\big[\|\cdot - y\|^2\big](x) \|D^2 f\|$.

Now, taking $y = x$ and noting that $\Lambda_x[f](x) = f(x)$ and, by (2.4),

$\Lambda^v\big[\|\cdot - x\|^2\big](x) = \sum_{i=1}^n \|v_i - x\|^2 B_i(x) = e^v(x)$,

we obtain

$-\frac{1}{2} e^v(x) \|D^2 f\| \le \Lambda^v[f](x) - f(x) \le \frac{1}{2} e^v(x) \|D^2 f\|$,

which is equivalent to (3.9). Inequality (3.10) is an immediate consequence of (3.9). The statement on the occurrence of equality is easily verified by a calculation.

Remark 3.4. Since $\Lambda^v$ is a positive operator which reproduces affine functions, inequality (3.9) can also be deduced from [9, Theorem 1.4] in conjunction with Lemma 2.3 above.

The operator $\Lambda_y$ has just one interpolation point, which is of multiplicity two. Such an interpolation can be described by $d + 1$ scalar equations. The interpolation of the operator $\Lambda^v$, which has $n$ simple interpolation points, can be described by $n$ scalar equations. Since $n \ge d + 1$, we may expect that the operator $\Lambda^v$ is at least as precise as $\Lambda_y$. In the following proposition, we compare the constants (3.8) and (3.11) when $p = \infty$.

Proposition 3.5. For $p = \infty$, the constants (3.8) and (3.11) satisfy the relations

(3.13)    $c^v_\infty \le c_{y,\infty} \qquad (y \in P)$

and

(3.14)    $\inf_{y \in P} c_{y,\infty} = \frac{(r^{sb})^2}{2}$,

the infimum being attained for $y = x^{sb}$, where $r^{sb}$ and $x^{sb}$ specify the smallest ball $B^{sb}$ which contains $P$, as introduced in (2.6). If all the vertices of $P$ lie on the boundary of $B^{sb}$, then

(3.15)    $c^v_\infty = \frac{(r^{sb})^2}{2}$.
Proof. It follows from (2.5) that

(3.16)    $e^v(x) \le \sum_{i=1}^n e_y(v_i) B_i(x) \le \max_{1 \le i \le n} e_y(v_i) \qquad (x \in P)$,

which implies (3.13). Since a convex function, defined on a convex set, attains its supremum at an extreme point (see for example [4, p. 91]), we have

(3.17)    $\max_{1 \le i \le n} e_y(v_i) = \sup_{x \in P} e_y(x) = 2 c_{y,\infty}$.

This shows that $c_{y,\infty}$ attains its smallest value at a point where

$\phi(y) := \max_{1 \le i \le n} e_y(v_i) = \max_{1 \le i \le n} \|y - v_i\|^2$

attains its minimum. Clearly, this is the center of the smallest ball $B^{sb}$ that contains $P$, and so

$\min_{y \in P} \phi(y) = \phi(x^{sb}) = (r^{sb})^2$.

Thus (3.14) is verified. If all the vertices of $P$ lie on the boundary of $B^{sb}$, then $\|x^{sb} - v_i\| = r^{sb}$ for $i = 1, \dots, n$. Therefore, by (2.4),

$e^v(x^{sb}) = \sum_{i=1}^n \|x^{sb} - v_i\|^2 B_i(x^{sb}) = (r^{sb})^2 \sum_{i=1}^n B_i(x^{sb}) = (r^{sb})^2$,

which shows that $c^v_\infty \ge (r^{sb})^2 / 2$. Combining this inequality with (3.13) and (3.14), we obtain (3.15).

In the univariate case, where $P$ is an interval $[a, b]$, it is known and also seen from (3.15) that, for $y = (a + b)/2$, we have

$c^v_\infty = c_{y,\infty} = \frac{(b - a)^2}{8}$.
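The sharpness for $f \in F_2$ can be seen concretely (our own sketch): for $f = \|\cdot\|^2$, i.e. $f(x) = x^2$ on $[a, b]$, one has $\|D^2 f\| = 2$, and both sup-norm error bounds are attained.

```python
import numpy as np

a, b = 0.0, 1.0
m = 0.5 * (a + b)                       # x^sb = midpoint of the interval
xs = np.linspace(a, b, 100001)
f = xs**2                               # f = |.|^2, so ||D^2 f|| = 2

tangent = m**2 + 2 * m * (xs - m)       # Lambda_y[f] with y = midpoint
secant  = ((b - xs) * a**2 + (xs - a) * b**2) / (b - a)   # Lambda^v[f]

bound = (b - a)**2 / 8 * 2              # c_inf * ||D^2 f|| = (b-a)^2/4
assert np.isclose(np.max(np.abs(f - tangent)), bound, rtol=1e-6)
assert np.isclose(np.max(np.abs(f - secant)),  bound, rtol=1e-6)
```

The tangent error $(x - m)^2$ peaks at the endpoints and the secant error $(x - a)(b - x)$ at the midpoint, both with the same value $(b - a)^2 / 4$.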
Moreover, the mean value

$\frac{1}{2}\big(\Lambda_y[f] + \Lambda^v[f]\big) \qquad \left(y = \frac{a + b}{2}\right)$

gives an approximation whose constant in the error bound is $(b - a)^2 / 16$. A generalization is given in the following proposition.

Proposition 3.6. Let $f \in C^2(P)$. Then, for every $y \in P$ and $\alpha \in [0, 1]$, we have

(3.18)    $\big\|f - \alpha \Lambda_y[f] - (1 - \alpha)\Lambda^v[f]\big\|_\infty \le c(\alpha, y) \|D^2 f\|$,

where

(3.19)    $c(\alpha, y) := \frac{1}{2} \sup_{x \in P} \big(\alpha e_y(x) + (1 - \alpha) e^v(x)\big)$.
Furthermore,

(3.20)    $\inf_{0 \le \alpha \le 1}\ \inf_{y \in P} c(\alpha, y) \le \frac{(r^{sb})^2}{4} = c\big(\tfrac{1}{2}, x^{sb}\big)$,

where $r^{sb}$ and $x^{sb}$ are the radius and the center of the smallest ball $B^{sb}$ that contains $P$. Equality occurs in (3.20) if all the vertices of $P$ lie on the boundary of $B^{sb}$. In this case, inequality (3.18) is sharp when $\alpha = 1/2$ and $y = x^{sb}$, and equality is attained for every function $f \in F_2$.

Proof. The estimates (3.6) and (3.9) may be rewritten as

$-\frac{1}{2} e_y(x) \|D^2 f\| \le f(x) - \Lambda_y[f](x) \le \frac{1}{2} e_y(x) \|D^2 f\|$

and

$-\frac{1}{2} e^v(x) \|D^2 f\| \le f(x) - \Lambda^v[f](x) \le \frac{1}{2} e^v(x) \|D^2 f\|$.

Multiplying the first inequalities by $\alpha$ and the second by $1 - \alpha$, and adding the results, we obtain

$\big|f(x) - \alpha \Lambda_y[f](x) - (1 - \alpha)\Lambda^v[f](x)\big| \le \frac{1}{2}\big(\alpha e_y(x) + (1 - \alpha) e^v(x)\big) \|D^2 f\|$.

This implies (3.18). Next, using (2.5) and the notation (2.8), we find that

$c\big(\tfrac{1}{2}, x^{sb}\big) = \frac{1}{4} \sup_{x \in P} \big(e^v(x) + e^{sb}(x)\big) = \frac{1}{4} \sup_{x \in P} \sum_{i=1}^n \|x^{sb} - v_i\|^2 B_i(x)$.

If $v_j$ is a vertex on the boundary of $B^{sb}$, then, by (1.1), (1.3), (2.11), and (2.12),

$\sum_{i=1}^n \|x^{sb} - v_i\|^2 B_i(v_j) = \|x^{sb} - v_j\|^2 = (r^{sb})^2 \ge \sum_{i=1}^n \|x^{sb} - v_i\|^2 B_i(x)$

for all $x \in P$. This shows that

$\sup_{x \in P} \sum_{i=1}^n \|x^{sb} - v_i\|^2 B_i(x) = (r^{sb})^2$

and completes the proof of (3.20). Using (3.19), we deduce that

$c(\alpha, y) \ge \frac{1 - \alpha}{2} \sup_{x \in P} e^v(x) = (1 - \alpha) c^v_\infty \ge \frac{c^v_\infty}{2} \qquad \text{if } \alpha \in [0, \tfrac{1}{2}]$

and, in conjunction with (3.14),

$c(\alpha, y) \ge \frac{\alpha}{2} \sup_{x \in P} e_y(x) = \alpha c_{y,\infty} \ge \frac{(r^{sb})^2}{4} \qquad \text{if } \alpha \in [\tfrac{1}{2}, 1]$.
Under the hypothesis that all the vertices of $P$ lie on the boundary of $B^{sb}$, we know from Proposition 3.5 that

$c^v_\infty = \frac{(r^{sb})^2}{2}$.

Hence

$c(\alpha, y) \ge \frac{(r^{sb})^2}{4} \qquad (\alpha \in [0, 1],\ y \in P)$,

which shows that equality occurs in (3.20). Finally, we have to verify the statement on the occurrence of equality for functions $f$ from the class $F_2$. For this, it is clearly enough to consider the function $f := \|\cdot\|^2$ only. Using the notation (2.8), we may rewrite (2.1) and (2.2) as

$f(x) - \Lambda^{sb}[f](x) = e^{sb}(x), \qquad f(x) - \Lambda^v[f](x) = -e^v(x)$.

Therefore,

$f(x) - \frac{1}{2}\Lambda^{sb}[f](x) - \frac{1}{2}\Lambda^v[f](x) = \frac{1}{2}\big(e^{sb}(x) - e^v(x)\big)$

and consequently,

$\left\|f - \frac{1}{2}\Lambda^{sb}[f] - \frac{1}{2}\Lambda^v[f]\right\|_\infty = \frac{1}{2} \sup_{x \in P} \big|e^{sb}(x) - e^v(x)\big|$.

If all the vertices of $P$ lie on the boundary of $B^{sb}$, then

$\sup_{x \in P} \big|e^{sb}(x) - e^v(x)\big| \ge \big|e^{sb}(x^{sb}) - e^v(x^{sb})\big| = e^v(x^{sb}) = (r^{sb})^2$,

where the last equation follows from (2.4) and (1.2), and so

$\left\|f - \frac{1}{2}\Lambda^{sb}[f] - \frac{1}{2}\Lambda^v[f]\right\|_\infty \ge \frac{(r^{sb})^2}{2}$.

On the other hand, (3.18) and (3.20) show that

$\left\|f - \frac{1}{2}\Lambda^{sb}[f] - \frac{1}{2}\Lambda^v[f]\right\|_\infty \le \frac{(r^{sb})^2}{2}$.

Hence equality occurs for $f = \|\cdot\|^2$.

4. Approximation of Linear Functionals. In the case $p = 1$, Theorems 3.1–3.3 provide an approximation of $L(f)$, defined in (1.4), by the values of $f$ (and possibly of $Df$) at the interpolation points of $\Lambda_y$ and $\Lambda^v$, respectively. Indeed, if $\Lambda$ is any of the two operators $\Lambda_y$ and $\Lambda^v$, and $I(f) := L(\Lambda[f])$, then, using that $L$ is linear and positive, we have

$|L(f) - I(f)| = |L(f - \Lambda[f])| \le L\big(|f - \Lambda[f]|\big) = \big\|f - \Lambda[f]\big\|_1$.
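In the univariate case with $L(f) = \int_a^b f(x)\,dx$, the two resulting rules $I(f) = L(\Lambda[f])$ are the classical midpoint and trapezoidal rules. A sketch (our own illustration; the error constants $(b-a)^3/24$ and $(b-a)^3/12$ used below are the classical ones, which Section 5.1 recovers as $c^{cm}_1$ and $c^v_1$):

```python
import numpy as np

a, b = 0.0, 1.0
f = np.exp
L_f = np.e - 1.0                        # exact value of L(f) = int_0^1 exp

I_mid  = (b - a) * f(0.5 * (a + b))     # L(Lambda_y[f]) with y = midpoint
I_trap = 0.5 * (b - a) * (f(a) + f(b))  # L(Lambda^v[f])

D2f = np.e                              # sup |f''| on [0, 1]
assert abs(L_f - I_mid)  <= (b - a)**3 / 24 * D2f   # midpoint-rule bound
assert abs(L_f - I_trap) <= (b - a)**3 / 12 * D2f   # trapezoidal-rule bound
# for convex f, the two rules bracket L(f) from below and above
assert I_mid <= L_f <= I_trap
```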
Let us now turn to details. Denoting by $\mathrm{id}$ the identity mapping on $P$, which maps $P$ into $\mathbb{R}^d$ so that $L(\mathrm{id}) \in \mathbb{R}^d$ is defined by applying $L$ componentwise, we shall consider the operators

(4.1)    $I_y(f) := L(\Lambda_y[f]) = L(1)\left[f(y) + Df(y)\left(\frac{L(\mathrm{id})}{L(1)} - y\right)\right]$

and

(4.2)    $I^v(f) := L(\Lambda^v[f]) = \sum_{i=1}^n f(v_i) L(B_i)$.

In the case $p = 1$, the constants (3.8) and (3.11) can be expressed as

(4.3)    $c_{y,1} = \frac{1}{2} L(e_y)$ and $c^v_1 = \frac{1}{2}\big(I^v(e_y) - L(e_y)\big)$.

Note that the last expression, which is deduced with the help of (2.5), is independent of $y$. Now Theorems 3.2 and 3.3 imply the following corollaries.

Corollary 4.1. Let $f \in C^2(P)$. Then, for any $y \in P$, we have

$|L(f) - I_y(f)| \le \frac{L(e_y)}{2} \|D^2 f\|$.

Equality is attained for every $f \in F_2$.

Corollary 4.2. Let $f \in C^2(P)$. Then, for any $y \in P$, we have

$|L(f) - I^v(f)| \le \frac{I^v(e_y) - L(e_y)}{2} \|D^2 f\|$.

Equality is attained for every $f \in F_2$.

Remark 4.3. The conclusions of Corollaries 4.1 and 4.2 can be refined when, in addition, $f$ is known to be a convex function. In fact, in this case, we also have

$I_y(f) \le L(f) \le I^v(f)$

as a consequence of statements (iii) of Lemmas 2.1 and 2.2.

The "cubature rule" $I^v(f)$ may be seen as a multivariate analogue of the trapezoidal rule. As (4.1) shows, the "cubature rule" $I_y(f)$ simplifies and does not depend on $Df$ if $y$ is chosen as

$x^{cm} := \frac{L(\mathrm{id})}{L(1)}$.

In this case, $I_y(f)$ is a multivariate analogue of the midpoint rule. The point $x^{cm}$ will be called the center of mass of $P$ with respect to the functional $L$. Note that $x^{cm}$ always belongs to $P$. Indeed, if $x^{cm}$ were outside $P$, then there would exist a separating hyperplane $\lambda(x) := a + \langle b, x\rangle = 0$, where $a \in \mathbb{R}$ and $b \in \mathbb{R}^d$, such that $\lambda(x) > 0$ for $x \in P$ and $\lambda(x^{cm}) < 0$. Since $L$ is positive, we would have $L(\lambda) > 0$. On the other hand, the linearity of $L$ implies that

$L(\lambda) = a L(1) + \langle b, L(\mathrm{id})\rangle = a L(1) + \langle b, L(1) x^{cm}\rangle = L(1)\lambda(x^{cm}) < 0$,
which is a contradiction. For notational simplicity, we now write

(4.4)    $\Lambda^{cm} := \Lambda_y$, $\ I^{cm} := I_y$, $\ e^{cm} := e_y$, $\ c^{cm}_p := c_{y,p}$ if $y = x^{cm}$.

Since

$e_y(x) = \|x - y\|^2 = \|x - x^{cm}\|^2 + \|x^{cm} - y\|^2 + 2\langle x - x^{cm}, x^{cm} - y\rangle$,

we find, using the definition of $x^{cm}$, that

$2 c_{y,1} = L(e_y) = L(e^{cm}) + L(1)\|x^{cm} - y\|^2$.

This shows that the constant in the error estimate of Corollary 4.1 becomes smallest if and only if $y = x^{cm}$.

Remark 4.4. It may be interesting to compare the operators $I^{cm}$ and $I^v$. Recalling that $c^v_1$ in (4.3) does not depend on $y$, we may take $y = x^{cm}$. Then Corollaries 4.1 and 4.2 show that the quotient

(4.5)    $\kappa := \frac{L(e^{cm})}{I^v(e^{cm})}$

indicates which one of the two operators $I^{cm}$ and $I^v$ has the smaller constant in its error estimate. We see that $c^{cm}_1 < c^v_1$ if and only if $\kappa \in (0, 1/2)$. Since, for convex functions, $I^v$ approximates $L$ from above, we always have $\kappa \in (0, 1)$. In all the standard examples considered by us, we found that $\kappa \in (0, 1/2)$. However, $\kappa \in [1/2, 1)$ will occur when $L$ is of the form (1.4) and the weight function $w$ is large near the vertices.

5. Examples. We illustrate our results by considering three special classes of convex polytopes for which interpolation and approximation problems have been studied in the literature.

5.1. Intervals (the univariate case). Let $d := 1$, $P := [a, b]$, and $L(f) := \int_a^b f(x)\,dx$. Then $x^{sb} = x^{cm} = \frac{1}{2}(a + b)$,

$\Lambda^{cm}[f](x) = f\left(\frac{a + b}{2}\right) + f'\left(\frac{a + b}{2}\right)\left(x - \frac{a + b}{2}\right)$,

and

$\Lambda^v[f](x) = \frac{b - x}{b - a} f(a) + \frac{x - a}{b - a} f(b)$.

Moreover, $\|D^2 f\| = \sup_{a \le x \le b} |f''(x)|$. For the constants (3.8) with $y = x^{cm}$ and (3.11), we find that

$c^{cm}_p = \frac{1}{2}\left(\frac{(b - a)^{2p+1}}{2^{2p}(2p + 1)}\right)^{1/p} \qquad (1 \le p < \infty)$

and

$c^v_p = \frac{1}{2}\big(B(p + 1, p + 1)(b - a)^{2p+1}\big)^{1/p} \qquad (1 \le p < \infty)$,
where

$B(s, t) := \int_0^1 x^{s-1}(1 - x)^{t-1}\,dx$

is the beta function. Furthermore,

$c^{cm}_\infty = c^v_\infty = \frac{(b - a)^2}{8}$.
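The two families of constants can be evaluated directly (our own sketch; the beta function is computed via the gamma function):

```python
import math

def beta(s, t):
    """Beta function via the gamma function."""
    return math.gamma(s) * math.gamma(t) / math.gamma(s + t)

def c_cm(p, a, b):          # the displayed formula for c^cm_p
    return 0.5 * ((b - a)**(2 * p + 1) / (2**(2 * p) * (2 * p + 1)))**(1.0 / p)

def c_v(p, a, b):           # the displayed formula for c^v_p
    return 0.5 * (beta(p + 1, p + 1) * (b - a)**(2 * p + 1))**(1.0 / p)

a, b = 0.0, 1.0
assert math.isclose(c_cm(1, a, b), 1.0 / 24.0)
assert math.isclose(c_v(1, a, b), 1.0 / 12.0)
assert math.isclose(c_v(1, a, b) / c_cm(1, a, b), 2.0)   # trapezoid vs midpoint
for p in (1, 2, 3, 5, 10):
    assert c_cm(p, a, b) < c_v(p, a, b)
```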
It can be shown that $c^{cm}_p < c^v_p$ for $1 \le p < \infty$. In particular, $c^v_1 / c^{cm}_1 = 2$, which expresses the well-known fact that the constant in the error term of the trapezoidal rule is twice as large as that of the midpoint rule.

5.2. Multidimensional simplices. Let $S \subset \mathbb{R}^d$ be a non-degenerate simplex with vertices $v_0, \dots, v_d$. The uniquely determined rational basis functions $B_0, \dots, B_d$ of minimal degree are the classical barycentric coordinates, which may be constructed as follows. Let $\lambda_i(x) = 0$ be the equation of a hyperplane that contains all the vertices of $S$ other than $v_i$. Then

$B_i(x) = \frac{\lambda_i(x)}{\lambda_i(v_i)} \qquad (i = 0, \dots, d)$.

For $L(f) := \int_S f(x)\,dx$, we obtain

$x^{cm} = \frac{1}{|S|} \int_S x\,dx = \frac{1}{d + 1} \sum_{i=0}^d v_i$,

where we write $|S|$ for the $d$-dimensional volume of $S$. This gives a representation of $e^{cm}$ in terms of the vertices, which, via (4.4) and (3.17), leads us to

$c^{cm}_\infty = \frac{1}{2(d + 1)^2} \max_{0 \le i \le d} \left\|\sum_{j=0}^d (v_i - v_j)\right\|^2$.

Since the basis functions $B_i$ belong to $P_1$, the function $e^v$, defined in (2.2), is now of the form $e^v = \lambda - \|\cdot\|^2$, where $\lambda \in P_1$. Therefore $e^v(x) = 0$ is the equation of the uniquely defined sphere that contains all the vertices of $S$ (see, e.g., Stämpfle [6, Proposition 3.1]). Thus $e^v$ can be represented as

$e^v(x) = \hat{r}^2 - \|x - \hat{x}\|^2$

for some $\hat{r} > 0$ and $\hat{x} \in \mathbb{R}^d$. The case of the approximation by $\Lambda^v$ with respect to the norm $\|\cdot\|_\infty$ is covered by the papers of Waldron [8, Theorem 2.1] and Stämpfle [6, Theorem 4.1]; see also de Boor [1]. Clearly, $c^v_\infty = \hat{r}^2 / 2$ when $\hat{x} \in S$. Otherwise, it can be shown that $c^v_\infty = \frac{1}{2}(\hat{r}^2 - \rho^2)$, where $\rho$ is the distance of $\hat{x}$ from $S$. Geometrically, $2 c^v_\infty$ may be interpreted as the square of the radius of the smallest ball that contains $S$ (see [6, Lemma 4.2]).

For the standard unit simplex of dimension $d \ge 2$, a straightforward calculation gives

$c^{cm}_\infty = \frac{d^2 + d - 1}{2(d + 1)^2} \qquad \text{and} \qquad c^v_\infty = \frac{d - 1}{2d}$,
and so

$\frac{c^v_\infty}{c^{cm}_\infty} = 1 - \frac{1}{d(d^2 + d - 1)} < 1$.

For the calculation of the constants (4.3) for $y = x^{cm}$, we first determine $L(e^{cm})$ with the help of a cubature rule which is exact for all polynomials of degree less than or equal to two, taken from the book of Stroud [7, p. 307, formula $T_n$: 2-2]. It gives

$L(e^{cm}) = \int_S e^{cm}(x)\,dx = \frac{(2 - d)|S|}{(d + 2)(d + 1)} \sum_{i=0}^d e^{cm}(v_i) + \frac{4|S|}{(d + 2)(d + 1)} \sum_{0 \le i < j \le d} e^{cm}(v_{ij})$,

where $v_{ij} = \frac{1}{2}(v_i + v_j)$. Simplifying the second sum by making use of the special form of $e^{cm}$, we arrive at

$L(e^{cm}) = \frac{|S|}{(d + 2)(d + 1)} \sum_{i=0}^d e^{cm}(v_i)$.

Since the basis functions $B_0, \dots, B_d$ belong to $P_1$, we conclude that

$L(B_i) = L(1) B_i(x^{cm}) = \frac{L(1)}{d + 1} = \frac{|S|}{d + 1} \qquad (i = 0, \dots, d)$

and therefore

$I^v(e^{cm}) = \frac{|S|}{d + 1} \sum_{i=0}^d e^{cm}(v_i)$.

Thus, by (4.3), the definition of $e^{cm}$ in (4.4), and the representation in (2.3), we have

$c^{cm}_1 = \frac{|S|}{2(d + 2)(d + 1)} \sum_{i=0}^d \|v_i - x^{cm}\|^2$

and

$c^v_1 = \frac{|S|}{2(d + 2)} \sum_{i=0}^d \|v_i - x^{cm}\|^2$.
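For the standard unit simplex with $d = 2$, the displayed constants can be checked by an elementary computation (our own sketch; the circumscribed circle of the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ has center $(1/2, 1/2)$, which lies in $S$, and squared radius $1/2$).

```python
import numpy as np

d = 2
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_cm = V.mean(axis=0)                          # centroid (1/3, 1/3)

# c^cm_inf = (1/2) max_i ||v_i - x^cm||^2, via (3.17) and (4.4)
c_cm_inf = 0.5 * max(float(np.sum((v - x_cm)**2)) for v in V)
assert np.isclose(c_cm_inf, (d**2 + d - 1) / (2 * (d + 1)**2))   # = 5/18

# Circumscribed circle: its center lies in S, so c^v_inf = r_hat^2 / 2.
x_hat, r2_hat = np.array([0.5, 0.5]), 0.5
assert np.allclose([np.sum((v - x_hat)**2) for v in V], r2_hat)
c_v_inf = 0.5 * r2_hat
assert np.isclose(c_v_inf, (d - 1) / (2 * d))                    # = 1/4
assert c_v_inf < c_cm_inf                      # consistent with the ratio < 1

# The p = 1 constants differ only by the factor (d + 1).
area = 0.5
s = sum(float(np.sum((v - x_cm)**2)) for v in V)
c_cm1 = area * s / (2 * (d + 2) * (d + 1))
c_v1  = area * s / (2 * (d + 2))
assert np.isclose(c_v1 / c_cm1, d + 1)
```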
These values for $c^{cm}_1$ and $c^v_1$ also follow from [3, Corollary 6.2, formulae (6.4) and (6.5)]. We see that $c^v_1 = (d + 1) c^{cm}_1$ and $\kappa = 1/(d + 2)$ in (4.5).

5.3. Hyperrectangles. Let $R := [a_1, b_1] \times \cdots \times [a_d, b_d]$ be a rectangle in $\mathbb{R}^d$ with vertices

$v_i := (v_{i1}, \dots, v_{id}) \qquad (i = 1, \dots, 2^d)$.

To each vertex $v_i$, there exists exactly one vertex of maximal distance, which we call the diametrically opposite vertex and which we denote by $\overline{v}_i := (\overline{v}_{i1}, \dots, \overline{v}_{id})$. Any
15
two vertices vi and vj have at least one common component unless they are a pair of diametrically opposite vertices. Therefore Bi (x) :=
d Y xj − v ij v − v ij j=1 ij
(i = 1, . . . , 2d ),
where x = (x1 , . . . , xd ), are the rational basis functions of smallest degree. They span d a polynomial space R of dimension 2 , which contains P1 as a subspace. For L(f ) := R f (x) dx, the center of mass is xcm =
1 (a1 + b1 , . . . , ad + bd ). 2
With this, we find that d
ecm (vi ) =
1X (ai − bi )2 =: (rcm )2 4 i=1
(i = 1, . . . , 2d ).
Therefore (2.5) implies that ecm (x) + ev (x) = (rcm )2 for all x. Consequently, sup ecm (x) = sup ev (x) = (rcm )2 , x∈R
x∈R
or equivalently, v ccm ∞ = c∞ =
(rcm )2 . 2
For determining the best constants in the case p = 1, we first calculate L(ecm ) =
Z
kx − xcm k2 dx =
R
where |R| =
Qd
i=1 (bi
(rcm )2 |R| , 3
− ai ), and note that I v (ecm ) = (rcm )2 |R| .
Hence (4.3) with y = xcm implies that ccm 1 =
(rcm )2 |R| 6
and
cv1 =
(rcm )2 |R| . 3
Thus, cv1 /ccm 1 = 2 and κ = 1/3, as in the univariate case. In the literature, analogues of the trapezoidal rule for hyperrectangles have been studied in the context of tensor product rules (see, e.g., [2, § 8.2]).
REFERENCES
[1] C. de Boor, Error in linear interpolation at the vertices of a simplex, WWW (1998), http://www.cs.wisc.edu/~deboor/multiint/.
[2] F.-J. Delvos and W. Schempp, Boolean Methods in Interpolation and Approximation, Longman Scientific & Technical, Essex, 1989.
[3] A. Guessab and G. Schmeisser, Convexity results and sharp error estimates in approximate multivariate integration, Math. Comp., 73 (2004), pp. 1365–1384.
[4] G. Hadley, Nonlinear and Dynamic Programming, Addison-Wesley, Reading, Massachusetts, 1964.
[5] A. W. Roberts and D. E. Varberg, Convex Functions, Academic Press, New York, 1973.
[6] M. Stämpfle, Optimal estimates for the linear interpolation error on simplices, J. Approx. Theory, 103 (2000), pp. 78–90.
[7] A. H. Stroud, Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, N.J., 1971.
[8] S. Waldron, The error in linear interpolation at the vertices of a simplex, SIAM J. Numer. Anal., 35 (1998), pp. 1191–1200.
[9] S. Waldron, Sharp error estimates for multivariate positive linear operators, in Approximation Theory IX, Volume I, C. K. Chui et al., eds., Vanderbilt University Press, Nashville, TN, 1998, pp. 339–346.
[10] J. Warren, Barycentric coordinates for convex polytopes, Adv. Comput. Math., 6 (1996), pp. 97–108.