The complexity of definite elliptic problems with noisy data

Technical Report CUCS-035-96

Arthur G. Werschulz
Department of Computer and Information Sciences, Fordham University, Fordham College at Lincoln Center, New York, NY 10023
and
Department of Computer Science, Columbia University, New York, NY 10023

September 26, 1996

Abstract. We study the complexity of 2mth order definite elliptic problems Lu = f (with homogeneous Dirichlet boundary conditions) over a d-dimensional domain Ω, error being measured in the H^m(Ω)-norm. The problem elements f belong to the unit ball of W^{r,p}(Ω), where p ∈ [2, ∞] and r > d/p. Information consists of (possibly-adaptive) noisy evaluations of f or the coefficients of L. The absolute error in each noisy evaluation is at most δ. We find that the nth minimal radius for this problem is proportional to n^{-r/d} + δ, and that a noisy finite element method with quadrature (FEMQ), which uses only function values, and not derivatives, is a minimal error algorithm. This noisy FEMQ can be efficiently implemented using multigrid techniques. Using these results, we find tight bounds on the ε-complexity (minimal cost of calculating an ε-approximation) for this problem, said bounds depending on the cost c(δ) of calculating a δ-noisy information value. As an example, if the cost of a δ-noisy evaluation is c(δ) = δ^{-s} (for s > 0), then the complexity is proportional to (1/ε)^{d/r+s}.
1. Introduction
The majority of research (see, e.g., [9]) in information-based complexity has concentrated on problems for which we have partial information that is exact. There has recently been a stream of work (much of which has been done by L. Plaskota, and is described in his monograph [7]) on the complexity of problems with partial information that is contaminated by noise. In this paper, we study the complexity of elliptic partial differential equations Lu = f with noisy partial information.

Most previous work (see, e.g., [10], [11], and [12], as well as the references cited therein) on the complexity of elliptic PDEs has assumed that we have complete information about the coefficients of L, and exact (but partial) information about the right-hand side f. As a typical result, consider the 2mth order elliptic boundary value problem Lu = f (with homogeneous Dirichlet boundary conditions), defined on a d-dimensional domain Ω. The right-hand sides f belong to the unit ball BW^{r,p}(Ω) of the Sobolev space W^{r,p}(Ω), so that they have r derivatives in the L_p sense. We require that p ∈ [2, ∞] and r > d/p. Error of an approximation is measured in the H^m(Ω)-norm. Information about a problem element f consists of the values of f (or some of its derivatives) at a finite number of points in Ω. Then the minimal error over all algorithms using at most n evaluations is Θ(n^{-r/d}). It then follows that the ε-complexity (i.e., the minimal cost of calculating an ε-approximation) is Θ((1/ε)^{d/r}). Moreover, a finite element method using quadrature (FEMQ), which only uses function values (and no derivatives), is optimal. The details for the special case p = 2 can be found in [10, Section 5.5]; the proof for the general case p ∈ [2, ∞] is not much different than that for this special case.

Of course, it is more realistic to assume that we have only partial information about the coefficients of L. This means that we are studying classes of elliptic Dirichlet problems L_a u = f. Here L_a is a linear elliptic operator of order 2m with coefficients a, defined on a d-dimensional domain Ω. The right-hand sides f once again belong to BW^{r,p}(Ω), and the coefficient vectors a now belong to a class A of functions. Note that our problem elements are now of the form [f; a]. Since the solution u = L_a^{-1} f depends nonlinearly on a, we are now dealing with a nonlinear problem.

There has been little work on the complexity of nonlinear problems arising in partial differential equations. One such result is the following, from [10, pp. 110-111]: Assume that we can compute f and the coefficients a of L_a (or their derivatives) at points in Ω. Then the nth minimal error is Θ(n^{-r/d}), this error being achieved by an FEMQ using n evaluations. Although [10] does not derive the complexity from this minimal error result, it is not too difficult to show that the ε-complexity is still Θ((1/ε)^{d/r}). Indeed, we can use multigrid techniques (see [2], especially Chapter 7) to get a sufficiently-good approximation to the FEMQ, in time proportional to the number of information evaluations used.

However, we can ask that the information be made even more realistic. So far, we have only dealt with the case of exact partial information about problem elements [f; a]. But in practice, these evaluations are contaminated by noise. In this paper, we study the complexity of elliptic problems in which we have noisy information about the coefficients of L_a and the function f. How does this change the problem complexity? What algorithms are optimal?

Note that Plaskota's monograph [7] on complexity and noisy information mainly deals with linear problems. Hence, we cannot directly apply the results of [7].

(This research was supported in part by the National Science Foundation under Grant CCR-95-00850.)
However, it turns out that we can obtain lower bounds by considering only problem elements [f; a] with fixed a, and then applying the ideas in [7]; we can get upper bounds by using some perturbation arguments, along with the results in [10, pp. 110-111].

We will slightly restrict the generality of the problem in two respects, mainly to simplify the exposition:
(1) We consider only definite elliptic problems. These are self-adjoint problems whose variational formulations involve strongly coercive bilinear forms.
(2) We measure error in the norm ‖·‖_{H^m(Ω)}, which is equivalent to the problem's natural energy norm.

Information about any particular [f; a] consists of a finite number of noisy samples. We can calculate approximate values of (some derivative of) either f or a coefficient of L_a at any point in Ω, the error in each approximate value being at most δ ≥ 0. In other words, let α be a multi-index (which tells us which derivative, possibly the zeroth, to evaluate) and let x be a point in Ω (at which we will evaluate). Rather than having an exact value of (D^α f)(x) or of (D^α a)(x), with a some coefficient appearing in L_a, we have a value y for which |y − (D^α f)(x)| ≤ δ or |y − (D^α a)(x)| ≤ δ, respectively. We assume that the noise level δ of all evaluations is the same. The extension of the results of this paper to include the case where the noise levels of evaluations vary is an open problem.

Let us outline the contents and results of this paper. In Section 2, we give a precise description of the class of problems to be solved, namely 2mth order elliptic problems over a d-dimensional domain, with problem elements of smoothness r. Next, we describe noisy information for this problem, said information being possibly-adaptive. We define algorithms using said information and the error of such algorithms. Finally, we describe our model of computation, which allows us to define the cost of an algorithm and the complexity of our problem. Note that since we are using
noisy information values, the cost c(δ) of calculating a noisy sample value will depend on δ; see [7, Section 2.9] for further discussion.

In Section 3, we prove a lower bound of Ω(n^{-r/d} + δ) for the nth minimal radius of δ-noisy information for this problem. This means that if we want to be able to calculate ε-approximations for arbitrarily small ε, we need to both increase n and decrease the noise level δ. Hence if we cannot decrease the noise level, then there is a cutoff error value ε₀ such that we can only calculate ε-approximations for ε ≥ ε₀.

Once we know a lower bound on the minimal radius, we want to find an algorithm whose error matches this bound. We describe the noisy finite element method with quadrature (FEMQ) in Section 4. Although we allow the evaluation of derivatives of problem elements, the noisy FEMQ only evaluates function values, and not higher-order derivatives. Furthermore, the FEMQ uses nonadaptive information, even though adaptive information is permissible. In Section 5, we show that the error of the FEMQ using n noisy samples is proportional to n^{-r/d} + δ when the parameters defining the noisy FEMQ are properly chosen. Thus the noisy FEMQ is a minimal error algorithm, and adaption is no stronger than nonadaption for our problem.

Note that the n-evaluation noisy FEMQ requires the solution of an n × n linear system G_a x = b, where G_a depends on the coefficients a of the differential operator and b depends on the right-hand side f. If we were only considering a single fixed operator L, then we could precompute the inverse (or LU-decomposition) of G_a, since this is independent of any problem element f. We could then ignore the cost of this precomputation, considering it as a fixed overhead, since it would only be done once. However, for the problems studied in this paper, not only do the right-hand sides f vary, but also the operators L_a, since we consider arbitrary [f; a] ∈ F.
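To see how the cost function c(δ) interacts with the error bound, consider the following rough numerical sketch. It is a hypothetical illustration, not part of the paper's formal development: it assumes the error model n^{-r/d} + δ and the polynomial cost c(δ) = δ^{-s} from the abstract, and brute-forces the cheapest admissible pair (n, δ) for a given target ε.

```python
import math

def min_cost(eps, r, d, s):
    """Cheapest n * c(delta) with n**(-r/d) + delta <= eps, c(delta) = delta**(-s)."""
    best = math.inf
    for t in [i / 100 for i in range(1, 100)]:    # split the budget: delta = t * eps
        delta = t * eps
        n = math.ceil((eps - delta) ** (-d / r))  # smallest admissible cardinality
        best = min(best, n * delta ** (-s))
    return best

# Halving eps should multiply the minimal cost by about 2**(d/r + s).
r, d, s = 2.0, 1.0, 1.0
ratio = min_cost(0.005, r, d, s) / min_cost(0.01, r, d, s)
print(ratio)  # near 2**1.5, consistent with cost growth (1/eps)**(d/r + s)
```

Note that the optimal noise level is a fixed fraction of ε, so neither very crude nor needlessly accurate samples are cost-effective under this model.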
This means that the factorization of G_a is no longer independent of the problem element considered, and so we cannot ignore its cost. We discuss the efficient implementation of the noisy FEMQ in Section 6. Using a multigrid technique, we can calculate an approximation to the noisy FEMQ solution. This multigrid approximation uses Θ(n) noisy evaluations and has error proportional to n^{-r/d} + δ. Moreover, we can calculate this approximation using Θ(n) arithmetic operations, which is optimal.

Finally, in Section 7, we determine the ε-complexity of our problem. Recall that c(δ) is the cost of calculating a δ-accurate function value. We find that

comp(ε) = Θ( inf_{0 < δ ≤ ε} c(δ) (1/ε)^{d/r} ).

2. Problem Description

Let m, d ∈ Z⁺⁺, let p ∈ [2, ∞], and let r > d/p. Let Ω ⊆ R^d be a given bounded, simply-connected region with ∂Ω ∈ C^{2m+r}. For sufficiently smooth v: Ω → R, we define the partial differential operator

(L_a v)(x) = Σ_{|α|,|β| ≤ m} (−1)^{|α|} D^α( a_{α,β}(x) D^β v(x) )  ∀ x ∈ Ω.
Here, a = [a_{α,β}]_{|α|,|β|≤m}, where the a_{α,β} are real-valued functions for |α|, |β| ≤ m. We will assume that a_{α,β} = a_{β,α} for all multi-indices α, β ∈ (Z⁺)^d, i.e., the elliptic operator L_a is formally self-adjoint. Associated with the operator L_a is the bilinear form

B_a(v, w) = ∫_Ω Σ_{|α|,|β|≤m} a_{α,β} D^α v D^β w

on H₀^m(Ω). We will be interested in elliptic Dirichlet problems. The classical formulation of such a problem is to find, for f: Ω → R, a function u: Ω → R such that

L_a u = f in Ω,
∂_ν^j u = 0 on ∂Ω (0 ≤ j ≤ m − 1),  (2.1)

with ∂_ν^j denoting the jth outward-oriented normal derivative. The variational formulation is to find, for f ∈ W^{r,p}(Ω), an element u ∈ H₀^m(Ω) such that

B_a(u, v) = ⟨f, v⟩_{L₂(Ω)}  ∀ v ∈ H₀^m(Ω).  (2.2)
We will let A denote a class of coefficient vectors, each giving an elliptic problem. More precisely, for given positive a₀, M, and γ, we will let A denote the class of all a such that the following conditions hold:
(1) The operators L_a are strongly elliptic in Ω, i.e.,

(−1)^m Σ_{|α|,|β|=m} a_{α,β}(x) ξ^{α+β} ≥ a₀ |ξ|^{2m}  ∀ x ∈ Ω, ∀ ξ ∈ R^d, ∀ a ∈ A.

(2) The coefficients of the operators L_a are bounded in the W^{r,∞}(Ω) sense, i.e.,

‖a_{α,β}‖_{W^{r,∞}(Ω)} ≤ M  ∀ |α|, |β| ≤ m, ∀ a ∈ A.

(3) The bilinear forms B_a are uniformly strongly H₀^m(Ω)-coercive, i.e.,

B_a(v, v) ≥ γ ‖v‖²_{H^m(Ω)}  ∀ v ∈ H₀^m(Ω), ∀ a ∈ A.  (2.3)
Roughly speaking, a ∈ A if (2.1) is a self-adjoint elliptic boundary value problem, the only novelty being that we require a "uniformity condition." Note that for the sake of simplicity, we have assumed that the coefficient vector a and the right-hand side f all have the same smoothness, i.e., the same number r of derivatives (in the Sobolev sense).

Our class of problem elements will be F = BW^{r,p}(Ω) × A. We define a solution operator S: F → H₀^m(Ω) by letting u = S([f; a]) iff u satisfies (2.2), i.e., u is the variational solution to the Dirichlet problem (2.1). The operator S is nonlinear. However, S([f; a]) depends nonlinearly only on a, i.e., for any fixed a, the operator S([·; a]) is a linear operator. Hence we may use the generalized Lax-Milgram Lemma ([1, pg. 112], [6, pg. 310]) to see that for any [f; a] ∈ F, there exists a unique solution u ∈ H₀^m(Ω) to (2.2). Hence, the solution operator S is well-defined.

We wish to calculate approximate solutions to this problem, using noisy standard information. To be specific, we will be using uniformly sup-norm-bounded noise. Our notation and terminology is that of [7] and [8]. Let δ ∈ [0, 1] be a noise level. For [f; a] ∈ F, we calculate δ-noisy information
N([f; a]) = y = [y₁, …, y_{n(y)}]  (2.4)

about [f; a], where for each index i ∈ {1, …, n(y)}, there exist a multi-index α(i) and a point x_i ∈ Ω such that either

|α(i)| < r − d/p and |y_i − (D^{α(i)} f)(x_i)| ≤ δ

or, for some multi-indices α and β of order at most m,

|α(i)| < r and |y_i − (D^{α(i)} a_{α,β})(x_i)| ≤ δ.

(The Sobolev embedding theorem guarantees that these derivatives are well-defined.) Note that for any i,
• whether to terminate at the ith step,
• the points x_i,
• the multi-indices α(i),
• the choice of whether to evaluate (a derivative of) the right-hand side f or a coefficient function a_{α,β}
may all be determined adaptively, depending on the previously-calculated y₁, …, y_{i−1}.

Let N([f; a]) denote the set of all such y, i.e., the set of all such noisy information about [f; a], and let Y = ∪_{[f;a]∈F} N([f; a]) denote the set of all possible noisy information values. Then an algorithm using the noisy information N is a mapping φ: Y → H₀^m(Ω).

We want to solve this problem in the worst case setting. This means that the cardinality of information N is given by

card N = sup_{y∈Y} n(y),

and the error of an algorithm φ using N is given by

e(φ, N) = sup_{[f;a]∈F} sup_{y∈N([f;a])} ‖S([f; a]) − φ(y)‖_{H^m(Ω)}.

Next, we describe our model of computation. We will use the model found in [7, Section 2.9]. Here are the most important features of this model:
(1) For any multi-index α, any point x ∈ Ω, and any function v defined on Ω, the cost of calculating a δ-noisy value of (D^α v)(x) is c(δ). Here, the cost function c: R⁺ → R⁺ is a nonincreasing function, with c(δ) > 0 for sufficiently small positive δ.
(2) Arithmetic operations and comparisons are done exactly, with unit cost.
(3) We are not charged for Boolean operations.
(4) Linear operations over H₀^m(Ω) are done exactly, with cost g.

For any noisy information N and any algorithm φ using N, we shall let cost(φ, N) denote the worst case cost of calculating φ(N([f; a])) over all [f; a] ∈ F.

Now that we have defined the error and cost of an algorithm, we can finally define the complexity of our problem. We shall say that

comp(ε) = inf{ cost(φ, N) : N and φ such that e(φ, N) ≤ ε }

is the ε-complexity of our problem. An algorithm φ using noisy information N for which e(φ, N) ≤ ε and cost(φ, N) = comp(ε) is said to be an optimal algorithm.

3. A Lower Bound on the Minimal Radius
The most commonly-used idea (see, e.g., [9, Section 4.4]) for determining the problem complexity and optimal algorithms is as follows: we first determine the minimal error possible using a given number of evaluations, and then invert this relationship to determine the minimal number of evaluations necessary to achieve a given error. We will use this idea in this paper.

Let n ∈ Z⁺ and δ ∈ [0, 1]. If N is δ-noisy information of cardinality at most n, then

r(N) = inf_{φ using N} e(φ, N)

is the radius of information, i.e., the minimal error among all algorithms using given information N. The nth minimal radius

r_n(δ) = inf{ r(N) : card N ≤ n }

is the minimal error among all algorithms using noisy information of cardinality at most n. Noisy information N_{n,δ} of cardinality n such that r(N_{n,δ}) = r_n(δ) is said to be nth optimal information. An optimal error algorithm using nth optimal information is said to be an nth minimal error algorithm.

In this section, we show that the nth minimal radius of noisy information is bounded from below by n^{-r/d} + δ, i.e., the sum of the nth minimal radius of exact information and the noise level. In the next section, we show that the finite element method with quadrature (FEMQ) of degree at least r using n noisy evaluations achieves this error, and hence this FEMQ is a minimal error algorithm. In Section 7, we use these results to find the problem complexity and to determine when the FEMQ is an optimal algorithm.

The main result of this section is a lower bound on the nth minimal radius:
Theorem 3.1. r_n(δ) = Ω(n^{-r/d} + δ).

Proof: We first claim that

r_n(δ) = Ω(δ).  (3.1)

Indeed, choose an arbitrary, but fixed, element a* of A. Let N be (possibly-adaptive) noisy information of cardinality at most n. Define a new solution operator S_{a*}: BW^{r,p}(Ω) → H₀^m(Ω) as

S_{a*}(f) = S([f; a*])  ∀ f ∈ BW^{r,p}(Ω).

Define information Ñ for the problem (S_{a*}, BW^{r,p}(Ω)) as follows. For any f ∈ BW^{r,p}(Ω), write

N([f; a*]) = [y₁, …, y_l]

for some l ≤ n. Each y_i is a noisy evaluation of (a derivative of) either f or of some coefficient a_{α,β}. Let l₀ be the number of noisy evaluations of f in N([f; a*]). Without loss of generality, suppose that y₁, …, y_{l₀} are these noisy f-evaluations, i.e.,

|y_j − (D^{α(j)} f)(x_j)| ≤ δ  (1 ≤ j ≤ l₀)

for points x₁, …, x_{l₀} ∈ Ω and multi-indices α(1), …, α(l₀). Then

Ñ(f) = [y₁, …, y_{l₀}].

Extending our notation for radius of information to include the solution operator and problem element class, it is obvious that BW^{r,p}(Ω) × {a*} ⊆ F implies that

r(N; S, F) ≥ r(Ñ; S_{a*}, BW^{r,p}(Ω)).

Since Ñ is noisy information for a linear problem (S_{a*}, BW^{r,p}(Ω)), there exists nonadaptive information Ñ^non such that

r(Ñ; S_{a*}, BW^{r,p}(Ω)) ≥ ½ r(Ñ^non; S_{a*}, BW^{r,p}(Ω));

see [7, Chapter 2.7]. It is easy to see that the hypotheses of [7, Lemma 2.8.2] are satisfied, and so

r(Ñ^non; S_{a*}, BW^{r,p}(Ω)) = Ω(δ),

and the desired result (3.1) follows, as claimed.

We next claim that

r_n(δ) = Ω(n^{-r/d}).  (3.2)

Indeed, since r_n(δ) ≥ r_n(0), it suffices to show that r_n(0) = Ω(n^{-r/d}). This latter inequality was proved for the case p = 2 in [10, pg. 111], the only dependence on the assumption that p = 2 being in its use of [10, Theorem 5.5.1]. It is easy to see that the proof of this latter Theorem easily extends to the case of p ∈ [2, ∞]. Hence the desired result (3.2) holds, as claimed.

Our Theorem now follows immediately from (3.1) and (3.2).
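The Ω(δ) part of this lower bound is an adversary argument: δ-noisy data cannot distinguish problem elements whose difference is uniformly smaller than δ. The following toy sketch (a hypothetical 1-D instance with m = 1 and L = −d²/dx², not the paper's general construction) makes this concrete.

```python
import math

# If |g| <= delta on all of (0, 1), then the all-zero data vector is
# admissible delta-noisy information for both f = +g and f = -g, so any
# algorithm must err by at least ||S(g)|| on one of the two.
delta = 1e-2
g = lambda x: delta * math.sin(math.pi * x)               # |g| <= delta everywhere
u = lambda x: delta * math.sin(math.pi * x) / math.pi**2  # solves -u'' = g, u(0)=u(1)=0

xs = [i / 100 for i in range(101)]
assert all(abs(g(x)) <= delta + 1e-15 for x in xs)        # zero data is admissible
sup_u = max(abs(u(x)) for x in xs)
print(sup_u / delta)  # = 1/pi**2: the unavoidable error is proportional to delta
```

No amount of extra sampling helps here, which is why the δ term in r_n(δ) does not decay with n.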
4. The Noisy FEMQ
In this section, we define the noisy finite element method with quadrature (FEMQ). This is an algorithm using standard information consisting only of function evaluations, i.e., no derivative evaluations are used. Our notation is the standard one found in, e.g., [4] and [10, Chapter 5].

The easiest way to describe the noisy FEMQ is by following three steps. First, we describe the noise-free "pure" finite element method (FEM), which uses non-standard information. Next, we describe the noise-free FEMQ, which uses exact standard information. Finally, we describe the noisy FEMQ.

Before describing each of these FEMs, we first establish some notation. Let K̂ be a fixed polyhedron in R^d. We call K̂ a reference element. We next let K be a (small) finite element, i.e., the affine image of K̂ under a bijection F_K, where

F_K(x̂) = B_K x̂ + b_K  ∀ x̂ ∈ K̂,  (4.1)

where B_K ∈ R^{d×d} is invertible and b_K ∈ R^d. Next, we let T be a triangulation of Ω consisting of finite elements, where each K ∈ T is the image of the reference element K̂ under the affine bijection F_K. Select a fixed value of k ∈ Z⁺⁺, and let P_k(K) denote the space of polynomials having total degree at most k, considered as functions over K. Given this triangulation T, we define a finite element space

S(T) = { s ∈ H₀^m(Ω) : s|_K ∈ P_k(K) ∀ K ∈ T }

of degree k. We will assume that the following conditions hold:
(1) {T_n}_{n=1}^∞ is a family of triangulations of Ω such that S_n = S(T_n) is a finite element space of dimension n.
(2) {T_n}_{n=1}^∞ is a quasi-uniform family of triangulations, i.e.,

lim sup_{n→∞} sup_{K∈T_n} h_K/ρ_K < ∞,

where h_K is the diameter of K and ρ_K is the diameter of the largest sphere contained in K.
(3) Let ‖·‖ denote the ℓ₂ matrix norm on R^{d×d}. Then ‖B_K‖ ≤ 1 for any element K ∈ T_n and any triangulation T_n.

We first recall how the noise-free "pure" FEM is defined. Let n ∈ Z⁺, and let {s₁, …, s_n} be a basis for S_n. For [f; a] ∈ F, find

u_n = Σ_{j=1}^n ζ_j s_j

in S_n such that

B_a(u_n, s_i) = ⟨f, s_i⟩_{L₂(Ω)}  (1 ≤ i ≤ n).  (4.2)

Note that the coefficient vector ζ = [ζ₁, …, ζ_n]^T satisfies Gζ = b, where

G = [B_a(s_j, s_i)]_{1≤i,j≤n}

and

b = [⟨f, s₁⟩_{L₂(Ω)}, …, ⟨f, s_n⟩_{L₂(Ω)}]^T.
Since the bilinear forms B_a are uniformly strongly coercive, it follows that for any n ∈ Z⁺⁺ and any [f; a] ∈ F, there exists a unique u_n ∈ S_n satisfying (4.2). Hence, the pure FEM is well-defined.

Of course, if we want to calculate u_n, we will need to calculate the inner products appearing in the matrix G and the vector b, which means that we have to calculate the various integrals

∫_Ω a_{α,β} D^α s_j D^β s_i  (1 ≤ i, j ≤ n and |α|, |β| ≤ m)

and

∫_Ω f s_i  (1 ≤ i ≤ n).

Since only standard information is available to us, we cannot calculate these integrals for arbitrary [f; a] ∈ F. Instead, we shall use numerical quadrature to approximate these integrals, which gives us the (noise-free) FEMQ.

The quadrature rule used to define the FEMQ is initially defined on the reference element. This reference quadrature rule has the form

Î v̂ = Σ_{j=1}^J ω̂_j v̂(b̂_j)

for functions v̂ defined on K̂. This rule is said to be exact of degree q if

∫_{K̂} v̂ = Î v̂  ∀ v̂ ∈ P_q(K̂).

We define a local quadrature rule over a particular finite element K as

I_K v = Σ_{j=1}^J ω_{j,K} v(b_{j,K}),

where

ω_{j,K} = det(B_K) ω̂_j and b_{j,K} = F_K(b̂_j)  (1 ≤ j ≤ J)  (4.3)

for K = F_K(K̂), with F_K given by (4.1). Next, for any ℓ ∈ Z⁺, we let

N_ℓ = ∪_{K∈T_ℓ} ∪_{j=1}^J {b_{j,K}}
denote the set of all quadrature nodes in all the elements belonging to T_ℓ. This is usually not a disjoint union, since a quadrature node on the boundary of one element will be on the boundary of an adjacent element sharing a common face.

We can now define the noise-free FEMQ. Let

κ = (2m+d choose d)  (4.4)

denote the maximum number of coefficients that can appear in a 2mth order elliptic operator defined on a d-dimensional domain. Given n ∈ Z⁺, we define

ñ = max{ card N_ℓ : ℓ ∈ Z⁺ and (κ + 1) card N_ℓ ≤ n }.  (4.5)

Roughly speaking, ñ = ⌊n/(κ + 1)⌋, allowing for the fact that ñ must be the cardinality of the set N_ℓ of quadrature nodes for some triangulation T_ℓ. Let {s₁, …, s_ñ} denote a basis for the finite element space S_ñ. For [f; a] ∈ F, we define a new bilinear form B_{a,ñ} on S_ñ by

B_{a,ñ}(v, w) = Σ_{|α|,|β|≤m} Σ_{K∈T_ñ} I_K(a_{α,β} D^α v D^β w)
             = Σ_{|α|,|β|≤m} Σ_{K∈T_ñ} Σ_{j=1}^J ω_{j,K} a_{α,β}(b_{j,K}) (D^α v)(b_{j,K}) (D^β w)(b_{j,K})  ∀ v, w ∈ S_ñ,

and a linear functional f_ñ on S_ñ by

f_ñ(v) = Σ_{K∈T_ñ} I_K(f v) = Σ_{K∈T_ñ} Σ_{j=1}^J ω_{j,K} f(b_{j,K}) v(b_{j,K})  ∀ v ∈ S_ñ.

Then we seek

u^Q_ñ = Σ_{j=1}^ñ ζ_j s_j

such that

B_{a,ñ}(u^Q_ñ, s_i) = f_ñ(s_i)  (1 ≤ i ≤ ñ).  (4.6)

The new coefficient vector ζ = [ζ₁, …, ζ_ñ]^T satisfies Gζ = b, where now

G = [B_{a,ñ}(s_i, s_j)]_{1≤i,j≤ñ}

and

b = [f_ñ(s₁), …, f_ñ(s_ñ)]^T.

Note that since r > d/p, the entries in the matrix G and the coefficient vector b are well-defined. Let

ρ = min{k + 1, r}.

In the remainder of this paper, we shall assume that the following conditions hold:
(1) The smoothness r of the problem elements F satisfies r ≥ 1 (as well as our previous requirement r > d/p).
(2) The degree k of the finite element subspaces S_ñ satisfies k > d/p − 1.
(3) Î is exact of degree 2k + ρ − 1 over the reference element K̂.
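Condition (3) concerns only the reference rule Î; the local rules I_K of (4.3) then inherit its exactness under the affine maps F_K. The following sketch checks this for a hypothetical concrete choice of rule: the 3-point edge-midpoint rule on the unit triangle, which is exact of degree 2.

```python
# Reference rule on the unit triangle K_hat: edge midpoints, weights |K_hat|/3.
ref_nodes = [(0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
ref_wts = [1 / 6, 1 / 6, 1 / 6]

# Exactness of degree q = 2 on K_hat: compare against int of x**a * y**b.
exact = {(0, 0): 1/2, (1, 0): 1/6, (0, 1): 1/6,
         (2, 0): 1/12, (0, 2): 1/12, (1, 1): 1/24}
for (a, b), val in exact.items():
    q = sum(w * x**a * y**b for w, (x, y) in zip(ref_wts, ref_nodes))
    assert abs(q - val) < 1e-14

# Push forward to K = F_K(K_hat), F_K(x) = B_K x + b_K, as in (4.3):
# here B_K = diag(2, 1) and b_K = (1, 1), so K has vertices (1,1), (3,1), (1,2).
det_BK = 2.0
FK = lambda x, y: (1 + 2 * x, 1 + y)
K_nodes = [FK(x, y) for x, y in ref_nodes]
K_wts = [abs(det_BK) * w for w in ref_wts]

# I_K still integrates quadratics exactly; for p(x, y) = x*y,
# int_K x*y = |det B_K| * int_{K_hat} (1 + 2x)(1 + y) = 2 * 13/12 = 13/6.
IK = sum(w * x * y for w, (x, y) in zip(K_wts, K_nodes))
print(IK)  # 13/6 = 2.1666...
```

The point of the push-forward is that exactness only ever needs to be verified once, on K̂; every element of every triangulation then gets a rule of the same degree for free.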
Let us write

N_n([f; a]) = [N_n(f), N_n(a)],

where¹

N_n(f) = { f(b_{j,K}) : 1 ≤ j ≤ J and K ∈ T_ñ }

and

N_n(a) = { a_{α,β}(b_{j,K}) : 1 ≤ j ≤ J and K ∈ T_ñ and |α|, |β| ≤ m }.

We see that u^Q_ñ depends on [f; a] only through N_n([f; a]), and so we write u^Q_ñ = φ_n(N_n([f; a])), with φ_n an algorithm using N_n, which is exact standard information of cardinality at most n.

We are finally ready to define the noisy FEMQ. Given n ∈ Z⁺, we once again choose the largest ñ ∈ Z⁺ satisfying (4.5), and a basis {s₁, …, s_ñ} for the finite element space S_ñ. We now calculate a noisy version of N_n([f; a]). That is, for each element K ∈ T_ñ, each index j ∈ {1, …, J}, and each pair of multi-indices (α, β) with |α| ≤ m and |β| ≤ m, we obtain real numbers ã_{α,β,j,K,δ} and f̃_{j,K,δ} satisfying

|ã_{α,β,j,K,δ} − a_{α,β}(b_{j,K})| ≤ δ  (4.7)

and

|f̃_{j,K,δ} − f(b_{j,K})| ≤ δ.  (4.8)

Let Ñ_{n,δ} denote this noisy version of N_n, i.e.,

Ñ_{n,δ}([f; a]) = {Ñ_{n,δ}(f), Ñ_{n,δ}(a)},

where²

Ñ_{n,δ}(f) = { f̃_{j,K,δ} satisfying (4.8) : 1 ≤ j ≤ J and K ∈ T_ñ }

and

Ñ_{n,δ}(a) = { ã_{α,β,j,K,δ} satisfying (4.7) : 1 ≤ j ≤ J and K ∈ T_ñ and |α|, |β| ≤ m }.

Clearly, Ñ_{n,δ} is noisy information of cardinality at most n. For [f; a] ∈ F, we define a new bilinear form B̃_{a,ñ,δ} on S_ñ by

B̃_{a,ñ,δ}(v, w) = Σ_{|α|,|β|≤m} Σ_{K∈T_ñ} Σ_{j=1}^J ω_{j,K} ã_{α,β,j,K,δ} (D^α v)(b_{j,K}) (D^β w)(b_{j,K})  ∀ v, w ∈ S_ñ,

and a linear functional f̃_{ñ,δ} on S_ñ by

f̃_{ñ,δ}(v) = Σ_{K∈T_ñ} Σ_{j=1}^J ω_{j,K} f̃_{j,K,δ} v(b_{j,K})  ∀ v ∈ S_ñ.

Then we seek

ũ^Q_ñ = Σ_{j=1}^ñ ζ_j s_j

such that

B̃_{a,ñ,δ}(ũ^Q_ñ, s_i) = f̃_{ñ,δ}(s_i)  (1 ≤ i ≤ ñ).  (4.9)

The new coefficient vector ζ = [ζ₁, …, ζ_ñ]^T satisfies Gζ = b, where now

G = [B̃_{a,ñ,δ}(s_i, s_j)]_{1≤i,j≤ñ}

and

b = [f̃_{ñ,δ}(s₁), …, f̃_{ñ,δ}(s_ñ)]^T.

We see that ũ^Q_ñ depends on [f; a] only through Ñ_{n,δ}([f; a]), and so we write ũ^Q_ñ = φ̃_{n,δ}(Ñ_{n,δ}([f; a])), with φ̃_{n,δ} an algorithm using our noisy standard information Ñ_{n,δ}.

Remark: Recall that we have stated that the solution operator S, the pure FEM, and the noiseless FEMQ are all well-defined. We have not stated such a result for the noisy FEMQ. We will prove that the noisy FEMQ is well-defined in the next section.

¹ We really should use lists of elements, set out in a specified order, for N_n(f) and N_n(a), so that N_n([f; a]) will be a vector. The reader will indulge this slight abuse of notation, since any precisely-correct alternative would be far more long-winded.
² This is also a slight abuse of notation.

5. The Noisy FEMQ is a Minimal Error Algorithm
In this section, we prove that the noisy FEMQ is well-defined, and that it is a minimal error algorithm. In particular, we give conditions on the degree k of the finite element space which guarantee that the FEMQ using n noisy evaluations with a noise level of δ has error proportional to n^{-r/d} + δ.

Our starting point is Strang's Lemma (see [10, pp. 310-312] for a proof of a version having slightly more restrictive hypotheses). Recall that the bilinear forms B_a are uniformly strongly coercive, with constant γ; see (2.3).

Lemma 5.1. Suppose that there exist δ₀ ∈ (0, 1] and n* ∈ Z⁺⁺ such that for any δ ∈ [0, δ₀], any n ≥ n*, and any a ∈ A, we have

|B_a(v, w) − B̃_{a,ñ,δ}(v, w)| ≤ ½ γ ‖v‖_{H^m(Ω)} ‖w‖_{H^m(Ω)}  ∀ v, w ∈ S_ñ.  (5.1)

Then for any n ≥ n*, any δ ∈ [0, δ₀], and any [f; a] ∈ F, there is a unique ũ^Q_ñ ∈ S_ñ such that (4.9) holds. Moreover, there exists a positive constant C such that if u = S([f; a]) is the solution to (2.2), then

‖u − ũ^Q_ñ‖_{H^m(Ω)} ≤ C [ inf_{v∈S_ñ} ( ‖u − v‖_{H^m(Ω)} + sup_{w∈S_ñ} |B_a(v, w) − B̃_{a,ñ,δ}(v, w)| / ‖w‖_{H^m(Ω)} ) + sup_{w∈S_ñ} |f(w) − f̃_{ñ,δ}(w)| / ‖w‖_{H^m(Ω)} ],

the constant C being independent of n, δ, and [f; a].

Before we can use Strang's Lemma, we need to prove some preliminary estimates. In what follows, we use the standard notational technique of letting C denote a generic constant whose value may change from one place to another.

Lemma 5.2. There exists a positive constant C such that

|B_{a,ñ}(v, w) − B̃_{a,ñ,δ}(v, w)| ≤ Cδ ‖v‖_{H^m(Ω)} ‖w‖_{H^m(Ω)}  ∀ v, w ∈ S_ñ

and

|f_ñ(v) − f̃_{ñ,δ}(v)| ≤ Cδ ‖v‖_{L₂(Ω)}  ∀ v ∈ S_ñ,

for any [f; a] ∈ F and any n ∈ Z⁺, with ñ = ñ(n) satisfying (4.5).
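Lemma 5.2 says that perturbing each sampled coefficient value by at most δ perturbs the assembled bilinear form by at most Cδ, measured against H^m-norms. Before turning to the proof, here is a toy numerical check in a hypothetical 1-D instance (m = 1, d = 1, −(a u′)′ = f, P1 elements, midpoint quadrature); in this instance the constant even comes out ≤ 1, by the same Cauchy-Schwarz step used in the proof below.

```python
import numpy as np

def stiffness(a_vals, n):
    """Matrix [B_{a,n}(s_j, s_i)] for P1 hat functions on a uniform grid of
    n interior nodes, one-point (midpoint) quadrature per element;
    a_vals[K] = a(midpoint of element K)."""
    h = 1.0 / (n + 1)
    G = np.zeros((n, n))
    for K in range(n + 1):
        aK = a_vals[K] / h                 # int_K a s_i' s_j' ~ a(mid) * (1/h**2) * h
        for i in (K - 1, K):
            if 0 <= i < n:
                G[i, i] += aK
        if 1 <= K < n:
            G[K - 1, K] -= aK
            G[K, K - 1] -= aK
    return G

rng = np.random.default_rng(0)
n, delta = 50, 1e-3
h = 1.0 / (n + 1)
mids = (np.arange(n + 1) + 0.5) * h
a_vals = 2.0 + np.sin(2 * np.pi * mids)            # smooth a, bounded away from 0
noise = rng.uniform(-delta, delta, n + 1)          # |a~ - a(b_jK)| <= delta, cf. (4.7)
dG = stiffness(a_vals + noise, n) - stiffness(a_vals, n)   # B~ - B on S_n

# Discrete H^1-type norm: unit-coefficient stiffness plus a lumped mass term.
H1 = stiffness(np.ones(n + 1), n) + h * np.eye(n)
ratios = []
for _ in range(100):
    v, w = rng.standard_normal(n), rng.standard_normal(n)
    nv, nw = np.sqrt(v @ H1 @ v), np.sqrt(w @ H1 @ w)
    ratios.append(abs(v @ dG @ w) / (delta * nv * nw))
print(max(ratios))  # < 1: |B - B~| <= C * delta * ||v|| * ||w|| uniformly
```

The bound is uniform in δ: halving the noise level halves the perturbation of the form, which is exactly what makes the δ term in Theorem 5.1 linear.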
Proof: Let [f; a] ∈ F and n ∈ Z⁺. We establish the first inequality. For any v, w ∈ S_ñ, we have

|B_{a,ñ}(v, w) − B̃_{a,ñ,δ}(v, w)|
 = | Σ_{K∈T_ñ} Σ_{|α|,|β|≤m} Σ_{j=1}^J ω_{j,K} [a_{α,β}(b_{j,K}) − ã_{α,β,j,K,δ}] (D^α v)(b_{j,K}) (D^β w)(b_{j,K}) |
 ≤ δ Σ_{K∈T_ñ} Σ_{|α|,|β|≤m} Σ_{j=1}^J |ω_{j,K} (D^α v)(b_{j,K}) (D^β w)(b_{j,K})|.  (5.2)

Consider a particular element K ∈ T_ñ, as well as particular multi-indices α and β. Using (4.3), we have

Σ_{j=1}^J |ω_{j,K} (D^α v)(b_{j,K}) (D^β w)(b_{j,K})| = |det B_K| Σ_{j=1}^J |ω̂_j (D^α v)(b_{j,K}) (D^β w)(b_{j,K})|.  (5.3)

Let D^l denote the Fréchet derivative, where l = |α|. As on [4, pg. 118], there exists a subset {e₁, …, e_l} of the standard basis for R^d such that for any x ∈ K, we have

(D^α v)(x) = (D^l v)(x)(e₁, …, e_l) = (D^l v̂)(x̂)(B_K^{-1} e₁, …, B_K^{-1} e_l),

with x̂ = F_K^{-1}(x). Letting

‖(D^l v)(x)‖ = sup{ |(D^l v)(x)(ξ₁, …, ξ_l)| : ξ₁, …, ξ_l in the Euclidean unit ball of R^d },

we have

|(D^α v)(x)| ≤ ‖B_K‖^{-l} ‖(D^l v̂)(x̂)‖ ≤ C ‖B_K‖^{-l} sup_{|α'|=l} |(D^{α'} v̂)(x̂)|

for some constant C, independent of K and v. Since l ≤ m and ‖B_K‖ ≤ 1, we see that

|(D^α v)(x)| ≤ C ‖B_K‖^{-m} sup_{|α'|≤m} |(D^{α'} v̂)(x̂)|.

Using this inequality, along with the analogous inequality

|(D^β w)(x)| ≤ C ‖B_K‖^{-m} sup_{|β'|≤m} |(D^{β'} ŵ)(x̂)|,

in (5.3), and then summing over the multi-indices α and β for which |α| ≤ m and |β| ≤ m, we see that

Σ_{|α|,|β|≤m} Σ_{j=1}^J |ω_{j,K} (D^α v)(b_{j,K}) (D^β w)(b_{j,K})|
 ≤ C ‖B_K‖^{-2m} |det B_K| Σ_{j=1}^J |ω̂_j| sup_{|α|≤m} |(D^α v̂)(b̂_j)| sup_{|β|≤m} |(D^β ŵ)(b̂_j)|.

Now

Σ_{j=1}^J |ω̂_j| sup_{|α|≤m} |(D^α v̂)(b̂_j)| sup_{|β|≤m} |(D^β ŵ)(b̂_j)|
 ≤ [ Σ_{j=1}^J |ω̂_j| sup_{|α|≤m} |(D^α v̂)(b̂_j)|² ]^{1/2} [ Σ_{j=1}^J |ω̂_j| sup_{|β|≤m} |(D^β ŵ)(b̂_j)|² ]^{1/2}
 ≤ C ‖v̂‖_{H^m(K̂)} ‖ŵ‖_{H^m(K̂)}
 ≤ C ‖B_K‖^{2m} |det B_K|^{-1} ‖v‖_{H^m(K)} ‖w‖_{H^m(K)},

where we have used [4, Theorem 3.1.2] in the last inequality above. Hence,

Σ_{|α|,|β|≤m} Σ_{j=1}^J |ω_{j,K} (D^α v)(b_{j,K}) (D^β w)(b_{j,K})| ≤ C ‖v‖_{H^m(K)} ‖w‖_{H^m(K)}.

Substituting this inequality into (5.2), we find

|B_{a,ñ}(v, w) − B̃_{a,ñ,δ}(v, w)| ≤ Cδ Σ_{K∈T_ñ} ‖v‖_{H^m(K)} ‖w‖_{H^m(K)}
 ≤ Cδ [ Σ_{K∈T_ñ} ‖v‖²_{H^m(K)} ]^{1/2} [ Σ_{K∈T_ñ} ‖w‖²_{H^m(K)} ]^{1/2}
 = Cδ ‖v‖_{H^m(Ω)} ‖w‖_{H^m(Ω)},

as required.

Next, we establish the second inequality. For any v ∈ S_ñ, we have
|f_ñ(v) − f̃_{ñ,δ}(v)| = | Σ_{K∈T_ñ} Σ_{j=1}^J ω_{j,K} [f(b_{j,K}) − f̃_{j,K,δ}] v(b_{j,K}) | ≤ δ Σ_{K∈T_ñ} Σ_{j=1}^J |ω_{j,K} v(b_{j,K})|.  (5.4)

Let K ∈ T_ñ. Using (4.3), we have

Σ_{j=1}^J |ω_{j,K} v(b_{j,K})| = |det B_K| Σ_{j=1}^J |ω̂_j v̂(b̂_j)|.  (5.5)

Now v̂ ↦ Σ_{j=1}^J |ω̂_j v̂(b̂_j)| is a seminorm on the finite-dimensional space P_k(K̂), and is thus bounded with respect to any norm on P_k(K̂). Hence there is a constant C, independent of v̂, such that

Σ_{j=1}^J |ω̂_j v̂(b̂_j)| ≤ C ‖v̂‖_{L₂(K̂)}  ∀ v̂ ∈ P_k(K̂).

Applying this result to (5.5), using [4, Theorem 3.1.2] to estimate ‖v̂‖_{L₂(K̂)} in terms of ‖v‖_{L₂(K)}, and using the quasi-uniformity of the sequence of triangulations, we see that there exists a constant C, independent of v, K, and n, such that

Σ_{j=1}^J |ω_{j,K} v(b_{j,K})| ≤ C |det B_K| ‖v̂‖_{L₂(K̂)} ≤ C |det B_K|^{1/2} ‖v‖_{L₂(K)} ≤ C ñ^{-1/2} ‖v‖_{L₂(K)}.

Substituting this inequality into (5.4), we find that there exist constants C such that

|f_ñ(v) − f̃_{ñ,δ}(v)| ≤ Cδ ñ^{-1/2} Σ_{K∈T_ñ} ‖v‖_{L₂(K)}
 ≤ Cδ ñ^{-1/2} card(T_ñ)^{1/2} [ Σ_{K∈T_ñ} ‖v‖²_{L₂(K)} ]^{1/2}
 ≤ Cδ ‖v‖_{L₂(Ω)},
as required. We are now ready to prove the main result of this section. Theorem 5.1. There exist n 2 Z++ and 0 > 0 such that ~n; is well-de ned for all n n and all 2 [0; 0 ]. Furthermore, e(~n; ; N~ n; ) = O(n?=d + ); where = minfk; rg: Proof: We rst show that ~n; is well-de ned. As in [10, pg. 106], we see that there exists a positive constant C such that
jBa(v; w) ? Ba;n~ (v; w)j C
X
ka; kW r;p ( ) n?=d kvkH m ( ) kwkH m ( )
jj;j jm CMn?=d kvkH m ( ) kwkH m ( )
8 v; w 2 Sn~
for any n 2 Z++. (Recall that is given by (4.4) and that M is given by condition (2) de ning A .) Using the rst inequality in Lemma 5.2, we have
jBa (v; w) ? B~a;n~; (v; w)j C (Mn?=d + )kvkH m ( ) kwkH m ( )
8 v; w 2 Sn~
(5.6)
for any n 2 Z++ and any 2 [0; 1]. It now follows that there exists 0 2 (0; 1] and n 2 Z++ such that (5.1) holds for any 2 [0; 0 ], any n n and any a 2 A . Hence, Strang's Lemma implies that if 2 [0; 0 ] and n n , then for any [f ; a] 2 F , there is a unique u~Qn~ 2 Sn~ such that (4.9) holds. Thus the noisy FEMQ ~n; is well-de ned for any such and n. Before we bound the error of the noisy FEMQ, we rst note that by the conditions de ning A , the so-called \shift theorem" for elliptic problems holds for a constant that is independent of a 2 A . That is, if f 2 H r ( ), then for any a 2 A , we have S ([f ; a]) 2 H r+2 ( ). Moreover,
?1 kS ([f ; a])kH r+2 ( ) kf kH r ( ) kS ([f ; a])kH r+2 ( ) ;
(5.7)
where the constant $\sigma$ is independent of $a \in \mathcal{A}$, depending only on $m$, $M$, and $r$. See, for instance, the proof in [5], noting that the shift constant depends mainly on the geometry of the region and the size of the coefficients in the partial differential operator $L_a$.

We now turn to the error of the noisy FEMQ. Let $\delta \in [0, \delta_0]$ and $n \ge n^*$. For $[f;a] \in F$, let $u = S([f;a])$. From [10, pg. 107], there exists $v \in S_{\tilde n}$ such that
\[
\|u - v\|_{H^m(\Omega)} \le Cn^{-\mu/d} \|u\|_{H^{r+2m}(\Omega)}.
\tag{5.8}
\]
Using (5.7), we find that
\[
\|u\|_{H^{r+2m}(\Omega)} \le \sigma \|f\|_{H^r(\Omega)}.
\tag{5.9}
\]
Since $p \ge 2$, there exists a positive constant $C$, independent of $f$, such that
\[
\|f\|_{H^r(\Omega)} \le C \|f\|_{W^{r,p}(\Omega)} \le C,
\tag{5.10}
\]
since $f \in BW^{r,p}(\Omega)$. Combining (5.8)–(5.10), we find that
\[
\|u - v\|_{H^m(\Omega)} \le Cn^{-\mu/d} \|f\|_{H^r(\Omega)} \le Cn^{-\mu/d}.
\tag{5.11}
\]
Now for any $w \in S_{\tilde n}$, we find from [10, pg. 106] that
\[
|f(w) - f_{\tilde n}(w)| \le Cn^{-\mu/d} \|f\|_{H^r(\Omega)} \|w\|_{H^m(\Omega)} \le Cn^{-\mu/d} \|w\|_{H^m(\Omega)},
\]
where we have again used (5.10). Using this inequality and the second inequality in Lemma 5.2, we have
\[
|f(w) - \tilde f_{\tilde n,\delta}(w)| \le C(n^{-\mu/d} + \delta) \|w\|_{H^m(\Omega)}.
\tag{5.12}
\]
Using (5.6), (5.12), and (5.11) in Strang's Lemma, we find
\[
\|u - \tilde u^Q_{\tilde n}\|_{H^m(\Omega)} \le C(n^{-\mu/d} + \delta),
\]
as required. $\square$

Remark: Theorem 5.1 gives an upper bound on the error of the noisy FEMQ. This upper bound is sharp for the case $p = 2$, i.e.,
\[
e(\tilde\varphi_{n,\delta}, \tilde N_{n,\delta}) = \Theta(n^{-\mu/d} + \delta) \qquad \text{for } p = 2.
\]
Indeed, (3.1) clearly implies that
\[
e(\tilde\varphi_{n,\delta}, \tilde N_{n,\delta}) \ge r_n(\delta) = \Omega(\delta).
\]
On the other hand, the exact FEMQ is an instance of a noisy FEMQ, and so
\[
e(\tilde\varphi_{n,\delta}, \tilde N_{n,\delta}) \ge e(\varphi_n, N_n).
\]
But for $p = 2$, we have
\[
e(\varphi_n, N_n) = \Theta(n^{-\mu/d});
\]
see [10, pg. 106]. Combining these last three inequalities, we get
\[
e(\tilde\varphi_{n,\delta}, \tilde N_{n,\delta}) = \Omega(n^{-\mu/d} + \delta),
\]
the desired lower bound matching the upper bound in Theorem 5.1 when $p = 2$.

Combining Theorems 3.1 and 5.1, we find

Corollary 5.1.
(1) $r_n(\delta) = \Theta(n^{-r/d} + \delta)$.
(2) The noisy FEMQ, using a quadrature rule that is exact of degree at least $2k + r - 1$, is a minimal error algorithm if $k \ge r$.
(3) Adaption is no stronger than nonadaption.
6. Multigrid Implementation of the Noisy FEMQ
As we mentioned in the Introduction, both the matrix $G$ and the vector $b$ in the linear system $Ga = b$ characterizing the noisy FEMQ depend on the problem element $[f;a] \in F$. This means that we cannot ignore the cost of reducing $G$ to a form more suitable for solving linear systems, as we often do when said matrix does not depend on any particular problem element. Hence we need to find an efficient implementation of the noisy FEMQ. One idea is to use a multigrid technique. The main ideas underlying multigrid methods are as follows:
(1) We do not need an exact solution of the linear system $Ga = b$, but only one whose error is comparable to the error of the noisy FEMQ.
(2) We can use an iteration for solving the linear system. Moreover:
(a) A sufficiently-accurate solution corresponding to the coarser grid is a good initial guess for the solution corresponding to the finer grid.
(b) The iteration on the finer grid has the effect of smoothing, i.e., damping out the oscillatory part of the error, so that this smoothed solution is well approximated on the coarser grid.
Our presentation (and analysis) of the multigrid technique will be based on that in [3, Chapter 6], which covers only the definite problems. We first establish notation. Recall that $\{T_n\}_{n=1}^{\infty}$ is a quasi-uniform grid sequence. Let us write
\[
h_j = \max_{K \in T_j} h_K
\]
for the meshsize of $T_j$. Recall (from Theorem 5.1) that the noisy FEMQ $\tilde\varphi_{n,\delta}$ is well-defined if $n \ge n^*$. Let $n_1 = n^* < n_2 < \dots < n_{l-1} < n_l$ be a sequence of integers, chosen so that $T_{n_{j-1}} \subseteq T_{n_j}$ (and thus $S_{n_{j-1}} \subseteq S_{n_j}$) and
\[
h_{n_j} \le \tfrac12 h_{n_{j-1}} \qquad (2 \le j \le l).
\tag{6.1}
\]
We let $j$ be a fixed, but arbitrary, index in $\{1, \dots, l\}$. If $p_1, \dots, p_{n_j}$ are the interior nodes of the triangulation $T_{n_j}$, then we get the standard finite element basis $\{s_1, \dots, s_{n_j}\}$ for $S_{n_j}$ by requiring that $s_i(p_{i'}) = \delta_{i,i'}$ for $1 \le i, i' \le n_j$ (see, e.g., the discussion in [10, Sections 5.7 and A.2.3]). We define a mesh-dependent inner product $\langle \cdot, \cdot \rangle_j$ on $S_{n_j}$ by
\[
\langle v, w \rangle_j = h_{n_j}^d \sum_{i=1}^{n_j} v(p_i)\, w(p_i) \qquad \forall\, v, w \in S_{n_j}.
\]
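In one dimension, for piecewise-linear elements on a uniform mesh, $\langle v, v \rangle_j$ is just the "mass-lumped" version of $\|v\|_{L_2}^2$; the following sketch (a hypothetical 1-D setup, not from the paper) numerically confirms the norm equivalence underlying this definition, with equivalence constants between $1/3$ and $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31                          # interior nodes on a uniform grid
h = 1.0 / (n + 1)               # meshsize

# consistent mass matrix for 1-D piecewise-linear hat functions:
# (v, w)_{L2} = c_v^T M c_w with M = (h/6) tridiag(1, 4, 1)
M = (h / 6.0) * (4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))

ratios = []
for _ in range(100):
    c = rng.standard_normal(n)          # nodal values v(p_i)
    l2_sq = c @ M @ c                   # ||v||_{L2}^2
    mesh_sq = h * np.sum(c ** 2)        # <v, v>_j = h^d sum v(p_i)^2, d = 1
    ratios.append(l2_sq / mesh_sq)
```

The eigenvalues of the lumped-to-consistent comparison lie strictly between $1/3$ and $1$, so the two norms are uniformly equivalent on $S_{n_j}$.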
Then the operator $A_j$ on $S_{n_j}$ is defined by
\[
\langle A_j v, w \rangle_j = \tilde B_{a,n_j,\delta}(v, w) \qquad \forall\, v, w \in S_{n_j}.
\]
Note that we may follow the proof of [3, Lemma 6.2.8] to find an upper bound
\[
\rho(A_j) \le \Lambda_j = C h_{n_j}^{-2m}
\tag{6.2}
\]
on the spectral radius of $A_j$, where the constant $C$ is independent of the index $j$ and the coefficient vector $a$. Let us define $f_j \in S_{n_j}$ by requiring that
\[
\langle f_j, s \rangle_j = \tilde f_{n_j,\delta}(s) \qquad \forall\, s \in S_{n_j},
\]
and let us write $\tilde u_j$ for the solution $\tilde u_j = \tilde u^Q_{n_j}$ of the noisy FEMQ for $S_{n_j}$, so that
Aj u~j = fj : We then let Ijj?1 : Snj?1 ! Snj be the natural embedding, and let Ijj ?1 : Snj ! Snj?1 be its adjoint, i.e., hIjj?1 w; vij?1 = hw; Ijj?1 vij = hw; vij 8 v 2 Snj?1 ; w 2 Snj : Recalling that j is an upper bound on (Aj ), we now de ne the j th-level multigrid iteration recursively, in terms of the multigrid iterations at lower levels: function MG(j : Z+; z0 ; g : Snj ): Snj ;
begin if k = 1 then MG := A?1 1 g else begin
z1 := z0 + ?j 1 (g ? Aj z0 ); f pre-smoothing g g := Ijj?1 (g ? Aj z1 ); f ne-to-coarse intergrid transfer g q1 := MG(j ? 1; 0; g ); f error correcting g z2 := z1 + Ijj?1 q1; f coarse-to- ne intergrid transfer g z3 := z2 + ?j 1 (g ? Aj z2 ); f post-smoothing g
end; MG := z3
end
Then for any index t, the t-fold full multigrid scheme produces an approximation u^j to u~j as follows: function FMG(j; t : Z+): Snj ;
begin if j = 1 then u^j := A?1 1 f1 else begin j j
u0 := Ij?1 u^j?1 ;
for ji := 1 to t do j
ui := MG(j; ui?1 ; fj ); u^j := ujt
end; FMG := u^j
end
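The MG/FMG recursions above can be sketched concretely. The following Python sketch works a hypothetical 1-D model problem ($-u'' = f$ on $(0,1)$ with homogeneous Dirichlet conditions, central differences, damped-Richardson smoothing with the spectral-radius bound $\Lambda_j = 4/h_j^2$, linear-interpolation prolongation, and full-weighting restriction); it is only the textbook analogue of the scheme's structure, not the paper's $\tilde B$-based operators:

```python
import numpy as np

def h(j):                       # meshsize at level j: h_j = 2^{-j}
    return 2.0 ** (-j)

def matrix(j):                  # 1-D Dirichlet Laplacian (central differences)
    n = 2 ** j - 1
    return (2.0 * np.eye(n)
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h(j) ** 2

def rhs(j):                     # f(x) = pi^2 sin(pi x), so u(x) = sin(pi x)
    return np.pi ** 2 * np.sin(np.pi * h(j) * np.arange(1, 2 ** j))

def prolong(vc):                # coarse-to-fine: linear interpolation
    vf = np.zeros(2 * len(vc) + 1)
    vf[1::2] = vc
    vf[0::2] = 0.5 * (np.concatenate(([0.0], vc)) + np.concatenate((vc, [0.0])))
    return vf

def restrict(vf):               # fine-to-coarse: full weighting
    return 0.25 * vf[0:-2:2] + 0.5 * vf[1:-1:2] + 0.25 * vf[2::2]

def MG(j, z0, g):               # j-th-level iteration, mirroring the pseudocode
    A = matrix(j)
    if j == 1:
        return np.linalg.solve(A, g)
    lam = 4.0 / h(j) ** 2       # upper bound Lambda_j on the spectral radius
    z1 = z0 + (g - A @ z0) / lam                    # pre-smoothing
    q1 = MG(j - 1, np.zeros(2 ** (j - 1) - 1), restrict(g - A @ z1))
    z2 = z1 + prolong(q1)                           # coarse-grid correction
    return z2 + (g - A @ z2) / lam                  # post-smoothing

def FMG(l, t):                  # t-fold full multigrid
    u = np.linalg.solve(matrix(1), rhs(1))
    for j in range(2, l + 1):
        u = prolong(u)          # previous level's solution as initial guess
        for _ in range(t):
            u = MG(j, u, rhs(j))
    return u

l = 6
u_hat = FMG(l, 3)
err = np.max(np.abs(u_hat - np.sin(np.pi * h(l) * np.arange(1, 2 ** l))))
```

A few FMG iterations per level suffice to drive the algebraic error below the discretization error, as the theory below predicts.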
Let
\[
\bar N_{n,\delta} = [\tilde N_{n_1,\delta}, \tilde N_{n_2,\delta}, \dots, \tilde N_{n_l,\delta}],
\]
with $l$ the maximal index for which $\operatorname{card} \bar N_{n,\delta} \le n$. Then we may write
\[
\hat u_l = \bar\varphi_{n,\delta}\bigl(\bar N_{n,\delta}([f;a])\bigr),
\]
where $\bar\varphi_{n,\delta}$ is the full multigrid algorithm. The main result for this section is

Theorem 6.1.
(1) The full multigrid algorithm is well-defined.
(2) There exists an index $t$ such that the error of the full multigrid algorithm is
\[
e(\bar\varphi_{n,\delta}, \bar N_{n,\delta}) = O(n^{-\mu/d} + \delta),
\]
where (as in Theorem 5.1) $\mu = \min\{k, r\}$.
(3) The combinatory cost of the full multigrid scheme FMG(l, t) is $\Theta(n)$.

Proof: The well-definedness follows from Theorem 5.1. To prove the desired error estimate, let us first consider the $j$th-level multigrid iteration. Let $\|\cdot\|_{E_j}$ be the energy norm defined by
\[
\|v\|_{E_j} = \tilde B_{a,n_j,\delta}(v,v)^{1/2},
\]
this energy norm being equivalent to the usual $H_0^m(\Omega)$-norm. We claim that there exists a constant $C$ such that
\[
\|z - \mathrm{MG}(j, z_0, g)\|_{E_j} \le \frac{C}{C+1}\, \|z - z_0\|_{E_j},
\tag{6.3}
\]
the constant $C$ being independent of $g, z, z_0 \in S_{n_j}$, $j \in \mathbb{Z}^+$, and $[f;a] \in F$. (There is a "1" in the denominator because we do one pre-smoothing and one post-smoothing step at each level.) Indeed, we only need to (carefully) check that the proof of the analogous result [3, Proposition 6.6.12] applies in our case, once we have made the following changes:
(1) Instead of using [3, Lemma 6.2.8], we use our estimate (6.2) for the spectral radius of $A_j$.
(2) For any $s \ge 0$, let
\[
|||v|||_{s,j} = \langle A_j^s v, v \rangle_j^{1/2} \qquad \forall\, v \in S_{n_j}.
\]
Let $P_j \colon H_0^m(\Omega) \to S_{n_j}$ be the orthogonal projection operator with respect to the inner product $\tilde B_{a,n_j,\delta}$, i.e., for any $v \in H_0^m(\Omega)$, the element $P_j v \in S_{n_j}$ satisfies
\[
\tilde B_{a,n_j,\delta}(P_j v, w) = \tilde B_{a,n_j,\delta}(v, w) \qquad \forall\, w \in S_{n_j}.
\]
Then instead of using the approximation property in [3, Corollary 6.4.4], we use the analogous result that there exists a positive constant $C$ such that
\[
|||(I - P_{j-1})v|||_{1,j} \le C h_{n_j}^m\, |||v|||_{2,j} \qquad \forall\, v \in S_{n_j}.
\]
We now consider the error of the full multigrid method, following the proof of [3, Theorem 6.7.1], with a few modifications. Let
\[
\gamma = \frac{C}{C+1}.
\]
Choose $[f;a] \in F$, and let $u = S([f;a])$. For any $j$, let $\hat e_j = \tilde u_j - \hat u_j$, noting that $\hat e_1 = 0$. Using (6.3) and the definition of FMG, we see that
\[
\|\hat e_j\|_{E_j} \le \gamma^t \|\tilde u_j - \hat u_{j-1}\|_{E_j}.
\]
Thus, there exist positive constants $C$ such that
\[
\|\hat e_j\|_{H^m(\Omega)} \le C\gamma^t \|\tilde u_j - \hat u_{j-1}\|_{H^m(\Omega)} \le C\gamma^t \bigl( \|u - \tilde u_{j-1}\|_{H^m(\Omega)} + \|u - \tilde u_j\|_{H^m(\Omega)} + \|\hat e_{j-1}\|_{H^m(\Omega)} \bigr).
\]
From Theorem 5.1, we have
\[
\|u - \tilde u_j\|_{H^m(\Omega)} \le C(h_{n_j}^{\mu} + \delta) \qquad \text{and} \qquad \|u - \tilde u_{j-1}\|_{H^m(\Omega)} \le C(h_{n_{j-1}}^{\mu} + \delta).
\]
Using (6.1), it follows that
\[
\|\hat e_j\|_{H^m(\Omega)} \le C\gamma^t \bigl[ (h_{n_j}^{\mu} + \delta) + \|\hat e_{j-1}\|_{H^m(\Omega)} \bigr].
\]
Solving this inequality, we find that there exist constants $C$ such that
\[
\|\hat e_j\|_{H^m(\Omega)} \le C \sum_{i=0}^{j-1} (h_{n_{j-i}}^{\mu} + \delta)(C\gamma^t)^i \le C \Bigl( h_{n_j}^{\mu} \sum_{i=0}^{j-1} (2^{\mu} C\gamma^t)^i + \delta \sum_{i=0}^{j-1} (C\gamma^t)^i \Bigr),
\]
where we have again used (6.1). So if
\[
t > \frac{\ln 2^{\mu} C}{\ln 1/\gamma},
\]
then we find that
\[
\|\hat e_j\|_{H^m(\Omega)} \le \frac{C}{1 - 2^{\mu} C\gamma^t}\, h_{n_j}^{\mu} + \frac{C\delta}{1 - C\gamma^t} = O(h_{n_j}^{\mu} + \delta).
\]
Hence
\[
\|S([f;a]) - \bar\varphi_{n,\delta}\bigl(\bar N_{n,\delta}([f;a])\bigr)\|_{H^m(\Omega)} = \|u - \hat u_j\|_{H^m(\Omega)} \le \|u - \tilde u_j\|_{H^m(\Omega)} + \|\hat e_j\|_{H^m(\Omega)} = O(h_{n_j}^{\mu} + \delta),
\]
establishing the desired error bound for the full multigrid algorithm.

We now estimate the cost of calculating $\hat u_j$, using ideas similar to those in the proof of [3, Proposition 6.7.4]. First, let $W_j$ denote the amount of work in the $j$th-level scheme. We find
\[
W_j \le 2Cn_j + W_{j-1}
\]
for some constant $C$, so that
\[
W_j \le 2C(n_j + n_{j-1} + \dots + n_1).
\]
Using (6.1), we have
\[
n_j = \dim S_{n_j} = \Theta(h_{n_j}^{-d}) = \Theta\bigl((\tfrac12 h_{n_{j-1}})^{-d}\bigr) = \Theta(2^d n_{j-1}),
\]
and so
\[
W_j = O\Bigl( \sum_{i=0}^{j-1} 2^{-di} n_j \Bigr) \le Cn_j
\]
for some constant $C$. Finally, let $\hat W_j$ denote the work done by FMG(j, t). We find that
\[
\hat W_j \le \hat W_{j-1} + tW_j \le \hat W_{j-1} + tCn_j.
\]
Hence
\[
\hat W_j \le tC(n_j + n_{j-1} + \dots + n_1) \le Cn_j
\]
for some constant $C$. In particular,
\[
\hat W_l = O(n_l) = O(n).
\]
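The geometric-series argument above can be checked numerically; the constants below ($d = 2$, $C = 1$, $t = 3$) are illustrative, not from the paper:

```python
# With n_j = 2^d n_{j-1} (nested grids, cf. (6.1)), the per-level costs form
# a geometric series, so the totals W_l and What_l are proportional to n_l.
d, l, t, C = 2, 12, 3, 1.0
n = [None] + [(2 ** d) ** j for j in range(1, l + 1)]   # n_j = 2^d n_{j-1}

W = [None, 2 * C * n[1]]
for j in range(2, l + 1):
    W.append(2 * C * n[j] + W[j - 1])                   # W_j <= 2C n_j + W_{j-1}

What = [None, W[1]]
for j in range(2, l + 1):
    What.append(What[j - 1] + t * W[j])                 # What_j <= What_{j-1} + t W_j
```

Despite summing over all $l$ levels, both totals stay within a fixed multiple of the finest-level dimension $n_l$.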
Since $\hat W_l$ is the combinatory cost of the full multigrid scheme FMG(l, t), this completes the proof of the Theorem. $\square$

7. Complexity
In this Section, we determine the complexity of the noisy elliptic problem. It will be useful to explicitly specify some of the order-of-magnitude constants appearing in the estimates of the previous sections. Thus, Theorem 3.1 tells us that there exists a positive constant $C_1$ such that
\[
r_n(\delta) \ge C_1(n^{-r/d} + \delta).
\tag{7.1}
\]
Moreover, let $\tilde\varphi_{n,\delta}$ be the noisy FEMQ of degree $k \ge r$, using a quadrature rule that is exact of degree at least $2k + r - 1$. Then by Theorem 6.1, there exist positive constants $C_2$ and $C_3$ such that
\[
e(\bar\varphi_{n,\delta}, \bar N_{n,\delta}) \le C_2(n^{-r/d} + \delta)
\tag{7.2}
\]
and
\[
\operatorname{cost}(\bar\varphi_{n,\delta}, \bar N_{n,\delta}) \le C_3\, c(\delta)\, n.
\tag{7.3}
\]
We now have

Theorem 7.1. The problem complexity is bounded from below by
\[
\operatorname{comp}(\varepsilon) \ge \inf_{\delta > 0} \Biggl\{ c(\delta) \Biggl\lceil \biggl( \frac{1}{C_1^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil \Biggr\}
\tag{7.4}
\]
and from above by
\[
\operatorname{comp}(\varepsilon) \le C_3 \inf_{\delta > 0} \Biggl\{ c(\delta) \Biggl\lceil \biggl( \frac{1}{C_2^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil \Biggr\}.
\tag{7.5}
\]
The upper bound is attained by using the noisy FEMQ $\bar\varphi_{n,\delta}$ described above, with
\[
n = \Biggl\lceil \biggl( \frac{1}{C_2^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil
\tag{7.6}
\]
and with $\delta$ chosen minimizing (7.5).
Proof: To prove (7.4), suppose that $\varphi$ is an algorithm using noisy information $N_\delta$ such that $e(\varphi, N_\delta) \le \varepsilon$. Then $\operatorname{card} N_\delta \ge n$, where $n$ must be large enough to make $r_n(\delta) \le \varepsilon$. The lower bound (7.1) immediately tells us that
\[
n \ge \Biggl\lceil \biggl( \frac{1}{C_1^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil.
\]
But the cost of any algorithm using $n$ information evaluations must be at least $n\, c(\delta)$, and so
\[
\operatorname{cost}(\varphi, N_\delta) \ge c(\delta) \Biggl\lceil \biggl( \frac{1}{C_1^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil.
\]
Since $\varphi$ and $N_\delta$ are an arbitrary algorithm and noisy information such that $e(\varphi, N_\delta) \le \varepsilon$, we find that
\[
\operatorname{comp}(\varepsilon) \ge c(\delta) \Biggl\lceil \biggl( \frac{1}{C_1^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil.
\]
Finally, since $\delta > 0$ is arbitrary, we get the desired lower bound (7.4).

To prove the remainder of this Theorem, let $\delta > 0$. If (7.6) holds, then we may use (7.2) to see that $e(\bar\varphi_{n,\delta}, \bar N_{n,\delta}) \le \varepsilon$. Now using (7.3), we have
\[
\operatorname{cost}(\bar\varphi_{n,\delta}, \bar N_{n,\delta}) \le C_3\, c(\delta) \Biggl\lceil \biggl( \frac{1}{C_2^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\rceil.
\]
Choosing $\delta$ minimizing the right-hand side of this inequality, the desired result follows. $\square$

Comparing the lower and upper bounds in Theorem 7.1, we see that
\[
\operatorname{comp}(\varepsilon) = \Theta\Biggl( \inf_{\delta > 0} \Biggl\{ c(\delta) \biggl( \frac{1}{C^{-1}\varepsilon - \delta} \biggr)^{d/r} \Biggr\} \Biggr)
\tag{7.7}
\]
for some constant $C$, which allows us to determine the complexity for various cost functions $c(\delta)$. For instance, if $c$ is differentiable, then the infimum in (7.7) is attained when $\delta$ satisfies
\[
\frac{d/r}{C^{-1}\varepsilon - \delta} = -\frac{c'(\delta)}{c(\delta)}.
\]
As a specific example, consider the cost function $c(\delta) = c_s(\delta) = \delta^{-s}$, where $s > 0$.
After some calculations, we find that for $\varepsilon > 0$, the optimal $\delta$ is
\[
\delta = \frac{Crs\,\varepsilon}{rs + d};
\]
in particular, the optimal $\delta$ is proportional to $\varepsilon$, and
\[
\operatorname{comp}(\varepsilon) = \operatorname{comp}_s(\varepsilon) = \Theta\Biggl( \biggl( \frac{1}{\varepsilon} \biggr)^{d/r + s} \Biggr).
\tag{7.8}
\]
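This accuracy/cost tradeoff can be explored numerically. The sketch below brute-force minimizes $c(\delta)\lceil(1/(C^{-1}\varepsilon - \delta))^{d/r}\rceil$ from (7.7) for $c(\delta) = \delta^{-s}$; the parameter values ($d = r = 2$, $s = 1$, $C = 1$) are illustrative assumptions, not constants from the paper:

```python
import math

def total_cost(eps, delta, d=2, r=2, s=1):
    # c(delta) * ceil((1/(eps - delta))^(d/r)) with c(delta) = delta^(-s), C = 1
    return delta ** (-s) * math.ceil((1.0 / (eps - delta)) ** (d / r))

def best_delta(eps, d=2, r=2, s=1):
    # brute-force search over a fine grid in (0, eps)
    grid = [eps * k / 1000.0 for k in range(1, 1000)]
    return min(grid, key=lambda t: total_cost(eps, t, d, r, s))

# with d = r = 2 and s = 1, theory predicts delta* = rs*eps/(rs + d) = eps/2
# and a minimal cost growing like (1/eps)^(d/r + s) = (1/eps)^2
results = {eps: best_delta(eps) for eps in (1e-2, 1e-3)}
```

The numerically optimal $\delta$ tracks $\varepsilon/2$, and shrinking $\varepsilon$ tenfold multiplies the minimal cost by roughly $10^{d/r+s} = 100$, matching (7.8).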
Recall that the complexity of this problem using exact information is
\[
\operatorname{comp}_{\mathrm{exact}}(\varepsilon) = \Theta\Biggl( \biggl( \frac{1}{\varepsilon} \biggr)^{d/r} \Biggr);
\]
see [10, Section 5.5]. Let us compare the results for noisy and exact information. First, note that $\lim_{s\to 0} c_s(\delta) = 1$, i.e., the cost of obtaining $\delta$-accurate samples becomes a constant, independent of $\delta$, as $s$ tends to zero. Using (7.8), we see that $\lim_{s\to 0} \operatorname{comp}_s(\varepsilon) = \Theta(\operatorname{comp}_{\mathrm{exact}}(\varepsilon))$. Thus as the (varying) cost of noisy information approaches the (fixed) cost of exact information, the problem complexity for noisy information approaches that for exact information.

Moreover, we can determine the penalty that must be paid when noisy information is used for the elliptic problem, instead of exact information. As mentioned in the Introduction to this paper, one way of measuring this penalty is to write
\[
\operatorname{comp}(\varepsilon) = \Theta\Biggl( \biggl( \frac{1}{\varepsilon} \biggr)^{d/r'} \Biggr), \qquad \text{where } r' = \frac{d}{d + rs}\, r.
\]
Hence, the complexity of our problem using noisy information of smoothness $r$ is the same as the complexity using exact information of lesser smoothness $r'$.

Bibliography

1. Babuška, I. and Aziz, A. K., Survey lectures on the mathematical foundations of the finite element method, The Mathematical Foundations of the Finite Element Method with Applications to Partial Differential Equations (A. K. Aziz, ed.), Academic Press, New York, 1972, pp. 3–359.
2. Bramble, J., Multigrid Methods, Pitman Research Notes in Mathematics, Vol. 294, Wiley, New York, 1993.
3. Brenner, S. C. and Scott, L. R., The Mathematical Theory of Finite Element Methods, Springer, New York, 1994.
4. Ciarlet, P. G., The Finite Element Method for Elliptic Problems, North-Holland, Amsterdam, 1978.
5. Friedman, A., Partial Differential Equations, Krieger, Malabar, FL, 1976.
6. Oden, J. T. and Carey, G. F., Finite Elements: Mathematical Aspects, Prentice-Hall, Englewood Cliffs, 1983.
7. Plaskota, L., Noisy Information and Computational Complexity, Cambridge University Press, Cambridge, 1996.
8. Plaskota, L., Worst case complexity of problems with random information noise, J. Complexity (1996) (to appear).
9. Traub, J. F., Wasilkowski, G. W., and Woźniakowski, H., Information-Based Complexity, Academic Press, New York, 1988.
10. Werschulz, A. G., The Computational Complexity of Differential and Integral Equations: An Information-Based Approach, Oxford University Press, Oxford, 1991.
11. Werschulz, A. G., The complexity of multivariate elliptic problems with analytic data, J. Complexity 11 (1995), 154–173.
12. Werschulz, A. G., The complexity of the Poisson problem for spaces of bounded mixed derivatives, Lectures in Applied Mathematics: Proceedings of the Joint AMS–IMS–SIAM Summer Seminar on "Mathematics of Numerical Analysis: Real Number Algorithms," Park City, UT, July 17–August 11, 1995 (J. Renegar, M. Shub, S. Smale, eds.), 1996 (to appear).
13. Werschulz, A. G., The complexity of indefinite elliptic problems with noisy data, J. Complexity (to appear).