Complexity of local solution of multivariate integral equations

Karin Frank
Fachbereich Informatik, Universität Kaiserslautern
Postfach 3049, D-67653 Kaiserslautern

7th July 1994

Abstract

In this paper the complexity of the local solution of Fredholm integral equations is studied. For certain Sobolev classes of multivariate periodic functions with dominating mixed derivative we prove matching lower and upper bounds. The lower bound is shown using relations to s-numbers. The upper bound is proved in a constructive way, providing an implementable algorithm of optimal order based on Fourier coefficients and a hyperbolic cross approximation.

1 Introduction

One of the standard problems considered in information-based complexity theory is the solution of Fredholm integral equations of the second kind. These equations often appear in physical applications; e.g., boundary value problems can be formulated in this form. To give a general idea of the existing results, we start with a short overview.

Within the framework of information-based complexity several cases are distinguished. The first distinction is made with respect to the required result. One can either be interested in the full solution, i.e. in computing an approximation to the solution function on the whole domain, or in the local solution, i.e. in computing the value of some functional applied to the solution function. This functional can be, e.g., the value of the solution function at a single point or a weighted mean. The second distinction is made between different types of knowledge about the input data: either only values of the kernel and the right-hand side at some points are known (this is called standard information), or the values of some linear functionals of both the kernel and the right-hand side are given (this is called linear information). Note that permitting linear information admits a wider class of algorithms.

e-mail: [email protected]


The first work on the complexity of Fredholm problems in which lower bounds were shown was the paper of Emelyanov and Ilin [EI67], where the class of r-times continuously differentiable data with standard information was considered, both for full and local solution. The upper bound was shown by a two-grid iteration. For the more general case of full solution with linear information some results were obtained by Pereverzev [Per88], [Per89], [Per91]. Werschulz [Wer85] discussed the problem of full solution of integral equations with fixed kernel and varying right-hand side, both with standard and linear information on the right-hand side. The problem of local solution with linear information was first studied by Heinrich [Hei93], [Hei94]. For the class of r-times continuously differentiable data an upper bound was derived; concerning the lower bound, only an equivalence to an open problem on s-numbers could be shown. However, replacing the class $C^r$ by the Hilbertian Sobolev class $W_2^r$, this approach could be extended by Frank and Heinrich [FH94], resulting in the proof of matching upper and lower bounds.

In the present paper, the Sobolev class of periodic functions with dominating mixed derivative is discussed. This class of functions was recently studied by Pereverzev, who obtained some results on the complexity of the full solution for the case of linear information. The aim of our paper is to obtain upper and lower bounds of the same order for the complexity of local solution in the general situation of linear information.

To show the lower bound of the Theorem, we use an s-number technique, which is based on the fact that the radius of information of the problem is bounded from both sides by the so-called Gelfand numbers of some operator (see Section 3 for definitions). There exist various types of s-numbers, e.g. Gelfand numbers or Kolmogorov numbers, whose relation to linear problems is well known [TWW88]. Recently, s-number methods were applied by Heinrich [Hei93] to the problem of complexity of integral equations. Probably this method could also be used to prove the upper bound of the Theorem. However, we prefer the constructive and more intelligible way of estimating the radius of information from above by the error of a concrete algorithm of optimal order. The algorithm is based on a two-grid iteration, where the kernel is represented by a specific hyperbolic cross approximation. Approximations of that type were introduced by Babenko [Bab60].

2 Formulation of the problem and the main result

2.1 The problem

Let us first introduce some notation. Let $G = [0,1]^d$ with $d \in \mathbb{N}$, and let $L_2(G)$ be the space of functions on $G$ that are square summable with respect to the Lebesgue measure. We consider the orthonormal trigonometric basis $\{e_n(\tau)\}_{n \in \mathbb{Z}}$ in $L_2([0,1])$,

$$e_0(\tau) \equiv 1, \qquad e_n(\tau) = \sqrt{2}\cos 2\pi n\tau, \qquad e_{-n}(\tau) = \sqrt{2}\sin 2\pi n\tau \qquad (n \in \mathbb{N}).$$

Then for a given multiindex $i = (i_1,\dots,i_d) \in \mathbb{Z}^d$ the basis function $e_i \in L_2(G)$ is defined by

$$e_i(t) = e_{i_1}(t_1)\cdot\ldots\cdot e_{i_d}(t_d) \qquad (t = (t_1,\dots,t_d) \in G).$$

The Fourier coefficients of $f \in L_2(G)$ are given by $\hat f(i) = (f, e_i)$ $(i \in \mathbb{Z}^d)$. Similarly, an orthonormal basis $\{e_{ij}\}_{i,j \in \mathbb{Z}^d}$ in $L_2(G^2)$ is defined by $e_{ij}(s,t) = e_i(s)\cdot e_j(t)$ $(s,t \in G)$. Then the Fourier coefficients of $k \in L_2(G^2)$ have the form $\hat k(i,j) = (k, e_{ij})$ $(i,j \in \mathbb{Z}^d)$.

Now we shall define the class of data to be discussed. For a multiindex $i = (i_1,\dots,i_d) \in \mathbb{Z}^d$ we set

$$|i| = \max(1,|i_1|)\cdot\max(1,|i_2|)\cdot\ldots\cdot\max(1,|i_d|),$$

where $|i_k|$ denotes the ordinary absolute value of $i_k \in \mathbb{Z}$. Let $r \ge 0$. Then the function spaces $H^r(G)$ and $H^{r,r}(G^2)$ are defined as

$$H^r(G) = \Big\{ f \in L_2(G) : \|f\|_r^2 = \sum_{i \in \mathbb{Z}^d} |i|^{2r} \hat f(i)^2 < \infty \Big\},$$

$$H^{r,r}(G^2) = \Big\{ k \in L_2(G^2) : \|k\|_{r,r}^2 = \sum_{i,j \in \mathbb{Z}^d} |i|^{2r} |j|^{2r} \hat k(i,j)^2 < \infty \Big\}.$$
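To make these definitions concrete, here is a small Python sketch (not part of the paper; the coefficient table is invented for illustration) that computes the mixed weight $|i|$ and the norm $\|f\|_r$ for a function given by finitely many Fourier coefficients:

```python
import math

def weight(i):
    """Mixed weight |i| = max(1,|i_1|) * ... * max(1,|i_d|) of a multiindex."""
    p = 1
    for ik in i:
        p *= max(1, abs(ik))
    return p

def hr_norm(fhat, r):
    """||f||_r for a finitely supported table {i: fhat(i)} of Fourier coefficients."""
    return math.sqrt(sum(weight(i) ** (2 * r) * c * c for i, c in fhat.items()))

# d = 2 example with three nonzero coefficients (hypothetical values)
fhat = {(0, 0): 1.0, (1, -2): 0.5, (3, 3): 0.1}
print(weight((1, -2)))                      # -> 2
print(round(hr_norm(fhat, 1.0) ** 2, 10))   # -> 2.81 = 1*1 + 4*0.25 + 81*0.01
```

Note how the weight multiplies across coordinates, which is exactly what produces the hyperbolic-cross structure used later in the paper.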

For simplicity we will often use the notation $H^r = H^r(G)$, $H^{r,r} = H^{r,r}(G^2)$, $L_2 = L_2(G)$. Note that for $r \in \mathbb{N}$ the space $H^r(G)$ constitutes the Sobolev space of periodic functions $f$ on $[0,1]^d$ for which both $f$ and the generalized mixed derivative $\partial^{dr} f / \partial t_1^r \cdots \partial t_d^r$ belong to $L_2$. These spaces are called Sobolev spaces with dominating mixed derivative. By $H^{-r} = (H^r)'$ we denote the dual space of $H^r$. $L_2$ imbeds into $H^{-r}$ in a canonical way, and the $H^{-r}$-norm of a function $f \in L_2$ is given by

$$\|f\|_{-r}^2 = \sum_{i \in \mathbb{Z}^d} |i|^{-2r} \hat f(i)^2 .$$

Note that $L_2$ is a dense subspace of $H^{-r}$. Finally, we define subsets $F_0 \subset H^r(G)$, $K_0 \subset H^{r,r}(G^2)$ of the form

$$F_0 = \{ f \in H^r(G) : \|f\|_r \le \beta_1 \},$$
$$K_0 = \{ k \in H^{r,r}(G^2) : \|k\|_{r,r} \le \beta_2,\ \|(I - T_k)^{-1} : L_2 \to L_2\| \le \gamma \},$$

where $\beta_1, \beta_2 > 0$ and $\gamma > 1$. Now we are ready to state the problem to be studied. We consider integral equations of the form

$$u - T_k u = f, \qquad (1)$$

where $f \in F_0$, $k \in K_0$, and $T_k$ denotes the integral operator

$$T_k : L_2(G) \to L_2(G), \qquad (T_k u)(s) = \int_G k(s,t)\, u(t)\, dt .$$

The problem is to be formulated within the framework of information-based complexity theory. Here only the most important definitions are outlined; we refer to [TWW88] for further notation. Since we are interested not in the full solution of (1) but rather in the value of one linear functional of it, we have to consider the so-called local solution operator

$$S : K_0 \times F_0 \to \mathbb{R}, \qquad S(k,f) = \big( (I - T_k)^{-1} f,\ \chi \big),$$

where $\chi \in L_2$ is a given non-zero linear functional. For example, this may be a Fourier coefficient or a weighted mean. We permit linear information on the data, i.e. the information operator is defined by $N : K_0 \times F_0 \to \mathbb{R}^n$, $N = (N_1, N_2)$, with

$$N_1 k = \big( (k, g_1), \dots, (k, g_{n_1}) \big), \qquad g_k \in H^{r,r}(G^2)' \quad (k = 1,\dots,n_1),$$
$$N_2 f = \big( (f, h_1), \dots, (f, h_{n_2}) \big), \qquad h_l \in H^{-r}(G) \quad (l = 1,\dots,n_2),$$

where $n_1 + n_2 = n$. Here $H^{r,r}(G^2)'$ denotes the dual space of $H^{r,r}(G^2)$. An approximation to the exact solution $S(k,f)$ is to be computed. An arbitrary mapping $\varphi : \mathbb{R}^n \to \mathbb{R}$, which combines the information $N(k,f)$ and computes an approximation $\varphi(N(k,f))$ to $S$, is called an algorithm. The error of an approximation $\varphi(N(k,f))$ is defined by

$$e(S, N, \varphi) = \sup_{f \in F_0,\, k \in K_0} |S(k,f) - \varphi(N(k,f))| .$$

Let us agree on the model of computation. We assume that standard arithmetical operations, including comparisons, can be performed with unit cost, while linear functionals of the input data can be computed with constant cost $c(d)$; imagine a subroutine which supplies the value of one linear functional on the data.


2.2 The main result

Our main theorem provides estimates for the radius of information of the given problem. This quantity describes the minimal error which can be obtained by any algorithm $\varphi$ using at most $n$ information functionals:

$$e_n(S) = \inf_{N : K_0 \times F_0 \to \mathbb{R}^n}\ \inf_{\varphi : \mathbb{R}^n \to \mathbb{R}}\ e(S, N, \varphi).$$

This is the crucial quantity to be analyzed in information-based complexity. Since any algorithm of cost $n$ can use at most $n$ information functionals due to the model of computation, $e_n(S)$ serves as a general lower bound for the error of any algorithm of cost $n$.

Theorem 1. Let $r > 0$. For each $\chi \in L_2(G)$, $\chi \ne 0$, there exist constants $c_1, c_2 > 0$ such that for all $n \in \mathbb{N}$

$$c_1 \cdot n^{-2r} \log^{2r(2d-1)} n \ \le\ e_n(S) \ \le\ c_2 \cdot n^{-2r} \log^{2r(2d-1)} n .$$

3 Proof of the lower bound

To prove the lower bound of the Theorem, we are going to use a method of Heinrich, which was originally developed for the class of $r$-times continuously differentiable data (see [Hei93]) and extended to other situations in [FH94]. To this end, let us define the mapping $\Phi$ by

$$\Phi : H^{r,r}(G^2) \to L(H^r(G), H^{-r}(G)), \qquad \Phi k = T_k : H^r(G) \to H^{-r}(G),$$

and introduce the so-called Gelfand numbers of an operator. Given two Banach spaces $E$ and $F$, let $B_E$ denote the unit ball of $E$ and $L(E,F)$ the space of all bounded linear operators from $E$ to $F$. Then for an operator $T \in L(E,F)$ and $n \in \mathbb{N}$ the $n$-th Gelfand number of $T$ is defined by

$$c_n(T) = \inf_{\lambda_1,\dots,\lambda_{n-1} \in E'}\ \sup_{\substack{x \in B_E \\ \lambda_1(x) = \dots = \lambda_{n-1}(x) = 0}} \|Tx\| .$$

For details on these numbers we refer to [Pie78], [Pie87]. The relation of Gelfand numbers to the radius of information for linear problems with arbitrary linear information is well known [TWW88]. Since in our case the solution operator $S$ is nonlinear, this result is not applicable, and we need the theorem below, which states an equivalence between the radius of information of our problem and the Gelfand numbers of the operator $\Phi$. The proof of this theorem will not be given in detail. Instead, we shall show several lemmas which make the proof of [Hei93] work in this case as well.

Theorem 2. There are constants $a_1, a_2 > 0$ such that for all $n \in \mathbb{N}$:

$$a_1 \cdot c_{3n+2}(\Phi) \ \le\ e_{3n+1}(S) \ \le\ a_2 \cdot c_{n+1}(\Phi) .$$

First, an agreement about the notation of constants is to be made. If $a(x)$ and $b(x)$ are functions defined on some set $X$, the notation $a(x) \preceq b(x)$ means that there is a constant $c > 0$ such that $a(x) \le c\, b(x)$ for all $x \in X$. One writes $a(x) \asymp b(x)$ if $a(x) \preceq b(x)$ and $b(x) \preceq a(x)$. For simplicity, we will often use the same symbol for possibly different constants.

Lemma 1. There are constants $c_1, c_2 > 0$ such that for all $k \in K_0$:

$$c_1 \cdot B_{H^r} \subset (I - T_k)^{-1} B_{H^r} \subset c_2 \cdot B_{H^r} .$$

Proof: By assumption,

$$\|(I - T_k)^{-1} : L_2 \to L_2\| \le \gamma .$$

Furthermore,

$$\|T_k : L_2 \to H^r\| \le c, \qquad (2)$$

which can be proved using the Fourier coefficients. For this purpose, let $(\sigma_{ij})_{i,j \in \mathbb{Z}^d}$ be the sequence of numbers with $\sigma_{ij} = |i|^r |j|^r \hat k(i,j)$. Then $\|k\|_{r,r}^2 = \sum_{i,j \in \mathbb{Z}^d} \sigma_{ij}^2 \le \beta_2^2$ and

$$\|T_k f\|_r^2 = \sum_{i \in \mathbb{Z}^d} |i|^{2r} \Big( \sum_{j \in \mathbb{Z}^d} \hat f(j)\, \hat k(i,j) \Big)^2 = \sum_{i \in \mathbb{Z}^d} |i|^{2r} \Big( \sum_{j \in \mathbb{Z}^d} \hat f(j)\, |i|^{-r} |j|^{-r} \sigma_{ij} \Big)^2 = \sum_{i \in \mathbb{Z}^d} \Big( \sum_{j \in \mathbb{Z}^d} \hat f(j)\, |j|^{-r} \sigma_{ij} \Big)^2 \le \|f\|_2^2\, \beta_2^2 ,$$

where the last step uses the Cauchy-Schwarz inequality and $|j|^{-r} \le 1$. Together with the relation $(I - T_k)^{-1} = I + T_k (I - T_k)^{-1}$, these inequalities imply the boundedness of the operator $(I - T_k)^{-1} : H^r \to H^r$ and in that way the right-hand side of the lemma. The left-hand side follows from the continuity of the linear operator $(I - T_k) : H^r \to H^r$, which is a consequence of (2). □

Lemma 2. There are constants $c_1, c_2 > 0$ such that

$$c_1 \cdot B_{H^r} \subset \{ T_k^* \chi : k \in B_{H^{r,r}} \} \subset c_2 \cdot B_{H^r} ,$$

where $T_k^*$ denotes the adjoint operator of $T_k$.

Proof: Since $\chi \in L_2$, the right-hand side follows from

$$\|T_k^* : L_2 \to H^r\| \le c ,$$

which can be derived from inequality (2) using a symmetry argument. The left-hand side can be shown by considering kernels $k \in H^{r,r}$ of the form $k(s,t) = e_{i_0}(s)\, f(t)$, where $f \in H^r$ and $i_0 \in \mathbb{Z}^d$ is a fixed index such that $(e_{i_0}, \chi) \ne 0$. □

Lemma 3.
(i) There is a $c > 0$ such that $\{ (I - T_k)^{-1} T_k : k \in K_0 \} \subset \{ T_h : h \in c \cdot B_{H^{r,r}} \}$.
(ii) For each $\delta > 0$ there is a $c > 0$ such that $\{ T_h : h \in c \cdot B_{H^{r,r}} \} \subset \{ (I - T_k)^{-1} T_k : k \in \delta \cdot B_{H^{r,r}} \}$.

Proof: To prove the first statement, the function $k_j \in L_2(G)$ $(j \in \mathbb{Z}^d)$ is defined by the relation

$$\hat k_j(i) = \hat k(i,j) \qquad (i \in \mathbb{Z}^d).$$

Then

$$k_j(s) = T_k e_j = \int_G k(s,t)\, e_j(t)\, dt . \qquad (3)$$

Notice that

$$\|k\|_{r,r}^2 = \sum_{i,j \in \mathbb{Z}^d} |i|^{2r} |j|^{2r} \hat k(i,j)^2 = \sum_{j \in \mathbb{Z}^d} |j|^{2r} \sum_{i \in \mathbb{Z}^d} |i|^{2r} \hat k(i,j)^2 = \sum_{j \in \mathbb{Z}^d} |j|^{2r} \|k_j\|_r^2 .$$

Given $k \in K_0$, let $h \in L_2(G^2)$ be the function defined by $(I - T_k)^{-1} T_k = T_h$. Combining this with (3) yields

$$h_j = T_h e_j = (I - T_k)^{-1} T_k e_j = (I - T_k)^{-1} k_j$$

and so

$$\|h\|_{r,r}^2 = \sum_{j \in \mathbb{Z}^d} |j|^{2r} \|h_j\|_r^2 \preceq \sum_{j \in \mathbb{Z}^d} |j|^{2r} \|k_j\|_r^2 = \|k\|_{r,r}^2 .$$

This implies the first statement. To prove the second one, we take $h \in c \cdot B_{H^{r,r}}$ and define $k \in L_2(G^2)$ by $T_k = T_h (I + T_h)^{-1}$. Then $T_h = (I - T_k)^{-1} T_k$, and for a well-chosen $c$ the same argument as above gives $k \in \delta \cdot B_{H^{r,r}}$. □

Now we shall estimate the Gelfand numbers of $\Phi$ to show the lower bound. To this end, operators

$$W : l_2(\mathbb{Z}^{2d}) \to H^{r,r}(G^2), \qquad V : L(H^r(G), H^{-r}(G)) \to l_\infty(\mathbb{Z}^{2d})$$

are constructed and composed with $\Phi$ to a diagonal operator

$$D : l_2(\mathbb{Z}^{2d}) \to l_\infty(\mathbb{Z}^{2d}), \qquad D = V \Phi W . \qquad (4)$$

Then the Gelfand numbers of $\Phi$ are estimated by the Gelfand numbers of $D$, which are much easier to determine. Let $\{b_{ij}\}_{i,j \in \mathbb{Z}^d}$ be the unit vector basis of $l_2(\mathbb{Z}^{2d})$ and define

$$W b_{ij} = |i|^{-r} |j|^{-r}\, e_i(s) \cdot e_j(t), \qquad V(T) = (\tau_{ij})_{i,j \in \mathbb{Z}^d},$$

where

$$\tau_{ij} = |i|^{-r} |j|^{-r} (T e_j, e_i) .$$

Hence the operator $W$ is an isometry, so $\|W\| = 1$, and the operator $V$ is an injection with $\|V\| \le 1$. The operator $D$ defined by (4) has the following form:

$$D b_{ij} = \delta_{ij} \cdot b_{ij}, \qquad \delta_{ij} = |i|^{-2r} |j|^{-2r} .$$

Now we define a nonincreasing sequence $\delta_1 \ge \delta_2 \ge \dots \ge \delta_n \ge \dots$ by

$$\delta_n = \inf\{ \varepsilon : |\{(i,j) : \delta_{ij} \ge \varepsilon\}| < n \} = \max_{A \subset \mathbb{Z}^{2d},\, |A| = n}\ \min\{ \delta_{ij} : (i,j) \in A \} .$$

For our purpose we need the set $A_n$, which is defined by

$$A_n = \{ (i,j) : \delta_{ij} \ge n^{-4r} \} = \{ (i,j) : |i|^{-2r} |j|^{-2r} \ge n^{-4r} \} = \{ (i,j) : |i| \cdot |j| \le n^2 \} .$$

From the definition of $\{\delta_n\}_{n \in \mathbb{N}}$ it follows immediately that

$$\delta_{|A_n|} \ge n^{-4r} > \delta_{|A_n|+1} .$$

Moreover, as can be seen easily, the cardinality of the set $A_n$ is

$$|A_n| \asymp n^2 \log^{2d-1} n .$$

Hence

$$\delta_{[n^2 \log^{2d-1} n]} \succeq n^{-4r},$$

which can be transformed in a standard way into

$$\delta_N \succeq N^{-2r} (\log N)^{2r(2d-1)} . \qquad (5)$$

From Theorem 11.11.7 in [Pie78] it follows that

$$c_n\big( D : l_2(\mathbb{Z}^{2d}) \to l_\infty(\mathbb{Z}^{2d}) \big) \asymp \delta_n .$$

Furthermore, by basic properties of Gelfand numbers, $c_l(D) \le \|V\|\, c_l(\Phi)\, \|W\|$ and thus

$$c_l(\Phi) \ge \frac{c_l(D)}{\|V\| \cdot \|W\|}$$

for all $l \in \mathbb{N}$. Finally, using (5) we get

$$c_n(\Phi) \succeq n^{-2r} (\log n)^{2r(2d-1)}$$

for all $n \in \mathbb{N}$. This proves the lower bound of the Theorem.
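The set $A_n = \{(i,j) : |i| \cdot |j| \le n^2\}$ used above is a hyperbolic cross in the index space. The following Python sketch (illustrative only, for $d = 1$; not part of the paper) enumerates it by brute force, so the growth of $|A_n|$ can be observed numerically:

```python
def weight(i):
    """|i| = max(1, |i|) for d = 1."""
    return max(1, abs(i))

def card_An(n):
    """#{(i, j) in Z x Z : |i| * |j| <= n^2}; each component is bounded by n^2."""
    m = n * n
    return sum(1
               for i in range(-m, m + 1)
               for j in range(-m, m + 1)
               if weight(i) * weight(j) <= m)

print(card_An(2))                         # -> 49
print([card_An(n) for n in (2, 4, 8)])    # growth roughly like n^2 log n
```

The brute-force bound on the loop range is valid because $|i| \cdot |j| \le n^2$ forces each factor to be at most $n^2$.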

4 Proof of the upper bound

The upper bound is proved by providing a concrete algorithm and estimating the number of required information functionals, the error, and the complexity of the method. Our algorithm constitutes a modification of the algorithm used in [FH94]. The structure of the set of Fourier coefficients taking part in the approximation of the kernel is essentially changed according to the different function spaces considered. Hence the basic index sets are to be redefined and new norm estimates have to be derived.

Let $k \in K_0$, $f \in F_0$ be given. Fix $n \in \mathbb{N}$ and put

$$A_n = \{ i \in \mathbb{Z}^d : |i| \le n^{1/3} \},$$
$$B_n = \{ i \in \mathbb{Z}^d : |i| \le n^2 \},$$
$$C_n = \{ (i,j) \in \mathbb{Z}^{2d} : \max(|i|, |j|) \le n^{1/3} \},$$
$$D_n = \{ (i,j) \in \mathbb{Z}^{2d} : |i| \cdot |j| \le n^2 \} .$$

Remember that $|i|$ is defined by $|i| = \max(1,|i_1|) \cdot \max(1,|i_2|) \cdot \ldots \cdot \max(1,|i_d|)$, in contrast to [FH94]. So the cardinalities of these sets are

$$|A_n| \asymp n^{1/3} \log^{d-1} n, \qquad |B_n| \asymp n^2 \log^{d-1} n, \qquad |C_n| \asymp n^{2/3} \log^{2(d-1)} n, \qquad |D_n| \asymp n^2 \log^{2d-1} n .$$

Let us shortly recall the idea of the algorithm. The projections $g, h$ and $f_0$ of $k$ and $f$, respectively, are defined by

$$f_0 \in H^r(G) : \quad \hat f_0(i) = \begin{cases} \hat f(i) & \text{if } i \in B_n \\ 0 & \text{otherwise,} \end{cases}$$
$$g \in H^{r,r}(G^2) : \quad \hat g(i,j) = \begin{cases} \hat k(i,j) & \text{if } (i,j) \in C_n \\ 0 & \text{otherwise,} \end{cases}$$
$$h \in H^{r,r}(G^2) : \quad \hat h(i,j) = \begin{cases} \hat k(i,j) & \text{if } (i,j) \in D_n \\ 0 & \text{otherwise.} \end{cases} \qquad (6)$$

First, the algorithm computes an approximation $v$ to $(I - T_k)^{-1} f$ by a two-grid iteration, setting $v_0 = 0$ and determining $v_l$ $(l = 1, \dots, l_0)$ from

$$(I - T_g)\, v_l = f_0 + (T_h - T_g)\, v_{l-1} . \qquad (7)$$

Then, taking $v = v_{l_0}$, the final approximation is calculated by

$$\Phi_n(k,f) = (f, \chi) + (v, T_k^* \chi) . \qquad (8)$$
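As an illustration of these index sets (a sketch for $d = 1$ with a hypothetical value of $n$; not from the paper), one can enumerate them directly and check the inclusions $A_n \subset B_n$ and $C_n \subset D_n$ on which the algorithm relies:

```python
def w(i):
    # mixed weight for d = 1: |i| = max(1, |i|)
    return max(1, abs(i))

n = 8
tA = round(n ** (1 / 3))   # threshold n^(1/3) = 2
tB = n * n                 # threshold n^2     = 64

A = {i for i in range(-tB, tB + 1) if w(i) <= tA}
B = {i for i in range(-tB, tB + 1) if w(i) <= tB}
C = {(i, j) for i in A for j in A}                        # max(|i|,|j|) <= n^(1/3)
D = {(i, j) for i in B for j in B if w(i) * w(j) <= tB}   # |i| * |j|  <= n^2

print(A <= B, C <= D)                   # -> True True
print(len(A), len(C), len(B), len(D))   # A and C are much smaller than B and D
```

The size gap between $C_n$ and $D_n$ is what makes the two-grid splitting pay off: only the small system over $A_n$ has to be solved exactly.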

In this case, $l_0 = 12$ iterations are sufficient. The unique solvability of (7) follows from Lemma 6(ii) below. In terms of Fourier coefficients the algorithm looks as follows $(l = 1, \dots, l_0)$:

$$\hat v_l(i) - \sum_{j \in A_n} \hat k(i,j)\, \hat v_l(j) = \hat f(i) + \sum_{j : (i,j) \in D_n \setminus C_n} \hat k(i,j)\, \hat v_{l-1}(j) \qquad (9)$$

for $i \in A_n$, and

$$\hat v_l(i) = \hat f(i) + \sum_{j : (i,j) \in D_n} \hat k(i,j)\, \hat v_{l-1}(j) \qquad (10)$$

for $i \in B_n \setminus A_n$. Finally, (8) turns into

$$\Phi_n(k,f) = (f, \chi) + \sum_{j \in B_n} \hat k(\chi, j)\, \hat v(j), \qquad (11)$$

where

$$\hat k(\chi, j) = (k,\ \chi \otimes e_j) = \int_{G^2} k(s,t)\, \chi(s)\, e_j(t)\, ds\, dt .$$

Since for $|i| > n^2$ both $\hat f_0(i) = 0$ and $\hat g(i,j) = \hat h(i,j) = 0$ $(j \in \mathbb{Z}^d)$, it follows from (7) that $\hat v_l(i) = 0$ for $i \notin B_n$. Hence the system (9), (10) is equivalent to equation (7). Note that the system of linear equations (9) is to be solved only for a comparatively small set of unknowns $\hat v_l(i)$ $(i \in A_n)$, whereas the main part of the Fourier coefficients $\hat v_l(i)$ $(i \in B_n \setminus A_n)$ can be computed directly from (10). So the number of operations needed for the solution of the system (9) should not exceed the number of operations required for the computation of the remaining Fourier coefficients in (10).
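The two-grid structure of (7) can be mimicked on a small matrix stand-in. In the sketch below (a hypothetical 3x3 "kernel" matrix acting on Fourier coefficients; the numbers are chosen only so that the iteration contracts, and the example is not from the paper), the full matrix plays the role of $T_h$ and its diagonal the coarse part $T_g$, and the $l_0 = 12$ iterations of (7) are run:

```python
import numpy as np

# Hypothetical stand-ins: K plays T_h (fine approximation), its diagonal
# plays T_g (coarse part). The reference solution solves (I - T_h) w = f.
K = np.array([[0.30, 0.10, 0.05],
              [0.10, 0.20, 0.02],
              [0.05, 0.02, 0.10]])
G = np.diag(np.diag(K))          # coarse part T_g
f = np.array([1.0, 0.5, 0.25])
I = np.eye(3)

v = np.zeros(3)                  # v_0 = 0
for _ in range(12):              # l_0 = 12, as in the text
    v = np.linalg.solve(I - G, f + (K - G) @ v)   # iteration (7)

w = np.linalg.solve(I - K, f)    # exact solution of the "fine" equation
print(np.max(np.abs(v - w)) < 1e-6)   # -> True: the iteration has converged
```

Note that each step only solves against the cheap coarse operator $I - T_g$, mirroring the fact that (9) is a small system while (10) is a direct update.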

Now we shall estimate the number of information functionals and of operations needed in the computational process (7) and (8). The information about the functions $k, f$ required in (9), (10), and (11) can be collected in the information operator $N = (N_1, N_2)$:

$$N_1 k = \Big( \big( \hat k(i,j) \big)_{(i,j) \in D_n},\ \big( \hat k(\chi, j) \big)_{j \in B_n} \Big),$$
$$N_2 f = \Big( \big( \hat f(i) \big)_{i \in B_n},\ (f, \chi) \Big) .$$

Consequently, a total of $|D_n| = O(n^2 \log^{2d-1} n)$ information functionals is needed. The solution of the system of linear equations (9), e.g. by the Gaussian method, requires $O(|A_n|^3) = O(n \log^{3(d-1)} n)$ operations; the computation of the remaining Fourier coefficients in (10) can be performed in $O(|D_n|) = O(n^2 \log^{2d-1} n)$ operations. The final approximation (11) requires $O(|B_n|) = O(n^2 \log^{d-1} n)$ operations. So we have a total of $O(n^2 \log^{2d-1} n)$ operations and $O(n^2 \log^{2d-1} n)$ information functionals necessary for the computation of (7) and (8).

The algorithm $\varphi$ is simply defined by $\varphi(N_1 k, N_2 f) = \Phi_n(k,f)$, and it will be of optimal order if its error satisfies

$$e(S, N, \varphi) \preceq n^{-4r} \qquad \big( \mathrm{card}(N) = [n^2 \log^{2d-1} n] \big) .$$

Before we start the proof of this error bound, we rewrite the algorithm in a more convenient form. Let

$$Y = (I - T_g)^{-1} (T_h - T_g), \qquad Z = (I - T_g)^{-1} (T_k - T_h), \qquad w = (I - T_k)^{-1} f_0 . \qquad (12)$$

Then

$$u - w = (I - T_k)^{-1} (f - f_0) . \qquad (13)$$

From (12) and (7) we derive

$$(I - T_g)\, w = f_0 + (T_k - T_g)\, w ,$$
$$(I - T_g)(w - v_l) = (T_h - T_g)(w - v_{l-1}) + (T_k - T_h)\, w .$$

This implies

$$w - v_l = Y (w - v_{l-1}) + Z w \qquad (l = 1, \dots, l_0)$$

and so, for $l_0 = 12$ and $v = v_{l_0}$,

$$w - v = Y^{12} w + \sum_{l=0}^{11} Y^l Z w . \qquad (14)$$

IN

0

Lemma 4 (i) kTk ? Th : Hr ! H?r k  n? r (ii) kTk ? Tg : L ! L k  n? r (iii) kTh ? Tg : L ! L k  n? r : 4

2

2

3

2

2

3

12

Proof: In detail we shall give only the proof of the rst statement. The other two can be shown in a similar way. Let f 2 Hr (G), kf kr  1, and de ne ij ; j such that k^(i; j ) = |i|?r |j |?r ij ; f^(j ) = |j |?r j : Since k 2 K , f 2 BHr , it follows that

X

i;j 2Z d

0

ij  ; 2

2

X

j 2Z d

j  1 : 2

0 1   X k^(i; j ) ? ^h(i; j ) f^(j )A |i|? r @ d i2Z d 1 0j2Z X ? r@ X ^ ^ A = |i| k(i; j )f (j ) d j i;j 2 = D i2Z n 1 0 X ? r@ X |i|?r |j |?r |j |?r ij j A = |i| i2Z d 1 0j i;j 2=Dn X ? r@ X |j |? r ij j A = |i| j i;j 2=Dn i2Z d 0 1 X X @ ij j A  i;jmax |i|? r |j |? r  2=Dn

Now it turns out that k(Tk ? Th)f k?r = 2

X

2

2

2

2

:(

)

2

2

:(

)

2

2

4

:(

)

2

4

(

4

)

i2Z d

 n? r : 8

j 2Z d

2

2

This proves the rst statement.

Lemma 5. For each $T = T_k, T_h, T_g$ the following estimates hold:
(i) $\|T : L_2 \to H^r\| \le c$
(ii) $\|T : H^{-r} \to L_2\| \le c$

Proof: For $T = T_k$ the first statement was already shown in the proof of Lemma 1; the proof for $T = T_h, T_g$ is similar. (ii) follows from (i) by duality: since the norm $\|\cdot\|_{r,r}$ is symmetric in both variables, (i) holds for the adjoint operators as well, and $\|T : H^{-r} \to L_2\| = \|T^* : L_2 \to H^r\| \le c$. □

Lemma 6. There are constants $c_1, c_2, c_3, c_4 > 0$ and an $n_0 \in \mathbb{N}$ such that
(i) $\|(I - T_k)^{-1} : H^r \to H^r\| \le c_1$ and $\|(I - T_k)^{-1} : H^{-r} \to H^{-r}\| \le c_2$;
(ii) for $n \ge n_0$, $\|(I - T_g)^{-1} : L_2 \to L_2\| \le c_3$;
(iii) for $n \ge n_0$, $\|(I - T_g)^{-1} : H^{-r} \to H^{-r}\| \le c_4$.

Proof: The first statement of (i) follows from the relation $(I - T_k)^{-1} = I + T_k (I - T_k)^{-1}$ and from Lemma 5(i); the second one is implied by duality. To show the second part of the lemma, we conclude from Lemma 4(ii) that there is an $n_0$ with

$$\forall\, n \ge n_0 : \quad \|T_k - T_g : L_2 \to L_2\| \le c \cdot n^{-2r/3} \le \frac{1}{2\gamma} .$$

Moreover, for $k \in K_0$ we have $\|(I - T_k)^{-1} : L_2 \to L_2\| \le \gamma$. Since

$$(I - T_g)^{-1} = \big( I + (I - T_k)^{-1} (T_k - T_g) \big)^{-1} (I - T_k)^{-1} ,$$

we get $\|(I - T_g)^{-1} : L_2 \to L_2\| \le 2\gamma$. Using Lemmas 5(ii) and 6(ii) and the relation $(I - T_g)^{-1} = I + (I - T_g)^{-1} T_g$, we derive the third statement. □

Corollary 1. For $n \ge n_0$:

$$\|Y : L_2 \to L_2\| \preceq n^{-2r/3}, \qquad \|Y : H^{-r} \to H^{-r}\| \le c, \qquad \|Z : H^r \to H^{-r}\| \preceq n^{-4r} .$$

Now we are ready to accomplish the proof of the upper bound. It follows from the definition of $f_0$ that

$$\|f - f_0\|_{-r} \le c \cdot n^{-4r} .$$

Lemma 6(i) gives

$$\|u - w\|_{-r} = \|(I - T_k)^{-1} (f - f_0)\|_{-r} \le c \cdot \|f - f_0\|_{-r} \preceq n^{-4r} . \qquad (15)$$

Moreover,

$$\|w\|_r = \|(I - T_k)^{-1} f_0\|_r \le c \cdot \|f_0\|_r \le c .$$

From Corollary 1 and equation (14) we deduce

$$\|w - v\|_{-r} \le \|Y^{12} w\|_{-r} + \Big\| \sum_{l=0}^{11} Y^l : H^{-r} \to H^{-r} \Big\| \cdot \|Z : H^r \to H^{-r}\| \cdot \|w\|_r \preceq c \cdot n^{-(2r/3) \cdot 12} + c \cdot n^{-4r} \preceq c \cdot n^{-4r} .$$

Together with (15) this gives

$$\|u - v\|_{-r} \preceq c \cdot n^{-4r} .$$

Finally, we get

$$|S(k,f) - \Phi_n(k,f)| = \big| \big( (I - T_k)^{-1} f, \chi \big) - (f, \chi) - (v, T_k^* \chi) \big| = \big| (f, \chi) + \big( T_k (I - T_k)^{-1} f, \chi \big) - (f, \chi) - (v, T_k^* \chi) \big|$$
$$= \big| \big( (I - T_k)^{-1} f,\ T_k^* \chi \big) - (v, T_k^* \chi) \big| = |(u - v,\ T_k^* \chi)| \le \|u - v\|_{-r}\, \|T_k^* \chi\|_r \le c \cdot \|u - v\|_{-r} \preceq c \cdot n^{-4r} .$$

This completes the proof of the Theorem.

5 Summary

In the present paper, the complexity of local solution of Fredholm integral equations for a Sobolev class of multivariate periodic functions with dominating mixed derivative is discussed. Matching upper and lower bounds of order $O(n^{-2r} \log^{2r(2d-1)} n)$ are derived. Consequently, the stated problem is tractable in $d$, i.e. the complexity does not increase exponentially with the dimension [Woz93].

To prove the lower bound, an s-number technique was used which had been applied earlier to two other special classes of functions. This method thus seems to be a powerful means for estimating the radius of information. We are currently trying to find more general conditions which guarantee the applicability of this technique to a wider class of problems. The upper bound was shown by constructing an implementable algorithm of optimal order, based on a hyperbolic cross approximation of the kernel. Usually, high-dimensional problems are the domain of Monte Carlo algorithms; it would be interesting to compare our deterministic algorithm with stochastic ones. Numerical experiments will be reported in a forthcoming paper.

References

[Bab60] K. I. Babenko: Approximation of a certain class of periodic functions of several variables by trigonometric polynomials (in Russian). Dokl. Akad. Nauk SSSR 132 (1960), 982-985.

[EI67] K. V. Emelyanov, A. M. Ilin: On the number of arithmetic operations necessary for the approximate solution of Fredholm integral equations of the second kind (in Russian). Zh. Vychisl. Mat. i Mat. Fiz. 7 (1967), 905-910.

[FH94] K. Frank, S. Heinrich: Complexity of local solution of integral equations. Beiträge zur Angewandten Analysis und Informatik (ed. by E. Schock), 1994.

[Hei93] S. Heinrich: Complexity of integral equations and relations to s-numbers. J. Complexity 9 (1993), 141-153.

[Hei94] S. Heinrich: Random approximation in numerical analysis. Proceedings of the Conference "Functional Analysis", Essen 1991 (ed. by K. D. Bierstedt, A. Pietsch, W. M. Ruess, D. Vogt), Marcel Dekker, 1994.

[Per88] S. V. Pereverzev: On the complexity of the problem of finding solutions of Fredholm equations of the second kind with differentiable kernels I (in Russian). Ukrain. Mat. Zh. 40 (1988), 84-91.

[Per89] S. V. Pereverzev: On the complexity of the problem of finding solutions of Fredholm equations of the second kind with differentiable kernels II (in Russian). Ukrain. Mat. Zh. 41 (1989), 189-193.

[Per91] S. V. Pereverzev: Hyperbolic cross and complexity of the approximate solution of Fredholm equations of the second kind with differentiable kernels (in Russian). Sibir. Mat. Zh. 32 (1991), 107-115.

[Pie78] A. Pietsch: Operator Ideals. Deutscher Verlag der Wissenschaften, Berlin 1978.

[Pie87] A. Pietsch: Eigenvalues and s-Numbers. Cambridge University Press, Cambridge 1987.

[TWW88] J. F. Traub, G. W. Wasilkowski, H. Wozniakowski: Information-Based Complexity. Academic Press, New York 1988.

[Wer85] A. G. Werschulz: What is the complexity of the Fredholm problem of the second kind? J. Integral Equations 9 (1985), 213-241.

[Woz93] H. Wozniakowski: Tractability and strong tractability of linear multivariate problems. To appear, 1993.