Brakhage's implicit iteration method and Information Complexity of equations with operators having closed range

Sergei V. Pereverzev and Eberhard Schock
Abstract
An a posteriori stopping rule, based on monitoring the norm of the second residual, is introduced for Brakhage's implicit nonstationary iteration method, applied to ill-posed problems involving linear operators with closed range. It is also shown that for some classes of equations with such operators the algorithm combining Brakhage's method with a new discretization scheme is order optimal in the sense of Information Complexity.
1 Introduction

The present paper is devoted to methods for the approximate solution of linear operator equations of the form
\[
Tx = y \tag{1.1}
\]
with operators $T$ having closed range. The equations (1.1) will be considered in a Hilbert space $X$ with the usual inner product $(\cdot,\cdot)$ and the usual norm $\|\cdot\|_X$. It is known [9, p. 153] that an operator $T \in L(X,X)$ has closed $\mathrm{Range}(T)$ if and only if for some $\mu > 0$
\[
\inf\left\{ \frac{\|Tu\|_X}{\|u\|_X} :\ u \in X,\ u \perp \mathrm{Ker}(T),\ u \neq 0 \right\} \ge \mu. \tag{1.2}
\]
Let $e_1, e_2, \ldots, e_m, \ldots$ be some orthonormal basis of the Hilbert space $X$ and let $P_m$ be the orthogonal projector onto $\mathrm{span}\{e_1, e_2, \ldots, e_m\}$, that is,
\[
P_m u = \sum_{i=1}^{m} (u, e_i)\,e_i.
\]
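The approximation power of the projectors $P_m$ on smooth elements can be illustrated numerically. The sketch below is only an illustration under an assumed coefficient decay $(u, e_i) = i^{-(r+1)}$ (our own model choice, not part of the paper's setting); it checks that the truncation error behaves like $m^{-r}$:

```python
import numpy as np

# A hypothetical smooth element u of X, given by its basis coefficients
# (u, e_i) = i^{-(r+1)}; such decay mimics membership in X^r with r = 1.
r = 1
N = 100_000
i = np.arange(1, N + 1, dtype=float)
coeff = i ** (-(r + 1))

def tail_norm(m):
    """||(I - P_m) u||_X: the l2-norm of the coefficients beyond index m."""
    return float(np.sqrt(np.sum(coeff[m:] ** 2)))

# Doubling m shrinks the truncation error by at least the factor 2^{-r},
# in line with an O(m^{-r}) decay on smooth elements.
for m in (100, 200, 400):
    assert tail_norm(2 * m) < tail_norm(m) * 2.0 ** (-r)
```

Here `tail_norm(m)` plays the role of $\|(I - P_m)u\|_X$ for the assumed coefficient sequence.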
For $r \in (0, \infty)$ we let $X^r$ denote a linear subspace of $X$ which is equipped with the norm $\|u\|_{X^r} := \|u\|_X + \|D^r u\|_X$, where $D^r$ is some linear (unbounded) operator acting from $X^r$ to $X$, and for any $m = 1, 2, \ldots$
\[
\|I - P_m\|_{X^r \to X} \le c_r m^{-r}, \tag{1.3}
\]
where $I$ is the identity operator and the constant $c_r$ is independent of $m$.

We assume that the operators $T$ of equations (1.1) have a special structure. Namely,
\[
T = B + A, \tag{1.4}
\]
where $B$ is some fixed operator such that $B, B^* \in L(X,X) \cap L(X^r, X^r)$ and
\[
A \in H^r_{\bar\gamma} := \bigl\{ A :\ \|A\|_{X \to X^r} \le \gamma_1,\ \|A^*\|_{X \to X^r} \le \gamma_2,\ \|(D^r A)^*\|_{X \to X^r} \le \gamma_3 \bigr\}, \qquad \bar\gamma = (\gamma_1, \gamma_2, \gamma_3),
\]
where $L^*$ denotes the adjoint of $L$, i.e. $(u, Lv) = (L^* u, v)$ for any $u, v \in X$. For fixed $B$ such that $B, B^* \in L(X,X) \cap L(X^r, X^r)$ and $\mu > 0$ we denote by $K^r_{B,\bar\gamma,\mu}$ the set of operators $T \in B + H^r_{\bar\gamma}$ of the form (1.4) which satisfy the condition (1.2).

Consider some examples of operators $T \in K^r_{B,\bar\gamma,\mu}$.

Example 1. As the Hilbert space $X$ we take the space $L_2(0,1)$ of square-summable functions on $(0,1)$, and for $r = 1$ as $X^r$ we take the Sobolev space $W^1_2(0,1)$ of functions $f$ having square-summable derivatives $f' \in L_2(0,1)$. In this case $D^r = \frac{d}{dt}$. In Chapter 6 of [9], as an example of an equation (1.1) with an operator having closed range, one takes the Fredholm problem of the second kind
\[
Tx(t) := x(t) - \lambda^{-1} \int_0^1 a(t,\tau)\,x(\tau)\,d\tau = y(t), \tag{1.5}
\]
where $\lambda$ is an eigenvalue of the integral operator with kernel $a(t,\tau)$. If $a(t,\tau)$ has mixed square-summable partial derivatives, then the integral operator from (1.5)
belongs to $H^r_{\bar\gamma}$ for some $\bar\gamma$ and $r = 1$. It means that the operator $T$ of equation (1.5) belongs to $K^r_{B,\bar\gamma,\mu}$ for $r = 1$, $B = I$ and some $\mu > 0$.

Example 2. Let $X^r = W^r_2$ be the Sobolev space of $2\pi$-periodic functions having derivatives up to order $r$ which are square-summable on $[0, 2\pi]$. In this case $X = L_2(0, 2\pi)$, $D^r = \frac{d^r}{dt^r}$, and the orthonormal basis $\{e_i\}$ consists of trigonometric functions. In [6, §3.4.3] one considers singular integral equations with Hilbert kernel
\[
Tx(t) := a_1(t)x(t) + \frac{a_2(t)}{2\pi}\int_0^{2\pi} \cot\frac{\tau - t}{2}\, x(\tau)\,d\tau + \int_0^{2\pi} a_3(t,\tau)\,x(\tau)\,d\tau = y(t). \tag{1.6}
\]
It is common knowledge that for $a_1(t), a_2(t) \in W^r_2$ the singular operators
\[
Bx(t) := a_1(t)x(t) + \frac{a_2(t)}{2\pi}\int_0^{2\pi} \cot\frac{\tau - t}{2}\, x(\tau)\,d\tau, \tag{1.7}
\]
\[
B^* x(t) := a_1(t)x(t) + (2\pi)^{-1}\int_0^{2\pi} \cot\frac{t - \tau}{2}\, a_2(\tau)\,x(\tau)\,d\tau
\]
act continuously from $L_2$ to $L_2$ and from $W^r_2$ to $W^r_2$. Moreover, if $a_1^2 + a_2^2 \neq 0$ then the operator $T$ from (1.6) has closed range and satisfies the condition (1.2) for some $\mu > 0$. On the other hand, if the kernel $a_3(t,\tau)$ has square-summable partial derivatives $\frac{\partial^{i+j} a_3}{\partial t^i\, \partial \tau^j}$, $i, j = 0, 1, \ldots, r$, then the integral operator
\[
Ax(t) := \int_0^{2\pi} a_3(t,\tau)\,x(\tau)\,d\tau
\]
belongs to $H^r_{\bar\gamma}$ for some $\bar\gamma$ and $X = L_2$, $X^r = W^r_2$. Thus, if the coefficients $a_1(t), a_2(t), a_3(t,\tau)$ satisfy the above-mentioned conditions and $a_1(t), a_2(t)$ are fixed, then the operator $T$ from equation (1.6) belongs to $K^r_{B,\bar\gamma,\mu}$ for $B$ determined by (1.7). Note that the special case of (1.6) with $a_1(t) \equiv 0$, $a_2(t) \equiv 1$ was considered in [3, §1.5].

Solving the problem (1.1) with an operator $T$ having nontrivial $\mathrm{Ker}(T) \neq \{0\}$, one usually seeks the unique element that has minimal norm among all solutions of (1.1). If $T^+$ denotes the Moore-Penrose generalized inverse of the operator $T$, this unique element is given by $T^+ y$, $y \in \mathrm{Range}(T)$. On the
other hand, as indicated in [9, p. 152], if $\mathrm{Range}(T) \neq X$ or $\mathrm{Ker}(T) \neq \{0\}$ then the problem (1.1) is not well posed in the sense of Hadamard, and the crux of the difficulty is that only an approximation $y_\delta \in X$ to $y \in \mathrm{Range}(T)$ is available, such that $\|y - y_\delta\|_X \le \delta$, where $\delta$ is a known error bound. In this paper we will study the information complexity of the problem of recovering $T^+ y$ from the equation
\[
Tx = y_\delta. \tag{1.8}
\]
The paper is organized as follows. In Section 2 we propose a new discretization scheme for (1.8). Combining this scheme with Brakhage's implicit iteration method, we obtain an approximation to $T^+ y$ with the best possible order of accuracy $O(\delta)$. In Section 3 we show that in the sense of Information Complexity our discretization scheme is order optimal for the class of equations (1.1) with operators $T \in K^r_{B,\bar\gamma,\mu}$, at least in the case when $\mathrm{Range}(B)$ is closed and $\dim \mathrm{Ker}(B) < \infty$. Note that these conditions are fulfilled for the operators $B$ considered in Examples 1 and 2.
2 Brakhage's implicit iteration method

In 1993, in a personal communication, H. Brakhage proposed an implicit nonstationary adaptive iteration method for solving linear operator equations of the form (1.1), which has a linear convergence rate. This method consists in the construction of the following sequence of approximate solutions:
\[
x_k = (I + \alpha_{k-1} T^* T)^{-1}(x_{k-1} + \alpha_{k-1} T^* y), \qquad x_0 = 0, \tag{2.1}
\]
where
\[
\alpha_{k-1} = \frac{\|T x_{k-1} - y\|_X^2}{\|T^*(T x_{k-1} - y)\|_X^2}. \tag{2.2}
\]
Now we present a very nice unpublished result of H. Brakhage concerning the method (2.1), (2.2).
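For a finite-dimensional illustration, the iteration (2.1), (2.2) can be sketched in numpy as follows; the test matrix, data and convergence check are our own illustrative assumptions, not Brakhage's original code:

```python
import numpy as np

def brakhage(T, y, steps):
    """Implicit nonstationary iteration (2.1)-(2.2):
    x_k = (I + a_{k-1} T'T)^{-1} (x_{k-1} + a_{k-1} T'y), x_0 = 0,
    with a_{k-1} = ||T x_{k-1} - y||^2 / ||T'(T x_{k-1} - y)||^2."""
    n = T.shape[1]
    TtT, Tty = T.T @ T, T.T @ y
    x = np.zeros(n)
    for _ in range(steps):
        res = T @ x - y
        second = T.T @ res
        a = np.dot(res, res) / np.dot(second, second)
        # each step solves the regularized system (I + a T'T) x_k = x_{k-1} + a T'y
        x = np.linalg.solve(np.eye(n) + a * TtT, x + a * Tty)
    return x

# Illustrative test problem: an injective symmetric matrix and exact data.
rng = np.random.default_rng(0)
G = rng.standard_normal((30, 30))
T = G @ G.T + 0.1 * np.eye(30)
x_hat = rng.standard_normal(30)
y = T @ x_hat
errs = [np.linalg.norm(x_hat - brakhage(T, y, k)) for k in (1, 5, 10)]
assert errs[0] > errs[1] > errs[2]   # the error decreases monotonically in k
```

The monotone decrease of $\|x_k - \hat x\|_X$ is exactly what identity (2.7) below predicts for exact data.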
Theorem (H. Brakhage, 1993). Let $T$ be a linear and injective operator from $X$ to $X$. Assume that $y \in \mathrm{Range}(T)$ is such that the solution $\hat x$ of (1.1) belongs to $\mathrm{Range}((T^* T)^\nu)$ for some $\nu > 0$. Then there is a constant $c > 0$ such that
\[
\|\hat x - x_k\|_X \le c\, 2^{-\frac{\nu k}{\nu + 1}}.
\]
Proof. We will use the scale of norms $\|g\|_p = \|(T^* T)^p g\|_X$, $p \in \mathbb{R}$ (for $p < 0$ this is defined for $g \in \mathrm{Range}((T^* T)^{-p})$). It is common knowledge that
\[
\|g\|_X \le \|g\|_{-p}^{\frac{q}{p+q}}\, \|g\|_{q}^{\frac{p}{p+q}}, \qquad p, q > 0, \tag{2.3}
\]
and for $l \ge 0$, $q \ge p$,
\[
\frac{\|g\|_q}{\|g\|_p} \le \frac{\|g\|_{q+l}}{\|g\|_{p+l}}. \tag{2.4}
\]
Let $\Delta_k = x_k - \hat x$. From (2.1), (2.2) it follows that
\[
(I + \alpha_{k-1} T^* T)\,\Delta_k = \Delta_{k-1}, \tag{2.5}
\]
\[
\alpha_{k-1} = \frac{\|\Delta_{k-1}\|_{1/2}^2}{\|\Delta_{k-1}\|_1^2}. \tag{2.6}
\]
If $\hat x \in \mathrm{Range}((T^* T)^\nu)$, then using (2.5) it is easy to calculate that for $p \ge -\nu$
\[
\|\Delta_k\|_p^2 + 2\alpha_{k-1}\|\Delta_k\|_{p+1/2}^2 + \alpha_{k-1}^2\|\Delta_k\|_{p+1}^2 = \|\Delta_{k-1}\|_p^2 \tag{2.7}
\]
and
\[
\|\Delta_k\|_p \le \|\Delta_{k-1}\|_p, \qquad p \ge -\nu. \tag{2.8}
\]
Combining (2.7) with (2.4) and (2.6), we have
\[
\frac{\|\Delta_{k-1}\|_{1/2}^2}{\|\Delta_k\|_1^2} = \frac{\|\Delta_k\|_{1/2}^2}{\|\Delta_k\|_1^2} + 2\alpha_{k-1} + \alpha_{k-1}^2\,\frac{\|\Delta_k\|_{3/2}^2}{\|\Delta_k\|_1^2} \ge \alpha_k + 2\alpha_{k-1} + \frac{\alpha_{k-1}^2}{\alpha_k},
\]
since by (2.4) and (2.6) $\|\Delta_k\|_{3/2}^2/\|\Delta_k\|_1^2 \ge \|\Delta_k\|_1^2/\|\Delta_k\|_{1/2}^2 = \alpha_k^{-1}$. Hence
\[
\frac{\|\Delta_k\|_1^2}{\|\Delta_{k-1}\|_1^2} = \alpha_{k-1}\,\frac{\|\Delta_k\|_1^2}{\|\Delta_{k-1}\|_{1/2}^2} \le \frac{\alpha_{k-1}}{\alpha_k + 2\alpha_{k-1} + \alpha_{k-1}^2/\alpha_k} = \left( \frac{\alpha_k}{\alpha_{k-1}} + 2 + \frac{\alpha_{k-1}}{\alpha_k} \right)^{-1} \le \frac{1}{4}.
\]
Then
\[
\|\Delta_k\|_1 \le 2^{-1}\|\Delta_{k-1}\|_1 \le 2^{-2}\|\Delta_{k-2}\|_1 \le \cdots \le 2^{-k}\|\Delta_0\|_1 = 2^{-k}\|\hat x\|_1.
\]
Combining this with (2.3), (2.8) for $\hat x = (T^* T)^\nu v$, $v \in X$, we arrive at the final inequality
\[
\|\hat x - x_k\|_X = \|\Delta_k\|_X \le \|\Delta_k\|_{-\nu}^{\frac{1}{\nu+1}}\, \|\Delta_k\|_1^{\frac{\nu}{\nu+1}} \le \|\Delta_0\|_{-\nu}^{\frac{1}{\nu+1}}\; 2^{-\frac{\nu k}{\nu+1}}\, \|\hat x\|_1^{\frac{\nu}{\nu+1}} = 2^{-\frac{\nu k}{\nu+1}}\, \|v\|_X^{\frac{1}{\nu+1}}\, \|v\|_{\nu+1}^{\frac{\nu}{\nu+1}} \le c\, 2^{-\frac{\nu k}{\nu+1}},
\]
as claimed.

Note that the structure of Brakhage's method (2.1), (2.2) is close to the explicit iteration method proposed by V. M. Friedman [1], in which
\[
x_k = (I - \alpha_{k-1} T^* T)\,x_{k-1} + \alpha_{k-1} T^* y, \qquad k = 1, 2, \ldots,
\]
and $\alpha_{k-1}$ has the form (2.2) too. Originally Friedman's method was proposed for nonperturbed equations (1.1) with operators having closed range. For noisy equations (1.8) with such operators, an explicit iteration method close to Friedman's method was studied in [2, §3.3]. In [2] it is also shown that the order-optimal accuracy $O(\delta)$ is attained when an a posteriori residual stopping rule is used to determine the iteration number $M$ for which
\[
\|y_\delta - T_n x_M\|_X \le b\delta, \tag{2.9}
\]
where $b > 1$ and $T_n$ is such that $\|T - T_n\|_{X \to X} \le \delta$. Our goal in this section is to establish the order of accuracy $O(\delta)$ for Brakhage's implicit nonstationary method with perturbed data when the iteration number $m$ is selected by a discrepancy principle connected with monitoring the norm of the second residual. Namely, $m$ will be chosen such that
\[
\|T_n^*(T_n x_m - y_\delta)\|_X \le b\delta. \tag{2.10}
\]
In our opinion it makes sense to use the a posteriori stopping criterion (2.10) because the number $m$ selected by (2.10) usually has a smaller value than $M$ chosen as in (2.9). Note that for stationary iteration methods the stopping criterion (2.10) was studied earlier in [9, p. 166]. In the sequel, as the operator $T_n$ we will take
\[
T_n = B + \sum_{k=1}^{n} \bigl(P_{2^k} - P_{2^{k-1}}\bigr) A\, P_{2^{n-k}} + P_1 A\, P_{2^n}.
\]
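In matrix language, with $P_m$ realized as truncation to the first $m$ coordinates, the operator $T_n$ reads only about $n2^n$ of the $4^n$ entries of $A$. A sketch of this bookkeeping (the helper `P` and the random test matrices are our own illustrative assumptions):

```python
import numpy as np

def P(m, N):
    """Orthogonal projector onto span{e_1, ..., e_m}, as an N x N matrix."""
    d = np.zeros(N)
    d[:m] = 1.0
    return np.diag(d)

n = 5
N = 2 ** n                        # work in the leading 2^n coordinates
rng = np.random.default_rng(2)
A = rng.standard_normal((N, N))   # stand-in for the perturbation A
B = np.eye(N)                     # stand-in for the fixed operator B

# S = P_1 A P_{2^n} + sum_{k=1}^n (P_{2^k} - P_{2^(k-1)}) A P_{2^(n-k)}
S = P(1, N) @ A @ P(N, N)
for k in range(1, n + 1):
    S += (P(2 ** k, N) - P(2 ** (k - 1), N)) @ A @ P(2 ** (n - k), N)
T_n = B + S

# T_n touches A only on a hyperbolic-cross-type index set of cardinality
# 2^n + n 2^{n-1}, far fewer than the 4^n entries of the full matrix A.
used = np.nonzero(S)
assert len(used[0]) == 2 ** n + n * 2 ** (n - 1)
```

The index set of the touched entries is exactly the information pattern used in Section 3 below.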
Moreover, we assume that for $y$ from (1.1) we have $\|y\|_X \le \rho$. From Lemma A.2 of [5] it follows:
Lemma 2.1 For $T \in K^r_{B,\bar\gamma,\mu}$ and $n$ such that $n 2^n \asymp \delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}$,
\[
\|T - T_n\|_{X \to X} \le \delta.
\]
Let us denote by $\mathrm{Sp}(L)$ the spectrum of a selfadjoint nonnegative operator $L \in L(X,X)$.

Lemma 2.2 For $T \in K^r_{B,\bar\gamma,\mu}$, $0 < \delta < \mu$, and $n$ such that $n 2^n \asymp \delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}$,
\[
\mathrm{Sp}(T_n^* T_n),\ \mathrm{Sp}(T_n T_n^*) \subset [0, \delta^2] \cup \bigl[(\mu - \delta)^2,\ \|T_n\|_{X \to X}^2\bigr].
\]

The assertion of Lemma 2.2 follows from Lemma 2.1 and Lemma 1.3 of [9, p. 154]. Let $E_n$ and $F_n$ be the orthogonal projectors onto the invariant subspaces of $T_n^* T_n$ and $T_n T_n^*$, respectively, corresponding to the part of the spectrum which belongs to $[(\mu - \delta)^2, \|T_n\|_{X \to X}^2]$. Moreover, we will consider the orthogonal projectors $E$ and $F$ onto the closed subspaces $\mathrm{Range}(T^*)$ and $\mathrm{Range}(T)$, respectively. From Lemma 2.1 and Lemma 1.5 of [9, p. 156] we obtain the following statement.

Lemma 2.3 Assume the conditions of Lemma 2.1. Then
\[
\|T_n(I - E_n)\|_{X \to X} \le \delta, \qquad \|(I - E_n)E\|_{X \to X} \le \frac{\delta}{\mu - \delta}.
\]
Let us apply Brakhage's implicit iteration method to the equation (1.8) and define $x_k$ by
\[
x_k = (I + \alpha_{k-1} T_n^* T_n)^{-1}\bigl(x_{k-1} + \alpha_{k-1} T_n^* y_\delta\bigr), \tag{2.11}
\]
where
\[
\alpha_{k-1} = \frac{\|T_n x_{k-1} - y_\delta\|_X^2}{\|T_n^*(T_n x_{k-1} - y_\delta)\|_X^2}, \qquad x_0 = 0, \quad T_n^* y_\delta \neq 0. \tag{2.12}
\]
From (2.11), (2.12) we see that $x_k$ may be expressed as
\[
x_k = g_k(T_n^* T_n)\, T_n^* y_\delta, \qquad g_k(\lambda) = \frac{1}{\lambda}\left(1 - \prod_{i=0}^{k-1}(1 + \alpha_i \lambda)^{-1}\right), \quad \lambda \ge 0.
\]
From [7], [10] it follows:
Lemma 2.4 For $\nu \in (0, 1]$
\[
\max_{\lambda \in [0,\infty)} \lambda^\nu\, |1 - \lambda g_k(\lambda)| \le \kappa_k^{-\nu}, \qquad \max_{\lambda \in [0,\infty)} g_k(\lambda) = \kappa_k,
\]
where
\[
\kappa_k := \sum_{i=0}^{k-1} \alpha_i.
\]
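Since $1 - \lambda g_k(\lambda) = \prod_{i=0}^{k-1}(1 + \alpha_i\lambda)^{-1}$, the bounds of Lemma 2.4 can be checked numerically for any fixed positive $\alpha_i$; the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

alphas = np.array([0.5, 1.3, 0.2, 2.0])   # arbitrary positive parameters alpha_i
kappa = alphas.sum()                       # kappa_k = sum of the alpha_i

lam = np.linspace(1e-6, 50.0, 200_000)
residual = np.prod([1.0 / (1.0 + a * lam) for a in alphas], axis=0)
g = (1.0 - residual) / lam                 # g_k(lambda)

# max lam^nu |1 - lam g_k(lam)| <= kappa_k^{-nu} for nu in (0, 1]
for nu in (0.25, 0.5, 1.0):
    assert np.max(lam ** nu * residual) <= kappa ** (-nu) + 1e-9
# max g_k(lam) = kappa_k, attained in the limit lam -> 0
assert np.max(g) <= kappa + 1e-9
assert abs(g[0] - kappa) < 1e-3
```

The check for $\nu = 1$ reflects the elementary inequality $\prod_i (1 + \alpha_i\lambda) \ge 1 + \kappa_k \lambda$.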
Let us denote by $\sigma_k$ the second residual obtained at the $k$-th iteration of Brakhage's method (2.11), (2.12), that is,
\[
\sigma_k = T_n^*(T_n x_k - y_\delta).
\]

Theorem 2.1 Let the assumptions of Lemma 2.1 be fulfilled and let $m$ be such that for some $b > 1$
\[
\|\sigma_k\|_X > b\delta, \quad k = 1, 2, \ldots, m, \qquad \text{but} \qquad \|\sigma_{m+1}\|_X \le b\delta.
\]
Then $\|T^+ y - x_{m+1}\|_X \le c\,\delta$, where $c$ depends only on $\mu, \rho, r, b$ and $\bar\gamma$.
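Theorem 2.1 can be illustrated numerically. In the sketch below we simplify by taking $T_n = T$ exactly, with $T$ an invertible symmetric matrix, so the closed-range condition (1.2) holds with $\mu \ge 0.5$; the noise model, the value $b = 2$, and the tolerances are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
G = rng.standard_normal((n, n))
T = G @ G.T + 0.5 * np.eye(n)     # invertible, smallest eigenvalue >= 0.5
x_hat = rng.standard_normal(n)
y = T @ x_hat
delta = 1e-3
e = rng.standard_normal(n)
y_d = y + delta * e / np.linalg.norm(e)   # noisy data with ||y - y_d|| = delta

b = 2.0
x = np.zeros(n)
stopped_at = None
for k in range(1, 1000):
    res = T @ x - y_d
    second = T.T @ res                     # the "second residual" T'(Tx - y_d)
    if np.linalg.norm(second) <= b * delta:
        stopped_at = k                     # criterion (2.10) is satisfied
        break
    a = np.dot(res, res) / np.dot(second, second)
    x = np.linalg.solve(np.eye(n) + a * (T.T @ T), x + a * (T.T @ y_d))

assert stopped_at is not None              # the rule fires (cf. Remark 2.1)
assert np.linalg.norm(x - x_hat) < 0.05    # final error of order delta
```

For this invertible $T$ the error at the stopping index can be bounded directly by $\|T^{-1}\|(\|Tx - y_\delta\| + \delta)$, which is of order $\delta$, in agreement with the theorem.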
Proof. We begin by noting that
\[
\sigma_k = T_n^*\bigl(T_n g_k(T_n^* T_n) T_n^* y_\delta - y_\delta\bigr) = \bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr) T_n^* y + \bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr) T_n^*(y_\delta - y) = J_1 + J_2. \tag{2.13}
\]
Using Lemmas 2.1 and 2.4, we have
\[
\|J_1\|_X \le \bigl\|\bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr) T_n^* T_n T^+ y\bigr\|_X + \bigl\|\bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr) T_n^*(T - T_n) T^+ y\bigr\|_X
\]
\[
\le \max_{\lambda \ge 0} \lambda\, |1 - \lambda g_k(\lambda)|\; \|T^+ y\|_X + \max_{\lambda \ge 0} \sqrt{\lambda}\; |1 - \lambda g_k(\lambda)|\; \|T - T_n\|_{X \to X}\, \|T^+ y\|_X \le \bigl(\kappa_k^{-1} + 2\sqrt{2}\,\delta\,\kappa_k^{-1/2}\bigr)\|T^+ y\|_X, \tag{2.14}
\]
\[
\|J_2\|_X \le \sup_{\lambda \ge 0} \sqrt{\lambda}\; |1 - \lambda g_k(\lambda)|\;\delta \le 2\sqrt{2}\,\kappa_k^{-1/2}\,\delta. \tag{2.15}
\]
By virtue of (1.2), for $y$ such that $\|y\|_X \le \rho$ we have
\[
\|T^+ y\|_X \le \rho\,\mu^{-1}. \tag{2.16}
\]
Then from (2.13)-(2.16) one sees that for $k = 1, 2, \ldots, m$
\[
b\delta \le \|\sigma_k\|_X \le c_1 \kappa_k^{-1} + c_2\,\delta\,\kappa_k^{-1/2}, \tag{2.17}
\]
where $c_1$ and $c_2$ are some constants depending on $\mu, \rho$. (In the sequel we will often use the same symbol $c$ for possibly different constants.) Keeping (2.17) in mind, it is easy to calculate that for $\delta \le 1$
\[
\kappa_k^{-1/2} \ge \frac{-c_2\delta + \sqrt{c_2^2\delta^2 + 4 c_1 b\delta}}{2 c_1} = \frac{2 b\delta}{c_2\delta + \sqrt{c_2^2\delta^2 + 4 c_1 b\delta}} \ge \frac{2 b\sqrt{\delta}}{c_2 + \sqrt{c_2^2 + 4 c_1 b}} \ge c\sqrt{\delta}.
\]
Using this bound, we obtain for $k = 1, 2, \ldots, m$ the estimate
\[
\kappa_k \le c\,\delta^{-1}. \tag{2.18}
\]
To complete the proof of the theorem we need the following lemmas.
Lemma 2.5 Assume the conditions of Lemma 2.1. Then for any $k$
\[
\|(I - E_n)(x_k - T^+ y)\|_X \le c\,\delta\,(1 + \delta \kappa_k).
\]

Proof. From the definitions of $E$ and $T^+$ it follows that $E T^+ y = T^+ y$, and as in [9, p. 158] we have
\[
(I - E_n)(x_k - T^+ y) = (I - E_n)\bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr) E\, T^+ y + (I - E_n)\, g_k(T_n^* T_n)\, T_n^*\bigl(y_\delta - T_n T^+ y\bigr) = J_3 + J_4.
\]
From Lemma 2.3 and (2.16) one sees that
\[
\|J_3\|_X = \bigl\|\bigl(T_n^* T_n g_k(T_n^* T_n) - I\bigr)(I - E_n) E\, T^+ y\bigr\|_X \le \sup_{\lambda \ge 0} |1 - \lambda g_k(\lambda)|\; \|(I - E_n)E\|_{X \to X}\, \|T^+ y\|_X \le \frac{\delta}{\mu - \delta}\,\rho\,\mu^{-1} \le c\,\delta,
\]
since $|1 - \lambda g_k(\lambda)| = \prod_{i=0}^{k-1}(1 + \alpha_i \lambda)^{-1} \le 1$. On the other hand, from Lemma 2.1 and (2.16) we get the estimate
\[
\|y_\delta - T_n T^+ y\|_X = \|y_\delta - y + (T - T_n) T^+ y\|_X \le \delta\,(1 + \rho\,\mu^{-1}). \tag{2.19}
\]
Then by virtue of Lemma 2.4 we find
\[
\|J_4\|_X \le \|(I - E_n)\, g_k(T_n^* T_n)\, T_n^*\|_{X \to X}\; \|y_\delta - T_n T^+ y\|_X \le \sup_{\lambda \in [0, \delta^2]} \sqrt{\lambda}\; g_k(\lambda)\cdot c\,\delta \le c\,\delta^2 \kappa_k.
\]
Summing up, we get the assertion of the lemma.
Lemma 2.6 Assume the conditions of Theorem 2.1. Then there is some constant $c$ depending only on $\mu, \rho, b$ such that $\alpha_m \le c$.

Proof. Note that
\[
\sqrt{\alpha_m} = \frac{\|T_n x_m - y_\delta\|_X}{\|T_n^*(T_n x_m - y_\delta)\|_X} \le \frac{\|F_n(T_n x_m - y_\delta)\|_X}{\|T_n^*(T_n x_m - y_\delta)\|_X} + \frac{\|(I - F_n)(T_n x_m - y_\delta)\|_X}{b\delta}.
\]
By virtue of the definition of $F_n$, for any $k = 1, 2, \ldots$ we have
\[
\frac{\|F_n(T_n x_k - y_\delta)\|_X}{\|T_n^*(T_n x_k - y_\delta)\|_X} \le \frac{\|F_n(T_n x_k - y_\delta)\|_X}{\|T_n^* F_n(T_n x_k - y_\delta)\|_X} \le \frac{1}{\mu - \delta}. \tag{2.20}
\]
Moreover, from Lemmas 2.3, 2.5 and (2.16), (2.18) it follows that
\[
\|(I - F_n)(T_n x_m - y_\delta)\|_X = \bigl\|(I - F_n)\bigl(y - y_\delta + (T_n - T) T^+ y + T_n(x_m - T^+ y)\bigr)\bigr\|_X
\]
\[
\le \delta\,(1 + \rho\,\mu^{-1}) + \|(I - F_n) T_n(x_m - T^+ y)\|_X \le c\,\delta + \|T_n(I - E_n)(I - E_n)(x_m - T^+ y)\|_X
\]
\[
\le c\,\delta + \delta\,\|(I - E_n)(x_m - T^+ y)\|_X \le c\,\delta\,(1 + \delta + \delta^2 \kappa_m) \le c\,\delta.
\]
Using this bound and (2.20), we obtain the assertion of the lemma.

Now we are in a position to complete the proof of our theorem. It is easy to see that
\[
\|T^+ y - x_{m+1}\|_X \le \|E_n(T^+ y - x_{m+1})\|_X + \|(I - E_n)(T^+ y - x_{m+1})\|_X. \tag{2.21}
\]
By virtue of the definition of $E_n$ (see [9, p. 166]),
\[
\|E_n(T^+ y - x_{m+1})\|_X \le (\mu - \delta)^{-1}\|T_n E_n(T^+ y - x_{m+1})\|_X = (\mu - \delta)^{-1}\|F_n T_n(T^+ y - x_{m+1})\|_X. \tag{2.22}
\]
Keeping in mind that $\|\sigma_{m+1}\|_X \le b\delta$, from (2.20) we find
\[
\|F_n(T_n x_{m+1} - y_\delta)\|_X \le (\mu - \delta)^{-1}\|\sigma_{m+1}\|_X \le \frac{b\delta}{\mu - \delta}.
\]
Using this bound and (2.19), we obtain
\[
\|F_n T_n(T^+ y - x_{m+1})\|_X \le \|F_n(T_n T^+ y - y_\delta)\|_X + \|F_n(T_n x_{m+1} - y_\delta)\|_X \le \|T_n T^+ y - y_\delta\|_X + \frac{b\delta}{\mu - \delta} \le c\,\delta. \tag{2.23}
\]
On the other hand, from (2.18) and Lemmas 2.5, 2.6 we know that
\[
\|(I - E_n)(T^+ y - x_{m+1})\|_X \le c\,\delta\,(1 + \delta\kappa_{m+1}) = c\,\delta\,(1 + \delta\kappa_m + \delta\alpha_m) \le c\,\delta. \tag{2.24}
\]
The assertion of the theorem follows from (2.21)-(2.24).

Remark 2.1 Since $\alpha_{k-1} \ge \|T_n\|_{X \to X}^{-2}$, and hence $\kappa_k \to \infty$ as $k \to \infty$, it follows from (2.17) that $\|\sigma_k\|_X \to 0$ as $k \to \infty$, and there exists $m$ satisfying the stopping criterion (2.10).
3 The Estimate of Information Complexity

In this section we shall investigate the information complexity of finding approximate solutions of equations (1.1) with operators $T \in K^r_{B,\bar\gamma,\mu}$ and exact free terms $y \in X^r_\rho := \{g :\ g \in X^r,\ \|g\|_{X^r} \le \rho\}$. The formulation of the problem and the terminology are borrowed from the monograph by J. Traub, G. Wasilkowski, H. Wozniakowski [8].

Let $U = \{u_i\}_{i=1}^{k}$ be some collection of continuous functionals $u_i$, of which $u_1, u_2, \ldots, u_j$ are defined on $L(X,X)$ and $u_{j+1}, u_{j+2}, \ldots, u_k$ on $X$; $\mathrm{card}(U) := k$. Assume that we have perfect information about the fixed operator $B$ in (1.4); usually this operator has a very simple form (see Examples 1 and 2 in Section 1). Therefore, as discrete information about the equations (1.1), (1.8) we will consider the numerical vector
\[
U(A, y_\delta) = \{u_1(A), u_2(A), \ldots, u_j(A), u_{j+1}(y_\delta), \ldots, u_k(y_\delta)\}, \tag{3.1}
\]
generated by the collection $U$. Any such collection of functionals will be called a method of specifying information. By an algorithm $\varphi$ for the approximate solution of the equations (1.1) we mean an operator assigning to the information (3.1) an element $\varphi(U; A, y_\delta) \in X$ as an approximate solution of (1.1). Moreover, for a fixed method of specifying information $U$ we denote by $\Phi(U)$ the set of all algorithms using the information of the form (3.1).

The error of an algorithm $\varphi \in \Phi(U)$ on the class of equations (1.1) with operators $T \in K^r_{B,\bar\gamma,\mu}$ and exact free terms $y \in \mathrm{Range}(T) \cap X^r_\rho$ is defined as
\[
e_\delta(K^r_{B,\bar\gamma,\mu}, X^r_\rho;\ \varphi, U) = \sup_{T \in K^r_{B,\bar\gamma,\mu}}\ \sup_{y \in \mathrm{Range}(T) \cap X^r_\rho}\ \sup_{\|y - y_\delta\|_X \le \delta}\ \|T^+ y - \varphi(U; A, y_\delta)\|_X.
\]
The minimal error which can be achieved using at most $n$ values of information functionals is determined by the quantity
\[
R_{n,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) = \inf_{U:\ \mathrm{card}(U) \le n}\ \inf_{\varphi \in \Phi(U)}\ e_\delta(K^r_{B,\bar\gamma,\mu}, X^r_\rho;\ \varphi, U), \tag{3.2}
\]
called the $n$-th minimal radius of information. From the results of Chapter 6 of [9] and Chapter 3 of [2] it follows that for sufficiently large $n$
\[
R_{n,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \asymp \delta.
\]
Then the quantity
\[
N_{\delta,\varkappa}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) = \inf\bigl\{ n :\ R_{n,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \le \varkappa\,\delta \bigr\}, \qquad \varkappa \ge 1,
\]
characterizes the information complexity of recovering the solutions $T^+ y$ of equations (1.1) with $T \in K^r_{B,\bar\gamma,\mu}$, $y \in \mathrm{Range}(T) \cap X^r_\rho$, from the perturbed equations (1.8). This is the minimal amount of discrete information which allows one to obtain the best possible order of accuracy $O(\delta)$.

The next lemma ascertains a connection between (3.2) and the so-called Babenko pretabulated $n$-width
\[
\beta_n(X^r_\rho, X) := \inf_{\phi \in \Phi_n}\ \sup_{g \in X^r_\rho}\ \sup_{\substack{g_1, g_2 \in X^r_\rho \\ \phi(g_1) = \phi(g_2) = \phi(g)}} \|g_1 - g_2\|_X,
\]
where $\Phi_n$ is the set of all continuous maps from $X^r_\rho$ into $n$-dimensional Euclidean space.
Lemma 3.1 If the operator $B$ in (1.4) has a closed range and $\dim \mathrm{Ker}(B) < \infty$, then for sufficiently large $\gamma_1, \gamma_2, \gamma_3$ and sufficiently small $\mu$,
\[
R_{n,0}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \ge \frac{1}{2}\,\beta_n(X^r_\sigma, X),
\]
where $\sigma > 0$ depends only on $\rho$, $\bar\gamma$ and $B$.
Proof. Denote by $\kappa_r$ the imbedding constant of $X^r$ in $X$, i.e. $\|g\|_X \le \kappa_r \|g\|_{X^r}$ for any $g \in X^r$. Let us assume for simplicity that there is $v \in X^r$ such that $\|v\|_X = 1$ and $\|v\|_{X^r} = \kappa_r^{-1}$ (in the general case, for an arbitrarily small $\varepsilon > 0$ there is $v_\varepsilon$ such that $\|v_\varepsilon\|_X = 1$ and $\kappa_r^{-1} \le \|v_\varepsilon\|_{X^r} \le \kappa_r^{-1} + \varepsilon$). Note that if
\[
\gamma_1 > \|B^*\|_{X \to X}\,\kappa_r^{-1}, \qquad \gamma_2 > \|B^*\|_{X^r \to X^r}\,\kappa_r^{-1}, \qquad \gamma_3 > \|B^*\|_{X^r \to X^r}\,\kappa_r^{-2},
\]
then for any $g \in X^r_\sigma$ with
\[
\sigma = \min\bigl\{\gamma_1 \kappa_r - \|B^*\|_{X \to X},\ \gamma_2 - \|B^*\|_{X^r \to X^r}\kappa_r^{-1},\ \gamma_3 \kappa_r - \|B^*\|_{X^r \to X^r}\kappa_r^{-1},\ \rho\,(\|B\|_{X^r \to X^r} + \gamma_1)^{-1}\bigr\} \tag{3.3}
\]
one can find an operator $T_g \in K^r_{B,\bar\gamma,\mu}$ such that
\[
g = T_g^* v, \tag{3.4}
\]
\[
y_g = T_g\, g \in X^r_\rho. \tag{3.5}
\]
Indeed, for $g \in X^r_\sigma$ consider the operator $A_g$ determined by the formula $A_g f = (g - B^* v, f)\,v$. For any $f \in X$ and $\sigma$ determined by (3.3) we have
\[
\|A_g f\|_{X^r} \le \|v\|_{X^r}\bigl(\|g\|_X + \|B^*\|_{X \to X}\|v\|_X\bigr)\|f\|_X \le \kappa_r^{-1}\bigl(\sigma + \|B^*\|_{X \to X}\bigr)\|f\|_X \le \gamma_1 \|f\|_X,
\]
\[
\|A_g^* f\|_{X^r} = \|g - B^* v\|_{X^r}\,|(v, f)| \le \bigl(\sigma + \|B^*\|_{X^r \to X^r}\kappa_r^{-1}\bigr)\|f\|_X \le \gamma_2 \|f\|_X,
\]
\[
\|(D^r A_g)^* f\|_{X^r} = \|g - B^* v\|_{X^r}\,|(D^r v, f)| \le \bigl(\sigma + \|B^*\|_{X^r \to X^r}\kappa_r^{-1}\bigr)\|D^r v\|_X\, \|f\|_X \le \bigl(\sigma + \|B^*\|_{X^r \to X^r}\kappa_r^{-1}\bigr)\kappa_r^{-1}\|f\|_X \le \gamma_3 \|f\|_X.
\]
This means that $A_g \in H^r_{\bar\gamma}$ for any $g \in X^r_\sigma$. It is common knowledge that if $\mathrm{Range}(B)$ is closed and $\dim \mathrm{Ker}(B) < \infty$, $\dim \mathrm{Range}(A) < \infty$, then $\mathrm{Range}(A + B)$ is closed too. Since $\dim \mathrm{Range}(A_g) = 1$ and $A_g \in H^r_{\bar\gamma}$, it follows that $T_g = B + A_g \in K^r_{B,\bar\gamma,\mu}$ for some $\mu > 0$. Moreover,
\[
\|y_g\|_{X^r} = \|T_g\, g\|_{X^r} \le \bigl(\|B\|_{X^r \to X^r} + \|A_g\|_{X \to X^r}\bigr)\,\sigma \le \bigl(\|B\|_{X^r \to X^r} + \gamma_1\bigr)\,\sigma \le \rho,
\]
\[
T_g^* v = B^* v + A_g^* v = B^* v + (g - B^* v)\,\|v\|_X^2 = g,
\]
and (3.4), (3.5) are proved. From these relations it follows that $g \perp \mathrm{Ker}(T_g)$ and $g = T_g^+ y_g$, $y_g \in X^r_\rho$. Thus, for any $g \in X^r_\sigma$ we can find an equation (1.1) with $T \in K^r_{B,\bar\gamma,\mu}$ and $y \in X^r_\rho$ such that $T^+ y = g$. Then the same steps as in the proof of Lemma 17.1 of [4] lead to the assertion of the lemma.

Let $\Gamma_n$ be the plane set of the form
\[
\Gamma_n = \{1\} \times [1, 2^n]\ \cup\ \bigcup_{k=1}^{n} \bigl(2^{k-1}, 2^k\bigr] \times [1, 2^{n-k}].
\]
Consider the method of specifying information
\[
U_{\Gamma_n}(A, y_\delta) = \bigl\{(e_i, A e_j),\ (i,j) \in \Gamma_n;\ (e_k, y_\delta),\ k = 1, 2, \ldots, 2^n\bigr\}
\]
and the algorithm $\varphi_m \in \Phi(U_{\Gamma_n})$ within the framework of which we apply Brakhage's method (2.11), (2.12) to the equation
\[
T_n x = P_{2^n} y_\delta. \tag{3.6}
\]
In this algorithm an approximation to the solution $T^+ y$ is given by
\[
\varphi_m(U_{\Gamma_n}; A, y_\delta) = g_{m+1}(T_n^* T_n)\, T_n^* P_{2^n} y_\delta,
\]
and $m$ is determined as in Theorem 2.1, where instead of $y_\delta$ we use $P_{2^n} y_\delta$. Keeping in mind that for $n 2^n \asymp \delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}$ and $y \in X^r_\rho$
\[
\|y - P_{2^n} y_\delta\|_X \le \|y - P_{2^n} y\|_X + \|P_{2^n}(y - y_\delta)\|_X \le c_r\,\rho\, 2^{-nr} + \delta \le c\,\delta,
\]
from Theorem 2.1 we obtain the estimate
\[
e_\delta(K^r_{B,\bar\gamma,\mu}, X^r_\rho;\ \varphi_m, U_{\Gamma_n}) \le c\,\delta. \tag{3.7}
\]
Theorem 3.1 If $\dim \mathrm{Ker}(B) < \infty$, $\mathrm{Range}(B)$ is closed, and for the pretabulated width of the ball $X^r_\rho$ we have the estimate
\[
\beta_n(X^r_\rho, X) \ge c\,\rho\, n^{-r}, \qquad n = 1, 2, \ldots, \tag{3.8}
\]
then for sufficiently large $\gamma_1, \gamma_2, \gamma_3, \rho$ and $\varkappa$
\[
c\,\delta^{-1/r} \le N_{\delta,\varkappa}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \le c_1\,\delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}.
\]
The method of specifying information $U_{\Gamma_n}$, $n 2^n \asymp \delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}$, is order-optimal in the power scale in the sense of the quantity $N_{\delta,\varkappa}(K^r_{B,\bar\gamma,\mu}, X^r_\rho)$.
Proof. From Lemma 3.1 and (3.8), for any $n$ such that $R_{n,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \le \varkappa\,\delta$ we have
\[
\varkappa\,\delta \ge R_{n,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \ge R_{n,0}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \ge c\, n^{-r},
\]
whence $n \ge c\,\delta^{-1/r}$. Thus $N_{\delta,\varkappa}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \ge c\,\delta^{-1/r}$. On the other hand, from (3.7) it follows that for
\[
N = \mathrm{card}(U_{\Gamma_n}) \asymp n 2^n \asymp \delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}
\]
we have
\[
R_{N,\delta}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \le e_\delta(K^r_{B,\bar\gamma,\mu}, X^r_\rho;\ \varphi_m, U_{\Gamma_n}) \le c\,\delta,
\]
and for sufficiently large $\varkappa$
\[
N_{\delta,\varkappa}(K^r_{B,\bar\gamma,\mu}, X^r_\rho) \le N \le c\,\delta^{-1/r}\log^{1+1/r}\frac{1}{\delta}.
\]
The theorem is proved.
Remark 3.1 The conditions of Theorem 3.1 are fulfilled for the operators $B$ and subspaces $X^r$ indicated in Examples 1 and 2. Thus, for the equations considered in these Examples our Theorem gives the exact order of Information Complexity in the power scale.
References

[1] Friedman, V. M. (1959), New methods for solving linear operator equations, Dokl. Akad. Nauk SSSR 128, 482-484.

[2] Giljazov, S. F. (1987), "Methods for solving linear ill-posed problems", Moscow State Univ. Pub., Moscow.

[3] Morozov, V. A., Grebennikov, A. I. (1992), "Methods for solving ill-posed problems: Algorithmical Aspect", Moscow State Univ. Pub., Moscow.

[4] Pereverzev, S. V. (1996), "Optimization of methods for approximate solution of operator equations", Nova Science Pub., New York.

[5] Pereverzev, S. V., Schock, E., Solodky, S. G. (1998), "On the efficient discretization of integral equations of the third kind", Preprint N 229, Fachbereich Mathematik, Universität Kaiserslautern, Kaiserslautern.

[6] Prössdorf, S. (1978), "Some classes of singular equations", North-Holland Pub. Comp., Amsterdam, New York, Oxford.

[7] Schock, E. (1987), Implicit iterative methods for the approximate solution of ill-posed problems, Bollettino U.M.I. 7, 1171-1184.

[8] Traub, J. F., Wasilkowski, G. W., Wozniakowski, H. (1988), "Information-Based Complexity", Academic Press, New York.

[9] Vainikko, G. M., Veretennikov, A. Yu. (1986), "Iteration Procedures in Ill-Posed Problems", Nauka, Moscow.

[10] Xi, Z. (1993), "Iterated Tikhonov Regularization for linear ill-posed problems", Dissertation, Universität Kaiserslautern, Kaiserslautern.