Some Perturbation Theory for Linear Programming

James Renegar
School of Operations Research and Industrial Engineering
Cornell University
Ithaca, New York 14853
e-mail: [email protected]

October 1992

Research was supported by NSF Grant CCR-9103285 and IBM.
1 Introduction

This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or dual infeasible. The results are not particularly surprising, but they do formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature.

The results are rather general. Several of the results are valid for linear programs defined in arbitrary real normed spaces. A Hahn-Banach theorem is the main tool employed in the analysis: given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point.

We introduce notation, then the results. Let $X, Y$ denote real vector spaces, each with a norm. We use the same notation $\| \cdot \|$ for all norms, it being clear from context which norm is referred to. Let $X^*$ denote the dual space for $X$; this is the space of all continuous linear functionals $c^* : X \to \mathbb{R}$ (continuous with respect to the norm topology). Endow $X^*$ with the operator norm; if $c^* \in X^*$ then
\[ \|c^*\| := \sup\{c^* x : \|x\| = 1\}. \]
Define $Y^*$ and its norm analogously. Let $X^{**}$ denote the dual space of $X^*$. Note that $X$ can be viewed as a subset of $X^{**}$; $x \in X$ induces the continuous linear functional on $X^*$ given by $c^* \mapsto c^* x$. If $X = X^{**}$ then $X$ is said to be reflexive. To make this introductory section expositionally clean we assume throughout it that $X$ is reflexive; no such restriction is placed on $Y$; in later sections the requirement that $X$ be reflexive is sometimes removed. Many important normed spaces are reflexive, e.g., finite-dimensional spaces regardless of the norm, and Hilbert spaces.

Let $L(X, Y)$ denote the space of bounded (i.e., continuous) linear operators from $X$ to $Y$. Endow this space with the usual operator norm; if $A \in L(X, Y)$ then
\[ \|A\| := \sup\{\|Ax\| : \|x\| = 1\}. \]
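For readers who want to experiment in the finite-dimensional case, the operator norm has a closed form for several standard norm pairs. The following sketch (the function name and norm labels are our own, not from the paper) computes $\sup\{\|Ax\| : \|x\| = 1\}$ for the pairs that reappear in Section 5:

```python
import numpy as np

def induced_norm(A, dom="l1", rng="linf"):
    """Operator norm sup{||Ax||_rng : ||x||_dom = 1} for a few norm
    pairs that admit closed-form expressions (finite dimensions)."""
    A = np.asarray(A, dtype=float)
    if dom == "l1" and rng == "linf":
        return np.abs(A).max()               # max_{i,j} |a_ij|
    if dom == "l1" and rng == "l1":
        return np.abs(A).sum(axis=0).max()   # max column sum
    if dom == "linf" and rng == "linf":
        return np.abs(A).sum(axis=1).max()   # max row sum
    raise ValueError("norm pair not implemented")

A = np.array([[0.5, 1.0], [0.25, 1.0]])
print(induced_norm(A, "l1", "linf"))  # 1.0
```

The $\ell_1 \to \ell_\infty$ case (maximum absolute entry) is the one relevant to the examples of Section 5.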
Let $C_X, C_Y$ be convex cones in $X, Y$, each with vertex at the origin, i.e., each is closed under multiplication by non-negative scalars and under addition. The cone $C_X$ induces an "ordering" on $X$ by
\[ x' \succeq x'' \iff x' - x'' \in C_X. \]
(It is easily verified that we obtain a partial ordering iff the cone $C_X$ is pointed; we do not assume pointedness.) Similarly, $C_Y$ induces an "ordering" on $Y$. In this introductory section we assume $C_X$ and $C_Y$ are closed.

Given $A \in L(X, Y)$, $b \in Y$, $c^* \in X^*$, we define the LP instance $d := (A, b, c^*)$ by
\[ \sup\ c^* x \quad \text{s.t.} \quad Ax \preceq b, \quad x \succeq 0. \]
Many researchers have studied linear programming in this generality (cf. [1], [2], [3], [5]). Although linear programming from this vantage point is generally referred to by names such as "infinite linear programming," we prefer the phrase "analytic linear programming" because of close connections to functional analysis.

Although we use the symbol $\preceq$, the reader should note that all common forms of LP are covered by the general setting. For example, what one typically writes as "$Ax = b$" is obtained by letting $C_Y = \{0\}$. Similarly, what one typically expresses as "no non-negativity constraints" is obtained by letting $C_X = X$.

The LP instance $d$ has a natural dual which we now define. First, the cone $C_X$ has a dual cone in $X^*$ defined by
\[ C_X^* := \{\tilde{c}^* \in X^* : x \in C_X \Rightarrow \tilde{c}^* x \ge 0\}. \]
Define $C_Y^*$ analogously. The closed cones $C_X^*$ and $C_Y^*$ induce orderings on $X^*$ and $Y^*$ just as $C_X$ and $C_Y$ induced orderings on $X$ and $Y$. Relying on these orderings, the dual $d^*$ of $d := (A, b, c^*)$ is defined as the following LP:
\[ \inf_{y^* \in Y^*} y^* b \quad \text{s.t.} \quad y^* A \succeq c^*, \quad y^* \succeq 0. \]
(Here, $y^* A$ is the linear functional $x \mapsto y^* A x$. The linear transformation $y^* \mapsto y^* A$ is an element of $L(Y^*, X^*)$; it is the dual operator of $A$.)

Let $\mathrm{val}(d)$, $\mathrm{val}(d^*)$ denote the optimal values for $d$, $d^*$; if $d$ is infeasible define $\mathrm{val}(d) = -\infty$; if $d^*$ is infeasible define $\mathrm{val}(d^*) = \infty$.
It is a simple exercise to verify the weak duality relation $\mathrm{val}(d) \le \mathrm{val}(d^*)$. However, unlike finite-dimensional polyhedral linear programming, strong duality may be absent in analytic linear programming; it can happen that $\mathrm{val}(d) < \mathrm{val}(d^*)$ even when $d$ and/or $d^*$ is feasible. Conditions guaranteeing that no such "duality gap" occurs are of interest.

Consider the space $\mathcal{D}$ consisting of all instances $d = (A, b, c^*)$; we view the cones $C_X, C_Y$ (and hence the orderings) as fixed independently of $d$. For $d = (A, b, c^*) \in \mathcal{D}$ define the norm of $d$ as the value
\[ \|d\| := \max\{\|A\|, \|b\|, \|c^*\|\}. \]
Let $\mathrm{Pri}_\emptyset$ denote the set of all $d \in \mathcal{D}$ which are (primal) infeasible. Let $\mathrm{Dual}_\emptyset$ denote the set of all $d \in \mathcal{D}$ for which the dual $d^*$ is infeasible. Given $d \in \mathcal{D}$ define
\[ \mathrm{dist}(d, \mathrm{Pri}_\emptyset) := \inf\{\|d - \tilde{d}\| : \tilde{d} \in \mathrm{Pri}_\emptyset\}, \]
the distance from $d$ to the set of primal infeasible LP's. Similarly, define
\[ \mathrm{dist}(d, \mathrm{Dual}_\emptyset) := \inf\{\|d - \tilde{d}\| : \tilde{d} \in \mathrm{Dual}_\emptyset\}. \]
Let $\mathrm{Feas}(d)$ denote the set of feasible points for $d$, and let $\mathrm{Opt}(d)$ denote the optimal solution set. It can happen that $\mathrm{Opt}(d) = \emptyset$ even when $\mathrm{val}(d)$ is finite and the cones $C_X$ and $C_Y$ are closed; in general, "sup" cannot be replaced by "max" in specifying the objective of $d$.

Theorem 1.1 Assume $X$ is reflexive and $C_X$ and $C_Y$ are closed. Assume $d = (A, b, c^*) \in \mathcal{D}$. If $d$ satisfies $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ then statements (1) and (2) are true:
(1) There exists $x \in \mathrm{Feas}(d)$ satisfying
\[ \|x\| \le \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]

(2) If $x_0 \in \mathrm{Feas}(d + \Delta d)$ where $\Delta d := (0, \Delta b, 0)$ (i.e., a perturbation of $b$ alone), then there exists $x \in \mathrm{Feas}(d)$ satisfying
\[ \|x - x_0\| \le \|\Delta b\| \, \frac{\max\{1, \|x_0\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]

If $d$ satisfies both $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$ then statements (3), (4) and (5) are true:
(3)
\[ \frac{-\|b\|\,\|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \le \mathrm{val}(d) = \mathrm{val}(d^*) \le \frac{\|b\|\,\|c^*\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}. \]

(4) $\mathrm{Opt}(d) \ne \emptyset$; moreover, if $x \in \mathrm{Opt}(d)$ then
\[ \|x\| \le \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \, \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]
(5) If $\Delta d := (\Delta A, \Delta b, \Delta c^*)$ satisfies both $\|\Delta d\| < \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ and $\|\Delta d\| < \mathrm{dist}(d, \mathrm{Dual}_\emptyset)$, then
\[
|\mathrm{val}(d + \Delta d) - \mathrm{val}(d)| \le
\|\Delta A\| \, \frac{(\|b\| + \|\Delta b\|)(\|c^*\| + \|\Delta c^*\|)}{(\mathrm{dist}(d, \mathrm{Dual}_\emptyset) - \|\Delta d\|)(\mathrm{dist}(d, \mathrm{Pri}_\emptyset) - \|\Delta d\|)} \, \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)}
\]
\[
+ \|\Delta b\| \, \frac{\|c^*\| + \|\Delta c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset) - \|\Delta d\|} \, \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}
+ \|\Delta c^*\| \, \frac{\|b\| + \|\Delta b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset) - \|\Delta d\|} \, \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}.
\]
The propositions in Section 3 provide bounds which are more detailed; several of the bounds allow both $X$ and $Y$ to be arbitrary normed vector spaces.

Note that the first-order terms of the quantity on the right of the inequality in assertion (5) of the theorem are bounded above by
\[ \frac{\|\Delta A\|}{\delta_1 \delta_2 \delta_3} + \frac{\|\Delta b\|}{\delta_1 \delta_2} + \frac{\|\Delta c^*\|}{\delta_1 \delta_2} \tag{1.1} \]
where
\[ \delta_1 := \frac{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}{\|d\|}, \quad \delta_2 := \frac{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)}{\|d\|}, \quad \delta_3 := \min\{\delta_1, \delta_2\}. \]
This bound depends cubically on the inverses of the relative distances $\delta_1$ and $\delta_2$. In Section 5 we show by way of examples that this bound cannot be improved in general; similarly for the other bounds in the theorem. However, the results of Section 3 can be used to obtain better bounds for many special cases; e.g., if $|\mathrm{val}(d)| \le \|d\|$ then one obtains a bound analogous to (1.1) depending quadratically on the inverses of $\delta_1$ and $\delta_2$ rather than cubically.
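To make the condition-number viewpoint concrete, consider a hypothetical one-variable family (our own illustration, not one of the paper's examples): $\max x$ s.t. $\epsilon x \le 1$, $x \ge 0$. Its optimal value $1/\epsilon$ blows up as $\epsilon \downarrow 0$, and perturbing the operator $A = (\epsilon)$ by $\epsilon$ yields an unbounded, hence dual infeasible, LP, so $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) \le \epsilon$. A quick numeric check, assuming SciPy's `linprog` is available:

```python
from scipy.optimize import linprog

def val(eps):
    # max x  s.t.  eps*x <= 1, x >= 0  (linprog minimizes, so negate c)
    res = linprog(c=[-1.0], A_ub=[[eps]], b_ub=[1.0], bounds=[(0, None)])
    return -res.fun if res.status == 0 else float("inf")

for eps in (1.0, 1e-2, 1e-4):
    print(eps, val(eps))   # optimal value 1/eps grows as eps -> 0

# The perturbed instance (A shifted by eps to 0) is unbounded,
# i.e., its dual is infeasible:
print(val(0.0))
```

As the theorem's assertions (1), (3) and (4) predict, the value and the optimal solution are large precisely because the instance sits close to $\mathrm{Dual}_\emptyset$.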
The theorem focuses on solution characteristics of $d$ rather than $d^*$; analogous results pertaining to $d^*$ are discussed in Section 3.

Assertions (1) and (2) in the theorem are similar to results one finds in the literature on linear equations, i.e., when the cones $C_X$ and $C_Y$ are subspaces (see [11]). In this restricted context the term $\max\{1, \|x_0\|\}$ occurring in assertion (2) can be replaced simply with $1$; in fact, this replacement is valid for arbitrary closed cones $C_X$ and $C_Y$ if $d = (A, b, c^*)$ has the property that
\[ \mathrm{dist}((A, tb, c^*), \mathrm{Pri}_\emptyset) \tag{1.2} \]
is independent of $t$ satisfying $0 < t \le 1$ (as it is if $C_X$ and $C_Y$ are closed subspaces).[1]

Assertion (2) of the theorem can be easily extended to allow perturbations in $A$ as well as $b$. If $x_0 \in \mathrm{Feas}(d + \Delta d)$ where $\Delta d = (\Delta A, \Delta b, 0)$, then $x_0 \in \mathrm{Feas}(d + \Delta' d)$ where $\Delta' d := (0, \Delta b - (\Delta A) x_0, 0)$, and hence assertion (2) implies there exists $x \in \mathrm{Feas}(d)$ satisfying
\[ \|x - x_0\| \le (\|\Delta b\| + \|\Delta A\|\,\|x_0\|) \, \frac{\max\{1, \|x_0\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]

The bounds asserted by the theorem will be useful in developing a complexity theory for linear programming where problem instance "size" is defined using quantities similar to condition numbers; see Renegar [8] and Vera [12], [13] for work in this direction. Others have studied perturbations of linear programs, but not in terms of the quantities $\mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset)$; cf. Hoffman [4], Mangasarian [6], [7] and Robinson [9].

In the sections that follow we do not assume $X$ or $Y$ is reflexive unless stated. However, we do assume $X$ and $Y$ are indeed normed, as is natural for perturbation theory. Whenever we write "cone" we mean "convex cone with vertex at the origin." We do not assume the cones $C_X$ and $C_Y$ are closed unless stated.
2 Duality Gaps
In this section we prove that if $X$ is reflexive, $C_X$ and $C_Y$ are closed, $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$ and $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$, then $\mathrm{val}(d) = \mathrm{val}(d^*)$, i.e., no duality gap occurs. We begin with well-known propositions from which the proof follows easily. For completeness we include short proofs of the well-known propositions.

[1] To see this fact, note we may assume that $\|x_0\| > 1$. Let $t := 1/\|x_0\|$ and observe that $x/\|x_0\|$ is feasible for $(A, tb, c^*)$ and $x_0/\|x_0\|$ is feasible for $(A, t(b + \Delta b), c^*)$. Applying assertion (2) to these "scaled" LP's and using the assumption that (1.2) is independent of $0 < t \le 1$ gives the desired conclusion.
The exposition throughout the paper allows the reader to skip all proofs yet still follow the main thread of the development.

Fix $A \in L(X, Y)$ and define
\[ C(A) := \{b \in Y : \exists\, x \succeq 0 \text{ such that } Ax \preceq b\}. \]
It is easily verified that $C(A)$ is a cone. Let $\overline{C(A)}$ denote the closure of $C(A)$; the closure is also a cone. If $X$ and $Y$ are finite dimensional and if the cones $C_X$ and $C_Y$ which define the orderings are polyhedral, then $C(A)$ is polyhedral and hence $C(A) = \overline{C(A)}$. In other cases the closure may contain additional points. A system of inequalities
\[ Ax \preceq b, \quad x \succeq 0 \]
is said to be asymptotically consistent if $b \in \overline{C(A)}$, i.e., if it can be made consistent by an arbitrarily slight perturbation of $b$.

The following proposition relies only on the local convexity of the normed space $Y$. In the case of finite-dimensional polyhedral linear programming the proposition is Farkas' lemma.

Proposition 2.1 (Duffin [2]) Assume $A \in L(X, Y)$, $b \in Y$. Consider the following two systems:
\[ Ax \preceq b, \quad x \succeq 0 \qquad\qquad\qquad y^* A \succeq 0, \quad y^* \succeq 0, \quad y^* b < 0 \]
The first system is asymptotically consistent if and only if the second is inconsistent.
Proof. Let $C(A)^*$ denote the set of all functionals $y^* \in Y^*$ satisfying $y^* \tilde{b} \ge 0$ for all $\tilde{b} \in C(A)$. Since in a normed space any closed convex set can be strictly separated from a point not in the set by a continuous linear functional (the separation version of the Hahn-Banach Theorem; cf. [10], Theorem 3.4b), and since $\overline{C(A)}$ is a convex set (because $C(A)$ is), it easily follows that
\[ \overline{C(A)} = \{\tilde{b} \in Y : y^* \in C(A)^* \Rightarrow y^* \tilde{b} \ge 0\}. \tag{2.1} \]
Noting that
\[ C(A) = \{\tilde{b} : \exists\, x \succeq 0,\ \hat{b} \succeq 0 \text{ such that } \tilde{b} = Ax + \hat{b}\}, \]
and recalling that $\{x : x \succeq 0\}$ and $\{\hat{b} : \hat{b} \succeq 0\}$ are cones, we have
\[ C(A)^* = \{y^* \in Y^* : (x \succeq 0 \Rightarrow y^* A x \ge 0) \text{ and } (\hat{b} \succeq 0 \Rightarrow y^* \hat{b} \ge 0)\}, \]
that is,
\[ C(A)^* = \{y^* \in Y^* : (y^* A \succeq 0) \text{ and } (y^* \succeq 0)\}. \tag{2.2} \]
The proposition follows immediately from (2.1) and (2.2). $\Box$

Shortly, we state a dual analog of the proposition. Before doing so we digress to present a simple technical lemma important for establishing the analog. Assuming $A \in L(X, Y)$, the system
\[ Ax \preceq b \iff b - Ax \in C_Y, \qquad x \succeq 0 \iff x \in C_X \tag{2.3} \]
is naturally associated with a system which we will call its "double-dual extension":
\[ b - A^{**} x^{**} \in C_Y^{**}, \qquad x^{**} \in C_X^{**} \tag{2.4} \]
where $x^{**} \in X^{**}$, $C_X^{**}$ is the dual cone for $C_X^*$, $C_Y^{**}$ is the dual cone for $C_Y^*$, and $A^{**}$ is the dual operator of the dual operator of $A$. Viewing $X$ as a subset of $X^{**}$ and $Y$ as a subset of $Y^{**}$, it is trivial to see that $C_X \subseteq C_X^{**}$ and $C_Y \subseteq C_Y^{**}$; also, if $x \in X$ then $A^{**} x = Ax$. Hence, any solution of the original system is also a solution of the double-dual extension. However, the double-dual extension can have solutions which are not solutions of the original system; it can have solutions in $X^{**} \setminus X$. This possibility is a well-known obstruction to the development of a symmetric duality theory for analytic linear programming. To gain symmetry we occasionally impose additional assumptions; ironically, these are not symmetric, being restrictive for $X$ but not for $Y$. The following lemma becomes important.
Lemma 2.2 Assume $X$ is reflexive and $C_X$ and $C_Y$ are closed. Assume $A \in L(X, Y)$, $b \in Y$. Then the solutions of the system (2.3) are identical with those of the double-dual extension (2.4).

Proof. It is a simple exercise using the separation version of the Hahn-Banach Theorem to prove that $C_Y = C_Y^{**} \cap Y$, since $C_Y$ is closed. Similarly, $C_X^{**} = C_X^{**} \cap X = C_X$, using $X = X^{**}$. Since $Ax = A^{**} x$ for all $x \in X$, the lemma follows. $\Box$
Corollary 2.3 Assume $X$ is reflexive and $C_X$ and $C_Y$ are closed. Assume $A \in L(X, Y)$, $c^* \in X^*$. Consider the following two systems:
\[ Ax \preceq 0, \quad x \succeq 0, \quad c^* x > 0 \qquad\qquad\qquad y^* A \succeq c^*, \quad y^* \succeq 0 \]
The first system is inconsistent if and only if the second is asymptotically consistent (meaning that it can be made consistent by an arbitrarily slight perturbation of $c^*$).
Proof. Replacing the first system of Proposition 2.1 with the second system of the corollary, note that the appropriate second system for the proposition will then have the same solutions as the first system of the corollary, by Lemma 2.2. $\Box$

Letting $d = (A, b, c^*)$ denote an LP instance, the asymptotic optimal value $\mathrm{val}^a(d)$ is defined to be the supremum of the optimal objective values of LP's obtained from $d$ by perturbing $b$ by an arbitrarily small amount, that is,
\[ \mathrm{val}^a(d) := \lim_{\epsilon \downarrow 0}\ \sup_{\tilde{b} : \|\tilde{b} - b\| \le \epsilon} \mathrm{val}(A, \tilde{b}, c^*). \]
There exists $r' > r$ such that (3.2) remains valid if $r'$ is substituted for $r$. For otherwise there would exist a sequence $\{x_i\} \subseteq \mathrm{Feas}(d)$ such that $\|x_i - x_0\| \downarrow r$; the sequence would have a weakly convergent subsequence by the Eberlein-Shmulyan Theorem (cf. [14]); because the weak closure of a convex set in a normed space is identical to the strong closure (by the Hahn-Banach Theorem), it is easily argued that the weak limit $x$ of the subsequence would satisfy $\|x - x_0\| \le r$, $x \in C_X$ and $b - Ax \in C_Y$, contradicting (3.2).

There exists $\epsilon' > 0$ such that
\[ \|b' - b\| \le \epsilon' \Rightarrow \mathrm{Feas}(d') \cap \{x : \|x - x_0\| \le r'\} = \emptyset, \quad \text{where } d' := (A, b', c^*). \]
For otherwise there would exist a sequence of pairs $\{(x_i, b_i)\}$ such that $\|b_i - b\| \downarrow 0$, $\|x_i - x_0\| \le r'$ and $x_i \in \mathrm{Feas}(d_i)$ where $d_i = (A, b_i, c^*)$; the sequence $\{x_i\}$ would have a weakly convergent subsequence by the Eberlein-Shmulyan Theorem; the weak limit $x$ of the subsequence would be easily argued to satisfy both $x \in \mathrm{Feas}(d)$ and $\|x - x_0\| \le r'$, contradicting the definition of $r'$.

Let $K$ denote the closed, convex set consisting of all points of distance at most $r$ from the convex set
\[ \bigcup_{b' : \|b' - b\| \le \epsilon'} \mathrm{Feas}(d'). \]
By definition of $\epsilon'$, $x_0 \notin K$; thus, by the Hahn-Banach Theorem there exists $\tilde{c}^* \in X^*$ such that
\[ \sup\{\tilde{c}^* x : x \in K\} < \tilde{c}^* x_0 \]
and hence
\[ \|b' - b\| \le \epsilon' \Rightarrow \sup\{\tilde{c}^* x : x \in \mathrm{Feas}(d')\} < \tilde{c}^* x_0 - \|\tilde{c}^*\| r. \]
Letting $\tilde{d} := (A, b, \tilde{c}^*)$, it follows that the asymptotic optimal value $\mathrm{val}^a(\tilde{d})$ satisfies
\[ \mathrm{val}^a(\tilde{d}) < \tilde{c}^* x_0 - \|\tilde{c}^*\| r. \]
Hence, by Proposition 2.4 and the fact that $\mathrm{val}^a(\tilde{d}) \ne \infty$ (because $\mathrm{val}^a(d) \ne -\infty$), (3.3) is valid. $\Box$

Proof of Proposition 3.11. Assume the conclusion is not true, i.e., assume
\[ \mathrm{Feas}(d) \cap \{x : \|x - x_0\| \le r\} = \emptyset, \quad \text{where } x_0 := 0 \text{ and } r := \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]
Then $d$, $x_0$ and $r$ satisfy the assumptions of Lemma 3.13. Let $\tilde{d}$ be as in the conclusion of that lemma; thus
\[ \mathrm{val}(\tilde{d}^*) < \tilde{c}^* x_0 - \|\tilde{c}^*\| r = -\frac{\|b\|\,\|\tilde{c}^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]
However, this contradicts Proposition 3.6 applied to $\tilde{d}$, since $\mathrm{dist}(\tilde{d}, \mathrm{Pri}_\emptyset) = \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$. $\Box$

Proof of Proposition 3.12. Assume the conclusion is not true, i.e., assume
\[ \mathrm{Feas}(d) \cap \{x : \|x - x_0\| \le r\} = \emptyset, \quad \text{where } r := \|\Delta b\| \, \frac{\max\{1, \|x_0\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \]
Then $d$, $x_0$ and $r$ satisfy the assumptions of Lemma 3.13. Let $\tilde{d}$ be as in the conclusion of that lemma; thus
\[ \mathrm{val}(\tilde{d}^*) < \tilde{c}^* x_0 - \|\tilde{c}^*\| r = \tilde{c}^* x_0 - \|\Delta b\|\,\|\tilde{c}^*\| \, \frac{\max\{1, \|x_0\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \tag{3.4} \]
Note that $x_0 \in \mathrm{Feas}(\tilde{d} + \Delta d)$, implying, by weak duality,
\[ \tilde{c}^* x_0 \le \mathrm{val}([\tilde{d} + \Delta d]^*). \tag{3.5} \]
Combining (3.4) and (3.5),
\[ \mathrm{val}([\tilde{d} + \Delta d]^*) - \mathrm{val}(\tilde{d}^*) > \|\Delta b\|\,\|\tilde{c}^*\| \, \frac{\max\{1, \|x_0\|\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}. \tag{3.6} \]
Since $\mathrm{dist}(\tilde{d}, \mathrm{Pri}_\emptyset) = \mathrm{dist}(d, \mathrm{Pri}_\emptyset) > 0$, (3.6) and Lemma 3.9 yield
\[ \|\tilde{c}^*\| \max\{1, \|x_0\|\} < \max\{\|\tilde{c}^*\|,\ \mathrm{val}(\tilde{d}^*)\}. \]
This gives a contradiction, since $\mathrm{val}(\tilde{d}^*) \le \|\tilde{c}^*\|\,\|x_0\|$ by (3.4). $\Box$
4 Proof of Theorem 1.1

We now collect our results to prove the theorem. As mentioned in the introduction, the results of Section 3 provide, for many special cases, better bounds than those asserted by the theorem.

Proposition 3.11 establishes (1) of the theorem, Proposition 3.12 establishes (2), and Proposition 3.8 establishes (3). Proposition 3.5, together with the lower bound provided by (3) of the theorem, shows that in the setting of (4) we have $\mathrm{Opt}(d) \ne \emptyset$ and
\[ x \in \mathrm{Opt}(d) \Rightarrow \|x\| \le \frac{\|b\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \max\left\{1,\ \frac{\|c^*\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}\right\}. \]
Since $\|c^*\| \le \|d\|$, to establish (4) it thus suffices to show that under the assumptions of (4) at least one of the following two relations is true:
\[ \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)} \ge 1 \tag{4.1} \]
\[ \mathrm{Opt}(d) = \{0\}. \tag{4.2} \]
Assume (4.1) is not true. Then $\mathrm{dist}(0, \mathrm{Pri}_\emptyset)$ (i.e., the distance from $\mathrm{Pri}_\emptyset$ to the identically zero LP) satisfies $\mathrm{dist}(0, \mathrm{Pri}_\emptyset) > 0$ and hence, since $\mathrm{Pri}_\emptyset$ is closed under multiplication by positive scalars, $\mathrm{Pri}_\emptyset = \emptyset$. Note that $\mathrm{Pri}_\emptyset = \emptyset$ implies $\{\tilde{b} \in Y : \tilde{b} \succeq 0\} = Y$; otherwise the RHS of the identically zero LP could be perturbed to obtain an infeasible LP, a contradiction. Hence, $\mathrm{Pri}_\emptyset = \emptyset$ implies that the feasible region for all LP's is precisely the closed cone $\{x : x \succeq 0\}$. Consequently, if (4.1) and (4.2) are not true then a slight perturbation of $c^*$ in $d = (A, b, c^*)$ creates an LP with unbounded optimal value, contradicting $\mathrm{dist}(d, \mathrm{Dual}_\emptyset) > 0$ as is assumed in (4) of the theorem. In all, either (4.1) or (4.2) is true, concluding the proof of (4).

Towards proving (5), first note we may assume that
\[ \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)} \ge 1. \tag{4.3} \]
Otherwise, by arguments analogous to the preceding, we deduce that $\{\tilde{c}^* \in X^* : \tilde{c}^* \succeq 0\} = X^*$ (hence $\{x \in X : x \succeq 0\} = \{0\}$) and $\{\tilde{b} \in Y : \tilde{b} \succeq 0\} = Y$. Consequently $0$ is an optimal solution for all LP's, and then (5) follows trivially. So we may assume (4.3).

Assertion (5) is established with (3), Proposition 3.10 and tedious consideration of several cases. For example, consider the case
\[ |\mathrm{val}(d + \Delta d) - \mathrm{val}(d)| = \mathrm{val}(d + \Delta d) - \mathrm{val}(d) \tag{4.4} \]
\[ \mathrm{val}(d + \Delta d) < 0. \tag{4.5} \]
Then the factor for $\|\Delta A\|$ in the bound of Proposition 3.10 is
\[ \frac{\|c^*\| \max\{\|b + \Delta b\|,\ -\mathrm{val}(d + \Delta d)\}}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)\, \mathrm{dist}(d + \Delta d, \mathrm{Dual}_\emptyset)}. \tag{4.6} \]
The crucial point is that at most one of the two quantities $\mathrm{val}(d)$ and $\mathrm{val}(d + \Delta d)$ appears in this expression; the same is true in all cases. As noted in the remark following the statement of Proposition 3.10, the value $-\mathrm{val}(d + \Delta d)$ in (4.6) can be replaced with $-\mathrm{val}(d)$; in turn, $-\mathrm{val}(d)$ can be replaced with the upper bound $\|b\|\,\|c^*\| / \mathrm{dist}(d, \mathrm{Pri}_\emptyset)$ (provided by statement (3) of the theorem), which clearly does not exceed
\[ \frac{(\|b\| + \|\Delta b\|)\,\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset \cup \mathrm{Dual}_\emptyset)}. \tag{4.7} \]
Substituting (4.7) for $-\mathrm{val}(d + \Delta d)$ in (4.6) and using (4.3), one arrives at a quantity which is obviously bounded from above by the factor for $\|\Delta A\|$ in (5). Similarly, one argues that the other factors are correct in the case that (4.4) and (4.5) are valid. The other cases are handled with equally tedious and obvious arguments. $\Box$
5 Examples

In this section we display simple examples indicating that the bounds of Theorem 1.1 cannot be improved in general without relying on additional parameters. The examples form a family of two-variable LP's depending on parameters $s$ and $t$:
\[ \max\ x_1 \quad \text{s.t.} \quad (st)x_1 + x_2 \le s, \quad t x_1 + x_2 \le 1, \quad x_1, x_2 \ge 0. \]
We use the notation $d(s,t) = (A(s,t), b(s,t), c^*(s,t))$ when referring to the family. We assume $0 < s \le 1$ and $0 < t \le 1$.

Endow the domain $X = \mathbb{R}^2$ with the $\ell_1$-norm and endow the range $Y = \mathbb{R}^2$ with the $\ell_\infty$-norm. So $X^* = \mathbb{R}^2$ is given the $\ell_\infty$-norm and $Y^* = \mathbb{R}^2$ is given the $\ell_1$-norm.
Lemma 5.1
\[ \|A(s,t)\| = \|b(s,t)\| = \|c^*(s,t)\| = 1, \qquad \mathrm{dist}(d(s,t), \mathrm{Pri}_\emptyset) = s, \qquad \mathrm{dist}(d(s,t), \mathrm{Dual}_\emptyset) = t. \]
Proof. A straightforward exercise left to the reader. $\Box$

Lemma 5.1 elucidates the roles of the parameters $s$ and $t$; they allow us to choose the values
\[ \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Pri}_\emptyset)}, \qquad \frac{\|d\|}{\mathrm{dist}(d, \mathrm{Dual}_\emptyset)} \]
independently between $1$ and $\infty$. In light of this, the following lemma indicates the "optimality" of several of the bounds of Theorem 1.1 or their dual versions.
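The unit norms of Lemma 5.1 and the optimal values claimed below in Lemma 5.2 can be checked numerically for particular $s, t$. The following sketch (assuming NumPy and SciPy are available; the choice $s = 0.25$, $t = 0.5$ is ours) verifies the three unit norms, the primal optimal value $1/t$, and dual feasibility of $y^* = (1/(st), 0)$:

```python
import numpy as np
from scipy.optimize import linprog

def d(s, t):
    # max x1  s.t.  (st)x1 + x2 <= s,  t x1 + x2 <= 1,  x >= 0
    A = np.array([[s * t, 1.0], [t, 1.0]])
    b = np.array([s, 1.0])
    c = np.array([1.0, 0.0])
    return A, b, c

s, t = 0.25, 0.5
A, b, c = d(s, t)

# Lemma 5.1 norms: X has the l1-norm and Y the l_inf-norm, so
# ||A|| = max_ij |a_ij|, ||b|| = ||b||_inf, ||c*|| = ||c||_inf.
assert np.abs(A).max() == 1.0 and np.abs(b).max() == 1.0 and np.abs(c).max() == 1.0

# Primal optimal value (linprog minimizes, so negate the objective).
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(-res.fun, 1 / t)   # val(d(s,t)) = 1/t

# y* = (1/(st), 0) is dual feasible with the same objective value 1/t.
y = np.array([1 / (s * t), 0.0])
assert np.all(y @ A >= c - 1e-9) and abs(y @ b - 1 / t) < 1e-12
```

The agreement of the primal value with the dual objective at $y^*$ also confirms the absence of a duality gap for this (polyhedral) instance.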
Lemma 5.2

(1) If $y^*$ is feasible for the dual $d(s,t)^*$ then
\[ \|y^*\| \ge \frac{1}{t}. \]

(2) Let $d(r,s,t)$ denote the LP obtained from $d(s,t)$ by replacing the first coefficient of $b(s,t)$ with $s - r$. Letting $x_0 = (\frac{1}{t}, 0)$, which is feasible for $d(s,t)$, and assuming $r \ge 0$, all feasible points $x$ for $d(r,s,t)$ satisfy
\[ \|x - x_0\| \ge \frac{r}{st}. \]

(3)
\[ \mathrm{val}(d(s,t)) = \frac{1}{t}. \]

(4) There exists $y^* \in \mathrm{Opt}(d(s,t)^*)$ with $\|y^*\| = \frac{1}{st}$ (namely, $y^* = (\frac{1}{st}, 0)$).

(5a) If $\tilde{d}(r,s,t)$ is the LP obtained from $d(s,t)$ by replacing the $(1,1)$ coefficient of $A(s,t)$ with $st + r$, then
\[ \lim_{r \downarrow 0} \frac{|\mathrm{val}(\tilde{d}(r,s,t)) - \mathrm{val}(d(s,t))|}{\|\tilde{A}(r,s,t) - A(s,t)\|} = \frac{1}{st^2}. \]

(5b) If $d(r,s,t)$ is as defined in (2), then
\[ \lim_{r \downarrow 0} \frac{|\mathrm{val}(d(r,s,t)) - \mathrm{val}(d(s,t))|}{\|b(r,s,t) - b(s,t)\|} = \frac{1}{st}. \]

Proof. A straightforward exercise left to the reader. $\Box$

The "optimality" of the bounds of Theorem 1.1 not addressed by Lemma 5.2, and the "optimality" of the remaining dual version bounds, are verified by considering the dual of $d(s,t)$ as the primal, multiplying the objective by $-1$ to obtain a maximization problem. The reader might be confused as to why Lemma 5.2(2) indicates the optimality of Theorem 1.1(2); in the notation of the theorem, let $d = d(r,s,t)$, $d + \Delta d = d(s,t)$, and consider $r \downarrow 0$, so that $\mathrm{dist}(d, \mathrm{Pri}_\emptyset) \to s$.
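The limits in Lemma 5.2(5a) and (5b) can likewise be approximated by solving the perturbed LP's at a small fixed $r$. A numeric sketch, assuming SciPy; with our choice $s = 0.25$, $t = 0.5$ the limits are $1/(st^2) = 16$ and $1/(st) = 8$:

```python
from scipy.optimize import linprog

def val(A, b):
    # max x1 over {x >= 0 : Ax <= b}  (linprog minimizes, so negate c)
    res = linprog([-1.0, 0.0], A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    return -res.fun

s, t = 0.25, 0.5
base = val([[s * t, 1.0], [t, 1.0]], [s, 1.0])        # = 1/t

r = 1e-6
# (5a): perturb the (1,1) entry of A by r; ||A~ - A|| = r.
ratio_A = abs(val([[s * t + r, 1.0], [t, 1.0]], [s, 1.0]) - base) / r
# (5b): perturb the first entry of b by -r; ||b(r) - b|| = r.
ratio_b = abs(val([[s * t, 1.0], [t, 1.0]], [s - r, 1.0]) - base) / r

print(ratio_A, 1 / (s * t ** 2))   # -> 1/(s t^2) as r -> 0
print(ratio_b, 1 / (s * t))        # -> 1/(s t)
```

Driving $s, t \downarrow 0$ makes both ratios blow up, which is the sense in which the sensitivity bounds of Theorem 1.1(5) cannot be improved.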
References

[1] E.J. Anderson and P. Nash, Linear Programming in Infinite Dimensional Spaces: Theory and Applications, Wiley, Chichester, 1987.

[2] R.J. Duffin, "Infinite programs," in Linear Inequalities and Related Systems, H.W. Kuhn and A.W. Tucker, eds., Princeton University Press, Princeton, 1956, 157-170.

[3] A.V. Fiacco and K.O. Kortanek, eds., Semi-Infinite Programming and Applications, Lecture Notes in Economics and Mathematical Systems 215, Springer-Verlag, New York, 1983.

[4] A.J. Hoffman, "On approximate solutions of systems of linear inequalities," Journal of Research of the National Bureau of Standards 49, 1952, 263-265.

[5] C. Kallina and A.C. Williams, "Linear programming in reflexive spaces," SIAM Review 13, 1971, 350-376.

[6] O.L. Mangasarian, "A condition number for linear inequalities and linear programs," in Methods of Operations Research 43, Proceedings of the 6th Symposium on Operations Research, Universität Augsburg, G. Bamberg and O. Opitz, eds., Verlagsgruppe Athenäum/Hain/Scriptor/Hanstein, Königstein, 1981, 3-15.

[7] O.L. Mangasarian, "Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems," SIAM J. Control and Optimization 25(3), May 1987.

[8] J. Renegar, "Incorporating condition measures into the complexity theory of linear programming," preprint, School of Operations Research and Industrial Engineering, Cornell University ([email protected]).

[9] S.M. Robinson, "Bounds for error in the solution set of a perturbed linear program," Linear Algebra and Its Applications 6, 1973, 69-81.

[10] W. Rudin, Functional Analysis, McGraw-Hill, New York, 1973.

[11] G.W. Stewart and J. Sun, Matrix Perturbation Theory, Academic Press, Boston, 1990.

[12] J. Vera, "Ill-posedness and the computation of solutions to linear programs with approximate data," preprint, Dept. Ingeniería Industrial, Universidad de Chile, República 701, Casilla 2777, Santiago, Chile ([email protected]).

[13] J. Vera, "Ill-posedness and the complexity of deciding existence of solutions to linear programs," preprint, Dept. Ingeniería Industrial, Universidad de Chile, República 701, Casilla 2777, Santiago, Chile ([email protected]).

[14] K. Yosida, Functional Analysis, Springer-Verlag, New York, 1966.