A Polynomial Time Interior-Point Path-Following Algorithm for LCP Based on Chen-Harker-Kanzow Smoothing Techniques

Song Xu and J. V. Burke

September 4, 1997 (Revised Version)
Abstract
A polynomial complexity bound is established for an interior point path following algorithm for the monotone linear complementarity problem that is based on the Chen-Harker-Kanzow smoothing techniques. The fundamental difference with the Chen-Harker and Kanzow algorithms is the introduction of a rescaled Newton direction. The rescaling requires the iterates to remain in the interior of the positive orthant. To compensate for this restriction, the iterates are not required to remain feasible with respect to the affine constraints. If the method is initiated at an interior point that is also feasible with respect to the affine constraints, then the complexity bound is $O(\sqrt{n}\,L)$; otherwise, the complexity bound is $O(nL)$. The relations between our search direction and the one used in the standard interior-point algorithm are also discussed.
Keywords: linear complementarity, polynomial complexity, path following, interior point method
Abbreviated title: An LCP Continuation Method
AMS(MOS) subject classifications (1991): 90C33
Department of Mathematics, Box # 354350, University of Washington, Seattle, WA 98195-4350 (e-mail: [email protected] and [email protected]). This research was supported by the National Science Foundation under Grant No. DMS-9303772.
1 Introduction

Consider the monotone linear complementarity problem:

LCP: Find $(x, y) \in \mathbb{R}^n \times \mathbb{R}^n$ satisfying
$$
Mx - y + q = 0, \tag{1.1}
$$
$$
x \ge 0, \quad y \ge 0, \quad x^T y = 0, \tag{1.2}
$$
where $M \in \mathbb{R}^{n \times n}$ is positive semi-definite and $q \in \mathbb{R}^n$.

In this paper, we establish the polynomial complexity of an interior point path following algorithm for LCP. The proposed algorithm can be viewed as an interior point variation on the Chen and Harker [5] and the Kanzow [21] non-interior path following algorithms for LCP. The algorithm has the same best polynomial-time complexity as is exhibited by the standard short-step interior point path following algorithm. The results of this paper represent a first step toward understanding the relationship between interior and non-interior path following methods and provide a spring-board for discovering the complexity of the new non-interior path following algorithms for LCP.

Path following (or continuation) methods for solving LCP are typically designed to follow the path in the positive orthant, $\mathbb{R}^n_{++} \times \mathbb{R}^n_{++}$, determined by the equations $F_\mu(x, y) = 0$ for $\mu > 0$, where the function $F_\mu : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n \times \mathbb{R}^n$ is given by
$$
F_\mu(x, y) := \begin{bmatrix} Mx - y + q \\ \Phi_\mu(x, y) \end{bmatrix}, \tag{1.3}
$$
with $\phi_\mu(a, b) = ab - \mu$ and
$$
\Phi_\mu(x, y) = \begin{bmatrix} \phi_\mu(x_1, y_1) \\ \vdots \\ \phi_\mu(x_n, y_n) \end{bmatrix}. \tag{1.4}
$$
This path is called the central path [23]. Most path following methods attempt to follow the central path by applying Newton's method to the equations $F_\mu(x, y) = 0$ for decreasing values of $\mu$. In this regard, predictor-corrector strategies are the most popular due to their rapid local convergence (for examples, see [29, 31, 34, 35]). In a predictor-corrector strategy a predictor step ($\mu = 0$) is followed by a corrector step ($\mu > 0$) to return the iterates to a pre-specified neighborhood of the central path. Interior point methods stay in the vicinity of the central path and remain in the positive orthant [23]. Each iterate of a feasible interior point method must satisfy the affine equation $0 = Mx - y + q$, while the iterates of an infeasible interior point method are not required to satisfy this equation. Non-interior path following methods also follow the central path, but the iterates do not necessarily reside in the positive orthant.

The first non-interior path following method for LCP was developed by Chen and Harker [5] and was based on a scaled version of the function
$$
\phi^{CH}_\mu(a, b) = \frac{a + b}{2} - \sqrt{\frac{(a - b)^2}{4} + \mu}.
$$
Later Kanzow [21] developed non-interior path following methods based on the functions $\phi^{CH}_\mu$ and
$$
\psi_\mu(a, b) = \frac{a + b}{\sqrt{2}} - \sqrt{\frac{a^2 + b^2}{2} + \mu}.
$$
It is easy to show that $\phi^{CH}_\mu(a, b) = 0$ (or $\psi_\mu(a, b) = 0$) if and only if $0 \le a$, $0 \le b$, and $ab = \mu$. Thus, the functions $\phi^{CH}_\mu$ and $\psi_\mu$ have a fundamental advantage over the function $\phi_\mu$ which makes them well suited to non-interior path following methods. That is, the condition $\phi^{CH}_\mu(a, b) = 0$ (or $\psi_\mu(a, b) = 0$) guarantees the non-negativity of the arguments $a$ and $b$. Using $\phi^{CH}_\mu$ and $\psi_\mu$ as building blocks, one defines the functions
$$
F^{CH}_\mu(x, y) := \begin{bmatrix} Mx - y + q \\ \Phi^{CH}_\mu(x, y) \end{bmatrix}, \tag{1.5}
$$
where
$$
\Phi^{CH}_\mu(x, y) = \begin{bmatrix} \phi^{CH}_\mu(x_1, y_1) \\ \vdots \\ \phi^{CH}_\mu(x_n, y_n) \end{bmatrix}, \tag{1.6}
$$
and
$$
F^{\psi}_\mu(x, y) := \begin{bmatrix} Mx - y + q \\ \Psi_\mu(x, y) \end{bmatrix}, \tag{1.7}
$$
where
$$
\Psi_\mu(x, y) = \begin{bmatrix} \psi_\mu(x_1, y_1) \\ \vdots \\ \psi_\mu(x_n, y_n) \end{bmatrix}. \tag{1.8}
$$
Clearly, a point $(x, y)$ is on the central path if and only if $F^{CH}_\mu(x, y) = 0$ (or $F^{\psi}_\mu(x, y) = 0$). Setting $\mu = 0$, we have $\phi^{CH}_0(a, b) = \min\{a, b\}$. This instance of $\phi^{CH}_\mu$ has been studied extensively by Pang [26, 27] and Harker and Pang [18]. Again taking $\mu = 0$, the function $\psi_0(a, b)$ was introduced by Fischer in [12], who attributes the function to Burmeister. In the growing literature associated with the function $\psi_0$ [8, 10, 11, 13, 14, 15, 20, 22, 30] it is often referred to as the Fischer-Burmeister function. Newton-like implementations based on these functions have proven to be quite successful. Extensions to solving nonlinear programming problems with equilibrium constraints are also being studied [9, 16].

Two reasons for the growing interest in non-interior methods based on the functions $\phi^{CH}_\mu$ and $\psi_\mu$ are (1) these methods are ideally suited for application to the nonlinear complementarity problem, where the interiority restriction on the iterates is quite severe, and (2) the numerical evidence on the efficiency of these methods is very impressive. We partially explain this numerical success by establishing the polynomial complexity of an interior point implementation. This is the first complexity result available for these methods and indicates that a similar complexity result may be possible for a non-interior implementation.

The functions $\phi^{CH}_\mu$ and $\psi_\mu$ are very closely related. By rewriting the expression under the square root, we see that
$$
\phi^{CH}_\mu(a, b) = \frac{a + b}{2} - \sqrt{\frac{(a + b)^2}{4} + (\mu - ab)}
$$
and
$$
\psi_\mu(a, b) = \frac{a + b}{\sqrt{2}} - \sqrt{\frac{(a + b)^2}{2} + (\mu - ab)}.
$$
The analyses of the algorithms based on these two functions are very similar, differing only by a constant here and there. In what follows, we choose to focus on the function $\psi_\mu$. However, whenever appropriate, we indicate how the analysis differs when the function $\phi^{CH}_\mu$ is used instead of $\psi_\mu$.

The plan of the paper is as follows. In Section 2, we discuss the rescaled Newton direction for the functions $F^{CH}_\mu$ and $F^{\psi}_\mu$ and its relation to the direction used in the standard interior point methods. The algorithm and its complexity are presented in Section 3. We conclude in Section 4 with some remarks on the relationship between our algorithm and the algorithms studied by Mizuno [24, 25].

A few words about our notation are in order. All vectors are column vectors with the superscript $T$ denoting transpose. The notation $\mathbb{R}^n$ is used for real $n$-dimensional space, with $\mathbb{R}^n_{++}$ being the positive orthant, i.e. the set of vectors in $\mathbb{R}^n$ that are componentwise positive. Following standard usage in the interior point literature, we denote by $e \in \mathbb{R}^n$ the vector each of whose components is 1, and, for the vectors $x$, $y$, and $z$ in $\mathbb{R}^n$, we denote by $X$, $Y$, and $Z$ the diagonal matrices whose diagonal entries are given by $x$, $y$, and $z$, respectively, e.g., $X_{ii} = x_i$ for $i = 1, 2, \ldots, n$. With this notation, the function $\Phi_\mu(x, y)$ defined in (1.4) can be written as $\Phi_\mu(x, y) = Xy - \mu e$. Given $x \in \mathbb{R}^n$, we denote by $\|x\|_1$, $\|x\|$, and $\|x\|_\infty$ the 1-norm, the 2-norm, and the $\infty$-norm of $x$, respectively, and by $x_{\min}$ the minimum component of the vector $x$.
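To make the three functions concrete, the following numerical sketch (our own illustration, not part of the original development; all identifiers are ours) evaluates $\phi_\mu$, $\phi^{CH}_\mu$, and $\psi_\mu$ and checks the properties just described.

```python
import numpy as np

def phi(a, b, mu):
    # Standard interior point function (1.4): phi_mu(a, b) = ab - mu.
    return a * b - mu

def phi_ch(a, b, mu):
    # Chen-Harker smoothing function: (a + b)/2 - sqrt((a - b)^2/4 + mu).
    return 0.5 * (a + b) - np.sqrt(0.25 * (a - b) ** 2 + mu)

def psi(a, b, mu):
    # Kanzow smoothing function: (a + b)/sqrt(2) - sqrt((a^2 + b^2)/2 + mu).
    return (a + b) / np.sqrt(2) - np.sqrt(0.5 * (a ** 2 + b ** 2) + mu)

mu = 0.1
rng = np.random.default_rng(0)
a = rng.uniform(0.05, 2.0, size=1000)
b = mu / a                      # positive points with a*b = mu

# Both smoothing functions vanish on this set (up to rounding error),
# whereas phi vanishes on the larger set {ab = mu} regardless of sign.
print(np.max(np.abs(phi_ch(a, b, mu))), np.max(np.abs(psi(a, b, mu))))

# At mu = 0 they reduce to min(a, b) and a scaled Fischer-Burmeister function.
print(phi_ch(1.0, 2.0, 0.0))                         # = min(1, 2) = 1
print(psi(3.0, 4.0, 0.0), (3 + 4 - 5) / np.sqrt(2))  # equal values
```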
2 The Rescaled Newton Directions

The first step in our analysis is to rescale the Newton step to yield iterates comparable to those of a standard interior point strategy. By analogy with the infeasible interior point strategies, at iteration $k$ we compute a Newton step based on the equations
$$
F^{\psi}_{\mu_k}(x, y) = \begin{bmatrix} s^k \\ 0 \end{bmatrix},
$$
where $s^k := (1 - \theta_k)(Mx^k - y^k + q)$ with $0 < \theta_k < 1$. The equations for the Newton step $(\Delta x^k, \Delta y^k)$ take the form
$$
M \Delta x - \Delta y = -\theta_k (Mx^k - y^k + q), \tag{2.1}
$$
$$
D_x^k \Delta x + D_y^k \Delta y = -\Psi_{\mu_k}(x^k, y^k), \tag{2.2}
$$
where
$$
D_x^k := \operatorname{diag}\left( \frac{1}{\sqrt{2}} - \frac{x_i^k}{2\sqrt{\frac{(x_i^k)^2 + (y_i^k)^2}{2} + \mu_k}} \right)
\quad\text{and}\quad
D_y^k := \operatorname{diag}\left( \frac{1}{\sqrt{2}} - \frac{y_i^k}{2\sqrt{\frac{(x_i^k)^2 + (y_i^k)^2}{2} + \mu_k}} \right).
$$
Observe that if $(x, y)$ is on the central path, then
$$
\sqrt{\frac{x_i^2 + y_i^2}{2} + \mu} = \frac{x_i + y_i}{\sqrt{2}}, \quad \text{for } i = 1, \ldots, n.
$$
By replacing the expression $\sqrt{\frac{(x_i^k)^2 + (y_i^k)^2}{2} + \mu_k}$ in the definitions of the diagonal matrices $D_x^k$ and $D_y^k$ by the expression $\frac{x_i^k + y_i^k}{\sqrt{2}}$ and then multiplying (2.2) through by the diagonal matrix $\operatorname{diag}\left(\sqrt{2}\,(x_i^k + y_i^k)\right)$, we obtain the rescaled Newton equations
$$
M \Delta x - \Delta y = -\theta_k (Mx^k - y^k + q), \tag{2.3}
$$
$$
Y^k \Delta x + X^k \Delta y = -2 \hat{\Psi}_{\mu_k}(x^k, y^k), \tag{2.4}
$$
where, for the sake of convenience, we define
$$
\hat{\Psi}_\mu(x, y) := \operatorname{diag}\left( \frac{x_i + y_i}{\sqrt{2}} \right) \Psi_\mu(x, y).
$$
The only difference between these rescaled Newton equations and the Newton equations used in a standard interior point path following strategy occurs in equation (2.4), where $2\hat{\Psi}_{\mu_k}(x^k, y^k)$ replaces the usual term $\Phi_{\mu_k}(x^k, y^k)$.

The pattern of our development should now be clear. After a few identities and inequalities relating the functions $\hat{\Psi}_\mu(x, y)$ and $\Phi_\mu(x, y)$ have been established, a convergence theory and complexity analysis can be developed which is based on standard techniques from the theory of interior point path following methods. The necessary identities and inequalities are given in the next lemma.

Lemma 2.1 For $a, b, \mu \in \mathbb{R}$ satisfying $a > 0$, $b > 0$, $\mu > 0$, we have
$$
\hat{\psi}_\mu(a, b) = \frac{(a + b)\,\phi_\mu(a, b)}{(a + b) + \sqrt{a^2 + b^2 + 2\mu}}, \tag{2.5}
$$
$$
\phi_\mu(a, b) = 2 \hat{\psi}_\mu(a, b) - \psi_\mu(a, b)^2, \tag{2.6}
$$
and
$$
|\hat{\psi}_\mu(a, b)| \le |\phi_\mu(a, b)|, \tag{2.7}
$$
where $\hat{\psi}_\mu(a, b) := \frac{a + b}{\sqrt{2}}\, \psi_\mu(a, b)$. In addition, given $\beta \in (0, 1)$ and $(x, y) \in \mathbb{R}^n_{++} \times \mathbb{R}^n_{++}$ in the neighborhood
$$
N(\beta, \mu) = \{ (x, y) : \|\Phi_\mu(x, y)\| \le \beta\mu \}, \tag{2.8}
$$
we have
$$
\left\| 2 \hat{\Psi}_\mu(x, y) - \Phi_\mu(x, y) \right\| \le \frac{\beta^2 \mu}{2(1 - \beta)}. \tag{2.9}
$$
Remarks
1. The identity (2.5) is due to B. Chen [2].
2. Inequality (2.7) implies that for $\beta > 0$ and $\mu > 0$, the neighborhood $N(\beta, \mu)$ (used in standard interior point methods) is contained in the neighborhood
$$
\hat{N}(\beta, \mu) = \{ (x, y) : \|\hat{\Psi}_\mu(x, y)\| \le \beta\mu \}. \tag{2.10}
$$
3. The identities (2.5) and (2.6), the inequalities (2.7) and (2.9), and the second remark remain valid with the expression $(a + b) + \sqrt{a^2 + b^2 + 2\mu}$ and the functions $\hat{\psi}_\mu$ and $\hat{\Psi}_\mu$ replaced by $(a + b) + \sqrt{(a - b)^2 + 4\mu}$, $\hat{\phi}^{CH}_\mu$, and $\hat{\Phi}^{CH}_\mu$, respectively, where $\hat{\phi}^{CH}_\mu(a, b) := \frac{a + b}{2}\, \phi^{CH}_\mu(a, b)$ and $\hat{\Phi}^{CH}_\mu(x, y) := \operatorname{diag}\left( \frac{x_i + y_i}{2} \right) \Phi^{CH}_\mu(x, y)$.
4. The bound (2.9) shows that the values $2\hat{\Psi}_{\mu_k}(x^k, y^k)$ approach the values $\Phi_{\mu_k}(x^k, y^k)$ used in the standard interior point methods as $\mu_k$ approaches 0. This partially explains why the interior point method based on the rescaled Newton direction studied in the next section has the same best polynomial-time complexity as the standard short step path-following interior point methods.
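The identities and bound of Lemma 2.1 are easy to check numerically; the following short sketch (our own illustration, not part of the analysis) evaluates (2.5)-(2.7) at random positive points and verifies (2.9) at a point of $N(\beta, \mu)$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, beta = 0.2, 0.5
a = rng.uniform(0.1, 3.0, 5)
b = rng.uniform(0.1, 3.0, 5)

phi  = a * b - mu                                              # phi_mu(a, b)
psi  = (a + b) / np.sqrt(2) - np.sqrt((a**2 + b**2) / 2 + mu)  # psi_mu(a, b)
psih = (a + b) / np.sqrt(2) * psi                              # psi-hat_mu(a, b)

# Identity (2.5): psi-hat = (a + b) * phi / ((a + b) + sqrt(a^2 + b^2 + 2 mu)).
print(np.max(np.abs(psih - (a + b) * phi / ((a + b) + np.sqrt(a**2 + b**2 + 2 * mu)))))

# Identity (2.6) and inequality (2.7).
print(np.max(np.abs(phi - (2 * psih - psi**2))))
print(bool(np.all(np.abs(psih) <= np.abs(phi))))

# Bound (2.9): take (x, y) in N(beta, mu), i.e. ||X y - mu e|| <= beta * mu, with x, y > 0.
n = 5
x = rng.uniform(0.5, 2.0, n)
d = rng.uniform(-1.0, 1.0, n)
d *= 0.9 * beta / np.linalg.norm(d)      # ||d|| <= beta
y = mu * (1.0 + d) / x                   # then X y - mu e = mu * d
psi_xy  = (x + y) / np.sqrt(2) - np.sqrt((x**2 + y**2) / 2 + mu)
psih_xy = (x + y) / np.sqrt(2) * psi_xy
print(np.linalg.norm(2 * psih_xy - (x * y - mu)), "<=", beta**2 * mu / (2 * (1 - beta)))
```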
Proof. For $a, b, \mu \in \mathbb{R}$ satisfying $a > 0$, $b > 0$, $\mu > 0$, the identity (2.5) is easily derived. The identity (2.6) and the inequality (2.7) follow readily from (2.5). In order to see the bound (2.9), note that for any $(x, y) \in N(\beta, \mu)$ we have $x_i y_i \ge (1 - \beta)\mu$ for $i = 1, 2, \ldots, n$, and so
$$
\frac{(x_i + y_i)^2}{2} = \frac{x_i^2 + y_i^2 + 2 x_i y_i}{2} \ge 2 x_i y_i \ge 2(1 - \beta)\mu \quad \text{for } i = 1, 2, \ldots, n. \tag{2.11}
$$
It now follows from the identity (2.6), the inequality (2.7), and (2.11) that
$$
\left\| 2 \hat{\Psi}_\mu(x, y) - \Phi_\mu(x, y) \right\|
= \left\| \left( \psi_\mu(x_i, y_i)^2 \right)_{i=1}^n \right\|
\le \frac{\left\| \hat{\Psi}_\mu(x, y) \right\|^2}{2(1 - \beta)\mu}
\le \frac{\left\| \Phi_\mu(x, y) \right\|^2}{2(1 - \beta)\mu}
\le \frac{\beta^2 \mu^2}{2(1 - \beta)\mu}
= \frac{\beta^2 \mu}{2(1 - \beta)}. \qquad \Box
$$
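For illustration, the rescaled Newton direction defined by (2.3)-(2.4) can be computed by solving a single $2n \times 2n$ linear system. The following sketch is ours (all names in it are assumptions, not the authors' code) and does this for a small randomly generated monotone LCP.

```python
import numpy as np

def rescaled_newton_step(M, q, x, y, mu, theta):
    """Solve the rescaled Newton system (2.3)-(2.4):
         M dx - dy   = -theta * (M x - y + q)
         Y dx + X dy = -2 * Psi_hat_mu(x, y)."""
    n = len(x)
    psi = (x + y) / np.sqrt(2) - np.sqrt((x**2 + y**2) / 2 + mu)
    psi_hat = (x + y) / np.sqrt(2) * psi
    A = np.block([[M, -np.eye(n)],
                  [np.diag(y), np.diag(x)]])
    rhs = np.concatenate([-theta * (M @ x - y + q), -2.0 * psi_hat])
    d = np.linalg.solve(A, rhs)
    return d[:n], d[n:]

# Tiny monotone example (M = B^T B is positive semidefinite).
rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
M = B.T @ B
q = rng.standard_normal(3)
x = np.ones(3)
y = np.ones(3)
dx, dy = rescaled_newton_step(M, q, x, y, mu=0.5, theta=0.1)
print(dx, dy)
# For comparison, the standard interior point step replaces -2*Psi_hat by -(X y - mu e).
```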
3 The Algorithm

We present an algorithm based on the interior point algorithm proposed by Tseng [29]. The global linear convergence and complexity results are stated without proof since these proofs closely parallel those provided by Tseng [29].

Algorithm
Choose any $(\beta_1, \beta_2) \in \mathbb{R}^2$ satisfying
$$
0 < \beta_1 < \beta_2 < 1, \qquad
\frac{2\beta_1}{1 - \beta_1} < \beta_2, \qquad
\frac{\beta_1^2}{2(1 - \beta_1)} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) < \beta_1, \tag{3.1}
$$
and any $(x^0, y^0, \mu_0) \in \mathbb{R}^{2n+1}_{++}$ satisfying $\|\Phi_{\mu_0}(x^0, y^0)\| \le \beta_1 \mu_0$. Let
$$
\bar{\theta}_1 = \frac{\beta_1 - \left[ \frac{\beta_1^2}{2(1 - \beta_1)} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) \right]}{\sqrt{n} + \beta_1}. \tag{3.2}
$$
For $k = 0, 1, \ldots$, compute $(x^{k+1}, y^{k+1}, \mu_{k+1})$ from $(x^k, y^k, \mu_k)$ according to
$$
x^{k+1} = x^k + \Delta x^k, \qquad y^{k+1} = y^k + \Delta y^k, \qquad \mu_{k+1} = (1 - \theta_k)\mu_k, \tag{3.3}
$$
where $\theta_k$ is the largest $\theta \in (0, \bar{\theta}_1]$ satisfying
$$
\left\| 2\hat{\Psi}_{\mu_k}(x^k, y^k) + \theta X^k (Mx^k - y^k + q) \right\| \le \beta_2 \left( \mu_k - \|\Phi_{\mu_k}(x^k, y^k)\| \right), \tag{3.4}
$$
and $(\Delta x^k, \Delta y^k)$ is the unique vector in $\mathbb{R}^{2n}$ satisfying
$$
M \Delta x^k - \Delta y^k = -\theta_k (Mx^k - y^k + q), \tag{3.5}
$$
$$
Y^k \Delta x^k + X^k \Delta y^k = -2 \hat{\Psi}_{\mu_k}(x^k, y^k). \tag{3.6}
$$
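For concreteness, a minimal transcription of this iteration in code might look as follows. This is our own illustrative sketch (all identifiers are ours), and the exact "largest $\theta$" line search of (3.4) is replaced by a simple backtracking surrogate.

```python
import numpy as np

def smoothing_path_following(M, q, x, y, mu, beta1, beta2, max_iter=5000, tol=1e-10):
    """Illustrative sketch of the algorithm above (not the authors' code)."""
    n = len(x)
    c = beta1**2 / (2 * (1 - beta1)) + 2 * beta1 * beta2 + beta2**2 * (1 - beta1)
    theta_bar1 = (beta1 - c) / (np.sqrt(n) + beta1)                  # (3.2)
    for _ in range(max_iter):
        if mu < tol:
            break
        psi = (x + y) / np.sqrt(2) - np.sqrt((x**2 + y**2) / 2 + mu)  # Kanzow function
        psi_hat = (x + y) / np.sqrt(2) * psi
        phi = x * y - mu
        res = M @ x - y + q
        # Backtracking surrogate for the largest theta in (0, theta_bar1] satisfying (3.4).
        theta = theta_bar1
        while (np.linalg.norm(2 * psi_hat + theta * x * res)
               > beta2 * (mu - np.linalg.norm(phi)) and theta > 1e-16):
            theta *= 0.5
        # Rescaled Newton step (3.5)-(3.6).
        A = np.block([[M, -np.eye(n)], [np.diag(y), np.diag(x)]])
        rhs = np.concatenate([-theta * res, -2.0 * psi_hat])
        d = np.linalg.solve(A, rhs)
        x, y = x + d[:n], y + d[n:]
        mu *= 1.0 - theta
    return x, y, mu

# Tiny example with a feasible start exactly on the central path.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
M = B.T @ B + np.eye(4)          # positive definite, hence monotone
x0 = np.ones(4)
mu0 = 1.0
y0 = mu0 * np.ones(4)
q = y0 - M @ x0                  # makes M x0 - y0 + q = 0
x, y, mu = smoothing_path_following(M, q, x0, y0, mu0, beta1=0.09, beta2=0.2)
print(mu, np.max(np.abs(x * y)), np.linalg.norm(M @ x - y + q))
```

Since the method is a short-step scheme, $\mu$ decreases only by the factor $1 - \bar{\theta}_1 = 1 - O(1/\sqrt{n})$ per iteration, which is why the sketch allows many (cheap) iterations.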
Remarks
1. To implement the algorithm using the function $\phi^{CH}_\mu$, begin by selecting the parameters $\beta_1$ and $\beta_2$ so that
$$
0 < \beta_1 < \beta_2 < 1, \qquad
\frac{2\beta_1}{1 - \beta_1} < \beta_2, \qquad
\frac{\beta_1^2}{1 - \beta_1} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) < \beta_1. \tag{3.7}
$$
Then set
$$
\bar{\theta}_1 = \frac{\beta_1 - \left[ \frac{\beta_1^2}{1 - \beta_1} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) \right]}{\sqrt{n} + \beta_1} \tag{3.8}
$$
and replace the function $\hat{\Psi}_{\mu_k}$ in (3.4) and (3.6) by the function $\hat{\Phi}^{CH}_{\mu_k}$.
2. The set of pairs $(\beta_1, \beta_2)$ satisfying either (3.1) or (3.7) is non-empty. In both cases, it follows that $\bar{\theta}_1 > 0$. For a choice of $\beta_1$ and $\beta_2$ satisfying both (3.1) and (3.7), take $\beta_1 = 0.09$, $\beta_2 = 0.2$.

The following theorem shows that if the algorithm is initiated in the positive orthant, then it is well-defined and the iterates remain both in the positive orthant and in the neighborhood $\{(x, y) \in \mathbb{R}^n \times \mathbb{R}^n : \|\Phi_\mu(x, y)\| \le \beta_1 \mu\}$ for decreasing values of $\mu$.
Theorem 3.1 Fix any $(\beta_1, \beta_2) \in \mathbb{R}^2$ satisfying (3.1). Let $\bar{\theta}_1$ be given by (3.2). Suppose that $(x^k, y^k, \mu_k) \in \mathbb{R}^{2n+1}_{++}$ satisfies $\|\Phi_{\mu_k}(x^k, y^k)\| \le \beta_1 \mu_k$ and that $(\Delta x^k, \Delta y^k)$ satisfies (3.5) and (3.6), with $\theta_k$ being the largest $\theta \in (0, \bar{\theta}_1]$ satisfying (3.4). Then $\theta_k > 0$ exists and
$$
(x^k + \Delta x^k, y^k + \Delta y^k) > 0, \tag{3.9}
$$
$$
\left\| \Phi_{(1 - \theta_k)\mu_k}(x^k + \Delta x^k, y^k + \Delta y^k) \right\| \le \beta_1 (1 - \theta_k)\mu_k. \tag{3.10}
$$
Proof. For the sake of simplicity, denote $(x, y, \mu) = (x^k, y^k, \mu_k)$, $(\Delta x, \Delta y) = (\Delta x^k, \Delta y^k)$, and $\theta = \theta_k$, respectively. We first establish that $\theta_k > 0$ exists. By Lemma 2.1,
$$
\|2\hat{\Psi}_\mu(x, y)\| \le 2\|\Phi_\mu(x, y)\| \le 2\beta_1\mu.
$$
By the choice of $\beta_1$ and $\beta_2$ in (3.1), we know that $2\beta_1 < \beta_2(1 - \beta_1)$. Therefore,
$$
\|2\hat{\Psi}_\mu(x, y)\| < \beta_2(1 - \beta_1)\mu \le \beta_2\left( \mu - \|\Phi_\mu(x, y)\| \right),
$$
which implies that $\theta_k > 0$ exists, since a strict inequality holds in (3.4) when $\theta = 0$.
Next set $r = 2\hat{\Psi}_\mu(x, y)$, $s = Mx - y + q$, $z = X^{-1}\Delta x$, and $\beta = \|\Phi_\mu(x, y)\|/\mu$. Then the system (3.5) and (3.6) can be rewritten as
$$
MXz - \Delta y = -\theta s, \qquad YXz + X\Delta y = -r.
$$
It follows that
$$
(YX + XMX)z = -r - \theta Xs.
$$
Since $M$ is positive semidefinite, we have
$$
z^T YX z \le z^T (YX + XMX)z = z^T(-r - \theta Xs) \le \|z\|\, \|r + \theta Xs\|, \tag{3.11}
$$
which implies that
$$
\|z\| \le \frac{\|r + \theta Xs\|}{\min_i x_i y_i} \le \frac{\|r + \theta Xs\|}{(1 - \beta)\mu}, \tag{3.12}
$$
where the second inequality follows from $\|\Phi_\mu(x, y)\| = \|Xy - \mu e\| = \beta\mu$, so that
$$
Xy \ge (1 - \beta)\mu\, e. \tag{3.13}
$$
By combining (3.12) with (3.4), we find that $\|z\| \le \beta_2 < 1$. Thus, in particular, $e + z > 0$.
Let $x' = x + \Delta x$ and $y' = y + \Delta y$. Hence $x' = x + Xz = X(e + z) > 0$, since $x > 0$. From Lemma 2.1 and (3.6), for each $i = 1, \ldots, n$, we have
$$
\begin{aligned}
|(x_i + \Delta x_i)(y_i + \Delta y_i) - \mu|
&= \left| x_i y_i - \mu + [x_i \Delta y_i + y_i \Delta x_i] + \Delta x_i \Delta y_i \right| \\
&= \left| 2\hat{\psi}_\mu(x_i, y_i) - \frac{\hat{\psi}_\mu(x_i, y_i)^2}{\left( \frac{x_i + y_i}{\sqrt{2}} \right)^2} - 2\hat{\psi}_\mu(x_i, y_i) + \Delta x_i \Delta y_i \right| \\
&\le \frac{2\,\hat{\psi}_\mu(x_i, y_i)^2}{(x_i + y_i)^2} + |\Delta x_i \Delta y_i|
\;\le\; \frac{\hat{\psi}_\mu(x_i, y_i)^2}{2(1 - \beta)\mu} + |\Delta x_i \Delta y_i|,
\end{aligned} \tag{3.14}
$$
where the last inequality in (3.14) follows from the fact that $\frac{(x_i + y_i)^2}{2} \ge 2 x_i y_i \ge 2(1 - \beta)\mu$. Therefore,
$$
\begin{aligned}
\|X'y' - \mu e\|
&\le \frac{1}{2(1 - \beta)\mu}
\left\| \begin{pmatrix} \hat{\psi}_\mu(x_1, y_1)^2 \\ \vdots \\ \hat{\psi}_\mu(x_n, y_n)^2 \end{pmatrix} \right\|
+ \|Z X \Delta y\| && \text{(by (3.14))} \\
&\le \frac{1}{2(1 - \beta)\mu}
\left\| \begin{pmatrix} \hat{\psi}_\mu(x_1, y_1)^2 \\ \vdots \\ \hat{\psi}_\mu(x_n, y_n)^2 \end{pmatrix} \right\|
+ \left\| Z\left( -2\hat{\Psi}_\mu(x, y) - Y\Delta x \right) \right\| && \text{(by (3.6))} \\
&\le \frac{1}{2(1 - \beta)\mu} \left\| \hat{\Psi}_\mu(x, y) \right\|^2
+ 2 \left\| Z \hat{\Psi}_\mu(x, y) \right\| + \|Z Y X z\|_1 \\
&\le \frac{1}{2(1 - \beta)\mu} \left\| \hat{\Psi}_\mu(x, y) \right\|^2
+ 2 \|z\| \left\| \hat{\Psi}_\mu(x, y) \right\| + z^T Y X z \\
&\le \frac{(\beta\mu)^2}{2(1 - \beta)\mu} + 2\beta_2\beta\mu + \|z\|\,\|r + \theta Xs\| && \text{(Lemma 2.1 and (3.11))} \\
&\le \frac{\beta^2\mu}{2(1 - \beta)} + 2\beta_2\beta\mu + \beta_2^2(1 - \beta)\mu && \text{(by (3.4))} \\
&\le \left[ \frac{\beta_1^2}{2(1 - \beta_1)} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) \right]\mu,
\end{aligned} \tag{3.15}
$$
where we have used $\|Z Y X z\|_1 = z^T Y X z$ (since $x, y > 0$), and where (3.15) follows from the fact that $\beta \le \beta_1$ and
$$
2\beta_2\beta + \beta_2^2(1 - \beta) = 2\beta_2\beta + \beta_2^2 - \beta_2^2\beta
= \beta(2\beta_2 - \beta_2^2) + \beta_2^2
\le \beta_1(2\beta_2 - \beta_2^2) + \beta_2^2
= 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1).
$$
Therefore, by (3.1) and (3.15), $\|X'y' - \mu e\| \le \beta_1 \mu$. It follows from $x' > 0$ and $\beta_1 < 1$ that $y' > 0$. The triangle inequality, (3.15), and the inequality $\theta \le \bar{\theta}_1$ now imply that
$$
\begin{aligned}
\|X'y' - (1 - \theta)\mu e\|
&\le \|X'y' - \mu e\| + \theta\mu\sqrt{n} \\
&\le \left[ \frac{\beta_1^2}{2(1 - \beta_1)} + 2\beta_1\beta_2 + \beta_2^2(1 - \beta_1) \right]\mu + \theta\sqrt{n}\,\mu \\
&= \left[ \beta_1 - \bar{\theta}_1(\sqrt{n} + \beta_1) \right]\mu + \theta\sqrt{n}\,\mu \\
&\le \left[ \beta_1 - \theta(\sqrt{n} + \beta_1) + \theta\sqrt{n} \right]\mu
= \beta_1(1 - \theta)\mu. \qquad \Box
\end{aligned}
$$
The following global linear convergence result is patterned on [29, Theorem 3.1].
Theorem 3.2 Let $S$ denote the set of solutions to LCP:
$$
S := \{ (x, y) : 0 \le x,\ 0 \le y,\ y = Mx + q,\ \text{and } x^T y = 0 \},
$$
and let $\beta_1$, $\beta_2$, $\bar{\theta}_1$, and $\{(x^k, y^k, \mu_k, \theta_k)\}_{k=0,1,\ldots}$ be as generated by the Algorithm of Section 3. Then
$$
0 < (x^k, y^k), \tag{3.16}
$$
$$
\|\Phi_{\mu_k}(x^k, y^k)\| \le \beta_1 \mu_k, \tag{3.17}
$$
and
$$
\frac{\mu_k}{\mu_0}\,(Mx^0 - y^0 + q) = Mx^k - y^k + q \tag{3.18}
$$
for all $k$, where, for $k > 0$,
$$
\mu_k = (1 - \theta_{k-1}) \cdots (1 - \theta_0)\,\mu_0. \tag{3.19}
$$
Moreover, the sequence $\{(x^k, y^k)\}$ is bounded if and only if the solution set $S$ is non-empty, in which case, for any $(\bar{x}, \bar{y}) \in S$, we have $\theta_k \ge \min\{\bar{\theta}_1, \bar{\theta}_2\}$ for all $k$, where
$$
\bar{\theta}_2 =
\begin{cases}
\dfrac{[\beta_2(1 - \beta_1) - 2\beta_1]\,\mu_0 \min_i y_i^0}
{\left[ (1 + \beta_1)\, n \mu_0 + \bar{x}^T y^0 + (x^0)^T \bar{y} + (x^0)^T y^0 \right] \|Mx^0 - y^0 + q\|_\infty}
& \text{if } Mx^0 - y^0 + q \ne 0, \\[1.5ex]
1 & \text{if } Mx^0 - y^0 + q = 0.
\end{cases} \tag{3.20}
$$
Thus, if $S$ is nonempty, the Algorithm of Section 3 forces $\mu_k$ to zero at a global linear rate with a convergence ratio bounded above by $1 - \min\{\bar{\theta}_1, \bar{\theta}_2\}$. Therefore, by standard results in the interior point literature (e.g., see [23]), one can find an element of $S$ in $O((\min\{\bar{\theta}_1, \bar{\theta}_2\})^{-1} L)$ iterations, where $L$ denotes the size of the binary encoding of the problem. It is easily seen that $\bar{\theta}_1^{-1} = O(\sqrt{n})$, so it only remains to estimate $\bar{\theta}_2^{-1}$. In the case where $(x^0, y^0, \mu_0)$ is chosen so that $\bar{\theta}_2^{-1}$ is $O(\sqrt{n})$ (such as when $Mx^0 - y^0 + q = 0$), the iteration count is $O(\sqrt{n}\,L)$. In the case where $(x^0, y^0, \mu_0)$ is the standard choice
$$
x^0 = \rho_p e, \qquad y^0 = \rho_d e, \qquad \mu_0 = \rho_p \rho_d,
\qquad
\rho_p \ge \|\bar{x}\|_\infty, \qquad \rho_d \ge \max\{ \|\bar{y}\|_\infty,\ \|\rho_p M e + q\|_\infty \},
$$
where $(\bar{x}, \bar{y})$ is any element of $S$, the formula (3.20) yields
$$
\bar{\theta}_2^{-1} \le \frac{3(4 + \beta_1)\, n}{\beta_2(1 - \beta_1) - 2\beta_1},
$$
so the iteration count is $O(nL)$.
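A sketch of the standard starting point construction and of the step-length bound (3.2) follows. This is our own illustration; the a priori bounds passed in are assumptions, since a solution $(\bar{x}, \bar{y})$ is of course unknown in practice (in the complexity analysis one takes bounds of size $2^L$).

```python
import numpy as np

def standard_start(M, q, x_bound, y_bound):
    """Standard infeasible start: x0 = rho_p e, y0 = rho_d e, mu0 = rho_p * rho_d,
    where rho_p and rho_d are assumed to dominate ||x_bar||_inf and
    max{||y_bar||_inf, ||rho_p M e + q||_inf} for some solution (x_bar, y_bar)."""
    n = len(q)
    e = np.ones(n)
    rho_p = x_bound
    rho_d = max(y_bound, np.linalg.norm(rho_p * (M @ e) + q, ord=np.inf))
    x0, y0 = rho_p * e, rho_d * e
    return x0, y0, rho_p * rho_d   # X0 y0 = mu0 e, so (x0, y0) lies in N(beta1, mu0)

def theta_bar1(n, beta1, beta2):
    # (3.2): the uniform step-length bound driving the O(sqrt(n) L) factor.
    c = beta1**2 / (2 * (1 - beta1)) + 2 * beta1 * beta2 + beta2**2 * (1 - beta1)
    return (beta1 - c) / (np.sqrt(n) + beta1)

print(theta_bar1(n=100, beta1=0.09, beta2=0.2))   # decreases like 1/sqrt(n)
```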
4 Concluding Remarks

In Section 3, we present the first rate of convergence result and the first complexity result of any kind for a path following algorithm based on the Chen-Harker-Kanzow smoothing techniques. In the year following the announcement of this result there has been a flurry of activity on rate of convergence results for non-interior path following and smoothing methods for complementarity problems and variational inequalities [1, 3, 4, 6, 28, 32, 33]. All of this work builds on new neighborhood concepts [1, 19] for smoothing paths (e.g. the central path) that do not necessarily lie in the positive orthant. The first global linear convergence result for non-interior path following methods appears in [1]. The work in [3, 4, 6, 32, 33] builds on the ideas presented in [1] and [19]. In [3, 4, 6] the authors extend the analysis to larger classes of smoothing functions [7, 17] and, in addition, establish the local quadratic or super-linear convergence of their methods. In [28], the authors build on the approach developed in [19] and establish the global linear convergence or the local super-linear convergence of their method depending on the choice of parameters.

The interior point path following method studied in this paper is essentially a variation on standard interior point methods wherein the right hand side in the Newton equations is perturbed in a very special way. For this reason, it may be possible to analyze the algorithm within the framework developed by Mizuno. In [24, 25], Mizuno proposed a class of feasible interior point algorithms for monotone LCP which are based on the search direction $(\Delta x^k, \Delta y^k)$ satisfying the following equations:
$$
M \Delta x - \Delta y = 0, \tag{4.1}
$$
$$
Y^k \Delta x + X^k \Delta y = v^{k+1} - X^k y^k, \tag{4.2}
$$
where $v^{k+1} \in \mathbb{R}^n_{++}$. By choosing different sequences $\{v^k\}$, Mizuno is able to construct both path following and potential reduction methods and thereby provides a unifying framework within which a number of interior point methods can be studied. By setting
$$
v_i^{k+1} = \mu_k - \psi_{\mu_k}(x_i^k, y_i^k)^2, \qquad i = 1, 2, \ldots, n, \tag{4.3}
$$
we can see from Lemma 2.1 that, in the feasible case, the Newton equations (4.1) and (4.2) are the same as the rescaled Newton equations (2.3) and (2.4). Therefore, it may also be possible to analyze the rescaled Newton direction discussed in Section 2 within Mizuno's framework. In order for this program to work, one must first show that the sequence $\{v^k\}$ defined in (4.3) satisfies the following two properties:

(A) the sequence $\{v^k\}$ is a $\gamma$-sequence for some $\gamma \ge 0$, that is, $v^{k+1} \in N(v^k, \gamma)$ for all $k = 0, 1, 2, \ldots$, where
$$
N(v, \gamma) = \{ u \in \mathbb{R}^n : \|V^{-0.5}(v - u)\| \le \gamma \sqrt{v_{\min}} \},
$$
with $V = \operatorname{diag}(v)$, and

(B) there is an iteration index $m = O(\sqrt{n}\,L)$ such that $0 \le v^m \le 2^{-2L+1} e$.

For the applications considered by Mizuno, it is a straightforward matter to verify (A) and (B). This is not the case with the sequence defined in (4.3); indeed, this question remains open. If this question could be resolved in the affirmative, then it might be possible to extend Mizuno's analysis to this setting and thereby develop a deeper understanding of the relationship between the standard path following methods, potential reduction methods, and the path following method proposed in this paper.
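Whether (4.3) actually yields a $\gamma$-sequence is, as noted, open; the sketch below (entirely ours, with assumed names) merely shows how one would evaluate (4.3) and test membership in the neighborhood $N(v, \gamma)$ numerically along a run of the algorithm of Section 3.

```python
import numpy as np

def v_next(x, y, mu):
    # (4.3): v_i = mu - psi_mu(x_i, y_i)^2.
    psi = (x + y) / np.sqrt(2) - np.sqrt((x**2 + y**2) / 2 + mu)
    return mu - psi**2

def in_neighborhood(v, u, gamma):
    # ||V^{-1/2}(v - u)|| <= gamma * sqrt(min_i v_i), with V = diag(v).
    return np.linalg.norm((v - u) / np.sqrt(v)) <= gamma * np.sqrt(v.min())

# Property (A) asks whether consecutive members of the sequence (4.3), evaluated along
# the iterates (x^k, y^k, mu_k) of the algorithm, always pass this membership test.
x_k,  y_k,  mu_k  = np.array([1.0, 0.8, 1.2]),  np.array([0.9, 1.1, 0.7]),  0.9
x_k1, y_k1, mu_k1 = np.array([0.95, 0.85, 1.1]), np.array([0.85, 1.0, 0.75]), 0.85
print(in_neighborhood(v_next(x_k, y_k, mu_k), v_next(x_k1, y_k1, mu_k1), gamma=0.3))
```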
Acknowledgment: We thank Bintong Chen for observing the identity (2.5), which greatly simplified the original proof of Lemma 2.1.
References

[1] J. Burke and S. Xu. The global linear convergence of a non-interior path-following algorithm for linear complementarity problems. Preprint, Department of Mathematics, University of Washington, Seattle, WA 98195, December 1996.

[2] B. Chen. Personal communication, January 1997.

[3] B. Chen and X. Chen. A global linear and local quadratic continuation method for variational inequalities with box constraints. Preprint, Department of Management and Systems, Washington State University, Pullman, WA 99164-4736, March 1997.
[4] B. Chen and X. Chen. A global and local super-linear continuation method for P0 + R0 and monotone NCP. Preprint, Department of Management and Systems, Washington State University, Pullman, WA 99164-4736, May 1997.

[5] B. Chen and P. T. Harker. A non-interior-point continuation method for linear complementarity problems. SIAM J. Matrix Anal. Appl., 14:1168-1190, 1993.

[6] B. Chen and N. Xiu. A global linear and local quadratic non-interior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions. Preprint, Department of Management and Systems, Washington State University, Pullman, WA 99164-4736, February 1997.

[7] C. Chen and O. L. Mangasarian. A class of smoothing functions for nonlinear and mixed complementarity problems. Comput. Optim. Appl., 5:97-138, 1996.

[8] T. De Luca, F. Facchinei, and C. Kanzow. A semismooth equation approach to the solution of nonlinear complementarity problems. Mathematical Programming, 75:406-439, 1996.

[9] F. Facchinei, H. Jiang, and L. Qi. A smoothing method for mathematical programs with equilibrium constraints. Preprint, School of Mathematics, University of New South Wales, Sydney, Australia, 1996.

[10] F. Facchinei and J. Soares. Testing a new class of algorithms for nonlinear complementarity problems. In F. Giannessi and A. Maugeri, editors, Variational Inequalities and Network Equilibrium Problems, pages 69-83. Plenum Press, New York, 1995.

[11] F. Facchinei and J. Soares. A new merit function for nonlinear complementarity problems and a related algorithm. SIAM J. Optimization, 7:225-247, 1997.

[12] A. Fischer. A special Newton-type optimization method. Optimization, 24:269-284, 1992.

[13] A. Fischer. An NCP-function and its use for the solution of complementarity problems. In D.-Z. Du, L. Qi, and R. S. Womersley, editors, Recent Advances in Nonsmooth Optimization, pages 88-105. World Scientific Publishers, Singapore, 1995.

[14] A. Fischer. A Newton-type method for positive semi-definite linear complementarity problems. J. Optim. Theory Appl., 86:585-608, 1995.

[15] A. Fischer. On the super-linear convergence of a Newton-type method for LCP under weak conditions. To appear in Optimization Methods and Software, 1996.

[16] M. Fukushima, Z.-Q. Luo, and J.-S. Pang. A globally convergent sequential quadratic programming algorithm for mathematical programs with linear complementarity constraints. Preprint, Department of Mathematical Sciences, Whiting School of Engineering, The Johns Hopkins University, Baltimore, Maryland 21218-2692, USA, 1996.
[17] S. A. Gabriel and J. J. Moré. Smoothing of mixed complementarity problems. In M. C. Ferris and J. S. Pang, editors, Complementarity and Variational Problems: State of the Art, pages 105-116. SIAM, Philadelphia, PA, 1997.

[18] P. T. Harker and J.-S. Pang. A damped Newton method for the linear complementarity problem. In E. L. Allgower and K. Georg, editors, Computational Solution of Nonlinear Systems of Equations, pages 265-284. Lectures in Applied Mathematics, Volume 26, AMS, Providence, Rhode Island, 1990.

[19] K. Hotta and A. Yoshise. Global convergence of a class of non-interior-point algorithms using Chen-Harker-Kanzow functions for nonlinear complementarity problems. Discussion Paper Series, No. 708, University of Tsukuba, Tsukuba, Ibaraki 305, Japan, December 1996.

[20] C. Kanzow. Global convergence properties of some iterative methods for linear complementarity problems. SIAM J. Optimization, 6:326-341, 1996.

[21] C. Kanzow. Some noninterior continuation methods for linear complementarity problems. SIAM J. Matrix Anal. Appl., 17:851-868, 1996.

[22] C. Kanzow and H. Jiang. A continuation method for (strongly) monotone variational inequalities. Preprint, Institute of Applied Mathematics, University of Hamburg, Bundesstrasse 55, D-20146 Hamburg, Germany, 1996.

[23] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A unified approach to interior point algorithms for linear complementarity problems. Springer-Verlag, Berlin, 1991.

[24] S. Mizuno. An O(n³L) algorithm using a sequence for a linear complementarity problem. Journal of the Operations Research Society of Japan, 33:66-75, 1990.

[25] S. Mizuno. A new polynomial time method for a linear complementarity problem. Mathematical Programming, 56:31-43, 1992.

[26] J.-S. Pang. Newton's method for B-differentiable equations. Math. Oper. Res., 15:311-341, 1990.

[27] J.-S. Pang. A B-differentiable equation-based, globally and locally quadratically convergent algorithm for nonlinear programs, complementarity, and variational inequality problems. Math. Programming, 51:101-131, 1991.

[28] L. Qi and D. Sun. Globally linearly, and globally and locally superlinearly convergent versions of the Hotta-Yoshise non-interior point algorithm for nonlinear complementarity problems. Applied Mathematics Report, School of Mathematics, University of New South Wales, Sydney 2052, Australia, May 1997.

[29] P. Tseng. Simplified analysis of an O(nL)-iteration infeasible predictor-corrector path-following method for monotone LCP. In R. P. Agarwal, editor, Recent Trends in Optimization Theory and Applications, pages 423-434. World Scientific Press, Singapore, 1994.
[30] P. Tseng. Growth behavior of a class of merit functions for the nonlinear complementarity problem. J. Optim. Theory Appl., 89:17-37, 1996.

[31] S. J. Wright. A path-following infeasible-interior-point algorithm for linear complementarity problems. Optimization Methods and Software, 2:79-106, 1993.

[32] S. Xu. The global linear convergence of an infeasible non-interior path-following algorithm for complementarity problems with uniform P-functions. Technical Report, Department of Mathematics, University of Washington, Seattle, WA 98195, December 1996.

[33] S. Xu. The global linear convergence and complexity of a non-interior path-following algorithm for monotone LCP based on the Chen-Harker-Kanzow-Smale smoothing function. Technical Report, Department of Mathematics, University of Washington, Seattle, WA 98195, February 1997.

[34] Y. Ye and K. Anstreicher. On quadratic and O(√n L)-iteration convergence of a predictor-corrector algorithm for LCP. Math. Programming, 62:537-551, 1993.

[35] Y. Zhang. On the convergence of a class of infeasible interior point algorithms for the horizontal linear complementarity problem. SIAM J. Optimization, 4:208-227, 1994.