A PIVOTAL METHOD FOR AFFINE VARIATIONAL INEQUALITIES

MENGLIN CAO AND MICHAEL C. FERRIS

Abstract. We explain and justify a path-following algorithm for solving the equations $A_C(x) = a$, where $A$ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$, $C$ is a polyhedral convex subset of $\mathbb{R}^n$, and $A_C$ is the associated normal map. When $A_C$ is coherently oriented, we prove that the path-following method terminates at the unique solution of $A_C(x) = a$; this generalizes the well-known fact that Lemke's method terminates at the unique solution of LCP$(q, M)$ when $M$ is a P-matrix. Otherwise, we identify two classes of matrices which are analogues of the classes of copositive-plus and L-matrices in the study of the linear complementarity problem. We then prove that our algorithm processes $A_C(x) = a$ when $A$ is the linear transformation associated with such matrices. That is, when applied to such a problem, the algorithm will find a solution unless the problem is infeasible in a well-specified sense.

1. Introduction

In this paper we are concerned with the Affine Variational Inequality problem. The problem can be described as follows. Let $C$ be a polyhedral set and let $A$ be a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^n$. We wish to find $z \in C$ such that
$$\langle A(z) - a,\; y - z\rangle \ge 0, \quad \forall y \in C. \tag{AVI}$$
This problem has appeared in the literature in several disguises. The first is the linear generalized equation, that is
$$0 \in A(z) - a + \partial\psi_C(z), \tag{GE}$$
where $\psi_C(\cdot)$ is the indicator function of the set $C$, defined by
$$\psi_C(z) = \begin{cases} 0 & \text{if } z \in C, \\ +\infty & \text{otherwise.} \end{cases}$$
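For ease of reference in what follows, recall from [11] the normal map $A_C$ induced by $A$ and $C$ and the normal map equation (NE) that the algorithm of this paper actually solves:
$$A_C(x) := A\bigl(\pi_C(x)\bigr) + x - \pi_C(x), \qquad A_C(x) = a, \tag{NE}$$
where $\pi_C$ denotes the Euclidean projection onto $C$. A point $z$ solves (AVI) exactly when $z = \pi_C(x)$ for some solution $x$ of (NE); conversely, if $z$ solves (AVI) then $x = z - A(z) + a$ solves (NE).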
The path starts at a point $(w(\lambda), \lambda)$ with $\lambda > 0$ large and proceeds in the direction $(-e, -1)$. As the manifold $\mathcal M$ is finite, according to [3, Th. 15.13] the algorithm generates, in finitely many steps, either a point $(x, \lambda)$ in the boundary of $\mathcal M$ with $F(x, \lambda) = 0$, or a ray in $F^{-1}(0)$ different from the starting ray. As the boundary of $\mathcal M$ is $N_C \times \{0\}$, we see that in the first case $\lambda = 0$ and, by our earlier remarks, $x$ then satisfies $A_C(x) = a$. Therefore, in order to justify the algorithm we need only show that it cannot produce a ray different from the starting ray.

The algorithm in question permits solving the perturbed system $F(x, \lambda) = p(\epsilon)$, where $p(\epsilon)$ is of the form
$$p(\epsilon) = \sum_{i=1}^{n} p_i \epsilon^i$$
for appropriately chosen vectors $p_i$. It is shown in [3] that $p(\epsilon)$ is a regular value of $F$ for each small positive $\epsilon$, and it then follows by [3, Th. 9.1] that for such $\epsilon$, $F^{-1}(p(\epsilon))$ is a connected 1-manifold $Y(\epsilon)$, whose boundary is equal to its intersection with the boundary of $\mathcal M$, and which is subdivided by the chords formed by its intersections with the cells of $\mathcal M$ that it meets. Finally, for an easily computed function
$$b(\epsilon) = \sum_{i=1}^{n} b_i \epsilon^i$$
we have $(w(\lambda_1), \lambda_1) + b(\epsilon) \in Y(\epsilon)$, and for small positive $\epsilon$ this point evidently lies on a ray in $F^{-1}(p(\epsilon))$. Because we start on this ray, $Y(\epsilon)$ cannot be homeomorphic to a circle, and therefore it is homeomorphic to an interval. A simple computation at the starting point shows that the curve index [3, §12] at that point is $-1$. By [3, Lemma 12.1] this index will be constant along $Y(\epsilon)$. However, a computation similar to that in [3, Lemma 12.3] shows that in each cell of $\mathcal M$, if the direction of $Y(\epsilon)$ in that cell is $(r, \rho)$ then $(\operatorname{sgn}\rho)(\operatorname{sgn}\det T) = -1$, where $T$ is the linear transformation associated with $A_C$ in the corresponding cell of $N_C$. Under our hypotheses, $\det T$ must be positive, and therefore $\rho$ is negative everywhere along $Y(\epsilon)$. But this means that the parameter $\lambda$ decreases strictly in each cell of linearity that $Y(\epsilon)$ enters, and it follows from the structure of $\mathcal M$ that after finitely many steps we must have $\lambda = 0$, and therefore we have a point $x$ with $A_C(x) = a + p(\epsilon)$.

Now in practice the algorithm does not actually use a positive $\epsilon$, but only maintains the information necessary to compute $Y(\epsilon)$ for all small positive $\epsilon$, employing the lexicographic


ordering to resolve possible ambiguities when $\epsilon = 0$. Therefore after finitely many steps it will actually have computed $x^0$ with $A_C(x^0) = a$.

Note that for linear complementarity problems, the above algorithm corresponds to Lemke's method [8]. It is well known that for linear complementarity problems associated with P-matrices, Lemke's method terminates at a solution. For variational inequalities, the analysis above yields a similar result.

Theorem 2.3. Given the problem (NE), assume that $A_C$ is coherently oriented; then the path-following method given in this section terminates at a solution of (NE).
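Since for $C = \mathbb R^n_+$ the normal equation is the standard LCP and the method reduces to Lemke's algorithm, a small, textbook-style rendering of complementary pivoting for LCP$(q, M)$ may help fix ideas. The sketch below is only illustrative: it uses a dense tableau, the helper names are our own, and it omits the lexicographic anti-cycling safeguards that the implementation described in the next section relies on.

```python
import numpy as np

def lemke(M, q, max_iter=1000):
    """Textbook complementary pivoting for LCP(q, M): find z >= 0 with
    w = M z + q >= 0 and w^T z = 0.  Returns z, or None on ray termination.
    Dense tableau, no lexicographic anti-cycling -- an illustration only."""
    M, q = np.asarray(M, dtype=float), np.asarray(q, dtype=float)
    n = len(q)
    if np.all(q >= 0):
        return np.zeros(n)                       # z = 0 already solves the LCP
    # Tableau for  I w - M z - e z0 = q ;  columns: w | z | z0 | rhs
    T = np.hstack([np.eye(n), -M, -np.ones((n, 1)), q.reshape(-1, 1)])
    basis = list(range(n))                       # w is the initial basis
    z0 = 2 * n
    def pivot(r, c):
        T[r] /= T[r, c]
        for i in range(n):
            if i != r:
                T[i] -= T[i, c] * T[r]
        leaving, basis[r] = basis[r], c
        return leaving
    r = int(np.argmin(q))                        # most negative component of q
    leaving = pivot(r, z0)                       # z0 enters, w_r leaves
    for _ in range(max_iter):
        entering = leaving + n if leaving < n else leaving - n   # complement enters
        col = T[:, entering]
        ratios = [T[i, -1] / col[i] if col[i] > 1e-12 else np.inf for i in range(n)]
        r = int(np.argmin(ratios))
        if ratios[r] == np.inf:
            return None                          # secondary ray: no blocking variable
        leaving = pivot(r, entering)
        if leaving == z0:                        # z0 left the basis: solution found
            z = np.zeros(n)
            for i, b in enumerate(basis):
                if n <= b < 2 * n:
                    z[b - n] = T[i, -1]
            return z
    raise RuntimeError("iteration limit reached")
```

Ray termination here corresponds to the unbounded direction discussed above; for a P-matrix $M$ the loop always ends with $z_0$ leaving the basis.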

3. Algorithm Implementation

The previous section described a method for solving the Affine Variational Inequality over a general polyhedral set and showed (under a lexicographical ordering) that a coherently oriented normal equation (NE) can be solved in a finite number of iterations by a path-following method. In this section, we describe the numerical implementation of such a method, giving emphasis to the numerical linear algebra required to perform the steps of the algorithm. We shall specialize to the case where $C$ is given as
$$C := \{ z \mid Bz \ge b,\; Hz = h \} \tag{2}$$
and we shall assume that the linear transformation $A(z)$ is represented by the matrix $A$ in our current coordinate system.

We can describe our method to solve the normal equation in three stages. Note that by "solving", we mean producing a pair $(x, \pi(x))$, where $x$ is a solution of (NE) and $\pi(x)$ is the projection of $x$ onto the underlying set $C$.

In the first stage we remove lines from the set $C$, to form a reduced problem (over $\tilde C$) as outlined in the theory above. The lineality space of $C$ as defined by (2) is
$$\operatorname{lin} C = \ker\begin{bmatrix} B \\ H \end{bmatrix}.$$
We calculate bases for the lineality space and its orthogonal complement by performing a QR factorization (with column pivoting) of $\bigl[B^T \;\; H^T\bigr]$. If $[W \;\; V]$ represents these bases, the reduced problem is to solve the normal equation
$$\tilde A_{\tilde C}(y) = \tilde a \tag{3}$$
where
$$\tilde C = \{ z \mid \tilde B z \ge b,\; \tilde H z = h \}, \qquad \tilde B = BV, \qquad \tilde H = HV. \tag{4}$$
Here
$$\tilde A = U^T A U, \qquad \tilde a = V^T (I - AZ)\, a \tag{5}$$
with
$$Z = W (W^T A W)^{-1} W^T, \qquad U = (I - ZA) V \tag{6}$$

and $Z$ satisfies $Z^T A Z = Z^T$. In practice, $\tilde A$ and $\tilde a$ are calculated using one LU factorization of $W^T A W$. Furthermore, the solution pair $(x, \pi(x))$ of the original normal equation (NE) can be recovered from the solution pair $(y, \pi(y))$ of (3) using the identities
$$x_l = Z\bigl(a - AV\pi(y)\bigr), \qquad x = x_l + Vy, \qquad \pi(x) = x_l + V\pi(y).$$
Therefore, we can assume that the problem has the form (3), with $\tilde C$ given by (4), and that the matrix $\begin{bmatrix}\tilde B \\ \tilde H\end{bmatrix}$ has full column rank.

In the second stage, we determine an extreme point of the set $\tilde C$, and using this information reduce the problem further by forcing the iterates to lie in the affine space generated by the equality constraints. More precisely, we have the following result:

Lemma 3.1. Suppose $y_e \in \tilde C$ and $Y$ is a basis for the kernel of $\tilde H$. Then $y$ solves (3) if and only if $y = y_e + Y x$, where $x$ solves
$$\bar A_{\bar C}(x) = \bar a. \tag{7}$$
Here $\bar A = Y^T \tilde A Y$, $\bar a = Y^T(\tilde a - \tilde A y_e)$ and $\bar C = \{ z \mid \tilde B Y z \ge b - \tilde B y_e \}$. Furthermore, $\tilde B Y$ has full column rank if and only if $\begin{bmatrix}\tilde B \\ \tilde H\end{bmatrix}$ has full column rank.
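As a concrete illustration of the first-stage reduction, the sketch below assembles $W$, $V$, $Z$, $U$, $\tilde A$, $\tilde a$, $\tilde B$ and $\tilde H$ from (4)-(6) with dense linear algebra, and recovers $(x, \pi(x))$ from a solution of the reduced problem. The function names are hypothetical, and the SVD-based `null_space`/`orth` calls stand in for the pivoted QR factorization of $[B^T \; H^T]$ described above; only the single factorization of $W^TAW$ follows the text.

```python
import numpy as np
from scipy.linalg import null_space, orth, lu_factor, lu_solve

def reduce_lineality(A, a, B, b, H, h):
    """Stage 1 sketch: remove the lineality space of C = {z : Bz >= b, Hz = h}.

    Returns the reduced data (A~, a~, B~, H~) of (3)-(6) together with W, V, Z
    so that a solution of the original problem can be recovered afterwards."""
    n = A.shape[0]
    G = np.vstack([B, H])                 # lin C = ker [B; H]
    W = null_space(G)                     # orthonormal basis of the lineality space
    V = orth(G.T)                         # orthonormal basis of its orthogonal complement
    if W.shape[1] > 0:
        lu = lu_factor(W.T @ A @ W)       # the single factorization mentioned in the text
        Z = W @ lu_solve(lu, W.T)         # Z = W (W^T A W)^{-1} W^T
    else:
        Z = np.zeros((n, n))              # C contains no lines
    U = (np.eye(n) - Z @ A) @ V           # U = (I - Z A) V
    A_t = U.T @ A @ U                     # (5)
    a_t = V.T @ (a - A @ (Z @ a))         # a~ = V^T (I - A Z) a
    return A_t, a_t, B @ V, H @ V, W, V, Z

def recover_solution(y, pi_y, V, Z, A, a):
    """Recover (x, pi(x)) of (NE) from (y, pi(y)) of the reduced problem (3)."""
    x_l = Z @ (a - A @ (V @ pi_y))        # component in the lineality space
    return x_l + V @ y, x_l + V @ pi_y
```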

Thus, to reduce our problem to one over an inequality-constrained polyhedral set, it remains to show how we generate the point $y_e \in \tilde C$. In fact we show how to generate $y_e$ as an extreme point of $\tilde C$ and, further, how to project this extreme point into an extreme point of $\bar C$. The following result is a well-known characterization of extreme points of polyhedral sets [9, §3.4].

Lemma 3.2. Let $u$ be partitioned into free and constrained variables $(u_F, u_C)$. Then $u$ is an extreme point of $\mathcal D = \{ u = (u_F, u_C) \mid Du = d,\; u_C \ge 0 \}$ if and only if $u \in \mathcal D$ and the columns $\{ D_{\cdot i} \mid i \in \mathcal B \}$ are linearly independent, where $\mathcal B := F \cup \{ j \in C \mid u_j > 0 \}$.

If we adopt the terminology of linear programming, then the variables corresponding to $\mathcal B$ are called basic variables; similarly, the columns of $D$ corresponding to $\mathcal B$ are called basic columns; extreme points are called basic feasible solutions. The extreme points of systems of inequalities and equalities are defined in an analogous manner. Note that extreme points of $\tilde C$ are (by definition) precisely the extreme points of
$$\begin{bmatrix} \tilde B & -I \\ \tilde H & 0 \end{bmatrix}\begin{bmatrix} z \\ s \end{bmatrix} = \begin{bmatrix} b \\ h \end{bmatrix}, \qquad s \ge 0. \tag{8}$$
The slack variables $s$ are implicitly defined by $z$, so without ambiguity we will refer to the above extreme point as $z$. For other systems of inequalities and equations a similar convention will be used. The following lemma outlines our method for constructing the relevant extreme points.

Lemma 3.3. Suppose $\begin{bmatrix}\tilde B \\ \tilde H\end{bmatrix}$ has linearly independent columns, $Y$ is a basis of the kernel of $\tilde H$, and $\bar B = \tilde B Y$. Then $y_e$ is an extreme point of (8) if and only if $y_e = \bar y + Y\bar z$, for some $\bar y, \bar z$, where $\tilde H\bar y = h$ and $\bar z$ is an extreme point of
$$\begin{bmatrix}\bar B & -I\end{bmatrix}\begin{bmatrix} z \\ s \end{bmatrix} = b - \tilde B\bar y, \qquad s \ge 0. \tag{9}$$

In our method we produce an extreme point of (8) as follows. Find orthonormal bases $U$ and $Y$ for $\operatorname{im}\tilde H$ and $\ker\tilde H$ respectively. This can be carried out by a singular value decomposition of $\tilde H$ or by QR factorizations of $\tilde H$ and $\tilde H^T$ (in fact, $Y$ could be calculated as a by-product of stage 1 of the algorithm). Let $\bar y = UU^Th$ and use this value of $\bar y$ in (9). If $b \notin \operatorname{im}\bar B$, then find an extreme point of (9) by solving the following auxiliary problem with the revised simplex method:
$$\begin{aligned} \text{minimize} \quad & z_{\text{aux}} \\ \text{subject to} \quad & \begin{bmatrix} \bar B & \; b - \tilde B\bar y \end{bmatrix}\begin{bmatrix} z \\ z_{\text{aux}} \end{bmatrix} \ge b - \tilde B\bar y, \\ & z_{\text{aux}} \ge 0. \end{aligned}$$
Note that $z = 0$, $z_{\text{aux}} = 1$ is an initial feasible point for this problem, with basic variables $(z, z_{\text{aux}})$. In contrast to the usual square basis matrix (with corresponding LU factors), we use a QR factorization of the non-square basis matrix. The calculations of dual variables and incoming columns are performed in a least-squares sense using the currently available QR factorization. This factorization is updated at each pivot step either by using a rank-one update to the factorization or by adding a column to the factorization (see [6]). In order to invoke Lemma 3.1, we let $y_e = \bar y + Y\bar z$ be the feasible point needed to define (7).
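For illustration, the fragment below sets up the auxiliary problem of stage two and hands it to a generic LP solver. This is only a sketch: `scipy.optimize.linprog` replaces the revised simplex method with its QR-maintained non-square basis described above, so the vertex it returns is whatever the underlying solver produces, and the function name is our own.

```python
import numpy as np
from scipy.optimize import linprog

def extreme_point_stage2(B_bar, B_tilde, b, y_bar):
    """Stage 2 sketch: find z_bar feasible for (9) via the auxiliary problem
        minimize z_aux  s.t.  B_bar z + (b - B_tilde y_bar) z_aux >= b - B_tilde y_bar,
                              z_aux >= 0,
    which is feasible at (z, z_aux) = (0, 1)."""
    r = b - B_tilde @ y_bar
    m, k = B_bar.shape
    c = np.zeros(k + 1); c[-1] = 1.0                  # minimize z_aux
    A_ub = -np.hstack([B_bar, r.reshape(-1, 1)])      # flip >= into <=
    bounds = [(None, None)] * k + [(0, None)]         # z free, z_aux >= 0
    res = linprog(c, A_ub=A_ub, b_ub=-r, bounds=bounds, method="highs")
    if not res.success or res.x[-1] > 1e-9:
        raise ValueError("auxiliary problem could not drive z_aux to zero")
    return res.x[:k]                                  # z_bar
```

If the optimal value of $z_{\text{aux}}$ is zero, the returned $\bar z$ is feasible for (9), and $y_e = \bar y + Y\bar z$ can then be formed as in Lemma 3.3.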

Note that in the well-known method of Lemke, stages one and two are trivial since $C = \mathbb R^n_+$ has no lines and a single extreme point at $0$. Furthermore, stage one is an exact implementation of the theory outlined in the previous section, and stage two corresponds to determining an extreme point and treating the defining equalities of $C$ in an effective computational manner.

It remains to describe stage three of our method. We are able to assume that our problem is given as
$$\bar A_{\bar C}(x) = \bar a \tag{10}$$
with $\bar C = \{ z \mid \bar Bz \ge \bar b \}$, where $\bar B$ has full column rank and $x_e$ is an extreme point of $\bar C$ (easily determined from $\bar z$); to simplify the notation we drop the bars for the remainder of this section and write $C = \{ z \mid Bz \ge b \}$. We also have available a basis matrix corresponding to this extreme point, along with a QR factorization, courtesy of stage two. The method that we use to solve this problem is precisely a realization of the general scheme for piecewise linear equations developed by [3]. The general method of Eaves (assuming a ray start and regular value $v$) moves along the curve $F^{-1}(v)$ in the direction $d^1$ from $x^1$ and can be described as follows:


Algorithm 1.

Initialize: Let $L_k$ denote the linear map representing $F$ on the cell $\sigma_k$. Determine $(x^1, \sigma_1, d^1)$, $x^1 \in \sigma_1 \in \mathcal M$, satisfying
$$L_1 d^1 = 0, \quad d^1 \text{ points into } \sigma_1 \text{ at } x^1, \quad F(x^1) = v, \tag{11}$$
$$x^1 \in \operatorname{int}\{ x^1 - \theta d^1 \mid \theta \ge 0 \} \subseteq F^{-1}(v). \tag{12}$$

Iteration: Given $(x^k, \sigma_k, d^k)$, let
$$\theta_k := \sup\{ \theta \mid x^k + \theta d^k \in \sigma_k \}. \tag{13}$$
If $\theta_k = +\infty$ then ray termination. If $x^{k+1} := x^k + \theta_k d^k \in \partial\mathcal M$ then boundary termination. Otherwise determine $(x^{k+1}, \sigma_{k+1}, d^{k+1})$, $d^{k+1} \ne 0$, satisfying
$$\sigma_{k+1} \in \mathcal M \setminus \{\sigma_k\} \text{ with } x^{k+1} \in \sigma_{k+1}, \qquad L_{k+1} d^{k+1} = 0, \quad d^{k+1} \text{ points into } \sigma_{k+1} \text{ from } x^{k+1}. \tag{14}$$
Set $k = k + 1$ and repeat the iteration.
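The scheme is deliberately abstract; the following control-flow sketch may make the bookkeeping explicit. Everything problem-specific (how a cell is represented, how $L_k d = 0$ is solved, how the boundary of $\mathcal M$ is detected, how the adjacent cell is chosen) is hidden behind hypothetical callbacks, so this is a rendering of the loop structure only, not of the linear algebra developed below.

```python
from typing import Callable, Tuple
import numpy as np

def eaves_path(x1: np.ndarray, sigma1, d1: np.ndarray,
               step_to_facet: Callable,       # (sigma, x, d) -> (theta, facet), theta = sup{t : x + t d in sigma}
               on_boundary: Callable,          # x -> True if x lies in the boundary of the manifold M
               next_cell_direction: Callable   # (sigma, facet, x) -> (new cell, new direction d != 0) or None
               ) -> Tuple[str, np.ndarray]:
    """Control-flow sketch of the generic PL path-following scheme (Algorithm 1)."""
    x, sigma, d = x1, sigma1, d1
    while True:
        theta, facet = step_to_facet(sigma, x, d)
        if theta == np.inf:
            return "ray termination", x
        x = x + theta * d
        if on_boundary(x):
            return "boundary termination", x
        nxt = next_cell_direction(sigma, facet, x)
        if nxt is None:                         # should not happen for a regular value
            return "failure", x
        sigma, d = nxt
```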

How does this relate to the description we gave in the previous section? The manifold we consider is $\mathcal M = N_C \times \mathbb R_+$ and the corresponding cells $\sigma_A$ are given by $(F_A + N_{F_A}) \times \mathbb R_+$, where $F_A$ are the faces of $C$.

A face of $C$ is described by the set of constraints from the system $Bz \ge b$ which are active. Let $A$ represent such a set, so that
$$F_A = \{ z \mid B_A z = b_A,\; B_I z \ge b_I \},$$
where $I$ is the complement of the set $A$. The normal cone to the face (the normal cone to $C$ at some point in the relative interior of $F_A$) is given by
$$\{ B^T u \mid u_A \le 0,\; u_I = 0 \}.$$
It now follows that an algebraic description of $(x, \lambda) \in \sigma_A$ is that there exist $(x, z, u_A, s_I, \lambda)$ which satisfy
$$\begin{aligned} B_A z &= b_A \\ B_I z - s_I &= b_I, \quad s_I \ge 0 \\ x &= z + B_A^T u_A, \quad u_A \le 0 \\ \lambda &\ge 0. \end{aligned}\tag{15}$$


In particular, if $x_e$ is the given extreme point, the corresponding face of the set $C$ is used to define the initial cell $\sigma_1$. The piecewise linear system we solve is
$$F(x, \lambda) := A_C(x) - (\lambda e + a) = 0,$$
where $e$ is a point in the interior of $N_C(x_e)$. An equivalent description of $N_C(x_e)$ is given by $\{ B_A^T u \mid u \le 0 \}$, from which it is clear that the interior of this set is nonempty if and only if $B_A$ has full column rank.

Lemma 3.4. If $x_e$ is an extreme point of $\{ z \mid Bz \ge b \}$ with active constraints $A$, then $B_A$ has full column rank.

Proof. By definition,
$$G := \begin{bmatrix} B_A & 0 \\ B_I & -I \end{bmatrix} \tag{16}$$
has linearly independent columns. If $B_A$ does not have linearly independent columns, then $B_A w = 0$ for some $w \ne 0$, so that
$$G\begin{bmatrix} w \\ B_I w \end{bmatrix} = 0$$
with $(w, B_I w) \ne 0$, a contradiction of (16). $\square$

This is a simple proof (in this particular instance) of the comment from the previous section that the normal cone has interior at an extreme point. For consistency, we shall let $e$ be any point in this interior of $\{ B_A^T u \mid u \le 0 \}$, and for concreteness we could take
$$e = -B_A^T \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}.$$

Hence $F$ is specified, $v = 0$, and the cells $\sigma_A$ are defined. By solving the perturbed system $F(x, \lambda) = p(\epsilon)$ (as outlined in §2), we know that $F^{-1}(p(\epsilon))$ is a connected 1-manifold whose boundary is equal to its intersection with the boundary of $\mathcal M$ and which is subdivided by the chords formed by its intersections with the cells of $\mathcal M$ that it meets. In practice, this means that (under the lexicographical ordering induced by $p(\epsilon)$) we may assume nondegeneracy. Thus, if ties ever occur in the description that follows, we will always choose the lexicographical minimum from those which achieve the tie.

Note that if $(x, \lambda) \in \sigma_A$ as defined in (15), then
$$F(x, \lambda) = Az + x - z - \lambda e - a.$$


It follows that if $(x, \lambda) \in \sigma_A \cap F^{-1}(0)$ (i.e. $(x, \lambda)$ is in one of the chords mentioned in the previous paragraph), then there exist $(x, z, u_A, s_I, \lambda)$ satisfying
$$\begin{aligned} x - z &= -Az + \lambda e + a \\ B_A z &= b_A \\ B_I z - s_I &= b_I, \quad s_I \ge 0 \\ x - z &= B_A^T u_A, \quad u_A \le 0 \\ \lambda &\ge 0. \end{aligned}\tag{17}$$

Furthermore, these equations determine the chord on the current cell of the manifold, or in the notation used to describe the algorithm of Eaves, the map $L_A$. The direction is determined from (11) by solving $L_A d = 0$, which can be calculated by solving
$$\begin{aligned} \Delta x - \Delta z &= -A\Delta z + \Delta\lambda\, e \\ B_A \Delta z &= 0 \\ B_I \Delta z - \Delta s_I &= 0 \\ \Delta x - \Delta z &= B_A^T \Delta u_A. \end{aligned}\tag{18}$$
At the first iteration, $B_A$ has full column rank, so that $\Delta z = 0$, which also implies that $\Delta s_I = 0$. The remaining system of equations is
$$\Delta x = \Delta\lambda\, e, \qquad \Delta x = B_A^T \Delta u_A.$$
We choose $\Delta\lambda = -1$ in order to force the direction to move into $\sigma_1$ (as required by (11)), and then it follows that $\Delta x = -e$ and, for the choice of $e$ outlined above, $\Delta u_A = (1, \ldots, 1)^T$. The actual choice $x^1 = (w(\lambda), \lambda)$ given in the previous section ensures that (12) is satisfied.

We can now describe the general iteration and the resultant linear algebra that it entails. We are given a current point $(x, z, u_A, s_I, \lambda)$ satisfying (17) for some cell $\sigma_A$ and a direction $(\Delta x, \Delta z, \Delta u_A, \Delta s_I, \Delta\lambda)$ satisfying (18). The value of $\theta_k$ to satisfy (13) can be calculated by the following ratio test; that is, to find the largest $\theta$ such that

$$u_A + \theta\,\Delta u_A \le 0, \qquad s_I + \theta\,\Delta s_I \ge 0, \qquad \lambda + \theta\,\Delta\lambda \ge 0. \tag{19}$$
Ray termination occurs if $\Delta u_A \le 0$, $\Delta s_I \ge 0$ and $\Delta\lambda \ge 0$. Obviously, if $\lambda + \theta\Delta\lambda = 0$, then we have a solution. Otherwise, at least one of the $\{ u_i \mid i \in A \}$ or $\{ s_i \mid i \in I \}$ hits a bound in (19). By the lexicographical ordering we can determine the "leaving" variable from these uniquely. The set $A$ is updated (corresponding to moving onto a new cell of the manifold) and a new direction is calculated as follows: if $u_i$, $i \in A$, is the leaving variable, then $A := A \setminus \{i\}$, $\Delta s_i = 1$, and the new direction is found by solving (18); if $s_i$, $i \in I$, is the leaving variable, then $A := A \cup \{i\}$, $\Delta u_i = -1$, and the new direction is found by solving (18). Note that in both cases, the choice of one component of the direction ensures movement into the new (uniquely specified) cell $\sigma_A$ and forces a unique solution of (18). The linear algebra needed for an implementation of the method is now clear.
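As an illustration of the ratio test (19) and the choice of leaving variable, the fragment below performs that computation for one iteration of stage three. It assumes the current values $(u_A, s_I, \lambda)$ and the direction $(\Delta u_A, \Delta s_I, \Delta\lambda)$ from (18) are available as arrays, and it breaks ties by a simple minimum instead of the lexicographic rule described above; the function name is ours.

```python
import numpy as np

def ratio_test(u_A, s_I, lam, du_A, ds_I, dlam):
    """One application of the ratio test (19).

    The iterate keeps u_A <= 0, s_I >= 0, lambda >= 0, so a component blocks the
    step when it would cross zero.  Returns (theta, kind, index) where kind is
    'ray', 'solution' (lambda hits zero), 'u' (constraint i leaves the active set A)
    or 's' (constraint i enters A).  Ties should really be broken lexicographically."""
    theta, kind, index = np.inf, "ray", None
    for i, (v, dv) in enumerate(zip(u_A, du_A)):
        if dv > 0 and -v / dv < theta:          # u_i would become positive
            theta, kind, index = -v / dv, "u", i
    for i, (v, dv) in enumerate(zip(s_I, ds_I)):
        if dv < 0 and -v / dv < theta:          # s_i would become negative
            theta, kind, index = -v / dv, "s", i
    if dlam < 0 and -lam / dlam < theta:        # lambda reaches zero: a solution of (NE)
        theta, kind, index = -lam / dlam, "solution", None
    return theta, kind, index
```

The returned index then drives the update of $A$ exactly as described above: drop $i$ from $A$ and set $\Delta s_i = 1$, or add $i$ to $A$ and set $\Delta u_i = -1$, and re-solve (18) for the new direction.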

The actual steps used to carry out stage 3 are now described. First of all, $x$ is eliminated from (17) to give
$$\begin{aligned} -Az + \lambda e + a &= B_A^T u_A + B_I^T u_I \\ B_A z - s_A &= b_A \\ B_I z - s_I &= b_I \\ \lambda \ge 0, \quad u_A \le 0, \quad u_I &= 0, \quad s_I \ge 0, \quad s_A = 0. \end{aligned}$$

Note that we have added in the variables which are set to zero, for completeness. The QR factorization corresponding to the given extreme point is used to eliminate the variables $z$. In fact, we take as our initial active set $A$ the constraints corresponding to $Q\hat R$, where $\hat R$ is the invertible submatrix of $R$. Thus $z = B_A^{-1}(s_A + b_A)$ and substituting this into the above gives
$$\begin{aligned} -AB_A^{-1}(s_A + b_A) + \lambda e + a &= B_A^T u_A + B_I^T u_I \\ B_I B_A^{-1}(s_A + b_A) - s_I &= b_I \\ \lambda \ge 0, \quad u_A \le 0, \quad u_I &= 0, \quad s_I \ge 0, \quad s_A = 0. \end{aligned}$$
Essentially we treat this system as in the method of Lemke. An initial basis is given by $(u_A, s_I)$ and complementary pivots can then be executed (using the variables $u$ and $s$ as the complementary pair). Any basis updating technique or anti-cycling rule from the literature on linear programming and complementarity can be incorporated. In fact, we have an initial QR factorization of the basis available from the given factorization, if needed.

We showed in the previous section that if $A_C$ was coherently oriented then following the above path gives a monotonic decrease in $\lambda$. However, the proof of the finite termination of the method (possibly ray termination) goes through without this assumption, and in the following section we will look at other conditions which guarantee that the method terminates either with a solution or a proof that no solution exists. The coherent orientation results are direct analogues of the P-matrix results for the linear complementarity problem; the results we shall give now generalize the notions of copositive-plus and L-matrices.

4. Existence Results

The following definitions are generalizations of those found in the literature.

Definition 4.1. Let $K$ be a given closed convex cone. A matrix $A$ is said to be copositive with respect to the cone $K$ if
$$\langle x, Ax\rangle \ge 0, \quad \forall x \in K.$$
A matrix $A$ is said to be copositive-plus with respect to the cone $K$ if it is copositive with respect to $K$ and
$$\langle x, Ax\rangle = 0,\; x \in K \implies (A + A^T)x = 0.$$

Definition 4.2. Let $K$ be a given closed convex cone. A matrix $A$ is said to be an L-matrix with respect to $K$ if both

a) For every $q \in \operatorname{ri}(K^D)$, the solution set of the generalized complementarity problem
$$z \in K, \quad Az + q \in K^D, \quad z^T(Az + q) = 0 \tag{20}$$
is contained in $\operatorname{lin} K$.


b) For any $z \ne 0$ such that
$$z \in K, \quad Az \in K^D, \quad z^TAz = 0$$
there exists $z' \ne 0$ such that $z'$ is contained in every face of $K$ containing $z$, and $-A^Tz'$ is contained in every face of $K^D$ containing $Az$.

To see how these definitions relate to the standard ones given in the literature on linear complementarity problems (e.g. [10] and [2]), consider the case that $C = \mathbb R^n_+$ and $K = \operatorname{rec} C = \mathbb R^n_+$. Condition a) says that LCP$(q, A)$ has the unique solution $0$ for all $q > 0$. Condition b) states that, if $z \ne 0$ is a solution of LCP$(0, A)$, then there exists $z' \ne 0$ such that $z'$ is contained in every face of $\mathbb R^n_+$ containing $z$ and $-A^Tz'$ is contained in every face of $\mathbb R^n_+$ containing $Az$. In particular, $z' \in \{ x \in \mathbb R^n \mid x_i = 0 \}$ for all $i \in \{ i \mid z_i = 0 \}$. Hence $z'_i = 0$ for each $i$ such that $z_i = 0$; that is, $\operatorname{supp} z' \subseteq \operatorname{supp} z$. In other words, there exists a diagonal matrix $D \ge 0$ such that $z' = Dz$. Similarly, there exists a diagonal matrix $E \ge 0$ such that $-A^Tz' = EAz$. Hence
$$(EA + A^TD)z = 0$$
where $D, E \ge 0$ and $Dz \ne 0$. Thus the notion of L-matrix defined here is a natural extension of that presented in [10].

The following lemma shows that the class of L-matrices contains the class of copositive-plus matrices.

Lemma 4.3. If a matrix $A$ is copositive-plus with respect to a closed convex cone $K$, then it is an L-matrix with respect to $K$.

Proof. Suppose that $q \in \operatorname{ri}(K^D)$ and $z \in K \setminus \operatorname{lin} K$; then $\pi_{(\operatorname{lin} K)^\perp}(z) \ne 0$. Furthermore, there exists an $\epsilon > 0$ such that $q - \epsilon\,\pi_{(\operatorname{lin} K)^\perp}(z) \in K^D$, since $\operatorname{aff}(K^D) = (\operatorname{lin} K)^\perp$ (cf. [12, Theorem 14.6]). It follows that
$$0 \le \bigl\langle z,\; q - \epsilon\,\pi_{(\operatorname{lin} K)^\perp}(z)\bigr\rangle = \langle z, q\rangle - \epsilon\bigl\langle z,\; \pi_{(\operatorname{lin} K)^\perp}(z)\bigr\rangle = \langle z, q\rangle - \epsilon\bigl\|\pi_{(\operatorname{lin} K)^\perp}(z)\bigr\|^2.$$
That is, $\langle z, q\rangle \ge \epsilon\|\pi_{(\operatorname{lin} K)^\perp}(z)\|^2 > 0$. Also $z^TAz \ge 0$ since $A$ is copositive with respect to $K$. Thus $z^T(Az + q) = z^TAz + z^Tq \ge z^Tq > 0$. This shows that the set $K \setminus \operatorname{lin} K$ does not contain any solution of (20); therefore the solution set of the problem (20) is contained in $\operatorname{lin} K$.

To complete the proof, note that for any $z \in K$ such that $Az \in K^D$ and $z^TAz = 0$, we have $Az + A^Tz = 0$, or $-A^Tz = Az$, since $A$ is copositive-plus. So condition b) of Definition 4.2 is satisfied with $z' = z$. $\square$

We now come to the main result of this section.

Theorem 4.4. Suppose $C$ is a polyhedral convex set and $A$ is an L-matrix with respect to $\operatorname{rec} C$ which is invertible on the lineality space of $C$. Then exactly one of the following occurs:
- The method given above solves (AVI);
- the following system has no solution:
$$Ax - a \in (\operatorname{rec} C)^D, \quad x \in C. \tag{21}$$


Proof. Suppose that $C = \{ z \mid Bz \ge b,\; Hz = h \}$. We may assume that (AVI) is in the form (10), due to Lemma A.4 and Lemma A.5. The pivotal method fails to solve (AVI) only if, at some iterate $x^k$, it reaches an unbounded direction $d^{k+1}$ in $\sigma_{k+1}$. We know that $x^k$ satisfies (17), and the direction $d^{k+1}$ which satisfies $L_{k+1}d^{k+1} = 0$ can be found by solving (18). Suppose $(\Delta x, \Delta z, \Delta u_A, \Delta s_I, \Delta\lambda)$ is a solution of (18); then
$$\Delta u_A \le 0, \qquad \Delta s_I \ge 0, \qquad \Delta\lambda \ge 0, \tag{22}$$
provided that $x^k + \theta d^{k+1}$ is an unbounded ray. By reference to (18), we have
$$B_A^T\Delta u_A + A\Delta z = \Delta\lambda\, e, \qquad B_A\Delta z = 0, \qquad B_I\Delta z = \Delta s_I \ge 0. \tag{23}$$
That is, $\Delta z$ satisfies
$$\Delta z \in \operatorname{rec} C, \qquad A\Delta z - \Delta\lambda\, e = B_A^T(-\Delta u_A) \in (\operatorname{rec} C)^D, \qquad \Delta z^T(A\Delta z - \Delta\lambda\, e) = \Delta z^TB_A^T(-\Delta u_A) = -(B_A\Delta z)^T\Delta u_A = 0.$$
If $\Delta\lambda > 0$, then, since $e \in \operatorname{int} N_C(x_e)$, we have $-\Delta\lambda\, e \in \operatorname{int}(\operatorname{rec} C)^D$. The above system then has the unique solution $\Delta z = 0$, by the fact that $A$ is an L-matrix with respect to $\operatorname{rec} C$ and $\operatorname{lin} C = \{0\}$. Therefore the terminating ray is the starting ray, a contradiction. Thus $\Delta\lambda = 0$.

It follows that $\Delta z \in \operatorname{rec} C$, $A\Delta z \in (\operatorname{rec} C)^D$, and $\Delta z^TA\Delta z = 0$; therefore there exists $\tilde z \ne 0$ such that $\tilde z$ is contained in every face of $\operatorname{rec} C$ containing $\Delta z$, and $-A^T\tilde z$ is contained in every face of $(\operatorname{rec} C)^D$ containing $A\Delta z$.

We observe that, since $x^k \in \sigma_k \cap \sigma_{k+1} \cap F^{-1}(0)$, there exist $z^k$, $u^k$, $s^k$, and $\lambda^k$ such that (17) is satisfied. It is easy to verify that $\Delta z$ is in the face
$$G_1 = \{ z \in \operatorname{rec} C \mid z^T(B^Tu^k) = 0 \}$$
of $\operatorname{rec} C$, and $A\Delta z$ is in the face
$$G_2 = \{ z \in (\operatorname{rec} C)^D \mid z = B^Tu,\; u = (u_A, 0) \ge 0 \}$$
of $(\operatorname{rec} C)^D$, and thus
$$-A^T\tilde z = B^T\tilde u \in G_2, \quad \text{for some } \tilde u = (\tilde u_A, 0) \ge 0. \tag{24}$$
Consequently, by (17) we have
$$a = x^k - z^k + Az^k - \lambda^k e$$
and
$$\tilde u^T(Bz^k - b) = (\tilde u_A^T, 0)\begin{pmatrix} 0 \\ s_I^k \end{pmatrix} = 0, \qquad \tilde z^T(x^k - z^k) = \tilde z^TB^Tu^k = 0,$$
since $\tilde z \in G_1$. Therefore
$$\tilde u^Tb + \tilde z^Ta = \tilde u^T(b - Bz^k) + \tilde u^TBz^k + \tilde z^T(x^k - z^k + Az^k - \lambda^k e) = (B^T\tilde u + A^T\tilde z)^Tz^k - \lambda^k e^T\tilde z = -\lambda^k e^T\tilde z > 0,$$
in which the last inequality is due to $\tilde z \in \operatorname{rec} C$ and $e \in \operatorname{int} N_C(x_e) \subseteq -\operatorname{int}(\operatorname{rec} C)^D$.

We now claim that the system
$$A\bar x - a \in (\operatorname{rec} C)^D, \quad \bar x \in C \tag{25}$$
has no solution. To see this, let $\bar x \in C$; then
$$\tilde u^TB\bar x + \tilde z^TA\bar x = 0$$
as a result of (24). Subtracting from this the inequality $\tilde u^Tb + \tilde z^Ta > 0$, which we have just proven, gives
$$\tilde u^T(B\bar x - b) + \tilde z^T(A\bar x - a) < 0.$$
But it is obvious that $\tilde u^T(B\bar x - b) \ge 0$, hence
$$\tilde z^T(A\bar x - a) < 0.$$
But $\tilde z \in \operatorname{rec} C$. Thus $A\bar x - a \notin (\operatorname{rec} C)^D$.

The proof is complete by noting that (25) has a solution if and only if (21) has a solution. $\square$

As a special case of this theorem, we have the following result for copositive-plus matrices.

Corollary 4.5. Suppose $C$ is a polyhedral convex set, $A$ is copositive-plus with respect to $\operatorname{rec} C$ and invertible on the lineality space of $C$. Then exactly one of the following occurs:
- The method given above solves (AVI);
- the following system has no solution:
$$Ax - a \in (\operatorname{rec} C)^D, \quad x \in C. \tag{26}$$

Proof. Obvious, in view of Lemma 4.3. $\square$

Appendix A. Invariance Properties of L-matrices

In this appendix we show that the property of being an L-matrix with respect to a polyhedral convex cone is invariant under the two reductions presented in §3. We begin with the following technical lemmas.

Lemma A.1. Let $C$, $\tilde C$, and $\bar C$ be as in (AVI), (3) and (10); let $V$ and $Y$ be as in (6) and Lemma 3.1. Then
$$\operatorname{rec} C = V(\operatorname{rec}\tilde C) \tag{27}$$
$$\operatorname{rec}\tilde C = Y(\operatorname{rec}\bar C) \tag{28}$$

and
$$V^T\bigl((\operatorname{rec} C)^D\bigr) = (\operatorname{rec}\tilde C)^D \tag{29}$$
$$Y^T\bigl((\operatorname{rec}\tilde C)^D\bigr) = (\operatorname{rec}\bar C)^D \tag{30}$$
Furthermore
$$V^T\bigl(\operatorname{ri}(\operatorname{rec} C)^D\bigr) = \operatorname{ri}(\operatorname{rec}\tilde C)^D \tag{31}$$
$$Y^T\bigl(\operatorname{ri}(\operatorname{rec}\tilde C)^D\bigr) = \operatorname{ri}(\operatorname{rec}\bar C)^D \tag{32}$$

Proof. (27) and (28) are obvious from the definitions. Based on these two equations and [12, Corollary 16.3.2], we have
$$(\operatorname{rec} C)^D = -(\operatorname{rec} C)^o = -(V\operatorname{rec}\tilde C)^o = -(V^T)^{-1}(\operatorname{rec}\tilde C)^o = (V^T)^{-1}(\operatorname{rec}\tilde C)^D,$$
where $K^o = -K^D$ is the polar cone of $K$ and $(V^T)^{-1}$ denotes the inverse image under the linear map $V^T$ (see also [12]). Similarly
$$(\operatorname{rec}\tilde C)^D = (Y\operatorname{rec}\bar C)^D = (Y^T)^{-1}(\operatorname{rec}\bar C)^D.$$
So we have proven (29) and (30). (31) and (32) can be obtained from (29) and (30) by applying [12, Theorem 6.6]. $\square$

Lemma A.2. For $z \in \operatorname{rec} C$, $\tilde z \in \operatorname{rec}\tilde C$, and $\bar z \in \operatorname{rec}\bar C$, define
$$D(z) := \{ d \in (\operatorname{rec} C)^D \mid \langle d, z\rangle = 0 \},$$
$$\tilde D(\tilde z) := \{ \tilde d \in (\operatorname{rec}\tilde C)^D \mid \langle\tilde d, \tilde z\rangle = 0 \},$$
$$\bar D(\bar z) := \{ \bar d \in (\operatorname{rec}\bar C)^D \mid \langle\bar d, \bar z\rangle = 0 \}.$$
Then
$$\tilde D(\tilde z) = V^TD(V\tilde z) \tag{33}$$
$$\bar D(\bar z) = Y^T\tilde D(Y\bar z) \tag{34}$$
where $V$ and $Y$ are as in (6) and Lemma 3.1.

Proof.
$$\tilde D(\tilde z) = \bigl\{ \tilde d \in (\operatorname{rec}\tilde C)^D \bigm| \langle\tilde d, \tilde z\rangle = 0 \bigr\} = \bigl\{ \tilde d \in V^T\bigl((\operatorname{rec} C)^D\bigr) \bigm| \langle\tilde d, \tilde z\rangle = 0 \bigr\} = \bigl\{ V^Td \bigm| d \in (\operatorname{rec} C)^D,\; \langle d, V\tilde z\rangle = 0 \bigr\} = V^TD(V\tilde z).$$
The other equation can be proven similarly. $\square$

Actually, for $z \in \operatorname{rec} C$, $D(z)$ is the set of vectors defining faces of $\operatorname{rec} C$ containing $z$: a vector $z'$ is in every face of $\operatorname{rec} C$ containing $z$ if and only if $\langle d, z'\rangle = 0$ for all $d \in D(z)$. A similar observation can also be made for the sets $\tilde C$ and $\bar C$.


Lemma A.3. For $w \in (\operatorname{rec} C)^D$, $\tilde w \in (\operatorname{rec}\tilde C)^D$, and $\bar w \in (\operatorname{rec}\bar C)^D$, define
$$R(w) := \{ r \in \operatorname{rec} C \mid \langle r, w\rangle = 0 \},$$
$$\tilde R(\tilde w) := \{ \tilde r \in \operatorname{rec}\tilde C \mid \langle\tilde r, \tilde w\rangle = 0 \},$$
$$\bar R(\bar w) := \{ \bar r \in \operatorname{rec}\bar C \mid \langle\bar r, \bar w\rangle = 0 \}.$$
Then
$$V\tilde R(V^Tw) = R(w) \tag{35}$$
$$Y\bar R(Y^T\tilde w) = \tilde R(\tilde w) \tag{36}$$
where $V$ and $Y$ are as in (6) and Lemma 3.1.

Proof.
$$R(w) = \{ r \in \operatorname{rec} C \mid \langle r, w\rangle = 0 \} = \{ r \in V(\operatorname{rec}\tilde C) \mid \langle r, w\rangle = 0 \} = \{ V\tilde r \mid \tilde r \in \operatorname{rec}\tilde C,\; \langle\tilde r, V^Tw\rangle = 0 \} = V\tilde R(V^Tw).$$
The other equation can be proven similarly. $\square$

As in the case of Lemma A.2, for $w \in (\operatorname{rec} C)^D$, $R(w)$ is the set of vectors defining faces of $(\operatorname{rec} C)^D$ containing $w$: a vector $w'$ is in every face of $(\operatorname{rec} C)^D$ containing $w$ if and only if $\langle r, w'\rangle = 0$ for all $r \in R(w)$. The situation is similar for the sets $\tilde C$ and $\bar C$.

Now we come to the invariance of the L-matrix property.

Lemma A.4. Given the problems (3) and (10), suppose $\tilde A$ is an L-matrix with respect to $\operatorname{rec}\tilde C$; then $\bar A$ is an L-matrix with respect to $\operatorname{rec}\bar C$.

Proof. For $\bar z \in \operatorname{rec}\bar C$, $Y\bar z \in \operatorname{rec}\tilde C$. For any $\bar q \in \operatorname{ri}(\operatorname{rec}\bar C)^D$, there exists $\tilde q \in \operatorname{ri}(\operatorname{rec}\tilde C)^D$ such that $\bar q = Y^T\tilde q$, due to (32). If $\bar A\bar z + \bar q \in (\operatorname{rec}\bar C)^D$ then
$$Y^T\tilde AY\bar z + Y^T\tilde q \in (\operatorname{rec}\bar C)^D$$
by the definition of $\bar A$. Hence
$$\bigl\langle \tilde AY\bar z + \tilde q,\; Yz\bigr\rangle = \bigl\langle Y^T\tilde AY\bar z + Y^T\tilde q,\; z\bigr\rangle \ge 0, \quad \forall z \in \operatorname{rec}\bar C.$$
It follows from (28) that
$$\bigl\langle \tilde AY\bar z + \tilde q,\; \tilde z\bigr\rangle \ge 0, \quad \forall\tilde z \in \operatorname{rec}\tilde C.$$
Thus
$$\tilde AY\bar z + \tilde q \in (\operatorname{rec}\tilde C)^D.$$
Therefore, $\bar z$ satisfying
$$\bar z \in \operatorname{rec}\bar C, \quad \bar A\bar z + \bar q \in (\operatorname{rec}\bar C)^D, \quad \text{and} \quad \bar z^T(\bar A\bar z + \bar q) = 0 \tag{37}$$
with $\bar q \in \operatorname{ri}(\operatorname{rec}\bar C)^D$, implies that $Y\bar z$ satisfies
$$Y\bar z \in \operatorname{rec}\tilde C, \quad \tilde A(Y\bar z) + \tilde q \in (\operatorname{rec}\tilde C)^D, \quad \text{and} \quad (Y\bar z)^T\bigl[\tilde A(Y\bar z) + \tilde q\bigr] = 0 \tag{38}$$
with $\tilde q \in \operatorname{ri}(\operatorname{rec}\tilde C)^D$. Thus, the solution $Y\bar z$ of (38) is contained in $\operatorname{lin}\tilde C = \{0\}$, which implies that $\bar z = 0$. Thus the solution set of (37) is $\{0\} \subseteq \operatorname{lin}\bar C$.


For any $0 \ne \bar z \in \operatorname{rec}\bar C$ such that $\bar A\bar z \in (\operatorname{rec}\bar C)^D$ and $\bar z^T\bar A\bar z = 0$ we have $0 \ne Y\bar z \in \operatorname{rec}\tilde C$, and
$$\tilde AY\bar z \in (\operatorname{rec}\tilde C)^D \quad \text{and} \quad (Y\bar z)^T\tilde A(Y\bar z) = 0.$$
So there exists $0 \ne \tilde z \in \operatorname{rec}\tilde C$ such that $\tilde z$ is contained in every face of $\operatorname{rec}\tilde C$ containing $Y\bar z$, and $-\tilde A^T\tilde z$ is contained in every face of $(\operatorname{rec}\tilde C)^D$ containing $\tilde AY\bar z$. That is,
$$\langle\tilde d, \tilde z\rangle = 0 \quad \forall\tilde d \in \tilde D(Y\bar z), \qquad \langle\tilde r, -\tilde A^T\tilde z\rangle = 0 \quad \forall\tilde r \in \tilde R(\tilde AY\bar z).$$
Consequently, there exists $0 \ne \bar z' \in \operatorname{rec}\bar C$ such that $\tilde z = Y\bar z'$. For any $\bar d \in \bar D(\bar z)$, $\bar d = Y^T\tilde d$ for some $\tilde d \in \tilde D(Y\bar z)$. Hence
$$\langle\bar d, \bar z'\rangle = \langle Y^T\tilde d, \bar z'\rangle = \langle\tilde d, Y\bar z'\rangle = 0.$$
So $\bar z'$ is contained in every face of $\operatorname{rec}\bar C$ containing $\bar z$. Moreover, for any $\bar r \in \bar R(\bar A\bar z)$,
$$\langle\bar r, -\bar A^T\bar z'\rangle = \langle Y\bar r, -\tilde A^TY\bar z'\rangle = \langle Y\bar r, -\tilde A^T\tilde z\rangle = 0,$$
since $Y\bar r \in \tilde R(\tilde AY\bar z)$. We see that $-\bar A^T\bar z'$ is contained in every face of $(\operatorname{rec}\bar C)^D$ containing $\bar A\bar z$. Thus $\bar A$ is an L-matrix with respect to $\operatorname{rec}\bar C$. $\square$

Lemma A.5. Given the problems (NE) and (3), suppose $A$ is an L-matrix with respect to $\operatorname{rec} C$; then $\tilde A$ is an L-matrix with respect to $\operatorname{rec}\tilde C$.

Proof. For any $\tilde z \in \operatorname{rec}\tilde C$, $V\tilde z \in \operatorname{rec} C$ and
$$U\tilde z = \bigl(V - W(W^TAW)^{-1}W^TAV\bigr)\tilde z = V\tilde z - W(W^TAW)^{-1}W^TAV\tilde z \in \operatorname{rec} C,$$
since $W(W^TAW)^{-1}W^TAV\tilde z \in \operatorname{lin} C$. For any $\tilde q \in \operatorname{ri}(\operatorname{rec}\tilde C)^D$, there exists $q \in \operatorname{ri}(\operatorname{rec} C)^D$ such that $\tilde q = V^Tq$. If $\tilde A\tilde z + \tilde q \in (\operatorname{rec}\tilde C)^D$ then
$$U^TAU\tilde z + V^Tq \in (\operatorname{rec}\tilde C)^D, \quad q \in (\operatorname{rec} C)^D,$$
by the definition of $\tilde A$. But
$$U^TAU = V^TAU - V^TA^TW(W^TAW)^{-T}W^TAU = V^TAU,$$
since $W^TAU = 0$, as can be directly verified. Thus
$$V^T(AU\tilde z + q) = V^TAU\tilde z + V^Tq \in (\operatorname{rec}\tilde C)^D, \quad q \in (\operatorname{rec} C)^D,$$
which implies
$$\langle AU\tilde z + q,\; V\tilde z'\rangle = \bigl\langle V^T(AU\tilde z + q),\; \tilde z'\bigr\rangle \ge 0, \quad \forall\tilde z' \in \operatorname{rec}\tilde C.$$
It follows from (27) that
$$\langle AU\tilde z + q,\; z\rangle \ge 0, \quad \forall z \in \operatorname{rec} C.$$
Thus
$$AU\tilde z + q \in (\operatorname{rec} C)^D.$$
Also
$$(U\tilde z)^T\bigl[A(U\tilde z) + q\bigr] = \tilde z^T(\tilde A\tilde z + \tilde q) = 0.$$

Therefore, $\tilde z$ satisfying
$$\tilde z \in \operatorname{rec}\tilde C, \quad \tilde A\tilde z + \tilde q \in (\operatorname{rec}\tilde C)^D, \quad \text{and} \quad \tilde z^T(\tilde A\tilde z + \tilde q) = 0 \tag{39}$$
with $\tilde q \in \operatorname{ri}(\operatorname{rec}\tilde C)^D$, implies that $U\tilde z$ satisfies
$$U\tilde z \in \operatorname{rec} C, \quad AU\tilde z + q \in (\operatorname{rec} C)^D, \quad \text{and} \quad (U\tilde z)^T\bigl[A(U\tilde z) + q\bigr] = 0 \tag{40}$$
with $q \in \operatorname{ri}(\operatorname{rec} C)^D$. Hence the solution $U\tilde z \in \operatorname{lin}(\operatorname{rec} C) = \operatorname{lin} C$. But then
$$V\tilde z \in W(W^TAW)^{-1}W^TAV\tilde z + \operatorname{lin} C \subseteq \operatorname{lin} C,$$
which, by the definition of $V$, implies $\tilde z = 0$. This shows that the solution set of (39) is contained in $\operatorname{lin}\tilde C = \{0\}$.

For any $0 \ne \tilde z \in \operatorname{rec}\tilde C$ such that $\tilde A\tilde z \in (\operatorname{rec}\tilde C)^D$ and $\tilde z^T\tilde A\tilde z = 0$ we have $0 \ne U\tilde z \in \operatorname{rec} C$, and
$$V^TAU\tilde z = U^TAU\tilde z = \tilde A\tilde z \in (\operatorname{rec}\tilde C)^D,$$
which implies $A(U\tilde z) \in (\operatorname{rec} C)^D$. We also have
$$(U\tilde z)^TA(U\tilde z) = \tilde z^T\tilde A\tilde z = 0.$$
So there exists $0 \ne z' \in \operatorname{rec} C$ such that $z'$ is contained in every face of $\operatorname{rec} C$ containing $U\tilde z$, and $-A^Tz'$ is contained in every face of $(\operatorname{rec} C)^D$ containing $A(U\tilde z)$. That is,
$$\langle d, z'\rangle = 0 \quad \forall d \in D(U\tilde z), \qquad \langle r, -A^Tz'\rangle = 0 \quad \forall r \in R(AU\tilde z).$$
Consequently, there exists $0 \ne \tilde z' \in \operatorname{rec}\tilde C$ such that $z' = V\tilde z'$, and for any $\tilde d \in \tilde D(\tilde z)$, we have $\tilde d = V^Td$ for some $d \in D(V\tilde z)$. But since $d \in (\operatorname{rec} C)^D$, $W^Td = 0$, and therefore $\langle d, V\tilde z\rangle = \langle d, U\tilde z\rangle$; so $d \in D(V\tilde z)$ implies $d \in D(U\tilde z)$. Hence
$$\langle\tilde d, \tilde z'\rangle = \langle V^Td, \tilde z'\rangle = \langle d, V\tilde z'\rangle = \langle d, z'\rangle = 0.$$
So $\tilde z'$ is contained in every face of $\operatorname{rec}\tilde C$ containing $\tilde z$. For any $\tilde r \in \tilde R(\tilde A\tilde z)$,
$$\langle\tilde r, -\tilde A^T\tilde z'\rangle = \langle\tilde r, -U^TA^TU\tilde z'\rangle = \langle\tilde r, -U^TA^TV\tilde z'\rangle = \langle\tilde r, -U^TA^Tz'\rangle = \langle\tilde r, -V^TA^Tz'\rangle = \langle V\tilde r, -A^Tz'\rangle = \langle r, -A^Tz'\rangle = 0,$$
since $r = V\tilde r \in R(AU\tilde z)$ as a result of (35). This proves that $-\tilde A^T\tilde z'$ is contained in every face of $(\operatorname{rec}\tilde C)^D$ containing $\tilde A\tilde z$. $\square$


Acknowledgement. We are grateful to Prof. Stephen Robinson for several enlightening discussions on the material contained in this paper and for his insights into the practicality of the normal equation formulation of the affine variational inequality problem.

References

1. J.V. Burke and J.J. Moré. Exposing constraints. Mathematics and Computer Science Division MCS-P308-0592, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois, 1992.
2. R.W. Cottle, J.S. Pang, and R.E. Stone. The Linear Complementarity Problem. Academic Press, New York, 1992.
3. B.C. Eaves. A short course in solving equations with PL homotopies. In R.W. Cottle and C.E. Lemke, editors, Nonlinear Programming, pages 73-143, Providence, RI, 1976. American Mathematical Society, SIAM-AMS Proceedings.
4. B.C. Eaves. Computing stationary points. Mathematical Programming Study, 7:1-14, 1978.
5. B.C. Eaves. Computing stationary points, again. In O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, editors, Nonlinear Programming 3, pages 391-405. Academic Press, New York, 1978.
6. G.H. Golub and C.F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, 1983.
7. S. Karamardian. Complementarity problems over cones with monotone and pseudomonotone maps. Journal of Optimization Theory and Applications, 18:445-454, 1976.
8. C.E. Lemke. Bimatrix equilibrium points and mathematical programming. Management Science, 11:681-689, 1965.
9. K.G. Murty. Linear and Combinatorial Programming. John Wiley & Sons, New York, 1976.
10. K.G. Murty. Linear Complementarity, Linear and Nonlinear Programming. Heldermann-Verlag, Berlin, 1988.
11. S.M. Robinson. Normal maps induced by linear transformations. Mathematics of Operations Research, 17:691-714, 1992.
12. R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970.
13. Y. Dai, G. van der Laan, A.J.J. Talman, and Y. Yamamoto. A simplicial algorithm for the nonlinear stationary point problem on an unbounded polyhedron. SIAM Journal on Optimization, 1:151-165, 1991.

Computer Sciences Department, University of Wisconsin, 1210 West Dayton Street, Madison, Wisconsin 53706

E-mail address: [email protected], [email protected]