CEJOR (2008) 16:215–238
DOI 10.1007/s10100-007-0054-7

ORIGINAL PAPER
Bi-parametric optimal partition invariancy sensitivity analysis in linear optimization

Alireza Ghaffari-Hadigheh · Habib Ghaffari-Hadigheh · Tamás Terlaky
Accepted: 11 December 2007 / Published online: 4 January 2008 © Springer-Verlag 2007
Abstract  In bi-parametric linear optimization (LO), perturbation occurs in both the right-hand-side and the objective function data with different parameters. In this paper, the bi-parametric LO problem is considered and we are interested in identifying the regions where the optimal partitions are invariant. These regions are referred to as invariancy regions. It is proved that invariancy regions are separated by vertical and horizontal lines and generate a mesh-like area, and that the boundaries of these regions can be identified in polynomial time. The behavior of the optimal value function on these regions is investigated too.

Keywords  Linear optimization · Bi-parametric sensitivity analysis · Optimal partition · Invariancy region · Optimal value function

1 Introduction

Let the bi-parametric LO problem be given as

P(Δb, Δc, ε, λ)    min{(c + λΔc)^T x | Ax = b + εΔb, x ≥ 0},
A. Ghaffari-Hadigheh (B)
Department of Mathematics, Azarbaijan University of Tarbiat Moallem, Tabriz, Iran
e-mail: [email protected]

H. Ghaffari-Hadigheh
Department of Mathematics, Payame Noor University, Shabestar, Iran
e-mail: [email protected]

T. Terlaky
School of Computational Engineering and Science, Department of Computing and Software, McMaster University, Hamilton, ON, Canada
e-mail: [email protected]
where A ∈ R^{m×n}, b ∈ R^m and c ∈ R^n are fixed data, ε and λ are two real parameters, not necessarily equal, Δb ∈ R^m and Δc ∈ R^n are perturbing directions, and x ∈ R^n is an unknown vector. The dual of P(Δb, Δc, ε, λ) is defined as

D(Δb, Δc, ε, λ)    max{(b + εΔb)^T y | A^T y + s = c + λΔc, s ≥ 0},

where y ∈ R^m and s ∈ R^n are unknowns. The original primal and dual problems, without perturbation, are referred to as P and D, respectively. Any x ≥ 0 satisfying Ax = b + εΔb is called a primal feasible solution of P(Δb, Δc, ε, λ). Further, a vector (y, s) with s ≥ 0 is called a dual feasible solution of D(Δb, Δc, ε, λ) if it satisfies A^T y + s = c + λΔc. Observe that primal feasibility of the solution x is independent of the value of the parameter λ and depends only on the value of the parameter ε. Analogously, dual feasibility of the solution (y, s) depends only on λ. Therefore, we denote primal and dual feasible solutions of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) by x(ε) and (y(λ), s(λ)), respectively. For any primal feasible solution x(ε) and any dual feasible solution (y(λ), s(λ)), the weak duality property holds, i.e., (c + λΔc)^T x(ε) ≥ (b + εΔb)^T y(λ). If x(ε) is a primal feasible solution and (y(λ), s(λ)) is a dual feasible solution, then the equality x(ε)^T s(λ) = 0 holds if and only if these feasible solutions are optimal. This property of primal and dual optimal solutions is referred to as the complementary slackness property, because for feasible solutions x(ε) and (y(λ), s(λ)), the optimality condition x(ε)^T s(λ) = 0 is equivalent to x_i(ε)s_i(λ) = 0 for all i = 1, …, n. In this case, the objective function values are equal, i.e., (c + λΔc)^T x(ε) = (b + εΔb)^T y(λ) (strong duality property). Let P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) also denote the feasible solution sets of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ), respectively. Their optimal solution sets are denoted by P^*(Δb, Δc, ε, λ) and D^*(Δb, Δc, ε, λ), correspondingly. Let (x^*(ε), y^*(λ), s^*(λ)) denote a primal-dual optimal solution pair of these problems.
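Weak duality, strong duality and complementary slackness can be checked numerically on a small instance. The following sketch, which is not part of the paper, uses SciPy's LP solver on an assumed toy pair P and D; the dual values are read from the HiGHS equality-constraint marginals, whose sign convention we assume gives s = c − A^T y ≥ 0 at optimality.

```python
import numpy as np
from scipy.optimize import linprog

# A tiny unperturbed pair P and D (an assumed example, not from the paper):
# min {c^T x | Ax = b, x >= 0} with A = [1 1], b = 1, c = (1, 2).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2, method="highs")
x = res.x                    # primal optimal solution
y = res.eqlin.marginals      # dual values of the equality constraints
s = c - A.T @ y              # dual slacks, s = c - A^T y >= 0

# Strong duality: c^T x = b^T y, and complementary slackness: x^T s = 0.
print(x, y, s)
print(c @ x, b @ y, x @ s)
```

Here the pair is in fact strictly complementary: componentwise, exactly one of x_i and s_i is positive.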
We say a primal-dual optimal solution (x^*(ε), y^*(λ), s^*(λ)) is strictly complementary if for all i ∈ {1, 2, …, n}, either x_i^*(ε) or s_i^*(λ) is zero, but not both. For a primal-dual strictly complementary optimal solution pair, we have x^*(ε)^T s^*(λ) = 0 with x^*(ε) + s^*(λ) > 0. By the Goldman–Tucker Theorem (1956), the existence of a primal-dual strictly complementary optimal solution is guaranteed if both problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) are feasible. The support set of a nonnegative vector v is defined as σ(v) = {i : v_i > 0}. The index set {1, 2, …, n} can be partitioned into two subsets

B(ε, λ) = {i | x_i^*(ε) > 0 for a primal optimal solution x^*(ε)},
N(ε, λ) = {i | s_i^*(λ) > 0 for a dual optimal solution (y^*(λ), s^*(λ))}.

This partition is known as the optimal partition of the index set {1, 2, …, n} for problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ), and is denoted by π(ε, λ) = (B(ε, λ), N(ε, λ)). The uniqueness of the optimal partition follows from the complementary slackness property and the convexity of the optimal solution sets P^*(Δb, Δc, ε, λ) and D^*(Δb, Δc, ε, λ). For Δb = 0 and Δc = 0, problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) are the unperturbed primal and dual problems. In this
case, we drop Δb, Δc, ε and λ in all notations. Moreover, if either Δc = 0 or Δb = 0, then either Δc and λ, or Δb and ε, are removed from the notations, respectively. The simplex method, invented by Dantzig (1963), solves LO problems by using pivot algorithms and produces an optimal basic solution. No polynomial time simplex method is known, while Interior Point Methods (IPMs) are polynomial time algorithms (Karmarkar 1984; Roos et al. 2005) for solving LO problems. A finite termination strategy for interior point methods, though not necessarily strongly polynomial, was proposed by Ye (1992). By now it is well known that IPMs lead to a primal-dual strictly complementary optimal solution (Güler and Ye 1993; Mehrotra and Ye 1993; Roos et al. 2005). Using a strongly polynomial rounding procedure on the solution obtained by an IPM, one obtains a strictly complementary pair and thereby also identifies the optimal partition (Illés et al. 2000; Roos et al. 2005). Variation in the input data might cause the loss of optimality of current optimal solutions. Investigating the behavior of a problem, its optimal solutions and the optimal value function when the input data changes is referred to as sensitivity analysis and parametric optimization. Usually, perturbation occurs in the right-hand side (RHS) and/or the objective function data (OFD). If perturbation in the RHS and the OFD happens with an identical parameter, the problem is referred to as a uni-parametric optimization problem, and if the RHS and OFD parameters vary independently, the problem is called a bi-parametric optimization problem. There are different approaches to parametric optimization. One of them is the so-called optimal partition invariancy sensitivity analysis. In this approach one wants to identify the range of parameters where the optimal partition remains invariant. In recent history, the first study with this point of view was published by Adler and Monteiro (1992).
The cases when either Δb or Δc is zero have been thoroughly studied in Roos et al. (2005). Further, the case ε = λ has been investigated in Ghaffari-Hadigheh et al. (2007). In these cases, the range of the parameter is an interval of the real line (Ghaffari-Hadigheh et al. 2007; Roos et al. 2005), referred to as an invariancy interval, and the points that separate these intervals are called transition points. Moreover, the optimal partitions at transition points differ from the optimal partitions on their neighboring invariancy intervals. In this paper, we consider the problem P(Δb, Δc, ε, λ) when both Δb and Δc are nonzero vectors and the parameters ε and λ are not necessarily equal. Let π = (B, N) denote the optimal partition for parameter values (ε, λ). We are interested in finding the region where the optimal partition is invariant, i.e., the set

{(ε, λ) ∈ R^2 | π(ε, λ) = (B(ε, λ), N(ε, λ)) = (B, N)}.

We refer to this region as the invariancy region. There is only a simple illustrative example in Jansen et al. (1984) where the authors considered independently varying parameters and calculated invariancy regions manually, without presenting a systematic method to identify them. In the 1970s bi-parametric LO problems were studied extensively. Almost all of the results (Hollatz and Weinert 1971; Nožička 1972, 1974; Weinert 1970) were published in East Germany, in German, and they remained unknown in the west. There were
some publications in English (Guddat et al. 1985), too. Let us review their results briefly. In Guddat et al. (1985), the two-parametric linear optimization problem is considered. It starts with the definition of an arbitrary index set I ⊆ {1, 2, …, n}, which is used to define open and closed faces of the feasible solution set: a subset where x_i = 0 for i ∈ I and x_i > 0 for i ∉ I is an open face, while requiring only x_i ≥ 0 for i ∉ I gives the corresponding closed face. In this way an open face is defined uniquely by its associated index set; a closed face, however, might be identified by more than one index set. It is nevertheless possible to obtain a maximal index set associated with a closed face. This maximal index set was referred to as the characteristic index set, which is necessarily unique. In this way the feasible solution set can be represented as the union of a finite number of disjoint open faces, and this representation is unique. Observe that this decomposition of the feasible solution set is completely geometric, while it can be described by algebraic equations. Recall that in LO, the optimal solution set is a closed face (it may be of dimension zero if the optimal solution is unique), and hence the definition of the optimal partition can be derived as a unique representation of the optimal face by the appropriate index set. However, no computable method was given in Guddat et al. (1985)¹ to identify the optimal partition. The authors of Guddat et al. (1985) proved that the range of the parameter values where the perturbed LO problem is well-defined (the problem has an optimal solution) can be partitioned into rectangular subregions. These regions are referred to as local stability sets (invariancy regions in our terminology), and they are uniquely associated with a characteristic index set that is invariant for the optimal solutions in those local stability sets. In Theorem 3.2 of Guddat et al.
(1985), the uniqueness of this partitioning is stated together with the linearity of the optimal value function for the case of non-simultaneous perturbation. However, the authors of Guddat et al. (1985) remain silent about an explicit representation of the optimal value function (Nožička 1972). Their computational algorithm requires choosing a perturbation parameter ε less than the smallest range within the rectangular partitions, and needs the actual unique optimal basic solution to be perturbed by this parameter. Then a simplex step is made to recover optimality or feasibility. This method is potentially capable of building the optimal value function, and thus of identifying the rectangular partition of the parameter space where the parametric optimization problem is well defined. The algorithm requires the selection of the parameter ε, uses a nondegeneracy assumption (degeneracy can be handled by anti-cycling rules; they proposed lexicography) and a regularity condition. Even with these restrictions, potentially exponentially many problems may need to be solved to identify a single invariancy (optimality) interval. In this paper, we propose two (four) linear optimization problems that can be solved in polynomial time by interior point methods to identify the invariancy interval

¹ Recall that IPMs lead to a strictly complementary optimal solution that gives the optimal partition (Roos et al. 2005). Note that neither polynomial time IPMs were available at the time when (Guddat et al. 1985; Hollatz and Weinert 1971; Nožička 1972, 1974; Weinert 1970) were written, nor was their ability to identify optimal partitions in polynomial time known in those years.
(region). Moreover, our method is independent of the type of the given optimal solution, and no extra regularity condition is required. The paper is organized as follows. In Sect. 2, some fundamental concepts are presented and the convexity of invariancy regions is proved. In the sequel, a basic theorem is proved that enables us to identify an invariancy region via solving auxiliary LO problems. The behavior of the optimal value function on its convex domain is studied at the end of this section. A computational algorithm is presented in Sect. 3 that enables us to identify all possible invariancy regions in addition to optimal partitions on them as well as on their borders. A simple example is presented in Sect. 4 to illustrate the results. The closing section contains a summary of our results and ideas for future research directions.
2 Invariancy regions

Let us introduce some simplifying concepts and notations. Let π = (B, N) be the optimal partition of the index set {1, 2, …, n} for problems P and D, where P = P(Δb, Δc, 0, 0) and D = D(Δb, Δc, 0, 0). Thus, the invariancy region is a set where the optimal partition of this index set for problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) at any point (ε, λ) in this region is π = (B, N). This region is not empty, as the origin (0, 0) belongs to it. The invariancy region containing the parameter value (ε, λ) is denoted by IR(Δb, Δc, ε, λ). We refer to the invariancy region which contains the origin as the actual invariancy region and denote it shortly by IR(Δb, Δc). Analogous notation is used in the literature for the case of uni-parametric programming. It will be proved that the boundaries between invariancy regions are vertical and horizontal line (half-line) segments. The line segment between two adjacent invariancy regions is referred to as a transition line segment, and, without risk of confusion with uni-parametric optimization, the intersections of vertical and horizontal transition lines are called transition points. Observe that the actual invariancy region might be the singleton {(0, 0)}, i.e., a transition point, or a (half-)line segment containing the origin, i.e., a transition line. Recall that for λ = 0 (or Δc = 0), problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) reduce to the following uni-parametric problems, respectively:

P(Δb, ε)    min{c^T x | Ax = b + εΔb, x ≥ 0},
D(Δb, ε)    max{(b + εΔb)^T y | A^T y + s = c, s ≥ 0}.
Let IR(Δb) denote the actual invariancy interval for problem P(Δb, ε). It was proved that the dual optimal solution set D^*(Δb, ε) is invariant on the actual invariancy interval IR(Δb) (see Theorem IV.56 in Roos et al. (2005)). The following lemma presents two auxiliary LO problems to identify the end points of this interval.

Lemma 2.1 Let (y^*, s^*) be an arbitrary optimal solution of the dual problem D. The actual invariancy interval IR(Δb) of problem P(Δb, ε) can be identified by solving
the following two auxiliary LO problems:

ε₁ = min{ε | Ax − εΔb = b, x ≥ 0, x^T s^* = 0},    (1)
ε₂ = max{ε | Ax − εΔb = b, x ≥ 0, x^T s^* = 0}.    (2)

Proof See Theorem IV.73 in Roos et al. (2005).
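Problems (1) and (2) stay linear because, with x ≥ 0 and s^* ≥ 0, the condition x^T s^* = 0 is equivalent to x_i = 0 for every i in the support σ(s^*). A minimal sketch of solving them with SciPy, on an assumed toy instance that is not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (an assumed example): A = [1 1], b = 1, c = (1, 2), direction Δb = 1.
# The dual optimal solution of D is y* = 1, s* = (0, 1), so x^T s* = 0 simply
# forces x_2 = 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
db = np.array([1.0])               # Δb
sigma_s = [1]                      # support of s*: indices with s*_i > 0

# Variables are (x, ε); constraint A x − ε Δb = b; x_i = 0 for i in σ(s*).
n = A.shape[1]
A_eq = np.hstack([A, -db.reshape(-1, 1)])
bounds = [(0.0, 0.0) if i in sigma_s else (0.0, None) for i in range(n)]
bounds.append((None, None))        # ε is a free variable

obj = np.zeros(n + 1); obj[-1] = 1.0
lo = linprog(obj, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")   # (1)
hi = linprog(-obj, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")  # (2)

print(lo.x[-1])   # ε1 = -1
print(hi.status)  # status 3: problem (2) is unbounded, so ε2 = +∞
```

For this instance IR(Δb) = (−1, ∞); the unbounded case is exactly the situation described in Remark 2.3 below.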
On the other hand, for ε = 0 (or Δb = 0), we have the following reduced uni-parametric primal and dual LO problems:

P(Δc, λ)    min{(c + λΔc)^T x | Ax = b, x ≥ 0},
D(Δc, λ)    max{b^T y | A^T y + s = c + λΔc, s ≥ 0}.

Let IR(Δc) denote the actual invariancy interval of problem P(Δc, λ). It was proved that the primal optimal solution set P^*(Δc, λ) is invariant on the actual invariancy interval IR(Δc) (see Theorem IV.60 in Roos et al. (2005)). The following lemma presents two auxiliary LO problems to identify the end points of this interval.

Lemma 2.2 Let x^* be an arbitrary optimal solution of the primal problem P. The actual invariancy interval IR(Δc) of problem P(Δc, λ) can be identified by solving the following two auxiliary LO problems:

λ₁ = min{λ | A^T y + s − λΔc = c, s ≥ 0, s^T x^* = 0},    (3)
λ₂ = max{λ | A^T y + s − λΔc = c, s ≥ 0, s^T x^* = 0}.    (4)

Proof See Theorem IV.75 in Roos et al. (2005).
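Problems (3) and (4) can be solved in the same way: s^T x^* = 0 pins s_i = 0 on the support σ(x^*). A hedged sketch on the same assumed toy data as above (not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Assumed toy data: A = [1 1], c = (1, 2), direction Δc = (1, 0).
# The primal optimal solution of P is x* = (1, 0), so s^T x* = 0 forces s_1 = 0.
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
dc = np.array([1.0, 0.0])          # Δc
sigma_x = [0]                      # support of x*: indices with x*_i > 0

# Variables are (y, s, λ); constraint A^T y + s − λ Δc = c; s_i = 0 on σ(x*).
m, n = A.shape
A_eq = np.hstack([A.T, np.eye(n), -dc.reshape(-1, 1)])
bounds = [(None, None)] * m                                   # y free
bounds += [(0.0, 0.0) if i in sigma_x else (0.0, None) for i in range(n)]
bounds.append((None, None))                                   # λ free

obj = np.zeros(m + n + 1); obj[-1] = 1.0
lo = linprog(obj, A_eq=A_eq, b_eq=c, bounds=bounds, method="highs")   # (3)
hi = linprog(-obj, A_eq=A_eq, b_eq=c, bounds=bounds, method="highs")  # (4)

print(lo.status)  # status 3: problem (3) is unbounded, so λ1 = −∞
print(hi.x[-1])   # λ2 = 1
```

For this instance IR(Δc) = (−∞, 1).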
Remark 2.3 Observe that the actual invariancy interval IR(Δb) might be the singleton {0}. This situation occurs when solving the two auxiliary LO problems (1) and (2) leads to ε₁ = ε₂ = 0. Moreover, if one of these problems is unbounded, then the actual invariancy interval IR(Δb) is unbounded too. An analogous argument is valid for the actual invariancy interval IR(Δc). Furthermore, all auxiliary LO problems (1)–(4) can be solved in polynomial time using an IPM (Roos et al. 2005).

Remark 2.4 It is worth mentioning that all auxiliary LO problems in Lemmas 2.1 and 2.2 can be adjusted to characterize all possible invariancy intervals. We skip the details.

The case ε = λ has been considered in Ghaffari-Hadigheh et al. (2007). The authors proved that the invariancy interval in this case is the intersection of the two invariancy intervals IR(Δb, ε) and IR(Δc, ε), i.e., IR(Δb, Δc, ε) = IR(Δb, ε) ∩ IR(Δc, ε). The optimal value function of problem P(Δb, Δc, ε, λ) is defined as

φ(ε, λ) = (c + λΔc)^T x^*(ε) = (b + εΔb)^T y^*(λ),    (5)
where Δb and Δc are fixed perturbation directions and (x^*(ε), y^*(λ), s^*(λ)) is a primal-dual optimal solution of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ). It has been proved that the optimal value function is a continuous, convex and piecewise linear function for λ = 0 (see Theorem IV.51 in Roos et al. (2005)), and it is a continuous, concave and piecewise linear function for ε = 0 (see Theorem IV.50 in Roos et al. (2005)). Moreover, Ghaffari-Hadigheh et al. (2007) proved that the optimal value function is a continuous and piecewise quadratic function when ε = λ and both Δb and Δc are nonzero. The authors also proved that this function might fail to have first or second order derivatives at transition points. In all these cases, the domain of the optimal value function is a convex set.

2.1 Fundamental properties

The following lemma proves the convexity of the actual invariancy region. Analogous reasoning is valid for other invariancy regions.

Lemma 2.5 The actual invariancy region IR(Δb, Δc) is a convex set.

Proof If IR(Δb, Δc) is the singleton {(0, 0)}, then it is obviously convex. Otherwise, let (ε₁, λ₁) and (ε₂, λ₂) be two arbitrary points in the actual invariancy region IR(Δb, Δc). Further, let (x^(1), y^(1), s^(1)) and (x^(2), y^(2), s^(2)) be strictly complementary optimal solutions of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ) at these points, respectively. For an arbitrary point (ε, λ) on the line segment between the two points (ε₁, λ₁) and (ε₂, λ₂), there is a θ ∈ (0, 1) such that

ε = θε₁ + (1 − θ)ε₂  and  λ = θλ₁ + (1 − θ)λ₂.

Let us define

x(ε) = θx^(1) + (1 − θ)x^(2),
y(λ) = θy^(1) + (1 − θ)y^(2),
s(λ) = θs^(1) + (1 − θ)s^(2).

It is easy to verify that x(ε) ∈ P(Δb, Δc, ε, λ) and (y(λ), s(λ)) ∈ D(Δb, Δc, ε, λ). On the other hand, σ(x(ε)) = σ(x^(1)) ∪ σ(x^(2)) = B and σ(s(λ)) = σ(s^(1)) ∪ σ(s^(2)) = N, which proves the optimality of these solutions for problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ), as well as that the optimal partition at (ε, λ) is π = (B, N). The proof is complete.

According to Lemma 2.5, to identify an invariancy region, it is enough to identify its boundary. Observe that an invariancy region might be unbounded.

2.2 Identifying the invariancy regions

Now, we present a fundamental theorem that describes the relationship between the actual invariancy region IR(Δb, Δc) and the two actual invariancy intervals IR(Δb) and IR(Δc). This relationship plays a significant role in identifying the actual invariancy region IR(Δb, Δc) and establishes the fact that this identification can be done in
polynomial time. Analogous statements can be used to identify all possible invariancy regions. It is worth mentioning that this result was stated in Nožička (1972). Since its proof was published only in German, to keep this paper self-contained, we prefer to present a proof in our notation. The main idea of the proof is similar, but some details differ.²

Theorem 2.6 Consider the bi-parametric LO problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ). Let IR(Δb) be the actual invariancy interval of problems P(Δb, ε) and D(Δb, ε). Moreover, let IR(Δc) be the actual invariancy interval of problems P(Δc, λ) and D(Δc, λ). Then

IR(Δb, Δc) = IR(Δb) × IR(Δc),

where IR(Δb, Δc) is the actual invariancy region of the problem P(Δb, Δc, ε, λ).

Proof Let π = (B, N) be the optimal partition of the index set {1, 2, …, n} of problems P and D. Moreover, let (x^*, y^*, s^*) be a strictly complementary optimal solution of these problems. Consequently, σ(x^*) = B and σ(s^*) = N. First we prove the inclusion

IR(Δb) × IR(Δc) ⊆ IR(Δb, Δc).    (6)
Let ε ∈ IR(Δb) be a fixed parameter. Since the dual optimal solution set D^*(Δb, ε) is invariant for all ε ∈ IR(Δb), having a primal strictly complementary optimal solution x^*(ε) ∈ P^*(Δb, ε), one might consider (x^*(ε), y^*, s^*) as a strictly complementary optimal solution of problems P(Δb, ε) and D(Δb, ε). Consequently, the equality σ(x^*(ε)) = B holds. Similarly, let λ ∈ IR(Δc) be an arbitrary fixed parameter. Since the primal optimal solution set P^*(Δc, λ) is invariant for any λ ∈ IR(Δc), having a dual strictly complementary optimal solution (y^*(λ), s^*(λ)) ∈ D^*(Δc, λ), one can consider (x^*, y^*(λ), s^*(λ)) as a strictly complementary optimal solution for problems P(Δc, λ) and D(Δc, λ). Henceforth, the equality σ(s^*(λ)) = N holds, too. It is obvious that x^*(ε) and (y^*(λ), s^*(λ)) are feasible solutions of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ), respectively. Moreover, the equality x^*(ε)^T s^*(λ) = 0 holds by construction, which proves the optimality of these solutions. It means (ε, λ) ∈ IR(Δb, Δc), which proves inclusion (6). Now, let (ε, λ) ∈ IR(Δb, Δc) be given and let (x^*(ε), y^*(λ), s^*(λ)) be a strictly complementary optimal solution of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ). Thus, σ(x^*(ε)) = B and σ(s^*(λ)) = N. Therefore, (x^*(ε), y^*, s^*) is a strictly complementary optimal solution of problems P(Δb, ε) and D(Δb, ε). Analogously, (x^*, y^*(λ), s^*(λ)) is a strictly complementary optimal solution of problems P(Δc, λ) and D(Δc, λ). The inclusion IR(Δb, Δc) ⊆ IR(Δb) × IR(Δc) follows from the fact that σ(x^*) = σ(x^*(ε)) = B and σ(s^*) = σ(s^*(λ)) = N. The proof is complete.

² The authors are grateful to an anonymous referee who pointed out the publications (Guddat et al. 1985; Nožička 1972, 1974) of Nožička et al. It is reassuring to see that we independently arrived at the same theoretical results, while we are also able to produce an efficient computational method.
Remark 2.7 If either IR(Δb) or IR(Δc) is the singleton {0}, then the actual invariancy region IR(Δb, Δc) is a transition line containing the origin. On the other hand, if both actual invariancy intervals IR(Δb) and IR(Δc) are equal to the singleton {0}, then IR(Δb, Δc) = {(0, 0)}. Otherwise, the invariancy region IR(Δb, Δc) is an open rectangle. Observe that transition lines and transition points are special invariancy regions, because the optimal partition at them differs from the optimal partitions in the actual invariancy region. An analogous statement has been asserted in Guddat et al. (1985). We refer to an invariancy region as a nontrivial invariancy region if it is neither a singleton (a transition point) nor a transition line. In contrast, transition points and transition lines are called trivial invariancy regions.

Remark 2.8 According to Theorem 2.6, to identify the actual invariancy region IR(Δb, Δc), it is enough to identify the invariancy intervals IR(Δb) and IR(Δc). Since the auxiliary LO problems (1)–(4) for identifying these invariancy intervals can be solved in polynomial time by an IPM, the invariancy region IR(Δb, Δc) can be identified in polynomial time as well. As stated previously, for the algorithm presented in Guddat et al. (1985), potentially exponentially many problems may need to be solved to identify a single invariancy rectangle. With our method, however, one only needs to solve four auxiliary LO problems, each solvable in polynomial time by an IPM. Moreover, these auxiliary LO problems are independent of what kind of optimal solution is given.

Remark 2.9 A direct consequence of Theorem 2.6 is that the optimal partitions on the vertical and horizontal transition lines on the boundary of an invariancy region differ from the optimal partition at any point in that invariancy region.
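Theorem 2.6 and Remark 2.8 suggest a direct computational recipe: solve the four auxiliary problems (1)–(4) and take the product of the resulting intervals. The following sketch assembles this into one function; the helper names, the toy data and the use of SciPy's HiGHS backend are all our own assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def bound(sign, A_eq, b_eq, bounds):
    """Min (sign=+1) or max (sign=-1) of the last variable; ±inf if unbounded."""
    obj = np.zeros(A_eq.shape[1]); obj[-1] = sign
    res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if res.status == 3:                      # unbounded auxiliary problem
        return -np.inf if sign > 0 else np.inf
    return sign * res.fun

def invariancy_region(A, b, c, db, dc, x_star, s_star, tol=1e-9):
    """IR(Δb) × IR(Δc) via the four auxiliary LO problems (1)-(4)."""
    m, n = A.shape
    # (1)-(2): variables (x, ε), A x − ε Δb = b, x_i = 0 on σ(s*).
    Ae = np.hstack([A, -db.reshape(-1, 1)])
    bx = [(0.0, 0.0) if s_star[i] > tol else (0.0, None) for i in range(n)]
    bx.append((None, None))
    eps = (bound(1.0, Ae, b, bx), bound(-1.0, Ae, b, bx))
    # (3)-(4): variables (y, s, λ), A^T y + s − λ Δc = c, s_i = 0 on σ(x*).
    Ad = np.hstack([A.T, np.eye(n), -dc.reshape(-1, 1)])
    bs = [(None, None)] * m
    bs += [(0.0, 0.0) if x_star[i] > tol else (0.0, None) for i in range(n)]
    bs.append((None, None))
    lam = (bound(1.0, Ad, c, bs), bound(-1.0, Ad, c, bs))
    return eps, lam

# Assumed toy instance with optimal partition B = {1}, N = {2} at the origin.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
eps, lam = invariancy_region(A, b, c, np.array([1.0]), np.array([1.0, 0.0]),
                             x_star=np.array([1.0, 0.0]),
                             s_star=np.array([0.0, 1.0]))
print(eps, lam)   # IR(Δb) = (-1, ∞), IR(Δc) = (-∞, 1)
```

The actual invariancy region of this instance is thus the open rectangle (−1, ∞) × (−∞, 1), illustrating the unbounded case of Remark 2.3.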
On the other hand, at the boundary of a nontrivial invariancy region, the optimal partitions on vertical transition lines are different from the optimal partitions on horizontal transition lines. Indeed, if any of these optimal partitions were identical, then this optimal partition would be identical to the optimal partition in the interior of that invariancy region, which would contradict Theorem 2.6. This observation might mislead us to the conclusion that the optimal partition at the intersection of the vertical and horizontal transition lines should differ from the optimal partitions on these transition lines themselves. Example 4.1 demonstrates that this impression is not always true. It is worth mentioning that for a rectangle generated by the algorithm presented in Guddat et al. (1985), the boundaries of the rectangle belong to the invariancy region of the associated basis, while in our approach all rectangles are open regions that are uniquely associated with optimal partitions. It means that the bordering line segments of the rectangles are special invariancy regions themselves, while from the point of view of Guddat et al. (1985) they have no independent existence. The following lemma describes the relationship between the optimal partitions on a nontrivial invariancy region and its vertical boundaries.

Lemma 2.10 The primal optimal solution set for any point lying inside a nontrivial invariancy region is a superset of the primal optimal solution set for any point on its vertical boundary transition lines.
Proof We prove the statement for the nontrivial actual invariancy region; the proof for an arbitrary invariancy region goes analogously. Without loss of generality, let IR(Δb, Δc) = (ε_ℓ, ε_u) × (λ_ℓ, λ_u) be the nontrivial actual invariancy region of problem P(Δb, Δc, ε, λ) and let π = (B, N) be the optimal partition of the index set {1, 2, …, n} at any point (ε, λ) ∈ IR(Δb, Δc). For a fixed parameter value λ ∈ (λ_ℓ, λ_u), the LO problem P(Δb, Δc, ε, λ) reduces to a uni-parametric LO problem. Let π(ε_ℓ, λ) = (B(ε_ℓ, λ), N(ε_ℓ, λ)) and π(ε_u, λ) = (B(ε_u, λ), N(ε_u, λ)) be the optimal partitions at the vertical transition lines ε = ε_ℓ and ε = ε_u, respectively. Using Corollary IV.63 in Roos et al. (2005), we conclude that B(ε_ℓ, λ) ⊆ B and B(ε_u, λ) ⊆ B. The proof is complete, because λ is an arbitrary parameter value in (λ_ℓ, λ_u).

Analogous to Lemma 2.10 and using Corollary IV.59 in Roos et al. (2005), one can establish the following relationship between the optimal partitions at any point inside a nontrivial invariancy region and its horizontal boundaries. The proof is similar to the proof of Lemma 2.10 and is omitted.

Lemma 2.11 The primal optimal solution set at any point inside a nontrivial invariancy region is a subset of the primal optimal solution set for any point on its horizontal boundary transition lines.

2.3 The optimal value function on an invariancy region

Let us investigate the behavior of the optimal value function on invariancy regions. First we mention a trivial observation following from Theorems IV.50 and IV.51 in Roos et al. (2005). It should be mentioned that some characteristics of the optimal value function, such as its continuity, its linearity in the non-simultaneous perturbation cases, and its quadratic nature in the bi-parametric perturbation case, have been stated in Guddat et al. (1985) and Nožička (1972) without an explicit representation being given.

Lemma 2.12 Let (ε̄, λ̄) be a fixed point in the domain of the optimal value function φ(ε, λ). Then the optimal value function φ(ε, λ̄) is a continuous, convex and piecewise linear uni-variate function of ε. Furthermore, the optimal value function φ(ε̄, λ) is a continuous, concave and piecewise linear uni-variate function of λ.

Proof We prove the first part of the statement; the proof of the other part goes analogously. Let λ = λ̄ be fixed. Then the bi-parametric LO problem P(Δb, Δc, ε, λ) reduces to the following non-simultaneous uni-parametric problem

min{c̄^T x | Ax = b + εΔb, x ≥ 0},

where c̄ = c + λ̄Δc. Thus, the optimal value function of this problem is a continuous, convex and piecewise linear uni-variate function of ε (see Theorem IV.51 in Roos et al. (2005)).

The following theorem is a direct consequence of Lemma 2.12 and it characterizes the optimal value function on transition line segments.
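The piecewise linear, convex behavior of φ(ε, λ̄) can be seen numerically. The instance below is our own assumed illustration (not from the paper); its optimal value function has a kink at the transition point ε = 0, where the optimal partition changes.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed illustration: min{x1 | x1 − x2 = ε, x ≥ 0}, i.e. b = 0, Δb = 1, and a
# fixed λ̄ (Δc plays no role here).  The optimal value is φ(ε) = max(ε, 0):
# continuous, convex and piecewise linear in ε, with a transition point at ε = 0.
A = np.array([[1.0, -1.0]])
c = np.array([1.0, 0.0])

def phi(eps):
    res = linprog(c, A_eq=A, b_eq=[eps], bounds=[(0, None)] * 2, method="highs")
    return res.fun

for eps in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(eps, phi(eps))   # φ(ε) = max(ε, 0)
```

For ε < 0 the optimal partition is B = {2}, N = {1}; for ε > 0 it is B = {1}, N = {2}; the slope of φ changes across the transition point.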
Theorem 2.13 The optimal value function φ(ε, λ) is a continuous linear uni-variate function on transition line segments.

The following theorem presents the representation of the optimal value function φ(ε, λ) on the actual invariancy region.

Theorem 2.14 The optimal value function φ(ε, λ) is a bivariate quadratic function on the actual invariancy region IR(Δb, Δc).

Proof When the actual invariancy region is the singleton {(0, 0)}, there is nothing to prove. Moreover, the statement is true when the actual invariancy region is a transition line segment. Let the actual invariancy region be a nontrivial one. Further, let (ε₁, λ₁), (ε₂, λ₂) and (ε₃, λ₃) be three arbitrary points in the actual invariancy region that are not on a single line. Let (x^(1), y^(1), s^(1)), (x^(2), y^(2), s^(2)) and (x^(3), y^(3), s^(3)) be primal-dual optimal solutions at these three points, respectively. Moreover, let (ε, λ) be an arbitrary point in the interior of the triangle given by these three points. Therefore, there are θ₁, θ₂ ∈ (0, 1) with 0 < θ₁ + θ₂ < 1 such that

ε = ε₃ − θ₁Δε₁ − θ₂Δε₂,    (7)
λ = λ₃ − θ₁Δλ₁ − θ₂Δλ₂,    (8)

where Δε₁ = ε₃ − ε₁, Δε₂ = ε₃ − ε₂, Δλ₁ = λ₃ − λ₁ and Δλ₂ = λ₃ − λ₂. Let us define

x^*(ε) = x^(3) − θ₁Δx^(1) − θ₂Δx^(2),
y^*(λ) = y^(3) − θ₁Δy^(1) − θ₂Δy^(2),    (9)
s^*(λ) = s^(3) − θ₁Δs^(1) − θ₂Δs^(2),

where Δx^(1) = x^(3) − x^(1), Δx^(2) = x^(3) − x^(2), Δy^(1) = y^(3) − y^(1), Δy^(2) = y^(3) − y^(2), Δs^(1) = s^(3) − s^(1) and Δs^(2) = s^(3) − s^(2). It is easy to verify that (x^*(ε), y^*(λ), s^*(λ)) is a primal-dual optimal solution of problems P(Δb, Δc, ε, λ) and D(Δb, Δc, ε, λ). Substituting (7) and (9) in (5) gives

φ(ε, λ) = (b + εΔb)^T y^*(λ) = a₀ + a₁θ₁ + a₂θ₂ + a₃θ₁θ₂ + a₄θ₁² + a₅θ₂²,    (10)

where

a₀ = (b + ε₃Δb)^T y^(3),
a₁ = −(b + ε₃Δb)^T Δy^(1) − Δε₁ Δb^T y^(3),
a₂ = −(b + ε₃Δb)^T Δy^(2) − Δε₂ Δb^T y^(3),
a₃ = Δε₁ Δb^T Δy^(2) + Δε₂ Δb^T Δy^(1),    (11)
a₄ = Δε₁ Δb^T Δy^(1),
a₅ = Δε₂ Δb^T Δy^(2).
On the other hand, solving equations (7) and (8) for θ₁ and θ₂ gives

θ₁ = α₁ + β₁ε + γ₁λ,    (12)
θ₂ = α₂ + β₂ε + γ₂λ,    (13)

where

α₁ = (ε₃Δλ₂ − λ₃Δε₂)/(Δε₁Δλ₂ − Δε₂Δλ₁),
α₂ = (λ₃(Δε₁ + Δε₂) − ε₃(Δλ₁ + Δλ₂))/(Δε₁Δλ₂ − Δε₂Δλ₁),
β₁ = −Δλ₂/(Δε₁Δλ₂ − Δε₂Δλ₁),
β₂ = (Δλ₂ + Δλ₁)/(Δε₁Δλ₂ − Δε₂Δλ₁),    (14)
γ₁ = Δε₂/(Δε₁Δλ₂ − Δε₂Δλ₁),
γ₂ = −(Δε₂ + Δε₁)/(Δε₁Δλ₂ − Δε₂Δλ₁).
Substituting (11)–(14) in (10) leads to the following representation of the optimal value function:

φ(ε, λ) = b₀ + b₁ε + b₂λ + b₃ελ + b₄ε² + b₅λ²,    (15)

where

b₀ = a₀ + a₁α₁ + a₂α₂ + a₃α₁α₂ + a₄α₁² + a₅α₂²,
b₁ = a₁β₁ + a₂β₂ + a₃(α₁β₂ + α₂β₁) + 2a₄α₁β₁ + 2a₅α₂β₂,
b₂ = a₁γ₁ + a₂γ₂ + a₃(α₁γ₂ + α₂γ₁) + 2a₄α₁γ₁ + 2a₅α₂γ₂,
b₃ = a₃(γ₁β₂ + γ₂β₁) + 2a₄γ₁β₁ + 2a₅γ₂β₂,
b₄ = a₄β₁² + a₃β₁β₂ + a₅β₂²,
b₅ = a₄γ₁² + a₃γ₁γ₂ + a₅γ₂²,

which is a quadratic function of ε and λ. Because (ε₁, λ₁), (ε₂, λ₂) and (ε₃, λ₃) are three arbitrary points in the nontrivial actual invariancy region, the claim of the theorem follows directly from (15). The proof is complete.

Remark 2.15 Recall that for λ = 0, problem P(Δb, Δc, ε, λ) reduces to the uni-parametric problem P(Δb, ε). In this case, λ₁ = λ₂ = λ₃ = 0 holds for the three points (ε₁, λ₁), (ε₂, λ₂) and (ε₃, λ₃). Without loss of generality one may assume that ε₁ < ε < ε₃ < ε₂. Therefore ε = (1 − α)ε₁ + αε₂, where α = (1 − θ₁)(1 − θ₂). Since θ₁, θ₂ ∈ (0, 1), α ∈ (0, 1) as well. On the other hand, the dual optimal solution set D^*(Δb, ε) is invariant on the associated invariancy region (interval). Thus, without
loss of generality, we may assume that y^(1) = y^(2) = y^(3) = y*, where (y*, s*) is a dual optimal solution of problem D. Consequently, the optimal value function φ(ε, 0) reduces to

φ(ε, 0) = (b + ε₃Δb)^T y* − (ε₁ + ε₂)Δb^T y* θ₁ − ε₂Δb^T y* θ₂ = (b + εΔb)^T y*,

where ε is defined by (7). This result is in agreement with the fact that the optimal value function is linear on the actual invariancy interval IR(Δb).

Remark 2.16 The representation of the optimal value function given in (15) is based on the arbitrary dual optimal solutions (y^(1), s^(1)), (y^(2), s^(2)) and (y^(3), s^(3)). One may instead use the primal optimal solutions x^(1), x^(2) and x^(3) to construct another, equivalent representation of the optimal value function. Using this representation for the case ε = 0, the optimal value function reduces to

φ(0, λ) = (c + λ₃Δc)^T x* − (λ₁ + λ₂)Δc^T x* θ₁ − λ₂Δc^T x* θ₂ = (c + λΔc)^T x*,

where λ is defined by (8). This result is in agreement with the fact that the optimal value function is linear on the actual invariancy interval IR(Δc).

Remark 2.17 It can be proved by rigorous algebraic calculation that b₄ and b₅ are always zero. Here we only mention that this can be shown by using Remarks 2.15 and 2.16.

Remark 2.18 Recall that for ε = λ, problem P(Δb, Δc, ε, λ) reduces to a uni-parametric LO problem with simultaneous perturbation in both the RHS and the OFD. In this case y*(λ) can be rewritten as y*(λ) = (1 − α)y^(1) + αy^(2), where α = (1 − θ₁)(1 − θ₂). Consequently, the optimal value function (10) is a univariate quadratic function of α, which is in agreement with the results obtained in Ghaffari-Hadigheh et al. (2007).

As discussed in Remark 2.9, at some of the transition points the optimal partition changes on either the horizontal or the vertical transition line passing through that point. At these transition points, according to Lemma 2.12, the slope of the optimal value function changes either horizontally or vertically.
The following theorem is a direct consequence of this discussion.

Theorem 2.19 The optimal value function φ(ε, λ) fails to have first order partial derivatives ∂φ/∂ε and ∂φ/∂λ at transition points. Further, at least one of the first order partial derivatives does not exist on transition lines.

In the rest of this section, we present a method to calculate the first order directional derivatives of the optimal value function at any point on a transition line. Let u = (u₁, u₂)^T ∈ R² be a direction such that u₁ and u₂ are nonzero, and let (ε, λ) be an
arbitrary point on a transition line. Moreover, let D₊φ|u and D₋φ|u denote the right and left directional derivatives of the optimal value function φ(ε, λ) at (ε, λ) in the direction u. Since u₁ and u₂ are nonzero, the line passing through the point (ε, λ) and parallel to u is

λ̄ = (u₂/u₁)(ε̄ − ε) + λ.    (16)

Substituting (16) in problem P(Δb, Δc, ε, λ) converts it to the following uni-parametric LO problem:

P(Δb, Δc̄, ε̄)    min{(c̄ + ε̄Δc̄)^T x | Ax = b + ε̄Δb, x ≥ 0},

where Δc̄ = (u₂/u₁)Δc and c̄ = c + (λ − (u₂/u₁)ε)Δc. The dual of P(Δb, Δc̄, ε̄) is

D(Δb, Δc̄, ε̄)    max{(b + ε̄Δb)^T y | A^T y + s = c̄ + ε̄Δc̄, s ≥ 0}.

Since problems P(Δb, Δc̄, ε̄) and D(Δb, Δc̄, ε̄) are uni-parametric LO problems with simultaneous perturbation in both the RHS and the OFD, to identify the directional derivatives D₊φ|u and D₋φ|u we apply Theorem 5.4 in Ghaffari-Hadigheh et al. (2007).

Theorem 2.20 For a given (ε, λ) and u = (u₁, u₂)^T, where u₁ and u₂ are nonzero, let λ̄ = (u₂/u₁)(ε̄ − ε) + λ and let (x̄, ȳ, s̄) be a primal-dual optimal solution of problems P(Δb, Δc̄, ε̄) and D(Δb, Δc̄, ε̄). Then

D₋φ|u = min{Δb^T y : A^T y + s = c̄ + ε̄Δc̄, s ≥ 0, s^T x̄ = 0}
       + max{Δc̄^T x : Ax = b + ε̄Δb, x ≥ 0, x^T s̄ = 0},

D₊φ|u = max{Δb^T y : A^T y + s = c̄ + ε̄Δc̄, s ≥ 0, s^T x̄ = 0}
       + min{Δc̄^T x : Ax = b + ε̄Δb, x ≥ 0, x^T s̄ = 0},

where the min and max in y are taken over (y, s), those in x over x, and Δc̄ = (u₂/u₁)Δc and c̄ = c + (λ − (u₂/u₁)ε)Δc.
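As a numerical illustration of Theorem 2.20, the two pairs of LPs can be solved with `scipy.optimize.linprog`. This is our own sketch, not code from the paper: the data are those of Example 4.1 below, the point (ε, λ) = (1.5, 0.5) on the vertical transition line ε = 1.5 and the direction u = (1, 1) are our choices, and the strictly complementary optimal pair (x̄, ȳ, s̄) is hand-picked rather than computed.

```python
# Directional derivatives of phi at a point on a transition line (Theorem 2.20).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1, 1, 0, 0],
              [0.0, 1, 0, 1, 0],
              [1.0, 0, 0, 0, 1]])
b = np.array([3.0, 2.0, 2.5])
c = np.array([-1.0, -1, 0, 0, 0])
db = np.array([1.0, -1, 1])           # Delta b
dc = np.array([-1.0, 1, 0, 0, 0])     # Delta c

eps, lam = 1.5, 0.5                   # point on the transition line eps = 1.5
u1, u2 = 1.0, 1.0                     # direction u
dc_bar = (u2 / u1) * dc               # perturbation of the reduced problem
c_bar = c + (lam - (u2 / u1) * eps) * dc
cost = c_bar + eps * dc_bar           # equals c + lam*dc at the point itself
rhs = b + eps * db

# Hand-picked strictly complementary optimal pair at (1.5, 0.5): B = {1,2}
x_bar = np.array([4.0, 0.5, 0, 0, 0])
s_bar = cost - A.T @ np.array([-0.25, -0.25, -1.25])

B = x_bar > 1e-9                      # support of x_bar; s^T x_bar = 0 => s_B = 0
free = s_bar <= 1e-9                  # x may be positive only where s_bar = 0

# LPs in (y, s), with s eliminated: A^T y <= cost, equality on B
res_min = linprog(db, A_ub=A.T[~B], b_ub=cost[~B],
                  A_eq=A.T[B], b_eq=cost[B], bounds=[(None, None)] * 3)
res_max = linprog(-db, A_ub=A.T[~B], b_ub=cost[~B],
                  A_eq=A.T[B], b_eq=cost[B], bounds=[(None, None)] * 3)

# LPs in x: Ax = rhs, x >= 0, x_i = 0 wherever s_bar_i > 0
bnds = [(0, None) if f else (0, 0) for f in free]
res_xmin = linprog(dc_bar, A_eq=A, b_eq=rhs, bounds=bnds)
res_xmax = linprog(-dc_bar, A_eq=A, b_eq=rhs, bounds=bnds)

D_minus = res_min.fun + (-res_xmax.fun)   # min db^T y + max dc_bar^T x
D_plus = (-res_max.fun) + res_xmin.fun    # max db^T y + min dc_bar^T x
print(D_minus, D_plus)                    # -5.0 -4.5
```

The value D₊φ|u = −4.5 agrees with differentiating the optimal value function −4.5 − 0.5λ − 2ελ of the adjacent region 1.5 < ε < 2 (Table 1) along u at (1.5, 0.5).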
3 Identifying all possible invariancy regions

In this section, an algorithmic procedure is presented that is capable of identifying all possible invariancy regions in the first quadrant of the "ε − λ" plane.
Algorithm: Transition regions and associated optimal partitions for the bi-parametric LO problem

Input: Perturbation vectors r = (Δb, Δc);
       a primal-dual strictly complementary optimal solution (x*, y*, s*) of P and D;
k = 1; ε₀ = 0; λ₀ = 0; x⁰ = x*; y⁰ = y*; s⁰ = s*; ready1 := false;
while not ready1 do
begin
    Solve λ_k = max{λ : A^T y + s − λΔc = c, s ≥ 0, s^T x^(k−1) = 0};
    if this problem is unbounded then ready1 := true, λ̂ = λ_k + 1
    else let (λ_k, y^k, s^k) be the optimal solution;
    if λ_k ≈ λ_(k−1) then λ_k is a transition line (point) and let λ̂ = λ_k
    else λ̂ = (λ_k + λ_(k−1))/2;
    begin
        k̄ := 1; ready2 := false;
        while not ready2 do
        begin
            Solve ε_k̄ = max{ε : Ax − εΔb = b, x ≥ 0, x^T s^(k̄−1) = 0};
            if this problem is unbounded then ready2 := true
            else let (ε_k̄, x^k̄) be the optimal solution, run Subalgorithm 2;
            ready3 := false;
            while not ready3 do
            begin
                run Subalgorithm 1;
                run Subalgorithm 2;
                Solve max{Δb^T y | A^T y + s = c, s ≥ 0, s^T x^k̄ = 0};
                if this problem is unbounded then ready2 := true
                else let (y^k̄, s^k̄) be an optimal solution;
            end
            k̄ := k̄ + 1;
        end
        Solve min{Δc^T x : Ax = b, x ≥ 0, x^T s^k = 0};
        if this problem is unbounded then ready1 := true
        else let x^k be an optimal solution;
        k := k + 1;
    end
end
We are given a primal-dual optimal solution of the unperturbed problem. The algorithm starts from the origin and moves up and to the right to identify the first nonnegative λ₁ as a horizontal transition line. Then, it continues to the left to identify all possible ε_k̄ for the obtained λ₁. The algorithm is a combination of two algorithms presented in Roos et al. (2005) for identifying the ε's and the λ's separately. It calls two subalgorithms: Subalgorithm 1 is capable of shrinking the current nontrivial invariancy region from the left, while Subalgorithm 2 does the same from above. In the main algorithm and the two subalgorithms, the notation '≈' indicates that the associated two values are close enough to each other to be considered numerically identical; theoretically, they should be considered equal.
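The first auxiliary LP of the main algorithm, λ₁ = max{λ : A^T y + s − λΔc = c, s ≥ 0, s^T x⁰ = 0}, can be set up directly with `scipy.optimize.linprog`. The sketch below (our own, not the authors' implementation) uses the data of Example 4.1 from Sect. 4; the strictly complementary solution x* is hand-picked with support {1, 2, 4, 5}.

```python
# First auxiliary LP of the algorithm: largest lambda keeping the optimal
# partition of the unperturbed problem dual-feasible.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1, 1, 0, 0],
              [0.0, 1, 0, 1, 0],
              [1.0, 0, 0, 0, 1]])
c = np.array([-1.0, -1, 0, 0, 0])
dc = np.array([-1.0, 1, 0, 0, 0])
x_star = np.array([1.5, 1.5, 0, 0.5, 1.0])   # strictly complementary optimum

N = x_star <= 1e-9            # s may be positive only outside the support of x*
n_N = int(N.sum())
# variables: y (3 components, free), s_N (n_N components, >= 0), lambda (free)
A_eq = np.hstack([A.T, np.eye(5)[:, N], -dc.reshape(-1, 1)])
res = linprog(np.r_[np.zeros(3 + n_N), -1.0],    # maximize lambda
              A_eq=A_eq, b_eq=c,
              bounds=[(None, None)] * 3 + [(0, None)] * n_N + [(None, None)])
lambda_1 = -res.fun
print(lambda_1)
```

Here λ₁ = 0, consistent with Example 4.1: the actual invariancy region lies on the ε-axis, so λ = 0 is itself a (horizontal) transition line.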
Subalgorithm 1

Input from the main Algorithm: λ̂, ε_(k̄−1), ε_k̄, ready2;
begin
    if ε_k̄ ≈ ε_(k̄−1) then ε_k̄ is a transition line (point) and let ε̂ = ε_k̄
    elseif ready2 = true then ε̂ = ε_k̄ + 1
    else ε̂ = (ε_k̄ + ε_(k̄−1))/2;
    begin
        Let (ŷ, ŝ) be a dual optimal solution of D(Δb, Δc, ε̂, λ̂);
        begin
            Solve ε̂₂ = max{ε : Ax − εΔb = b, x ≥ 0, x^T ŝ = 0};
            if this problem is unbounded then ε̂₂ = ∞
            else let (ε̂₂, x̂₂) be an optimal solution;
        end
        begin
            Solve ε̂₁ = min{ε : Ax − εΔb = b, x ≥ 0, x^T ŝ = 0};
            if this problem is unbounded then ε̂₁ = −∞
            else let (ε̂₁, x̂₁) be an optimal solution;
        end
        if ε̂₁ > ε_(k̄−1) then ε_k̄ = ε̂₁, ε̂ = (ε_k̄ + ε_(k̄−1))/2 and x^k̄ = x̂₁;
            return (ε̂, ε_k̄, x^k̄);
        elseif ε̂₂ < ε_k̄ then ε_k̄ = ε̂₂, ε̂ = (ε_k̄ + ε_(k̄−1))/2 and x^k̄ = x̂₂;
            return (ε̂, ε_k̄, x^k̄);
        else ready3 := true;
    end
end
The following theorem states that the algorithm terminates in a finite number of iterations and identifies all possible invariancy regions in the first quadrant of the "ε − λ" plane.

Theorem 3.1 The algorithm terminates after a finite number of iterations and identifies all possible invariancy regions.

Proof The algorithm includes two loops; let us refer to them as the outer and the inner loop. In the first iteration of the outer loop, the algorithm begins by solving

λ₁ = max{λ : A^T y + s − λΔc = c, s ≥ 0, s^T x⁰ = 0},    (17)
where x⁰ is the given primal optimal solution of P. This problem is feasible, because (λ, y, s) = (0, y*, s*) satisfies its constraints. Thus, it is either unbounded or it has an optimal solution (λ₁, y¹, s¹). By Lemma 2.2, λ₁ is the extreme point at the top of the actual invariancy region (interval) containing the origin. If problem (17) is unbounded, then (0, ∞) ⊆ IR(0, λ) and the optimal value function is linear on this interval (see Theorem 2.13). The algorithm continues by identifying a parameter value λ̂. If problem (17) is unbounded, we may conclude that the associated interval is unbounded from the right. Otherwise, when λ₁ ≈ λ₀ = 0, the half-line λ ≥ 0 is the vertical (actual) transition line. However, when λ₁ > 0 is a finite real number, λ̂ is the mid-point between the origin and λ₁. It should be mentioned that this algorithm runs on transition lines but not in the interior of the invariancy regions. Sometimes a transition line might be on the boundary of more than one invariancy region. For this reason, the parameter value λ̂ is used in Subalgorithm 1 to identify the first invariancy region along the specified transition line. For each parameter value λ_k, the inner loop starts for all vertical transition lines that intersect the line λ = λ_k. It starts by solving
ε_k̄ = max{ε : Ax − εΔb = b, x ≥ 0, x^T s^(k̄−1) = 0},    (18)
where s⁰ is the slack variable in the given optimal solution (y⁰, s⁰) = (y*, s*) of problem D. This problem is again feasible, because (ε, x) = (0, x*) satisfies its constraints. Thus, it is either unbounded or it has an optimal solution (ε₁, x¹). Running Subalgorithm 2 at this stage identifies the optimal partition and the end points of the invariancy interval on the transition line passing through ε₁. Here, we only need to run this subroutine for the input data ε̂ = ε₁, without being concerned with the output of Subalgorithm 2. At this point, the algorithm has identified the first vertical and horizontal transition lines, or has detected that the current invariancy region is unbounded from the right and above. In executing Subalgorithm 1, we intend to choose a new parameter value between 0 and ε₁, referred to as ε̂. It helps us to find out whether the current region is really a single invariancy region or the union of several invariancy regions. To do this, it is first verified whether ε₁ is far enough from ε₀ = 0. If it is not, then they are identical and in this case ε̂ = ε₀ = 0. If problem (18) is unbounded, this auxiliary parameter value is close
to ε₁ = ∞. However, when ε₁ > 0 is a finite real number, ε̂ is the mid-point between the origin and ε₁. It is worth mentioning that four cases might occur for λ̂ and ε̂:

• Case 1: λ̂ = ε̂ = 0;
• Case 2: λ̂ = 0 but ε̂ ≠ 0;
• Case 3: λ̂ ≠ 0 but ε̂ = 0;
• Case 4: both λ̂ and ε̂ are nonzero.
To continue executing Subalgorithm 1, we need an optimal solution (ŷ, ŝ) of D(Δb, Δc, ε̂, λ̂). It is easy to verify that in Cases 1 and 2 the optimal solution (y*, s*) of D can be taken as (ŷ, ŝ), because the dual optimal solution set is invariant in these cases. However, in Cases 3 and 4 they might not be identical. In the course of Subalgorithm 1, the following two auxiliary problems are solved to identify whether the current region containing (ε̂, λ̂) is a genuine invariancy region:

ε̂₂ = max{ε : Ax − εΔb = b, x ≥ 0, x^T ŝ = 0},    (19)
ε̂₁ = min{ε : Ax − εΔb = b, x ≥ 0, x^T ŝ = 0}.    (20)
It is easy to see that (ε, x) = (0, x*) is feasible for both problems (19) and (20) in Cases 1 and 2, and that in Cases 3 and 4, (ε, x) = (0, x̂) is feasible, where x̂ is an optimal solution of problem P(Δc, λ̂). Observe that problem P(Δc, λ̂) has an optimal solution because (ŷ, ŝ) is its dual optimal solution. Let us focus our proof on problem (19). In Cases 1 and 3, this problem has an optimal solution, that is, ε̂₂ = 0. In Cases 2 and 4, problem (19) is either unbounded or it has an optimal solution for some positive ε̂₂. However, when ε₁ is a finite number, then ε̂₂ ≤ ε₁. An analogous discussion is valid for problem (20). Subalgorithm 1 terminates by identifying whether the right side of the current region shrinks to produce a right transition line for an invariancy region, or ε = ε₁ is this border itself, or the invariancy region is unbounded from the right (or left). After Subalgorithm 1 has completed, the next step is executing Subalgorithm 2. The task of this subalgorithm is analogous to that of Subalgorithm 1, but it runs vertically; we skip the details. At the end stage of the inner loop, the slope of the optimal value function is calculated vertically, and the loop terminates when all vertical transition lines are detected for the smallest value of λ as the first horizontal transition line. If the original horizontal transition line has not yet been processed in the inner loop, the outer loop continues with the new horizontal transition line, finding the slope of the optimal value function horizontally. Since the number of optimal partitions is finite, both the inner and the outer loops terminate after a finite number of iterations. The proof is complete.

Changing the minimization and maximization problems suitably, and substituting the formulas for calculating ε̂ and λ̂, produces other algorithms that provide all possible invariancy regions on the "ε − λ" plane. We skip the details.
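The auxiliary problems (19)–(20) can be illustrated on the data of Example 4.1 (Sect. 4). In this sketch (our own construction), the test point (ε̂, λ̂) = (0, 0.5) and the dual optimal slack vector ŝ = (0, 0, 0.5, 0, 1) at that point are our choices; the two LPs should then return the ε-range of the invariancy region containing (0, 0.5).

```python
# Problems (19) and (20): stretch the current region left and right in eps.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1, 1, 0, 0],
              [0.0, 1, 0, 1, 0],
              [1.0, 0, 0, 0, 1]])
b = np.array([3.0, 2, 2.5])
db = np.array([1.0, -1, 1])
s_hat = np.array([0.0, 0, 0.5, 0, 1])   # dual slacks at (eps, lam) = (0, 0.5)

# variables: (x, eps); x_i = 0 wherever s_hat_i > 0 enforces x^T s_hat = 0
bnds = [(0, None) if s <= 1e-9 else (0, 0) for s in s_hat] + [(None, None)]
A_eq = np.hstack([A, -db.reshape(-1, 1)])      # A x - eps * db = b

obj = np.r_[np.zeros(5), 1.0]                  # objective: eps
eps_min = linprog(obj, A_eq=A_eq, b_eq=b, bounds=bnds).fun       # (20)
eps_max = -linprog(-obj, A_eq=A_eq, b_eq=b, bounds=bnds).fun     # (19)
print(eps_min, eps_max)   # -2.5 1.5
```

The computed bounds ε̂₁ = −2.5 and ε̂₂ = 1.5 match the region −2.5 < ε < 1.5, 0 < λ < 1 of Table 1.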
Subalgorithm 2

Input from the main Algorithm: ε̂, λ_(k−1), λ_k, ready2;
begin
    if λ_k ≈ λ_(k−1) then λ_k is a transition line (point) and let λ̂ = λ_k
    elseif ready2 = true then λ̂ = λ_k + 1
    else λ̂ = (λ_k + λ_(k−1))/2;
    begin
        Let x̂ be a primal optimal solution of P(Δb, Δc, ε̂, λ̂);
        begin
            Solve λ̂₂ = max{λ : A^T y + s − λΔc = c, s ≥ 0, s^T x̂ = 0};
            if this problem is unbounded then λ̂₂ = ∞
            else let (λ̂₂, ŷ₂, ŝ₂) be an optimal solution;
        end
        begin
            Solve λ̂₁ = min{λ : A^T y + s − λΔc = c, s ≥ 0, s^T x̂ = 0};
            if this problem is unbounded then λ̂₁ = −∞
            else let (λ̂₁, ŷ₁, ŝ₁) be an optimal solution;
        end
        if λ̂₁ > λ_(k−1) then ready3 := false, λ_k = λ̂₁,
            λ̂ = (λ_k + λ_(k−1))/2, y^k = ŷ₁ and s^k = ŝ₁
        elseif λ̂₂ < λ_k then ready3 := false, λ_k = λ̂₂,
            λ̂ = (λ_k + λ_(k−1))/2, y^k = ŷ₂ and s^k = ŝ₂
        else continue;
        return (λ_k, y^k, s^k);
    end
end
4 Illustrative example

In this section, a simple example is presented to illustrate the obtained results.

Example 4.1 Consider the following LO problem:

min{−x₁ − x₂ | x₁ + x₂ + x₃ = 3, x₂ + x₄ = 2, x₁ + x₅ = 2.5,
    x₁ ≥ 0, x₂ ≥ 0, x₃ ≥ 0, x₄ ≥ 0, x₅ ≥ 0}.
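This problem can be checked numerically. The following sketch (scipy is our choice of solver; the paper does not use one) verifies the optimal value and that x₃ vanishes in the optimal solution, consistent with N = {3} stated below:

```python
# Solve the unperturbed LP of Example 4.1 and check its optimum.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1, 1, 0, 0],
              [0.0, 1, 0, 1, 0],
              [1.0, 0, 0, 0, 1]])
b = np.array([3.0, 2, 2.5])
c = np.array([-1.0, -1, 0, 0, 0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 5)
print(res.fun)    # -3.0
print(res.x[2])   # 0.0: index 3 belongs to N
```

Every optimal solution has x₁ + x₂ = 3 and hence x₃ = 0, matching the optimal partition π = ({1, 2, 4, 5}, {3}); note that a simplex-type solver returns a vertex, not the strictly complementary solution the optimal-partition approach works with.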
Fig. 1 The mesh-like area generated by all invariancy regions in Example 4.1
Its dual is

max{3y₁ + 2y₂ + 2.5y₃ | y₁ + y₃ + s₁ = −1, y₁ + y₂ + s₂ = −1, y₁ + s₃ = 0,
    y₂ + s₄ = 0, y₃ + s₅ = 0, s₁ ≥ 0, s₂ ≥ 0, s₃ ≥ 0, s₄ ≥ 0, s₅ ≥ 0}.

It is easy to verify that the optimal partition of the index set {1, 2, 3, 4, 5} is π = (B, N) = ({1, 2, 4, 5}, {3}). Let Δb = (1, −1, 1)^T and Δc = (−1, 1, 0, 0, 0)^T be the perturbing directions. Figure 1 depicts the mesh-like area constructed by all invariancy regions of this problem. In this figure, bold lines denote the transition (half-)line segments. The actual invariancy region is the transition line segment {(ε, 0) | −2.5 < ε < 1.5}. Optimal partitions on all invariancy regions, including transition (half-)line segments and nontrivial invariancy regions, are presented in Table 1. The optimal value function of this problem is depicted in Fig. 2; it clearly shows that this function is neither convex nor concave. The following observations can be made from Table 1.

1. The domain of the optimal value function includes six nontrivial invariancy regions in addition to nine transition (half-)line segments and nine transition points at the intersections of transition lines. As seen from Table 1, the optimal partition changes both horizontally and vertically at only two transition points, i.e., at (−2.5, 1) and (2, −1), depicted with bold points in Fig. 1. This phenomenon does not happen at the other transition points.
2. Consider the nontrivial invariancy region {(ε, λ) | −2.5 < ε < 1.5, 0 < λ < 1}. The optimal partition in this region is π = (B, N) = ({1, 2, 4}, {3, 5}). Its
Table 1 Invariancy regions, optimal partitions and the representation of the optimal value function on the invariancy regions of Example 4.1

Invariancy region                B               N             Optimal value function
−2.5 < ε < 1.5,  λ = 0           {1, 2, 4, 5}    {3}           −3 − ε
ε = 1.5,  −1 < λ < 1             {1, 2}          {3, 4, 5}     −4.5 − 3.5λ
1.5 < ε < 2,  −1 < λ < 1         {1, 2, 3}       {4, 5}        −4.5 − 0.5λ − 2ελ
−2.5 < ε < 2,  λ = 1             {1, 2, 3, 4}    {5}           −5 − 2ε
ε = 2,  −1 < λ                   {1, 3}          {2, 4, 5}     −4.5 − 4.5λ
ε = 2,  λ = −1                   {1, 3, 5}       {2, 4}        –
ε = 2,  λ < −1                   {3, 5}          {1, 2, 4}     −4 − 4λ
−2.5 < ε < 1.5,  0 < λ < 1       {1, 2, 4}       {3, 5}