European Journal of Operational Research 117 (1999) 578–590

www.elsevier.com/locate/orms

Theory and Methodology

Decentralized method for computing Pareto solutions in multiparty negotiations

Pirja Heiskanen *

Faculty of Information Technology and Systems, Delft University of Technology, Melkweg 4, 2628 CD Delft, The Netherlands

Received 2 March 1998; accepted 30 June 1998

Abstract

This paper presents a decentralized method for computing Pareto-optimal solutions in multiparty negotiations over continuous issues. The method is based on the well-known weighting method, which is decomposed by introducing a separate decision variable for each decision maker and by applying the dual decomposition method to the resulting problem. The method offers a systematic way of generating some or all Pareto-optimal solutions such that the decision makers do not have to know each other's value functions. Under the assumption of a quasilinear value function, the requirement that a decision maker know the explicit form of his value function can be relaxed. In that case the decision maker is asked to solve a series of multiobjective programming problems in which an additional artificial decision variable is introduced. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Multicriteria analysis; Negotiation; Mathematical programming

1. Introduction

In this paper we present a decentralized method for computing Pareto-optimal solutions in multiparty negotiations over continuous issues. We call a method decentralized if its use requires neither that the decision makers (DMs) know each other's value functions nor that any outside party know all the value functions. The computation of Pareto-optimal solutions in a decentralized

* Present address: Helsinki School of Economics and Business Administration, P.O. Box 1210, FIN-00101 Helsinki, Finland. E-mail: pirja.heiskanen@hkkk.fi

manner is interesting because of the negotiators' frequent failure to achieve efficient agreements in practice [12,15] and their unwillingness to disclose private information for strategic reasons. The literature on negotiation analysis describes several approaches for identifying efficient agreements (see, e.g., [11,18]). A popular approach is to use multiobjective programming methods in a group setting (see, e.g., [3]). However, in this approach the DMs are assumed to be able to agree on the overall objectives, although their opinions about the relative importance of each objective may differ. On the other hand, game-theoretic models developed for computing different equilibria in situations with conflicting interests assume

0377-2217/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved. PII: S0377-2217(98)00276-8


that players' value functions are common knowledge [10,15]. Less attention has been paid to identifying efficient agreements in negotiations characterized both by unwillingness to reveal private information and by a conflict between the DMs' interests.

To our knowledge, there are two decentralized approaches for generating Pareto-optimal solutions in conflict situations. In these approaches the DMs' value functions do not have to be elicited; instead, a mediator, who acts as a neutral coordinator, gathers information on the DMs' preferences during an interactive procedure. In the first approach, the joint tangent of the DMs' value functions at a Pareto-optimal point is sought by iterating with artificial constraints and by asking the DMs to indicate their most preferred points on the constraints [4,7,9,17,21]. With this approach one can generate all interior Pareto-optimal points when the DMs' value functions are differentiable and strictly quasiconcave, as is shown in [9]. Owing to the nonconvexity of the problem of adjusting the constraints, good initial values for the constraints are important for the convergence of the method. In the second approach, jointly improving directions, and thus also joint improvements, are searched for starting from a tentative solution [5,6,19]. Using an appropriate rule for determining the search direction, and under strictly concave value functions, the solution is Pareto-optimal if joint gains can no longer be found in the search direction [6]. With this approach it is also possible to find the Pareto-optimal solutions on the boundary of the decision set.

With the method presented in this paper one can systematically generate all Pareto-optimal solutions, both in the interior and on the boundary of the decision set. The method is based on the well-known weighting method, in which the objective functions are scalarized into a single function by taking a weighted sum of them. It is decomposed by introducing a separate decision variable for each decision maker and by applying the dual decomposition method to the resulting problem. The decomposition results in an interactive procedure between the DMs and a neutral mediator. At every iteration each DM solves his own optimization problem, whereas on the upper level the


mediator updates the parameters of these optimization problems based on the DMs' optimal solutions. When the DMs' optimal solutions coincide, the common optimum is guaranteed to be Pareto-optimal.

The rest of the paper is organized as follows. Assumptions made about the DMs' value functions and the conditions for Pareto-optimality are stated in Section 2. In Section 3 the decentralized method is derived and its solutions are shown to coincide with all Pareto-optimal solutions. In Section 4 the interactive procedure between the mediator and the DMs is described, and the mediator's problem is shown to be convex and differentiable. We also show that if a DM's value function is quasilinear, that is, linear with respect to one decision variable, the DM's problem can be interpreted as a nonlinear multiobjective programming problem in which an artificial decision variable is introduced. In Section 5 a numerical algorithm for solving the mediator's problem is presented. The behaviour of the algorithm is illustrated by numerical examples in Section 6. Section 7 concludes the paper.

2. Conditions for Pareto-optimality

Let us consider a situation with $n$ DMs negotiating over $m$ continuous decision variables. The value of the $i$th decision variable is denoted by $x_i$ and the decision vector is $x = (x_1, \ldots, x_m)^T \in X$. The decision set $X$ is a compact and convex subset of $\mathbb{R}^m$ with a non-empty interior. We also assume that $X$ can be represented using nonlinear inequalities as follows:

$$X = \{ x \in \mathbb{R}^m \mid g_j(x) \le 0 \;\; \forall j = 1, \ldots, p \},$$

where the functions $g_j : \mathbb{R}^m \to \mathbb{R}$, $j = 1, \ldots, p$, are convex. The DMs' value functions are denoted by $u_i : \mathbb{R}^m \to \mathbb{R}$. Later we make use of the following assumption about the value functions:

Assumption A1. $u_i$ is strictly concave and finite on $X$ for $i = 1, \ldots, n$.

Let us then recall the definition of Pareto-optimality:
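As a small illustrative sketch (not part of the paper's formulation; the constraint functions below are invented), a decision set of the form $X = \{x \mid g_j(x) \le 0 \;\forall j\}$ can be represented by a list of convex functions together with a membership test:

```python
# Illustrative sketch: represent X = {x : g_j(x) <= 0 for all j}
# by a list of convex constraint functions and a membership test.
# The particular g_j below are invented examples, not from the paper.

def make_decision_set(constraints):
    """Return a membership test for X = {x : g(x) <= 0 for every g}."""
    def contains(x, tol=1e-9):
        return all(g(x) <= tol for g in constraints)
    return contains

# Hypothetical convex constraints defining a compact set in R^2:
g = [
    lambda x: x[0]**2 + x[1]**2 - 1.0,  # inside the unit disk
    lambda x: -x[0],                    # x1 >= 0
]
in_X = make_decision_set(g)
```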


Definition 1. A decision vector $x^* \in X$ is Pareto-optimal if and only if there is no other $x \in X$ such that $u_i(x) \ge u_i(x^*)$ for all $i = 1, \ldots, n$ and the inequality is strict for at least one $i$.

Pareto-optimality means that it is not possible to increase anyone's utility without decreasing someone else's utility. Under the assumption of strictly concave value functions, the above definition is equivalent to the following condition:

Theorem 1. Assume A1. Then $x^* \in X$ is Pareto-optimal if and only if there is no direction $d \in \mathbb{R}^m$ such that (i) $x^* + \lambda d \in X$ $\forall \lambda \in [0, \bar{\lambda}]$ for some $\bar{\lambda} > 0$, and (ii) $\forall i = 1, \ldots, n$ $\exists \bar{\lambda}_i > 0$ such that $u_i(x^* + \lambda_i d) \ge u_i(x^*)$ $\forall \lambda_i \in [0, \bar{\lambda}_i]$.

Proof. Suppose that there exists a direction $d \in \mathbb{R}^m$ as described in the theorem. Define $\hat{\lambda} = \min\{\bar{\lambda}, \bar{\lambda}_1, \ldots, \bar{\lambda}_n\}$ and take $\hat{x} = x^* + \tfrac{1}{2}\hat{\lambda} d$. Since $\tfrac{1}{2}\hat{\lambda} \in [0, \bar{\lambda}]$, $\hat{x} \in X$. Since $\tfrac{1}{2}\hat{\lambda} \in [0, \bar{\lambda}_i]$ $\forall i = 1, \ldots, n$, and by the strict concavity of $u_i$,

$$u_i(\hat{x}) = u_i\big(\tfrac{1}{2}x^* + \tfrac{1}{2}(x^* + \hat{\lambda} d)\big) > \tfrac{1}{2}u_i(x^*) + \tfrac{1}{2}u_i(x^* + \hat{\lambda} d) \ge u_i(x^*).$$

So $x^*$ is not Pareto-optimal.

Assume then that $x^*$ is not Pareto-optimal. Then there exists a point $\hat{x} \in X$ such that $u_i(\hat{x}) \ge u_i(x^*)$ $\forall i = 1, \ldots, n$. Define $d = \hat{x} - x^*$. By the convexity of $X$, $x^* + \lambda d \in X$ $\forall \lambda \in [0, 1]$. By the strict concavity of $u_i$,

$$u_i(x^* + \lambda d) = u_i\big((1 - \lambda)x^* + \lambda \hat{x}\big) > (1 - \lambda)u_i(x^*) + \lambda u_i(\hat{x}) \ge u_i(x^*) \quad \forall \lambda \in (0, 1).$$

Hence, there exists a direction $d \in \mathbb{R}^m$ as in the theorem. □

3. Decentralized method

3.1. Weighting method

In the well-known weighting method the DMs' value functions are scalarized into a single function by taking a weighted sum of them. The
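Definition 1 can be made concrete on a finite set of candidate points standing in for the continuous set $X$ (an illustrative sketch only; the scalar value functions below are invented):

```python
# Sketch of Definition 1 on a finite candidate set: a point is kept
# if no other candidate weakly improves every DM's value and strictly
# improves at least one. The value functions are invented examples.

def dominates(ua, ub):
    """True if value vector ua Pareto-dominates value vector ub."""
    return (all(a >= b for a, b in zip(ua, ub))
            and any(a > b for a, b in zip(ua, ub)))

def pareto_candidates(points, value_fns):
    vals = [tuple(u(p) for u in value_fns) for p in points]
    return [p for p, v in zip(points, vals)
            if not any(dominates(w, v) for w in vals if w != v)]

# Two DMs with ideal points 0 and 1 on a line: every candidate
# between the ideals is Pareto-optimal, the point beyond is not.
u_a = lambda x: -x**2
u_b = lambda x: -(x - 1)**2
front = pareto_candidates([0.0, 0.5, 1.0, 2.0], [u_a, u_b])
```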

weights have to be nonnegative and at least one of them nonzero. The weighting method can be represented as follows:

$$\max_{x} \;\; \sum_{i=1}^{n} w_i u_i(x) \quad\quad (1)$$
$$\text{s.t.} \;\; g_j(x) \le 0, \;\; \forall j = 1, \ldots, p,$$

where $w_i \ge 0$ for all $i = 1, \ldots, n$ and $\sum_{i=1}^{n} w_i = 1$. Geoffrion [8] showed that if the value functions $u_i$, $i = 1, \ldots, n$, are concave, all properly Pareto-optimal solutions, and only them, can be obtained by the weighting method when the weight vector varies over the set $W^{+} = \{ w \in \mathbb{R}^n \mid w_i > 0 \;\forall i = 1, \ldots, n \text{ and } \sum_{i=1}^{n} w_i = 1 \}$. Here we show that all Pareto-optimal solutions (and only them) can be obtained using the weighting method when strict concavity of the value functions is assumed.

Theorem 2. Assume A1. Then $x^* \in X$ is Pareto-optimal if and only if it is a solution to problem (1) for some weight vector $w$ satisfying $w \ge 0$, $w \ne 0$ and $\sum_{i=1}^{n} w_i = 1$.

Proof. By the convexity of $X$ and the concavity of $u_i$, $i = 1, \ldots, n$, the set $Y = \{ y \in \mathbb{R}^n \mid y = (u_1(x), \ldots, u_n(x))^T \text{ for some } x \in X \}$ is $\mathbb{R}^n_{-}$-convex by Corollary 3.5 in [14]. By Theorem 3.4.4 in [14], the Pareto-optimality of $x^* \in X$ now implies that $x^*$ is a solution to problem (1) for some weight vector $\bar{w}$ satisfying $\bar{w} \ge 0$ and $\bar{w} \ne 0$. Since $\sum_{i=1}^{n} \bar{w}_i > 0$ and the multiplication of the objective by a positive scalar does not change the optimal solution, the weight vector can be scaled to sum to one. Thus $x^*$ is a solution to problem (1).

Let $x^* \in X$ then be an optimal solution to problem (1) for some weight vector $w$ satisfying $w \ge 0$, $w \ne 0$ and $\sum_{i=1}^{n} w_i = 1$. By the strict concavity of $u_i$, $i = 1, \ldots, n$, and the positivity of at least one $w_i$, the objective function is strictly concave. Therefore the optimal solution is unique (Theorem 3.4.2 in [1]). Suppose then, to the contrary, that $x^*$ is not efficient. Then there is $\bar{x} \in X$, $\bar{x} \ne x^*$, such that $u_i(\bar{x}) \ge u_i(x^*)$ $\forall i = 1, \ldots, n$. Multiplying the inequalities by $w_i$ and summing over $i$ yields $\sum_{i=1}^{n} w_i u_i(\bar{x}) \ge \sum_{i=1}^{n} w_i u_i(x^*)$, which contradicts the fact that $x^*$ was the unique optimal solution to problem (1). Hence, $x^*$ must be efficient. □


A different formulation of problem (1) is obtained by introducing separate decision variables for the individual DMs' value functions and requiring them to be equal. This leads to the following optimization problem:

$$\max_{x^1, \ldots, x^n} \;\; \sum_{i=1}^{n} w_i u_i(x^i) \quad\quad (2)$$
$$\text{s.t.} \;\; x^i - x^{i+1} = 0, \;\; \forall i = 1, \ldots, n-1,$$
$$\phantom{\text{s.t.} \;\;} g_j(x^i) \le 0, \;\; \forall j = 1, \ldots, p \text{ and } \forall i = 1, \ldots, n.$$

Clearly, the solutions of this problem coincide with the solutions of problem (1).

3.2. Dual decomposition method

Since $u_i$, $i = 1, \ldots, n$, are concave, $g_j$, $j = 1, \ldots, p$, are convex, and the interior of $X$ is non-empty, by Theorem 28.2 in [13] there exists a Kuhn–Tucker vector for problem (2). It also follows that the optimal values of the primal and dual problems coincide (see Theorem 6.2.4 in [1]). Thus one can solve the dual problem instead of the original problem. The dual of problem (2) is given as

$$\min_{y^1, \ldots, y^{n-1}} \;\; h(y^1, \ldots, y^{n-1}), \quad\quad (3)$$

where the dual function $h(\cdot)$ is defined by

$$h(y^1, \ldots, y^{n-1}) = \max_{x^1, \ldots, x^n} \left\{ \sum_{i=1}^{n} w_i u_i(x^i) + \sum_{i=1}^{n-1} y^{iT}(x^i - x^{i+1}) \;\middle|\; x^1, \ldots, x^n \in X \right\} \quad\quad (4)$$

and $y^1, \ldots, y^{n-1} \in \mathbb{R}^m$ are the Lagrange multiplier vectors corresponding to the equality constraints in problem (2).

The objective function of the maximization problem (4) is additively decomposable with respect to $i$. In addition, there exist only separate constraints for each DM's decision variable. Hence, problem (4) can be solved by solving $n$ independent optimization problems:

$$\max_{x^i} \;\; w_i u_i(x^i) + \tilde{y}^{iT} x^i \quad\quad (5)$$
$$\text{s.t.} \;\; g_j(x^i) \le 0, \;\; \forall j = 1, \ldots, p,$$

where

$$\tilde{y}^i = \begin{cases} y^i & \text{if } i = 1, \\ y^i - y^{i-1} & \text{if } i = 2, \ldots, n-1, \\ -y^{i-1} & \text{if } i = n. \end{cases} \quad\quad (6)$$

Since the multiplication of the objective function of an optimization problem by a positive scalar does not affect the optimal solution, it is possible to give a somewhat different formulation of problem (5) by writing it in the form

$$\max_{x^i} \;\; u_i(x^i) + \bar{y}^{iT} x^i \quad\quad (7)$$
$$\text{s.t.} \;\; g_j(x^i) \le 0, \;\; \forall j = 1, \ldots, p,$$

where

$$\bar{y}^i = \begin{cases} y^i / w_i & \text{if } i = 1, \\ (y^i - y^{i-1}) / w_i & \text{if } i = 2, \ldots, n-1, \\ -y^{i-1} / w_i & \text{if } i = n. \end{cases} \quad\quad (8)$$

This formulation can, of course, be used only if $w_i > 0$. However, this is true whenever DM$i$'s preferences are taken into account when computing a Pareto-optimal solution.

4. Interactive procedure

The decomposition described in Section 3 leads to an interactive procedure between the DMs and a mediator. Here the mediator is a neutral coordinator who tries to help the negotiating parties to find efficient agreements. At the beginning of the procedure for generating a Pareto-optimal point, the mediator chooses initial values for the weights $w$ of the DMs' value functions and initial values for the multipliers $y^i$, $i = 1, \ldots, n-1$. The values of the parameters $\bar{y}^i$, $i = 1, \ldots, n$, are calculated according to formula (8) and communicated to the DMs. Then each DM solves his own parametrized optimization problem (7) and reports the optimal solution to the mediator. If the DMs' optimal solutions coincide, the common optimum is the Pareto-optimal solution corresponding to the weights $w$.
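A single DM's parametrized problem (7) can be sketched numerically as follows. This is illustrative only: the value function is an invented strictly concave quadratic, $X$ is taken to be the box $[0,1]^2$, and projected gradient ascent stands in for whatever convex solver the DM actually uses.

```python
# Sketch of a DM's subproblem (7): maximize u(x) + ybar.x over X,
# here X = [0,1]^2, via projected gradient ascent. The value
# function u is an invented example, not from the paper.

def solve_dm_subproblem(grad_u, ybar, lo=0.0, hi=1.0, steps=2000, lr=0.05):
    x = [0.5, 0.5]                       # start from an interior point
    for _ in range(steps):
        g = grad_u(x)
        # ascend on u(x) + ybar.x, then project back onto the box
        x = [min(hi, max(lo, xi + lr * (gi + yi)))
             for xi, gi, yi in zip(x, g, ybar)]
    return x

# Hypothetical DM value function u(x) = -(x1-0.3)^2 - (x2-0.6)^2:
grad_u = lambda x: [-2 * (x[0] - 0.3), -2 * (x[1] - 0.6)]

x_star = solve_dm_subproblem(grad_u, ybar=[0.0, 0.0])  # optimum (0.3, 0.6)
```

With ybar = (2, 0) the unconstrained optimum x1 = 1.3 is cut off by the box, illustrating that the subproblem also produces solutions on the boundary of the decision set.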


Otherwise, the mediator updates the multipliers $y^i$, $i = 1, \ldots, n-1$, based on the DMs' optimal solutions. The iteration continues until the DMs' optimal solutions coincide or are close to each other.

The choice of the weights at the beginning of the procedure is important, since together with the scaling of the DMs' value functions the weights determine where on the Pareto frontier the solution will lie. However, since the mediator does not have any information about the scales of the DMs' value functions when generating the first Pareto-optimal solution, a reasonable choice is to use equal weights. Later, when the mediator finds out more about the relation between the weights and the location of the corresponding solutions, he can choose the following weights so that the distribution of the Pareto-optimal solutions is as even as possible. Next we consider separately the properties of the mediator's problem and of a DM's problem.

4.1. Properties of the mediator's problem

The mediator's problem is to iteratively minimize the value of the dual function (3) and thus find a point where the DMs' optimal solutions coincide. From the duality theory for convex optimization we obtain several important properties of the mediator's problem. These properties will be made use of in Section 5, where we derive an algorithm for solving the mediator's problem. First, the convexity of the dual function is shown in Theorem 3.

Theorem 3. Assume A1. Then $h(y^1, \ldots, y^{n-1})$ is a convex function.

Proof. Since $X$ is non-empty and compact and $u_i$, $i = 1, \ldots, n$, are continuous on $X$ by the concavity assumption, the result follows directly from Theorem 6.3.1 in [1]. □

In addition to convexity, we can show that under the assumption of strictly concave value functions the dual function $h(y)$ is differentiable, as stated in Theorem 4. Here we denote $y = (y^{1T} \ldots y^{(n-1)T})^T$.

Theorem 4. Assume A1. Then $h(y)$ is differentiable and

$$\nabla h(y) = \big( x^1(y)^T - x^2(y)^T \;\; \ldots \;\; x^{n-1}(y)^T - x^n(y)^T \big)^T.$$

Proof. Since $X$ is a convex, compact set and by A1 the $u_i$, $i = 1, \ldots, n$, are strictly concave, the set of optimizers of problem (4) is a singleton for every $y$ and for every $i$. The result then follows directly from Theorem 6.3.3 in [1]. □

4.2. The DM$i$'s problem

During the interactive procedure a DM has to solve a sequence of constrained maximization problems (7) in which a linear function is added to his value function. If a DM knows the explicit form of his value function, numerical methods for convex optimization can be used to solve the problem. An explicit form for a DM's value function could be available, for example, in a situation where the value functions represent profit functions of competing firms. Another example could be transboundary air pollution negotiations, where the value functions are the costs related to reducing the amount of emitted pollutant in different countries.

Under the assumption of a quasilinear value function, the requirement that a DM's value function has been explicitly constructed can be relaxed. In that case DM$i$'s maximization problem can be interpreted as a nonlinear multiobjective optimization problem where an additional artificial decision variable has to be taken into consideration when evaluating different alternatives. Quasilinearity means that a DM's preferences can be represented using a value function that is linear in one decision variable. This is an assumption used in the literature on microeconomics [20]. Under this assumption DM$i$'s value function $U_i$ can be written as follows:

$$U_i(x, z) = u_i(x) + z, \quad\quad (9)$$

where $x \in \mathbb{R}^m$ and $z \in \mathbb{R}$ are the decision variables. Instead of solving the maximization problem (7), under the quasilinearity assumption a DM is


asked to choose his most preferred point from a given set of alternatives. This can be done even if the DM does not know the explicit form of his value function. Mathematically expressed, he has to solve the following problem:

$$\max_{x, z} \;\; U_i(x, z) \quad\quad (10)$$
$$\text{s.t.} \;\; (x, z) \in F(\bar{y}^i),$$

where $F(\bar{y}^i) = \{ (x, z) \in \mathbb{R}^{m+1} \mid x \in X, \; z = \bar{y}^{iT} x \}$ is the set of all feasible points. It is easy to show that if $(x^*, z^*)$ is DM$i$'s most preferred point, i.e., it solves problem (10), then $x^*$ is a solution to problem (7).

Theorem 5. Assume that DM$i$'s value function is given by Eq. (9). The point $(x^*, z^*)$ is DM$i$'s most preferred point in the set $F(\bar{y}^i)$ if and only if $x^*$ solves problem (7).

Proof. Assume that $(x^*, z^*)$ is the most preferred point. Then $U_i(x^*, z^*) \ge U_i(x, z)$ $\forall (x, z) \in F(\bar{y}^i)$. By the quasilinearity of DM$i$'s value function and by the definition of $z$, $U_i(x, z) = u_i(x) + \bar{y}^{iT} x$. It follows that $u_i(x^*) + \bar{y}^{iT} x^* \ge u_i(x) + \bar{y}^{iT} x$ $\forall x \in X$. Hence, $x^*$ solves problem (7).

Assume then that $x^*$ solves problem (7). This means that $u_i(x^*) + \bar{y}^{iT} x^* \ge u_i(x) + \bar{y}^{iT} x$ $\forall x \in X$. By the quasilinearity of DM$i$'s value function, $U_i(x^*, \bar{y}^{iT} x^*) \ge U_i(x, \bar{y}^{iT} x)$ $\forall x \in X$. By defining $z^* = \bar{y}^{iT} x^*$ and, equivalently, $z = \bar{y}^{iT} x$, it follows that $U_i(x^*, z^*) \ge U_i(x, z)$ $\forall (x, z) \in F(\bar{y}^i)$. Hence, $(x^*, z^*)$ is DM$i$'s most preferred point. □

If a DM finds it difficult to choose his most preferred point from the given set, it may be possible to support his selection problem by using some interactive nonlinear multiobjective programming method presented in the literature [16,22].

5. Algorithm for solving the mediator's problem

Assuming the strict concavity of the DMs' value functions, the dual function $h(y) =$


$h(y^1, \ldots, y^{n-1})$ is convex and differentiable by Theorems 3 and 4. In addition, if we assume that at every iteration round the mediator obtains the optimal solutions as well as the optimal values of the DMs' subproblems, all convex optimization methods using derivative information (see, e.g., [1,2]) can be applied to solve the mediator's problem. However, in some situations the DMs may not be willing to reveal their optimal values for strategic reasons, or the optimal values may not be available because the DMs do not themselves know the explicit form of their value functions (recall Section 4.2). Therefore we suggest here an algorithm that does not require the use of optimal values.

The suggested algorithm is a gradient method with an inexact line search. An exact line search would be too costly in the sense that it would require too many optimal solutions from the DMs. At each iteration round a step in the direction of the negative gradient is taken, and an inexact line search is performed to determine an appropriate step size. The step size is chosen so that the value of the dual function never increases. However, since the optimal values are not known, it is impossible simply to compare them at subsequent iteration steps. Neither can one use the rules, such as Armijo's rule or the Goldstein test, suggested for determining an appropriate step size in the literature on convex optimization (see, e.g., [2]). Therefore, the step size is determined here by observing the value of the directional derivative of the dual function in the line search direction. So, if $y(k) = (y^1(k)^T \ldots y^{n-1}(k)^T)^T$ is the current iterate and the line search direction is $s(k) = -\nabla h(y(k))$, then the directional derivative $d$ as a function of the step size $\lambda \ge 0$ is the following:

$$d(\lambda) = \frac{\nabla h[y(k) + \lambda s(k)]^T s(k)}{\| s(k) \|}. \quad\quad (11)$$

By the convexity and differentiability of the dual function, the value of the directional derivative is zero when the step size is optimal, i.e., $\lambda = \lambda^*$. For values $\lambda < \lambda^*$, $d(\lambda) < 0$, and for values $\lambda > \lambda^*$, $d(\lambda) > 0$, indicating that the value of the dual function first decreases and then starts to increase


after the optimal solution when moving in the line search direction. Hence, if the step size $\lambda$ is chosen so that $d(\lambda) \le 0$, the value of the dual function is guaranteed not to increase.

The termination condition for the iteration is that the error term

$$e(k) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{\sqrt{m}} \, \| x^i(y(k)) - x_c \|, \quad\quad (12)$$

where $x_c = \frac{1}{n} \sum_{i=1}^{n} x^i(y(k))$, is smaller than a prespecified scalar $\epsilon > 0$. In (12) the distance from DM$i$'s optimum to the average of the optima is calculated. It is then scaled with respect to the number of decision variables; the scaling reduces the dependence of the error term on the dimension of the problem. Finally, the average of the scaled distances over the DMs is taken. An idea of the size of the error can be obtained from the fact that if each component of every DM's optimum deviates from the average of the optima by $\Delta$, then the value of the error term is exactly $\Delta$. If the error term is zero, the gradient of the dual function is also zero. Due to the convexity of the dual function, this point is a global optimum. Hence, the corresponding common optimal solution $x_c$ is guaranteed to be Pareto-optimal.

A stepwise description of the algorithm is the following:

Algorithm for the mediator's problem:

Step 1: Choose values for the parameters $w$, $\alpha < 1$, $\beta > 1$ and $\epsilon > 0$. Scale $w$ to sum to one. Choose initial values $y(1)$ and $\mu(1)$. Set $k := 1$ and go to Step 2.

Step 2: Calculate the values of $\bar{y}^i$ for $i = 1, \ldots, n$ according to formula (8) with $y := y(k)$. Let the DMs solve their own optimization problems, where DM$i$'s problem is given by (7). Denote the optimal solution of DM$i$'s problem by $x^i(y(k))$ for all $i = 1, \ldots, n$. Go to Step 3.

Step 3: Calculate the error $e(k)$ according to formula (12) and the line search direction

$$s(k) := -\nabla h(y(k)) = -\big( x^1(y(k))^T - x^2(y(k))^T \;\; \ldots \;\; x^{n-1}(y(k))^T - x^n(y(k))^T \big)^T.$$

If $e(k) < \epsilon$, stop; $x_c$ is the solution. Otherwise go to Step 4.

Step 4: If $k = 1$, update the multipliers $y(k+1) := y(k) + \mu(k) s(k)$, set $k := k + 1$ and go back to Step 2. Otherwise go to Step 5.

Step 5: Calculate the directional derivative $d$ at $y(k)$; that is,

$$d := -\frac{s(k)^T s(k-1)}{\| s(k-1) \|}.$$

If $d \le 0$, set $\mu(k) := \beta \mu(k-1)$, update the multipliers $y(k+1) := y(k) + \mu(k) s(k)$, set $k := k + 1$ and go back to Step 2. Otherwise, set $\mu(k-1) := \alpha \mu(k-1)$, update the multipliers $y(k) := y(k-1) + \mu(k-1) s(k-1)$, and go back to Step 2.

Based on numerical tests, good values for the parameters $\alpha$ and $\beta$ are $1/2$ and $2$, respectively. A good initial value $y(1)$ is important for fast convergence of the method. We suggest the value $y_0 = (y_0^{1T} \; y_0^{2T})^T$, where

$$y_0^i = \begin{cases} (1 \; \ldots \; 1)^T & \text{if } i \text{ is odd}, \\ (0 \; \ldots \; 0)^T & \text{if } i \text{ is even}, \end{cases}$$

to be used if no information on a better value is available.

When generating several Pareto-optimal solutions, we can make use of the information obtained when computing the previous Pareto solutions to obtain a better initial value. This idea is described below in a procedure that can be used for generating an approximation to the Pareto frontier. For simplicity we describe the procedure in the special case where $n = 3$; based on the description it should, however, be clear how the procedure can be extended to a situation with an arbitrary number of DMs. In the sequel we use the following notation. The parameter $i_{\max}$ defines the number of Pareto-optimal solutions computed, which is $(2 \cdot i_{\max} + 1)^{n-1}$. The indices defining the weights used are $i2$ and $i3$, and the corresponding weight vector is $w(i2, i3) = (1 \;\; 2^{i2} \;\; 2^{i3})^T$. The optimal multiplier vector obtained by using the algorithm for the mediator's problem with given weights $w(i2, i3)$ is denoted by $y(i2, i3)$.
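The mediator's loop can be sketched for $n = 2$ DMs as follows. This is a simplified illustration, not the exact algorithm above: the DM subproblems use invented quadratic value functions $u_i(x) = -\|x - a_i\|^2$, for which subproblem (7) has the closed-form optimum $x = a_i + \bar{y}^i/2$, the constraints are assumed never to bind, and on an overshoot ($d > 0$) the step size is merely shrunk going forward instead of the previous step being retaken.

```python
# Simplified sketch of the mediator's gradient method for n = 2 DMs.
# u_i(x) = -||x - a_i||^2 (invented), so subproblem (7) has the
# closed-form optimum x = a_i + ybar_i / 2; the step-size rule is a
# simplification of Steps 4-5 (shrink forward instead of retaking).
import math

def dm_opt(a, ybar):
    return [ai + yi / 2 for ai, yi in zip(a, ybar)]

def mediator(a1, a2, w=(0.5, 0.5), mu=0.5, alpha=0.5, beta=2.0,
             eps=1e-6, max_iter=200):
    m = len(a1)
    y = [1.0] * m                              # initial multipliers y(1)
    s_prev = None
    for _ in range(max_iter):
        # Step 2: DMs solve subproblems (7) with ybar from (8).
        x1 = dm_opt(a1, [yi / w[0] for yi in y])
        x2 = dm_opt(a2, [-yi / w[1] for yi in y])
        # Step 3: error (12) and direction s = -grad h = -(x1 - x2).
        xc = [(u + v) / 2 for u, v in zip(x1, x2)]
        err = (math.dist(x1, xc) + math.dist(x2, xc)) / (2 * math.sqrt(m))
        if err < eps:
            return xc                          # common optimum found
        s = [-(u - v) for u, v in zip(x1, x2)]
        # Step 5 (simplified): sign test on the directional derivative.
        if s_prev is not None:
            d = -sum(si * pi for si, pi in zip(s, s_prev)) / math.dist(s_prev, [0.0] * m)
            mu = beta * mu if d <= 0 else alpha * mu
        y = [yi + mu * si for yi, si in zip(y, s)]
        s_prev = s
    return xc

agreement = mediator([0.0, 0.0], [1.0, 2.0])
```

With equal weights the DMs' optima are driven together at the midpoint of $a_1$ and $a_2$, which for these value functions is indeed the maximizer of $u_1 + u_2$.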


Approximation to the Pareto frontier

set i2 := 0
set i3 := 0
calculate w(i2, i3)
set y(1) := y0
calculate y(i2, i3)
for i3 := 1 to imax
    calculate w(i2, i3)
    set y(1) := y(i2, i3 - 1)
    calculate y(i2, i3)
end
for i3 := -1 downto -imax
    calculate w(i2, i3)
    set y(1) := y(i2, i3 + 1)
    calculate y(i2, i3)
end
for i3 := -imax to imax
    for i2 := 1 to imax
        calculate w(i2, i3)
        set y(1) := y(i2 - 1, i3)
        calculate y(i2, i3)
    end
    for i2 := -1 downto -imax
        calculate w(i2, i3)
        set y(1) := y(i2 + 1, i3)
        calculate y(i2, i3)
    end
end

6. Numerical examples

In practice the solution of a DM's nonlinear multiobjective optimization problem may be laborious and time consuming. Thus the applicability of the decentralized method depends greatly on how many problems each DM is asked to solve. Therefore we study here, by numerical examples, the number of optimal solutions required from the DMs when the proposed decentralized method is applied to compute Pareto-optimal solutions in different negotiation situations. In the examples the optimal solutions for the DMs' individual subproblems are obtained by


maximizing given strictly concave value functions. The number of DMs, the number of decision variables and the shape of the value functions are varied. In all examples we generate an approximation to the Pareto frontier using the algorithm suggested in Section 5. In the algorithm for the mediator's problem the initial value for the step size is always 0.5, and the values of the parameters $\alpha$ and $\beta$ are chosen to be $1/2$ and $2$. The iteration is continued until the error is smaller than $\epsilon = 0.02$.

6.1. Example 1

As the first example we consider a simple two-party negotiation with two decision variables. Both value functions are differentiable. DM2's value function is quadratic, but in DM1's value function an exponential term is added to an otherwise quadratic function. The value functions are given as follows:

$$u_1(x) = -4x_1^2 - 0.2x_2^2 + x_1 x_2 - e^{x_2},$$
$$u_2(x) = -0.5(x_1 - 1)^2 - 4(x_2 - 1)^2 - (x_1 - 1)(x_2 - 1).$$

The decision set $X$ is defined by the following linear inequalities:

$$2x_1 + 4x_2 - 5 \le 0,$$
$$-8x_1 - 6x_2 + 5 \le 0,$$
$$4x_1 - 6x_2 - 3 \le 0.$$

Fig. 1 shows isovalue contours of the DMs' value functions, the set of constraints and the unconstrained Pareto-optimal frontier (the Pareto-optimal frontier obtained by disregarding the constraints). From the figure one can see that when the decision variables are constrained to the set $X$, part of the Pareto-optimal frontier lies on the boundary of the decision set.

When generating an approximation to the Pareto frontier, the parameter $i_{\max}$ was 5; hence the number of points generated was 11. Table 1 shows the weights used and the solutions obtained, as well as the exact Pareto-optimal solutions corresponding to the weights in both the constrained and the unconstrained case. The last column in the table

586

P. Heiskanen / European Journal of Operational Research 117 (1999) 578±590

Fig. 1. Decision set, isovalue contours of the DMs' value functions and the unconstrained Pareto frontier in Example 1.

tells how many times the gradient of the dual function had to be evaluated to achieve the required accuracy of the solution. Note that the number of gradient evaluations equals the number of optimization problems solved by each DM. In Fig. 2 we have drawn the solution points and the corresponding constrained Pareto optima. Each point is labeled with the weights for DM1 and DM2 given in the subscript of $w$.

Fig. 2. Solutions obtained (circles) and the corresponding constrained Pareto-optima (stars) when generating an approximation to the Pareto frontier in Example 1.

When computing the first Pareto-optimal point, nine gradient evaluations were needed, i.e., each DM had to solve nine optimization problems. For the following Pareto-optimal solutions fewer gradient evaluations were needed, since a better initial value for the multiplier $y$ was used, as suggested in the algorithm given in Section 5. The number of gradient evaluations varied between 2 and 9, with an average of 3.73.
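As a quick sanity check on the reported figures (illustrative; the constraints are transcribed from the problem statement above), the equal-weight solution (0.237, 0.801) in Table 1 can be verified to lie in the decision set $X$:

```python
# The three linear constraints of Example 1 and a feasibility check
# of the equal-weight solution reported in Table 1.
g = [
    lambda x:  2 * x[0] + 4 * x[1] - 5,
    lambda x: -8 * x[0] - 6 * x[1] + 5,
    lambda x:  4 * x[0] - 6 * x[1] - 3,
]
x_w11 = (0.237, 0.801)
feasible = all(gj(x_w11) <= 0 for gj in g)
```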

Table 1
The solutions obtained and the corresponding constrained and unconstrained Pareto-optimal points in Example 1

Weights       Solution           Constrained Pareto   Unconstrained Pareto    # of gradient evaluations
(1 1/32)^T    (0.283 0.456)^T    (0.283 0.456)^T      (-0.058 -0.545)^T       5
(1 1/16)^T    (0.272 0.471)^T    (0.271 0.473)^T      (-0.014 -0.253)^T       3
(1 1/8)^T     (0.240 0.513)^T    (0.250 0.500)^T      (0.038 0.065)^T         2
(1 1/4)^T     (0.207 0.557)^T    (0.214 0.549)^T      (0.094 0.366)^T         3
(1 1/2)^T     (0.157 0.627)^T    (0.158 0.623)^T      (0.154 0.618)^T         3
(1 1)^T       (0.237 0.801)^T    (0.222 0.805)^T      (0.222 0.805)^T         9
(1 2)^T       (0.314 0.917)^T    (0.307 0.925)^T      (0.307 0.925)^T         3
(1 4)^T       (0.416 0.993)^T    (0.419 0.989)^T      (0.419 0.989)^T         4
(1 8)^T       (0.500 0.994)^T    (0.512 0.994)^T      (0.556 1.015)^T         2
(1 16)^T      (0.561 0.969)^T    (0.555 0.972)^T      (0.697 1.019)^T         3
(1 32)^T      (0.599 0.951)^T    (0.586 0.957)^T      (0.814 1.014)^T         5


6.2. Example 2

Since an optimization problem usually becomes more difficult as its dimension increases, we next study a situation where the dimension of the mediator's problem is much higher than in the first example. Here three DMs are negotiating over five decision variables, so the dimension of the mediator's problem, $(n-1)m$, is now 10, whereas in the previous example it was 2. All value functions are differentiable. DM2's value function is quadratic, but the other value functions are nonquadratic. They are given as follows:

$$u_1(x) = -4x_1^2 - 0.2x_2^2 - 0.5x_3^2 - 0.5x_4^2 - 3x_5^2 + x_1 x_2 + 0.4x_2 x_3 + 0.2x_3 x_4 + 0.4x_4 x_5 - e^{x_2} - e^{x_4},$$

$$u_2(x) = -0.5(x_1 - 1)^2 - 4(x_2 - 1)^2 - 2(x_3 - 1)^2 - 6x_4^2 - x_5^2 - (x_1 - 1)(x_2 - 1) + 0.2(x_2 - 1)(x_3 - 1) - 0.4x_4 x_5,$$

$$u_3(x) = -0.8x_1^2 - 0.6x_2^2 - (x_3 - 1)^2 - 0.6(x_4 - 1)^2 - 0.4(x_5 - 1)^2 - 0.4x_1 x_2 - 0.4(x_3 - 1)(x_4 - 1) - (x_3 - 1)^4 + \ln(1 - 0.25x_1).$$

The values of the decision variables are restricted by the following simple bounds:

$$0 \le x_i \le 1, \;\; \forall i = 1, \ldots, 5.$$
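As a sanity check (illustrative), DM3's value function contains a logarithmic term, so it is worth confirming that it stays well defined on the box and that, for instance, the equal-weight solution reported in Table 2 respects the bounds:

```python
# DM3's value function from Example 2 (transcribed from the text),
# evaluated at the equal-weight solution reported in Table 2.
import math

def u3(x):
    return (-0.8 * x[0]**2 - 0.6 * x[1]**2 - (x[2] - 1)**2
            - 0.6 * (x[3] - 1)**2 - 0.4 * (x[4] - 1)**2
            - 0.4 * x[0] * x[1] - 0.4 * (x[2] - 1) * (x[3] - 1)
            - (x[2] - 1)**4 + math.log(1 - 0.25 * x[0]))

x_eq = (0.136, 0.750, 0.949, 0.027, 0.091)
in_bounds = all(0.0 <= xi <= 1.0 for xi in x_eq)
value = u3(x_eq)   # the log term is defined: 1 - 0.25*x1 >= 0.75 on the box
```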

Table 2 shows the results obtained when an approximation to the Pareto frontier was generated in Example 2. Here $i_{\max}$ was 2, so the number of points generated was 25. In addition to the solutions obtained and the number of gradient evaluations needed, we have calculated the distance between each solution and the corresponding constrained Pareto-optimal point. Since the behaviour of the algorithm may differ when some of the constraints are active, we also indicate whether the Pareto-optimal point is on the boundary of the decision set. Here the number of

Table 2 The solutions obtained and the distance from the Pareto-optimal point in Example 2 Weights

Solution

…1 14 14†T …1 14 12†T …1 14 1†T …1 14 2†T …1 14 4†T …1 12 14†T …1 12 12†T …1 12 1†T …1 12 2†T …1 12 4†T …1 1 14†T …1 1 12†T …1 1 1†T …1 1 2†T …1 1 4†T …1 2 14†T …1 2 12†T …1 2 1†T …1 2 2†T …1 2 4†T …1 4 14†T …1 4 12†T …1 4 1†T …1 4 2†T …1 4 4†T

(0.081 (0.064 (0.037 (0.000 (0.000 (0.134 (0.113 (0.079 (0.030 (0.000 (0.191 (0.175 (0.136 (0.077 (0.005 (0.280 (0.256 (0.215 (0.149 (0.062 (0.391 (0.368 (0.325 (0.255 (0.155

0.398 0.376 0.335 0.274 0.199 0.633 0.606 0.557 0.480 0.375 0.813 0.790 0.750 0.681 0.577 0.928 0.915 0.889 0.841 0.759 0.992 0.985 0.971 0.944 0.893

0.702 0.780 0.874 0.957 1.000 0.806 0.852 0.914 0.978 1.000 0.887 0.913 0.949 0.994 1.000 0.941 0.953 0.973 1.000 1.000 0.971 0.977 0.988 1.000 1.000

0.000 0.000 0.074 0.224 0.409 0.000 0.000 0.047 0.158 0.313 0.001 0.000 0.027 0.097 0.211 0.000 0.000 0.014 0.054 0.126 0.000 0.000 0.007 0.028 0.070

0.029)T 0.058)T 0.113)T 0.206)T 0.343)T 0.028)T 0.054)T 0.104)T 0.190)T 0.320)T 0.015)T 0.048)T 0.091)T 0.167)T 0.286)T 0.020)T 0.038)T 0.074)T 0.136)T 0.239)T 0.009)T 0.028)T 0.054)T 0.100)T 0.181)T

Dist. from Pareto

Boundary point

# of gradient evaluations

0.017 0.019 0.009 0.014 0.013 0.024 0.007 0.008 0.008 0.006 0.009 0.013 0.005 0.007 0.006 0.009 0.013 0.008 0.015 0.006 0.024 0.015 0.009 0.011 0.007

yes yes no yes yes yes yes no no yes no yes no no yes yes yes no yes yes yes yes no yes yes

16 14 11 12 19 9 8 14 8 13 13 12 18 12 9 18 11 11 7 9 28 18 10 8 10


gradient evaluations varied between 7 and 28, with an average of 12.72. Table 2 shows that the most gradient evaluations were needed when the weights of the DMs differed greatly from each other. However, the algorithm seems to find the Pareto-optimal points on the boundary as fast as the interior Pareto-optimal points.

6.3. Example 3

In the third example we study the behaviour of the method in a situation where the DMs' value functions are nondifferentiable. Here two DMs are negotiating over two decision variables and both value functions are piecewise quadratic:

u1(x) = −0.5x1^2 − 0.5x2^2              if x1 ≤ 0.5,
u1(x) = −4.5x1^2 − 0.5x2^2 + 1          otherwise;

u2(x) = −0.5(x1 − 1)^2 − 5(x2 − 1)^2 + 1    if x2 ≤ 0.5,
u2(x) = −0.5(x1 − 1)^2 − (x2 − 1)^2         otherwise.

The decision set is given by simple bounds:

−2 ≤ xi ≤ 2,  for all i = 1, 2.

Fig. 3. Isovalue contours of the DMs' value functions and the Pareto frontier in Example 3.
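A quick check on the piecewise definitions above: both branches of each function agree at the kink (x1 = 0.5 for u1, x2 = 0.5 for u2), so the functions are continuous but nondifferentiable there. A minimal sketch, with the function names taken from the text:

```python
def u1(x1, x2):
    # kink at x1 = 0.5; both branches give -0.125 - 0.5*x2**2 there
    if x1 <= 0.5:
        return -0.5*x1**2 - 0.5*x2**2
    return -4.5*x1**2 - 0.5*x2**2 + 1.0

def u2(x1, x2):
    # kink at x2 = 0.5; both branches give -0.5*(x1-1)**2 - 0.25 there
    if x2 <= 0.5:
        return -0.5*(x1-1)**2 - 5*(x2-1)**2 + 1.0
    return -0.5*(x1-1)**2 - (x2-1)**2

eps = 1e-9
# continuity across the kinks (branch values agree up to eps)
assert abs(u1(0.5, 0.3) - u1(0.5 + eps, 0.3)) < 1e-6
assert abs(u2(0.3, 0.5) - u2(0.3, 0.5 + eps)) < 1e-6
```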

Fig. 3 shows the isovalue contours of the DMs' value functions, the DMs' global optima, and the Pareto-optimal frontier. Although in this case the Pareto frontier lies in the interior of the decision set, it is nonsmooth due to the nondifferentiability of the value functions. An approximation to the Pareto frontier was generated using the value 5 for the parameter imax. Therefore the weights used were exactly the same as in Example 1. Table 3 shows the weights, the solutions obtained, the corresponding exact Pareto-optimal points and the number of gradient evaluations of the dual function. Fig. 4 shows the solutions obtained and the corresponding Pareto-optimal solutions as well as the Pareto frontier. When computing the first Pareto-optimal solution, corresponding to equal weights, both DMs had to solve five optimization problems. The number of gradient evaluations varied between 4 and 15, with an average of 8.64.
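The weight grids used in the examples are powers of 2 with exponents running from −imax to imax for every DM except the first, whose weight is fixed at 1 (imax = 5 gives the 11 weight vectors of Examples 1 and 3; imax = 2 gives the 25 of Example 2). A sketch of this grid construction, with all names ours:

```python
def weight_grid(n_dms, imax, base=2.0):
    """Weight vectors (1, base**i2, ..., base**in) with each exponent
    running over {-imax, ..., imax}; the first DM's weight is fixed to 1."""
    grids = [[1.0]]
    for _ in range(n_dms - 1):
        grids = [g + [base**i] for g in grids
                 for i in range(-imax, imax + 1)]
    return grids

# Example 2: three DMs, imax = 2  ->  5 * 5 = 25 weight vectors
# Examples 1 and 3: two DMs, imax = 5  ->  11 weight vectors
weights_ex2 = weight_grid(3, 2)
weights_ex3 = weight_grid(2, 5)
```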

7. Conclusion

In this paper a decentralized method for computing some or all Pareto-optimal solutions is developed. The method is applicable in situations where either the DMs themselves know their value functions but, for strategic reasons, are not willing to reveal them, or where the DMs' value functions have not been elicited but can be assumed to be quasilinear. The method enjoys the desirable property that whenever the algorithm converges to a point where the DMs' optimal solutions coincide, the Pareto-optimality of this common optimum is guaranteed. In addition, Pareto-optimal points can be calculated even if the DMs' value functions are nondifferentiable. Based on the numerical examples, the number of nonlinear multiobjective programming problems solved by a DM when searching for a Pareto solution seems to be reasonable when the dimension of the problem, i.e., the number of both DMs and decision variables, is low. However, the DMs may find their task too laborious when the dimension of the mediator's problem increases. We believe


Table 3
The solutions obtained and corresponding Pareto-optimal points in Example 3

Weights       Solution           Pareto solution    # of gradient evaluations
(1 1/32)^T    (0.032 0.226)^T    (0.030 0.238)^T    15
(1 1/16)^T    (0.072 0.381)^T    (0.059 0.385)^T    11
(1 1/8)^T     (0.115 0.478)^T    (0.111 0.500)^T    11
(1 1/4)^T     (0.193 0.474)^T    (0.200 0.500)^T    5
(1 1/2)^T     (0.324 0.506)^T    (0.333 0.500)^T    4
(1 1)^T       (0.500 0.672)^T    (0.500 0.667)^T    5
(1 2)^T       (0.524 0.800)^T    (0.500 0.800)^T    4
(1 4)^T       (0.511 0.902)^T    (0.500 0.889)^T    7
(1 8)^T       (0.519 0.956)^T    (0.500 0.941)^T    8
(1 16)^T      (0.643 0.955)^T    (0.640 0.970)^T    15
(1 32)^T      (0.774 0.961)^T    (0.780 0.985)^T    10

that in those cases the number of problems solved by a DM can be reduced by further development of the mediator's algorithm. Attention should be paid especially to choosing a good initial value for the Lagrange multipliers and to the possibilities of approximating the Hessian matrix of the dual function using the DMs' previous optimal solutions. In this paper the decentralized method was developed by introducing new decision variables and by applying the dual decomposition method to the weighting method. An obvious extension of this work is to investigate the possibilities of decentralizing other multiobjective programming methods using a similar procedure.

Fig. 4. Solutions obtained (circles) and the corresponding Pareto-optima (stars) when generating an approximation to the Pareto frontier in Example 3.

Acknowledgements

This work has been partially supported by grants from the Finnish Cultural Foundation, the Emil Aaltonen Foundation and the Foundation of the Helsinki University of Technology. The author would also like to thank Y. Ermoliev for clarifying discussions and P.J. Korhonen and F.A. Lootsma for their comments.

References

[1] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming, Wiley, New York, 1993.
[2] D.P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, MA, 1995.
[3] T. Bui, Co-oP: A Group Decision Support System for Cooperative Multiple Criteria Group Decision Making, Lecture Notes in Computer Science, Vol. 290, Springer, Berlin, 1987.
[4] H. Ehtamo, R.P. Hämäläinen, P. Heiskanen, J.E. Teich, M. Verkama, S. Zionts, Generating Pareto solutions in two-party negotiations by adjusting artificial constraints, Unpublished manuscript, Systems Analysis Laboratory, Helsinki University of Technology, 1997.
[5] H. Ehtamo, E. Kettunen, R.P. Hämäläinen, Searching for joint gains in multi-party negotiations, Unpublished manuscript, Systems Analysis Laboratory, Helsinki University of Technology, 1997.
[6] H. Ehtamo, M. Verkama, R.P. Hämäläinen, How to select fair improving directions in a negotiation model over continuous issues, IEEE Transactions on Systems, Man, and Cybernetics (forthcoming).
[7] H. Ehtamo, M. Verkama, R.P. Hämäläinen, On distributed computation of Pareto solutions for two decision makers, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 26 (4) (1996) 1–6.
[8] A.M. Geoffrion, Proper efficiency and the theory of vector maximization, Journal of Mathematical Analysis and Applications 22 (1968) 618–630.
[9] P. Heiskanen, H. Ehtamo, R.P. Hämäläinen, Constraint proposal method for computing Pareto solutions in n-party negotiations, Unpublished manuscript, Systems Analysis Laboratory, Helsinki University of Technology, 1998.
[10] K.W. Hipel, L. Fang, D.M. Kilgour, Game theoretic models in engineering decision making, Journal of Infrastructure Planning and Management 470 (1993) 1–16.
[11] T. Jelassi, G. Kersten, S. Zionts, An introduction to group decision and negotiations support, in: C.A. Bana e Costa (Ed.), Readings in Multiple Criteria Decision Aid, Springer, Berlin, 1990.
[12] H. Raiffa, The Art and Science of Negotiation, Harvard University Press, Cambridge, MA, 1982.
[13] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[14] Y. Sawaragi, H. Nakayama, T. Tanino, Theory of Multiobjective Optimization, Academic Press, Orlando, FL, 1985.
[15] J.K. Sebenius, Negotiation analysis: A characterization and review, Management Science 38 (1) (1992) 18–38.
[16] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation and Application, Wiley, New York, 1986.
[17] J.E. Teich, H. Wallenius, M. Kuula, S. Zionts, A decision support approach for negotiation with an application to agricultural income policy negotiations, European Journal of Operational Research 81 (1995) 76–87.
[18] J.E. Teich, H. Wallenius, J. Wallenius, Advances in negotiation science, Yöneylem Araştırması Dergisi/Transactions on Operational Research 6 (1994) 55–94.
[19] J.E. Teich, H. Wallenius, J. Wallenius, S. Zionts, Identifying Pareto-optimal settlements for two-party resource allocation negotiations, European Journal of Operational Research 93 (1996) 536–549.
[20] H.R. Varian, Microeconomic Analysis, 3rd ed., Norton, New York, 1992.
[21] M. Verkama, H. Ehtamo, R.P. Hämäläinen, Distributed computation of Pareto solutions in n-player games, Mathematical Programming 74 (1996) 29–45.
[22] P.L. Yu, Multiple-Criteria Decision Making, Plenum Press, New York, 1985.