Knowledge-Based Systems 52 (2013) 53–64
Hesitant fuzzy multi-attribute decision making based on TOPSIS with incomplete weight information

Zeshui Xu a,b,*, Xiaolu Zhang b

a School of Economics and Management, Southeast University, Nanjing, Jiangsu 211189, China
b College of Sciences, PLA University of Science and Technology, Nanjing, Jiangsu 210007, China
Article history: Received 5 March 2013; Received in revised form 26 April 2013; Accepted 26 May 2013; Available online 5 June 2013

Keywords: Hesitant fuzzy set; TOPSIS; Multi-attribute decision making; Maximizing deviation method; Interval-valued hesitant fuzzy set
Abstract

Hesitant fuzzy set (HFS), which allows the membership degree of an element to a set to be represented by several possible values, is considered a powerful tool to express uncertain information in multi-attribute decision making (MADM) problems. In this paper, we develop a novel approach based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and the maximizing deviation method for solving MADM problems in which the evaluation information provided by the decision maker is expressed in hesitant fuzzy elements and the information about attribute weights is incomplete. Two key issues are addressed in this approach. The first is to establish an optimization model based on the maximizing deviation method, which can be used to determine the attribute weights. The second, following the idea of the TOPSIS of Hwang and Yoon [1], is to calculate the relative closeness coefficient of each alternative to the hesitant positive-ideal solution, based on which the considered alternatives are ranked and the most desirable one is selected. An energy policy selection problem is used to illustrate the detailed implementation process of the proposed approach and to demonstrate its validity and applicability. Finally, the extended results in interval-valued hesitant fuzzy situations are also pointed out.

© 2013 Elsevier B.V. All rights reserved.
1. Introduction

Multi-attribute decision making (MADM), which addresses the problem of making an optimal choice that has the highest degree of satisfaction from a set of alternatives characterized in terms of their attributes, is a common task in human activities. In classical MADM, the assessments of alternatives are precisely known [2,3]. However, because of the inherent vagueness of human preferences and the fuzziness and uncertainty of the objects involved, the attributes in decision making problems are not always expressed as real numbers; some are better denoted by fuzzy values, such as interval values [4–6], linguistic variables [7,8], intuitionistic fuzzy values [9–12], and hesitant fuzzy elements (HFEs) [13,14], to mention a few. Since Zadeh [15] first proposed the basic model of fuzzy decision making based on the theory of fuzzy mathematics in 1965, fuzzy MADM has been receiving more and more attention. Many methods for MADM, such as the TOPSIS method [16–18], the maximizing deviation method [19–21], the gray relational analysis method [22], the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje)
* Corresponding author at: School of Economics and Management, Southeast University, Nanjing, Jiangsu 211189, China. E-mail addresses: [email protected] (Z. Xu), [email protected] (X. Zhang).
http://dx.doi.org/10.1016/j.knosys.2013.05.011
method [23–25], the PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) method [26], and the ELECTRE (ELimination Et Choix Traduisant la REalité) method [27,28], have been extended to handle different types of attribute values, such as interval values, linguistic variables, and intuitionistic fuzzy values. None of the above methods, however, has yet been accommodated to the hesitant fuzzy assessments provided by the decision makers (DMs). The hesitant fuzzy set (HFS) [13,14], introduced by Torra and Narukawa as an extension of the fuzzy set [15], describes situations in which the membership of an element in a given set may take a few different values, and it is a useful means to describe and deal with uncertain information in the process of MADM. For example, to reach a reasonable decision result, a decision organization containing a number of DMs is authorized to estimate the degree to which an alternative satisfies a criterion. Suppose that some DMs provide 0.3, some provide 0.5, and the others provide 0.6, and that these three parts cannot persuade each other; then the degree to which the alternative satisfies the criterion can be represented by the HFE {0.3, 0.5, 0.6}. Note that the HFE {0.3, 0.5, 0.6} describes this situation more objectively than the interval-valued fuzzy set [0.3, 0.6], because the degrees to which the alternative satisfies the criterion are not a convex combination of 0.3 and 0.6, or the interval between 0.3 and 0.6, but exactly three possible values [31]. So, the use of hesitant
fuzzy assessments makes the DMs’ judgments more reliable and informative in decision making. Xia and Xu [29] developed some aggregation operators for hesitant fuzzy information, and gave their application for solving MADM problems under hesitant fuzzy environment. Xu and Xia [30] gave a detailed study on distance and similarity measures for HFSs and proposed an approach based on distance measures for MADM problems. Xia et al. [31] also proposed some other hesitant fuzzy aggregation techniques and applied them in group decision making. Yu et al. [32] proposed a hesitant fuzzy Choquet integral operator and applied it in MADM under hesitant fuzzy environment in which the weight vector of attributes is exactly known. Wei [33] also developed some prioritized aggregation operators for hesitant fuzzy information, and developed some models for hesitant fuzzy MADM problems in which the attributes are in different priority levels. Yu et al. [34] proposed the generalized hesitant fuzzy Bonferroni mean to solve MAGDM problems where the attributes are correlative under hesitant fuzzy environment. More recently, Qian et al. [35] generalized the HFSs using intuitionistic fuzzy sets in group decision making framework. The generalized HFS is fit for the situations when the DMs have a hesitation among several possible memberships under uncertainty. Chen et al. [36] also generalized the concept of HFS to that of interval-valued hesitant fuzzy set (IVHFS) in which the membership degrees of an element to a given set are not exactly defined, but denoted by several possible interval values, and meanwhile developed an approach to group decision making based on interval-valued hesitant preference relations in order to consider the differences of opinions between individual DMs. Obviously, most of these papers put their emphasis on the extensions of the aggregation techniques in MADM under hesitant fuzzy scenarios. 
However, when using these techniques, the associated weighting vector is more or less determined subjectively, and the decision information itself is not sufficiently taken into consideration; more importantly, a significant pitfall of the aforementioned methods is that they require the information about attribute weights to be exactly known. In fact, in the process of MADM with hesitant fuzzy information, we often encounter situations where the attribute values take the form of HFEs and the information about attribute weights is incompletely known or completely unknown, because of time pressure, lack of knowledge or data, and the experts' limited expertise in the problem domain [37]. Considering that the existing methods are not suitable for such situations, in this paper we propose a novel approach to objectively determine the attribute weights and rank the alternatives under the conditions that the attribute weights are completely unknown or only partly known, and the attribute values take the form of HFEs. To do so, we organize the paper as follows: In Section 2, we review some concepts related to HFSs and IVHFSs. Section 3 develops a novel approach based on TOPSIS and the maximizing deviation method for solving the MADM problem with hesitant fuzzy information. Section 4 extends our results to interval-valued hesitant fuzzy environments. Section 5 applies the developed approach to a MADM problem involving energy policy selection and makes some comparative analyses. The paper finishes with some concluding remarks in Section 6.
2. Some basic concepts

The hesitant fuzzy set [13,14], as a generalization of the fuzzy set, permits the membership degree of an element to a set to be presented as several possible values between 0 and 1, which can better describe the situations where people have hesitancy in providing their preferences over objects in the process of decision making.
Definition 1 ([13,14]). Let X be a reference set; a hesitant fuzzy set (HFS) A on X is defined in terms of a function h_A(x) that, when applied to X, returns a subset of [0, 1], i.e.,

A = \{\langle x, h_A(x)\rangle \mid x \in X\}    (1)

where h_A(x) is a set of some different values in [0, 1], representing the possible membership degrees of the element x \in X to A. For the sake of simplicity, Xia and Xu [29] called h_A(x) a hesitant fuzzy element (HFE).

It is noted that the numbers of values in different HFEs may be different, and the values are usually out of order, so for convenience we can arrange them in any order. Suppose that we arrange the values of an HFE h in increasing order, and let h^{\sigma(i)} (i = 1, 2, \dots, l_h) be the ith smallest value in h. For two HFEs a and b, let l = \max\{l_a, l_b\}, where l_a and l_b are the numbers of values in a and b, respectively. In order to calculate the distance between two HFEs more accurately, Xu and Xia [38] suggested that, when l_a \ne l_b, the shorter HFE should be extended until both have the same length, and they gave the following regulations: if l_a < l_b, then a should be extended by adding its minimal value until it has the same length as b; if l_a > l_b, then b should be extended by adding its minimal value until it has the same length as a. More generally, we can extend the shorter one by adding any of its values, which mainly depends on the DM's risk preference: optimists anticipate desirable outcomes and may add the maximal value, while pessimists expect unfavorable outcomes and may add the minimal value.

Although Xu and Xia's extension rule is very reasonable, it does not consider the situation where the DM is risk-neutral. We now develop a new method, which reveals the DM's risk preference (risk-averse, risk-neutral, or risk-seeking) through a parameter \eta, to extend the shorter HFE until both HFEs have the same length when l_a \ne l_b.

Definition 2. Assume an HFE h = \{h^{\sigma(i)} \mid i = 1, 2, \dots, l_h\}, and stipulate that h^+ and h^- are the maximum and minimum values in h, respectively; then we call \bar{h} = \eta h^+ + (1 - \eta)h^- an extension value, where \eta (0 \le \eta \le 1) is a parameter determined by the DM according to his/her risk preference.

Therefore, we can add different values to the HFE using \eta according to the DM's risk preference. If \eta = 1, then the extension value \bar{h} = h^+, which indicates that the DM's risk preference is risk-seeking; while if \eta = 0, then \bar{h} = h^-, which means that the DM's risk preference is risk-averse. It is clear that Xu and Xia's extension rule is consistent with ours when \eta = 1 or \eta = 0. Moreover, when the DM's risk preference is risk-neutral, we can add the extension value \bar{h} = \frac{1}{2}(h^+ + h^-), i.e., \eta = \frac{1}{2}. Apparently, the parameter \eta provided by the DM reflects his/her risk preference, which can affect the final decision results.

Meanwhile, Torra [14] indicated that the envelope of an HFE is an intuitionistic fuzzy value (IFV):

Definition 3 ([14]). Given an HFE h, we define the IFV A_{env}(h) as the envelope of h, where A_{env}(h) can be represented as (h^-, 1 - h^+), with h^- = \min\{\gamma \mid \gamma \in h\} and h^+ = \max\{\gamma \mid \gamma \in h\}.

Based on the above extension principle, Xu and Xia [38] defined the hesitant Euclidean distance for HFEs:
d_1(a, b) = \sqrt{\frac{1}{l}\sum_{i=1}^{l}\left|a^{\sigma(i)} - b^{\sigma(i)}\right|^2}    (2)
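As an illustration, the extension rule of Definition 2 and the distance of Eq. (2) can be sketched in Python (this sketch is ours, not part of the paper; HFEs are modeled as plain lists of membership values, and the function names are illustrative):

```python
import math

def extend(h, target_len, eta=0.0):
    """Pad the HFE h (a list of membership values) to target_len with the
    extension value eta*max(h) + (1 - eta)*min(h) of Definition 2."""
    h = sorted(h)
    bar = eta * max(h) + (1 - eta) * min(h)
    return sorted(h + [bar] * (target_len - len(h)))

def d1(a, b, eta=0.0):
    """Hesitant Euclidean distance of Eq. (2) between two HFEs,
    extending the shorter one first so both have length l."""
    l = max(len(a), len(b))
    a, b = extend(a, l, eta), extend(b, l, eta)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / l)

# The HFE {0.3, 0.5, 0.6} from the introduction vs. a shorter HFE,
# with a risk-averse DM (eta = 0):
print(round(d1([0.3, 0.5, 0.6], [0.2, 0.4], eta=0.0), 4))
```

With eta = 0 the shorter HFE [0.2, 0.4] is padded with its minimum, reproducing Xu and Xia's pessimistic rule; eta = 1 reproduces the optimistic rule.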
However, it is sometimes difficult for the DMs to assign exact values for the membership degrees of certain elements to a given set, while a range of values belonging to [0, 1] may be assigned. To cope with such situations, Chen et al. [36] first put forward the concept of interval-valued hesitant fuzzy set (IVHFS), which permits the memberships of an element to a given set to take a few different interval values:

Definition 4 ([36]). Let X be a reference set, and D[0, 1] the set of all closed subintervals of [0, 1]. An IVHFS on X is

\tilde{A} = \{\langle x_i, \tilde{h}_{\tilde{A}}(x_i)\rangle \mid x_i \in X, i = 1, 2, \dots, n\}    (3)

where \tilde{h}_{\tilde{A}}(x_i): X \to D[0, 1] denotes all possible interval membership degrees of the element x_i \in X to the set \tilde{A}. \tilde{h}_{\tilde{A}}(x_i) is called an interval-valued hesitant fuzzy element (IVHFE), which satisfies

\tilde{h}_{\tilde{A}}(x_i) = \{\tilde{\gamma} \mid \tilde{\gamma} \in \tilde{h}_{\tilde{A}}(x_i)\}    (4)

where \tilde{\gamma} = [\tilde{\gamma}^L, \tilde{\gamma}^U] is an interval number, and \tilde{\gamma}^L = \inf\tilde{\gamma} and \tilde{\gamma}^U = \sup\tilde{\gamma} express the lower and upper limits of \tilde{\gamma}, respectively. Obviously, if \tilde{\gamma}^L = \tilde{\gamma}^U, then the IVHFEs reduce to HFEs.

Definition 5 ([36]). Let \tilde{h}, \tilde{h}_1 and \tilde{h}_2 be three IVHFEs; then

(1) \tilde{h}^c = \{[1 - \tilde{\gamma}^U, 1 - \tilde{\gamma}^L] \mid \tilde{\gamma} \in \tilde{h}\};
(2) \tilde{h}_1 \cup \tilde{h}_2 = \{[\max(\tilde{\gamma}_1^L, \tilde{\gamma}_2^L), \max(\tilde{\gamma}_1^U, \tilde{\gamma}_2^U)] \mid \tilde{\gamma}_1 \in \tilde{h}_1, \tilde{\gamma}_2 \in \tilde{h}_2\};
(3) \tilde{h}_1 \cap \tilde{h}_2 = \{[\min(\tilde{\gamma}_1^L, \tilde{\gamma}_2^L), \min(\tilde{\gamma}_1^U, \tilde{\gamma}_2^U)] \mid \tilde{\gamma}_1 \in \tilde{h}_1, \tilde{\gamma}_2 \in \tilde{h}_2\};
(4) \tilde{h}^k = \{[(\tilde{\gamma}^L)^k, (\tilde{\gamma}^U)^k] \mid \tilde{\gamma} \in \tilde{h}\}, k > 0;
(5) k\tilde{h} = \{[1 - (1 - \tilde{\gamma}^L)^k, 1 - (1 - \tilde{\gamma}^U)^k] \mid \tilde{\gamma} \in \tilde{h}\}, k > 0;
(6) \tilde{h}_1 \oplus \tilde{h}_2 = \{[\tilde{\gamma}_1^L + \tilde{\gamma}_2^L - \tilde{\gamma}_1^L\tilde{\gamma}_2^L, \tilde{\gamma}_1^U + \tilde{\gamma}_2^U - \tilde{\gamma}_1^U\tilde{\gamma}_2^U] \mid \tilde{\gamma}_1 \in \tilde{h}_1, \tilde{\gamma}_2 \in \tilde{h}_2\};
(7) \tilde{h}_1 \otimes \tilde{h}_2 = \{[\tilde{\gamma}_1^L\tilde{\gamma}_2^L, \tilde{\gamma}_1^U\tilde{\gamma}_2^U] \mid \tilde{\gamma}_1 \in \tilde{h}_1, \tilde{\gamma}_2 \in \tilde{h}_2\}.

Motivated by the distance measures for HFSs in [38], in what follows we propose some basic distance measures between IVHFEs, such as the interval-valued hesitant Hamming-Hausdorff distance, the interval-valued hesitant Euclidean-Hausdorff distance, the hybrid interval-valued hesitant Hamming distance, and the hybrid interval-valued hesitant Euclidean distance.

Definition 6 ([36]). For two IVHFEs \tilde{h}_1 and \tilde{h}_2, the distance measure between them, denoted d(\tilde{h}_1, \tilde{h}_2), should satisfy the following properties:

(1) 0 \le d(\tilde{h}_1, \tilde{h}_2) \le 1;
(2) d(\tilde{h}_1, \tilde{h}_2) = 0 if and only if \tilde{h}_1 = \tilde{h}_2;
(3) d(\tilde{h}_1, \tilde{h}_2) = d(\tilde{h}_2, \tilde{h}_1).
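To make Definition 5 concrete, here is a small Python sketch (ours, not the paper's; an IVHFE is modeled as a list of (lo, hi) tuples) of three of the seven operations:

```python
from itertools import product

# An IVHFE is a list of (lo, hi) intervals contained in [0, 1].

def complement(h):
    # Operation (1): each interval [lo, hi] maps to [1 - hi, 1 - lo]
    return [(1 - hi, 1 - lo) for lo, hi in h]

def union(h1, h2):
    # Operation (2): pairwise component-wise maxima over all interval pairs
    return [(max(l1, l2), max(u1, u2))
            for (l1, u1), (l2, u2) in product(h1, h2)]

def oplus(h1, h2):
    # Operation (6): probabilistic sum applied to lower and upper limits
    return [(l1 + l2 - l1 * l2, u1 + u2 - u1 * u2)
            for (l1, u1), (l2, u2) in product(h1, h2)]

h1 = [(0.1, 0.3), (0.4, 0.5)]
h2 = [(0.2, 0.6)]
print(complement(h2))
print(union(h1, h2))
print(oplus(h1, h2))
```

Note that, as in Definition 5, the binary operations enumerate the Cartesian product of the two IVHFEs, so the result may contain more intervals than either operand.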
~ an extension interval value, where g(0 6 g 6 1) is the ð1 gÞh parameter determined by the DM according his/her risk preference. Therefore, if la~ – lb~ , then we should add the different interval values to the IVHFE using the parameter g according the DM’s risk preference. In other word, when the DM’s risk preference is risk ~ ¼ 1 ðh ~þ þ h ~ Þ, neutral, we can add the extension interval value h 2 i.e., g ¼ 12; when the DM’s risk preference is risk-seeking, we can ~¼h ~ þ , i.e., g = 1; when the add the extension interval value h DM’s risk preference is risk-averse, we can add the extension inter ~¼h ~ , h val value i.e., g = 0. For instance, let a~ ¼ f½0:1; 0:2; ½0:2; 0:3; ½0:1; 0:3g, b~ ¼ f½0:3; 0:4; ½0:4; 0:5g, and la~ > lb~ . According to the rules mentioned above, we should extend ~ until it has the same length with a ~ , the risk-seeking DM may exb ~ as b ~ ¼ f½0:3; 0:4; ½0:4; 0:5; ½0:4; 0:5g, the risk-averse DM tend b ~ ¼ f½0:3; 0:4; ½0:3; 0:4; ½0:4; 0:5g, and the riskmay extend it as b ~ ¼ f½0:3; 0:4; ½0:35; 0:45; ½0:4; 0:5g. neutral DM may extend it as b Apparently, the parameter g provided by the DM reflects his/her risk preference which can affect the final decision results. In this paper, we assume that the decision makers are all risk-averse. Based on the well-known Hamming distance and the Euclidean distance as well as the above operational principles, Chen et al. [36] gave the distance measures for IVHFEs: l X ~L ~ ¼ 1 ~L þ a ~U ~ ; bÞ ~ UrðiÞ b d2 ða arðiÞ b rðiÞ rðiÞ 2l i¼1
~ ¼ ~ ; bÞ d3 ða
sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi
2 2 1 Xl ~L U ~L þ a ~U ~ b b a rðiÞ rðiÞ rðiÞ rðiÞ i¼1 2l
ð5Þ
ð6Þ
~ are two IVHFEs; a ~L and b ~U are the ith ~ and b ~ LrðiÞ ; a ~ UrðiÞ ; b where a rðiÞ rðiÞ L ~ U ~L L ~ ~ largest values in a ; a ; b and b , respectively, which will be used thereafter. Analogous to the distance measures for HFEs in [38], we further propose some other basic distance measures between IVHFEs: The interval-valued hesitant Hamming–Hausdorff distance:
o n ~ ¼ 1 max a ~L þ a ~U ~ ; bÞ ~ LrðiÞ b ~ UrðiÞ b d 4 ða r ðiÞ r ðiÞ 2 i
ð7Þ
The interval-valued hesitant Euclidean–Hausdorff distance:
2 2 U ~ ¼ 1 max a ~L þ a ~U ~ ; bÞ ~ LrðiÞ b ~ b d 5 ða rðiÞ rðiÞ rðiÞ 2 i
ð8Þ
The hybrid interval-valued hesitant Hamming distance:
~ ¼ ~ ; bÞ d 6 ða
l 1 1X ~L ~L þ a ~U ~ UrðiÞ b arðiÞ b rðiÞ rðiÞ 2 2l i¼1 o n 1 ~L ~L ~ U ~U þ max a r ðiÞ brðiÞ þ arðiÞ brðiÞ 2 i
ð9Þ
The hybrid interval-valued hesitant Euclidean distance:
In most cases, the number of intervals for different IVHFEs could be different, and the interval values are usually out of order. For convenience, let l ¼ maxfla~ ; lb~ g, where la~ and lb~ are the numbers ~ respectively, and we arrange the ~ and b, of intervals in IVHFEs a interval values in any order based on a possibility degree formula [39]. In order to more accurately calculate the distance between two IVHFEs, it is necessary that two IVHFEs have the same number of intervals. Similar to Definition 2, we can define a new extension rule for IVHFEs as follows:
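A few of the IVHFE distance measures above can be sketched in Python (our illustration, not from the paper; IVHFEs are lists of (lo, hi) tuples assumed already sorted and extended to a common length, as per Definition 7):

```python
# An IVHFE is a list of (lo, hi) intervals, sorted and extended to length l.

def d2(a, b):
    """Interval-valued hesitant Hamming distance, Eq. (5)."""
    l = len(a)
    return sum(abs(al - bl) + abs(au - bu)
               for (al, au), (bl, bu) in zip(a, b)) / (2 * l)

def d4(a, b):
    """Interval-valued hesitant Hamming-Hausdorff distance, Eq. (7)."""
    return max(abs(al - bl) + abs(au - bu)
               for (al, au), (bl, bu) in zip(a, b)) / 2

def d6(a, b):
    """Hybrid interval-valued hesitant Hamming distance, Eq. (9):
    the average of d2 and d4."""
    return (d2(a, b) + d4(a, b)) / 2

# a~ and b~ from the example above, with b~ extended by the
# risk-averse rule (eta = 0) and both sorted increasingly:
a_t = [(0.1, 0.2), (0.1, 0.3), (0.2, 0.3)]
b_t = [(0.3, 0.4), (0.3, 0.4), (0.4, 0.5)]
print(d2(a_t, b_t), d4(a_t, b_t), d6(a_t, b_t))
```

The sorting of a~ here follows one plausible possibility-degree ordering; the distances themselves only assume that both lists are aligned index by index.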
3. MADM based on TOPSIS and the maximizing deviation method

This section puts forward a framework for determining the attribute weights and the ranking orders of all the alternatives with incomplete weight information under hesitant fuzzy environment.
3.1. Problem description

A MADM problem can be expressed as a decision matrix whose elements indicate the evaluation information of all alternatives with respect to an attribute. We construct a hesitant fuzzy decision matrix, whose elements are HFEs, given due to the fact that the membership degree of the alternative A_i satisfying the attribute x_j may originate from a doubt among a few different values. Consider a MADM problem with a discrete set of m alternatives, A = \{A_1, A_2, \dots, A_m\}. Let X = \{x_1, x_2, \dots, x_n\} be the set of all attributes. An HFS A_i of the ith alternative on X is given by A_i = \{\langle x_j, h_{A_i}(x_j)\rangle \mid x_j \in X\}, where h_{A_i}(x_j) = \{\gamma \mid \gamma \in h_{A_i}(x_j), 0 \le \gamma \le 1\}, i = 1, 2, \dots, m; j = 1, 2, \dots, n. h_{A_i}(x_j) indicates the possible membership degrees of the ith alternative A_i under the jth attribute x_j, and it can be expressed as an HFE h_{ij}. The hesitant fuzzy decision matrix H can be written as:

H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & h_{22} & \cdots & h_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ h_{m1} & h_{m2} & \cdots & h_{mn} \end{bmatrix}    (11)

Considering that the attributes have different importance degrees, the weight vector of all the attributes, given by the DMs, is defined by

w = (w_1, w_2, \dots, w_n)^T    (12)

where 0 \le w_j \le 1, \sum_{j=1}^{n} w_j = 1, and w_j is the importance degree of the jth attribute. Due to the complexity and uncertainty of practical decision making problems and the inherent subjectivity of human thinking, the information about attribute weights is usually incomplete. For convenience, let \Delta be the set of known weight information [40–43], where \Delta can be constructed in the following forms, for i \ne j:

Form 1. A weak ranking: \{w_i \ge w_j\};
Form 2. A strict ranking: \{w_i - w_j \ge \alpha_i\} (\alpha_i > 0);
Form 3. A ranking of differences: \{w_i - w_j \ge w_k - w_l\}, for j \ne k \ne l;
Form 4. A ranking with multiples: \{w_i \ge \alpha_i w_j\} (0 \le \alpha_i \le 1);
Form 5. An interval form: \{\alpha_i \le w_i \le \alpha_i + \varepsilon_i\} (0 \le \alpha_i \le \alpha_i + \varepsilon_i \le 1).

3.2. Computing the optimal weights by the maximizing deviation method

The estimation of the attribute weights plays an important role in MADM. The maximizing deviation method was proposed by Wang [44] to determine the attribute weights for solving MADM problems with numerical information. According to Wang [44], the attribute with a larger deviation value among alternatives should be assigned a larger weight, while the attribute with a smaller deviation value should be assigned a smaller weight. In other words, if the performance values of all the alternatives have small differences under an attribute, that attribute plays a less important role in the priority procedure; on the contrary, if an attribute makes the performance values of all the alternatives differ markedly, it plays a much more important role in choosing the best alternative. So, from the standpoint of ranking the alternatives, an attribute with similar values across alternatives should be assigned a small weight, while an attribute that produces larger deviations should be given a bigger weight, regardless of the degree of its own importance. In particular, if all the alternatives score equally with respect to a given attribute, that attribute will be judged unimportant by most DMs, and Wang [44] suggested that zero weight should be assigned to it. Hence, we here construct an optimization model based on the maximizing deviation method to determine the optimal relative weights of the attributes under hesitant fuzzy environment. For the attribute x_j \in X, the deviation of the alternative A_i from all the other alternatives can be expressed as:

D_{ij}(w) = \sum_{k=1}^{m} d_1(h_{ij}, h_{kj})\, w_j, \quad i = 1, 2, \dots, m; \ j = 1, 2, \dots, n    (13)

where d_1(h_{ij}, h_{kj}) = \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2} denotes the hesitant Euclidean distance between the HFEs h_{ij} and h_{kj} defined in Eq. (2). Let

D_j(w) = \sum_{i=1}^{m} D_{ij}(w) = \sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}, \quad j = 1, 2, \dots, n    (14)

then D_j(w) represents the total deviation value of all alternatives from the other alternatives for the attribute x_j \in X. Based on the above analysis, we can construct a non-linear programming model to select the weight vector w that maximizes all deviation values for all the attributes, as follows:
(M-1)
\max\ D(w) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}
\text{s.t.}\ \sum_{j=1}^{n} w_j^2 = 1, \quad w_j \ge 0, \ j = 1, 2, \dots, n
To solve the above model, we let
L(w, \xi) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2} + \frac{\xi}{2}\left(\sum_{j=1}^{n} w_j^2 - 1\right)    (15)
which indicates the Lagrange function of the constrained optimization problem (M-1), where \xi is a real number denoting the Lagrange multiplier variable. Then the partial derivatives of L are computed as:
\frac{\partial L}{\partial w_j} = \sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2} + \xi w_j = 0    (16)

\frac{\partial L}{\partial \xi} = \frac{1}{2}\left(\sum_{j=1}^{n} w_j^2 - 1\right) = 0    (17)
It follows from Eq. (16) that
w_j = -\frac{\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}}{\xi}, \quad j = 1, 2, \dots, n    (18)
Putting Eq. (18) into Eq. (17), we have
\xi = -\sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}\right)^2}    (19)
Obviously, \xi < 0. Here \sum_{i=1}^{m}\sum_{k=1}^{m}\sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2} is the sum of deviations of all the alternatives with respect to the jth attribute, and \sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1}^{m}\sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}\right)^2} aggregates the sums of deviations of all the alternatives with respect to all the attributes.
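When the weight information is completely unknown, the resulting weights are simply the total pairwise deviations per attribute, normalized to sum to one (Eq. (22)). A minimal Python sketch of this computation (ours, not the paper's; all HFEs are assumed already sorted and extended to a common length):

```python
import math

def d1(a, b):
    """Hesitant Euclidean distance (Eq. (2)); the two HFEs are assumed
    sorted and already extended to equal length."""
    l = len(a)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / l)

def deviation_weights(H):
    """Maximizing deviation weights for completely unknown weight
    information: w_j proportional to the total pairwise deviation Y_j
    under attribute j, normalized so the weights sum to 1."""
    m, n = len(H), len(H[0])
    Y = [sum(d1(H[i][j], H[k][j]) for i in range(m) for k in range(m))
         for j in range(n)]
    total = sum(Y)
    return [y / total for y in Y]

# A toy 3-alternative x 2-attribute hesitant fuzzy decision matrix
# (equal-length HFEs for simplicity):
H = [[[0.2, 0.4], [0.5, 0.6]],
     [[0.3, 0.5], [0.5, 0.6]],
     [[0.6, 0.8], [0.4, 0.7]]]
print([round(x, 3) for x in deviation_weights(H)])
```

In this toy matrix the first attribute separates the alternatives far more than the second, so it receives the larger weight, exactly as the maximizing deviation principle prescribes.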
Then, combining Eqs. (18) and (19), we get

w_j = \frac{\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}}{\sqrt{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}\right)^2}}    (20)

For the sake of simplicity, let Y_j = \sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}, j = 1, 2, \dots, n; then Eq. (20) can be rewritten as:

w_j = \frac{Y_j}{\sqrt{\sum_{j=1}^{n} Y_j^2}}, \quad j = 1, 2, \dots, n    (21)

It can easily be verified from Eq. (21) that the w_j (j = 1, 2, \dots, n) are positive, satisfy the constrained conditions in the model (M-1), and that the solution is unique. By normalizing the w_j so that their sum equals one, we get

w_j^* = \frac{w_j}{\sum_{j=1}^{n} w_j} = \frac{Y_j}{\sum_{j=1}^{n} Y_j}, \quad j = 1, 2, \dots, n    (22)

However, there are actual situations in which the information about the weight vector is not completely unknown but partially known. For these cases, based on the set \Delta of known weight information, we construct the following constrained optimization model:

(M-2)
\max\ D(w) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{l}\sum_{s=1}^{l}\left(h_{ij}^{\sigma(s)} - h_{kj}^{\sigma(s)}\right)^2}
\text{s.t.}\ w \in \Delta, \quad w_j \ge 0, \ j = 1, 2, \dots, n, \quad \sum_{j=1}^{n} w_j = 1

where \Delta is the set of constraint conditions that the weight values w_j should satisfy according to the requirements in real situations. The model (M-2) is a linear programming model that can be executed using the MATLAB 7.4.0 mathematics software package. By solving this model, we get the optimal solution w = (w_1, w_2, \dots, w_n)^T, which can be used as the weight vector of the attributes.

3.3. An approach to MADM with hesitant fuzzy information

In general, after obtaining the attribute weight values on the basis of the maximizing deviation method, analogous to the literature [19,21], we would utilize a certain kind of operator to aggregate the given decision information so as to get the overall preference value of each alternative, and then rank the alternatives and select the most desirable one(s). In the process of hesitant fuzzy information aggregation, however, too much information is lost due to the complexity of the aggregation process of hesitant fuzzy aggregation operators, which implies a lack of precision in the final results. Therefore, in order to overcome this disadvantage, we extend the TOPSIS method to take hesitant fuzzy information into account and utilize the distance measures of HFEs to obtain the final ranking of the alternatives. TOPSIS, proposed by Hwang and Yoon [1], is a method for solving MADM problems that aims at choosing the alternative with the shortest distance from the positive ideal solution (PIS) and the farthest distance from the negative ideal solution (NIS), and it is widely used for tackling ranking problems in real situations. Under hesitant fuzzy environment, the hesitant fuzzy PIS, denoted by A^+, and the hesitant fuzzy NIS, denoted by A^-, can be defined as follows:

A^+ = \left\{\left\langle x_j, h_j^+\right\rangle \mid j = 1, 2, \dots, n\right\}, \quad h_j^{\sigma(k)+} = \max_i h_{ij}^{\sigma(k)}, \ k = 1, 2, \dots, l    (23)

A^- = \left\{\left\langle x_j, h_j^-\right\rangle \mid j = 1, 2, \dots, n\right\}, \quad h_j^{\sigma(k)-} = \min_i h_{ij}^{\sigma(k)}, \ k = 1, 2, \dots, l    (24)
The separation between alternatives can be measured by the Hamming distance or the Euclidean distance. In order to measure the distances between HFEs, we adopt the hesitant Euclidean distance proposed by Xu and Xia [38]. The separation measures d_i^+ and d_i^- of each alternative from the hesitant fuzzy PIS A^+ and the hesitant fuzzy NIS A^-, respectively, are derived from

d_i^+ = \sum_{j=1}^{n} d_1(h_{ij}, h_j^+)\, w_j = \sum_{j=1}^{n} w_j \sqrt{\frac{1}{l}\sum_{k=1}^{l}\left(h_{ij}^{\sigma(k)} - h_j^{\sigma(k)+}\right)^2}, \quad i = 1, 2, \dots, m    (25)

d_i^- = \sum_{j=1}^{n} d_1(h_{ij}, h_j^-)\, w_j = \sum_{j=1}^{n} w_j \sqrt{\frac{1}{l}\sum_{k=1}^{l}\left(h_{ij}^{\sigma(k)} - h_j^{\sigma(k)-}\right)^2}, \quad i = 1, 2, \dots, m    (26)

The relative closeness coefficient of an alternative A_i with respect to the hesitant fuzzy PIS A^+ is defined as the following formula:

C_i = \frac{d_i^-}{d_i^+ + d_i^-}    (27)

where 0 \le C_i \le 1, i = 1, 2, \dots, m. Obviously, an alternative A_i is closer to the hesitant fuzzy PIS A^+ and farther from the hesitant fuzzy NIS A^- as C_i approaches 1. Therefore, according to the closeness coefficients C_i, we can determine the ranking order of all alternatives and select the best one from a set of feasible alternatives.

Based on the above models, we shall develop a practical approach for solving MADM problems in which the information about attribute weights is incompletely known or completely unknown, and the attribute values take the form of hesitant fuzzy information. The schematic diagram of the proposed approach for MADM is provided in Fig. 1. The approach involves the following steps:

Step 1. For a MADM problem, construct the decision matrix H = [h_{ij}]_{m \times n}, where all the arguments h_{ij} (i = 1, 2, \dots, m; j = 1, 2, \dots, n) are HFEs, given by the DMs, for the alternative A_i \in A with respect to the attribute x_j \in X.
Step 2. If the information about the attribute weights is completely unknown, obtain the attribute weights by using Eq. (22); if the information about the attribute weights is partly known, solve the model (M-2) to obtain the attribute weights.
Step 3. Utilize Eqs. (23) and (24) to determine the corresponding hesitant fuzzy PIS A^+ and the hesitant fuzzy NIS A^-.
Step 4. Utilize Eqs. (25) and (26) to calculate the separation measures d_i^+ and d_i^- of each alternative A_i from the hesitant fuzzy PIS A^+ and the hesitant fuzzy NIS A^-, respectively.
Fig. 1. The schematic diagram of the proposed approach for MADM.
Step 5. Utilize Eq. (27) to calculate the relative closeness coefficient Ci of each alternative Ai to the hesitant fuzzy PIS A+.
Step 6. Rank the alternatives Ai (i = 1, 2, ..., m) according to the relative closeness coefficients Ci (i = 1, 2, ..., m) to the hesitant fuzzy PIS A+ and then select the most desirable one(s).
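As a concrete illustration of Steps 3 to 6, the pipeline can be sketched in Python (an illustrative sketch, not code from the paper; it assumes each HFE is stored as a list sorted in decreasing order and already extended to a common length, as discussed in Section 5):

```python
import math

def hf_distance(h1, h2):
    """Hesitant fuzzy Euclidean distance between two equal-length,
    decreasingly sorted HFEs (the distance inside Eqs. (25) and (26))."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)) / len(h1))

def hf_topsis(H, w):
    """Steps 3-6 for an m x n hesitant fuzzy decision matrix H and weights w:
    PIS/NIS as position-wise max/min, separation measures (Eqs. (25)/(26)),
    closeness coefficients (Eq. (27)), and the resulting ranking."""
    m, n = len(H), len(H[0])
    pis = [[max(H[i][j][k] for i in range(m)) for k in range(len(H[0][j]))]
           for j in range(n)]
    nis = [[min(H[i][j][k] for i in range(m)) for k in range(len(H[0][j]))]
           for j in range(n)]
    d_plus = [sum(w[j] * hf_distance(H[i][j], pis[j]) for j in range(n))
              for i in range(m)]
    d_minus = [sum(w[j] * hf_distance(H[i][j], nis[j]) for j in range(n))
               for i in range(m)]
    closeness = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]
    ranking = sorted(range(m), key=lambda i: -closeness[i])
    return closeness, ranking

# Tiny synthetic example: the first alternative dominates, the last is dominated,
# so they coincide with the PIS and NIS respectively.
H = [
    [[0.9, 0.8], [0.9, 0.7]],
    [[0.5, 0.4], [0.6, 0.5]],
    [[0.3, 0.2], [0.4, 0.1]],
]
C, ranking = hf_topsis(H, [0.5, 0.5])
# The dominating alternative attains C = 1, the dominated one C = 0.
```

A dominating alternative coincides with the PIS, so its separation from the PIS is zero and its closeness coefficient is exactly 1; symmetrically, a dominated alternative gets closeness 0.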
4. Extended results in interval-valued hesitant fuzzy situations

Similar to Section 3.1, here we consider a MADM problem where there is a discrete set of m alternatives, A = {A1, A2, ..., Am}. Let X be the discussion universe containing the attributes and X = {x1, x2, ..., xn} be the set of all attributes. An IVHFS $\tilde{A}_i$ of the ith alternative on X is given by $\tilde{A}_i = \{\langle x_j, \tilde{h}_{\tilde{A}_i}(x_j)\rangle \mid x_j \in X\}$, where $\tilde{h}_{\tilde{A}_i}(x_j)$ is a set of interval values $\tilde{\gamma} \subseteq [0, 1]$ (i = 1, 2, ..., m; j = 1, 2, ..., n); writing $\tilde{\gamma} = [\gamma^L, \gamma^U]$, $\gamma^L = \inf \tilde{\gamma}$ and $\gamma^U = \sup \tilde{\gamma}$ express the lower and upper limits of $\tilde{\gamma}$, respectively. $\tilde{h}_{\tilde{A}_i}(x_j)$ indicates the possible membership degrees of the ith alternative Ai under the jth attribute xj and can be expressed as an IVHFE $\tilde{h}_{ij}$. The interval-valued hesitant fuzzy decision matrix $\tilde{H}$ can be written as:

$$\tilde{H} = \begin{bmatrix} \tilde{h}_{11} & \tilde{h}_{12} & \cdots & \tilde{h}_{1n} \\ \tilde{h}_{21} & \tilde{h}_{22} & \cdots & \tilde{h}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{h}_{m1} & \tilde{h}_{m2} & \cdots & \tilde{h}_{mn} \end{bmatrix} \qquad (28)$$
In what follows, similar to Section 3.2, based on the maximizing deviation method, we construct a non-linear programming model to select the weight vector w that maximizes all deviation values for all the attributes:

$$(\text{M-3})\qquad \begin{cases} \max\ D(w) = \displaystyle\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]} \\ \text{s.t.}\quad w_j \ge 0,\ j = 1, 2, \ldots, n,\quad \displaystyle\sum_{j=1}^{n} w_j^2 = 1 \end{cases}$$

To solve the above model, we let

$$L(w, \xi) = \sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]} + \frac{\xi}{2}\left(\sum_{j=1}^{n} w_j^2 - 1\right) \qquad (29)$$

which is the Lagrange function of the constrained optimization problem (M-3), where $\xi$ is a real number denoting the Lagrange multiplier variable. Then the partial derivatives of L are computed as:

$$\begin{cases} \dfrac{\partial L}{\partial w_j} = \displaystyle\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]} + \xi w_j = 0 \\ \dfrac{\partial L}{\partial \xi} = \dfrac{1}{2}\left(\displaystyle\sum_{j=1}^{n} w_j^2 - 1\right) = 0 \end{cases} \qquad (30)$$

By solving Eq. (30), we get

$$w_j = \frac{\displaystyle\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]}}{\sqrt{\displaystyle\sum_{j=1}^{n}\left(\sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]}\right)^2}} \qquad (31)$$

For convenience, let $Y_j = \sum_{i=1}^{m}\sum_{k=1}^{m} \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\big[(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L})^2 + (h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U})^2\big]}$. Normalizing the $w_j$ (j = 1, 2, ..., n) so that they sum to a unit, we get

$$w_j^* = \frac{w_j}{\sum_{j=1}^{n} w_j} = \frac{Y_j}{\sum_{j=1}^{n} Y_j},\quad j = 1, 2, \ldots, n \qquad (32)$$

However, as we have remarked earlier, there are actual situations in which the information about the weighting vector is not completely unknown but partially known. For these cases, based on the set of the known weight information D, we construct the following constrained optimization model:

$$(\text{M-4})\qquad \begin{cases} \max\ D(w) = \displaystyle\sum_{j=1}^{n}\sum_{i=1}^{m}\sum_{k=1}^{m} w_j \sqrt{\frac{1}{2l}\sum_{s=1}^{l}\Big[\big(h_{ij}^{\sigma(s)L} - h_{kj}^{\sigma(s)L}\big)^2 + \big(h_{ij}^{\sigma(s)U} - h_{kj}^{\sigma(s)U}\big)^2\Big]} \\ \text{s.t.}\quad w \in D,\ w_j \ge 0,\ j = 1, 2, \ldots, n,\quad \displaystyle\sum_{j=1}^{n} w_j = 1 \end{cases}$$

where D is also a set of constraint conditions that the weight value wj should satisfy according to the requirements in real situations. Similarly, the solution to the model (M-4) can be found by using the MATLAB 7.4.0 mathematics software package or the Lingo software package.

After obtaining the attribute weight values on the basis of the maximizing deviation method, analogous to Section 3.3, below we extend the TOPSIS method to take into account the interval-valued hesitant fuzzy information provided by the DMs and utilize it to obtain the final ranking of the alternatives. Under the interval-valued hesitant fuzzy environment, the interval-valued hesitant fuzzy PIS, denoted by $\tilde{A}^+$, and the interval-valued hesitant fuzzy NIS, denoted by $\tilde{A}^-$, can be defined as follows:

$$\tilde{A}^+ = \Big\{\Big\langle x_j, \max_i \tilde{h}_{ij}^{\sigma(k)}\Big\rangle \,\Big|\, j = 1, 2, \ldots, n\Big\} \qquad (33)$$

$$\tilde{A}^- = \Big\{\Big\langle x_j, \min_i \tilde{h}_{ij}^{\sigma(k)}\Big\rangle \,\Big|\, j = 1, 2, \ldots, n\Big\} \qquad (34)$$

For measuring the separation between alternatives, we adopt the interval-valued hesitant fuzzy Euclidean distance proposed by Chen et al. [36]. The separation measures $\tilde{d}_i^+$ and $\tilde{d}_i^-$ of each alternative from the interval-valued hesitant fuzzy PIS $\tilde{A}^+$ and the interval-valued hesitant fuzzy NIS $\tilde{A}^-$, respectively, are derived from

$$\tilde{d}_i^+ = \sum_{j=1}^{n} w_j\, d\big(\tilde{h}_{ij}, \tilde{h}_j^+\big) = \sum_{j=1}^{n} w_j \sqrt{\frac{1}{2l}\sum_{k=1}^{l}\Big[\big(h_{ij}^{\sigma(k)L} - h_j^{+\sigma(k)L}\big)^2 + \big(h_{ij}^{\sigma(k)U} - h_j^{+\sigma(k)U}\big)^2\Big]},\quad i = 1, 2, \ldots, m \qquad (35)$$

$$\tilde{d}_i^- = \sum_{j=1}^{n} w_j\, d\big(\tilde{h}_{ij}, \tilde{h}_j^-\big) = \sum_{j=1}^{n} w_j \sqrt{\frac{1}{2l}\sum_{k=1}^{l}\Big[\big(h_{ij}^{\sigma(k)L} - h_j^{-\sigma(k)L}\big)^2 + \big(h_{ij}^{\sigma(k)U} - h_j^{-\sigma(k)U}\big)^2\Big]},\quad i = 1, 2, \ldots, m \qquad (36)$$

The relative closeness coefficient of an alternative Ai with respect to the interval-valued hesitant fuzzy PIS $\tilde{A}^+$ is defined as the following formula:

$$\tilde{C}_i = \frac{\tilde{d}_i^-}{\tilde{d}_i^+ + \tilde{d}_i^-} \qquad (37)$$

where 0 ≤ C̃i ≤ 1, i = 1, 2, ..., m. Obviously, an alternative Ai is closer to the interval-valued hesitant fuzzy PIS (Ã+) and farther from the interval-valued hesitant fuzzy NIS (Ã−) as C̃i approaches 1. Therefore, according to the closeness coefficient C̃i, we can determine the ranking order of all alternatives and select the best one.

Based on the above models, analogous to Section 3.3, we develop a practical approach for decision making with interval-valued hesitant fuzzy information, in which the information about attribute weights is incompletely known or completely unknown. The approach involves the following steps:

Step 1. For a MADM problem, we construct the decision matrix $\tilde{H} = [\tilde{h}_{ij}]_{m \times n}$, where all the arguments $\tilde{h}_{ij}$ (i = 1, 2, ..., m; j = 1, 2, ..., n) are IVHFEs, given by the DMs, for the alternative Ai ∈ A with respect to the attribute xj ∈ X.
Step 2. If the information about the attribute weights is completely unknown, then we obtain the attribute weights by using Eq. (32); if the information about the attribute weights is partly known, then we solve the model (M-4) to obtain the attribute weights.
Step 3. Utilize Eqs. (33) and (34) to determine the interval-valued hesitant fuzzy PIS Ã+ and the interval-valued hesitant fuzzy NIS Ã−.
Step 4. Utilize Eqs. (35) and (36) to calculate the separation measures d̃i+ and d̃i− of each alternative Ai from the interval-valued hesitant fuzzy PIS Ã+ and the interval-valued hesitant fuzzy NIS Ã−, respectively.
Step 5. Utilize Eq. (37) to calculate the relative closeness coefficient C̃i of each alternative Ai to the interval-valued hesitant fuzzy PIS Ã+.
Step 6. Rank the alternatives Ai (i = 1, 2, ..., m) according to the relative closeness coefficients C̃i (i = 1, 2, ..., m) to the interval-valued hesitant fuzzy PIS Ã+ and then select the most desirable one(s).

5. An energy policy selection example and comparison analysis of the derived results

In this section, an energy policy selection problem (adopted from [30,45]) is first used to demonstrate the applicability and the implementation process of our approach under the hesitant fuzzy environment. Then a comparison analysis of the computational results is conducted to show its superiority. Finally, the energy policy selection problem is also used to demonstrate the applicability and the implementation process of our approach under the interval-valued hesitant fuzzy environment.

5.1. An energy policy selection problem under hesitant fuzzy environment and the analysis process

Energy is an indispensable factor for the social and economic development of societies. Since the chosen energy policy affects both economic development and the environment, selecting the most appropriate energy policy is very important.
Suppose that there are five alternatives (energy projects) Ai (i = 1, 2, 3, 4, 5) and four attributes: P1: technological; P2: environmental; P3: socio-political; P4: economic. Several DMs are invited to evaluate the performances of the five alternatives. For an alternative under an attribute, although all of the DMs provide their evaluation values, some of these values may be repeated. However, a value repeated more times does not indicate that it has more importance than values repeated fewer times. For example, a value repeated once may be provided by a DM who is an expert in this area, while a value repeated twice may be provided by two DMs who are not familiar with this area. In such cases, the value repeated once may be more important than the one repeated twice. To get a more reasonable result, it is better that the DMs give their evaluations anonymously. We only collect all of the possible values for an alternative under an attribute, and each value provided only means that it is a possible value, but its importance is unknown. Thus the number of times that a value is repeated is unimportant, and it
is reasonable to let values that are repeated many times appear only once. The HFE is just a tool to deal with such cases, and all possible evaluations for an alternative under an attribute can be considered as an HFE. The results evaluated by the DMs are contained in a hesitant fuzzy decision matrix, shown in Table 1. The hierarchical structure of this decision making problem is shown in Fig. 2.

Obviously, the numbers of values in different HFEs of HFSs are different. In order to more accurately calculate the distance between two HFSs, we should extend the shorter one until both of them have the same length when we compare them. According to the regulations mentioned above, we consider that the DMs are pessimistic in Example 1, and extend the hesitant fuzzy data by adding the minimal values, as listed in Table 2. Then, we utilize the developed approach to get the most desirable alternative(s), which involves the following two cases:

Case 1: Assume that the information about the attribute weights is completely unknown; we get the most desirable alternative(s) according to the following steps:

Step 1: Utilize Eq. (22) to get the optimal weight vector:
w = (0.2341, 0.2474, 0.3181, 0.2004)^T

Step 2: Utilize Eqs. (23) and (24) to determine the hesitant fuzzy PIS A+ and the hesitant fuzzy NIS A−, respectively:

A+ = {⟨0.9, 0.7, 0.6, 0.6, 0.6⟩, ⟨0.9, 0.8, 0.7, 0.6, 0.6⟩, ⟨0.9, 0.8, 0.7, 0.7, 0.7⟩, ⟨0.9, 0.8, 0.6, 0.6, 0.6⟩}
A− = {⟨0.5, 0.3, 0.3, 0.3, 0.1⟩, ⟨0.7, 0.4, 0.2, 0.1, 0.1⟩, ⟨0.5, 0.1, 0.1, 0.1, 0.1⟩, ⟨0.6, 0.4, 0.3, 0.3, 0.3⟩}

Step 3: Utilize Eqs. (25) and (26) to calculate the separation measures di+ and di− of each alternative Ai from the hesitant fuzzy PIS A+ and the hesitant fuzzy NIS A−, respectively:
d1+ = 0.3555, d2+ = 0.3277, d3+ = 0.2418, d4+ = 0.3865, d5+ = 0.1702
d1− = 0.1976, d2− = 0.2384, d3− = 0.2905, d4− = 0.2040, d5− = 0.4023

Step 4: Utilize Eq. (27) to calculate the relative closeness coefficient Ci of each alternative Ai to the hesitant fuzzy PIS A+:

C1 = 0.3573, C2 = 0.4211, C3 = 0.5458, C4 = 0.3454, C5 = 0.7027

Step 5: Rank the alternatives Ai (i = 1, 2, 3, 4, 5) according to the relative closeness coefficients Ci (i = 1, 2, 3, 4, 5): A5 ≻ A3 ≻ A2 ≻ A1 ≻ A4, and thus the most desirable alternative is A5.

Case 2: The information about the attribute weights is partly known and the known weight information is given as follows:

D = {0.15 ≤ w1 ≤ 0.2, 0.16 ≤ w2 ≤ 0.18, 0.3 ≤ w3 ≤ 0.35, 0.3 ≤ w4 ≤ 0.45, w1 + w2 + w3 + w4 = 1}

Table 1
Hesitant fuzzy decision matrix.

     | P1                        | P2                          | P3                    | P4
A1   | {0.5, 0.4, 0.3}           | {0.9, 0.8, 0.7, 0.1}        | {0.5, 0.4, 0.2}       | {0.9, 0.6, 0.5, 0.3}
A2   | {0.5, 0.3}                | {0.9, 0.7, 0.6, 0.5, 0.2}   | {0.8, 0.6, 0.5, 0.1}  | {0.7, 0.4, 0.3}
A3   | {0.7, 0.6}                | {0.9, 0.6}                  | {0.7, 0.5, 0.3}       | {0.6, 0.4}
A4   | {0.8, 0.7, 0.4, 0.3}      | {0.7, 0.4, 0.2}             | {0.8, 0.1}            | {0.9, 0.8, 0.6}
A5   | {0.9, 0.7, 0.6, 0.3, 0.1} | {0.8, 0.7, 0.6, 0.4}        | {0.9, 0.8, 0.7}       | {0.9, 0.7, 0.6, 0.3}
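Eq. (27) can be checked numerically against the Case 1 figures; small fourth-decimal differences arise because the paper's closeness coefficients were computed from unrounded separation measures:

```python
# Case 1 separation measures for A1..A5, as listed in Step 3.
d_plus = [0.3555, 0.3277, 0.2418, 0.3865, 0.1702]
d_minus = [0.1976, 0.2384, 0.2905, 0.2040, 0.4023]

# Eq. (27): C_i = d_i^- / (d_i^+ + d_i^-)
closeness = [dm / (dp + dm) for dp, dm in zip(d_plus, d_minus)]

# Sorting by decreasing closeness reproduces the ranking A5 > A3 > A2 > A1 > A4.
ranking = sorted(range(5), key=lambda i: -closeness[i])
```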
Fig. 2. The energy policy selection hierarchical structure. (Goal: selection of the best energy policy; attributes: technological P1, environmental P2, socio-political P3, economic P4; alternatives: A1–A5.)
Table 2
Hesitant fuzzy decision matrix (after pessimistic extension).

     | P1                          | P2                          | P3                          | P4
A1   | {0.5, 0.4, 0.3, 0.3, 0.3}   | {0.9, 0.8, 0.7, 0.1, 0.1}   | {0.5, 0.4, 0.2, 0.2, 0.2}   | {0.9, 0.6, 0.5, 0.3, 0.3}
A2   | {0.5, 0.3, 0.3, 0.3, 0.3}   | {0.9, 0.7, 0.6, 0.5, 0.2}   | {0.8, 0.6, 0.5, 0.1, 0.1}   | {0.7, 0.4, 0.3, 0.3, 0.3}
A3   | {0.7, 0.6, 0.6, 0.6, 0.6}   | {0.9, 0.6, 0.6, 0.6, 0.6}   | {0.7, 0.5, 0.3, 0.3, 0.3}   | {0.6, 0.4, 0.4, 0.4, 0.4}
A4   | {0.8, 0.7, 0.4, 0.3, 0.3}   | {0.7, 0.4, 0.2, 0.2, 0.2}   | {0.8, 0.1, 0.1, 0.1, 0.1}   | {0.9, 0.8, 0.6, 0.6, 0.6}
A5   | {0.9, 0.7, 0.6, 0.3, 0.1}   | {0.8, 0.7, 0.6, 0.4, 0.4}   | {0.9, 0.8, 0.7, 0.7, 0.7}   | {0.9, 0.7, 0.6, 0.3, 0.3}
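Under completely unknown weight information, Case 1's Step 1 applies the maximizing deviation method: sum the pairwise hesitant Euclidean distances of the alternatives under each attribute and normalize the totals. A sketch applied to the Table 2 data (an illustrative reconstruction of Eq. (22), not the authors' code):

```python
import math

# Table 2: rows A1-A5, columns P1-P4; each HFE sorted decreasingly, length l = 5.
H = [
    [[0.5, 0.4, 0.3, 0.3, 0.3], [0.9, 0.8, 0.7, 0.1, 0.1],
     [0.5, 0.4, 0.2, 0.2, 0.2], [0.9, 0.6, 0.5, 0.3, 0.3]],
    [[0.5, 0.3, 0.3, 0.3, 0.3], [0.9, 0.7, 0.6, 0.5, 0.2],
     [0.8, 0.6, 0.5, 0.1, 0.1], [0.7, 0.4, 0.3, 0.3, 0.3]],
    [[0.7, 0.6, 0.6, 0.6, 0.6], [0.9, 0.6, 0.6, 0.6, 0.6],
     [0.7, 0.5, 0.3, 0.3, 0.3], [0.6, 0.4, 0.4, 0.4, 0.4]],
    [[0.8, 0.7, 0.4, 0.3, 0.3], [0.7, 0.4, 0.2, 0.2, 0.2],
     [0.8, 0.1, 0.1, 0.1, 0.1], [0.9, 0.8, 0.6, 0.6, 0.6]],
    [[0.9, 0.7, 0.6, 0.3, 0.1], [0.8, 0.7, 0.6, 0.4, 0.4],
     [0.9, 0.8, 0.7, 0.7, 0.7], [0.9, 0.7, 0.6, 0.3, 0.3]],
]

def hf_distance(h1, h2):
    # Hesitant fuzzy Euclidean distance between equal-length sorted HFEs.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)) / len(h1))

m, n = len(H), len(H[0])
# Deviation total of attribute j: all pairwise distances between alternatives.
Y = [sum(hf_distance(H[i][j], H[k][j]) for i in range(m) for k in range(m))
     for j in range(n)]
w = [y / sum(Y) for y in Y]  # normalized maximizing-deviation weights
```

Running this recovers the weight vector w = (0.2341, 0.2474, 0.3181, 0.2004)^T reported in Case 1's Step 1, to within rounding.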
Step 1: Utilize the model (M-2) to construct the single-objective model as follows:
max D(w) = 4.4467w1 + 4.6999w2 + 6.0431w3 + 3.8068w4
s.t. w ∈ D, wj ≥ 0, j = 1, 2, 3, 4
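Because D here consists of box constraints plus a unit-sum condition and the objective is linear, this model can be solved by a greedy allocation: start each weight at its lower bound, then assign the remaining mass to the attributes with the largest coefficients first. (A sketch; for a general constraint set D one would use a solver such as MATLAB or Lingo, as the paper notes.)

```python
def maximize_linear(coeffs, bounds):
    """Maximize sum(c_j * w_j) subject to lo_j <= w_j <= hi_j and sum(w_j) = 1.
    A greedy fill is optimal for this box-plus-simplex structure."""
    w = [lo for lo, hi in bounds]
    remaining = 1.0 - sum(w)
    for j in sorted(range(len(coeffs)), key=lambda j: -coeffs[j]):
        step = min(bounds[j][1] - bounds[j][0], remaining)
        w[j] += step
        remaining -= step
    return w

coeffs = [4.4467, 4.6999, 6.0431, 3.8068]       # objective of the model above
bounds = [(0.15, 0.2), (0.16, 0.18), (0.3, 0.35), (0.3, 0.45)]
w = maximize_linear(coeffs, bounds)             # ~ [0.17, 0.18, 0.35, 0.30]
```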
By solving this model, we get the optimal weight vector w = (0.17, 0.18, 0.35, 0.3)^T.

Step 2: Utilize Eqs. (23) and (24) to determine the hesitant fuzzy PIS A+ and the hesitant fuzzy NIS A−, respectively:

A+ = {⟨0.9, 0.7, 0.6, 0.6, 0.6⟩, ⟨0.9, 0.8, 0.7, 0.6, 0.6⟩, ⟨0.9, 0.8, 0.7, 0.7, 0.7⟩, ⟨0.9, 0.8, 0.6, 0.6, 0.6⟩}
A− = {⟨0.5, 0.3, 0.3, 0.3, 0.1⟩, ⟨0.7, 0.4, 0.2, 0.1, 0.1⟩, ⟨0.5, 0.1, 0.1, 0.1, 0.1⟩, ⟨0.6, 0.4, 0.3, 0.3, 0.3⟩}

Step 3: Utilize Eqs. (25) and (26) to calculate the separation measures di+ and di− of each alternative Ai:
d1+ = 0.3527, d2+ = 0.3344, d3+ = 0.2615, d4+ = 0.3865, d5+ = 0.1641
d1− = 0.1910, d2− = 0.2313, d3− = 0.2645, d4− = 0.2200, d5− = 0.4088
Step 4: Utilize Eq. (27) to calculate the relative closeness coefficient Ci of each alternative Ai to the hesitant fuzzy PIS A+:
C1 = 0.3514, C2 = 0.4089, C3 = 0.5027, C4 = 0.3653, C5 = 0.7136
Step 5: Rank the alternatives Ai (i = 1, 2, ..., 5) according to the relative closeness coefficients Ci (i = 1, 2, ..., 5). Clearly, A5 ≻ A3 ≻ A2 ≻ A4 ≻ A1, and thus the best alternative is A5.
Table 3
Intuitionistic fuzzy decision matrix.

     | P1          | P2          | P3          | P4
A1   | ⟨0.3, 0.5⟩  | ⟨0.1, 0.1⟩  | ⟨0.2, 0.5⟩  | ⟨0.3, 0.1⟩
A2   | ⟨0.3, 0.5⟩  | ⟨0.2, 0.1⟩  | ⟨0.1, 0.2⟩  | ⟨0.3, 0.3⟩
A3   | ⟨0.6, 0.3⟩  | ⟨0.6, 0.1⟩  | ⟨0.3, 0.3⟩  | ⟨0.4, 0.4⟩
A4   | ⟨0.3, 0.2⟩  | ⟨0.2, 0.3⟩  | ⟨0.1, 0.2⟩  | ⟨0.6, 0.1⟩
A5   | ⟨0.1, 0.1⟩  | ⟨0.4, 0.2⟩  | ⟨0.7, 0.1⟩  | ⟨0.3, 0.1⟩
5.2. Comparison analysis with other similar methods

More recently, Nan et al. [46] proposed a fuzzy TOPSIS method for solving MADM problems with intuitionistic fuzzy information. Since the envelopes of HFEs are intuitionistic fuzzy values, and since, under the hesitant fuzzy environment, there is no investigation similar to our approach, in this subsection we compare our approach with the intuitionistic fuzzy TOPSIS (IF-TOPSIS) method of Nan et al. [46], which is the closest to it. Considering the HFEs' envelopes, i.e., intuitionistic fuzzy data, and according to Definition 3, we can transform the hesitant fuzzy data of the energy policy selection problem into the intuitionistic fuzzy data listed in Table 3. Moreover, since the IF-TOPSIS method needs to know the weight values in advance, we also assume the weight vector to be w = (0.2341, 0.2474, 0.3181, 0.2004)^T. With the IF-TOPSIS method proposed in [46], we first need to determine the intuitionistic fuzzy PIS Ã+ and the intuitionistic fuzzy NIS Ã−, respectively:
Ã+ = {⟨0.6, 0.3⟩, ⟨0.6, 0.1⟩, ⟨0.7, 0.1⟩, ⟨0.6, 0.1⟩}
Ã− = {⟨0.1, 0.1⟩, ⟨0.1, 0.1⟩, ⟨0.1, 0.2⟩, ⟨0.3, 0.1⟩}
Then, according to the distance measure of IFVs

$$d_8(A_i, A_k) = \sqrt{\frac{1}{2}\sum_{j=1}^{n} w_j\Big[(u_{ij} - u_{kj})^2 + (v_{ij} - v_{kj})^2 + (\pi_{ij} - \pi_{kj})^2\Big]} \qquad (38)$$

we can calculate the separation measures d̃i+ and d̃i− of each alternative Ai from the intuitionistic fuzzy PIS Ã+ and the intuitionistic fuzzy NIS Ã−, respectively:
d̃1+ = 0.3261, d̃2+ = 0.3372, d̃3+ = 0.1044, d̃4+ = 0.3715, d̃5+ = 0.2335
d̃1− = 0.2138, d̃2− = 0.1521, d̃3− = 0.4209, d̃4− = 0.1035, d̃5− = 0.2615

Furthermore, we also calculate the relative closeness coefficient C̃i of each alternative Ai to the intuitionistic fuzzy PIS Ã+ as follows:
C̃1 = 0.3960, C̃2 = 0.3108, C̃3 = 0.7942, C̃4 = 0.2458, C̃5 = 0.5283
Finally, according to the relative closeness coefficients C̃i (i = 1, 2, 3, 4, 5), we rank the alternatives Ai (i = 1, 2, 3, 4, 5): A3 ≻ A5 ≻ A1 ≻ A2 ≻ A4. Thus the most desirable alternative is A3. Notice that the ranking order obtained by our approach is A5 ≻ A3 ≻ A2 ≻ A1 ≻ A4. Obviously, the ranking order of the alternatives obtained by Nan et al.'s method is remarkably different from that obtained by the approach proposed in this paper. The differences lie in the ranking orders between A3 and A5, and between A1 and A2: A3 ≻ A5 and A1 ≻ A2 for the former, while A5 ≻ A3 and A2 ≻ A1 for the latter; that is, the ranking orders of these pairs of alternatives are just converse. The main reason is that our approach works with the hesitant fuzzy information directly, which is represented by several possible values rather than by a margin of error (as in IFVs), whereas Nan et al.'s method requires transforming HFEs into IFVs; the resulting difference in the accuracy of the data of the two types has an effect on the final decision results. Thus it is not hard to see that our approach has some desirable advantages over Nan et al.'s method [46], as follows:

(1) Our approach extends the TOPSIS method to take into account hesitant fuzzy assessments, which are well suited to handle the ambiguity and imprecision inherent in MADM problems; it does not need to transform HFEs into IFVs but deals with these problems directly, and thus obtains better final decision results. In particular, in situations where the information is represented by several possible values, our approach shows great superiority in handling decision making problems with hesitant fuzzy information.

(2) Our approach utilizes the maximizing deviation method to objectively determine the weight values of attributes, which is more reasonable, while Nan et al.'s method needs the DM to
provide the weight values in advance, which is subjective and sometimes cannot yield persuasive results.

In addition, although there is no investigation similar to the approach proposed in this paper, there are a few methods that extend information aggregation techniques [29,31–34] to MADM under hesitant fuzzy scenarios. Compared with those of [29,31–34], our approach not only is capable of dealing with situations in which the weight information of the attributes is unknown or partly known, but also can reduce the information loss that always happens in the process of information aggregation.

5.3. The analysis process under interval-valued hesitant fuzzy environment

In the example of Subsection 5.1, suppose the decision information provided by the DMs is expressed in IVHFEs, as listed in the interval-valued hesitant fuzzy decision matrix (see Table 4). Obviously, the numbers of values in different IVHFEs of IVHFSs are different. In order to more accurately calculate the distance between two IVHFSs, we should extend the shorter one until both of them have the same length when we compare them. According to the regulations mentioned above, we consider that the DMs are pessimistic in this example, and extend the interval-valued hesitant fuzzy data by adding the minimal values, as listed in Table 5. Then, we proceed to utilize the developed approach to get the most desirable alternative(s), which involves the following two cases:

Case 3: Assume that the information about the attribute weights is completely unknown; then we get the most desirable alternative(s) according to the following steps:

Step 1: Utilize Eq. (32) to get the optimal weight vector:
w = (0.2537, 0.2428, 0.29, 0.2135)^T

Step 2: Utilize Eqs. (33) and (34) to determine the interval-valued hesitant fuzzy PIS Ã+ and the interval-valued hesitant fuzzy NIS Ã−, respectively:

Ã+ = {⟨[0.7, 0.9], [0.6, 0.7], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]⟩, ⟨[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.5, 0.6], [0.5, 0.6]⟩, ⟨[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.6, 0.7], [0.6, 0.7]⟩, ⟨[0.8, 0.9], [0.7, 0.8], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]⟩}

Ã− = {⟨[0.2, 0.5], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3], [0.1, 0.3]⟩, ⟨[0.6, 0.7], [0.3, 0.4], [0.1, 0.2], [0.1, 0.2], [0.1, 0.2]⟩, ⟨[0.3, 0.5], [0.1, 0.3], [0.1, 0.2], [0.1, 0.3], [0.1, 0.2]⟩, ⟨[0.5, 0.6], [0.3, 0.4], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]⟩}
Table 4
Interval-valued hesitant fuzzy decision matrix.

A1: P1 {[0.3, 0.5], [0.3, 0.4], [0.2, 0.3]}; P2 {[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.1, 0.3]}; P3 {[0.3, 0.5], [0.3, 0.4], [0.1, 0.2]}; P4 {[0.8, 0.9], [0.4, 0.6], [0.4, 0.5], [0.2, 0.3]}
A2: P1 {[0.2, 0.5], [0.2, 0.3]}; P2 {[0.7, 0.9], [0.6, 0.7], [0.4, 0.6], [0.4, 0.5], [0.1, 0.2]}; P3 {[0.6, 0.8], [0.4, 0.6], [0.4, 0.5], [0.1, 0.3]}; P4 {[0.6, 0.7], [0.3, 0.4], [0.2, 0.3]}
A3: P1 {[0.5, 0.7], [0.5, 0.6]}; P2 {[0.7, 0.9], [0.5, 0.6]}; P3 {[0.6, 0.7], [0.4, 0.5], [0.2, 0.3]}; P4 {[0.5, 0.6], [0.3, 0.4]}
A4: P1 {[0.7, 0.8], [0.6, 0.7], [0.3, 0.4], [0.2, 0.3]}; P2 {[0.6, 0.7], [0.3, 0.4], [0.1, 0.2]}; P3 {[0.7, 0.8], [0.1, 0.3]}; P4 {[0.7, 0.9], [0.7, 0.8], [0.5, 0.6]}
A5: P1 {[0.7, 0.9], [0.6, 0.7], [0.5, 0.6], [0.2, 0.4], [0.1, 0.3]}; P2 {[0.6, 0.8], [0.6, 0.7], [0.5, 0.6], [0.3, 0.4]}; P3 {[0.7, 0.9], [0.7, 0.8], [0.6, 0.7]}; P4 {[0.7, 0.9], [0.5, 0.7], [0.5, 0.6], [0.2, 0.3]}
Table 5
Interval-valued hesitant fuzzy decision matrix (after pessimistic extension).

A1: P1 {[0.3, 0.5], [0.3, 0.4], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]}; P2 {[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.1, 0.3], [0.1, 0.3]}; P3 {[0.3, 0.5], [0.3, 0.4], [0.1, 0.2], [0.1, 0.2], [0.1, 0.2]}; P4 {[0.8, 0.9], [0.4, 0.6], [0.4, 0.5], [0.2, 0.3], [0.2, 0.3]}
A2: P1 {[0.2, 0.5], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]}; P2 {[0.7, 0.9], [0.6, 0.7], [0.4, 0.6], [0.4, 0.5], [0.1, 0.2]}; P3 {[0.6, 0.8], [0.4, 0.6], [0.4, 0.5], [0.1, 0.3], [0.1, 0.3]}; P4 {[0.6, 0.7], [0.3, 0.4], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]}
A3: P1 {[0.5, 0.7], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]}; P2 {[0.7, 0.9], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]}; P3 {[0.6, 0.7], [0.4, 0.5], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]}; P4 {[0.5, 0.6], [0.3, 0.4], [0.3, 0.4], [0.3, 0.4], [0.3, 0.4]}
A4: P1 {[0.7, 0.8], [0.6, 0.7], [0.3, 0.4], [0.2, 0.3], [0.2, 0.3]}; P2 {[0.6, 0.7], [0.3, 0.4], [0.1, 0.2], [0.1, 0.2], [0.1, 0.2]}; P3 {[0.7, 0.8], [0.1, 0.3], [0.1, 0.3], [0.1, 0.3], [0.1, 0.3]}; P4 {[0.7, 0.9], [0.7, 0.8], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]}
A5: P1 {[0.7, 0.9], [0.6, 0.7], [0.5, 0.6], [0.2, 0.4], [0.1, 0.3]}; P2 {[0.6, 0.8], [0.6, 0.7], [0.5, 0.6], [0.3, 0.4], [0.3, 0.4]}; P3 {[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.6, 0.7], [0.6, 0.7]}; P4 {[0.7, 0.9], [0.5, 0.7], [0.5, 0.6], [0.2, 0.3], [0.2, 0.3]}
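The pessimistic extension that turns Table 4 into Table 5, followed by the interval-valued maximizing deviation weights of Eqs. (31) and (32), can be sketched as follows (an illustration assuming intervals stored as [lower, upper] pairs; not the authors' code):

```python
import math

def pad(ivhfe, l):
    # Pessimistic extension: repeat the smallest (last) interval up to length l.
    return ivhfe + [ivhfe[-1]] * (l - len(ivhfe))

# Table 4: rows A1-A5, columns P1-P4 (intervals sorted decreasingly).
H = [
    [[[0.3, 0.5], [0.3, 0.4], [0.2, 0.3]],
     [[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.1, 0.3]],
     [[0.3, 0.5], [0.3, 0.4], [0.1, 0.2]],
     [[0.8, 0.9], [0.4, 0.6], [0.4, 0.5], [0.2, 0.3]]],
    [[[0.2, 0.5], [0.2, 0.3]],
     [[0.7, 0.9], [0.6, 0.7], [0.4, 0.6], [0.4, 0.5], [0.1, 0.2]],
     [[0.6, 0.8], [0.4, 0.6], [0.4, 0.5], [0.1, 0.3]],
     [[0.6, 0.7], [0.3, 0.4], [0.2, 0.3]]],
    [[[0.5, 0.7], [0.5, 0.6]],
     [[0.7, 0.9], [0.5, 0.6]],
     [[0.6, 0.7], [0.4, 0.5], [0.2, 0.3]],
     [[0.5, 0.6], [0.3, 0.4]]],
    [[[0.7, 0.8], [0.6, 0.7], [0.3, 0.4], [0.2, 0.3]],
     [[0.6, 0.7], [0.3, 0.4], [0.1, 0.2]],
     [[0.7, 0.8], [0.1, 0.3]],
     [[0.7, 0.9], [0.7, 0.8], [0.5, 0.6]]],
    [[[0.7, 0.9], [0.6, 0.7], [0.5, 0.6], [0.2, 0.4], [0.1, 0.3]],
     [[0.6, 0.8], [0.6, 0.7], [0.5, 0.6], [0.3, 0.4]],
     [[0.7, 0.9], [0.7, 0.8], [0.6, 0.7]],
     [[0.7, 0.9], [0.5, 0.7], [0.5, 0.6], [0.2, 0.3]]],
]
H = [[pad(h, 5) for h in row] for row in H]  # reproduces Table 5

def ivhf_distance(h1, h2):
    # Interval-valued hesitant fuzzy Euclidean distance (Chen et al. [36]).
    s = sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 for a, b in zip(h1, h2))
    return math.sqrt(s / (2 * len(h1)))

m, n = len(H), len(H[0])
Y = [sum(ivhf_distance(H[i][j], H[k][j]) for i in range(m) for k in range(m))
     for j in range(n)]
w = [y / sum(Y) for y in Y]  # Eq. (32): normalized deviation totals
```

This recovers the weight vector w ≈ (0.2537, 0.2428, 0.2900, 0.2135) reported in Case 3's Step 1 above.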
Step 3: Utilize Eqs. (35) and (36) to calculate the separation measures d̃i+ and d̃i− of each alternative Ai from the interval-valued hesitant fuzzy PIS Ã+ and the interval-valued hesitant fuzzy NIS Ã−, respectively:

d̃1+ = 0.3293, d̃2+ = 0.2977, d̃3+ = 0.2329, d̃4+ = 0.3209, d̃5+ = 0.1580
d̃1− = 0.1760, d̃2− = 0.1807, d̃3− = 0.2451, d̃4− = 0.2156, d̃5− = 0.3419

Step 4: Utilize Eq. (37) to calculate the relative closeness coefficient C̃i of each alternative Ai to the interval-valued hesitant fuzzy PIS Ã+:

C̃1 = 0.3483, C̃2 = 0.3777, C̃3 = 0.5127, C̃4 = 0.4019, C̃5 = 0.6839

Step 5: Rank the alternatives Ai (i = 1, 2, 3, 4, 5) according to the relative closeness coefficients C̃i (i = 1, 2, 3, 4, 5). Clearly, A5 ≻ A3 ≻ A4 ≻ A2 ≻ A1, and thus the most desirable alternative is A5.

Case 4: The information about the attribute weights is partly known and the known weight information is given as follows:

D = {0.15 ≤ w1 ≤ 0.2, 0.16 ≤ w2 ≤ 0.18, 0.3 ≤ w3 ≤ 0.35, 0.3 ≤ w4 ≤ 0.45, w1 + w2 + w3 + w4 = 1}

Step 1: Utilize the model (M-4) to construct the single-objective model as follows:

max D(w) = 4.4063w1 + 4.2167w2 + 5.0365w3 + 3.7098w4
s.t. w ∈ D, wj ≥ 0, j = 1, 2, 3, 4

By solving this model, we get the optimal weight vector w = (0.19, 0.16, 0.35, 0.3)^T.

Step 2: Utilize Eqs. (33) and (34) to determine the interval-valued hesitant fuzzy PIS Ã+ and the interval-valued hesitant fuzzy NIS Ã−, respectively:

Ã+ = {⟨[0.7, 0.9], [0.6, 0.7], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]⟩, ⟨[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.5, 0.6], [0.5, 0.6]⟩, ⟨[0.7, 0.9], [0.7, 0.8], [0.6, 0.7], [0.6, 0.7], [0.6, 0.7]⟩, ⟨[0.8, 0.9], [0.7, 0.8], [0.5, 0.6], [0.5, 0.6], [0.5, 0.6]⟩}

Ã− = {⟨[0.2, 0.5], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3], [0.1, 0.3]⟩, ⟨[0.6, 0.7], [0.3, 0.4], [0.1, 0.2], [0.1, 0.2], [0.1, 0.2]⟩, ⟨[0.3, 0.5], [0.1, 0.3], [0.1, 0.2], [0.1, 0.3], [0.1, 0.2]⟩, ⟨[0.5, 0.6], [0.3, 0.4], [0.2, 0.3], [0.2, 0.3], [0.2, 0.3]⟩}

Step 3: Utilize Eqs. (35) and (36) to calculate the separation measures d̃i+ and d̃i− of each alternative Ai (i = 1, 2, 3, 4, 5):

d̃1+ = 0.3401, d̃2+ = 0.3043, d̃3+ = 0.2581, d̃4+ = 0.3139, d̃5+ = 0.1552
d̃1− = 0.1625, d̃2− = 0.1749, d̃3− = 0.2181, d̃4− = 0.2286, d̃5− = 0.3511

Step 4: Utilize Eq. (37) to calculate the relative closeness coefficient C̃i of each alternative Ai to the interval-valued hesitant fuzzy PIS Ã+:

C̃1 = 0.3233, C̃2 = 0.3650, C̃3 = 0.4581, C̃4 = 0.4214, C̃5 = 0.6934
Step 5: Rank the alternatives Ai (i = 1, 2, 3, 4, 5) according to the relative closeness coefficients C̃i (i = 1, 2, 3, 4, 5): A5 ≻ A3 ≻ A4 ≻ A2 ≻ A1, and thus the most desirable alternative is A5.

6. Conclusions

In general, many real-world MADM problems take place in a complex environment and usually involve imprecise data and uncertainty. The HFS or IVHFS is adequate for dealing with the vagueness of a DM's judgments over alternatives with respect to attributes. In this paper, based on the idea that the attribute with a larger deviation value among alternatives should be assigned a larger weight, we have first developed a method called the maximizing deviation method to determine the optimal relative weights of attributes under the hesitant fuzzy or interval-valued hesitant fuzzy environment. An important advantage of the proposed method is its ability to relieve the influence of the subjectivity of the DMs while sufficiently retaining the original decision information. Then we have proposed a novel approach on the basis of TOPSIS to solve MADM problems with hesitant fuzzy or interval-valued hesitant fuzzy information. The approach uses the relative closeness of each alternative to determine the ranking order of all alternatives, which avoids losing too much information in the process of information aggregation. Finally, the effectiveness and applicability of the proposed method have been illustrated with an energy policy selection example. Apparently, our approach is straightforward, incurs less loss of information, and can be applied easily to other managerial decision making problems under the hesitant fuzzy or interval-valued hesitant fuzzy environment.

Acknowledgement

The authors are very grateful to the anonymous reviewers for their insightful and constructive comments and suggestions, which have led to an improved version of this paper. The work was supported by the National Natural Science Foundation of China (Nos. 71071161 and 61273209).

References

[1] C.L. Hwang, K. Yoon, Multiple Attribute Decision Making: Methods and Applications, Springer, Berlin, Heidelberg, New York, 1981.
[2] J.S. Dyer, P.C. Fishburn, R.E. Steuer, J. Wallenius, S. Zionts, Multiple criteria decision making, multiattribute utility theory: the next ten years, Management Science 38 (1992) 645–654.
[3] T.J. Stewart, A critical survey on the status of multiple criteria decision making theory and practice, Omega 20 (1992) 569–586.
[4] Q.W. Cao, J. Wu, The extended COWG operators and their application to multiple attributive group decision making problems with interval numbers, Applied Mathematical Modelling 35 (2011) 2075–2086.
[5] Z.L. Yue, An extended TOPSIS for determining weights of decision makers with interval numbers, Knowledge-Based Systems 24 (2011) 146–153.
[6] X. Zhang, P.D. Liu, Method for multiple attribute decision-making under risk with interval numbers, International Journal of Fuzzy Systems 12 (2010) 237–242.
[7] Z.P. Fan, B. Feng, A multiple attributes decision making method using individual and collaborative attribute data in a fuzzy environment, Information Sciences 179 (2009) 3603–3618.
[8] R.O. Parreiras, P.Y. Ekel, J.S.C. Martini, R.M. Palhares, A flexible consensus scheme for multicriteria group decision making under linguistic assessments, Information Sciences 180 (2010) 1075–1089.
[9] K.T. Atanassov, Intuitionistic fuzzy sets, Fuzzy Sets and Systems 20 (1986) 87–96.
[10] K.T. Atanassov, More on intuitionistic fuzzy sets, Fuzzy Sets and Systems 33 (1989) 37–46.
[11] K.T. Atanassov, Operators over interval-valued intuitionistic fuzzy sets, Fuzzy Sets and Systems 64 (1994) 159–174.
[12] Z.S. Xu, Intuitionistic preference relations and their application in group decision making, Information Sciences 177 (2007) 2363–2379.
[13] V. Torra, Y. Narukawa, On hesitant fuzzy sets and decision, in: The 18th IEEE International Conference on Fuzzy Systems, Jeju Island, Korea, 2009, pp. 1378–1382.
[14] V. Torra, Hesitant fuzzy sets, International Journal of Intelligent Systems 25 (2010) 529–539.
[15] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338–356.
[16] G.R. Jahanshahloo, F.H. Lotfi, M. Izadikhah, An algorithmic method to extend TOPSIS for decision-making problems with interval data, Applied Mathematics and Computation 175 (2006) 1375–1384.
[17] Y.M. Wang, T.M.S. Elhag, Fuzzy TOPSIS method based on alpha level sets with an application to bridge risk assessment, Expert Systems with Applications 31 (2006) 309–319.
[18] Z.L. Yue, An extended TOPSIS for determining weights of decision makers with interval numbers, Knowledge-Based Systems 24 (2011) 146–153.
[19] Z.S. Xu, A deviation-based approach to intuitionistic fuzzy multiple attribute group decision making, Group Decision and Negotiation 19 (2010) 57–76.
[20] Z.S. Xu, Deviation measures of linguistic preference relations in group decision making, Omega 33 (2005) 249–254.
[21] Z.B. Wu, Y.H. Chen, The maximizing deviation method for group multiple attribute decision making under linguistic environment, Fuzzy Sets and Systems 158 (2007) 1608–1617.
[22] G.W. Wei, GRA method for multiple attribute decision making with incomplete weight information in intuitionistic fuzzy setting, Knowledge-Based Systems 23 (2010) 243–247.
[23] P.D. Liu, M.H. Wang, An extended VIKOR method for multiple attribute group decision making based on generalized interval-valued trapezoidal fuzzy numbers, Scientific Research and Essays 6 (2011) 766–776.
[24] S. Opricovic, G.H. Tzeng, Compromise solution by MCDM methods: a comparative analysis of VIKOR and TOPSIS, European Journal of Operational Research 156 (2004) 445–455.
[25] S. Opricovic, G.H. Tzeng, Extended VIKOR method in comparison with outranking methods, European Journal of Operational Research 178 (2007) 514–529.
[26] J.P. Brans, B. Mareschal, P.H. Vincke, PROMETHEE: a new family of outranking methods in multi-criteria analysis, in: J.P. Brans (Ed.), Operational Research, vol. 84, North-Holland, New York, 1984, pp. 477–490.
[27] P.D. Liu, X. Zhang, Research on the supplier selection of supply chain based on entropy weight and improved ELECTRE-III method, International Journal of Production Research 49 (2011) 637–646.
[28] B. Roy, Multicriteria Methodology for Decision Aiding, Kluwer, Dordrecht, The Netherlands, 1996.
[29] M.M. Xia, Z.S. Xu, Hesitant fuzzy information aggregation in decision making, International Journal of Approximate Reasoning 52 (2011) 395–407.
[30] Z.S. Xu, M.M. Xia, Distance and similarity measures for hesitant fuzzy sets, Information Sciences 181 (2011) 2128–2138.
[31] M.M. Xia, Z.S. Xu, N. Chen, Some hesitant fuzzy aggregation operators with their application in group decision making, Group Decision and Negotiation 22 (2) (2013) 259–297.
[32] D.J. Yu, Y.Y. Wu, W. Zhou, Multi-criteria decision making based on Choquet integral under hesitant fuzzy environment, Journal of Computational Information Systems 7 (2011) 4506–4513.
[33] G.W. Wei, Hesitant fuzzy prioritized operators and their application to multiple attribute decision making, Knowledge-Based Systems 31 (2012) 176–182.
[34] D.J. Yu, Y.Y. Wu, W. Zhou, Generalized hesitant fuzzy Bonferroni mean and its application in multi-criteria group decision making, Journal of Information & Computational Science 9 (2012) 267–274.
[35] G. Qian, H. Wang, X. Feng, Generalized hesitant fuzzy sets and their application in decision support system, Knowledge-Based Systems 37 (2013) 357–365.
[36] N. Chen, Z.S. Xu, M.M. Xia, Interval-valued hesitant preference relations and their applications to group decision making, Knowledge-Based Systems 37 (2013) 528–540. [37] Z.S. Xu, A method for multiple attribute decision making with incomplete weight information in linguistic setting, Knowledge-Based Systems 20 (8) (2007) 719–725. [38] Z.S. Xu, M.M. Xia, On distance and correlation measures of hesitant fuzzy information, International Journal of Intelligent Systems 26 (2011) 410–425. [39] Z.S. Xu, Q.L. Da, The uncertain OWA operator, International Journal of Intelligent Systems 17 (2002) 569–575. [40] K.S. Park, S.H. Kim, Tools for interactive multi-attribute decision making with incompletely identified information, European Journal of Operational Research 98 (1997) 111–123. [41] S.H. Kim, B.S. Ahn, Interactive group decision making procedure under incomplete information, European Journal of Operational Research 116 (1999) 498–507. [42] K.S. Park, Mathematical programming models for characterizing dominance and potential optimality when multicriteria alternative values and weights are simultaneously incomplete, IEEE Transactions on Systems, Man, and Cybernetics – Part A, Systems and Humans 34 (2004) 601–614. [43] Z.S. Xu, J. Chen, An interactive method for fuzzy multiple attribute group decision making, Information Sciences 177 (2007) 248–263. [44] Y.M. Wang, Using the method of maximizing deviations to make decision for multi-indices, System Engineering and Electronics 7 (1998) 24–26. [45] C. Kahraman, I. Kaya, A fuzzy multicriteria methodology for selection among energy alternatives, Expert Systems with Applications 37 (2010) 6270–6281. [46] J.X. Nan, D.F. Li, M.J. Zhang, TOPSIS for Multiattribute decision making in IF-set setting, Operations Research and Management Science 3 (2008) 34–37.
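The pipeline summarized in the conclusions (a hesitant fuzzy distance measure, maximizing-deviation attribute weights, and TOPSIS relative closeness) can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the HFE alignment convention (repeating the maximum value), the score-based choice of ideal solutions, and the closeness formula D⁻/(D⁺ + D⁻) are common variants from the hesitant fuzzy literature rather than the specific definitions of this paper.

```python
# Illustrative sketch: hesitant fuzzy TOPSIS with maximizing-deviation
# attribute weights. An HFE is modeled as a list of possible membership
# degrees in [0, 1].

def extend(h1, h2):
    # Align two HFEs to a common length by repeating the maximum value
    # (an "optimistic" convention; pessimistic variants repeat the minimum).
    l = max(len(h1), len(h2))
    pad = lambda h: sorted(h) + [max(h)] * (l - len(h))
    return pad(h1), pad(h2)

def distance(h1, h2):
    # Normalized hesitant Hamming distance between two aligned HFEs.
    a, b = extend(h1, h2)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def deviation_weights(matrix):
    # Maximizing deviation: an attribute that discriminates more strongly
    # among the alternatives receives a larger weight.
    m, n = len(matrix), len(matrix[0])
    dev = [sum(distance(matrix[i][j], matrix[k][j])
               for i in range(m) for k in range(m)) for j in range(n)]
    total = sum(dev)
    return [d / total for d in dev]

def closeness(matrix):
    # TOPSIS relative closeness to the hesitant positive/negative ideal
    # solutions, chosen per attribute by the score (mean of the HFE values).
    w = deviation_weights(matrix)
    n = len(matrix[0])
    score = lambda h: sum(h) / len(h)
    pis = [max((row[j] for row in matrix), key=score) for j in range(n)]
    nis = [min((row[j] for row in matrix), key=score) for j in range(n)]
    cc = []
    for row in matrix:
        d_pos = sum(w[j] * distance(row[j], pis[j]) for j in range(n))
        d_neg = sum(w[j] * distance(row[j], nis[j]) for j in range(n))
        cc.append(d_neg / (d_pos + d_neg))
    return cc

matrix = [  # rows: alternatives; columns: attributes; entries: HFEs
    [[0.3, 0.5], [0.4]],
    [[0.6, 0.7], [0.5, 0.6]],
    [[0.8], [0.7, 0.9]],
]
print(closeness(matrix))  # a larger closeness indicates a better alternative
```

For this toy matrix the third alternative dominates on both attributes, so it obtains the largest relative closeness and ranks first.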