Towards attribute reduction in multi-adjoint concept lattices

J. Medina1 and M. Ojeda-Aciego2

1 Department of Mathematics, University of Cádiz. Email: [email protected]
2 Dept. Matemática Aplicada, Universidad de Málaga. Email: [email protected]

Abstract. In Formal Concept Analysis, attribute reduction is an important step in order to reduce the complexity of the computation of the concept lattice. This reduction is more involved in fuzzy environments. In this paper, we present a first approach to reducing the set of attributes in the multi-adjoint concept lattice.
1 Introduction
Formal concept analysis extracts information from databases in which the attributes A and the objects B are related by R. This information is classified into concepts, and an order among them is obtained; the final structure is called a concept lattice. Usually, the set of attributes is large, and the complexity of building the concept lattice is very high. The aim of this paper is to use linguistic labels in order to obtain a method to reduce the number of original attributes, losing as little information as possible. [?, ?, ?, ?, ?, ?, ?]
2 Preliminaries
– Introduce classical FCA.
– Introduce multi-adjoint FCA, but only very briefly, because we will hardly use any of it.
3 Attribute reduction in classical formal concept analysis
Definition 1. Given two concept lattices B(A1, B, R1) and B(A2, B, R2), if for any (X, Y) ∈ B(A2, B, R2) there exists (X′, Y′) ∈ B(A1, B, R1) such that X = X′, then we say that B(A1, B, R1) is finer than B(A2, B, R2), and we write B(A1, B, R1) ≤ B(A2, B, R2).
Partially supported by the Spanish Science Ministry projects TIN2009-14562-C05-01 and TIN2009-14562-C05-03, and by Junta de Andalucía project P09-FQM-5233.
If B(A1, B, R1) ≤ B(A2, B, R2) and B(A2, B, R2) ≤ B(A1, B, R1), then these two concept lattices are said to be isomorphic to each other, and we write B(A1, B, R1) ≅ B(A2, B, R2).

Given a context (A, B, R), if we consider a subset of attributes Y ⊆ A and the restricted relation RY = R ∩ (Y × B), the triple (Y, B, RY) is also a formal context, which can be interpreted as a subcontext of the original one. Hence, we can apply the mappings ↓ and ↑ in this subcontext; there, we will write ↓Y and ↑Y. It is clear that, given X ⊆ B, we have X↑Y = X↑ ∩ Y.

Theorem 1 ([?]). Let (A, B, R) be a formal context. For any Y ⊆ A such that Y ≠ ∅, B(A, B, R) ≤ B(Y, B, RY) holds.

Definition 2. Given a context (A, B, R), if there exists a set of attributes Y ⊆ A such that B(A, B, R) ≅ B(Y, B, RY), then Y is called a consistent set of (A, B, R). Moreover, if B(Y \ {y}, B, RY\{y}) ≇ B(A, B, R) for all y ∈ Y, then Y is called a reduct of (A, B, R). The intersection of all the reducts of (A, B, R) is called the core of (A, B, R).

Theorem 2 ([?]). Let (A, B, R) be a formal context, Y ⊆ A and Y ≠ ∅. Then Y is a consistent set if and only if B(Y, B, RY) ≤ B(A, B, R).

In [?], the authors used the three types of attributes in a formal context originally proposed by Pawlak [?] for rough set theory.

Definition 3. Let Λ be an index set, let (A, B, R) be a formal context, and consider the set {Yi | Yi is a reduct, i ∈ Λ} of all reducts of (A, B, R). Then A can be divided into the following three parts:

1. Absolutely necessary attributes (core attributes): Ac = ∩_{i∈Λ} Yi.
2. Relatively necessary attributes: Ar = (∪_{i∈Λ} Yi) \ (∩_{i∈Λ} Yi).
3. Absolutely unnecessary attributes: Au = A \ (∪_{i∈Λ} Yi).

It can be checked that {ac}↓ ≠ {ar}↓, {ar}↓ ≠ {au}↓ and {ac}↓ ≠ {au}↓, for all ac ∈ Ac, ar ∈ Ar, au ∈ Au.

Now, we will introduce a mechanism in order to obtain a reduct from a given context.
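The notions above can be made concrete on a toy crisp context. The following Python sketch (our own illustration, not part of the paper) represents a context as a set of attribute–object pairs, computes the derivation operators ↑ and ↓, enumerates all extents naively, and checks consistency via the extent comparison of Theorem 2; all function names are ours.

```python
from itertools import combinations

def up(X, A, R):
    """X up-arrow: the attributes related to every object in X."""
    return frozenset(a for a in A if all((a, b) in R for b in X))

def down(Y, B, R):
    """Y down-arrow: the objects related to every attribute in Y."""
    return frozenset(b for b in B if all((a, b) in R for a in Y))

def extents(A, B, R):
    """Extents of all concepts of (A, B, R), obtained by closing every
    attribute subset. Exponential in |A|; fine only for toy contexts."""
    return {down(set(Y), B, R) for k in range(len(A) + 1)
            for Y in combinations(sorted(A), k)}

def is_consistent(Y, A, B, R):
    """Theorem 2: Y is consistent iff B(Y, B, R_Y) <= B(A, B, R), i.e. every
    extent of the full context is also an extent of the subcontext."""
    RY = {(a, b) for (a, b) in R if a in Y}
    return extents(A, B, R) <= extents(Y, B, RY)
```

For instance, in a context where two attributes have identical columns, dropping either of them yields a consistent set, while dropping an attribute with a unique column does not.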
The most important feature is that it is not necessary to obtain all the concepts in order to classify the attributes.

Proposition 1. Let (A, B, R) be a context and consider a ∈ A. If aR = ∩{ai R | ai ∈ {a}↓↑ \ {a}}, then A \ {a} is a consistent set.
Proof. Let us write A′ for A \ {a}. Thus, we need to prove that, given (X, Y) ∈ B(A, B, R), there exists (X′, Y′) ∈ B(A′, B, RA′) such that X = X′.
If a ∉ Y, then Y ⊆ A′ and we consider (X′, Y′) = (X, Y). Otherwise, we have that {a}↓↑ ⊆ Y and we consider Y′ = Y \ {a}. Hence,

    X = Y↓ = ∩{ai R | ai ∈ Y}
           = ∩{ai R | ai ∈ Y \ {a}↓↑} ∩ ∩{ai R | ai ∈ {a}↓↑}
       (∗) = ∩{ai R | ai ∈ Y \ {a}↓↑} ∩ ∩{ai R | ai ∈ {a}↓↑ \ {a}}
           = (Y′)↓A′ = X′

where (∗) follows by hypothesis. □
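Proposition 1 gives a purely column-based test for removing an attribute, with no need to build the lattice. A possible sketch in Python (our own naming; the derivation operators are defined inline so the fragment is self-contained):

```python
def up(X, A, R):
    """X up-arrow: attributes related to every object in X."""
    return {a for a in A if all((a, b) in R for b in X)}

def down(Y, B, R):
    """Y down-arrow: objects related to every attribute in Y."""
    return {b for b in B if all((a, b) in R for a in Y)}

def col(a, B, R):
    """aR: the column of attribute a, i.e. its set of related objects."""
    return {b for b in B if (a, b) in R}

def removable(a, A, B, R):
    """Proposition 1: A \ {a} is consistent whenever
    aR = intersection of ai R over ai in {a}↓↑ \ {a}."""
    others = up(down({a}, B, R), A, R) - {a}
    if not others:
        return False          # empty intersection: the test does not apply
    inter = set(B)
    for ai in others:
        inter &= col(ai, B, R)
    return col(a, B, R) == inter
```

On the toy context used earlier (two attributes with identical columns, one with a unique column), the duplicated attributes pass the test and the unique one does not.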
Lemma 1. Let (A, B, R) be a context and a1, a2 ∈ A. If {a1}↓↑ = {a2}↓↑, then a1R = a2R (which is equivalent to {a1}↓ = {a2}↓).

Given a context (A, B, R), with A = {a1, ..., am}, we have the following set of intents: I0 = {{a1}↓↑, ..., {am}↓↑}. Set equality defines an equivalence relation on I0, and we denote by [Y] the equivalence class of Y, for all Y ⊆ A.

Proposition 2. Let (A, B, R) be a context and a ∈ A. If [{a}↓↑] = {{a1}↓↑, ..., {an}↓↑} and |{a}↓↑| = n, with n ≥ 2, then a1, ..., an are relatively necessary attributes.

Proof. The hypothesis |{a}↓↑| = n implies that {a}↓↑ = {a1, ..., an}, with n ≥ 2. Without loss of generality, we can assume that a = an and, therefore, [{an}↓↑] = {{a1}↓↑, ..., {an}↓↑}. Now, by Lemma 1, we have that a1R = a2R = ··· = anR. As a result, given the concept ({an}↓, {an}↓↑) ∈ B(A, B, R), the extent {an}↓ is equal to each of the aiR, with i ∈ {1, ..., n}. In particular,

    a1R = ∩{ai R | ai ∈ {an}↓↑ \ {a1}}

Hence, by Proposition 1, we obtain that Y = A \ {a1} is consistent and the pair ({an}↓Y, {an}↓Y↑Y) is an element of the concept lattice B(Y, B, RY), where {an}↓Y = {an}↓ = anR and {an}↓Y↑Y = {a2, ..., an}.

The procedure above can be successively applied to the attributes a2, ..., an−1, obtaining that Z = A \ {a1, ..., an−1} is consistent and that ({an}↓Z, {an}↓Z↑Z) is an element of the concept lattice B(Z, B, RZ), where {an}↓Z = {an}↓ = anR and {an}↓Z↑Z = {an}.

Let us see that an belongs to at least one reduct. For each reduct Z′ ⊆ Z, the attribute an must belong to Z′ since, otherwise, there would not exist an element in the resulting concept lattice related to ({an}↓Z, {an}↓Z↑Z), and this would imply that Z′ is not consistent, which is a contradiction.
Now, it is easy to check that an cannot belong to every reduct, since the initial procedure could have been carried out with respect to any other attribute among a1, ..., an−1, with n ≥ 2. Therefore, an (actually, any ai) is a relatively necessary attribute. □

The previous proposition can be extended to the case in which the cardinality of the intent {a}↓↑ is greater than n. The result obtained depends on whether the cardinality is equal to n+1 or strictly greater than that value.

Proposition 3. Let (A, B, R) be a context and a ∈ A. Assume that

    [{a}↓↑] = {{a1}↓↑, ..., {an}↓↑}   and   aR = ∩{ak R | ak ∈ {a}↓↑ \ {a1, ..., an}}

Then the following statements hold:

– If |{a}↓↑| = n + 1, then all the elements in {a}↓↑ are relatively necessary.
– If |{a}↓↑| ≥ n + 2, then a1, ..., an are absolutely unnecessary.

Proof. It follows the idea of the proof of the previous proposition. □
Under certain circumstances, we can thus recognize absolutely unnecessary and relatively necessary attributes, and it is possible to prove that the rest of the attributes are absolutely necessary. Therefore, given a context (A, B, R), where A = {a1, ..., am}, we have a method that computes the character of all the attributes and the reducts of (A, B, R). First, we compute the intents {{a1}↓↑, ..., {am}↓↑} and we apply Propositions 1, 2 and 3 in order to obtain consistent sets of attributes and to classify them. Finally, note that when Proposition 1 can no longer be applied, we have obtained a reduct of (A, B, R). Notice that it is possible to obtain reducts before building the concept lattice, as in [?,?]. As a result, we can notably reduce the complexity of the computation of the concept lattice.
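The first step of this method, computing the closures {a}↓↑ and grouping the attributes whose closures coincide, can be sketched as follows (our own illustration; by Lemma 1, attributes in the same group share the same column, so groups of size at least two are the candidates handled by Proposition 2):

```python
from collections import defaultdict

def closure(a, A, B, R):
    """{a}↓↑ for a crisp context given as a set of (attribute, object) pairs."""
    objs = {b for b in B if (a, b) in R}                    # {a}↓
    return frozenset(x for x in A
                     if all((x, b) in R for b in objs))     # {a}↓↑

def group_by_closure(A, B, R):
    """Group attributes with equal closures; groups of size >= 2 are the
    candidates for relatively necessary attributes (Proposition 2)."""
    groups = defaultdict(set)
    for a in A:
        groups[closure(a, A, B, R)].add(a)
    return dict(groups)
```

On a context where a1 and a2 have identical columns, both end up in the same group, while an attribute with a unique column forms a singleton group.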
4 Attribute reduction with linguistic labels
Attribute reduction is an interesting method in order to reduce the complexity of the computation of concept lattices [?]. The extension of the methods used in classical formal concept analysis to fuzzy environments is far from trivial. In this section, starting from the multi-adjoint concept lattice, we first apply a weak defuzzification, using linguistic labels and a tolerance level given by the user, obtaining a new set of attributes; then we apply the results of the previous section in order to reduce the cardinality of this new set of attributes, with the goal of reducing the size of the original set of attributes.

From now on, we will consider the lattice (L, ⪯) to be the unit interval ([0, 1], ≤). For practical matters, the use of the whole unit interval is excessively expressive, since it is often the case that only a few degrees are needed. Thus, to begin with, assume that the user is asked how many degrees will be required, and we will consider a partition of the unit interval into that number of subintervals. For instance, we will consider In+1 = {[x0, x1], (x1, x2], ..., (xn, xn+1]} such that x0 = 0, xn+1 = 1 and xi < xi+1, for all i ∈ {0, 1, ..., n}.

Now, a set H of linguistic labels, such as low, medium, high, very, more or less, much, essentially, slightly, ..., will be assigned to the previous partition of the unit interval by a mapping φ: H → In+1. Note that, in the rest of the paper, we will often directly refer to the set of labels as an ordered set H = {h0, h1, ..., hn} to denote that φ(hi) = (xi, xi+1]. For example, if we consider H = {Low, Medium-Low, Medium-High, High}, we can assume the following regular partition: I4 = {[0, 1/4], (1/4, 2/4], (2/4, 3/4], (3/4, 1]}, where φ assigns Low to [0, 0.25], Medium-Low to (0.25, 0.5], Medium-High to (0.5, 0.75], and High to (0.75, 1].

Now, we will consider a multi-adjoint context whose set of attributes will be reduced by using the ideas described at the beginning of the section. Let (A, B, R, σ) be a multi-adjoint context and H = {h0, h1, ..., hn} a list of labels. The cardinality of H depends on the level of tolerance that the user may assume. Thus, we will be working with a regular partition of the unit interval into n+1 pieces, In+1, and the mapping φ: H → In+1. We consider a new crisp context (AH, B, Rφ), where the set of objects is equal to the original one, the set of attributes is obtained by composing each of the labels with each attribute from the original A, that is, AH = {hi a | i ∈ {0, ..., n}, a ∈ A}, and, finally, the relation Rφ: AH × B → {0, 1} is defined as

    Rφ(hi a, b) = 1, if R(a, b) ∈ φ(hi);   Rφ(hi a, b) = 0, otherwise.

The following example will be used throughout the rest of the paper in order to illustrate the definitions and the procedure we are introducing.

Example 1.
Let us consider an example in which a number of journals are taken as objects and several parameters appearing in the ISI Journal Citation Report form the set of attributes. The sets of attributes and objects are the following:

    A = {Impact Factor, Immediacy Index, Cited Half-Life, Best Position}
    B = {AMC, CAMWA, FSS, IEEE-FS, IJGS, IJUFKS, JIFS}

where "Best Position" means the best quartile among the different categories under which the journal is included, and the journals considered are Applied Mathematics and Computation (AMC), Computers & Mathematics with Applications (CAMWA), Fuzzy Sets and Systems (FSS), IEEE Transactions on Fuzzy Systems (IEEE-FS), International Journal of General Systems (IJGS), International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems (IJUFKS), and Journal of Intelligent and Fuzzy Systems (JIFS). The fuzzy relation between them, R: A × B → [0, 1], is the normalization to the unit interval [0, 1] of the information in the JCR, and can be seen in Table 1.
Table 1. Fuzzy relation between the attributes and the objects.

R                AMC   CAMWA  FSS   IEEE-FS  IJGS  IJUFKS  JIFS
Impact Factor    0.34  0.21   0.52  0.85     0.43  0.21    0.09
Immediacy Index  0.13  0.09   0.36  0.17     0.1   0.04    0.06
Cited Half-Life  0.31  0.71   0.92  0.65     0.89  0.47    0.93
Best Position    0.75  0.5    1     1        0.5   0.25    0.25
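The construction of the crisp context (AH, B, Rφ) from Table 1 is mechanical, so a small sketch may help. The following Python fragment (our own encoding of the data; attribute names abbreviated as in the tables below) assigns each value its label and builds Rφ:

```python
import bisect

# Fuzzy relation from Table 1 (normalized JCR data); one row per attribute.
R = {
    "IF":  {"AMC": 0.34, "CAMWA": 0.21, "FSS": 0.52, "IEEE-FS": 0.85,
            "IJGS": 0.43, "IJUFKS": 0.21, "JIFS": 0.09},
    "II":  {"AMC": 0.13, "CAMWA": 0.09, "FSS": 0.36, "IEEE-FS": 0.17,
            "IJGS": 0.1,  "IJUFKS": 0.04, "JIFS": 0.06},
    "CHL": {"AMC": 0.31, "CAMWA": 0.71, "FSS": 0.92, "IEEE-FS": 0.65,
            "IJGS": 0.89, "IJUFKS": 0.47, "JIFS": 0.93},
    "BP":  {"AMC": 0.75, "CAMWA": 0.5,  "FSS": 1.0,  "IEEE-FS": 1.0,
            "IJGS": 0.5,  "IJUFKS": 0.25, "JIFS": 0.25},
}
H = ["Low", "Medium-Low", "Medium-High", "High"]   # labels for I4
cuts = [0.25, 0.5, 0.75]                           # right endpoints x1, x2, x3

def label(v):
    """The label h whose interval phi(h) contains v: [0, 1/4], (1/4, 1/2], ..."""
    return H[bisect.bisect_left(cuts, v)]

# Crisp relation: Rphi(h a, b) = 1 iff R(a, b) lies in phi(h).
Rphi = {(f"{h} {a}", b): int(label(v) == h)
        for a, row in R.items() for b, v in row.items() for h in H}
```

Evaluating `Rphi` on the pairs of this example reproduces the rows of the crisp table built next.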
Before computing the multi-adjoint concept lattice, it is certainly advantageous to reduce the number of attributes in order to decrease the complexity of its computation. In order to do this, let us assume a level of tolerance for the defuzzification. In this example we will consider a list of four labels, H = {Low, Medium-Low, Medium-High, High}. Hence, we will be working with the regular partition I4 = {[0, 1/4], (1/4, 2/4], (2/4, 3/4], (3/4, 1]}. Applying the definitions above, we obtain a new context (AH, B, Rφ), where AH and Rφ are shown in Table 2.

Table 2. Crisp relation Rφ between the new attributes AH and the objects B.

Rφ               AMC  CAMWA  FSS  IEEE-FS  IJGS  IJUFKS  JIFS
Low IF            0     1     0      0       0      1      1
Medium-Low IF     1     0     0      0       1      0      0
Medium-High IF    0     0     1      0       0      0      0
High IF           0     0     0      1       0      0      0
Low II            1     1     0      1       1      1      1
Medium-Low II     0     0     1      0       0      0      0
Medium-High II    0     0     0      0       0      0      0
High II           0     0     0      0       0      0      0
Low CHL           0     0     0      0       0      0      0
Medium-Low CHL    1     0     0      0       0      1      0
Medium-High CHL   0     1     0      1       0      0      0
High CHL          0     0     1      0       1      0      1
Low BP            0     0     0      0       0      1      1
Medium-Low BP     0     1     0      0       1      0      0
Medium-High BP    1     0     0      0       0      0      0
High BP           0     0     1      1       0      0      0
Once we have obtained a crisp context, we can apply any method to reduce the number of attributes, although now we have an additional objective: reducing, if possible, all the crisp attributes associated with the same original attribute. Sometimes two crisp attributes are related and we can only reduce one of them; in this case we will eliminate the crisp attribute related to the original attribute for which more crisp attributes have already been reduced. For instance, in Example 1, if the attributes Low IF, Medium-Low IF and High IF can be eliminated and, moreover, Medium-High IF is related to Medium-Low II, then we will reduce Medium-High IF instead of Medium-Low II, because by reducing Low IF, Medium-Low IF, High IF and Medium-High IF we reduce, in the original context, the attribute Impact Factor. Therefore, the final objective will be to look for blocks of crisp attributes in order to reduce original attributes; that is, if we can reduce the crisp attributes Ha = {h0 a, h1 a, ..., hn a}, then we will reduce the attribute a in the original context. [Author note: METHODS FOR THE REDUCTION?]

Example 2. Following Example 1, we can check that Medium-High IF, High IF, Medium-Low II, Medium-High II, High II, Low CHL and Medium-High BP can be reduced. The final context is shown in Table 3.

Table 3. Crisp relation after reduction.

Rφ               AMC  CAMWA  FSS  IEEE-FS  IJGS  IJUFKS  JIFS
Low IF            0     1     0      0       0      1      1
Medium-Low IF     1     0     0      0       1      0      0
Low II            1     1     0      1       1      1      1
Medium-Low CHL    1     0     0      0       0      1      0
Medium-High CHL   0     1     0      1       0      0      0
High CHL          0     0     1      0       1      0      1
Low BP            0     0     0      0       0      1      1
Medium-Low BP     0     1     0      0       1      0      0
High BP           0     0     1      1       0      0      0

Therefore, the reduction obtained in the crisp context has no direct consequences in the original context. Thus, we may affirm that the original attributes are quite independent. Let us now assume, in the original relation, that R(Best Position, IJGS) = 0.25 and that R(Best Position, CAMWA) = 0.75. Hence, the new context is the one shown in Table 4. The attributes that may directly be reduced are Medium-High IF, High IF, Medium-Low II, Medium-High II, High II, Low CHL and Medium-Low BP. But now either Low IF or Low BP, and either Medium-Low IF or Medium-High BP, can be reduced. Therefore, we choose Low IF and Medium-Low IF in order to complete a block of labels for Impact Factor. Finally, we obtain the context given in Table 5. As a consequence, Impact Factor is eliminated in the original context.
Table 4. New relation R′φ between the new attributes AH and the objects B.

R′φ              AMC  CAMWA  FSS  IEEE-FS  IJGS  IJUFKS  JIFS
Low IF            0     1     0      0       0      1      1
Medium-Low IF     1     0     0      0       1      0      0
Medium-High IF    0     0     1      0       0      0      0
High IF           0     0     0      1       0      0      0
Low II            1     1     0      1       1      1      1
Medium-Low II     0     0     1      0       0      0      0
Medium-High II    0     0     0      0       0      0      0
High II           0     0     0      0       0      0      0
Low CHL           0     0     0      0       0      0      0
Medium-Low CHL    1     0     0      0       0      1      0
Medium-High CHL   0     1     0      1       0      0      0
High CHL          0     0     1      0       1      0      1
Low BP            0     1     0      0       0      1      1
Medium-Low BP     0     0     0      0       0      0      0
Medium-High BP    1     0     0      0       1      0      0
High BP           0     0     1      1       0      0      0
Table 5. Crisp relation after reduction.

R                AMC  CAMWA  FSS  IEEE-FS  IJGS  IJUFKS  JIFS
Low II            1     1     0      1       1      1      1
Medium-Low CHL    1     0     0      0       0      1      0
Medium-High CHL   0     1     0      1       0      0      0
High CHL          0     0     1      0       1      0      1
Low BP            0     1     0      0       0      1      1
Medium-High BP    1     0     0      0       1      0      0
High BP           0     0     1      1       0      0      0
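The block criterion described above (eliminate the original attribute a whenever its whole label block Ha was reduced in the crisp context) is easy to state in code. A minimal sketch, with our own naming:

```python
def reducible_originals(A, H, removed):
    """Original attributes a whose complete label block
    Ha = { 'h a' : h in H } was eliminated from the crisp context."""
    return {a for a in A if all(f"{h} {a}" in removed for h in H)}
```

Applied to the second reduction of Example 2 (the nine crisp attributes dropped between Table 4 and Table 5), the only original attribute whose four labels were all removed is Impact Factor, matching the conclusion of the example.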
5 Information retrieval
In this section, we will use the linguistic labels for information retrieval. We will define a measure in order to complete some information that is not given by the relation R in a context. The initial idea is similar to the one given in the previous section; that is, we will consider a list of labels in order to obtain a new crisp context and, then, we will extract information from the relations among the attributes. The aim of this section is, given a multi-adjoint context where some information in the relation has been lost or is simply not available, to complete this information using the relations between attributes; as a consequence, we can obtain an approximate multi-adjoint concept lattice.

Let (A, B, R, σ) be a multi-adjoint context, where we do not know the value R(a0, b0), for an attribute a0 and an object b0, and we want to obtain an approximation of R(a0, b0) in order to obtain the multi-adjoint concept lattice. This approximation can be obtained from the relations among the attribute a0 and the rest of the attributes in A. The procedure will be as follows. First, we consider a list of labels H = {h0, h1, ..., hn}. The cardinality of H depends on the error/tolerance that the user may assume, although this can be improved later. Now, we consider a regular partition of the unit interval into n+1 pieces, In+1, and a corresponding mapping φ: H → In+1.

We consider a new context (AH, Bb0, Rφb0), which will be crisp, and where the set of objects is Bb0 = B \ {b0}, the set of attributes is the composition of the labels with each attribute from the original A, that is, AH = {hi a | i ∈ {0, ..., n}, a ∈ A}, and Rφb0: AH × Bb0 → {0, 1} is defined as

    Rφb0(hi a, b) = 1, if R(a, b) ∈ φ(hi);   Rφb0(hi a, b) = 0, otherwise,

for all hi a ∈ AH and b ∈ Bb0.

In order to ensure that a relation between two attributes is good, we need to introduce a measure. The mapping m: AH × AH → [0, 1] is defined, for each a∗i, a∗j ∈ AH, as³

    m(a∗i, a∗j) = 1 − |a∗i R Δ a∗j R| / |B|

where |·| is the cardinality mapping and Δ is the symmetric difference between a∗i R and a∗j R, that is, a∗i R Δ a∗j R = (a∗i R ∪ a∗j R) \ (a∗i R ∩ a∗j R). Given two attributes a∗i, a∗j ∈ AH, we say that a∗i and a∗j are related with measure m(a∗i, a∗j). This mapping will give the user extra information to decide which approximation of the value R(a0, b0) may be considered. First, we will obtain values for R(hj a0, b0), for all hj ∈ H, and, later, we will decide about R(a0, b0).

We can easily check that, if we find an attribute a∗j, with a∗j = hj a, such that m(hj a0, a∗j) = 1, then both attributes are related to the same objects and we can conclude that RH(hj a0, b0) = RH(a∗j, b0), with a probability of |Bb0|/|B|. Hence, if m(hj a0, a∗j) is close to 1, we may write that R(hj a0, b0) = R(a∗j, b0), with probability

    |Bb0 \ (hj a0 R Δ a∗j R)| / |B|

Otherwise, if m(hj a0, a∗j) = 0, then a∗j and hj a0 are complementary and we can ensure that RH(hj a0, b0) = 1 − RH(a∗j, b0), with the same probability as above, |Bb0|/|B|; and, if m(hj a0, a∗j) is close to 0, we may complete that information as R(hj a0, b0) = 1 − R(a∗j, b0), with probability 1 − |Bb0 \ (hj a0 R Δ a∗j R)| / |B|.

This mapping can be generalized in order to compare a subset of attributes YH ⊆ AH with an attribute a∗j. We obtain the mapping m∩: P(AH) × AH → [0, 1], defined, for each YH ∈ P(AH) and a∗j ∈ AH, as

    m∩(YH, a∗j) = 1 − |Y∩H R Δ a∗j R| / |Bb0|

³ In order to simplify the notation, we will use ∗ to denote the elements of AH.
where Y∩H R = ∩{hj ak R | hj ak ∈ YH} and Δ is the symmetric difference. We could introduce a mapping m∪, but this would not add good values for R(a0, b0).

The interpretation of this mapping is similar to that of the measure m, explained above. Mainly, if m∩(YH, a∗j) is close to 1, then we assume that RH(hj a0, b0) = ∧{RH(hj ak, b0) | hj ak ∈ YH}, with probability |Bb0 \ (Y∩H R Δ a∗j R)| / |B|. Moreover, if m∩(YH, a∗j) is close to 0, then we can write that RH(hj a0, b0) = 1 − ∧{RH(hj ak, b0) | hj ak ∈ YH}, with probability 1 − |Bb0 \ (Y∩H R Δ a∗j R)| / |B|.

In the following example we will consider the context given in the previous section, and we will assume that we do not know some value of the relation R.

Example 3. Let us consider Table 1, in Example 1, and assume that we do not know the value R(Impact Factor, CAMWA). Therefore, we consider the relation R without the column corresponding to CAMWA and we apply the list of labels H = {Low, Medium-Low, Medium-High, High}, the regular partition I4 = {[0, 0.25], (0.25, 0.5], (0.5, 0.75], (0.75, 1]} and the corresponding mapping φ: H → I4, obtaining the context (AH, Bb0, Rφb0), where b0 = CAMWA, given in Table 6.

Now we need to fix the relations among the labels of Impact Factor and the rest of the labelled attributes. In Example 2, we showed that the attributes Low IF and Low BP were related. Now, they are also related and we can explain one from the other. The measure we obtain is the following: as Medium-Low II and Medium-High IF are related to the same objects, the measure attains its greatest value, m(Medium-Low II, Medium-High IF) = 1. Therefore, we can assume that RH(Low IF, CAMWA) = RH(Low BP, CAMWA) = 1, with probability |Bb0|/|B| = 6/7. As R(Immediacy Index, CAMWA) has only one possible value, we can suppose that it is in φ(Low) = [0, 0.25] and we can complete the relation with this value. Note that the original value was 0.09, which is indeed in the selected interval [0, 0.25].
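The measure m compares two crisp columns by their symmetric difference, so it amounts to one line of code. A sketch on the data of Table 6 (column sets transcribed by us; |B| = 7 as in the example):

```python
B = {"AMC", "CAMWA", "FSS", "IEEE-FS", "IJGS", "IJUFKS", "JIFS"}

def m(ai_R, aj_R):
    """m(ai*, aj*) = 1 - |ai*R Δ aj*R| / |B|; ^ is Python's symmetric difference."""
    return 1 - len(ai_R ^ aj_R) / len(B)

# Columns from Table 6 (objects related to each crisp attribute over B \ {CAMWA}).
medium_low_II  = {"FSS"}
medium_high_IF = {"FSS"}
low_IF = {"IJUFKS", "JIFS"}
low_BP = {"IJUFKS", "JIFS"}
```

Here m(medium_low_II, medium_high_IF) evaluates to 1, matching the maximal measure obtained in Example 3, and likewise for the pair Low IF / Low BP; a pair of columns differing on two objects would instead score 1 − 2/7.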
In order to obtain the final concept lattice, we can consider the lattice of subintervals of [0, 1], obtaining the relation given in Table 7, where a value x is considered as the interval [x, x]. Furthermore, once we have studied the complete relations between the fixed attribute and the rest of the attributes, we can eliminate the non-related attributes and consider more labels, in order to obtain more precision in the interval for R(a0, b0).
Table 6. Crisp relation Rφb0 between the new attributes AH and the objects Bb0.

Rφb0             AMC  FSS  IEEE-FS  IJGS  IJUFKS  JIFS
Low IF            0    0      0      0      1      1
Medium-Low IF     1    0      0      1      0      0
Medium-High IF    0    1      0      0      0      0
High IF           0    0      1      0      0      0
Low II            1    0      1      1      1      1
Medium-Low II     0    1      0      0      0      0
Medium-High II    0    0      0      0      0      0
High II           0    0      0      0      0      0
Low CHL           0    0      0      0      0      0
Medium-Low CHL    1    0      0      0      1      0
Medium-High CHL   0    0      1      0      0      0
High CHL          0    1      0      1      0      1
Low BP            0    0      0      0      1      1
Medium-Low BP     0    0      0      1      0      0
Medium-High BP    1    0      0      0      0      0
High BP           0    1      1      0      0      0
Table 7. Fuzzy relation between the attributes and the objects.

R                AMC   CAMWA      FSS   IEEE-FS  IJGS  IJUFKS  JIFS
Impact Factor    0.34  0.21       0.52  0.85     0.43  0.21    0.09
Immediacy Index  0.13  [0, 0.25]  0.36  0.17     0.1   0.04    0.06
Cited Half-Life  0.31  0.71       0.92  0.65     0.89  0.47    0.93
Best Position    0.75  0.5        1     1        0.5   0.25    0.25

6 Conclusions and future work
Note that the partition should not be too fine, since otherwise the complexity of the computation would increase considerably.