JOURNAL OF COMPUTERS, VOL. 8, NO. 3, MARCH 2013
A New Attribute Reduction Recursive Algorithm Based on Granular Computing

Daoguo Li, School of Management, Hangzhou Dianzi University, Hangzhou, China. Email: [email protected]
Zhaoxia Chen, School of Management, Hangzhou Dianzi University, Hangzhou, China. Email: [email protected]
Jie Yin, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China. Email: [email protected]

Abstract—Existing representative research on attribute reduction has focused on two aspects. The first is improving the efficiency of attribute reduction algorithms that recompute over all attributes, including newly added ones, such as the recursive algorithm that converts a conjunctive normal form into a disjunctive normal form based on the Boolean matrix, and the algorithm based on radix sorting for computing the core and reductions of a given information system. The second is recursive algorithms over objects. The drawback of these methods is that they do not fully use the knowledge already gained when attributes are added to the universe of discourse. Therefore, in this paper, the regularity of changes of the core and reductions when new attributes are added to a given information system is discussed. Moreover, new incremental recursive reduction algorithms for an information system are proposed based on granular computing. Experiments show that these algorithms can quickly and exactly calculate the core and reductions of the new information system by taking advantage of the knowledge of the previous information system.

Index Terms—granular computing, rough set, information system, restrained relative positive region, recursive algorithm, attribute reduction.
I. INTRODUCTION

So far granular computing has no generally accepted formal definition [1, 2]. Essentially, any computing theory or technology that involves elements and granules, where elements are data and granules are basic units of knowledge, may be called granular computing. Granular computing can thus be interpreted as the activity of various agents in classifying, clustering, or grouping some reality, together with the ability to discern phenomena, processes, objects, etc. [3, 4, 5]. It is an effective tool for dealing with imprecision and vagueness and plays an important role in infrastructures for AI engineering [6, 7], uncertainty management, data mining, knowledge engineering, machine learning, and the cognition and computation of information [8, 9, 10].
© 2013 ACADEMY PUBLISHER doi:10.4304/jcp.8.3.630-637
An information system is an important way to express knowledge, portrayed by its core and reductions [11, 12, 13]. When an attribute or a group of attributes is added to an information system, representative research on attribute reduction has focused on two aspects [14, 15]. The first is improving the efficiency of attribute reduction algorithms that recompute over all attributes, including the added ones, without exploiting the knowledge of the primary information system [16, 17], such as the recursive algorithm that converts a conjunctive normal form into a disjunctive normal form based on the Boolean matrix and the algorithm based on radix sorting for computing the core and reductions of a given information system [18, 19, 20]. The second is designing recursive algorithms over objects [21, 22, 23]. The drawback of these methods is that they do not fully use the knowledge already gained when attributes are added to the universe of discourse. In this paper, the regularity of changes of the core and reductions when new attributes are added to a given information system is discussed. Compared with existing schemes, the proposed algorithms improve computation accuracy by exploiting the existing knowledge of the information system. Moreover, the proposed incremental recursive reduction algorithms are based on granular computing, which improves convergence and decreases computation time. Experiments show that these algorithms can quickly and exactly calculate the core and reductions of the new information system by taking advantage of the knowledge of the previous information system.

II. BASIC CONCEPTS AND THEOREMS

A. Basic Concepts About Rough Sets

In this section, some important concepts of classic rough sets are first introduced, and then the definitions of the restrictive relative positive region and of the set of necessary-attribute objects are put forward. The essence of the classifying ability of an information system is then interpreted in terms of these concepts, and three theorems are proved. Finally, the law of change of the core and reductions of an information system as attributes are added is discussed.

Definition 1 [18]: Let the quadruple IS = (U, C, V, f) be an information system. For ∀c ∈ C, if POS(C) ≠ POS(C − {c}), then c is called indispensable; this means that when attribute c is deleted, the classifying ability of the information system is weakened. The set of all indispensable attributes in C is called the core of the information system and is denoted CORE(C); it is unique. Obviously CORE(C) = {c | c ∈ C, POS(C − {c}) ≠ POS(C)}. If ∃B ⊆ C satisfying the following two conditions: (1) B is independent, i.e., ∀b ∈ B, POS(B) ≠ POS(B − {b}); (2) POS(B) = POS(C); then B is called a reduction of the information system. Generally, an information system may have many reductions; RED(C) denotes the family of all reductions of the information system. The following important property establishes the relationship between the core and the reductions.

Proposition 1: CORE(C) = ∩ RED(C).

For two arbitrary information systems IS1 and IS2, we say that the knowledge expressed by IS1 is a specialization of the knowledge expressed by IS2 (conversely, the knowledge expressed by IS2 is coarser than that expressed by IS1) if and only if both IND(P) ≺ IND(Q) and POS_P(Q) = U hold.
B. Concepts of Restrictive Relative Positive Region and Set of Necessary-Attribute Objects

Definition 2 (restrictive relative positive region POS_S(T)|M): Let IS = (U, C, V, f) be an information system. For ∀S, T ⊆ C and ∀M ⊆ U,

POS_S(T)|M = {x | (∀x ∈ M) ∧ ([x]_IND(S) ⊆ [x]_IND(T))}

is called the positive region of S relative to T restricted to the objects set M. In Pawlak's rough sets, the relative positive region is defined over all objects of the given universe, but in many practical problems we may compare the classifying ability of attributes on a subset M of the universe U, which is useful for decreasing the complexity of the problem; the restrictive relative positive region serves this purpose.

Definition 3: Let IS = (U, C, V, f) be an information system. For ∀S ⊆ C, T ⊆ S, the set of all objects that cannot be classified correctly according to the indiscernibility relation over S − T is called the indispensable objects set of attribute set S relative to attribute set T, denoted Δ_S(T). Obviously Δ_S(T) = U − {x | x ∈ U, [x]_{S−T} ⊆ [x]_S} and Δ_S(T) = U − POS_{S−T}(S) hold.
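Definitions 2 and 3 can be computed directly from equivalence classes. Below is a minimal sketch in Python, assuming the information system is stored as a dict mapping each object to a dict of attribute values; the helper names (`eq_class`, `pos_restricted`, `delta`) are illustrative, not from the paper.

```python
def eq_class(table, x, attrs):
    """[x]_IND(attrs): objects agreeing with x on every attribute in attrs."""
    return {y for y in table
            if all(table[y][a] == table[x][a] for a in attrs)}

def pos_restricted(table, S, T, M):
    """POS_S(T)|M: objects of M whose S-class lies inside their T-class."""
    return {x for x in M if eq_class(table, x, S) <= eq_class(table, x, T)}

def delta(table, S, T):
    """Delta_S(T) = U - POS_{S-T}(S): objects misclassified by S - T."""
    U = set(table)
    return U - pos_restricted(table, S - T, S, U)
```

When M = U, `pos_restricted` reduces to the ordinary relative positive region of Pawlak's model.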
The classifying ability of an information system can be understood better through Definition 3. If Δ_C({c}) ≠ ∅, then attribute c belongs to the core of the information system. If ∃B ⊆ C satisfying Δ_B({b}) ≠ ∅ (∀b ∈ B) and POS(B) = POS(C), then the attribute subset B is surely a reduction of the information system, which is easily proved.

C. Basic Theorems

The following three theorems are the foundation of the attribute incremental algorithms presented in this paper; they are proved using the basic concepts above.

Theorem 1: Let IS0 = (U, C, V, f) be an information system, and let IS1 = (U, C*, V*, f*) be the new information system formed by adding an attribute β to IS0, where C* = C ∪ {β}. Then

β ∈ Core(IS1) iff POS_C({β}) ≠ U. (1)

Proof: If POS_C({β}) ≠ U, then ∃x ∈ U with [x]_C ⊄ [x]_{β}, so [x]_C − [x]_{β} ≠ ∅, hence [x]_C ≠ [x]_{C∪{β}}, and therefore POS(C* − {β}) = POS(C) ≠ POS(C*), namely Δ_{C*}({β}) ≠ ∅; according to Definition 3, β ∈ Core(IS1) holds. Conversely, if β ∈ Core(IS1), then POS_C({β}) ≠ U holds obviously.

Theorem 1 shows that if a new attribute added to an information system increases the classifying ability of the system, it belongs to the core of the new information system formed by adding it to the primary information system.

Theorem 2: Let IS0 = (U, C, V, f) be an information system, and let IS1 = (U, C*, V*, f*) be the new information system formed by adding an attribute β to IS0, where C* = C ∪ {β}. For ∀a ∈ Core(IS0),

a ∉ Core(IS1) iff (POS_{β}({a}) | Δ_C({a})) = Δ_C({a}). (2)

Proof: According to the core definition of an information system, an attribute belongs to the core iff its indispensable objects set is not empty; in other words, while the classifying ability of the information system stays unchanged, the attribute cannot be replaced by other attributes of C. According to Definition 3, the classifying ability of the indiscernibility relation of C on the objects set U − Δ_C({a}) is equal to that of C − {a}. In addition, (POS_{β}({a}) | Δ_C({a})) = Δ_C({a}) means that the new attribute β correctly classifies every object of Δ_C({a}), so in IS1 attribute a can be replaced by β without changing classifying ability, i.e., a ∉ Core(IS1); the converse holds by reversing the argument.
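The conditions of Theorems 1 and 2 can be checked by brute force on small systems. A minimal sketch, assuming the same dict-of-dicts representation (the test uses the data of TABLE I from Section IV); the function names are illustrative.

```python
def eq_class(table, x, attrs):
    """[x]_IND(attrs): objects agreeing with x on every attribute in attrs."""
    return {y for y in table
            if all(table[y][a] == table[x][a] for a in attrs)}

def pos(table, S, T):
    """Ordinary relative positive region POS_S(T) over the whole universe."""
    return {x for x in table if eq_class(table, x, S) <= eq_class(table, x, T)}

def beta_in_new_core(table, C, beta):
    """Theorem 1: beta belongs to Core(IS1) iff POS_C({beta}) != U."""
    return pos(table, set(C), {beta}) != set(table)
```

On the system of TABLE I, `beta_in_new_core` reports that b joins the core when added to {a} (since POS_a({b}) = {1, 4} ≠ U), while c does not join the core when added to {a, b}.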
Theorem 3: Let IS0 = (U, C, V, f) be an information system, and let IS1 = (U, C*, V*, f*) be the new information system formed by adding an attribute β to IS0, where C* = C ∪ {β}. Given P0 ∈ RED(C), we define the replaceable set of attribute β relative to a reduction of the information system IS0. Here K_RED(β), the replaceable set of β relative to a reduction RED, is K_RED(β) = {c | c ∈ RED ∧ (POS_{β}({c}) | Δ_RED({c})) = Δ_RED({c})}, and in particular

K_{P0}(β) = {c | (c ∈ P0) ∧ ((POS_{β}({c}) | Δ_{P0}({c})) = Δ_{P0}({c}))}. (5)

When an attribute of K_{P0}(β) is deleted from P0, we get a new set P1. We then compute K_{P1}(β); if K_{P1}(β) ≠ ∅, we continue by deleting an attribute of K_{P1}(β) from P1, and so on, stopping when K_{Pn}(β) = ∅ holds. If POS_{Pn}({β}) ≠ U holds, then Pn ∪ {β} is a reduction of the information system IS1; otherwise Pn is a reduction of the new information system IS1. Therefore, the core and reductions of the new information system may be obtained by means of an attribute incremental algorithm.

III. INCREMENTAL ALGORITHMS

In this section, the laws of change of the core and reductions of an information system to which one attribute or several attributes are added are discussed, and attribute incremental algorithms are presented.

A. Single Attribute Incremental Algorithm

When adding a new attribute to an information system forms a new information system, the core and reductions of the new system may be obtained by modifying the core and reductions of the primary information system according to Theorems 1-3.

Let IS0 = (U, C, V, f) be the primary information system, Core(IS0) the core of IS0, and RED0 a reduction of IS0. Let IS1 = (U, C*, V*, f*) be the new information system formed by adding an attribute set B to IS0, where C* = C ∪ B; Core(IS1) is the core of IS1, and RED1, obtained by modifying RED0, is a reduction of IS1. Let IS+ = (U, B, V+, f+) be the information system composed of the attribute set B and the universe of discourse U; Core(IS+) is the core of IS+ and RED+ a reduction of IS+. Then we get the following equalities:

Core(IS1) = Core(IS0) − {a | a ∈ Core(IS0) ∧ ((POS_{RED+}({a}) | Δ_C({a})) = Δ_C({a}))} + {b | b ∈ Core(IS+) ∧ Δ_B({b}) ∩ (U − POS_{RED0}(RED+)) ≠ ∅}. (3)

RED1 = (RED0 ∘ K_{RED0}({RED_W}))*, where RED_W = (RED+ ∘ K_{RED+}({RED0}))*. (4)

Here S denotes a set, 1 ∗ S = S, 0 ∗ S = ∅, and A ∘ K(α) = {{A − α} | α ∈ K(α)}. RED′ represents the reduction obtained by the previous round of computing, whose initial value is RED0. The operation symbol "*" in formula 4 represents continuous iterative operations until the termination condition is satisfied.

Formulas 6 and 7 are induced by formulas 3 and 4, respectively:

Core(IS1) = Core(IS0) + Θ − Ω. (6)

RED1 = (RED0 ∘ Γ)* + Ψ. (7)

Here Γ = K(β), Θ = sgn({β}) ∗ {β}, Ψ = sgn'({β}) ∗ {β}, and Ω = {a | a ∈ Core(IS0) ∧ (POS_{β}({a}) | Δ_C({a})) = Δ_C({a})}, where sgn({β}) and sgn'({β}) are class symbol functions, namely

sgn({β}) = {1, when POS_{RED0}({β}) ≠ U; 0, otherwise},
sgn'({β}) = {1, when POS_{REDt}({β}) ≠ U; 0, otherwise}.

Algorithm 1: Single attribute incremental algorithm

The procedure of the single attribute incremental algorithm is as follows.

Input: IS0 = (U, C, V, f), Core(IS0), RED0, β, IS1 = (U, C ∪ {β}, V*, f*).
Output: Core(IS1), RED1.
First step: initialize Core(IS1) = Core(IS0), RED1 = RED0; go to the second step.
Second step: compute POS_{RED0}({β}); if POS_{RED0}({β}) = U, go to the fourth step, otherwise go to the third step.
Third step: let RED1 = RED1 ∪ {β}; go to the fourth step.
Fourth step: let S = ∅; for ∀a ∈ Core(IS0), compute M = Δ_C({a}) and N = POS_{β}({a})|M; if M = N, then S = S ∪ {a}. Go to the fifth step.
Fifth step: Core(IS1) = Core(IS1) − S; go to the sixth step.
Sixth step: let T = ∅; for ∀b ∈ RED1, compute M = Δ_{RED1}({b}) and N = POS_{β}({b})|M; if M = N and POS_{RED1−{b}}({b}) ⊂ POS_{RED1−{b}}({β}), then T = T ∪ {b}. Go to the eighth step.
Seventh step: take t = arg min_{w∈T} Card(Δ_{RED1}({w})); then
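The single-attribute procedure above can be sketched in Python. This is a simplified rendering under stated assumptions: the same dict-of-dicts table used earlier, β added to Core(IS1) via the Θ term of formula 6 (the step listing itself only removes core attributes), and illustrative helper names.

```python
def eq_class(table, x, attrs):
    return {y for y in table
            if all(table[y][a] == table[x][a] for a in attrs)}

def pos(table, S, T):
    return {x for x in table if eq_class(table, x, S) <= eq_class(table, x, T)}

def pos_r(table, S, T, M):          # POS_S(T)|M of Definition 2
    return {x for x in M if eq_class(table, x, S) <= eq_class(table, x, T)}

def delta(table, S, T):             # Delta_S(T) of Definition 3
    return set(table) - pos_r(table, S - T, S, set(table))

def single_attr_increment(table, C, core0, red0, beta):
    """Sketch of Algorithm 1: update core and reduction after adding beta."""
    U = set(table)
    core1, red1 = set(core0), set(red0)
    if pos(table, set(red0), {beta}) != U:   # steps 2-3: RED0 cannot discern beta
        red1.add(beta)
    if pos(table, set(C), {beta}) != U:      # Theorem 1 / Theta term of formula 6
        core1.add(beta)
    for a in core0:                          # steps 4-5: Theorem 2 removals
        M = delta(table, set(C), {a})
        if pos_r(table, {beta}, {a}, M) == M:
            core1.discard(a)
    while True:                              # steps 6-8: delete replaceable attrs
        T = {b for b in red1 - {beta}
             if pos_r(table, {beta}, {b}, delta(table, red1, {b}))
                == delta(table, red1, {b})
             and pos(table, red1 - {b}, {b}) < pos(table, red1 - {b}, {beta})}
        if not T:
            break
        t = min(T, key=lambda w: len(delta(table, red1, {w})))  # step 7
        red1.discard(t)
    return core1, red1
```

On the data of TABLE I, adding b to {a} yields core and reduction {a, b}, and then adding c leaves both unchanged, matching the +b and +c columns of TABLE II.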
the attribute t is deleted from RED1, giving the new RED1; go to the sixth step.
Eighth step: if T ≠ ∅, go to the seventh step, otherwise go to the ninth step.
Ninth step: if POS_{RED1}({β}) ≠ U or RED0 ≠ RED1, then RED1 = RED1 ∪ {β}; go to the tenth step.
Tenth step: the algorithm terminates; output Core(IS1) and RED1.

B. Multi-attribute Incremental Algorithm

The change of the core and reductions when many attributes are added to a given information system is discussed based on the idea of the single attribute algorithm. The following formulas may be obtained according to Theorems 1-3 and the idea of the single attribute algorithm.

Let IS0 = (U, C, V, f) be the primary information system, Core(IS0) the core of IS0, and RED0 a reduction of IS0. Let IS1 = (U, C*, V*, f*) be the new information system formed by adding the attribute set B to IS0, where C* = C ∪ B; Core(IS1) is the core of IS1, and RED1, obtained by modifying RED0, is a reduction of IS1. Let IS+ = (U, B, V+, f+) be the information system composed of the attribute set B and the universe of discourse U; Core(IS+) is the core of IS+ and RED+ a reduction of IS+. Then we get the following equalities:

Core(IS1) = Core(IS0) − {a | a ∈ Core(IS0) ∧ ((POS_{RED+}({a}) | Δ_C({a})) = Δ_C({a}))} + {b | b ∈ Core(IS+) ∧ Δ_B({b}) ∩ (U − POS_{RED0}(RED+)) ≠ ∅}. (8)

RED1 = (RED0 ∘ K_{RED'_0}({RED_w}))*, where RED_w = (RED+ ∘ K_{RED'_+}({RED0}))*. (9)

Formulas 8 and 9 show the interrelationship among the cores and reductions of the primary information system, the extension information system formed by adding the new attribute set B to the primary one, and the new information system formed by the attribute set B alone on the universe U. Formula 8 may be transformed into the following equality:

Core(IS1) = Core(IS0) + Core(IS+) − {a | a ∈ Core(IS0) ∧ ((POS_{RED+}({a}) | Δ_C({a})) = Δ_C({a}))} − {b | b ∈ Core(IS+) ∧ Δ_B({b}) ∩ (U − POS_{RED0}(RED+)) = ∅}.

Furthermore, we have the next equality:

Core(IS1) = Core(IS0) + Core(IS+) − {a | a ∈ Core(IS0) ∧ ((POS_{RED+}({a}) | Δ_C({a})) = Δ_C({a}))} − {b | b ∈ Core(IS+) ∧ ((POS_{RED0}({b}) | Δ_B({b})) = Δ_B({b}))}. (10)
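Formula 10 above can be sketched directly: start from the union of the two cores and drop each attribute whose indispensable objects set is already discerned by the other side's reduction. The sketch assumes the dict-of-dicts table used throughout and illustrative names; in the test, {c, d} is used as a reduction RED+ of the subsystem on B = {c, d, e}, since IND({c, d}) = IND({c, d, e}) on TABLE I's data.

```python
def eq_class(table, x, attrs):
    return {y for y in table
            if all(table[y][a] == table[x][a] for a in attrs)}

def pos_r(table, S, T, M):          # POS_S(T)|M of Definition 2
    return {x for x in M if eq_class(table, x, S) <= eq_class(table, x, T)}

def delta(table, S, T):             # Delta_S(T) of Definition 3
    return set(table) - pos_r(table, S - T, S, set(table))

def merged_core(table, C, B, core0, core_plus, red0, red_plus):
    """Sketch of formula 10: Core(IS1) from Core(IS0) and Core(IS+)."""
    core1 = set(core0) | set(core_plus)
    for a in core0:                 # a replaceable by RED+ on Delta_C({a})
        M = delta(table, set(C), {a})
        if pos_r(table, set(red_plus), {a}, M) == M:
            core1.discard(a)
    for b in core_plus:             # b replaceable by RED0 on Delta_B({b})
        M = delta(table, set(B), {b})
        if pos_r(table, set(red0), {b}, M) == M:
            core1.discard(b)
    return core1
```

For TABLE I with C = {a, b} and B = {c, d, e}, the merge yields {b}, matching the core of the full system in Section IV.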
Formula 10 reveals that the core of the extension information system IS1 is obtained by deleting redundant attributes from the union of the cores of the primary information system IS0 and the new information system IS+. Similarly, the reductions of the extension information system IS1 are obtained by removing superfluous attributes from the union of reductions of the new information system IS+ and the primary information system IS0. The multi-attribute incremental algorithm is presented based on formulas 8, 9, and 10.

Algorithm 2: Multi-attribute incremental algorithm

Input: IS0 = (U, C, V, f), Core(IS0), RED0, IS+ = (U, B, V+, f+), Core(IS+), RED+.
Output: Core(IS1), RED1.
The procedure of the multi-attribute incremental algorithm is as follows.
Step 1: let Core(IS1) = Core(IS0) ∪ Core(IS+), RED_w = RED+, RED1 = RED0; go to Step 2.
Step 2: let S0 = ∅; for ∀c ∈ Core(IS0), compute M0 = Δ_C({c}) and N0 = POS_{RED+}({c})|M0; if M0 = N0, then S0 = S0 ∪ {c}. Go to Step 3.
Step 3: let S+ = ∅; for ∀c ∈ Core(IS+), compute M+ = Δ_B({c}) and N+ = POS_{RED0}({c})|M+; if M+ = N+, then S+ = S+ ∪ {c}. Go to Step 4.
Step 4: let Core(IS1) = Core(IS1) − S0 − S+; go to Step 5.
Step 5: let T+ = ∅; for ∀b ∈ RED_w, compute M = Δ_{RED_w}({b}) and N = POS_{RED0}({b})|M; if M = N and POS_{RED_w−{b}}({b}) ⊂ POS_{RED_w−{b}}(RED0), then T+ = T+ ∪ {b}. Go to Step 7.
Step 6: take t = arg min_{w∈T+} Card(Δ_{RED_w}({w})), then remove attribute t from RED_w, i.e., RED_w = RED_w − {t}; go to Step 5.
Step 7: if T+ ≠ ∅, go to Step 6, otherwise go to Step 8.
Step 8: let T0 = ∅; for ∀b ∈ RED1, compute M = Δ_{RED1}({b}) and N = POS_{RED_w}({b})|M (with the final RED_w); if M = N, then T0 = T0 ∪ {b}. Go to Step 10.
Step 9: take t = arg min_{w∈T0} Card(Δ_{RED1}({w})), then remove attribute t from RED1, namely RED1 = RED1 − {t}; go to Step 8.
Step 10: if T0 ≠ ∅, go to Step 9, otherwise go to Step 11.
Step 11: the algorithm terminates; output Core(IS1) and RED1.

C. Knowledge Acquisition Based on Attribute Recursive Algorithms

Theorem 4: Let IS = (U, C, V, f) be a given information system and IS+ = (U, C ∪ C+, V, f) a new information system generated by adding the attribute set C+ to IS. Let α ∈ C+ ∧ α ∉ C; for ∀X ⊆ U, ∀P ∈ RED(IS), then
(P ∪ {α})(X) = P(X) ∪ {α}(X) ∪ Y. (11)

(P ∪ {α})‾(X) = X ∪ (Δ‾_P(X) − Z). (12)

Here S(X) and S‾(X) denote the lower and upper approximations of X with respect to IND(S); Δ_P(X) = X − P(X) and Δ‾_P(X) = P‾(X) − X;

Y = {x | (x ∈ (Δ_P(X) ∩ Δ_{α}(X))) ∧ (∩{[x]_β | β ∈ P ∪ {α}} ⊆ X)};
Z = {x | (∀β ∈ P ∪ {α}) ∧ (x ∈ ∩{Δ‾_β(X)}) ∧ ([x]_{P∪{α}} ⊆ ∩{Δ‾_β(X)})}.

Proof: According to the properties of rough sets, P(X) ∪ {α}(X) ⊆ (P ∪ {α})(X) holds, and x ∈ X ∧ x ∉ P(X) ∪ {α}(X) iff x ∈ Δ_P(X) ∩ Δ_{α}(X). Secondly, x ∈ (P ∪ {α})(X) iff [x]_{P∪{α}} = ∩{[x]_β | β ∈ P ∪ {α}} ⊆ X; so formula 11 holds. Further, (P ∪ {α})‾(X) = X ∪ Δ‾_{P∪{α}}(X) is valid by the properties of rough sets. Owing to x ∈ (P ∪ {α})‾(X) ∧ x ∉ X → x ∈ Δ‾_{P∪{α}}(X), and since [x]_{P∪{α}} ∩ X ≠ ∅ while ∩{Δ‾_β(X) | β ∈ P ∪ {α}} ∩ X = ∅, it follows that [x]_{P∪{α}} ⊄ ∩_{β∈P∪{α}}{Δ‾_β(X)} ⇒ x ∉ Z ⇒ x ∈ Δ‾_P(X) − Z. Thus both formulas hold.

Theorem 5: Let IS = (U, C, V, f) be a given information system and IS+ = (U, C ∪ C+, V, f) a new information system generated by adding the attribute set C+ to IS. Let α ∈ S ∧ S ⊆ C ∪ C+; for ∀X ⊆ U, then

(S − {α})(X) = S(X) ∩ (Δ_{S−{α}}(X))^c. (13)

(S − {α})‾(X) = X ∪ Δ‾_S(X) ∪ W. (14)

Where W = {x | (x ∈ ∩_{β∈S−{α}}{Δ‾_β(X)}) ∧ ([x]_{S−{α}} ⊄ ∩_{β∈S−{α}}{Δ‾_β(X)})}.

Proof: Since (S − {α})(X) ⊆ S(X) ⊆ X and S(X) ∩ Δ_S(X) = ∅, we have (S − {α})(X) = S(X) ∩ (Δ_{S−{α}}(X) ∩ (Δ_S(X))^c)^c, where Δ_{S−{α}}(X) ∩ (Δ_S(X))^c = {x | (x ∈ Δ_{S−{α}}(X)) ∧ (x ∉ Δ_S(X))}; so formula 13 holds. Obviously (x ∉ X) ∧ (x ∈ (S − {α})‾(X)) → x ∈ Δ‾_{S−{α}}(X), and Δ‾_S(X) ⊆ Δ‾_{S−{α}}(X); at the same time, (S − {α})‾(X) = X ∪ Δ‾_{S−{α}}(X), and x ∈ (S − {α})‾(X) ∧ x ∉ X holds iff [x]_{S−{α}} ⊄ ∩_{β∈S−{α}}{Δ‾_β(X)}; thus x ∈ (S − {α})‾(X) ∧ x ∉ X ⇒ x ∈ W, namely formula 14 holds.

Algorithm 3: Knowledge acquisition based on attribute recursive algorithms.

Input: IS+ = (U, C ∪ C+, V', f'), Core(IS), RED(IS).
Output: RED(IS+).
Step 1: compute π(C ∪ C+) = U/IND(C ∪ C+) = {Ei | Ei = [x]_{C∪C+}}.
Step 2: compute r(α) = 1 − D(π(C ∪ C+)/π(C ∪ C+ − {α})), and let C* = {α | r(α) > 0, α ∈ C ∪ C+}; for ∀α ∈ C, calculate C* ∪ {α} (α ∉ C*) and C* − {α} (α ∈ C*) by the rough approximation operators. If C* ⊆ C ∪ C+ and U/IND(C*) ≤ U/IND(C ∪ C+), then go to Step 4, otherwise go to the next step.
Step 3: C* ⇐ C* ∪ {α}.
Step 4: output RED(IS+) = C*.

IV. EXAMPLES ANALYSIS

Example 1: Consider the following medical diagnosis information system, see TABLE I.

TABLE I. AN INFORMATION SYSTEM IS

U | a | b | c | d | e
1 | 0 | 1 | 2 | 0 | 1
2 | 1 | 2 | 0 | 2 | 0
3 | 1 | 0 | 1 | 0 | 2
4 | 2 | 1 | 0 | 1 | 1
5 | 1 | 1 | 0 | 2 | 0

Here U represents the universe of the information system IS, and C = {a, b, c, d, e} is the whole attribute set of IS. Firstly, we get the core Core(IS) = {b} and a reduction RED = {a, b} of IS according to the methods of classic rough sets. Next the core and reduction of IS are obtained by the single attribute incremental algorithm; the changes of the relative parameters can be seen in TABLE II below. For brevity, Δ_C({α}) is denoted Δ_C(α), Δ_RED({α}) is denoted Δ_R(α), and POS_{β}({α})|M is denoted R(β, α).

Analyzing the changes in TABLE II, we can discover the association among the cores of the extension information system, the primary information system, and the new information system. In other words, if a new attribute added to the information system IS0 can discern all objects of the indispensable objects sets of Core(IS0), it must belong to the core of the extension information system; otherwise it must not belong to the core of the extension information system. Meanwhile, if the newly added attribute can discern all objects of the indispensable objects sets of RED0, then it replaces all attributes whose classifying ability is weaker than its own without changing classifying ability, and in this way the reductions of the extension information system are obtained; otherwise it does not belong to the reductions of IS1.

Example 2: We compute the core and reduction of the information system IS by the multi-attribute incremental algorithm; the changes of the relative parameters can be seen in TABLE III below.
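Example 1's baseline (Core(IS) = {b}, RED = {a, b}) can be checked by brute force on TABLE I. A small sketch; `partition`, `core`, and `reductions` are illustrative names, not from the paper.

```python
from itertools import combinations

# The system of TABLE I.
TABLE = {1: dict(a=0, b=1, c=2, d=0, e=1),
         2: dict(a=1, b=2, c=0, d=2, e=0),
         3: dict(a=1, b=0, c=1, d=0, e=2),
         4: dict(a=2, b=1, c=0, d=1, e=1),
         5: dict(a=1, b=1, c=0, d=2, e=0)}

def partition(table, attrs):
    """U/IND(attrs) as a set of frozensets."""
    blocks = {}
    for x, row in table.items():
        blocks.setdefault(tuple(row[a] for a in sorted(attrs)), set()).add(x)
    return {frozenset(b) for b in blocks.values()}

def core(table, C):
    """Attributes whose removal coarsens U/IND(C)."""
    return {c for c in C if partition(table, C - {c}) != partition(table, C)}

def reductions(table, C):
    """All independent subsets of C preserving U/IND(C)."""
    full, reds = partition(table, C), []
    for k in range(1, len(C) + 1):
        for B in map(set, combinations(sorted(C), k)):
            if partition(table, B) == full and \
               all(partition(table, B - {b}) != full for b in B):
                reds.append(B)
    return reds
```

On TABLE I this yields Core = {b}, and among the reductions are {a, b} and {b, d}; the intersection of all reductions is {b}, consistent with Proposition 1.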
TABLE II. CHANGE LAWS OF PARAMETERS ON ALGORITHMS 1 AND 2

Parameter items | +a | +b | +c | +d | +e
POS_RED0({β}) | ∅ | {1,4} | U | U | U
Δ_C({α}) | ∅ | Δ_C(a)=U | Δ_C(a)={1,4,5}, Δ_C(b)={2,3,5} | Δ_C(a)={4,5}, Δ_C(b)={2,5} | Δ_C(b)={2,5}
POS_{β}({α})|Δ_C({α}) | ∅ | R(b,a)={2,3} | R(c,a)={1}, R(c,b)={3} | R(d,a)={4,5}, R(d,b)=∅ | R(e,b)=∅
Δ_RED0({α}) | ∅ | Δ_R(a)=U | Δ_R(a)={1,4,5}, Δ_R(b)={2,3,5} | Δ_R(a)={1,4,5}, Δ_R(b)={2,3,5} | Δ_R(a)={1,4,5}, Δ_R(b)={2,3,5}
POS_{β}({α})|Δ_RED0({α}) | ∅ | R(b,a)={2,3} | R(c,a)={1}, R(c,b)={3} | R(d,a)={1,4,5}, R(d,b)={3} | R(e,a)={5}, R(e,b)={3}
Current attribute set A of IS after adding | {a} | {a,b} | {a,b,c} | {a,b,c,d} | {a,b,c,d,e}
Core(IS) | {a} | {a,b} | {a,b} | {b} | {b}
RED | {a} | {a,b} | {a,b} | {a,b} | {a,b}
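The +b column of TABLE II can be reproduced with the same brute-force helpers (a sketch with illustrative names): before b is added, C = RED0 = {a}, so POS_RED0({b}) = {1, 4}, Δ_C(a) = U, and R(b, a) = POS_b({a})|Δ_C(a) = {2, 3}.

```python
def eq_class(table, x, attrs):
    return {y for y in table
            if all(table[y][a] == table[x][a] for a in attrs)}

def pos_r(table, S, T, M):          # POS_S(T)|M of Definition 2
    return {x for x in M if eq_class(table, x, S) <= eq_class(table, x, T)}

def delta(table, S, T):             # Delta_S(T) of Definition 3
    return set(table) - pos_r(table, S - T, S, set(table))

# Columns a and b of TABLE I, before attribute b is adopted into C.
TABLE = {1: dict(a=0, b=1), 2: dict(a=1, b=2), 3: dict(a=1, b=0),
         4: dict(a=2, b=1), 5: dict(a=1, b=1)}
U = set(TABLE)

pos_red0_beta = pos_r(TABLE, {'a'}, {'b'}, U)   # POS_RED0({b})
delta_c_a = delta(TABLE, {'a'}, {'a'})          # Delta_C({a}); C = {a}, so U
r_b_a = pos_r(TABLE, {'b'}, {'a'}, delta_c_a)   # R(b, a)
```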
TABLE III. CHANGE LAWS OF PARAMETERS ON ALGORITHM 2

Parameter items | +a | +b (Core(IS+)={b}, RED+={b}) | +{c,d,e} (Core(IS+)=∅, RED+={c})
POS_RED0(RED+) | ∅ | {1,4} | U
Δ_C({α}) | ∅ | Δ_C(a)=U | Δ_C(a)={1,4,5}, Δ_C(b)={2,3,5}
Δ_B({β}) | ∅ | Δ_B(b)=U | -
POS_RED+({α})|Δ_C({α}) | ∅ | R(RED+,a)={2,3} | R(RED+,a)={1,4,5}, R(RED+,b)={2}
POS_RED0({β})|Δ_B({β}) | ∅ | R(RED0,b)={1,4} | -
POS_REDw({β}) | ∅ | ∅ | -
Current attribute set A of IS after adding | {a} | {a,b} | {a,b,c,d,e}
Core(IS) | {a} | {a,b} | {b}
RED | {a} | {a,b} | {a,b}

Owing to the full use of the knowledge of the primary information system and of the information system formed by the new attributes, the efficiency of computing the core and reductions of the extension information system with the multi-attribute incremental algorithm is improved.

V. CONCLUSIONS
In this paper, the regularity of changes of the core and reductions when new attributes are added to a given information system has been discussed based on the concepts defined, and single attribute and multi-attribute incremental algorithms (Algorithms 1 and 2) have been presented. Examples show that the efficiency of computing the core and reductions of the extension information system based on the incremental algorithms is improved.
ACKNOWLEDGMENT

This work was supported in part by a grant from the Humanities and Social Sciences Base of Zhejiang (No. GK090205001-25), the open foundations of the Embedded System and Service Computing Key Laboratory of the Ministry of Education at Tongji University, the Defense Industrial Technology Development Program (A3920110001), the National Science Foundation of China (No. 60175016), and Hangzhou Dianzi University (No. KYS091507053), all of which are gratefully acknowledged.

REFERENCES

[1] T.Y. Lin, R. Barot, and S. Tsumoto, "Some Remarks on the Concept of Approximations from the View of Knowledge Engineering," International Journal of Cognitive Informatics and Natural Intelligence, vol. 4, pp. 1-11, April 2010.
[2] L.A. Zadeh, "The concept of a Z-number - A new direction in uncertain computation," 2011 IEEE International Conference on Information Reuse & Integration (IRI 2011), IEEE, Piscataway, NJ, USA, pp. xxii-xxiii, Aug 2011.
[3] Skowron Andrzej, Wasilewski Piotr, "Information systems in modeling interactive computations on granules," Theoretical Computer Science, vol. 412, no. 42, pp. 5939-5959, Sep 2011.
[4] Wu Weizhi, Leung Yee, "Theory and applications of granular labelled partitions in multi-scale decision tables," Information Sciences, vol. 181, no. 18, pp. 3878-3897, Sep 2011.
[5] W. Pedrycz, "Information granules and their use in schemes of knowledge management," Scientia Iranica, vol. 18, no. 3, pp. 602-610, Jun 2011.
[6] A. Rowhanimanesh, M. R. T. Akbarzadeh, "Perception-based heuristic granular search: Exploiting uncertainty for analysis of certain functions," Scientia Iranica, vol. 18, no. 3, pp. 617-626, Jun 2011.
[7] Pedrycz Witold, Song Mingli, "Analytic Hierarchy Process (AHP) in Group Decision Making and its Optimization With an Allocation of Information Granularity," IEEE Transactions on Fuzzy Systems, vol. 19, no. 3, pp. 527-539, Jun 2011.
[8] Polkowski Lech, Artiemjew Piotr, "Granular Computing in the Frame of Rough Mereology. A Case Study: Classification of Data into Decision Categories by Means of Granular Reflections of Data," International Journal of Intelligent Systems, vol. 26, no. 6, pp. 555-571, Jun 2011.
[9] Wang Guoyin, Zhang Qinghua, and Ma Xiao, "Granular Computing Models for Knowledge Uncertainty," Journal of Software, vol. 22, no. 4, pp. 676-694, April 2011.
[10] Qian Yuhua, Liang Jiye, and Wu Weizhi, "Information Granularity in Fuzzy Binary GrC Model," IEEE Transactions on Fuzzy Systems, vol. 19, no. 2, pp. 253-264, Apr 2011.
[11] Yuhua Qian, Jiye Liang, and W. Pedrycz, "An efficient accelerator for attribute reduction from incomplete data in rough set framework," Pattern Recognition, vol. 44, no. 8, pp. 1658-1670, Aug 2011.
[12] Xu Yitian, Wang Laisheng, and Zhang Ruiyan, "A dynamic attribute reduction algorithm based on 0-1 integer programming," Knowledge-Based Systems, vol. 24, no. 8, pp. 1341-1347, Dec 2011.
[13] Yitian Xu, Zhiquan Qi, and Laisheng Wang, "Generalized difference matrix-based attribute reduction algorithm," ICIC Express Letters, vol. 5, no. 3, pp. 841-846, March 2011.
[14] Yao Yuehua, Hong Shan, "Rough Set Attribute Reduction Based on Adaptive Ant Colony Algorithm," Computer Engineering, vol. 37, no. 3, pp. 198-200, Feb 2011.
[15] Han Zhidong, Wang Zhiliang, and Gao Jing, "Efficient Attribute Reduction Algorithm Based on the Idea of Discernibility Object Pair Set," Journal of Chinese Computer Systems, vol. 32, no. 2, pp. 299-304, Feb 2011.
[16] T. Luba, J. Rybnik, "Algorithmic approach to discernibility function with respect to attributes and objects reduction," Foundations of Computing and Decision Sciences, vol. 18, no. 34, pp. 241-258, Jan 1993.
[17] Xu Zhangyan, Liu Zuopeng, and Yang Bingru, "A quick attribute reduction algorithm with complexity of max(O(|C||U|), O(|C|^2|U/C|))," Chinese Journal of Computers, vol. 29, no. 3, pp. 391-399, March 2006.
[18] Wang Jue, Miao Duoqian, "Analysis on attribute reduction strategies of rough set," Journal of Computer Science and Technology, vol. 13, no. 2, pp. 189-193, March 1998.
[19] Hu Qinghua, Xie Zongxia, and Yu Daren, "Hybrid attribute reduction based on a novel fuzzy-rough model and information granulation," Pattern Recognition, vol. 40, no. 12, pp. 3509-3521, Dec 2007.
[20] Y.H. Han, L.C. Dai, "Improved algorithm for incremental learning based on rough sets theory," Computer Engineering and Applications, vol. 43, pp. 185-188, 2010.
[21] R. Zhu, J. Wang, "Power-Efficient Spatial Reusable Channel Assignment Scheme in WLAN Mesh Networks," Mobile Networks and Applications, vol. 17, no. 1, pp. 1-11, 2012.
[22] R. Zhu, "Intelligent Rate Control for Supporting Real-time Traffic in WLAN Mesh Networks," Journal of Network and Computer Applications, vol. 34, no. 5, pp. 1449-1458, 2011.
[23] R. Zhu, Y. Qin and C.-F. Lai, "Adaptive Packet Scheduling Scheme to Support Real-time Traffic in WLAN Mesh Networks," KSII Transactions on Internet and Information Systems, vol. 5, no. 9, pp. 1492-1512, 2011.
Daoguo Li was born in Shanxi in 1965. He graduated from Shanxi Normal University, majoring in Mathematics, and received the Bachelor of Science degree in 1987. In 2000 he obtained the Master of Science degree in Mathematics from Shanxi University, and in 2003 he received the Doctor of Engineering degree from Tongji University, majoring in Mathematics. His main research areas are granular computing, rough set theory, and intelligent information systems. He is now a professor and supervisor of postgraduates at Hangzhou Dianzi University, and deputy director of the Management Science and Information Engineering Research Institute there. He is a member of the CRSSC (Rough Set and Soft Computing Society, Chinese Association for Artificial Intelligence) and of the Shanghai Computer Society.
Zhaoxia Chen was born in Sichuan, China, in 1989. She graduated from the School of Electronic Information, Hangzhou Dianzi University, Hangzhou, China, in 2011, and is now a postgraduate student in the School of Management, Hangzhou Dianzi University. During her undergraduate studies she took part in the design of an efficient light, an active loudspeaker box, and a power divider for a radio-frequency circuit; she was named one of the top ten college students of the School of Electronic Information in 2009 and earned the major award twice and the minor award four times. During her postgraduate studies she has participated in the Defense Industrial Technology Development Program, as well as in the open foundations of the Embedded System and Service Computing Key Laboratory of the Ministry of Education at Tongji University and the Humanities and Social Sciences Base of Zhejiang (No. GK090205001-25).
Jie Yin was born in Shanxi, China, in 1989. He graduated from the School of Information Engineering, Communication University of China, Beijing, China, in 2011, and is now a graduate student in the School of Information and Communication Engineering, Beijing University of Posts and Telecommunications. He has worked as an intern in the State Administration of Radio, Film and Television. As an undergraduate he took part in the Beijing college students electronic design contest, winning the third prize in 2010, and studied speech recognition; his postgraduate research is on information retrieval of text. He has participated in the open foundations of the Embedded System and Service Computing Key Laboratory of the Ministry of Education at Tongji University and the Humanities and Social Sciences Base of Zhejiang (No. GK090205001-25).