JOURNAL OF NETWORKS, VOL. 9, NO. 1, JANUARY 2014


The PeerBehavior Model Based on Continuous Behavioral Observation of P2P Network Neighbors

Xianwen Wu 1*, Zhiliang Xue 1, and Jingwen Zuo 2
1. Department of Information Engineering, Hunan Railway Professional Technology College, Zhuzhou 421001, China
2. Computer Center, College of ChengNan, Changsha University of Science & Technology, Changsha 410076, China
*Corresponding author, Email: [email protected], [email protected], [email protected]

Abstract—Reputation-based trust mechanisms are an important means of evaluating the behavior of P2P network nodes, and they are used to keep P2P applications healthy. A trust mechanism requires each node to evaluate the local trust values of other nodes, but because existing local trust value calculations do not consider two important factors, strategic peers and human evaluation error, the computed values have difficulty accurately reflecting the characteristics of network nodes. This paper proposes PeerBehavior, an evaluation model of the behavior of P2P network neighbors. The PeerBehavior model uses a deterministic finite automaton (DFA) to depict how a neighbor's continuous behavior changes its state, focusing on the negative evaluations a neighbor causes within any continuous run of behavior. The model can both discover strategic peers in the network more accurately and tolerate a certain degree of human evaluation error. Simulation experiments show that the model significantly improves the accuracy of the local trust value and reduces the estimation error of the global trust value, clearly outperforming current local trust value calculation methods.

Index Terms—P2P Network; Trust Mechanism; Reputation; Strategic Peer; Human Judgment Error

I.

INTRODUCTION

Peer-to-peer (P2P) online communities can be seen as truly distributed computing applications in which peers (members) communicate directly with one another to exchange information, distribute tasks, or execute transactions. They can be implemented either on top of a P2P network [1] or using a conventional client-server platform. Gnutella is an example of a P2P community built on top of a P2P platform. Person-to-person online auction sites such as eBay and many business-to-business (B2B) services such as supply-chain-management networks are examples of P2P communities built on top of the client-server architecture. In e-commerce settings P2P communities are often established dynamically with peers that are unrelated and unknown to each other. Peers have to manage the risk involved with the transactions without prior experience and knowledge about each other's reputation. One way to address this uncertainty problem is to develop strategies for establishing trust, and to develop systems that can assist peers in assessing the level of trust they should place on an e-commerce transaction. For example, in a buyer-seller market, buyers are vulnerable to risks because of potential incomplete or distorted information provided by sellers. Trust is critical in such electronic markets as it can provide buyers with high expectations of satisfying exchange relationships.

Recognizing the importance of trust in such communities, an immediate question to ask is how to build trust. There is an extensive amount of research focused on building trust for electronic markets through trusted third parties or intermediaries [7]. However, this is not applicable to self-regulating P2P communities where peers are equal in their roles and there are no entities that can serve as trusted third parties or intermediaries. Reputation systems provide a way of building trust through social control, utilizing community-based feedback about peers' past experiences to help make recommendations and judgments on the quality and reliability of transactions. The challenge of building such a reputation-based trust mechanism in a P2P system is how to effectively cope with various malicious behaviors of peers, such as providing fake or misleading feedback about other peers. Another challenge is how to incorporate various contexts in building trust, as they vary across communities and transactions. Further, the effectiveness of a trust system depends not only on the factors and metrics for building trust, but also on the implementation of the trust model in a P2P system. Most existing reputation mechanisms require a central server for storing and distributing the reputation information. It remains a challenge to build a decentralized P2P trust management system that is efficient, scalable, and secure in both trust computation and trust data storage and dissemination. Lastly, there is also a need for experimental methods of evaluating a given trust model in terms of its effectiveness and benefits.

© 2014 ACADEMY PUBLISHER doi:10.4304/jnw.9.1.223-230
With the growth of P2P networks, P2P applications have gradually come to occupy the majority of network traffic [1]. However, certain characteristics of P2P networks, such as the various forms of distributed malicious nodes and network anonymity, have hindered the development of many applications, especially large-scale P2P e-commerce applications. Malicious nodes try to deceive other nodes for profit or to damage the system, so evaluating nodes in a P2P network, and in particular distinguishing malicious nodes, is crucial.


Reputation-based trust mechanisms have proved to be an effective means of solving the above problems; recently, the trust mechanism on eBay, a user feedback system, has been the subject of many studies [2-3]. Positive feedback increases a seller's revenue, while negative feedback reduces it. Most current trust mechanisms use local trust values provided by nodes and certain methods to calculate a global trust value, so the accuracy of the local trust value greatly affects the accuracy of the global trust value [4]. Existing local trust value calculation methods, such as the simple average [5], the moving average [6], and Bayesian learning [7], are unable to prevent the damage done by strategic peers: a strategic node that betrays once every certain number of honest transactions can still obtain a high local trust value, and thus hide its type. This paper proposes a neighbor behavior evaluation model, PeerBehavior. By observing the continuous behavior of neighbors, the model focuses on the probability that a neighbor causes negative evaluations in any consecutive run of transactions, while taking node evaluation error into account, which helps a node evaluate its neighbors' behavior more accurately. The experimental results show that, compared with other methods, PeerBehavior significantly improves the accuracy of the local trust value, thereby reducing the global trust value estimation error caused by inaccurate local trust values.

Reputation-based trust research stands at the crossroads of several distinct research communities, most notably computer science, economics, and sociology. We first review general reputation research in e-commerce and agent systems, and then review a number of recent works on reputation-based systems in P2P networks. Dellarocas [12] provides a working survey of research in game theory and economics on the topic of reputation. Mui et al.
[2] also give a review summarizing existing works on reputation across diverse disciplines including distributed artificial intelligence, economics, and evolutionary biology. Game-theory-based research [14] lays the foundation for online reputation systems research and provides interesting insight into the complex behavioral dynamics. Most game-theoretic models assume that stage game outcomes are publicly observed. Online feedback mechanisms, in contrast, rely on private (pair-wise) and subjective ratings of stage game outcomes. This introduces two important considerations: the incentive for providing feedback, and the credibility or truthfulness of the feedback [8]. A number of reputation systems and mechanisms have been proposed for online environments and agent systems. Abdul-Rahman et al. [15] proposed a model for supporting trust in virtual communities, based on direct experiences and reputation, and introduced the semantic distance of ratings. However, certain aspects of their model are ad hoc, such as the four trust degrees and the fixed weightings assigned to the feedback. Pujol et al. [6] applied network flow techniques and proposed a generalized algorithm that extracts the reputation in a general class of social networks. Josang et al. [7] developed and evaluated the beta reputation system for electronic markets, modeling reputation as a posterior probability of the beta distribution given a sequence of experiences. Among other things, they showed that a market with limited duration rather than infinite longevity of transaction feedback provides the best condition. Sabater et al. [3] proposed the REGRET system and showed how social network analysis can be used in a reputation system. Sen et al. [11] proposed a word-of-mouth reputation algorithm to select service providers; their focus is on allowing a querying agent to select one of the high-performance service providers with a minimum probabilistic guarantee. Yu et al. [9] developed an approach to social reputation management in which agents' belief ratings are combined using schemes similar to certainty factors, and reputation ratings are propagated through neighbors. Managing Trust [8] was the first reputation-based trust management system for P2P networks; since then the research community has proposed many reputation management mechanisms, such as PeerTrust [4], EigenTrust [5], PowerTrust [9], GossipTrust [10], and the mechanism in [11]. Most reputation-based trust management systems depend on the local trust value reported by each node to calculate the global trust value. As far as we know, local trust value calculations fall into three categories: simple average, moving average, and Bayesian learning. The simple average method simply sums the node's individual transaction evaluations and then averages them; EigenTrust [5] uses this method to calculate local trust values, and the eBay feedback system uses a similar method, summing all transaction evaluations. Such methods have an obvious flaw: they find it difficult to guard against malicious nodes. A strategic node that betrays every once in a while can still obtain a positive local trust value after betrayal.
The moving average [6] is another method to calculate the local trust value; it gives recent transaction evaluations greater weight. The moving average method is highly sensitive to new evaluations, but it is still unable to prevent strategic peers. Moreover, if new evaluations are given excessive weight, a misjudgment of a benign node sharply reduces its local trust value; that is, the method assumes that all of a node's evaluations of its neighbors are correct, an assumption that does not always hold. Some trust management mechanisms, such as PowerTrust [9], use Bayesian learning to calculate the local trust value [7]. Unlike the two methods above, Bayesian learning is a statistical method that treats the node's evaluations as samples reflecting the neighbor's behavioral characteristics. The estimates become more accurate when the sample is large, but there is a large error when the sample is small. II.

PEERBEHAVIOR MODEL

A. Method for Judging the Neighbor's Behavior Type
(1) DFA description of the neighbor behavior type judgment method


To describe the method, this paper designs a deterministic finite automaton (DFA). The DFA has 7 states, divided into four groups: the initial state, the normal-state set, the punished state, and the observed state. Each state represents the node's judgment of the neighbor's behavior, while transitions between states represent how the node's view of the neighbor changes. The DFA is formally described as follows.
A. The behavioral state set QD = {initial state (q0), normal states (q1, q2, q3, q4), punished state (q5), observed state (q6)}.

B. Σ is the input parameter set {C, B, m, n, α}, in which: C represents a neighbor behavior that causes a positive evaluation, with the node judging the neighbor to be cooperating; B represents a neighbor behavior that causes a negative evaluation, with the node judging the neighbor to be betraying; m is the number of consecutive cooperations a neighbor needs to move from the punished state back to the normal states, i.e., the length of the punishment period; n is the number of consecutive cooperations a neighbor needs to move from the observed state back to the normal states, i.e., the length of the observation period. The parameter α is the probability of transferring from state q3 to state q4, i.e., the node's forgetting probability, which expresses the node's tolerance of a single negative experience.
C. FD is the set of accepting states, FD ⊆ QD.
D. δ is the state transfer function.
E. The DFA state transition diagram is shown in figure 1.
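Since the full transition table lives in figure 1, a minimal runnable sketch of the automaton may help. The concrete transitions below are illustrative assumptions following the described semantics, not the authors' exact arrows: cooperation (C) climbs through the normal states, q3 moves to q4 only with the forgetting probability α, any betrayal (B) sends the neighbor to the punished state q5, m consecutive cooperations move it on to the observed state q6, and n further consecutive cooperations return it to the normal states.

```python
import random

class NeighborDFA:
    """Sketch of the neighbor-behavior DFA; transitions are assumed."""

    def __init__(self, m=2, n=4, alpha=0.5, seed=0):
        self.m, self.n, self.alpha = m, n, alpha
        self.rng = random.Random(seed)
        self.state = "q0"   # initial state: every stranger starts benign
        self.coop = 0       # consecutive cooperations while punished/observed

    def step(self, symbol):  # symbol: "C" (cooperate) or "B" (betray)
        if symbol == "B":
            # any betrayal sends (or keeps) the neighbor in the punished state,
            # including a betrayal observed in q6, which resets the punishment
            self.state, self.coop = "q5", 0
        elif self.state in ("q0", "q1"):
            self.state = "q2"
        elif self.state == "q2":
            self.state = "q3"
        elif self.state == "q3":
            # forgotten with probability alpha, otherwise fall back to q2
            self.state = "q4" if self.rng.random() < self.alpha else "q2"
        elif self.state == "q5":
            self.coop += 1
            if self.coop >= self.m:          # m cooperations end the punishment
                self.state, self.coop = "q6", 0
        elif self.state == "q6":
            self.coop += 1
            if self.coop >= self.n:          # n cooperations end the observation
                self.state, self.coop = "q1", 0
        # q4 stays q4 on continued cooperation

    @property
    def punished(self):
        return self.state == "q5"

dfa = NeighborDFA()
for sym in "CCBCCCC":   # a short interaction history: a betrayal mid-run
    dfa.step(sym)
print(dfa.state)        # the neighbor is still being observed
```

With the defaults m = 2 and n = 4, the betrayal above puts the neighbor in q5, two cooperations move it to q6, and it remains under observation until n further cooperations complete.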


It can be obtained that E[A] = α; since a success means a transition from state q3 to q4, it can be deduced that, after an average of 1/α cooperations, state q3 transfers directly to q4, while the remaining 1/α − 1 cooperations steer q3 back to q2. The relationship between T and the tolerance α is therefore:

T = 3 + 2^(1/α − 1), if 0 < α < 1
T = 3, if α = 1        (1)

Assume that a malicious node in the network betrays once every β cooperations, i.e., betrays with probability 1/β, with β > 3; also assume that human evaluation errors occur with probability γ, with γ < 1/3. Only when γ ≤ 1/T ≤ 1/β can the neighbor behavior judgment method distinguish a strategic node's malicious behavior from human error.

a. From 1/T ≤ 1/β we find T ≥ β; by equation (1), 3 + 2^(1/α − 1) ≥ β with β > 3, whose solution is:

α ≤ 1 / (1 + lb(β − 3)), if β > 3
α ≤ 1, if β ≤ 3        (2)

b. From γ ≤ 1/T it is easily obtained that T ≤ 1/γ, i.e., 3 + 2^(1/α − 1) ≤ 1/γ with γ < 1/3, whose solution is:

α ≥ 1 / (1 + lb(1/γ − 3)), if γ < 1/3
α ≥ 1, if γ ≥ 1/3        (3)

Formulas (2) and (3) give the range of values of the tolerance parameter α:

1 / (1 + lb(1/γ − 3)) ≤ α ≤ 1 / (1 + lb(β − 3)), if β > 3 and γ < 1/3
α = 1, if β ≤ 3 and γ ≥ 1/3        (4)

Figure 1. The DFA transition diagram of the evaluation method of neighbor's behavior
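Formulas (1) through (4) can be checked numerically. In the sketch below, lb is the binary logarithm, and the sample value β = 6 is chosen to match the cheat interval C1 used later in the simulations; the consistency checks confirm that the bounds from (4) map back through formula (1) to T = β and T = 1/γ.

```python
from math import log2

def T(alpha):
    """Formula (1): average run of transactions between tolerated betrayals."""
    return 3.0 if alpha == 1 else 3.0 + 2.0 ** (1.0 / alpha - 1.0)

def alpha_range(beta, gamma):
    """Formula (4): admissible tolerance interval for beta > 3, gamma < 1/3."""
    assert beta > 3 and gamma < 1.0 / 3.0
    lower = 1.0 / (1.0 + log2(1.0 / gamma - 3.0))   # from formula (3)
    upper = 1.0 / (1.0 + log2(beta - 3.0))          # from formula (2)
    return lower, upper

lo, hi = alpha_range(beta=6, gamma=0.01)
# sanity check against formula (1): T(upper) = beta and T(lower) = 1/gamma
print(round(lo, 3), round(hi, 3), round(T(hi), 3), round(T(lo), 3))
```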

(2) Analysis of the neighbor behavior judgment
A. The range of the node tolerance parameter α
Let T be the average number of transactions required, after one betrayal B has moved the neighbor from state q1 to q2, for the neighbor to cooperate its way back to state q4; 1/T then represents the largest betrayal frequency that can be tolerated over any continuous period. Define the event A = {state q3 transfers to state q4 after one cooperation}, with Pr(A = 1) = α and Pr(A = 0) = 1 − α.


Since both a strategic node's malicious behavior and human evaluation error cause negative evaluations, only a reasonably chosen node tolerance can distinguish the two cases. Formula (4) gives the reasonable range of the node tolerance for distinguishing malicious behavior from evaluation error.
B. Feature analysis
The method introduces a tolerance α: after a neighbor's behavior causes a negative evaluation, the evaluation is forgotten after the next two transactions, or forgotten in later rounds with probability α. The parameter α reflects the node's degree of tolerance, and also its degree of tolerance of human evaluation error. Introducing node tolerance prevents a malicious node from inferring the strategy through repeated transaction evaluations, thereby ensuring the effectiveness of the strategy.



As soon as a punished neighbor leaves the punished state, it enters n rounds of the observed state. If its behavior in the observed state is judged to be betrayal, its state is reset to the punished state; after n consecutive rounds of transactions judged to be cooperative, the neighbor returns to the normal state, thereby eliminating the negative impact of its past behavior.

B. Local Trust Value Calculation
The local trust value gives an overall assessment based on the neighbor's historical behavior. The calculation comprises two steps: first, the accumulated partial trust value is calculated from the neighbor's history of cooperation; then the result of the neighbor behavior type judgment method is combined to update the local trust value.
A. The neighbor's historical behavior is calculated as follows:

LR = Σ_{i=1}^{n} Evaluation(i) · T(i) / Σ_{i=1}^{n} T(i)        (5)

Parameter description: Evaluation(i) is the node's evaluation of the neighbor's i-th behavior; T(i) is the time span from the i-th transaction to the current time.
B. Local trust value calculation
The local trust value is calculated by combining the result of the neighbor behavior type judgment method with the calculation result of formula (5). When the neighbor behavior judgment method triggers its punishment strategy, the normal local trust value is halved as punishment:

LocalTrust = LR / 2, punishment
LocalTrust = LR, otherwise        (6)
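Formulas (5) and (6) translate directly into code. The sketch below is a literal reading of the formulas; the evaluation encoding (1 for cooperation, 0 for betrayal) and the sample weights are assumptions for illustration.

```python
def historical_trust(evaluations, spans):
    """LR, formula (5): time-weighted average of transaction evaluations."""
    return sum(e * t for e, t in zip(evaluations, spans)) / sum(spans)

def local_trust(evaluations, spans, punished):
    """Formula (6): the accumulated value LR is halved under punishment."""
    lr = historical_trust(evaluations, spans)
    return lr / 2.0 if punished else lr

evals = [1, 1, 0, 1]   # 1 = cooperation, 0 = betrayal (assumed encoding)
spans = [4, 3, 2, 1]   # T(i): time span from the i-th transaction to now
print(local_trust(evals, spans, punished=False))
print(local_trust(evals, spans, punished=True))
```

Note that weighting by the raw time span T(i), as formula (5) states, gives older transactions more weight; an implementation that favors recent behavior would invert the weights.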

III.

THE EXPERIMENTAL RESULTS AND ANALYSIS

A. Simulation Parameter Settings
We conducted simulation experiments in the P2P simulator PeerSim, using its cycle-driven engine to simulate a P2P network of 150 nodes and a trust overlay network with a power-law topology. The experiment assumes that a certain fraction of malicious nodes exists and that the malicious nodes occupy the tail of the power-law curve. Since the global trust value calculation method does not depend on the local trust value method, this paper selects one common global trust value calculation, and the different local trust value calculations are compared on the basis of this unified global trust value. Before each transaction, every node uses maximum likelihood estimation (MLE) [13] to calculate the global trust value of its neighbor. We use the following equation to calculate the estimation error of the global trust value, where τi is the calculated global trust value, τi' represents the type of node i (1 indicates a benign node and 0 a malicious one), and k is the number of transactions in the experiment:

MeanError = Σ_{i=1}^{k} |τi − τi'| / k        (7)
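The error metric of formula (7) is a mean absolute difference between the calculated global trust values and the true node types. A minimal sketch, with hypothetical scores:

```python
def mean_error(global_trust, node_types):
    """Formula (7): mean absolute difference between the calculated
    global trust values tau_i and the true node types tau'_i."""
    assert len(global_trust) == len(node_types)
    k = len(global_trust)
    return sum(abs(t - s) for t, s in zip(global_trust, node_types)) / k

# two benign nodes (type 1) and one malicious node (type 0), hypothetical scores
print(mean_error([0.9, 0.8, 0.3], [1, 1, 0]))
```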

From formulas (5) and (6) it can be seen that a negative evaluation causes the neighbor's local trust value to decline exponentially, while a positive evaluation can only make the local trust value increase linearly.

C. PeerBehavior Analysis
Generally speaking, the PeerBehavior model has the following characteristics. Benevolence: every stranger is assumed to be a benign node. Fault tolerance: after a neighbor's behavior causes a negative evaluation, the node is not eager to punish, but observes whether the neighbor is judged to betray again within the next 3 + 2^((1 − α)/α) rounds of trading; only malicious nodes whose malicious behavior has been finally confirmed are severely punished, which tolerates misjudgments of behavior. Retaliation: once the opponent's behavior is finally determined to be malicious, the node no longer tolerates betrayal in any round of trading, and punishes every betrayal by significantly reducing the local trust value.


Forgiveness: after the opponent's betrayal, once the m punishment rounds and the n rounds of the observation period have passed, the betrayal is forgotten and the neighbor's last betrayal is ultimately forgiven. PeerBehavior thus not only has the excellent properties of the tit-for-tat strategy [12], but also has fault tolerance. These features can propel the entire network toward cooperation, while the introduction of fault tolerance makes the model better adapted to real environments, resulting in more accurate evaluations.


In the experiment, we let each pair of nodes that encounter each other trade 4 consecutive times and evaluate the opponent's behavior after each transaction. We also assume that benign nodes misjudge their opponent with a certain probability, and that malicious nodes betray once every cheat cycle. The values or ranges of the experimental parameters are shown in Table I:

TABLE I. THE VALUE OR RANGE OF THE PARAMETERS IN THE SIMULATION

Parameter | Basic Definition                                | Default Value or Range
N         | Number of peers in the P2P system               | 150
C         | Number of cycles our simulation runs            | 100
m         | Length of staying in the punished state in ETFT | 2
n         | Length of staying in the observed state in ETFT | 4
C1        | Cheat interval of strategic peers               | 6
Iv        | Initial reputation of a stranger                | [0.5, 0.7]
β         | Percentage of strategic peers in the system     | [0.1, 0.9]
γ         | Probability of human judgment error             | [0.01, 0.05]
α         | Tolerance degree in ETFT                        | [0, 1]
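For reference, the settings of Table I can be collected into a configuration dict; the symbolic keys beta, gamma, and alpha mirror the analysis section, and each tuple holds the (low, high) ends of a range.

```python
# Simulation settings from Table I as a configuration dict.
SIMULATION_PARAMS = {
    "N":     150,           # number of peers in the P2P system
    "C":     100,           # number of simulation cycles
    "m":     2,             # length of the punished state in ETFT
    "n":     4,             # length of the observed state in ETFT
    "C1":    6,             # cheat interval of strategic peers
    "Iv":    (0.5, 0.7),    # initial reputation of a stranger
    "beta":  (0.1, 0.9),    # percentage of strategic peers
    "gamma": (0.01, 0.05),  # probability of human judgment error
    "alpha": (0.0, 1.0),    # tolerance degree in ETFT
}
print(len(SIMULATION_PARAMS))
```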



We designed two types of experiments: the first compares the accuracy of the local trust values produced by the different calculation methods; the second investigates the influence of the different local trust value calculations on the global trust value under different parameters. Together they compare the different local trust value calculation methods, the effect of human evaluation error on each of them, and the performance of PeerBehavior under different scenarios. The simulation results are derived according to the equation MeanError = f(β, γ, α), mainly examining the influence on the mean error of the strategic-peer proportion β, the human error probability γ, and the node tolerance α.
B. Simulation Results and Analysis
(1) Comparison of the accuracy of the different local trust value calculation methods

Figure 2. The trend of the mean error with the increase of the proportion of strategic peers in the network

Before each transaction, a node computes the local trust value of its neighbor. If the local trust value is greater than 0.5 and the neighbor is a benign node, or the local trust value is less than 0.5 and the neighbor is a malicious node, we say the local trust value correctly reflects the neighbor's type; otherwise it reflects the type incorrectly. We measured the proportion of incorrectly reflected local trust values under different proportions of strategic peers in the network, i.e., how the error ratio of the local trust value varies as the proportion of strategic peers grows. Under the same experimental parameter settings, we compared the error ratio of PeerBehavior with that of the three other local trust value calculation methods. In the experiment, the proportion of strategic peers in the network grows from 0.1 to 0.9 in steps of 0.1, the human evaluation error is fixed at 0.01, and the node tolerance in PeerBehavior is 0.5. Figure 2 shows how the error ratio of the local trust value changes as the proportion of strategic peers increases. PeerBehavior is significantly better than the other three methods, with an error ratio that always stays below 0.1. For the other three methods, the curves of the simple average and Bayesian learning almost completely coincide, and the error ratio of the local trust value rises substantially as the proportion of strategic peers increases.

(2) Estimation error of the global trust value under different local trust value calculation methods

Under the same experimental parameter settings, we compared the global trust value estimation error of PeerBehavior with that of the three other local trust value calculation methods. In the experiment, the proportion of strategic peers in the network grows from 0.1 to 0.9 in steps of 0.1, the human evaluation error is 0.01, and the node tolerance in PeerBehavior is 0.5; that is, the experimental results are produced according to the equation MeanError = f(β, 0.01, 0.5). The global trust value estimation error is computed with MLE, and we then examine the performance of the different local trust value calculations.

Figure 3. The trend of mean error of the global trust with the proportion of the strategic peers in the network

The performance of the four local trust value calculation methods is easy to see from figure 3. PeerBehavior clearly minimizes the global trust value estimation error, significantly outperforming the other trust value calculations. Among the other methods, the moving average is better than the simple average, while Bayesian learning shows a large change around a strategic-peer proportion of 0.5: when the proportion of strategic peers in the network is below 0.5, Bayesian learning performs worse than the other two methods, and above 0.5 it performs better. However, as the proportion of strategic peers increases, the estimation error of all three other methods grows greatly. Because they depend only on the node's historical behavior, a strategic peer can easily escape punishment and still obtain a high local trust value after betrayal; the higher the proportion of strategic peers, the greater the error. Combining figures 2 and 3, an algorithm with a small local trust error ratio also retains a small error in the global trust value calculation. This is because the global trust value calculation depends on the local trust values: if the local trust values cannot reflect the nodes' characteristics, then no matter how the local trust values are combined, the global trust value is unable to discover the node types, resulting in a greater estimation error.



(3) Performance of the different local trust value calculations under human evaluation error

We then examine how the three current methods and PeerBehavior react to human evaluation error and strategic peers. We let the proportion of strategic peers in the network increase from 0.1 to 0.9 in steps of 0.1, and examine the performance of each method under three different human evaluation errors, i.e., the experimental results are produced according to the equations MeanError1 = f(β, 0.01, 0.5), MeanError2 = f(β, 0.03, 0.5), and MeanError3 = f(β, 0.05, 0.5).

As can be seen from figure 4, the simple average, moving average, and Bayesian learning methods perform identically under the different human evaluation errors; in other words, they do not take the evaluation error into consideration. In addition, when the human evaluation error is large, most of the estimation error is caused by misjudging benign nodes, so the estimation error shows a downward trend as the proportion of strategic peers increases. When the human evaluation error is small, most of the global trust value estimation error is caused by misjudging strategic peers, so the estimation error trends upward as the proportion of strategic peers increases.

Figure 4. The trend of mean error of the global trust with the variance of strategic peers' proportion under different human judgment error. (a) Simple average; (b) Moving average; (c) Bayesian learning; (d) PeerBehavior.

(4) The trade-off between punishing strategic peers and tolerating human evaluation error

In this experiment we mainly investigate the performance of PeerBehavior under different human evaluation errors and node tolerances. The node tolerance grows from 0 to 1 in steps of 0.1, the proportion of strategic peers in the network is fixed at 0.7, and the human evaluation error grows from 0.01 to 0.05 in steps of 0.01. The formula becomes MeanError = f(0.7, γ, α).

Figure 5. The influence of tolerance degree and human judgment error on the mean error of PeerBehavior

Figure 5 shows the performance of PeerBehavior under different node tolerances. It is easy to see that, for a fixed node tolerance, the larger the human evaluation error, the larger the estimation error of the global trust value. In addition, for a fixed human evaluation error, a node tolerance that is either too large or too small causes a larger estimation error. This is because, when the node tolerance is small, PeerBehavior mistakes some benign nodes for malicious ones, and when the human evaluation error is also large this results in a large system estimation error; when the node tolerance is large, PeerBehavior mistakes malicious nodes for benign ones, which likewise leads to a large system estimation error. Combining figures 4 and 5, both human evaluation error and the strategies of malicious nodes affect the global trust value estimation error, so a balance should be struck between tolerating human evaluation error and punishing strategic peers, i.e., an appropriate node tolerance should be selected. IV.



CONCLUSIONS

The presence of human judgment error and strategic peers prevents local trust values from accurately reflecting the behavioral characteristics of network nodes, which in turn increases the estimation error of global trust values. To address this problem, we proposed PeerBehavior, a neighbor-behavior evaluation model based on continuous behavioral observation. Simulation results show that human judgment error and the strategies of malicious peers do cause locally computed trust values to misjudge node characteristics, and that PeerBehavior can be adapted to different environments by adjusting a node's tolerance, striking a balance between tolerating human judgment error and punishing strategic peers. Compared with other current methods, PeerBehavior significantly improves the accuracy with which local trust values reflect the characteristics of network nodes and reduces the estimation error of global trust values. In short, the PeerBehavior model computes a node's local trust evaluations of its neighbors more accurately; since local trust values underlie most trust mechanisms, this in turn reduces the estimation error of the global trust values those mechanisms produce.

ACKNOWLEDGMENT

This work is supported by the Scientific Research Fund of the Hunan Provincial Education Department (No. 11C0875, No. 12B005).

REFERENCES

[1] Y. Iwasa and H. Ohtsuki, "How should we define goodness? Reputation dynamics in indirect reciprocity," Journal of Theoretical Biology, vol. 231, no. 1, pp. 107–120, 2004.
[2] D. Houser and J. Wooders, "Reputation in auctions: Theory and evidence from eBay," Journal of Economics and Management Strategy, vol. 15, no. 2, pp. 353–369, 2006.
[3] M. I. Melnik and J. Alm, "Does a seller's ecommerce reputation matter? Evidence from eBay auctions," Journal of Industrial Economics, vol. 50, no. 3, pp. 337–349, 2010.
[4] L. Xiong and L. Liu, "PeerTrust: Supporting reputation-based trust in peer-to-peer communities," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 7, pp. 843–857, 2006.
[5] S. Kamvar, M. Schlosser, and H. Garcia-Molina, "The EigenTrust algorithm for reputation management in P2P networks," in Proceedings of the 12th International Conference on World Wide Web, New York: ACM, pp. 640–651, 2003.
[6] B. Yu, M. P. Singh, and K. Sycara, "Developing trust in large-scale peer-to-peer systems," in Symposium on Multi-Agent Security and Survivability, New York: IEEE, pp. 1–10, 2004.
[7] S. Buchegger and J.-Y. L. Boudec, "A robust reputation system for P2P and mobile ad-hoc networks," in Proceedings of the 2nd Workshop on Economics of P2P Systems, New York: IEEE, pp. 1–37, 2004.
[8] K. Aberer and Z. Despotovic, "Managing trust in a peer-2-peer information system," in Proceedings of the 10th International Conference on Information and Knowledge Management, New York: ACM, pp. 310–317, 2001.
[9] R. Zhou and K. Hwang, "PowerTrust: A robust and scalable reputation system for trusted P2P computing," IEEE Transactions on Parallel and Distributed Systems, vol. 18, no. 4, pp. 460–473, 2007.
[10] R. Zhou and K. Hwang, "Gossip-based reputation aggregation for unstructured peer-to-peer networks," IEEE Transactions on Knowledge and Data Engineering, vol. 20, no. 9, pp. 1282–1295, 2008.
[11] S. Jiang and J. Li, "A reputation-based trust mechanism for P2P e-commerce systems," Journal of Software, vol. 18, no. 10, pp. 2551–2563, 2007.
[12] R. M. Axelrod, The Evolution of Cooperation. New York: Basic Books, 1984.
[13] Z. Despotovic and K. Aberer, "Maximum likelihood estimation of peers' performances in P2P networks," in Proceedings of the 2nd Workshop on the Economics of Peer-to-Peer Systems, New York: IEEE, pp. 1–9, 2004.
[14] X. Liu, Z. Fang, and H. Tang, "A direction-based search algorithm in P2P network," Journal of Computational Information Systems, vol. 6, no. 1, pp. 25–31, 2010.
[15] F. Yuan, J. Liu, C. Yin, and Y. Zhang, "A distributed recommendation mechanism based on collaborative filtering in unstructured P2P networks," Journal of Computational Information Systems, vol. 4, no. 3, pp. 1111–1118, 2008.
[16] A. Abdul-Rahman and S. Hailes, "Supporting trust in virtual communities," in Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, 2000.
[17] C. Dellarocas, "The digitization of word-of-mouth: Promise and challenges of online reputation mechanisms," Management Science, vol. 49, no. 10, pp. 34–40, 2007.
[18] J. M. Pujol, R. Sanguesa, and J. Delgado, "Extracting reputation in multi-agent systems by means of social network topology," in Proceedings of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems, 2002.

Xianwen Wu, born in 1973, received her B.S. degree in Engineering from Xiangtan University, Xiangtan, China in 1996, and her M.S. degree in Computer Science and Technology from the National University of Defense Technology, Changsha, China in 2004. She is an associate professor at the College of Information Technology, Hunan Railway Professional Technology College, Zhuzhou, China. Her current research interests include wireless sensor network security and high-performance networks.

Zhiliang Xue, born in 1976, received her B.S. degree from the National University of Defense Technology, Changsha, China in 2001, and her M.S. degree in Computer Science and Technology from Central South University, Changsha, China in 2010. She is a lecturer at Hunan Railway Professional Technology College, Zhuzhou, China. Her current research interests include wireless sensor network security and vocational theory.

Jingwen Zuo, born in 1977, received his B.S. degree from the China University of Political Science and Law, Beijing, China in 1999, and his M.S. degree in Industry & Business Administration from Hefei University of Technology, Hefei, China in 2002. He is an engineer at Changsha University of Science and Technology, Changsha, China. His current research interests include peer-to-peer network security and image segmentation.