JOURNAL OF MULTIMEDIA, VOL. 9, NO. 4, APRIL 2014


An Optimization Model Based on Game Theory

Yang Shi, Yongkang Xing, Chao Mou, and Zhuqing Kuang
College of Computer Science, Chongqing University, Chongqing 400030, China
Email: [email protected]

Abstract—Game theory has a wide range of applications in economics, but in the field of computer science, and especially in optimization algorithms, it is seldom used. In this paper, we integrate the thinking of game theory into an optimization algorithm and propose a new optimization model that can be widely used in optimization processing. The optimization model is divided into two types, called "complete consistency" and "partial consistency"; the partial consistency type adds a disturbance strategy on the basis of the complete consistency type. When the model's consistency is satisfied, the Nash equilibrium of the optimization model is globally optimal; when the model's consistency is not met, the presence of the disturbance strategy broadens the applicability of the algorithm. Basic experiments suggest that this optimization model has broad applicability and good performance, and gives a new idea for some intractable problems in the field of artificial intelligence.

Index Terms—Game Theory; Artificial Intelligence; Learning Model; Nash Equilibrium

I. INTRODUCTION

© 2014 ACADEMY PUBLISHER doi:10.4304/jmm.9.4.583-589

In the area of optimization algorithms, many computing models merit attention, such as the artificial neural network (ANN) [1], simulated annealing (SA) [2], the genetic algorithm (GA) [3] and the ant colony algorithm (ACA) [4], but these algorithms are abstractions of the real world. In recent years, game theory has grown rapidly, because game theory has a particular mathematical model that connects to the real world. Von Neumann and Morgenstern proposed game theory in 1944 [5], and the theory was advanced by Nash in the 1950s [6][7]. It has been applied widely in the fields of economics, politics, statistics, social psychology, law and philosophy [8][9][10]. In the area of computer science, however, game theory has only a few studies, which tend toward mathematics, and a number of applications to particular problems. In the mathematical study of game theory, finding the Nash equilibrium effectively and quickly is a challenging problem. The paper [11] studied three computational intelligence methods (covariance matrix adaptation evolution strategies, particle swarm optimization and differential evolution), and [12] developed a new algorithm for computing Nash equilibria of N-player games. In applications of game theory, most researchers' attention goes to evolutionary games and multi-agent systems. The evolutionary game is a branch of game theory; it rests on an assumed theory: evolutionary change was caused

by natural selection within the group, and evolves in search of an evolutionarily stable strategy through frequency-dependent choices of behavior [13]. In the area of multi-agent systems, [14] designed a team of agents that reach consensus on a common value for the agents' output using a cooperative game theory approach. [15] showed that cooperative multi-agent systems should be designed as games with dominant strategies, which may lead to social dilemmas, while non-cooperative multi-agent systems should be designed for games with no clear dominant strategies and a high degree of problem complexity. In addition, to improve efficiency, researchers have used game theory to solve multi-agent task allocation [16]. Based on game theory, [17] developed a multi-domain network model involving multiple quality-of-service parameters, and Yamaoka [18] proposed a dynamic and distributed routing control method. [19] discussed cooperative game theory in the manipulation of voting. Game theory provides an environment that benefits multi-objective learning and optimization and is full of opportunities and challenges. Every player needs to select the best strategy, and in this process they sometimes have to consider the choices of others, so the result of the game depends on every player's selection of strategies [20]. In the area of multi-objective optimization, some researchers proposed an algorithm based on evolutionary game theory to solve multi-objective optimization problems [21]. [22] studied a model of multi-objective games with great interest in solving problems in economics and environmental equilibrium. Zamarripa [23] developed a multi-objective mixed integer linear programming model devised to optimize the planning of supply chains, using game theory optimization for decision making in cooperative and/or competitive scenarios.
Lee [24] showed how game strategies can be hybridized and coupled to multi-objective evolutionary algorithms to accelerate convergence and to produce a set of high-quality solutions. Dhingra [25] developed a new optimization method which combines game theory and fuzzy set theory. Rao [26] described the relationship between Pareto-optimal solutions and game theory and developed a computational procedure for solving a general multi-objective optimization problem using cooperative game theory.


Using game theory with an optimization problem is not easy, because we must put the optimization problem in close contact with the learning machinery of game theory: designing the players, finding the contradictions among the players, and drafting the payoff functions. In this paper, we propose an optimization model based on game theory. First of all, we derive "Existence of multi-dimensional Nash equilibrium with continuous payoffs" from "Existence of mixed-strategy Nash equilibrium" and "Existence of Nash equilibrium in infinite games with continuous payoffs"; this theorem is the mathematical foundation of the optimization model. Secondly, we propose the optimization model itself, the "Intelligent Game System" (IGS). There are several nature players and only one virtual player in the optimization model, and the games are divided into two groups: the games between nature players and the games between a nature player and the virtual player. When the nature players have no conflicts of interest with each other, the optimization model degenerates into a simple game between nature player and virtual player, without considering the relationships among nature players in each game. Because of the proposed payoff matrix, the classification of strategy sets based on the initial strategies is transformed into a classification based on the strategies' results, so the whole becomes a simple and efficient optimization model.

In the model, an important constraint is consistency. When the problem has complete consistency, the model is a basic Intelligent Game System, called "IGS of complete consistency", and the main job is to find the Nash equilibrium of the IGS. When the problem does not have complete consistency, i.e., it has only partial consistency, we must use a disturbance strategy under some conditions to steer the optimization model toward the global optimum. In this case the optimization model is called "IGS of partial consistency". IGS of partial consistency improves universality on the basis of IGS of complete consistency, and the disturbance strategy should be designed for the specific problem: we can use traditional methods such as gradient descent, or directly amend the parameters; the only requirement is that the disturbance makes the result of the optimization model better.

A crucial point is how to define the payoff functions of the nature players and the payoff function of the virtual player. A good payoff function can greatly improve the efficiency of the optimization model. The payoff function of the virtual player is simple to define, because the optimization objective of the problem is always the virtual player itself. The payoff functions of the nature players are more difficult to define and must be analyzed on a case-by-case basis according to the specific problem; when the efficiency of the optimization model is poor, changing the nature players' payoff functions may improve it vastly.

In this paper, the two experiments are the bin packing problem and the optimization of Chebyshev neural networks, and both optimization models are IGSs of partial consistency. In the bin packing problem, we define every object as a nature player and all bins as the virtual player. The


experimental results show that this optimization model can match the speed and accuracy of traditional algorithms on the bin packing problem. In the experiment on Chebyshev neural networks, we discuss in depth the effect of disturbance on the optimization model. Experimental results show that the model does well under partial consistency and the optimization result is improved.

II. INTELLIGENT GAME SYSTEM

A. The Proof of Operational Research

Game theory is a theory of strategy choice between rational individuals. As the theoretical basis of the optimization model, we give definitions and corresponding proofs.

Definition 1. A pure strategy game is a tuple $G = (N, \{s_i\}_{i \in N}, \{u_i\}_{i \in N})$. In the tuple, $N = \{1, 2, \dots, n\}$ are the players, $s_i$ is the finite strategy set of player $i$, and $u_i$ is the payoff function of player $i$: for each strategy combination, player $i$ receives a payoff value.

Definition 2. For each $i \in N$, the pure strategy set of player $i$ is $S_i = \{S_1^{(i)}, S_2^{(i)}, \dots, S_{m_i}^{(i)}\}$. If player $i$ chooses pure strategy $S_k^{(i)}$ with probability $x_k^{(i)}$, then $X_i = (x_1^{(i)}, x_2^{(i)}, \dots, x_{m_i}^{(i)})$ is called a mixed strategy of player $i$. Here $x_k^{(i)} \ge 0$ for $k = 1, 2, \dots, m_i$, and $\sum_{k=1}^{m_i} x_k^{(i)} = 1$.

From Definitions 1 and 2, a pure strategy game is a special case of a mixed strategy game: when one strategy in the mixed strategy game has probability 1, the mixed strategy game reduces to a pure strategy game. Next, we give the definition of the Nash equilibrium: a Nash equilibrium is a profile of strategies such that each player's strategy is an optimal response to the other players' strategies.

Definition 3. If for all players $i$, $u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(s_i, \sigma_{-i}^*)$ for all $s_i \in S_i$, the mixed strategy profile $\sigma^*$ is a Nash equilibrium.

Theorem 1 (Existence of mixed-strategy Nash equilibrium). Every finite strategic-form game has a mixed-strategy equilibrium [6].

Theorem 1 is the fundamental theorem of game theory, and Theorem 2 broadens it to games with continuous pure strategy spaces, as follows.

Theorem 2 (Existence of Nash equilibrium in infinite games with continuous payoffs). Consider a strategic-form game whose strategy spaces $S_i$ are nonempty compact convex subsets of a Euclidean space. If the payoff functions $u_i$ are continuous in $s$ and quasi-concave in $s_i$, there exists a pure-strategy Nash equilibrium.


Lemma 1. Let $C$ be a convex set in a linear space $E$ and $f: C \to R$. A necessary and sufficient condition for $f$ to be quasi-concave on $C$ is that for every $r \in R$, the set $\{x \in C : f(x) \ge r\}$ is convex.

Lemma 2. Let $X$ and $Y$ be Hausdorff topological spaces, let $f: X \times Y \to R$ be continuous, and let $G: Y \to P_0(X)$ be continuous with $G(y)$ compact for every $y \in Y$. Then the set-valued mapping $M(y) = \{x \in G(y) : f(x, y) = V(y)\}$ is upper semi-continuous in $Y$.

Lemma 3 (Fan–Glicksberg fixed point theorem). Let $X$ be a nonempty convex compact set in a Hausdorff locally convex linear topological space $E$, and let $F: X \to P_0(X)$ be such that for every $x \in X$, $F(x)$ is a nonempty compact convex set and $F$ is upper semi-continuous on $X$. Then there exists $x \in X$ with $x \in F(x)$.

Theorem 3 (Existence of multi-dimensional Nash equilibrium with continuous payoffs).

Proof: For each $i \in N$, let the dimensions of $X$ number $m$, namely $X_1, X_2, \dots, X_m$, and define the set-valued mapping $F_i: Z_i \to P_0(X_{1i}, X_{2i}, \dots, X_{mi})$ with $F_i(z_i) = \max f_i(X_{1i}, X_{2i}, \dots, X_{mi}, z_i)$ for $z_i \in Z_i$, $x_{1i} \in X_{1i}, x_{2i} \in X_{2i}, \dots, x_{mi} \in X_{mi}$. As in Theorem 2, since the payoff function is continuous, $F_i(z_i)$ is a nonempty compact set. Define $c = \max f_i[(u_{1i}, u_{2i}, \dots, u_{mi}), z_i]$ over $u_{1i} \in X_{1i}, u_{2i} \in X_{2i}, \dots, u_{mi} \in X_{mi}$; then for any $x_1, x_2, \dots, x_m$, $f_i[(x_1, x_2, \dots, x_m), z_i] \le c$. Because the payoff functions $u_{1i} = f_i(u_{1i}), u_{2i} = f_i(u_{2i}), \dots, u_{mi} = f_i(u_{mi})$ are quasi-concave, by Lemma 1 the sets $F_i(z_i) = \{x_{1i} \in X_{1i} : f_i(x_{1i}, z_i) \ge r_1;\ x_{2i} \in X_{2i} : f_i(x_{2i}, z_i) \ge r_2;\ \dots;\ x_{mi} \in X_{mi} : f_i(x_{mi}, z_i) \ge r_m\}$ are convex. If $r = \max(r_1, r_2, \dots, r_m)$, then $F_i(z_i) = \{f_i(x_{1i}, x_{2i}, \dots, x_{mi}) \ge r\}$ is a convex set. For $z_i \in Z_i$, $F_i(z_i)$ is a nonempty compact convex subset, and because $F_i: Z_i \to P_0(x_{1i}, x_{2i}, \dots, x_{mi})$ is upper semi-continuous, by Lemma 3 a fixed point exists, and $F_i(z_i) = \max f_i(X_{1i}, X_{2i}, \dots, X_{mi}, z_i)$. ∎
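To make the Nash equilibrium condition of Definition 3 concrete, here is a minimal sketch (not part of the paper) that checks whether a pure strategy profile is an equilibrium of a two-player bimatrix game: no player may gain by a unilateral deviation. The prisoner's-dilemma payoffs are illustrative.

```python
def is_pure_nash(A, B, i, j):
    """Check whether the pure profile (i, j) is a Nash equilibrium of the
    bimatrix game with row-player payoffs A and column-player payoffs B
    (Definition 3, specialized to pure strategies)."""
    row_best = max(A[r][j] for r in range(len(A)))     # best reply to column j
    col_best = max(B[i][c] for c in range(len(B[0])))  # best reply to row i
    return A[i][j] >= row_best and B[i][j] >= col_best

# Illustrative prisoner's-dilemma payoffs; strategy 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs (symmetric game)
print(is_pure_nash(A, B, 1, 1))  # mutual defection is an equilibrium
print(is_pure_nash(A, B, 0, 0))  # mutual cooperation is not
```

Mutual defection passes the test because neither player can improve by deviating alone, even though mutual cooperation would give both a higher payoff.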
In the next step, the development from game theory to the optimization model, a crucial question is how to construct the mathematical model of game theory for a specific problem. In this section, we propose the definition of the IGS (Intelligent Game System); the model effectively answers questions such as designing the players, finding the contradictions among the players, and drafting the payoff functions.

B. Definition of IGS

Before defining the IGS, we need to introduce the concept of the virtual player.

Definition 4. The virtual player is an artificial game player representing the global model; in other words, the virtual player's gain is the optimization model's global gain, so the virtual player's gain is determined by all of the nature


players. The virtual player has an independent payoff function, but it does not have an independent strategy set: the virtual player only passively chooses whether to accept a nature player's strategy.

Definition 5. An IGS is a multi-tuple: $G = ((N), (Q), \{s_i\}, i \in N, f_{pay\,i}(N_i, s_i), F_{pay}(Q))$. In the definition, $N = \{1, 2, \dots, i, \dots, n\}$ are the nature players which compose the IGS; every nature player represents an individual or an independent element of the specific problem. $Q = f(N)$ is the virtual player of the IGS, an artificial player. $s_i$ is nature player $i$'s strategy set. $f_{pay\,i}(N_i, s_i)$ is nature player $i$'s payoff function. $F_{pay}(Q)$ is the virtual player's payoff function. For specific problems, the payoff functions of the nature players differ, but the payoff function of the virtual player is generally clear: the objective function of the global model.

Definition 6. Nature players' correlation means the nature players' benefit coefficients with respect to each other. The benefit coefficients could be expressed as one or more values, but in this paper we classify an IGS by whether it has nature players' correlation. Unrelated nature players are not related to each other and have no common interests; related nature players are related to each other. So we classify IGS into unrelated IGS and related IGS; in an unrelated IGS, $f_{pay\,i}(N_i, s_i)$ degenerates to $f_{pay\,i}(s_i)$.

C. Bi-level Programming

In the IGS there are two game models of different types: the game between nature players (first layer) and the game between a nature player and the virtual player (second layer). In a specific problem the second layer is unclear, because the virtual player has no independent strategy set, so we introduce the payoff matrix in place of the strategy set. First, we define the payoff matrix.

Definition 7. The payoff matrix is the matrix composed of the payoff change expectations that arise from strategy changes in the game between nature players and the virtual player. The virtual player has no independent strategy set, so it can only passively accept or reject a nature player's strategy; but it has its own payoff function, so its responses can be classified. Here we divide them into three categories: strategy classified sets whose expected payoff change is a rise, an equality, or a decline. For a nature player: {si} = {{si up}, {si equality}, {si down}}. For the virtual player: {t} = {{t up}, {t equality}, {t down}}. So the payoff matrix is:


( si  up, t  equality) ( si  up, t  down)   ( si  up, t  up) ( s  equality, t  up) ( s  equality, t  equality) ( s  equality, t  down)  i i  i   ( si  down, t  up) ( si  down, t  equality) ( si  down, t  down)  Payoff matrix which is composed of payoff change player have positive gains, corresponding to the payoff expectation with strategy sets and payoff functions which matrix is (si  up, t  up) . in the strategy classified set are significance Definition 9 IGS’ Nash equilibrium characteristics of IGS, like this, when we design the game For IGS G , when it meets any nature player instable model, we need not regard the low-level of game model, of payoff matrix, and any nature players cannot improve such as strategy, only need to pay attention on the results their payoff in the condition which cannot impair the of strategy. When doing it, the second – layer has partial virtual player, the IGS is in the Nash equilibrium, characteristic of pure strategy game, nature player want to recorded as Gnash . select a pure strategy which is accepted by virtual player. Theorem 5 In the consistency conditions, IGS could Although virtual player is not active, it also has a right to achieve global optimum select pure strategy. Assume G( N , Q, Fpay  max ) Gnash ( N , Q)   , it must Theorem 4 If payoff matrix has equilibrium, the equilibrium of payoff function exists in strategy classified has S  G( N , Q, Fpay  max ) , and S  Gnash ( N , Q) , for a set. nature player x in the S , there must be a strategy m to Strategy classified set only classifies strategy set by reach Nash equilibrium. From the point of view of the payoff change expectation , does not change any payoff matrix, at least the nature player can choose strategies and payoff functions, so when the IGS is ( s  up , t  up) , (sx  up, t  equality) . 
If S  is a Nash x balance in a point which is in strategy classified set of equilibrium’s condition, and S   Gnash ( N , Q) , payoff matrix, it means the strategy classified set have a strategy which would make game to balance. Therefore f pay ( N x , m)  f pay ( N x ), x  N . the equilibrium of payoff function is existent in strategy In the consistency conditions, classified set.  , so form Definition 8, sgn( f ( N , m )  f ( N ))  0 pay x pay x Theorem 4 means, as long as the payoff matrix has equilibrium, the equilibrium must exist in strategy classified set. And Theorem 4 does not restrict how to determine the equilibrium in the payoff matrix, which means, in this step, we add an external selection from outside of the game; the model can still get a pure strategy in strategy classified set. It is very important that game avoid detail in the IGS, because if we think of every part in the game, it is very confused. In the traditional Game theory, according to the sequence action, game is divided into static game and dynamic game, according to result of the game as common knowledge, for every player, game is divided into complete information game and incomplete information game. In this paper, we should define all of game in IGS is complete information game, but is static game or dynamic game, it is a difficult question. In fact, from an IGS’s perspective, the result of game is important, in other words, whatever static game or dynamic game, there is no effect on the IGS, so it gives users a very large flexibility D. Definition of Consistency First, we give the definition of consistency conditions: Definition 8 For two different strategy sets m and m , if sgn( Fpay (Q, m)  Fpay (Q, m))  sgn( f pay (m)  f pay (m)) ,

m and m are consistency. The definition of consistency condition shows that nature player’s change and virtual player’s change is consistency, nature player’s payoff and virtual player’s payoff are in same direction when meet the consistency conditions. Due to any of the players are rational, in general, the consistency condition applies to nature player and virtual © 2014 ACADEMY PUBLISHER

sgn( Fpay (Q, m)  Fpay (Q))  0 , and (sx  up, t  equality)

may be selected. So equal sign may sgn( Fpay (Q, m)  Fpay (Q))  0 .

be

set

up:

sgn( Fpay (Q, m)  Fpay (Q))  0

When

,

Fpay (Q, m)  Fpay (Q) , for S , there is another player condition:

S  , payoff of

Fpay

is increase, and

S  G( N , Q, Fpay  max ) , it is contradiction. sgn( Fpay (Q, m)  Fpay (Q))  0

When

,

Fpay (Q, m)  Fpay (Q) , for S , there is another player condition:

S ,

and

Fpay (Q, m)  Fpay (Q)

,

so

S   G( N , Q, Fpay  max ) , but also because the contradiction

S   Gnash ( N , Q) G( N , Q, Fpay  max ) Gnash ( N , Q)   ,

between

and in

summary,

Theorem 5 is proved. III.
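The sign test of Definition 8 is easy to operationalize. Here is a small sketch in which the payoff functions are hypothetical stand-ins for $f_{pay}$ and $F_{pay}$, not the paper's:

```python
def sgn(x: float) -> int:
    """Sign function used in Definition 8."""
    return 0 if x == 0 else (1 if x > 0 else -1)

def consistent(F_pay, f_pay, m, m2):
    """Definition 8: strategy sets m and m2 are consistent when the
    virtual player's payoff change and the nature player's payoff
    change have the same sign."""
    return sgn(F_pay(m) - F_pay(m2)) == sgn(f_pay(m) - f_pay(m2))

# Hypothetical payoffs: global payoff is the sum of a strategy vector,
# the nature player's payoff is its first entry.
F_pay = lambda m: sum(m)
f_pay = lambda m: m[0]
print(consistent(F_pay, f_pay, (3, 1), (2, 1)))  # both rise: consistent
print(consistent(F_pay, f_pay, (3, 0), (2, 2)))  # nature up, global down: not
```

The second call illustrates exactly the partial-consistency situation of Section III, where a nature player's gain moves against the global payoff.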

III. IGS OF COMPLETE CONSISTENCY AND PARTIAL CONSISTENCY

From the definitions in Section II, we first discuss the IGS of complete consistency. This IGS is divided into two categories by correlation: unrelated IGS of complete consistency and related IGS of complete consistency.

A. IGS of Complete Consistency

In the unrelated IGS of complete consistency, every nature player can maximize its own benefit without regard to the other players' actions in the first layer, and in the second layer, every nature player plays a game with the virtual player. So, from


the macro view, one global static game becomes several miniature games. In the related IGS of complete consistency, every nature player is related to other nature players, so a nature player must consider the others in its decision; every nature player comes to a decision, and then they proceed to the game with the virtual player.

B. IGS of Partial Consistency

The consistency condition is important for an IGS, but some problems do not meet it during the optimization process. Therefore, consistency is divided into complete consistency and partial consistency. Under partial consistency, the Nash equilibrium is $G_{nash\text{-}false}$, which is not the global optimum: $G(N, Q, F_{pay} = \max) \cap G_{nash\text{-}false}(N, Q) = \varnothing$. At this point, we need to add a disturbance to push the system into an unbalanced state.

Definition 10 (Disturbance). Let $G$ be an IGS whose global optimum is $G(N, Q, F_{pay} = \max)$. If $G$ is in $G_{nash}(N, Q)$ and $G_{nash}(N, Q) \ne G(N, Q, F_{pay} = \max)$, the IGS needs a disturbance that pushes it into an unbalanced state. The disturbance strategy requires $F_{pay}(m') \ge F_{pay}(m)$: the virtual player's global payoff cannot be reduced. The payoff matrix also needs to incorporate the disturbance; in fact, there is only one disturbance entry: (si down, t up). Therefore, the payoff matrix of an IGS of partial consistency is:

  (si up, t up)    (si up, t equality)
  (si down, t up)  (si equality, t equality)

It should be noted that partial consistency and complete consistency are not static: an IGS may have partial consistency at one stage and complete consistency at the next.
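Definition 10's disturbance rule can be sketched as a generic loop: perturb an equilibrium profile and accept the perturbation only if the virtual player's payoff is not reduced. All names here (`local_equilibrium`, `perturb`) are illustrative; the paper leaves the disturbance strategy problem-specific.

```python
import itertools

def optimize_with_disturbance(F_pay, local_equilibrium, perturb, s0, rounds=20):
    """Sketch of Definition 10: starting from an equilibrium profile,
    apply a disturbance and accept the result only if the virtual
    player's global payoff F_pay is not reduced."""
    best = local_equilibrium(s0)
    for _ in range(rounds):
        candidate = local_equilibrium(perturb(best))
        if F_pay(candidate) >= F_pay(best):  # global payoff must not decrease
            best = candidate
    return best

# Toy instance (illustrative): maximize F_pay(s) = -(s - 3)^2 over integers.
F_pay = lambda s: -(s - 3) ** 2
local_equilibrium = lambda s: s          # pretend the nature players have settled
moves = itertools.cycle([-1, 1])         # deterministic disturbance: try -1, then +1
perturb = lambda s: s + next(moves)
best = optimize_with_disturbance(F_pay, local_equilibrium, perturb, s0=0)
print(best)  # the loop climbs to the global optimum s = 3
```

The acceptance test mirrors the constraint $F_{pay}(m') \ge F_{pay}(m)$: disturbances that would lower the global payoff are simply discarded.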

IGS of partial consistency is divided into two categories by correlation: unrelated IGS of partial consistency and related IGS of partial consistency. In the unrelated IGS of partial consistency, as in the unrelated IGS of complete consistency, every nature player is isolated from the other nature players, so a nature player need not consider the others in its decision. But we must pay attention to the Nash equilibrium, because the Nash equilibrium may not be the global optimum, and the IGS then needs to add a disturbance.

The related IGS of partial consistency is the most complicated IGS: as in the related IGS of complete consistency, the nature players cannot decide independently. In addition, we must know that the virtual player's payoff function, the global benefit of the optimization model, is not maximal at the Nash equilibrium; with every disturbance the equilibrium is broken again, and all nature players have to find another equilibrium under the virtual player's restriction.

IV. EXPERIMENTS

A. Bin Packing Problem

The bin packing problem is a significant optimization problem because it is NP-hard [27]. There are classic algorithms such as Next Fit (NF), First Fit (FF), Best Fit (BF), and First Fit Decreasing (FFD); in addition, there are metaheuristics such as simulated annealing (SA) [28].

Problem definition. There are objects $N = (1, 2, \dots, i)$, each with a volume: $v_1, v_2, \dots, v_i$. Every bin has a maximum capacity, the bin's volume $V$. We need a minimum number of bins to hold all the objects. We construct an IGS with the multi-tuple:

$G = ((N), (Q), \{s_i\}, i \in N, f_{pay\,i}(N_i, s_i), F_{pay}(Q))$

$N$ are the nature players; in this problem, $N$ is $v_1, v_2, \dots, v_i$. $Q$ is the virtual player; here, $Q$ is the bin packing problem itself. $f_{pay\,i}(N_i, s_i)$ are the payoff functions of the nature players; note that every $f_{pay\,i}(N_i, s_i)$ is the total of the volumes in one bin. For example, if $v_1, v_3, v_4$ are in one bin and $v_1 + v_3 + v_4 \le V$, each of these nature players' payoff is $v_1 + v_3 + v_4$. $F_{pay}(Q)$ is the payoff function of the virtual player; in this problem, it is the sum of all nature players' payoffs.

Experiment and result. We use public data sets of 8 problems proposed by Wäscher and Gau in 1996; the data sets can be downloaded at http://paginas.fe.up.pt/~esicup/index.php

TABLE I. SOME BIN PACKING PROBLEMS

Name     | Known optimal | GameModel | FFD | BFD
TEST0005 | 29            | 29        | 29  | 29
TEST0014 | 24            | 24        | 24  | 24
TEST0022 | 15            | 15        | 15  | 15
TEST0030 | 28            | 28        | 28  | 28
TEST0058 | 20            | 21        | 21  | 21
TEST0065 | 16            | 16        | 16  | 16
TEST0068 | 12            | 13        | 13  | 13
TEST0082 | 24            | 25        | 25  | 25

Figure 1. TEST0058's payoff curve

From this table we can see that the IGS matches FFD and BFD, which are standard algorithms; that is, at the very least, the IGS ensures the accuracy of FFD and BFD.
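As a baseline for readers who want to reproduce the comparison, here is a minimal First Fit Decreasing sketch together with the virtual player's payoff from Section IV-A (each object earns its own bin's fill, and the virtual player sums them). The volumes are illustrative, not the TEST instances, and this is the FFD baseline rather than the authors' game model.

```python
def first_fit_decreasing(volumes, V):
    """First Fit Decreasing: sort objects by volume, descending, and
    place each one into the first bin with enough remaining room."""
    bins = []  # each bin is the list of volumes it holds
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= V:
                b.append(v)
                break
        else:
            bins.append([v])  # no existing bin fits: open a new one
    return bins

def virtual_player_payoff(bins):
    """F_pay(Q): sum of the nature players' payoffs, where each object's
    payoff is the total volume of the bin it sits in (Section IV-A)."""
    return sum(sum(b) for b in bins for _ in b)

volumes = [7, 5, 4, 4, 3, 2, 1]  # illustrative instance
bins = first_fit_decreasing(volumes, V=10)
print(len(bins))                  # 3 bins: [7,3], [5,4,1], [4,2]
print(virtual_player_payoff(bins))
```

Note how this payoff rewards full bins: two objects in a bin filled to 10 contribute 20, so maximizing $F_{pay}(Q)$ pushes toward dense packings and hence fewer bins.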


Figure 1 shows TEST0058's payoff curve without disturbance; the maximal payoff is 8.6925. Figure 2 shows TEST0058's payoff curve with a disturbance from the Nash equilibrium; the maximal payoff is 9.6429.

Figure 2. Disturbance on TEST0058's payoff curve

B. Related IGS of Partial Consistency: Chebyshev Neural Network

Problem definition. The Chebyshev neural network was proposed in the early 1990s [29]; its theoretical foundation comes from the Chebyshev polynomials. The structure of the Chebyshev neural network is completely different from that of the traditional neural network. The structure of a one-variable Chebyshev neural network model is shown in Figure 3. Here, $x_r$ is an input value and $y_r$ is the corresponding output value, with $y_r = f(x_r)$, $(r = 1, 2, 3, \dots, n)$. $T_i$ is the Chebyshev polynomial of degree $i$, which composes the hidden layer. $C_k$ are the weights of $T_k$.

Figure 3. Chebyshev neural network

Here is the definition of the Chebyshev polynomials:

Definition 11. The Chebyshev polynomial recurrence: $T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$, $(n = 1, 2, \dots)$.

Definition 12. The Chebyshev polynomial group from the recurrence relation:
$T_0(x) = 1$
$T_1(x) = x$
$T_2(x) = 2x^2 - 1$
$T_3(x) = 4x^3 - 3x$
$T_4(x) = 8x^4 - 8x^2 + 1$
$T_5(x) = 16x^5 - 20x^3 + 5x$

We choose the Chebyshev neural network as a significant example because it has better-defined individual neurons than the multi-layer perceptron: every neuron of the Chebyshev neural network has a clear mathematical definition. We construct an IGS with the multi-tuple:

$G = ((N), (Q), \{s_i\}, i \in N, f_{pay\,i}(N_i, s_i), F_{pay}(Q))$

$N$ are the weights of the Chebyshev neural network, the nature players of the IGS. $Q$ is the Chebyshev neural network, the virtual player. $f_{pay\,i}(N_i, s_i)$ is the payoff function of nature player $c_i$; in this paper, we define the payoff function as each Chebyshev polynomial's fitting degree for the set of sample points. $F_{pay}(Q)$ is the payoff function of the virtual player, here the fit of the IGS to the set of sample points. The payoff function of the virtual player is unambiguous: it is the fundamental purpose of the system itself, namely the training of the Chebyshev neural network. But there are different possible definitions for the payoff functions of the nature players; although a nature player's benefit is not our purpose, a better nature-player payoff function can significantly increase the training efficiency of the system.

Experiment and result. $y = f(x) = x$; the input values are the x-coordinates of 20 points from 0 to 1.9, and the output values are the corresponding y-coordinates. The Chebyshev neural network has 6 nodes: $N = \{c_0, c_1, c_2, c_3, c_4, c_5\}$. The payoff functions $f_{pay\,i}(N_i, s_i)$ and $F_{pay}(Q, t)$ are Euclidean distances, and the evaluation function is

$\frac{1}{n}\left(\left|x_0 - \bar{x}_0\right| + \left|x_1 - \bar{x}_1\right| + \cdots + \left|x_n - \bar{x}_n\right|\right)$

With no disturbance, the evaluation function eventually stabilized at 4.9325, as shown in Figure 4.

Figure 4. No disturbance

Obviously, this IGS has only partial consistency; by itself, the system cannot achieve the required precision, so a disturbance is needed. The disturbance strategy uses the traditional BP (back propagation) strategy; with the disturbance strategy added, the evaluation function eventually stabilizes at 1.3655, as shown in Figure 5.
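The recurrence of Definition 11 can be cross-checked against the closed forms of Definition 12 with a small numerical sketch:

```python
def chebyshev_T(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x*T_n - T_{n-1},
    with T_0(x) = 1 and T_1(x) = x (Definitions 11 and 12)."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# Cross-check against the closed forms of Definition 12 at x = 0.5.
x = 0.5
print(chebyshev_T(2, x), 2 * x**2 - 1)                   # T2: both -0.5
print(chebyshev_T(4, x), 8 * x**4 - 8 * x**2 + 1)        # T4: both -0.5
print(chebyshev_T(5, x), 16 * x**5 - 20 * x**3 + 5 * x)  # T5: both 0.5
```

In the network of Figure 3, these $T_k(x)$ values are the hidden-layer outputs, so the whole model's output is just the weighted sum $\sum_k C_k T_k(x)$.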

Figure 5. Disturbance

As can be seen from Figure 5, after adding the disturbance, the system accuracy is improved.

V. CONCLUSION

We propose a new optimization model which uses game theory and has broad applicability and good accuracy. Some issues still need to be resolved in future studies, such as the discussion of cooperation between nature players and the stability of the IGS when solving more complex problems; these are the focus of future research.

ACKNOWLEDGMENT

We would like to acknowledge Project No. CDJXS12 180008 supported by the Fundamental Research Funds for the Central Universities, and we also thank the anonymous reviewers for valuable suggestions.

REFERENCES

[1] W. S. McCulloch, W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 1943.
[2] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi. Optimization by simulated annealing. Science, 1983, 220, pp. 671-680.
[3] J. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, 1975.
[4] E. A. Hansen, S. Zilberstein. Monitoring and control of anytime algorithms: A dynamic programming approach. Artificial Intelligence, 2001, 126, pp. 139-157.
[5] J. von Neumann, O. Morgenstern. The Theory of Games and Economic Behavior. Princeton Univ. Press, 1944.
[6] J. F. Nash. Equilibrium points in n-person games. Proc. Natl. Acad. Sci., 1950, 36, pp. 48-49.
[7] J. F. Nash. Non-cooperative games. Ann. Math., 1951, 54, pp. 289-295.
[8] R. Aumann, S. Hart. Handbook of Game Theory. Handbooks in Economics, North-Holland, Amsterdam, 1992.
[9] C. F. Camerer. Progress in behavioral game theory. J. Econ. Perspect., 1997, 11, pp. 167-188.
[10] R. B. Myerson. On the value of game theory in social science. Ration. Soc., 1992, 4, pp. 62-73.
[11] Pavlidis, K. E. Parsopoulos, M. N. Vrahatis. Computing Nash equilibria through computational intelligence methods. Journal of Computational and Applied Mathematics, 2005, 175, pp. 113-136.
[12] S. Govindan, R. Wilson. Computing Nash equilibria by iterated polymatrix approximation. Journal of Economic Dynamics and Control, 2004, 28, pp. 1229-1241.
[13] J. M. Smith. Evolution and the Theory of Games, 1992.
[14] E. Semsar-Kazerooni, K. Khorasani. Multi-agent team cooperation: A game theory approach. Automatica, 2009, 45, pp. 2205-2213.
[15] P. C. Pendharkar. Game theoretical applications for multi-agent systems. Expert Systems with Applications, 2012, 39, pp. 273-279.
[16] G. Wang, H. Yu, J. Xu, S. Huang. A multi-agent model based on market competition for task allocation: a game theory approach. Proceedings of the 2004 IEEE International Conference on Networking, 2004.
[17] L. Guo, J. W., W. Hou, Y. Liu, L. Zhang, H. Li. Hybrid protection algorithms based on game theory in multi-domain optical networks. Optical Fiber Technology, 2011, 17, pp. 523-535.
[18] K. Yamaoka, S. Sugawara, Y. Sakai. Connection oriented packet communication control method based on game theory. IEICE Transactions on Communications, 1999, J82-B, pp. 530-539.
[19] Y. Bachrach, E. Elkind, P. Faliszewski. Coalitional voting manipulation: A game-theoretic perspective. Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, 2011.
[20] M. J. Osborne, A. Rubinstein. A Course in Game Theory. MIT Press, Cambridge, MA, 1994.
[21] K. B. Sim, D. W. Lee, J. Y. Kim. Game theory based co-evolutionary algorithm: A new computational. International Journal of Control, Automation, and Systems, 2004, 2(4), pp. 467-474.
[22] C. Lee. Multi-objective game-theory models for conflict analysis in reservoir watershed management. Chemosphere, 2012, 87, pp. 608-613.
[23] M. Zamarripa, A. Aguirre, C. Méndez. Integration of mathematical programming and game theory for supply chain planning optimization in multi-objective competitive scenarios. Computer Aided Chemical Engineering, 2012, 30, pp. 402-406.
[24] D. Lee, L. F. Gonzalez, J. Periaux. Hybrid-game strategies for multi-objective design optimization in engineering. Computers & Fluids, 2011, 47, pp. 189-204.
[25] A. K. Dhingra, S. S. Rao. A cooperative fuzzy game theoretic approach to multiple objective design optimization. European Journal of Operational Research, 1995, 83, pp. 547-567.
[26] S. S. Rao. Game theory approach for multiobjective structural optimization. Computers & Structures, 1987, pp. 119-127.
[27] R. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. M. Thatcher (eds.), Complexity of Computer Computations, Plenum Press, 1972, pp. 85-103.
[28] S. Kirkpatrick, C. D. Gelatt Jr, M. P. Vecchi. Optimization by Simulated Annealing. Science, 1983, 220, pp. 671-680.
[29] A. Namatame, U. N. Pattern classification with Chebyshev neural networks. International Journal of Neural Networks, 1992, 3, pp. 23-31.
Hybrid-Game Strategies for multi-objective design optimization in engineering. Computers & Fluids 2011, 47 pp. 189-204 A. K. Dhingra, S. S. Rao. A cooperative fuzzy game theoretic approach to multiple objective design optimization. European Journal of Operational Research. 1995, 83 pp. 547-567 S. S. Rao Game theory approach for multi-objective structural optimization. Computers & Structures. 1987, 119-127 R. Karp. Reducibility among Combinatorial Problems. R. E. Miller and J. M. Thatcher, (eds.) Complexity of Computer. Computations. Plenum Press, 1972 pp. 85-103. S. Kirkpatrick, C. D. Gelatt Jr, M. P. Vecchi. Optimization by Simulated Annealing. Science, 1983, 220 pp. 671-680 A. Namatame, U. N. Pattern. Classification with Chebyshev neural networks. International Journal of Neural Networks, 1992, 3 pp. 23-31.