Counting Stable Strategies in Random Evolutionary Games

Spyros Kontogiannis¹,² and Paul Spirakis²

¹ Computer Science Department, University of Ioannina, 45110 Ioannina, Greece
[email protected]
² Research Academic Computer Technology Institute, N. Kazantzakis Str., University Campus, 26500 Rio-Patra, Greece
{kontog, spirakis}@cti.gr

This work was partially supported by the 6th Framework Programme under contract 001907 (DELIS).

X. Deng and D. Du (Eds.): ISAAC 2005, LNCS 3827, pp. 839–848, 2005. © Springer-Verlag Berlin Heidelberg 2005

Abstract. In this paper we study the notion of Evolutionary Stable Strategies (ESS) in evolutionary games, and we demonstrate their qualitative difference from Nash Equilibria by showing that a random evolutionary game has on average exponentially fewer ESS than the number of Nash Equilibria of the underlying symmetric 2-person game with random payoffs.

1 Introduction

Game theory is the study of interactive decision making, in the sense that those involved in the decisions are affected not only by their own choices, but also by the decisions of others. This study is guided by two principles: (1) the choices of players are affected by well-defined (fixed) preferences over the outcomes of decisions, and (2) players act strategically, ie, they take into account the interaction between their choices and the ways other players act. The dominant aspect of game theory is the belief that players are rational and that this rationality is common knowledge. This common knowledge of rationality gives hope to equilibrium play: players use their equilibrium strategies because of what would happen if they did not.

The point of departure for evolutionary game theory is the view that the players are not always rational. In evolutionary games, "good" strategies emerge from a trial-and-error learning process, in which players discover that some strategies perform better than others, or decide to play in a different way in the hope of getting a better payoff. The players may do very little reasoning during this process. Instead, they simply take actions by rules of thumb, social norms, analogies with similar situations, or by other (possibly more complex) methods for converting stimuli into actions. Thus, in evolutionary games we may say that the players are "programmed" to adopt some strategies. Typically, the evolution process deals with an infinite population of players. As time proceeds, many small games are conducted (eg, among pairs of players that "happen" to meet).

One then expects that strategies with higher payoffs will spread within the population (by learning, copying successful strategies, or even by infection). Indeed, evolutionary games in large populations of players create a dynamic process, in which the frequencies of the strategies played (by population members) change in time because of the learning or selection forces guiding the players' strategic choices. Clearly, the rate of change depends on the current strategy mix in the population. Such dynamics can be described by stochastic or deterministic models. The subject of evolutionary game theory is exactly the study of these dynamics. An excellent presentation of evolutionary game dynamics can be found in [7]. For a more thorough study the reader is referred to [3]. Not surprisingly, evolutionary game processes that converge to stable states usually have the property that those states are also self-confirming equilibria (eg, Nash equilibria). This is one of the most robust results in evolutionary game theory, the "folk theorem" that stability implies Nash equilibrium. In fact, one of the main approaches in the study of evolutionary games is the concept of Evolutionary Stable Strategies (ESS), which are nothing more than Nash Equilibria together with an additional stability property. This additional property is interpreted as ensuring that if an ESS is established in a population, and a small proportion of the population adopts some mutant behavior, then the process of selection (or learning) will eliminate the latter. Once an ESS is established in a population, it should therefore be able to withstand the pressures of mutation and selection.

Related Work and Our Contribution. Evolutionary game theory is quite interesting on its own, and Evolutionary Stable Strategies (ESS) are the most popular notion of stability in these games. Consequently, knowing the number of ESS that an evolutionary game may end up in is a very important issue, since it demonstrates the inherent complexity of the game. On the other hand, evolutionary game theory may be seen as a methodology for either computing (approximate) solutions, or estimating the convergence times to such solutions, for hard combinatorial problems. For example, [5] exploited the popular notion of replicator dynamics of evolutionary games to prove that for an evolutionary game whose underlying strategic game is a single-commodity selfish routing game, the convergence time to an ε-approximate Nash Equilibrium is polynomial in 1/ε and logarithmic in the maximum-to-optimal latency ratio. A weakened version of this result was also shown for the more general case of evolutionary games for which the underlying strategic game is a multi-commodity network congestion game. Some years earlier ([1]), the notion of replicator dynamics had been used as a heuristic for the NP-hard Max Weight Clique problem. Recently ([4]), the problem of deciding the existence of an ESS in an arbitrary evolutionary game was proved to be both NP-hard and coNP-hard, whereas deciding the existence of a regular ESS is an NP-complete problem. Concerning the number of ESS in an evolutionary game, to our knowledge, there is not much in the literature.

The only interesting work that we could find is [2], which gives some (worst-case) exponential upper and lower bounds on the maximum number of ESS in evolutionary games whose underlying payoff matrix is symmetric. On the other hand, some very interesting results have appeared concerning the expected number of Nash equilibria in strategic games. In particular, [9] provides a (computationally feasible) formula for the mean number of Nash Equilibria in a random game where all the players have the same strategy set and the entries of their payoff matrices are independent (uniform) random variables. This formula is used in [10] in order to prove that the expected number of Nash equilibria in a random 2-person (strategic) game tends to exp(0.281644N + O(log N)) as N (the number of pure strategies of a player) tends to infinity.

Although it is trivial to show that each ESS indicates a (symmetric) Nash Equilibrium of the underlying strategic game, it was not known (until the present work) whether there is a qualitative difference between the set of ESS of an evolutionary game and the set of Nash Equilibria of the underlying strategic game. Indeed, [2] gives some evidence that the notion of ESS may be of the same hardness as that of Nash Equilibria, by giving exponential (worst-case) bounds on their number. On the other hand, in this work we show that the number of ESS in a random evolutionary game is exponentially smaller than the number of Nash Equilibria in the underlying random symmetric 2-person game. We prove this by exploiting a quite interesting (necessary and sufficient) condition for a strategy to be an ESS of an evolutionary game, given that the underlying symmetric strategy profile is a Nash equilibrium ([6]). Our approach is based on constructing sufficiently many (independent of each other) certificates for an arbitrary strategy (such that the underlying profile is a symmetric Nash Equilibrium) being an ESS, for which the joint probability of being true is very small.

The paper is organized as follows: In Section 2 we give the main definitions and notation of non-cooperative game theory, and we introduce symmetric 2-player games, evolutionary games, and evolutionary stable strategies. In Section 3 we present our result on the number of ESS in a random 2-player game.

2 Preliminaries

Non-cooperative Games and Equilibria. We restrict our view in this paper to the class of finite games in strategic form. More precisely, let I = [n] be a set of players, where n is a positive integer (throughout, [k] ≡ {1, 2, . . . , k} for any integer k ∈ IN). For each player i ∈ I, let S_i be her (finite) set of allowable actions, called the action set of player i. The deterministic choice of a specific action s_i ∈ S_i by a player i ∈ I is called a pure strategy for this player. Thus the action set of a player can also be seen as her pure strategy set. A vector s = (s_1, . . . , s_n) ∈ ×_{i∈I} S_i, where s_i ∈ S_i is the pure strategy adopted by player i ∈ I, is called a pure strategies profile or a configuration of the players. The space of all the pure strategies profiles in the game is thus the Cartesian product S = ×_{i∈I} S_i of the players' pure strategy sets (usually called the configuration space). For any configuration s ∈ S and any player i ∈ I, let π_i(s) be a real number indicating the payoff to player i upon the adoption of this configuration by all the players of the game.

The finite collection of real numbers {π_i(s) : s ∈ S} defines player i's pure strategies payoff function. Let π(s) = (π_i(s))_{i∈I} be the vector function of all the players' payoffs. Thus, a game in strategic form is simply described by a triplet Γ = (I, S, π), where I is the set of players, S is the configuration space of the players, and π is the vector function of all the players' payoffs.

Let P_k ≡ {z ∈ [0, 1]^k : Σ_{i∈[k]} z_i = 1} denote the (k−1)-dimensional simplex. For any player i ∈ I, any point x_i = (x_{i,1}, x_{i,2}, . . . , x_{i,m_i}) ∈ P_{m_i} (where m_i = |S_i|) is called a mixed strategy for her. A player that adopts the mixed strategy x_i is assumed to determine her own choice of an action randomly (ie, using x_i) and independently of all the other players. P_{m_i} is the mixed strategies set of player i. A mixed strategies profile is a vector x = (x_1, . . . , x_n) whose components are themselves mixed strategies of the players, ie, ∀i ∈ I, x_i ∈ P_{m_i}. We call the Cartesian product ∆ = ×_{i∈I} P_{m_i} ⊆ IR^m the mixed strategies space of the game (m = m_1 + · · · + m_n). When the players adopt a mixed strategies profile x ∈ ∆, we can compute the average payoff u_i that player i gets (for x) in the usual way: u_i(x) ≡ Σ_{s∈S} P(x, s) · π_i(s), where P(x, s) ≡ ∏_{i∈I} x_i(s_i) is the occurrence probability of configuration s ∈ S wrt the mixed profile x ∈ ∆. This (extended) function u_i : ∆ → IR is called the (mixed) payoff function of player i. Let us indicate by (x_i, y_{−i}) a mixed strategies profile where player i adopts the mixed strategy x_i ∈ P_{m_i}, and all other players adopt the mixed strategies that are determined by the profile y ∈ ∆. This notation is particularly convenient when a single player i considers a unilateral "deviation" x_i ∈ P_{m_i} from a given profile y ∈ ∆. A best response of player i to a mixed strategies profile y ∈ ∆ is any element of the set B_i(y) ≡ arg max_{x_i∈P_{m_i}} {u_i(x_i, y_{−i})}. A Nash Equilibrium (NE in short) is any profile y ∈ ∆ with the property that ∀i ∈ I, y_i ∈ B_i(y). The nice thing about NE is that each finite strategic game Γ = (I, S, π) has at least one NE [11].

The subclass of symmetric 2-player games provides the basic setting for much of evolutionary game theory. Indeed, many of the important insights can be gained already in this (special) case.

Definition 1. A finite strategic game Γ = (I, S, π) is a 2-player game when I = {1, 2}. It is called a symmetric 2-player game if in addition S_1 = S_2 and ∀(s_1, s_2) ∈ S, π_1(s_1, s_2) = π_2(s_2, s_1).

Note that in the case of a symmetric 2-player strategic game, the payoff functions of Γ can be represented by two |S_1| × |S_2| real matrices Π_1, Π_2 such that Π_1[s_1, s_2] = π_1(s_1, s_2), ∀(s_1, s_2) ∈ S, and Π_1 = Π_2^T = Π. For any mixed strategies profile x = (x_1, x_2) ∈ ∆, the expected payoff of player 1 for this profile is u_1(x) = x_1^T Π x_2, while the payoff of player 2 is u_2(x) = x_1^T Π_2 x_2 = x_2^T Π x_1. Thus we can fully describe a symmetric 2-player game by (S, Π), where S is the common action set and Π is the payoff matrix of the row player.
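To make the preceding definitions concrete, here is a minimal numpy sketch (not from the paper; the payoff matrix and the mixed strategies are hypothetical choices) that evaluates u_1(x) = x_1^T Π x_2 and u_2(x) = x_2^T Π x_1 for a symmetric 2-player game and lists the pure best responses of player 1 to x_2. Since the column player's matrix is Π^T, a single matrix Π suffices, exactly as in the (S, Π) description above.

```python
# Illustrative sketch: expected payoffs and pure best responses in a
# symmetric 2-player game (S, Pi); the numbers below are arbitrary examples.
import numpy as np

Pi = np.array([[0.0, 3.0],
               [1.0, 2.0]])          # row player's payoff matrix (column player gets Pi^T)

x1 = np.array([0.5, 0.5])            # mixed strategy of player 1
x2 = np.array([0.25, 0.75])          # mixed strategy of player 2

u1 = x1 @ Pi @ x2                    # u1(x) = x1^T Pi x2
u2 = x2 @ Pi @ x1                    # u2(x) = x2^T Pi x1

# Pure best responses of player 1 to x2: actions maximizing (Pi x2)_j.
payoffs_vs_x2 = Pi @ x2
best_responses = np.flatnonzero(np.isclose(payoffs_vs_x2, payoffs_vs_x2.max()))
print(u1, u2, best_responses)
```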

Two useful notions, especially in the case of 2-player games, are the support and the extended support. In a 2-player strategic game Γ = ({1, 2}, S, π), the support of a mixed strategy x_1 ∈ P_{m_1} (resp. x_2 ∈ P_{m_2}) is the set of allowable actions of player 1 (resp. player 2) that have non-zero probability in x_1 (resp. x_2). Formally, ∀i ∈ {1, 2}, supp(x_i) ≡ {j ∈ S_i : x_i(j) > 0}. The extended support of a mixed strategy x_2 ∈ P_{m_2} of player 2 is the set of pure best responses of player 1 to x_2. That is, extsupp(x_2) ≡ {j ∈ S_1 : u_1(j, x_2) = max_{x_1∈P_{m_1}} u_1(x_1, x_2)}. Similarly, the extended support of a mixed strategy x_1 ∈ P_{m_1} of player 1 is the set of pure best responses of player 2 to x_1. That is, extsupp(x_1) ≡ {j ∈ S_2 : u_2(x_1, j) = max_{x_2∈P_{m_2}} u_2(x_1, x_2)}. The following lemma is a direct consequence of the definition of NE (a proof is provided in the full version of the paper):

Lemma 1. If (x_1, x_2) ∈ ∆ is a NE of a 2-player strategic game, then supp(x_1) ⊆ extsupp(x_2) and supp(x_2) ⊆ extsupp(x_1).

Due to their connection to the notion of ESS, a class of NE of particular interest is that of symmetric Nash equilibria. A strategy pair (x_1, x_2) ∈ P_n × P_n for a symmetric 2-player game Γ = (S, Π) (where n = |S| is the size of the common action set) is a symmetric Nash Equilibrium (SNE in short) if and only if (a) (x_1, x_2) is a NE for Γ, and (b) x_1 = x_2. Not all NE of a symmetric 2-player game need be symmetric. However, it is known that each finite symmetric 2-player game has at least one SNE [11].

When we wish to argue about the vast majority of symmetric 2-player games, one way is to assume that the real numbers in the set {Π[i, j] : (i, j) ∈ S × S} are independently drawn from a probability distribution F. For example, F can be the uniform distribution over an interval [a, b] for some a, b ∈ IR : a < b. Then, a typical symmetric 2-player game Γ is just an instance of the implied random experiment that is described in the following definition.

Definition 2. A symmetric 2-player game Γ = (S, Π) is an instance of a (symmetric 2-player) random game wrt the probability distribution F, if and only if ∀i, j ∈ S, the real number Π[i, j] is an independently and identically distributed random variable drawn from F.
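The support/extended-support machinery translates directly into a small test for symmetric Nash equilibria: (x, x) is a SNE exactly when every action played with positive probability is a pure best response to x, ie, supp(x) ⊆ extsupp(x). The sketch below is our own illustration (the Rock-Paper-Scissors matrix, the function names, and the tolerances are not from the paper).

```python
# Hedged sketch: supports, extended supports, and the SNE test supp(x) ⊆ extsupp(x)
# for a symmetric 2-player game (S, Pi).
import numpy as np

def supp(x, tol=1e-12):
    return set(np.flatnonzero(x > tol))

def extsupp(x, Pi, tol=1e-9):
    v = Pi @ x                                   # payoffs of pure strategies against x
    return set(np.flatnonzero(np.isclose(v, v.max(), atol=tol)))

def is_symmetric_ne(x, Pi):
    # (x, x) is a NE iff every action in supp(x) is a best response to x.
    return supp(x) <= extsupp(x, Pi)

Pi_rps = np.array([[0., -1.,  1.],
                   [1.,  0., -1.],
                   [-1., 1.,  0.]])              # Rock-Paper-Scissors
x = np.ones(3) / 3
print(is_symmetric_ne(x, Pi_rps))                # True: the uniform strategy is a SNE of RPS
```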

Evolutionary Stable Strategies. From now on we shall only consider symmetric 2-player strategic games. So, fix such a game Γ = (S, Π), for which the mixed strategies space is ∆ = P_n × P_n. Suppose that all the individuals of an infinite population are programmed to play the same (either pure or mixed) incumbent strategy x ∈ P_n whenever they are involved in Γ. Suppose also that a small group of invaders appears in the population. Let ε ∈ (0, 1) be the share of invaders in the post-entry population. Assume that all the invaders are programmed to play the (pure or mixed) strategy y ∈ P_n whenever they are involved in Γ. Pairs of individuals in this dimorphic post-entry population are now repeatedly drawn at random to play the same symmetric 2-player game Γ against each other. If an individual is chosen to participate, the probability that her (random) opponent will play strategy x is 1 − ε, while that of playing strategy y is ε. This is equivalent to saying that the opponent is an individual who plays the mixed strategy z = (1 − ε)x + εy. The post-entry payoff to the incumbent strategy x is then u(x, z) and that of the invading strategy y is just u(y, z) (where u = u_1 = u_2). Intuitively, evolutionary forces will select against the invader if u(x, z) > u(y, z).

The most popular notion of stability in evolutionary games is the Evolutionary Stable Strategy (ESS). A strategy x is evolutionary stable (ESS in short) if for any strategy y ≠ x there exists a barrier ε̄ = ε̄(y) ∈ (0, 1) such that ∀ 0 < ε ≤ ε̄, u(x, z) > u(y, z), where z = (1 − ε)x + εy. An ESS x is called regular if its support matches its extended support, ie, supp(x) = extsupp(x). One can easily prove the following characterization of ESS, which sometimes appears as an alternative definition (cf. [7]):

Proposition 1. Let x ∈ P_n be a (mixed in general) strategy that is adopted by the whole population. The following sentences are equivalent: (i) x is an evolutionary stable strategy. (ii) x satisfies the following properties, ∀y ∈ P_n \ {x}: [P1] u(y, x) ≤ u(x, x), and [P2] if u(y, x) = u(x, x) then u(y, y) < u(x, y).

Observe that the last proposition implies that (x, x) has to be a SNE of the underlying symmetric 2-player strategic game Γ (due to [P1]), and that x has to be strictly better than any invading strategy y ∈ P_n \ {x}, against y itself, in case y is a best-response strategy against x in Γ (due to [P2]). A mixed strategy x ∈ P_n is completely mixed if and only if supp(x) = S. It is well known (Haigh 1975, [6]) that if a completely mixed strategy x ∈ P_n is an ESS, then it is the unique ESS of the evolutionary game.
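Proposition 1 suggests a simple numerical spot check of the ESS property: sample many invader strategies y and verify [P1], and [P2] whenever y is an alternative best response. This is only a sampled, heuristic necessary-condition check, not a proof of stability; the Hawk-Dove payoff matrix (V = 2, C = 4) and its mixed ESS (1/2, 1/2) used below are a standard textbook example rather than anything taken from the paper.

```python
# Heuristic sketch of Proposition 1's [P1]-[P2] test against sampled invaders.
import numpy as np

def u(a, b, Pi):
    # payoff of (mixed) strategy a against (mixed) strategy b
    return a @ Pi @ b

def looks_like_ess(x, Pi, trials=10_000, tol=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    for _ in range(trials):
        y = rng.dirichlet(np.ones(n))            # random invader strategy
        if np.allclose(y, x):
            continue
        if u(y, x, Pi) > u(x, x, Pi) + tol:
            return False                         # [P1] violated
        if np.isclose(u(y, x, Pi), u(x, x, Pi)) and not (u(x, y, Pi) > u(y, y, Pi)):
            return False                         # [P2] violated
    return True

Pi_hd = np.array([[-1.0, 2.0],                   # Hawk-Dove with V = 2, C = 4
                  [0.0,  1.0]])
print(looks_like_ess(np.array([0.5, 0.5]), Pi_hd))   # expected: True
```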

3 Mean Number of ESS in Random Evolutionary Games

In this section we study the expected number of ESS in a generic evolutionary game with an action set S = [n] and a payoff matrix which is an n × n matrix U whose entries are iid r.v.s drawn from a probability distribution F. For any nonnegative vector x ∈ P_k, for some k ∈ IN, let Y_x ≡ P_k \ {x}. The following statement, proved by Haigh, is a necessary and sufficient condition for a mixed strategy s ∈ P_n being an ESS, given that (s, s) is a SNE of Γ = (S, U).

[Fig. 1. The partition of C]

Lemma 2 (Haigh 1975, [6]). Let (s, s) ∈ P_n × P_n be a SNE for the symmetric game Γ = (S, U) and let M = extsupp(s). Let also x be the projection of s on M, and C the submatrix of U consisting of the rows and columns indicated by M. Then s is an ESS if and only if ∀y ∈ Y_x, (y − x)^T C (y − x) < 0.

The following lemma also holds (a simple consequence of the definition of ESS):

Lemma 3. Let s ∈ P_n be an ESS of an evolutionary game whose underlying (symmetric) 2-player game is Γ = (S, U). Then (s, s) is a SNE for Γ.

Combining Lemmas 2 and 3, we observe that it suffices to examine only the SNE of Γ = (S, U) in order to find out whether this game possesses an ESS.
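For the regular case supp(s) = extsupp(s) = M, with the projection x completely mixed on M, Haigh's condition in Lemma 2 amounts to z^T C z < 0 for every nonzero z with Σ_i z_i = 0, ie, to the symmetric part of C being negative definite on the zero-sum subspace. The sketch below checks exactly that; it is our own illustration under this regularity assumption (boundary cases, where supp(s) ⊊ extsupp(s), would require restricting to the admissible directions y − x), and the two test matrices are classical examples, not from the paper.

```python
# Sketch of Haigh's criterion for an interior (completely mixed) projection x:
# negative definiteness of (C + C^T)/2 on the subspace {z : sum(z) = 0}.
import numpy as np

def haigh_negative_definite(C, tol=1e-12):
    m = C.shape[0]
    _, _, Vt = np.linalg.svd(np.ones((1, m)))
    B = Vt[1:].T                              # orthonormal basis of the zero-sum subspace
    Cs = 0.5 * (C + C.T)                      # only the symmetric part matters for z^T C z
    return np.linalg.eigvalsh(B.T @ Cs @ B).max() < -tol

C_hd = np.array([[-1.0, 2.0], [0.0, 1.0]])    # Hawk-Dove: the mixed SNE (1/2, 1/2) IS an ESS
C_rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # RPS: uniform SNE is NOT an ESS
print(haigh_negative_definite(C_hd), haigh_negative_definite(C_rps))   # True False
```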

In the sequel we focus our attention on random evolutionary games for which the payoff matrix has entries that are independent r.v.s, symmetric about their mean:

Definition 3. A random variable X is called symmetric about the mean if and only if ∀a ≥ 0, P{µ ≤ X ≤ µ + a} = P{µ − a ≤ X ≤ µ}, where µ = E{X}.

The following lemma will be useful in our investigation (a proof is in the full version of the paper):

Lemma 4. Let X, Y be two independent, continuous, symmetric about the mean r.v.s, such that µ = E{X} = E{Y}. Then P{X > Y} = 1/2.

We now show our main theorem:

Theorem 1. Let Γ = (S, U) be an instance of a random symmetric 2-player game whose payoff entries are iid r.v.s drawn uniformly from [0, A], for some constant A > 0. Then E{#ESS} = o(E{#SNE}).

Proof. Consider an arbitrary SNE (s, s) ∈ P_n × P_n, and assume wlog that extsupp(s) = [m] for some 1 ≤ m ≤ n (by reordering the action set S). Assume also that s_1 ≥ s_2 ≥ · · · ≥ s_r > 0 = s_{r+1} = · · · = s_m for some 1 ≤ r ≤ m (ie, its support is supp(s) = [r]). Let x = s|_{[m]} ≡ (s_1, . . . , s_m) ∈ P_m be the projection of s to its extended support. Let also C = U|_{[m],[m]} be the submatrix of U consisting of its first m rows and columns. By Lemmas 2 and 3 we know that a necessary condition for s being an ESS is that C is negative definite, ie, ∀y ∈ Y_x, (y − x)^T C (y − x) < 0. We shall prove that this is highly unlikely to hold for any mixed strategy s with support of size r ≥ 1. Set ε = s_r > 0. Consider the following collection of vectors from Y_x: ∀ 1 ≤ k ≤ r* ≡ min{r, m/2},

y_k = (x_1, . . . , x_{k−1}, x_k − ε, x_{k+1} + ε/(m−k), . . . , x_m + ε/(m−k))  and
z_k = y_k − x = (0, . . . , 0, −ε, ε/(m−k), . . . , ε/(m−k)).

Then we have: ∀ 1 ≤ k ≤ r*,

(z_k)^T C z_k = ε² · C_{k,k} − (ε²/(m−k)) · Σ_{j=k+1}^{m} [C_{k,j} + C_{j,k}] + (ε²/(m−k)²) · Σ_{k+1≤i,j≤m} C_{i,j}     (1)
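As a quick sanity check of the expansion (1), one can pick a random matrix C and a particular k, and compare the direct evaluation of (z_k)^T C z_k with the three-term closed form. The matrix size, k, and ε below are arbitrary illustrative choices, not values used in the paper.

```python
# Numerical check of eq. (1) for the specific perturbation direction z_k.
import numpy as np

rng = np.random.default_rng(1)
m, k, eps = 6, 2, 0.1                      # k is 1-based, as in the text
C = rng.uniform(0.0, 1.0, size=(m, m))

z = np.zeros(m)
z[k - 1] = -eps
z[k:] = eps / (m - k)                      # z_k = (0,...,0, -eps, eps/(m-k), ..., eps/(m-k))

lhs = z @ C @ z
rhs = (eps**2 * C[k - 1, k - 1]
       - eps**2 / (m - k) * (C[k - 1, k:].sum() + C[k:, k - 1].sum())
       + eps**2 / (m - k) ** 2 * C[k:, k:].sum())
print(np.isclose(lhs, rhs))                # True
```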

By Lemma 2 we know that a necessary condition for the mixed strategy s (for which we have already assumed that (s, s) is a SNE for Γ) to be an ESS is the following: ∀k ∈ [r*], (y_k − x)^T C (y_k − x) = (z_k)^T C z_k < 0, which (dividing by ε² > 0) is equivalent to

∀k ∈ [r*],   (1/(m−k)) · Σ_{j=k+1}^{m} [C_{k,j} + C_{j,k}]  >  C_{k,k} + (1/(m−k)²) · Σ_{k+1≤i,j≤m} C_{i,j}     (2)

Consider now the collection of events E = {E_k ≡ I{S_k > C_{k,k} + Z_k}}_{1≤k≤r*}, where S_k ≡ (1/(m−k)) · Σ_{j=k+1}^{m} [C_{k,j} + C_{j,k}] and Z_k ≡ (1/(m−k)²) · Σ_{k+1≤i,j≤m} C_{i,j}. We know that s is an ESS only if ∩_{1≤k≤r*} E_k holds true. Applying the Bayes rule repeatedly we get:

P{∩_{1≤k≤r*} E_k} = P{E_{r*}} · P{E_{r*−1} | E_{r*}} · · · P{E_1 | ∩_{j=2}^{r*} E_j}     (3)

where r* = min{r, m/2}. In order to prove that this probability is very small, we proceed as follows: we first calculate, for any 1 ≤ k ≤ r*, the probability of E_k being true (independently of all the other events).

Lemma 5. ∀ 1 ≤ k ≤ r*, P{E_k} = 1/2.

Proof. Fix an arbitrary 1 ≤ k ≤ r*. Observe that: (a) Z_k does not include in its sum any of the variables C_{k,j}, C_{j,k}, C_{k,k}; thus Z_k is independent of S_k and C_{k,k}. (b) S_k is not affected at all by the value of C_{k,k}; therefore S_k is also independent of C_{k,k}. Let R_k ≡ C_{k,k} + Z_k. By our previous remarks, we get the following claim:

Claim. R_k is a r.v. independent of S_k.

By linearity of expectation, and assuming that any r.v. distributed according to F has expectation µ, we have that E{R_k} = E{C_{k,k}} + (1/(m−k)²) · Σ_{k+1≤i,j≤m} E{C_{i,j}} = µ + (1/(m−k)²) · (m−k)² · µ = 2µ. Similarly, E{S_k} = (1/(m−k)) · Σ_{j=k+1}^{m} [E{C_{k,j}} + E{C_{j,k}}] = (1/(m−k)) · (m−k) · 2µ = 2µ. That is, we deduce that

Claim. E{R_k} = E{S_k}.

Notice also that the following claim holds, whose proof is straightforward:

Claim. Let X_1, . . . , X_t be iid uniform r.v.s drawn from [0, A]. Then X = Σ_{j=1}^{t} X_j is a symmetric r.v. about its expectation E{X} = tA/2, in the interval [0, tA].

Since {C_{k,j}, C_{j,k}}_{k+1≤j≤m} is a collection of 2(m−k) iid uniform r.v.s on [0, A], (m−k)·S_k is a symmetric r.v. about its expectation 2(m−k)µ in [0, 2(m−k)A], or equivalently, S_k is a symmetric r.v. about its expectation 2µ in [0, 2A]. Similarly, {C_{i,j}}_{k+1≤i,j≤m} is a collection of (m−k)² iid uniform r.v.s on [0, A] and thus (m−k)²·Z_k is a symmetric r.v. on [0, (m−k)²A], or equivalently, Z_k is a symmetric r.v. (around E{Z_k} = µ) on [0, A]. So, R_k = Z_k + C_{k,k} is also a symmetric r.v. (about its expectation 2µ) on [0, 2A]. Thus, we conclude that:

Claim. R_k and S_k have the same expectation 2µ, they are independent of each other, and they are both symmetric r.v.s in the interval [0, 2A].

The following claim exploits the symmetry of the r.v.s that we compare in each certificate E_k (of s being an ESS):

Claim. P{E_k} = P{S_k > R_k} = 1/2.

Proof. E_k compares the values of the (independent) r.v.s S_k and R_k. But these two r.v.s have the same expectation and they are symmetric about their (common) mean. By applying Lemma 4 we get that P{E_k} = 1/2. This completes the proof of Lemma 5.

Recall that we are interested in P{E_k | ∩_{k+1≤j≤r*} E_j}, ∀ 1 ≤ k ≤ r*−1 (see equation (3)). Our goal is to determine an upper bound on the occurrence probability of each event E_k, despite its dependence on its nested events {E_j}_{k+1≤j≤r*}. Observe that we reveal the outcomes of the conditional events involved in eq. (3) sequentially, starting from E_{r*}, then E_{r*−1}|E_{r*}, etc, up to E_1 | ∩_{2≤j≤r*} E_j.
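Lemma 5 is easy to corroborate empirically: drawing C with iid uniform entries and counting how often S_k > C_{k,k} + Z_k should give a frequency close to 1/2. The parameters below (m, k, A, number of trials) are arbitrary illustrative choices, not from the paper.

```python
# Monte Carlo spot check of Lemma 5: P{S_k > C_kk + Z_k} = 1/2 for uniform entries.
import numpy as np

rng = np.random.default_rng(2)
m, k, A, trials = 8, 3, 1.0, 100_000        # k is 1-based, as in the text
C = rng.uniform(0.0, A, size=(trials, m, m))

S_k = (C[:, k - 1, k:].sum(axis=1) + C[:, k:, k - 1].sum(axis=1)) / (m - k)
Z_k = C[:, k:, k:].sum(axis=(1, 2)) / (m - k) ** 2
R_k = C[:, k - 1, k - 1] + Z_k
print((S_k > R_k).mean())                   # should be close to 1/2
```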

Lemma 6. The probability of a SNE s being an ESS is

P{∩_{1≤k≤r*} E_k} ≤ [ (1+δ')/2 + 2·exp( −δ²(m−r*)²A / (4+(4/3)δ) ) ]^{r*},

where δ' = δ + (r*−1)/(m−1) − δ·(r*−1)/(m−1) and δ > 0.

Proof. ∀k ∈ [r*−1], let Ẑ_k denote the sum of the r.v.s from C involved in Z_k (ie, Ẑ_k = (m−k)²·Z_k), and let Ŝ_k be the sum of the r.v.s in S_k (ie, Ŝ_k = (m−k)·S_k). Then,

∀ 1 ≤ k ≤ r*−1,   Ẑ_k = C_{k+1,k+1} + Ẑ_{k+1} + Ŝ_{k+1}     (4)
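The telescoping identity (4) is a purely combinatorial statement about which entries of C each sum contains, and can be verified mechanically; the random matrix and the index k below are arbitrary illustrative choices.

```python
# Quick check of the telescoping identity (4); indices are 1-based, as in the text.
import numpy as np

rng = np.random.default_rng(3)
m = 7
C = rng.uniform(0.0, 1.0, size=(m, m))

def Zhat(k): return C[k:, k:].sum()                        # sum over k+1 <= i, j <= m of C_ij
def Shat(k): return C[k - 1, k:].sum() + C[k:, k - 1].sum()  # sum over j > k of C_kj + C_jk

k = 2
print(np.isclose(Zhat(k), C[k, k] + Zhat(k + 1) + Shat(k + 1)))   # True
```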

Observe now that we can get a new set of inequalities, exploiting the fact that we check for the truth of an event E_k conditioned on the truth of all its nested events:

For E_{r*}: Ŝ_{r*} > (m−r*)·C_{r*,r*} + Ẑ_{r*}/(m−r*).
For E_{r*−1}|E_{r*}: Ŝ_{r*−1} > (m−r*+1)·C_{r*−1,r*−1} + Ẑ_{r*−1}/(m−r*+1), which by (4) and E_{r*} gives Ŝ_{r*−1} > (m−r*+1)·C_{r*−1,r*−1} + C_{r*,r*} + Ẑ_{r*}/(m−r*).
. . .
For E_1 | ∩_{2≤j≤r*} E_j: Ŝ_1 > (m−1)·C_{1,1} + Ẑ_1/(m−1), which by (4) and E_2, . . . , E_{r*} gives Ŝ_1 > (m−1)·C_{1,1} + Σ_{j=2}^{r*} C_{j,j} + Ẑ_{r*}/(m−r*).

From this we define a new necessary condition:

∀ 1 ≤ k ≤ r*,   S_k > C_{k,k} + ((m−r*)/(m−k)) · Z_{r*}     (5)

So, we consider a new collection of events {E′_k}_{1≤k≤r*} (described by (5)), each of which involves a r.v. (S_k) that is twice the average of 2(m−k) iid uniform r.v.s, a unique uniform r.v. (C_{k,k}) of expectation A/2 that is not considered at all by the other events, and a last term that is handled as a constant which is asymptotically equal to the expected value A/2 of a uniform r.v. in [0, A] (since m − r* = O(m) and Z_{r*}, the average value of O(m²) iid r.v.s, tends rapidly to their common expectation A/2). Observe that these events are independent, since each of them compares the values of two (unique) independent and symmetric r.v.s of (asymptotically) the same expectation. Using an argument similar to that of Lemma 5, it is not hard to prove that the probability of s being an ESS is upper bounded by the value claimed in the statement of the lemma. The complete presentation of this proof is provided in the full version of the paper.

As a direct consequence of Lemma 6, we get the following corollary:

Corollary 1. The probability of a SNE (s, s) indicating an ESS s is upper bounded by 2 · ((√3 + δ)/2)^m = O(√m) · (√3/2)^m, if we set r* = m/2 and δ = O(1/√m).

It is worth mentioning that for the random 2-player games considered in [10] (where each entry of the payoff matrices is uniformly distributed in the unit sphere) almost all the NE have supports whose sizes are approximately 0.315915n. The scaling of the unit interval [0, 1] to [0, A] for any A > 0 does not affect this result, since a NE is determined by linear constraints wrt the support (each support is defined by a system of linear inequalities for a 2-player game, cf. [8]) and we can scale each inequality by A, so long as A > 0.

Let now q = #NE be the number of NE in our game, which is of course a random variable. For each NE x of the game, let I(x) = I{x is an ESS} be the corresponding indicator variable of x being also an ESS. Since x being an ESS implies our event E ≡ ∩_{1≤k≤r*} E_k, we have that E{I(x)} = P{I(x) = 1} = P{x is an ESS} ≤ P{E}, and hence E{I(x)} = O(√m) · (√3/2)^m. Therefore E{#ESS} = E{Σ_{x: NE} I(x)} = O(√m) · (√3/2)^m · E{#NE} (by Wald's inequality for a random sum of random variables). So, we have established that, since almost all NE have support size 0.315915n ≤ r ≤ m, E{#ESS} = E{#NE} · O(√n) · (√3/2)^{0.315915n}, which proves our main theorem.

Remark 1. If we also adopt the numerical analysis of [10] on the expected number of NE in such a game, according to which the expected number of NE is exp(0.281644n + O(log n)), then we come to the conclusion that E{#ESS} = exp(0.281644n + O(log n)) · O(√n) · (√3/2)^{0.315915n} = exp(0.137803n + O(log n)). This is still exponential, but also exponentially smaller than the expected number of NE.

Acknowledgments. The authors wish to thank an anonymous referee for some nice remarks on their structural argument in an earlier version of the paper.

References

1. Bomze I.M., Pelillo M., Stix V. Approximating the maximum weight clique using replicator dynamics. IEEE Transactions on Neural Networks, 11(6):1228–1241, November 2000.
2. Broom M. Bounds on the number of ESSs of a matrix game. Mathematical Biosciences, 167(2):163–175, October 2000.
3. Cressman R. Evolutionary Dynamics and Extensive Form Games. MIT Press, 2003.
4. Etessami K., Lochbihler A. The computational complexity of evolutionary stable strategies. Technical Report 55, Electronic Colloquium on Computational Complexity (ECCC), 2004. ISSN 1433-8092.
5. Fischer S., Vöcking B. On the evolution of selfish routing. In Proc. of the 12th European Symposium on Algorithms (ESA '04), pages 323–334. Springer, 2004.
6. Haigh J. Game theory and evolution. Advances in Applied Probability, 7:8–11, 1975.
7. Hofbauer J., Sigmund K. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(4):479–519, 2003.
8. Koutsoupias E., Papadimitriou C. Worst-case equilibria. In Proc. of the 16th Annual Symposium on Theoretical Aspects of Computer Science (STACS '99), pages 404–413. Springer-Verlag, 1999.
9. McLennan A. The expected number of Nash equilibria of a normal form game. Econometrica, 73(1):141–174, January 2005.
10. McLennan A., Berg J. The asymptotic expected number of Nash equilibria of two player normal form games. To appear in Games and Economic Behavior, 2005.
11. Nash J.F. Noncooperative games. Annals of Mathematics, 54:289–295, 1951.