Hamiltonian Potential Functions for Differential Games∗

Davide Dragone, Luca Lambertini, Arsen Palestini
Department of Economics, University of Bologna

George Leitmann
College of Engineering, University of California at Berkeley
Abstract

We introduce the concept of Hamiltonian potential function for noncooperative open-loop differential games and characterise necessary and sufficient conditions for its existence. We also identify a class of games admitting a Hamiltonian potential and illustrate appropriate examples pertaining to oligopoly games where price or quantity competition goes along with noncooperative investments either in advertising or in R&D, so as to provide a plausible intuitive interpretation of the meaning of the Hamiltonian potential.
Keywords: differential games, game theory, optimal control.
1 Introduction
Following Monderer and Shapley (1996), a relatively large literature has investigated potential functions for static games. In a potential game, the information about Nash equilibria is nested into a single real-valued function (the potential function) over the strategy space. The specific feature of a potential function defined for a given game is that its gradient coincides with the vector of first derivatives of the individual payoff functions of the original game. As stressed by Slade (1994), the interest of this line of research is that, in a game admitting a potential function, it is as if players were jointly maximising that single function instead of competing to maximise their respective payoffs.

To the best of our knowledge, no attempt has been made as yet concerning the construction of a potential function for differential games. Here, we confine our attention to the solution of noncooperative open-loop differential games.

∗ We thank Michael Caputo, Gustav Feichtinger and Arkady Kriazhimskyi for useful comments and suggestions. The usual disclaimer applies.
Given that the necessary conditions for the solution of an open-loop dynamic game contain the adjoint equations in addition to the first order conditions on controls, verifying the existence of a potential function for such a game is essentially different from carrying out the same task for a static game. What a potential function must accomplish in a dynamic game is to reproduce the same dynamic system (state and control equations) and achieve the same open-loop solution(s) the original game yields. We refer to this function as a Hamiltonian potential function.

We provide a sufficient condition for the existence of a Hamiltonian potential function in a generic noncooperative open-loop differential game, and subsequently identify a class of games admitting a Hamiltonian potential function. The necessary condition we single out for the existence of a Hamiltonian potential function is the absence of a multiplicative interplay between one player's state variable and the other players' control variables, either in the instantaneous payoff function or in the state dynamics. We define this specific interplay as dynamic absorptive capacity, borrowing this terminology from Kamien and Zang (2000). There emerges a strong connection between the presence or absence of absorptive capacity and the resulting intensity of competition between the players involved in the game under consideration, in such a way that games featuring absorptive capacity are intrinsically more competitive than those not featuring it. As a result, the former class of games does not admit a Hamiltonian potential function, while the latter class does. In looser but more intuitive terms, this amounts to saying that the absence of absorptive capacity allows one to look upon the game as if players were jointly maximising a common objective function (the Hamiltonian potential).

To complement the theoretical analysis, we illustrate two examples belonging to the theory of industrial organization, which help our intuitive understanding of the Hamiltonian potential function and the economic interpretation of dynamic absorptive capacity. Further applications of our toolkit to other areas of economics are also laid out.

The remainder of the paper is structured as follows. The acquired wisdom about the construction of potential functions for static games in continuous strategies is briefly summarised in section 2. The necessary and sufficient conditions for the existence of a potential function in an open-loop differential game are investigated in section 3. Section 4 illustrates the examples. Final comments are in section 5.
2 Preliminaries: potential in static games
We briefly recall the concept of potential in static noncooperative, full information games. We borrow from physics the following well-known results.

Definition 1. A given vector field $F = (F_1(s), \ldots, F_k(s))$ is conservative if there exists a differentiable function $P(s)$ such that
$$\frac{\partial P(s)}{\partial s_i} = F_i(s), \qquad i = 1, \ldots, k. \qquad (2.1)$$
$P(s)$ is called a potential function for $F$.

Theorem 2. A vector field $F$ defined on an open convex subset of $\mathbb{R}^k$ is conservative if and only if
$$\frac{\partial F_i}{\partial s_j} = \frac{\partial F_j}{\partial s_i}, \qquad \forall\, i, j = 1, \ldots, k,\ i \neq j. \qquad (2.2)$$
Given a game G where N = {1, 2, 3, ..., n} is the set of players, each one of them endowed with the profit function πi(·), if the vector field of the first order partial derivatives is conservative, then it admits a potential function and G is an exact potential game in the sense of Monderer and Shapley (1996).

For illustrative purposes, we now briefly summarise the construction of the potential function for a static Cournot-Nash game, a simplified version of the more general Cournot game considered by Slade (1994) and Monderer and Shapley (1996). Let G be a static market game where n firms simultaneously set output levels qi, i = 1, 2, 3, ..., n, to maximise individual profits πi(q), where q = (q1, ..., qn) is the vector of quantities, $p = a - \sum_{i=1}^{n} q_i$ is the (linear) market demand function,¹ and firms share the same productive technology, described by the cost function Ci = c qi. Therefore, the individual profit function can be written as
$$\pi_i(q) = \left(a - q_i - \sum_{j \neq i} q_j - c\right) q_i. \qquad (2.3)$$
The first order condition for non-cooperative profit maximisation is
$$\frac{\partial \pi_i(q)}{\partial q_i} = a - 2 q_i - \sum_{j \neq i} q_j - c = 0. \qquad (2.4)$$
The vector
$$\left(\frac{\partial \pi_1}{\partial q_1}, \ldots, \frac{\partial \pi_n}{\partial q_n}\right)$$
is conservative and it admits the following potential function:
$$\hat{P}(q) = \hat{\Pi}(q) + \sum_{i < j} q_i q_j, \qquad (2.5)$$
where
$$\hat{\Pi}(q) = \sum_{i=1}^{n}\left[-q_i^2 + \left(a - \sum_{j \neq i} q_j - c\right) q_i\right] + Z, \qquad (2.6)$$
Z being a constant of integration. It is easy to verify that $\hat{P}(q)$ yields the same gradient as the original game:
$$\frac{\partial \hat{P}(q)}{\partial q_i} = -2 q_i + a - \sum_{j \neq i} q_j - c = \frac{\partial \pi_i(q)}{\partial q_i}. \qquad (2.7)$$

¹ Interestingly, it can be shown that the potential function for the market game illustrated here also exists for a class of non-linear demand functions where $p = a - \left(\sum_{i=1}^{n} q_i\right)^{\beta}$, for all β > 0. For any β ≠ 1, the game is a best-response potential game. For the definition of best-response potential game, see Voorneveld (2000).
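To make the construction concrete, the conservativity condition (2.2) and the gradient identity (2.7) can be checked symbolically. The following sketch is not part of the original argument; it uses Python with sympy and assumes n = 3, with the names a, c and Z matching the symbols above.

```python
import sympy as sp

n = 3
a, c, Z = sp.symbols('a c Z')
q = sp.symbols('q1:4')  # q1, q2, q3

# Individual Cournot profits (2.3): pi_i = (a - q_i - sum_{j != i} q_j - c) q_i
profits = [(a - q[i] - sum(q[j] for j in range(n) if j != i) - c) * q[i]
           for i in range(n)]

# Candidate potential (2.5)-(2.6): Pi_hat plus the cross terms summed over pairs i < j
Pi_hat = sum(-q[i]**2 + (a - sum(q[j] for j in range(n) if j != i) - c) * q[i]
             for i in range(n)) + Z
P_hat = Pi_hat + sum(q[i] * q[j] for i in range(n) for j in range(i + 1, n))

# Conservativity (2.2): mixed partials of the first-order-condition field coincide
F = [sp.diff(profits[i], q[i]) for i in range(n)]
print(all(sp.simplify(sp.diff(F[i], q[j]) - sp.diff(F[j], q[i])) == 0
          for i in range(n) for j in range(n) if j != i))   # True

# Gradient identity (2.7): dP_hat/dq_i equals dpi_i/dq_i for every i
print(all(sp.simplify(sp.diff(P_hat, q[i]) - F[i]) == 0 for i in range(n)))  # True
```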
A potential in a static framework contains all the relevant information of the original static n-player game. Analogously, our intuition is that, if it exists, a Hamiltonian potential for differential games must contain all the relevant information of the original dynamic n-player game. In the next section, we consider a class of differential games and we provide some requirements allowing for the construction of a Hamiltonian potential. Note that this definition will necessarily differ from the one given in Monderer and Shapley (1996) and the rest of the related literature, as state variables do not appear in static games.

An additional interesting feature of the ensuing construction is the following. The state of the art concerning the existence of potential functions in noncooperative games (in continuous strategies) is currently confined to one-stage games like the Cournot model outlined above. That is, as yet we have no theoretical results clarifying whether potential functions exist or not in multistage static games where, e.g., firms first invest in R&D for some type of innovation (w.r.t. processes or products) and then compete on the market either in output levels or in prices. The crucial difficulty in this respect appears to be generated by the backward induction method usually associated with the way subgame perfect equilibria are generated in such games, or equivalently by the fact that players do not take the first order conditions w.r.t. all of their strategic variables simultaneously. As we shall see in the remainder of the paper, tackling this issue in dynamic games allows us to bypass this obstacle and investigate the existence of potential functions for games whose static counterparts would be multistage structures, precisely because in dynamic games players take all the first order conditions at the same time, at every instant throughout the time span over which the game unfolds.
3 A potential for differential games
Here, we investigate the possibility of identifying classes of games admitting a potential function, and the necessary and sufficient conditions that must hold for the latter to exist.
3.1 The dynamic set-up: definitions
In the following, we define a (normal) differential game, together with the appropriate requirements allowing for the construction of a Hamiltonian potential function, if it exists. Consider an infinite horizon differential game Γ with the following features:

• n is the number of players;

• $x(t) = (x_1(t), \ldots, x_n(t)) \in X \subset \mathbb{R}^n$, where X is a bounded and open set, is the vector of state variables;

• $u(t) = (u_1(t), \ldots, u_n(t)) \in U \subset \mathbb{R}^m$, $m \geq n$, is the vector of control variables; $u_i(t) = (u_{i1}(t), \ldots, u_{i v_i}(t))$ is the vector of controls related to the i-th player, $v_i$ being the number of controls of the i-th player, so that $m = \sum_{i=1}^{n} v_i$. The set U is also bounded and open;

• the i-th player is endowed with the instantaneous payoff πi(x(t), u(t), t) and is supposed to maximize the discounted objective functional
$$J_i \equiv \int_{t_0}^{\infty} e^{-\rho t}\, \pi_i(x(t), u(t), t)\, dt \qquad (3.1)$$
subject to the kinematic equation
$$\dot{x}_i(t) = g_i(x(t), u(t), t), \qquad x_i(t_0) = x_{i0}, \qquad (3.2)$$
where $x(t_0) = (x_1(t_0), \ldots, x_n(t_0))$ is the vector of initial conditions on states, $g_i(\cdot) \in C^2(X \times U \times [t_0, \infty))$, $i = 1, \ldots, n$, and ρ is the intertemporal discount rate, constant and common to all agents.²

² For the sake of simplicity, we assume that all agents have the same time preferences.

The standard technique requires constructing the current value Hamiltonian function of each agent, as follows:
$$H_i(\cdot) = \pi_i(x(t), u(t), t) + \lambda_{ii}(t)\, g_i(x(t), u(t), t) + \sum_{j \neq i} \lambda_{ij}(t)\, g_j(x(t), u(t), t), \qquad (3.3)$$
where λij(t) is the costate variable associated by player i to state variable xj; suppose $H_i \in C^2(X \times U \times \mathbb{R}^{n \times n} \times [t_0, \infty))$. We focus on interior open-loop Nash (i.e., simultaneous) equilibria. The related necessary conditions taken on (3.3) are (omitting arguments for brevity):
$$\frac{\partial H_i}{\partial u_{il}} = 0 \;\Leftrightarrow\; \frac{\partial \pi_i}{\partial u_{il}} + \lambda_{ii}\frac{\partial g_i}{\partial u_{il}} + \sum_{j \neq i}\lambda_{ij}\frac{\partial g_j}{\partial u_{il}} = 0, \qquad (3.4)$$
$$-\frac{\partial H_i}{\partial x_i} = \dot{\lambda}_{ii} - \rho\lambda_{ii}, \qquad (3.5)$$
$$-\frac{\partial H_i}{\partial x_j} = \dot{\lambda}_{ij} - \rho\lambda_{ij} \qquad \forall j \neq i, \qquad (3.6)$$
plus the transversality conditions
$$\lim_{t \to \infty} e^{-\rho t}\lambda_{ij} = 0 \qquad \forall i, j. \qquad (3.7)$$
In a static game, the construction of the corresponding potential function (if it exists) requires integrating the first-order conditions on choice variables, summing up the integrals and checking whether what results from this procedure is indeed a conservative field (as in Slade, 1994). In a differential game framework, we are looking for the potential function of a game with m controls and n states, and therefore n costates for each player. Thus, the whole set of necessary conditions taken for the population of players consists of m FOCs on controls (like (3.4), vi for the i-th player) and n × n costate equations (like (3.5)-(3.6), n for each player).

Now, there immediately arises a difficulty with costates, as (3.4) (and, most likely, (3.5)) will contain λij for at least some j ≠ i. Hence, integrating all partial derivatives ∂Hi/∂xj, j ≠ i, and summing them up would yield a function containing not n but n × n costates. Clearly, if it can be established that either λij = 0 for all j ≠ i, or the costates λij do not appear in (3.4)-(3.5), then, taking into account the FOCs (3.4), the set of partial derivatives to be integrated reduces to m + n and the game may admit a potential function. These properties can be checked either by examination of the necessary conditions or when one of the following cases occurs.

• Consider the most general case of all, with ẋi = gi(x, u, t) for all i = 1, . . . , n. If so, then
$$\frac{\partial^2 H_i}{\partial u_{il}\,\partial\lambda_{ij}} \neq 0 \quad\text{and}\quad \frac{\partial^2 H_i}{\partial x_i\,\partial\lambda_{ij}} \neq 0. \qquad (3.8)$$
Therefore, we need (3.6) to admit the solution λij = 0 at all t.
• A simpler case is that where ẋi = gi(x, ui, t) for all i = 1, . . . , n. Here,
$$\frac{\partial^2 H_i}{\partial u_{il}\,\partial\lambda_{ij}} = 0 \quad\text{but}\quad \frac{\partial^2 H_i}{\partial x_i\,\partial\lambda_{ij}} \neq 0, \qquad (3.9)$$
and again we need (3.6) to admit the solution λij = 0 at all t.

• The third case is that where ẋi = gi(xi, u, t) for all i = 1, . . . , n. Also here we need (3.6) to admit the solution λij = 0 at all t, since
$$\frac{\partial^2 H_i}{\partial u_{il}\,\partial\lambda_{ij}} \neq 0 \quad\text{and}\quad \frac{\partial^2 H_i}{\partial x_i\,\partial\lambda_{ij}} = 0. \qquad (3.10)$$

• The last, and simplest, case is the setup where ẋi = gi(xi, ui, t) for all i = 1, . . . , n. As the game exhibits separate state dynamics,
$$\frac{\partial^2 H_i}{\partial u_{il}\,\partial\lambda_{ij}} = 0 \quad\text{and}\quad \frac{\partial^2 H_i}{\partial x_i\,\partial\lambda_{ij}} = 0, \qquad (3.11)$$
and therefore the solution of (3.6) w.r.t. λij is in fact irrelevant.³

³ Observe that, if λij is either nil or irrelevant, the symmetry assumption concerning time discounting is not critical.

Whenever the game does not feature separate state dynamics, a sufficient condition for (3.6) to admit the null solution is the following:

Proposition 3. If there exist n − 1 functions ϖj(x) such that
$$\frac{\partial H_i}{\partial x_j} = \lambda_{ij}\,\varpi_j(x) \qquad (3.12)$$
for all i, j = 1, . . . , n, i ≠ j, then (3.6) admits the solution λij = 0 at all t, for all j ≠ i.

Proof. In this case the adjoint equations (3.6) can be rewritten as
$$\dot{\lambda}_{ij} - \rho\lambda_{ij} = -\lambda_{ij}\,\varpi_j(x), \qquad \forall j \neq i. \qquad (3.13)$$
These equations can be solved elementarily by separation of variables, and the solutions are
$$\lambda_{ij}(t) = \lambda_{ij}(t_0)\, e^{\int_{t_0}^{t}\left(\rho - \varpi_j(x(s))\right) ds}, \qquad (3.14)$$
which is obviously null for the initial condition λij(t0) = 0. ∎
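As a quick illustration of the proof (not in the original), the closed form (3.14) can be verified symbolically, treating ϖj(x(s)) as an arbitrary function of time along the state trajectory; a sketch in Python with sympy:

```python
import sympy as sp

t, t0, s, rho, lam0 = sp.symbols('t t_0 s rho lambda_0')
varpi = sp.Function('varpi')   # stands for varpi_j(x(.)) evaluated along the state path

# Candidate solution (3.14)
candidate = lam0 * sp.exp(sp.Integral(rho - varpi(s), (s, t0, t)))

# It satisfies the adjoint equation (3.13): lambda' - rho*lambda = -varpi(t)*lambda
residual = sp.diff(candidate, t) - rho * candidate + varpi(t) * candidate
print(sp.simplify(residual))        # 0

# With the initial condition lambda_ij(t0) = 0 the solution is identically zero
print(candidate.subs(lam0, 0))      # 0
```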
In the remainder, we confine our attention to games where either (i) state dynamics are separate, or (ii) every adjoint equation (3.6) admits the solution λij = 0 at all t. This applies, for instance, to the class of linear state games where the open-loop Nash equilibrium is strongly time consistent.⁴ It is now convenient to define the vector λii := (λ11, λ22, . . . , λjj, . . . , λnn).

⁴ For an exhaustive overview of linear state games, see Mehlmann (1988, ch. 4) and Dockner et al. (2000, ch. 7).

Definition 4. Given a differential game Γ with Hamiltonians Hi, we define the Hamiltonian potential for game Γ as the function $H^P(x, u, \lambda_{ii})$ such that
$$\left(\frac{\partial H^P}{\partial x_1}, \ldots, \frac{\partial H^P}{\partial x_n}, \frac{\partial H^P}{\partial u_{11}}, \ldots, \frac{\partial H^P}{\partial u_{n v_n}}\right) = \left(\frac{\partial H_1}{\partial x_1}, \ldots, \frac{\partial H_n}{\partial x_n}, \frac{\partial H_1}{\partial u_{11}}, \ldots, \frac{\partial H_n}{\partial u_{n v_n}}\right). \qquad (3.15)$$

If it exists, a Hamiltonian potential function $H^P$ for game Γ is a real-valued function that contains all the relevant information of the original differential game. The requirement on $H^P$ is that the partial derivatives of the individual Hamiltonian functions with respect to own states and controls are replicated, i.e., that the gradient of $H^P$ w.r.t. the states xi(t) and controls ui(t) replicates ∂Hi(·)/∂xi(t) and ∂Hi(·)/∂uil(t) for all i, l. We would like to stress that condition (2.2) stated in Theorem 2 holds if and only if the relevant vector field F coincides with the vector field in (3.15).
3.2 Hamiltonian potential: construction and properties
The Hamiltonian potential is the Hamiltonian function of a single player replacing the original n players involved in game Γ, endowed with the task of replicating the set of FOCs on controls as well as the n adjoint equations that are neither irrelevant nor admit a null solution, and finally reproducing the same control dynamics as in the original game. The final outcome of this procedure must be the same dynamic system of state and control equations as in game Γ. What follows is a sufficient condition for the existence of a Hamiltonian potential:

Proposition 5. If there exists a function
$$\hat{P}(u_{11}, \ldots, u_{n v_n}, x_1, \ldots, x_n, \lambda_{ii}) \qquad (3.16)$$
such that
$$\frac{\partial \hat{P}}{\partial u_{il}} + \lambda_{ii}\frac{\partial \dot{x}_i}{\partial u_{il}} + \sum_{j \neq i}\lambda_{jj}\frac{\partial \dot{x}_j}{\partial u_{il}} = \frac{\partial H_i}{\partial u_{il}} \qquad (3.17)$$
and
$$\frac{\partial \hat{P}}{\partial x_i} + \lambda_{ii}\frac{\partial \dot{x}_i}{\partial x_i} + \sum_{j \neq i}\lambda_{jj}\frac{\partial \dot{x}_j}{\partial x_i} = \frac{\partial H_i}{\partial x_i} \qquad (3.18)$$
for all i, j = 1, . . . , n and all l = 1, . . . , vi, then the game Γ with Hamiltonians Hi admits a Hamiltonian potential with the following form:
$$H^P(u, x, \lambda_{ii}) = \hat{P}(u_{11}, \ldots, u_{n v_n}, x_1, \ldots, x_n, \lambda_{ii}) + \sum_{i=1}^{n}\lambda_{ii}\,\dot{x}_i. \qquad (3.19)$$
Proof. It immediately follows from Definition 4. ∎

Relying on the above-mentioned construction, next we will focus our attention on some Hamiltonian functions of differential games which might be endowed with a Hamiltonian potential structure.
3.3 Different Hamiltonian functions and related potentials
Definition 6. A Hamiltonian of the type
$$H_i(u, x, \lambda_{ii}) = \alpha_i(u_i, \lambda_{ii}) + \sum_{j \neq i}\alpha_j(u_j, \lambda_{ii}) + \gamma_i(x_i, \lambda_{ii}) + \sum_{j \neq i}\gamma_j(x_j, \lambda_{ii}), \qquad (3.20)$$
where αi(·) and γi(·) are C² functions with respect to all their variables, for all i = 1, . . . , n, is called additively separable in states and controls.

Proposition 7. Every differential game Γ with additively separable Hamiltonians as in Definition 6 admits a Hamiltonian potential.

Proof. The additively separable form of the Hamiltonian functions allows us to obtain a conservative vector field. In fact, consider the $\left(\sum_{i=1}^{n} v_i + n\right)$-coordinate vector
$$\left(\frac{\partial\alpha_1}{\partial u_{11}}, \ldots, \frac{\partial\alpha_n}{\partial u_{n v_n}}, \frac{\partial\gamma_1}{\partial x_1}, \ldots, \frac{\partial\gamma_n}{\partial x_n}\right). \qquad (3.21)$$
Each component of this vector is a function which depends only on the costate variables and on either the controls of a single player or a single state variable, so that a potential for this vector field exists and is easily calculable. ∎

In this case it may be useful to reformulate the FOCs and the adjoint equations of the differential game to achieve the open-loop equilibrium trajectories:
$$\frac{\partial H^P}{\partial u_{il}} = 0 \iff \frac{\partial\alpha_i}{\partial u_{il}} = 0, \qquad (3.22)$$
$$\dot{\lambda}_{ii} - \rho\lambda_{ii} = -\frac{\partial H^P}{\partial x_i} = -\frac{\partial\gamma_i}{\partial x_i}, \qquad (3.23)$$
plus the appropriate transversality conditions. By differentiating the FOCs with respect to time and substituting into (3.23), we obtain a set of first order differential equations. Together with the equations of motion of the state variables, the dynamic system in states and controls completely describes the optimal trajectories of the game.
The next Hamiltonian structure we are going to investigate points out a further condition to be verified in order to have a Hamiltonian potential, which will be fully discussed in the examples illustrated in the next section. Consider a Hamiltonian admitting the solution λij = 0 for all j ≠ i at all t, which can therefore be written as follows:
$$H_i(x, u, \lambda_{ii}) = \pi_i(x_i, u) + \lambda_{ii}\, g_i(x_i, u), \qquad (3.24)$$
where πi(·) and gi(·) are C² functions with respect to all their variables. Remark that the i-th Hamiltonian (3.24) does not depend on the remaining n − 1 state variables, i.e.,
$$\frac{\partial H_i}{\partial x_j} = 0 \quad \text{for all } j \neq i. \qquad (3.25)$$
The next definition is based on the concept of absorptive capacity, mirroring the interpretation originally attached by Kamien and Zang (2000) to the role of absorptive capacity in static R&D games.

Definition 8. A Hamiltonian of the type (3.24) is called dynamically absorptive if there exist two integers 1 ≤ j ≤ n, j ≠ i, and 1 ≤ s ≤ vj such that
$$\frac{\partial^2 H_i}{\partial x_i\,\partial u_{js}} \neq 0. \qquad (3.26)$$

The main property of a dynamically absorptive Hamiltonian consists in the interaction between its state variable and at least one of the other players' control variables. Property (3.26) makes interaction stronger than otherwise, as the spillover exerted by player j on the evolution of player i's state depends on the current stock of that state. Conditions (3.25) and (3.26) are sufficient to ensure the impossibility of constructing a Hamiltonian potential for the related game:

Proposition 9. No differential game endowed with a dynamically absorptive Hamiltonian (3.24) admits a Hamiltonian potential.

Proof. If a Hamiltonian (3.24) is dynamically absorptive, the condition for conservativity (2.2) cannot hold, because for all j ≠ i and for all 1 ≤ s ≤ vj we have
$$\frac{\partial^2 H_j}{\partial x_i\,\partial u_{js}} = 0, \qquad (3.27)$$
which contradicts (3.26). ∎

As an intuitive appraisal of Definition 8 and Proposition 9, we may say that dynamically absorptive games are intrinsically more competitive than those where
(3.27) holds, and therefore they do not lend themselves to being reinterpreted as optimal control models where a single player sets all controls to maximise a single objective function (or, equivalently, all players concur to maximise this single objective).
4 Examples
The first two examples deal with two well-known problems belonging to industrial economics, where firms oligopolistically compete in some form of long-run investment as well as in a given market variable, either price or quantity. These two examples illustrate a specific advantage of our dynamic approach to the existence and construction of a potential function, as compared to the acquired wisdom in the domain of static games.
4.1 An advertising game
We analyze two simplified versions of a dynamic model of advertising based on a Cournot oligopoly, where firms invest so as to increase consumers' reservation prices. We follow the pattern outlined by Cellini and Lambertini (2003) and further investigated by Cellini et al. (2005) and Lambertini and Palestini (2009). In this example and in the following one we shall highlight the difference made by the possible presence of a dynamically absorptive Hamiltonian.
4.2 Advertising with dynamically absorptive Hamiltonians
Consider the following differential game over t ∈ [t0, ∞), where two single-product Cournot firms face the following demand functions:
$$p_i(t) = a_i(t) - q_i(t) - s\, q_j(t), \qquad i, j = 1, 2;\ i \neq j, \qquad (4.1)$$
where parameter s ∈ [0, 1] measures the exogenous degree of substitutability between the two goods. Firm i's instantaneous profit function is πi(t) = [pi(t) − c] qi(t) − b ki²(t), where ki(t) is the advertising effort carried out by firm i at time t to increase the reservation price for its good, ai(t). Therefore, the game features four controls (outputs and advertising efforts) and two states (the reservation prices). Firm i chooses qi(t) and ki(t) to
$$\max_{q_i(t), k_i(t)}\; J_i \equiv \int_{t_0}^{\infty} e^{-\rho t}\left[(a_i(t) - q_i(t) - s q_j(t) - c)\, q_i(t) - b k_i^2(t)\right] dt, \qquad (4.2)$$
subject to
$$\dot{a}_i(t) = k_i(t) - \beta k_j(t)\, a_i(t) - \delta a_i(t), \qquad a_i(t_0) = a_{i0}, \quad i = 1, 2. \qquad (4.3)$$
In the above state dynamics, δ > 0 is the usual depreciation rate, whereas β ≠ 0 is a spillover parameter measuring the (positive or negative) externality that firm i receives from the rival. Note that this representation of the state dynamics features absorptive capacity, as the extent of the spillover depends on the current size of ai(t). Dropping the time argument, the i-th current value Hamiltonian of the model reads as
$$H_i = (a_i - q_i - s q_j - c) q_i - b k_i^2 + \lambda_{ii}(k_i - \beta a_i k_j - \delta a_i) + \lambda_{ij}(k_j - \beta a_j k_i - \delta a_j), \qquad (4.4)$$
and since the costate equations
$$\dot{\lambda}_{ij} = (\rho + \beta k_i + \delta)\lambda_{ij} \qquad (4.5)$$
admit the solutions λij = λji ≡ 0 by Proposition 3, its relevant first order derivatives are:
$$\frac{\partial H_i}{\partial q_i} = a_i - 2 q_i - s q_j - c, \qquad (4.6)$$
$$\frac{\partial H_i}{\partial k_i} = \lambda_{ii} - 2 b k_i, \qquad (4.7)$$
$$\frac{\partial H_i}{\partial a_i} = q_i - \beta k_j \lambda_{ii} - \delta\lambda_{ii}. \qquad (4.8)$$
The non-existence of a Hamiltonian potential can be proved by applying Proposition 9. The following Proposition includes a complete explanation:

Proposition 10. The advertising game with absorptive capacity does not admit a Hamiltonian potential.

Proof. Suppose that a Hamiltonian potential (3.19) exists. Then, by Definition 4, $H^P$ would be a potential function for the 6-component vector field
$$\left(a_1 - 2q_1 - sq_2 - c,\; a_2 - 2q_2 - sq_1 - c,\; \lambda_{11} - 2bk_1,\; \lambda_{22} - 2bk_2,\; q_1 - (\beta k_2 + \delta)\lambda_{11},\; q_2 - (\beta k_1 + \delta)\lambda_{22}\right).$$
But this vector field is not conservative, as we can deduce from the mixed partial derivatives:
$$\frac{\partial}{\partial a_2}(\lambda_{11} - 2bk_1) = 0, \qquad \frac{\partial}{\partial k_1}\left(q_2 - (\beta k_1 + \delta)\lambda_{22}\right) = -\beta\lambda_{22}. \qquad (4.9)$$
The conditions (2.2) would hold either if β = 0, that is, in a model without spillover effects, or if the costate variables were all vanishing, which would contradict the hypotheses of Pontryagin's maximum principle, because λii = 0 is not a solution to the related costate equation. We can therefore conclude that no Hamiltonian potential exists for this game. ∎
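The failure of condition (2.2) claimed in the proof can also be verified mechanically. The sketch below is not part of the original; it builds the six-component field above and lists the asymmetric mixed partials, using Python with sympy.

```python
import sympy as sp

q1, q2, k1, k2, a1, a2 = sp.symbols('q1 q2 k1 k2 a1 a2')
l11, l22 = sp.symbols('lambda11 lambda22')
s, c, b, beta, delta = sp.symbols('s c b beta delta')

# The six-component field from the proof of Proposition 10 (lambda_ij = 0 imposed):
# (dH1/dq1, dH2/dq2, dH1/dk1, dH2/dk2, dH1/da1, dH2/da2)
v = [q1, q2, k1, k2, a1, a2]
F = [a1 - 2*q1 - s*q2 - c,
     a2 - 2*q2 - s*q1 - c,
     l11 - 2*b*k1,
     l22 - 2*b*k2,
     q1 - (beta*k2 + delta)*l11,
     q2 - (beta*k1 + delta)*l22]

# Conservativity test (2.2): report every pair whose mixed partials differ
for i in range(6):
    for j in range(i + 1, 6):
        gap = sp.simplify(sp.diff(F[i], v[j]) - sp.diff(F[j], v[i]))
        if gap != 0:
            print(v[i], v[j], gap)
# Prints the (k1, a2) and (k2, a1) pairs, with gaps beta*lambda22 and beta*lambda11,
# which vanish only if beta = 0, exactly as argued in the proof.
```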
4.3 Advertising without dynamically absorptive Hamiltonians
We now investigate a second version of the model, endowed with the same objective functionals (4.2) but subject to different kinematic equations:
$$\dot{a}_i(t) = k_i(t) - \beta k_j(t) - \delta a_i(t), \qquad a_i(t_0) = a_{i0}, \quad i = 1, 2. \qquad (4.10)$$
In (4.10) the spillover term does not involve an absorptive capacity among firms, so no dynamically absorptive Hamiltonian appears. The current value Hamiltonian of the model reads as
$$H_i(\cdot) = (a_i - q_i - s q_j - c) q_i - b k_i^2 + \lambda_{ii}(k_i - \beta k_j - \delta a_i) + \lambda_{ij}(k_j - \beta k_i - \delta a_j). \qquad (4.11)$$
Then the FOCs of the model turn out to be:
$$\frac{\partial H_i}{\partial q_i} = a_i - 2 q_i - s q_j - c = 0, \qquad (4.12)$$
$$\frac{\partial H_i}{\partial k_i} = -2 b k_i + \lambda_{ii} - \beta\lambda_{ij} = 0. \qquad (4.13)$$
The adjoint equations write as:
$$\dot{\lambda}_{ii} = (\rho + \delta)\lambda_{ii} - q_i, \qquad (4.14)$$
$$\dot{\lambda}_{ij} = (\rho + \delta)\lambda_{ij}. \qquad (4.15)$$
(4.15) obviously admits the solutions λij = λji ≡ 0, hence Proposition 5 applies: by taking the function
$$\hat{P}(q_1, q_2, k_1, k_2, a_1, a_2, \lambda_{11}, \lambda_{22}) = -(q_1^2 + q_2^2 + s q_1 q_2) + \sum_{i=1}^{2} a_i q_i - c(q_1 + q_2) - b(k_1^2 + k_2^2) + \beta k_2\lambda_{11} + \beta k_1\lambda_{22}, \qquad (4.16)$$
the corresponding Hamiltonian potential reads as
$$H^P(q_1, q_2, k_1, k_2, a_1, a_2, \lambda_{11}, \lambda_{22}) = \hat{P}(q_1, q_2, k_1, k_2, a_1, a_2, \lambda_{11}, \lambda_{22}) + \lambda_{11}(k_1 - \beta k_2 - \delta a_1) + \lambda_{22}(k_2 - \beta k_1 - \delta a_2). \qquad (4.17)$$
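A symbolic check of Definition 4 for the potential (4.17) — not part of the original text — can be run in Python with sympy; it confirms that $H^P$ replicates each firm's derivatives with respect to its own output, advertising effort and reservation price once λ12 = λ21 = 0 is imposed, as in (4.15):

```python
import sympy as sp

q1, q2, k1, k2, a1, a2 = sp.symbols('q1 q2 k1 k2 a1 a2')
l11, l22 = sp.symbols('lambda11 lambda22')
s, c, b, beta, delta = sp.symbols('s c b beta delta')

# Individual Hamiltonians (4.11) with the cross-costates already set to zero
H1 = (a1 - q1 - s*q2 - c)*q1 - b*k1**2 + l11*(k1 - beta*k2 - delta*a1)
H2 = (a2 - q2 - s*q1 - c)*q2 - b*k2**2 + l22*(k2 - beta*k1 - delta*a2)

# Hamiltonian potential (4.16)-(4.17)
P_hat = (-(q1**2 + q2**2 + s*q1*q2) + a1*q1 + a2*q2 - c*(q1 + q2)
         - b*(k1**2 + k2**2) + beta*k2*l11 + beta*k1*l22)
HP = P_hat + l11*(k1 - beta*k2 - delta*a1) + l22*(k2 - beta*k1 - delta*a2)

# Definition 4: HP must replicate each player's own-state and own-control derivatives
checks = [sp.simplify(sp.diff(HP, var) - sp.diff(H, var))
          for H, own in ((H1, (q1, k1, a1)), (H2, (q2, k2, a2))) for var in own]
print(checks)   # [0, 0, 0, 0, 0, 0]
```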
4.4 An R&D game

We introduce and analyze two versions of a dynamic game of R&D based on a model by d'Aspremont and Jacquemin (1988),⁵ showing that also in this setup the concept of absorptive capacity plays a crucial role for the determination of a Hamiltonian potential structure. The dynamic setup is based upon Cellini and Lambertini (2005, 2009).

⁵ See also Kamien et al. (1992), Suzumura (1992) and Amir (2000).
4.5 R&D with dynamically absorptive Hamiltonians
In the following differential game over t ∈ [t0, ∞), two identical firms endowed with the inverse demand functions
$$p_i(t) = a - q_i(t) - s\, q_j(t), \qquad i, j = 1, 2;\ i \neq j,$$
engage in Cournot competition, producing the quantities q1(t), q2(t) and investing k1(t), k2(t) in R&D in order to reduce the marginal production costs c1(t), c2(t). Unlike the previous model, in this case the reservation price a > 0 is constant and symmetric across firms. The i-th agent's profit function writes as
$$\pi_i(q_1(t), q_2(t), k_1(t), k_2(t), c_1(t), c_2(t)) = (a - q_i(t) - s q_j(t) - c_i(t))\, q_i(t) - b k_i^2(t), \qquad (4.18)$$
hence it aims to maximize (omitting the time argument)
$$J_i \equiv \int_{t_0}^{\infty} e^{-\rho t}\left[(a - q_i - s q_j - c_i) q_i - b k_i^2\right] dt, \qquad (4.19)$$
s.t.
$$\dot{c}_i = -k_i - \beta c_i k_j + \delta c_i, \qquad c_i(t_0) = c_{i0}, \quad i = 1, 2. \qquad (4.20)$$
Note that the state dynamics (4.20) features absorptive capacity, similarly to what appears in the static version of the game investigated by Kamien and Zang (2000). The parameters s ∈ [0, 1] and β ≠ 0 play the same role as in the advertising model. In fact, the Hamiltonian function and its significant first order derivatives respectively are:
$$H_i = (a - q_i - s q_j - c_i) q_i - b k_i^2 + \lambda_{ii}(-k_i - \beta c_i k_j + \delta c_i) + \lambda_{ij}(-k_j - \beta c_j k_i + \delta c_j), \qquad (4.21)$$
$$\frac{\partial H_i}{\partial q_i} = a - 2 q_i - s q_j - c_i, \qquad (4.22)$$
$$\frac{\partial H_i}{\partial k_i} = -\lambda_{ii} - \beta c_j \lambda_{ij} - 2 b k_i, \qquad (4.23)$$
$$\frac{\partial H_i}{\partial c_i} = -q_i - \beta k_j \lambda_{ii} + \delta\lambda_{ii}. \qquad (4.24)$$
The adjoint equations are:
$$\dot{\lambda}_{ii} = (\rho - \delta + \beta k_j)\lambda_{ii} + q_i, \qquad (4.25)$$
$$\dot{\lambda}_{ij} = (\rho - \delta + \beta k_i)\lambda_{ij}, \qquad (4.26)$$
the latter admitting the trivial null solution by Proposition 3. We can easily prove the following:

Proposition 11. The R&D game with absorptive capacity does not admit a Hamiltonian potential.

Proof. The proof is substantially analogous to that of Proposition 10. The mixed partial derivatives violating (2.2) are
$$\frac{\partial}{\partial c_2}(-\lambda_{11} - 2 b k_1) = 0, \qquad \frac{\partial}{\partial k_1}\left(-q_2 - (\beta k_1 - \delta)\lambda_{22}\right) = -\beta\lambda_{22} \neq 0. \qquad (4.27)$$
Therefore, no Hamiltonian potential can exist for this game. ∎
4.6 R&D without dynamically absorptive Hamiltonians
Consider the same R&D game, substituting (4.20) with the following dynamics:
$$\dot{c}_i = -k_i - \beta k_j + \delta c_i, \qquad c_i(t_0) = c_{i0}, \quad i = 1, 2. \qquad (4.28)$$
Here, no absorptive capacity operates, as the spillover that i receives from j does not depend on the current productive efficiency of i. The FOCs of the model write as
$$\frac{\partial H_i}{\partial q_i} = a - 2 q_i - s q_j - c_i = 0, \qquad \frac{\partial H_i}{\partial k_i} = -\lambda_{ii} - \beta\lambda_{ij} - 2 b k_i = 0.$$
The adjoint equations are
$$\dot{\lambda}_{ii} = (\rho - \delta)\lambda_{ii} + q_i, \qquad \dot{\lambda}_{ij} = (\rho - \delta)\lambda_{ij}.$$
All of the considerations made for the advertising model without absorptive capacity may be confirmed, hence Proposition 5 applies by taking the Hamiltonian potential
$$H^P(q_1, q_2, k_1, k_2, c_1, c_2, \lambda_{11}, \lambda_{22}) = \hat{P}(q_1, q_2, k_1, k_2, c_1, c_2, \lambda_{11}, \lambda_{22}) + \lambda_{11}(-k_1 - \beta k_2 + \delta c_1) + \lambda_{22}(-k_2 - \beta k_1 + \delta c_2), \qquad (4.29)$$
where
$$\hat{P}(q_1, q_2, k_1, k_2, c_1, c_2, \lambda_{11}, \lambda_{22}) = -(q_1^2 + q_2^2 + s q_1 q_2) + \sum_{i=1}^{2}(a - c_i)\, q_i - b(k_1^2 + k_2^2) + \beta k_2\lambda_{11} + \beta k_1\lambda_{22}. \qquad (4.30)$$
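Analogously, conditions (3.17)-(3.18) of Proposition 5 can be verified for (4.29)-(4.30). The following sketch is not in the original; it runs the check in Python with sympy, with the cross-costates set to zero as established above:

```python
import sympy as sp

q1, q2, k1, k2, c1, c2 = sp.symbols('q1 q2 k1 k2 c1 c2')
l11, l22 = sp.symbols('lambda11 lambda22')
a, s, b, beta, delta = sp.symbols('a s b beta delta')

# Hamiltonians (with lambda_ij = 0) and the state dynamics (4.28)
H = {1: (a - q1 - s*q2 - c1)*q1 - b*k1**2 + l11*(-k1 - beta*k2 + delta*c1),
     2: (a - q2 - s*q1 - c2)*q2 - b*k2**2 + l22*(-k2 - beta*k1 + delta*c2)}
g = {1: -k1 - beta*k2 + delta*c1, 2: -k2 - beta*k1 + delta*c2}
lam = {1: l11, 2: l22}
own = {1: (q1, k1, c1), 2: (q2, k2, c2)}   # own controls and own state of each firm

# Candidate P_hat from (4.30)
P_hat = (-(q1**2 + q2**2 + s*q1*q2) + (a - c1)*q1 + (a - c2)*q2
         - b*(k1**2 + k2**2) + beta*k2*l11 + beta*k1*l22)

# (3.17)-(3.18): dP_hat/dv + sum_j lambda_jj * dg_j/dv must equal dH_i/dv
ok = all(sp.simplify(sp.diff(P_hat, v) + sum(lam[j]*sp.diff(g[j], v) for j in (1, 2))
                     - sp.diff(H[i], v)) == 0
         for i in (1, 2) for v in own[i])
print(ok)   # True
```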
In the following subsections, we illustrate two further examples of differential games (again applied to economic issues) admitting the Hamiltonian potential.
4.7 A macroeconomic policy game
The setup is a dynamic version of Lambertini and Rovelli (2004). The game takes place in a single country, between a central bank (B) controlling the nominal interest rate r ≥ 0 and a fiscal authority (F) controlling the budget deficit $f \in \left(-\bar{f}, \bar{f}\right)$, where $\bar{f}$ measures an exogenous bound on the admissible deficit or surplus. The states are the inflation rate π and GNP (gross national product) y, whose respective dynamic equations are:
$$\dot{\pi} = \beta f + \delta(r - \pi), \qquad (4.33)$$
$$\dot{y} = \varphi(y - y^*) + \eta f, \qquad (4.34)$$
where β, δ, η, φ are positive parameters and y* is the benchmark (or full employment) level of income, exogenously given. The instantaneous payoff functions of the players are represented by the following loss functions:
$$L_B = (\pi - \pi^*)^2 + \theta(r - r^*)^2, \qquad (4.35)$$
$$L_F = (y - y^*)^2 + \upsilon f^2, \qquad (4.36)$$
where θ and υ are positive parameters. As usual in this literature, the central banker is keen on using monetary policy to affect the behaviour of national income, the more so the higher is parameter θ. The Hamiltonian functions are:
$$H_B = (\pi - \pi^*)^2 + \theta(r - r^*)^2 + \lambda_{B\pi}\left[\beta f + \delta(r - \pi)\right] + \lambda_{By}\left[\varphi(y - y^*) + \eta f\right], \qquad (4.37)$$
$$H_F = (y - y^*)^2 + \upsilon f^2 + \lambda_{F\pi}\left[\beta f + \delta(r - \pi)\right] + \lambda_{Fy}\left[\varphi(y - y^*) + \eta f\right]. \qquad (4.38)$$
The set of necessary conditions is:
$$\frac{\partial H_B}{\partial r} = 2\theta(r - r^*) + \delta\lambda_{B\pi} = 0, \qquad (4.39)$$
$$\frac{\partial H_F}{\partial f} = 2\upsilon f + \beta\lambda_{F\pi} + \eta\lambda_{Fy} = 0, \qquad (4.40)$$
$$\dot{\lambda}_{B\pi} = (\rho + \delta)\lambda_{B\pi} - 2(\pi - \pi^*), \qquad (4.41)$$
$$\dot{\lambda}_{By} = (\rho - \varphi)\lambda_{By}, \qquad (4.42)$$
$$\dot{\lambda}_{F\pi} = (\rho + \delta)\lambda_{F\pi}, \qquad (4.43)$$
$$\dot{\lambda}_{Fy} = (\rho - \varphi)\lambda_{Fy} - 2(y - y^*). \qquad (4.44)$$
Clearly, on the basis of Proposition 3, the costate equations (4.42)-(4.43) admit the solution λBy = λFπ = 0 at all t. Since Proposition 5 applies, we show the Hamiltonian potential of the present game:
$$H^P(u, x, \lambda_{ii}) = \int\frac{\partial H_B}{\partial r}\,dr + \int\frac{\partial H_B}{\partial \pi}\,d\pi + \int\frac{\partial H_F}{\partial f}\,df + \int\frac{\partial H_F}{\partial y}\,dy \qquad (4.45)$$
$$= \theta(r - r^*)^2 + (\pi - \pi^*)^2 + \upsilon f^2 + (y - y^*)^2 + \lambda_{Fy}(\eta f + \varphi y) + \lambda_{B\pi}\delta(r - \pi) \qquad (4.46)$$
$$= \hat{P}(\cdot) + \lambda_{B\pi}\dot{\pi} + \lambda_{Fy}\dot{y}, \qquad (4.47)$$
with
$$\hat{P}(\cdot) = \theta(r - r^*)^2 + (\pi - \pi^*)^2 + \upsilon f^2 + (y - y^*)^2 - \lambda_{B\pi}\beta f + \lambda_{Fy}\varphi y^*. \qquad (4.48)$$
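As an illustration that these necessary conditions pin down a unique candidate steady state, the sketch below (not in the original) solves (4.39)-(4.41) and (4.44) together with the stationarity of (4.33)-(4.34), with λBy = λFπ = 0 imposed; Python with sympy is assumed:

```python
import sympy as sp

r, f, pi, y, lBpi, lFy = sp.symbols('r f pi y lambda_Bpi lambda_Fy')
rho, beta, delta, eta, phi, theta, ups = sp.symbols(
    'rho beta delta eta phi theta upsilon', positive=True)
r_star, pi_star, y_star = sp.symbols('r_star pi_star y_star')

steady_state = sp.solve([
    2*theta*(r - r_star) + delta*lBpi,        # (4.39)
    2*ups*f + eta*lFy,                        # (4.40) with lambda_Fpi = 0
    (rho + delta)*lBpi - 2*(pi - pi_star),    # (4.41) with lambda_Bpi stationary
    (rho - phi)*lFy - 2*(y - y_star),         # (4.44) with lambda_Fy stationary
    beta*f + delta*(r - pi),                  # (4.33): stationary inflation
    phi*(y - y_star) + eta*f,                 # (4.34): stationary income
], [r, f, pi, y, lBpi, lFy], dict=True)

print(steady_state)
```

For generic parameter values the computation yields y = y*, f = 0 and r = π at the steady state, with the common value of r and π given by a weighted average of r* and π*.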
4.8 A marketing game with product quality improvement
Here we propose a simple marketing game where two firms invest in R&D to increase their respective product qualities, which in turn determine market shares. This game nests into a wide literature in marketing and management (see Feichtinger et al., 1994, and Jørgensen and Zaccour, 2004, for exhaustive surveys). Each firm i sells a single good of quality qi, i ∈ {H, L}, with Q > qH ≥ qL ≥ 0, Q being the exogenously given utmost quality level which the industry can supply. At every instant, the market share of firm i is σi = α + qi − ξqj, ξ ∈ [0, 1], and σi + σj ≤ 1. Qualities are state variables evolving according to the dynamics
$$\dot{q}_i = k_i - \varsigma q_i, \qquad (4.49)$$
where ki ≥ 0 is the R&D control and ς > 0 is a constant decay rate common to both firms. The instantaneous R&D cost is Ci(ki) = c ki², and the instantaneous profit of firm i is πi = pi σi − c ki². We consider the price pi as an exogenously given parameter. The resulting Hamiltonian of firm i is:
$$H_i = p_i\left[\alpha + q_i - \xi q_j\right] - c k_i^2 + \lambda_{ii}(k_i - \varsigma q_i) + \lambda_{ij}(k_j - \varsigma q_j). \qquad (4.50)$$
The first order condition on the control is
$$\frac{\partial H_i}{\partial k_i} = -2 c k_i + \lambda_{ii} = 0, \qquad (4.51)$$
and the adjoint equations are
$$\dot{\lambda}_{ii} = \lambda_{ii}(\rho + \varsigma) - p_i, \qquad (4.52)$$
$$\dot{\lambda}_{ij} = \lambda_{ij}(\rho + \varsigma) + \xi p_i. \qquad (4.53)$$
Now observe that the differential equation describing the dynamics of λij does not admit a null solution. Notwithstanding that, the costate equation (4.53) is indeed irrelevant, as the FOC on the control ki only contains λii; therefore the set of partial derivatives to be integrated reduces to 2n. Since the Hamiltonians are additively separable, Proposition 7 applies and the following Hamiltonian potential can be constructed:
$$H^P(k, q, \lambda_{ii}) = p_1 q_1 + p_2 q_2 - c(k_1^2 + k_2^2) + \lambda_{11}(k_1 - \varsigma q_1) + \lambda_{22}(k_2 - \varsigma q_2) = \hat{P}(\cdot) + \lambda_{11}\dot{q}_1 + \lambda_{22}\dot{q}_2, \qquad (4.54)$$
with
$$\hat{P}(\cdot) = p_1 q_1 + p_2 q_2 - c(k_1^2 + k_2^2). \qquad (4.55)$$
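Because the Hamiltonians here are additively separable, the procedure described after Proposition 7 (differentiate the FOC with respect to time and substitute the adjoint equation) yields the state-control system in closed form. The sketch below is not in the original; it carries this out for a single firm in Python with sympy and reports the implied steady state:

```python
import sympy as sp

k, q, rho, sig, c, p = sp.symbols('k q rho varsigma c p', positive=True)

# FOC (4.51): lambda_ii = 2*c*k, hence lambda_ii' = 2*c*k'
lam = 2*c*k
# Adjoint (4.52): lambda_ii' = (rho + varsigma)*lambda_ii - p  =>  control dynamics
k_dot = sp.expand(((rho + sig)*lam - p) / (2*c))
print(k_dot)                 # control dynamics: (rho + varsigma)*k - p/(2*c)

# Quality dynamics (4.49) and the steady state of the (k, q) system
q_dot = k - sig*q
steady = sp.solve([sp.Eq(k_dot, 0), sp.Eq(q_dot, 0)], [k, q], dict=True)[0]
print(steady)                # k* = p/(2*c*(rho + varsigma)),  q* = k*/varsigma
```

Given the transversality condition (3.7), the only admissible costate path is the constant one, so the equilibrium investment remains at this steady-state value along the whole path.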
5 Concluding remarks and extensions
Here we have started developing the analysis of potential functions for differential games, studying a specific class of open-loop games that can be taken as a point of departure for further developments in several different directions, all of which are equally relevant. In particular, we have analysed open-loop games with and without what we have labelled dynamic absorptive capacity. This clearly leaves an open question as to the possibility of proving the existence of potential functions for more general classes of games and, specifically, those which must necessarily be defined under the feedback information structure whenever the open-loop solution is not strongly time consistent (or, equivalently, subgame perfect). One additional open question is whether a Hamiltonian potential can be found to exist in setups where the structure is different from the one we have considered here, e.g., where there is a single state variable for all players. These issues are left for future research.

References
Amir, R. (2000). Modelling imperfectly appropriable R&D via spillovers. International Journal of Industrial Organization, 18, 1013-1032.

Cellini, R. and L. Lambertini (2005). R&D incentives and market structure: a dynamic analysis. Journal of Optimization Theory and Applications, 126, 85-96.

Cellini, R. and L. Lambertini (2009). Dynamic R&D with spillovers: competition vs cooperation. Journal of Economic Dynamics and Control, 33, 568-582.

Cellini, R., L. Lambertini and G. Leitmann (2005). Degenerate feedback and time consistency in differential games. In E.P. Hofer and E. Reithmeier (eds), Modeling and Control of Autonomous Decision Support Based Systems. Proceedings of the 13th International Workshop on Dynamics and Control. Aachen, Shaker Verlag, 185-192.

d'Aspremont, C. and A. Jacquemin (1988). Cooperative and noncooperative R&D in duopoly with spillovers. American Economic Review, 78, 1133-1137.

Dockner, E.J., S. Jørgensen, N.V. Long and G. Sorger (2000). Differential Games in Economics and Management Science. Cambridge, Cambridge University Press.

Feichtinger, G., R.F. Hartl and P.S. Sethi (1994). Dynamic optimal control models in advertising: recent developments. Management Science, 40, 195-226.

Jørgensen, S. and G. Zaccour (2004). Differential Games in Marketing. Dordrecht, Kluwer.

Kamien, M.I. and I. Zang (2000). Meet me halfway: research joint ventures and absorptive capacity. International Journal of Industrial Organization, 18, 995-1012.

Kamien, M.I., E. Muller and I. Zang (1992). Cooperative joint ventures and R&D cartels. American Economic Review, 82, 1293-1306.
Lambertini, L. and A. Palestini (2009). Dynamic advertising with spillovers: cartel vs competitive fringe. Optimal Control Applications and Methods, forthcoming.

Lambertini, L. and R. Rovelli (2004). Independent or coordinated? Monetary and fiscal policy in EMU. In R. Beetsma et al. (eds), Monetary Policy, Fiscal Policies and Labour Markets. Macroeconomic Policy Making in the EMU. Cambridge, Cambridge University Press.

Mehlmann, A. (1988). Applied Differential Games. New York, Plenum Press.

Monderer, D. and L.S. Shapley (1996). Potential games. Games and Economic Behavior, 14, 124-143.

Slade, M.E. (1994). What does an oligopoly maximize? Journal of Industrial Economics, 42, 45-61.

Suzumura, K. (1992). Cooperative and noncooperative R&D in duopoly with spillovers. American Economic Review, 82, 1307-1320.

Voorneveld, M. (2000). Best-response potential games. Economics Letters, 66, 289-295.