Incentive Compatibility and Differentiability: New Results and Classic Applications∗

George J. Mailath†

Ernst-Ludwig von Thadden‡

November 2011

Abstract  We provide several generalizations of Mailath's (1987) result that in games of asymmetric information with a continuum of types, incentive compatibility plus separation implies differentiability of the informed agent's strategy. The new results extend the theory to classic models in finance, such as Leland and Pyle (1977), Glosten (1989), and DeMarzo and Duffie (1999), that were not previously covered.

JEL Classification Numbers: C60, C73, D82, D83, G14.

Keywords: Adverse selection, separation, differentiable strategies, incentive compatibility.



First version September 2010. We thank Navin Kartik for helpful comments. Mailath thanks the National Science Foundation for financial support (grant SES-0961540). † Department of Economics, University of Pennsylvania, [email protected]. ‡ Department of Economics, Universität Mannheim, [email protected].

Electronic copy available at: http://ssrn.com/abstract=1683471

1 Introduction

In many problems of asymmetric information, one agent has private information upon which she bases her actions, and uninformed agents act on the basis of inferences from these actions. Because the informed agent can reveal information through her actions, she chooses her actions strategically. If the informed agent's action as a function of her private information is one-to-one, then her strategy is said to be separating, and her actions completely reveal her private information. If the agent's private information (her "type") is given by a continuously distributed real-valued random variable, incentive-compatible separating strategies in such interactions can easily be characterized by a differential equation, if the strategy is known to be differentiable. But exactly because the strategy is not known, differentiability cannot be taken for granted. This poses a serious problem for the determination and uniqueness of equilibrium.

In many cases, however, differentiability is an implication of incentive compatibility. For a large class of signaling games and related settings, Mailath (1987) has shown that any incentive-compatible separating strategy of the informed agent must be differentiable and hence satisfy the standard differential equation. Unfortunately, the assumptions in Mailath (1987) rule out many important applications. In particular, as we describe below, they do not cover the models of Leland and Pyle (1977), Glosten (1989), and DeMarzo and Duffie (1999) that are at the core of modern theories of corporate finance and market microstructure. In this paper, we provide appropriate generalizations of Mailath (1987) to cover these models.

The new results can be grouped into three categories. First, we show that the original results extend to non-compact (in particular, unbounded) type spaces, and to compact action spaces (in Mailath (1987) the action space is R).
This is important since many applications, in particular in finance, naturally involve unbounded type spaces (for example, when using normally distributed returns) and bounded action spaces (for example, when short-selling is not allowed). Second, we provide sufficient conditions that apply locally instead of globally. These conditions can be used when the sufficient conditions do not hold globally (because, for example, a derivative vanishes somewhere), but do hold locally (because the derivative cannot vanish everywhere). We provide an example in which global differentiability can be shown by "patching together" local arguments. Third, we show that differentiability can obtain even in linear models, which are not covered by Mailath (1987). This extends the analysis to the many models in corporate finance that use risk-neutrality, where the classic first-order conditions of


asset pricing do not apply. Our interest in the differentiability of separating strategies leads us to study environments with differentiable payoffs. The uniqueness of separating equilibrium strategies also holds in environments with continuous payoffs. See Roddie (2011) for global conditions that guarantee uniqueness of the separating equilibrium when there is a continuum of types.

2 The Model

An informed agent knows the state of nature θ ∈ Ω ⊂ R, and one or more uninformed agents react to the informed agent's action x ∈ X ⊂ R on the basis of inferences drawn from x about θ. The sets Ω and X are intervals; they may be bounded or unbounded, and we do not require the intervals to be open or closed. For our purposes, this interaction can be summarized by the C² function

U : Ω² × X → R,  (θ, θ̂, x) ↦ U(θ, θ̂, x),    (1)

denoting the informed agent's payoff from taking action x when the true state of nature is θ and the uninformed agents believe it is θ̂. For values (θ, θ̂, x) on the boundary of Ω² × X, we assume twice continuous one-sided differentiability in the appropriate directions.

As an example, consider the canonical signaling game (Spence (1973); Cho and Kreps (1987)). There is an informed agent who, knowing θ, chooses an action (or costly message) x, followed by an uninformed agent who, observing x but not θ, chooses a response z ∈ Z ⊂ R. The informed agent's payoff is given by u(θ, x, z). Given x and beliefs μ ∈ Δ(Ω), denote by z(x, μ) a best response for the uninformed player. In particular, if the uninformed agent has point belief θ̂ after observing x, z(x, θ̂) is a best response. The informed agent's payoff, given the uninformed agent's best response z, can then be written as

U(θ, θ̂, x) ≡ u(θ, x, z(x, θ̂)),

which is of the form assumed in (1) if u and z are twice continuously differentiable. Note, however, that the framework is more general than the signaling model. In Section 3.2, we apply our results to a screening model.

We study interactions in which the informed agent's information is fully revealed (as in separating equilibria in signaling games): the informed agent's action is given by a one-to-one function y : Ω → X, so that θ ≠ θ′


implies () 6= (0 ).1 Furthermore,  must be incentive-compatible, which means that the informed agent finds it optimal to follow this strategy when she knows : () ∈ arg max  (  −1 () )

(IC)

∈(Ω)

The following assumptions adapt Mailath's (1987) local concavity conditions (4) and (5) to our setting.²

Assumption 1 The first-best contracting problem (the problem under full information),

max_{x ∈ X} U(θ, θ, x),

has a unique solution, denoted x^FB(θ), for all θ ∈ Ω.

Note that if X is compact, the first-best may lie on the boundary of X. We denote the interior of a set A by int(A).

Assumption 2
1. For all θ ∈ int(Ω), U₃₃(θ, θ, x^FB(θ)) < 0.
2. There exists δ > 0 such that for all (θ, x) ∈ Ω × X, U₃₃(θ, θ, x) ≥ 0 ⇒ |U₃(θ, θ, x)| > δ.

Here and in the rest of the paper, for values (θ, θ̂, x) on the boundary of Ω² × X, all derivatives of U are the appropriate one-sided derivatives. Note that Assumption 1 only implies U₃₃(θ, θ, x^FB(θ)) ≤ 0 for interior x^FB(θ). The strengthening in Assumption 2.1 is needed in the proof of Lemma B in the appendix. Assumption 2.2 is weaker than strict concavity but stronger than strict quasi-concavity of U(θ, θ, ·). The following theorem is the key result of Mailath (1987).

1 Environments with private information typically have many equilibria, not all of them separating. In signaling games, refinements in the spirit of those proposed by Kohlberg and Mertens (1986) and Cho and Kreps (1987) provide formal support for a focus on separation. While Kohlberg and Mertens's (1986) strategic stability has an abstract continuity motivation, the "intuitive" motivations for some of its implications seem less persuasive (Mailath, Okuno-Fujiwara, and Postlewaite, 1993).
2 Since Mailath (1987) took R as the action space, while we allow for arbitrary real intervals, the assumption that the first-order condition U₃ = 0 has a unique solution has been replaced with the more general requirement that the first-best contracting problem has a unique solution.


Theorem 1 (Mailath (1987)) Let Ω = [θ₁, θ₂] and X = R, and let y be one-to-one and incentive-compatible. Suppose Assumptions 1 and 2 hold, and U₁₃(θ, θ̂, x) ≠ 0 and U₂(θ, θ̂, x) ≠ 0 for all (θ, θ̂, x) ∈ Ω² × X.

1. If U₃(θ, θ̂, y(θ̂))/U₂(θ, θ̂, y(θ̂)) is a strictly monotone function of θ for all θ̂, then y is differentiable in the interior of Ω, int(Ω).

2. (a) If y(θ₁) = x^FB(θ₁) and U₂(θ, θ̂, x) > 0 for all (θ, θ̂, x) ∈ Ω² × X, then y is differentiable on Ω \ {θ₁}.
   (b) If y(θ₂) = x^FB(θ₂) and U₂(θ, θ̂, x) < 0 for all (θ, θ̂, x) ∈ Ω² × X, then y is differentiable on Ω \ {θ₂}.

If y is differentiable, then it satisfies the differential equation

y′(θ) = − U₂(θ, θ, y(θ)) / U₃(θ, θ, y(θ)).    (DE)

The differential equation (DE) is a trivial consequence of the incentive constraint (IC), which, given differentiability, yields the first-order condition U₂ + y′U₃ = 0. The assumption that U₂ never equals zero, and so never changes sign ("belief monotonicity"), implies that the direction of desired belief manipulation by the informed agent is unambiguous: if U₂ > 0, she benefits from the uninformed side believing her to be of a higher type (respectively, of a lower type if U₂ < 0). The assumption that U₁₃ never changes sign ("type monotonicity") means that the informed agent's marginal utility from x is monotone in her type. Neither assumption need be satisfied in standard examples, as the following section shows. The condition that U₃/U₂, evaluated at (θ, θ̂, y(θ̂)), is a strictly monotone function of θ is a weak form of single crossing; we discuss the role of the single-crossing property in Section 4 when we introduce Theorem 4.

For signaling games satisfying standard monotonicity properties, the initial value condition pinning down the value of y at either θ₁ or θ₂ in parts 2a and 2b of Theorem 1 is a simple consequence of sequential rationality:³ Suppose U₂ > 0. Then θ̂ = θ₁ is the worst belief the uninformed agents can have about the informed agent. It is then immediate that in any Nash equilibrium with y separating, if x^FB(θ₁) ∈ y(Ω) then y(θ₁) = x^FB(θ₁).⁴

For signaling games with finite type and action spaces, sequential rationality is formalized as sequential equilibrium (Kreps and Wilson, 1982), and with infinite type and action spaces, by various versions of perfect Bayes equilibrium. 4 Suppose  −1 (   (1 )) = 0 6= 1 . Then,  (1  0     (1 ))   (1   1     (1 ))   ( 1   1  (1 ))


On the other hand, if x^FB(θ₁) ∉ y(Ω), then in response to a deviation to the action x = x^FB(θ₁), sequential rationality requires the uninformed agents to choose a best reply to some belief, and U₂ > 0 again implies that θ₁ has a profitable deviation. Finally, note that since U₃(θ, θ, x^FB(θ)) = 0 when X = R, (DE) implies |y′(θ)| → ∞ as θ → θᵢ for any θᵢ satisfying y(θᵢ) = x^FB(θᵢ), and so y cannot be differentiable at the indicated endpoints in parts 2a and 2b.
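To illustrate (DE) concretely, consider a textbook Spence-style specification; this is the editor's own illustration, not an example from the paper. Take U(θ, θ̂, x) = θ̂ − x²/θ, so that U₂ = 1 > 0, U₃ = −2x/θ, and (DE) reads y′(θ) = θ/(2y(θ)). With the initial condition y(θ₁) = x^FB(θ₁) = 0 from part 2a, the schedule y(θ) = √((θ² − θ₁²)/2) solves (DE), which a quick symbolic check confirms:

```python
import sympy as sp

theta, th1 = sp.symbols('theta theta_1', positive=True)
# Illustrative payoff (ours): U(theta, th_hat, x) = th_hat - x**2/theta, so
# U2 = 1, U3 = -2*x/theta, and (DE) reads y'(theta) = theta/(2*y(theta)).
y = sp.sqrt((theta**2 - th1**2)/2)   # candidate with y(theta_1) = x_FB(theta_1) = 0
print(sp.simplify(sp.diff(y, theta) - theta/(2*y)))   # 0: y solves (DE)
```

Consistent with the remark above, y′(θ) = θ/(2y(θ)) → ∞ as θ → θ₁, so y is differentiable only on Ω \ {θ₁}.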

3 Three Examples

3.1 Equity Issues

The classic model of equity issues of Leland and Pyle (1977) considers an owner of a firm who wants to raise funds on the stock market by selling her holdings. The uninformed side of the market is "the stock market": a large group of equally informed and well-diversified investors. Investors are willing to invest if in expectation they earn the risk-free rate, normalized to 0. The company is worth θ + ε in the future, where θ ∈ Ω = [θ₁, ∞) is a positive number and ε is a zero-mean random variable defined on a bounded interval. The expected value of the firm, therefore, is θ. The owner has personal wealth (outside the firm) of W₀ and is risk-averse, with an increasing, strictly concave, twice continuously differentiable money utility function V. The capital market is risk-neutral. The owner considers diversifying her risk by selling a fraction 1 − x of the firm in exchange for a payment of t by the capital market. The owner's utility from an allocation (x, t) ∈ [0, 1] × R is

E V(x(θ + ε) + W₀ + t)    (2)

and that of the capital market (using risk-neutrality)

(1 − x)θ − t.    (3)

Because the owner is risk-averse, the first-best is x^FB(θ) = 0, i.e., to sell the firm completely, regardless of θ. The interaction between the two sides of the market is given by a signaling game in which the owner, knowing the value of θ, proposes an equity issue (x, t), which the stock market accepts or rejects.


As is well known, this game has a large number of equilibria. The literature usually considers equilibria that (i) have maximum information transmission and (ii) leave zero expected profits to the market conditional on each type. Property (i) restricts attention to strategies (x, t) : Ω → [0, 1] × R that are one-to-one (fully separating), while property (ii) implies transfers t(θ̂) = (1 − x)θ̂, where θ̂ is the inferred expected value of the firm. One can then ignore t = t(θ̂) in the analysis and denote a strategy of the informed player (the owner) by y(θ). The payoff function U of the informed agent as defined in (1) is

U(θ, θ̂, x) = E V(W₀ + x(θ + ε) + (1 − x)θ̂).    (4)

We have

U₂(θ, θ̂, x) = (1 − x) E V′(W₀ + x(θ + ε) + (1 − x)θ̂)

and

U₁₃(θ, θ̂, x) = E V′(W₀ + x(θ + ε) + (1 − x)θ̂) + x E[(θ + ε − θ̂) V″(W₀ + x(θ + ε) + (1 − x)θ̂)].

Simple examples show that in general U₁₃ can be 0, violating type monotonicity. Furthermore, U₂(θ, θ̂, 1) = 0, violating belief monotonicity for x = 1. Finally, Ω is not compact, and X is not R.
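The failure of belief monotonicity at x = 1 is easy to probe numerically. The parametrization below (CARA utility and a symmetric two-point ε) is the editor's own illustration, not the paper's:

```python
import numpy as np

# Hypothetical parametrization (ours, not the paper's): CARA utility and a
# symmetric two-point shock, used to probe the derivative formulas numerically.
rho, W0 = 2.0, 1.0
eps = np.array([-1.0, 1.0])            # zero-mean shock with equal weights

def V(w):                              # increasing, strictly concave
    return -np.exp(-rho * w)

def U(th, th_hat, x):                  # owner's payoff, eq. (4)
    return np.mean(V(W0 + x*(th + eps) + (1 - x)*th_hat))

def d2(f, pt, h=1e-6):                 # central difference in the belief argument
    th, th_hat, x = pt
    return (f(th, th_hat + h, x) - f(th, th_hat - h, x)) / (2*h)

th, th_hat = 1.5, 1.2
print(d2(U, (th, th_hat, 1.0)))        # 0.0: beliefs are irrelevant at x = 1
print(d2(U, (th, th_hat, 0.5)) > 0)    # True: belief monotonicity holds for x < 1
```

At x = 1 the owner retains the whole firm, so her payoff does not depend on θ̂ at all and U₂ vanishes identically.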

3.2 Market Microstructure

Following Glosten (1989), consider a market for a risky asset in which risk-neutral market makers provide liquidity to an informed investor who, depending on her private information, may wish to buy or sell. Let q ∈ R denote the quantity of the risky asset traded by the investor, with q > 0 corresponding to a purchase and q < 0 to a sale. The corresponding monetary transfer from the investor to the market makers is denoted by t ∈ R; if t < 0, −t is the amount received by the investor. If, as in standard market microstructure theory, p is the price of the asset, then t = pq.

Mailath and Nöldeke (2008) generalize Glosten (1989) as follows: The final value of the risky asset is v = s + ε. The investor privately observes s and her endowment e of the risky asset before trade takes place. The random variables (s, e) describing the investor's private information are uncorrelated and elliptically distributed (Fang, Kotz, and Ng, 1990) with variances σ_s² > 0 and σ_e² > 0. The random variable ε, realized after trade, is normally distributed with variance σ_ε² > 0 and independent of (s, e). The variables s, e, and ε all have zero mean.

After a trade q resulting in a monetary transfer t, the investor's final wealth is w = (e + q)(s + ε) − t.⁵ The investor has CARA preferences with risk aversion parameter ρ > 0. As ε is normally distributed, this yields, as usual, a quadratic representation of the investor's preferences over (q, t) ∈ R² conditional on her private information. Defining

γ ≡ ρσ_ε² > 0,    (5)

such a representation is given by

U(q, t | s, e) = q(s − γe) − γq²/2 − t.

While the private information of the investor is two-dimensional, her preferences depend on this information only through the one-dimensional variable s − γe, which reflects a blend of the investor's informational and hedging motives for trade. Setting

θ ≡ E[v | s − γe] = E[s + ε | s − γe],

the linear conditional expectation property of elliptically distributed random variables (Hardin, 1982) implies

θ = β(s − γe),    (6)

where

β ≡ σ_s²/(σ_s² + γ²σ_e²) < 1.    (7)

Conditional on θ, the investor's preferences over trade-transfer pairs are thus described by the utility function

U(q, t | θ) = qθ/β − γq²/2 − t.    (8)

Market makers are risk neutral and maximize expected trading profits. It suffices to consider aggregate trading profits t − qv. Conditional on θ, expected aggregate trading profits are given by

V(q, t | θ) = t − qθ.    (9)

5 The risk-free rate and the investor's initial money holdings are assumed to be zero.
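The quadratic representation can be verified symbolically: with CARA utility and normal ε, ranking lotteries over w is equivalent to ranking certainty equivalents E[w] − (ρ/2)Var[w], and dropping the terms that do not involve (q, t) yields exactly the representation above. A sketch:

```python
import sympy as sp

q, t, s, e, rho, s2 = sp.symbols('q t s e rho sigma_eps2')
gamma = rho * s2                       # eq. (5)

# With CARA utility and eps ~ N(0, sigma_eps2), ranking lotteries over
# w = (e + q)*(s + eps) - t is equivalent to ranking certainty equivalents:
CE = (e + q)*s - t - sp.Rational(1, 2)*rho*(e + q)**2*s2

# The quadratic representation, plus terms that do not involve (q, t):
rep = q*(s - gamma*e) - gamma*q**2/2 - t
const = e*s - sp.Rational(1, 2)*rho*e**2*s2   # independent of (q, t)
print(sp.simplify(CE - (rep + const)))        # 0
```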


The analysis of the model can be conducted in this reduced-form environment, with the investor's private information summarized by her one-dimensional type θ and payoff functions given by (8) for the investor and (9) for market makers. The above assumptions on the information structure and traders' preferences are as in Glosten (1989), with the exception that the random variables (s, e) describing the investor's private information are not required to be normally distributed. If θ is normally distributed, then the support Ω of its distribution is R, but in general, Ω can be bounded.⁶

The strategic interaction between the two sides of the market is a screening game in which the market makers compete for the investor's trade. Each market maker m offers a menu {(q_m(θ), t_m(θ))_{θ∈Ω} : q_m(θ) ∈ R, t_m(θ) ∈ R ∀θ ∈ Ω} of trading possibilities to the informed investor, and the investor then chooses one allocation (q_m(θ̂), t_m(θ̂)) from one menu. We again consider outcomes with maximum information transmission, i.e., trading schedules that are separating with respect to θ. Competition between market makers implies that if there exists a separating equilibrium in the screening game, each market maker must make zero expected profits on each type. By (9), this means

t_m(θ) = θ q_m(θ) for all θ ∈ Ω and all m.    (10)

Hence, the trading schedule pins down the pricing schedule. By (8), the payoff function U of the informed investor as defined in (1) is then

U(θ, θ̂, q) = q(θ/β − θ̂) − γq²/2.    (11)

We have

U₂(θ, θ̂, q) = −q,    (12)

U₁₃(θ, θ̂, q) = 1/β,    (13)

and

U₃(θ, θ, q) = ((1 − β)/β)θ − γq.    (14)

Equations (12) and (14) violate the assumptions of Theorem 1. Furthermore, while it is possible in the equity issue model of the previous subsection to restrict Ω arbitrarily to a compact interval, the case of an unbounded Ω is important here (arising, for example, when θ is normal).⁷

6 The distribution of θ is not completely arbitrary. In particular, the distribution function of θ is symmetric around 0 and has finite variance (see Mailath and Nöldeke (2008) for details).
7 Mailath and Nöldeke (2008) do not provide a proof that separating trading schedules are differentiable, simply asserting (p. 118) that the arguments in Mailath (1987) apply. The results here support that assertion.
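Equations (12)-(14) follow from (11) by direct differentiation, which can be checked symbolically:

```python
import sympy as sp

theta, that, q, beta, gamma = sp.symbols('theta theta_hat q beta gamma', positive=True)
U = q*(theta/beta - that) - gamma*q**2/2       # payoff (11)
U2 = sp.diff(U, that)                          # eq. (12): -q
U13 = sp.diff(U, theta, q)                     # eq. (13): 1/beta
U3_diag = sp.diff(U, q).subs(that, theta)      # eq. (14): ((1-beta)/beta)*theta - gamma*q
print(U2, U13, sp.simplify(U3_diag))
```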


3.3 Security Design

The fundamental question in corporate finance is how to allocate the cash flow generated by a firm's assets among its different providers of capital. DeMarzo and Duffie (1999) have argued that this problem should be analyzed in two steps. First, the firm's owners or managers design the security, and second, the security is sold to investors. Since the second step may take place significantly later than the first, the firm may have obtained private information concerning the security's payoff by the time it sells the security. For this second step, DeMarzo and Duffie (1999) therefore consider the following game.

The security has an expected payoff θ ∈ Ω, where θ is private information of the firm and Ω ⊂ R is a potentially unbounded interval with left endpoint θ₁. The firm considers selling a quantity q ∈ [0, 1] of the security to market investors. There are gains from trade because the firm discounts the security's cash flows at a higher rate than the market. Let δ < 1 be the firm's discount factor relative to that of the market (which is normalized to 1). The firm and the market are both risk-neutral. If the firm sells the amount q of the security for a total payment of t, the firm's payoff is

t + δ(1 − q)θ    (15)

and the market investors' payoff is

qθ − t.    (16)

Market investors are competitive and must make zero expected profits for each value of q. Hence, if they believe the expected value of the security to be θ̂, they will pay t = qθ̂. Inserting this into (15) yields the payoff function U of the informed firm as defined in (1):

U(θ, θ̂, q) = qθ̂ + δ(1 − q)θ = δθ + q(θ̂ − δθ).

The informed agent's payoff function U is linear in q and therefore violates Assumption 2 of Theorem 1.⁸ Furthermore, Ω need not be compact, though X is.

8 Another example of a model with linear preferences and continuous types in the IO/finance area is Burkart and Lee (2011), who also discuss further papers.


4 The Generalized Theorems

In this section, we provide two theorems that significantly expand the applicability of Theorem 1. Our first result is that incentive compatibility implies differentiability in models with strictly monotone payoffs and compact choice sets, as in DeMarzo and Duffie (1999).

Theorem 2 Let Ω and X be intervals in R and y one-to-one and incentive-compatible. Suppose that X is compact.

1. For any θ ∈ Ω, if Assumption 1 holds and if y(θ) = x^FB(θ), then y is continuous at θ.

2. For any θ ∈ Ω, if U₃(θ, θ, x) ≠ 0 for all x ∈ X, then y is differentiable at θ.

At all points of differentiability, y satisfies the differential equation (DE).

The assumption in statement 2 of the theorem is slightly stronger than strict monotonicity in x. Note that if U(θ, θ, ·) is strictly monotone in x, then x^FB(θ) exists and is unique (hence, Assumption 1 is satisfied) and lies on the boundary of X. As noted earlier, if θ is on the boundary of Ω or x on the boundary of X, the derivatives are the relevant one-sided derivatives.

Corollary 1 Let Ω and X be intervals in R, and let X be compact. Let y be one-to-one and incentive-compatible. If U is affine in x ∈ X,

U(θ, θ̂, x) = φ(θ, θ̂) + ψ(θ, θ̂)x,    (17)

with ψ(θ, θ) ≠ 0 for all θ ∈ Ω, then y is differentiable at every θ ∈ Ω and satisfies the linear differential equation

ψ(θ, θ)y′(θ) + ψ₂(θ, θ)y(θ) = −φ₂(θ, θ).    (18)

Corollary 1 is a direct consequence of Theorem 2, because (17) implies that U₃(θ, θ, x) = ψ(θ, θ) ≠ 0 for all θ ∈ Ω, and (18) is the rewrite of (DE) for the affine case. Corollary 1 is useful because many standard models in corporate finance, as in Industrial Organization, work with linear preferences, which often gives rise to valuations U of the form (17). The theorem is surprising because constrained optimization problems with linear objective functions often yield discontinuous solutions. Interestingly, the assumption that the action set X

is compact does not force the solution to lie on the boundary: the optimal x typically lies in the interior of X. Instead, compactness is needed to prove that for any θ₀ and any sequence θₖ → θ₀, U(θ₀, θₖ, y(θₖ)) → U(θ₀, θ₀, y(θ₀)) (Lemma D in the appendix). This is a crucial insight to establish the continuity of y, from which, in turn, the differentiability of y can be deduced. Instead of assuming compactness of X, this insight can also be proved by using our relaxed concavity Assumption 2, which we do in the following theorem.

Theorem 3 Let Ω and X be intervals in R and y one-to-one and incentive-compatible. Suppose Assumptions 1 and 2 hold.

1. For any θ ∈ Ω, if y(θ) = x^FB(θ), then y is continuous at θ.

2. For any θ ∈ Ω, if U₃(θ, θ, x) ≠ 0 for all x ∈ X, then y is differentiable at θ.

3. For any θ ∈ int(Ω), if U(θ, θ, ·) is a monotone function in x and if U₂(θ, θ, x^FB(θ)) ≠ 0, then y is differentiable at θ.

4. Suppose U₁₃(θ, θ, x) ≠ 0 for all (θ, x) ∈ int(Ω) × X, U₂(θ, θ̂, y(θ̂)) ≠ 0 for all θ, θ̂ ∈ int(Ω), and U₃(θ, θ̂, y(θ̂))/U₂(θ, θ̂, y(θ̂)) is a strictly monotone function of θ for all θ̂ ∈ int(Ω). Then y is differentiable on int(Ω).

5. Assume that U₁₃(θ, θ, x) ≠ 0 and U₁₂(θ, θ, x) ≤ 0 for all (θ, x) ∈ int(Ω) × X. If U₂(θ, θ, y(θ)) > 0 for all θ ∈ int(Ω) or if U₂(θ, θ, y(θ)) < 0 for all θ ∈ int(Ω), then y is differentiable on int(Ω).

6. Assume that U₁₃(θ, θ, x) ≠ 0 for all (θ, x) ∈ int(Ω) × X.

(i) Assume that Ω = [θ₁, θ₂] or Ω = [θ₁, ∞) and that y(θ₁) = x^FB(θ₁). If U₂(θ, θ, y(θ)) > 0 for all θ ∈ Ω, then y is differentiable on Ω \ {θ₁}.

(ii) Assume that Ω = [θ₁, θ₂] or Ω = (−∞, θ₂] and that y(θ₂) = x^FB(θ₂). If U₂(θ, θ, y(θ)) < 0 for all θ ∈ Ω, then y is differentiable on Ω \ {θ₂}.

At all points of differentiability, y satisfies the differential equation (DE).


The proofs of both theorems, which recycle Mailath's original proof and add a number of new elements, are in the appendix. Theorems 2 and 3 generalize Theorem 1 in several respects. First, the theorems do not assume that Ω is compact or that X = R. With respect to Ω, the argument is not immediate because Mailath's proof uses uniform convergence (for which compactness is needed) and exploits the behavior of y on the boundary of Ω. With respect to X, the difficulty is that the equivalence of U₃(θ, θ, y(θ)) = 0 and y(θ) = x^FB(θ) breaks down for y(θ) on the boundary of X. Second, Theorems 2 and 3 provide new conditions on U to establish the differentiability of y, and for some cases where differentiability cannot be established (if U₃ = 0) they yield at least continuity. Third, the assumptions on the partial derivatives need not hold for all (θ, θ̂, x) ∈ Ω² × X. Here, the necessary changes in Mailath's proof are simple, but the new generality is useful because (i) the restriction to the diagonal θ̂ = θ has bite and (ii) usually there is some a priori information about the graph of y that can be used. The equity issue example in Section 5.1 is an example.

Theorems 2.1 and 3.1 are useful partial results that follow directly from Mailath's proof. The assumption in these statements seems difficult to verify a priori, because one needs to know y, which is what one actually wants to characterize. However, since it is often straightforward to find the θ̂ for which y(θ̂) = x^FB(θ̂), the injectivity of y then implies that one can apply the theorems to {θ < θ̂} or {θ > θ̂}, or to X \ {y(θ̂)} if y(θ̂) is on the boundary of X. Hence, the statement is useful to "fill possible holes" left by the other statements. Subsection 5.2 provides an example of this technique.

Theorems 2.2 and 3.2 have identical conclusions, differing only in their assumptions (statement 2.2 uses the compactness of X, while statement 3.2 uses the quasi-concavity of Assumption 2). Theorem 3.3 addresses the same situation slightly differently, because under Assumption 2 the fact that U(θ, θ, ·) is a monotone function in x only implies U₃(θ, θ, x) ≠ 0 for θ ∈ int(Ω) and x ∈ int(X). Theorem 3.6 clarifies the role of the boundary conditions in Theorem 1 and shows that only one boundary condition is necessary to obtain the result. This extends the validity of the theorem to intervals that are unbounded either from below or from above. Theorem 3.4 is the same statement as in Theorem 1 but without the restrictions on Ω. The comparison of Theorem 3.4 and Theorem 3.6 therefore extends and clarifies Mailath's (1987) observation that in order to prove differentiability one can


use single crossing or a boundary condition.⁹ The more substantial results are Theorems 2.2, 3.2, 3.3, and 3.5. They show that in order to prove differentiability, neither single crossing nor a boundary condition is necessary. Theorems 2.2, 3.2, and 3.3 are useful because the required monotonicity is often easy to verify, and the argument is structurally novel because it is local (i.e., it only requires conditions at θ₀ to establish differentiability at θ₀). Note, however, that statements 4-6 in Theorem 3 also hold for arbitrary sets Ω, so the results can be applied "piecemeal" to open subsets Ω′ ⊂ Ω. This is particularly useful if the regularity assumptions for U are known not to hold on the whole domain. The condition U₁₂ ≤ 0 ("manipulation monotonicity"), required in Theorem 3.5, is mild and satisfied in almost all examples we know of (though it does fail in Kartik, Ottaviani, and Squintani (2007)). It requires that the informed agent's gain from manipulating the uninformed beliefs upwards not increase in her type.

Theorems 2 and 3 identify conditions under which incentive compatibility implies differentiability and so (DE). To complete our discussion, we now briefly turn to the converse question: under what conditions does differentiability (in the form of (DE)) imply incentive compatibility? Under the assumptions of Theorem 1, Mailath (1987, Theorem 3) showed that the converse holds if U satisfies the single-crossing property on the graph of y. The next theorem shows that this statement continues to be true under weaker assumptions. Moreover, the property is also locally necessary (the statement of Mailath (1987, Theorem 3) is incorrect).

The Spence (1973)-Mirrlees (1971) single-crossing property requires that the agent's marginal rate of substitution between her action (x) and that of the uninformed agents be appropriately monotone in her type. In our examples, the uninformed agents' action is a monetary transfer, and as is typical, the action is monotone in beliefs about type. Consequently, in our reduced-form model, the appropriate marginal rate of substitution is between θ̂ and x, that is, U₃(θ, θ̂, x)/U₂(θ, θ̂, x). The adjective "appropriately" captures the requirement that, for example, in job market signaling, single crossing is implied by more (rather than less) able workers having a lower marginal cost of education. If less able workers have the lower marginal cost of education, then the marginal rate of substitution between education and wage is an increasing function of ability. While monotonic, such a marginal rate of substitution precludes the existence of a separating equilibrium.

9 The single-crossing property is a standard condition on the indifference curves of the informed agent, which we discuss in more detail below.


The single-crossing property imposes a uniform structure on the derivatives of U that implies global optimality from the first-order condition (which is essentially the differential condition (DE)). Conversely, the second-order condition for local optimality implied by incentive compatibility is essentially the local single-crossing condition. The following theorem on the role of single crossing is proved in the appendix. Since y′ and U₂ do not change sign, (19) and (20) both imply that U₃(θ, θ̂, y(θ̂))/U₂(θ, θ̂, y(θ̂)) is monotonic in θ (with the signs of y′ and U₂ jointly determining the appropriate monotonicity). Condition (19) implies a global monotonicity, in that it holds everywhere on the graph of y, while (20) implies a local monotonicity, in that it only holds for the derivative evaluated at θ̂ = θ.

Theorem 4 Assume that the one-to-one function y is continuous on Ω and satisfies the differential equation (DE) on the interior of Ω. Suppose U₂(θ, θ̂, y(θ̂)) ≠ 0 for all θ, θ̂ ∈ Ω.

1. If

y′(θ̂) U₂(θ, θ̂, y(θ̂)) ∂/∂θ { U₃(θ, θ̂, y(θ̂)) / U₂(θ, θ̂, y(θ̂)) } ≥ 0    (19)

for all θ, θ̂ ∈ Ω, then y is incentive-compatible.

2. If y is incentive-compatible, then

y′(θ) U₂(θ, θ, y(θ)) ∂/∂θ { U₃(θ, θ̂, y(θ̂)) / U₂(θ, θ̂, y(θ̂)) } |_{θ̂=θ} ≥ 0    (20)

for all θ ∈ Ω.

5 The Examples Revisited

5.1 Equity Issues

In equilibrium, the payoff of the informed party in the model of Section 3.1, (4), is strictly decreasing in x: under truth-telling (θ̂ = θ), investors pay the expected value of the firm, and any share the informed owner holds yields a mean-preserving spread around the mean. Since the informed owner is risk-averse, her utility is therefore strictly decreasing in the size of her shareholdings. However, since U₃(θ, θ, 0) = 0, the assumption of Theorem 2.2 is violated.


In order to apply Theorem 2, choose any small $\varepsilon \in (0,1)$ and consider the modified problem in which $X = [\varepsilon, 1]$. Theorem 2.2 then implies that any solution to the original problem satisfies the differential equation (DE) for $X = (0, 1]$. Since the lowest $\theta$-type obtains the first-best, $y = 0$, in this type of signaling problem, Theorem 2.1 implies that $\tau$ is continuous on all of $\Omega$. If the differential equation (DE) of this problem has a unique solution on $(0, 1]$ (which it does under standard conditions), Theorem 2 therefore yields the unique solution to the full problem. Note that the condition of Theorem 2.2 is not satisfied for $\theta = \theta_1$ (the left boundary), and indeed we have $\tau'(\theta_1) = \infty$. Since $V$ is strictly concave in $y$, one can also apply Theorem 3 to obtain differentiability. In fact, by statement 3, $\tau$ must be differentiable on $(\theta_1, \infty)$, and by the first statement it is continuous at $\theta_1$.
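The $\varepsilon$-truncation and the boundary blow-up $\tau'(\theta_1) = \infty$ can be sketched numerically. The differential equation below is a hypothetical stand-in with the same qualitative feature (slope diverging at the left boundary while $\tau$ remains continuous there), not the Leland-Pyle equation itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical (DE): tau'(theta) = theta / tau(theta), whose solution
# tau(theta) = sqrt(theta**2 - theta_1**2) has infinite slope at theta_1.
theta_1 = 1.0
exact = lambda th: np.sqrt(th**2 - theta_1**2)

for eps in [1e-2, 1e-4, 1e-6]:
    # Solve on the truncated domain [theta_1 + eps, 2] and compare to
    # the closed form; the truncated solutions converge as eps -> 0.
    t0 = theta_1 + eps
    sol = solve_ivp(lambda th, y: th / y[0], (t0, 2.0), [exact(t0)],
                    rtol=1e-10, atol=1e-12)
    err = abs(sol.y[0, -1] - exact(2.0))
    print(f"eps={eps:g}: tau(2) error = {err:.2e}")
    assert err < 1e-5

# The slope tau'(theta) = theta/tau(theta) diverges at the left boundary
slopes = [(theta_1 + e) / exact(theta_1 + e) for e in [1e-2, 1e-4, 1e-6]]
assert slopes[0] < slopes[1] < slopes[2]
```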

5.2 Market Microstructure

From (11), the first-best in the model of Section 3.2 is given by
\[
y^{fb}(\theta) = \gamma^{-1}\theta.
\]

Under any competitive incentive-compatible separating price-quantity schedule, type $\theta = 0$ must get her first-best allocation, $y = 0$.10 Separation then implies that every other type must choose a non-zero quantity. Hence, $V_2(\theta, \theta, \tau(\theta)) \neq 0$ for all $\theta \neq 0$ by (12). Since $V_{12} \equiv 0$, Theorem 3.5 therefore implies that any incentive-compatible schedule $\tau$ must be differentiable on the open sets $(0, \infty)$ and $(-\infty, 0)$. By Theorem 3.1, the schedule $\tau$ is continuous at $\theta = 0$. Calculating the derivatives of $\tau$ on $(0, \infty)$ and $(-\infty, 0)$ from (DE) shows that the left-hand and right-hand derivatives of $\tau$ at $\theta = 0$ exist and are identical. Hence, $\tau$ is differentiable on all of $\Omega$. One of the main insights in Glosten (1989) is his non-existence result. In particular, he shows that in the case $\gamma \le 2$, the differential equation (DE) for $\Omega = \mathbb{R}$ does not have a separating solution (see also Hellwig (1992)). This suggests that too much competition may be detrimental to market activity. This conclusion requires that every equilibrium trading schedule be differentiable, a result missing in Glosten (1989), but which is implied by Theorem 3.

10 Since $V(0, 0, \tau(0)) \ge 0$ (type $\theta = 0$ has the option of choosing $y = 0$), and $V(0, 0, y) = -\gamma y^2/2$, which is strictly negative if $y \neq 0$, we have $\tau(0) = 0$.
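The gluing step (solving (DE) separately on $(0,\infty)$ and $(-\infty,0)$ and checking that the one-sided derivatives at $\theta = 0$ coincide) can be illustrated numerically. The right-hand side below is a hypothetical symmetric specification of ours, not Glosten's; its symmetry under $(\theta, y) \mapsto (-\theta, -y)$ plays the role of the symmetry in the model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical (DE), solved separately on each side of 0 with tau(0) = 0;
# the rhs is invariant under (theta, y) -> (-theta, -y).
rhs = lambda th, y: (1.0 + th**2) / (1.0 + y[0]**2)

h = 1e-4     # step used for the one-sided difference quotients
right = solve_ivp(rhs, (0.0, h), [0.0], rtol=1e-10, atol=1e-12)
left = solve_ivp(rhs, (0.0, -h), [0.0], rtol=1e-10, atol=1e-12)

d_right = right.y[0, -1] / h        # (tau(h) - tau(0)) / h
d_left = left.y[0, -1] / (-h)       # (tau(-h) - tau(0)) / (-h)
print(d_right, d_left)
assert abs(d_right - d_left) < 1e-3  # one-sided derivatives coincide
```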


It is worth noting that this conclusion also requires viewing separation as an implication of competition. Such a view involves an implicit appeal to some refinements (see footnote 1).11 Mailath and Nöldeke (2008) argue that competitive pricing does not lead to market breakdown, even if the investor’s private information has unbounded support.

5.3 Security Design

Since the firm's payoff function $V$ is linear in $y$, if the firm's strategy $\tau$ is incentive-compatible and one-to-one, Theorem 2 implies that it is differentiable for all $\theta > 0$ and satisfies the differential equation
\[
(1 - \delta)\,\theta\,\tau'(\theta) + \tau(\theta) = 0. \tag{21}
\]
It can be easily verified that (21) has the solution
\[
\tau(\theta) = K\,\theta^{-\frac{1}{1-\delta}}, \tag{22}
\]
where $K \ge 0$ is a constant of integration. Since $\tau(\theta) \in [0, 1]$ by construction, (22) implies that $\theta_1 \equiv \inf \Omega$ and $K$ must satisfy $\theta_1^{1/(1-\delta)} \ge K$ for a solution to exist. In particular, if the interaction in the model is a signaling game (as in Section 3 of DeMarzo and Duffie (1999)), then firm type $\theta_1$ must obtain its most preferred allocation $y = 1$, and the constant of integration is
\[
K = \theta_1^{\frac{1}{1-\delta}}.
\]
Note that this implies that $\Omega$ must be bounded away from $0$ for a nontrivial solution to exist.12

A Proofs of Theorems 2 and 3

In what follows, $\Omega$ and $X$ are intervals of $\mathbb{R}$, and $\tau$ is a one-to-one function satisfying (IC). As in Mailath (1987), we first conduct some preliminary calculations. Fix $\theta_0 \in \Omega$. Define, for arbitrary $\theta, \hat\theta \in \Omega$ and $y \in X$,
\[
h(\theta, \hat\theta, y) \equiv V(\theta, \hat\theta, y) - V(\theta, \theta_0, \tau(\theta_0)).
\]

11 Gale (1992, 1996) describes a Walrasian approach to competition in markets with adverse selection that is related to the refinement ideas of Kohlberg and Mertens (1986) and which yields similar conclusions. 12 DeMarzo and Duffie (1999) derive (22) assuming differentiability and cite an earlier unpublished version of their paper for a proof.


Since  is C 2 ,  is C 2 on Ω2 × X . The derivatives are well defined on the boundary of the relevant domains (in which case they are the appropriate one-sided derivatives). Moreover, ( 0  ( 0 )) = 1 ( 0  (0 )) = 0

∀

(A.1)

Incentive compatibility implies
\[
h(\theta_0, \theta, \tau(\theta)) \le 0 \tag{A.2}
\]
and
\[
h(\theta, \theta, \tau(\theta)) \ge 0. \tag{A.3}
\]

For any  ∈ Ω and  ∈ [0 1], define [; ]1 ≡ (0 + (1 − )  ()) and for any  ∈ [0 1], define [; ]23 ≡ ( 0   0 + (1 − ) (0 ) + (1 − )()) Expanding (  ()) around ( 0   ()) as a second order Taylor expansion yields (  ()) = (0   ()) + 1 ( 0   ())( −  0 )

+ 12 11 ([; ]1 )( −  0 )2

for some  ∈ [0 1]. Next, expanding 1 ( 0   ()) around ( 0   0  ( 0 )) as a first order Taylor expansion yields 1 ( 0   ()) = 1 ( 0  0  (0 )) + 12 ([; ]23 )( −  0 ) + 13 ([; ]23 )(() − ( 0 )) for some  ∈ [0 1]. Because the first term on the right-hand side is 0 by (A.1), combining these two expressions yields (  ()) = (0   ()) ½ £1 ¤ + ( − 0 ) 2 11 ([; ]1 ) + 12 ([; ]23 ) ( −  0 )

¾ + 13 ([; ]23 )(() − ( 0 ))  (A.4)


Expressions (A.2), (A.3), and (A.4) imply
\[
0 \ge h(\theta_0, \theta, \tau(\theta)) \ge -(\theta - \theta_0)\Big\{ \big[\tfrac{1}{2} h_{11}([\theta; t]_1) + h_{12}([\theta; s]_{23})\big](\theta - \theta_0) + h_{13}([\theta; s]_{23})(\tau(\theta) - \tau(\theta_0)) \Big\} \tag{A.5}
\]

for some $t, s \in [0,1]$.

Lemma A If $\tau$ is continuous at $\theta_0$ and $V_3(\theta_0, \theta_0, \tau(\theta_0)) \neq 0$, then $\tau$ is differentiable at $\theta_0$ with derivative
\[
\tau'(\theta_0) = -\frac{V_2(\theta_0, \theta_0, \tau(\theta_0))}{V_3(\theta_0, \theta_0, \tau(\theta_0))}.
\]

This is essentially Proposition 2 of Mailath (1987). Its proof requires no modification (after taking the relevant bound as the sup of the cross-partials of $V$ over a compact neighborhood of $(\theta_0, \theta_0, \tau(\theta_0))$), since the proof only requires $V$ to be $C^2$ (which extends to the boundary). Under Assumptions 1 and 2, Lemma A implies that $\tau$ is differentiable at $\theta$ if $\tau$ is continuous there, $y^{fb}(\theta) \in \operatorname{int}(X)$ and $\tau(\theta) \neq y^{fb}(\theta)$.

Lemma B Suppose that Assumptions 1 and 2 hold. If $\tau$ is continuous at $\theta_0 \in \operatorname{int}(\Omega)$, and $V_2(\theta_0, \theta_0, \tau(\theta_0)) \neq 0$ or $V_2(\theta_0, \theta_0, y^{fb}(\theta_0)) \neq 0$, then $V_3(\theta_0, \theta_0, \tau(\theta_0)) \neq 0$.

Proof. Suppose that $V_3(\theta_0, \theta_0, \tau(\theta_0)) = 0$. Then by Assumption 2.2, $\tau(\theta_0) = y^{fb}(\theta_0)$ (and so $V_2(\theta_0, \theta_0, \tau(\theta_0)) = V_2(\theta_0, \theta_0, y^{fb}(\theta_0))$), even if $\tau(\theta_0)$ is on the boundary. Hence, by Assumption 2.1, $V_{33}(\theta_0, \theta_0, \tau(\theta_0)) < 0$ (one-sided derivative if $\tau(\theta_0) \in \{\min X, \max X\}$). Suppose $V_2(\theta_0, \theta_0, \tau(\theta_0)) > 0$. Expanding $h(\theta_0, \theta, \tau(\theta))$ around $(\theta_0, \theta_0, \tau(\theta_0))$ in (A.5) and dividing by $\theta - \theta_0 < 0$ yields, since $h(\theta_0, \theta_0, \tau(\theta_0)) = 0$ and $h_3(\theta_0, \theta_0, \tau(\theta_0)) = 0$,
\[
0 \le h_2(\theta_0, \theta_0, \tau(\theta_0)) + \frac{\tau(\theta) - \tau(\theta_0)}{\theta - \theta_0}\Big\{ \tfrac{1}{2} h_{33}([\theta; r]_{23})(\tau(\theta) - \tau(\theta_0)) + h_{23}([\theta; r]_{23})(\theta - \theta_0) \Big\} + \tfrac{1}{2} h_{22}([\theta; r]_{23})(\theta - \theta_0)
\]
\[
\le -\big[\tfrac{1}{2} h_{11}([\theta; t]_1) + h_{12}([\theta; s]_{23})\big](\theta - \theta_0) - h_{13}([\theta; s]_{23})(\tau(\theta) - \tau(\theta_0))
\]


for some $r \in [0,1]$. Rearranging and simplifying gives
\[
-\tfrac{1}{2} h_{22}([\theta; r]_{23})(\theta - \theta_0) - h_{23}([\theta; r]_{23})(\tau(\theta) - \tau(\theta_0))
\le V_2(\theta_0, \theta_0, \tau(\theta_0)) + \frac{(\tau(\theta) - \tau(\theta_0))^2\, h_{33}([\theta; r]_{23})}{2(\theta - \theta_0)} \tag{A.6}
\]
\[
\le -\big[\tfrac{1}{2} h_{11}([\theta; t]_1) + h_{12}([\theta; s]_{23}) + \tfrac{1}{2} h_{22}([\theta; r]_{23})\big](\theta - \theta_0) - \big[h_{13}([\theta; s]_{23}) + h_{23}([\theta; r]_{23})\big](\tau(\theta) - \tau(\theta_0)).
\]
Since $\tau$ is continuous at $\theta_0$, as $\theta \nearrow \theta_0$, the terms bounding the expression in (A.6) converge to $0$, and so the term in (A.6) must also converge to $0$. But, for $\theta$ close to $\theta_0$, $h_{33}([\theta; r]_{23}) < 0$, and so that term is bounded away from $0$ from below by $V_2(\theta_0, \theta_0, \tau(\theta_0)) > 0$, a contradiction. If $V_2(\theta_0, \theta_0, \tau(\theta_0)) < 0$, a similar contradiction is obtained using the same argument applied to a sequence $\theta \searrow \theta_0$.

Under Assumptions 1 and 2, Lemma B implies that $\tau(\theta) \neq y^{fb}(\theta)$ at an interior $\theta$ if $\tau$ is continuous there, $y^{fb}(\theta) \in \operatorname{int}(X)$ and $V_2(\theta, \theta, \tau(\theta)) \neq 0$.

Lemma C Suppose Assumptions 1 and 2 hold. For each non-empty compact interval $[a, b] \subset \Omega$, $\tau([a, b])$ is bounded.

Proof. The continuity of $V$ and the Maximum Theorem imply that the first-best $y^{fb}$ is a continuous function on $\Omega$. Suppose that $\tau$ is unbounded on $[a, b]$ and let $\theta^n \in [a, b]$, $n = 1, 2, \ldots$, be a sequence such that $y^n = \tau(\theta^n) \to \infty$ (the case $y^n \to -\infty$ is handled analogously). We may assume, by taking subsequences if necessary, that the sequence $\theta^n$ converges to some $\theta^0 \in [a, b]$. There is $N \in \mathbb{N}$ such that $\tau(\theta^n) > y^{fb}(\theta^0)$ for all $n \ge N$. Assumption 2 implies that $V(\theta^0, \theta^0, \tau(\theta^n)) \to -\infty$. For any $K > 0$ and $\varepsilon > 0$, let $n_1 \in \mathbb{N}$ be such that $V(\theta^0, \theta^0, \tau(\theta^n)) < -K - \varepsilon$ for all $n \ge n_1$. Since $\tau(\theta^n) \to \infty$ we can assume that $\tau(\theta^{n_1}) > \sup_{\theta \in [a,b]} y^{fb}(\theta)$. The continuity of $V$ implies that there is an $n_2 \in \mathbb{N}$, $n_2 > n_1$, such that $V(\theta^n, \theta^n, \tau(\theta^{n_1}))$ is within $\varepsilon$ of $-K - \varepsilon$; hence $V(\theta^n, \theta^n, \tau(\theta^{n_1})) < -K$ for all $n \ge n_2$. Assumption 2 implies that for each $\theta$, $V(\theta, \theta, y)$ is strictly decreasing in $y$ if $y > y^{fb}(\theta)$. Hence, $V(\theta^n, \theta^n, \tau(\theta^n)) < -K$ for all $n$ sufficiently large. Since this is true for arbitrary $K$, we have a contradiction to the incentive compatibility of $\tau$.


Lemma D Suppose either that $X$ is compact or that Assumptions 1 and 2 hold. If $\theta \to \theta_0$, then $V(\theta_0, \theta_0, \tau(\theta)) \to V(\theta_0, \theta_0, \tau(\theta_0))$.

Proof. Fix a compact neighborhood $N$ in $\Omega$ containing $\theta_0$. By Lemma C or directly by the compactness of $X$, $\tau(N)$ is bounded. Hence, $V$ is uniformly continuous on $N^2 \times \operatorname{cl}(\tau(N))$, where $\operatorname{cl}(\cdot)$ denotes the closure. Fix $\varepsilon > 0$. Uniform continuity implies that there is a $\delta_1 > 0$ with $\{\theta \in \Omega : |\theta - \theta_0| < \delta_1\} \subset N$ such that for all $y \in \tau(N)$,
\[
|\theta - \theta_0| < \delta_1 \implies |V(\theta_0, \theta, y) - V(\theta_0, \theta_0, y)| < \varepsilon.
\]
For these $\theta$, incentive compatibility implies
\[
V(\theta_0, \theta_0, \tau(\theta_0)) \ge V(\theta_0, \theta, \tau(\theta)) > V(\theta_0, \theta_0, \tau(\theta)) - \varepsilon. \tag{A.7}
\]
On the other hand, there is a $\delta_2 > 0$ with $\{\theta \in \Omega : |\theta - \theta_0| < \delta_2\} \subset N$ such that for all $y \in \tau(N)$,
\[
|\theta - \theta_0| < \delta_2 \implies |V(\theta, \theta_0, y) - V(\theta_0, \theta_0, y)| < \varepsilon/2 \quad\text{and}\quad |V(\theta, \theta, y) - V(\theta_0, \theta_0, y)| < \varepsilon/2.
\]
Hence, for these $\theta$, incentive compatibility implies
\[
V(\theta_0, \theta_0, \tau(\theta)) > V(\theta, \theta, \tau(\theta)) - \frac{\varepsilon}{2} \ge V(\theta, \theta_0, \tau(\theta_0)) - \frac{\varepsilon}{2} > V(\theta_0, \theta_0, \tau(\theta_0)) - \varepsilon. \tag{A.8}
\]

Therefore, for  ∈ Ω with | −  0 |  min( 1   2 ), we have  ( 0   0  ( 0 )) −    ( 0   0  ())   ( 0   0  ( 0 )) +  where the first inequality is from (A.8) and the second (A.7). Letting  go to zero proves the lemma. Theorems 2.1 and 3.1. Suppose that X is compact (Theorem 2) or that Assumption 2 holds (Theorem 3). For any  ∈ Ω, if Assumption 1 holds and if () =    (), then  is continuous at . Proof. Consider  0 ∈ Ω and fix a compact neighborhood  in Ω containing 0 . Consider a sequence  → 0 in  . By the compactness of X (for Theorem 2.1) or Lemma C (for Theorem 3.1), the sequence (  ) has a 20

convergent subsequence that converges to some $\hat y \in \operatorname{cl}(\tau(N))$. By Lemma D, on that subsequence,
\[
V(\theta_0, \theta_0, \tau(\theta_n)) \to V(\theta_0, \theta_0, \tau(\theta_0)). \tag{A.9}
\]
By Assumption 1, $V(\theta_0, \theta_0, \cdot)$ has a unique maximum at $y = y^{fb}(\theta_0)$. Equation (A.9) then implies $\hat y = y^{fb}(\theta_0)$ when $\tau(\theta_0) = y^{fb}(\theta_0)$, and so $\tau$ is continuous at $\theta_0$.

Theorems 2.2 and 3.2. Suppose that $X$ is compact (Theorem 2) or that Assumptions 1 and 2 hold (Theorem 3). For any $\theta \in \Omega$, if $V_3(\theta, \theta, y) \neq 0$ for all $y \in X$, then $\tau$ is differentiable at $\theta$.

Proof. Choose $\theta_0 \in \Omega$ and derive $\hat y$ and (A.9) as in the previous proof. Since $V(\theta_0, \theta_0, \cdot)$ is strictly monotone by assumption, it is one-to-one. Hence, $\hat y = \tau(\theta_0)$, and $\tau$ is continuous at $\theta_0$. Differentiability then follows from Lemma A.

Theorem 3.3. Suppose that Assumptions 1 and 2 hold. For any $\theta_0 \in \operatorname{int}(\Omega)$, if $V(\theta_0, \theta_0, \cdot)$ is a monotone function of $y$, and if $V_2(\theta_0, \theta_0, y^{fb}(\theta_0)) \neq 0$, then $\tau$ is differentiable at $\theta_0$.

Proof of Theorem 3.3. Choose $\theta_0 \in \Omega$ and derive $\hat y$ and (A.9) as above. If $V(\theta_0, \theta_0, \cdot)$ is monotone, it is strictly monotone (and so one-to-one) because of Assumption 2. Hence, $\hat y = \tau(\theta_0)$, and so $\tau$ is continuous at $\theta_0$. Differentiability therefore follows from Lemmas A and B.

Lemma E Suppose that Assumptions 1 and 2 hold, $V_{13}(\theta, \theta, y) \neq 0$ and $V_2(\theta, \theta, \tau(\theta)) \neq 0$ for all $(\theta, y) \in \operatorname{int}(\Omega) \times X$. Then $\tau$ can have at most one point of discontinuity $\theta_0$ in $\operatorname{int}(\Omega)$. At the discontinuity,

1. $\tau$ is continuous from either the left or the right,

2. the left-hand and the right-hand limits exist (i.e., the discontinuity is a jump discontinuity), and

3. the jump of $\tau$ is of the same sign as $V_{13}$, i.e.,
\[
\Big( \lim_{\theta \searrow \theta_0} \tau(\theta) - \lim_{\theta \nearrow \theta_0} \tau(\theta) \Big) \cdot V_{13} > 0. \tag{A.10}
\]

Proof. Suppose  is discontinuous at some  0 ∈ int(Ω) and fix a compact neighborhood in Ω around 0 , [0 −   0 + ] ∩ Ω. By Lemma C, there exists a sequence { } in [ 0 −   0 + ] ∩ Ω with   →  0 such that the b 6= (0 ). sequence ( ) converges to some  By the continuity of  and Lemma D, b) =  ( 0   0  ( 0 ))  ( 0   0  

(A.11)

The strict quasi-concavity of $V$ (from Assumptions 1 and 2) implies that the equation $V(\theta_0, \theta_0, y) = V(\theta_0, \theta_0, \tau(\theta_0))$ can have at most two distinct solutions in $y$, one of them being $\tau(\theta_0)$. If there is only one such solution, $\tau$ is continuous at $\theta_0$. Hence, suppose there are two and denote them by $y'$ and $y''$, with $y' < y''$. Equation (A.11) and the strict quasi-concavity also imply that
\[
y' < y^{fb}(\theta_0) < y''. \tag{A.12}
\]
We now show that the left- and right-hand limits of $\tau$ at $\theta_0$ exist. First, consider any sequence $\theta_n \searrow \theta_0$, and let $y_+ = \lim_{\theta_n \searrow \theta_0} \tau(\theta_n)$. Focusing on the left-most and right-most terms of the inequality chain (A.5) and dividing through by $\theta - \theta_0$ yields, for $\theta = \theta_n$,
\[
\big[\tfrac{1}{2} h_{11}([\theta_n; t]_1) + h_{12}([\theta_n; s]_{23})\big](\theta_n - \theta_0) + h_{13}([\theta_n; s]_{23})(\tau(\theta_n) - \tau(\theta_0)) \ge 0. \tag{A.13}
\]
Since $h_{13}([\theta; s]_{23}) = V_{13}([\theta; s]_{23})$, in the limit, this implies
\[
(y_+ - \tau(\theta_0))\, V_{13}\big(\theta_0, \theta_0, s\tau(\theta_0) + (1 - s)y_+\big) \ge 0 \tag{A.14}
\]
for some $s \in [0,1]$. Now consider any sequence $\theta_n \nearrow \theta_0$, and let $y_- = \lim_{\theta_n \nearrow \theta_0} \tau(\theta_n)$. By the same argument as before,
\[
(y_- - \tau(\theta_0))\, V_{13}\big(\theta_0, \theta_0, s\tau(\theta_0) + (1 - s)y_-\big) \le 0. \tag{A.15}
\]

Suppose that 13  0 (the case 13  0 is handled similarly, with the relevant inequalities reversed). Inequalities (A.14) and (A.15) imply lim inf (  ) ≤ lim sup (  ) ≤ ( 0 ) ≤ lim inf (  ) ≤ lim sup (  )  % 0

  &0

 % 0

  &0

(A.16) By our earlier argument, each of the five terms in (A.16) is either equal to 0 or 00 . Hence, exactly one of the four inequalities in (A.16) is strict and  is continuous from the left or from the right. We now argue that the discontinuity at  0 is a jump discontinuity. Suppose, en route to a contradiction, that the right-most inequality in (A.16) 22

is strict (the left-most inequality is handled similarly). Hence, there are sequences $\theta_n \searrow \theta_0$ and $\tilde\theta_n \searrow \theta_0$ with $\tau(\theta_0) = \lim_n \tau(\theta_n) < y^{fb}(\theta_0) < \lim_n \tau(\tilde\theta_n)$. Since $y^{fb}$ is continuous, for large $n$ and small $\delta$, we have $\tau(\tilde\theta_n) > y^{fb}(\theta) > \tau(\theta_n)$ for all $\theta \in [\theta_0, \theta_0 + \delta] \subset \operatorname{int}(\Omega)$. Hence, for $\theta \in [\theta_0, \theta_0 + \delta]$, $y^{fb}(\theta)$ is not on the boundary of $X$, and therefore $V_3(\theta, \theta, y^{fb}(\theta)) = 0$. Fix $\theta_n$ and $\tilde\theta_n$ with $\theta_0 < \tilde\theta_n < \theta_n < \theta_0 + \delta$. Theorem 3.1 and Lemma B imply that for all $\theta \in [\tilde\theta_n, \theta_n]$, $\tau(\theta) \neq y^{fb}(\theta)$. Hence, $\tau$ is discontinuous at some $\hat\theta \in [\tilde\theta_n, \theta_n]$ with some left limit exceeding $y^{fb}(\hat\theta)$ and some right limit being less than $y^{fb}(\hat\theta)$. But this contradicts (A.16), applied to $\hat\theta$. Hence, the right-hand limit at $\theta_0$ exists. Similarly, one shows that the left-hand limit exists. Moreover, (A.16) implies (A.10), i.e., jumps can only be in one direction.

It remains to argue that $\tau$ can have at most one discontinuity in $\operatorname{int}(\Omega)$. Since all discontinuities are jump discontinuities, (A.12) implies that all discontinuities are isolated. Suppose there exist two discontinuities $\theta_1 < \theta_2$ such that $\tau$ is continuous on $(\theta_1, \theta_2)$. Because jumps can only be in one direction, (A.12) holds, and $y^{fb}$ is continuous, it follows that there exists a $\theta \in (\theta_1, \theta_2)$ such that $\tau(\theta) = y^{fb}(\theta)$. By continuity, the set $A = \{\theta \in (\theta_1, \theta_2) : \tau(\theta) = y^{fb}(\theta)\}$ is compact. Let $\hat\theta \in (\theta_1, \theta_2)$ be its minimum. Since $\tau$ is one-to-one, $\hat\theta$ is an isolated point of $A$ and $y^{fb}$ intersects $\tau$ at $\hat\theta$ from below. However, Theorem 3.1 and Lemma B imply that $y^{fb}(\hat\theta)$ is on the boundary of $X$. Contradiction.

Theorem 3.4. Suppose Assumptions 1 and 2 hold, $V_{13}(\theta, \theta, y) \neq 0$ for all $(\theta, y) \in \operatorname{int}(\Omega) \times X$, $V_2(\theta, \hat\theta, \tau(\hat\theta)) \neq 0$ for all $\theta, \hat\theta \in \operatorname{int}(\Omega)$, and that $V_3(\theta, \hat\theta, \tau(\hat\theta))/V_2(\theta, \hat\theta, \tau(\hat\theta))$ is a strictly monotone function of $\theta \in \operatorname{int}(\Omega)$ for all $\hat\theta \in \operatorname{int}(\Omega)$. Then $\tau$ is differentiable on $\operatorname{int}(\Omega)$.

Proof. The result follows from Lemma B once we have proved that $\tau$ is continuous in the interior of $\Omega$. Suppose that $\tau$ is discontinuous at $\theta_0 \in \operatorname{int}(\Omega)$. Assume that $V_{13}(\theta, \theta, y) > 0$ (the other case is analogous). By Lemma E, $\tau$ is continuous for all $\theta \neq \theta_0$ and so, by Lemma B, differentiable at all such $\theta \in \operatorname{int}(\Omega)$ with derivative
\[
\tau'(\theta) = -\frac{V_2(\theta, \theta, \tau(\theta))}{V_3(\theta, \theta, \tau(\theta))}.
\]
Since by (A.12) and Lemma E, $\tau(\theta)$ is strictly smaller than the first-best for $\theta < \theta_0$ and strictly greater for $\theta > \theta_0$, we have $(\theta - \theta_0)V_3(\theta, \theta, \tau(\theta)) < 0$ for $\theta \neq \theta_0$, and so $(\theta - \theta_0)\tau'(\theta)V_2(\theta, \theta, \tau(\theta)) > 0$ for $\theta \neq \theta_0$. Since $V_2(\theta, \theta, \tau(\theta))$ does not change sign on $\operatorname{int}(\Omega)$ by assumption, $\tau'$ has one sign for $\theta < \theta_0$ and the other sign for $\theta > \theta_0$. By assumption, $V_3(\theta, \hat\theta, \tau(\hat\theta))/V_2(\theta, \hat\theta, \tau(\hat\theta))$ is a strictly monotone function of $\theta \in \operatorname{int}(\Omega)$, and so
\[
\tau'(\hat\theta)\, V_2(\theta, \hat\theta, \tau(\hat\theta))\, \frac{\partial}{\partial\theta}\left[ \frac{V_3(\theta, \hat\theta, \tau(\hat\theta))}{V_2(\theta, \hat\theta, \tau(\hat\theta))} \right] < 0 \tag{A.17}
\]
for $\hat\theta$ either below or above $\theta_0$. Suppose it is the former (the latter is handled similarly). Choose arbitrary $\theta' < \theta'' < \theta_0$ in $\operatorname{int}(\Omega)$. Consider $V(\theta'', \tau^{-1}(y), y)$ as a function of $y$. By the differentiability of $\tau$ and the Intermediate Value Theorem, there exists a $\theta \in (\theta', \theta'')$ such that
\begin{align*}
V(\theta'', \theta', \tau(\theta')) &= V(\theta'', \theta'', \tau(\theta'')) - \Big\{ V_2(\theta'', \theta, \tau(\theta))\, \frac{d\tau^{-1}(\tau(\theta))}{dy} + V_3(\theta'', \theta, \tau(\theta)) \Big\} \big(\tau(\theta'') - \tau(\theta')\big) \\
&= V(\theta'', \theta'', \tau(\theta'')) - \Big\{ V_2(\theta'', \theta, \tau(\theta)) \Big( -\frac{V_3(\theta, \theta, \tau(\theta))}{V_2(\theta, \theta, \tau(\theta))} \Big) + V_3(\theta'', \theta, \tau(\theta)) \Big\} \big(\tau(\theta'') - \tau(\theta')\big) \\
&= V(\theta'', \theta'', \tau(\theta'')) - \Big\{ \frac{V_3(\theta'', \theta, \tau(\theta))}{V_2(\theta'', \theta, \tau(\theta))} - \frac{V_3(\theta, \theta, \tau(\theta))}{V_2(\theta, \theta, \tau(\theta))} \Big\}\, V_2(\theta'', \theta, \tau(\theta)) \big(\tau(\theta'') - \tau(\theta')\big) \\
&= V(\theta'', \theta'', \tau(\theta'')) - \left( \int_\theta^{\theta''} \frac{\partial}{\partial s}\left[ \frac{V_3(s, \theta, \tau(\theta))}{V_2(s, \theta, \tau(\theta))} \right] ds \right) V_2(\theta'', \theta, \tau(\theta)) \big(\tau(\theta'') - \tau(\theta')\big) \\
&> V(\theta'', \theta'', \tau(\theta'')).
\end{align*}

The strict inequality (which follows from (A.17) and $\theta < \theta''$) contradicts incentive compatibility.

Lemma F Suppose Assumptions 1 and 2 hold. Let $\Omega_0$ be an open subset of $\Omega$ on which $\tau$ is differentiable. Assume that $V_2(\theta, \theta, \tau(\theta)) \neq 0$ for all $\theta \in \Omega_0$. Then
\[
V_{12}(\theta, \theta, \tau(\theta)) + \tau'(\theta)\, V_{13}(\theta, \theta, \tau(\theta)) \ge 0 \tag{A.18}
\]
for all $\theta \in \Omega_0$.

Proof. By Lemma A, $\tau'(\theta) \neq 0$ for all $\theta \in \Omega_0$, and so $\tau^{-1}$ is differentiable for all $y \in \tau(\Omega_0)$. Since $\tau(\theta)$ must maximize $V(\theta, \tau^{-1}(y), y)$, the implied first order condition evaluated at $\theta = \tau^{-1}(y)$,
\[
V_2(\tau^{-1}(y), \tau^{-1}(y), y)\, \frac{d\tau^{-1}(y)}{dy} + V_3(\tau^{-1}(y), \tau^{-1}(y), y) = 0,
\]
must hold for all $y \in \tau(\Omega_0)$. Since the equation is an identity, the first derivative must also equal zero:
\[
[V_{12} + V_{22}] \left( \frac{d\tau^{-1}}{dy} \right)^2 + V_2\, \frac{d^2\tau^{-1}}{dy^2} + [V_{13} + 2V_{23}]\, \frac{d\tau^{-1}}{dy} + V_{33} = 0, \tag{A.19}
\]
where all the partial derivatives of $V$ are evaluated at $(\tau^{-1}(y), \tau^{-1}(y), y)$. The second derivative of $V(\theta, \tau^{-1}(y), y)$ is
\[
\frac{\partial^2}{\partial y^2} V(\theta, \tau^{-1}(y), y) = V_{22} \left( \frac{d\tau^{-1}}{dy} \right)^2 + V_2\, \frac{d^2\tau^{-1}}{dy^2} + 2V_{23}\, \frac{d\tau^{-1}}{dy} + V_{33}, \tag{A.20}
\]
where now all partial derivatives are evaluated at $(\theta, \tau^{-1}(y), y)$. Evaluating (A.20) at $\theta = \tau^{-1}(y)$, and substituting from (A.19), yields
\[
\frac{\partial^2}{\partial y^2} V(\tau^{-1}(y), \tau^{-1}(y), y) = -V_{12} \left( \frac{d\tau^{-1}}{dy} \right)^2 - V_{13}\, \frac{d\tau^{-1}}{dy} = -\left( \frac{d\tau^{-1}}{dy} \right)^2 \big( V_{12} + \tau' V_{13} \big). \tag{A.21}
\]
Since $V(\theta, \tau^{-1}(y), y)$ must have a local maximum at $y = \tau(\theta)$, the right-hand side of (A.21) must be (weakly) negative for all $\theta \in \Omega_0$, which yields (A.18).

Theorem 3.5. Suppose Assumptions 1 and 2 hold, $V_{13}(\theta, \theta, y) \neq 0$, and $V_{12}(\theta, \theta, y) \le 0$ for all $(\theta, y) \in \operatorname{int}(\Omega) \times X$. If $V_2(\theta, \theta, \tau(\theta)) > 0$ or if $V_2(\theta, \theta, \tau(\theta)) < 0$ for all $\theta \in \operatorname{int}(\Omega)$, then $\tau$ is differentiable on $\operatorname{int}(\Omega)$.

Proof. By Lemma E, $\tau$ can have at most one discontinuity in $\operatorname{int}(\Omega)$, say $\theta_0$. By Lemma A, $\tau$ is differentiable on $\operatorname{int}(\Omega) \setminus \{\theta_0\}$, and by Lemma F, it satisfies (A.18) there. Since $V_{12} \le 0$ and $V_{13} \neq 0$, (A.18) immediately implies that $\tau'$ must have the same sign as $V_{13}$ for all $\theta \in \operatorname{int}(\Omega) \setminus \{\theta_0\}$. Recall that for all $\theta \in \operatorname{int}(\Omega) \setminus \{\theta_0\}$,
\[
\tau(\theta) < y^{fb}(\theta) \iff V_3(\theta, \theta, \tau(\theta)) > 0 \iff \tau'(\theta)\, V_2(\theta, \theta, \tau(\theta)) < 0, \tag{A.22}
\]
where the first equivalence follows from the definition of the first-best and the second from the form of the derivative of $\tau$ derived in Lemma A. By (A.12), we have both $\tau(\theta') < y^{fb}(\theta')$ and $\tau(\theta'') > y^{fb}(\theta'')$ for some $\theta', \theta'' \in \operatorname{int}(\Omega)$ (since $\theta_0$ is a point of discontinuity). Since $V_2$ does not change sign on $\operatorname{int}(\Omega)$, $\tau'$ does. But $\tau'$ must have the same sign as $V_{13}$ on $\operatorname{int}(\Omega) \setminus \{\theta_0\}$, which does not change its sign by assumption. Contradiction.

Theorem 3.6. Suppose Assumptions 1 and 2 hold and $V_{13}(\theta, \theta, y) \neq 0$ for all $(\theta, y) \in \operatorname{int}(\Omega) \times X$.

(i) Assume that $\Omega = [\theta_1, \theta_2]$ or $\Omega = [\theta_1, \infty)$ and that $\tau(\theta_1) = y^{fb}(\theta_1)$. If $V_2(\theta, \theta, \tau(\theta)) > 0$ for all $\theta \in \Omega$, then $\tau$ is differentiable on $\Omega \setminus \{\theta_1\}$.

(ii) Assume that $\Omega = [\theta_1, \theta_2]$ or $\Omega = (-\infty, \theta_2]$ and that $\tau(\theta_2) = y^{fb}(\theta_2)$. If $V_2(\theta, \theta, \tau(\theta)) < 0$ for all $\theta \in \Omega$, then $\tau$ is differentiable on $\Omega \setminus \{\theta_2\}$.

Proof. Assume that $V_2(\theta, \theta, \tau(\theta)) > 0$ for all $\theta \ge \theta_1$ (the other case is handled similarly). Suppose that $\tau$ is discontinuous at $\theta_0 > \theta_1$ (by Lemma E, there can be no other discontinuity). Denote $\Omega_0 = \{\theta \in \Omega : \theta > \theta_1,\ \theta \neq \theta_0\}$. By Lemma A, $\tau$ is differentiable on $\Omega_0$, and by Lemma F, it satisfies (A.18) there. Since $V_2(\theta, \theta, \tau(\theta)) \neq 0$ by assumption, the continuity of $\tau$ at $\theta_1$ and Lemma A imply $|\tau'(\theta)| \to \infty$ for $\theta \to \theta_1$. Hence, (A.18) implies that $\tau'$ must have the same sign as $V_{13}$ for $\theta \in \Omega_0$ sufficiently close to $\theta_1$. As in (A.22), we now have for all $\theta \in \Omega_0$,
\[
\tau(\theta) < y^{fb}(\theta) \iff \tau'(\theta) < 0.
\]
But according to Lemma E, at the discontinuity, the jump of $\tau$ has the same sign as $V_{13}$, which is impossible.
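Inequality (A.18) of Lemma F is easy to verify numerically in a hypothetical quadratic specification (an illustration of ours, not a model from the paper): $V(\theta, \hat\theta, y) = \hat\theta - y^2/(2\theta)$ with separating strategy $\tau(\theta) = \sqrt{\theta^2 - \theta_1^2}$, for which $V_{12} = 0$ and $V_{13} = y/\theta^2$, so with $\tau' > 0$ the left-hand side of (A.18) is nonnegative along the graph of $\tau$:

```python
import numpy as np

theta_1 = 1.0
tau = lambda th: np.sqrt(th**2 - theta_1**2)

# Hypothetical specification V(th, th_hat, y) = th_hat - y**2/(2*th):
V12 = lambda th, y: 0.0            # cross-partial in (theta, theta_hat)
V13 = lambda th, y: y / th**2      # cross-partial in (theta, y)

th = np.linspace(theta_1 + 1e-3, 3.0, 200)
h = 1e-6
tau_prime = (tau(th + h) - tau(th - h)) / (2 * h)

# Lemma F, inequality (A.18): V12 + tau' * V13 >= 0 along the graph of tau
lhs = V12(th, tau(th)) + tau_prime * V13(th, tau(th))
assert np.all(lhs >= 0)
```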

B Proof of Theorem 4

First note that it suffices to prove the result assuming $\Omega$ is open (since if $\Omega$ includes a boundary, and $\tau$ is incentive compatible on the interior of $\Omega$, then continuity implies $\tau$ is incentive compatible on $\Omega$).

1. (Sufficiency of global single crossing for IC.) Since $\tau$ satisfies (DE), it satisfies the first order condition implied by (IC), and so satisfies (IC) if
\[
\frac{\partial}{\partial y} V(\theta, \tau^{-1}(y), y) \cdot \big(y - \tau(\theta)\big) \le 0 \qquad \forall y \in \tau(\Omega),\ \theta \in \Omega. \tag{B.23}
\]

The derivative equals
\begin{align*}
&V_2(\theta, \tau^{-1}(y), y) \left( \frac{d\tau}{d\hat\theta}\bigg|_{\tau^{-1}(y)} \right)^{-1} + V_3(\theta, \tau^{-1}(y), y) \\
&\qquad = V_2(\theta, \tau^{-1}(y), y)\, \frac{-V_3(\tau^{-1}(y), \tau^{-1}(y), y)}{V_2(\tau^{-1}(y), \tau^{-1}(y), y)} + V_3(\theta, \tau^{-1}(y), y) \\
&\qquad = V_2(\theta, \tau^{-1}(y), y) \left\{ \frac{V_3(\theta, \tau^{-1}(y), y)}{V_2(\theta, \tau^{-1}(y), y)} - \frac{V_3(\tau^{-1}(y), \tau^{-1}(y), y)}{V_2(\tau^{-1}(y), \tau^{-1}(y), y)} \right\}.
\end{align*}
If $\tau$ is strictly increasing and $V_2(\theta, \hat\theta, \tau(\hat\theta)) > 0$ (the other possibilities are handled mutatis mutandis), (B.23) is satisfied when $V_3(\theta, \hat\theta, \tau(\hat\theta))/V_2(\theta, \hat\theta, \tau(\hat\theta))$ is an increasing function of $\theta$ for all $\hat\theta \in \Omega$.

2. (Necessity of local single crossing for IC.) Suppose $\tau$ satisfies (IC). The second order condition is
\[
\frac{\partial^2}{\partial y^2} V(\theta, \tau^{-1}(y), y)\bigg|_{y = \tau(\theta)} \le 0,
\]
which is, after substituting for $d^2\tau^{-1}/dy^2$ (see (A.21)),
\[
-\frac{d\tau^{-1}}{dy}\left\{ \frac{d\tau^{-1}}{dy}\, V_{12}(\theta, \theta, \tau(\theta)) + V_{13}(\theta, \theta, \tau(\theta)) \right\} \le 0,
\]
i.e.,
\[
-\frac{d\tau^{-1}}{dy}\left\{ V_{13}(\theta, \theta, \tau(\theta)) - \frac{V_3(\theta, \theta, \tau(\theta))}{V_2(\theta, \theta, \tau(\theta))}\, V_{12}(\theta, \theta, \tau(\theta)) \right\} \le 0.
\]
Multiplying both sides of this inequality by $-(\tau')^2$ yields an expression equivalent to (20).
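As a closing sanity check, condition (19) can be evaluated on a grid for a hypothetical quadratic specification (an illustration of ours, not from the text): $V(\theta, \hat\theta, y) = \hat\theta - y^2/(2\theta)$ with $\tau(\theta) = \sqrt{\theta^2 - \theta_1^2}$, so that $V_2 = 1$ and $V_3/V_2 = -y/\theta$:

```python
import numpy as np

theta_1 = 1.0
tau = lambda th: np.sqrt(th**2 - theta_1**2)

# Hypothetical specification: V(th, th_hat, y) = th_hat - y**2/(2*th)
V2 = lambda th, th_hat, y: 1.0                 # dV/d(theta_hat)
V3 = lambda th, th_hat, y: -y / th             # dV/dy
ratio = lambda th, th_hat: V3(th, th_hat, tau(th_hat)) / V2(th, th_hat, tau(th_hat))

grid = np.linspace(theta_1 + 1e-3, 3.0, 120)
h = 1e-6
for th_hat in grid:
    tau_prime = (tau(th_hat + h) - tau(th_hat - h)) / (2 * h)
    # d/d(theta) of V3/V2 along the graph of tau, by central differences
    d_ratio = (ratio(grid + h, th_hat) - ratio(grid - h, th_hat)) / (2 * h)
    lhs = tau_prime * V2(grid, th_hat, tau(th_hat)) * d_ratio
    assert np.all(lhs >= -1e-9)    # condition (19) holds on the grid
print("condition (19) verified on the grid")
```

Since $\partial/\partial\theta\,[V_3/V_2](\theta, \hat\theta, \tau(\hat\theta)) = \tau(\hat\theta)/\theta^2 \ge 0$ and $\tau' > 0$, the product in (19) is nonnegative everywhere, so Theorem 4.1 applies.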

References

Burkart, M., and S. Lee (2011): “Smart Buyers,” Stockholm School of Economics and NYU, unpublished.


Cho, I.-K., and D. Kreps (1987): “Signaling Games and Stable Equilibria,” Quarterly Journal of Economics, 102(2), 179–221.

DeMarzo, P., and D. Duffie (1999): “A Liquidity-Based Model of Security Design,” Econometrica, 67(1), 65–99.

Fang, K.-T., S. Kotz, and K.-W. Ng (1990): Symmetric Multivariate and Related Distributions. Chapman and Hall, New York.

Gale, D. (1992): “A Walrasian Theory of Markets with Adverse Selection,” The Review of Economic Studies, 59(2), 229–255.

Gale, D. (1996): “Equilibria and Pareto Optima of Markets with Adverse Selection,” Economic Theory, 7(2), 207–235.

Glosten, L. R. (1989): “Insider Trading, Liquidity, and the Role of the Monopolist Specialist,” Journal of Business, 62(2), 211–235.

Hardin, C. (1982): “On the Linearity of Regression,” Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 61, 291–302.

Hellwig, M. (1992): “Fully Revealing Outcomes in Signalling Models: An Example of Nonexistence When the Type Space is Unbounded,” Journal of Economic Theory, 58(1), 93–104.

Kartik, N., M. Ottaviani, and F. Squintani (2007): “Credulity, Lies, and Costly Talk,” Journal of Economic Theory, 134(1), 93–116.

Kohlberg, E., and J.-F. Mertens (1986): “On the Strategic Stability of Equilibria,” Econometrica, 54(5), 1003–1037.

Kreps, D., and R. Wilson (1982): “Sequential Equilibria,” Econometrica, 50(4), 863–894.

Leland, H., and D. Pyle (1977): “Information Asymmetries, Financial Structure, and Financial Intermediation,” Journal of Finance, 32(2), 371–387.

Mailath, G. J. (1987): “Incentive Compatibility in Signaling Games with a Continuum of Types,” Econometrica, 55(6), 1349–1365.

Mailath, G. J., and G. Nöldeke (2008): “Does Competitive Pricing Cause Market Breakdown Under Extreme Adverse Selection?,” Journal of Economic Theory, 140(1), 97–125.

Mailath, G. J., M. Okuno-Fujiwara, and A. Postlewaite (1993): “Belief-Based Refinements in Signaling Games,” Journal of Economic Theory, 60, 241–276.

Mirrlees, J. A. (1971): “An Exploration in the Theory of Optimum Income Taxation,” Review of Economic Studies, 38, 175–208.

Roddie, C. (2011): “Theory of Signaling Games,” Nuffield College, Oxford.

Spence, A. M. (1973): “Job Market Signaling,” Quarterly Journal of Economics, 87(3), 355–374.
