
## Incentive Efficiency and The Virtual Utility Approach

Following Holmström and Myerson (1983), we say that a mechanism µ̄N for the grand coalition is (interim) incentive efficient if and only if µ̄N is incentive compatible and there does not exist any other incentive-compatible mechanism giving a strictly higher expected utility to all types ti of all players i ∈ N.9 Because the set of incentive-compatible mechanisms is a compact and convex polyhedron, by the supporting hyperplane theorem the mechanism µ̄N is incentive efficient if and only if there exist non-negative numbers λ = (λi(ti))i∈N, ti∈Ti, not all zero, such that µ̄N is a solution to

$$\max_{\mu_N \in M^{*}} \; \sum_{i \in N} \sum_{t_i \in T_i} \lambda_i(t_i)\, U_i(\mu_N \mid t_i). \qquad (2.1)$$
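The incentive-compatibility constraints underlying this program can be checked mechanically from the interim utilities Ui(µN | ti) and the deviation payoffs Ui(µN, τi | ti). Below is a minimal sketch for a hypothetical one-player, two-type, two-decision example; all utilities and mechanism probabilities are illustrative, not taken from the text:

```python
# Minimal sketch: verify incentive compatibility of a direct mechanism.
# One player with types T = {"t1", "t2"} and decisions D = {"a", "b"}.
# All numbers are hypothetical.

T = ["t1", "t2"]
D = ["a", "b"]

# u[(d, t)]: the player's utility from decision d when his type is t.
u = {("a", "t1"): 3.0, ("b", "t1"): 0.0,
     ("a", "t2"): 1.0, ("b", "t2"): 2.0}

# mu[t][d]: probability that the mechanism selects d when type t is reported.
mu = {"t1": {"a": 1.0, "b": 0.0},
      "t2": {"a": 0.0, "b": 1.0}}

def U(report, true_type):
    """Expected utility of `true_type` when reporting `report`."""
    return sum(mu[report][d] * u[(d, true_type)] for d in D)

def incentive_compatible():
    # Each type t must not gain by reporting any other type tau.
    return all(U(t, t) >= U(tau, t) for t in T for tau in T)

print(incentive_compatible())  # True for these numbers
```

With these numbers, truthful reporting yields 3 for type t1 (against 0 from misreporting) and 2 for type t2 (against 1), so the mechanism is incentive compatible.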

We shall refer to this linear-programming problem as the primal problem for λ. Let αi(τi | ti) ≥ 0 be the Lagrange multiplier (or dual variable) for the constraint that type ti of player i should not gain by reporting τi. Then the Lagrangian for this optimization problem can be written as

$$\mathcal{L}(\mu_N, \lambda, \alpha) = \sum_{i \in N} \sum_{t_i \in T_i} \left( \lambda_i(t_i)\, U_i(\mu_N \mid t_i) + \sum_{\tau_i \in T_i} \alpha_i(\tau_i \mid t_i) \left[ U_i(\mu_N \mid t_i) - U_i(\mu_N, \tau_i \mid t_i) \right] \right),$$

where µN ∈ MN. To simplify this expression, let

$$v_i(d, t, \lambda, \alpha) = \frac{1}{p(t_i)} \left[ \left( \lambda_i(t_i) + \sum_{\tau_i \in T_i} \alpha_i(\tau_i \mid t_i) \right) u_i(d, t) - \sum_{\tau_i \in T_i} \alpha_i(t_i \mid \tau_i)\, \frac{p(t_{-i} \mid \tau_i)}{p(t_{-i} \mid t_i)}\, u_i(d, (\tau_i, t_{-i})) \right]. \qquad (2.2)$$

The quantity vi(d, t, λ, α) is called the virtual utility of player i ∈ N from the joint action d ∈ D, when the type profile is t ∈ T, with respect to the utility weights λ and the Lagrange multipliers α. The above Lagrangian can then be rewritten as

$$\mathcal{L}(\mu_N, \lambda, \alpha) = \sum_{t \in T} p(t) \sum_{d \in D} \mu_N(d \mid t) \sum_{i \in N} v_i(d, t, \lambda, \alpha). \qquad (2.3)$$
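The equivalence between the two forms of the Lagrangian can be verified numerically. The sketch below uses a hypothetical one-player example (so that t−i is vacuous and p(t−i | ti) = 1, and p(t) = p(ti)); it computes the Lagrangian once from interim utilities and once from virtual utilities as in (2.3), and checks that the two coincide. All numbers are illustrative:

```python
# Numerical check that the virtual-utility form (2.3) agrees with the
# Lagrangian written in terms of interim utilities.  One player, two types,
# two decisions; with a single player t_{-i} is empty, so p(t_{-i}|t_i) = 1.
# All numbers are hypothetical.

T = ["t1", "t2"]
D = ["a", "b"]
p = {"t1": 0.4, "t2": 0.6}                      # prior over types
lam = {"t1": 1.0, "t2": 2.0}                    # utility weights lambda_i(t_i)
alpha = {("t2", "t1"): 0.5, ("t1", "t2"): 0.3}  # alpha[(tau, t)] = alpha_i(tau | t)

u = {("a", "t1"): 3.0, ("b", "t1"): 0.0,
     ("a", "t2"): 1.0, ("b", "t2"): 2.0}

mu = {"t1": {"a": 0.7, "b": 0.3},
      "t2": {"a": 0.2, "b": 0.8}}

def U(report, true_type):
    """Interim expected utility of `true_type` when reporting `report`."""
    return sum(mu[report][d] * u[(d, true_type)] for d in D)

def v(d, t):
    """Virtual utility (2.2), specialized to one player."""
    gain = (lam[t] + sum(alpha.get((tau, t), 0.0) for tau in T)) * u[(d, t)]
    loss = sum(alpha.get((t, tau), 0.0) * u[(d, tau)] for tau in T)
    return (gain - loss) / p[t]

# Lagrangian via interim utilities.
L_interim = sum(lam[t] * U(t, t)
                + sum(alpha.get((tau, t), 0.0) * (U(t, t) - U(tau, t)) for tau in T)
                for t in T)

# Lagrangian via virtual utilities, as in (2.3).
L_virtual = sum(p[t] * sum(mu[t][d] * v(d, t) for d in D) for t in T)

print(abs(L_interim - L_virtual) < 1e-9)  # the two expressions coincide
```

The agreement holds for any choice of numbers, since (2.3) is an algebraic rearrangement of the Lagrangian term by term.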

**Equity Principles for Bayesian Cooperative Games**

The Harsanyi NTU value can be characterized using two different fair allocation rules. The first of these two equity notions, introduced by Myerson (1980) under the name of balanced contributions, requires that, for any two members of a coalition, the amounts that each player would gain from the other's participation be equal when utility comparisons are made in some weighted utility scale. The second equity principle, called subgame value equity by Imai (1983), says that, for every coalition S ⊆ N, each player in S should obtain his Shapley TU value in the game restricted to the subcoalitions of S when utility has been made comparable in some weighted utility scale. These two equity notions stand in a dual relationship.

Given a vector of utility weights λ and a vector of Lagrange multipliers α, consider the fictitious game in which players make interpersonal utility comparisons in the virtual utility scales (λ, α). In this virtual game, each player's payoffs are expressed in the virtual utility scales, and virtual payoffs are transferable among the players (conditionally on every state). We assume that, as a threat during the bargaining process within the grand coalition N, each coalition S ⊂ N commits to some mechanism µS : TS → Δ(DS).19 We denote by MS the set of mechanisms for S, and let M = ∏S⊆N MS denote the set of possible profiles of mechanisms that the various coalitions might select.

Let vi(µS, t, λ, α) denote the linear extension of vi(·, t, λ, α) (as defined in (2.2)) over µS. We define WS(µS, t, λ, α) as the sum of the virtual utilities that the members of S ⊆ N would expect in state t when they select the mechanism µS, that is,

$$W_S(\mu_S, t, \lambda, \alpha) = \sum_{i \in S} v_i(\mu_S, t, \lambda, \alpha). \qquad (4.1)$$

Let W(η, t, λ, α) = (WS(µS, t, λ, α))S⊆N denote the characteristic function game when the vector of threats η = (µS)S⊆N ∈ M is selected by the various coalitions20 in the virtual game. For any vector η ∈ M, let ηS = (µR)R⊆S denote its restriction to the subcoalitions of S. We define W|S(ηS, t, λ, α) as the subgame of W(η, t, λ, α) obtained by restricting the domain of W(η, t, λ, α) to the subsets of S. Let φ be the Shapley TU value operator; for i ∈ S ⊆ N, φi(S, W|S(ηS, t, λ, α)) will thus denote the Shapley TU value of player i in the subgame restricted to S when the vector of threats ηS is selected in the virtual game.
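The Shapley TU value operator φ used here admits the standard random-order characterization: each player receives his average marginal contribution over all orderings of the players. A minimal sketch, with a hypothetical three-player characteristic function standing in for W|S(ηS, t, λ, α):

```python
from itertools import permutations

def shapley(players, w):
    """Shapley TU value: phi_i is the average marginal contribution of i
    over all orderings of the players.  `w` maps frozensets to worths."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += w[coalition | {i}] - w[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

# Hypothetical worths for S = {1, 2, 3} (standing in for the virtual
# characteristic function W|_S): superadditive, with player 3 more productive.
w = {frozenset(): 0.0,
     frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 1.0,
     frozenset({1, 2}): 2.0, frozenset({1, 3}): 3.0, frozenset({2, 3}): 3.0,
     frozenset({1, 2, 3}): 6.0}

phi = shapley([1, 2, 3], w)
print(phi)  # the values sum to w(N) = 6 (efficiency)
```

By symmetry of players 1 and 2 in this worth function, φ1 = φ2 = 5/3, and player 3, being more productive, obtains φ3 = 8/3.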

We denote by Vi(µS | ti, λ, α) the expected virtual utility of type ti of player i ∈ S when the members of S agree on µS, i.e.,

$$V_i(\mu_S \mid t_i, \lambda, \alpha) := \sum_{t_{-i} \in T_{-i}} p(t_{-i} \mid t_i)\, v_i(\mu_S, t, \lambda, \alpha). \qquad (4.2)$$
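The conditional probabilities in (4.2) follow from the common prior via p(t−i | ti) = p(t) / p(ti). The sketch below computes the expected virtual utility for a hypothetical two-player coalition with a single joint decision and all multipliers α set to zero, in which case (2.2) reduces to vi(d, t, λ, 0) = λi(ti) ui(d, t) / p(ti). All numbers are illustrative:

```python
# Sketch of (4.2): expected virtual utility of player 1, for a hypothetical
# two-player coalition with a single joint decision d and alpha = 0, so that
#   v_1(d, t, lambda, 0) = lambda_1(t_1) * u_1(d, t) / p(t_1).
# All numbers are illustrative.

T2 = ["H", "L"]                          # player 2's types (t_{-1})
p = {("H", "H"): 0.3, ("H", "L"): 0.2,   # common prior over (t_1, t_2)
     ("L", "H"): 0.1, ("L", "L"): 0.4}
lam1 = {"H": 1.0, "L": 2.0}              # utility weights of player 1's types
u1 = {("H", "H"): 4.0, ("H", "L"): 2.0,  # u_1(d, t) for the single decision
      ("L", "H"): 1.0, ("L", "L"): 3.0}

def p1(t1):
    """Marginal probability of player 1's type."""
    return sum(p[(t1, t2)] for t2 in T2)

def cond(t2, t1):
    """p(t_{-1} | t_1) = p(t) / p(t_1)."""
    return p[(t1, t2)] / p1(t1)

def v1(t1, t2):
    """Virtual utility of player 1 with alpha = 0."""
    return lam1[t1] * u1[(t1, t2)] / p1(t1)

def V1(t1):
    """Expected virtual utility (4.2) of player 1's type t1."""
    return sum(cond(t2, t1) * v1(t1, t2) for t2 in T2)

print(V1("H"))  # equals 0.6*8 + 0.4*4 = 6.4 (up to float rounding)
```

Note the scaling by 1/p(ti): even with α = 0, virtual utilities inflate a type's payoffs in proportion to how unlikely that type is.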

### Some Comments About the (Non-)Existence of the H-solution

The H-solution is characterized by strong equity conditions that may lead to its non-existence in some cases. In this section we exhibit an example of a 4-player cooperative game with complete information in which there is no H-solution. Two features hinder the existence of the H-solution in this example: first, optimal egalitarian threats do not exist for some utility weights; second, optimal egalitarian threats vary discontinuously with the utility weights, which makes it impossible to satisfy conditions (i) and (iv) in the definition of the H-solution simultaneously. This example can be used to construct a game with incomplete information satisfying the same properties; the method is outlined in footnote 31 below. We study instead the game with complete information, which is easier to analyze. Finally, we discuss the reasons why the methods and techniques used to obtain existence results for the Harsanyi NTU value cannot be readily adapted to games with incomplete information.

#### Free Disposal and the Structure of Incentives

When information is complete, the above difficulties are ruled out by considering games whose characteristic function is comprehensive (the "free disposal" assumption). One is then tempted to accommodate free disposal activities by introducing decisions in each DS specifying how much utility a player may discard. This has no significant consequence when information is complete; under asymmetric information, however, adding new decisions may change the incentive structure of the game: free disposal can be used for signaling purposes, i.e., for weakening incentive compatibility. As a result, for any interim utility allocation on the interim incentive efficient frontier (of the grand coalition), we cannot in general extend the original game by introducing additional decisions allowing players to discard utility (conditionally on every state) while leaving the original utility allocation efficient in the expanded problem.32 To illustrate this issue, consider again the (sub)game faced by players 1 and 3 in Example 1. Assume now that player 3 is allowed to dispose of his utility in state H. Specifically, let d̃ be such that u3(d̃, H) = 0, u3(d̃, L) = 5 and u1(d̃, H) = u1(d̃, L) = 0. Decision d̃ is equivalent to implementing decision d13, except that player 3 agrees to discard 10 units of his utility in state H. Now consider the expanded problem with decision set D̃{1,3} = D{1,3} ∪ {d̃}. The new set of incentive-feasible interim utility allocations is depicted in Figure 4.

**Table of contents:**

**A Generalization of the Harsanyi NTU Value to Games with Incomplete Information**

1 Introduction

2 Formulation

2.1 Bayesian Cooperative Game

2.2 Incentive Efficiency and The Virtual Utility Approach

2.3 The M-solution

3 Motivating Examples

3.1 Example 1: A Collective Choice Problem

3.2 Example 2: A Bilateral Trade Problem

4 Equity Principles for Bayesian Cooperative Games

5 Optimal Threats

6 The H-Solution

6.1 Example 1

6.2 Example 2

7 Some Comments About the (Non-)Existence of the H-solution

7.1 Example 3: Non-existence of the H-solution

7.2 Free Disposal and the Structure of Incentives

8 Appendix: Proof of Proposition 4

**On the Values for Bayesian Cooperative Games with Sidepayments**

1 Introduction

2 Bayesian Cooperative Game

3 Incentive Efficiency and Virtual Utility

4 Values for Bayesian Cooperative Games with Orthogonal Coalitions

4.1 The M-Solution

4.2 The H-Solution

4.3 Reconciling the Differences

5 Values for Two-person Bayesian Games

Incentives in Cooperation and Communication

**The Value of Mediated Communication**

1 Introduction

2 Motivating Example

3 Basic Game

4 Mediated Persuasion

4.1 Mediated Persuasion Under Verifiable Information

4.2 The Virtual Persuasion Game

4.3 Optimal Mediators

4.4 Extreme Communication Equilibria and the Number of Signals

5 Discussions

5.1 Cheap-Talk Implementation

5.2 Information Design Problems

**Bibliography**