Hiring Guns: Strategic Delegation and Common Agency 


Players, payoffs and communication technology

We study a setting where a population of |N| = n agents is embedded in an exogenously given network G. The network is represented by an adjacency matrix G that keeps track of all the direct connections between agents, where g_ij = 1 if agent i is connected to agent j and g_ij = 0 otherwise. Links are not necessarily reciprocal (g_ij ≠ g_ji may hold) and we focus only on directed networks. Let us denote the set of individual i's direct connections as N_i(G) = {j ≠ i : g_ij = 1}. We study the problem of how a message created by one agent – the source – spreads through the network and eventually reaches the designated receiver of the message – the sink. The identity and position of the source and of the sink are publicly known to all agents. The message created by the source is passed on from agent to agent and is relayed from the source to the sink via word of mouth. Successful transmission from the source to the sink faces two threats. First, messages can involuntarily be altered and can mutate at every step of the communication chain, resulting in the possible spread of the wrong message. Second, there exist two types of agents, and the set of possible types is T = {θ_c, θ_b}. An agent is of type θ_c with probability π and is willing to retransmit the message he receives. Agents of type θ_c have the choice to communicate when on the path from the source to the sink. We call them “communicative” agents. Conversely, with probability 1 − π any given agent is of type θ_b and blocks the transmission of messages to his peers. We call them “blockers”. As highlighted by existing research (Carley and Lin (1997); Adamic et al. (2016)), the complete breakdown of channels is a serious threat to successful communication. While in organizations employees could be unavailable or ignore the necessity of communication, in online messaging some users might experience connection errors or device problems.
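The network objects just defined can be sketched in a few lines. This is our own minimal illustration, not code from the paper: a directed adjacency matrix with g_ij = 1 when i is connected to j, and the neighbourhood N_i(G) = {j ≠ i : g_ij = 1}; the 4-agent network below is hypothetical.

```python
def neighbours(G, i):
    """Return the set of agent i's direct (out-)connections."""
    return {j for j, g_ij in enumerate(G[i]) if j != i and g_ij == 1}

# Hypothetical 4-agent directed network: links need not be reciprocal,
# e.g. agent 0 is connected to agent 1 but not vice versa.
G = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

print(neighbours(G, 0))  # agent 0's neighbourhood: agents 1 and 2
```

The asymmetry of the matrix is what makes the network directed: membership of j in N_i(G) says nothing about whether i belongs to N_j(G).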
We want to study how the risk of channel breakdown influences players' strategic investments in communication precision. The breakdown of a channel differs fundamentally from a communicative agent not investing in communication precision, i.e. a middle manager sending a completely ambiguous report to his manager. In the latter case, the message is passed on from i to j, but the probability that the transmitted message matches the information item previously received by i is 1/2.
There is complete information about the network architecture, but agents do not know how the two types of agents are located in the network. More specifically, each agent ignores the types of his neighbors in the network, and the population shares a common prior belief on the distribution of agents' types (π, 1 − π). The source perfectly observes the underlying state of the world and generates a message describing it. We code the state of the world and messages as either being in favor of an action (“1”) or against it (“0”). We denote the set of possible states of the world by S = {s0, s1} and the set of messages by M = {w0, w1}. We assume without loss of generality that the source observes the state s0 and generates the word w0. This message is passed from the source on to her neighbors, who in turn pass it to their neighbors and so forth, until the message reaches the sink, who takes the payoff-relevant decision for all the players in the network, picking an action in A_sink = {w0, w1, ∅}.
There is no conflict of interest and all individuals seek the truth, i.e. everyone wants the sink to receive the true message and learn the underlying state. Agents are Bayesian and share a common prior belief of 1/2 on the original message being generated by the source. Each agent decides how much to invest in the precision of the message she sends. The precision of communication is a costly effort and reflects real costs of communicating: primarily the time and effort involved (Dewatripont and Tirole (2005); Niehaus (2011)). The cost of communication is borne by agents only when they speak and not when they listen. This is a simplifying assumption reflecting the fact that listening and speaking are perfect complements and that it is sufficient to study strategic investments in one activity to capture the main workings of the model. The cost agent i endogenously chooses to pay for communicating with agent j ∈ N_i is c(x_ij) = c · x_ij.
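The two threats to transmission – breakdown and imprecision – can be illustrated with a simulation on a line network from source to sink. This is a hedged sketch under our own assumptions, not the paper's exact technology: each intermediary is communicative with probability pi and otherwise blocks the chain, and a communicative agent who invests precision x relays the correct bit with probability p(x) = 1/2 + x/2, so that x = 0 yields the uninformative 1/2 of the text above and x = 1 yields perfect relay. The functional form p(x) is purely illustrative.

```python
import random

def relay_chain(pi, precisions, true_bit=0, rng=None):
    """Simulate one transmission along a chain; return the bit the sink
    receives, or None if some intermediary is a blocker."""
    rng = rng or random.Random(0)
    bit = true_bit
    for x in precisions:
        if rng.random() >= pi:            # this agent is a blocker
            return None
        if rng.random() >= 0.5 + x / 2:   # noisy relay: the message mutates
            bit = 1 - bit
    return bit

def success_rate(pi, precisions, trials=20000):
    """Monte Carlo estimate of the probability the sink learns the true bit."""
    rng = random.Random(42)
    hits = sum(relay_chain(pi, precisions, 0, rng) == 0 for _ in range(trials))
    return hits / trials

# With full precision the only remaining risk is breakdown, so the success
# probability is approximately pi raised to the length of the chain.
```

With pi = 1 and full precision the message always arrives intact; lowering either parameter separates the two failure modes the text distinguishes.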

Monitors and Game Overview

We choose monitor candidates with respect to their Bonacich centrality, and their assignment to groups can be determined by either democratic election or random exogenous assignment. Underlying the framework is the assumption that participants' behavior in the experiment will likely affect market and non-market interactions outside the laboratory, such as access to jobs, informal loans or other opportunities. In this context, we assume that monitors have the power to spur cooperative behavior through their capacity to report outside the laboratory bad behavior that occurred within our experiment. In 2019, to provide support for our framework, we conducted a survey of more than 300 randomly selected women. We shared with them a vignette of our experiment and asked several questions about the reputational power of monitors. The purpose of this survey was to capture their perceptions of the role of monitors and possible motivations behind voting for one of them. We described our study and asked subjects whether information about misbehavior in the experiment would spread, how that would depend on the identity of the monitor, and what could be the motivations for voting to have a monitor. We find that on average respondents believe that high central monitors are able to spread information to almost 60% of the village population, while low central or average central monitors would reach less than 40% of the village population. Similarly, more than 80% of respondents declared that they would vote for a monitor in order to keep in check other group members through the threat of reputation. We present the main results in Figure 2.8.
We pick monitors as a function of their Bonacich centrality. For every given village, we compute the eigenvector Bonacich centrality of all women and select for the role of monitors those with a centrality score greater than the 95th percentile or smaller than the 5th percentile. We choose eigenvector centrality because it captures how much information emanating from a monitor should spread in the network, reaching also individuals who are not directly connected to the monitor. Our choice of basing our experiment on eigenvector centrality, and not on other centrality measures, derives from the literature (Banerjee et al. (2014); Banerjee et al. (2019a); Breza and Chandrashekhar (2018)). These works show that an individual's eigenvector centrality can explain his capacity to spread information in the larger network and that villagers are able to accurately identify central members of the community. In order to check in our context the robustness of this choice against alternative measures of centrality, we compute correlations between three centrality measures for the whole undirected network sample: degree, betweenness and (eigenvector) Bonacich centrality. While degree simply measures how many links a node has, betweenness quantifies the number of times a node acts as a bridge along the shortest path between any two other nodes. The results shown in Table 2.1 give us reason to think that, in our sample, attributing the roles of monitors according to a measure of eigenvector centrality is robust to different centrality measures. The correlations in Table 2.1 are very strong: the coefficient between degree and Bonacich centrality is almost 0.92, while the coefficient between the latter and betweenness is almost 0.87. Computing the same coefficients on the subset of monitors, we obtain even stronger correlations between the different measures.
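The centrality comparison just described can be reproduced in a few lines. The sketch below is ours, not the paper's code: it computes eigenvector (Bonacich) centrality by power iteration on a small hypothetical undirected network and correlates it with degree; betweenness is omitted for brevity.

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Power iteration on the adjacency matrix: converges to the leading
    (Perron) eigenvector for a connected, non-bipartite graph."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)
    return x

A = np.array([  # hypothetical 5-node undirected network
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

degree = A.sum(axis=1)                       # how many links each node has
eig = eigenvector_centrality(A)
corr = np.corrcoef(degree, eig)[0, 1]        # strongly positive, as in Table 2.1
```

On this toy network the two measures rank the most and least central nodes identically and correlate above 0.9, mirroring the pattern the text reports for the village sample.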
These figures give us reason to believe that the centrality of monitors is an intrinsic network characteristic of individuals that underlies different possible measures.
In order to neatly disentangle the different possible channels that might drive behavior, we set up an experiment where groups of three individuals are asked to privately vote for their preferred monitor and then play a standard public good game twice. The experimental session is sequenced as follows. First, players are assigned to a group formed either by their closest friends or by socially distant peers; the order of assignment to these two group compositions is randomized. Second, after being assigned their groups, players privately vote for their preferred monitor. Third, the choice of monitor is immediately followed by a contribution game. Each individual plays 2 rounds of a public good game within each group, once with the elected monitor and once with a randomly picked monitoring option, where we randomize the order of the two treatments. Groups are then reshuffled so that the same player is placed in a different group composition (dense or sparse) and the game unfolds again as explained above. In total, each individual plays 4 rounds in two different groups (dense and sparse). We are able to exploit this design to extract individual fixed effects and get partially rid of the endogeneity of networks when evaluating the impact of treatments. After participants play in the experimental sessions and receive payment for their performance in the games, we administer a second questionnaire meant to capture caste, wealth, religion, membership in community-based organizations and a set of other individual-level characteristics. Participants are quite homogeneous in terms of wealth and networks are highly homogeneous in terms of caste.


Timing, Actions and Payoffs

Agents play a two-stage game. In the second stage, agents play a voluntary contribution game which can be overseen either by a third-party monitor or by no one. The third-party monitor can be assigned either through a random lottery or elected through a democratic vote, which happens in the first stage. More precisely, the game unfolds as follows. First, agents simultaneously vote for their preferred monitor m_i ∈ {0, 1}, where m_i = 0 means individual i votes for no monitor and m_i = 1 means i votes for having the monitor. Once participants cast their votes, a monitoring technology is assigned to the group according to the following voting rule: m = 1 if m_i = m_j = 1 and m = 0 otherwise, where m denotes the outcome of the vote. Second, agents make their contribution decision c_i ∈ R+. The action profile of agent i is then (m_i, c_i). The total contribution of all players is increased by 50% and divided equally among the group members, implying that the rate of return for the contribution game with two players is 3/4. The utility of player i is a function of both c_i and c_j, the level of altruism α_i and the rate of return of the contribution game. We assume a convex cost of contributing to represent the behavioral burden of contributing and to ensure the existence of an interior solution. Further, we believe that in this context belief-dependent motivations deeply affect players' actions and, in the spirit of psychological games, we assume that how much player i values the utility of player j depends on i's belief about the altruism of player j, b_i(α_j). In this regard, we take inspiration from Rabin (1993), which models the reciprocity of one agent as a function of beliefs about the other agent. The payoff of player i in the contribution game without a monitor is then U_i(m = 0) = W − c_i − c_i² + (3/4)(c_i + c_j) + α_i b_i(α_j)[W − c_j − c_j² + (3/4)(c_i + c_j)].
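A minimal numerical sketch of the two stages, under our own reading of the formulas above: a unanimity voting rule (the monitor is installed only if both agents vote for it) and the no-monitor payoff with rate of return 3/4 and quadratic contribution cost. The endowment W, the altruism parameter alpha_i and the belief b_i about j's altruism are illustrative placeholders, not calibrated values from the paper.

```python
def vote_outcome(m_i, m_j):
    """Voting rule: m = 1 iff m_i = m_j = 1, and m = 0 otherwise."""
    return 1 if (m_i == 1 and m_j == 1) else 0

def payoff_no_monitor(c_i, c_j, W, alpha_i, belief_alpha_j):
    """U_i(m = 0) = W - c_i - c_i^2 + (3/4)(c_i + c_j)
                    + alpha_i * b_i(alpha_j) * [W - c_j - c_j^2 + (3/4)(c_i + c_j)]."""
    own = W - c_i - c_i ** 2 + 0.75 * (c_i + c_j)
    other = W - c_j - c_j ** 2 + 0.75 * (c_i + c_j)
    return own + alpha_i * belief_alpha_j * other

# A purely selfish player (alpha_i = 0) simply keeps W net of her own
# contribution cost plus her share of the multiplied pot.
```

The weight alpha_i * b_i(alpha_j) on the other player's material payoff is what makes the game psychological: i's best response shifts with her belief about j's altruism, not only with j's action.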

Preliminary Findings and Possible Limitations

We start the analysis by looking at the individual-level variation in the choice of the monitor. In Table 2.2, the numbers along the diagonal represent the percentage of individuals that always choose the same voting strategy irrespective of group composition. The largest proportion, 34.95%, always chooses to have no monitor, followed by 19.68% that always vote to have a high central monitor. The voting result shows substantial variation in voting strategy. Looking at the aggregate demand for peer monitoring, both dense and sparse groups vote more often to not have a monitor. Figure 2.4 shows that in dense groups around 32% of players vote for a high central monitor, while in sparse groups more than 39% of players do so. A low central monitor is seldom chosen, accounting for around 13% in both dense and sparse groups. For contribution, exogenous monitoring increases contribution only in sparse groups, as seen from Table 2.3. We want to study how this differs when individuals play under the monitor that has been endogenously chosen by the group. To begin with, we compare the outcomes under endogenous and exogenous institutions, clubbing all three monitor treatments together for the latter in Figure 2.5. The political process whereby the monitoring institution is obtained matters only for sparse groups, where endogenous monitoring (in blue) increases contribution compared to the exogenous one.
Before presenting the results, we highlight two possible threats to our results and point to possible solutions. First, a number of recent studies have focused on the role that group inequality could play in contribution games (e.g. Nishi et al. (2015); Fehr and Schmidt (1999); Bolton and Ockenfels (2006)). We build three variables in order to capture inequality along dimensions that are particularly relevant to our context: wealth, caste and education. The inequality indices are simply the group variance of the indices we constructed with our questionnaire on individual-level characteristics. We observe that the 19 villages where we conduct our experimental sessions display very high degrees of homogeneity along these three dimensions. We control for these variables in all regressions under the label “Group Characteristics”, which also embeds a set of socio-economic characteristics at the individual level. None of these variables has a significant impact on cooperative behavior and our results are robust to their inclusion among the regressors. Second, our results could be sensitive to the process of network elicitation. We ask for at least three “nominations” of friends. In most interviews, women named an average of 4 women, which may not be fully exhaustive and may yield networks that are sparser than they actually are. This could imply an overestimation of social distance, i.e. individuals are actually socially closer than what they appear to be, which in turn may bias our results. However, it does not represent a threat to the validity of our results. On the contrary, it implies that the estimated effects of our treatments represent a lower bound of the real effect.

Impact of Different Exogenous Monitoring

For contribution, we start with the baseline case where monitors are assigned exogenously and study the difference in contribution between sparse and dense groups. As seen from Table 2.5, in sparse groups average contributions increase significantly (p-value 0.014) by Rs 7.4 (15.8% of the mean) in the presence of a high central monitor (H) as compared to no monitor (NM). In dense groups, there is a Rs 4.5 increase (8.3% of the mean) but the difference is not significant. This result is in line with the literature suggesting that the presence of a central monitor increases cooperation only in sparse groups (Breza et al. (2016)). Further, the cost of the monitor being 8% of the average payoff, it is optimal for sparse groups to vote for a monitor but not for dense ones. Taking only the exogenous monitor treatment, we run a linear regression with fixed effects of contribution on the type of monitor that was assigned and the group composition. It takes the following form: c_jt = α + β1 Dense + β2 H + β3 L + β4 H × Dense + β5 L × Dense + γ_j + δ_t + ε_jt.
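A regression of this shape can be illustrated on synthetic data. The sketch below is ours, not the paper's estimation code: individual and round fixed effects are dropped for brevity, Dense, H and L are 0/1 indicators (H and L mutually exclusive), the outcome is built noiselessly from hypothetical coefficients, and OLS then recovers them exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
dense = rng.integers(0, 2, n)          # 1 = dense group, 0 = sparse
h = rng.integers(0, 2, n)              # 1 = high central monitor assigned
l = (1 - h) * rng.integers(0, 2, n)    # 1 = low central monitor (exclusive of H)

# Hypothetical true parameters (alpha, beta1..beta5), illustration only.
true = np.array([40.0, -3.0, 7.4, 1.0, -4.0, 0.5])

# Design matrix: intercept, Dense, H, L and the two interaction terms.
X = np.column_stack([np.ones(n), dense, h, l, h * dense, l * dense])
c = X @ true                           # noiseless synthetic contributions

beta, *_ = np.linalg.lstsq(X, c, rcond=None)
```

In the real estimation, the omitted category NM in a sparse group is absorbed by the intercept, so beta2 is read directly as the H-versus-NM contribution gap in sparse groups, and beta2 + beta4 gives the corresponding gap in dense groups.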

Table of contents:

1 Whispers in Networks 
1.1 Introduction
1.1.1 Related Literature
1.2 The Baseline Model
1.2.1 Players, payoffs and communication technology
1.2.2 Conversation in Trees
1.3 Conversation in a simultaneous game
1.4 Conversation in a sequential game
1.5 Cycles
1.5.1 Two Channels
1.5.2 Unique Channel
1.6 Conclusion
1.7 Appendix: Proofs
1.7.1 Proof of Proposition 1.1
1.7.2 Proof of Proposition 1.2
1.7.3 Proof of Proposition 1.3
1.7.4 Proof of Proposition 1.4
1.7.5 Proof of Proposition 1.5
2 Endogenous Institutions: a network experiment in Nepal 
2.1 Introduction
2.1.1 Related Literature
2.2 Experiment
2.2.1 Networks and Data
2.2.2 Monitors and Game Overview
2.2.3 Experimental Context
2.2.4 Design
2.3 The Framework
2.3.1 Types
2.3.2 Timing, Actions and Payoffs
2.3.3 Equilibrium
2.4 Results
2.4.1 Preliminary Findings and Possible Limitations
2.4.2 Statistical Estimation
Impact of Group Composition on Monitor Voting
Impact of Different Exogenous Monitoring
Impact of Endogenous v/s Exogenous Monitoring
Impact of Order of Endogenous/Exogenous
2.5 Conclusion
2.6 Appendix
2.6.1 Figures
2.6.2 Tables
2.6.3 Experiment Instructions
2.6.4 Summary Statistics
2.6.5 Monitor Choice
2.6.6 Model with Three Agents
Proof of Proposition 2.1
Proof of Proposition 2.2
Proof of Proposition 2.3
3 Delegating Conflict 
3.1 Introduction
3.1.1 Related Literature
3.2 The Baseline Model
3.2.1 Players, actions and payoffs
3.2.2 Equilibrium
3.2.3 Comparative statics
3.3 Contracts: Complete Information
3.4 Incomplete Information
3.4.1 Incomplete information on the opposing militia’s ideology
3.4.2 Incomplete information on the ideology of both militias: second best contracts
3.5 Conclusion
3.6 Appendix: Proofs
3.6.1 Proof of Proposition 3.1
3.6.2 Proof of Proposition 3.2
3.6.3 Proof of Proposition 3.3
3.6.4 Proof of Proposition 3.5
3.6.5 Proof of Proposition 3.6
3.6.6 Proof of Proposition 3.7
3.6.7 Proof of Proposition 3.8
4 Hiring Guns: Strategic Delegation and Common Agency 
4.1 Introduction
4.1.1 Related Literature
4.2 Strategic Delegation of War
4.2.1 Players, actions and types
4.2.2 The game
4.2.3 Results
4.3 Competing for a Common Militia
4.3.1 Setting and governments’ programs
4.3.2 Optimization
4.3.3 Results
4.4 Conclusion
4.5 Appendix: Proofs
4.5.1 Proof of Lemma 4.1
4.5.2 Proof of Proposition 4.1
4.5.3 Proof of Lemma 4.2
4.5.4 Proof of Lemma 4.3
4.5.5 Proof of Corollary
4.5.6 Proof of Proposition 4.2
4.5.7 Proof of Proposition 4.3

