Game-theoretic model for control effects of embeddedness on trust in social networks


Previous work – contemplating and modeling trust

A wide variety of literature exists on computational modeling of, and reasoning about, trust. However, the meaning of trust employed by different authors varies across the span of existing work. Bonatti et al. (2005) point out two different perspectives on determining trust, policy-based and reputation-based, each developed within a different environmental context and targeting different requirements; both address the same problem of ‘establishing trust among interacting parties in distributed and decentralized systems’ (ibid p.12), but assume different settings. Policy-based trust refers to reliance on objective, strong security mechanisms such as trust certification authorities, while in reputation-based trust, trust is computed from ‘local experiences together with the feedback given by other entities in the network’ (ibid p.11). In the latter case, research uses the history of an individual’s actions and behavior to compute trust over a social network through direct relations or recommendations – i.e. two parties rely on a third party when they have no direct trust information about each other. Based on this identification of two perspectives on trust in the semantic web, Artz and Gil (2007) categorize trust research into four major areas: policy-based trust, reputation-based trust, general models of trust, and trust in information resources, noting that several works fit into more than one category.
Much of the research using policies to express in what situation, for what, and how to determine trust in an entity relies on credentials, while generally utilizing a broad range of information to make trust decisions. Noteworthy in the application of credentials is the essential need to establish trust in both directions, as highlighted in the evolving work on policies addressing ‘how much to trust another entity to see your own credentials when you wish to earn that entity’s trust’ (Artz and Gil 2007 p.65). Several studies (Winsborough, Seamons et al. 2000; Yu, Winslett et al. 2001; Winslett, Yu et al. 2002; Li, Winsborough et al. 2003; Yu and Winslett 2003; Nejdl, Olmedilla et al. 2004) have focused on this problem, some adopting a view of trust as established through security techniques (e.g. authentication, encryption, etc.). Examples of such contributions are the trust management language RT0 (Li, Winsborough et al. 2003), the PeerTrust policy and trust negotiation language (Nejdl, Olmedilla et al. 2004), and the Protune provisional trust negotiation framework (Bonatti and Olmedilla 2005). Further contributions, such as Gandon and Sadeh (2004), aim to enable context-aware applications on the web – i.e. applications that disclose credentials only in the proper context – by making use of ontologies. However credentials are viewed, they are still subject to trust decisions as to whether a given credential can be believed to be accurate (Artz and Gil 2007), given that it is often undesirable to have a single authority in charge of deciding whether one is to be trusted. This problem, termed trust management (Artz and Gil 2007), has been addressed in a number of studies through trust policies (Blaze, Feigenbaum et al. 1996; Blaze, Feigenbaum et al. 1999; Kagal, Finin et al. 2005).
In social networks, where individuals are privileged to make their own decisions about whom to trust and in what situation, researchers have rejected consulting a central trusted third party, shifting the focus to reputation-based trust. Yu and Singh (2002; 2003; 2004) present a ‘decentralized’ solution by providing approaches that use information received from external sources, witnesses, about individuals’ reputations, further weighted by the reputation of the witnesses themselves, allowing people to determine trust based on the information they receive in a network. Such information, most commonly called referral trust, was first proposed by Beth et al. (1994), who provide methods for computing degrees of trust based on the received information; it has been further addressed by other scholars such as Sabater and Sierra (2002) and Xiao and Benbasat (2003). Many reputation-based approaches to trust in peer-to-peer networks carry the need for a growing performance history to maintain referral trust information. Aberer and Despotovic (2001), in contrast with Yu and Singh (2003; 2004), address this by using statistical analysis of reputation information to characterize trust, resulting in a more scalable approach. Finally, although many studies examine trust in peer-to-peer networks, Olmedilla et al. (2005) point out the limitations of existing academic work on trust in the context of grid computing.
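The witness-weighting idea above can be sketched in a few lines of Python. This is a minimal illustration, not the actual formulation of Yu and Singh: the [0, 1] rating scale and the weighted-average aggregation rule are assumptions made here for concreteness.

```python
def aggregate_reputation(reports):
    """Combine witness reports about a target into one reputation score,
    weighting each report by the reputation of the witness itself.

    reports: list of (witness_reputation, reported_rating) pairs,
    both assumed to lie in [0, 1]."""
    total_weight = sum(weight for weight, _ in reports)
    if total_weight == 0:
        return 0.0  # no credible witnesses, hence no basis for trust
    return sum(weight * rating for weight, rating in reports) / total_weight

# Two well-reputed witnesses rate the target highly; a poorly reputed
# witness disagrees but contributes little to the aggregate.
score = aggregate_reputation([(0.9, 1.0), (0.8, 1.0), (0.1, 0.0)])
```

Weighting by witness reputation captures the point made above: reports from discredited witnesses barely move the aggregate, so manipulating one's reputation through unreliable allies is ineffective.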
Other scholars in this field take a different perspective on reputation, defining it as a measure for trust where individuals create a web of trust by identifying reputation information on others. Golbeck and Hendler (2004a; 2004b), introducing a way to compute trust for the TrustMail application, make use of an ontology to express information about others’ reputation and trust, which in turn allows trust to be quantified and used in algorithms for measuring trust between any pair of entities in a network. Such quantification of trust, together with the accompanying algorithms, is referred to as a trust metric: a technique for predicting how much a certain user can be trusted by the other users of the community. One important set of research in this area assumes a given web of trust, also called a Trust Overlay Network (TON) by some scholars, in which a link between two entities carries the value of the trust decision made between them, and the absence of a link means no trust decision has been made. Notably, such studies disregard how the trust decision was made, as long as the value of trust is quantified. The basic assumption of trust metrics is that trust can be propagated in some way. Empowering individuals to make trust decisions rather than deferring to a single authority raises the idea of trust transitivity – i.e. if A trusts B and B trusts C, then A trusts C. The rationale is that one trusts a friend more than a stranger, so, under certain conditions, a friend of a friend is likely more trustworthy than a random stranger. This has attracted the attention of many researchers, leading to further contributions exploring how trust is propagated within a web of trust. Stewart’s (1999) work describes a set of hypotheses about how trust is transferred between hyperlinks on the web, specifically from a trusted web resource to an un-evaluated one.
His later study (Stewart and Zhang 2003) explains how to compute transitivity of trust where the actual quantities of trust, and distrust, are given. Guha et al. (2004) likewise evaluate several methods for propagating trust and distrust in a given network. Such works further lead to the computation of global trust values, as in the PageRank (Brin and Page 1998) and EigenTrust (Kamvar, Schlosser et al. 2003) algorithms. In contrast with global trust values, others emphasize local trust values to compute personalized results for each entity. Massa and Avesani (2005) address the problem of controversial users (those who are both trusted and distrusted), suggesting that computed global trust values for controversial users will not be as accurate as local values because of the global disagreement on trust for those users. The distinctive characteristic of all of these approaches is that they neglect context, since they perform the computation over a web of trust that does not differentiate between referral trust and ‘topic specific trust’ (Artz and Gil 2007 p.74).
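The transitivity assumption underlying trust metrics can be made concrete with a small sketch. Here trust attenuates multiplicatively along a chain and the best path wins; this is one simple local trust metric among many possible choices, not the scheme of any particular work cited above.

```python
def propagated_trust(web, source, target, visited=None):
    """Best multiplicative trust along any acyclic path from source
    to target in a web of trust with edge values in (0, 1]."""
    if source == target:
        return 1.0
    visited = (visited or set()) | {source}  # avoid cycles
    best = 0.0
    for neighbor, direct in web.get(source, {}).items():
        if neighbor not in visited:
            best = max(best, direct * propagated_trust(web, neighbor, target, visited))
    return best

# A trusts B (0.9) and B trusts C (0.8): A infers 0.9 * 0.8 = 0.72
# trust in C -- a friend of a friend ranks above a stranger (0.0)
# but below a direct friend.
web = {"A": {"B": 0.9}, "B": {"C": 0.8}}
```

The multiplicative rule reflects the intuition in the text: inferred trust along a chain never exceeds the trust in the first link, so trust decays with social distance.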
Further scholars move on to general considerations and properties of trust, presenting a broader view on properties and models of trust. In their seminal study, Knight and Chervany (1996) integrate existing work on trust and highlight the different uses of the word “trust” in social science research. They identify four significant qualities taken into account when making a trust decision: competence, benevolence, integrity, and predictability. Later, Ridings et al. (2002) simplify the factors engaged in a trust decision by eliminating predictability, whereas Acrement (2002) suggests seven qualities of trust from a business management perspective: predictability, integrity, congruity, reliability, openness, acceptance, and sensitivity. A notable work in this area is that of Mui et al. (2002), which uses the key concept of reciprocity to derive a computational model of trust and, as another significant characteristic, differentiates between trust and reputation. Marsh’s (1994) frequently cited Ph.D. dissertation suggests a continuous value for trust in the range [-1,1], arguing that neither complete trust nor complete distrust exists. He proposes a set of variables, together with a way to combine them into a value for trust, and takes context and time into account as influential factors in computing that value. Many researchers in this area treat trust as a subjective expectation when performing local trust computation (Friedman, Khan Jr et al. 2000; Resnick, Kuwabara et al. 2000; Ziegler and Lausen 2005). Falcone and Castelfranchi (2004) point out the role of the context in which trust is formed and decided upon, arguing that a “good reputation” may be merely a result of the obligations imposed by the context and does not imply trust.
Two further common perspectives on trust in social networks come from multi-agent systems and game theory. Considering relationships between agents, Ramchurn et al. (2003) define trust as the expected behavior of an agent inferred from its reputation within the context of relationships, and Ramchurn, Huynh et al. (2004) later carry out a survey of trust in multi-agent systems. Studies that use Trust Games in dyads (Jensen, Farnham et al. 2000; Davis, Farnham et al. 2002; Zheng, Veinott et al. 2002) or in groups (Rocco 1998; Bos, Olson et al. 2002) tend to take payoffs as an indicator of the trust the players hold in each other. Buskens (1998) applies a combination of approximation methods to a game-theoretic solution to measure a type of trust in a graph of social networks. Another example of a game-theoretic perspective on trust is Brainov and Sandholm’s (1999) study showing that the mutual level of trust contributes to more utility in social networks. Rocco (1998), Bos et al. (2002) and Zheng et al. (2002) use games with only binary decisions – i.e. deciding to cooperate or defect, rather than choosing on a continuous scale. They study the effects of personal information on cooperation in order to investigate whether ‘trust needs touch’, an argument introduced by Handy (1995). The most popular experimental game-theoretic study to have attracted considerable attention in research on trust in virtual social networks is that of Buskens (1998). A more detailed overview of the game and its assumptions is provided in the following sections.

Literature review on trust

Prior to elaborating on the relationship between trust and social networks, it is necessary to point out the functions of trust relations in social order. Misztal (1996) argues the urgency and difficulty of constructing trust in contemporary societies, focusing on the importance of trust in the search for social order (chapter 2). In an exhaustive review of the classical sociology literature, she pinpoints three functions of trust: an integrative function, reducing complexity, and lubricating cooperation (Misztal 1996, chapter 3). The first two functions reflect the benefit of trust for the social system as a whole, while neglecting the reason individuals place trust in each other. They describe, respectively, social order as a result of trustworthy behavior, and the need for trust resulting from the complexity of a society in which the outcomes of decisions are more consequential. The third focuses on trust where it emerges in individual relationships, approaching trust as a rational-choice phenomenon (Buskens 2002, chapter 1). Nevertheless, individual rationality by itself is in conflict with collective rationality when the problem of trust is put in a social context. This will be elaborated below, first by defining such rationality and its implications in trust situations. Coleman (1994, p. 97-99) defines a trust situation as characterized by four elementary but important points. First, the trustee can honor or abuse trust if the trustor places trust in him, whereas he has no such choice otherwise. Second, the trustor benefits from placing trust if the other person is trustworthy, whereas she will regret trusting him otherwise. Third, in the action of trust, the trustor voluntarily places ‘resources at the disposal of another party’ (ibid p.98) with no real commitment from the trustee to honor trust.
Fourth, each trust action involves a time lag between the trustor’s placement of trust and the trustee’s taking an action.
Coleman’s (1994) description of the four points in a trust situation accords with the definition of trust given by Deutsch (1962):
[a]n individual may be said to have trust in the occurrence of an event if he expects its occurrence and his expectations lead to behavior which he perceives to have greater negative consequences if the expectation is not confirmed than positive motivational consequences if it is confirmed.
Deutsch’s concept, however, restricts trust situations to those in which the potential loss exceeds the potential gain, a restriction not made in Coleman’s definition.
A game-theoretic representation of trust is illustrated in Figure 2-1 (Standard Trust Game). Such a social situation is frequently referred to as a standard ‘Trust Game’7 (Camerer and Weigelt 1988; Kreps 1992; Kreps 1996; Snijders 1996; Dasgupta 2000; Buskens 2002), which starts with a move by the trustor: choosing whether or not to place trust in the trustee (Buskens 2002, chapter 1). If she does not place trust, the game is over and the trustor receives a payoff P1, while the trustee receives P2. If she places trust, the trustee decides whether to honor or to abuse this trust. If the trustee honors trust, both players receive Ri > Pi, i = 1, 2, whereas if the trustee abuses trust, the trustee and the trustor receive T2 > R2 and S1 < P1, respectively.
The trust game has frequently been exemplified (e.g. Buskens and Raub 2008) as a scenario involving a transaction between a buyer and a seller (e.g. of a book on the internet, or a car purchased by a novice). A relatively more complex model of the trust problem is the Investment Game (Ortmann, Fitzgerald et al. 2000; Barrera 2005), in which the trustor decides to what degree she trusts the trustee, while the trustee chooses to what degree he honors that trust.
Intuitively speaking, the ‘incentive-guided and goal-directed behavior’ (Buskens and Raub 2008, p. 3) of the trustee implies that, if trust is placed, he will abuse it. The trustor anticipates this and so never places trust in the first place, which leads to lower payoffs for both the trustor and the trustee than when trust is placed and honored. Such reasoning, however, applies to a one-shot trust game between two isolated individuals; the incentives differ when the two-actor game is embedded in a social context. Even though the no-trust outcome seems individually justifiable8, the outcome of a game ‘may be dictated by the individual rationality [,in the sense of incentive guided and goal directed action,] of the respective players without satisfying a criterion of collective rationality’ (Rapoport 1974, p.4). The trust game, embodying such a conflict between individual and collective rationality, is an example of a social dilemma involving two actors.
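The backward-induction argument above can be checked mechanically. The sketch below uses illustrative payoff values satisfying the Trust Game constraints (Ri > Pi, T2 > R2, S1 < P1); the function name and values are mine, not Buskens and Raub’s.

```python
def one_shot_outcome(P1, R1, S1, P2, R2, T2):
    """Backward induction in the one-shot Trust Game.
    Payoff labels follow the text: no trust gives (P1, P2), honored
    trust gives (R1, R2), abused trust gives (S1, T2)."""
    trustee_abuses = T2 > R2          # trustee's best reply if trusted
    trustor_gain = S1 if trustee_abuses else R1
    if trustor_gain > P1:             # trustor anticipates that reply
        return ("trust", "abuse" if trustee_abuses else "honor")
    return ("no trust", None)

# With T2 > R2 the trustee would abuse, so the rational trustor
# withholds trust, although the honored outcome (R1, R2) would leave
# both players better off -- the conflict described above.
outcome = one_shot_outcome(P1=1, R1=2, S1=0, P2=1, R2=2, T2=3)
```

Note how the collectively rational outcome (trust placed and honored) is unreachable under individual rationality whenever the abuse temptation T2 exceeds R2.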
7 Trust Games are frequently considered ‘one-sided prisoner’s dilemma games’, in which the trustor starts the game by deciding whether to place trust in the other party (Kreps 1996; Snijders 1996).
8 Technically speaking, Buskens and Raub (2008) use the term pareto-suboptimal when contrasting the individual and collective rationality cases. The concept is further used to specify the solution of the game utilizing the Nash equilibrium as a basic game-theoretic specification of individual rationality. For more on this theme refer to (Nash 1951; Buskens 1998; Buskens 2002; Buskens and Raub 2008).

The social dilemma is an area of strategic research on rational choice in social research (Merton and Storer 1973), considering actors as interdependent (ibid) individuals who are nevertheless ‘entirely self-interested’ (Coleman 1964, p.166) and ‘rationally calculating to further [their] own self interest’ (ibid).
What follows is a short explanation of the utility of game theory in the analysis of social dilemmas in rational-choice social research. Social dilemmas are fundamentally built on the interdependence between actors, meaning that the behavior of one actor affects another, which establishes game theory as a major tool in this respect. ‘Game theory is the branch of rational choice theory that models interdependent situations, providing concepts, assumptions, and theorems that allow to specify how rational actors behave in such situations’ (Buskens and Raub 2008, p.4). The theory’s primary assumption is that actors identify their preferences and restrictions in decision situations, and assume the same rational behavior of the other interdependent actors. Buskens and Raub (2008) further combine individual rationality with assumptions based on the embeddedness of actions in networks of relations to highlight the crucial effect of embeddedness on the behavior of rational actors in social dilemmas.


Trust Games

Studying Trust Games, as a type of social dilemma, leads us to the problem of order, posed by Parsons (1937), to be solved through conditions specified by rational individuals. Coleman (1964) further asserts that: … a society can exist at all, despite the fact that individuals are born into it wholly self-concerned, and in fact remain largely self-concerned throughout their existence. Instead, sociologists have characteristically taken as their starting point a social system in which norms exist, and individuals are largely governed by those norms… I will start with an image of man as wholly free: un-socialized, entirely self-interested, not constrained by norms of a system, but only rationally calculating to further his own self interest. (p. 166-167)
Radical though Coleman’s perspective is, it has been taken as the basis for rational-choice research into overcoming the problems of social dilemmas (Buskens and Raub 2008). Individuals seeking their own benefit can be governed, to some extent, by extensive explicit contractual agreements. Contractual governance, however, has been noted to be inefficient because of its limitations with respect to the many contingencies that might, or in fact do, arise during or after a transaction, anticipating which is infeasible or at least prohibitively costly (Durkheim 1973). Durkheim points out the importance of extra-legal factors for the governance of transactions (ibid). Many social network theorists’ contributions to the concept of reputation (Granovetter 1973; Granovetter 1974/1995; Lewicki and Bunker 1996; Lewicki 2006) have identified it as an important non-contractual mechanism in the governance of trust relations. Reputation conceptualizes the fact that individuals receive information about the behavior of other actors in the network and use that information to decide on their own future behavior. Information transfer between individuals, an essential determinant of reputation, takes place through some kind of relation between actors. Therefore, social networks are utilized in modeling reputation as a consequence of information diffusion (Buskens 1995). Scholars discussing reputation suggest that it develops as a result of embeddedness in a social context. Embeddedness, in Granovetter’s (1985) sense, denotes expected future interactions between two parties who have previously been engaged in a trust game. Buskens and Raub (2008) designate this as ‘dyadic embeddedness’ (p. 16) and further introduce another type of embeddedness, referred to as ‘network embeddedness’ (p. 16). The latter expresses the relation of a trust game to the interactions of the trustor and the trustee with other actors in the network.
Dyadic and network embeddedness affect trust through two mechanisms: control and learning (Buskens and Raub 2008). The control mechanism refers to the case where ‘the trustee has short term incentives for abusing trust, while some long-term consequences of his behavior in the focal Trust Game depend on the behavior of the trustor’ (ibid p. 16). This means the trustee must weigh the short-term incentives to abuse trust against its long-term costs, considering the long-term benefits of honoring trust. The need for such reasoning follows from dyadic embeddedness: the trustor can reward honoring trust and punish abusing it by applying, respectively, positive and negative sanctions in the future. The trustee must therefore take into account that whether the trustor decides to place trust in the future depends on his honoring or abusing trust in the focal Trust Game. Likewise, through network embeddedness, the trustor can inform other actors with whom she is in contact about the behavior of the trustee, and thereby influence his reputation in a network of individuals who may be involved with the trustee in future Trust Games. The second mechanism, learning, indicates that ‘[b]eliefs of the trustor on the trustee’s characteristics can be affected by information on past interactions’ (Buskens and Raub 2008, p. 16). This information can be obtained both through dyadic embeddedness – i.e. past interactions between the trustor and the trustee – and through network embeddedness, from those who have previously interacted with the trustee.
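The network-embeddedness channel for control, in which the trustor tells her contacts about an abuse, can be sketched as simple information diffusion over an adjacency list. The one-step spread and the example network are illustrative assumptions, not a model from the cited literature.

```python
def informed_actors(network, trustor):
    """Actors who learn of the trustee's behavior after one round of
    gossip: the trustor plus her direct contacts in the network."""
    return {trustor} | set(network.get(trustor, ()))

# If T1 observes an abuse, T1, T2 and T3 can all refuse trust in
# future Trust Games with the trustee; T4, unconnected to T1, stays
# uninformed and may still place trust.
network = {"T1": ["T2", "T3"], "T2": ["T1"], "T4": []}
sanctioners = informed_actors(network, "T1")
```

The size of this informed set is what gives network embeddedness its sanctioning power: the better connected the trustor, the larger the trustee’s long-term cost of a single abuse.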

Game theoretic assumptions for the effects of embeddedness on trust

Buskens and Raub (2008), studying the effects of social embeddedness on trust in rational-choice research on social dilemmas, take a game-theoretic approach to elaborate how the control and learning effects of dyadic and network embeddedness lead entirely self-interested actors (Coleman 1964) to consider the long-term consequences of their behavior. In this approach, the effects of dyadic and network embeddedness are theorized for a simple focal Trust Game that is embedded in a more complex game.
To start, we consider an indefinitely repeated Trust Game (Kreps 1992; Gibbons 2001) – i.e. a simple Trust Game played repeatedly, indefinitely many times, between a given pair of trustor and trustee. In this model (Kreps 1992), a focal Trust Game is played repeatedly between two actors in rounds 1,2,…,t,…, where after each round t the probability of playing another round is w, while the repeated game ends with probability 1 − w. Axelrod (1984) refers to w as ‘the shadow of the future’ (p. 12), because the larger the continuation probability, the larger the expected payoff of each actor in the game. In the indefinitely repeated Trust Game, an actor’s expected payoff is calculated as the sum of the actor’s payoffs, discounted by a factor of w per round (Kreps 1992; Buskens and Raub 2008). In the repeated game, both actors can adopt different strategies of play. A strategy is ‘a rule that prescribes an actor’s behavior in each round … as a function of the behavior of both actors in the previous rounds’ (Buskens and Raub 2008, p. 17). The trustor can use a conditional rewarding and punishing strategy, as a control effect, by placing trust in future games as a reward for honoring trust, and refusing to place trust if it has previously been abused. This conditional strategy implies that abusing trust grants the trustee T2 in one round and only P2 in future interactions, in which no trust is placed by the trustor. Conversely, honoring trust yields the larger payoff R2 > P2 in future interactions, by increasing the probability that the trustor places trust. A rational trustee therefore has to trade off short-term against long-term incentives. The trustee’s tendency to strike this balance is strongly influenced by the shadow of the future, w, knowing that abusing trust will trigger a change in the trustor’s behavior such that she refuses to place trust in future rounds.
Such a severe sanction from the trustor in response to the trustee’s deviation from trustworthy behavior is labeled a ‘trigger strategy’ (Buskens and Raub 2008, p.19). Against a trigger strategy, the trustee’s best reply is to always honor trust if and only if the shadow of the future is large enough that a selfish trustee decides not to abuse trust in the current round. This condition can be written as (Buskens and Raub 2008)

w ≥ (T2 − R2) / (T2 − P2).   (1)
When condition (1) applies, i.e. for large enough w, the indefinitely repeated Trust Game has many equilibria, e.g. always placing and honoring trust, and hence an equilibrium selection problem emerges (examples of equilibria for repeated games can be found in Rasmusen (2007, chap. 5)). One criterion for choosing between equilibrium points in game theory is ‘payoff dominance’ (Harsanyi 1995). An equilibrium is payoff dominated if there exists another equilibrium that makes at least one individual better off without making any other individual worse off (Fudenberg and Tirole 1991). In the indefinitely repeated Trust Game, the equilibrium in which trust is placed and honored throughout the game payoff-dominates any other equilibrium. In this respect, and similarly to dyadic embeddedness, the control effects of network embeddedness appear in the equilibrium selection problem for indefinitely repeated Trust Games by virtue of communication, which helps rational actors coordinate on the trigger-strategy equilibrium (Buskens and Raub 2008). Moreover, generalizing the results of the indefinitely repeated Trust Game to n-person games where condition (1) applies leads to an equilibrium of the indefinitely repeated game in which actors cooperate. Noteworthy in this generalization is the required assumption that actors obtain reliable information about the behavior of the trustee in previous rounds of the game (Buskens and Raub 2008).
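Condition (1) can be verified numerically by comparing the trustee’s expected discounted payoffs under a trigger strategy. The payoff values below are illustrative; the derivation (honoring yields R2 every round, a one-time abuse yields T2 now and P2 thereafter) follows the argument in the text.

```python
def honoring_pays(T2, R2, P2, w):
    """True if, facing a trigger strategy, the trustee's expected
    discounted payoff from always honoring (R2 each round) is at
    least that of abusing once (T2 now, then P2 forever)."""
    assert 0 <= w < 1 and T2 > R2 > P2
    honor = R2 / (1 - w)                 # geometric series of R2
    abuse = T2 + w * P2 / (1 - w)        # T2 once, then P2 discounted
    return honor >= abuse

def threshold(T2, R2, P2):
    """Smallest shadow of the future sustaining trust, per (1)."""
    return (T2 - R2) / (T2 - P2)

# With T2=3, R2=2, P2=1 the threshold is 0.5: below it the trustee
# abuses; at or above it the trigger strategy sustains honoring.
```

Rearranging honor ≥ abuse gives R2 − T2 ≥ w(P2 − T2), i.e. exactly the threshold in condition (1), so the two functions agree for any payoffs satisfying T2 > R2 > P2.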

Table of contents:

CHAPTER ONE: THESIS INTRODUCTION
1.1 Introduction
1.2 Problem description and research questions
1.3 Thesis outline
CHAPTER TWO: BACKGROUND
2.1 Previous work – contemplating and modeling trust
2.2 Literature review on trust
2.3 Trust Games
2.4 Game theoretic assumptions for the effects of embeddedness on trust
CHAPTER THREE: ESTABLISHED FINDINGS AND STUDY FOUNDATION
3.1 Social network analysis
3.2 Game-theoretic model for control effects of embeddedness on trust in social networks
3.2.1 The game-theoretic model – assumptions
3.2.2 The game-theoretic model – the solution equilibrium
3.3 The effects of social structure on the level of trust
3.4 Theoretical framework
CHAPTER FOUR: THE MODEL FOR THE EFFECT OF NOISE IN INFORMATION TRANSMISSION 
4.1 The Model
4.2 The Solution of the Model
CHAPTER FIVE: METHOD
5.1 Method
5.1.1 Sampled Networks
5.1.2 Experimental Design
5.1.3 Dependent Variables
5.1.4 Simulation
5.2 Method of Analysis
5.3 Analysis of the Simulated Data
CHAPTER SIX: SUBSTANTIVE IMPLICATIONS
CHAPTER SEVEN: MODEL BUILDING, VERIFICATION, AND VALIDATION
7.1 Verification of the simulation model
7.2 Model Validation
CHAPTER EIGHT: CONCLUSION
8.1 Discussion on the findings
8.2 Future Work
REFERENCES
APPENDIX A
APPENDIX B

