Understanding Influence (Models) from the 5W1H Framework 


Decision-Theoretic Agents

In a collective decision-making system, the elemental concept is the agent (as decision-maker).


The meaning of the term agent differs across the natural and social sciences. In philosophy, it usually simply denotes an entity which is capable of action. A related concept is agency, the capacity of an agent to act in a given environment. Most relevant for us are the definitions in economics and artificial intelligence:
In economics —an agent is an actor or decision-maker in a model, and agents may be of different types. For instance, buyers and sellers are two common types of agents in partial equilibrium models of a single market, while households, firms, governments and banks are the main types of agents in macroeconomic models. Typically, every agent makes decisions by solving an optimization/choice problem.
In computer science and artificial intelligence —a typical meaning is that of an Intelligent Agent [Russell and Norvig, 2003], an autonomous entity which observes its environment (through sensors), acts upon it (through actuators), and directs its activity towards achieving its goals; intelligent agents may also learn or use knowledge. Decision-theoretic agents [Parsons and Wooldridge, 2002] rely on decision theory to implement their behaviors.
Agent Structure. In [Russell and Norvig, 2003] a simple agent structure is proposed, as illustrated in Figure 2.1. The agent's behaviour can also be defined as a function [Russell and Norvig, 2003] mapping every possible sequence of percepts (P*) to an action (A) the agent can perform, or to a coefficient, feedback element, function or constant that affects eventual actions: f : P* → A.
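The agent function above can be sketched in code. This is a minimal illustration of f : P* → A using the vacuum world that Russell and Norvig use as a running example; the percept and action encodings (location/status tuples, the action names) are illustrative assumptions, not part of the original text.

```python
from typing import List, Tuple

Percept = Tuple[str, str]   # (location, status), an assumed encoding
Action = str

def reflex_vacuum_agent(percepts: List[Percept]) -> Action:
    """A simple reflex agent: although f maps the whole percept
    sequence P* to an action, this agent acts on the latest percept only."""
    location, status = percepts[-1]
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(reflex_vacuum_agent([("A", "dirty")]))                  # suck
print(reflex_vacuum_agent([("A", "clean"), ("B", "clean")]))  # left
```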
At the interface of economics and computer science also lies the paradigm of agent-based simulation, and in particular agent-based economics [Page, 2008], whereby economic models are approached, refined and evaluated through agent simulations. We shall rely on such techniques in part of this thesis. It is an interdisciplinary paradigm combining computer science and economics, in which agents are “computational objects modeled as interacting according to rules” over space and time. Quoting [Page, 2008]: “The rules are formulated to model behavior and social interactions based on stipulated incentives and information”. In an economic model, if all agents of a given type (such as all consumers, or all firms) are assumed to be identical, the model is called a representative agent model; if differences among agents of a given type are recognized, it is called a heterogeneous agent model. Representative agent models are often used by economists, within the traditional economics paradigm, to describe the economy in its simplest form, while heterogeneous agent models should be used when differences among agents are critical and directly relevant to the questions and outcomes [Ríos-Rull, 1995]. For complex questions, the agent-based computational paradigm is more appropriate, as it avoids oversimplifying.
Notation 1 (Agent) An agent is a decision-maker in a collective (multi-agent) decision-making system. We denote by N = {A(1), A(2), …, A(n)} the set of all agents, where A(i) represents the agent i.


Each agent makes decisions based on his or her own preferences. These preferences represent how good different options, alternatives or outcomes are for the agent (for [Lichtenstein and Slovic, 2006], preferences “could be conceived of as an individual’s attitude towards a set of objects”). Several aspects of preferences are studied in different fields.
In psychology —the elements that build and modify preferences are primarily studied, for instance how emotions may impact them [Scherer, 2005], how specific circumstances may play a role, and how they may change over time (“A preference is not necessarily stable over time, which can be notably modified by decision-making processes, such as choices” [Sharot et al., 2009], even in an unconscious way [Coppin et al., 2010]).
In economics —the relation from preferences to actual choices is studied, and —although only actual choices can eventually be observed— preferences are a key element in many theoretical studies ranging over voting, markets, consumer choices, etc. Issues of misrepresentation of preferences (manipulation), for instance, have a long tradition of study.
In computer science —preferences have gained popularity with the rise of decision-theoretic agents. One specific issue studied in computer science is the representation of preferences (how to compactly encode a preference structure which may be prohibitive to represent naively, as is the case in particular in combinatorial domains), and of course the computation of various reasoning tasks related to preferences (for instance, checking that one option is preferred to another, or computing an aggregation).
Preferences can be modeled either ordinally or cardinally.
Cardinal Preferences —under a cardinal approach, preferences usually have a quantitative measure. The classical way is to rely on a utility function. For example, when you choose a mobile phone, there are three makers as alternatives {Apple®, Samsung®, Nokia®}, and your degrees of liking are respectively utilities of 10, 3 and −5, showing that you are a big fan of Apple®, could still accept Samsung®, but dislike or even hate Nokia®. Thus, through cardinal preferences, we not only know which alternative is preferred to which other alternative, but also by how much one alternative is preferred to another. However, note that when dealing with cardinal preferences and aggregating preferences amongst agents, the issue of interpersonal comparison of utilities comes up: such comparison is usually meaningless because there is no simple way to interpret how different agents value their options.
Ordinal Preferences —under an ordinal approach, preferences are only captured by orders, such as full rankings or orderings. Taking the same example as above, with the same three alternatives {Apple®, Samsung®, Nokia®}, your preference is expressed as an ordering:
Apple® ≻ Samsung® ≻ Nokia®
Thus, when an ordinal approach is used, the preference provides an ordering over the alternatives of a choice set, but tells nothing about the relative strength of preferences. Through the above ordering, we know that you prefer Apple® to Samsung® and Nokia®, and Samsung® to Nokia®, but nothing more. Although it is clear that you prefer Apple® most and Nokia® least, it is unclear to what extent you prefer Apple® over Samsung®, and also unclear whether you like or dislike these alternatives: the ordering might mean that you hate Nokia®, or merely that you prefer Nokia® less than the other two while still finding it acceptable; likewise, it could mean that you are a big fan of Apple®, when the truth might be that you like none of the three makers and simply hate Apple® the least. Lacking a value for each alternative and having only an ordering between them, some information about the preference is missing. However, as interpersonal comparisons of cardinal utility are usually deemed unfeasible, and in many situations the information about preferences is natively limited or incomplete, we may only have the ordering, without knowing the detailed magnitudes.
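The contrast between the two representations can be made concrete in code. The utilities 10, 3 and −5 are the ones used in the running example above; the helper names are illustrative.

```python
# The phone example in both representations.
utility = {"Apple": 10, "Samsung": 3, "Nokia": -5}   # cardinal
ordering = ["Apple", "Samsung", "Nokia"]             # ordinal, best first

def prefers(x, y):
    """All the ordinal view can answer: is x preferred to y?"""
    return ordering.index(x) < ordering.index(y)

# Both views agree on which alternative is preferred to which...
assert prefers("Apple", "Samsung") and utility["Apple"] > utility["Samsung"]
# ...but only the cardinal view says by how much, and whether an
# alternative is liked (positive utility) or disliked (negative).
print(utility["Apple"] - utility["Samsung"])  # 7
print(utility["Nokia"] < 0)                   # True
```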
It should be noted that the preference orderings used in this thesis are transitive. Transitivity is a key property of both partial order relations and equivalence relations: in mathematics, a binary relation R over a set X is transitive if whenever an element a is related to an element b, and b is in turn related to an element c, then a is also related to c. The orderings will moreover be required to be complete (any two alternatives are comparable).
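The two properties can be checked mechanically for a relation given as a set of ordered pairs. This is a small sketch; the function and element names are illustrative.

```python
def is_transitive(relation):
    """aRb and bRc must imply aRc, for all pairs in the relation."""
    return all((a, c) in relation
               for (a, b) in relation
               for (b2, c) in relation if b == b2)

def is_complete(relation, domain):
    """Any two distinct elements must be comparable."""
    return all((x, y) in relation or (y, x) in relation
               for x in domain for y in domain if x != y)

ordering = {("Apple", "Samsung"), ("Samsung", "Nokia"), ("Apple", "Nokia")}
print(is_transitive(ordering))                               # True
print(is_complete(ordering, {"Apple", "Samsung", "Nokia"}))  # True
# A preference cycle is not transitive:
print(is_transitive({("a", "b"), ("b", "c"), ("c", "a")}))   # False
```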


Under the assumption of rationality, decisions and preferences are related: different prefer-ences lead to corresponding decisions and the decision is made based on the preference. If the preference is the underlying or internal psychology (determining the decision), then the decision is the superficial or external behavior (reflecting the preference).
Decision-making can be perceived as a cognitive process, based on preferences, of selecting an alternative or option among several. According to [Simon, 1965], “decision-making is one of the central activities of management and is also a critical part of any process of implementation”. Decisions have been the subject of active research from several different perspectives, which we list here following [Wikipedia, 2015d]:
Psychological Decision—“studies individual decisions in the context of a set of needs, preferences and values the individual has or seeks”.
Cognitive Decision—“means the decision-making process is regarded as a continuous process integrated in the interaction with the environment”.
Normative Decision—“concerned with the logic of decision-making and rationality and the choice it leads to” (which is the perspective on decisions mainly used in computer science and artificial intelligence) [Kahneman and Tversky, 2000].

Combinatorial Domains


Combinatorial domains (see e.g. [Chevaleyre et al., 2009]) occur when the alternatives of the decision-making setting are defined upon different features, criteria or dimensions. In that case, the domain of alternatives grows fast, since there are as many options as the Cartesian product of the domains of the features.
When the features are dependent, this poses a challenge for the representation of the agents' preferences, since in principle it may be required to enumerate all the feasible alternatives. To worsen matters, a collective decision may have to be taken on such domains. For example, [Grandi et al., 2014] take the combinatorial and collective decision-making context of a family buying a car: usually the car has more than one feature to make choices about, such as the Maker, Mode and Color, and each of these features must be decided, which makes buying a car a multi-feature rather than a single-feature decision-making problem. A feature here means a property of an entity about which choices are made.
Besides, many decision-making questions are not about making choices on multiple features of one entity, but (directly) making choices on multiple entities, usually referred to as multiple issues for decision-making. Taking the example of United Nations Security Council voting, a lot of bills are ceaselessly proposed and put on the table for collective voting, which is a typical multi-issue decision-making setting. In some cases, some bills are clearly related to others.
Notation 2 (Feature) A feature is a property of the decision-making object in combinatorial decision-making. One feature is denoted as F; the set of all features is M = {F(1), F(2), …, F(m)}, with finite domains D(F(1)), D(F(2)), …, D(F(m)); F(k) represents the k-th feature.
Notation 3 (Variable) A variable is the decision on one feature by one agent in combinatorial and collective decision-making. One variable is denoted as V; the set of all variables is M × N = {V(1)(1), …, V(m)(n)}, with finite domains D(V(1)(1)), …, D(V(m)(n)); V(k)(i) represents the decision variable of the i-th agent on the k-th feature.
Notation 4 (Alternative) An alternative is a candidate for decision-making in the domain of a feature or variable (a domain is constituted by alternatives). The alternatives of a feature are denoted as O; the alternatives on feature k are D(F(k)) = {o1, o2, …, ot}, which can be shortened as O(k) = {o1, o2, …, ot}; a full alternative on all features belongs to O = O(1) × O(2) × … × O(m), the Cartesian product of all domains {D(F(1)), D(F(2)), …, D(F(m))}.
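Notation 4 can be sketched in code: the full set of alternatives is the Cartesian product of the feature domains. Mode and Color come from the running car example; the Maker values (m1, m2, m3) are purely illustrative placeholders.

```python
from itertools import product

domains = {
    "Maker": ["m1", "m2", "m3"],               # illustrative values
    "Mode":  ["Commercial car", "Sports car"],
    "Color": ["Red", "Black"],
}
# O = O(1) x O(2) x ... x O(m): every combination of feature values
alternatives = list(product(*domains.values()))
print(len(alternatives))   # 3 * 2 * 2 = 12 full alternatives
print(alternatives[0])     # ('m1', 'Commercial car', 'Red')
```

This also illustrates why combinatorial domains grow fast: the number of full alternatives is the product of the domain sizes.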
We are now in a position to define the notations for preferences and choices, introduced earlier:
Notation 5 (Preference) In a collective (multi-agent) decision-making system, a preference is the ordering or magnitude of alternatives according to the degree of liking or disliking. Preference is denoted as P; the set of all preferences is P = {P(1)(1), …, P(m)(n)}; P(k)(i) represents the preference of agent i on feature k; the preferences of agent i on all features are P(i) = {P(1)(i), …, P(m)(i)}; the preferences on feature k of all agents are P(k) = {P(k)(1), …, P(k)(n)}.
Notation 6 (Decision/Choice) In a collective (multi-agent) decision-making system, a decision is the selection of an alternative among several possibilities based on the preferences of the decision maker. Decision/Choice is denoted as C; the set of all choices is C = {C(1)(1), …, C(m)(n)}; C(k)(i) represents the decision/choice of agent i on feature k; the choices of agent i on all features are C(i) = {C(1)(i), …, C(m)(i)}; the choices on feature k of all agents are C(k) = {C(k)(1), …, C(k)(n)}.
As mentioned above, combinatorial domains are a challenge for preference representation.
We now present the approach of CP-nets that we shall mostly use in this work.


CP-nets [Boutilier et al., 2004a] (for Conditional Preference networks) are a graphical model for compactly representing conditional and qualitative preference relations. They are sets of ceteris paribus (all other things being equal) preference statements (cp-statements). As explained in [Maran et al., 2013], “the cp-statement ‘I prefer red wine to white wine if meat is served.’ asserts that, given two meals that differ only in the kind of wine served and both containing meat, the meal with red wine is preferable to the meal with white wine.” Technically, a CP-net is defined as a set of features (or issues) F = {x1, …, xn} with finite domains D(x1), …, D(xn). Then, for each feature xi, a set of parent features Pa(xi) is given, which can affect the preferences over the values of xi. This results in a dependency graph in which each node xi has Pa(xi) as its immediate predecessors. However, CP-nets are mainly concerned with describing the dependencies among the features or issues (of one agent), not with a multi-agent influence context (until the work on influenced CP-nets [Maran et al., 2013]).
Example 2.1 (CP-nets) For a simple general example of the CP-nets model, assume there are two features A and B, both binary, with domains {a, ā} and {b, b̄} respectively. The preference for A is a ≻ ā (a is preferred to ā), while the preference for feature B depends on the choice for feature A: if A is chosen as a, then b ≻ b̄ (b is preferred to b̄), but if A is chosen as ā, then b̄ ≻ b (b̄ is preferred to b). This means there is a dependency between feature A and feature B: the decision on the latter depends on the decision on the former, and the dependency relations, or cp-statements, can be stated as: a ≻ ā, a : b ≻ b̄, ā : b̄ ≻ b.
Then, as a practical example to aid understanding, consider again buying a car: assume there are two features Mode and Color to make choices about, with binary domains {Commercial car, Sports car} and {Red, Black} respectively. The preference for feature Mode is Commercial car ≻ Sports car (Commercial car is preferred to Sports car), while the preference for feature Color depends on the choice for feature Mode: if Mode is chosen as Commercial car, then Black ≻ Red (Black is preferred to Red on the condition that Commercial car is determined, as Black would be more appropriate than Red for a Commercial car), but if Mode is chosen as Sports car, then Red ≻ Black (Red is preferred to Black on the condition that Sports car is determined, as Red might seem more energetic than Black for a Sports car). Therefore, there is a very straightforward dependency between feature Mode and feature Color: the decision on Color depends on the decision on Mode, and the cp-statements expressing the dependency relations would be Commercial car ≻ Sports car, Commercial car : Black ≻ Red, Sports car : Red ≻ Black.
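The car example above can be encoded as a small data structure. This dict-of-dicts encoding and the forward sweep are a minimal sketch, not the full CP-net formalism of Boutilier et al.; it assumes an acyclic dependency graph processed in topological order.

```python
# Conditional preference tables: for each feature, a map from the tuple
# of parent values to a ranking of the feature's values (best first).
cp_net = {
    "Mode":  {(): ["Commercial car", "Sports car"]},    # no parents
    "Color": {("Commercial car",): ["Black", "Red"],
              ("Sports car",):     ["Red", "Black"]},   # parent: Mode
}

def optimal_outcome(cp_net, order):
    """Sweep features in dependency order, picking each feature's best
    value given the values already chosen for its parents."""
    chosen = {}
    for feature, parents in order:
        key = tuple(chosen[p] for p in parents)
        chosen[feature] = cp_net[feature][key][0]
    return chosen

best = optimal_outcome(cp_net, [("Mode", []), ("Color", ["Mode"])])
print(best)  # {'Mode': 'Commercial car', 'Color': 'Black'}
```

The sweep reproduces the reasoning in the example: Mode is decided first (Commercial car), and that choice then selects the conditional ranking for Color.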


Collective Decision-making

Collective decision-making is the problem of aggregating the preferences of several agents to single out one winning option, or sometimes even a representative preference for the group. The most classical methods for collective decision-making (beyond deliberation) are voting-based methods. While there are many such methods, we only present those relevant to our work. These are simple scoring-based methods, whereby points are assigned to positions in the preference ordering (assumed to be complete here).
Voting-based Methods.
Borda rule—if there are p candidates, the top candidate gets p points, the second one gets p − 1 points, etc.
Plurality—only the top option of each voter is considered, and the candidate with the highest score wins, even if it falls short of a majority (lower than 50%).
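The two scoring rules above can be sketched in a few lines. Profiles are lists of complete rankings (best first); the candidate names are illustrative.

```python
from collections import Counter

def borda(profile):
    """Borda: with p candidates, the top gets p points, the next p - 1, etc.
    (the scoring vector used in the text)."""
    scores = Counter()
    for ranking in profile:
        p = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += p - position
    return scores

def plurality(profile):
    """Plurality: count only each voter's top choice."""
    return Counter(ranking[0] for ranking in profile)

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(borda(profile))      # a: 7, b: 6, c: 5 -> a wins
print(plurality(profile))  # a: 2, b: 1 -> a wins
```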
Consensus Decision-making. This approach requires an option to be approved by a majority, but the minority should also agree to go along with that option; if the minority disagrees with or opposes the option, then the option should be modified or a compromise found so as to reduce the objection as much as possible. This approach is more deliberative, since the nature of the option may vary during the process. The veto power in the UN Security Council is a typical negative example for consensus decision-making, as it allows ignoring the objections of a minority or even a majority.

Decision-making in Combinatorial Domains

[Xia et al., 2007] discussed the sequential composition of voting rules in multi-issue domains. Dealing with combinatorial domains leads to a well-known dilemma: either ask the voters to vote separately on each issue (and aggregate the votes on each issue independently), which may lead to the so-called multiple election paradoxes; or allow voters to express their full preferences on the set of all combinations of values, which, as we mentioned, may be practically impossible even for a few issues. [Xia et al., 2007] try to reconcile both views and find a middle way, by relaxing the extremely demanding separability restriction (which would guarantee sequential votes to be well-behaved) into the so-called O-legality notion:
Definition 1 There exists a linear order X1 > … > Xp on the set of issues such that for each voter, every issue Xi is preferentially independent of Xi+1, …, Xp given X1, …, Xi−1.
This leads to the definition of a family of sequential voting rules, defined as the sequential composition of local voting rules. These rules relate to the setting of conditional preference networks (CP-nets [Boutilier et al., 2004a]). The authors study in detail how these sequential rules inherit, or fail to inherit, the properties of their local components.
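The idea of sequential composition can be sketched as follows: under O-legality the issues are decided one at a time, each voter reporting a local ranking conditioned on the outcomes already fixed. The data structures and the choice of plurality as the local rule are illustrative assumptions for this sketch, not the specific rules studied by Xia et al.

```python
from collections import Counter

def sequential_vote(issues, voters):
    """Decide issues in the O-legal order; each voter's local ranking
    may depend on the partial outcome fixed so far."""
    outcome = {}
    for issue in issues:
        # each voter's conditional top choice, given the partial outcome
        ballots = [voter[issue](outcome)[0] for voter in voters]
        # local rule on this issue: plurality (an assumed choice)
        outcome[issue] = Counter(ballots).most_common(1)[0][0]
    return outcome

voters = [
    {"X": lambda o: ["x1", "x2"],
     "Y": lambda o: ["y1", "y2"] if o["X"] == "x1" else ["y2", "y1"]},
    {"X": lambda o: ["x1", "x2"],
     "Y": lambda o: ["y2", "y1"]},
    {"X": lambda o: ["x2", "x1"],
     "Y": lambda o: ["y1", "y2"]},
]
result = sequential_vote(["X", "Y"], voters)
print(result)  # {'X': 'x1', 'Y': 'y1'}
```

Note how voter 1's preference on Y is conditional on the decided value of X, exactly the kind of dependency that CP-nets capture.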

Influence among Agents

For a combinatorial and collective decision-making system, another important aspect is the influence relations among agents. Take the example of a family buying a car, a decision made collectively by husband, wife and even kids: the wife's preference on the maker might be influenced by her husband, while the husband's preference on the model might be influenced by his wife.

Structure of Influence

Influence relations among agents are close to the concepts of social relation and social interaction in social science. A social relation is any interpersonal relationship between at least two individuals. The prerequisite of social relations is individual agency1, and the aggregate of social relations forms the social structure.
Interpersonal Relationship
Most works about social relations concern personal social relations, namely interpersonal relationships. An interpersonal relationship is an association or acquaintance between two or more people that may vary in both strength and duration. The concept of transitivity in relations also applies to interpersonal relationships; for example, with friendships, “my friend’s friend may become my friend”. The study of interpersonal relationships has received attention from scientists in several different fields, such as sociology, psychology, artificial intelligence and so on. The scientific study of relationships came to be referred to as “relationship science” [Berscheid, 1999]. Interpersonal ties are also a subject in mathematical sociology [Kelley et al., 1983].
Human beings are innately social and are shaped by their relations and interactions with others. There are multiple perspectives to understand this inherent motivation to interact with others. As summarized in [Wikipedia, 2015h]:
Need to belong: According to Maslow’s hierarchy of needs [McLeod, 2007], humans need to feel accepted in various social groups (like family, peer groups).
Social exchange: Individuals engage in relations that are rewarding; this concept fits into a larger theory of social exchange: “The theory is based on the idea that relationships develop as a result of cost-benefit analyses. Individuals seek out rewards in interactions with others and are willing to pay a cost for said rewards. In the best-case scenario, rewards will exceed costs, producing a net gain. This can lead to “shopping around” or constantly comparing alternatives to maximize the benefits (rewards) while minimizing costs” [Wikipedia, 2015h].
Relational self: “Relationships are also important for their ability to help individuals de-velop a sense of self. The relational self is the part of an individual’s self-concept that consists of the feelings and beliefs that one has regarding oneself that develops based on interactions with others [Andersen and Chen, 2002]. In other words, one’s emotions and behaviors are shaped by prior relationships. Thus, relational self theory posits that prior and existing relationships influence one’s emotions and behaviors in interactions with new individuals, particularly those individuals that remind him of others in his life” [Wikipedia, 2015h]. In short, the prior relationship with others become one part of self, and affecting how interacting with new others [Hinkley and Andersen, 1996].
Interpersonal relationships should be regarded as dynamic systems that change continuously during their “life cycle”: they may vary according to circumstances, being strengthened or weakened as people get closer or drift apart. One of the most influential models of relationship development was proposed by [Kelley et al., 1983].
1“In the social sciences, agency refers to the capacity of individuals to act independently and to make their own free choices. By contrast, structure are those factors of influence (such as social class, religion, gender, ethnicity, customs, etc.) that determine or limit an agent and his or her decisions” [Barker, 2003].
Theory of Interpersonal Relationship
Example 2.2 (Confucianism) “Confucianism is a theory of relationships especially within hierarchies. Social harmony-the central goal of Confucianism-results in part from every individual knowing his or her place in the social order, and playing his or her part well. Particular duties arise from each person’s particular situation in relation to others. The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders, and as a senior in relation to younger siblings, students, and others. Juniors are considered in Confucianism to owe their seniors reverence and seniors have duties of benevolence and concern toward juniors. A focus on mutuality is prevalent in East Asian cultures to this day” [Richey, 2005].
Example 2.3 (Minding relationships) “The mindfulness theory of relationships shows how closeness in relationships may be enhanced. Minding is the ‘reciprocal knowing process involving the nonstop, interrelated thoughts, feelings, and behaviors of persons in a relationship.’ ” [Harvey and Pauwels, 2009].
Social Networks
A social network is a social structure made up of a set of agents (social actors, which can be individuals but also organizations) and a set of bilateral relations among them (also called ties). The discipline of social network analysis has recently emerged as a very successful interdisciplinary area of research.
Interpersonal Ties. In mathematical sociology, interpersonal ties are defined as “information-carrying connections” between people. Interpersonal ties generally come in three varieties: strong, weak and absent, and with two directions: positive and negative. [Granovetter, 1973, Granovetter, 1983, Granovetter, 2005] argued that weak social ties are responsible for the majority of the embeddedness and structure of social networks and the transmission of information through these networks; specifically, more new information flows to individuals through weak than through strong ties, as our families and close friends tend to move in the same circles as us, so the information they possess usually overlaps with what we already grasp. However, [Granovetter, 1983] deemed that “weak ties provide people with access to information and resources beyond those available in their own social circle; but strong ties have greater motivation to be of assistance and are typically more easily available”, hence the weak/strong ties paradox. According to [Granovetter, 1973], “absent ties are those relationships without substantial significance, such as ‘nodding’ relationships between people living on the same street; the fact that two people may know each other by name does not necessarily qualify the existence of a weak tie. If their interaction is negligible the tie may be absent. The ‘strength’ of an interpersonal tie is a linear combination of the amount of time, the emotional intensity, the intimacy (or mutual confiding), and the reciprocal services which characterize each tie.”

Table of contents:

I Basics of Influence 
1 Introduction 
1.1 Computational Social Choice
1.1.1 The Framework of Decision-Influence-Structure
1.2 What is the Influence?
1.2.1 The Connotation of Influence
1.2.2 The Denotation of Influence
1.3 Overview of the Thesis
2 Related Works
2.1 Decision-Theoretic Agents
2.1.1 Agent
2.1.2 Preferences
2.1.3 Decision
2.2 Combinatorial Domains
2.2.1 Feature/Issue
2.2.2 CP-nets
2.3 Collective Decision-making
2.3.1 Decision-making in Combinatorial Domains
2.4 Influence among Agents
2.4.1 Structure of Influence
2.4.2 Social Influence
2.4.3 Convergence to consensus
2.4.4 A Note on Information Cascades
2.4.5 Influence with Ordinal Preferences
2.4.6 Summary of our approach
3 Understanding Influence (Models) from the 5W1H Framework 
3.1 What Influence
3.2 Where Influence
3.3 When Influence
3.4 Who Influence
3.5 Why Influence
3.6 How Influence
4 What is Missing? 
4.1 Influencing and Influenced Structure
4.2 Influence from More than One Origins
4.3 Influence with Abstention and Constraint
II Theory of Influence 
5 The Extended Patterns of Influence 
5.1 A Framework of Combinatorial and Collective Decision-making
5.1.1 CP-nets with Initial Inclinations
5.2 The System of Influence Patterns by the DIS Framework
5.2.1 New Influences and New Statements beyond CP-statement and CI-statement
5.3 Pattern 1-3 Intra-influence of Decision
5.4 Pattern 4-6 Intra-influence of Structure
5.5 Pattern 7-9 Inter-influence of Decision
5.6 Pattern 10-12 Inter-influence of Structure
5.7 Pattern 13-15 Intra-inter influence of Decision
5.8 Pattern 16-18 Intra-inter influence of Structure
5.9 Pattern 19-21 Inter-intra influence of Decision
5.10 Pattern 22-24 Inter-intra Influence of Structure
6 Influence from More than One Origins 
6.1 The Prominent Influence-by the Priority of Influence
6.2 The Collective Influence-by the Weight of Influence
6.3 The role of structure in collective influence
6.3.1 Three Levels of Influence: from Independent Agents, Grouped Agents to Influencing Agents
6.3.2 The Influential Effect from Structure among Agents (an ordinal approach)
6.3.3 The Interplay of Group and Structure Effect (a cardinal approach)
7 Influence with Abstention and Constraints 
7.1 Abstention
7.1.1 Comparison between Value Gained and Cost
7.2 Constraints and Partial Domains
7.3 Constrained CP-nets
7.3.1 Consistency notions
7.3.2 Checking the consistency notions
7.3.3 Achieving top and local consistency in constrained CP-nets
7.4 Collective decision-making with Constrained Profiles
7.4.1 Top, local, and dependency consistency
7.4.2 Aggregation in non-consistent profiles
7.4.3 Properties of CLA
7.5 Collective Decision-making with Abstention
7.6 Domains and Influence: perspectives
III Application of Influence 
8 Testing the Models of Influence by Qualitative Case Studies 
8.1 “Great Powers Worship the Reputation”
8.2 “Side with Allies and Go against Enemies”
8.3 “Different Influencing Relations Touch Different Sensitive Nerves”
8.4 “Be Close to Your Friends When Your Enemies be Close to Theirs”
8.5 How to Deal with Contradictory Multipartite Relations
8.5.1 Balance Strategy: Offend Neither Side, or Offend One Side then Please the Same Side Later
8.5.2 Revenge Strategy: Offend Neither Side, or Offend One Side then Wait for the Revenge from the Same Side
8.6 How to Maintain Stable Relationships
8.6.1 Unilateral Loyalty or Bear Grudge: Once I Follow You Then I Always Follow You, Once I Oppose to You Then I Always Oppose to You
8.6.2 Mutual Favor or Mutual Harm: If You Play Nice to Me Then I Play Nice Back, If You Play Hard to Me Then I Play Hard Back
9 Testing the Models of Influence by a Quantitative Approach 
9.1 Test Sample: Passed Resolutions with at least One Different Voices
9.1.1 Classified as Different Subjects with Dependencies among Resolutions
9.2 Test Method: Influence Pattern Matching Algorithm Design
9.2.1 Making Assumptions about Influences
9.2.2 Influence Pattern Matching Algorithms
9.3 Test Outcome
9.3.1 Subject 1-Admission of New Memberships
9.3.2 Subject 2-the Iraqi Invasion of Kuwait and the Sanctions against Iraq
9.3.3 Subject 3-Israeli and Palestinian Conflicts
9.3.4 Subject 4-Yugoslav Wars
9.3.5 Subject 5-the Conflicts between India and Pakistan
9.3.6 Subject 6-the Decolonization of Territories and Military Operations of Portugal
9.3.7 Subject 7-the Apartheid Policy and the Invasion by South Africa
9.3.8 Subject 8-the Minority Regime and the Invasion by Southern Rhodesia
9.3.9 Specific Influencing Relations Ranking
9.3.10 General Influence Pattern Comparison
10 Modeling and Simulation of the Influence Models in UN SC Voting 
10.1 Conceptual Model: Reasoning Chart design
10.1.1 Key Concepts and Mechanisms
10.2 Mathematical Model: Variables Definition and Rules Design
10.2.1 Define Variables
10.2.2 Design Rules
10.3 Computer Model: Multi-agent System Modeling and Simulation
10.4 Simulation Experiments and Analysis
10.4.1 Experiments Design from Computer Science Paradigm
10.4.2 Experiments Design from Social Sciences Paradigms
10.4.3 Simulation Analysis and Discussion

