Phase transition: Emergence of a giant component
The phase-transition result regarding the emergence of a giant component in ER(n, λ/n) (Section 1.2.1) has been generalized to the configuration model by Molloy and Reed in  and , where the critical boundary of the phase transition is given by

E[D(D − 2)] > 0.    (1.18)
Superficially, this condition appears quite different from the case of ER(n, λ/n), where the critical boundary of the phase transition is determined by the asymptotic average degree. But the similarity
becomes evident when one examines the approximating branching process. Let us start the exploration of a connected component of the configuration model from a vertex i. Then the number of neighbours, analogous to the first generation in the approximating branching process, has distribution (p_k)_{k≥0}. But this is no longer true from the second generation onwards. A first-generation vertex with degree k is k times as likely to be chosen as one with degree 1, so the distribution of the number of children of a first-generation vertex becomes size-biased and is given by

q_{k−1} = k p_k / E[D],    (1.19)

for all k ≥ 1. The k − 1 on the left-hand side comes from the fact that we used up one edge connecting to the vertex. The same logic applies, up to an approximation, to the subsequent generations. Now, the expectation of this size-biased distribution is given by

ν := Σ_{k=0}^∞ k q_k = E[D(D − 1)] / E[D].
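As a quick numerical illustration of the quantities above, the following sketch computes the size-biased distribution (1.19) and its mean ν from a degree distribution (p_k); the particular distribution used is an arbitrary example, not taken from the source.

```python
# Sketch: size-biased offspring distribution (1.19) and its mean nu
# for a configuration model with degree distribution (p_k).

def size_biased(p):
    """q[k-1] = k * p[k] / E[D] for k >= 1 (one half-edge is used
    to reach the vertex, hence the index shift)."""
    mean_d = sum(k * pk for k, pk in enumerate(p))
    return [k * p[k] / mean_d for k in range(1, len(p))]

def nu(p):
    """nu = E[D(D-1)] / E[D]; supercritical iff nu > 1,
    equivalently E[D(D-2)] > 0 (the Molloy-Reed condition)."""
    mean_d = sum(k * pk for k, pk in enumerate(p))
    return sum(k * (k - 1) * pk for k, pk in enumerate(p)) / mean_d

p = [0.0, 0.2, 0.3, 0.5]   # illustrative: P(D=1)=0.2, P(D=2)=0.3, P(D=3)=0.5
q = size_biased(p)         # sums to 1, as any probability distribution should
print(nu(p) > 1)           # -> True: a giant component exists whp
```

Note that q sums to 1 automatically, since the normalizing constant is exactly E[D].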
Diameter of weighted configuration model
We will now give the result for the diameter of the weighted configuration model, i.e. the configuration model whose edges are given i.i.d. exponentially distributed weights with parameter 1. The result given here is proved by Amini and Lelarge in .
Recall that for a graph G, the diameter of the graph is given by

diam(G) := max{dist(a, b) : a, b ∈ V, dist(a, b) < ∞},    (1.32)

where dist is the usual graph distance and V is the vertex set of G. Also recall from Section 1.3.1 that if ν > 1, then a giant component exists whp in the configuration model. We assume this for the following theorem. Let d_min := min{k : p_k > 0}, and assume that for k < d_min, |{i : d_i = k}| = 0 for all n sufficiently large. We state the result only for the case d_min ≥ 3.
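Definition (1.32) can be made concrete with breadth-first search: the diameter is the largest finite pairwise distance, so infinite distances between different components are simply never produced by BFS. A minimal sketch, on an arbitrary illustrative graph:

```python
# Sketch: diam(G) as in (1.32), via BFS from every vertex.
# Pairs in different components (distance infinity) are excluded
# automatically, because BFS only returns reachable vertices.
from collections import deque

def bfs_dist(adj, src):
    """Graph distances from src to every reachable vertex."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diameter(adj):
    return max(max(bfs_dist(adj, u).values()) for u in adj)

adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}  # two components
print(diameter(adj))  # -> 2, the path 0-1-2; the pair (0, 3) is ignored
```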
Local weak convergence
We briefly introduce here the idea of local weak convergence in the context of random rooted graphs, which provides the framework for our discussion on convex comparison of random graphs in Chapter 5.
We refer to Aldous and Steele (2002)  for more details and applications of this idea. We first recall that a rooted graph is a graph with a distinguished vertex called the root, and a rooted graph isomorphism from G to G' is a graph isomorphism which maps the root of G to the root of G'. We now recall what is meant by local weak convergence for unweighted graphs.
Definition 1.18 (Local Weak Convergence) Let 𝒢 be the set of rooted graphs. For G ∈ 𝒢, we denote by N_k(G) the restriction of G to those vertices which are separated from the root by at most k edges. We say that G_n converges to G_∞ in 𝒢 if for each k ∈ ℕ, there exists an n_0 = n_0(k, G_∞) such that for all n ≥ n_0 there exists a rooted graph isomorphism from N_k(G_∞) to N_k(G_n).
This definition determines a topology that makes 𝒢 into a complete separable metric space. Local weak convergence is the usual weak convergence with respect to this space. In the context of the random graphs introduced in this chapter, it is natural to ask whether [ER(n, λ/n); n ∈ ℕ] and [G(n, (d_i)_{i=1}^n); n ∈ ℕ] converge to their approximating branching processes in the local weak sense. The answer is indeed yes. We do not formally state the results here, since they are relevant only for the informal discussion in Chapter 5, but refer to Dembo and Montanari (2009)  for the details in the case of the configuration model (of which the Erdős-Rényi model is a special case).
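The neighbourhoods N_k(G) of Definition 1.18 are straightforward to extract by a depth-limited BFS. The following sketch returns the vertex set of N_k for an arbitrary illustrative adjacency list (checking the rooted isomorphism itself is a separate, harder task and is not attempted here).

```python
# Sketch: the vertex set of N_k(G), the restriction of G to vertices
# within k edges of the root (Definition 1.18).
from collections import deque

def neighborhood(adj, root, k):
    """All vertices at graph distance at most k from root."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if dist[u] == k:        # do not expand beyond depth k
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(neighborhood(adj, 0, 1))  # -> {0, 1, 2}
```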
Enhanced Configuration Model
In order to generalize the diffusion process in the configuration model, we enhance the (classical) configuration model introduced in Section 1.3 by considering two types of half-edges. Transmitter half-edges of a given vertex represent links through which this vertex will influence its neighbours, i.e. pass on the information once it has it. Its receiver half-edges represent links through which this vertex will not propagate the information to its neighbours. The neighbours receive the information through both their transmitter and receiver half-edges whenever these are matched to a transmitter half-edge of the information sender. The two types of half-edges are not distinguished during the uniform pairwise matching of all half-edges; they are distinguished only to trace the propagation of information. Assuming the usual consistency conditions for the numbers of transmitter and receiver half-edges, the enhanced configuration model is asymptotically (when the number of vertices n goes to infinity) described by a vector of two, not necessarily independent, integer-valued random variables, representing the transmitter and receiver degrees of the typical vertex. Equivalently, we can consider the total vertex degree, representing the total number of friends of a person, and its transmitter degree, representing the number of friends he/she is able to influence. Remark that this model is a generalization of the following more or less classical cases.
(i) (Pure) Configuration Model: A vertex can influence all its neighbours, that is, all the half-edges are transmitter half-edges (see [48, 49, 38]).
(ii) Bond Percolation: A vertex can influence each of its neighbours independently with probability p. Note that there is a difference between such a bond-percolation model (which has oriented edges) and the “usual” bond percolation studied on the configuration model (cf. [29, 35]), in which edges of the underlying graph are appropriately thinned to obtain (non-directed) bond-percolated components. It is easy, however, to see that one can couple the two models so that the information propagation starting from one arbitrarily fixed vertex reaches exactly all the vertices of its bond-percolated component.
(iii) Node Percolation: A vertex can influence all of its neighbours with probability p and none of them with probability 1 − p. Again, there is a slight difference from the previous studies in [29, 35]. This time one can couple the two models so that the information propagation started from any vertex reaches all the vertices of its node-percolated component and, in addition, some extra leaves which would have been closed in the node-percolated graph.
(iv) Coupon-Collector propagation: A vertex chooses, some number of times, one of its neighbours (with replacement, i.e., forgetting its previous choices) to whom it propagates the message.
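The four special cases above differ only in how the transmitter degree is obtained from the total degree. The following sketch makes this concrete; the function name, parameters and model labels are ours, chosen for illustration, and are not notation from the source.

```python
# Sketch: sampling the transmitter degree of a vertex of total degree d
# under the four special cases of the enhanced configuration model.
import random

def transmitter_degree(d, model, p=0.5, trials=3):
    if model == "pure":      # (i) every half-edge transmits
        return d
    if model == "bond":      # (ii) each neighbour influenced independently w.p. p
        return sum(random.random() < p for _ in range(d))
    if model == "node":      # (iii) all neighbours w.p. p, none w.p. 1 - p
        return d if random.random() < p else 0
    if model == "coupon":    # (iv) `trials` choices with replacement;
        # distinct chosen neighbours receive the message
        return len({random.randrange(d) for _ in range(trials)})
    raise ValueError(model)
```

For instance, under the coupon-collector case a vertex of degree 5 making 3 choices influences between 1 and 3 distinct neighbours, reflecting that repeated choices are forgotten.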
Pioneers —Branching process heuristic
Our analysis technique for obtaining the above results involves the simultaneous exploration of the configuration model and the propagation of influence. Another commonly used method to explore the components of the configuration model is to make the branching process approximation in the initial stages of the exploration process. Although we will not explicitly follow this path in this chapter, a heuristic analysis of the branching process approximation of our propagation model provides some important insights about the size of the set of good pioneers (using Theorem 1.1).
Coming to the approximation: if we start the exploration with a uniformly chosen vertex i, then the numbers of its neighbours that it does not influence and those that it does, denoted by the random vector (D_i^(r), D_i^(t)), have joint distribution (p_{v,w}). But since the probability of being influenced is proportional to the degree, the number of neighbours of a first-generation vertex, excluding its parent (the vertex which influenced it), does not follow this joint distribution. Their joint distribution, as well as the joint distribution in the subsequent generations, denoted by (D̃^(r), D̃^(t)), is given in .
Analysis of the Original Forward-Propagation Process
The following analysis is similar to the one presented in , and wherever the proofs of analogous lemmas, theorems, etc. do not have any new point of note, we refer the reader to  without giving the details.
Throughout the construction and propagation process, we keep track of what we call active transmitter half-edges. To begin with, all the vertices and their attached half-edges are sleeping, but once influenced, a vertex and its half-edges become active. Sleeping and active half-edges together constitute, at any time, what we call living half-edges, and when two half-edges are matched to reveal an edge along which the flow of influence has occurred, the half-edges are pronounced dead. Half-edges are further classified, according to their ability or inability to transmit information, as transmitters and receivers respectively. We initially give all the half-edges i.i.d. random maximal lifetimes with distribution exp(1), and then go through the following algorithm.
C1 If there is no active half-edge (as in the beginning), select a sleeping vertex and declare it active, along with all its half-edges. For definiteness, we choose the vertex uniformly at random among all sleeping vertices. If there is no sleeping vertex left, the process stops.
C2 Pick an active transmitter half-edge and kill it.
C3 Wait until the next living half-edge dies (spontaneously, due to the expiration of its exponential lifetime).
This half-edge is joined to the one killed in the previous step to form an edge of the graph along which information has been transmitted. If the vertex it belongs to is sleeping, we change its status to active, along with all of its half-edges. Repeat from the first step.
Every time C1 is performed, we choose a vertex and trace the flow of influence from there onwards. Just before C1 is performed again, when the number of active transmitter half-edges reaches 0, we have explored the full extent of the previously uninfluenced part of the graph that the chosen vertex can influence.
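For intuition, steps C1-C3 can be sketched in a simplified discrete-time form. This is our own simplification, not the source's construction: the exponential clocks only determine the order of spontaneous deaths in time, so for the purpose of identifying which vertices get influenced we may simply pair each C2-killed transmitter half-edge with a uniformly chosen living half-edge.

```python
# Sketch (discrete-time simplification of C1-C3): which vertices does
# `start` influence? deg_t[i] / deg_r[i] are the transmitter / receiver
# degrees of vertex i.
import random

def influenced_component(deg_t, deg_r, start, rng=random):
    owner, trans = [], []                 # one entry per half-edge
    for i, (dt, dr) in enumerate(zip(deg_t, deg_r)):
        owner += [i] * (dt + dr)
        trans += [True] * dt + [False] * dr
    alive = [True] * len(owner)
    awake = {start}                                       # C1
    while True:
        actives = [h for h in range(len(owner))
                   if alive[h] and trans[h] and owner[h] in awake]
        if not actives:                   # no active transmitter half-edge left
            return awake
        h = actives[0]                                    # C2: kill one
        alive[h] = False
        rest = [g for g in range(len(owner)) if alive[g]]
        if not rest:
            return awake
        g = rng.choice(rest)              # C3: next spontaneous death,
        alive[g] = False                  # joined to h; its vertex wakes up
        awake.add(owner[g])

# two vertices, one transmitter half-edge each: 0 influences 1
print(influenced_component([1, 1], [0, 0], 0))  # -> {0, 1}
```

Note that the C3 half-edge may be a receiver: its vertex still wakes up, consistent with neighbours receiving information through receiver half-edges matched to a transmitter.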
Let S_T(t), S_R(t), A_T(t) and A_R(t) represent the numbers of sleeping transmitter, sleeping receiver, active transmitter and active receiver half-edges, respectively, at time t. Then R(t) := A_R(t) + S_R(t) and L(t) := A_T(t) + A_R(t) + S_T(t) + S_R(t) = A_T(t) + S_T(t) + R(t) denote the numbers of receiver and living half-edges, respectively, at time t.
For definiteness, we will take them all to be right-continuous, which along with C1 entails that L(0) = 2m − 1. Subsequently, whenever a living half-edge dies spontaneously, C3 is performed, immediately followed by C2. As such, L(t) decreases by 2 every time a living half-edge dies spontaneously, up until the last living one dies and the process terminates. Also remark that all the receiver half-edges, both sleeping and active, continue to die spontaneously.
The following consequences of the Glivenko-Cantelli theorem are analogous to those given in , and we state them without proof.
Table of contents:
1 Random Graphs: an Overview
1.1 Galton-Watson Branching Processes
1.2 Erdős-Rényi Random Graph
1.2.1 Phase transition: Emergence of a giant component
1.3 Configuration Model
1.3.1 Phase transition: Emergence of a giant component
1.3.2 Information diffusion
1.3.4 Diameter of weighted configuration model
1.4 Local weak convergence
2 Viral Marketing in Configuration Model
2.1.1 Enhanced Configuration Model
2.1.4 Related Work
2.2 Notation and Results
2.2.1 Forward influence propagation
2.2.2 Pioneers— Branching process heuristic
2.2.3 Dual Back-Propagation Process
2.2.4 Concluding Remarks
2.3 Analysis of the Original Forward-Propagation Process
2.4 Analysis of the Dual Back-Propagation Process
2.5 Duality Relation
3 Viral Marketing: Examples, Applications and Numerical Studies
3.1.1 Related Work
3.2.1 Bernoulli transmissions
3.2.2 Enthusiastic and apathetic users or node percolation
3.2.3 Absentminded users or coupon-collector transmissions
3.2.4 Numerical examples
3.2.4.1 Analytic evaluation
3.2.4.2 Case study
3.3 Application to Viral Campaign Evaluation
4 Isolated Vertices and the Longest Edge of the Minimum Spanning Tree of Weighted Configuration Model
4.3 Isolated vertices of weighted configuration model
4.4 Longest edge of MST
5 Future Work: Convex comparison of Random Graphs
5.1 Convex Comparison of Random Graphs
5.1.1 Convex Order on Galton-Watson Tree and Implications
5.1.2 Convex Order on Sequences of Finite Random Graphs and Implications in Configuration Model