Maximum capacitated matching via the cavity method 


Local weak convergence

The notion of local weak convergence of a sequence of graphs was introduced by Benjamini and Schramm [15] to study random walks on planar graphs. This framework was further formalized by Aldous and Steele in [5] and by Aldous and Lyons in [4]. More recent studies on the subject include [19, 23, 22].
A first step is to introduce the notion of local convergence of rooted graphs. A rooted graph $(G, v)$ is a graph $G$ together with a distinguished vertex $v$ of $G$ which is called the root. An isomorphism of rooted graphs is a graph isomorphism which sends the root of one to the root of the other, i.e., a one-to-one mapping of the vertices of a graph to those of another graph which preserves the root and the edges (and their marks). We denote by $[G, v]$ the isomorphism class of $(G, v)$ and by $\mathcal{G}_*$ the set of isomorphism classes of rooted locally finite graphs. For any $k \in \mathbb{N}$, we denote by $(G, v)_k$ the rooted subgraph obtained from $(G, v)$ by keeping only those vertices at graph-distance at most $k$ from $v$, and we write $[G, v]_k = [(G, v)_k]$ for its isomorphism class. We can turn $\mathcal{G}_*$ into a complete separable metric space (i.e., a Polish space) by defining the following metric: let the distance between $[G, v]$ and $[G', v']$ be $1/(1 + \delta)$, where $\delta$ is the supremum of those $k \in \mathbb{N}$ such that $[G, v]_k = [G', v']_k$. Then, a sequence of rooted graphs $((G_n, v_n))_{n \in \mathbb{N}}$ converges locally to $(G, v)$ in $\mathcal{G}_*$ if the distance between $(G_n, v_n)$ and $(G, v)$ goes to $0$ as $n \to \infty$. Equivalently,
$$(G_n, v_n) \xrightarrow[n \to \infty]{} (G, v) \text{ locally} \iff \forall k \in \mathbb{N},\ \exists n_k \text{ such that } \forall n \geq n_k,\ [G_n, v_n]_k = [G, v]_k.$$
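For instance (an illustration of the definition, not from the original text): if $[G, v]_k = [G', v']_k$ for all $k \leq 3$ but $[G, v]_4 \neq [G', v']_4$, then $\delta = 3$ and the distance between $[G, v]$ and $[G', v']$ is $1/(1+3) = 1/4$; if $(G, v)$ and $(G', v')$ are isomorphic, then $\delta = \infty$ and the distance is $0$.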
At this point, we recall that a sequence of probability measures $(\mu_n)_{n \in \mathbb{N}}$ on a certain metric space $S$ converges weakly to a probability measure $\mu$, which we denote by $\mu_n \Rightarrow \mu$, if for every bounded continuous function $f : S \to \mathbb{R}$, $\int f \, d\mu_n$ converges to $\int f \, d\mu$. As $\mathcal{G}_*$ is a Polish space, we can endow it with its Borel $\sigma$-algebra (see, e.g., [16]) and consider the complete separable metric space of probability measures over $\mathcal{G}_*$, denoted by $\mathcal{P}(\mathcal{G}_*)$. The definition of weak convergence then applies to probability measures in $\mathcal{P}(\mathcal{G}_*)$. Furthermore, given a finite graph $G$ there is a natural way to form a probability measure on $\mathcal{G}_*$: we let $\mathcal{U}(G)$ be the distribution over $\mathcal{G}_*$ obtained by choosing a uniform random vertex of $G$ as a root. Then, we say the sequence of finite graphs $(G_n)_{n \in \mathbb{N}}$ converges locally weakly towards the measure $\mu$ on $\mathcal{G}_*$ when the sequence of distributions $(\mathcal{U}(G_n))_{n \in \mathbb{N}}$ converges weakly towards $\mu$.
Any measure $\mu$ which is the limit of a sequence of finite graphs in the sense seen above has the property that it is unimodular [5], which we define next. Similarly to the space $\mathcal{G}_*$, we define the space $\mathcal{G}_{**}$ of isomorphism classes of locally finite connected networks with an ordered pair of distinguished vertices, and the natural topology thereon. We call $\mu$ unimodular if it obeys the Mass-Transport Principle (MTP): for Borel functions $f : \mathcal{G}_{**} \to [0, \infty]$, we have
$$\int \sum_{v \in V} f(G, o, v) \, d\mu([G, o]) = \int \sum_{v \in V} f(G, v, o) \, d\mu([G, o]).$$
We let $\mathcal{U}$ denote the set of unimodular Borel probability measures on $\mathcal{G}_*$. For $\mu \in \mathcal{U}$, we write $b(\mu)$ for the expectation of the capacity constraint of the root with respect to $\mu$.
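As an aid to intuition (our illustration, not part of the original text): interpreting $f(G, o, v)$ as an amount of mass sent from $o$ to $v$, the MTP states that under $\mu$ the root sends out, in expectation, exactly as much mass as it receives. For instance, with $f(G, o, v) = \mathbb{1}\{o \sim v\}$ both sides equal the expected degree of the root, while less trivial choices of $f$ yield genuine constraints on $\mu$.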

Some classical random graph models

In this section, we introduce the Erdős-Rényi random graph, which is arguably the simplest random graph model, as well as some more elaborate random graphs which we will use throughout the thesis. We should warn the reader that we will often use the notation for a particular random variable to refer in fact to its distribution when the meaning is clear from the context, so as to avoid introducing an extra notation for the distribution itself.
The Erdős-Rényi random graph and the Poisson-Galton-Watson tree
The Erdős-Rényi random graph is the simplest and most studied random graph model. Two related versions of the model were introduced by Gilbert [54] and Erdős and Rényi [40]. For a set of $n$ vertices and $p \in [0, 1]$, the Erdős-Rényi random graph $G(n, p)$ is obtained by including independently each edge among the $\binom{n}{2}$ possible edges with probability $p$. Therefore, taken individually, each vertex has a degree given by a binomial random variable $\mathrm{Bin}(n - 1, p)$. As we are interested in the sparse regime, we focus mainly on $p = p(n) = \lambda/n$ as $n \to \infty$, for some fixed $\lambda \in \mathbb{R}_+$. Then, a sequence of Erdős-Rényi random graphs $(G(n, \lambda/n))_{n \in \mathbb{N}}$ with increasing sizes admits almost surely a local weak limit in $\mathcal{P}(\mathcal{G}_*)$. This local weak limit is the Poisson-Galton-Watson (Poisson-GW) distribution $\mathrm{GW}(\mathrm{Poi}(\lambda))$ over trees with expected offspring $\lambda$, which was introduced in 1873 by Galton to study genealogical family trees [110]. A sample Poisson-GW tree is obtained through a branching process as follows: the root $o$ has a number of children $X$ given by a Poisson random variable with mean $\lambda$, i.e.,
$$\mathbb{P}(X = k) = \mathbb{P}(\mathrm{Poi}(\lambda) = k) = \frac{\lambda^k e^{-\lambda}}{k!}.$$
Then, each of these children independently has a number of children distributed as $\mathrm{Poi}(\lambda)$, and so on. The rooted tree obtained via this process is almost surely locally finite. For a detailed study of branching processes and random graphs, one can refer to [9, 107]. A detailed proof of the almost sure local weak convergence of the Erdős-Rényi random graph can be found in [6, 18, 61].
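To make the branching-process construction concrete, here is a minimal sketch (our own illustration, not code from the thesis) of sampling a Poisson-GW tree truncated at depth $k$; truncation is natural here since for $\lambda > 1$ the full tree is infinite with positive probability, while only the neighbourhood $[G, o]_k$ is needed for local weak convergence.

```python
import math
import random

def sample_poisson(lam: float) -> int:
    """Sample Poi(lam) via Knuth's method: multiply uniforms until the
    product drops below e^{-lam} (adequate for the small lam used here)."""
    threshold = math.exp(-lam)
    k, p = 0, random.random()
    while p > threshold:
        k += 1
        p *= random.random()
    return k

def sample_gw_tree(lam: float, depth: int) -> dict:
    """Sample a Poisson(lam) Galton-Watson tree truncated at `depth`.
    Returns an adjacency dict {vertex: [children]}; vertex 0 is the root o."""
    tree = {0: []}
    frontier = [(0, 0)]          # pairs (vertex, generation)
    next_id = 1
    while frontier:
        v, g = frontier.pop()
        if g == depth:
            continue             # truncation: no children beyond `depth`
        for _ in range(sample_poisson(lam)):
            tree[v].append(next_id)
            tree[next_id] = []
            frontier.append((next_id, g + 1))
            next_id += 1
    return tree

# Example: with lam = 2, the root has on average 2 children.
random.seed(0)
t = sample_gw_tree(lam=2.0, depth=3)
print(len(t[0]), "children at the root;", len(t), "vertices within distance 3")
```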

Random graphs with prescribed degree distribution

The Erdős-Rényi model has only one parameter which can be adjusted, the average degree. In order to better fit real-life graphs, it is useful to be able to capture more of their features and incorporate them into a random graph model. The so-called configuration model (introduced by Gallager [48] and refined by Bender and Canfield [14], Bollobás [17] and Wormald [115]) is one step in this direction, where one can specify the degree distribution of the $n$ vertices of the graph. Given a degree distribution $\pi \in \mathcal{P}(\mathbb{N})$ with finite mean, we first sample independently from $\pi$ the degree $d_v$ of each vertex $v \in V$. Starting from an empty graph, with vertex set $V$ and no edges, we then add edges at random by pairing successively, uniformly and independently the stubs at each vertex in a manner consistent with the degree sequence $\mathbf{d} = (d_1, \ldots, d_n)$. This procedure produces a few self-loops and multiple edges: they can be either corrected [75] to obtain a simple random graph $G(n, \mathbf{d})$ uniform among all graphs with the prescribed degree sequence $\mathbf{d}$, or erased/merged [26] to obtain a simple random graph whose empirical degree distribution converges to $\pi$ as $n \to \infty$. In any case, a sequence of such random graphs $(G(n, \mathbf{d}))_{n \in \mathbb{N}}$ with increasing sizes admits almost surely a local weak limit $\mathrm{GW}(\pi) \in \mathcal{P}(\mathcal{G}_*)$, which is the unimodular Galton-Watson distribution with degree distribution $\pi$ (see Example 1.1 in [4], and [39] for detailed proofs of the convergence). The procedure to sample a unimodular GW tree is very similar to that explained before in the special case of Poisson-GW trees: the root $o$ has a number of children drawn from $\pi$; however, all its descendants have numbers of children $X$ independently drawn from the size-biased offspring distribution $\widetilde{\pi} \in \mathcal{P}(\mathbb{N})$ defined by:
$$\mathbb{P}(X = k) = \widetilde{\pi}_k = \frac{(k + 1)\, \pi_{k+1}}{\sum_{i \in \mathbb{N}} i\, \pi_i}.$$
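As an illustration of the two ingredients just described (our own sketch, not code from the thesis): the stub-pairing step of the configuration model, and the size-biased offspring distribution $\widetilde{\pi}$.

```python
import random

def configuration_model(degrees: list[int]) -> list[tuple[int, int]]:
    """Stub-pairing step: vertex v contributes degrees[v] stubs; a uniform
    pairing of all stubs yields the (multi)graph edges. Self-loops and
    multi-edges are kept here; they would be corrected or erased afterwards,
    as discussed above."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    if len(stubs) % 2:
        raise ValueError("the sum of degrees must be even")
    random.shuffle(stubs)  # uniform pairing = match consecutive stubs after shuffling
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

def size_biased(pi: dict[int, float]) -> dict[int, float]:
    """Offspring law of the unimodular GW tree:
    tilde_pi[k] = (k + 1) * pi[k + 1] / sum_i i * pi[i]."""
    mean = sum(i * p for i, p in pi.items())
    return {k - 1: k * p / mean for k, p in pi.items() if k >= 1}

# Example: for pi uniform on {1, 2, 3}, the mean degree is 2 and
# tilde_pi = {0: 1/6, 1: 1/3, 2: 1/2}.
print(size_biased({1: 1/3, 2: 1/3, 3: 1/3}))
```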
For the applications considered in this thesis, we will often encounter a similar model for bipartite random graphs $G = (L \cup R, E)$, where the degree (and constraint) distributions are different for the two parts of the graph: given two distributions $\pi_L$ and $\pi_R$ specifying jointly the degree, the vertex-constraint and the adjacent edge-constraints for vertices in $L$ and $R$ respectively, we can sample two sequences $(d_l, b_l, c_{\partial l})_{l \in L}$ and $(d_r, b_r, c_{\partial r})_{r \in R}$ indicating these quantities for each vertex of the graph. Provided these sequences meet some basic consistency conditions (the sum of degrees should be the same in $L$ and $R$, and the number of edges with a given capacity should be the same when counted from $L$ and from $R$), we can generate a uniform random bipartite graph accordingly (see [29, 51] for more details on the bipartite case). A jointly defined sequence of such random graphs $(G_n)_{n \in \mathbb{N}}$ rooted at a random left-vertex, with $|L_n| = n$ and $|R_n| \sim \tau n$ for some fixed $\tau > 0$, converges almost surely to a local weak limit in $\mathcal{P}(\mathcal{G}_*)$. This time, the local weak limit $\mathrm{GW}(\pi_L, \pi_R)$ is a two-step unimodular Galton-Watson distribution with laws $\pi_L$ and $\pi_R$, and a sample from this distribution can be obtained in a very similar way as before: the parameters of the root $o$ are sampled from $\pi_L$, and the parameters of its descendants at even and odd generations are drawn from the size-biased offspring distributions $\widetilde{\pi}_L$ and $\widetilde{\pi}_R$ respectively, conditionally on the capacity of the edge linking them to their parent. More details on this model can be found in Chapter 2.
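For concreteness, here is a rough sketch of the two-step sampling (ours, not from the thesis), ignoring the vertex- and edge-capacity marks for brevity; in the full model these are drawn jointly, conditionally on the capacity of the edge to the parent. The laws in the usage example are hypothetical, and the size-biased laws could be computed with `size_biased` from the previous sketch.

```python
import random

def sample_from(dist: dict[int, float]) -> int:
    """Draw from a finitely supported law {value: probability}."""
    u, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if u < acc:
            return k
    return max(dist)  # guard against floating-point round-off

def sample_two_step_gw(pi_L: dict, tilde_L: dict, tilde_R: dict, depth: int) -> dict:
    """Two-step GW tree: the root (a left-vertex) has offspring law pi_L;
    descendants at odd generations (right-vertices) use tilde_R, and those
    at even generations (left-vertices) use tilde_L."""
    tree, frontier, next_id = {0: []}, [(0, 0)], 1
    while frontier:
        v, g = frontier.pop()
        if g == depth:
            continue
        law = pi_L if g == 0 else (tilde_R if g % 2 == 1 else tilde_L)
        for _ in range(sample_from(law)):
            tree[v].append(next_id)
            tree[next_id] = []
            frontier.append((next_id, g + 1))
            next_id += 1
    return tree

# Hypothetical laws: left degrees uniform on {2, 3}, right on {1, 2}.
random.seed(0)
pi_L = {2: 0.5, 3: 0.5}
tilde_L = {1: 0.4, 2: 0.6}   # size-biasing of pi_L (mean degree 2.5)
tilde_R = {0: 1/3, 1: 2/3}   # size-biasing of pi_R = {1: 0.5, 2: 0.5}
t = sample_two_step_gw(pi_L, tilde_L, tilde_R, depth=4)
print(len(t), "vertices within distance 4 of the root")
```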
In the next section, we present the two applications considered in this thesis in more detail.


Load-balancing problems

Load balancing is the process of distributing tasks among resources so as to make the load level of the resources as even as possible. This issue has become particularly prominent with the emergence of distributed computing, where loading all the available processors equally ensures the quickest completion of a given set of tasks. However, even a basic setup, with two identical processors and a large number $n$ of tasks with known durations, is already an NP-complete problem, as shown by Karp [63]. Furthermore, many internet applications are nowadays supported by server farms comprising thousands of machines, so in many applications of interest the number of resources itself is also growing. In this situation, with a large number of tasks and resources, the coordination capabilities of the distributed system become a predominant limitation, and it is often more important to find a simple yet efficient scheme (for which randomization is a useful tool) than an optimal deterministic solution, whose complexity and communication requirements would render it impractical.
In this thesis, we consider two main examples of load-balancing problems which share a common abstraction: the performance in the two setups is characterized by the size of a capacitated matching in a bipartite random graph describing the constraints of the problems. This section introduces more formally the two load-balancing examples mentioned and the main questions we try to answer in each context. Each time, we will point out the connection with capacitated matchings as well as the relevant random graph models.

Cuckoo hashing

A hashtable is a data structure that maps items to buckets so as to be able to later retrieve the items or any associated data. The idea is that the item (or an associated descriptor called a key) is hashed into the index of a bucket in an array (the associated data being called a value). Among the many possible items, only a small number are expected to be present at any given time; it is therefore advantageous to dimension the array to store only that smaller expected number of items rather than the whole collection. The purpose of the hashing step is then to map many items to the same bucket index, as hopefully only one of those items is present most of the time; otherwise, a collision occurs and it needs to be resolved in some way. Cuckoo hashing is a particular scheme used to resolve such collisions.
The hashing paradigm is as follows: we are given $n$ items and $m$ buckets. In the most basic setting, we want to have exactly one bucket per item and at most one item per bucket. In more complex versions, multiple copies of each item must be stored, each bucket can hold many items simultaneously, and there are restrictions on how many times an (item, bucket) pair can be used. Critical performance metrics for such systems are the size of the hashtable (the number of buckets) needed to hold a given number of items, and the time it takes to either retrieve or insert items. The multiple-choice hashing strategy is one that guarantees a constant look-up time, while requiring with high probability a hashtable of size $m$ proportional to $n$. This strategy consists in pre-defining a set of $d \geq 2$ buckets for each item, which is then only allowed to pick a bucket within that set. Of course, depending on the choices of the pre-defined sets of buckets, it may be impossible to handle some sets of items simultaneously, and inserting a new item may not be easy. The first issue becomes very unlikely if the hashtable contains enough buckets compared to the number of items it needs to hold, and a lot of work has actually been devoted to figuring out exactly how large the hashtable needs to be. As for the second issue, cuckoo hashing was proposed by Pagh and Rodler [89] as a simple, randomized way to search for a new valid assignment upon arrival of an item: one first checks whether one of the buckets pre-defined for the new item is available, in which case it suffices to pick one of these; otherwise, one of the pre-defined buckets is chosen at random and re-allocated to the new item. The same procedure is then used for the item that has just been evicted (which is thus treated as a new item). An interesting survey [81] reviews the state of research on cuckoo hashing as of 2009 and states some related open questions at that time.

Formally, an instance of the cuckoo hashing scheme can be described by a bipartite graph $G = (L \cup R, E)$ called the hashing graph (or cuckoo graph), where $L$ is the set of $n$ items and $R$ the set of $m$ buckets. The edges adjacent to an item $l \in L$ indicate the $d$ pre-determined buckets to which $l$ can be assigned. To capture the more complex versions of cuckoo hashing, where one needs to store $h$ copies of each item or where each bucket can hold at most $k$ items at a time, we add respectively capacity constraints $b_L = h$ to the left-vertices (items) and $b_R = k$ to the right-vertices (buckets); similarly, if each (item, bucket) pair can be used at most $s$ times, this is indicated by the edge-capacities $c = s$. Note that the minimum number of copies $h$ of each item is modeled as a capacity constraint $b_L$ in the constraint graph; this will be required to establish a link between the feasibility of a cuckoo hashing instance and the size of maximum capacitated matchings in the corresponding hashing graph $G$.

To illustrate the cuckoo hashing setup, Figure 1.2 shows two different hashing graphs, with the items on the left and the buckets on the right side. The two realizations have the same values for the parameters, i.e., the same degree $d = 3$ for the items; each item must be stored $h = 4$ times, each bucket can store up to $k = 3$ items, and each (item, bucket) pair can be used at most twice. Note that the two graphs differ by only one edge (between the central vertices of each part in the right figure).
However, one cannot build a valid hashtable from the hashing graph on the left, while we pointed out a valid assignment on the right, using the same notations as for capacitated matchings in Figure 1.1.
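To make the insertion procedure concrete, here is a minimal sketch of the basic scheme ($d$ choices per item, at most one item per bucket, $h = s = 1$); this is our own illustration rather than code from the thesis, and the `max_steps` bound and `choice_map` are hypothetical devices for the example. In practice, exceeding the step bound would trigger a full rehash of the table.

```python
import random

def cuckoo_insert(table: dict, choices, item, max_steps: int = 100) -> bool:
    """Basic cuckoo insertion: `table` maps bucket -> stored item, and
    `choices(item)` returns the d pre-defined buckets of an item. Returns
    False if the random walk of evictions exceeds `max_steps`."""
    current = item
    for _ in range(max_steps):
        buckets = choices(current)
        free = [b for b in buckets if b not in table]
        if free:                               # a pre-defined bucket is available
            table[random.choice(free)] = current
            return True
        victim_bucket = random.choice(buckets)  # evict a random occupant
        current, table[victim_bucket] = table[victim_bucket], current
    return False

# Hypothetical toy example: 4 items, 5 buckets, d = 2 choices per item.
random.seed(0)
choice_map = {"a": [0, 1], "b": [0, 2], "c": [1, 2], "d": [3, 4]}
table: dict = {}
for it in "abcd":
    assert cuckoo_insert(table, lambda x: choice_map[x], it)
print(table)
```

The random eviction walk is exactly the "cuckoo" step described above; whether all items can eventually be placed for a given hashing graph is precisely the feasibility question tied to maximum capacitated matchings.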

Table of contents:

1 Introduction 
1.1 Notions of graph theory
1.2 Load-balancing problems
1.3 Introduction to the cavity method
1.4 Overview of the results
2 Maximum capacitated matching via the cavity method 
2.1 Introduction
2.2 Asymptotic size of maximum capacitated matchings
2.3 Main proof elements
2.4 Structural properties of local operators
2.5 Finite graphs
2.6 Limit of large graphs
2.7 Exactness in finite bipartite graphs
3 Cuckoo hashing thresholds 
3.1 Introduction
3.2 Cuckoo hashing threshold and hypergraph orientability
3.3 An analysis of double hashing
4 Load-balancing and resource-placement in distributed CDNs 
4.1 Introduction
4.2 Edge-assisted CDN model and statistical assumptions
4.3 Related models and replication strategies
4.4 Performance under optimal matching
4.5 Performance under random greedy matching
4.6 Adaptive replication schemes to minimize losses
