
## Inverse problems in disordered systems

In all the models described before, much effort has been devoted to the understanding of the direct problems. In this context, they typically consist in calculating thermodynamic quantities such as magnetisations and correlation functions from the knowledge of the interactions acting among the spins (in the simplest cases, fields and couplings). This problem is highly non-trivial on its own, as shown by the fact that a solution for the d = 3 Ising model is still lacking. No simple relation can generally be established connecting fields and couplings with the thermodynamic quantities they induce, and an exact computation is not feasible, as it would require a number of operations exponential in the size of the system (because of the sum over all the possible configurations).
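To make the exponential cost concrete, the sketch below (a minimal illustration, not part of the original text; all names are illustrative) computes the exact magnetisation of a tiny Ising system by summing over all 2^N configurations, an approach that becomes hopeless beyond a few tens of spins.

```python
from itertools import product
import math

def ising_magnetisation(J, h, beta=1.0):
    """Exact magnetisation by full enumeration of the 2^N configurations.

    J maps pairs (i, j) to couplings, h is the list of local fields.
    The cost is exponential in N by construction.
    """
    N = len(h)
    Z, m = 0.0, 0.0
    for sigma in product([-1, +1], repeat=N):
        E = -sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
        E -= sum(h[i] * sigma[i] for i in range(N))
        w = math.exp(-beta * E)   # Boltzmann weight of this configuration
        Z += w                    # partition function
        m += w * sum(sigma) / N   # weighted magnetisation per spin
    return m / Z

# Three ferromagnetically coupled spins in a weak positive field:
# the magnetisation is small but strictly positive.
m = ising_magnetisation({(0, 1): 1.0, (1, 2): 1.0}, [0.1, 0.1, 0.1])
```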

Because of the variety of systems that can be described as Ising models, in recent years much research has been devoted to the study of the far more complicated inverse Ising problem [18]. In the latter, one has access to the thermodynamic quantities and wants to estimate the interactions that generated them. Formally, the problem can be stated rather simply: knowing that each set of J and h generates certain thermodynamic quantities, one has to find out which of them is most likely to have generated the ones we observe; an exact, exhaustive search would again require an exponential number of operations.

A brute-force approach, typically giving good results but very slow in its convergence, is a Monte Carlo simulation that, starting from an initial arbitrary choice of fields and couplings, iteratively calculates the thermodynamic quantities they imply and compares them to the true ones. The parameters are then changed so as to diminish the discrepancy between the two, for instance by a gradient descent on the parameters, and the procedure is repeated until a set of J and h producing thermodynamic quantities similar enough to the observed ones is found. This approach is made harder by the fact that here even the solution of the direct problem, which has to be performed multiple times, is not trivial at all.
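The iterative scheme just described can be sketched as follows. This is a toy version under two simplifying assumptions: the Monte Carlo estimation of the direct problem is replaced by exact enumeration (only feasible for tiny systems), and the learning rate and step count are arbitrary illustrative values.

```python
from itertools import product
import math

def model_moments(J, h, beta=1.0):
    """Direct problem, solved exactly: <s_i> and <s_i s_j> for a tiny model."""
    N = len(h)
    Z, mi, mij = 0.0, [0.0] * N, {k: 0.0 for k in J}
    for s in product([-1, +1], repeat=N):
        E = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        E -= sum(h[i] * s[i] for i in range(N))
        w = math.exp(-beta * E)
        Z += w
        for i in range(N):
            mi[i] += w * s[i]
        for (i, j) in J:
            mij[(i, j)] += w * s[i] * s[j]
    return [x / Z for x in mi], {k: v / Z for k, v in mij.items()}

def boltzmann_learning(mi_data, mij_data, eta=0.5, steps=2000):
    """Gradient ascent on the log-likelihood: nudge J and h until the
    model moments match the observed ones (moment matching)."""
    N = len(mi_data)
    h, J = [0.0] * N, {k: 0.0 for k in mij_data}
    for _ in range(steps):
        mi, mij = model_moments(J, h)
        for i in range(N):
            h[i] += eta * (mi_data[i] - mi[i])    # reduce field discrepancy
        for k in J:
            J[k] += eta * (mij_data[k] - mij[k])  # reduce coupling discrepancy
    return J, h
```

On moments generated from known couplings, this scheme recovers them accurately; in realistic settings the direct step would instead be a noisy and costly Monte Carlo estimate, which is precisely what makes the approach slow.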

A comprehensive review of the methods developed in this field would go beyond the scope of this thesis; I will therefore just stress the variety of methods proposed to address this central problem: they range from different orders of mean-field approximations [19] to adaptive clustering approaches [20], from methods making use of linear response theory [21] to expansions for small values of the correlations [22], to approximations of the probabilistic features of the system making use of the pseudolikelihood [23].

### A computer science modelling framework: the constraint satisfaction problems

A modelling framework of particular interest for the type of problems addressed in this thesis is that of constraint satisfaction problems (CSP) [24]. Such problems involve a certain number of objects and a set of conditions or limitations they have to satisfy. Each of these conditions involves only a few variables, in a number that remains finite and typically small even in the infinite system size limit. One such problem is the already briefly mentioned q-colouring problem, in which each constraint involves exactly two variables (i.e. two neighbouring sites) and requires them not to take the same colour in the proposed assignment.

The most natural solution one can look for in these situations is an assignment of the variables such that all the constraints are satisfied; in the previous example, one would look for an assignment such that no couple of neighbouring nodes is assigned the same colour. For the sake of generality, a more general definition is usually given. The problem is assumed to be defined on a set of N variables x_1, …, x_N, each of them taking values in a finite set χ; a global assignment will be indicated as x = (x_1, …, x_N) ∈ χ^N.

Because of the constraints, each assignment is characterised by a cost, or an energy (using respectively a computer science or a physics language), E(x), corresponding to the number of constraints it violates. The natural request expressed in the previous paragraph can hence be stated quantitatively as the search for an x such that E(x) = 0; however, other legitimate questions can also be asked. A slight generalisation of the previously discussed request is the optimisation problem, whose output is a configuration with minimum total cost. In the cases in which no assignment can satisfy all the constraints at the same time, such an algorithm will return an optimal configuration x* = arg min_x E(x); in a q-colouring problem, for instance, it will return an assignment of colours such that the number of neighbouring couples sharing the same colour (i.e. the number of unsatisfied constraints) is the smallest possible. In decision problems, the output is a boolean variable telling whether an assignment such that E(x) ≤ E_0 exists for the problem under analysis, without explicitly finding it. If we set E_0 = 0, these problems can be seen as a sanity check on the constraints involving the variables: indeed, depending on whether the output is positive or not, the CSP is respectively said to be satisfiable or unsatisfiable. An evaluation problem, finally, determines the cost of an optimal solution; formally, the output of an algorithm implementing it will be E* = min_x E(x), the minimum number of constraints an assignment has to violate, without however explicitly proposing such an assignment.

Because of the generality of this framework, many situations can be stated as constraint satisfaction problems. In more applied fields, for instance, one is often interested in optimising something under some constraints.
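As a toy illustration of the optimisation, decision and evaluation variants (a minimal sketch with illustrative function names, not from the original text), the brute-force code below treats a small q-colouring instance; a triangle with q = 2 colours is unsatisfiable, so the optimal cost is E* = 1.

```python
from itertools import product

def energy(assignment, edges):
    """Cost E: number of violated constraints, i.e. monochromatic edges."""
    return sum(assignment[i] == assignment[j] for i, j in edges)

def optimise(n, edges, q):
    """Optimisation problem: an assignment of minimum total cost."""
    return min(product(range(q), repeat=n), key=lambda a: energy(a, edges))

def evaluate(n, edges, q):
    """Evaluation problem: the cost E* of an optimal assignment."""
    return energy(optimise(n, edges, q), edges)

def decide(n, edges, q, E0=0):
    """Decision problem: does an assignment with E <= E0 exist?"""
    return evaluate(n, edges, q) <= E0

# A triangle cannot be 2-coloured: the optimum violates one constraint.
triangle = [(0, 1), (1, 2), (0, 2)]
```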
To give one of many possible examples, in a resource allocation problem one has to decide how to distribute a fixed amount of resources in a system in order to optimise some outcome: in a public transportation system, this may correspond to maximising the number of people able to use the offered service and minimising their waiting time (i.e. optimising the service in some sense) by distributing the overall availability of buses in the best possible way. As usual, we will mainly be concerned with more abstract versions of such situations, which may nevertheless generate insights about such specific applications.

Many CSP are defined on networks and will hence be discussed in chapter 2, devoted to graph theory: the already discussed graph colouring is one such problem, and other examples will be proposed in the following. Another class of CSP, one that has played a central role in the theory of computational complexity, is that of SAT problems. Here the variables x_i are boolean, meaning that their value can be either TRUE or FALSE or, for brevity, respectively 1 or 0; each variable can also be negated, the negation of x_i = 1 taking the value 0 and vice versa. The variables have to satisfy M constraints taking the form of clauses: these are logical ORs of a certain number of variables (in general different for each clause) or of their negations. Because of the definition of the logical OR, for a clause to be satisfied at least one of the variables has to appear with its correct value. Let us consider for instance a clause involving the three variables x1, x3, x7 as follows: x1 OR x3 OR x7.

#### Infeasibility of the straightforward solutions

A common feature of CSP is that finding a solution in a non-efficient way is easy enough. If one thinks of the k-SAT problem, for instance, for any given assignment of the variables the number of clauses it violates (if any) can be evaluated quite easily, i.e. in a time which grows only polynomially with the system size. By systematically trying all the possible assignments, one would be able to answer all the possible questions on the CSP under examination in an exact way. For some of them, one could even stop in advance, for example as soon as an assignment violating no more than a given number of clauses is found. Unfortunately, this turns out to be practically infeasible, as the number of possible assignments is exponential in the size of the instance: for a k-SAT problem (and in general for any problem in which the variables may take one of two values) defined on N variables, there are 2^N possible assignments. This exponential law is far from being a technical detail, as it makes the exhaustive approach completely infeasible in practice even for quite small problems.
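A minimal sketch of this exhaustive strategy (the clause encoding is illustrative: a literal is a pair of variable index and required truth value). Checking one assignment is polynomial in the instance size; the outer loop, however, runs over all 2^N assignments.

```python
from itertools import product

def violated(assignment, clauses):
    """Count violated clauses. A clause fails only if every one of its
    literals (i, v) takes the wrong value, i.e. assignment[i] != v.
    The cost of this check is polynomial in the number of literals."""
    return sum(all(assignment[i] != v for i, v in c) for c in clauses)

def exhaustive_sat(n, clauses):
    """Decision by brute force: loop over all 2^n assignments."""
    return any(violated(a, clauses) == 0
               for a in product([False, True], repeat=n))

# (x0 OR NOT x1 OR x2) AND (NOT x0 OR x1 OR NOT x2): satisfiable.
clauses = [[(0, True), (1, False), (2, True)],
           [(0, False), (1, True), (2, False)]]
```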

Smarter algorithms sparing us the check of all possible solutions can be devised; one class of such algorithms incrementally builds potential solutions as follows. At the beginning of the procedure all the variables are unassigned; at each step one of them is picked at random and temporarily assigned first to 0 and then to 1. If just one of the assignments does not lead to any clause violation, the variable is assigned that value; if neither assignment produces contradictions, the variable is assigned at random to 0 or 1. In some cases, however, both assignments may lead to contradictions; without getting into the details, I will simply say that refined versions of this original algorithm have been designed to face such situations. One of the main elements of these refinements is backtracking, a mechanism by which it is possible to get out of a contradiction like the one described above. In such a situation, the backtracking consists in de-assigning some of the previously fixed variables and restarting. This iterative procedure is repeated until one gets to a complete assignment of the variables without incurring any contradiction.
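The incremental procedure with backtracking can be sketched as below. Two simplifications with respect to the text are assumed: variables are assigned in a fixed order rather than at random, and clauses reuse the illustrative (variable index, required value) literal encoding.

```python
def backtrack(assignment, n, clauses):
    """Incremental construction with backtracking: assign variables one
    by one; on a contradiction, undo the choice and try the other value."""
    def clause_violated(clause):
        # A clause is definitively violated only once all its literals
        # are assigned and none of them is satisfied.
        return all(i in assignment and assignment[i] != v for i, v in clause)

    if any(clause_violated(c) for c in clauses):
        return None                      # contradiction: backtrack
    if len(assignment) == n:
        return dict(assignment)          # complete, consistent assignment
    i = len(assignment)                  # next unassigned variable
    for value in (False, True):
        assignment[i] = value
        result = backtrack(assignment, n, clauses)
        if result is not None:
            return result
        del assignment[i]                # de-assign, try the other value
    return None

# (x0 OR NOT x1) AND (NOT x0 OR x2)
clauses = [[(0, True), (1, False)], [(0, False), (2, True)]]
solution = backtrack({}, 3, clauses)
```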

The procedures discussed can be refined over and over, so as to accelerate the search for a solution or to increase the probability of finding one. For reasons that will be clarified in the following section, however, all these refinements are in general not sufficient to guarantee that a solution of a large enough problem will be found in a reasonable amount of time.

**The worst case scenario**

Given the immense number of CSP one can think of, a question of great importance is how to compare them. More precisely, one would like to build a hierarchy of difficulty (i.e. of the number of operations required) among problems, so as to be able to assign each of them to the appropriate level. Since this framework is largely independent of the details, one could imagine using as proxies for the difficulty of a problem its running time, or the number of operations required to solve it. The hardness of a problem may however differ from one instance to another, and the notion of difficulty of a CSP (and not of one of its specific realisations) has hence still to be defined.

**Spin glasses and CSP: two languages, same problems**

In the previous sections I presented two frameworks apparently very far from each other: on the one hand, the constraint satisfaction problems, originally studied in the computer science community; on the other, the Ising model for magnetic systems and its generalisation to spin glasses and disordered systems. At a certain point people realised that the two were intrinsically similar, and that all the machinery developed in one community could be fruitfully applied by the other (for instance, the study of the worst case scenario to spin glasses, and the search for a ground state in CSP).

In some cases the connection between statistical mechanics and CSP emerges in a very clear way, as a mapping between problems is possible: the parallel between solving a colouring problem and finding the ground state of an antiferromagnetic Potts model has for instance been discussed in section 1.3.4. Even more generally, CSP can be described in terms of spin glasses: in both cases, indeed, a configuration of the binary variables is chosen depending on an assigned cost function. The interactions among the variables are in both cases randomly chosen and constitute a quenched disorder of the system; the procedure of averaging over the quenched disorder represented by the explicit realisation of couplings and fields in a spin glass model is analogous to performing ensemble averages over the instances when considering rCSP. As has been discussed, the energy level of the ground state of a spin glass is not at all trivial to determine, and it is in general larger than zero even for small systems because of the frustration emerging among the couplings. From a CSP point of view, this situation is equivalent to affirming that the system is unsatisfiable, as not all of its constraints can be matched at the same time; also in this case, one may incur such a situation even in very small systems, such as a three-variable CSP on a, b, c in which we require a = b, a = c and b ≠ c. These three conditions clearly cannot be satisfied together, and the system is said to be geometrically frustrated because of the presence of such contradictory conditions.
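The three-variable example can be checked mechanically. The short sketch below (illustrative, not from the text) enumerates all assignments and confirms that, whatever the domain of the variables, at least one of the three contradictory constraints is always violated, so the ground-state energy is strictly positive.

```python
from itertools import product

# The frustrated three-variable CSP from the text: a = b, a = c, b != c.
constraints = [lambda a, b, c: a == b,
               lambda a, b, c: a == c,
               lambda a, b, c: b != c]

def min_violations(domain):
    """Ground-state energy: minimum number of violated constraints
    over all assignments of (a, b, c) in the given domain."""
    return min(sum(not f(a, b, c) for f in constraints)
               for a, b, c in product(domain, repeat=3))

# Even enlarging the domain cannot help: a = b and a = c force b = c,
# contradicting the third constraint, so E* >= 1 always.
print(min_violations(range(2)))   # → 1
```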

**Table of contents :**

**1 Models and frameworks**

1.1 The role of models

1.2 Forward problems, inverse problems

1.2.1 Same phenomena, different questions

1.2.2 A more precise definition

1.3 A physics modelling framework: the Ising model

1.3.1 Motivation

1.3.2 The Ising model

1.3.3 Universality of critical phenomena

1.3.4 The generalisations

1.3.5 Inverse problems in disordered systems

1.4 A computer science modelling framework: the constraint satisfaction problems

1.4.1 Definitions and examples

1.4.2 Infeasibility of the straightforward solutions

1.4.3 The worst case scenario

1.4.4 The typical case scenario

1.5 Spin glasses, rCSP and large deviations

1.5.1 Spin glasses and CSP: two languages, same problems

1.5.2 Looking for atypical events

1.5.3 The large deviations in a simple case

1.6 Statistical mechanics for non-physics problems

1.6.1 A common framework, many fields of interest

1.6.2 Social systems

1.6.3 Economics and finance

1.6.4 Neural networks

1.6.5 Other biological systems

1.6.6 Signal processing

1.6.7 Computer science

**2 The network theory**

2.1 Motivations and definitions

2.1.1 Ubiquity of networks

2.1.2 The birth of graph theory

2.1.3 The main features of a graph

2.1.4 Enriching the network metaphor

2.2 A path towards more realistic networks

2.2.1 The comparison between observed and reproduced features

2.2.2 Reproducing the observed average distance and clustering coefficient

2.2.3 Reproducing the observed degree distribution

2.3 Studying graphs as a branch of probability theory

2.3.1 A mathematical description

2.3.2 Combinatorial problems on graphs

2.4 Phase transitions, critical phenomena and statistical physics approach

2.4.1 Dynamics on networks, dynamics of networks

2.4.2 Statistical physics and network theory

2.4.3 Critical phenomena on networks

2.5 Examples of other problems

2.5.1 The detection of communities

2.5.2 Networked versions of other problems

**3 Contagion dynamics on graphs**

3.1 General framework

3.1.1 The problem and its range of applicability

3.1.2 Epidemic processes and compartmental models

3.1.3 The choice of the dynamics

3.1.4 The characteristics of the graph

3.1.5 One spreading phenomenon, many possible questions

3.1.6 Different approaches

3.2 Minimal contagious sets in random regular graphs

3.2.1 Definition of the problem

3.2.2 The energy function

3.2.3 Mapping to other standard problems in graph theory

3.3 The cavity method treatment

3.3.1 The replica symmetric (RS) formalism

3.3.2 The breaking down of the RS assumption

3.3.3 1RSB and energetic 1RSB formalism

3.3.4 The energetic 1RSB formalism

3.4 Main results

3.4.1 The solutions of the problem

3.4.2 Analytical results

3.4.3 Numerical results

3.5 Future perspectives

**4 Exploring networks**

4.1 Graph exploration and random walks

4.1.1 Exploring a graph, a very general problem

4.1.2 Interest and applications in different fields

4.1.3 Random walks on graphs

4.1.4 Joining the two: graph exploration through random walks

4.2 Rare event statistics on random walks on networks

4.2.1 The problem

4.2.2 The model

4.2.3 The large deviations

4.3 Main results

4.3.1 A degree-based approximation

4.3.2 Localisation transition

4.3.3 Mode-switching transition

4.4 Summary and future perspectives

4.4.1 Summary

4.4.2 Better understanding of the critical behaviour

4.4.3 Different functions

4.4.4 Functions taking value on the links

4.4.5 Different topologies

4.4.6 Improvement of clustering techniques

**5 Inferring the graph**

5.1 Motivation, applications and connection to other problems

5.1.1 Motivation

5.1.2 The matrix completion problem

5.1.3 The collaborative filtering

5.2 The inference of a network of interactions from a partial knowledge of the correlations

5.2.1 Definition of the problem

5.2.2 Motivation, applications and previous work

5.3 The model

5.3.1 Statistical context of the project

5.3.2 Prior, posterior and optimal estimate of the model

5.4 Choosing the couplings and evaluating the performances

5.4.1 Artificial models

5.4.2 The geometry of J

5.4.3 Generating synthetic J

5.4.4 The observables

5.5 Main analytical results

5.5.1 Reasons justifying the study of the Hessian

5.5.2 Derivation of the Hessian

5.5.3 Derivation of the Hessian in the eigenbasis of C

5.5.4 Perturbative approximation of the Hessian

5.6 The heuristics

5.6.1 Choices based on inference

5.6.2 Heuristics making use of the Hessian

5.6.3 Intractability of the complete Hessian

5.6.4 Diagonal elements of the Hessian

5.6.5 Validity of the perturbative approximation

5.7 Initial knowledge and effects of measurements

5.7.1 Choice of the initial condition

5.7.2 Update of C after a measurement

5.8 Results

5.8.1 Dependency on the initial condition

5.8.2 The exploration-exploitation trade-off

5.8.3 Different heuristics on a fixed geometry

5.8.4 Different graphs, different features

5.9 Future perspectives

5.9.1 Biological applications

5.9.2 Theoretical challenges

5.9.3 More complicated heuristics

5.10 Appendices


5.10.1 Caveat on the procedure for creating synthetic J

5.10.2 The value of the couplings

5.10.3 Regularisation and noise

5.10.4 Uniqueness of the minimum

5.10.5 Completion of the initial condition

**6 Conclusions and perspectives**

6.1 The network theory in the big data era

6.1.1 The end of theory?

6.1.2 An example of rich data analysis: Quora

6.1.3 New solutions to old questions

6.1.4 The future of network theory

6.2 More data, same problems

6.2.1 Optimisation of spreading on networks

6.2.2 Extreme events in network exploration

6.2.3 Inferring networks with partial information

References

A Minimal contagious sets in random regular graphs

B Rare events statistics of random walks on networks: localization and other dynamical phase transitions