The Economy as a constraint satisfaction problem


Constraint satisfaction problems

In its most general form, the aim of a constraint satisfaction problem is to find a configuration of $N$ variables that satisfies a certain number $M$ of constraints. Typical examples of such problems are the Boolean satisfiability (SAT) problems, in which the variables are Boolean; we describe them briefly below. More specialized and exhaustive treatments (from a physicist's viewpoint) can be found in [79].
We consider here the SAT family of problems. We define a Boolean formula $F$ written in conjunctive normal form [80], $F = C_1 \wedge C_2 \wedge C_3 \wedge \cdots \wedge C_M$, where each clause $C_a$ is a disjunction (logical OR) of literals, a literal being a variable or its negation. The formula is satisfiable (SAT) if there exists an assignment of the variables under which every clause is true.
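As a concrete illustration (a minimal sketch of our own, not taken from the original text), the following Python snippet checks whether a given truth assignment satisfies a CNF formula; the clause encoding (a signed integer ±j standing for the variable x_j or its negation, in the DIMACS style) is simply our choice for the example.

    # Minimal sketch: evaluate a CNF formula F = C_1 AND ... AND C_M.
    # Each clause is a list of signed integers: +j means variable x_j,
    # -j means NOT x_j. `assignment[j]` holds True or False for x_j.

    def clause_satisfied(clause, assignment):
        # A clause (disjunction of literals) is satisfied if any literal is true.
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

    def formula_satisfied(clauses, assignment):
        # The CNF formula is satisfied only if every clause is.
        return all(clause_satisfied(c, assignment) for c in clauses)

    # Example: F = (x1 OR NOT x2) AND (x2 OR x3), with x1=x2=True, x3=False.
    clauses = [[1, -2], [2, 3]]
    assignment = {1: True, 2: True, 3: False}
    print(formula_satisfied(clauses, assignment))  # True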

From CSPs to statistical mechanics

As introduced in Chap. 2, the fundamental idea in statistical mechanics is the description of macroscopic phenomena as properties of the system that emerge statistically. Moreover, in statistical mechanics we are not concerned with the particular microscopic configurations of the system (for instance, the precise position of each molecule in a gas), since essential macroscopic properties like the temperature or pressure of the system do not depend on precise microscopic details. Hence the study of any physical system in statistical mechanics begins by defining a probability measure on the space of possible configurations $\underline{\sigma} = (\sigma_1, \sigma_2, \dots, \sigma_N)$. The variables $\sigma_i$ can take discrete values (like spin variables) or continuous ones (like particle positions in a liquid). To each configuration $\underline{\sigma}$ of the system we associate an energy function $E(\underline{\sigma})$. The probability of finding the system in configuration $\underline{\sigma}$ at temperature $T$ (or inverse temperature $\beta = 1/T$) is then given by the Gibbs measure, $\mu(\underline{\sigma}) = \frac{1}{Z} \exp\left(-\frac{1}{T} E(\underline{\sigma})\right)$, where $Z$ is the normalizing partition function.
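To make the measure concrete, here is a small illustrative sketch (our own example, assuming a one-dimensional Ising chain with energy $E(\underline{\sigma}) = -\sum_i \sigma_i \sigma_{i+1}$, which is not a model discussed in this section) that enumerates every configuration of a few spins and computes the Gibbs probabilities by brute force.

    # Minimal sketch: the Gibbs measure for a toy chain of N Ising spins
    # with energy E(sigma) = -sum_i sigma_i * sigma_{i+1} (open boundary).

    import itertools
    import math

    def energy(sigma):
        # Nearest-neighbour ferromagnetic coupling along the chain.
        return -sum(s1 * s2 for s1, s2 in zip(sigma, sigma[1:]))

    def gibbs_measure(N, T):
        configs = list(itertools.product([-1, +1], repeat=N))
        weights = [math.exp(-energy(s) / T) for s in configs]
        Z = sum(weights)  # partition function: normalizes the measure
        return {s: w / Z for s, w in zip(configs, weights)}

    mu = gibbs_measure(N=4, T=1.0)
    print(sum(mu.values()))     # 1.0: the probabilities are normalized
    print(max(mu, key=mu.get))  # one of the two aligned ground states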

The perceptron as a learning problem

The perceptron was one of the first formal models of how neurons function. Starting with the pioneering work of Rosenblatt [86], it has continued to be both a paradigm and a cornerstone of machine learning. Statistical physicists became interested in the perceptron problem and significant progress was made in the late 1980s in a series of papers [87–91]. The perceptron is an element that produces an output according to the following rule: $y = \operatorname{sgn}\left(\sum_{j=1}^{N} p_j \xi_j\right)$ (3.17). Here $\xi_1, \xi_2, \dots, \xi_N$ are the $N$ inputs and $p_1, p_2, \dots, p_N$ are called the synaptic connections. Thus, whenever the input overlap $\vec{p} \cdot \vec{\xi}$ exceeds 0, the perceptron (modeling a neuron) fires ($y = +1$, and $y = -1$ otherwise). The corresponding learning problem consists of adjusting the weights $\vec{p} = [p_j]$ so that the $M$ input patterns $[\xi^\mu_i]_{i=1,\dots,N}$, with $\mu$ running from 1 to $M$, are matched correctly to their corresponding outputs $y^\mu \in \{-1, +1\}$.
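As an illustration of rule (3.17) and of the associated learning problem, here is a minimal sketch of our own (the teacher-generated labels are a hypothetical device to guarantee that a solution exists) implementing Rosenblatt's classical perceptron update.

    # Minimal sketch: the rule y = sgn(p . xi) and Rosenblatt's update.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 20, 40                              # input dimension, number of patterns
    xi = rng.choice([-1.0, 1.0], size=(M, N))  # random binary patterns xi^mu
    teacher = rng.normal(size=N)               # hypothetical teacher weights
    y = np.sign(xi @ teacher)                  # target outputs y^mu in {-1, +1}

    p = np.zeros(N)                            # student weights, start at zero
    for _ in range(100):                       # sweep until all patterns are learned
        errors = 0
        for mu in range(M):
            if np.sign(xi[mu] @ p) != y[mu]:   # pattern mu is misclassified
                p += y[mu] * xi[mu]            # Rosenblatt update: move p toward it
                errors += 1
        if errors == 0:
            break

    print(np.all(np.sign(xi @ p) == y))        # True once every pattern is matched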

Perceptron and sphere packing

It is known that the packing fraction of a collection of hard spheres in 3 dimensions cannot exceed a maximum of 0.74. This is the well-known case of close-packed lattices such as the HCP (hexagonal close packing) lattice. However, if we begin with a loose collection of hard spheres with random initial conditions and compactify these spheres, the final packing fraction $\phi_c$ can be less than 0.74. We say that the system jams during the compactification process before it crystallizes. Interestingly, $\phi_c$ cannot exceed $\approx 0.64$, which is called the random close packing limit.
There has been a lot of activity in the study of the jamming transition [93]. Recently, Franz and Parisi [74] considered the onset of jamming as a SAT-UNSAT transition, where the terms SAT and UNSAT carry the same meaning as in the previous section. They study a toy model of jamming in which $M$ points $\xi^\mu$ are randomly distributed over the surface of an $N$-dimensional sphere with radius $\sqrt{N}$; here the component index $i$ runs from 1 to $N$ and $\mu$ runs from 1 to $M$. Suppose that we have a hard-core radius $\sigma$. Then if we were to add a new free particle with coordinates $x_i$, it would have to satisfy the condition $|\vec{\xi}^\mu - \vec{x}| > \sigma$ for every $\mu$.
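The following sketch (our own illustration; the parameter values are arbitrary) samples $M$ obstacles uniformly on the sphere of radius $\sqrt{N}$ and tests whether a candidate particle satisfies all $M$ excluded-volume constraints, i.e. whether the instance is SAT for that particle.

    # Minimal sketch of the toy jamming setup: M obstacles xi^mu on the
    # sphere of radius sqrt(N); a new particle x on the same sphere is
    # admissible (SAT) iff |xi^mu - x| > sigma for every mu.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_on_sphere(N):
        # Uniform point on the sphere of radius sqrt(N) in N dimensions.
        v = rng.normal(size=N)
        return np.sqrt(N) * v / np.linalg.norm(v)

    N, M, sigma = 50, 200, 1.0
    obstacles = np.array([random_on_sphere(N) for _ in range(M)])

    x = random_on_sphere(N)
    distances = np.linalg.norm(obstacles - x, axis=1)
    print(np.all(distances > sigma))  # True iff x satisfies all M constraints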


Table of contents :

List of Figures
1 Outline
2 Introduction
2.1 Overview
2.2 The Microfoundations conundrum
2.3 Agent-based models – the way ahead
2.4 Applications of ABMs
2.5 Statistical Physics and ABMs
2.6 Contributions of this Thesis
3 Background
3.1 Constraint Satisfaction Problems
3.2 From CSPs to Statistical mechanics
3.3 Back to K-SAT
3.4 Continuous CSPs — The Perceptron
3.5 The Perceptron as a learning problem
3.6 Perceptron and sphere packing
3.7 Discussion
4 Self-planting in the Perceptron CSP
4.1 A few preliminaries
4.2 Phase diagram of the Perceptron CSP
4.3 Planting and Self-planting
4.4 “Remove and Replace” Dynamics
4.4.1 Long-time behavior of the energy
4.4.2 The R&R Transition
4.4.3 Interpreting the algorithmic transition
4.4.4 Energy landscape dynamics
4.5 Discussion & Conclusion
5 A CSP-based agent-based model
5.1 Overview
5.2 The Economy as a constraint satisfaction problem
5.2.1 Budget constraint and formation of prices
5.2.2 Preferences update: supply and demand
5.2.3 Transactions, production costs and redistribution
5.2.4 Removal and Replacement of agents
5.2.5 Summary of the parameters
5.3 Reducing the space of parameters
5.4 Role of the debt limit: Macro-level
5.5 Role of the debt limit: Dynamics
5.6 Discussion & Conclusion
6 Simulating Covid-like shocks to an ABM
6.1 Description of Mark-0
6.1.1 Households
6.1.2 Firms
6.1.3 Banking sector
6.2 Summary
6.3 Simulating shocks to the economy
6.4 Policy proposals for a quick recovery
6.5 Discussion & Conclusion
7 Summary and conclusions
7.1 Summary of the major results
7.2 Perspectives
7.3 Broader purpose
A Derivation of the perceptron phase diagram
A.1 Setting up the problem
A.2 Calculation of the partition function
A.3 Hierarchical ansatz for $q_{ab}$
A.3.1 Some preliminary details
A.4 Reformulation in terms of the magnetization m
A.5 Replica Symmetric Solution
A.5.1 SAT Phase
A.5.2 UNSAT Phase
A.5.3 Jamming Limit
A.5.4 Stability Analysis
A.5.5 Phase diagram
References
