Precise patterning for competitive systems with spatial cues and HP diffusion


On the statistical description of neuron networks: the weak connectivity conjecture

Most large-scale neuronal networks can be described by a density function $f = f(t,\xi) \ge 0$ giving the probability density of finding neurons in some state $\xi$ at time $t \ge 0$ (typically $\xi$ stands for an internal neuron time, the membrane voltage, or the voltage-conductance pair of the FhN model). The density $f$ evolves according to an integral and/or partial differential equation
\[
\partial_t f = L_{M(t)} f, \qquad f(0,\cdot) = f_0, \tag{1.5.69}
\]
where the operator $f \mapsto L_m f$ is linear for any given network state $m \in \mathbb{R}$, and the evolution of $M(t)$ is itself prescribed by some constraint, differential or delay equation
\[
M(t) = M[f] = M\big[(f(s))_{s \in [0,t]}\big]. \tag{1.5.70}
\]
The fundamental property of the dynamics is that the total number of neurons is conserved, which yields the (mass) conservation equation
\[
\int f(t,\xi)\, d\xi = \int f_0(\xi)\, d\xi = 1 \qquad \forall\, t \ge 0.
\]
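The conservation identity follows directly from the linearity of the operators $L_m$, under the assumption (which is how conservation of the number of neurons is encoded in such models) that each $L_m$ is conservative, i.e. $\int L_m g \, d\xi = 0$ for every admissible $g$ and every network state $m$. A one-line check under this assumption reads
\[
\frac{d}{dt} \int f(t,\xi)\, d\xi = \int L_{M(t)} f(t,\xi)\, d\xi = 0,
\]
so the total mass of $f$ stays equal to that of the normalized initial datum $f_0$.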

Annealed convergence and propagation of chaos in the general case

We now turn our attention to the case of non-translation-invariant networks, where the laws of the delays and synaptic weights depend on the index of neuron i in population α. In this case we will see that the propagation of chaos property remains valid, as well as the convergence to the mean-field equations (2.3.2), no longer for almost every realization of the disorder but on average across all possible configurations. Denoting by $\mathbb{E}_i$ the expectation over all possible realizations of the disorder, we have:

Theorem 2.3.40 (Annealed convergence in the general case). Assume that (H1)-(H4) hold, that the network initial conditions are chaotic in $M_2(C)$, and that the interaction does not depend on the postsynaptic neuron state (i.e., $b(w, x, y) = \ell(w, y)$). Fix $i \in \mathbb{N}$. Then the law of the process $(X^{i,N}_t,\, -\tau \le t \le T)$ solution to the network equations (2.2.1), averaged over all possible realizations of the disorder, converges towards the process $(\bar{X}^i_t,\, -\tau \le t \le T)$ solution to the mean-field equations (2.3.2). This implies in particular the convergence in law of $(\mathbb{E}_i[X^{i,N}_t],\, -\tau \le t \le T)$ towards $(\bar{X}^\alpha_t,\, -\tau \le t \le T)$ solution of the mean-field equations (2.3.2). Extensions to the spatially extended neural field case are discussed in Appendix 2.7.
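To make the quenched/annealed distinction concrete, here is a minimal numerical sketch, and not the delayed multi-population model of this chapter: a toy network with random Gaussian weights, where the network size, the weight statistics, the noise intensity and the observable X_1(T) are arbitrary choices made for illustration only. The quenched observable is computed for each fixed realization of the weights; the annealed one averages it further over the disorder.

```python
import numpy as np

# Toy disordered network (illustrative only; no delays, one population):
# linear leak plus sigmoidal coupling through random synaptic weights.
rng = np.random.default_rng(0)

N      = 100           # network size
T, dt  = 1.0, 1e-2     # time horizon and Euler-Maruyama step
sigma  = 0.5           # noise intensity (plays the role of lambda_alpha)
J0, sJ = 1.0, 0.5      # mean and std of the random synaptic weights
S      = np.tanh       # firing-rate nonlinearity

def noise_average(J, n_noise=20):
    """Average of X_1(T) over the driving noise, for one fixed weight matrix J."""
    acc = 0.0
    for _ in range(n_noise):
        X = np.zeros(N)
        for _ in range(int(T / dt)):
            drift = -X + J @ S(X) / N
            X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        acc += X[0]
    return acc / n_noise

# Quenched observable: one value per realization of the disorder.
quenched = [noise_average(J0 + sJ * rng.standard_normal((N, N)))
            for _ in range(10)]

# Annealed observable: further average over the disorder realizations.
print("quenched values :", np.round(quenched, 3))
print("annealed average:", round(float(np.mean(quenched)), 3))
```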

Application: dynamics of the firing-rate model with random connectivity

In the previous section, we derived limit equations for networks with random connectivities and synaptic weights. The motivation for these mathematical developments is to understand the role of the specific connectivity and delay patterns arising in biologically plausible neuronal networks. More precisely, it is known that the anatomical properties of neuronal networks affect both connectivities and delays, and we will specifically consider the two following facts (illustrated in the sketch after this list):
• Neurons connect preferentially to those anatomically close.
• Delays are proportional to the distance between cells.
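As a concrete, purely illustrative reading of these two facts, the sketch below places neurons on a one-dimensional segment, draws connections with a probability that decays with anatomical distance, and assigns to each existing connection a delay proportional to that distance; the decay length and the propagation speed are arbitrary placeholders, not parameters of the model studied here.

```python
import numpy as np

rng = np.random.default_rng(1)

N     = 300    # number of neurons placed on a 1D segment
decay = 0.1    # connectivity decay length (arbitrary)
speed = 1.0    # propagation speed, so that delay = distance / speed

# Anatomical positions and pairwise distances.
pos  = rng.uniform(0.0, 1.0, size=N)
dist = np.abs(pos[:, None] - pos[None, :])

# Fact 1: connection probability decays with distance.
p_connect = np.exp(-dist / decay)
adjacency = rng.uniform(size=(N, N)) < p_connect
np.fill_diagonal(adjacency, False)

# Fact 2: delays proportional to the distance between connected cells.
delays = np.where(adjacency, dist / speed, np.nan)

print("connection density:", round(adjacency.mean(), 3))
print("mean delay on existing connections:", round(float(np.nanmean(delays)), 3))
```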
At the level of generality of the previous sections, we obtained very complex equations, from which it is very hard to uncover the role of random architectures. However, as we already showed in previous works [152], a particularly suitable framework to address these questions is provided by the classical firing-rate model. In that case, we showed in different contexts that the solution to the mean-field equations is Gaussian, with mean and standard deviation solving a simpler dynamical system.

Reduction to distributed delays differential equations

In the firing-rate model, the intrinsic dynamics of each neuron is given by $f_\alpha(t, x) = -x/\theta_\alpha + I_\alpha(t)$, where $I_\alpha(t)$ is the external input of the system, and the diffusion function $g_\alpha(t, x) = \lambda_\alpha$ is constant. The interaction only depends on a nonlinear transform of the membrane potential of the pre-synaptic neuron multiplied by the synaptic weight: $b_{\alpha\gamma}(w, x, y) = J_{\alpha\gamma}(w) S(y)$. We also assume, in order to satisfy the assumptions of Theorems 2.3.39 and 2.3.40, that $J_{\alpha\gamma} \in L^\infty(\mathbb{R})$ and $S \in W^{1,\infty}(E_d)$. Therefore, when the delays and the synaptic weights only depend on $p(i)$, we have propagation of chaos and almost sure (quenched) convergence towards the mean-field equations:
\[
d\bar{X}^\alpha_t = \Bigg( -\frac{\bar{X}^\alpha_t}{\theta_\alpha} + I_\alpha(t) + \sum_{\gamma=1}^{P} \int_{-\tau}^{0}\!\int_{\mathbb{R}} J_{\alpha\gamma}(w)\, \mathbb{E}_Y\big[S(Y^\gamma_{t+s})\big]\, d\Lambda_{\alpha\gamma}(s,w) \Bigg) dt + \lambda_\alpha\, dW^\alpha_t,
\]
where $\Lambda_{\alpha\gamma}$ denotes the joint law of the delays and synaptic weights and $Y^\gamma$ is an independent copy of $\bar{X}^\gamma$.
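Since the drift is affine in $\bar{X}^\alpha_t$ and the interaction term is a deterministic function of time (it only involves the law of $\bar{X}^\gamma$), the solution is a Gaussian process as soon as the initial condition is Gaussian. A sketch of the resulting moment system, written here for the mean $\mu_\alpha(t)$ and variance $v_\alpha(t)$ and consistent with the reduction announced above (the rigorous statements are those of [152]), reads:
\[
\begin{aligned}
\dot{\mu}_\alpha(t) &= -\frac{\mu_\alpha(t)}{\theta_\alpha} + I_\alpha(t) + \sum_{\gamma=1}^{P} \int_{-\tau}^{0}\!\int_{\mathbb{R}} J_{\alpha\gamma}(w)\, \mathbb{E}\big[S(\bar{X}^\gamma_{t+s})\big]\, d\Lambda_{\alpha\gamma}(s,w), \\
\dot{v}_\alpha(t) &= -\frac{2\, v_\alpha(t)}{\theta_\alpha} + \lambda_\alpha^2, \qquad
\mathbb{E}\big[S(\bar{X}^\gamma_{t+s})\big] = \int_{\mathbb{R}} S\big(\mu_\gamma(t+s) + \sqrt{v_\gamma(t+s)}\, z\big)\, \frac{e^{-z^2/2}}{\sqrt{2\pi}}\, dz.
\end{aligned}
\]
In particular the variance equation decouples and relaxes to $\lambda_\alpha^2 \theta_\alpha / 2$, so the distributed delays only act on the mean, through Gaussian integrals of $S$.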

Relationship with pathological rhythmic brain activity

Synchronized states are ubiquitous and serve essential functions in the brain, such as memory or attention [31]. Impairments of synchronization levels often relate to severe pathological effects such as epilepsy (too much synchronization) or Parkinson's disease (too little synchronization) [134]. Disruptions of oscillatory patterns have also been related to connectivity levels in epilepsy. In detail, the emergence of seizures and abnormal synchronization was hypothesized to be related to an increased functional connectivity or, more recently, to the appearance of an increased number of synaptic boutons between cells. The former phenomenon has been reported in various epileptic situations (see e.g. [16]), and the latter was mainly evidenced in hippocampal epilepsy and is generally referred to as neosynaptogenesis, or sprouting, see e.g. [8, 115, 117]. Our models provide an elementary account for the fact that increased connectivity levels (corresponding to small values of β) indeed tend to favor synchronization for most values of τs. The model even makes a prediction about possible parameter regions in which this synchronization may only arise in an intermediate interval of connectivity levels β. Disorder also seems to intervene in the emergence of abnormally synchronized oscillations, as evidenced for instance by Aradi and Soltesz [6], who showed that even if the average connectivity levels in rats subject to febrile epileptic seizures were similar to those of a control population, the variance of the connectivities was increased. Our models incorporate the law of the synaptic weights, and therefore allow for testing this hypothesis, as well as a number of variations around these models, in a rigorous manner.


Cluster size and synchronization in primary visual area

The structure of the primary visual areas is very diverse across species. These areas are composed of cells sensitive to the orientation of visual stimuli. In primates, neurons gather into columns as a function of the orientation they are selective to, and these columns organize spatially, creating continuous patterns of a specific anatomical size (see e.g. [20]). In contrast, rodents present no specific organization of neurons selective to the same orientation (salt-and-pepper organization, see [118]). The reason why these architectures differ so much across mammals is still poorly understood, and one of the possible explanations proposed is related to the size of V1: the model tends to show that it is harder to ensure collective synchronization at the level of large cortical areas than locally, a phenomenon probably due to the fact that connectivities are naturally local. This is precisely one of the results of our analysis. In our model, the parameter a characterizes the size of one cortical column, and the analysis of the model shows that increasing the column size a induces transitions from synchronized regimes to stationary regimes, reducing the collective response of neurons.

Table of contents:

Acknowledgments
1 General Introduction 
1.1 Presentation
1.1.1 Part I: Neuronal networks
1.1.2 Part II: The role of homeoprotein diffusion in morphogenesis
1.1.3 Part III: On a subcritical Keller-Segel equation
1.1.4 Plan of the Thesis
1.2 Mathematical toolbox
1.2.1 Mean-field macroscopic equations: propagation of chaos property
1.2.2 Uniqueness of stationary solutions and nonlinear convergence
1.3 Biomathematical background
1.4 Main results
1.4.1 Randomly connected neuronal networks with delays
1.4.2 On a kinetic FitzHugh-Nagumo equation
1.4.3 Competition and boundary formation in heterogeneous media
1.4.4 On a subcritical Keller-Segel equation
1.5 Perspectives and open problems
1.5.1 A microscopic spiking neuronal network for the age-structured model
1.5.2 On the statistical description of neuron networks
I Neuronal networks 
2 Limits on randomly connected neuronal networks 
2.1 Introduction
2.2 Setting of the problem
2.3 Main results
2.3.1 Randomly connected neural mass models
2.3.2 Quenched convergence and propagation of chaos in the translation invariant case .
2.3.3 Annealed convergence and propagation of chaos in the general case
2.4 Application: dynamics of the firing-rate model with random connectivity
2.4.1 Reduction to distributed delays differential equations
2.4.2 Small-world type model and correlated delays
2.5 Proofs
2.6 Discussion
2.6.1 Relationship with pathological rhythmic brain activity
2.6.2 Cluster size and synchronization in primary visual area
2.6.3 Macroscopic vs Mesoscopic models
2.6.4 Perspectives
2.7 Appendix A: Randomly connected neural fields
3 On a kinetic FitzHugh-Nagumo equation 
3.1 Introduction
3.1.1 Historical overview of macroscopic and kinetic models in neuroscience
3.1.2 Organization of the paper
3.2 Summary of the main results
3.2.1 Functional spaces and norms
3.2.2 Main results
3.2.3 Other notations and definitions
3.3 Analysis of the nonlinear evolution equation
3.3.1 A priori bounds
3.3.2 Entropy estimates and uniqueness of the solution
3.4 The linearized equation
3.4.1 Properties of A and Bε
3.4.2 Spectral analysis on the linear operator in the disconnected case
3.5 Stability of the stationary solution in the small connectivity regime
3.5.1 Uniqueness of the stationary solution in the weak connectivity regime
3.5.2 Study of the Spectrum and Semigroup for the Linear Problem
3.5.3 Exponential stability of the NL equation
3.6 Open problems beyond the weak coupling regime
3.7 Appendix A: Mean-Field limit for FitzHugh-Nagumo neurons
3.8 Appendix B: Strong maximum principle for the linearized operator
II The role of homeoprotein diffusion in morphogenesis 
4 Local HP diffusion and neurodevelopment 
4.1 Introduction
4.2 Model
4.2.1 Theoretical Description
4.3 Results
4.3.1 Ambiguous boundary in the absence of non cell-autonomous processes
4.3.2 Unpredictable patterns in the absence of morphogen gradients
4.3.3 Precise patterning for competitive systems with spatial cues and HP diffusion
4.3.4 Stability of the front
4.4 Discussion
4.5 Appendix A: Supplementary material
4.5.1 Mathematical Model
4.5.2 Stationary solutions in the cell autonomous case
4.5.3 Uniqueness of the front in the presence of HP diffusion
4.5.4 Movies
5 Competition and boundary formation in heterogeneous media 
5.1 Introduction
5.1.1 Biological motivation
5.1.2 General model and main result
5.2 Analysis of the parabolic problem
5.2.1 Monotonicity in space
5.2.2 Positivity of the solutions
5.3 Asymptotic analysis as ε vanishes and front position
5.3.1 The limit as ε vanishes
5.3.2 WKB change of unknown
5.4 Characterization of the Front
5.5 Application
5.5.1 Model
III On a sub-critical model of chemotaxis 
6 On a subcritical Keller-Segel equation 
6.1 Introduction and main results
6.1.1 The subcritical Keller-Segel Equation
6.1.2 The particle system
6.1.3 Weak solution for the P.D.E
6.1.4 Notation and propagation of chaos
6.1.5 Main results
6.1.6 Comments
6.1.7 Plan of the paper
6.2 Preliminaries
6.3 Well-posedness for the system of particles
6.4 Convergence of the particle system
6.5 Well-posedness and propagation of chaos
6.6 Renormalization and entropic chaos
Bibliography 
