The mammalian cortex: from functional localization to microscopic organization


Neurons and action potentials

What is real? How do you define "real"? If you're talking about what you can feel, what you can smell, what you can taste and see, then "real" is simply electrical signals interpreted by your brain.
— Morpheus, The Matrix
In this section, we will describe the global organization of the neuron and explain how it conveys and integrates electrical signals. As will be emphasized in section 1.4.3, there exists a great diversity of neurons, dramatically differing in size, function, and shape. The following description does not intend to present them all, but gives a stereotyped picture underlying the functioning of neurons. Moreover, we will mainly overlook important supportive cells, called glial cells.
Neurons are cells characterized by their aptitude to transmit information through the emission of transient electrical signals, called action potentials or spikes, that travel along their membrane. In order to establish connections with possibly remote interlocutors, they display a fibrous tree-like shape that can be divided into three distinct parts: the dendrites, the cell body, and the axon. Interestingly, these evoke, by both their shape and function, the roots, seed, and stem of a growing plant. First, the cell body, also called soma, is the core of the neuron. It contains the nucleus and DNA, and shelters the classical cellular machinery that notably produces the proteins and energy of the cell. It is also the pivot that links the two other parts of the neuron, both made of filamentous extensions. On one side, the bushy and highly ramified dendrites – "tree-like" in Greek – extend in the neighborhood of the soma, and feed it with the neural raw material: information. They are the ears of the neuron, collecting the electrical activity incoming from the network so that it can be integrated at the level of the soma. On the other side lies the solitary axon. It is a far-reaching electrical highway, able to extend on the order of a meter before branching to connect, through bud-like structures of its membrane called synapses, to the dendrites of other neurons. Its base, the axon hillock, displays the highest density of voltage-gated sodium channels (see below), making it the most excitable part of the neuron. In order to accurately transmit possibly complex spiking patterns to faraway areas (outside the cortex), axons are wrapped in insulating sheaths of myelin (these sheaths are actually made of supportive glial cells: Schwann cells in the Peripheral Nervous System and oligodendrocytes in the Central Nervous System), regularly interrupted by the nodes of Ranvier, a kind of exit-entrance for this biological highway.
Note that a neuron can possess more than one axon, and that axon-to-axon, dendrite-to-dendrite, and dendrite-to-axon connections also exist, but these "exceptions" are relatively scarce in mammalian nervous systems.
When the membrane potential of the soma reaches a given threshold, a spike is triggered at the level of the axon hillock, and propagates all along the membrane to reach the synaptic terminals. Locally, this electrical wave lasts about 2 milliseconds. It is governed by the opening and closing of voltage-gated ionic channels covering the surface of the neuron. When the neuron is at rest, some of these transmembrane proteins actively thwart the natural diffusion of ions in order to maintain a high concentration of potassium and a low concentration of sodium inside the cell. This pumping requires ATP (the energy currency of the cell), and represents approximately seventy percent of the neuron's energy consumption. This results in a polarized resting membrane potential whose typical value lies around -70 mV. When a positive current is locally applied to the membrane, it prompts the opening of sodium channels that passively and massively let these ions rush into the cytoplasm, driving the membrane potential to +40 mV within a millisecond. This local depolarization in turn yields both the closing of the sodium channels and the opening of potassium ones. This causes a rapid outward flow of potassium ions, allowing the neuron to return to its resting state. It also impacts the potential of the directly adjacent portions of membrane, propagating the spike along the axon. This short-lasting event is generally followed by a refractory period, lasting around two milliseconds, during which the ion gradient is rebuilt.
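The threshold-and-reset mechanism just described is often caricatured by the leaky integrate-and-fire model. The following is only an illustrative sketch, not a model studied in this manuscript; the numerical values (-70 mV rest, +40 mV peak, ~2 ms refractory period) are the stereotyped ones quoted above.

```python
import numpy as np

def simulate_lif(I, dt=0.1, v_rest=-70.0, v_thresh=-55.0, v_peak=40.0,
                 tau=10.0, t_ref=2.0):
    """Euler integration of a leaky integrate-and-fire neuron (times in ms,
    potentials in mV). Returns the voltage trace and the spike times."""
    v = np.full(len(I), v_rest)
    spikes = []
    ref_until = -np.inf
    for k in range(1, len(I)):
        t = k * dt
        if t < ref_until:                    # refractory: clamped to rest
            v[k] = v_rest
            continue
        # leak toward the resting potential, pushed by the input current
        v[k] = v[k - 1] + dt / tau * (v_rest - v[k - 1] + I[k - 1])
        if v[k] >= v_thresh:                 # threshold crossed at the hillock
            v[k] = v_peak                    # stylized +40 mV overshoot
            spikes.append(t)
            ref_until = t + t_ref            # ~2 ms refractory period
    return v, spikes
```

With a constant suprathreshold drive, the neuron emits a regular spike train whose rate saturates as the refractory period dominates the interspike interval.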

The synapse

At the level of a synapse, two neurons interact: the presynaptic neuron sends information, whereas the postsynaptic neuron listens. Remark that a neuron is perfectly capable of connecting to its own dendrites. There exist two great categories of synapses. On the one hand, electrical synapses, or gap junctions, directly connect the cytoplasms of the two neurons, allowing the bidirectional transit of various ions and molecules. This transit is made possible by the binding of two transmembrane proteins, hemichannels, that tightly link the two membranes and create a small local aperture. These electrical synapses are characterized by a high speed of transmission between neurons, a very useful feature for escaping predation through stereotypical reflexes.
More sophisticated and versatile are the chemical synapses. They rely on the diffusion of molecular messengers called neurotransmitters across the synaptic cleft – a small space that is set up and maintained between the two neurons. When an action potential crosses the axon of a presynaptic neuron and reaches the synaptic terminal, voltage-gated calcium channels covering the membrane of the synaptic terminal open and let this ion flow into the cellular medium. Calcium ions then bind to specific receptors attached to arm-like proteins linking the membrane of the presynaptic neuron to vesicles filled with neurotransmitters. This activates a spring-loaded fusion between these vesicles and the synaptic membrane through a bending of the protein, enabling a massive release of neurotransmitters into the synaptic cleft. Neurotransmitters then diffuse to bind specific receptors on the membrane of the postsynaptic neuron, yielding one of the two following scenarios. In the first case, the receptors directly control the aperture of ion channels, enabling an immediate effect on the postsynaptic potential. Alternatively, the receptors are located on intermediary transmembrane proteins that trigger a slower, long-lasting response in the intracellular medium of the postsynaptic neuron. Interestingly, this kind of synaptic transmission is seemingly crucial for learning and memory. In both cases, within about 1 ms [66], the synapse progressively deactivates through the clearance of neurotransmitters, achieved through diverse mechanisms (antagonist molecules binding the receptors, destruction, diffusion outside the synaptic cleft, etc.).
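The time course of the postsynaptic response triggered by such a release is often summarized phenomenologically by a fixed kernel. As a hedged illustration, the alpha function below is one classical textbook choice (not a model used later in this manuscript): the conductance rises after the presynaptic spike and then decays, peaking one time constant later.

```python
import numpy as np

def alpha_psc(t, t_spike, g_max=1.0, tau=1.0):
    """Alpha-function postsynaptic conductance triggered at t_spike (ms).

    The kernel g(s) = g_max * (s/tau) * exp(1 - s/tau) for s = t - t_spike > 0
    rises then decays, reaching its maximum g_max at s = tau.
    """
    s = np.asarray(t, dtype=float) - t_spike
    return np.where(s > 0, g_max * (s / tau) * np.exp(1.0 - s / tau), 0.0)
```

Before the spike the conductance is zero; the ~1 ms decay echoes the deactivation timescale quoted above.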
Despite their relative slowness, chemical synapses present the advantage of sophistication: there exist more than a hundred different kinds of neurotransmitters displaying various characteristic times. Hence, they are able to induce a wide range of possible interactions, and notably enable inhibition. Moreover, the location of the synapse on the postsynaptic neuron's dendrite modulates its transmission: the closer it is to the soma, the more effective it is. Hence, an inhibitory synapse located near the soma can short-circuit several excitatory synapses located farther out on the same dendrite. Furthermore, the size of the synapse is directly linked to the number of vesicles it contains, and thus to its impact on the postsynaptic neuron. Both these properties shape the synaptic efficiency, which corresponds to the maximal transmission of a given terminal.
Many interesting and intricate phenomena can arise at the level of the synapse, making it particularly difficult to model. For example, synaptic transmission depends on the geometry of the synapse. Furthermore, the astrocytes that support the synapse, and notably create the synaptic cleft, might actually play an important role in transmission [78, 187]. More importantly, the wiring of the brain is a dynamic process, as synapses are created and destroyed on a daily basis, while existing synapses adjust their efficiency over time. This phenomenon, called synaptic plasticity, was formalized by Donald O. Hebb [133], and has today become one of the most important fields of research in neuroscience. It is of crucial importance for the proper functioning of the brain, in particular for learning and memory. All these complex and interesting phenomena lie well beyond the scope of this manuscript.


General overview

The mammalian nervous system is a three-dimensional object presenting a strong spatial organization at every scale. It can be divided into two major parts. On the one hand, the Central Nervous System (CNS) – on which we will focus – contains the brain and spinal cord and shelters the higher cognitive functions. On the other hand, the Peripheral Nervous System (PNS) is made of all the nerve cells and fibers lying outside the CNS. It is essentially involved in conveying sensory messages (touch, pain, temperature) and motor commands (voluntary movements) between the spinal cord and the different parts of the body. Moreover, the PNS ensures the proper functioning of the internal organs. This vital function, homeostasis, is achieved through the careful regulation of the body's constants (blood pressure, temperature, etc.).
In the CNS, the brain and spinal cord join continuously through the foramen magnum. As vital organs, they display several levels of protection. Firstly, they are encapsulated in bone structures: the skull and spine. Secondly, they are wrapped in three successive layers called meninges. The most external and toughest one is the dura, which notably carries a venous network extracting blood out of the CNS. The second layer, the arachnoid, contains the cerebrospinal fluid in which the CNS is immersed. This fluid has both a shock-absorbing and a nutritive role. Finally, the pia is a delicate impermeable tissue tightly enclosing the brain and spinal cord.
The brain can be divided into three main parts. At its base, the brainstem (medulla, pons, and midbrain) is the continuation of the spinal cord. Besides being an unavoidable pathway for corticospinal communications, it also notably controls respiration. Behind it, at the rear of the head, lies the peach-sized cerebellum, involved in equilibrium and motor coordination. Like the human cortex (see below), it displays a folded surface, characterized by very deep sulci, and condenses the majority of the neurons of the brain, justifying its name: "little brain". The forebrain is the rostral-most part of the nervous system, connecting to the brainstem at its extremity. It is composed of the diencephalon and the telencephalon, or cerebrum. The former notably regroups the thalamus – a relay station for incoming pathways to sensory and motor areas of the cerebral cortex –, the hypothalamus, regulating the autonomic nervous system, as well as the retina. The cerebrum is composed of two symmetric hemispheres – right and left – gathering the cortex as well as some subcortical areas: the olfactory bulb, the hippocampus, seat of memory, the almond-shaped amygdala, controlling emotions, and the basal ganglia, involved in procedural learning. These two hemispheres are linked through a flat bundle of nerve fibers, the corpus callosum, ensuring their proper communication.
The different parts composing the nervous system are intricately dependent, linked by tracts of nerve fibers. These wires are so intertwined that tracking them is a difficult task. Let us insist on the presence of two different circuits: local short-range connections, and far-reaching long-range connections regrouping into thick myelinated cables. The latter compose the white matter of the brain, responsible for the proper communication between distant areas, while the gray matter is characterized by a high density of cell bodies. Certain large fibers can be seen with the naked eye and tracked over considerable distances, giving many insights into the function of the connected regions. Nevertheless, while it was accomplished for the entire nervous system of C. elegans, composed of around three hundred neurons, mapping the entire connectome of the human brain is still out of technological reach [214].

The mammalian cortex: from functional localization to microscopic organization

The cerebral cortex is a thin, extended sheet of tissue covering the outer surface of the cerebrum. While almost nonexistent in fishes and amphibians and very rudimentary in reptiles, in all mammals it displays a stratified organization composed of six distinct layers. In contrast with rats or mice, which present smooth hemispheres, the human cortex is convoluted, forming cavities (sulci) and ridges (gyri) that allow its surface to triple within the skull. The most important sulci divide each hemisphere into four distinct regions: the frontal, parietal, temporal, and occipital lobes, respectively associated with the planning and organization of future actions, the processing of sensory information, hearing and other aspects of language and memory, and vision. The human cortex contains up to 28 billion neurons, spans approximately 2600 square centimeters, and is 3-4 mm thick. As the seat of the highest cognitive functions, it is undoubtedly the most fascinating region of the human brain.
Its important functional role was overlooked until the beginning of the nineteenth century. In fact, since the European Renaissance, only a few scientists were keen to show the cortex any interest, and their works were largely ignored [127]. It is no small detail that the word cortex means bark. Ironically, it was the dubious phrenology, developed by the German physician Franz Joseph Gall, that conceptually revolutionized our vision of the cortex by dividing it spatially into distinct functional areas. Still impregnated with dualist considerations, the scientific community fiercely withstood this new depiction of human nature. This initiated one of the most roaring controversies of the nineteenth century, known as the holism versus localism debate. It ended with the latter prevailing. Indeed, in 1861, Paul Broca demonstrated the link between speech impairments and brain damage to the left hemisphere. In 1870, the German researchers Fritsch and Hitzig located the motor cortex at the rear of the frontal lobes, and notably reopened an experimental highway: electrical stimulation of the cortex. This technique was then thoroughly put to use by the Scottish neurologist David Ferrier to dramatically strengthen the localist theory. In 1875, Ferrier discovered the auditory cortex in the temporal lobe. That same year, following the pioneering work of the Italian Bartholomeo Panizza, the German physician Munk positioned the visual cortex in the occipital lobe. Furthermore, complementing Broca's research, the German Carl Wernicke found in 1874 another cortical region associated with a new form of aphasia: unintelligible speech patterns with incapability of comprehension. All these advances suggested that the cortex was not only divided into distinct functional areas, but also lateralized, as speech was mainly present in the left hemisphere. While this picture of the cortex should be qualified in certain regards, it has nevertheless proven largely valid to this day.
From the anatomical viewpoint, the mammalian cortex is a homogeneous medium organized both horizontally and vertically. At the horizontal level, it displays six distinct layers (laminae I to VI, the first being the most external) that constitute the frame of cortical pathways. Each layer has its own characteristic composition, and its specific set of connections with the other layers as well as with other cortical and subcortical areas. On top of this stratified organization is a vertical one, whose smallest anatomical and functional unit is the cortical microcolumn (or minicolumn).
Figure 1.7: Layered structure of the cortex. Each layer has its own composition and typical connectivity. Left: drawing by Santiago Ramón y Cajal. Right: reproduced from [125].
This microscopic structure consists of around a hundred preferentially interconnected neurons, organized as a column 20-60 micrometers wide that vertically traverses laminae II-VI. Within a microcolumn, neurons display a homogeneous level of activity. The diversity of the neurons in each column is representative of the composition of the cortex, with around 20 percent of inhibitory neurons playing a central role in its proper functioning. In various cortical regions, microcolumns gather into larger functional units called cortical columns or macrocolumns, each specific to a given region and whose variety has been reviewed in [177]. While these macro structures are quite versatile, all have in common an approximate diameter of 0.5-1 mm. They are seemingly a well-preserved organizing principle of the mammalian cortex. These columns have specific functions and spatial locations, resulting in the presence of delays in their interactions, due to the transport of information through axons and to the typical time the synaptic machinery needs to transmit it. These delays have a clear role in shaping the neuronal activity, as established by different authors (see e.g. [70, 213]). Moreover, neurons and microcolumns typically have two different scales of connections in the cortex (see Figure 1.8). At the microscopic level, they project many connections toward their most proximate siblings, while they send a few faraway synapses to determined areas.
Figure 1.8: Two different scales of connection in the primary visual cortex of a cat. At the microscopic level, neurons connect to many of their neighbors in a random fashion (A). At a higher level, a patch of neurons sends connections to other patches processing the same task. Colors correspond to the preferred orientation of neurons (B). Sketch of the two scales of connections for an abstract representation of microcolumns (C). Modified from [31].
Lorente de Nó, a former student of Ramón y Cajal, was the first to envision the possibility of such a columnar organization [215]. It was later evidenced electrophysiologically in the somatosensory cortex of the cat by Vernon Mountcastle [176]. Albeit controversial [138, 145], microcolumns have been accounted for by diverse techniques ranging from the Nissl-staining method and optical density measurements to metabolic 2-deoxy-D-glucose methods, and might replace single neurons as the functional units of the cortex. Several facts support this theory. First, while synaptic transmission in the cortex generally takes 1-5 ms and might have an important impact on its function, the latency within one column is very small, making it an acceptable indivisible structure [49, 177]. Furthermore, there are not enough myelinated connections in the cortex for single neurons to form one-to-one long-range synaptic connections. For example, in the primary visual cortex, callosal terminations seem to correspond to the output of an entire orientation hypercolumn. Lastly, microcolumns can display more sophisticated behaviors than single neurons, making them very adaptable structures.
When taken alone, a neuron already displays complex dynamics. It was carefully described in the seminal work of Hodgkin and Huxley in 1952 on the giant squid.
This animal presents uncommonly large axon fibers, making it possible to wire it and record its excitable properties. Hodgkin and Huxley translated these into equations, and tailored the canonical model – still widely studied – of single neuron dynamics: a set of four coupled non-linear ordinary differential equations, taking into account the membrane potential, the opening and closing dynamics of the channels covering the surface (sodium and potassium), and the leak of the membrane. Because these equations are quite intricate, several reductions have been proposed in the following decades, such as the Morris-Lecar model [175] and the FitzHugh-Nagumo model [109, 140]. These simpler models all present the advantage of being tractable while preserving the excitability properties of neurons. As the present manuscript is exclusively interested in the dynamics of neural networks, we refer to [60] for further details on the intrinsic dynamics of single neurons.
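For concreteness, a minimal Euler integration of the FitzHugh-Nagumo reduction might look as follows. This is only a sketch: the parameter values are the classical textbook ones, chosen so that a constant input elicits tonic spiking.

```python
import numpy as np

def fitzhugh_nagumo(I_ext, dt=0.01, T=100.0, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the FitzHugh-Nagumo model:
         dv/dt = v - v**3/3 - w + I_ext   (fast, membrane-like variable)
         dw/dt = (v + a - b*w) / tau      (slow recovery variable)
    Returns the trajectories of v and w."""
    n = int(T / dt)
    v = np.empty(n)
    w = np.empty(n)
    v[0], w[0] = -1.0, 1.0
    for k in range(1, n):
        v[k] = v[k - 1] + dt * (v[k - 1] - v[k - 1] ** 3 / 3 - w[k - 1] + I_ext)
        w[k] = w[k - 1] + dt * (v[k - 1] + a - b * w[k - 1]) / tau
    return v, w
```

For a constant input I_ext = 0.5 the unique fixed point lies on the unstable middle branch of the cubic nullcline, and the trajectory settles on a relaxation limit cycle: the excitability of the neuron survives the reduction from four equations to two.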
Neural networks certainly fall into the scope of the theory of "complex systems". These systems are generally composed of many elementary units (spins, oscillators, neurons, etc.) interacting with each other through complex non-linear interactions, possibly subject to transmission delays. In fact, the human brain is made of around 80 billion neurons, each projecting up to 10,000 synapses toward other neurons. Beyond the possible intricacy of the intrinsic behavior of the components at play, the aim of complex systems theory is to discover, describe, and categorize the possible dynamics of such large networks. One of its main insights is that, due to the non-linearities in the model, the system is not equal to the sum of its parts. It might indeed display unexpected emergent behaviors, not deducible from the properties of its elements taken alone. The flocking of bats, in which millions fly coherently, is a striking example of such natural phenomena. Very often, the macroscopic behavior of the system is tuned by some parameters of the model, possibly associated with bifurcations or phase transitions. The system is thus reducible, and many details of the microscopic model become irrelevant. This can concern the intrinsic dynamics, the microscopic interactions, and some heterogeneity parameters of the model: a change in their value makes no difference! This conclusion – along with mathematical tractability – justifies, in some sense, the poor biological relevance of the intrinsic dynamics we will consider in our network equations. It also clarifies what is at stake: the aim is not to perfectly describe a biological neural network, but to capture something of its dynamics in the limit where the number of neurons is very large (known as the thermodynamic limit). Before giving the microscopic dynamics of the class of neural networks we will study, let us make precise a few hypotheses concerning the neural code and synaptic transmission.
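A toy illustration of such an emergent transition is the synchronization of coupled phase oscillators, in the spirit of the Kuramoto model studied in chapter 7. The coupling form and parameter values below are illustrative choices, not those of the manuscript: below a critical coupling the population stays incoherent, above it a macroscopic fraction of oscillators locks together.

```python
import numpy as np

def kuramoto_sync(K, N=500, dt=0.05, T=50.0, seed=0):
    """Simulate N all-to-all Kuramoto oscillators with coupling K and
    return the final order parameter r in [0, 1]
    (r ~ 0: incoherence, r ~ 1: full synchronization)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, N)           # heterogeneous natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)    # random initial phases
    for _ in range(int(T / dt)):
        # mean-field complex order parameter z = r * exp(i psi)
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        # each oscillator is pulled toward the mean phase with strength K*r
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return abs(np.exp(1j * theta).mean())
```

The macroscopic order parameter r, not any single oscillator, is what undergoes the phase transition: a property of the whole that cannot be read off one unit alone.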


The neural code and the firing-rate hypothesis

A very intricate and interesting issue in neurobiology concerns the neural code. In fact, knowing that neurons communicate by electrical means does not tell us how, exactly, the information is encoded. To gain some insight, neurobiologists have carefully recorded and categorized the temporal spiking patterns of neurons. It turns out that these spike trains can take many forms: individual spikes, periodic firing, square-wave bursting, periodic spikes intertwined with small oscillations, etc.
There is still no dictionary to translate this bestiary, but it certainly plays an important role in the encoding of neural information, especially if one considers long-range connections with well-identified source and target. If information is carried by the precise temporal spike pattern, we speak of a temporal code. We can compare this possible code to Morse code, where the interval between two flashes carries information.
The problem is that, if one wants to model a network composed of many neurons, taking into account the spike trains of every one of them is a Herculean task, if possible at all. Even simulating the network's trajectory on a computer seems prohibitive. Nevertheless, if one considers a local area of the brain containing only a few million neurons (V1 is thought to contain around 140 million in each hemisphere [157]), each sending and receiving many connections, one might expect that some averaging effect occurs, and that individual spike trains lose their importance. This is why a common assumption to reduce this difficulty is to consider that, in such integrative networks, interactions between neurons can roughly be summarized as a non-linear function of the instantaneous spiking frequency of the presynaptic neuron (integrated over a short time window), also called the firing rate. This non-linearity is described in the next subsection.
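As a small illustration of this averaging, the firing rate of a neuron can be estimated by counting spikes in a trailing time window; the window length and the millisecond units below are arbitrary choices for the sketch.

```python
import numpy as np

def firing_rate(spike_times, t, window=50.0):
    """Instantaneous firing rate (spikes/s) at time t (ms): the number of
    spikes falling in the trailing window (t - window, t], divided by the
    window duration."""
    spike_times = np.asarray(spike_times)
    n = np.count_nonzero((spike_times > t - window) & (spike_times <= t))
    return 1000.0 * n / window  # convert spikes/ms to spikes/s
```

This is exactly the quantity the firing-rate hypothesis retains: the precise spike times inside the window are discarded, only their count survives.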
While a rough assumption, the firing-rate hypothesis actually makes sense in a number of scenarios. First, it is compatible with modern non-invasive monitoring techniques (EEG, functional MRI, etc.) whose resolution is not precise enough to go beyond the local activity of brain regions containing many hundreds of neurons. In fact, in most experiments where a stimulus is presented to an animal, it is possible to find a group of neurons whose firing rate increases compared with the background activity. Moreover, in certain precise cases, the firing rate seems to condense all the information. For example, it has been demonstrated that the firing rate of a stretch receptor neuron associated with a muscle fiber is a function of the stretching force applied to the fiber [3, 4]. In these cases, the information is contained in the firing rate of a set of neurons.
Unsurprisingly, the significance of the firing-rate hypothesis is quite limited: neglecting (by definition) important temporal features of neuronal activity, it has been challenged by many authors [1, 202, 219]. The real surprise comes from the successes it did earn, providing insights into the mechanisms underlying hallucination patterns, binocular rivalry, working memory, and the emergence of up-down states in the cortex [94, 99]. These achievements justify the respectability this model has earned among the neuroscientific community, and explain why it is still in use today.

Synaptic models in Neural Networks

Synaptic transmission models are of chief importance in this manuscript. We will thus describe their evolution in detail. Nonetheless, the history of mathematical modeling of neural networks is a long one, going back at least to the 1930s [130]. Interestingly, it shares a lot with the development of Artificial Intelligence (AI) and Cybernetics, and it is notable that many early neuroscientists made contributions to both disciplines. Let us begin in 1943, when the neurophysiologist Warren S. McCulloch and the logician Walter Pitts proposed one of the first formal equations for neural integration [169]. Of logical inspiration, their article would become the starting point of the theory of neural networks, as it opened the possibility to study their discrete-time evolution. At the time, it was well known that neurons presented a spiking behavior in order to implement complex representations of the world. Inspired by this biological fact, McCulloch and Pitts suggested binary neurons, or formal neurons, either excitatory or inhibitory, evolving in discrete time, and whose activity at time t ∈ ℕ would be described by a boolean 0 or 1 (or -1 and 1). Moreover, at each time step, every neuron of the network updates its state through a non-linear function of the afferent activities with a threshold condition, unless inhibition occurs. For a given neuron i receiving inputs from neurons j ∈ [[1, N]], the update classically reads xi(t+1) = H(∑j Jij xj(t) − θi), with H the Heaviside step function, Jij the synaptic weights, and θi the threshold of neuron i.
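The resulting discrete-time dynamics can be sketched in a few lines. The weighted-threshold form below is the standard textbook variant of the McCulloch-Pitts rule; the weights and thresholds are hypothetical values chosen for illustration.

```python
import numpy as np

def mcculloch_pitts_step(x, W, theta):
    """One synchronous discrete-time update of a McCulloch-Pitts network:
    x_i(t+1) = H(sum_j W_ij * x_j(t) - theta_i), H the Heaviside function."""
    return (W @ x - theta >= 0).astype(int)

# Example: a single formal neuron computing a logical AND of two inputs.
W_and = np.array([[1.0, 1.0]])   # both afferents excitatory
theta_and = np.array([1.5])      # threshold exceeded only if both fire
```

McCulloch and Pitts' key observation was that such threshold units suffice to implement any boolean function, which is what made the formal neuron a credible elementary unit of computation.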
When i represents a single neuron, xi(t) is seen as a smooth average, over a short time window, of the otherwise spiky membrane potential of neuron i. If the neuron is very excited, it will emit exactly one spike within a time tr, where tr is the refractory period. Thus, S(xi(t)) represents either the probability that the neuron spikes within a time window of width tr, or its normalized spike frequency. A heuristic derivation of the latter interpretation can, for example, be found in [5], [60, section V.A] or [99, Chapter 12]. In contrast, when i represents a subnetwork of many neurons, xi(t) may represent the averaged membrane potential of its components, and S(xi(t)) the instantaneous spike frequency it generates. This view is especially natural when considering a neural field, that is, a continuous spatially extended neural network, for which xi(t) accounts for the local membrane potential on a neighborhood containing infinitely many neurons (see e.g. [245]). Note also that generally 0 ≤ S ≤ 1 but, as for the Heaviside function, this sigmoid sometimes takes values in the interval [-1, 1].
Of course, these models are far from embracing the whole complexity of synaptic transmission. Richer microscopic models take into account, for example, the reversal potential of the synapse, the concentration of neurotransmitters within the synaptic cleft, or the aperture dynamics of the channels covering the membrane of the postsynaptic neuron [99]. As already discussed, these omissions are consistent with the fact that we are dealing with a complex system, and do not want to enter too much into details. Nevertheless, a biological fact that should be taken into account is that synaptic transmission actually depends on the membrane potential xi(t) of the postsynaptic neuron [99]. In this manuscript, we propose to address a synaptic transmission of the form b(xi(t), xj(t)) through a mean-field analysis performed in chapter 5.
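A network with such state-dependent transmission can be sketched by a simple Euler scheme. The relaxation form and the choice of b below are illustrative assumptions, not the precise equations of chapter 5.

```python
import numpy as np

def simulate_network(J, b, T=20.0, dt=0.01, seed=0):
    """Euler scheme for a rate network with state-dependent synapses:
         dx_i/dt = -x_i + (1/N) * sum_j J_ij * b(x_i, x_j),
    where b(post, pre) is the transmission function and J the (random)
    connectivity matrix. Returns the final state."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    x = rng.normal(0.0, 1.0, N)          # random initial condition
    for _ in range(int(T / dt)):
        # pairwise transmission matrix: B[i, j] = b(x_i, x_j)
        B = b(x[:, None], x[None, :])
        x = x + dt * (-x + (J * B).sum(axis=1) / N)
    return x
```

Taking b(xi, xj) = S(xj), a sigmoid of the presynaptic state alone, recovers the classical current-based transmission discussed above; letting b depend on its first argument as well models the postsynaptic-potential dependence mentioned in the text.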

Table of contents :

General Introduction 
1 Neurosciences 
1.1 Historical notes
1.2 Basic notions
1.2.1 Neurons and action potentials
1.2.2 The synapse
1.3 Spatial organization of the brain
1.3.1 General overview
1.3.2 The mammalian cortex: from functional localization to microscopic organization
1.4 Mathematical models in Neuroscience
1.4.1 The neural code and the firing-rate hypothesis
1.4.2 Synaptic models in Neural Networks
1.4.3 Heterogeneities and noise
1.4.4 Neural fields
2 The mean-field approach 
2.1 Mean-field formalism
2.2 Initial conditions and propagation of chaos
2.3 Heterogeneous systems: averaged and quenched results
3 Mean-field theory for neuroscience 
3.1 A short history of rigorous mean-field approaches for randomly connected neural networks
3.2 Coupling methods for weak interactions
3.3 Large deviations techniques for Spin Glass dynamics
3.3.1 Setting
3.3.2 Frame of the analysis
3.3.3 Main results
II Large deviations for heterogeneous neural networks 
4 Spatially extended networks 
4.1 Introduction
4.1.1 Biological background
4.1.2 Microscopic Neuronal Network Model
4.2 Statement of the results
4.3 Large Deviation Principle
4.3.1 Construction of the good rate function
4.3.2 Upper-bound and Tightness
4.4 Identification of the mean-field equations
4.4.1 Convergence of the process
4.5 Non Gaussian connectivity weights
4.6 Appendix
4.6.1 A priori estimates for single neurons
4.6.2 Proof of lemma 4.2.2: regularity of the solutions for the limit equation
4.6.3 Non-Gaussian estimates
5 Network with state-dependent synapses 
5.1 Introduction
5.2 Mathematical setting and main results
5.3 Large deviation principle
5.3.1 Construction of the good rate function
5.3.2 Upper-bound and Tightness
5.4 Existence, uniqueness, and characterization of the limit
5.4.1 Convergence of the process and Quenched results
III Phenomenology of Random Networks 
6 Numerical study of a neural network 
6.1 Introduction
6.2 Mathematical setting and Gaussian characterization of the limit
6.2.1 Qualitative effects of the heterogeneity parameters
6.2.2 The generalized Sompolinsky-Crisanti-Sommers Equations
6.3 Numerical results on Random Neural Networks
6.3.1 One population networks
6.3.2 Multi-population networks
6.3.3 Heterogeneity-induced oscillations in two-populations networks
6.4 Discussion
7 Study of the Random Kuramoto model 
7.1 Introduction
7.2 Presentation of the model
7.3 Kuramoto model in neuroscience
7.4 Application of our results to the random Kuramoto model
7.4.1 Comparison with previous results
7.4.2 Numerical analysis of the phase transition
IV General Appendix 
8 Generalities 
8.1 A few classical theorems from probability theory
8.2 Gaussian Calculus
8.3 Stochastic Fubini Theorem
9 Large Deviations 
9.1 Preliminaries
9.2 Preliminary results, and Legendre transform
9.3 Cramér's Theorem
9.4 General Theory
9.4.1 Basic definitions and properties
9.5 Obtaining a LDP
9.5.1 Sanov’s theorem
9.5.2 Gärtner-Ellis Theorem
9.6 Deriving a LDP
9.6.1 Remarks on Legendre transform and convex rate functions
9.6.2 Non-convex Fenchel duality
9.7 Large Deviations Appendix
9.7.1 Sanov’s theorem for finite sequences
9.7.2 Proof of abstract Varadhan setting
General Notations
List of figures

