Spiking networks of leaky integrate-and-fire neurons: numerical results 

Dynamical Mean Field solutions

In all the cases that we consider, extracting solutions from the DMF equations amounts to solving a system of coupled non-linear equations which involve single or multiple Gaussian integrals. Importantly, for threshold-linear activation functions, an analytical expression for many of these integrals can be derived. In practice, the solutions of the DMF system of equations can be computed in a stable way by iterating the equations to convergence.
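As an illustration of such closed-form Gaussian integrals, consider the average of a plain threshold-linear transfer function ϕ(x) = [x]+ over an input with mean μ and variance Δ0 (a minimal sketch that ignores the saturation at ϕmax):

```latex
% Gaussian average of phi(x) = max(x, 0) for x ~ N(mu, Delta_0):
[\phi] = \int \mathcal{D}z \,\bigl[\mu + \sqrt{\Delta_0}\, z\bigr]_+
       = \frac{\mu}{2}\left[1 + \operatorname{erf}\!\left(\frac{\mu}{\sqrt{2\Delta_0}}\right)\right]
       + \sqrt{\frac{\Delta_0}{2\pi}}\; e^{-\mu^2/2\Delta_0}.
```

The bounded transfer function used here adds analogous error-function and Gaussian terms coming from the region where the argument exceeds ϕmax.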
For a network architecture where the number of incoming connections is fixed, DMF theory reduces to a system of two equations (Eq. 2.39).
In agreement with the dynamical systems analysis in paragraph 7.3.4, at low coupling values, the DMF theory predicts a solution with vanishing variance and auto-correlation (Fig. 3.1 a). Input currents settle to a stationary and uniform value, corresponding to their mean μ. The predicted value of μ coincides with the homogeneous fixed point x0, representing a low firing rate background activity (Fig. 3.1 c). As the coupling J is increased, the mean current becomes increasingly negative because inhibition dominates, and the mean firing rate decreases (Fig. 3.1 c-d).
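As a concrete illustration, the following minimal Python sketch iterates a two-equation DMF system of this kind to a fixed point. Since Eq. 2.39 is not reproduced in this excerpt, the update rules assume a generic form, μ = J(CE − gCI)[ϕ] + I and Δ0 = J²(CE + g²CI)([ϕ²] − [ϕ]²), with Gaussian averages over the input distribution; the function names and parameter values are illustrative, not taken from the thesis.

```python
# Minimal sketch: damped fixed-point iteration of a two-equation DMF system
# for the mean input mu and the variance Delta0. The update rules below are
# an ASSUMED generic form (the thesis' Eq. 2.39 is not reproduced here):
#   mu     = J * (C_E - g * C_I) * [phi]                  + I
#   Delta0 = J**2 * (C_E + g**2 * C_I) * ([phi^2] - [phi]^2)
# where [.] denotes a Gaussian average over inputs x ~ N(mu, Delta0).

import numpy as np

# Gauss-Hermite quadrature for averages over z ~ N(0, 1)
nodes, weights = np.polynomial.hermite_e.hermegauss(100)
weights = weights / np.sqrt(2.0 * np.pi)


def phi(x, phi_max):
    """Bounded threshold-linear transfer function: 0 below 0, linear up to phi_max."""
    return np.clip(x, 0.0, phi_max)


def gauss_avg(f, mu, delta0, phi_max):
    """Gaussian average of f(phi(x)) with x ~ N(mu, delta0)."""
    x = mu + np.sqrt(max(delta0, 0.0)) * nodes
    return float(np.sum(weights * f(phi(x, phi_max))))


def solve_dmf(J, g, C_E, C_I, I=1.0, phi_max=2.0, n_iter=5000, lr=0.05):
    """Iterate the (assumed) DMF map with damping until it settles."""
    mu, delta0 = I, 1.0
    for _ in range(n_iter):
        m1 = gauss_avg(lambda r: r, mu, delta0, phi_max)       # [phi]
        m2 = gauss_avg(lambda r: r ** 2, mu, delta0, phi_max)  # [phi^2]
        mu_new = J * (C_E - g * C_I) * m1 + I
        delta0_new = J ** 2 * (C_E + g ** 2 * C_I) * (m2 - m1 ** 2)
        # damped updates keep the iteration stable
        mu += lr * (mu_new - mu)
        delta0 += lr * (delta0_new - delta0)
    return mu, delta0


if __name__ == "__main__":
    # Illustrative parameters: inhibition-dominated, since g * C_I > C_E
    mu, delta0 = solve_dmf(J=0.3, g=5.0, C_E=80, C_I=20)
    print(f"mu = {mu:.3f}, Delta0 = {delta0:.3f}")
```

Below the critical coupling this iteration converges to the homogeneous solution with vanishing variance; above it, a non-zero Δ0 emerges self-consistently.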

Intermediate and strong coupling chaotic regimes

The mean field approach revealed that, above the critical coupling JC, the network generates fluctuating but stable, stationary activity. The dynamical systems analysis, however, showed that the dynamics of an equivalent linearized network are unstable and divergent for identical parameter values. The stability of the fluctuating activity is therefore necessarily due to the two non-linear constraints present in the system: the requirement that firing rates are positive, and the requirement that firing rates are limited by an upper bound ϕmax.
In order to isolate the two contributions, we examined how the amplitude of fluctuating activity depends on the upper bound on firing rates ϕmax, ultimately taking this bound to infinity and leaving the activity unbounded. Solving the corresponding DMF equations revealed the presence of two qualitatively different regimes of fluctuating activity above JC (Fig. 3.2).
For intermediate coupling values, the magnitude of fluctuations and the mean firing rate depend only weakly on the upper bound ϕmax. In particular, for ϕmax → ∞ the dynamics remain stable and bounded. The positive feedback that generates the linear instability is dominantly due to negative, inhibitory interactions multiplying negative firing rates in the linearized model. In this regime, the requirement that firing rates are positive, combined with dominant inhibition, is sufficient to suppress this feedback and stabilize the fluctuating dynamics.
For larger coupling values, the dynamics depend strongly on the upper bound ϕmax. As ϕmax is increased, the magnitude of fluctuations and the mean firing rate continuously increase, and they diverge as ϕmax → ∞. For large coupling values, the fluctuating dynamics are therefore stabilized by the upper bound and become unstable in the absence of saturation, even though inhibition is globally stronger than excitation.
Fig. 3.2 d summarizes the qualitative changes in the dependence on the upper bound ϕmax. In the fixed point regime, mean inputs are suppressed by inhibition, and they correspond to the low-gain region of ϕ(x), which is independent of ϕmax. Above JC, in the intermediate regime, the solution rapidly saturates to a limiting value. In the strong coupling regime, the mean firing rate, as well as the mean input μ and its standard deviation √Δ0, grow linearly with the upper bound ϕmax. We observe that when ϕmax is large, the numerically simulated mean activity shows larger deviations from the theoretically predicted value because of stronger finite-size effects (for a more detailed discussion, see Appendix A).
The two regimes of fluctuating activity are characterized by different scalings of the first- and second-order statistics with the upper bound ϕmax. In the absence of an upper bound on the activity, i.e. in the limit ϕmax → ∞, the two regimes are sharply separated by a second “critical” coupling JD: below JD, the network reaches a stable fluctuating steady state and DMF admits a solution; above JD, the network has no stable steady state, and DMF admits no solution.
JD corresponds to the value of the coupling for which the DMF solution diverges, and it can be determined analytically (see the next paragraph). For a fixed, finite value of the upper bound ϕmax, there is however no sign of a transition as the coupling is increased past JD. Indeed, for a fixed ϕmax, the network reaches a stable fluctuating steady state on both sides of JD, and no qualitative difference is apparent between these two steady states. The difference appears only when the value of the upper bound ϕmax is varied. JD therefore separates two dynamical regimes in which the statistics of the activity scale differently with the upper bound ϕmax, but for a fixed, finite ϕmax it does not correspond to an instability. The second “critical” coupling JD is therefore qualitatively different from the critical coupling JC, which is associated with an instability for any value of ϕmax. In summary, the two non-linearities induced by the requirements that the firing rates are positive and bounded play asymmetrical roles in stabilizing the fluctuating dynamics. In the excitatory-inhibitory networks considered here, this asymmetry leads to two qualitatively different fluctuating regimes.
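A possible numerical way to tell the two regimes apart, directly suggested by this scaling argument, is to solve the DMF equations for increasing values of ϕmax at fixed coupling and check whether the mean rate saturates or keeps growing. The sketch below builds on the iterative solver shown earlier (assumed saved as dmf_sketch.py); the classification threshold and the parameter values are illustrative heuristics, not quantities taken from the thesis.

```python
# Heuristic sketch: classify the fluctuating regime at fixed coupling J by the
# scaling of the DMF solution with the upper bound phi_max (cf. Fig. 3.2 d).
# Intermediate regime (J_C < J < J_D): the mean rate saturates as phi_max grows.
# Strong coupling regime (J > J_D): the mean rate keeps growing roughly linearly.

import numpy as np

# functions from the previous sketch, assumed saved as dmf_sketch.py
from dmf_sketch import solve_dmf, gauss_avg


def mean_rate_vs_bound(J, g, C_E, C_I, I, phi_max_values):
    """Mean firing rate [phi] predicted by the (assumed) DMF map for each bound."""
    rates = []
    for phi_max in phi_max_values:
        mu, delta0 = solve_dmf(J, g, C_E, C_I, I=I, phi_max=phi_max)
        rates.append(gauss_avg(lambda r: r, mu, delta0, phi_max))
    return np.array(rates)


if __name__ == "__main__":
    bounds = np.array([2.0, 4.0, 8.0, 16.0, 32.0])   # successive doublings
    rates = mean_rate_vs_bound(J=0.3, g=5.0, C_E=80, C_I=20, I=1.0,
                               phi_max_values=bounds)
    # If doubling phi_max roughly doubles the rate, the bound is what
    # stabilizes the dynamics; if the rate barely changes, it is not.
    growth = rates[-1] / rates[-2]
    regime = "strong coupling (bound-stabilized)" if growth > 1.5 else "intermediate"
    print(regime, np.round(rates, 3))
```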

Extensions to more general classes of networks

In a second step, we extend our analysis to more complex models of excitatory-inhibitory networks. In all the cases that we study, the DMF equations can still be derived and solved numerically, but an analytical expression for the divergence coupling JD is typically harder to derive.
In Appendix B we show in detail how the mean field equations should be modified to include the additional constraints within the self-consistent description. Here, we focus on the results and their implications.

Connectivity with stochastic in-degree

We now turn to networks in which the number of incoming connections is not fixed for all neurons, but fluctuates stochastically around a mean value C. We consider a connectivity scheme in which each excitatory (resp. inhibitory) neuron makes a connection of strength J (resp. −gJ) with probability C/N. In this class of networks, the number of incoming connections per neuron has a variance equal to the mean. As a consequence, in the stationary state, the total input strongly varies among units. In contrast to the case of a fixed in-degree, the network does not admit a homogeneous fixed point. The fixed point is instead heterogeneous, and more difficult to study using dynamical systems tools.
The dynamical mean field approach can however be extended to include the heterogeneity generated by the variable number of incoming connections [141, 69, 58]. As derived in Appendix B, the stationary distributions are now described by a mean μ and a static variance Δ0 that obey: μ = J(CE − gCI)[ϕ] + I;
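The excerpt cuts off before the companion equation for the static variance Δ0. Purely as an illustration of the structure of the resulting self-consistent system, a plausible form under the Poisson-like in-degree statistics described above is sketched below; the exact expression is the one derived in Appendix B, which is not reproduced here.

```latex
% Sketch of the self-consistent stationary statistics with stochastic in-degree.
% The mean equation is the one quoted in the text; the variance equation is an
% ASSUMED Poisson-in-degree form, not reproduced from Appendix B.
\mu = J\,(C_E - g\,C_I)\,[\phi] + I,
\qquad
\Delta_0 = J^2\,(C_E + g^2 C_I)\,[\phi^2],
\qquad
[\phi^k] = \int \mathcal{D}z\;\phi^k\!\bigl(\mu + \sqrt{\Delta_0}\,z\bigr).
```

A pair of equations of this kind can be solved with the same damped fixed-point iteration sketched earlier in this section.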

Table of contents:

1 Introduction 
1.1 Irregular firing in cortical networks
1.1.1 Irregular inputs, irregular outputs
1.1.2 Point-process and firing rate variability
1.2 Chaotic regimes in networks of firing rate units
1.3 Outline of the work
I Intrinsically-generated fluctuations in random networks of excitatory-inhibitory units 
2 Dynamical Mean Field description of excitatory-inhibitory networks 
2.1 Transition to chaos in recurrent random networks
2.1.1 Linear stability analysis
2.1.2 The Dynamical Mean Field theory
2.2 Fast dynamics: discrete time evolution
2.3 Transition to chaos in excitatory-inhibitory neural networks
2.3.1 The model
2.3.2 Linear stability analysis
2.3.3 Deriving DMF equations
3 Two regimes of fluctuating activity 
3.1 Dynamical Mean Field solutions
3.2 Intermediate and strong coupling chaotic regimes
3.2.1 Computing JD
3.2.2 Purely inhibitory networks
3.3 Extensions to more general classes of networks
3.3.1 The effect of noise
3.3.2 Connectivity with stochastic in-degree
3.3.3 General excitatory-inhibitory networks
3.4 Relation to previous works
4 Rate fluctuations in spiking networks 
4.1 Rate networks with a LIF transfer function
4.2 Spiking networks of leaky integrate-and-fire neurons: numerical results
4.3 Discussion
4.3.1 Mean field theories and rate-based descriptions of integrate-and-fire networks
II Random networks as reservoirs 
5 Computing with recurrent networks: an overview 
5.1 Designing structured recurrent networks
5.2 Training structured recurrent networks
5.2.1 Reservoir computing
5.2.2 Closing the loop
5.2.3 Understanding trained networks
6 Analysis of a linear trained network 
6.1 From feedback architectures to auto-encoders and vice versa
6.1.1 Exact solution
6.1.2 The effective dynamics
6.1.3 Multiple frequencies
6.2 A mean field analysis
6.2.1 Results
6.3 A comparison with trained networks
6.3.1 Training auto-encoders
6.3.2 Training feedback architectures
6.3.3 Discussion
6.4 Towards non-linear networks
6.4.1 Response in non-linear random reservoirs
6.4.2 Training non-linear networks
III Linking connectivity, dynamics and computations 
7 Dynamics of networks with unit rank structure 
7.1 One dimensional spontaneous activity in networks with unit rank structure
7.2 Two dimensional activity in response to an input
7.3 The mean field framework
7.3.1 The network model
7.3.2 Computing the network statistics
7.3.3 Dynamical Mean Field equations for stationary solutions
7.3.4 Transient dynamics and stability of stationary solutions
7.3.5 Dynamical Mean Field equations for chaotic solutions
7.3.6 Structures overlapping on the unitary direction
7.3.7 Structures overlapping on an arbitrary direction
7.3.8 Response to external inputs
8 Implementing computations 
8.1 Computing with unit rank structures: the Go-Nogo task
8.1.1 Mean field equations
8.2 Computing with rank two structures
8.3 Implementing the 2AFC task
8.3.1 Mean field equations
8.4 Building a ring attractor
8.4.1 Mean field equations
8.5 Implementing a context-dependent discrimination task
8.5.1 Mean field equations
8.6 Oscillations and temporal outputs
8.6.1 Mean field equations
8.7 Discussion
9 A supervised training perspective 
9.1 Input-output patterns associations
9.1.1 Inverting the mean field equations
9.1.2 Stable and unstable associations
9.2 Input-output associations in echo-state architectures
9.2.1 A comparison with trained networks
9.2.2 Different activation functions
Appendix A
Finite size effects and limits of the DMF assumptions
Finite size effects
Correlations for high ϕmax
Limits of the Gaussian approximation
Appendix B
DMF equations in generalized E-I settings
Mean field theory in presence of noise
Mean field theory with stochastic in-degree
Mean field theory in general E-I networks
Appendix C
Unit rank structures in networks with positive activation functions
Dynamical Mean Field solutions
Appendix D
Two time scales of fluctuations in networks with unit rank structure
Appendix E
Non-Gaussian unit rank structures
Appendix F
Stability analysis in networks with rank two structures
Bibliography
