Discrete and continuous attractors in recurrent neural networks

Table of contents

1 Statistical physics meets computational neuroscience
2 The low-dimensional manifold hypothesis
2.1 In machine learning
2.2 In computational neuroscience
3 Recurrent neural networks and attractors
3.1 Discrete and continuous attractors in recurrent neural networks
3.2 Hopfield model: multiple point-attractors
3.2.1 Ingredients of the model
3.2.2 Model details and properties
3.2.3 Why is it important to study simple models?
3.3 Representation of space in the brain
3.3.1 Hippocampus
3.3.2 Place cells and place fields
3.4 Storing a single continuous attractor
3.4.1 Tsodyks and Sejnowski’s model
3.4.2 CANN and statistical physics: Lebowitz and Penrose’s model
3.4.3 Continuous attractors and population coding
3.5 Why is it necessary to store multiple continuous attractors?
3.6 The case of multiple continuous attractors
3.6.1 Samsonovich and McNaughton’s model
3.6.2 Rosay and Monasson’s model
3.7 Issues with current theory
3.8 Experimental evidence for continuous attractors
3.8.1 Head-direction cells
3.8.2 The fruit fly central complex
3.8.3 Grid cells
3.8.4 Prefrontal cortex
3.8.5 Other examples
4 Optimal capacity-resolution trade-off in memories of multiple continuous attractors
4.1 Introduction
4.2 The Model
4.3 Learning the optimal couplings
4.4 Results of numerical simulations
4.4.1 Couplings obtained by SVM
4.4.2 Finite temperature dynamics (T > 0)
4.4.3 Zero temperature dynamics (T = 0) and spatial error
4.4.4 Comparison with Hebb rule
4.4.5 Capacity-resolution trade-off
4.5 Gardner’s theory for RNNs storing spatially correlated patterns
4.6 Quenched Input Fields Theory
4.6.1 Replica calculation
4.6.2 Log. volume and saddle-point equations close to the critical line
4.6.3 Large-p behavior of the critical capacity
5 Spectrum of multi-space Euclidean random matrices
5.1 Introduction
5.2 Spectrum of MERM: free-probability-inspired derivation
5.2.1 Case of the extensive eigenvalue (k = 0)
5.2.2 Case of a single space (L = 1)
5.2.3 Case of multiple spaces (L = N)
5.3 Spectrum of MERM: replica-based derivation
5.4 Application and comparison with numerics
5.4.1 Numerical computation of the spectrum
5.4.2 Merging of density “connected components”: behavior of the density at small
5.4.3 Eigenvectors of MERM and Fourier modes associated with the ERMs
6 Towards greater biological plausibility
6.1 Introduction
6.2 Border effects
6.3 Positive couplings constraint
6.3.1 Couplings obtained by SVMs with positive weights constraint
6.3.2 Stability obtained by SVMs with positive weights constraint
6.3.3 Adding the positive weights constraint in Gardner’s framework
6.4 Variants of the place cell model
6.4.1 Dilution
6.4.2 Place fields of different volumes
6.4.3 Multiple place fields per cell in each space
6.4.4 Putting everything together
6.5 Individuality of neurons
6.6 Non-uniform distribution of positions
6.7 Comparison between SVM and theoretical couplings
6.8 Dynamics of learning
6.9 Learning continuous attractors in RNN from real images
7 Conclusions
Bibliography
