Computational Neuroscience, Statistical Mechanics and the hippocampus 


The Statistical Mechanics Approach

At this point, the reader may wonder why on earth this thesis is labeled "theoretical physics". Statistical mechanics is indeed a branch of physics, but the reason for this is more historical than anything else. Historically, this field was born in the context of thermodynamics at the end of the 19th century and developed during the subsequent decades in the framework of condensed matter, with the objective of deducing macroscopic quantities from microscopic laws, and of explaining collective behaviours such as phase transitions. It relies on the observation that, when moving from the individual to the collective level of description, novel, non-trivial phenomena appear: "more is different", as P.W. Anderson put it [5]. In the case of gases or condensed matter, the motivation for resorting to statistical mechanics is two-fold: (1) what we observe experimentally is the macroscopic scale, so we are mostly interested in macroscopic quantities, and (2) even if we wanted to, we would not be able to compute microscopic quantities, because the number of parameters is too large. A statistical approach thus encompasses both what we want to do and what we can do.
In a sense, statistical mechanics is more a mathematical framework grown on physics grounds than true-blue physics. It is thus not limited to problems of physics: situations with multiple levels of description are ubiquitous. During the past decades, tools from statistical mechanics have been applied in many fields such as sociology, economics, finance, biology... and neuroscience, as we will explain here.
Coming back to the brain, it should now appear clearly that the aforementioned connectionist hypothesis makes mental states a statistical mechanics problem. There are microscopic units (the neurons) in interaction on a network (via the synapses), and each of them follows its own local, microscopic dynamics. From this we would like to deduce the global states. The relationship with statistical mechanics is thus straightforward. More precisely, this problem has been found to be particularly reminiscent of magnetic systems in which spins (magnetic moments) interact with each other: spins are binary units that can be either in an up or a down state, just as a neuron can be either firing a spike or quiescent. There is thus a direct kinship between the celebrated Ising model, which is the seminal model for magnetic systems, and schematic representations of neural networks [6]. To complete the analogy, in neural networks there may also be randomness, both in the structure of the network ("quenched noise") and in the response of each neuron ("fast noise", equivalent to a temperature parameter). Therefore, all the tools developed in the former case are also usable in the latter.
A lot of work has thus been done to tackle computational neuroscience problems with statistical mechanics approaches. Neural network models have been proposed that account for observations of brain activity at macroscopic scales, or that perform certain functions. Moreover, another source of useful tools for computational neuroscience has been information theory, a field closely tied to statistical mechanics. It is helpful both in addressing issues such as perception or neural representation and in massive data analysis.
Interdisciplinarity in general has therefore proved a fruitful approach. Still, the communication between two different scientific communities is not always easy: for a physicist, "temperature" and "energy" refer to noise and probability, while for a neurophysiologist they evoke body warmth and metabolism, just to give an example.

The Attractor Neural Network Theory

Let us now turn to the particular issue of memory. Before asking how memory can be supported by nervous systems, we have to define what memory is. Here comes the attractor neural network hypothesis, proposed by Donald Hebb in his Organization of Behavior (1949) [7]. It can be expressed as follows:
What is memorised are configurations of activity of the neurons (i.e. firing patterns). A configuration is said to be memorised when it is a stable state of the network. In other words, a memory item is an activity configuration that is an attractor of the network's dynamics, hence the name of attractor neural network.
This assumption may seem surprising and artificial at first glance. Yet it is quite well accepted among neuroscientists and has received some experimental support. Very schematically, the idea is that, for something to be memorised, one has to be able to maintain for some time the neural activity that corresponds to this memory. The next question is: can we propose a neural model that stores memories? We have said before that the states of a neural network are driven by the synapses. We can thus rephrase the question: for given activity configurations, can we find synaptic couplings for which those configurations are attractors?
In 1982, John Hopfield [8] proposed such a network, which became famous as the Hopfield model. It consists of a number \(N\) of binary (i.e. either active or silent) neurons \(\{\sigma_i\}_{i=1\dots N}\) and stores a number \(p\) of randomly selected binary configurations \(\{\xi_i^{\mu}\}_{i=1\dots N,\ \mu=1\dots p}\); \(\xi_i^{\mu}\) is equal to +1 if neuron \(i\) is active in configuration \(\mu\), and -1 otherwise. The synaptic couplings that allow these configurations to be attractors are given by the so-called Hebb rule:
\[
J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu} \qquad \forall\, i, j. \tag{1.1}
\]
The idea underlying this rule is that, during a previous hypothetical learning phase, the p patterns have been presented to the network (i.e. the network has been forced to adopt these p configurations successively) and that neurons that were then active together reinforced the connection between them. The reinforcement of couplings between coactivated neurons, already postulated by Hebb in 1949, has been observed experimentally since then [9]. It is summarised by the proverb: "neurons that fire together wire together". Rule (1.1) also assumes that these successive modifications of the couplings sum up additively. Note that each neuron is a priori coupled to all the others (\(J_{ij} \neq 0\ \forall\, i, j\)): the network is said to be recurrent.
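As a concrete illustration, the construction of the couplings by rule (1.1) can be sketched in a few lines of Python (a sketch with NumPy; the function and variable names are ours, not standard notation):

```python
import numpy as np

def hebb_couplings(patterns):
    """Hebb rule (1.1): J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, with J_ii = 0.

    `patterns` is a (p, N) array of +/-1 entries, one row per stored configuration.
    """
    p, N = patterns.shape
    J = patterns.T @ patterns / N  # sum of outer products over the p patterns
    np.fill_diagonal(J, 0.0)       # no self-coupling
    return J

# store p = 3 random patterns over N = 10 neurons
rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(3, 10))
J = hebb_couplings(xi)
```

Since each term \(\xi_i^{\mu} \xi_j^{\mu}\) is symmetric in \(i\) and \(j\), the resulting couplings satisfy \(J_{ij} = J_{ji}\).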
The last thing to define is the dynamics of the network. In the original 1982 paper [8], time was discretized and at each time step neurons responded deterministically to their local fields. Later studies (see ref. [10]) incorporated the possibility of stochasticity in the response, through a noise (or temperature) parameter \(T\), so that the system obeys detailed balance at temperature \(T\) under the Hamiltonian
\[
E = -\sum_{i<j} J_{ij}\, \sigma_i \sigma_j. \tag{1.2}
\]
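One standard stochastic dynamics satisfying detailed balance with respect to the Gibbs distribution of (1.2) is the heat-bath (Glauber) update. A minimal sketch, with all names and the toy example ours:

```python
import math
import random

def glauber_step(J, s, T, i):
    """Heat-bath update of neuron i at temperature T: the new state is +1
    with probability 1 / (1 + exp(-2 h_i / T)), where h_i = sum_j J_ij s_j
    is the local field. This choice obeys detailed balance with respect to
    the distribution exp(-E/T) for E = -sum_{i<j} J_ij s_i s_j."""
    h = sum(J[i][j] * s[j] for j in range(len(s)) if j != i)
    p_up = 1.0 / (1.0 + math.exp(-2.0 * h / T))
    s[i] = 1 if random.random() < p_up else -1

# two strongly coupled neurons at low temperature align with high probability
random.seed(0)
J = [[0.0, 1.0], [1.0, 0.0]]
s = [-1, 1]
for _ in range(100):
    glauber_step(J, s, 0.1, 0)
```

When \(T\) is small compared to the couplings, the update becomes an almost deterministic alignment with the local field, recovering the original 1982 dynamics; at large \(T\), the responses become random.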
The introduction of this model aroused much excitement in the statistical mechanics community. Indeed, the Hopfield model has many points in common with spin glasses (frustrated, disordered magnetic systems), a very active domain of statistical mechanics. The properties of the Hopfield model have been intensively studied. The first question is of course to check whether the patterns \(\{\xi_i^{\mu}\}_{i=1\dots N,\ \mu=1\dots p}\) are indeed attractors of the dynamics: it turns out that, for \(p\) and \(T\) not too large (i.e. not too strong noise and memory load), they are. From any initial state, the system will converge to the closest attractor, i.e. the stored pattern that most resembles this initial state. For this reason we speak of "autoassociative memory": feeding the network with a partial cue leads to the retrieval of the whole pattern. The maximal value of \(p\) is proportional to the number \(N\) of neurons.
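The autoassociative retrieval just described can be demonstrated with a short simulation at zero temperature (a sketch under our own naming conventions; the memory load and the 20% corruption level are illustrative choices):

```python
import numpy as np

def hebb_couplings(patterns):
    """Hebb rule (1.1), with no self-coupling."""
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)
    return J

def retrieve(J, state, max_sweeps=50):
    """Zero-temperature asynchronous dynamics: each neuron aligns with its
    local field h_i = sum_j J_ij s_j until the configuration is stable."""
    s = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if J[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

rng = np.random.default_rng(1)
N, p = 200, 5                        # memory load well below capacity
xi = rng.choice([-1, 1], size=(p, N))
J = hebb_couplings(xi)

cue = xi[0].copy()
flipped = rng.choice(N, size=N // 5, replace=False)
cue[flipped] *= -1                   # corrupt 20% of the stored pattern
out = retrieve(J, cue)
overlap = (out @ xi[0]) / N          # 1.0 means perfect retrieval
```

With this load, the corrupted cue typically flows back to the stored pattern, i.e. the overlap returns to values close to 1.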
Many other aspects have been investigated: the dynamics of the system, its response to an external field... Moreover, many variants of the model have been proposed, in order to take into account this or that property of real neurons, or to give the network this or that function. For instance, the observation of low activity in real cortical networks can be incorporated [11]. It would be too long to review here all the work and variations based on the Hopfield model (the reader can refer to ref. [12]). We will discuss at length, in the case of the hippocampus, how a basic attractor model can be thoroughly explored and refined.
To finish on a general remark: in all the work that has been done on attractor neural networks during the past thirty years, it is not always very clear whether the properties exhibited in this or that model are supposed to happen "for real" in biological systems. Statistical physicists, when they propose a model, like to explore every inch of it, even the parts that are beyond the scope of the system as a model for reality, just for the fun of it, if I may say so. Thus, the boundary between what is for real and what is for art's sake is not always neat. This is not necessarily bad: this taste for exhaustivity gives rise to very elegant investigations, and sometimes the properties thus exhibited turn out later to happen in real life. But one should always keep in mind that not all of reality is contained in the model, nor are all the model's properties present in reality.

The Hippocampus

The Hopfield model and its variants are theoretical views on what memory could be. Now, there are experimental observations indicating that the attractor mechanism is indeed the basis of memory, at least of some sorts of memory in some parts of the brain, of which the hippocampus is one.
This section is dedicated to presenting this brain area and some of the empirical knowledge on it. Theoretical models will be presented and discussed in Chapter 2.


Types of memory

Memory is the ability to retrieve, after a delay, information or a skill that has been learnt before. Learning and retrieval are the two fundamental components of this process. Several types of memory are distinguished based on their contents, timescales and retrieval modes.
First, memory can be either short- or long-term: the former decays after some seconds to minutes (e.g. remembering a phone number while dialling it), while the latter can last a lifetime. Then, long-term memory can be either declarative or procedural. Declarative memory refers to explicit knowledge that can be consciously recalled; procedural memory refers to non-conscious memory such as skills (e.g. riding a bike) and conditioning (e.g. salivating at the microwave bell). Within declarative memory itself, we distinguish episodic memory and semantic memory. The former is the memory of autobiographical events (e.g. one's childhood in Combray) and the latter is the memory of general facts (Marcel Proust was a writer). The two can contain similar facts, but in episodic memory these facts are associated with the context of their learning, while in semantic memory they constitute knowledge detached from one's personal experience.
This classification of memory types is summarised in Figure 1.2. When discussing the role of the hippocampus, we will be mainly concerned with declarative memory.


The hippocampus is a region of mammalian brains; it has a homologue in most vertebrate species. In humans it is located ventrally, under the cortex in the medial temporal lobe, and its shape is reminiscent of a seahorse (hence its name). In rodents, in contrast, it lies caudally (see Fig. 1.3). Despite these differences of aspect from one species to the other, the general connectivity scheme of the hippocampus is well preserved across mammals. It is sketched in Figure 1.4. The hippocampal formation is composed of the CA fields (CA1 and CA3, from cornu Ammonis, "Ammon's horn"), the dentate gyrus (DG) and the subicular complex. Synaptic outputs from many cortical areas converge to the hippocampal formation through the entorhinal cortex (EC). The synaptic input from the entorhinal cortex to the hippocampal formation is called the perforant pathway (PP).
Figure 1.3: Location of the hippocampus in the human brain (left) and in the rodent brain (right), in blue.
The dentate gyrus projects to CA3 through the mossy fibers (MF) and CA3 projects to CA1 through the Schaffer collaterals (SC). Within CA3 itself there is a dense recurrent connectivity. CA1 sends axons back to the entorhinal cortex, both directly and through the subicular complex.
Figure 1.4: The hippocampal formation. Left: connectivity diagram. Blue boxes indicate the different regions (EC = entorhinal cortex, DG = dentate gyrus, Sub = subiculum). Orange arrows indicate the excitatory synaptic axons and their direction of propagation (PP = perforant pathway, MF = mossy fibers, RC = recurrent collaterals, SC = Schaffer collaterals). Right: drawing by Ramón y Cajal (1909) (adapted, with the same legend as the left figure).


Research on the hippocampus

The hippocampus is one of the most studied parts of the brain. Historically, it has been the object of two parallel, almost separate lines of research. The first concerns the human hippocampus, studied mostly through a behavioural approach in lesioned patients. The second line of research, making use of microelectrode recordings, focusses on the rodent hippocampus. As we will see, these two independent approaches led to apparently very different visions of the hippocampus.
In humans, the research on the memory function of the hippocampus really started with the seminal case of patient H.M., reported in 1957 by Scoville and Milner [13]. H.M. had undergone a bilateral medial temporal lobectomy in an attempt to relieve his epilepsy; in a nutshell, he had no hippocampus anymore. After the surgery, he suffered from memory disorders: he was unable to form new memories (anterograde amnesia) and also to remember some of his memories anterior to the lobectomy (retrograde amnesia). His other intellectual and perceptual skills, including short-term and procedural memories, were intact. The case of H.M. inspired the theory of an episodic memory function for the hippocampus and oriented the subsequent research carried out on humans, mostly patients with a lesioned hippocampus (see [14] for a review). Marr introduced an influential model for the role of the hippocampus as a temporary memory centre, where memories of new events are encoded before being gradually transferred to the neocortex, where they are permanently stored [15]. More recently, single-unit recording data in humans became available thanks to the pre-surgical implantation of some epileptic patients. This is how Quian Quiroga et al. discovered neurons in the hippocampus that are activated specifically by the presentation of pictures or the name of a person or an object, e.g. Jennifer Aniston, Halle Berry or the Sydney Opera House [16]. This discovery supported the assumption that the hippocampus forms episodic memories by binding together concepts, objects and people.
In rodents, on the other hand, everything started in 1971, when O'Keefe and Dostrovsky recorded single-unit activity in freely behaving rats and discovered that some cells in the hippocampus have the astonishing property of being active if and only if the rat is located at a given position [17]. These cells were named "place cells" and the location in space where a given place cell is active is called its "place field" (see Figure 1.5). They are pyramidal cells (a type of excitatory neuron, named after the shape of their soma) found in CA1 and CA3. This original finding led to the postulate that the hippocampus supports spatial memory and navigation [18]. Since then, a lot of research has been carried out on the rodent hippocampus, focussing on its presumed spatial function.
Figure 1.5: Spatially localized firing of a rat's place cell. The activity has been recorded in CA3 during a 14-minute exploratory session in a familiar 60 cm × 60 cm square box. Left: trajectory of the rat (black line) and positions where spikes were emitted by the cell (each red dot represents a spike). Right: corresponding smoothed rate map (i.e. average number of spikes per time unit as a function of position), displaying the characteristic bump of activity. The recording is taken from the dataset described in Section 3.2.
The most widely used experimental technique is the microelectrode recording of individual cells in the behaving rat. With this technique, rats trained on a given spatial task are surgically implanted with microelectrode arrays in the hippocampus. After recovery, the position of the electrodes is adjusted and the activity is recorded while the animal performs the task. The recording is numerically processed to sort out the spikes and allocate them to individual neurons.
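Rate maps like the one in Figure 1.5 are typically computed by binning the arena, normalising spike counts by occupancy time, and smoothing. A minimal sketch (the bin sizes, sampling rate and synthetic data below are our assumptions, not taken from the dataset of Section 3.2):

```python
import numpy as np

def rate_map(pos, spike_pos, box=60.0, nbins=20, dt=1 / 50):
    """Occupancy-normalised firing-rate map on an nbins x nbins grid.

    pos: (T, 2) tracked positions in cm, one sample every dt seconds.
    spike_pos: (S, 2) positions at which the cell emitted a spike.
    """
    edges = np.linspace(0.0, box, nbins + 1)
    occ, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    spk, _, _ = np.histogram2d(spike_pos[:, 0], spike_pos[:, 1], bins=[edges, edges])
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = spk / (occ * dt)            # spikes per second in each bin
    rate = np.nan_to_num(rate)             # unvisited bins -> 0 Hz
    # light smoothing: average each bin with its neighbours along both axes
    k = np.array([0.25, 0.5, 0.25])
    for axis in (0, 1):
        rate = np.apply_along_axis(np.convolve, axis, rate, k, mode="same")
    return rate

# synthetic session: uniform exploration, one Gaussian place field at (30, 40)
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 60.0, size=(20000, 2))
d2 = ((pos - np.array([30.0, 40.0])) ** 2).sum(axis=1)
p_spike = 20.0 * np.exp(-d2 / (2 * 8.0 ** 2)) * (1 / 50)  # peak rate ~20 Hz
m = rate_map(pos, pos[rng.random(20000) < p_spike])
```

Dividing the spike-count map by the occupancy map corrects for the fact that the animal does not spend the same amount of time in every bin, so that the map reflects firing rate rather than mere spike density.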
In this way, the properties of place cells have been intensely investigated. They will be detailed in the next paragraph. In addition, other types of cells with spatial correlates were discovered over the years in regions neighbouring the hippocampus (Figure 1.6). First came the "head-direction cells", found by Ranck and colleagues in 1984 in the presubiculum [19] and later reported in several other areas. These cells are active when the animal is facing a certain preferred direction, irrespective of its position and speed. Then, "boundary cells" (or "border cells", responsive to the walls of an environment) were reported in several regions of the hippocampal formation [20]. Last but not least, the "grid cells" were discovered by the Moser lab in 2004 in the medial entorhinal cortex [21, 22]. These, like place cells, fire specifically at certain positions in space, but instead of being limited to a single spot, their firing fields exhibit a striking spatial periodicity, with active positions located at the vertices of a triangular grid, the orientation and mesh of which depend on the recorded cell. All these discoveries, besides generating the excitement of the community, supported the trend of looking for a spatial function of the hippocampal formation. In this respect, the research on the rodent hippocampus seems at odds with the more episodic flavour of its human counterpart. We shall see nevertheless how these two views are eventually reconciled.
Apart from humans and rodents, hippocampal activity has been studied in many species. Equivalents of place and grid cells have been found in sometimes phylogenetically distant species, including primates [23] and bats [24].

Properties of place cells

Let us now turn to the experimental properties of place cells. In order to avoid any speculative interpretation about what the neurons "represent" or "compute", we will limit ourselves to what is observed, that is, their firing correlates.

Some numbers

The estimated number of pyramidal cells in the rat CA3 is 2 × 250,000; in CA1 it is around 2 × 390,000 and in the dentate gyrus, 2 × 1,000,000 [25], where the factor 2 comes from the two brain hemispheres. Among them, a great majority are place cells (92% according to ref. [26]). The recurrent connectivity in CA3 is very dense, around 4%: each pyramidal cell hence receives around 12,000 axons from other CA3 cells, against 4,000 perforant pathway axons and a few tens of mossy fibers, on average [27]. In addition to (excitatory) pyramidal cells, CA3 and CA1 contain inhibitory interneurons, which do not display spatial selectivity. They are fewer in number (around 10% of the total number of neurons), but each interneuron is connected to thousands of pyramidal cells. Place fields have variable sizes, shapes and peak firing rates. Moreover, field sizes depend on the size of the enclosure and on the position of the recorded cell [28]. To give an order of magnitude, the diameter of a place field ranges from a few centimeters to a few meters, and the firing rate at its centre ranges from a few hertz to tens of hertz [29]. This peak firing rate can be modulated by running velocity [30]. Place cells have place fields in both two-dimensional and one-dimensional environments, though in the latter case (linear tracks) the place fields are most of the time directional, that is, the place cell is active only when the field is crossed in a certain direction [30]. It has also been observed that the activity of a place cell within its place field is very variable from one trial to the next, a property called overdispersion [31].

Reference frame

When saying that a place cell is active at a fixed position, one needs to specify relative to what this position is fixed. Indeed, in the absence of an absolute origin, the reference frame must be stated. In the case of experiments with rodents, there are basically three reasonable alternatives (Figure 1.7): the position can be defined either relative to the ground (lab frame), or relative to the platform where the rat is moving (platform frame), or relative to any object the rat has visual access to (and there are as many such cue frames as there are cues). A set of experimental studies has thus been carried out in order to determine to which frame the place fields are attached.
Place fields are stable in the dark [32] and under substitution of part of the visual cues [33]; different place cells are active in different environments sharing a common object [34]: these facts rule out the possibility of place fields tied to visual cues only. Nevertheless, visual cues do exert some influence on place fields, as geometrical deformation of a box leads to a corresponding stretching of place fields [20, 34]. Actually, all three alternatives mentioned above have found some experimental support. Place fields linked respectively to the three frames have been found in an experiment where a platform was rotated in the dark and in the light [35]. In experiments where cues are moved, some place fields are bound to the cues while others remain fixed [36-38]. When a linear track is elongated, the place fields are fixed relative to the ground during the first part of the journey and switch to a cue frame near the end of the track [34]. However, modifying the visual aspect of the enclosure does not affect the position of the place fields, provided the rat knows its position relative to the ground has not changed [39, 40]. Consequently, when there is no disorientation, the lab frame seems to dominate.
The issue of the reference frame has also been posed in the different terms of navigation [18]. If we assume that the hippocampus computes the animal's position, an eloquent parallel can be drawn with marine or aeronautic navigation. A ship or an aircraft can compute its position either by triangulation relative to daymarks, beacons, stars and satellites (visual, radar, celestial and GPS navigation respectively), or by integrating its trajectory, estimating the elapsed time, speed and heading from a previously known position ("dead reckoning" or "path integration"). Of course, all methods should in principle lead to the same result, as the landmarks are supposed to be fixed with respect to the ground and to each other. The parallel with place fields is straightforward: if place fields are fixed relative to the visual cues, then the rat can be assumed to perform visual navigation. If place fields are fixed relative to the platform, then the navigation is rather based on path integration. If, finally, place fields are fixed relative to the ground, then the navigation is also based on path integration, but with an additional inertial integration of the platform's motion, much like the accounting for currents in marine dead reckoning (or for wind in aircraft).
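The dead-reckoning idea is easy to state computationally: integrate speed and heading over time from a known starting point. A toy sketch (the sampling interval, units and function name are our own assumptions):

```python
import math

def dead_reckon(start, samples, dt=0.1):
    """Path integration: update an (x, y) position estimate from a stream
    of (speed, heading) samples taken every dt seconds. No landmark is
    used, so any error in speed or heading accumulates over time."""
    x, y = start
    for speed, heading in samples:     # speed in cm/s, heading in radians
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# 10 s east at 5 cm/s, then 10 s north: the estimate ends near (50, 50)
path = [(5.0, 0.0)] * 100 + [(5.0, math.pi / 2)] * 100
x, y = dead_reckon((0.0, 0.0), path)
```

Unlike triangulation, this estimate drifts when the speed or heading measurements are biased, which is why, at sea, dead reckoning is periodically corrected against landmarks.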

Table of contents :

1 Computational Neuroscience, Statistical Mechanics and the hippocampus 
1.1 Methods in Computational Neuroscience
1.2 The Statistical Mechanics Approach
1.3 The Attractor Neural Network Theory
1.4 The Hippocampus
2 A model for place cells 
2.1 On models
2.2 Description
2.3 Phase diagram
2.4 Transitions between maps
2.5 Diffusion within one map
2.6 Comparison with previous models
2.7 Estimation of the parameters from experimental data
3 Decoding 
3.1 Stating the decoding problem
3.2 Experimental data
3.3 Rate-based decoding methods
3.4 The inverse-Ising approach to decoding
3.5 Structure of the inferred effective networks
3.6 Decoding an environment
3.7 Decoded transitions between maps
3.8 Correspondence between the "Ising-coupled" and the "rate-max. posterior" methods
4 Conclusions & Extensions 
4.1 Summary of results
4.2 Refining the model
4.3 Future work on the decoding issue
4.4 Concluding remarks
Appendix A Publications
Appendix B Résumé en français

