# Timing and the Variety of Factors that Influence Plasticity


## Learning in a homogeneous network

We now generalise our learning rule derivation to a network of $N$ identical, homogeneously connected neurons. This generalisation is reasonably straightforward, because many characteristics of the single neuron case are shared by a network of identical neurons. We will return to the more general case of heterogeneously connected neurons in the next section. We begin by deriving optimal spiking dynamics, as in the single neuron case. This provides a target for learning, which we can then use to derive synaptic dynamics.

As before, we want our network to produce spikes that optimally represent a variable $x$ for a linear read-out. We assume that the read-out $\hat{x}$ is provided by summing and filtering the spike trains of all the neurons in the network:

$$\dot{\hat{x}} = -\hat{x} + \Gamma o, \qquad (12)$$

where the row vector $\Gamma = (\Gamma, \ldots, \Gamma)$ contains the (identical) read-out weights of the neurons and the column vector $o = (o_1, \ldots, o_N)$ their spike trains. Here, we have used identical read-out weights for each neuron, because this indirectly leads to homogeneous connectivity, as we will demonstrate.

Next, we assume that a neuron only spikes if that spike reduces a loss function. This spiking rule is similar to the single neuron spiking rule, except that this time there is some ambiguity about which neuron should spike to represent a signal. Indeed, many different spike patterns provide exactly the same estimate $\hat{x}$. For example, one neuron could fire regularly at a high rate (exactly like our previous single neuron example) while all others remain silent. To avoid this firing rate ambiguity, we use a modified loss function that selects, amongst all equivalent solutions, those with the smallest neural firing rates. We do this by adding a 'metabolic cost' term to our loss function, so that high firing rates are penalised:

$$L = (x - \hat{x})^2 + \mu \|\bar{o}\|^2.$$
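A minimal discrete-time sketch of this greedy spiking rule; the Euler step and all parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Minimal sketch (assumed discretization): N identical neurons with
# read-out weight gamma track a signal x through the leaky read-out
# x_hat' = -x_hat + gamma * o(t). A spike is emitted only when it lowers
# the squared read-out error plus the metabolic cost of the extra spike.
rng = np.random.default_rng(0)
N, T, dt = 8, 2000, 1e-3
gamma, mu = 0.1, 1e-4            # read-out weight, metabolic cost (assumed)
x_hat = 0.0
spikes = np.zeros(N)
errs = []
for t in range(T):
    x = 1.0                      # constant target signal for illustration
    if (x - x_hat - gamma) ** 2 + mu < (x - x_hat) ** 2:
        n = rng.integers(N)      # ambiguity: any neuron may fire, pick one at random
        spikes[n] += 1
        x_hat += gamma           # each spike bumps the read-out by gamma
    x_hat -= dt * x_hat          # leak of the read-out filter
    errs.append((x - x_hat) ** 2)
print(np.mean(errs[-500:]))      # steady-state squared error stays below gamma**2
```

Because all read-out weights are identical, the rule never determines *which* neuron should fire; the random choice above makes the ambiguity explicit.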

### Learning to Balance Should be Fast

The feedforward and recurrent weights are not learned separately; the respective learning rules work hand in hand. Assume that the network is already in a tightly balanced state thanks to the learning of the recurrent connections. Any change in the feedforward weights will likely cause an imbalance in the membrane potentials: the recurrent weights are no longer adapted to the new encoding weights, and the recurrent input does not cancel the feedforward input as precisely as before. However, in order to converge, the learning of the feedforward weights requires the network to always be in the balanced state. For this reason, after each update of the encoding weights, the recurrent connectivity should be relearned to ensure a tightly balanced state and thus an efficient encoding. The restoration of balance should be very quick, so that learning converges in a reasonable time.
The learning rule that we derived in the previous work is indeed able to drive the network into the balanced state. However, it is not fast enough to enable the simultaneous learning of the feedforward weights. A possible reason for this slow convergence is the presence of the firing rate term in the learning rule, which changes the weights continuously and on a timescale set by the time constant of the decoder. However, as shown in the supplementary information of the next article, the impact of the fast weights on balance and coding plays out on a much shorter timescale, namely that of a single population inter-spike interval (ISI). Thus, the presence of the rate term may introduce noisy and irrelevant updates that take time to average out. We remedy this problem by deriving an equally local learning rule that instead minimizes the membrane potential fluctuations only at spike times rather than continuously. The new learning rule converges to the balanced state two orders of magnitude faster (Figure 1E, Chapter 4). A network undergoing the feedforward and recurrent learning rules together indeed converges to the optimal state.
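To illustrate why purely spike-triggered updates can suffice, the toy sketch below assumes a rule of the form $\Delta W_{nk} \propto -(V_n + \mu r_n) - W_{nk}$, applied only when neuron $k$ spikes; the voltage identity and the error statistics at spike times are simplifying assumptions, not the exact derivation of Chapter 4:

```python
import numpy as np

# Toy fixed-point check for a spike-triggered recurrent rule (assumed form):
# when neuron k fires,  W[:, k] += eta * (-(V + mu * r) - W[:, k]).
# We further assume V_n + mu*r_n = D_n . e, and that the coding error e at
# the spike times of neuron k points, on average, along D_k. Under these
# assumptions the weights settle near the balanced connectivity W = -D^T D.
rng = np.random.default_rng(1)
J, N = 2, 5                        # signal dimension, number of neurons
D = rng.standard_normal((J, N))    # decoding weights (held fixed here)
W = np.zeros((N, N))
eta = 0.05
for _ in range(20000):
    k = rng.integers(N)                           # neuron k fires
    e = D[:, k] + 0.1 * rng.standard_normal(J)    # error aligned with D_k
    v_plus_mur = D.T @ e                          # assumed V + mu*r
    W[:, k] += eta * (-v_plus_mur - W[:, k])      # update only at spikes
print(np.max(np.abs(W + D.T @ D)))                # small residual: W ~ -D^T D
```

Because every update happens at a spike, the rule inherits the fast timescale of the ISIs instead of the slow timescale of the decoder filter.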

#### The optimal decoder under length constraints

The (optimal) decoder will be an important conceptual quantity for the learning rules of the spiking network. In that context, however, the optimal decoder will be constrained to be of a particular length. While the necessity of this constraint will only become clear below, we here describe the corresponding optimization problem for future reference. (This subsection can also be skipped on first reading.)
Specifically, we will constrain the neurons' individual contributions to the readout, $D_n$, to be of a certain length. Using a set of Lagrange multipliers $\lambda_n$, we simply add these length constraints to the mean squared reconstruction error, (S.8), to obtain the modified loss function

$$\mathcal{L} = \left\langle \|x - \hat{x}\|^2 \right\rangle + \sum_{n=1}^{N_E} \lambda_n \left( \|D_n\|^2 - a_n \right).$$
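A small numerical sketch of this constrained problem; rather than solving for the Lagrange multipliers explicitly, it re-imposes the length constraint by projection after each gradient step (the surrogate rates and $a_n = 1$ are illustrative assumptions):

```python
import numpy as np

# Sketch of the length-constrained decoder problem (setup assumed for
# illustration): minimize <||x - D r||^2> while keeping each column D_n
# at length a_n. Projecting the columns back to length a_n after every
# gradient step enforces the same constraint set as the multipliers.
rng = np.random.default_rng(2)
J, N, T = 3, 6, 500
R = rng.random((N, T))                  # surrogate filtered spike trains r(t)
X = rng.standard_normal((J, N)) @ R     # signals to reconstruct
a = np.ones(N)                          # target column lengths (assumed a_n = 1)
D = rng.standard_normal((J, N))
D *= a / np.linalg.norm(D, axis=0)      # start on the constraint set
init = np.mean((X - D @ R) ** 2)
lr = 1e-2
for _ in range(200):
    err = X - D @ R
    D += lr * err @ R.T / T             # gradient step on the squared error
    D *= a / np.linalg.norm(D, axis=0)  # re-impose ||D_n|| = a_n
final = np.mean((X - D @ R) ** 2)
print(init, final)                      # the constrained error decreases
```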

#### The connectivity and thresholds of the optimal network

The optimal decoder allows us to find the best possible reconstruction for a given network. We emphasize that the 'best possible reconstruction' is not necessarily a good one; depending on the network connectivity, the reconstruction may not work at all. To counter this problem, we will study how to choose the free internal parameters of the network (the synaptic weights and the thresholds) such that the loss function is minimized. In other words, instead of assuming that the network is given and then optimizing the decoder $D$, we will now assume that the decoder is given and then optimize the network. We will do so following the derivations laid out in .
First, we assume that we have a given and fixed decoder, $D$. We can then consider the effect of a spike of neuron $n$ on the loss $\ell$, (S.10). The spike will increase the filtered spike train, $r_n \to r_n + 1$, and thus update the signal estimate according to $\hat{x} \to \hat{x} + D_n$, where $D_n$ is the $n$-th column of the decoder matrix $D$ and $e_n$ denotes the $n$-th unit vector. We can therefore rewrite the objective (S.10) after the firing of the spike as

$$\ell(\text{neuron } n \text{ spiked}) = \|x - \hat{x} - D_n\|^2 + \mu \|r + e_n\|^2 + \nu \|r + e_n\|_1.$$
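Comparing this expression with the loss when no spike is fired yields the spiking condition. The sketch below assumes that the $L_1$ cost carries a weight $\nu$ and that $r_n \ge 0$, so that $\|r + e_n\|_1 - \|r\|_1 = 1$:

```latex
% Neuron n should fire iff the spike lowers the loss:
%   l(neuron n spiked) < l(no spike).
\begin{align}
\|x - \hat{x} - D_n\|^2 + \mu\|r + e_n\|^2 + \nu\|r + e_n\|_1
  &< \|x - \hat{x}\|^2 + \mu\|r\|^2 + \nu\|r\|_1 \\
% expand and cancel the common terms:
-2\, D_n^\top (x - \hat{x}) + \|D_n\|^2 + \mu\,(2 r_n + 1) + \nu &< 0 \\
% rearrange into a voltage-vs-threshold comparison:
\underbrace{D_n^\top (x - \hat{x}) - \mu\, r_n}_{\textstyle V_n}
  &> \underbrace{\tfrac{1}{2}\bigl(\|D_n\|^2 + \mu + \nu\bigr)}_{\textstyle T_n}
\end{align}
```

The left-hand side can be read as a membrane voltage $V_n$ and the right-hand side as a threshold $T_n$ that grows with the length of $D_n$.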

#### Classical approaches to learning in recurrent neural networks

In ‘top-down’ approaches, one first specifies an objective that quantifies a network’s performance in a particular task, and then derives learning rules that modify the connectivity of the network to improve task performance. While this approach has mostly been used in feedforward neural networks, several approaches exist for recurrent neural networks. Famous examples include the Hebbian rules for Hopfield networks, ‘backpropagation through time’, or the more recent ‘FORCE’ learning algorithm.
The key problem with top-down approaches is that they often achieve their functionality at the cost of biological plausibility. First, almost all top-down approaches are based on ‘firing rate’ networks, i.e., networks in which neurons communicate with continuous rates rather than spikes. Second, most top-down approaches lead to ‘learning rules’ that are non-local, i.e., rules that depend on non-local information, such as the activity or synaptic weights of other neurons in the network.
In ‘bottom-up’ approaches, the word ‘learning’ refers to networks whose synapses undergo specific, biologically plausible plasticity rules. In contrast to top-down approaches, the bottom-up perspective has allowed researchers to explicitly investigate the effect of spike-timing-dependent plasticity on the dynamics of spiking networks. For instance, such rules may allow a network to learn how to properly balance itself on a specified time scale [11, 14].

#### The problem of learning the synaptic weights

The objective function (S.16) seems to pose a baffling problem to solve with a neural network, for multiple reasons. First, at the beginning of learning the decoder is not explicitly represented in the network connectivity, so how could the network ‘learn’ a decoder? Second, neither the feedforward nor the recurrent weights are explicitly part of the objective (S.16). Rather, the instantaneous firing rates, $r(t)$, depend implicitly on the connectivity. Hence, even a simple gradient descent with respect to the synaptic weights is not possible. Third, even if the feedforward weights are initially set to the correct values, i.e., to the values of the optimal decoder, $F = D^\top$, we could still not learn the desired recurrent connectivity $\Omega = -FF^\top - \mu I$. The key problem is that neurons have no direct access to their respective feedforward weights. More precisely, the input to the network is given by $Fx$, but from the perspective of the network, $F$ and $x$ are only defined up to a linear orthogonal transformation $A$, since $Fx = (FA^\top)(Ax)$. Before showing how we can address these problems, we will briefly discuss a fourth issue, concerning the thresholds $T_n$ of the neurons and their relation to the length of the decoder, as well as the feedforward and recurrent connectivities.
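The orthogonal-transformation ambiguity is easy to check numerically; this is a minimal illustration, with the matrix sizes chosen arbitrarily:

```python
import numpy as np

# Minimal numerical illustration of the identifiability problem above:
# the network only receives the product F x, which is unchanged when F
# and x are rotated by any orthogonal matrix A.
rng = np.random.default_rng(3)
N, J = 5, 3
F = rng.standard_normal((N, J))       # feedforward weights
x = rng.standard_normal(J)            # input signal
A, _ = np.linalg.qr(rng.standard_normal((J, J)))  # random orthogonal A
same = np.allclose(F @ x, (F @ A.T) @ (A @ x))
print(same)  # True: (F A^T)(A x) = F x, so F is not identifiable from F x
```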

1 Introduction
1.1 Neural Coding
1.2 E-I Balance
1.2.1 Theoretical Foundation
1.2.2 Balanced Recurrent Networks
1.2.3 Experimental Evidence of Balance
1.3 Computations
1.3.1 Handcrafted v. Generic Networks
1.3.3 Optimizing the Recurrent Weights
1.4 Plasticity
1.4.1 Hebbian Learning
1.4.2 Timing and the Variety of Factors that Influence Plasticity
1.4.3 Plasticity and Functions
1.4.4 Local Learning in Recurrent Networks
1.5 Objectives and Organization
2 Voltage-Rate Based Plasticity
3 Learning an Auto-Encoder
4 Learning a Linear Dynamical System
5 Discussion
5.1 Representing Information Within a Highly Cooperative Code
5.2 Fast Connections Enforce Efficiency by Learning to Balance Excitation and Inhibition
5.3 Learning an Autoencoder
5.4 Supervised Learning
5.5 Open Questions
