Learning spiking neural networks parameters 



A neuron is an electrically excitable cell that processes and transmits information by electrical and chemical signaling. Chemical signaling occurs via synapses, specialized connections with other cells. Neurons connect to each other to form networks. Given the diversity of functions performed by neurons in different parts of the nervous system, there is, as expected, a wide variety in the shape, size, and electrochemical properties of neurons. A typical neuron can be divided into three anatomical and functional parts, called dendrites, soma and axon; see figure 1.1.
Figure 1.1: Anatomy of a neuron. (Illustration from Mariana Ruiz Villarreal)
The soma is the central part of the neuron. It contains the nucleus of the cell, where most protein synthesis occurs. The soma is considered as a central processing unit that performs important nonlinear processing.
The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure is referred to as a dendritic tree. This is where the majority of inputs to the neuron occur. The dendritic tree is considered as a linear combiner of these inputs.
The axon is a fine cable-like projection which can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon carries nerve signals away from the soma (and also carries some types of information back to it). Neurons have only one axon, but this axon may and usually will undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock is also the part of the neuron that has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon: in electrophysiological terms it has the most negative action potential threshold. While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons. The axon terminal contains synapses, specialized structures where neurotransmitter chemicals are released in order to communicate with target neurons.

The spiking neuron

The neuron is a dynamic element that emits output pulses whenever the excitation exceeds some threshold. The resulting sequence of pulses or “spikes” contains all the information that is transmitted from one neuron to the next.
Action potentials
In physiology, an action potential (Figure 1.2) is a short-lasting event in which the electrical membrane potential of a cell rapidly rises and falls, following a stereotyped trajectory. Action potentials play a central role in cell-to-cell communication. In other types of cells, their main function is to activate intracellular processes. Action potentials in neurons are also known as “nerve impulses” or “spikes”, and the temporal sequence of action potentials generated by a neuron is called its “spike train”. A neuron that emits an action potential is said to “fire”.


Let us now consider a normalized and reduced “punctual conductance based generalized integrate and fire” (gIF) neural unit model Destexhe (1997) as reviewed in Rudolph and Destexhe (2006). The model is reduced in the sense that both adaptive currents and non-linear ionic currents no longer depend explicitly on the membrane potential, but on time and previous spikes only (see Cessac and Viéville (2008) for a development).
Here we follow Cessac (2008); Cessac and Viéville (2008) after Soula and Chow (2007) and review how to properly discretize a gIF model.
Figure 1.2: Action potentials arriving at the synapses of the upper right neuron stimulate currents in its dendrites; these currents depolarize the membrane at its axon, provoking an action potential that propagates down the axon to its synaptic knobs, releasing neurotransmitter and stimulating the post-synaptic neuron (lower left). Illustration from [2]

From a LIF model to a discrete-time spiking neuron model.

In this section we present the correct discretization of a LIF model, which has been introduced in Soula et al. (2006) and called BMS. The discretization process is carried out thanks to the analogy presented in Gerstner and Kistler (2002a) between a RC circuit and a biological neuron (figure 1.3).
The analogy represented in figure 1.3 permits us to study the neuron behavior using the well-established techniques of electrical circuit analysis. In this direction, applying Kirchhoff's current law to the RC circuit shown in figure 1.3 we have:

I_i(t) = I_R + I_C    (1.1)

Then, applying Ohm's law and deriving the current–voltage relation on the capacitor yields the differential equation:

I_i(t) = V_i(t)/R + C dV_i(t)/dt

where τ = RC is the time required for the voltage to decay to 1/e of its initial value and is called the time constant.
Figure 1.3: Schematic diagram of the integrate-and-fire model. The basic circuit is the module inside the dashed circle on the right-hand side. A current I(t) charges the RC circuit. The voltage u(t) across the capacitance is compared to a threshold ϑ. If u(t) = ϑ at time t_i^(f), an output pulse δ(t − t_i^(f)) is generated. Left part: a presynaptic spike δ(t − t_j^(f)) is low-pass filtered at the synapse and generates an input current pulse α(t − t_j^(f)) (Illustration from Gerstner and Kistler (2002a)).
Furthermore, since the spiking neuron model will be used in numerical simulations, the Euler method is applied in order to solve the differential equation. We then fix some constant values, namely the sampling time scale dt = 1 and the capacitance C = 1. Finally, writing γ = 1 − 1/τ, where τ ≥ 1 so that γ ∈ [0, 1[, the equation reads:

V_i(t + dt) = γ V_i(t) + I_i(t)
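The Euler discretization described above can be sketched numerically. A minimal example, where the values of R, C, dt and the constant input current I are illustrative assumptions, not taken from the text:

```python
# Euler integration of the LIF equation I(t) = V/R + C dV/dt,
# i.e. V(t+dt) = V(t) + (dt/C) * (I(t) - V(t)/R).

def euler_lif(I, R=1.0, C=1.0, dt=1.0, steps=100, V0=0.0):
    """Integrate the membrane potential for `steps` steps under a constant current I."""
    V = V0
    trace = [V]
    for _ in range(steps):
        V = V + (dt / C) * (I - V / R)
        trace.append(V)
    return trace

trace = euler_lif(I=0.5, R=2.0, C=1.0, dt=0.1, steps=200)
# With a constant input, V converges towards the steady state R*I = 1.0.
print(round(trace[-1], 3))
```

With dt much smaller than τ = RC, the iteration tracks the exponential relaxation of the membrane towards R·I.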
The next step is to consider that I_i[k] is the sum of all signals flowing into the neuron, i.e. I_i[k] = I_i^S[k] + I_i^ext[k]. On one hand, the synaptic current I_i^S[k] describes the strength of the connections among neurons and is defined by Σ_{j=1..N} W_ij χ(V_j[k]), where χ is the activation function described by equation 1.3, giving the firing state of the pre-synaptic neurons. On the other hand, I_i^ext[k] corresponds to a current from an external stimulus. These assumptions allow us to define a preliminary approximation of the discrete-time spiking neuron model as follows:

V_i[k + 1] = γ V_i[k] + Σ_{j=1..N} W_ij χ(V_j[k]) + I_i^ext[k]
Furthermore, the term Z[k] is used to denote the indicator function χ_{[θ,+∞[}(V[k]), with χ_A(x) = 1 if x ∈ A and 0 otherwise. More precisely, Z defines the firing state of the neuron. Finally, we introduce a reset mechanism (1 − Z[k]) in order to return the neuron to its initial state once it has reached the fixed firing threshold θ. Thus, the discrete-time neuron model reads:

V_i[k] = γ V_i[k−1] (1 − Z_i[k−1]) + Σ_{j=1..N} W_ij Z_j[k−1] + I_i^ext[k]    (1.2)

Z_i[k] = χ(V_i[k]) = 1 if V_i[k] ≥ θ, 0 otherwise    (1.3)

where V_i[k] is the membrane potential of the neuron with index i at time k, W_ij are the synaptic weights, γ ∈ [0, 1[ is the leak factor and I^ext is an external stimulus.
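Equations 1.2 and 1.3 translate directly into code. A minimal sketch of the resulting BMS network; the network size, Gaussian weights, leak γ, threshold θ and external current are illustrative assumptions for the example, not values from the text:

```python
import numpy as np

def bms_step(V, Z, W, gamma, I_ext, theta=1.0):
    """One step of eq. 1.2/1.3:
    V[k] = gamma * V[k-1] * (1 - Z[k-1]) + W @ Z[k-1] + I_ext[k]
    Z[k] = 1 where V[k] >= theta, else 0."""
    V_new = gamma * V * (1 - Z) + W @ Z + I_ext
    Z_new = (V_new >= theta).astype(float)
    return V_new, Z_new

rng = np.random.default_rng(0)
N = 5
W = rng.normal(0.0, 0.2, (N, N))   # illustrative Gaussian synaptic weights
V = np.zeros(N)
Z = np.zeros(N)
spikes = []
for k in range(50):
    V, Z = bms_step(V, Z, W, gamma=0.9, I_ext=np.full(N, 0.15))
    spikes.append(Z.copy())
print(int(sum(z.sum() for z in spikes)))  # total number of spikes emitted
```

Since the constant drive 0.15/(1 − γ) = 1.5 exceeds θ, every neuron eventually crosses the threshold, fires, and is reset by the (1 − Z) factor.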

Time constrained continuous formulation.

Let V be the normalized membrane potential, which triggers a spike for V = 1 and is reset at a given value. This is a characteristic approximation of IF models: the reset value is a constant. This constraint can be relaxed by adding noise to the system. The fire regime (spike emission) reads V(t) = 1 ⇒ V(t⁺) = 0. Let us write ω̃_t = {t_i^n}, the list of spike times t_i^n < t, where t_i^n is the n-th spike time of the neuron of index i. The dynamic of the integrate regime reads:

dV/dt + (1/τ_L) [V − E_L] + Σ_j Σ_n ρ_j(t − t_j^n) [V − E_j] = I_m(ω̃_t)
Here, τ_L and E_L are the membrane leak time constant and reversal potential, while ρ_j(t) and E_j are the spike responses and reversal potentials for excitatory/inhibitory synapses and gap junctions. Furthermore, ρ_j(t) is the synaptic or gap-junction response, accounting for the connection delay and time constant; its shape is shown in figure 1.4.
Finally, I_m() is the reduced membrane current, including simplified adaptive and non-linear ionic currents (see Cessac and Viéville (2008) for details).
Consequently the dynamic of the integrate regime reads:

dV/dt + g(t, ω̃_t) V = I(t, ω̃_t)

where g and I are the membrane conductance and the current respectively, whose values depend only on the firing states ω̃. Once we know the membrane potential at time t, we can deduce it at time t + δ, which reads:

V(t + δ) = ν(t, t + δ, ω̃_t) V(t) + ∫_t^{t+δ} ν(s, t + δ, ω̃_s) I(s, ω̃_s) ds

with:

ν(t₀, t₁, ω̃_{t₀}) = exp(−∫_{t₀}^{t₁} g(s, ω̃_s) ds)
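This closed-form propagation (membrane potential carried by the factor ν = exp(−∫g)) can be checked against a fine Euler integration. A small sketch for the constant-coefficient case, where g, I, δ and V(t) are illustrative values:

```python
import math

# For constant g and I, dV/dt + g V = I has the exact one-step solution
#   V(t + delta) = nu * V(t) + (I / g) * (1 - nu),  with nu = exp(-g * delta),
# the constant-coefficient instance of the formula in the text.

def exact_step(V, g, I, delta):
    nu = math.exp(-g * delta)
    return nu * V + (I / g) * (1.0 - nu)

def euler_fine(V, g, I, delta, n=10000):
    """Reference: explicit Euler with n sub-steps over the same interval."""
    dt = delta / n
    for _ in range(n):
        V = V + dt * (I - g * V)
    return V

V0, g, I, delta = 0.2, 1.5, 0.6, 0.5
print(abs(exact_step(V0, g, I, delta) - euler_fine(V0, g, I, delta)))
```

The two results agree up to the Euler discretization error, which vanishes as the number of sub-steps grows.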
The key point is that temporal constraints must be taken into account (Cessac et al. (2008b)). Spike times are bounded below by a refractory period r, r < t_i^{n+1} − t_i^n, defined up to some absolute precision δt, while there is always a minimal delay dt for one spike to influence another spike, and there might be (depending on the model assumptions) a maximal inter-spike interval D such that either the neuron fires within a time delay < D or remains quiescent forever. For biological neurons, the orders of magnitude are typically, in milliseconds:

r ≈ 1,  δt ≈ 0.1,  dt ≈ 10^(−[1,2]),  D ≈ 10^([3,4])

Network dynamics discrete approximation.

Combining these assumptions with equations 1.2 and 1.3, we can write the following discrete equation for a sampling period δ:

V_i[k] = γ_i V_i[k−1] (1 − Z_i[k−1]) + Σ_{j=1..N} Σ_{d=1..D} W_ijd Z_j[k−d] + I_i[k]    (1.4)
Thus, the cumulative effect of conductances reads:

γ_i(t) ≡ ν(t, t + δ, ω̃_t)|_{t=kδ}    (1.5)

considering random independent and identically distributed Gaussian weights.
The numerical analysis performed in Cessac and Viéville (2008) demonstrates that, for numerical values taken from bio-physical models, considering here δ ≈ 0.1 ms, this quantity is remarkably constant, with small variations within the range:

γ_i ∈ [0.965, 0.995],  γ_i ≈ 0.98
It has been numerically verified that taking this quantity constant over time and neurons does not significantly influence the dynamics. This is the reason why we write γ_i as a constant here. This corresponds to a “current based” (instead of “conductance based”) approximation of the connection dynamics.
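As an illustration of the order of magnitude, for a constant effective conductance g the factor reduces to γ_i = exp(−g δ); with δ = 0.1 ms, conductances roughly between 0.05 and 0.36 per ms reproduce the reported range. The values of g here are illustrative assumptions, not taken from the text:

```python
import math

delta = 0.1                      # sampling period in ms, as in the text
for g in (0.05, 0.2, 0.356):     # illustrative effective conductances, in 1/ms
    gamma_i = math.exp(-g * delta)   # gamma_i = nu(t, t+delta) for constant g
    print(round(gamma_i, 3))
```

The three printed values span the interval [0.965, 0.995] quoted above, with the middle one near 0.98.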
The additive current

I_i(t) ≡ ∫_t^{t+δ} ν(s, t + δ, ω̃_s) [ i_m(ω̃_s) + E_L/τ_L ] ds |_{t=kδ} ≈ γ_i δ [ i_m(ω̃_t) + E_L/τ_L ]|_{t=kδ}    (1.6)

accounts for membrane currents, including the leak. The right-hand side approximation assumes γ_i is a constant. Furthermore, we have to assume that the additive currents are independent of the spikes. This means that we neglect the membrane current non-linearity and adaptation.
On the contrary, the term related to the connection weights W_ijd is not straightforward to write and requires the previous numerical approximation. Let us write:

W_ij[k − k_j^n] ≡ E_j ∫_t^{t+δ} ν(s, t + δ, ω̃_s) ρ_j(s − t_j^n) ds |_{t=kδ, t_j^n=k_j^n δ}    (1.7)

assuming ν(s, t + δ, ω̃_t) ≈ γ_i as discussed previously. This allows us to consider the spike response effect at time t_j^n = k_j^n δ as a function only of k − k_j^n. The response W_ij[d] vanishes after a delay D, i.e. τ_r = D δ, as stated previously. We assume here that δ < δt, i.e. that the spike-time precision allows one to define the spike time as k_j^n δ, t_j^n = k_j^n δ (see Cessac and Viéville (2008) for an extensive discussion). We further assume that at most one spike is fired by the neuron of index j during a period δ, which is obvious as soon as δ < r.
This allows us to write W_ijd = W_ij[d], so that:

Σ_{j=1..N} Σ_n W_ij[k − k_j^n] = Σ_{d=1..D} Σ_{j=1..N} W_ij[d] χ_{{k_j^n}}(k − d)
                               = Σ_{d=1..D} Σ_{j=1..N} W_ij[d] χ_{{k_j^1, …, k_j^n, …}}(k − d)
                               = Σ_{d=1..D} Σ_{j=1..N} W_ijd Z_j[k − d]

where Z_j[l] = χ_{{k_j^1, …, k_j^n, …}}(l) is precisely equal to 1 on spike times and 0 otherwise, which completes the derivation of (1.4).
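The resulting update (1.4), with delayed weights W_ijd applied to the spike indicator Z_j, can be sketched as follows; the network size, maximal delay D, Gaussian weights and currents are illustrative assumptions for the example:

```python
import numpy as np

def gif_step(V, Z_hist, W, gamma, I, theta=1.0):
    """One step of eq. 1.4:
    V_i[k] = gamma * V_i[k-1] * (1 - Z_i[k-1])
             + sum_{d=1..D} sum_j W[d-1, i, j] * Z_j[k-d] + I_i[k]
    where Z_hist[d-1] stores the spike vector Z[k-d]."""
    D = W.shape[0]
    syn = sum(W[d] @ Z_hist[d] for d in range(D))   # delayed synaptic input
    V_new = gamma * V * (1 - Z_hist[0]) + syn + I
    Z_new = (V_new >= theta).astype(float)
    return V_new, [Z_new] + Z_hist[:-1]             # shift the spike history

rng = np.random.default_rng(1)
N, D = 4, 3
W = rng.normal(0.0, 0.1, (D, N, N))   # illustrative delayed weights W_ijd
V = np.zeros(N)
Z_hist = [np.zeros(N) for _ in range(D)]
for k in range(60):
    V, Z_hist = gif_step(V, Z_hist, W, gamma=0.92, I=np.full(N, 0.12))
```

The D-slot history implements the assumption that at most one spike per neuron falls in each sampling period δ, so past firing is fully encoded by the binary vectors Z[k−1], …, Z[k−D].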


Properties of the discrete-time spiking neuron model

Now, we review the main properties of the discrete-time spiking neuron model described by equation 1.4. These properties have been established in Cessac (2008).
Since γ < 1, we can define the phase space for V as the compact set M = [V_min, V_max]^N:

V_min ≤ V_i[k] ≤ V_max    (1.8)

where:

V_min = min( 0, (1/(1−γ)) min_{i=1..N | W_ijd<0} [ Σ_{j,d} W_ijd + I_i^ext ] )    (1.9)

V_max = max( 0, (1/(1−γ)) max_{i=1..N | W_ijd>0} [ Σ_{j,d} W_ijd + I_i^ext ] )    (1.10)

where the notation | W_ijd < 0 (resp. | W_ijd > 0) specifies that the sum is restricted to the negative (resp. positive) weights.
Equations 1.9 and 1.10 hold even if I_i^ext depends on time.
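The invariance of M under the dynamics can be checked numerically: starting inside [V_min, V_max], a trajectory of 1.4 never leaves it. A sketch where the bound computation is an interpretation of 1.9/1.10, and all weights and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 4, 2
gamma, theta = 0.9, 1.0
W = rng.normal(0.0, 0.3, (D, N, N))   # illustrative delayed weights W_ijd
I_ext = np.full(N, 0.1)

# Bounds 1.9/1.10: sum negative (resp. positive) delayed weights per neuron i.
neg = np.minimum(W, 0.0).sum(axis=(0, 2)) + I_ext
pos = np.maximum(W, 0.0).sum(axis=(0, 2)) + I_ext
V_min = min(0.0, (neg / (1.0 - gamma)).min())
V_max = max(0.0, (pos / (1.0 - gamma)).max())

V = rng.uniform(V_min, V_max, N)      # arbitrary starting point inside M
Z_hist = [np.zeros(N) for _ in range(D)]
inside = True
for k in range(200):
    syn = sum(W[d] @ Z_hist[d] for d in range(D))
    V = gamma * V * (1 - Z_hist[0]) + syn + I_ext
    Z_hist = [(V >= theta).astype(float)] + Z_hist[:-1]
    inside = inside and bool((V >= V_min - 1e-9).all() and (V <= V_max + 1e-9).all())
print(inside)  # → True
```

The invariance follows from γ V (1 − Z) lying between γ V_min and γ V_max while the input term lies between (1 − γ) V_min and (1 − γ) V_max.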
For each neuron we can decompose the interval I = [V_min, V_max] into I_0 ∪ I_1, with I_0 = [V_min, θ[ and I_1 = [θ, V_max]. If V ∈ I_0 the neuron is quiescent; otherwise it fires. Further, this partition permits us to define the spiking state, which can be expressed as an N-dimensional binary vector ω = {ω_1, …, ω_N} ∈ {0, 1}^N. We then define:

M_ω = {V ∈ M | V_i ∈ I_{ω_i}}    (1.11)
More precisely, this allows us to classify the membrane potential vectors according to their spiking state.
The coefficient γ(1 − Z(V_i)) corresponds to a contraction in direction i. This can be explained as follows: suppose that V_i ∈ I_1, which means that Z(V_i) = 1 (the neuron fires); consequently the contraction coefficient is zero and the membrane potential is reset; otherwise the membrane potential is contracted by a factor γ. These effects are shown in figures 1.5(b) and 1.5(a) respectively.
Figure 1.5: Two examples of the partition space for non-perturbed balls in a system with two neurons. The space is partitioned according to the firing state of the neurons and labeled as ω = ω_1 ω_2.
Furthermore, this property can be extended to the whole space R^N, since 1.4 is originally defined on this space. If we consider an ε-ball, this ball is contracted by the dynamics.
Equation 1.4 is discontinuous on the set S defined as follows:

S = {V ∈ M | ∃i, V_i = θ}    (1.12)
This is due to singularities in the dynamics. Let us consider the trajectory of a point V ∈ M subject to perturbations of amplitude ε. This is equivalent to considering the evolution of the ε-ball B(V, ε) under 1.4.
There are two cases:
1. The non-critical case: when B does not intersect S, i.e. B(V, ε) ∩ S = ∅.
In this scenario the perturbations have no considerable effect on the trajectory of V; in fact the trajectories become asymptotically indistinguishable. At most, there is a contraction of the initial ball, since γ < 1.
In order to clarify the non-critical case, let us consider the examples shown in figure 1.6. In both cases the real trajectories (figure 1.5) have been perturbed within B(V, ε).
Figure 1.6: The effects on the dynamics when the trajectories of V has been perturbed. The non-critical case.
First, we concentrate on the partition 00 of figure 1.6(a). Note that even though the real trajectory shown in figure 1.5(a) has been perturbed, the contraction effect is similar in both cases.
Now, in figure 1.6(b) we can observe that the perturbed ball has evoked a spike in partition 01, which corresponds to neuron 2. However, the ε-perturbation is not propagated throughout the system because neuron 2 is reset in partition 10.

2. The critical case: when B intersects S, i.e. B(V, ε) ∩ S ≠ ∅.

On the contrary, the crossing of S by the ε-ball induces a strong effect reminiscent of the sensitivity to initial conditions in chaotic dynamics. In other words, when B and S intersect, the trajectory of V changes drastically, since the perturbed trajectory induces a new spiking state. Two important remarks follow:
The main difference with chaos is that the present effect occurs only when the ball crosses the singularity. (Otherwise the ball is contracted).
The singularity S is the only source of complexity of the discrete-time spiking neuron model, and its existence is due to the strict threshold in the definition of neuron firing.
The critical case is schematized in figure 1.7, where we can observe a drastic effect on the dynamics when the perturbed ball B(V, ε) intersects the singularity set. More specifically, a perturbed ball localized very close to the threshold can induce a firing in the neuron.

Table of contents:

I gIF-type Spiking Neuron Models 
1 gIF-type spiking neuron models
1.1 Neuron Anatomy
1.1.1 The spiking neuron
1.2 Discretized integrate and fire neuron models
1.2.1 From a LIF model to a discrete-time spiking neuron model
1.2.2 Time constrained continuous formulation
1.2.3 Network dynamics discrete approximation
1.2.4 Properties of the discrete-time spiking neuron model
1.2.5 Asymptotic dynamics in the discrete-time spiking neuron model
1.2.6 The analog-spiking neuron model
1.2.7 Biological plausibility of the gIF-type models
II Learning spiking neural networks parameters 
2 Reverse-engineering in spiking neural networks parameters: exact deterministic estimation
2.1 Methods: Weights and delayed weights estimation
2.1.1 Retrieving weights and delayed weights from the observation of spikes and membrane potential
2.1.2 Retrieving weights and delayed weights from the observation of spikes
2.1.3 Retrieving signed and delayed weights from the observation of spikes
2.1.4 Retrieving delayed weights and external currents from the observation of spikes
2.1.5 Considering non-constant leak when retrieving parametrized delayed weights
2.1.6 Retrieving parametrized delayed weights from the observation of spikes
2.1.7 About retrieving delays from the observation of spikes
2.2 Methods: Exact spike train simulation
2.2.1 Introducing hidden units to reproduce any finite raster
2.2.2 Sparse trivial reservoir
2.2.3 The linear structure of a network raster
2.2.4 On optimal hidden neuron layer design
2.2.5 A maximal entropy heuristic
2.3 Application: Input/Output transfer identification
2.4 Numerical Results
2.4.1 Retrieving weights from the observation of spikes and membrane potential
2.4.2 Retrieving weights from the observation of spikes
2.4.3 Retrieving delayed weights from the observation of spikes
2.4.4 Retrieving delayed weights from the observation of spikes, using hidden units
2.4.5 Input/Output estimation
III Programming resetting non-linear networks 
3 Programming resetting non-linear networks
3.1 Problem Position
3.2 Mollification in the spiking metrics
3.2.1 Threshold mollification
3.2.2 Mollified coincidence metric
3.2.3 Indexing the alignment distance
3.3 Learning paradigms
3.3.1 The learning scenario
3.3.2 Supervised learning scheme
3.3.3 Robust learning scheme
3.3.4 Reinforcement learning scheme
3.3.5 Other learning paradigms
3.4 Recurrent weights adjustment
3.5 Application to resetting non-linear (RNL) networks
3.5.1 Computational properties
3.6 Preliminary conclusion
IV Hardware Implementations of gIF-type Spiking Neural Networks 
4 Hardware implementations of gIF-type neural networks
4.1 Hardware Description
4.1.1 FPGA (Field-Programmable Gate Array)
4.1.2 GPU (Graphics Processing Unit)
4.2 Programming Languages
4.2.1 Handel-C
4.2.2 VHDL (VHSIC hardware description language; VHSIC: very-high-speed integrated circuit)
4.2.3 CUDA (Compute Unified Device Architecture)
4.3 Fixed-point arithmetic for data representation
4.3.1 Resolution for the fixed-point data representation
4.3.2 Range for the fixed-point data representation
4.3.3 A precision analysis of the gIF-type neuron models
4.4 FPGA-based architectures of gIF-type neuron models
4.4.1 A FPGA-based architecture of gIF-type neural networks using Handel-C
4.4.2 A FPGA-based architecture of gIF-type neural networks using VHDL
4.5 GPU-based implementation of gIF-type neuron models
4.6 Numerical Results
4.6.1 Synthesis and Performance
V Conclusion 
5 Conclusion
A Publications of the Author Arising from this Work

