Storage of sequences in tournament-based neural networks 


Association in neural networks

The operation of association involves the linkage of two or more pieces of information.
An associative memory takes some “keys” as input and returns an “associated pattern” as output. For example, given a misspelled word, we may wish to recall the correct spelling; given the name of a football team, we may wish to recall the score of its last match. The first example involves auto-associative memory, which can retrieve a piece of information when presented with only a partial or degraded version of that same piece of information. The second example involves hetero-associative memory, which recalls an associated piece of information from one category using information from another category. Hopfield neural networks (HNN) [Hop82], long the state of the art, were designed to act as auto-associative memories, since they are able to retrieve information given only partial clues.

There are two major architectures implementing neural associative memories. In non-recurrent networks, there are no loops in the associated graph: starting from a neuron, there is no path leading back to it. The input pattern is held fixed in the network, and because of this absence of loops the output pattern does not interfere with the input. In recurrent networks, by contrast, the input pattern only provides the initial state of the network. The subsequent retrieval process is that of a looped non-linear dynamical system, and the output pattern is read out once a certain stopping condition is satisfied, for example when an equilibrium state is reached.
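To make the auto-associative, recurrent retrieval process concrete, the following minimal Python sketch (an illustration, not code from the thesis) stores bipolar patterns in a Hopfield network using the Hebbian rule and iterates a degraded cue until an equilibrium state is reached.

```python
# Minimal Hopfield auto-associative memory sketch (illustrative only).
import numpy as np

def train_hopfield(patterns):
    """Build the weight matrix from bipolar (+1/-1) patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def recall(W, cue, max_iters=100):
    """Recurrent retrieval: update the state until it no longer changes."""
    state = cue.copy()
    for _ in range(max_iters):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1
        if np.array_equal(new_state, state):   # equilibrium reached
            break
        state = new_state
    return state

# Usage: store one pattern, then retrieve it from a partially corrupted cue.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=(1, 64))
W = train_hopfield(pattern)
cue = pattern[0].copy()
cue[:16] = 1                          # degrade a quarter of the entries
print(np.array_equal(recall(W, cue), pattern[0]))
```

The update here is synchronous for brevity; Hopfield's original formulation updates neurons asynchronously, which guarantees convergence to an equilibrium state.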

Measures of performance

In this document, three measures of performance are mainly used. The first is the diversity, that is, the number of units of information (for example, messages) a device can store while guaranteeing a given retrieval error rate. The second is the capacity, the total amount of information a device can store, expressed in bits. The last is the efficiency, the ratio between the capacity and the total amount of information consumed by the storage device itself. As far as cognitive issues are concerned, the diversity of a network seems to be a more important parameter than the capacity. Indeed, as argued in [Gri11], it is better to store a large number of short messages than a small number of long messages, since the ability to confront many pieces of information is the very foundation of human thinking.
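The three measures can be related through a small numerical illustration. The sketch below uses hypothetical numbers (number of clusters, fanals per cluster, diversity) that are not taken from the thesis; it only shows how the capacity follows from the diversity and how the efficiency divides the capacity by the memory cost of a binary connection matrix.

```python
# Illustrative relation between diversity, capacity and efficiency
# (all numbers are assumptions, not results from the thesis).
import math

n_clusters = 8            # clusters in the network (assumption)
fanals_per_cluster = 256  # fanals in each cluster (assumption)
diversity = 15000         # messages stored at the target error rate (assumption)

# One fanal is chosen per cluster, so each message carries c * log2(l) bits.
bits_per_message = n_clusters * math.log2(fanals_per_cluster)
capacity = diversity * bits_per_message            # total stored information, in bits

# The device itself is a binary connection matrix between fanals of distinct clusters.
n_fanals = n_clusters * fanals_per_cluster
storage_bits = n_fanals * (n_fanals - fanals_per_cluster) / 2   # potential inter-cluster connections

efficiency = capacity / storage_bits
print(f"capacity = {capacity:.0f} bits, efficiency = {efficiency:.2f}")
```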

Sparsity

This network organization exhibits three levels of sparsity. The first level is the sparsity of messages. Compared to Hopfield networks, this network stores “short” messages, whose length is significantly smaller than the size of the network (that is, the number of neurons). Since a message is represented by a spatial pattern in the graph, this is a direct consequence of the second level of sparsity, imposed by the local winner-take-all rule: only one fanal per cluster is eligible to carry information for a message at a given time. This rule could be relaxed somewhat, but the number of firing fanals should remain very small compared to the total number of fanals. This corresponds directly to what biologists call sparse coding in the brain [Föl03] [PB04] [OF04], described as the fact that only a few neurons fire at any specific time. Finally, the third level of sparsity is given by the network connections, which can be represented by a sparse connection matrix.
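As an illustration of the second level of sparsity, the following sketch (an assumed formulation, not the thesis implementation) applies a local winner-take-all rule: within each cluster only the highest-scoring fanal remains active, so the number of firing fanals never exceeds the number of clusters.

```python
# Local winner-take-all sketch: one active fanal per cluster (illustrative only).
import numpy as np

def local_winner_take_all(scores, n_clusters, fanals_per_cluster):
    """Keep one active fanal per cluster, given a flat vector of fanal scores."""
    scores = scores.reshape(n_clusters, fanals_per_cluster)
    active = np.zeros_like(scores)
    winners = scores.argmax(axis=1)                  # best fanal in each cluster
    active[np.arange(n_clusters), winners] = 1
    return active.reshape(-1)

# Usage: 4 clusters of 8 fanals; at most 4 of the 32 fanals end up active.
rng = np.random.default_rng(1)
scores = rng.random(32)
print(local_winner_take_all(scores, 4, 8).sum())     # -> 4.0
```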
All of the above could be complemented by a further level of sparsity, which is exterior to the network. In terms of knowledge perception, the brain can be considered a selective filter: among the very rich information it receives at any moment, only a small part is considered relevant and possibly exploited later.
It is biologically relevant to evoke sparsity at several levels of our network organization. Indeed, sparsity is directly analogous to energy minimization, which often provides a good explanation of biological strategies.


Biological and information considerations

More and more researchers have adopted the idea that memories are stored in the brain using a distributed representation across numerous cortical and subcortical regions [EC01] [Fus09]: each memory is stored across thousands of synapses and neurons, and each neuron or synapse is involved in thousands of memories. The cortical minicolumn (also called microcolumn) is likely the most basic and consistent information processing unit in the brain [Jon00] [JL07] [CBP+05]. The neurons within a minicolumn encode similar features, as stated in [CBP+05]: “current data on the microcolumn indicate that the neurons within the microcolumn receive common inputs, have common outputs, are interconnected, and may well constitute a fundamental computational unit of the cerebral cortex”. Consequently, a node in our network, the unit able to send and receive information to and from the rest of the network, may correspond to a minicolumn, each comprising around 80 neurons in reality. To make this distinction clear, we deliberately call a node a “fanal” rather than a “neuron”.
Following this logic, we propose to liken clusters in the clique-based networks to cortical columns, generally composed of 50 to 100 minicolumns. The columns are in turn grouped into macrocolumns, in the same way that clusters compose the network. A set of clique-based networks cooperating with each other would likely correspond to the so-called functional areas of the brain.

Table of contents:

1 Introduction 
1.1 Biological neural networks
1.1.1 Functions of a neuron
1.2 Artificial neural networks
1.2.1 McCulloch-Pitts neuron
1.2.2 Association in neural networks
1.2.3 Sequences in neural networks
1.3 Thesis structure
2 Network of neural cliques 
2.1 Structure
2.1.1 Storage
2.1.2 Network density
2.1.3 Retrieval
2.1.4 Measures of performance
2.1.5 Sparsity
2.1.6 Biological and information considerations
2.1.7 Advantages and drawbacks
2.2 Adaptation to sparse structure
2.2.1 Generalized framework of decoding algorithms
2.2.2 Dynamic rules
Sum-of-Sum rule (SoS)
Normalization rule (NORM)
Sum-of-Max rule (SoM)
2.2.3 Activation rules
Global winner-takes-all
Global winners-take-all
Losers-kicked-out
Performance comparison
2.3 Extensions
2.3.1 Blurred messages
2.3.2 Binary errors
2.3.3 Fractal network
3 Chains of cliques and chains of tournaments 
3.1 Redundancy reduction in a clique
3.2 Towards unidirectionality: chain of tournaments
3.3 Associative memory with chains of tournaments
3.3.1 One erased cluster
3.3.2 Two erased clusters
4 Storage of sequences in tournament-based neural networks 
4.1 Definition of the sequence
4.2 Storing
4.3 Error tolerant decoding
4.4 Sequence retrieval error rate
4.5 Capacity and efficiency
5 Storage of vectorial sequences in generalized chains of tournaments 
5.1 Sparse random patterns
5.2 Storing vectorial sequences in looped chains of tournaments
5.3 Storing vectorial sequences in unlooped chain of tournaments
5.3.1 Global decoding schemes
5.3.2 Cluster activity restriction
5.4 Double layered structure
5.4.1 Error avalanche issue
5.4.2 Structure
5.4.3 Biological consideration
5.5 Interference issue with a limited dictionary
5.5.1 Interference issue
5.5.2 Multiple solutions
Segmentation of network
Association with signature
Cluster enlargement
Subsampling
5.6 Hierarchical structure
5.6.1 Performance analysis
6 Conclusion and open problems 
6.1 Obtained results
6.2 Perspectives and open problems
Bibliography 

