Environmental Pressures and Emergence of Internal Representations

Internal Representations and Cognitive Behaviors

Traditional theories in cognitive science consider internal models to be the basis of cognitive agents’ actions (Craik, 1943; Tolman, 1948; Johnson-Laird, 1983). These models are described as mental maps that incorporate symbolic representations.
In his seminal work, Craik (1943) states that the mind constructs “small-scale models” of reality that it uses to reason and to anticipate events. Tolman (1948) describes cognitive maps, which underlie animal navigation and path planning. Such maps are the basis of models and cognitive functions and are called internal representations (Tolman, 1948; Marr, 1982). In the case of Tolman’s maps, the representations are symbolic, and their content is expected to be easily interpretable. A review of such mental models was conducted by Johnson-Laird (1980).
Harnad (1990) later introduced the symbol grounding problem, which questions how symbolic representations acquire their meaning. In the 1990s, cognitive science saw a new emphasis on embodied and situated intelligence, including the works of Brooks (1991b) and Varela et al. (1991), which reject purely symbolic views. These authors argue that “the world is its own best model” (Brooks, 1991b) and emphasize intelligence without representation, built on reactive behaviors. This led to an anti-representationalist view and sparked debates in the cognitive science and artificial intelligence communities. However, Clark and Toribio (1994) argue that there may not be a representational/non-representational dichotomy, but rather different degrees and types of possible representations, ranging from no representations at all to sophisticated models. In embodied views of cognition, cognition can still rely on prediction and internal representations; the representations, however, are based on sensorimotor codes rather than symbols (Wolpert et al., 1995; Keijzer, 2002; Pfeifer and Bongard, 2006).
The term representation itself can be misleading and has been criticized. Keijzer (2002) argues that “Internal Control Parameters”, which can be interpreted as specific biological internal states, are not representations, and advocates the use of both representational and non-representational internal states in embodied views of cognition. This debate, however, is beyond the scope of this thesis. The definition of representations used in this work focuses instead on their function.

Examples of Reactive and Cognitive Behaviors

What role do these internal representations play? They have been associated with cognitive behaviors (see, e.g., Craik, 1943; Tolman, 1948; Johnson-Laird, 1980; Marr, 1982). Shettleworth (2009) studied cognition across the animal kingdom, which encompasses very diverse nervous system structures and behaviors. Behaviors purely based on reflexes are called reactive behaviors; behaviors involving complex computations and the integration of information over time are called cognitive behaviors.
The following paragraphs illustrate reactive and cognitive behaviors in animal species. Navigation in particular provides a good context for studying reactive and cognitive behaviors, because the trajectory of the studied animals can easily be recorded in nature, or by cognitive scientists in setups such as mazes.

Reactive behavior and taxis

Reactive behaviors are behaviors that depend only on current sensor information and do not integrate other information. For instance, chemotaxis is the process of moving along a concentration gradient of a given chemical substance. An example of this kind of behavior is the chemotaxis of bacteria, for instance in Escherichia coli (Berg et al., 1972). In taxis, navigation is thus local and does not rely on past sensor information.
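As a concrete illustration, the minimal sketch below implements a purely reactive gradient-climbing controller in the spirit of chemotaxis. The Gaussian chemical field, sensor placement, and gains are illustrative assumptions rather than a model of any specific organism; the point is only that the action at each step is a fixed function of the current sensor readings, with no stored state.

```python
import numpy as np

def concentration(pos):
    """Illustrative chemical field: a Gaussian source centered at the origin."""
    return np.exp(-np.dot(pos, pos))

def reactive_step(pos, heading, step=0.05, sensor_offset=0.1, turn=0.3):
    """One control step of a purely reactive agent: the turn direction
    depends only on the current readings of two lateral sensors, so no
    information is carried over from past time steps."""
    left, right = heading + np.pi / 4, heading - np.pi / 4
    c_left = concentration(pos + sensor_offset * np.array([np.cos(left), np.sin(left)]))
    c_right = concentration(pos + sensor_offset * np.array([np.cos(right), np.sin(right)]))
    # Turn toward the side that senses the higher concentration.
    heading += turn if c_left > c_right else -turn
    return pos + step * np.array([np.cos(heading), np.sin(heading)]), heading

pos, heading = np.array([1.0, 1.0]), 0.0
for _ in range(300):
    pos, heading = reactive_step(pos, heading)
print(np.linalg.norm(pos))  # small distance: the agent ends near the source
```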
Another example of a simple navigation behavior can be found in some insects, such as the African dung beetle Scarabaeus zambesianus (Dacke et al., 2003). Behavioral experiments show that, in some conditions, the dung beetle has to move along a straight path. To do so, it heads in the direction given by polarized light (sunlight or moonlight (Dacke et al., 2003)). Changing the polarization of the light (using a polarizing screen) makes the beetle instantly change its direction. In that case, the beetle’s heading depends solely on its light polarization sensors, and its behavior is thus reactive. However, other navigation strategies of the beetle might include non-reactive behaviors.

Internal Representations in Computational Neuroscience

The following computational neuroscience review focuses on the use of internal representations in different models. While the encoding and the use of these representations can vary greatly from one model to another, common properties will be highlighted. The following list of cognitive functions is neither exhaustive nor mutually exclusive, as some of these functions are likely linked (see, for example, Deco et al. (2005)).
Working Memory

Working memory is a short-term memory, referred to as the “temporary storage and manipulation of the information necessary for complex cognitive tasks” (Baddeley, 1992). Working memory is thus believed to be the basis of multiple cognitive processes. Several neural models of memory and working memory involve buffering the information in an internal neuron or group of neurons. The simplest piece of information is a single bit, which can be stored in a single internal neuron. This corresponds to a very small and simple internal representation.
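As an illustration, a single sigmoid neuron with a strong self-connection can latch one bit, as in the sketch below. The weights are illustrative assumptions, not values taken from a published model.

```python
import numpy as np

def neuron(h, set_in=0.0, reset_in=0.0,
           w_rec=10.0, w_set=10.0, w_reset=-10.0, bias=-5.0):
    """One update of a sigmoid neuron with a strong self-connection.
    The self-connection makes the unit bistable: once pushed high by a
    'set' pulse, it stays high without further input, storing one bit."""
    z = w_rec * h + w_set * set_in + w_reset * reset_in + bias
    return 1.0 / (1.0 + np.exp(-z))

h = 0.0
h = neuron(h, set_in=1.0)      # 'set' pulse drives the neuron high
for _ in range(20):
    h = neuron(h)              # inputs silent: the bit is retained
print(round(h, 3))             # ~0.993: the stored bit reads as 1
h = neuron(h, reset_in=1.0)    # 'reset' pulse clears the bit
print(round(h, 3))             # ~0.006: the stored bit reads as 0
```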

Internal Representations in Evolutionary Robotics

In the light of the cognitive functions described in the previous sections, the following general definition of representation will be used in this work:
Definition 9 (internal representation). An internal representation is a piece of information located in internal neurons which 1) is dependent on the inputs; and 2) cannot be deduced only from the values of the inputs at a particular time.

From this definition, several points can be made:
• Reactive agents do not make use of internal representations: their potential internal nodes can be deduced from a direct combination of inputs.
• Internal circuitry not connected to the inputs is not considered a representation.

While the models from computational neuroscience have complex internal structures, many ER experiments rely on much simpler control architectures. This section reviews different tasks from past ER experiments in the light of the present definition of internal representation; a sketch illustrating the definition follows below. The review is restricted to the evolution of artificial neural networks in ER.
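To make Definition 9 concrete, the sketch below (with illustrative weights and two hypothetical one-unit networks) contrasts a feedforward hidden value, which is deducible from the current input alone, with a recurrent hidden value, which depends on the input history and therefore qualifies as an internal representation under the definition.

```python
import numpy as np

def feedforward_hidden(x, w=0.8):
    """Reactive case: the hidden value is a fixed function of the current input."""
    return np.tanh(w * x)

def recurrent_hidden(inputs, w_in=0.8, w_rec=0.9):
    """Recurrent case: the hidden value integrates the whole input history."""
    h = 0.0
    for x in inputs:
        h = np.tanh(w_in * x + w_rec * h)
    return h

# Two input sequences ending on the same current value but differing in the past.
seq_a = [1.0, -1.0, 0.5]
seq_b = [-1.0, 1.0, 0.5]

# Same current input => identical feedforward hidden value: it can be deduced
# from the inputs at this time step alone, so it is not a representation.
print(feedforward_hidden(seq_a[-1]) == feedforward_hidden(seq_b[-1]))  # True

# Different histories => different recurrent hidden values: the state cannot be
# deduced from the current inputs alone, satisfying both clauses of Definition 9.
print(recurrent_hidden(seq_a), recurrent_hidden(seq_b))  # ~0.22 vs ~0.52
```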

Table of contents:

1 Introduction 
1.1 Context
1.1.1 Artificial Intelligence and Cognitive Science
1.1.2 Evolutionary Robotics
1.2 Problem
1.3 Outline
2 Background 
2.1 Evolutionary Algorithms
2.1.1 History
2.1.2 Principle
2.1.3 Multi-Objective
2.2 Evolutionary Robotics
2.2.1 Introduction
2.2.2 Behavior
2.2.3 Fitness Functions
2.2.4 Conclusions
2.3 Artificial Neural Networks
2.3.1 Neuron models
2.3.2 Neural Networks
2.3.3 Evolution and Artificial neural networks
2.3.4 Evolution of Artificial Neural Network Topology
2.3.5 Conclusions
3 Internal Representations 
3.1 Internal Representations and Cognitive Behaviors
3.1.1 Internal Representations
3.1.2 Examples of Reactive and Cognitive Behaviors
3.1.3 Internal Representations in Computational Neuroscience
3.2 Internal Representations in Evolutionary Robotics
3.2.1 Representation-less ER tasks
3.2.2 Representation-Hungry Tasks
3.2.3 Tasks with unclear Representation needs
3.2.4 Conclusion
3.3 Testing Characteristics of Internal Representations
3.3.1 Measuring Representations in the literature
3.3.2 Properties of Internal Representations
3.3.3 Quantitative tests
3.3.4 Qualitative tests
3.4 Analysis over Existing Networks
3.4.1 Representations in classic networks
3.4.2 Visual Attention
3.4.3 ER task: Hard Ball-Collecting Task
3.4.4 Conclusion
4 Behavioral Consistency Method 
4.1 Introduction
4.2 Working Memory Experiment
4.2.1 T-Maze navigation task
4.2.2 Methods
4.2.3 Results
4.2.4 Conclusion
4.3 Behavioral Consistency Method
4.3.1 Related Work
4.3.2 Behavioral Consistency Method
4.4 Attention Focus
4.4.1 Experimental Setup
4.4.2 Results
4.5 Action Selection
4.5.1 Experimental setup
4.5.2 Results
4.6 Conclusion
5 Environmental Pressures and Emergence of Internal Representations 
5.1 Selective Pressures for the emergence of Internal Representations
5.1.1 Task and World Complexity
5.1.2 Helper Objectives and Specific Pressures
5.2 Circular Maze Evolutionary Robotics Protocol
5.2.1 Goal-oriented and Novelty Objective
5.2.2 Behavioral Consistency Objective and Environmental Pressures
5.2.3 Representation Tests
5.2.4 Setups and Selective Pressures
5.3 Results: Emergence of Internal Representations under Environmental Pressures
5.3.1 Impact of Selection Pressures
5.3.2 Resulting Architectures
5.3.3 Lineage and selection for Internal Representation
5.4 Conclusion
6 Discussion and Perspectives 
6.1 Contributions
6.1.1 Definition of Internal Representations
6.1.2 Behavioral Consistency Method
6.1.3 Emergence of Internal Representations under multiple selective pressures
6.2 Discussion and Perspectives
6.2.1 Classification of representations and tests
6.2.2 Behavioral Consistency Method versus Standard Evaluation
6.2.3 Emergence of Internal Models
6.2.4 New Selective Pressures
6.2.5 Neural Network Encodings
7 Conclusion 
A Appendix 
A.1 DNN and EvoNeuro2 Parameters
A.2 Additional Lineage
A.3 Neural Networks
A.4 Chronograms
A.5 Comparison between BCO and Standard Evaluation
Bibliography 
