Assessing the Applicability of Mental Simulation


Mental Simulation: a Common Framework

Evidence of mental simulation has been studied in humans [Buckner and Carroll (2007); Decety and Grezes (2006)], where similar brain regions have been observed to become activated when individuals perform remembering, prospection and Theory of Mind [Premack and Woodruff (1978)] tasks. Humans have the ability to remember past events in a rich, contextual way that allows them to revisit those memories and apply the knowledge gained to novel scenarios. Planning generally takes the form of prospection in humans, where future scenarios are visualised in one's mind, enabling the individual to choose which actions to perform based on their imagined utility. In a social context, theorising and reasoning about what others think and feel (i.e. Theory of Mind) are crucial skills for successful interaction with other people. Research also extends into how people use this type of simulation to mentally represent mechanical systems [Hegarty (2004)] and to reason about how these will evolve under various conditions. It has been suggested that the predictive power of this approach is limited by the perceptual capability of the individual: in complex situations, subjects cannot accurately predict precise motion trajectories, but they can estimate, within a reasonable margin of error, what the effects would be.
From a computational perspective, studies on mental simulation in humans suggest that this paradigm offers a common framework for reasoning about one's environment as a whole, and could therefore aid the creation of an architecture that enables an agent to reach a higher level of adaptability.

Assessing the Applicability of Mental Simulation

Advances in cognitive science suggest the need for a unified framework of human cognition, while mental simulation and the Theory of Mind receive much attention as fundamental human traits for social reasoning. In the field of artificial intelligence, a variety of cognitive architectures have been proposed [Langley et al. (2009)], but only a few feature mental simulation. Moreover, those that do include mental simulation use it within localised contexts and rely on models of the environment and of human psychology constructed by human experts, not by the agent itself. Mental models allow an individual to anticipate certain aspects of the future state of the environment. Since humans can, in some way, visualise scenarios through mental imagery, there is reason to believe that such models should be executable, so that an agent can apply them to the current state of the environment and obtain the next state as a result. This simulation process provides a version of the future that can be used to make decisions. Executable knowledge has been tackled in some cognitive architecture implementations [Kennedy et al. (2009)] that use sets of production rules which apply to certain preconditions and yield a new state of the system. However, these approaches require world models that must be developed beforehand, making them cumbersome to use and restricted to specific problems.
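As a rough illustration (and not a reconstruction of any cited architecture), the following Python sketch shows what an executable world model of the precondition/effect kind described above might look like: a set of production rules is applied repeatedly to the current state to produce the agent's imagined future. The rule names and state fields are illustrative assumptions.

```python
# Minimal sketch of an executable world model built from production rules.
# Hypothetical example, not taken from Kennedy et al. (2009).
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]

@dataclass
class Rule:
    name: str
    precondition: Callable[[State], bool]  # does the rule apply to this state?
    effect: Callable[[State], State]       # produce the successor state

def step(state: State, rules: List[Rule]) -> State:
    """Apply every rule whose precondition holds, yielding the next state."""
    next_state = dict(state)
    for rule in rules:
        if rule.precondition(next_state):
            next_state = rule.effect(next_state)
    return next_state

def simulate(state: State, rules: List[Rule], horizon: int) -> List[State]:
    """Run the model forward for `horizon` steps: the agent's imagined future."""
    trajectory = [state]
    for _ in range(horizon):
        trajectory.append(step(trajectory[-1], rules))
    return trajectory

# Example rule set: a ball falling under gravity (illustrative only).
rules = [
    Rule("gravity",
         precondition=lambda s: s["y"] > 0.0,
         effect=lambda s: {**s, "vy": s["vy"] - 9.81 * s["dt"]}),
    Rule("integrate",
         precondition=lambda s: True,
         effect=lambda s: {**s, "y": max(0.0, s["y"] + s["vy"] * s["dt"])}),
]
future = simulate({"y": 10.0, "vy": 0.0, "dt": 0.1}, rules, horizon=20)
```

The limitation noted above is visible even in this toy version: the rules encode the dynamics by hand, so the model only covers the situations its designer anticipated.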
Through mental simulation, an agent gains access to an image of the future. Hence, the agent can compute a strategy to cope with the imagined situation before the future state is reached, and therefore act in a timely manner when the event occurs. Deciding when to apply the strategy may be time-dependent, if the anticipated event unfolds over time (such as a moving object), or precondition-dependent (an expected event known to happen if certain cues occur in the environment). An example of precondition-dependent anticipation is avoiding passing cars while crossing a street: it is expected that cars will pass, but the precondition for avoidance is the recognition of a car. Once a car has been recognised, its trajectory can be anticipated.
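A minimal sketch of the two triggering schemes just described, assuming a hypothetical percept dictionary and strategy callables, could look as follows.

```python
# Two anticipation triggers: time-dependent (fire when the predicted event is
# due) and precondition-dependent (fire when a cue appears in the percepts).
# The percept keys and strategies are hypothetical.
from typing import Any, Callable, Dict

Percepts = Dict[str, Any]

def time_triggered(predicted_time: float, strategy: Callable[[], None]):
    def check(now: float, percepts: Percepts) -> None:
        if now >= predicted_time:
            strategy()                      # act when the imagined event is due
    return check

def precondition_triggered(cue: str, strategy: Callable[[], None]):
    def check(now: float, percepts: Percepts) -> None:
        if percepts.get(cue, False):
            strategy()                      # act when the expected cue appears
    return check

# Example: the street-crossing case, triggered by recognising a car.
avoid_car = precondition_triggered("car_recognised", lambda: print("wait"))
avoid_car(now=3.2, percepts={"car_recognised": True})   # prints "wait"
```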

Environment Type

Artificial intelligence applications, since their advent in the second half of the twentieth century, have diversified to tackle many areas of human intelligence. Research in this field has led to optimal algorithms for a number of problems and to super-human performance on others, such as in the 1990s, when the Deep Blue computer [Hsu (2002)] won against a chess world champion.
However, humans still excel at many quotidian tasks such as vision, physical interaction, spoken language, environmental and behavioural anticipation, or adapting to the constantly changing conditions of the natural world in which we live. To this end, hard problems have often been idealised in computer simulations (virtual reality), where research can focus on the essentials of the artificial agents' intelligence without the need to solve low-level problems such as noisy perception, motor fatigue or hardware failure, to name a few. Once matured in virtual reality, such agents would be ready for embodiment in a robotic implementation, although only a few prove to be feasible. From this point of view, we can categorise existing approaches through the prism of environment complexity, namely those that have been implemented in virtual environments (Subsection 2.1.1) and those that have a robotic embodiment (Subsection 2.1.2).


Virtual World

The challenge for an intelligent agent in a virtual world is to cope with potentially complex behaviour, but in an accessible sandbox context. The virtual world is a controlled environment where observing events and object properties is simplified, so that agent development can focus on behaviour while neglecting problems that arise from interfacing with the world. An example of such simplicity is the trajectory of an object moving under the effects of gravity, whose exact coordinates can be sampled directly by the agent without the need to capture, segment and analyse an image. The main characteristic of this environment type is the focus on behaviour, but it carries the drawback that developed methods may scale poorly to the real environment due to noise, uncertainty and interface issues. Regarding computational approaches to mental simulation, most existing works have been evaluated in virtual environments of varying complexity. Discrete environments provide a simple but informative view of the behaviour of an agent [Ustun and Smith (2008)] under controllable circumstances. As complexity rises, namely in the transition from discrete to continuous space, the challenge for intelligent agents to perform tasks increases significantly, but a wider range of behaviour also becomes possible. It is at this level of complexity that mental simulation begins to find its applications, and its advantages over traditional methods, in agent decision-making. The literature provides mental simulation approaches for constrained two-dimensional continuous space [Kennedy et al. (2008, 2009); Svensson et al. (2009); Buche et al. (2010)], which focus on developing models that cope with the increased complexity of the environment. Other works go even further, to continuous three-dimensional space, where trajectories are more dynamic as human users intervene [Buche and De Loor (2013)] and collisions [Kunze et al. (2011a)] or occlusions [Gray and Breazeal (2005)] take place.
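To make the "direct sampling" convenience of a virtual world concrete, the sketch below shows a toy environment whose ground-truth object state the agent can query without any image processing. The environment interface is a hypothetical stand-in, not an API from any cited work.

```python
# Toy 2D virtual world: one object moves ballistically under gravity, and the
# agent reads its exact state directly instead of segmenting camera images.
# Hypothetical illustration only.
import numpy as np

class VirtualWorld:
    def __init__(self, pos, vel, g=9.81):
        self.pos = np.asarray(pos, dtype=float)
        self.vel = np.asarray(vel, dtype=float)
        self.g = g

    def step(self, dt=0.02):
        # Advance the ground-truth physics by one tick.
        self.vel[1] -= self.g * dt
        self.pos += self.vel * dt

    def sample_object_state(self):
        # Exact coordinates and velocity, no capture/segmentation required.
        return self.pos.copy(), self.vel.copy()

world = VirtualWorld(pos=[0.0, 5.0], vel=[2.0, 0.0])
for _ in range(10):
    world.step()
pos, vel = world.sample_object_state()   # the agent reads the state directly
```

In a real-world setting the same information would have to be estimated from noisy sensors, which is precisely the scaling issue raised above.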

Table of contents:

Abstract
Contents
List of Figures
List of Tables
1 Introduction 
1.1 Cognitive Approach Towards Computational Decision Making .
1.1.1 Mental Simulation: a Common Framework
1.1.2 Assessing the Applicability of Mental Simulation
1.2 Objectives
1.3 Manuscript Organization
2 Related Work 
2.1 Environment Type
2.1.1 Virtual World
2.1.2 Real World
2.2 Mental Simulation Targets
2.2.1 Environmental Aspects
2.2.2 Behavioural Aspects
2.3 Cognitive Function
2.3.1 Prospection
2.3.2 Navigation
2.3.3 Theory of Mind
2.3.4 Counterfactual Reasoning
2.4 Computational Implementation
2.5 Discussion
3 Proposition: ORPHEUS 
3.1 Generic Agent Architecture
3.1.1 Perception
3.1.2 Imaginary World
3.1.3 Abstract World
3.2 Implementation
3.2.1 Software Architecture
3.2.2 Distributed Processing and Visualization
3.3 Discussion
4 Applications 
4.1 Introduction
4.2 Angry Birds Agent
4.2.1 Perception
4.2.2 Abstract world
4.2.3 Imaginary World
4.2.4 Decision and Control
4.2.5 Preliminary Results
4.2.6 Competition Results
4.3 Orphy the Cat
4.3.1 Perception
4.3.2 Abstract world
4.3.3 Imaginary World
4.3.4 Decision and Control
4.3.5 Results
4.4 Nao Robot Goalkeeper
4.4.1 Perception
4.4.2 Abstract world
4.4.3 Imaginary World
4.4.4 Decision and Control
4.4.5 Results
4.5 Discussion
5 Conclusion 
5.1 Discussion
5.1.1 Contributions of Orpheus
5.1.2 Limitations of Orpheus
5.2 Future Work
5.2.1 Nao Soccer Lab
5.2.2 Learning
5.2.3 Planning
5.2.4 Agent’s Goals
5.2.5 Emotions
5.3 Publications
Bibliography
