New INtelligent Distributed Agent Based Architecture


Chapter 3: Agent Architectures

Agent architectures can be classified according to various criteria (see section 2.2.4). For the purpose of this thesis, agent architectures are classified based on the reasoning model. In this chapter, an overview and a critique of each of the main classes of agent architectures are presented and discussed. A few definitions of agent architecture are given in section 3.1. Section 3.2 presents the historically first agent architecture: the symbolic reasoning agent architecture. Sections 3.3 and 3.4 discuss sub-symbolic agent architectures and hybrid agent architectures, respectively. Each of the presented architectures is initially discussed and criticised in its general form and then an example of such an architecture is described in greater detail. Section 3.5 proposes a hybrid agent architecture that is used in the INDABA agent architecture. Section 3.6 concludes this chapter with a comparison between the various presented models.

Introduction

The term agent architecture intuitively suggests a framework for the implementation of an agent. An agent architecture addresses the issues surrounding the development and implementation of an agent, based on a selected theoretical foundation. An agent architecture can be more formally defined as:
“[A] Particular methodology for building [agents]. It specifies how … the agent can be decomposed into construction of a set of component modules and how these modules should be made to interact…. An architecture encompasses techniques and algorithms that support this methodology” [106].
An alternative view on agent architecture is given as:
“[A] Specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decomposition for particular tasks” [92].
There are various taxonomies proposed for agent and MAS architectures. The reader is referred to [38][90][57] for more details. In this thesis, agent architectures are classified according to the reasoning model used by agents.
In the remainder of this chapter, different agent architectures are presented and discussed.

Symbolic Reasoning Agent Architecture

Symbolic reasoning techniques are at the core of symbolic reasoning agent architectures. Section 3.2.1 presents a historical overview of the evolution of symbolic reasoning agent architectures, while section 3.2.2 presents some of the general characteristics and shortcomings of the symbolic reasoning approach.
As the representative of the symbolic reasoning agent architecture, one of the first implemented robots, namely Shakey [136], is discussed in section 3.2.3.

Introduction and History

Historically, the first agent architecture to appear, the symbolic reasoning agent architecture [136], has its roots in traditional artificial intelligence systems. An example implementation of a symbolic architecture is the early theorem-prover, the General Problem Solver (GPS) [134]. Symbolic reasoning architectures are sometimes referred to as traditional architectures [87]. Expert systems are based on the symbolic reasoning paradigm. The successes of some symbolic reasoning systems, such as early expert systems (e.g. MYCIN [171]), lent credibility to the belief that the paradigm could easily be extended to agent systems and embodied agents (robots). One of the seminal symbolic planning systems was STRIPS [65], which was applied to agents, and even to embodied agents such as the robot Shakey [136]. Although the symbolic reasoning approach was heavily criticised in the late 1980s (as discussed in section 3.2.2), the criticism did not stop the development of purely cognitive architectures such as SOAR [135] and ACT-R [4], both of which are based on symbolic inference mechanisms. Agent systems that have utilised a symbolic planner as their main component include Integrated Planning, Execution and Monitoring (IPEM) [3] and “softbots” [62].
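To illustrate the planning style that STRIPS introduced, consider the following minimal sketch: an action is an operator with a set of preconditions, a delete list and an add list, applied to a state represented as a set of ground atoms. All names in the example (move, At, Connected) are illustrative, not taken from the original STRIPS system.

```python
# Minimal STRIPS-style operator application (illustrative sketch).
# A state is a set of ground atoms; an operator applies when its
# preconditions all hold, then removes its delete list and inserts
# its add list.

def applicable(state, preconditions):
    """An operator is applicable if every precondition is in the state."""
    return preconditions <= state

def apply_op(state, preconditions, delete_list, add_list):
    """Return the successor state, or None if the operator does not apply."""
    if not applicable(state, preconditions):
        return None
    return (state - delete_list) | add_list

# Example: a robot moving from room A to room B.
state = {("At", "A"), ("Connected", "A", "B")}
new_state = apply_op(
    state,
    preconditions={("At", "A"), ("Connected", "A", "B")},
    delete_list={("At", "A")},
    add_list={("At", "B")},
)
# new_state now contains ("At", "B") instead of ("At", "A").
```

A STRIPS planner searches for a sequence of such operator applications that transforms the initial state into one satisfying the goal.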
In the robotics field, after the initial success of Shakey [136] (which performed the required tasks, albeit slowly) and the harsh critique of symbolic reasoning systems during the 1980s, there was little further development of pure symbolic reasoning, single-agent systems; the same applies to multi-robot symbolic systems. There were, however, some simulated robotic systems based on a symbolic architecture, for example HOMER [193]. HOMER interacted with users through commands given in a subset of the English language. Once the commands were interpreted, the simulated robot would plan and execute them in its simulated environment.
It is becoming evident that any long-term artificial intelligence programme must re-integrate some of the traditional AI-based symbolic reasoning mechanisms [74]. The current trend is to create hybrid agent architectures in which the symbolic component plays a significant role. An example of such a hybrid agent, 3T [22], is discussed in section 3.4.3.
The general characteristics of the symbolic reasoning agent architecture are presented in the next section.


General Characteristics of Symbolic Reasoning Agent Architectures

Symbolic reasoning agents (also known as rational or deliberative agents) are based on a symbolic, abstract representation of the world and have an explicit knowledge base, often encompassing beliefs and goals [151]. Goal-oriented intelligent behaviour is explicitly coded into the agents, usually in the form of production rules. Typically, an agent can exploit many different plans to achieve its allotted goals. A plan is chosen on the basis of the agent's current beliefs and goals, and the selected plan can be dynamically modified if these beliefs and/or goals change. Rational agents can be considered advanced theorem-provers that manipulate symbols in order to prove properties of the world. Implementing an agent as a theorem-prover allows the re-use of well-known techniques developed in the AI field, for example the inference engines of expert systems.
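The belief- and goal-driven plan selection described above can be sketched as follows. The sketch is hypothetical (the plan library, plan names and belief atoms are invented for illustration): each plan records the goal it achieves and a context condition, i.e. the beliefs under which it is applicable, and a change in beliefs triggers a different selection.

```python
# Illustrative sketch of deliberative plan selection: a plan is chosen
# on the basis of the agent's current beliefs and goals, and the choice
# changes dynamically when beliefs change.

def select_plan(plans, beliefs, goals):
    """Return the first plan whose goal is wanted and whose context holds."""
    for plan in plans:
        if plan["achieves"] in goals and plan["context"] <= beliefs:
            return plan["name"]
    return None  # no applicable plan

# A tiny, hypothetical plan library.
plans = [
    {"name": "recharge_at_dock", "achieves": "charged", "context": {"dock_known"}},
    {"name": "ask_for_charger",  "achieves": "charged", "context": {"human_nearby"}},
]

beliefs, goals = {"human_nearby"}, {"charged"}
first_choice = select_plan(plans, beliefs, goals)   # "ask_for_charger"

beliefs.add("dock_known")                           # beliefs change ...
second_choice = select_plan(plans, beliefs, goals)  # ... so the plan changes
```

The same mechanism underlies the dynamic plan modification mentioned above: re-running selection after every belief or goal update keeps the chosen plan consistent with the agent's current knowledge.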
The first obstacle that any symbolic reasoning architecture needs to overcome is the accurate implementation of a world model. Translating real-world entities, and the often complex relationships between those entities, into adequate symbolic representations is by no means trivial. Furthermore, there is no universal, widely accepted model for encoding symbolic knowledge of the real world. Numerous methods are in use, ranging from first-order logic and production rules [87] to network representations, for example semantic nets [101].
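As a small illustration of this representational variety, the same real-world fact can be encoded under two of the schemes mentioned above. The encodings below are toy sketches (the entity and relation names are invented), but they show that the choice of representation dictates how the knowledge is later queried.

```python
# The same fact, "the robot is in Room1, which adjoins Room2",
# under two different symbolic representation schemes.

# 1. First-order-logic style: a set of ground atoms (predicate tuples).
facts = {
    ("in", "robot", "room1"),
    ("adjoins", "room1", "room2"),
}

# 2. Semantic-net style: nodes connected by labelled edges.
semantic_net = {
    "robot": {"in": "room1"},
    "room1": {"adjoins": "room2"},
}

# Either encoding answers the same query, but by different mechanisms:
in_room_logic = ("in", "robot", "room1") in facts       # membership test
in_room_net = semantic_net["robot"]["in"] == "room1"    # graph traversal
```

Neither form is universally better; the lack of a single accepted scheme is precisely the difficulty noted above.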
The next problem to be solved is deciding which external stimuli, as sensed from the environment, can be ignored. Even for embodied agents in real-world environments, the information received via sensors is just a subset of all possible stimuli. For example, a robot may have a collision detector but not a light sensor. The question that arises is whether the deliberative process would differ if the robot had more information about its environment. In other words, the choice of sensors that influence the deliberative process is not always intuitive, and requires further research.
In real-world environments, symbolic reasoning suffers from the “real-time processing problem”. Most symbolic reasoning algorithms perform a heuristic search of the problem space, and because the search spaces associated with real-world problems are usually huge, finding an optimal or nearly optimal solution takes time. Suppose that deliberation starts from an observation of the environment at time t and takes k time units: in the meantime the environment can change, so that the solution that was optimal at time t may no longer be optimal at time t+k, when the corresponding action is executed.
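The real-time processing problem can be demonstrated with a toy planner. In the sketch below (a deliberately simplistic breadth-first path planner over a hand-made grid, not any planner from the literature), a plan computed from the world as observed at time t becomes invalid because a cell on the planned route is blocked during the k steps of deliberation.

```python
from collections import deque

def plan_path(free_cells, start, goal):
    """Toy breadth-first path planner over a set of free grid cells."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:                      # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
    return None                               # no path exists

# World as observed at time t: a narrow corridor from (0,0) to (2,2).
world_at_t = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
plan = plan_path(world_at_t, (0, 0), (2, 2))

# During the k time units of deliberation, a corridor cell is blocked.
world_at_t_plus_k = world_at_t - {(2, 1)}
stale = any(step not in world_at_t_plus_k for step in plan)  # True: plan invalid
```

The plan was correct for the world at time t, yet by time t+k it routes the agent through an obstacle; detecting and recovering from such staleness is exactly what purely deliberative architectures handle poorly.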

Abstract 
Acknowledgements 
Content 
List of Figures 
List of Tables
List of Algorithms 
Chapter 1: Introduction 
1.1 Motivation
1.2 The Objectives
1.3 The Main Contributions
1.4 Thesis Outline
Chapter 2: Background
2.1 Introduction
2.2 Agents: Definitions and Classifications
2.3 Multi Agent Systems: Definitions and Classification
2.4 Problems with Multi-Agent Systems
2.5 Origins of the Agent Paradigm
2.6 Summary
Chapter 3: Agent Architectures
3.1 Introduction
3.2 Symbolic Reasoning Agent Architecture
3.3 Reactive Agent Architecture
3.4 Hybrid Agents
3.5 Summary
Chapter 4: Multi-Robot Architectures
4.1 Introduction
4.2 Behaviour Based Robotics
4.3 Hybrid MAS (MACTA)
4.4 Summary
Chapter 5: New INtelligent Distributed Agent Based Architecture
5.1 Overview of INDABA
5.2 Controller Layer
5.3 Sequencer Layer
5.4 Deliberator Layer
5.5 Interaction Layer
5.6 Summary
Chapter 6: Coordination Approaches
6.1 Introduction
6.2 Biology-Inspired Approaches – Coordination Perspective
6.3 Organisational Sciences-Based Approach
6.4 Social Networks
6.5 Related Work
6.6 Social Networks Based Approach
6.7 Summary
Chapter 7: Experiments in an Abstract Simulated Environment
7.1 Scope Limitation and Simulation Set-up
7.2 Task Allocation and Team Formation Algorithm
7.3 Task Execution and Task Evaluation Algorithm
7.4 Simulating Task Details Uncertainty
7.5 Experimental Results
7.6 Summary
Chapter 8: Experiments in the Simulated Robot Environment
8.1 Introduction
8.2 Robot Simulator Overview
8.3 Simulation Set-up and Assumptions
8.4 Simulation Results
8.5 Summary
Chapter 9: Experiments in a Physical Environment
9.1 Introduction
9.2 Physical Environment Set-up
9.3 INDABA Implementation
9.4 Results
9.5 Summary
Chapter 10: Conclusion
10.1 INDABA
10.2 The Social Networks Based Approach
10.3 Directions for Future Research
Bibliography
Appendix