The Particle Swarm Optimization Algorithm


Introduction

The field of computational intelligence (CI) is premised on the study of techniques that facilitate intelligent behaviour in complex environments. Within CI, a large body of research studies social organisms whose collective intelligence can be leveraged in some capacity; this sub-field is referred to as swarm intelligence (SI). As a prominent example of SI, the particle swarm optimization (PSO) algorithm [59] is a population-based, stochastic search technique inspired by the flocking behaviour of birds.
It is well known that an effective search technique must strike a balance between exploring new regions of the search space and exploiting known, promising regions. In the context of CI algorithms, exploration refers to the identification of new, previously undiscovered areas of the search space, while exploitation refers to the refinement of previously found, promising solutions. In the PSO algorithm, exploration and exploitation can be controlled through the values of the control parameters [4, 9, 10, 63, 105, 107]. Moreover, because particle movement patterns are heavily influenced by these control parameters [9, 106], the performance of the PSO algorithm can be improved by appropriately tuning their values to the problem at hand [12, 52, 71, 106]. The searching capability of the algorithm, and by extension the exploration/exploitation balance, is thus directly influenced by the values of the three main control parameters, namely the inertia weight (ω), the cognitive acceleration coefficient (c1), and the social acceleration coefficient (c2). Additionally, the PSO algorithm has been shown to be rather sensitive to the values of these control parameters [12, 105, 107], and thus a priori tuning of the control parameter values may lead to improved performance. However, such tuning has traditionally been a time-consuming, empirical process followed by an arduous statistical analysis. To alleviate the issue of a priori parameter tuning, this thesis investigates self-adaptive particle swarm optimization (SAPSO) techniques that do not rely on a priori specification of values for the conventional PSO control parameters.
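To make the role of these three control parameters concrete, the following is a minimal Python sketch of the canonical gbest PSO velocity and position update as it is widely presented in the literature; it is not necessarily the exact formulation used later in this thesis, and the function name, variable names, and the specific parameter values shown are illustrative only.

```python
import numpy as np

def pso_velocity_update(v, x, pbest, gbest, w, c1, c2, rng):
    """One canonical PSO velocity update (illustrative sketch).

    w  -- inertia weight (momentum on the previous velocity)
    c1 -- cognitive acceleration coefficient (pull toward the particle's own best)
    c2 -- social acceleration coefficient (pull toward the neighbourhood best)
    """
    r1 = rng.random(x.shape)  # component-wise uniform random values in [0, 1)
    r2 = rng.random(x.shape)
    inertia = w * v                      # exploration: carries previous momentum
    cognitive = c1 * r1 * (pbest - x)    # exploitation of the particle's own experience
    social = c2 * r2 * (gbest - x)       # exploitation of the swarm's experience
    return inertia + cognitive + social

# Example: one position update for a single particle in two dimensions,
# using commonly cited parameter values (illustrative, not prescriptive).
rng = np.random.default_rng(0)
x, v = np.zeros(2), rng.random(2)
pbest, gbest = np.array([1.0, 1.0]), np.array([2.0, -1.0])
v = pso_velocity_update(v, x, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=rng)
x = x + v
```

The sketch shows why the algorithm is sensitive to these values: a larger w preserves more of the previous velocity and favours exploration, while larger c1 and c2 strengthen the pull toward known good positions and favour exploitation.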
The remainder of this chapter is organized as follows. Section 1.1 provides further motivation for the investigation of SAPSO strategies. The primary objectives of this thesis are then provided in Section 1.2. Finally, Section 1.3 provides a detailed outline of the remainder of this thesis.


1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Thesis Outline
2 Particle Swarm Optimization 
2.1 The Particle Swarm Optimization Algorithm
2.2 Neighbourhood Topologies
2.3 Theoretical Particle Stability
2.4 Velocity Clamping
2.5 Summary
3 Self-Adaptive Particle Swarm Optimizers 
3.1 Non-Adaptive and Time-Varying Parameter Control Strategies
3.2 True Self-Adaptive Particle Swarm Optimizers
3.3 Summary
4 Stability Analysis of Self-Adaptive Particle Swarm Optimizers 
4.1 Empirical Analysis of Self-Adaptive Particle Swarm Optimizers
4.2 Critical Analysis of Self-Adaptive Particle Swarm Optimizers
4.3 Summary
5 Analysis of Inertia Weight Control Strategies 
5.1 Motivation
5.2 Theoretical Stability Analysis
5.3 Experimental Procedure
5.4 Empirical Results and Discussion
5.5 Summary
6 Investigating Optimal Parameter Regions 
7 Optimal Parameter Regions and the Time-Dependence of Control Parameter Values
8 An Adaptive Particle Swarm Optimization Algorithm Based on Optimal Parameter Regions
9 A Parameter-Free Particle Swarm Optimization Algorithm using Performance Classifiers
10 Gaussian-Valued Particle Swarm Optimization 
11 Conclusions 
Bibliography
A Benchmark Problems
B Acronyms
C Symbols
