# Analysis of Dynamic Multi-objective Optimisation Benchmark Functions


## Optimisation Concepts

Each optimisation problem contains one or more objective functions and a set of decision variables; most optimisation problems also contain a set of constraints. Optimisation problems can be classified according to a number of characteristics, including the number of decision variables, the type of decision variables, the degree of linearity of the objective functions, the type of constraints, the number of optimisation criteria or objectives, and the number of optima [36, 55]. These concepts are discussed in more detail below.

The objective function represents the quantity to be optimised, i.e. the quantity to be minimised or maximised. It is also referred to as the cost function or optimisation criterion. If the problem to be optimised is expressed using only one objective function, it is referred to as a SOOP. However, if a problem has more than one objective that have to be optimised simultaneously, it is called a MOOP.

Each objective function has a vector of decision variables that influence its value. A search algorithm therefore iteratively modifies the values of these variables to find the optimum of the objective function. If x represents the vector of decision variables, the value of the objective function for specific values of the variables is quantified by f(x). Therefore, f(x) also quantifies the quality of the candidate solution x.
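To make these definitions concrete, the following minimal sketch uses the well-known sphere function as an illustrative objective; it is not a problem from this thesis, only an example of how f(x) assigns a quality value to a candidate solution x:

```python
def f(x):
    """Illustrative objective function (sphere): the quantity to be
    minimised. The global optimum is f(x) = 0 at x = (0, ..., 0)."""
    return sum(xi ** 2 for xi in x)

# A search algorithm iteratively modifies the decision variables in x
# to reduce f(x); f(x) quantifies the quality of the candidate solution.
candidate = [0.5, -0.3, 0.1]
quality = f(candidate)  # 0.25 + 0.09 + 0.01 ≈ 0.35
```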

## Pareto-optimal Set and Pareto Optimal Front

For SOOPs, where only one objective is optimised, local and global optima are defined as presented in Section 2.1.2. However, when dealing with a MOOP, the various objectives are normally in conflict with one another, i.e. improvement in one objective leads to a worse solution for at least one other objective. For the manufacturing example (Example 2.2 in Section 2.2), the various objectives, namely to minimise the time required to manufacture a specific number of products, to minimise the time that a specific machine is idle, and to minimise cost, are in conflict with one another. MOOPs therefore do not have a single optimum, but a set of trade-off solutions, and the definition of optimality has to be re-defined. This section discusses the definition of optimality for MOO.

When solving a MOOP, the goal is to find a set of trade-off solutions where, for each of these solutions, no objective can be improved without causing a worse value for at least one of the other objectives. These solutions are referred to as non-dominated solutions, and the set of such solutions is called the non-dominated set or Pareto-optimal set (POS). The corresponding objective vectors in objective space are referred to as the Pareto-optimal front (POF) or Pareto front. These concepts and definitions are now discussed in more detail.
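The dominance relation described above can be sketched directly in code. `dominates` and `non_dominated` below are illustrative helper names (not from this thesis), assuming all objectives are minimised:

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimisation):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(vectors):
    """Return the non-dominated subset of a list of objective vectors."""
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three are trade-offs.
front = non_dominated(points)  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```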

### Analysis of Dynamic Multi-objective Optimisation Benchmark Functions

“Without a standard there is no logical basis for making a decision or taking action.” – Joseph M. Juran

Dynamic multi-objective optimisation problems are created by adjusting MOOPs in one or both of the following ways: changing the objective functions over time, or changing the constraints over time. This thesis focusses on unconstrained DMOOPs with static boundary constraints and objective functions that change over time.

In order to determine whether an algorithm can solve DMOOPs efficiently, DMOOPs should be used that test the ability of the algorithm to overcome certain difficulties. Such DMOOPs are called benchmark functions. One of the main problems in the field of DMOO is the lack of standard benchmark functions. This chapter seeks to address this problem by evaluating the current state-of-the-art benchmark functions presented in the DMOO literature, to establish whether they efficiently evaluate the abilities of DMOO algorithms.

MOO benchmark functions that were adapted to develop DMOOPs, and the characteristics that an ideal set of MOO benchmark functions should have, are discussed in Section 3.1. Current benchmark functions used in the DMOO literature are discussed in Section 3.2. Furthermore, approaches to develop DMOOPs with either an isolated or a deceptive POF are proposed. New DMOOPs with complicated POSs, i.e. POSs that are defined by non-linear functions and where each decision variable has a different POS, are introduced. Characteristics that an ideal DMOO benchmark function suite should have are also presented, and benchmark functions are suggested for each identified characteristic. Finally, a summary of this chapter is provided in Section 3.3.
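As a sketch of how objective functions are made to change over time, the discretised time variable widely used in the DMOO literature (following Farina et al.'s formulation) can be written as below. `f2_moving` is an illustrative time-dependent objective, not one of the benchmarks analysed in this chapter:

```python
import math

def t_of(tau, n_t=10, tau_t=5):
    """Discrete time t = (1/n_t) * floor(tau / tau_t): tau is the
    iteration counter, tau_t the number of iterations between changes
    (frequency), and n_t the severity (larger n_t gives milder changes)."""
    return (1.0 / n_t) * math.floor(tau / tau_t)

def f2_moving(x, t):
    """Illustrative time-dependent objective: its optimum tracks the
    moving value G(t), so the POS shifts whenever t changes."""
    G = math.sin(0.5 * math.pi * t)
    return sum((xi - G) ** 2 for xi in x)
```

Because t stays constant for tau_t iterations and then jumps, the environment changes in discrete steps rather than continuously.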

#### Multi-objective Optimisation Benchmark Functions

Benchmark functions test how well an algorithm can overcome various types of difficulties when trying to find the true POF. When an algorithm solves a MOOP, its goal is to find solutions that are as close as possible to the true POF and that have a uniform spread. Therefore, benchmark problems should test whether an algorithm can achieve this goal when faced with either multi-modality or deception (such as local POFs and isolated optima), which may prevent the algorithm from converging towards the true POF; or a POF that is non-convex, discontinuous or non-uniform, which may prevent the algorithm from finding a uniform spread of solutions [36, 49]. Section 3.1.1 discusses characteristics of ideal benchmark function suites. Furthermore, two MOO benchmark function suites that were adapted to develop DMOOPs, namely the ZDT [38] and DTLZ functions [49], are discussed in Sections 3.1.2 and 3.1.3 respectively.
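As an example of the structure shared by the ZDT functions referred to above, ZDT1 (a two-objective problem with a convex POF) can be sketched as follows, following Zitzler et al.'s standard definition:

```python
import math

def zdt1(x):
    """ZDT1 (Zitzler et al.): two-objective benchmark with a convex POF.
    x lies in [0, 1]^n; the true POF is reached when x2 = ... = xn = 0,
    which gives g = 1 and f2 = 1 - sqrt(f1)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    h = 1.0 - math.sqrt(f1 / g)
    f2 = g * h
    return f1, f2

# A point on the true POF (x2 = x3 = 0, so g = 1): f2 = 1 - sqrt(0.25) = 0.5
on_front = zdt1([0.25, 0.0, 0.0])  # (0.25, 0.5)
```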

#### Dynamic Multi-Objective Optimisation Benchmark Functions Currently Used

This section discusses benchmark functions used in the DMOO literature to evaluate whether DMOO algorithms can efficiently solve DMOOPs. Due to space constraints, only POSs and POFs with different characteristics are illustrated in this section. In all two-objective figures, f2 refers to gh.

Guan et al. [74] suggested creating DMOOPs by replacing objective functions with new objective functions over time. The advantage of this approach is that the new objective function(s) can cause a severe change in the DMOOP and, by selecting the objective functions carefully, various types of changes can be incorporated into the DMOOP. Recently, Wang and Li [156] presented a DMOOP where one subfunction of an objective function changes over time. When objective functions are changed over time, as in the approaches of Guan et al. and Wang and Li, the objective functions should be selected carefully to ensure that the resulting objective functions hinder the algorithm in finding the POF in the various ways discussed in Section 3.1.1.

Another approach was followed by Jin and Sendhoff [90], where a two-objective DMOOP is constructed from a three-objective MOO function. The approach of Jin and Sendhoff has been used by various researchers [110, 111, 112, 108]. However, whether the benchmark functions suggested by Guan et al., Wang and Li, and Jin and Sendhoff adhere to the guidelines of Deb et al. depends on the specific objective functions that are used.

Based on the ZDT [38, 169] and DTLZ [49] functions, Farina et al. [58] developed the first suite of DMOOPs, namely the FDA benchmark functions. The FDA functions are constructed in such a way that each falls into one of the first three DMOOP types, where either the POS or the POF changes over time, or both the POS and the POF change over time.
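As a concrete example of the FDA suite, FDA1 can be sketched as below, following Farina et al.'s standard formulation: the POS moves with G(t) while the POF remains f2 = 1 - sqrt(f1), so the POS changes over time but the POF does not (a Type I DMOOP):

```python
import math

def fda1(x, tau, n_t=10, tau_t=5):
    """FDA1 (Farina et al.): x1 in [0, 1], x2..xn in [-1, 1].
    The POS is x2 = ... = xn = G(t); the POF stays f2 = 1 - sqrt(f1)."""
    t = (1.0 / n_t) * math.floor(tau / tau_t)   # discretised time
    G = math.sin(0.5 * math.pi * t)             # moving POS location
    f1 = x[0]
    g = 1.0 + sum((xi - G) ** 2 for xi in x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))          # f2 = g * h
    return f1, f2

# At tau = 0, t = 0 and G = 0, so x = [0.36, 0, 0] lies on the POS:
point = fda1([0.36, 0.0, 0.0], tau=0)  # f2 = 1 - sqrt(0.36) = 0.4
```

After tau_t iterations the optimal values of x2..xn shift towards the new G(t), which is what forces a DMOO algorithm to track the moving POS.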


Contents :

• List of Figures
• List of Algorithms
• List of Tables
• 1 Introduction
• 1.1 Motivation
• 1.2 Objectives
• 1.3 Contributions
• 1.4 Research Methodology
• 1.5 Thesis Outline
• I Optimisation Background
• 2 Formal Definitions
• 2.1 Single Objective Optimisation
• 2.1.1 Optimisation Concepts
• 2.1.2 Types of Solutions
• 2.2 Multi-objective Optimisation
• 2.2.1 Multi-objective Optimisation Problems
• 2.2.2 Pareto-optimal Set and Pareto Optimal Front
• 2.2.3 Solving a Multi-objective Optimisation Problem
• 2.3 Dynamic Single-objective Optimisation
• 2.3.1 Dynamic Single-objective Optimisation Problem
• 2.3.2 Dynamic Environment Types
• 2.4 Dynamic Multi-objective Optimisation
• 2.4.1 Dynamic Multi-objective Optimisation Problem
• 2.4.2 Dynamic Environment Types
• 2.5 Summary
• 3 Analysis of Dynamic Multi-objective Optimisation Benchmark Functions
• 3.1 Multi-objective Optimisation Benchmark Functions
• 3.1.1 Ideal Benchmark Function Characteristics
• 3.1.2 ZDT Functions
• 3.1.3 DTLZ Functions
• 3.2 Dynamic Multi-Objective Optimisation Benchmark Functions
• 3.2.1 Dynamic Multi-Objective Optimisation Benchmark Functions Currently Used
• 3.2.2 Dynamic Multi-objective Optimisation Problems with an Isolated Pareto Optimal Front
• 3.2.3 Dynamic Multi-objective Optimisation Problems with a Deceptive Pareto Optimal Front
• 3.2.4 Dynamic Multi-objective Optimisation Problems with Complicated Pareto Optimal Sets
• 3.2.5 Ideal Set of Dynamic Multi-objective Optimisation Benchmark Functions
• 3.3 Summary
• 4 Analysis of Dynamic Multi-objective Optimisation Performance Measures
• 4.1 Static Multi-objective Optimisation Performance Measures
• 4.1.1 Outperformance Relations
• 4.1.2 Accuracy Performance Measures
• 4.1.3 Diversity Performance Measures
• 4.1.4 Combined Performance Measures
• 4.2 Current Dynamic Multi-objective Optimisation Performance Measures
• 4.2.1 Accuracy Performance Measures
• 4.2.2 Diversity Performance Measures
• 4.2.3 Robustness Performance Measures
• 4.2.4 Combined Performance Measures
• 4.3 Issues with Current Dynamic Multi-objective Optimisation Performance Measures
• 4.3.1 Losing Track of the Changing Pareto Optimal Front
• 4.3.2 Outliers in the Pareto Optimal Front
• 4.3.3 Boundary Constraint Violations
• 4.3.4 Objective Space versus Decision Space
• 4.3.5 Comparing Performance of Algorithms
• 4.4 Summary
• II Computational Intelligence Algorithms
• 5 Population-based Single-objective Algorithms
• 5.1 Particle Swarm Optimisation
• 5.1.1 Initialising the Swarm
• 5.1.2 Stopping conditions
• 5.1.3 Calculating the Velocity
• 5.1.4 Calculating the Position
• 5.1.5 Calculating the pbest and nbest
• 5.2 Genetic Algorithms
• 5.2.1 Initialising the Population
• 5.2.2 Fitness Evaluation
• 5.2.3 Selection Operator
• 5.2.4 Cross-Over Operator
• 5.2.5 Mutation Operator
• 5.3 Summary
• 6 Population-based Multi-objective Optimisation Algorithms
• 6.1 History of Multi-Objective Optimisation
• 6.2 Non-dominated Sorting Genetic Algorithm II
• 6.2.1 NSGA-II Algorithm
• 6.2.2 Producing Offspring
• 6.2.3 Fast Non-dominated Sorting
• 6.2.4 Selecting a New Population
• 6.3 Multi-objective Particle Swarm Optimisation
• 6.3.1 Initialising the Swarm
• 6.3.2 Calculation of Velocity
• 6.3.3 Calculation of pbest
• 6.4 Cooperative-coevolution Evolutionary Algorithm
• 6.4.1 CCEA Algorithm
• 6.4.2 Initialisation
• 6.4.3 Evaluation of Individuals
• 6.4.4 Rank Assignment
• 6.4.5 Genetic Operators
• 6.4.6 Extending Operator
• 6.5 Summary
• 7 Population-based Multi-objective Vector Evaluated Approaches
• 7.1 Vector Evaluated Genetic Algorithm
• 7.2 Vector Evaluated Particle Swarm Optimisation Algorithm
• 7.2.1 Original VEPSO Algorithm
• 7.2.2 Extensions to the VEPSO Algorithm
• 7.3 Vector Evaluated Differential Evolution Algorithm
• 7.4 Hybrid Vector Evaluated Algorithm
• 7.5 Summary
• 8 Population-based Dynamic Multi-objective Optimisation Algorithms
• 8.1 Dynamic Multi-objective Algorithms
• 8.1.1 Multi-objective Optimisation Algorithms adapted for Dynamic Multi-objective Optimisation
• 8.1.2 New Computational Intelligence Algorithms
• 8.1.3 Converting Dynamic Multi-objective Optimisation Problems into Single Multi-objective Optimisation Problems
• 8.1.4 Generic Extensions for Dynamic Multi-objective Optimisation Algorithms
• 8.1.5 Prediction-based Approaches
• 8.2 Summary
• III Dynamic Vector Evaluated Particle Swarm Optimisation
• 9 Introduction to Dynamic Vector Evaluated Particle Swarm Optimisation Algorithm
• 9.1 Dynamic Vector Evaluated Particle Swarm Optimisation Algorithm
• 9.3.1 Change Detection
• 9.3.2 Guide Update Approaches
• 9.4 Effectiveness of Guide Update Approaches
• 9.4.1 Experimental Setup
• 9.4.2 Results
• 9.5 Summary
• 10 Sensitivity Analysis of Dynamic Vector Evaluated Particle Swarm Optimisation Algorithm
• 10.1 Experimental Setup
• 10.2 Results
• 10.2.1 Management of Boundary Constraint Violations
• 10.2.2 Knowledge Sharing Swarm Topologies
• 10.2.3 Responses to Change
• 10.3 Summary
• 11 Comparing the Dynamic Vector Evaluated Particle Swarm Optimisation Algorithm against State-of-the-art Dynamic Multi-objective Optimisation Algorithms
• 11.1 Experimental Setup
• 11.1.1 Parameter Optimisation of Dynamic Multi-objective Optimisation Algorithms
• 11.1.2 Experiments Comparing the Performance of Dynamic Multi-objective Optimisation Algorithms
• 11.2 Results
• 11.3 Summary
• 12 Conclusions
• 12.1 Summary of Conclusions
• 12.2 Future Research
• Bibliography
• A List of Symbols
• B List of Acronyms
• C Calculating the True POS and POF
• C.1 Example 1: FDA2Camara
• C.2 Example 2: FDA
• D Additional Data and Figures for Conducted Experiments
• D.1 Guide Update Approaches
• D.2 Sensitivity Analysis
• D.3 Comparison of Dynamic Multi-objective Optimisation Algorithms
• E List of Publications

Solving dynamic multi-objective optimisation problems using vector evaluated particle swarm optimisation