
## Mono-level optimization

This part, State of the art, offers a critical review of mono-level and multi-level optimization. In Chapter 2, mono-objective, mixed-integer and multi-objective optimization are presented briefly. Chapter 3 introduces formal approaches to the optimization of complex systems and explains the concept of analytical target cascading (ATC).

In this study, the mono-level optimization domain includes all kinds of optimization that deal with problems having a mono-level structure, namely mono-objective optimization and multi-objective optimization. Furthermore, both kinds may involve various types of design variables (integer, discrete, zero-one and continuous), leading to mixed-integer optimization.

A classification is needed to delimit the mono-objective optimization domain. Indeed, according to the state of the art, there are mainly two classes of mono-objective optimization (Figure 2.1):

1. combinatorial optimization,

2. and continuous optimization.

### Continuous optimization

As its name indicates, in continuous optimization the design variables are all continuous. Generally, linear programming (LP) is separated from non-linear programming (NLP), which is considered hard optimization.

**Linear programming**

This kind of optimization treats problems whose objective and constraint functions are all linear. In practice, however, especially in electrical engineering, these functions are mostly non-linear or cannot be expressed analytically.

**Non-linear programming**

Optimization problems whose mathematical models have non-linear equations are called Non-Linear Programming (NLP) problems. Note that the presence of only one non-linear equation in the model is sufficient to classify the problem as an NLP problem. In general, with NLP two kinds of optimization are highlighted:

1. local optimization,

2. and global optimization.

**Local optimization**

Local search generally needs a starting point X0: it moves around the neighbourhood of X0 and returns the best solution found in this region, i.e. a local optimum. In several practical cases, however, it is necessary to find the best solution in the whole search space, i.e. a global solution.
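As an illustration of this dependence on the starting point, a local search can be sketched as a simple stochastic hill climber: it samples neighbours of the current point and keeps only improving moves, so it ends in whichever basin contains X0. This is a minimal illustrative sketch, not one of the algorithms studied in this thesis.

```python
import random

def local_search(f, x0, step=0.1, iters=1000, seed=0):
    """Stochastic hill climbing: perturb the current point and keep
    the neighbour whenever it improves the objective."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        y = x + rng.uniform(-step, step)
        fy = f(y)
        if fy < fx:  # accept only improving moves
            x, fx = y, fy
    return x, fx

# A one-dimensional function with two minima, near x = -1.30 and x = 1.13
f = lambda x: x**4 - 3 * x**2 + x

xL, _ = local_search(f, x0=-2.0)  # converges into the left basin
xR, _ = local_search(f, x0=2.0)   # converges into the right basin
```

Depending on the starting point, the method returns a different local optimum; here only the left one (near x = -1.30) is the global solution.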

**Global optimization**

The emerging field of Global Optimization (GO) deals with mathematical programming problems in the presence of multiple local optima and seeks to find the best (global) solution. Unfortunately, looking for a global solution is an optimization problem that is intrinsically very difficult to solve. Indeed, based on both theoretical analysis (introduced below) and practical experience, the time required to solve a GO problem increases rapidly, indeed exponentially, with the number of variables and constraints [Minoux, 1983], [Adeli, 1994], [Horst & Tuy, 1995], [Venkataraman, 2001].

In other terms, global optimization problems belong to the complexity class of NP-hard problems. This implies natural limits on the solvability of such problems in practice, as illustrated in Section 2.1.2.

This thesis does not deal with LP problems but focuses on NLP ones, especially global optimization problems.

### Combinatorial optimization

Contrary to continuous optimization, combinatorial optimization deals with discrete, integer and zero-one (binary) design variables. The number of possible solutions is finite but generally very large: a combinatorial number of about $n!$, $2^n$, ..., where $n$ is the number of design variables. It is therefore impossible to explore all the solutions (the different combinations of variable values) in a moderate time.
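To give a feel for these orders of magnitude, the following sketch counts candidate solutions and converts one (hypothetical) evaluation rate of one candidate per microsecond into wall-clock time:

```python
import math

# Number of candidate solutions for n design variables:
# 2**n for binary problems, n! for permutation problems.
for n in (10, 20, 30):
    print(n, 2**n, math.factorial(n))

# At one candidate per microsecond, enumerating the 20! permutations
# of a 20-variable problem would take roughly 77,000 years.
seconds = math.factorial(20) * 1e-6
years = seconds / (3600 * 24 * 365)
print(round(years))
```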

**Mixed-integer optimization**

As its name indicates, this kind of optimization deals with continuous, integer, discrete and zero-one design variables at the same time. In this thesis it will be shown how to modify global optimization algorithms, such as genetic algorithms, to solve optimization problems with several kinds of variables.
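As a hedged sketch of the general idea (not the operators developed in this thesis), a genetic algorithm can handle mixed variables by tagging each gene with its type and mutating it accordingly; the function name, type tags and mutation rates below are illustrative assumptions.

```python
import random

# Hypothetical mixed-variable mutation operator: each gene carries a
# type tag ("continuous", "integer", "binary", or a list of allowed
# discrete values) and is perturbed accordingly.
def mutate(genes, types, rng, rate=0.3):
    out = []
    for g, t in zip(genes, types):
        if rng.random() > rate:
            out.append(g)                        # gene left unchanged
        elif t == "continuous":
            out.append(g + rng.gauss(0.0, 0.1))  # small Gaussian step
        elif t == "integer":
            out.append(g + rng.choice([-1, 1]))  # +/- 1 step
        elif t == "binary":
            out.append(1 - g)                    # bit flip
        else:                                    # discrete set of values
            out.append(rng.choice(t))
    return out

rng = random.Random(1)
child = mutate([0.5, 3, 1, 2.5],
               ["continuous", "integer", "binary", [1.0, 2.5, 4.0]],
               rng)
```

The point of the tagging is that every mutated gene stays in its own domain: integers remain integers, binary genes remain 0/1, and discrete genes stay inside their allowed set.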

#### Mono-objective optimization

First, mono-objective optimization is discussed, followed by a brief review of mixed-integer optimization, where the design variables that define the function to be optimized can be continuous, discrete or zero-one. Thereafter, multi-objective optimization is introduced.

**Mono-objective optimization**

In the following, we first discuss the mathematical formulation of a mono-objective optimization problem. Thereafter, some elements of global optimization and the analytical conditions for the convergence of a local search are given. Then we try to give a classification and list the most important optimization methods for solving mono-objective optimization problems.

**Convexity**

A minimization problem is called convex if the objective function $f(X)$ is convex and the set of constraints $g_j(X)$ and $h_i(X)$ bounds a convex domain; otherwise the problem is non-convex [Adeli, 1994, Venkataraman, 2001]. Accordingly, linear programming (LP) problems are always convex [Adeli, 1994]. However, non-linear programming (NLP) problems are in most cases non-convex and thus hard to solve.

Furthermore, if a minimization problem is convex then there is only one minimum, which is the solution [Adeli, 1994]. On the contrary, in non-convex problems several local minima may exist and it is more difficult to find the global optimum.

This study focuses on how to find the global solution of non-convex NLP problems. It is thus necessary to clarify terms such as convexity, convex set, and convex and non-convex functions. For this purpose, a few definitions are given below.

Definition 2.1.1 (Convexity [Venkataraman, 2001]) If an optimization problem can be shown to be convex, then the local minimum will also be a global minimum. A convex optimization problem requires the objective function to be convex. If constraints are present, then all of the inequality constraints must be convex, while the equality constraints must be linear.

Definition 2.1.2 (Convex Set [Venkataraman, 2001]) This usually relates to the design vector (or design point) of the problem. Consider two design points X1 and X2. As before, the set S defines a design region and by implication both X1 and X2 belong to S. The line joining X1 and X2 contains a whole series of new design points. If all of these points also belong to S, then S is a convex set. A closed set S requires that the points X1 and X2 can also be on the boundary. If, however, the line joining X1 and X2 contains some points that do not belong to S, then S is not a convex set.

In other words, the domain $S$ is convex if, for $0 < \lambda < 1$, any point $X = \lambda X_1 + (1-\lambda)X_2$ is also inside the domain. For a two-dimensional design problem, examples of convex and non-convex domains are shown in Figures 2.2(a) and 2.2(b) respectively. As in Figure 2.2(c), the intersection of convex sets is also convex. Sets bounded by straight lines, as in the example of Figure 2.2(d), are always convex. A convex set may be bounded or unbounded.

Definition 2.1.3 (Convex Function [Venkataraman, 2001]) A convex function is defined only for a convex set. Once again let S be a convex set from which two design points X1 and X2 are selected. Let f be the function whose convexity is to be established. Consider any point Xa on a line joining X1 and X2. The function f is convex if f(Xa) is less than the value of the corresponding point on the line joining X1 and X2.

In other terms, a mathematical condition for convexity is that the Hessian matrix of the function $f$ must be positive semi-definite at all points in the set $S$. This corresponds to the KT conditions (presented below on page 26) for an unconstrained minimization problem.
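Definition 2.1.3 can also be probed numerically: sampling random pairs of points and checking the chord condition $f(\lambda X_1 + (1-\lambda)X_2) \le \lambda f(X_1) + (1-\lambda)f(X_2)$ can disprove convexity, since a single violation suffices, although sampling can never prove it. A minimal sketch for one-dimensional functions, with an illustrative function name:

```python
import random

def is_convex_on_samples(f, lo, hi, trials=2000, seed=0, tol=1e-9):
    """Probe the chord condition f(a*x1 + (1-a)*x2) <= a*f(x1) + (1-a)*f(x2)
    on random pairs from [lo, hi]; one violation disproves convexity."""
    rng = random.Random(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(lo, hi), rng.uniform(lo, hi)
        a = rng.random()
        xa = a * x1 + (1 - a) * x2
        if f(xa) > a * f(x1) + (1 - a) * f(x2) + tol:
            return False  # chord lies below the function: not convex
    return True  # no violation found (consistent with convexity)

print(is_convex_on_samples(lambda x: x * x, -5, 5))            # x^2 is convex
print(is_convex_on_samples(lambda x: x**4 - 3 * x**2, -5, 5))  # two minima: not convex
```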

As with the convexity concept, global and local optima should be defined due to their wide use in this thesis.

**Global versus local solution**

Following [Horst & Tuy, 1995], the global optimum of an optimization problem is defined as follows:

Definition 2.1.4 (Global minimum) Given a non-empty closed set $S \subseteq \mathbb{R}^n$ and a continuous function $f : A \to \mathbb{R}$, where $A \subseteq \mathbb{R}^n$ is a suitable set containing $S$, the point $X^*$ satisfying

$$f(X^*) \leq f(X), \quad \forall X \in S$$

is called a global solution of $f$ over $S$. The corresponding value of $f$ is called the global minimum of $f$ over $S$.

Then an efficient optimization method has to find at least one point $X^* \in S$ satisfying $f(X^*) \leq f(X)$ for all $X \in S$, or show that such a point does not exist.

According to the literature [Fox, 1971], [Minoux, 1983], [Adeli, 1994], [Horst & Tuy, 1995], [Venkataraman, 2001], a global minimum of the function $f(X)$ exists if the function is continuous on the non-empty feasible set $S$ and $S$ is closed and bounded. According to [Venkataraman, 2001] and [Minoux, 1983], the conditions for the existence of $X^* \in S$ are attributed to the well-known theorem of Weierstrass.
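On a closed, bounded one-dimensional set, this existence result can be illustrated by brute force: a fine grid approximates the global minimiser that the Weierstrass theorem guarantees. This is an illustrative sketch only, not a practical GO method (its cost grows exponentially with the number of variables):

```python
# Brute-force grid sketch of Definition 2.1.4 on the closed, bounded
# set S = [-2, 2]: a continuous f attains its global minimum there,
# and a fine enough grid approximates the minimiser.
def grid_min(f, lo, hi, n=100000):
    best_x, best_f = lo, f(lo)
    for i in range(1, n + 1):
        x = lo + (hi - lo) * i / n
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Two minima: a local one near x = 1.13 and the global one near x = -1.30
x_star, f_star = grid_min(lambda x: x**4 - 3 * x**2 + x, -2.0, 2.0)
```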

However, it is important to note that while the global minimum is guaranteed when these conditions are satisfied, their failure does not negate the existence of a global solution. For the sake of simplicity and clarity, throughout this thesis it will be assumed that a solution $X^* \in S$ exists. Moreover, it is essential to realize that a global optimum may still exist even if the convexity conditions are not met.

When the feasible region or the objective function of an optimization problem is not convex, there may be several local minima, where a local minimum $X^*$ is defined as follows:

Definition 2.1.5 (Local minimum) [Horst & Tuy, 1995] Let $\|\cdot\|$ denote the Euclidean norm in $\mathbb{R}^n$ and let $\varepsilon > 0$ be a real number. Then an $\varepsilon$-neighbourhood of a point $X^* \in \mathbb{R}^n$ is defined as the open ball $N(X^*, \varepsilon) = \{X \in \mathbb{R}^n : \|X - X^*\| < \varepsilon\}$ centred at $X^*$ with radius $\varepsilon$.

A real-valued function $f$ defined on $\mathbb{R}^n$ is said to have a local minimum at the point $X^*$ if there exists some $\varepsilon > 0$ such that $f(X^*) \leq f(X), \; \forall X \in N(X^*, \varepsilon)$.

To put it simply, a local minimum $X^*$ is a point for which there exists some $\varepsilon > 0$ such that $f(X) \geq f(X^*)$ for all $X$ with $\|X - X^*\| < \varepsilon$. That is to say, on some region around $X^*$ all the function values are greater than or equal to the value at that point.

To conclude, note that since $\max f(X) = -\min(-f(X)), \; X \in S$, global maximization problems are also included in equation (2.1) presented on page 21. Global and local maxima are therefore also covered by these definitions.

**Feasibility versus optimality**

In most cases, the design space $S$ is specified by a set of constraints, equalities or inequalities, that the design points $X \in S$ have to satisfy.

In the $n$-dimensional Cartesian space, the point $X$ corresponds to a vector of length $n = |X|$, where $|X|$ denotes the number of design variables. The boundaries of the region are part of the (closed) set $S$: the lower and upper bounds are denoted $X^{Low}$ and $X^{Up}$. The length of the vector $X$ should be a finite number.

All points $X = [x_1, x_2, \ldots, x_n] \in S$ which satisfy the constraints $h_i(X), \; i = 1, \ldots, l$ and $g_j(X), \; j = 1, \ldots, m$ are called feasible designs and form the feasible region. Because the optimal solution must be feasible, feasibility is more important than optimality [Venkataraman, 2001], [Adeli, 1994], [Fox, 1971], [Minoux, 1983].
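The feasibility test itself is straightforward: a point is feasible when every equality constraint holds within a tolerance and every inequality satisfies $g_j(X) \le 0$. A minimal sketch, with made-up constraint functions for illustration:

```python
# Illustrative feasibility test for the standard formulation: X is
# feasible if all equalities h_i(X) = 0 hold within a tolerance and
# all inequalities g_j(X) <= 0 are satisfied.
def is_feasible(X, eqs=(), ineqs=(), tol=1e-6):
    return (all(abs(h(X)) <= tol for h in eqs)
            and all(g(X) <= tol for g in ineqs))

# Example constraints (hypothetical): inside the unit disc, on the line x1 = x2
g1 = lambda X: X[0]**2 + X[1]**2 - 1.0
h1 = lambda X: X[0] - X[1]

print(is_feasible([0.5, 0.5], eqs=[h1], ineqs=[g1]))  # on the line, inside the disc
print(is_feasible([1.0, 0.0], eqs=[h1], ineqs=[g1]))  # violates the equality
```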

**Table of contents:**

**1 Background**

1.1 Introduction

1.2 Complex system analysis

1.2.1 Complex system

1.2.2 Why break down a system?

1.3 Engineering design optimization

1.4 New issues

1.4.1 Mixed-integer design optimization

1.4.2 Multi-objective design optimization

1.4.3 Multi-level design optimization

1.5 Thesis overview

I State of the art

**2 Mono-level optimization**

2.1 Mono-objective optimization

2.1.1 Optimization problem formulation

2.1.2 Global optimization

2.1.3 Solving mono-objective optimization problems

2.1.4 Outcome

2.2 Mixed-integer optimization

2.2.1 Optimization problem formulation

2.2.2 Solving mixed-integer optimization problem

2.2.3 Outcome

2.3 Multi-objective optimization

2.3.1 Optimization problem formulation

2.3.2 Pareto dominance and optimality

2.3.3 Solving multi-objective optimization problems

2.3.4 Outcome

2.4 Conclusion

**3 Multi-level optimization**

3.1 Overview

3.2 Analytical target cascading

3.3 Optimization problem formulation

3.3.1 Break down

3.3.2 Coordination

3.3.3 System level sub-problem

3.3.4 Single ATC element in the hierarchy

3.4 Conclusion

II Improvement of algorithms

**4 Tuning of optimization algorithm’s parameters**

4.1 Initial points sampling

4.1.1 Sampling techniques

4.1.2 Optimal design of a traction system

4.1.3 Outcome

4.2 Other parameters

4.2.1 Optimization test case

4.2.2 GA: Optimal tuning of parameters

4.2.3 ACO and PSO: Optimal tuning of parameters

4.2.4 Outcome

4.3 Conclusion

**5 Mono-objective optimization algorithms**

5.1 Mono-objective optimization design of a brushless DC wheel motor

5.1.1 Optimization problem formulation

5.1.2 Implicit equation solving

5.1.3 Computation of derivatives

5.2 Constraint handling

5.2.1 Use of SQP

5.2.2 Use of GA

5.2.3 Use of ACO and PSO

5.2.4 Use of hybrid algorithm

5.3 Optimization results comparison

5.4 Conclusion

**6 Multi-objective optimization algorithms**

6.1 Constraint handling

6.2 Performance assessment

6.3 Multi-objective optimal design of a brushless DC wheel motor

6.3.1 Optimization problem formulation

6.3.2 Optimization results comparison

6.3.3 Outcome

6.4 Adaptive weighted sum of objectives (AWS)

6.4.1 Optimization results

6.4.2 Outcome

6.5 Fast building of a Pareto set

6.5.1 Designer’s dilemma

6.5.2 Concept of the proposed method

6.5.3 Multi-objective optimal design of a safety isolating transformer

6.5.4 Outcome

6.6 Conclusion

**7 Mixed-integer optimization algorithms**

7.1 Mixed-integer mono-objective optimization algorithms

7.1.1 Mixed-integer genetic algorithm

7.1.2 Mixed-integer optimization design of a safety isolating transformer

7.1.3 Outcome

7.2 Mixed-integer multi-objective optimization algorithms

7.2.1 Mixed-integer NSGA-II

7.2.2 Mixed-integer optimization design of a traction motor

7.2.3 Outcome

7.3 Conclusion

**8 Multi-level optimization**

8.1 Consistency for unattainable targets

8.1.1 Weighting update method

8.1.2 Lagrangian dual coordination

8.2 Improvement of analytical target cascading convergence

8.2.1 Multi-objective formulation

8.2.2 Proposed algorithm

8.2.3 Demonstration example

8.3 Application case: Object-based decomposition of a railway traction system

8.4 Tram traction system modelling

8.4.1 Modeling of the traction system

8.4.2 Modeling of the heat sink

8.4.3 Modeling of the traction motor

8.5 Multi-level optimization design of a tram traction system

8.5.1 Optimization problem formulation

8.5.2 Optimization results

8.6 Conclusion

**9 Conclusion**

9.1 Summary of contribution

9.2 Future research and further work

**Appendices**