Meta-model design optimization MDO

Non-intrusive approach

The term "black-box" has come into general use to describe an undisclosed process or function about which we do not need to know how it operates, only that it does. A black-box is a device, system or object which can be viewed only through its inputs and outputs (or transfer characteristics), without any knowledge of its internal workings. Its implementation is "opaque" (black). Throughout this dissertation, the black-box refers to the finite element analysis (FEA) that models the electromagnetic phenomena of a device, e.g. an electrical machine.
The non-intrusive approach considers the FEA as a black-box, and all the effort of device optimization is put into the optimization algorithms. In the first chapter, we detailed different algorithms for solving optimization problems; all of them have pros and cons, but some are better suited than others for the problems dealt with in this thesis. Approximation-based algorithms are of interest since they consider the model as a black-box and fit a meta-model for the optimization process. This approach decouples the black-box model from the optimization process, which reduces the number of calls to the expensive black-box model and renders the optimization tractable in a limited time. The meta-model also offers characteristics desired by the most efficient optimization algorithms, such as smoothness and the possibility of computing gradients, since the meta-model is an analytic function. Thus, derivative-based algorithms excel in optimizing such models; a minimal sketch is given below.
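To make the decoupling concrete, the following is a minimal sketch (not the author's implementation): a handful of expensive evaluations are used to fit an analytic surrogate, here a simple Gaussian RBF interpolant rather than Kriging, and its exact gradient is handed to a derivative-based optimizer. The `expensive_model` stands in for the FEA black-box and anticipates Eq. (2.1); the length-scale `ell = 0.5` and the number of samples are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_model(p):            # stands in for the FEA black-box (Eq. (2.1))
    return np.cos(0.6 * p) + np.cos(3 * p) + np.sin(6 * p + 1.5)

# Few expensive evaluations: the only calls to the black-box
X = np.linspace(0.0, 4.0, 8)
y = expensive_model(X)

# Gaussian RBF interpolant s(p) = sum_i w_i * exp(-(p - X_i)^2 / (2 ell^2))
ell = 0.5
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * ell ** 2))
w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)

def surrogate(p):
    k = np.exp(-(p - X) ** 2 / (2 * ell ** 2))
    return k @ w

def surrogate_grad(p):             # analytic gradient of the surrogate
    k = np.exp(-(p - X) ** 2 / (2 * ell ** 2))
    return np.array([(-(p - X) / ell ** 2 * k) @ w])

# Derivative-based optimization of the cheap analytic surrogate
res = minimize(lambda z: surrogate(z[0]), x0=[2.0],
               jac=lambda z: surrogate_grad(z[0]),
               bounds=[(0.0, 4.0)], method="L-BFGS-B")
print("surrogate minimizer:", res.x, "surrogate value:", res.fun)
```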
In this chapter, the research focuses on how to exploit meta-models in the context of black-box optimization. We discuss the limitations of the conventional way of using them and propose strategies to tackle these limitations.

Meta-model design optimization MDO

Meta-modelling has led to new areas of research in simulation-based design optimization.
Meta-modelling approaches have advantages over traditional techniques when dealing with the noisy responses and/or high computational cost characteristic of many computer simulations. Meta-models are used in many fields [66], mainly to replace expensive black-box models [67], [68]. In an optimization problem, the objective function and/or constraints are not always cheaply available. Thus, these surrogate models aim to approximate the expensive black-box models from a limited number of evaluations.
As seen in the first chapter, there is a multitude of approximation techniques in the literature; this dissertation focuses on one particular technique, Kriging. This choice is not baseless: researchers such as Jones et al. [33] have argued that Kriging provides a suitable approximation for computer simulations and have conducted a thorough comparison between different approximation techniques.
Optimization using Kriging meta-models was first introduced to tackle unconstrained optimization by Jones et al. [43]. This led to the well-known Efficient Global Optimization (EGO) algorithm that forms the basis of this chapter. EGO belongs to a general class of optimization algorithms referred to as Bayesian analysis algorithms [39].
The flowchart of meta-model-based optimization is presented in Figure 2.1. The first step determines an initial set of parameter values (initial design) using a design of experiments, e.g. Latin Hypercube Sampling (LHS). The black-box (expensive) model is solved for each set of parameters. Afterwards, a meta-model is built from the initial design and the output data of the black-box. The most important part of the process is to find the infill point ("find new promising samples"), i.e. a sample that improves the current best solution of the optimization algorithm and/or increases the meta-model prediction capabilities. This sample is evaluated with the expensive model in the next iteration. Finally, stopping criteria are evaluated to terminate the optimization. A sketch of this loop is given below.
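The loop below is a minimal sketch of this flowchart under assumptions that go beyond the text: it uses SciPy's Latin Hypercube sampler, scikit-learn's Gaussian-process regressor as the Kriging model, an expected-improvement infill criterion maximized on a dense 1-D grid, and the test function of Eq. (2.1) as a stand-in for the expensive FEA.

```python
import numpy as np
from scipy.stats import norm, qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def black_box(p):                       # placeholder for the expensive FEA call
    return np.cos(0.6 * p) + np.cos(3 * p) + np.sin(6 * p + 1.5)

lb, ub = 0.0, 4.0
X = qmc.scale(qmc.LatinHypercube(d=1, seed=0).random(5), lb, ub)  # 1) initial design (LHS)
y = black_box(X).ravel()                                          # 2) expensive evaluations

for _ in range(15):                                               # main MDO loop
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)  # 3) fit Kriging
    cand = np.linspace(lb, ub, 1001).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    imp = y.min() - mu                                            # 4) expected improvement
    ei = imp * norm.cdf(imp / sd) + sd * norm.pdf(imp / sd)
    if ei.max() < 1e-6:                                           # 5) stopping criterion
        break
    x_new = cand[np.argmax(ei)]                                   # infill point
    X, y = np.vstack([X, x_new]), np.append(y, black_box(x_new[0]))

print("best sample:", float(X[np.argmin(y)][0]), "objective:", float(y.min()))
```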
In this section, we will discuss the different steps of the algorithm, from the initial sampling to the stopping criteria, with the corresponding literature and our contributions to the subject. We will use the one-dimensional function
f(p) = cos(0.6p) + cos(3p) + sin(6p + 1.5),   (2.1)
with p ∈ [0, 4].
The function is shown in Figure 2.2 to illustrate the major concepts of how to minimize a black-box model using approximation methods. This function has three local minima and one global minimum (shown as an asterisk in Figure 2.2).
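As a quick check of Eq. (2.1), the short sketch below evaluates the function on a dense grid and locates its interior minima numerically (the asterisk in Figure 2.2 corresponds to the lowest of them); the grid resolution of 4001 points is an arbitrary choice.

```python
import numpy as np

def f(p):                               # test function of Eq. (2.1)
    return np.cos(0.6 * p) + np.cos(3 * p) + np.sin(6 * p + 1.5)

p = np.linspace(0.0, 4.0, 4001)
v = f(p)
# interior local minima: grid points lower than both neighbours
idx = np.where((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]))[0] + 1
print("local minima near p =", np.round(p[idx], 3))
print("global minimum near p =", round(p[v.argmin()], 3), "with f =", round(v.min(), 3))
```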

Design of experiment DoE

The choice of the Design of Experiments (sampling points) plays a critical role in the accuracy of the meta-model and its subsequent use in prediction. The choice of DoE has been addressed in the literature by performing empirical comparisons on test cases [69]. Here, we detail two families of designs of experiments, classical designs and space-filling designs, and discuss their suitability for Kriging models.

A Classical DoE

The first family of DoE consists of designs based on geometric considerations. Full-factorial designs and central-composite designs belong to this category. They were initially developed in the framework of linear regression, e.g. quadratic response surfaces. The idea is to choose the observation points that maximize the quality of statistical inference and minimize the uncertainty on the parameters of the meta-model.
For example, the samples of the full-factorial design lie at the boundary of the domain. For the problem defined in Figure 2.2, the samples are p1 = 0 and p2 = 4, i.e. the borders of the design space.
In a two-dimensional space, the samples are at the corners of the design space, as shown in Figure 2.3.
Although these designs remain reasonable in low dimensions, they require a large number of observations in high dimensions (2^nv, where nv is the number of variables), making them impractical for computationally expensive problems, i.e. an exponential number of evaluations is needed, as illustrated by the sketch below.
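As an illustration of this exponential growth, the following sketch builds a two-level full-factorial design, i.e. the corner points of the design domain; the helper name full_factorial is ours, not from the text.

```python
from itertools import product

def full_factorial(lower, upper):
    """Two-level full-factorial design: all corner points of the box [lower, upper]."""
    return list(product(*zip(lower, upper)))

print(full_factorial([0.0], [4.0]))                 # 1-D case: p1 = 0, p2 = 4
print(full_factorial([0.0, 0.0], [1.0, 1.0]))       # 2-D case: the four corners (Figure 2.3)
print(len(full_factorial([0.0] * 10, [1.0] * 10)))  # nv = 10 -> 2**10 = 1024 evaluations
```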

B Space-filling DoE

A popular alternative to classical DoE is space-filling designs. As the name suggests, they aim to spread the sample points evenly over the design space. One of the most used designs for the construction of the Kriging response surface is Latin Hypercube Sampling (LHS) [69]. Figure 2.4 shows a two-dimensional LHS with four samples. Each dimension is subdivided into four intervals, which produces 4^2 = 16 sub-spaces. The sub-spaces are sampled in such a way that each column and each row contains only one point, and each point is placed randomly within its sub-space. This gives an acceptable spread of the points over the whole design space. The extension of LHS to an n-dimensional space is relatively easy; a minimal sketch is given below.
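A minimal LHS implementation along these lines might look as follows (a sketch on the unit hypercube; the sample would then be rescaled to the actual design space):

```python
import numpy as np

def lhs(n_samples, n_dims, seed=0):
    """Basic Latin Hypercube Sampling on [0, 1)^n_dims."""
    rng = np.random.default_rng(seed)
    # one random permutation of the n_samples cells per dimension,
    # so each row and each column of cells contains exactly one point
    cells = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    # place each point at a random location inside its cell
    return (cells + rng.random((n_samples, n_dims))) / n_samples

print(lhs(4, 2))   # a 4-point, 2-D design as illustrated in Figure 2.4
```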
There exist other sampling techniques such as the Halton sequence, the Hammersley set and the Sobol sequence [70]. These methods offer an affordable way of constructing good space-filling DoE and are suited to high-dimensional problems.
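Halton and Sobol sequences are available, for instance, in SciPy's scipy.stats.qmc module; a short usage sketch for a 2-D design with eight points (a power of two, as the Sobol sampler prefers):

```python
from scipy.stats import qmc

halton = qmc.Halton(d=2, seed=0).random(8)               # Halton sequence
sobol = qmc.Sobol(d=2, scramble=True, seed=0).random(8)  # scrambled Sobol sequence
print(halton)
print(sobol)
```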
Simpson performed a comparison of different DoE methods on several test cases [69] and found that space-filling designs offer better performance than classical designs. Thus, for the rest of this dissertation, we only consider space-filling designs unless stated otherwise.

Table of contents:

Introduction 
1 Literature Review 
1.1 Deterministic Optimization
1.1.1 Optimization problems formulation
1.1.2 Optimization algorithms
1.1.3 Discussion
1.2 Uncertainty
1.2.1 Uncertainty propagation
1.2.2 Optimization under uncertainty
1.2.3 Discussions
1.3 Electrical machines
1.3.1 Physical model
1.3.2 Mathematical model
1.3.3 Numerical model
1.3.4 MagFEM – 2D magnetostatic FE code
1.3.5 Discussion
1.4 Chapter Summary
2 Non-intrusive approach
2.1 Meta-model design optimization MDO
2.1.1 Design of experiment DoE
2.1.2 Expensive evaluations
2.1.3 Fitting Kriging
2.1.4 Finding new samples
2.1.5 Stopping criteria
2.1.6 Analytical examples
2.1.7 Results assessment
2.1.8 Discussion
2.2 Branch and Bound assisted by Meta-models (B2M2)
2.2.1 Proposed algorithm
2.2.2 Initialization
2.2.3 Branching
2.2.4 Meta-modelling
2.2.5 Bounding
2.2.6 Elimination
2.2.7 Selection
2.2.8 Termination
2.2.9 Analytic Examples
2.2.10 Extension to multi-objective optimization
2.2.11 Extension to optimization under uncertainty
2.3 Chapter Summary
3 Intrusive approach 
3.1 How to compute the gradient?
3.1.1 Finite-difference
3.1.2 Mesh-morphing
3.1.3 Discussion
3.2 Explanatory example
3.2.1 Derivation of gradient
3.2.2 The adjoint variable method
3.3 Adjoint variable method for FEA
3.3.1 Finite element model
3.3.2 Derivatives for state variables
3.3.3 Derivatives for design variables
3.3.4 Discussion
3.4 Examples
3.4.1 Solenoid model
3.4.2 TEAM workshop
3.5 Chapter Summary
4 Applications and benchmarking
4.1 Test cases
4.1.1 TEAM Workshop Problem
4.1.2 TEAM Workshop Problem
4.2 Algorithms settings
4.2.1 B2M2 algorithm
4.2.2 SQP algorithm assisted by adjoint variable method
4.2.3 DIRECT
4.2.4 Genetic algorithm
4.3 Results and comparison
4.3.1 Comparison protocol
4.3.2 TEAM Workshop Problem
4.3.3 TEAM Workshop Problem
4.4 Chapter Summary
5 Claw-pole machine
5.1 Electrical machine
5.1.1 Structure of the Claw-pole machine
5.1.2 Operating mode and electrical characteristic
5.1.3 Variability of output current in claw-pole machine
5.1.4 Impact of manufacturing process
5.1.5 Summary
5.2 Metrology
5.2.1 Methodology
5.2.2 Summary
5.3 Raw data and variability modelling
5.3.1 Stator
5.3.2 Rotor
5.3.3 Virtual assembly
5.3.4 Variability modelling
5.3.5 Summary
5.4 FEA model
5.4.1 Geometry
5.4.2 Material properties
5.4.3 Electrical circuit
5.4.4 Model validity
5.4.5 Variability parameters
5.4.6 Summary
5.5 Uncertainty propagation
5.5.1 Sampling
5.5.2 Meta-modelling
5.5.3 Statistical inference
5.5.4 Sensitivity analysis
5.6 Chapter Summary
Concluding remarks
