
## Industrial context and contributions

**Context**

This thesis has been carried out at ArianeGroup, the European leader in access to space through the reliable Ariane 5 vehicle, which has performed over 100 successful launches, and the upcoming Ariane 6, which aims for very high flexibility in both its payload and its target orbit. ArianeGroup designs propulsive systems for civil and strategic purposes, involving both liquid and solid propellants. The latter is mostly handled at the Aquitaine sites, notably at the Le Haillan center, where this thesis was completed. This center specializes in the design of solid propulsive systems, so-called boosters, from the igniter to the nozzle and rear sets. These boosters are strapped onto the launch vehicle and deliver a massive thrust from ignition onward. They burn for only a few minutes and aim to tear the vehicle off the ground at the beginning of the flight. The nozzle and rear sets are also produced on-site.

Of course, real-world trials are costly and should be avoided as much as possible. To this end, the Analysis department, in which this thesis was carried out, gathers vast expertise in numerical simulation. Specifically, fine modeling of aerodynamical, thermochemical, thermomechanical, or mechanical phenomena is required for performing simulation-based engineering and carrying out reliability analyses in collaboration with the different Engineering departments.

As noted above, there is a need for both reliability and optimality. In its recent history, Ariane has shown steady and robust reliability levels, with an unprecedented number of consecutive successes. Given the price associated with each launch, a failure would indeed have dreadful repercussions for the company. However, performance and reliability often conflict, and a proper compromise must be chosen. Over-conservativeness may ensure extreme reliability, but it severely impacts the overall efficiency of the vehicle.

In this context, precise uncertainty quantification must be carried out, and a reliability level must be imposed. Similarly, to ensure good performance under uncertain conditions, adapted robustness measures can be chosen and optimized in place of the original nominal objective function.

In practice, such an uncertainty-based optimization strategy could be applied to ensure that, in an uncertain environment, the highest pressure inside the booster remains under a given threshold while the average overall performance is maximized. Similarly, a safety coefficient is usually associated with each design, denoting whether the material capacity is exceeded at some location during the numerical simulation. Ensuring a probability level for this coefficient to remain in the admissible domain while optimizing the geometry of the rear set is another possible application of Optimization Under Uncertainty. Finally, from a more general viewpoint, trade-offs between the cost and performance of the systems at hand should be compared in a multi-objective setting, with either average or quantile robustness measures. Note that the latter corresponds roughly to a relaxed worst-case strategy.
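The relation between the quantile and worst-case measures can be made concrete with a minimal numerical sketch (the performance draws below are purely illustrative, not taken from any booster model): the 95% quantile tracks the worst case while discarding the most extreme realizations.

```python
import numpy as np

# Illustrative draws of a performance quantity under uncertainty
# (hypothetical Gaussian model, for demonstration only).
rng = np.random.default_rng(0)
perf = rng.normal(10.0, 1.0, 10_000)

mean = perf.mean()              # average robustness measure
q95 = np.quantile(perf, 0.95)   # quantile measure: relaxed worst case
worst = perf.max()              # worst-case measure

# The quantile sits between the average and the worst case,
# ignoring the most extreme outliers of the distribution.
```

The ordering mean < q95 < worst illustrates why a quantile can be read as a relaxed, outlier-insensitive worst-case strategy.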

In summary, the main objectives of the methods developed in the present work are to:

- Tackle a general measure-based formulation for Optimization Under Uncertainty with the most classical statistics (mean, variance, quantile, worst-case).

- Deal with multiple objectives and constraints.

- Achieve very high parsimony, so as to perform uncertainty-based optimization on computationally expensive numerical models.

- Limit the computational overhead of the proposed algorithm, so as to also tackle inexpensive functions efficiently.

- Be flexible concerning the optimization and Uncertainty Propagation (UP) procedures and propose an efficient coupling strategy.

The last objective makes it possible to benefit naturally from future improvements in the optimization and UQ communities, to exploit case-specific optimization strategies, and to tackle atypical settings such as non-parametric uncertainties.

**Contributions**

In light of the above objectives, the main contributions of this thesis to the field of Optimization Under Uncertainty (OUU) are highlighted here.

**Tunable accuracy formulation of OUU problems**

Robustness and reliability measures are quantities that are approximated throughout the optimization process, with refinable accuracy. We assume that these measures can be regarded as random variables at a given design, and more generally as random fields over the design space. The associated distributions must then be estimated.

**Probabilistic multi-objective setting** We introduce a probabilistic framework for constrained multi-objective optimization problems. Such a setting has been exploited in [Khosravi et al., 2018, Khosravi et al., 2019, Coelho, 2014], which proposed probabilistic Pareto dominance rules without any assumption on the distribution shapes. We extend these concepts, notably to dependent random variables.

Besides, a quantitative indicator is proposed for ranking Pareto-optimal designs. This indicator is called the Pareto-Optimal Probability (POP) and naturally handles dependent variables with non-parametric distributions. Similarly to the PUI in [Selçuklu et al., 2020], several metrics are compared in terms of interpretability and computational burden.
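The idea behind such an indicator can be sketched with a simple Monte Carlo estimator: given joint draws of the (minimized) objective measures at each design, the probability of a design being Pareto-optimal is the fraction of draws in which no other design dominates it. This is a hypothetical illustration of the concept, not the specific estimator developed in the thesis.

```python
import numpy as np

def pop_estimate(samples):
    """Monte Carlo estimate of a Pareto-Optimal Probability.

    samples has shape (n_designs, n_draws, n_objectives): joint draws of
    the minimized objective measures. Because draws are joint across
    designs, dependence between designs is naturally accounted for.
    """
    n_designs, n_draws, _ = samples.shape
    counts = np.zeros(n_designs)
    for k in range(n_draws):
        obj = samples[:, k, :]  # realized objectives, one row per design
        for i in range(n_designs):
            dominated = False
            for j in range(n_designs):
                # j dominates i: no worse everywhere, strictly better somewhere
                if j != i and np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i]):
                    dominated = True
                    break
            if not dominated:
                counts[i] += 1
    return counts / n_draws
```

With deterministic draws where one design always dominates the other, the estimator returns probabilities 1 and 0, as expected.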

**Tunable accuracy optimization** An efficient strategy is then proposed for solving optimization problems where objectives and constraints are computed with tunable accuracy. It consists in coupling two primary techniques: first, a Measure Approximation with Tunable Accuracy (MATA) approach, as in [Fusi and Congedo, 2016], which only refines promising individuals up to a given threshold; second, a Surrogate-Assisting (SA) strategy, which constructs a surrogate model of the objective and constraint functions in the design space to bypass further evaluations when the surrogate is sufficiently accurate.

Both techniques are proposed in a very general form. In the following, we investigate the influence of the error distribution on the Surrogate-Assisted Measure Approximation with Tunable Accuracy (SAMATA) strategy and propose efficient and suitable algorithms.
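The MATA half of this coupling can be sketched in a minimal mono-objective form: keep an error box around each measure estimate, and refine only the designs whose box still overlaps the incumbent best. The estimator, noise level, and 3-sigma error bound below are hypothetical stand-ins; the SA surrogate layer is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate(x, n):
    """Tunable-accuracy estimator of a robustness measure rho(x): here the
    sample mean of a noisy evaluation, with a conservative 3-sigma error
    bound shrinking as 1/sqrt(n). (Hypothetical stand-in model.)"""
    draws = (x - 0.3) ** 2 + 0.05 * rng.standard_normal(n)
    return draws.mean(), 3 * 0.05 / np.sqrt(n)

def mata_minimize(designs, n0=16, tol=1e-3, growth=4):
    """Mono-objective MATA sketch: only designs whose error box still
    overlaps the incumbent best (smallest upper bound) are refined."""
    est = {x: (*estimate(x, n0), n0) for x in designs}
    while True:
        ub = min(m + e for m, e, _ in est.values())  # incumbent upper bound
        # promising designs: lower bound beats incumbent, box still coarse
        cand = [x for x, (m, e, _) in est.items() if m - e < ub and e > tol]
        if not cand:
            break
        for x in cand:
            _, _, n = est[x]
            est[x] = (*estimate(x, n * growth), n * growth)
    return min(est, key=lambda x: est[x][0])
```

Non-promising designs are discarded at coarse accuracy, so the refinement budget concentrates on the eventual optimum; this is the parsimony mechanism the full SAMATA strategy generalizes to multiple objectives and constraints.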

**Bounding-Box approximations**

We first assume the error to be uniformly distributed. The estimation error can thus be regarded as an interval (in one-dimensional problems) or a product of intervals (in multi-dimensional problems).

The restriction of the above tunable accuracy setting to such independent uniform distributions has notably been tackled in [Mlakar et al., 2014] and [Fusi and Congedo, 2016], under the name Bounding-Box approach (BBa). The coupling with the Surrogate-Assisting (SA) strategy will be referred to as SABBa.

**Theoretical considerations** Bounding-Box Pareto dominance is shown to be a particular case of the general probabilistic Pareto dominance rule. Explicit formulations are proposed, notably differing from the literature on the management of unfeasible individuals.
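One common conservative form of boxed dominance can be sketched as follows (a minimal illustration, not the explicit formulations derived in the thesis): for minimization, box A dominates box B only when every point of A dominates every point of B, which reduces to comparing the upper corner of A with the lower corner of B.

```python
import numpy as np

def box_dominates(box_a, box_b):
    """Conservative boxed Pareto dominance for minimization.

    Each box is a (lower, upper) pair of arrays over the objectives.
    Box A dominates box B only if its upper corner dominates the lower
    corner of B, i.e. dominance holds for every pair of points in the
    two boxes."""
    lo_a, hi_a = box_a
    lo_b, hi_b = box_b
    return bool(np.all(hi_a <= lo_b) and np.any(hi_a < lo_b))
```

Overlapping boxes are thus mutually non-dominated, which is exactly why refining the boxes (shrinking the intervals) progressively resolves the Pareto ordering.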

Then, under some conservativeness assumptions, we demonstrate the convergence of the proposed approach toward the Pareto-optimal designs. To this end, a specific set sequence of Pareto-optimal individuals is introduced and shown to yield robust estimations of the Pareto front.

**Application to noisy optimization** We illustrate the performance of SABBa on noisy optimization problems with tunable fidelity. Specifically, we illustrate some expected trends concerning the tuning parameters on some analytical test-cases, with specific optimization and surrogate modeling strategies.

**Application to Optimization Under Uncertainty** Finally, efficient formulations are proposed for applying SABBa to OUU problems. Both the SA model and the BB computation and refinement are based on Gaussian Process (GP) surrogate models. Similarly to [Mlakar et al., 2014] and [Baudoui, 2012], a 3σ paradigm is exploited to determine the bounds associated with the performance of each design. Refinement is carried out on multiple robustness and reliability measures simultaneously through an adaptive weighting of statistics-dependent criteria.

This strategy is applied to analytical and engineering test-cases, with excellent efficiency and parsimony compared to classical approaches. It notably shows a much more robust convergence rate than non-adaptive strategies, while permitting a flexible coupling with any optimization and UQ processes of choice.

**Sampling-based approximations**

Then, we propose to compute non-parametric approximations of the error distributions with a sampling-based technique. For a given design, the robustness and reliability measures are represented as a set of realizations that may exhibit complex shapes and some dependency, contrary to all Distribution-Assuming (DA) strategies, such as projected processes [Da Veiga and Delbos, 2013, Baudoui, 2012, Janusevskis and Le Riche, 2013, Williams et al., 2000]. These realizations usually prove more representative of the true measure distributions, and they can be drawn jointly between different designs if a coupled-space surrogate model is available, which has not been proposed in other probabilistic dominance frameworks [Khosravi et al., 2018, Khosravi et al., 2019, Coelho, 2014].

**Low-cost strategy for drawing joint realizations** Issues associated with drawing such joint realizations are highlighted, and an adaptive algorithm is proposed for keeping the associated cost manageable. This strategy shows similarities with the Nyström approximation for the Karhunen-Loève Expansion (KLE) but allows for intuitive control of the overall accuracy and adaptive sampling. A Surrogate-Assisting (SA) model that takes advantage of these joint realizations is then introduced to apply the SAMATA strategy. Quantitative comparisons with SABBa reveal a drastic increase in the overall parsimony and show that the non-parametric character permits better representativeness.
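The essence of joint sampling can be sketched with a GP prior: drawing realizations at all designs at once through a Cholesky factor of the covariance preserves the correlation between designs, which disjoint per-design sampling would destroy. This is a minimal illustration with a zero-mean prior and an assumed squared-exponential kernel, not the adaptive inducing-point algorithm developed in the thesis.

```python
import numpy as np

def rbf_kernel(x1, x2, ell=0.5):
    """Squared-exponential covariance over a 1D design space."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def joint_gp_draws(designs, n_draws, ell=0.5, jitter=1e-9, seed=0):
    """Draw joint realizations of a zero-mean GP at all designs at once.

    Realizations at nearby designs remain strongly correlated, which is
    what permits joint sampling of the measures across the design space.
    The jitter regularizes the Cholesky factorization."""
    K = rbf_kernel(designs, designs, ell) + jitter * np.eye(len(designs))
    L = np.linalg.cholesky(K)
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((len(designs), n_draws))  # (n_designs, n_draws)
```

Because the covariance matrix grows with the number of designs, naive joint sampling scales cubically; this is the cost issue the adaptive inducing-point strategy is designed to keep manageable.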

**Surrogate-Assisting model for disjoint realizations** When coupled-space surrogate models cannot be constructed, the above strategy is not applicable. In this case, we propose an SA model based on Kernel Density Estimation (KDE) methods. From disjoint realizations only, it aims at recovering a non-parametric random field over the whole design space. This construction is achieved through a rejection sampling technique that reduces to a heteroscedastic GP in a Gaussian context. This approach shows both great parsimony and high efficiency in capturing complex distribution shapes.
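The benefit of the KDE building block over a Distribution-Assuming model can be illustrated at a single design: from a set of realizations with a bimodal shape, a KDE recovers both modes, whereas a single-Gaussian fit could not. This sketch uses SciPy's `gaussian_kde` with synthetic realizations; the thesis builds a full random field on top of such estimates.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic realizations of a measure at one design: a bimodal shape
# that a Gaussian (distribution-assuming) model would misrepresent.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2.0, 0.3, 500),
                          rng.normal(2.0, 0.3, 500)])

kde = gaussian_kde(samples)       # non-parametric density estimate
grid = np.linspace(-4.0, 4.0, 401)
density = kde(grid)

# The KDE places mass around both modes and almost none at the antimode,
# capturing the complex distribution shape directly from realizations.
```

A Gaussian fit to the same samples would place its peak at the antimode near zero, exactly where the true density is smallest, which is the representativeness gap the non-parametric model closes.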

## Outline of this work

The manuscript is organized as follows.

Chapter 1 gives a brief introduction to kernel methods with Reproducing Kernel Hilbert Space (RKHS) theory. It notably aims to present Gaussian Processes intuitively so that the reader can fully understand their use in the following chapters.

Chapter 2 introduces the Optimization Under Uncertainty (OUU) problem in a constrained multi-objective context. The notion of Pareto efficiency is formally defined in a probabilistic framework. Finally, the Surrogate-Assisted Measure Approximation with Tunable Accuracy (SAMATA) strategy is illustrated in its most general formulation.

Chapter 3 deals with the specific assumption of an independent uniform error distribution. We illustrate how the concept of Pareto dominance simplifies in this case and make the formulation of SABBa explicit. Theoretical convergence properties can then be demonstrated under some assumptions. SABBa is then applied to arbitrarily noised test-cases, and the impact of some user-defined parameters is illustrated. Next, specific formulations are proposed for applying SABBa to Optimization Under Uncertainty (OUU) problems. Analytical test-cases reveal very high robustness and parsimony for the proposed approach compared to more conventional strategies.

Chapter 4 proposes sampling-based formulations for applying SAMATA with joint or disjoint measure realizations. The proposed algorithms, for drawing joint measure realizations from a coupled-space Gaussian Process and for reconstructing a non-parametric random field from disjoint measure realizations, are presented in detail and illustrated on simple test-cases. Sampling-based SAMATA is then quantitatively compared to SABBa and to classical approaches on analytical applications featuring parametric and non-parametric uncertainties.

Chapter 5 presents three engineering OUU problems solved with SABBa and sampling-based SAMATA. Several robustness and reliability measures are tackled throughout these test-cases, namely mean, quantile, and worst-case.

Chapter 6 gives some conclusions and perspectives of the present work. The contributions are highlighted, and the limitations of the proposed strategies are discussed. Some ideas are finally proposed for improving the current algorithms and for using some concepts introduced in this thesis in other contexts.

**Table of contents:**

**A Context and theoretical tools**

Introduction en français

–1 Contexte et objectifs

–2 Plan et principales contributions

**Introduction**

–1 Context and main objectives

–2 State of the art

–2.1 Uncertainty Quantification

–2.2 Optimization

–2.3 Reliability-Based Design Optimization (RBDO)

–2.4 Robust Design Optimization (RDO)

–3 Industrial context and contributions

–3.1 Context

–3.2 Contributions

–4 Outline of this work

**I Introduction to kernel methods**

I–1 An intuitive introduction to Gaussian processes

I–1.1 The basics: Linear Regression

I–1.2 Adding complexity: Feature Regression

I–1.3 Adding generality: Non-parametric regression

I–1.4 A probabilistic view: Bayesian Regression

I–1.5 Gaussian processes

I–2 Reproducing Kernel Hilbert Spaces

I–2.1 RKHS theory fundamentals

I–2.2 Kernel Mean Embeddings

**B Methods and algorithms**

**II A general approach for constrained multi-objective optimization under uncertainty**

II–1 Formulation of the problem

II–1.1 Multi-objective optimization

II–1.2 Uncertainty-based optimization

II–1.3 Robustness and reliability measures

II–2 A probabilistic setting

II–2.1 Probabilistic Pareto dominance

II–2.2 Pareto-Optimal Probability

II–3 The SAMATA algorithm

II–3.1 Surrogate-Assisting strategy

II–3.2 Measure Approximation with Tunable Accuracy

**III Uniform approximations: Bounding Boxes**

III–1 Bounding-Box context

III–1.1 Definition of a Bounding-Box

III–1.2 Boxed Pareto dominance

III–1.3 SABBa general flowchart

III–2 Theoretical considerations

III–2.1 Pareto-optimal sets

III–2.2 Bounding-Box approach

III–2.3 Convergence analysis

III–2.4 Surrogate-Assisting model

III–2.5 Algorithm

III–3 Noisy optimization with tunable accuracy

III–3.1 Numerical ingredients

III–3.2 Applications

III–4 Optimization under uncertainty

III–4.1 Measures computation and refinement

III–4.2 POP computation

III–4.3 Quality indicator

III–4.4 Uncertainty-based SABBa algorithm

III–4.5 Numerical tests

**IV Sampling-based measure approximations**

IV–1 Sampling-based setting

IV–2 Coupled-space realizations for joint samplings

IV–2.1 Formulation and challenge

IV–2.2 Inducing points strategy

IV–2.3 Implementation in SAMATA

IV–3 KDE-based random field surrogate

IV–3.1 Kernel Density Estimation

IV–3.2 Surrogate-model construction

IV–4 Application to parametric uncertainties

IV–4.1 Cost assessment

IV–4.2 Benefit of sampling-based approximations

IV–5 Application to non-parametric uncertainties

IV–5.1 Adjustment of SAMATA

IV–5.2 Analytical application

**C Applications**

**V Engineering applications**

V–1 Two bar truss structure

V–2 Optimization of a Thermal Protection System for atmospheric reentry

V–3 ORC turbine blade optimization

V–3.1 Physical application

V–3.2 Problem setting

V–3.3 Numerical ingredients

V–3.4 Mono-objective results

V–3.5 Bi-objective results

**D Perspectives**

**VI Conclusions**

VI–1 Main achievements

VI–1.1 Tunable accuracy formulation

VI–1.2 Bounding-Box approximations

VI–1.3 Sampling-based approximations

VI–1.4 Properties of the proposed framework

VI–2 Limitations

VI–3 Perspectives

VI–3.1 Improvements within the SAMATA framework

VI–3.2 Ideas

**Bibliography**