
## Subject of the Thesis

This thesis focuses on developing numerical methods to characterize the propagation of uncertainty through models with random inputs. Estimating statistics of the model output (the quantities of interest), such as the mean, the variance, different types of densities or quantiles, represents different ways of making this characterization.

Fine statistical characterizations require significant computational power. As an example of how extensive these simulations can be, in [76] the authors solve a time-dependent groundwater model with uncertain porosity and permeability, requiring a few hundred numerical simulations of problems with up to 4.5 million spatial mesh points and between 1,000 and 3,000 time-steps. The total computational time to perform these simulations ranged from 2 to 24 hours on a cluster with up to 19,200 computational units.

One way to optimally use the available computational resources is to exploit parallel architectures. The implementation of numerical simulations on parallel architectures naturally requires appropriate numerical methods, and Domain Decomposition (DD) methods are particularly well suited for parallel simulations. This thesis presents novel DD methods for characterizing the uncertainty of a particular type of model: the stochastic elliptic equation with random coefficients.

The correct characterization of the uncertainty associated with the stochastic elliptic equations considered in this thesis assumes the availability of a significant number of solution samples. To this end, the general framework proposed amounts to solving a large set of deterministic elliptic equations associated with samples (snapshots) of the stochastic coefficient field. Each sampled problem is solved by a DD method. A DD method splits each sampled problem into local problems defined on small portions of the spatial domain (subdomains). The solutions to these local problems satisfy some unknown compatibility conditions. The DD method builds a sequence of local functions that converge to the solutions of these local problems through an iterative scheme. Advanced DD methods involve preconditioners that make this iterative scheme more efficient.

Most preconditioners of DD methods are fitted to particular deterministic problems. A classical strategy to solve multiple sampled problems amounts to applying a DD method to each problem, repeatedly. However, the process of constructing the preconditioner “from scratch” represents a significant computational burden in the overall resolution of each problem, so the repeated construction of preconditioners becomes an inefficient process. In order to bypass this issue, this thesis proposes a new class of preconditioners. The idea is to build surrogates of the preconditioner of some DD method. These surrogates are then evaluated according to each sample, and the resulting realization (surrogate-based preconditioner) is used to accelerate the iterative scheme. The key aspect of the proposed approach is that the cost of surrogate-based preconditioning is significantly smaller than that of the straightforward approach, while performance is not compromised. This thesis presents three contributions of surrogate-based preconditioning to DD methods. Each contribution is discussed separately in Chapters 3, 4 and 5, respectively.

In this chapter we progressively motivate surrogate-based preconditioning. Section 2.2 introduces the stochastic elliptic equation with random coefficients. Section 2.3 describes two distinct classes of numerical methods used to characterize uncertainty: functional representation methods and sampling methods. Sampling schemes substantially benefit from the proposed method, for reasons also explained in Section 2.3. Section 2.4 gives a summary of DD methods for the resolution of deterministic elliptic equations. Section 2.5 discusses state-of-the-art applications of DD strategies to solution methods for stochastic elliptic equations. This section includes a discussion of the main limitations of these approaches, which motivate surrogate-based preconditioning. Finally, Section 2.6 gives a general description of surrogate-based preconditioning and its main advantages with respect to the state-of-the-art applications previously described. In addition, this section provides a list of the main contributions of this thesis as well as a summary of the contributions of Chapters 3, 4 and 5.

### Elliptic Equation with Random Coefficients

In the following, we introduce the elliptic equation with random coefficients. This stochastic equation will be the test model studied throughout this thesis.

Let $\Omega \subset \mathbb{R}^d$ be a geometric domain and $\Theta$ a set of events. Let $u : \Omega \times \Theta \to \mathbb{R}$ be the solution of the stochastic elliptic partial differential equation given by

$$-\nabla \cdot \left( \kappa(x,\theta)\, \nabla u(x,\theta) \right) = f(x), \quad x \in \Omega,\; \theta \in \Theta, \tag{2.1a}$$

$$u(x,\theta) = u_{BC}(x), \quad x \in \partial\Omega,\; \theta \in \Theta, \tag{2.1b}$$

where $f(x)$ is a deterministic source term and $u_{BC}(x)$ denotes the Dirichlet boundary conditions. The elliptic equation (2.1) is a prototype of more complex models arising in many different physical phenomena such as porous media [20, 27, 26, 59, 128, 137], chemical reactions [36], wave scattering [50], heat diffusion [131] and elasticity [135]. The coefficient field $\kappa$ is assumed to be random, and the randomness of the solution $u$ is a consequence of its dependence on the random coefficient field. The solution of the elliptic equation (2.1) exists almost surely (a.s.) and is unique provided that the random coefficient field is a.s. bounded from below and above for almost every $x \in \Omega$ [21]. Refer to [125] for a detailed analysis of existence and uniqueness of solutions to this equation.
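To make the sampled problems concrete, the following sketch solves a 1-D finite-difference analogue of (2.1) for one sample of the coefficient field. The grid size, the i.i.d. log-normal stand-in for the field sample, and the function name `solve_sample` are illustrative assumptions, not the discretization used in the thesis.

```python
import numpy as np

def solve_sample(kappa, f, u_left, u_right, h):
    """Solve -(kappa u')' = f on a uniform 1-D grid with Dirichlet BCs.

    kappa holds one sample of the coefficient at the grid nodes; the
    coefficient is averaged to cell midpoints before assembly.
    """
    n = len(kappa) - 1                      # number of cells
    k_mid = 0.5 * (kappa[:-1] + kappa[1:])  # coefficient at cell midpoints
    # Assemble the tridiagonal stiffness matrix for interior nodes 1..n-1.
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (k_mid[i] + k_mid[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -k_mid[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -k_mid[i + 1] / h**2
    b = f[1:-1].copy()
    b[0] += k_mid[0] * u_left / h**2        # lift the boundary data
    b[-1] += k_mid[-1] * u_right / h**2
    u = np.empty(n + 1)
    u[0], u[-1] = u_left, u_right
    u[1:-1] = np.linalg.solve(A, b)
    return u

# One sample on [0, 1]; i.i.d. nodal noise is a crude stand-in for a
# correlated field sample (a real sample would come from, e.g., a KL expansion).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
kappa = np.exp(0.5 * rng.standard_normal(x.size))
u = solve_sample(kappa, np.ones_like(x), 0.0, 0.0, x[1] - x[0])
```

Each Monte Carlo sample in the framework above corresponds to one such deterministic solve with a freshly drawn `kappa`.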

In this thesis, the characterization of the solution is performed with a particular focus on coefficient fields with large variability and short-scale spatial correlations, as this is often a necessary assumption in many of the applications described above.

### Solution Methods for Stochastic Elliptic Equations

Estimating statistics from solutions of elliptic equations associated with highly variable and short-scale correlated fields is challenging. The techniques used to estimate these quantities are split into functional representation methods [11, 7] and sampling methods [17, 78].

#### Functional Representation Methods

The idea of functional representation methods is to find a finite representation of the solution by a functional approximation, which hopefully makes the characterization of this uncertainty easier to perform. Since the randomness of the solution is a consequence of the randomness of the coefficient field, a crucial aspect of this representation lies in the representation of the random coefficient field by a finite-dimensional probabilistic model. One way to perform this representation is to parametrize the input data through a finite number of mutually independent random variables. This set of random variables, depending on the event $\theta$, will be denoted as a finite-dimensional random vector $\xi(\theta)$. In this sense, we write $\kappa(x,\theta) \approx \tilde{\kappa}(x, \xi(\theta))$. A classical parametrization technique is the Karhunen-Loève (KL) expansion [65, 79] (see Appendix B). The field’s parametrization enables the construction of a functional representation of the solution in a Fourier-type spectral expansion of the form

$$u(x,\theta) \approx \tilde{u}(x, \xi(\theta)) = \sum_{\alpha=0}^{P} u_\alpha(x)\, \Psi_\alpha(\xi(\theta)). \tag{2.2}$$

Once this functional representation is available, statistics of the solution can be obtained using evaluations of the spectral expansion (surrogate-based sampling) or retrieved directly from the information contained in this expansion [13, 67, 124].
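The field parametrization step can be sketched at the discrete level as a truncated KL expansion, i.e. a truncated eigendecomposition of the covariance matrix of the field on the grid. The squared-exponential covariance, the correlation length, and the 99% variance truncation rule below are illustrative assumptions, not the choices made in the thesis.

```python
import numpy as np

# Discrete truncated KL expansion of a Gaussian field on a 1-D grid.
x = np.linspace(0.0, 1.0, 200)
corr_len = 0.1                              # assumed correlation length
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / corr_len**2)

vals, vecs = np.linalg.eigh(C)              # eigenpairs, ascending order
vals, vecs = vals[::-1], vecs[:, ::-1]      # largest variance first

# Keep enough modes to capture 99% of the (discrete) field variance.
n_kl = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99) + 1

rng = np.random.default_rng(1)
xi = rng.standard_normal(n_kl)              # the finite random vector xi(theta)
g = vecs[:, :n_kl] @ (np.sqrt(vals[:n_kl]) * xi)   # one Gaussian field sample
kappa = np.exp(g)                           # log-normal coefficient sample
```

Shortening `corr_len` forces `n_kl` up, which is precisely the dimensionality issue the next paragraphs discuss.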

Polynomial Chaos (PC) expansions [56, 70] represent a class of functional representation methods of the form (2.2) which have been extensively applied to the stochastic elliptic equation [133, 30, 29, 98, 122, 4, 47]. The stochastic basis $\{\Psi_\alpha\}$ is set a priori as a finite polynomial basis (see generalized Polynomial Chaos [134]). Therefore, the construction of this expansion reduces to the estimation of the deterministic coefficients $u_\alpha$. Methods for computing these coefficients include intrusive (Galerkin) methods (see [35, 3, 128, 2, 107]) and non-intrusive methods (see spectral projection methods [133, 30, 29, 98, 122], regression methods [14, 13, 124, 1] or collocation methods [4, 47, 131, 9]).

The size of the PC basis (number of terms) depends on the dimension of the random vector $\xi$ and the complexity of the dependence of $u$ on $\xi$. Moreover, the uncertainty associated with highly variable and short-scale correlated fields requires many random variables to represent the field properly. However, the dimension of the PC expansion (2.2) increases very rapidly with the number of input random variables [97], an effect known as the curse of dimensionality. Consequently, the cost of constructing this functional can become prohibitively high. An alternative is to drop the assumption of a pre-set polynomial basis and construct the pairs $(u_\alpha, \Psi_\alpha)$ that yield a low-rank optimal representation of the form (2.2). See the expositions of the Proper Generalized Decomposition method [94, 96, 95] as well as the more general manuscript [11, Part II]. A number of applications to linear and non-linear problems include [96, 23, 42, 68, 106, 80]. The crucial aspect of low-rank representations is that the dimension of this representation only grows linearly with the number of random variables, which makes this approach suitable for problems associated with a high number of random variables.
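The curse of dimensionality mentioned above can be quantified: a total-degree PC basis with $d$ random variables and polynomial order $p$ has $\binom{d+p}{p}$ terms, a standard combinatorial count.

```python
from math import comb

def pc_basis_size(d, p):
    """Number of terms of a total-degree PC basis: C(d + p, p)."""
    return comb(d + p, p)

# Growth of an order-3 basis with the number of random variables d.
for d in (2, 10, 50):
    print(d, pc_basis_size(d, 3))
```

Already at order 3, moving from 2 to 50 random variables multiplies the basis size by more than three orders of magnitude, which is why short-scale correlated fields (requiring large $d$) make the direct PC approach expensive.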

The major drawback of estimating statistics using functional representations is that the error induced by the solution’s surrogate propagates into the resulting (surrogate-based) estimates. When many (surrogate-based) samples are considered, the error induced by the surrogate may dominate the global error of the estimate, which may compromise the convergence to a meaningful statistical value.

#### Sampling Methods

The classical Monte Carlo (MC) method estimates statistics using randomly selected solution samples. Each solution sample is the solution of an elliptic problem associated with a randomly generated sample of the coefficient field. For example, the classical MC estimator of the mathematical expectation of some quantity of interest $z(u)$ is given by

$$\mathbb{E}[z(u)] \approx \frac{1}{M} \sum_{m=1}^{M} z\left(u^{(m)}\right), \tag{2.3}$$

where $M$ denotes the number of samples. Each solution sample $u^{(m)}$ is the solution of a deterministic problem of the form (2.1) associated with some deterministic coefficient sample $\kappa^{(m)}$. The interest of MC-type sampling methods is that they do not require a functional representation of the field and can handle any kind of complexity in $\kappa$.
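The estimator (2.3) can be sketched on a toy quantity of interest; the scalar `sample_qoi` below is a hypothetical stand-in for the full per-sample pipeline (draw a coefficient sample, solve (2.1), evaluate $z$), not the thesis’ model.

```python
import numpy as np

def sample_qoi(rng):
    """Hypothetical per-sample pipeline: draw a coefficient, 'solve', reduce."""
    kappa = np.exp(rng.standard_normal())   # scalar stand-in for a field sample
    u = 1.0 / kappa                         # stand-in for the sampled solution
    return u                                # quantity of interest z(u)

rng = np.random.default_rng(42)
M = 100_000
z = np.array([sample_qoi(rng) for _ in range(M)])
estimate = z.mean()                          # (1/M) * sum_m z(u^(m)), as in (2.3)
stderr = z.std(ddof=1) / np.sqrt(M)          # statistical error of the estimator
```

For this toy model the exact expectation is $e^{1/2}$, so the estimate can be checked directly; in the elliptic setting each `sample_qoi` call is an expensive PDE solve, which is the cost the DD methods of this thesis target.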

The main drawback of MC methods is their low rate of convergence: classically, the error of the MC estimator decays as $O(M^{-1/2})$. Advanced sampling methods are generalizations of the classical MC method that try to accelerate the classical MC convergence. The Multi-Level Monte Carlo (MLMC) method [57] and Quasi-Monte Carlo (QMC) methods [38] are popular extensions of the classical approach, which have been applied to stochastic elliptic equations in [27, 8, 61, 59, 69, 111], for instance.
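The MLMC idea can be sketched with two levels on a toy model: many cheap coarse-level samples estimate the bulk of the expectation, and a few coupled fine/coarse pairs estimate the correction. The mock level-dependent solver, mesh-size rule, and sample counts below are illustrative assumptions only.

```python
import numpy as np

def solve_level(xi, level):
    """Mock 'discretized' QoI whose bias shrinks as the level increases."""
    h = 2.0 ** -(level + 3)          # mock mesh size at this level
    return np.exp(xi) + h * xi**2    # exact QoI plus an O(h) discretization bias

rng = np.random.default_rng(7)
M0, M1 = 200_000, 2_000              # many coarse samples, few fine ones

xi0 = rng.standard_normal(M0)
level0 = solve_level(xi0, 0).mean()              # coarse-level mean

xi1 = rng.standard_normal(M1)                    # coupled fine/coarse pairs
correction = (solve_level(xi1, 1) - solve_level(xi1, 0)).mean()

mlmc_estimate = level0 + correction              # telescoping two-level sum
```

The correction term has a much smaller variance than the fine-level QoI itself, which is why far fewer fine (expensive) samples suffice: this is the variance-reduction mechanism behind MLMC.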

Sampling-based estimates converge to the exact statistics (up to spatial discretization error) and are an attractive alternative to functional representations. However, the type of fields considered still poses significant challenges to sampling schemes. Indeed, an adequate representation of a sampled field with short-scale correlations typically requires very fine spatial meshes that increase the cost associated with each sample. In addition, the large variability of the set of samples slows down the convergence of the sampling scheme. As a result, sample-based estimators require a large number of samples, each expensive to compute, to accurately approximate quantities of interest. Hence, efficient numerical methods that reduce the cost associated with each sample are needed. A popular procedure is to apply a DD method to each sampled (deterministic) problem, and this will be the methodology used in this thesis. The following section gives a summary of general DD methods.

### Domain Decomposition Methods

The idea of DD methods is to split the boundary value problem associated with each coefficient sample into a set of smaller (local) boundary value problems defined on portions of the spatial domain. Each portion of the spatial domain is called a subdomain, and the set of all subdomains forms a partition that covers the entire spatial domain. Subdomains can either overlap or not, in which case they only share an interface. The key aspect of DD methods is to establish compatibility conditions between the local problems so that the resulting local solutions coincide with the global solution over their subdomains. A general overview of DD methods can be found in the classical references [105, 127, 40].

The compatibility conditions depend on the type of problem and partition considered. For second-order elliptic problems, the local solutions should satisfy two continuity relations at the interfaces: continuity of the unknown values and continuity of the fluxes. In the overlapping setting, however, the flux continuity is a consequence of the continuity of the unknown values, which becomes the only necessary and sufficient compatibility condition. Since different compatibility conditions are needed for different partition types, DD methods are often split into overlapping methods and non-overlapping methods.

Solving an elliptic problem using a DD method has several advantages over a global resolution on the entire spatial domain. One advantage is that DD methods are based on local operations defined over each subdomain. Usually, these local operations can be performed in parallel because they have no data dependencies. Another advantage is that the computational complexity of a DD method is reduced to the workload associated with each subdomain. Local operations on smaller subdomains (or, equivalently, on partitions with a large number of subdomains) are often cheaper, which motivates using partitions with many small subdomains. The reduced complexity associated with each subdomain, combined with parallel local operations, makes the resolution of each sampled problem by a DD method extremely efficient.

#### Iterative Domain Decomposition Methods

A usual way to obtain local solutions that satisfy the compatibility conditions is to use iterative schemes. In general, these iterative schemes amount to solving at each iteration a set of local problems with prescribed boundary conditions. The convergence rate of the sequence of boundary conditions depends on the procedure used to update them. Classically, the update procedure is local in the sense that it only uses information from the local interface nodes. Updating the boundary conditions from purely local information yields a convergence rate that degrades as the partition of the domain uses more and more subdomains, because the information needs more iterations to propagate within the domain.
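The iterative scheme just described can be sketched with a 1-D alternating Schwarz iteration on two overlapping subdomains: at each sweep, each local Dirichlet solve takes its interface value from the latest iterate of its neighbour. The model problem, grid size, overlap and sweep count below are illustrative choices.

```python
import numpy as np

# Alternating Schwarz sketch: -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# exact solution u(x) = x(1 - x)/2, two overlapping subdomains.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
i_a, i_b = 40, 60                    # interface nodes: overlap is (x[40], x[60])
u = np.zeros(n)                      # global iterate
u_exact = 0.5 * x * (1.0 - x)

def local_solve(n_loc, left, right):
    """Dirichlet finite-difference solve of -u'' = 1 on n_loc + 1 nodes."""
    A = (np.diag(2.0 * np.ones(n_loc - 1))
         - np.diag(np.ones(n_loc - 2), 1)
         - np.diag(np.ones(n_loc - 2), -1)) / h**2
    b = np.ones(n_loc - 1)
    b[0] += left / h**2              # lift the interface/boundary values
    b[-1] += right / h**2
    return np.concatenate(([left], np.linalg.solve(A, b), [right]))

for sweep in range(30):
    u[:i_b + 1] = local_solve(i_b, 0.0, u[i_b])       # subdomain 1: [0, x_b]
    u[i_a:] = local_solve(n - 1 - i_a, u[i_a], 0.0)   # subdomain 2: [x_a, 1]

err = np.abs(u - u_exact).max()
```

Shrinking the overlap (moving `i_a` and `i_b` together) visibly slows the contraction per sweep, the 1-D analogue of the convergence degradation discussed above.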

Advanced DD methods use preconditioners to achieve higher convergence rates. The idea behind preconditioners is to provide more effective updates of the boundary conditions for the local problems at each iteration. However, even with more effective boundary conditions, the convergence of the iterative scheme may still depend on the number of subdomains. A DD method whose convergence is independent of the number of subdomains is called scalable. In order to obtain a scalable method, the action of preconditioning may have to be combined with a separate treatment of the part of the solution that deteriorates the convergence. This separate treatment amounts to computing the slow part of the solution, which belongs to a low-dimensional coarse space, by a direct method, and then using the DD iterative scheme to compute the remaining part of the solution. Typically, a projection-based strategy is used to keep the slow components of the solution outside the iterative scheme (orthogonal to the coarse space). A review of projection-based strategies can be found in [55, 54].
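The projection-based treatment of the coarse components can be sketched with deflated Conjugate Gradients: the coarse part of the solution is computed by a direct solve on the coarse operator, and CG then runs on the deflated (projected) system. The 1-D Laplacian, the Nicolaides-like piecewise-constant coarse basis, and the problem sizes below are illustrative assumptions only.

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxit=2000):
    """Plain CG, returning the iterate and the iteration count."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol * np.linalg.norm(b):
            return x, k
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

n, n_sub = 400, 10
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))            # 1-D Laplacian stencil
b = np.ones(n)

Z = np.zeros((n, n_sub))                       # piecewise-constant coarse basis
for j in range(n_sub):
    Z[j * (n // n_sub):(j + 1) * (n // n_sub), j] = 1.0

E = Z.T @ A @ Z                                # coarse operator, solved directly
Qb = Z @ np.linalg.solve(E, Z.T @ b)           # coarse-space part of the solution
P = lambda v: v - A @ (Z @ np.linalg.solve(E, Z.T @ v))   # deflation projector

x_plain, it_plain = cg(lambda v: A @ v, b)
y, it_defl = cg(lambda v: P(A @ v), P(b))      # CG on the deflated system
x_defl = Qb + y - Z @ np.linalg.solve(E, Z.T @ (A @ y))   # recombine: Qb + P^T y
```

The slow, nearly piecewise-constant modes are handled by the small direct coarse solve, so the deflated iteration needs far fewer steps; increasing `n_sub` enriches the coarse space further, mirroring the scalability mechanism described above.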

The key aspect of coarse space construction is defining a set of well-selected basis functions. In fact, for the type of elliptic problem considered, a coarse space spanned by the constant functions, the so-called Nicolaides coarse space [90], is very effective. However, a coarse space spanned by constant functions may not always provide the same robustness for different types of problems, such as problems with discontinuous fields (for details, see [89]). Advanced coarse space constructions based on multiscale methods [44], energy minimizing methods [6] or spectral methods [118, 116, 39] are usual choices to achieve scalability for more complex problems. These methods can be used to construct coarse spaces in either the overlapping or the non-overlapping case.

The discretization of the elliptic equation yields a (large) system, which the DD methods aim at solving efficiently. Spatial discretization techniques for solving deterministic elliptic problems include Finite Differences [74], Finite Volumes [73], Finite Elements [25, 45] and Spectral Methods [129]. In this thesis we rely on the standard Finite Element (FE) method, but the developments proposed can be generalized to other discretizations suitable for DD.

**Table of contents:**

**1 Introduction (in French)**

1.1 Subject of the Thesis

1.2 Outline and Main Contributions

1.2.1 General Idea of Surrogate-Based Preconditioning

1.2.2 Applications Explored in the Thesis

1.3 Summary of the Main Contributions

**2 Introduction **

2.1 Subject of the Thesis

2.2 Elliptic Equation with Random Coefficients

2.3 Solution Methods for Stochastic Elliptic Equations

2.3.1 Functional Representation Methods

2.3.2 Sampling Methods

2.4 Domain Decomposition Methods

2.4.1 Iterative Domain Decomposition Methods

2.4.2 Overlapping Methods

2.4.3 Non-overlapping Methods

2.5 State-of-the-art DD Strategies for Elliptic SPDEs

2.5.1 Domain Decomposition in Functional Representation Methods

2.5.1.1 DD methods for the SFEM

2.5.1.2 Local PC expansions

2.5.2 Domain Decomposition in Sampling Methods

2.5.2.1 Straightforward application of DD methods

2.5.2.2 Surrogates of operators

2.5.2.3 Sample-independent preconditioners

2.5.2.4 Recycling methods

2.5.3 Concluding Remarks

2.6 Outline of the Thesis

2.6.1 Surrogate-Based Preconditioning

2.6.1.1 General idea

2.6.1.2 Application exploited in the thesis

2.6.2 Summary of the Main Contributions

2.6.3 Summary of Each Chapter

**3 Stochastic Preconditioners for the Additive Schwarz Method**

3.1 Introduction

3.2 The Schwarz Method for the Stochastic Elliptic Equation

3.2.1 Deterministic and Stochastic spaces

3.2.2 Stochastic Elliptic Equation

3.2.3 Monte Carlo Method

3.2.4 Schwarz Method

3.2.5 Preconditioned Schwarz Method

3.3 Preconditioners for Schwarz Method

3.3.1 Deterministic Preconditioners

3.3.2 Stochastic Preconditioners

3.3.2.1 KL-expansion

3.3.2.2 KL-based Preconditioner

3.3.2.3 PC-based Preconditioner

3.4 Numerical Results

3.4.1 Median-based Preconditioner

3.4.2 KL-based Preconditioner

3.4.3 PC-based Preconditioner

3.5 Conclusion and Discussion

**4 Stochastic Preconditioning of Domain Decomposition Methods for Elliptic Equations with Random Coefficients **

4.1 Introduction

4.2 Sampling method for Stochastic Elliptic Equations

4.2.1 Deterministic and Stochastic spaces

4.2.2 Stochastic Elliptic Equation

4.2.3 Sampling Method

4.2.4 Domain Decomposition and the Schur Complement System

4.3 Stochastic Preconditioners for the Schur Complement Systems

4.3.1 Deterministic preconditioner

4.3.2 Stochastic Schur Complement System

4.3.3 PC Expansion of Local Operators

4.3.4 Factorization of local stochastic operators

4.3.4.1 Cholesky-type factorizations

4.3.4.2 Orthogonal factorization

4.3.5 Sampling and Preconditioning

4.4 Numerical tests

4.4.1 MPCG method

4.4.2 DPCG method

4.4.3 FPCG method

4.4.4 Influence of the number of subdomains

4.4.4.1 Fixed number of local KL modes

4.4.4.2 Adapting the local KL approximations

4.4.5 Complexity analysis

4.4.6 Discussion

4.5 Conclusions

**5 A Surrogate-Based Balancing Domain Decomposition Method **

5.1 Introduction

5.1.1 The Schur Complement System

5.1.2 Local Preconditioning

5.1.3 Motivation and Organization of the Chapter

5.2 The BDD Method

5.2.1 The Neumann-Neumann Preconditioner

5.2.2 The Nicolaides Coarse Space

5.2.3 The Projected PCG Method

5.3 The BDD method for Sampling

5.3.1 The Elliptic Equation with Random Coefficients

5.3.2 Major Drawbacks of the Direct BDD Method for Sampling

5.3.3 The Median-NN Preconditioner

5.3.4 The PC-NN Preconditioner

5.3.4.1 Factorization of the pseudo-inverse matrices

5.3.4.2 Retrieving samples of local preconditioners

5.4 Numerical Comparison of NN Preconditioning Strategies

5.4.1 Effect of the Field’s Complexities on the Performance of the Preconditioners

5.4.1.1 Performance of M-NN preconditioner

5.4.1.2 Performance of the PC-NN preconditioner

5.4.2 Impact of the Stochastic Discretization Parameters on the Performance of the PC-NN Preconditioner

5.4.2.1 Impact of the number of random variables (fixed partition)

5.4.2.2 Impact of the sparse grid level

5.4.3 Effect of the Size of the Partition on the Performance of the PC-NN Preconditioner

5.4.4 Scalability of the PC-NN Preconditioner

5.5 Alternative Sample-Dependent Coarse Space

5.5.1 The GenEO Coarse Space

5.5.2 The Median-Based GenEO Coarse Space

5.5.3 Numerical Comparison Between GenEO and M-GenEO

5.5.3.1 Impact of the coarse space relative dimension

5.5.3.2 Impact of the field complexity

5.5.3.3 Impact of the different number of subdomains

5.5.3.4 Effect of the preconditioners

5.5.4 Concluding Remarks on the GenEO-based Projection

5.6 Conclusion and Prospective Work

5.6.1 Summary of the Surrogate-based BDD method

5.6.2 Prospective Work

5.6.2.1 Parallel implementation

5.6.2.2 Coarse-space construction

5.6.2.3 Projection operator

**6 Conclusions and Perspectives **

6.1 Achievements

6.1.1 Advanced Surrogate-based Preconditioners Adapted to the Coefficient Samples

6.1.2 Preconditioners Constructed at Negligible Cost per Sample

6.1.3 Preconditioning Approach for Parallel Implementations

6.2 Limitations

6.2.1 Parallel Implementations

6.2.2 Adaptation of the Methods to Discontinuous Fields

6.2.3 Generalization of Surrogate-based Preconditioning Approaches

A The additive Schwarz Method: Matrix Form

B Global KL-expansion