
## Compound Poisson process

Compound Poisson processes can be viewed as a generalization of the Poisson process. The Poisson process assumes that events arrive one at a time; the compound Poisson process relaxes this assumption. In practice, this process is used to count events that arrive in batches: customers arriving at a restaurant, the number of cars involved in an accident, demands occurring in batches, etc.

Let $N_t$ be the number of batches arrived before time $t$, and let $Y_n$ be the number of events contained in the $n$-th batch. The process $(X_t)_{t\geq 0}$ counting the number of events is defined by:
$$X_t = \sum_{i=1}^{N_t} Y_i . \qquad (1.32)$$

Definition 18. The process $(X_t)_{t\geq 0}$ is a compound Poisson process if and only if:

(i) $(N_t)_{t\geq 0}$ is a Poisson process with rate $\lambda$ (i.e. the batches arrive at a constant rate).

(ii) The sequence $(Y_n)_{n\in\mathbb{N}}$ is i.i.d. and independent of $(N_t)_{t\geq 0}$.

Note that, defined in this manner, the compound Poisson process is more than just a counting process. It can also be used to model a quantity $X_t$ impacted by events occurring at rate $\lambda$: indeed $Y_n$ is not necessarily an integer, nor even a positive random variable.
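The definition above translates directly into a simulation: draw exponential inter-arrival times for the batches, and add an independent batch size at each arrival. The sketch below is illustrative (the function names and the batch distribution are assumptions, not taken from the text):

```python
import random

def compound_poisson(rate, batch_sampler, horizon, rng=random.Random(0)):
    """Simulate X_t = sum_{i=1}^{N_t} Y_i on [0, horizon].

    Returns the jump times and the value of X right after each jump.
    """
    t, x = 0.0, 0.0
    times, values = [0.0], [0.0]
    while True:
        t += rng.expovariate(rate)   # inter-arrival time of the Poisson process N_t
        if t > horizon:
            break
        x += batch_sampler(rng)      # add the batch size Y_n (need not be an integer)
        times.append(t)
        values.append(x)
    return times, values

# Example: batches of 1 to 3 customers arriving at rate lambda = 2
times, values = compound_poisson(2.0, lambda rng: rng.randint(1, 3), horizon=10.0)
```

Replacing `batch_sampler` by, say, a Gaussian sampler gives the non-counting use of the process mentioned above, where $Y_n$ may be negative.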

### Piecewise Deterministic Markov Processes

A PDMP is a process that follows a deterministic dynamic, but this dynamic can change at random or deterministic times. Such deterministic dynamics are defined by ordinary differential equations, so this section starts with a few reminders about ordinary differential equations. We then present PDMPs gradually, starting with a particular case before presenting the full model. This particular case is the PDMP without boundary, which is the most common form; the most general form is, unsurprisingly, the PDMP with boundaries. The distinction between PDMPs with and without boundary is important: we will see that, unlike PDMPs without boundary, PDMPs with boundaries can be very singular processes.

#### Generalities on first order ordinary differential equations

Definition 22. A map $g : \mathbb{R}^d \to \mathbb{R}^d$ is globally Lipschitz if there exists a constant $C > 0$ such that:
$$\forall x_1 \in \mathbb{R}^d, \ \forall x_2 \in \mathbb{R}^d, \quad \|g(x_1) - g(x_2)\| \leq C \, \|x_1 - x_2\| . \qquad (1.37)$$

Corollary 1. The Cauchy-Lipschitz theorem (or Picard-Lindelöf theorem) implies that if $g$ is globally Lipschitz, the differential equation
$$\frac{dX(t)}{dt} = g\big(X(t)\big), \quad \text{with } X(a) = x_a \in \mathbb{R}^d, \qquad (1.38)$$
admits a unique global solution of class $C^1$. Let $\phi$ be the function on $\mathbb{R}^d \times \mathbb{R}$ such that $X(a + t) = \phi(x_a, t)$ is the solution to the differential equation; $\phi$ is called a flow. One can deduce from the uniqueness of the solution that $\phi$ verifies the two following properties.

Property 3. The map $\phi_t : x \mapsto \phi(x, t)$ is invertible and its inverse is continuous, so it is a homeomorphism. Indeed we have $\phi_t^{-1}(x) = \phi(x, -t)$.

Property 4. The family $(\phi_t)_{t\in\mathbb{R}}$ is a group, meaning that for $t$ and $s$ in $\mathbb{R}$, $\phi_{t+s} = \phi_t \circ \phi_s$, or more explicitly $\phi(x, t + s) = \phi(\phi(x, s), t)$.

**Working on PDMP with boundaries and the issue of the topology**

PDMPs with boundaries are complex processes, and working with them can be challenging. There are three main points of difficulty to keep in mind. Firstly, a PDMP models a hybrid variable that has continuous coordinates and discrete coordinates; therefore, the state space $E = \cup_{m\in M} E_m$ is by essence discontinuous and not Euclidean. Because of the discrete coordinates and the shapes of the open sets $\Omega_m$, there is no obvious metric on the state space. In order to work with an easier topology, one option is to avoid working directly with the states, by working with their image through a real function defined on the state space. Secondly, the inter-jump times are hybrid random variables whose distributions have continuous and discrete parts, which can be tricky to manipulate. We will see in Part II that PDMPs are very degenerate processes because of these hybrid random variables involved in their laws. Thirdly, the trajectories on a time interval are complex objects, not only because they evolve in the space $E$, which has no natural metric, but also because their skeletons have variable sizes. Therefore it is also difficult to define a metric on the space of trajectories.

**Markovian object as PDMPs**

We end this subsection on PDMPs by reviewing how classical Markovian objects can be modeled by PDMPs.

We have seen that a Markov chain $(M^0_n)_{n\in\mathbb{N}}$ can be extended into a Markov process $(Z_t)_{t\geq 0}$ by setting
$$Z_t = \big(t, \; t - \lfloor t \rfloor, \; M^0_{\lfloor t \rfloor}\big). \qquad (1.66)$$

This process can be seen as a PDMP defined by

— A position $X_t = (t, \, t - \lfloor t \rfloor)$ and a mode $M_t = M^0_{\lfloor t \rfloor}$.

— A differential equation $\dfrac{dX_t}{dt} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

— In order to force jumps when $t$ is an integer, we put a boundary on the second coordinate of $X_t$, setting $\forall m \in M, \ \Omega_m = (-\infty, +\infty) \times (-\infty, 1)$. Therefore the state space is defined by:
$$E = \bigcup_{m\in M} E_m = \bigcup_{m\in M} \big\{ (x_1, x_2, m) \mid x_1 \in \mathbb{R}, \ x_2 \in (-\infty, 1) \big\}. \qquad (1.67)$$
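The three ingredients above can be sketched in a few lines: the position flows deterministically at unit speed, and each time the second coordinate reaches the boundary $x_2 = 1$ a forced jump resets it to $0$ while the mode moves according to the chain. This is only an illustrative sketch; the function names and the example two-state transition rule are assumptions, not part of the text:

```python
import random

def pdmp_from_chain(step, m0, horizon, rng=random.Random(1)):
    """Run the PDMP embedding of a Markov chain on [0, horizon].

    `step(m, rng)` draws the chain's next mode (assumed transition rule).
    The position (x1, x2) flows with dX/dt = (1, 1); hitting the boundary
    x2 = 1 forces a jump: x2 resets to 0 and the mode updates.
    """
    x1, x2, m = 0.0, 0.0, m0
    path = [(x1, x2, m)]          # skeleton: state right after each jump
    while x1 < horizon:
        dt = 1.0 - x2             # time for x2 to reach the boundary x2 = 1
        x1, x2 = x1 + dt, 0.0     # deterministic flow, then forced jump
        m = step(m, rng)          # mode transition of the underlying chain
        path.append((x1, x2, m))
    return path

# Example: a two-state chain that picks its next mode uniformly
path = pdmp_from_chain(lambda m, rng: rng.choice([0, 1]), m0=0, horizon=3.0)
```

As expected, the jumps occur exactly at the integer times $t = 1, 2, 3, \dots$, which is precisely what the boundary at $x_2 = 1$ was introduced to enforce.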

**Table of contents :**

**I Introduction **

**1 Piecewise Deterministic Markov Processes **

1.1 Markovian objects

1.1.1 Markov chains

1.1.2 Markov processes

1.1.3 Poisson Processes

1.1.4 Compound Poisson process

1.1.5 Continuous time Markov chains

1.2 Piecewise Deterministic Markov Processes

1.2.1 Generalities on first order ordinary differential equations

1.2.2 PDMPs without boundary

1.2.3 PDMPs with boundaries

1.2.4 Markovian object as PDMPs

1.3 Model a multi-component system with a PDMP

1.3.1 Model the evolution of the state of the system

1.3.2 A model for reliability assessment

1.3.3 The model

1.3.4 Example of the Heated room system

1.3.5 Example of the spent fuel pool

**2 Monte-Carlo methods for rare events **

2.1 The Monte-Carlo method and its rare event issue

2.1.1 Computational burden of the Monte-Carlo estimator

2.1.2 The principle of variance reduction method

2.2 Importance Sampling

2.2.1 Principle

2.2.2 The Importance Sampling estimator

2.2.3 Dynamical importance sampling

2.2.4 Variance and optimal density

2.2.5 Important practical concerns

2.3 Variance optimization methods for importance sampling

2.3.1 Variance optimization: the Cross-Entropy method

2.3.2 Important remark on the Cross-Entropy

2.3.3 Adaptive Cross-Entropy

2.4 The interaction particle method

2.4.1 A Feynman-Kac model

2.4.2 The IPS algorithm and its estimator

2.4.3 Estimate the variance of the IPS estimator

2.4.4 Classical improvements of the IPS method

**II Importance Sampling with PDMPs **

**3 Theoretical foundation for Importance sampling on PDMP **

3.1 Prerequisite for importance sampling

3.1.1 The law of the trajectories

3.1.2 The dominant measure and the density

**4 The practical and optimal importance processes **

4.1 Admissible importance processes

4.2 A way to build an optimal importance process

4.3 Remarks on the optimal process

4.4 Practical importance processes for reliability assessment

**5 Applications to power generation systems **

5.1 Application to the Heated-Room system

5.1.1 A parametric family of importance processes for the Heated-Room system

5.1.2 Results

5.2 Practical issues with power generation systems

5.2.1 Specify a parametric family of proximity functions

5.2.2 Results with the series/parallel parametric proximity function on the spent-fuel-pool system

**6 Conclusion on the importance sampling for PDMP **

**III A contribution to the IPS method **

**7 The optimal potential functions **

7.1 The potentials used in the literature

7.2 The optimal potential

7.3 A short simulation study for the comparison of potentials

7.3.1 First example

7.3.2 Second example

**8 Conclusions and implications **

**IV The interacting particle method for PDMPs **

**9 The inefficiency of IPS on concentrated PDMP **

9.1 IPS with PDMP

9.1.1 A Feynman-Kac model

9.1.2 The IPS algorithm and its estimators

9.1.3 Variance estimation for the PDMP case

9.1.4 The SMC improvement for PDMP

9.2 Concentrated PDMP make the IPS inefficient

9.2.1 The kind of PDMP used in the reliability analysis

9.2.2 Concentrated PDMP

9.2.3 A poor empirical approximation of the propagated distributions within the IPS

**10 Efficient generation of the trajectories using the Memorization method**

10.1 Advantage of Memorization over a rejection algorithm

10.2 Principle of the memorization method

**11 The IPS+M method for concentrated PDMPs **

11.1 Modify the propagation of clusters

11.2 Convergence properties of the IPS+M estimators

**12 Application to two test systems **

12.1 Empirical confirmation

12.1.1 The Heated-room system

12.1.2 Results of the simulations on the Heated-room system

12.1.3 Remark on the SMC with Memorization

12.1.4 A dam system

12.1.5 Results of the simulations for the dam system

**13 Conclusion on the IPS+M **

**V Conclusion and prospects for future work **

**Optimal intensity's expression, and some properties of U**

A.1 Proof of Equality (4.11)

A.2 Proof of theorem 18

A.3 Equality (4.12)

**Bibliography**