Enlargement of filtration in discrete time
In this chapter, we present classical results on enlargement of filtration in a discrete time framework. In such a setting, any $\mathbb{F}$-martingale is a semimartingale for any filtration $\mathbb{G}$ larger than $\mathbb{F}$, and one might think that there is not much left to do. From our point of view, one interest of our study is that the proofs of the semimartingale decomposition formulae are simple, and give a pedagogical support to understand the general formulae obtained in the literature in continuous time. It can also be noticed that many results are established in continuous time under the hypothesis that all $\mathbb{F}$-martingales are continuous or, in the progressive enlargement case, that the random time avoids the $\mathbb{F}$-stopping times, and the extension to the general case is difficult. In discrete time, one cannot make any such assumption, since martingales are discontinuous and random times valued in the set of integers do not avoid $\mathbb{F}$-stopping times. This chapter is an extended version of [BSJRR16].
In Section 1.1, we recall some well-known facts. Section 1.2 is devoted to the case of initial enlargement. The long Section 1.3 presents the case of progressive enlargement with a random time $\tau$. We give a "model-free" definition of arbitrages in the context of enlargement of filtration, we study some examples in initial enlargement and give, in a progressive enlargement setting, necessary and sufficient conditions to avoid arbitrages before $\tau$. We present the particular case of honest times (which are the standard examples in continuous time) and we give conditions to obtain the immersion property. We also give various characterizations of pseudo-stopping times.
After this chapter was almost finished, we discovered the lecture notes of Spreij [Spr15]. We recommend these notes, in which the basic results of Section 1.1 are given and much more information on discrete time martingales can be found. We also recommend Chapter II of the book of Shiryaev [Shi99, Chapter II: Stochastic Models. Discrete Time].
Definitions, notation and some important results
Let $(\Omega, \mathcal{A}, \mathbb{A}, \mathbb{P})$ be a filtered probability space where $\mathcal{A}$ is a $\sigma$-algebra and $\mathbb{A} = (\mathcal{A}_n)_{n \ge 0}$ is a complete filtration with $\mathcal{A}_\infty = \bigvee_{n \ge 0} \mathcal{A}_n \subset \mathcal{A}$. We also consider a discrete filtration $\mathbb{B} = (\mathcal{B}_n)_{n \ge 0}$ such that $\mathbb{A} \subset \mathbb{B}$ and $\mathcal{B}_\infty \subset \mathcal{A}$.
A random variable $\xi$ is positive (resp. non-negative) if $\xi > 0$ (resp. $\xi \ge 0$), and a process $X$ is positive if, for all $n \ge 0$, we have that $X_n > 0$. A process is non-decreasing (resp. non-increasing) if $X_{n-1} \le X_n$ (resp. $X_n \le X_{n-1}$) for all $n \ge 1$.
For a discrete time process $X$, we denote by $\Delta X_n := X_n - X_{n-1}$ its increment at time $n$, for $n \ge 1$, with the convention that $\Delta X_0 := X_0$. The process $X_-$ is defined as $(X_-)_n = X_{n-1}$, $n \ge 1$, and $(X_-)_0 = 0$. If $X_\infty$ is defined and $\lim_{n \to \infty} X_n$ exists (in the a.s. sense), we define the increment at infinity by $\Delta X_\infty := X_\infty - X_{\infty-}$, where $X_{\infty-} = \lim_{n \to \infty} X_n$.
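The increment and shift conventions above can be made concrete on a short deterministic path (a simple illustration, not taken from the text; the sample values are arbitrary):

```python
# Conventions: Delta X_0 = X_0, Delta X_n = X_n - X_{n-1} for n >= 1,
# and (X_-)_0 = 0, (X_-)_n = X_{n-1} for n >= 1.
X = [3, 5, 4, 8]

dX = [X[0]] + [X[n] - X[n - 1] for n in range(1, len(X))]   # increments (Delta X)_n
X_minus = [0] + X[:-1]                                      # shifted process (X_-)_n

# With this convention, summing the increments recovers the process:
# X_n = sum_{k=0}^{n} Delta X_k.
partial_sums = [sum(dX[: n + 1]) for n in range(len(X))]
assert partial_sums == X
```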
Let $n \ge 0$ be fixed and $\xi$ a random variable. We write, with an abuse of notation, $\xi \in \mathcal{A}_n$ to say that $\xi$ is $\mathcal{A}_n$-measurable. If $X$ is a process and $\vartheta$ a non-negative random variable valued in $\{0, 1, \dots\}$, $X^\vartheta$ denotes the process $X$ stopped at $\vartheta$, i.e., $X^\vartheta_n = X_{\vartheta \wedge n}$, for all $n \ge 0$.
If $X_n \in \mathcal{A}_n$, for all $n \ge 0$, then we say that the process $X$ is $\mathbb{A}$-adapted (or $\mathbb{A}$-optional). We say that the process $X$ is $\mathbb{A}$-predictable if $X_n \in \mathcal{A}_{n-1}$, for all $n \ge 1$, and $X_0$ is constant.
Remark 1.1.1 In discrete time, there is no distinction between optional and adapted processes. We recall that, in continuous time, the $\sigma$-algebra generated by the right-continuous and $\mathbb{A}$-adapted processes is called the $\mathbb{A}$-optional $\sigma$-algebra. A process is $\mathbb{A}$-optional if and only if it is measurable w.r.t. the $\mathbb{A}$-optional $\sigma$-algebra.
A process $X$ is integrable if $E|X_n| < \infty$ for all $n \ge 0$, and it is square-integrable if $E|X_n|^2 < \infty$ for all $n \ge 0$.
A process $M$ is an $\mathbb{A}$-local martingale if there exists a sequence of $\mathbb{A}$-stopping times $(\vartheta_n)_{n \ge 0}$ such that:
– $\vartheta_n < \vartheta_{n+1}$ for all $n \ge 0$;
– $\lim_{n \to \infty} \vartheta_n = \infty$;
– the stopped process $M^{\vartheta_n} := M_{\cdot \wedge \vartheta_n}$ is an $\mathbb{A}$-martingale for every $n \ge 0$.
A process X is an A-semimartingale if it can be decomposed as a sum of a local martingale and a finite variation process.
The following theorem, established in [DM78, Page 89] or [Spr15], is a powerful tool.
Theorem 1.1.2 If $M$ is an $\mathbb{A}$-local martingale and $M$ is integrable, then $M$ is a martingale. A non-negative local martingale $M$ with $E(M_0) < \infty$ is a martingale.
Let $X$ and $Y$ be two processes; then we denote by $X \cdot Y$ the discrete stochastic integral, also called martingale transform in the case where $Y$ is a martingale, defined by
$$(X \cdot Y)_n := \sum_{k=0}^n X_k \, \Delta Y_k \,, \quad \forall n \ge 0 \,,$$
and, if the limit exists, we define $(X \cdot Y)_\infty := \sum_{k=0}^\infty X_k \, \Delta Y_k = \lim_{n \to \infty} (X \cdot Y)_n$. In particular, the stochastic integral $X_- \cdot Y$ is given by
$$(X_- \cdot Y)_n := \sum_{k=1}^n X_{k-1} \, \Delta Y_k \,, \quad \forall n \ge 0 \,.$$
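The martingale transform above is a finite sum, so it can be sketched directly (an illustration only; the function name is ours):

```python
def martingale_transform(X, Y):
    """Discrete stochastic integral (X . Y)_n = sum_{k=0}^{n} X_k * (Delta Y)_k,
    with the convention Delta Y_0 = Y_0."""
    dY = [Y[0]] + [Y[k] - Y[k - 1] for k in range(1, len(Y))]
    out, acc = [], 0.0
    for k in range(len(Y)):
        acc += X[k] * dY[k]
        out.append(acc)
    return out

# With the constant integrand X = 1, the transform telescopes back to Y itself,
# since sum_{k<=n} Delta Y_k = Y_n.
assert martingale_transform([1, 1, 1, 1], [2, 3, 1, 5]) == [2, 3, 1, 5]
```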
We recall some definitions and results in a discrete time setting that will be crucial for the next sections. We give three important decomposition theorems: Doob's decomposition, the multiplicative decomposition and the Kunita-Watanabe decomposition. We recall the definitions of quadratic variation and predictable bracket. We define the exponential and logarithm processes. Finally, we present Girsanov's Theorem using the quadratic variation and the predictable bracket.
The first one is Doob's decomposition (see [Doo53]), which allows one to decompose explicitly any integrable and adapted process as the sum of a predictable process and a martingale. This result is central in discrete time theory since it implies that any martingale is a semimartingale in a bigger filtration. The second theorem gives a multiplicative decomposition of any positive integrable adapted process into a martingale and a predictable process. The third theorem is the Kunita-Watanabe decomposition (see [KW67] and [FS04]).
Theorem 1.1.3 Doob's decomposition or semimartingale additive decomposition. Let $X$ be any integrable $\mathbb{A}$-adapted process; then $X$ is an $\mathbb{A}$-semimartingale, which can be represented in a unique way as $X = P + M$, where $P$ is an integrable $\mathbb{A}$-predictable process with $P_0 = 0$ and $M$ is an $\mathbb{A}$-martingale. More precisely,
$$M_0 := X_0 \,, \qquad M_n := X_0 + \sum_{k=1}^n \big( X_k - E(X_k \mid \mathcal{A}_{k-1}) \big) \,, \quad \forall n \ge 1 \,, \tag{1.1.1}$$
$$P_0 := 0 \,, \qquad P_n := \sum_{k=1}^n \big( E(X_k \mid \mathcal{A}_{k-1}) - X_{k-1} \big) \,, \quad \forall n \ge 1 \,. \tag{1.1.2}$$
Proof: Notice that, for each $n \ge 0$ fixed, the random variable $M_n$, defined by (1.1.1), is integrable, as a sum of integrable random variables, and
$$E(M_n \mid \mathcal{A}_{n-1}) = X_0 + E\big( X_n - E(X_n \mid \mathcal{A}_{n-1}) \mid \mathcal{A}_{n-1} \big) + \sum_{k=1}^{n-1} \big( X_k - E(X_k \mid \mathcal{A}_{k-1}) \big) = M_{n-1} \,,$$
which proves that $M$ is an $\mathbb{A}$-martingale.
On the other hand, $P$, defined in (1.1.2), is obviously an $\mathbb{A}$-predictable process, satisfies $P = X - M$ and is integrable. The uniqueness of the decomposition follows from the construction, since $E(X_k - X_{k-1} \mid \mathcal{A}_{k-1}) = P_k - P_{k-1}$ for all $k \ge 1$. Summing over $k = 1, \dots, n$ under the condition $P_0 = 0$ gives $P_n = \sum_{k=1}^n E(X_k - X_{k-1} \mid \mathcal{A}_{k-1})$. Consequently, $M_n = X_n - P_n$, for all $n \ge 0$, holds true.
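A classical worked example of Doob's decomposition (our illustration, not from the text): for a simple symmetric random walk $S$, the submartingale $X_n = S_n^2$ satisfies $E(X_n \mid \mathcal{A}_{n-1}) = X_{n-1} + 1$, so formulas (1.1.1)-(1.1.2) yield the predictable part $P_n = n$ and the martingale part $M_n = S_n^2 - n$.

```python
import random

random.seed(0)
steps = [random.choice([-1, 1]) for _ in range(10)]
S = [0]
for e in steps:
    S.append(S[-1] + e)
X = [s * s for s in S]                      # X_n = S_n^2, a submartingale

# Formula (1.1.2): P_n = sum_{k=1}^n ( E(X_k | A_{k-1}) - X_{k-1} ),
# where E(X_k | A_{k-1}) = X_{k-1} + 1 for the squared random walk.
P = [0]
for k in range(1, len(X)):
    P.append(P[-1] + (X[k - 1] + 1) - X[k - 1])
M = [x - p for x, p in zip(X, P)]           # martingale part M = X - P

assert P == list(range(len(X)))             # compensator: P_n = n
assert all(m == s * s - n for n, (m, s) in enumerate(zip(M, S)))
```

As Remark 1.1.5 predicts for a submartingale, the compensator $P$ is non-decreasing.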
Remark 1.1.4 The initialization $P_0 = 0$ and $M_0 = X_0$ is the traditional but arbitrary set-up in the Doob decomposition; it ensures uniqueness of the decomposition. In general, one can set $P_0 = c$ and $M_0 = X_0 - c$ for any constant $c$.
Remark 1.1.5 Notice that if $X$ is an integrable $\mathbb{A}$-supermartingale (resp. $\mathbb{A}$-submartingale) with Doob's decomposition $X = M + P$, then
$$M_{n-1} + P_n = E(X_n \mid \mathcal{A}_{n-1}) \le X_{n-1} = M_{n-1} + P_{n-1} \quad (\text{resp. } \ge) \,,$$
for all $n \ge 1$; therefore $P$ is a non-increasing (resp. non-decreasing) process.
Definition 1.1.6 Two A-martingales X and Y are A-orthogonal if the process XY is an A-martingale.
This is equivalent to $E(X_n \, \Delta Y_n \mid \mathcal{A}_{n-1}) = 0$, or to $E(\Delta X_n \, \Delta Y_n \mid \mathcal{A}_{n-1}) = 0$, for all $n \ge 1$.
Proposition 1.1.7 Let $X$ be an $\mathbb{A}$-adapted square-integrable semimartingale and $Y$ a square-integrable $\mathbb{A}$-martingale; then $X \cdot Y$ is an $\mathbb{A}$-martingale if and only if the $\mathbb{A}$-martingale part of $X$ is $\mathbb{A}$-orthogonal to $Y$. In particular, if $X$ is $\mathbb{A}$-predictable, $X \cdot Y$ is an $\mathbb{A}$-martingale.
Furthermore, if $XY$ is square-integrable, then
$$E\big( |(X \cdot Y)_n|^2 \big) = \sum_{k=0}^n E\big( |X_k|^2 \, |\Delta Y_k|^2 \big) \,, \quad \forall n \ge 0 \,.$$
Proof: Let $X = M + P$ be the Doob decomposition of $X$, with $M$ an $\mathbb{A}$-martingale and $P$ an $\mathbb{A}$-predictable process. For $n \ge 1$, we have $\Delta (X \cdot Y)_n = X_n \, \Delta Y_n$. Then, since $P_n \in \mathcal{A}_{n-1}$, the martingale property of $Y$ and the $\mathbb{A}$-orthogonality of $M$ and $Y$ give
$$E\big( \Delta (X \cdot Y)_n \mid \mathcal{A}_{n-1} \big) = P_n \, E(\Delta Y_n \mid \mathcal{A}_{n-1}) + E(M_n \, \Delta Y_n \mid \mathcal{A}_{n-1}) = 0 \,;$$
hence the martingale property is proved.
For the second part of the proof, since $XY$ is square-integrable, we have that $E\big( |(X \cdot Y)_n|^2 \big) < \infty$. On the other hand, for all $k \ge 1$, due to the fact that $X \cdot Y$ is a martingale, it is well known that
$$E\big( \Delta ((X \cdot Y)^2)_k \mid \mathcal{A}_{k-1} \big) = E\big( (\Delta (X \cdot Y)_k)^2 \mid \mathcal{A}_{k-1} \big) = E\big( X_k^2 \, (\Delta Y_k)^2 \mid \mathcal{A}_{k-1} \big) \,.$$
Then, taking expectations, we get
$$E\big( \Delta ((X \cdot Y)^2)_k \big) = E\big( X_k^2 \, (\Delta Y_k)^2 \big) \,. \tag{1.1.3}$$
Finally, summing (1.1.3) for $k$ from 0 to $n$, we obtain
$$E\big( |(X \cdot Y)_n|^2 \big) = \sum_{k=0}^n E\big( |X_k|^2 \, |\Delta Y_k|^2 \big) \,, \quad \forall n \ge 0 \,.$$
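The isometry above can be checked exactly on a finite probability space (our illustration, with arbitrary numbers): take $Y$ a 3-step $\pm 1$ random walk, which is a martingale, and a deterministic (hence predictable and square-integrable) integrand $X$, and compare both sides by enumerating all paths.

```python
from itertools import product

X = [0.0, 2.0, -1.0, 3.0]          # deterministic integrand; X_0 multiplies Delta Y_0 = Y_0 = 0

lhs = 0.0
paths = list(product([-1, 1], repeat=3))
for eps in paths:
    Y = [0]
    for e in eps:
        Y.append(Y[-1] + e)
    dY = [Y[0]] + [Y[k] - Y[k - 1] for k in range(1, 4)]
    integral = sum(X[k] * dY[k] for k in range(4))   # (X . Y)_3 on this path
    lhs += integral ** 2 / len(paths)                # E |(X . Y)_3|^2

# Right-hand side: E(|Delta Y_k|^2) = 1 for k >= 1 and 0 for k = 0 (since Y_0 = 0)
rhs = sum(X[k] ** 2 * (1.0 if k >= 1 else 0.0) for k in range(4))

assert abs(lhs - rhs) < 1e-12
```

Here both sides equal $2^2 + 1^2 + 3^2 = 14$: the cross terms in the expansion of the square vanish by independence of the increments.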
More generally, from Theorem 1.1.2, if $Y$ is a martingale and $X$ a predictable process such that, for any $n$, the random variable $X_n \, \Delta Y_n$ is integrable, then $X \cdot Y$ is an $\mathbb{A}$-martingale. Otherwise, it is a local martingale.
Lemma 1.1.8 Let $\xi$ be an integrable positive random variable; then $Y_n := E(\xi \mid \mathcal{A}_n) > 0$, i.e., $Y$ is a positive process.
Proof: For $n \ge 0$, we have, by monotonicity of the conditional expectation, that $Y_n \ge 0$. It remains to prove that $\mathbb{P}(Y_n = 0) = 0$. Consider the set $A = \{\omega \in \Omega : Y_n(\omega) = 0\} \in \mathcal{A}_n$; then $E(Y_n \mathbf{1}_A) = 0$. On the other hand, we have, by definition of the conditional expectation, that $E(Y_n \mathbf{1}_A) = E(\xi \mathbf{1}_A)$, hence $E(\xi \mathbf{1}_A) = 0$; but since $\xi$ is positive, necessarily $A$ has measure zero.
We end this section with the definitions of the optional and predictable projections and of the dual optional and dual predictable projections. The dual projection concepts were introduced originally in continuous time; for more details we refer to [Nik06], [JYC09, Section 5.2] or [HWY92, Chapter V]. In continuous time, dual projections are defined only for integrable finite variation processes; in discrete time, since any process has finite variation, we can define dual projections for any integrable process.
Definition 1.1.27 Optional and predictable projection. Let $X$ be an integrable process (not necessarily $\mathbb{A}$-adapted). We call the $\mathbb{A}$-optional (resp. $\mathbb{A}$-predictable) projection of $X$ the integrable $\mathbb{A}$-optional (resp. $\mathbb{A}$-predictable) process defined as ${}^{(o)}X_n = E(X_n \mid \mathcal{A}_n)$ for all $n \ge 0$ (resp. ${}^{(p)}X_n = E(X_n \mid \mathcal{A}_{n-1})$ for $n \ge 1$ and ${}^{(p)}X_0 = E(X_0 \mid \mathcal{A}_0)$).
Definition 1.1.28 Dual optional projection. Let $X$ be an integrable process (not necessarily $\mathbb{A}$-adapted). We call the dual $\mathbb{A}$-optional projection of $X$ the integrable $\mathbb{A}$-optional ($\mathbb{A}$-adapted) process $X^{(o)}$ defined as $\Delta X^{(o)}_n = E(\Delta X_n \mid \mathcal{A}_n)$ for all $n \ge 0$.
Remark 1.1.29 Notice that the dual $\mathbb{A}$-optional projection of $X$ satisfies
$$E\big( (Y \cdot X)_\infty \big) = E\big( (Y \cdot X^{(o)})_\infty \big)$$
for any non-negative bounded $\mathbb{A}$-optional process $Y$. Moreover, if $X$ is non-decreasing, then $X^{(o)}$ is also non-decreasing.
Definition 1.1.30 Dual predictable projection. Let $X$ be an integrable process (not necessarily $\mathbb{A}$-adapted). We call the dual $\mathbb{A}$-predictable projection of $X$ the integrable $\mathbb{A}$-predictable process $X^{(p)}$ defined as $X^{(p)}_0 = E(X_0 \mid \mathcal{A}_0)$ and $\Delta X^{(p)}_n = E(\Delta X_n \mid \mathcal{A}_{n-1})$ for all $n \ge 1$.
Remark 1.1.31 Notice that the dual $\mathbb{A}$-predictable projection of $X$ satisfies
$$E\big( (Y \cdot X)_\infty \big) = E\big( (Y \cdot X^{(p)})_\infty \big)$$
for any non-negative bounded $\mathbb{A}$-predictable process $Y$. Moreover, if $X$ is non-decreasing, then $X^{(p)}$ is also non-decreasing.
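Definition 1.1.30 can be illustrated in the simplest possible case (our toy example, not from the text): when $\mathbb{A}$ is the trivial filtration, conditional expectations reduce to plain expectations, so for the default indicator $X_n = \mathbf{1}_{\{\tau \le n\}}$ of a random time $\tau$, the increments of the dual predictable projection are $\Delta X^{(p)}_n = \mathbb{P}(\tau = n)$ and $X^{(p)}_n = \mathbb{P}(\tau \le n)$. We take $\tau$ geometric with parameter $p$ (an assumption for the example).

```python
p = 0.3
N = 6

# P(tau = n) for a geometric time tau >= 1; the n = 0 entry is 0.
pmf = [0.0] + [p * (1 - p) ** (n - 1) for n in range(1, N + 1)]

# Dual predictable projection of X_n = 1_{tau <= n} under the trivial
# filtration: its increments are the (deterministic) pmf of tau.
Xp, acc = [], 0.0
for n in range(N + 1):
    acc += pmf[n]
    Xp.append(acc)                 # X^(p)_n = P(tau <= n)

# As stated in Remark 1.1.31, X^(p) is non-decreasing (X is non-decreasing).
assert all(Xp[n] <= Xp[n + 1] + 1e-15 for n in range(N))
assert abs(Xp[3] - (1 - (1 - p) ** 3)) < 1e-12
```

With a non-trivial filtration the increments $E(\Delta X_n \mid \mathcal{A}_{n-1})$ become genuinely random; the trivial case only isolates the compensator mechanism.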
Theorem 1.1.32 Let $X$ be an integrable process (not necessarily $\mathbb{A}$-adapted). Then the processes $Y = {}^{(o)}X - X^{(o)}$ and $\widehat{Y} = {}^{(o)}X - X^{(p)}$ are $\mathbb{A}$-martingales.
Proof: The integrability of $Y$ and $\widehat{Y}$ is obvious. Then, we have that, for all $n \ge 1$,
$$E(\Delta Y_n \mid \mathcal{A}_{n-1}) = E\big( \Delta ({}^{(o)}X)_n - E(\Delta X_n \mid \mathcal{A}_n) \mid \mathcal{A}_{n-1} \big) = E(\Delta X_n \mid \mathcal{A}_{n-1}) - E(\Delta X_n \mid \mathcal{A}_{n-1}) = 0$$
and
$$E(\Delta \widehat{Y}_n \mid \mathcal{A}_{n-1}) = E\big( \Delta ({}^{(o)}X)_n - E(\Delta X_n \mid \mathcal{A}_{n-1}) \mid \mathcal{A}_{n-1} \big) = 0 \,.$$
In continuous time, the classical no-arbitrage theory is based on the notions of Arbitrage Opportunity and Free Lunch with Vanishing Risk, as developed by Delbaen & Schachermayer [DS94]. In our setting, we consider Definition 1.1.33 below for no arbitrage in a filtration, a price process being given (see for example [JS98] for more details about arbitrages in discrete time and [Bjo09] for the continuous setting).
In the enlargement of filtration setting, we pay attention to all $\mathbb{A}$-martingales. We give a "model-free" definition of arbitrage, in the sense that we do not specify the price process in the filtration $\mathbb{A}$, and we give conditions for the existence of a deflator for all the $\mathbb{A}$-martingales. The study of conditions so that, for a given $\mathbb{A}$-martingale $X$, there exists a deflator can be found in [CD14].
Definition 1.1.33 Let X be an A-semimartingale. We say that the model (X; A) has no arbitrages if there exists a positive A-martingale L, with L0 = 1, such that XL is an A-martingale.
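Definition 1.1.33 can be sketched in a one-period binomial model (our assumed example, not from the text): $X_1 = X_0 u$ with probability $q$ and $X_1 = X_0 d$ otherwise. A deflator is then the density process of the risk-neutral measure: $L_1 = q^*/q$ on the up branch and $(1-q^*)/(1-q)$ on the down branch, where $q^* = (1-d)/(u-d)$, which is positive as soon as $d < 1 < u$.

```python
X0, u, d, q = 100.0, 1.2, 0.9, 0.6     # arbitrary parameters with d < 1 < u

q_star = (1.0 - d) / (u - d)           # risk-neutral up-probability
L_up, L_down = q_star / q, (1.0 - q_star) / (1.0 - q)

# L is a positive martingale with L_0 = 1 ...
assert L_up > 0 and L_down > 0
assert abs(q * L_up + (1 - q) * L_down - 1.0) < 1e-12
# ... and XL is a martingale: E(X_1 L_1) = X_0 L_0 = X_0.
EXL = q * (X0 * u) * L_up + (1 - q) * (X0 * d) * L_down
assert abs(EXL - X0) < 1e-9
```

Here the drift of $X$ under $\mathbb{P}$ (expected gross return $qu + (1-q)d > 1$) is exactly compensated by the deflator, which is the discrete analogue of the change to an equivalent martingale measure.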
Table of contents:
Enlargement of filtration in discrete time
Indifference premium of life insurance contracts
Backward stochastic differential equations, enlargement of filtration and indifference price of information
Advanced backward stochastic differential equations
1 Enlargement of filtration in discrete time
1.1 Definitions, notation and some important results
1.1.1 Basic results
1.1.3 Filtration enlargement
1.2 Initial enlargement
1.2.1 Random walk bridge
1.2.2 Initial enlargement with a Z-valued random variable
1.3 Progressive enlargement
1.3.1 Definitions and first results
1.3.2 Some particular random times
1.3.3 Immersion setting
1.3.4 Representation Theorem
1.3.5 Equivalent probability measures
1.3.6 Cox model
1.3.8 Construction of $\tau$ from a given supermartingale
2 BSDEs and variable annuities
2.1 Model for variable annuities
2.1.1 The financial market model
2.1.2 Exit time of a variable annuity policy
2.2 Indifference fee rate for variable annuities
2.2.1 Indifference pricing
2.2.2 Utility maximization without variable annuities
2.2.3 Utility maximization with variable annuities
2.2.4 Indifference fee rate
2.2.5 Indifference fee rates for a policyholder
2.3 Variable Annuities pricing in the worst case
2.3.1 GMDB and GMLB contracts
2.3.2 Utility maximization and indifference pricing
2.3.4 Utility maximization between $T \wedge \tau$ and $T$
2.3.5 Proof of Lemma 2.2.7
3 BSDEs, filtration enlargement and IPI
3.1 BSDEs in different filtrations
3.1.1 Projection of the solution of a BSDE
3.1.2 Projection of the driver of a BSDE
3.2 Indifference price of information
3.2.1 Financial market and probability space
3.2.2 Utility maximization
3.2.3 Indifference price of information
4 Some existence results for ABSDEs
4.2 ABSDE with jump of type (4.0.1)
4.2.1 Study of the Equation (4.2.4)
4.2.2 Study of the Equation (4.2.5)
4.2.3 Integrability of the solutions
4.2.4 Uniqueness of the solution
4.3 ABSDE with jump of type (4.0.2)
4.3.1 Study of the Equation (4.3.2)
4.3.2 Study of the Equation (4.3.3)