Consistency of the MLE for the Class of Fully Dominated Markov Chains 


General-order Observation-driven Models

In many time series settings, solving Problem 1 and Problem 2.1 in Section 1.2.1, that is, establishing the consistency of the MLE, usually requires assuming or proving that the observation process is stationary and ergodic. Under the HMM framework, stationary and ergodic solutions of the model are inherited from the stationarity and ergodicity of the underlying hidden process, and are usually studied using a φ-irreducibility assumption. The situation is quite different, however, when the model under study is an ODM. In this latter case, a more subtle approach is required, at least when the observed process takes integer values. The difficulty is that, although the complete chain is a Markov chain in its own right, the hidden state process may be degenerate, being determined by its state equation. Several methods have been proposed to address this problem: the perturbation technique (see Fokianos and Tjøstheim [2011]), the contractivity approach (see Neumann [2011]), the weak dependence approach (see Doukhan et al. [2012]) and the approach based on the theory of Markov chains without irreducibility assumptions introduced in Douc et al. [2013]. Among these, the result recently obtained in Douc et al. [2013] appears able to cope with many models of interest in the class of first-order ODMs, regardless of whether the observation process is
discrete or continuous. However, whether this same result applies in the more general and flexible context of general-order ODMs, or of general-order GARCH-type models, has so far remained open. The same question applies to the asymptotic properties of the MLE for the class of general-order ODMs, as posed by Douc et al. [2013] and recently addressed in Tjøstheim [2015].
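To make the degeneracy concrete, consider the canonical first-order integer-valued ODM studied in this literature, the Poisson INGARCH(1,1) model of Fokianos and Tjøstheim [2011] (the notation below is illustrative, not necessarily the parametrization used later in the thesis):

```latex
% Poisson INGARCH(1,1): a first-order observation-driven model.
% Conditionally on the hidden intensity X_k, the count Y_k is Poisson,
% while the next intensity is a deterministic function of (X_k, Y_k):
\begin{aligned}
  Y_k \mid X_k &\sim \mathrm{Poisson}(X_k), \\
  X_{k+1}      &= \omega + a\,X_k + b\,Y_k, \qquad \omega > 0,\; a, b \ge 0.
\end{aligned}
% Because X_{k+1} is degenerate given (X_k, Y_k), the joint chain
% (X_k, Y_k) is not irreducible in the usual sense, which rules out
% the classical phi-irreducibility route to ergodicity.
```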

Approaches and Main Results

Throughout this thesis, the statistical inference is performed, unless otherwise specified, under the framework of well-specified models. The approaches and main results are outlined as follows.

Identification of the Maximizing Set Θ⋆

One of our primary objectives is to provide a general method for solving Problem 2.1 as described in Section 1.2.1, that is, showing that any parameter in the maximizing set Θ⋆ of the asymptotic normalized log-likelihood yields the same distribution for the observations as the true parameter θ⋆, under the general framework of partially dominated and partially observed
Markov models, which includes many interesting models such as HMMs and ODMs. Here, by partially dominated, we refer to a situation where the distribution of the observed variable is dominated by some fixed σ-finite measure defined on the observation space. The proof of the main result is of a probabilistic-analytic nature. The novel contribution of our approach
is that the characterization of the maximizing set Θ⋆ relies mainly on the existence and uniqueness of the invariant distribution of the Markov chain associated to the complete data, regardless of its rate of convergence to equilibrium. This is in deep contrast with existing results for these models, where the identification of Θ⋆ is established under the
assumption of exponential separation of measures (see Douc et al. [2011]) or geometric ergodicity (see Douc et al. [2004]). We also demonstrate how this general result applies to the classes of HMMs and ODMs. As will be shown later, our general result can readily be applied to solve Problem 2.1 for some otherwise intractable versions of the HMMs and ODMs mentioned earlier in Section 1.2.1. All of these results are reported in the journal paper by Douc et al. [2014], to appear in The Annals of Applied Probability.
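In symbols (with notation we introduce here for illustration; the precise statement is in Chapter 2), the identification result says that maximizing the asymptotic normalized log-likelihood can only recover parameters that are observationally equivalent to the true one:

```latex
% \ell(\theta): the a.s. limit of the normalized log-likelihood.
% \Theta^\star: its maximizing set; \theta^\star: the true parameter.
\Theta^\star = \operatorname*{arg\,max}_{\theta \in \Theta} \ell(\theta),
\qquad
\theta \in \Theta^\star
\;\Longrightarrow\;
\mathbb{P}^{\theta} = \mathbb{P}^{\theta^\star},
% where \mathbb{P}^{\theta} denotes the stationary law of the
% observation process under the parameter \theta.
```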

Consistency of the MLE for Fully Dominated and Partially Observed Markov Models

In addition to solving Problem 2.1, our next objective is to investigate Problem 1 within the general context of PMMs. In this thesis, however, this task is performed separately in two directions; namely, we discuss this issue independently for the class of fully dominated PMMs and for the class of ODMs. A fully dominated PMM is a partially observed Markov model whose conditional marginal laws of the observation and hidden state processes both admit probability densities with respect to some σ-finite measures. Many HMMs, Markov switching models and Markov chains in random environments, for example, are members of this class. Solving Problem 1 for fully dominated PMMs, that is, establishing the convergence of the MLE, is carried out using the approach employed by Douc and Moulines [2012] to derive the convergence of the MLE to the minimizing set of the relative entropy rate in misspecified HMMs. This approach consists in establishing a key property, the exponential forgetting of the filtering distribution, using the coupling method originated by Kleptsyna and Veretennikov [2008] and further refined by Douc et al. [2009]. Once the forgetting of the filtering distribution is established, and under the assumption that the observation process is strict-sense stationary and ergodic, the normalized log-likelihood can be approximated by an appropriately defined stationary version of itself.
Then, applying results from classical ergodic theory, the latter is shown to converge to a limit that is a functional of the parameter, say ℓ(θ), where θ is the parameter. By a standard argument (see for instance Pfanzagl [1969]), it then follows that the maximizer of the normalized log-likelihood converges almost surely to a subset Θ⋆ of the parameter set at which this limit ℓ(θ) is maximized. Using the same technique, we moreover obtain an intermediary result showing that the block-type MLE converges to a maximizing set in a similar sense to the classical MLE, regardless of whether or not the model is well-specified. These results are shown under conditions similar to those derived in Douc and Moulines [2012]. Moreover, under the same assumptions, Problem 2.1 is also solved, yielding the equivalence-class consistency of the MLE for the class of fully dominated PMMs.
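The chain of arguments above can be summarized as follows (again with illustrative notation, not the thesis's own):

```latex
% Step 1: forgetting of the filtering distribution + ergodicity of the
% observations yield an a.s. limit of the normalized log-likelihood:
n^{-1} \log L_n(\theta) \;\xrightarrow[n\to\infty]{\text{a.s.}}\; \ell(\theta).
% Step 2: by a standard argument (Pfanzagl [1969]), the MLE
% \hat\theta_n then converges to the maximizing set:
\lim_{n\to\infty} \Delta\bigl(\hat\theta_n, \Theta^\star\bigr) = 0
\quad \text{a.s.},
% where \Delta(\cdot, \Theta^\star) denotes the distance to the set
% \Theta^\star = \arg\max_{\theta} \ell(\theta).
```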


Table of contents:

1 Introduction 
1.1 Background
1.2 Motivation and Problem Statements
1.2.1 Consistency
1.2.2 General-order Observation-driven Models
1.3 Approaches and Main Results
1.3.1 Identification of the Maximizing Set Θ⋆
1.3.2 Consistency of the MLE for Fully Dominated and Partially Observed Markov Models
1.3.3 Consistency of the MLE for Observation-driven Models
1.3.4 Generalizations of Observation-driven Models
1.4 Organization of the Thesis
2 Partially Observed Markov Chains: Identification of the Maximizing Set of the Asymptotic Normalized Log-likelihood
2.1 Introduction
2.2 A General Approach to Identifiability
2.2.1 General Setting and Notation: Partially Dominated and Partially Observed Markov Chains
2.2.2 Main Result
2.2.3 Construction of the Kernel as a Backward Limit
2.3 Application to Hidden Markov Models
2.3.1 Definitions and Assumptions
2.3.2 Equivalence-class Consistency
2.3.3 A Polynomially Ergodic Example
2.4 Application to Observation-driven Models
2.4.1 Definitions and Notation
2.4.2 Identifiability
2.4.3 Examples
2.5 Postponed Proofs
2.5.1 Proof of Eq. (2.10)
2.5.2 Proof of Lemma 2.2.9
2.5.3 Proof of Lemma 2.3.9
3 Consistency of the MLE for the Class of Fully Dominated Markov Chains 
3.1 Introduction
3.2 Consistency of the Block-type Maximum Likelihood Estimators
3.2.1 Notation and Definitions
3.2.2 Forgetting of Initial Distribution for the Conditional L-likelihood
3.2.3 Convergence of the Maximum L-likelihood Estimator
3.3 Consistency of the MLE for Fully Dominated Markov Models
3.4 Postponed Proofs
3.4.1 Proof of Proposition
3.4.2 Proof of Proposition
3.4.3 Proof of Lemma
3.4.4 Proof of Proposition
3.4.5 Some Useful Lemmas
4 Observation-driven Models: Handy Sufficient Conditions for the Convergence of the MLE 
4.1 Introduction
4.2 Definitions and Notation
4.3 Main Results
4.3.1 Preliminaries
4.3.2 Convergence of the MLE
4.3.3 Ergodicity
4.4 Examples
4.4.1 NBIN-GARCH Model
4.4.2 NM-GARCH Model
4.4.3 Threshold IN-GARCH Model
4.5 Numerical Experiments
4.5.1 Numerical Procedure
4.5.2 Simulation Study
4.6 Postponed Proofs
4.6.1 Convergence of the MLE
4.6.2 Ergodicity
4.6.3 Proof of Lemma
5 General-order Observation-driven Models: Ergodicity, Consistency and Asymptotic Normality of the MLE
5.1 Introduction
5.2 Definitions and Notation
5.3 Main Results
5.3.1 Preliminaries
5.3.2 Convergence of the MLE
5.3.3 Asymptotic Normality of the MLE
5.3.4 Ergodicity
5.4 Examples
5.4.1 GARCH(p, p) Model
5.4.2 Log-linear Poisson GARCH(p, p) Model
5.4.3 NBIN-GARCH(p, p) Model
5.5 Empirical Study
5.5.1 Sharpness of Ergodicity Condition (5.56) for the Log-linear Poisson GARCH Model
5.5.2 Data Example
5.6 Postponed Proofs
5.6.1 Proof of Lemma 5.3.4
5.6.2 Proof of Theorem 5.3.12
5.6.3 Proof of Lemma 5.4.2
5.6.4 Proof of Lemma 5.4.11
6 Conclusions and Future Perspectives 
Bibliography

