Function spaces, wavelet bases and approximation theory 


Minimax risk analysis.

Let us recall that the minimax framework was introduced at the beginning of Chapter 1. We focus here on the minimax risk $R_n(\mathcal{F})$ over a function space $\mathcal{F}$, where $\mathcal{F}$ will essentially be a Besov space $B^s_{\tau,q}$ on a subset of $\mathbb{R}^d$ (see Chapter 3 for a proper definition of Besov spaces). As described in Chapter 1, one may think of this domain as the unit cube $[0,1]^d$ of $\mathbb{R}^d$. The study of the minimax rate $R_n(\mathcal{F})$ most often divides into two distinct steps: on the one hand, determining a lower bound $n^{-\gamma}$ on the minimax risk $R_n(\mathcal{F})$ and, on the other hand, showing that a specific estimation procedure attains this very same rate (up to a constant and/or $\log n$ factor). In this chapter, we will focus on wavelet estimation procedures and recall that they are (nearly) minimax optimal over Besov spaces.
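Since Chapter 1 is not reproduced here, let us recall, for reference only, the standard form of the minimax risk over a class $\mathcal{F}$; the $L^p$ loss below matches the bias-variance decomposition of the next paragraph, and the precise loss adopted in Chapter 1 may differ in its normalization.
\[
  R_n(\mathcal{F}) \;=\; \inf_{\hat f_n}\, \sup_{f \in \mathcal{F}}\, \mathbb{E}_f \bigl\|\hat f_n - f\bigr\|_{L^p}^p ,
\]
where the infimum runs over all estimators $\hat f_n$ built from the $n$ observations.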

Upper-bound method, bias-variance trade-off.

Let us assume that we have at our disposal a multi-resolution analysis (MRA) consisting of a sequence of approximation spaces $(V_j)_{j\ge 0}$ such that $\dim V_j = N = 2^{jd}$. In addition, denote by $f_j^*$ the best linear approximation of $f$ in $V_j$ in $L^p$-norm, and by $f_n$ an estimator of $f_j^*$ in $V_j$, built upon the $n$ sample points $(X_i, Y_i)$ and such that $\mathbb{E} f_n = f_j^*$. The error incurred by estimating $f$ with $f_n$ then breaks down into two terms:
\[
  2^{1-p}\,\mathbb{E}\|f_n - f\|_{L^p}^p \;\le\; \mathbb{E}\|f_n - f_j^*\|_{L^p}^p \;+\; \|f - f_j^*\|_{L^p}^p .
\]
By extension from the case $p = 2$, the first term on the right-hand side is known as the variance term, while the second is known as the bias term. Notice that the bias term is deterministic, so that no expectation is needed there. As detailed in Section 3.2.3 below, the bias term is a decreasing function of the model complexity, that is, of $\dim V_j$. On the contrary, the variance term is an increasing function of the model complexity. In this linear estimation framework, statisticians look for the model complexity $N = 2^{jd}$ that ideally balances bias against variance, that is, the optimal model complexity $N_s$ or, equivalently, the optimal resolution level $j_s$, which typically depends on the smoothness $s$ of $f$.
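To fix ideas, here is the standard balancing computation in the $L^2$ case, under the usual bounds (variance of order $2^{jd}/n$ and squared bias of order $2^{-2js}$ for $f$ in a ball of smoothness $s$); the exact constants and assumptions are those spelled out later in the chapter.
\[
  \frac{2^{jd}}{n} \;\asymp\; 2^{-2js}
  \quad\Longrightarrow\quad
  2^{j_s} \;\asymp\; n^{\frac{1}{2s+d}},
  \qquad
  \mathbb{E}\|f_n - f\|_{L^2}^2 \;\lesssim\; n^{-\frac{2s}{2s+d}} .
\]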

Lower-bound methods, regular versus sparse case.

Roughly speaking, the lower-bound toolbox consists of Fano's and Assouad's lemmas (see [73, Chap. 2]) as well as the Bayesian minimax theorem (see [74, Chap. 4]). In particular, the study of (a lower bound on) $R_n(B^s_{\tau,q})$ leads to two different regimes, termed the sparse case and the regular case. Assume that $f$ belongs to the unit ball $U(B^s_{\tau,q})$ of the Besov space $B^s_{\tau,q}$ and write $\nu = s - \frac{dp}{2}\bigl(\frac{1}{\tau} - \frac{1}{p}\bigr)$. All the results that follow remain valid for $f$ in a ball of finite radius $M$, but for ease of notation we stick to the unit ball in the sequel. Two cases then arise, according to whether $\nu > 0$ (the regular case) or $\nu \le 0$ (the sparse case). A lower bound on the minimax risk can be found in [75, Theorem 2] in the density estimation setting when the domain is $\mathbb{R}^d$. Still in the density estimation setting, a similar result has been obtained in [47, Theorem 11] with needlet frames when the domain is $\mathbb{S}^d$, the hypersphere of $\mathbb{R}^{d+1}$. Needlet frames were introduced in Chapter 1 and are further detailed in Chapter 6.
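For orientation, the rates usually associated with the two regimes in the wavelet density estimation literature take the following form; the precise statements, assumptions and possible additional logarithmic factors (in particular at the boundary $\nu = 0$) are those of the references cited above.
\[
  \inf_{\hat f_n}\,\sup_{f \in U(B^s_{\tau,q})}
  \Bigl(\mathbb{E}\bigl\|\hat f_n - f\bigr\|_{L^p}^p\Bigr)^{1/p}
  \;\asymp\;
  \begin{cases}
    \, n^{-\frac{s}{2s+d}} & \text{regular case } (\nu > 0),\\[6pt]
    \, \Bigl(\dfrac{\log n}{n}\Bigr)^{\frac{s - d/\tau + d/p}{2(s - d/\tau) + d}} & \text{sparse case } (\nu \le 0).
  \end{cases}
\]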

Optimal wavelet estimation on a uniform design.

Over the last two decades, linear and non-linear wavelet estimators have proved to be very powerful tools in a wide range of applications, and in particular in statistical estimation. From a practical standpoint, these estimators are built upon a Euclidean lattice, which makes them computationally very appealing. In fact, computing a wavelet estimator everywhere on the domain boils down to estimating a finite number of wavelet coefficients. This is a definite advantage over kernel estimators, which must be recomputed at each evaluation point. On the flip side, and as detailed in Chapter 5, wavelet estimators lose some of the flexibility offered by kernel estimators, precisely because of this underlying lattice structure.
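To make the computational point concrete, the following minimal sketch implements a linear wavelet regression estimator in the simplest possible setting: Haar scaling functions ($r = 1$), dimension $d = 1$, uniform design on $[0,1]$. The function names (haar_scaling, linear_haar_estimator) and the toy target function are illustrative only and are not part of the thesis.

import numpy as np

def haar_scaling(j, k, x):
    """Haar scaling function phi_{j,k}(x) = 2^{j/2} * 1_{[k 2^{-j}, (k+1) 2^{-j})}(x)."""
    return 2.0 ** (j / 2) * ((x >= k * 2.0 ** (-j)) & (x < (k + 1) * 2.0 ** (-j)))

def linear_haar_estimator(X, Y, j):
    """Linear wavelet estimator at resolution level j (Haar case, d = 1, design on [0, 1]).

    On a uniform design, the empirical coefficient (1/n) * sum_i Y_i * phi_{j,k}(X_i)
    is an unbiased estimate of the scaling coefficient alpha_{j,k} = <f, phi_{j,k}>.
    """
    ks = np.arange(2 ** j)
    alpha_hat = np.array([np.mean(Y * haar_scaling(j, k, X)) for k in ks])

    # The estimator is fully described by its 2^j coefficients; evaluating it
    # anywhere on [0, 1] is then a simple weighted sum of scaling functions.
    def f_hat(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return sum(alpha_hat[k] * haar_scaling(j, k, x) for k in ks)

    return f_hat

# Toy usage on a uniform design with Gaussian noise.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(0.0, 1.0, n)
f = lambda x: np.sin(2 * np.pi * x)
Y = f(X) + 0.3 * rng.standard_normal(n)
f_hat = linear_haar_estimator(X, Y, j=4)      # 2^4 = 16 coefficients
print(np.mean((f_hat(X) - f(X)) ** 2))        # rough empirical squared error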
From a theoretical standpoint, wavelet estimators have been shown to be (nearly) minimax optimal over wide Besov scales in the density estimation setting. As stated previously, these results extend naturally to nonparametric regression on a random design, provided the design density $\mu$ is uniform on the domain. We will therefore quote results from the density estimation setting, it being understood that they are equally valid for regression on a uniform design, under appropriate assumptions on the regression noise.


Multi-resolution analysis, wavelets and notations.

The first results building on multi-resolution analysis (MRA) and wavelet bases (see [76, 77]) emerged in the nonparametric statistics literature in the early 1990s (see [78, 79, 80, 81, 75]). Multi-resolution analyses and the associated wavelets are detailed below in Section 3.2.1. For the reader's convenience, we recall here the main notations. Fix $r \in \mathbb{N}$ and consider the Daubechies $r$-MRA built upon the corresponding Daubechies scaling functions $\varphi_{j,k}$ and associated wavelets $\psi_{j,k}$ (see Section 3.2.1 for terminology). Recall that it consists of nested approximation spaces $(V_j)_{j\ge 0}$ that reproduce polynomials up to degree $r - 1$. Furthermore, we denote by $P_j$ the orthogonal projection operator onto the approximation space $V_j$ (see eq. (3.4)) and by $R_j = I - P_j$ the corresponding remainder operator. Finally, we denote by $W_j$ the projector onto the detail spaces up to level $j - 1$ (see eq. (3.5)). Recall that $P_j f$ and $W_j f$ are two representations of the projection of $f$ onto $V_j$: the former in terms of the scaling coefficients $\alpha_{j,k} = \langle f, \varphi_{j,k}\rangle$ and the latter in terms of the wavelet coefficients $\beta_{j,k} = \langle f, \psi_{j,k}\rangle$.
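In symbols, and with the indexing conventions of Section 3.2.1, the two representations of the projection onto $V_j$ read as follows (this is the standard MRA decomposition):
\[
  P_j f \;=\; \sum_{k} \alpha_{j,k}\,\varphi_{j,k},
  \qquad
  W_j f \;=\; \sum_{k} \alpha_{0,k}\,\varphi_{0,k} \;+\; \sum_{l=0}^{j-1}\sum_{k} \beta_{l,k}\,\psi_{l,k},
  \qquad
  P_j f = W_j f .
\]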

Table of contents:

1 Introduction 
1.1 Some problems in nonparametric statistics
1.2 Presentation of the minimax theoretical framework
1.3 Localized linear wavelet regression on a random design
1.4 Application of localized wavelet estimation procedures to classification
1.5 Regression on the sphere with needlets
1.6 Contribution to an inverse problem in financial mathematics
1.7 Appendix
2 Optimal wavelet regression on a uniform design 
2.1 Minimax risk analysis
2.2 Optimal wavelet estimation on a uniform design
2.3 Adaptation
3 Function spaces, wavelet bases and approximation theory 
3.1 Function spaces on a domain
3.2 Besov spaces, wavelet bases and multi-resolution analysis
3.3 Linear and nonlinear approximation theory
4 From orthonormal bases to frames 
4.1 Bases and their limitations
4.2 Bessel sequences, orthonormal bases, Riesz bases and frames
4.3 Frames and signal processing
5 Classification via local multi-resolution projections 
5.1 Introduction
5.2 Our results
5.3 Literature review
5.4 A primer on local multi-resolution estimation under (CS1)
5.5 Notations
5.6 Construction of the local estimator η@
5.7 The results
5.8 Refinement of the results
5.9 Relaxation of assumption (S1)
5.10 Classification via local multi-resolution projections
5.11 Simulation study
5.12 Proofs
5.13 Appendix
6 Needlet-based regression on the hyper-sphere 
6.1 Introduction
6.2 Needlets and their properties
6.3 Besov spaces on the sphere and needlets
6.4 Setting and notations
6.5 Needlet estimation of f on the sphere
6.6 Minimax rates for Lp norms and Besov spaces on the sphere
6.7 Simulations
6.8 Proof of the minimax rate
6.9 Proof of Proposition 6.8.1
6.10 Proof of Proposition 6.8.2
7 Stable spectral risk-neutral density recovery 
7.1 Introduction
7.2 Definitions and setting
7.3 Results relative to γ∗γ and γγ∗
7.4 Results relative to γ and γ∗
7.5 Other results relative to γ∗γ, γγ∗, γ∗ and γ
7.6 Explicit computation of (λk), (ϕk) and (ψk)
7.7 The spectral recovery method (SRM)
7.8 Simulation study
