
## Discrete subsets as counting measures

Once we have defined the topological space, we move to defining the corresponding Borel $\sigma$-algebra $\mathcal{B}$: the smallest $\sigma$-algebra on $\mathbb{D}$ that contains all open subsets of $\mathbb{D}$. Its elements are called Borel sets. A measure $\mu$ is a function from $\mathcal{B}$ to $\mathbb{R}_+ \cup \{+\infty\}$ that is non-negative, $\sigma$-additive and vanishes on the empty set. A locally finite measure is a measure $\mu$ on $(\mathbb{D}, \mathcal{B})$ such that for every $B \in \mathcal{B}$ that is relatively compact (its closure is compact) we have $\mu(B) < +\infty$. (2.1)

Denote by $M(\mathbb{D})$ the set of locally finite measures on $\mathbb{D}$. We endow $M(\mathbb{D})$ with the $\sigma$-algebra $\mathcal{M}(\mathbb{D})$ generated by the evaluation maps, defined for every Borel set $B \in \mathcal{B}$ by
$$\Phi_B : M(\mathbb{D}) \to \mathbb{R}_+ \cup \{+\infty\}, \qquad \mu \mapsto \mu(B).$$

In other words, $\mathcal{M}(\mathbb{D})$ is the smallest $\sigma$-algebra for which all the evaluation maps $\Phi_B$ are measurable. This $\sigma$-algebra is generated by the cylinder sets.
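As a concrete illustration of the section title, a discrete subset of $\mathbb{D} = \mathbb{R}$ can be viewed as a counting measure, and the evaluation map $\Phi_B$ simply counts the points falling in $B$. A minimal Python sketch of this viewpoint, with intervals playing the role of Borel sets (the point configuration is purely illustrative):

```python
import numpy as np

# A discrete subset of D = R, seen as a locally finite counting measure.
points = np.array([-1.0, 0.5, 2.0, 3.7])

def mu(b_low, b_high):
    """Evaluation map Phi_B: count the points falling in B = [b_low, b_high]."""
    return int(np.sum((points >= b_low) & (points <= b_high)))

print(mu(0.0, 3.0))   # → 2: only the points 0.5 and 2.0 lie in [0, 3]
```

Every bounded interval receives a finite count, which is exactly the local finiteness required of elements of $M(\mathbb{D})$.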

### The mean measure of a point process

The description of a point process through the cylinder sets
$$\{\gamma(B_1) = n_1, \dots, \gamma(B_m) = n_m\}$$

is convenient in some cases, such as the binomial process and the Poisson point process. However, in general the probability
$$\mathbb{P}\big(\gamma(B_1) = n_1, \dots, \gamma(B_m) = n_m\big) \qquad (2.14)$$

has no tractable formula, and it is more convenient to work with an alternative description offered by the moment measures, defined in the following. We start with the mean measure.

Definition 2.5. Let $\gamma$ be a point process. The mean measure of $\gamma$ is the measure $\gamma_1$ defined by
$$\forall B \in \mathcal{B}, \quad \gamma_1(B) = \mathbb{E}\,\gamma(B). \qquad (2.15)$$
The mean measure is well defined but may take infinite values.

Example 2.1. Let $\gamma$ be a Poisson point process associated to an intensity $\omega$. It is immediate from (2.12) that
$$\forall B \in \mathcal{B}, \quad \mathbb{E}\,\gamma(B) = \omega(B). \qquad (2.16)$$
Therefore, the mean measure of $\gamma$ is equal to $\omega$.
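Relation (2.16) is easy to check by Monte Carlo. The following sketch assumes a homogeneous Poisson point process of intensity $\lambda\,\mathrm{d}x$ on $[0,1]$ and the illustrative interval $B = [0.2, 0.6]$; it estimates $\mathbb{E}\,\gamma(B)$ and compares it with $\omega(B) = \lambda\,|B|$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 5.0                 # homogeneous intensity lambda on [0, 1] (illustrative)
B = (0.2, 0.6)            # test Borel set: the interval [0.2, 0.6]

def gamma_B(lam, B, rng):
    """One draw of gamma(B): sample a homogeneous Poisson point process
    on [0, 1] (Poisson number of points, uniform locations) and count B."""
    pts = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
    return np.sum((pts >= B[0]) & (pts <= B[1]))

est = np.mean([gamma_B(lam, B, rng) for _ in range(100_000)])
print(est)                # close to omega(B) = lam * (0.6 - 0.2) = 2.0
```

The empirical mean of $\gamma(B)$ concentrates around $\lambda\,|B| = 2$, as (2.16) predicts.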

**High-order moment measures of a point process**

The mean measure gives an idea of the random measure $\gamma$ evaluated on one Borel set $B \in \mathcal{B}$, yet it does not capture the correlations between the evaluations of $\gamma$ on a family of Borel sets $B_1, \dots, B_L \in \mathcal{B}$. There are many ways to estimate this interaction. For example, we can define the $L$-th power of $\gamma$, a point process on the measurable space $\mathbb{D}^L$ equipped with the product $\sigma$-algebra, defined by
$$\forall \prod_{\ell \in [L]} B_\ell \in \mathcal{B}^L, \quad \gamma^{L}\Big(\prod_{\ell \in [L]} B_\ell\Big) = \prod_{\ell \in [L]} \gamma(B_\ell), \qquad (2.23)$$
and we consider the mean measure of $\gamma^{L}$ defined by
$$\gamma_1^{L}\Big(\prod_{\ell \in [L]} B_\ell\Big) = \mathbb{E}\,\gamma^{L}\Big(\prod_{\ell \in [L]} B_\ell\Big) = \mathbb{E}\prod_{\ell \in [L]} \gamma(B_\ell). \qquad (2.24)$$
The measure $\gamma_1^{L}$ is also called the $L$-th moment measure of $\gamma$. The definition of $\gamma_1^{L}$ is straightforward, yet it is not convenient in the study of DPPs that will be presented later. An alternative definition that is more compatible with the structure of DPPs is slightly more technical, and it is given in the following.
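To make (2.24) concrete for $L = 2$: for a Poisson point process of intensity $\omega$, the second moment measure satisfies $\mathbb{E}\,\gamma(B_1)\gamma(B_2) = \omega(B_1)\omega(B_2) + \omega(B_1 \cap B_2)$, the second term coming from the points shared by $B_1$ and $B_2$. A minimal Monte Carlo sketch, assuming a homogeneous intensity on $[0,1]$ and two illustrative overlapping intervals:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 5.0                        # homogeneous intensity on [0, 1] (illustrative)
B1, B2 = (0.0, 0.5), (0.3, 0.8)  # two overlapping intervals

def product_count(lam, rng):
    """One draw of gamma(B1) * gamma(B2) for a homogeneous Poisson process."""
    pts = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
    n1 = np.sum((pts >= B1[0]) & (pts <= B1[1]))
    n2 = np.sum((pts >= B2[0]) & (pts <= B2[1]))
    return n1 * n2

est = np.mean([product_count(lam, rng) for _ in range(200_000)])
# Poisson case: E[gamma(B1) gamma(B2)] = w(B1) w(B2) + w(B1 ∩ B2)
theory = (lam * 0.5) * (lam * 0.5) + lam * 0.2   # 6.25 + 1.0 = 7.25
print(est, theory)
```

The estimate exceeds the product of the means, showing that the second moment measure indeed records the correlation between $\gamma(B_1)$ and $\gamma(B_2)$.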

#### The definition of DPPs under weaker assumptions

The usual assumptions (2.43), (2.49) and (2.50) made previously can be relaxed in many ways. We review two relaxations that are common in the literature.

First, the assumption (2.49) is usually relaxed so that $k$ is only assumed to be locally square integrable: for any compact set $C$ of $\mathbb{D}$,
$$\int_{C \times C} |k(x,y)|^2 \,\mathrm{d}\omega(x)\,\mathrm{d}\omega(y) < +\infty.$$
Under this relaxed assumption, the domain of definition of the integration operator $S_k$ should be restricted to elements of $L^2(\mathrm{d}\omega)$ that vanish a.e. outside a compact subset of $\mathbb{D}$. Moreover, Theorem 2.4 remains valid, and Theorem 2.5 and its consequences are still valid for the restricted integration operators: if $\gamma$ follows the distribution of a DPP of kernel $k$ and reference measure $\mathrm{d}\omega$, then for every compact set $C \subset \mathbb{D}$, $\gamma \cap C$ follows the distribution of a DPP of kernel $k$ and reference measure $\mathrm{d}\omega_C$ defined by
$$\forall B \in \mathcal{B}, \quad \omega_C(B) = \omega(B \cap C). \qquad (2.57)$$
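The restriction (2.57) is elementary to implement. A minimal sketch for $\mathbb{D} = \mathbb{R}$, taking $\omega$ to be the Lebesgue measure and intervals as Borel sets (all names are illustrative):

```python
def w(B):
    """Lebesgue measure of the interval B = (low, high) on the real line."""
    return max(0.0, B[1] - B[0])

C = (0.0, 1.0)  # compact window

def w_C(B):
    """Restricted reference measure: w_C(B) = w(B ∩ C), as in (2.57)."""
    return w((max(B[0], C[0]), min(B[1], C[1])))

print(w_C((0.5, 2.0)))  # → 0.5: only the part of B inside C contributes
```

Borel sets disjoint from $C$ receive measure zero, so the restricted DPP only charges configurations inside the compact window $C$.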

The Hermitian symmetry of $k$ can be relaxed too: there are examples of DPPs associated to non-Hermitian kernels; see Section 2.5 in (Soshnikov, 2000) and (Brunel, 2018). Yet, many interesting aspects of DPPs require this condition: negative correlations, tractable numerical simulation.

**Table of contents :**

**1 Introduction**

1.1 Subsampling problems with linear structure

1.2 Determinantal sampling

**2 Determinantal point processes**

2.1 Point processes

2.2 Determinantal point processes

**3 Column subset selection using projection DPPs**

3.1 Notation

3.2 Related work

3.3 The proposed algorithm

3.4 Main results

3.5 Numerical experiments

3.6 Discussion

3.7 Proofs

Appendices

3.A Principal angles and the Cosine Sine decomposition

3.B Majorization and Schur convexity

3.C Another interpretation of the k-leverage scores

3.D Generating orthogonal matrices with prescribed leverage scores

**4 Kernel quadrature using DPPs**

4.1 Related work

4.2 The kernel quadrature framework

4.3 Projection DPP for optimal kernel quadrature

4.4 Numerical simulations

4.5 Discussion

4.6 Proofs

**5 Kernel interpolation using volume sampling**

5.1 Introduction

5.2 Three topics on kernel interpolation

5.3 Volume Sampling and DPPs

5.4 Main Results

5.5 Sketch of the Proofs

5.6 Numerical Simulations

5.7 Discussion

5.8 Proofs

**6 Conclusion and perspectives**

6.1 Conclusion

6.2 Perspectives

Résumé en français

Notations