Table of contents
1 Introduction
1.1 Illustrative High-dimensional Examples
1.1.1 Large Sample Covariance Matrices
1.1.2 Large Kernel Matrices
1.2 From GMM to Concentration through GAN
1.3 Some ML methods under Concentration
1.3.1 Behavior of Gram Matrices
1.3.2 Behavior of Kernel Matrices
1.3.3 Beyond Kernels to Neural Networks
1.3.4 Summary of Section 1.3
1.4 Outline and Contributions
2 Random Matrix Theory & Concentration of Measure Theory
2.1 Fundamental Random Matrix Theory Notions
2.1.1 The resolvent matrix notion
2.1.2 Stieltjes transform and spectral measure
2.1.3 Cauchy’s integral and statistical inference
2.1.4 Deterministic and random equivalents
2.2 Fundamental Random Matrix Theory Results
2.2.1 Matrix identities and key lemmas
2.2.2 The Marčenko-Pastur Law
2.2.3 Random matrices of mixture models
2.3 Connections with machine learning through spiked models
2.3.1 Simple example of unsupervised learning
2.3.2 Simple example of supervised learning
2.4 Extensions with concentration of measure theory
2.4.1 The notion of concentrated vectors
2.4.2 Resolvent of the sample covariance matrix
3 Universality of Large Random Matrices
3.1 GAN Data are Concentrated Data
3.2 Random Gram Matrices of Concentrated Data
3.2.1 Motivation
3.2.2 Model and Main Results
3.2.2.1 Mixture of Concentrated Vectors
3.2.2.2 Behavior of the Gram matrix of concentrated vectors
3.2.2.3 Application to GAN-generated Images
3.2.3 Central Contribution
4 Random Kernel Matrices of Concentrated Data
4.1 Kernel Spectral Clustering
4.1.1 Motivation
4.1.2 Model and Main Results
4.1.2.1 Behavior of Large Kernel Matrices
4.1.2.2 Application to GAN-generated Images
4.1.3 Central Contribution and Perspectives
4.2 Sparse Principal Component Analysis
4.2.1 Motivation
4.2.2 Model and Main Results
4.2.2.1 Random Matrix Equivalent
4.2.2.2 Application to sparse PCA
4.2.2.3 Experimental Validation
4.2.3 Central Contribution and Perspectives
5 Beyond Kernel Matrices, to Neural Networks
5.1 A random matrix analysis of Softmax layers
5.1.1 Motivation
5.1.2 Model setting: the Softmax classifier
5.1.3 Assumptions & Main Results
5.1.3.1 Concentration of the weight vector of the Softmax classifier
5.1.3.2 Experimental validation
5.1.4 Central Contribution
5.2 A random matrix analysis of Dropout layers
5.2.1 Motivation
5.2.2 Model and Main Results
5.2.2.1 Deterministic equivalent
5.2.2.2 Generalization Performance of α-Dropout
5.2.2.3 Training Performance of α-Dropout
5.2.3 Experiments
5.2.4 Central Contribution and Perspectives
6 Conclusions & Perspectives
6.1 Conclusions
6.2 Limitations and Perspectives
A Thesis Summary in French
B Practical Contributions
B.1 Generative Collaborative Networks for Super-resolution
B.1.1 Motivation
B.1.2 Proposed Methods
B.1.2.1 Proposed Framework
B.1.2.2 Existing Loss Functions
B.1.3 Experiments for Single Image Super-Resolution
B.1.3.1 Proposed Methods
B.1.3.2 Evaluation Metrics
B.1.3.3 Experiments
B.1.4 Central Contribution and Discussions
B.2 Neural Networks Compression
B.2.1 Motivation
B.2.2 Proposed Methods
B.2.2.1 Setting & Notations
B.2.2.2 Neural Nets PCA-based Distillation (Net-PCAD)
B.2.2.3 Neural Nets LDA-based Distillation (Net-LDAD)
B.2.3 Experiments
B.2.4 Central Contribution and Discussions
C Proofs
C.1 Proofs of Chapter 3
C.1.1 Setting of the proof
C.1.2 Basic tools
C.1.3 Main body of the proof
C.2 Proofs of Chapter 4
C.2.1 Proofs of Section 4.1
C.2.2 Proofs of Section 4.2
C.2.2.1 Proof of Theorem 4.3
C.2.2.2 Proof of Theorem 4.4
C.3 Proofs of Chapter 5
C.3.1 Proofs of Section 5.1
C.3.2 Proofs of Section 5.2
C.3.2.1 Proof of Theorem 5.5
C.3.2.2 Proof of Theorem 5.6
C.3.2.3 Optimality
References




