Table of contents
Acknowledgements
Abstract
Résumé
I Introduction
I.1 Motivation
I.2 Brief Overview of Results
I.2.1 Chapter II: Heterogeneous Biomedical Signatures Extraction based on Self-Organising Maps
I.2.2 Chapter III: Visualization approaches for metagenomics
I.2.3 Chapter IV: Deep learning for metagenomics using embeddings
II Feature Selection for heterogeneous data
II.1 Introduction
II.2 Related work
II.3 Deep linear support vector machines
II.4 Self-Organising Maps for feature selection
II.4.1 Unsupervised Deep Self-Organising Maps
II.4.2 Supervised Deep Self-Organising Maps
II.5 Experiment
II.5.1 Signatures of Metabolic Health
II.5.2 Dataset description
II.5.3 Comparison with State-of-the-art Methods
II.6 Closing remarks
III Visualization Approaches for metagenomics
III.1 Introduction
III.2 Dimensionality reduction algorithms
III.3 Metagenomic data benchmarks
III.4 Met2Img approach
III.4.1 Abundance Bins for metagenomic synthetic images
III.4.1.1 Binning based on abundance distribution
III.4.1.2 Binning based on Quantile Transformation (QTF)
III.4.1.3 Binary Bins
III.4.2 Generation of artificial metagenomic images: Fill-up and Manifold learning algorithms
III.4.2.1 Fill-up
III.4.2.2 Visualization based on dimensionality reduction algorithms
III.4.3 Colormaps for images
III.5 Closing remarks
IV Deep Learning for Metagenomics
IV.1 Introduction
IV.2 Related work
IV.2.1 Machine learning for Metagenomics
IV.2.2 Convolutional Neural Networks
IV.2.2.1 AlexNet, ImageNet Classification with Deep Convolutional Neural Networks
IV.2.2.2 ZFNet, Visualizing and Understanding Convolutional Networks
IV.2.2.3 Inception Architecture
IV.2.2.4 GoogLeNet, Going Deeper with Convolutions
IV.2.2.5 VGGNet, Very Deep Convolutional Networks for Large-Scale Image Recognition
IV.2.2.6 ResNet, Deep Residual Learning for Image Recognition
IV.3 Metagenomic data benchmarks
IV.4 CNN architectures and models used in the experiments
IV.4.1 Convolutional Neural Networks
IV.4.2 One-dimensional case
IV.4.3 Two-dimensional case
IV.4.4 Experimental Setup
IV.5 Results
IV.5.1 Comparing to the state-of-the-art (MetAML)
IV.5.1.1 Execution time
IV.5.1.2 The results on 1D data
IV.5.1.3 The results on 2D data
IV.5.1.4 The explanations from LIME and Grad-CAM
IV.5.2 Comparing to shallow learning algorithms
IV.5.3 Applying Met2Img on Sokol’s lab data
IV.5.4 Applying Met2Img on selbal’s datasets
IV.5.5 The results with gene-families abundance
IV.5.5.1 Applying dimensionality reduction algorithms
IV.5.5.2 Comparing to standard machine learning methods
IV.6 Closing remarks
V Conclusion and Perspectives
V.1 Conclusion
V.2 Future Research Directions
Appendices
A The contributions of the thesis
B Taxonomies used in the example illustrated by Figure III.7
C Some other results on datasets in group A
List of Figures
List of Tables
Bibliography



