Different specific RNN architectures

Table of contents

Acknowledgements
Abstract
1 Introduction
1.1 Classical supervised machine learning
1.2 Budgeted learning
1.3 Thesis structure
1.3.1 Chapter 3: Static Feature Selection
1.3.2 Chapters 4 and 5: Adaptive Feature Acquisition
1.3.3 Chapter 6: Meta Active Learning
1.3.4 Other contributions
2 Supervised learning under budget constraints: Background
2.1 Budgeted Learning
2.1.1 An overview of the different types of cost
Taxonomy of budgeted learning methods
Integration of the budget constraint during training
Integration of the budget constraint during inference
2.1.2 Cost-sensitive inference: feature selection and acquisition
Static methods: feature selection
Adaptive methods: feature acquisition
2.2 Meta Active Learning
2.2.1 Active learning
2.2.2 Meta learning
2.2.3 One-shot learning
2.2.4 Meta Active Learning
2.3 Recurrent Neural Network Architectures
2.3.1 Overview of the various settings
2.3.2 Different specific RNN architectures
Long Short-Term Memory and Gated Recurrent Unit
Memory Networks
Bidirectional RNN
Order-invariant RNN
2.4 Closing remarks
3 Feature Selection – Cold-start application in Recommender Systems
3.1 Introduction
3.2 Related work on recommender systems and cold start
3.2.1 Collaborative Filtering
3.2.2 Cold-start recommendation
3.3 Formulation of the representation learning problem for user cold start
3.3.1 Inductive Additive Model (IAM)
Continuous Learning Problem
Cold-Start IAM (CS-IAM) Learning Algorithm
3.3.2 IAM and classical warm collaborative filtering
3.3.3 IAM from cold-start to warm collaborative filtering
3.4 Experiments
3.4.1 Experimental protocol
3.4.2 Results
Collaborative Filtering
Cold-start Setting
Mixing Cold-start and Warm Recommendation
3.5 Closing remarks
4 Adaptive cost-sensitive feature acquisition – Recurrent Neural Network Approach
4.1 Introduction
4.2 Adaptive feature acquisition with a recurrent neural network architecture
4.2.1 Definition of the problem
4.2.2 Generic aspects of the model
4.2.3 Recurrent ADaptive AcquisitIon Network (RADIN)
Components of RADIN
Loss and learning
Inference
4.3 Experiments
4.3.1 Experimental protocol
4.3.2 Results
Illustration of the adaptive acquisition process
Uniform cost
Cost-sensitive setting
4.4 Closing remarks
5 Adaptive cost-sensitive feature acquisition – Stochastic Approach
5.1 Introduction
5.2 Related work
5.2.1 Reinforcement learning
5.2.2 Policy Gradient
Formalization of the problem
Computing the gradient
Using policy gradient with RNNs
5.3 Definition of the stochastic model
5.3.1 Cost-sensitive learning problem with a stochastic acquisition method
5.3.2 Gradient computation
5.4 REpresentation-based Acquisition Models (REAMs)
5.4.1 Instances of REAM
5.5 Experimental results
5.5.1 Feature Selection Problem
5.5.2 Cost-sensitive setting
5.5.3 Comparison of the learning complexity
5.6 Closing remarks
6 Meta Active Learning
6.1 Introduction
6.2 Definition of the setting
6.3 One-step acquisition model
6.4 Experiments
6.5 Perspective: Hybrid Model
7 Conclusion and Closing Remarks
7.1 Future directions
Bibliography
