General Definition of Interpretable Machine Learning


Table of Contents

1 Interpretable Machine Learning 
1.1 Overview
1.2 The Social Importance of Explaining Why
1.2.1 The Art of Explaining to Humans
1.2.2 Selecting and Evaluating Explanations
1.2.3 Communicating Explanations
1.3 Interpretable Explanations in AI
1.3.1 Toward a General Definition of Interpretable Machine Learning
1.3.2 Desiderata of Interpretable Explanations
1.3.3 Properties of Interpretable Models
1.3.4 Evaluation Procedures
1.4 A Review of Explainability Approaches
1.4.1 Transparent Models
1.4.2 Post-Hoc Interpretability
1.5 Discussion
1.5.1 On the Reliability of Post-Hoc Explanations
1.5.2 Challenges and Opportunities
2 Rule Learning 
2.1 Overview
2.2 Problem Definition
2.2.1 Data Description Language
2.2.2 Hypothesis Description Language
2.2.3 Coverage Function
2.2.4 Predictive Rule Learning
2.3 Rule Learning Process
2.3.1 Feature Construction
2.3.2 Rule Construction
2.3.3 Rule Evaluation
2.3.4 Hypothesis Construction
2.3.5 Overfitting and Pruning
2.4 Discussion
2.4.1 A Critical Review of Rule Learning Methods
2.4.2 Challenges and Opportunities
3 Learning Interpretable Boolean Rule Ensembles 
3.1 Overview
3.2 A Real Industrial Use Case
3.2.1 Context and Objectives
3.2.2 Proposed Solution
3.3 Preliminaries
3.4 Boolean Rule Sets
3.4.1 Assumptions on the Input Data
3.4.2 The Base, Bottom-up Method
3.4.3 The LIBRE Method
3.4.4 Producing the Final Boundary
3.5 Experiments
3.5.1 Experimental Settings
3.5.2 Experimental Results
3.6 Conclusion
4 Disentangled Representation Learning 
4.1 Overview
4.2 Representation Learning in a Nutshell
4.2.1 What Makes a Representation Good
4.2.2 Disentangling Factors of Variation
4.3 Defining and Evaluating Disentangled Representations
4.3.1 Symmetry Transformations and Disentanglement
4.3.2 Disentanglement and Group Theory
4.3.3 Consistency, Restrictiveness, Disentanglement
4.3.4 Evaluating Disentanglement
4.4 Practical Applications
4.4.1 Simplifying Downstream Tasks
4.4.2 Transfer Learning
4.4.3 Increasing Fairness in Predictions
4.4.4 Higher Robustness against Adversarial Attacks
4.4.5 Other Applications
4.5 Discussion
5 An Identifiable Double VAE for Disentangled Representations
5.1 Overview
5.2 Preliminaries
5.2.1 Model Identifiability and Disentanglement
5.2.2 Connections with Independent Component Analysis (ICA)
5.3 VAE-based Disentanglement Methods
5.3.1 Unsupervised Disentanglement Learning
5.3.2 Auxiliary Variables and Disentanglement
5.4 IDVAE – Identifiable Double VAE
5.4.1 Identifiability Properties
5.4.2 Learning an Optimal Conditional Prior
5.4.3 A Semi-supervised Variant of IDVAE
5.5 Experiments
5.5.1 Experimental Settings
5.5.2 Experimental Results
5.5.3 Limitations
5.6 Conclusion
6 Conclusions 
6.1 Themes and Contributions
6.2 Future Work
References
