Class-based Language Models

Table of contents

I Language Modeling and State-of-the-art Approaches 
1 Language Modeling 
1.1 Introduction
1.2 Evaluation Metrics
1.2.1 Perplexity
1.2.2 Word Error Rate
1.2.3 Bilingual Evaluation Understudy
1.3 State-of-the-art Language Models
1.3.1 Smoothing Techniques
1.3.1.1 Absolute discounting
1.3.1.2 Interpolated Kneser-Ney smoothing
1.3.1.3 Stupid back-off
1.3.2 Class-based Language Models
1.3.3 Structured Language Models
1.3.4 Similarity-based Language Models
1.3.5 Topic- and semantic-based Language Models
1.3.6 Random Forest Language Models
1.3.7 Exponential Language Models
1.3.8 Model M
1.4 Continuous Space Language Models
1.4.1 Current Approaches
1.4.1.1 Standard Feed-forward Models
1.4.1.2 Log-bilinear Models
1.4.1.3 Hierarchical Log-bilinear Models
1.4.1.4 Recurrent Models
1.4.1.5 Ranking Language Models
1.4.2 Training algorithm
1.4.2.1 An overview of training
1.4.2.2 The update step
1.4.3 Model complexity
1.4.3.1 Number of parameters
1.4.3.2 Computational issues
1.4.4 NNLMs in action
1.5 Summary
II Continuous Space Neural Network Language Models 
2 Structured Output Layer 
2.1 SOUL Structure
2.1.1 A general hierarchical structure
2.1.2 A SOUL structure
2.2 Training algorithm
2.3 Enhanced training algorithm
2.4 Experimental evaluation
2.4.1 Automatic Speech Recognition
2.4.1.1 ASR Setup
2.4.1.2 Results
2.4.2 Machine Translation
2.4.2.1 MT Setup
2.4.2.2 Results
2.5 Summary
3 Setting up a SOUL network 
3.1 Word clustering algorithms
3.2 Tree clustering configuration
3.3 Towards deeper structure
3.4 Summary
4 Inside the Word Space 
4.1 Two spaces study
4.1.1 Convergence study
4.1.2 Word representation analysis
4.1.3 Learning techniques
4.1.3.1 Re-initialization
4.1.3.2 Iterative re-initialization
4.1.3.3 One vector initialization
4.2 Word representation analysis for SOUL NNLMs
4.3 Word relatedness task
4.3.1 State-of-the-art approaches
4.3.2 Experimental evaluation
4.4 Summary
5 Measuring the Influence of Long Range Dependencies 
5.1 The usefulness of remote words
5.1.1 Max NNLMs
5.1.2 Experimental evaluation
5.2 N-gram and recurrent NNLMs in comparison
5.2.1 Pseudo RNNLMs
5.2.2 Efficiency issues
5.2.3 MT Experimental evaluation
5.3 Summary
III Continuous Space Neural Network Translation Models 
6 Continuous Space Neural Network Translation Models 
6.1 Phrase-based statistical machine translation
6.2 Variations on the n-gram approach
6.2.1 The standard n-gram translation model
6.2.2 A factored n-gram translation model
6.2.3 A word factored translation model
6.2.4 Translation modeling with SOUL
6.3 Experimental evaluation
6.3.1 Tasks and corpora
6.3.2 N-gram based translation system
6.3.3 Small task evaluation
6.3.4 Large task evaluations
6.4 Summary
7 Conclusion 
A Abbreviations 
B Word Space Examples for SOUL NNLMs 
C Derivatives of the SOUL objective function 
D Implementation Issues 
E Publications by the Author 
Bibliography
