Beam search and global training

Table of contents

Abstract
Acknowledgements
List of Tables
List of Figures
Glossary
1 Introduction
1.1 Maximizing the usability of existing resources
1.2 Problem statement: multi-(re)source combination
1.3 Dependency parsing as a case study
1.4 Contributions
1.5 Outline
I The state of the art in cross-lingual parsing
2 Cross-lingual transfer and multilinguality
2.1 Cross-lingual bridge: an attempt at an exhaustive typology
2.2 State-of-the-art methods for cross-lingual transfer
2.2.1 Meta-methods in data and parameter spaces: a reading grid
2.2.2 Data space transfer
2.2.3 Parameter space transfer
2.2.4 An ambivalent case: direct delexicalized transfer
2.2.5 Common representations
2.3 Recent advances: cross-lingual embeddings and multilingual models
2.4 Evaluation of cross-lingual transfer
2.5 Conclusion: a flourishing field, yet hard to categorize
3 Recent advances in transition-based dependency parsing
3.1 The dependency parsing formalism
3.2 Transition-based parsing: strengths and limitations
3.3 Transition systems
3.3.1 Stack-based systems
3.3.2 Understanding derivations: action-edge correspondence
3.4 Improving the classifiers: from the averaged perceptron to Stack-LSTMs
3.5 Beam search and global training
3.5.1 Globalization of local parsers
3.5.2 Role of the beam
3.5.3 Structured perceptron
3.5.4 Partial updates
3.5.5 Use in recent parsers
3.6 Dynamic oracles
3.6.1 Issues of static oracles
3.6.2 Training with dynamic oracles
3.6.3 Practical computation of action costs
3.6.4 Use in recent parsers
3.6.5 Differences with beam parsing
3.7 Conclusion: four open avenues
4 Cross-lingual parsing
4.1 Methods for cross-lingual parsing
4.1.1 Direct transfer
4.1.2 Parser projection
4.1.3 Training guidance
4.1.4 Joint learning
4.2 Cross-treebank consistency
4.3 Evaluation of cross-lingual parsing
4.4 Relations with domain adaptation
4.5 Effective use of cross-lingual resources
4.6 Conclusion: softer requirements, finer combination
II Cross-lingual parsing for low-resourced languages: an empirical analysis
5 How good are we at cross-lingual parsing?
5.1 Assessing transfer usefulness in a case study
5.2 Investigating tiny treebanks as an alternative to transfer
5.2.1 Selecting a competitive tiny parser
5.2.2 An interpretable quality scale
5.2.3 Application: parsing capacity of cross-lingual and unsupervised parsers
5.3 The optimistic approach: opposites attract
5.4 Complementarity from a linguistic standpoint
5.4.1 A motivating example: Romanian
5.4.2 A bilingual typology of knowledge
5.4.3 Target-oriented relevance
5.5 Wrap-up: not so good, but hope prevails
6 Heterogeneity in parsing: monolingual and cross-lingual issues
6.1 Parser usefulness: heterogeneity matters
6.2 Empirical evidence of treebank heterogeneity
6.3 Lexicalization and explaining away
6.3.1 Hands-on measures
6.3.2 Generalization and explaining away effects
6.4 Class hardness
6.4.1 Existing metrics to assess task difficulty
6.4.2 Characterizing difficulties: class hardness
6.4.3 Comparative evaluation
6.5 Interactions of unbalanced classes
6.5.1 Accuracy experiments: knowledge does not flow between similar classes
6.5.2 Feature-level experiments: complex classes result from unstable parameters
6.5.3 Word-level subsampling
6.6 Wrap-up: more is both more and less
III A new framework for cross-lingual transfer
7 Developing a more flexible parsing system
7.1 An increased need for flexibility
7.1.1 Non-standard parsing tasks
7.1.2 Additional benefits of dynamic oracles
7.1.3 Dynamic oracles for global training
7.2 Extensions of dynamic oracles
7.2.1 Redefining action cost to cover more ground: non-arc-decomposable systems and non-projective trees
7.2.2 Global dynamic oracles
7.2.3 Framework unification
7.3 PanParser: a flexible parser with a modular architecture
7.4 Partial training: an application to cross-lingual transfer
7.5 Partial transition systems: learning to ignore
7.6 Parsing under constraints: exploiting prior annotations
7.7 Conclusion: a dynamic oracle is all you need
8 Incompatible representations of knowledge
8.1 Typological differences incur transfer impossibilities
8.1.1 Order-dependent knowledge representation
8.1.2 Transfer failures
8.2 Reshaping training instances
8.2.1 Target-optimal reordering with a language model
8.2.2 Heuristic rewrite rules based on typological knowledge
8.3 Experiments
8.3.1 Experimental setup
8.3.2 Results
8.3.3 A fine-grained analysis
8.4 Qualitative comparison of relevance models
8.5 Conclusion: beyond the proof of concept
9 Cascading models for multi-(re)source selective transfer
9.1 Transfer cascades
9.1.1 Core method
9.1.2 Relations with the meta-learning literature
9.1.3 Algorithmic refinements
9.2 Concrete measures of relevance
9.2.1 Using a development set
9.2.2 Treebank similarity with a target sample
9.2.3 Approximation with raw target data
9.2.4 Another use of typological knowledge
9.3 Monolingual cascades
9.3.1 Multi-system and multi-domain cascades
9.3.2 Divide-and-conquer cascades
9.4 Experiments in realistic conditions
9.4.1 System components
9.4.2 Monolingual and cross-lingual cascades
9.4.3 Per-treebank results
9.5 Conclusion: the road ahead
IV Conclusion
10 Summary and perspectives
Appendices
A Advanced strategies for global training in parsing
A.1 Restart strategy with global dynamic oracles
A.2 Using global dynamic oracles to correct training biases
A.3 Experiments
A.4 Conclusion
B Word alignment transfer
B.1 Aligning words
B.2 Word alignments: cross-lingual scenarios
B.3 Concrete methods for transferring alignments
B.4 Experiments
B.5 Conclusion
C Leveraging lexical similarities: zero-resource tagger transfer
C.1 Perceptron-based PoS tagging
C.2 Modifying feature representations
C.3 Experiments
C.4 Conclusion
Detailed summary (in French)
List of publications
Bibliography