Slot Filling Systems
The slot filling task requires the extraction of relations between entities. Relation extraction for slot filling differs in some respects from traditional relation extraction tasks such as ACE (Aguilar et al., 2014). In ACE, the task is to detect and characterize the relation type between two entity mentions in a given sentence. In slot filling, the goal is to find the object entity or entities for a given relation name and subject entity, where the entity types are predefined. For example, in Ex. 2.1, Nelson Mandela is the subject entity and per:spouse is the relation name, or slot. This slot has to be filled with the name of a person who is the spouse of Nelson Mandela. In addition, slot filling requires a system to justify the claimed relation by providing a justification text along with the filler value, as discussed in Section 2.1. Traditional relation extraction evaluations such as ACE and SemEval provide annotated training data, which makes the task fully supervised.
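The query/response structure described above can be sketched as follows. These data classes and the field names are illustrative only, not the official KBP submission format, and the spouse filler is shown purely as an example value:

```python
from dataclasses import dataclass

@dataclass
class SlotFillingQuery:
    subject: str  # subject entity, e.g. "Nelson Mandela"
    slot: str     # relation name, e.g. "per:spouse"

@dataclass
class SlotFillingResponse:
    query: SlotFillingQuery
    filler: str         # object entity that fills the slot
    justification: str  # text span supporting the claimed relation

# A query asks for the spouse of the subject entity; a response
# supplies both the filler and a justification sentence.
query = SlotFillingQuery(subject="Nelson Mandela", slot="per:spouse")
response = SlotFillingResponse(
    query=query,
    filler="Graça Machel",
    justification="Nelson Mandela married Graça Machel in 1998.",
)
```

A system is scored not only on the filler value but also on whether the justification actually supports the relation.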
In the KBP slot filling task, no annotated training data is provided, which makes the task harder than traditional relation extraction. Several slot filling methods have been proposed over the last few years,
most of which employ distant supervision (Craven and Kumlien, 1999; Bunescu and Mooney, 2007; Mintz et al., 2009) based relation extraction models (Wiegand and Klakow, 2013; Nguyen et al., 2014; Roth et al., 2014; Angeli et al., 2014; Angeli et al., 2015; Sterckx et al., 2015; Adel and Schütze, 2015; Zhang et al., 2016). A distant supervision method uses an existing knowledge base to collect facts, where a fact is a tuple consisting of two entities and a relation name. Any sentence containing the pair of entities is considered an example of that particular relation. Distant supervision thus makes it possible to generate a large number of training examples for extracting relations in a supervised fashion. However, distant supervision suffers from inappropriate alignment of sentences to facts in the knowledge base (Riedel, Yao, and McCallum, 2010) and from multiple relations holding between the same pair of entities (Hoffmann et al., 2011; Surdeanu et al., 2012). Distant supervision based relation extraction methods are therefore usually trained on noisy data. As a consequence, they generate a large number of false relations, which results in lower scores on the slot filling task.
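The distant supervision labeling scheme, and the noisy-alignment problem it introduces, can be sketched as follows. The knowledge base and corpus below are toy illustrations, not data from any cited system:

```python
# Distant supervision: any sentence that mentions both entities of a
# known fact is labeled as a training example of that fact's relation.
knowledge_base = [
    ("Nelson Mandela", "per:spouse", "Graça Machel"),
]

corpus = [
    "Nelson Mandela married Graça Machel in 1998.",
    "Graça Machel met Nelson Mandela at a conference.",  # noisy alignment
    "Nelson Mandela was released from prison in 1990.",
]

def distant_supervision(kb, sentences):
    examples = []
    for subj, relation, obj in kb:
        for sentence in sentences:
            if subj in sentence and obj in sentence:
                examples.append((sentence, subj, relation, obj))
    return examples

training_data = distant_supervision(knowledge_base, corpus)
# The second sentence is labeled per:spouse even though it does not
# express that relation -- the inappropriate-alignment problem
# discussed above (Riedel, Yao, and McCallum, 2010).
```

The sketch shows why the resulting training data is noisy: sentence-level containment of both entities does not guarantee that the sentence expresses the relation.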
Relation Extraction Methods
Over the last couple of decades, different relation extraction methods have been studied. They are broadly classified into two types: unsupervised and supervised. In unsupervised methods (Rosenfeld and Feldman, 2006; Banko et al., 2007; Rosenfeld and Feldman, 2007a; Fader, Soderland, and Etzioni, 2011), pairs of entities are collected based on their co-occurrences. The pairs are then clustered using features extracted automatically at the sentence level, with each cluster representing a relation. Unsupervised methods do not require any prior knowledge about the relation types, which makes them useful for open relations, where the precise semantic type of a relation is not important.
In supervised relation extraction, by contrast, a system learns how a relation is expressed, and how its type is characterized, from an annotated dataset. Typically, the annotated dataset contains sentences of different relation types; each sentence contains at least one pair of entities and expresses a particular relation between them.
The expression of a binary relation follows lexical and structural patterns between its two arguments, and such patterns are repeated to express the same relationship between other pairs of arguments. Subject-verb-object (SVO) is the simplest pattern for expressing a relation; it has been used for extracting events (Yangarber et al., 2000) and for capturing hypernym relations (Snow, Jurafsky, and Ng, 2005). Hearst (1992) used regular expression patterns for hyponym relation extraction. Pattern based relation extraction methods mostly use POS-tag patterns (Fader, Soderland, and Etzioni, 2011) and lexico-syntactic patterns (Alfonseca et al., 2012; Pershina et al., 2014). In natural language, however, relations are expressed by many diverse patterns, and it is not possible to capture all of them. As a consequence, pattern based methods suffer from low recall even though they achieve very high precision. To address this problem, feature based methods have been explored. In feature based methods, relation extraction is treated as a relation classification task: different features are computed on the annotated examples, and the method predicts whether an instance expresses a specific type of relation in one of two ways, either by computing the similarity between the instance and the annotated examples (Zelenko, Aone, and Richardella, 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Bunescu and Pasca, 2006) or by training a classifier on the feature vectors of the annotated examples (Kambhatla, 2004; GuoDong et al., 2005; Jiang and Zhai, 2007).
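A minimal sketch of one classic Hearst-style lexico-syntactic pattern illustrates the pattern based approach. For simplicity the hypernym is restricted to a single token, whereas real Hearst patterns operate on noun-phrase chunks; the regex and function name are ours, not from the cited work:

```python
import re

# "NP such as NP, NP and NP" signals hyponymy (Hearst, 1992).
# Simplification: the hypernym is a single token, not a full NP chunk.
HEARST_SUCH_AS = re.compile(
    r"(?P<hypernym>\w+) such as "
    r"(?P<hyponyms>\w+(?:, \w+)*(?:,? (?:and|or) \w+)?)"
)

def extract_hyponyms(sentence):
    match = HEARST_SUCH_AS.search(sentence)
    if not match:
        return []
    hypernym = match.group("hypernym")
    # Split the enumeration on commas and coordinating conjunctions.
    hyponyms = re.split(r",\s*|\s+(?:and|or)\s+", match.group("hyponyms"))
    return [(h, "hyponym-of", hypernym) for h in hyponyms if h]

pairs = extract_hyponyms("He plays instruments such as guitar, piano and violin.")
# -> three (hyponym, "hyponym-of", "instruments") tuples
```

The high precision and low recall noted above are visible even here: sentences matching the template almost always express hyponymy, but most hyponymy-bearing sentences do not match the template.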
Linguistic Features for Relation Characterization
Almost all feature based relation extraction methods extract features based on syntactic and semantic analysis, generally performed at the sentence level. Syntactic analysis focuses on the grammatical representation of a sentence, while semantic analysis emphasizes understanding its meaning.
Syntactic dependencies express the grammatical relationships among the words in a sentence, and the syntactic dependency path between two related words indicates the structure used to express a relation. Since a relation between two entities is usually expressed within a short context, the shortest dependency path has proven effective for kernel based relation extraction (Bunescu and Mooney, 2005; Zhang et al., 2006). Neural network based relation classification methods (Cai, Zhang, and Wang, 2016; Liu et al., 2015) use syntactic dependency labels to capture features on the shortest path automatically.
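Extracting the shortest dependency path amounts to a graph search over the parse. The sketch below runs breadth-first search on a hand-built toy dependency tree; a real system would obtain the edges from a dependency parser, and the sentence and labels here are illustrative only:

```python
from collections import deque

# Toy dependency edges (head, dependent, label) for the sentence
# "Mandela, who married Machel, led the ANC."
edges = [
    ("led", "Mandela", "nsubj"),
    ("Mandela", "married", "acl:relcl"),
    ("married", "who", "nsubj"),
    ("married", "Machel", "obj"),
    ("led", "ANC", "obj"),
]

def shortest_dependency_path(edges, start, goal):
    # Treat the dependency tree as an undirected graph and BFS,
    # which yields the shortest path in number of edges.
    adjacency = {}
    for head, dep, _label in edges:
        adjacency.setdefault(head, []).append(dep)
        adjacency.setdefault(dep, []).append(head)
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for neighbor in adjacency.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [neighbor]))
    return None

path = shortest_dependency_path(edges, "Mandela", "Machel")
# -> ["Mandela", "married", "Machel"]
```

Note that the path skips the surface-string material between the entities ("who" and the commas), which is exactly why it isolates the relation-bearing structure.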
However, Zhou et al. (2007) argued that in many cases shortest path trees cannot capture enough information for extracting relations, and proposed a context-sensitive shortest path that includes necessary information outside the shortest path. To capture useful context, Culotta and Sorensen (2004) proposed the smallest common subtree and Chowdhury, Lavelli, and Moschitti (2011) the minimal subtree for extracting relations.
The consecutive dependency labels on the shortest path between two related entities form a pattern of a relation. Such patterns can be useful for trigger-independent relation extraction, and several have been studied for extracting relations from text. Pershina et al. (2014) extracted dependency patterns of different relations and found a maximum pattern length of 3 most effective. An SVO pattern was used by Snow, Jurafsky, and Ng (2005) for extracting hypernym relations.
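One way to exploit such dependency-label patterns is to compare them with edit distance, so that near-identical structures still match. The sketch below implements the standard Levenshtein distance over label sequences; the example patterns are illustrative, not taken from any cited work:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance over sequences:
    # minimum number of insertions, deletions, and substitutions.
    m, n = len(a), len(b)
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        table[i][0] = i
    for j in range(n + 1):
        table[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            table[i][j] = min(table[i - 1][j] + 1,         # deletion
                              table[i][j - 1] + 1,         # insertion
                              table[i - 1][j - 1] + cost)  # substitution
    return table[m][n]

pattern_a = ["nsubj", "acl:relcl", "obj"]  # e.g. "X, who married Y"
pattern_b = ["nsubj", "obj"]               # e.g. "X married Y"
distance = edit_distance(pattern_a, pattern_b)
# -> 1: one relative-clause label separates the two structures
```

A small distance threshold lets a pattern learned from "X married Y" also match the relative-clause variant without requiring an exact structural match.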
Collective and Statistical Analysis for Relation Extraction
Linguistic analysis is important for extracting relations at the sentence level. However, the relationship between two entities also depends on their co-occurrence and on the resources they share, information that linguistic analysis alone cannot capture. For the relation validation task, corpus level evidence, e.g. the co-occurrences of two entities and the resources they share, can be taken into account; we call this collective analysis. Collection level information has been explored for improving relation extraction by learning the boundaries of relation arguments (Rosenfeld and Feldman, 2007b). Augenstein (2016) took into account global information about the object of a relation, such as object occurrence, markup links with the object, and the title of the document containing the object, for web relation extraction.
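One simple collective signal is the number of resources (here, documents) shared by two entities in a bipartite entity-document graph. The mapping and entity names below are toy illustrations of the idea, not data from any cited system:

```python
# Toy bipartite graph: each entity maps to the set of documents
# in which it is mentioned.
entity_to_docs = {
    "Nelson Mandela": {"doc1", "doc2", "doc3"},
    "Graça Machel": {"doc2", "doc3"},
    "Unrelated Person": {"doc9"},
}

def shared_resources(entity_a, entity_b, graph):
    # Count the documents mentioning both entities (set intersection).
    return len(graph.get(entity_a, set()) & graph.get(entity_b, set()))

n_shared = shared_resources("Nelson Mandela", "Graça Machel", entity_to_docs)
# -> 2: the pair co-occurs in two documents
```

A candidate relation between entities that share many documents is more plausible, a priori, than one between entities that share none; this is the kind of evidence sentence-level linguistic analysis cannot see.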
Statistical analysis becomes important for extracting relations in a collective manner. Niu et al. (2012) performed statistical inference on diverse data for learning relations. A probabilistic inference model has also been explored by Fang and Chang (2011); it counts co-occurrences of subject-object pairs as well as the frequencies and probabilities of relational tuples and patterns. Co-occurrence context has also been quantified by measuring mutual information for extracting relations between entities on the web (Xu et al., 2014).
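The mutual information signal mentioned above can be sketched as pointwise mutual information (PMI) computed from co-occurrence counts. The counts below are invented for illustration:

```python
import math

def pmi(pair_count, count_x, count_y, total):
    # PMI(x, y) = log( P(x, y) / (P(x) * P(y)) ): how much more often
    # the pair co-occurs than independence would predict.
    p_xy = pair_count / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log(p_xy / (p_x * p_y))

# Toy corpus of 1000 sentences: entity x appears in 50 of them,
# entity y in 20, and both together in 10.
score = pmi(pair_count=10, count_x=50, count_y=20, total=1000)
# -> log(10) ≈ 2.30: strongly positive, i.e. the pair co-occurs
#    far more often than chance
```

A positive PMI suggests a genuine association between the entities, while a score near zero indicates that their co-occurrences are consistent with chance.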
Table of contents :
1.1 Research Objective
2 Literature Review
2.1 Slot Filling Task
2.2 Slot Filling Systems
2.3 Relation Extraction
2.3.1 Relation Extraction Methods
2.3.2 Linguistic Features for Relation Characterization
2.3.3 Collective and Statistical Analysis for Relation Extraction
2.4 Relation Validation
2.4.1 Ensemble Learning for Relation Validation
2.4.2 Graph based Methods for Relation Validation
3 Entity Graph and Measurements for Relation Validation
3.1 Graph Definition
3.2 Entity Graph and Graph Database
3.3 Graph Construction
3.4 Measurements on Graph
3.4.1 Node Centrality
3.4.2 Mutual Information
3.4.3 Network Density
3.4.4 Network Similarity
3.5 Relation Validation by Graph Analysis
4 Linguistic Characteristics of Expressing and Validating Relations
4.1 Linguistically Motivated Classification of Relation
4.2 Syntactic Modeling
4.2.1 Syntactic Dependency Analysis
4.2.2 Dependency Patterns and Edit Distance
4.3 Lexical Analysis
4.3.1 Trigger Word Collection
4.3.2 Word Embeddings
4.3.3 Recognition of Trigger Words
4.4 Syntactic-Semantic Fusion
4.5 Evaluation of Word-embeddings
5 Relation Validation Framework
5.1 Relation Validation Model
5.1.1 Relation Validation Features
5.1.2 Relation Validation System Overview
5.2 Corpus and Preprocessing
5.2.1 KBP Slot Filling Corpora
5.2.2 KBP Slot Filling Responses and Snippet Assessments
5.3 Evaluation Metrics
6 Experiments and Results
6.1 Participation to TAC KBP-2016 SFV Task
6.1.1 Evaluation of Different Feature Groups
6.1.2 Relation Validation Models for KBP-2016 SFV Task
6.2 System Investigation
6.2.1 Statistical Difference Between TAC KBP Evaluation Datasets in 2015 and 2016
6.2.2 Impact of the Trustworthy Features
6.2.3 Impact of Trigger Words in the Slot Filling Responses
6.2.4 Identifying the Reason of Failure to Compute Graph Features
6.2.5 Conclusion and Plans for Improving the System
6.3 Supervised Relation Validation and Knowledge Base Population
6.3.1 Enlarging the Training and Testing Datasets
6.3.2 Relation Validation Models
6.3.3 Knowledge Base Population by Employing Relation Validation Models
6.4 An Experiment of Unsupervised Relation Validation and Knowledge Base Population
6.4.1 PageRank Algorithm
6.4.2 Graph Modeling
7 Conclusion and Future Work
7.2 Future Work