Knowledge Reasoning

A knowledge graph (KG) organizes knowledge as a set of interlinked triples, where a triple (head entity, relation, tail entity), abbreviated as (h, r, t), indicates that two entities are connected by a specific relation. Such formalized, richly structured knowledge has emerged as a valuable resource for facilitating various downstream tasks, such as question answering and recommender systems. Although KGs such as DBpedia, Freebase, and NELL contain large numbers of entities, relations, and triples, they are far from complete, and this incompleteness is an urgent obstacle to their broad application. To address it, researchers have introduced knowledge graph completion (KGC), which has garnered increasing interest: it uses knowledge reasoning techniques to automatically discover new facts from the existing ones in a KG.

  • distributed representation
  • knowledge graph
  • link prediction
  • logical rule

1. Introduction

A knowledge graph (KG) organizes knowledge as a set of interlinked triples, where a triple (head entity, relation, tail entity), abbreviated as (h, r, t), indicates that two entities are connected by a specific relation. Such formalized, richly structured knowledge has emerged as a valuable resource for facilitating various downstream tasks, such as question answering [1][2] and recommender systems [3][4].
Although KGs such as DBpedia [5], Freebase [6], and NELL [7] contain large amounts of entities, relations, and triples, they are far from complete, which is an urgent issue for their broad application. To address this, researchers have introduced the concept of knowledge graph completion (KGC), which has garnered increasing interest. It utilizes knowledge reasoning techniques to automatically discover new facts based on existing ones in a KG [8].
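To make the (h, r, t) notation concrete, the following minimal Python sketch represents a KG as a set of triples and answers a simple lookup query; the entities and relations are illustrative examples, not drawn from any particular KG.

```python
# A minimal sketch of a KG as a set of (h, r, t) triples; the entities and
# relations are illustrative, not drawn from any particular KG.

kg = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def tails(kg, head, relation):
    """All tail entities t such that (head, relation, t) is in the KG."""
    return {t for (h, r, t) in kg if h == head and r == relation}

print(tails(kg, "Paris", "capital_of"))  # {'France'}
```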
Current KGC methods fall into two major categories: (1) methods that use explicit reasoning rules, obtaining the rules through inductive learning and then deducing new facts from them; and (2) methods based on representation learning, which, instead of modeling rules directly, learn distributed embeddings for entities and relations and generalize in a numerical space.
Rule-based reasoning is accurate and can provide interpretability for the inference results. These rules can be hand-crafted by domain experts [9] or mined from KGs using an inductive algorithm like AMIE [10]. Traditional methods like expert systems [11][12] utilize hard logical rules for making predictions.
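The sketch below illustrates how such rules drive inference: a single hand-written Horn rule (of the kind AMIE-style systems mine automatically; the rule itself is an illustrative assumption) is applied by forward chaining until no new facts can be derived.

```python
# A sketch of forward chaining with a single hand-written Horn rule of the
# kind AMIE-style systems mine automatically (the rule is an illustrative
# assumption, not mined output):
#   capital_of(x, y) AND located_in(y, z) => located_in(x, z)

kg = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def apply_rule(kg):
    """Return new triples derived by one application of the rule."""
    derived = set()
    for (x, r1, y) in kg:
        if r1 != "capital_of":
            continue
        for (y2, r2, z) in kg:
            if r2 == "located_in" and y2 == y:
                derived.add((x, "located_in", z))
    return derived - kg

# Forward chaining: apply the rule until a fixpoint (no new facts).
new_facts = apply_rule(kg)
while new_facts:
    kg |= new_facts
    new_facts = apply_rule(kg)

print(("Paris", "located_in", "Europe") in kg)  # True
```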
Knowledge graph embedding (KGE) methods learn to embed entities and relations into a continuous low-dimensional space [13][14]. These embeddings preserve the semantics of entities and relations, facilitating the prediction of missing triples, and can be trained efficiently with stochastic gradient descent (SGD). However, this approach does not fully capitalize on logical rules, which compactly encode domain knowledge and are useful in many applications. Moreover, high-quality embeddings rely heavily on abundant data, making it difficult for these methods to generate meaningful representations for sparse entities [15][16].
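As a rough illustration of how KGEs are trained with SGD, the following sketch implements a TransE-style model, in which a triple (h, r, t) is plausible if h + r ≈ t. The dimensionality, learning rate, margin, and toy triples are all illustrative choices, not values from any published system.

```python
import numpy as np

# A minimal TransE-style training sketch: embeddings are learned with SGD so
# that h + r ≈ t for observed triples. Dimensions, learning rate, margin,
# and the toy triples are illustrative; entity normalization from the
# original TransE is omitted for brevity.

rng = np.random.default_rng(0)
entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["capital_of"]
dim, lr, margin = 16, 0.1, 1.0

E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

triples = [("Paris", "capital_of", "France"),
           ("Berlin", "capital_of", "Germany")]

for epoch in range(200):
    for (h, r, t) in triples:
        t_neg = entities[rng.integers(len(entities))]  # corrupt the tail
        if t_neg == t:
            continue
        pos_diff = E[h] + R[r] - E[t]
        neg_diff = E[h] + R[r] - E[t_neg]
        pos_score = np.linalg.norm(pos_diff)
        neg_score = np.linalg.norm(neg_diff)
        if pos_score + margin > neg_score:             # margin violated
            g_pos = pos_diff / (pos_score + 1e-9)      # gradient of the norm
            g_neg = neg_diff / (neg_score + 1e-9)
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg

# After training, true tails should be closer under h + r than corrupted ones.
print(np.linalg.norm(E["Paris"] + R["capital_of"] - E["France"]))
print(np.linalg.norm(E["Paris"] + R["capital_of"] - E["Germany"]))
```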
In fact, both rule-based and embedding-based methods have advantages and disadvantages in the KGC task. Logical rules are accurate and interpretable, while embeddings are flexible and computationally efficient. Recently, there has been research on combining the advantages of logical rules and KGEs to achieve more precise knowledge completion. Such mixed techniques can effectively infer missing triples by exploiting and modeling uncertain logical rules. Some existing methods learn KGEs and rules iteratively [16], and others utilize soft rules or groundings of rules to regularize the learning of KGEs [17][18].
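One way to picture such hybrid schemes is the following structural sketch: rules propose candidate conclusions, the embedding model filters them, and the embeddings are retrained on the enlarged KG. The components passed in (apply_rules, train_embeddings, score) are placeholders; the demo wires in trivial stand-ins rather than a real KGE model or rule miner.

```python
def iterative_completion(kg, apply_rules, train_embeddings, score,
                         rounds=3, threshold=0.8):
    """Alternate rule inference with embedding-based filtering of conclusions."""
    emb = train_embeddings(kg)                     # learn KGE on current facts
    for _ in range(rounds):
        candidates = apply_rules(kg) - kg          # rule-derived conclusions
        accepted = {c for c in candidates if score(emb, c) > threshold}
        if not accepted:                           # fixpoint: nothing new survives
            break
        kg = kg | accepted                         # inject accepted conclusions
        emb = train_embeddings(kg)                 # retrain on the enlarged KG
    return kg, emb

# Toy demo with trivial stand-ins (a real system would plug in a KGE model
# and mined rules; these placeholders are purely illustrative).
toy_rules = lambda kg: {(x, "r", z)
                        for (x, _, y) in kg for (y2, _, z) in kg if y == y2}
kg0 = {("a", "r", "b"), ("b", "r", "c")}
kg_full, _ = iterative_completion(kg0, toy_rules,
                                  train_embeddings=lambda kg: None,
                                  score=lambda emb, c: 1.0)
print(kg_full)  # now also contains ('a', 'r', 'c')
```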

2. Rule-Based Reasoning

Logical rules can encode human knowledge compactly, and early knowledge reasoning was primarily based on first-order logical rules. Existing rule-based reasoning methods have mainly relied on search-based inductive logic programming (ILP), which searches for and prunes candidate rules. Based on the partial completeness assumption, AMIE [10] introduces a revised confidence metric that is well suited to modeling KGs. AMIE+ [19] scales to larger KGs through query rewriting and pruning, and it further improves prediction precision by using joint reasoning and type information. Markov logic networks (MLNs) take a different approach: given pre-provided rules, an MLN builds a probabilistic graphical model and then learns the weights of the rules. However, due to the complicated graph structure among triples, reasoning in an MLN is time-consuming and challenging, and the incompleteness of KGs also degrades the inference results. In contrast, Iterlogic-E uses rules to enhance KGEs with more effective inference.
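A sketch of the idea behind AMIE's revised confidence (PCA confidence) follows: under the partial completeness assumption, a body grounding (x, y) only counts against a rule body => r(x, y) if the KG already knows at least one r-fact for x. The toy KG and rule are illustrative assumptions.

```python
# A hedged sketch of AMIE-style PCA confidence; the toy KG, rule, and
# relation names are illustrative assumptions.

def pca_confidence(body_pairs, kg, relation):
    """body_pairs: (x, y) pairs satisfying the body of body => relation(x, y)."""
    support = sum(1 for (x, y) in body_pairs if (x, relation, y) in kg)
    # Under the partial completeness assumption, only head entities with at
    # least one known fact for `relation` can yield counterexamples.
    known_x = {h for (h, r, t) in kg if r == relation}
    denom = sum(1 for (x, y) in body_pairs if x in known_x)
    return support / denom if denom else 0.0

kg = {
    ("Paris", "located_in", "France"),
    ("Lyon", "located_in", "France"),
    ("Paris", "capital_of", "France"),
}
# Rule: located_in(x, y) => capital_of(x, y)
body = {(h, t) for (h, r, t) in kg if r == "located_in"}
print(pca_confidence(body, kg, "capital_of"))                          # 1.0 (PCA)
print(sum((x, "capital_of", y) in kg for (x, y) in body) / len(body))  # 0.5 (standard)
```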

3. Embedding-Based Reasoning

Recently, embedding-based methods have attracted much attention. These methods aim to learn distributed embeddings for entities and relations in KGs. Generally, current KGE methods can be divided into three classes: (1) translation-based models, which learn embeddings by translating one entity into another through a specific relation [20][21]; (2) compositional models, which use simple mathematical operations to model facts, including linear mapping [22], bilinear mapping [23][24][25], and circular correlation [26]; and (3) neural-network-based models, which utilize a multilayer neural structure to learn embeddings and estimate the plausibility of triples with nonlinear features, for example, R-GCN [27], ConvE [28], and others [29][30][31]. The above methods learn representations based only on the triples existing in KGs and are therefore limited by data sparsity. To address this issue and acquire semantically rich representations, recent studies have incorporated additional information beyond triples, including contextual information [32], entity type information [33][34], ontological information [35], taxonomic information [36], textual descriptions [37], commonsense knowledge [38], and hierarchical information [39]. In contrast, the proposed Iterlogic-E utilizes embeddings to eliminate erroneous conclusions derived from rules, combining the advantages of soft rules and embeddings.
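The three families of scoring functions mentioned above can be summarized in a few lines; the sketch below uses random vectors in place of learned embeddings and follows commonly published forms of the TransE, DistMult, and HolE scores.

```python
import numpy as np

# Hedged sketches of the three scoring-function families above, for one
# triple (h, r, t). Random vectors stand in for learned embeddings.

d = 8
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)

# (1) Translation-based (TransE): plausible triples satisfy h + r ≈ t,
#     so the negated distance serves as the score.
transe_score = -np.linalg.norm(h + r - t)

# (2) Bilinear mapping (DistMult): a bilinear form with a diagonal
#     relation matrix, i.e., the triple product summed over dimensions.
distmult_score = np.sum(h * r * t)

# (3) Circular correlation (HolE): correlate h and t via FFT, then match
#     the result against the relation embedding.
corr = np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real
hole_score = np.dot(r, corr)

print(transe_score, distmult_score, hole_score)
```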

4. Hybrid Reasoning

Both rule-based and embedding-based methods have their own advantages and disadvantages, and recent works have integrated the two types of reasoning. Guo et al. [40] learned a KGE jointly from rule groundings and triples: they utilized TransE to obtain entity and relation embeddings, computed a truth value for each rule using t-norm fuzzy logic, ranked the rules by truth value, and manually filtered those ranked at the top. Nayyeri et al. [41] enhanced KGEs by injecting rules. Although these works also learn embeddings jointly, they cannot support the uncertainty of soft rules. Wang et al. [42] approximately ordered relations by maximizing the margin between negative and positive logical rules to capture the transitivity and asymmetry of rules; however, this approach cannot directly benefit from the forward reasoning of rules and does not utilize the confidence associated with each rule. Zhang et al. [17], Guo et al. [18], and Guo et al. [43] obtained KGEs with supervision from soft rules, proving the effectiveness of logical rules; however, they did not consider using KGEs to filter out erroneous facts introduced by these rules during iteration. Qu et al. [44] used an MLN to model logical rules and inferred new triples to enhance KGEs; a main difficulty here is the large search space involved in determining rule structures and searching for supporting groundings. Zhang et al. [16] aimed to improve the representation of sparse data through iterative learning and updated rule confidences through embeddings. Nevertheless, their method evaluates sparsity based on the frequency with which entities participate in triples, and only conclusions involving highly sparse entities are injected into the original knowledge graph. This design choice is tailored to the challenges posed by sparse data in the test set, but it does not account for potentially erroneous facts introduced by the rules. In contrast, Iterlogic-E models conclusion labels as 0–1 variables and uses a confidence regularization loss to eliminate uncertain conclusions.
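As a sketch of how a truth value can be computed for a grounded rule with t-norm fuzzy logic, the following code maps a TransE-style distance to a soft truth value in [0, 1] (a KALE-style mapping, used here as one illustrative choice) and composes truth values with product-based t-norm operators; the embeddings and the rule are random, illustrative stand-ins.

```python
import numpy as np

# A sketch of scoring a grounded rule with product-based t-norm fuzzy logic.
# The distance-to-truth mapping is a KALE-style choice; embeddings and the
# rule are random, illustrative stand-ins for trained ones.

dim = 8
rng = np.random.default_rng(1)

def emb():
    # Uniform init in [-1/sqrt(d), 1/sqrt(d)] keeps truth values in [0, 1].
    return rng.uniform(-1 / np.sqrt(dim), 1 / np.sqrt(dim), size=dim)

def triple_truth(h, r, t):
    """Map a TransE-style L1 distance to a soft truth value in [0, 1]."""
    return 1.0 - np.linalg.norm(h + r - t, ord=1) / (3 * np.sqrt(dim))

def implication_truth(i_body, i_head):
    """Product-based t-norm: I(a => b) = I(a) * I(b) - I(a) + 1."""
    return i_body * i_head - i_body + 1.0

x, y, z = emb(), emb(), emb()
r1, r2 = emb(), emb()

# Grounding of the illustrative rule: r1(x, y) AND r2(y, z) => r1(x, z)
i_body = triple_truth(x, r1, y) * triple_truth(y, r2, z)  # conjunction = product
i_rule = implication_truth(i_body, triple_truth(x, r1, z))
print(round(float(i_rule), 3))
```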