Logical Reasoning Machine Reading Comprehension

Logical reasoning requires a correct understanding of the logical relationships between sentences, for example identifying a statement that strengthens a conclusion or one that weakens it. This capability places higher demands on existing reading comprehension models, since the inference ability of many models relies heavily on entities and their numerical weights.

Keywords: machine reading comprehension; graph attention network; logical reasoning

1. Introduction

Artificial intelligence has deeply influenced people’s work and daily lives. For instance, voice assistants such as Siri and Cortana can help users operate their devices, and ChatGPT, with its impressive capabilities, provides users with inspiration, references, and decision-making support. The foundation of all of these applications is that AI can correctly understand requirements expressed in natural language, which is closely related to the goal of machine reading comprehension tasks.
Machine reading comprehension (MRC) is a fundamental task in the field of natural language processing that requires models to answer questions about a given passage of text. Just as we assess human understanding of a passage through reading comprehension tests, MRC can be used to assess a computer system’s ability to understand human language.
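
A multiple-choice MRC instance can be summarized as a passage, a question, a set of candidate answers, and the index of the correct answer. The following Python sketch illustrates this format in the style of datasets such as ReClor and LogiQA; the field names and the example content are illustrative, not an official dataset schema.

from dataclasses import dataclass
from typing import List

@dataclass
class MRCExample:
    context: str          # the passage the model must reason over
    question: str         # the question about the passage
    options: List[str]    # candidate answers (typically four)
    label: int            # index of the correct option

example = MRCExample(
    context=("All employees who finished the training passed the audit. "
             "Kim did not pass the audit."),
    question="Which conclusion follows logically from the passage?",
    options=[
        "Kim finished the training.",
        "Kim did not finish the training.",
        "Some employees did not finish the training.",
        "The audit was unfair.",
    ],
    label=1,  # modus tollens: not passing implies not having finished
)
print(example.question, "->", example.options[example.label])
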
Since reading comprehension is one of the important research tasks in natural language processing, a large number of datasets have been proposed, such as SQuAD [1], an extractive reading comprehension dataset based on Wikipedia; HotpotQA [2], a multi-hop reading comprehension dataset that requires extracting information from multiple distinct text passages; and DROP [3], a generative reading comprehension dataset that assesses discrete reasoning abilities. As these datasets continue to evolve, their difficulty gradually increases. Because logical reasoning has long been considered a key thinking ability of the human brain [4], it has attracted attention from many researchers in the field, and several challenging multiple-choice logical reasoning reading comprehension datasets have been built, such as LogiQA [5] and ReClor [6].
Following Google’s proposal of the BERT [7] model, methods based on pre-trained language models, which fully exploit the predictive information and prior knowledge obtained from massive training data, have achieved substantial performance gains on 11 downstream tasks in the natural language processing domain, including machine reading comprehension. The Transformer architecture was originally designed for sequence transduction and machine translation tasks [8]; its encoder employs a self-attention mechanism and significantly improves performance compared to RNN-based methods. Subsequently, an increasing number of NLP tasks have adopted methods based on pre-trained models, including named entity recognition [9][10][11], machine translation [12][13], and machine reading comprehension [14][15][16][17][18].
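
The self-attention computation mentioned above can be illustrated with a minimal NumPy sketch: each token representation attends to every other token through scaled dot products followed by a softmax. This single-head version omits the learned query/key/value projections and the multi-head structure of the full Transformer [8].

import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token representations."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ x                                 # weighted sum of values

tokens = np.random.randn(5, 8)                         # 5 tokens, dimension 8
print(self_attention(tokens).shape)                    # (5, 8)
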
Logical reasoning requires a correct understanding of the logical relationships between sentences, for example identifying a statement that strengthens a conclusion or one that weakens it. This capability places higher demands on existing reading comprehension models, since the inference ability of many models relies heavily on entities and their numerical weights. However, due to the complexity of logical reasoning machine reading comprehension problems, pre-trained language models still do not perform well on such tasks and struggle to reach the average human level.
In recent years, researchers have worked on designing specific model architectures for integrating logical structures. Jiao et al. [4] proposed introducing symbolic logic into neural network models as data augmentation, using self-supervised and contrastive learning. Wang et al. [17] proposed LReasoner, a context-extension and data-augmentation framework based on parsing logical expressions.
The introduction of DAGN [18], which uses graph structures to model the abstract logical relationships in logical reasoning tasks and employs computational methods such as GNNs or Graph Transformers to simulate the reasoning process, presented a novel approach to this task. Subsequently, Li et al. [16] and Ouyang et al. [19] also proposed inferring the implicit logical information of passages using graph structures, and these approaches have made a certain degree of progress on various datasets. Previous research suggests that an intuitive way to identify logical relationships between text units is to use discourse relations [20], such as words like “because” and “therefore” for cause-effect relations, “if” for hypothetical relations, and the implicit logical relations signaled by punctuation. Modeling logical structure has so far proven to be one of the most effective ways to enhance the logical reasoning ability of widely used pre-trained models on this task.

2. Logical Reasoning Machine Reading Comprehension

In recent years, with the success of pre-trained language models in NLP, many pre-trained language models (e.g., BERT [7], RoBERTa [21], XLNet [22], and GPT-3 [23]) have matched or exceeded human performance on popular MRC datasets.
However, those MRC datasets contain little or no data that examines logical reasoning abilities. For example, according to Sugawara and Aizawa [24], there is no logical reasoning content in the MCTest [25] dataset, and only 1.2% of the SQuAD dataset requires logical reasoning to answer questions. Therefore, Yu et al. [6] proposed the ReClor dataset, which focuses on examining logical reasoning ability. A task related to logical reasoning MRC is natural language inference (NLI), which requires the model to classify the logical relationship of a given sentence pair. However, the NLI task only considers three simple logical relations (entailment, contradiction, and neutrality) at the sentence level, whereas logical reasoning MRC is more challenging because it must recognize multiple complex logical relations at the passage level to determine the answer.
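
For contrast, an NLI instance pairs a single premise with a single hypothesis and assigns one of three standard labels, whereas the logical reasoning MRC instance sketched earlier composes several relations across a whole passage. A toy example (with invented content) follows.

nli_instance = {
    "premise": "Every reviewer approved the proposal.",
    "hypothesis": "At least one reviewer approved the proposal.",
    "label": "entailment",   # one of: entailment, contradiction, neutral
}
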
As shown in Table 1, the approaches for logical reasoning machine reading comprehension in recent years can be divided into the following categories:
Table 1. Summary of related work.
Rule-Based: NatLog [26]; Stanford RTE [27]
Pre-Training Based: MERIt [4]; LogiGAN [29]
Data Enhancement: LReasoner [17]; L-Datt [28]
GNN Based: DAGN [18]; AdaLoGN [16]; Logiformer [30]; LoCSGN [31]
The first category comprises approaches from the pre-training perspective, which use heuristic rules to capture logical relations in large corpora and design corresponding training tasks for these relations to further train existing pre-trained language models; examples include MERIt [4] and LogiGAN [29]. MERIt uses rules over a large amount of unlabeled text, modeled after the form of the logical reasoning MRC task, to construct data for self-supervised contrastive pre-training. LogiGAN first uses pre-specified logical indicators (e.g., “therefore”, “due to”, “we may infer that”) to identify logical inference phenomena in large-scale unlabeled text, then masks the expressions that follow the logical indicators and trains a generative model to recover the masked expressions.
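
The masking step described for LogiGAN can be sketched as follows: locate a logical indicator in a sentence and replace the expression that follows it with a mask token, so that a generative model can be trained to recover it. The indicator list and mask token below are illustrative, not LogiGAN's exact configuration.

import re

LOGICAL_INDICATORS = ["therefore", "due to", "we may infer that"]

def mask_after_indicator(sentence: str, mask_token: str = "[MASK]") -> str:
    for ind in LOGICAL_INDICATORS:
        pattern = re.compile(rf"\b{re.escape(ind)}\b", flags=re.IGNORECASE)
        m = pattern.search(sentence)
        if m:
            # keep the indicator, mask the expression that follows it
            return sentence[: m.end()] + " " + mask_token
    return sentence  # no indicator found: leave the text unchanged

text = "The bridge was closed for repairs, therefore traffic was rerouted."
print(mask_after_indicator(text))
# -> "The bridge was closed for repairs, therefore [MASK]"
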
The second category comprises approaches from the data-augmentation perspective, which symbolically infer implicit expressions based on logical equivalence laws and extend the given text to match the answers; LReasoner [17] is a representative example. It proposes a logic-driven context extension framework that integrates three steps: logical identification, which parses logical expressions from the context; logical extension, which derives implicit expressions; and logical verbalization, which converts them back into text to predict the answer.
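
The logical extension step can be illustrated with one commonly cited equivalence law, contraposition ("if A then B" is equivalent to "if not B then not A"). The parsing and verbalization templates below are deliberate simplifications, intended only to show the idea of deriving and verbalizing an implicit expression, not LReasoner's exact implementation.

from typing import Tuple

def negate(prop: str) -> str:
    return f"it is not the case that {prop}"

def contrapositive(a: str, b: str) -> Tuple[str, str]:
    # 'if A then B' is logically equivalent to 'if not B then not A'
    return negate(b), negate(a)

def verbalize(a: str, b: str) -> str:
    return f"If {a}, then {b}."

a, b = "the alarm is triggered", "the guard is notified"
print(verbalize(a, b))                     # the parsed implication
print(verbalize(*contrapositive(a, b)))    # the derived implicit expression
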
The third category comprises approaches that use predefined rules to construct a graph structure from the content of the text and the options. The nodes of the graph correspond to logical units in the text, i.e., meaningful sentences or text fragments, and the edges represent the relationships between these logical units. By employing methods such as graph neural networks (GNNs) and Graph Transformers [32], the logical reasoning process is modeled, thereby improving logical reasoning performance.
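
The following NumPy sketch shows one round of message passing over such a logic graph: each logical unit updates its feature vector from its neighbors according to the adjacency matrix. Real systems use learned, relation-aware layers; this only illustrates the mechanism.

import numpy as np

def message_passing_step(h: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes) 0/1 matrix."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    messages = adj @ h / deg          # mean of neighbour features
    return np.tanh(h + messages)      # combine with the node's own state

h = np.random.randn(4, 16)            # four logical units
adj = np.array([[0, 1, 0, 0],         # unit 0 relates to unit 1, and so on
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(message_passing_step(h, adj).shape)   # (4, 16)
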
As the difficulty of the logical reasoning machine reading comprehension task continues to increase, merely focusing on the interaction between tokens at the sentence-level granularity is far from sufficient. Models need to establish relationships between sentences at a holistic level consisting of context, questions, and answers. However, logical relationships are difficult to extract because they are implicit structures hidden in the context, and existing datasets are not labeled with logical structures. Therefore, DAGN, proposed by Huang et al. [18], and Logiformer, proposed by Xu et al. [30], both use graph structures to represent logical information in the context. DAGN uses discourse relations in PDTB 2.0 [33] as separators to divide passages into multiple elementary discourse units (EDUs). The graph structure is obtained by using EDUs as nodes and discourse relations as edges, and a graph network learns logical features of the text from the EDUs to improve reasoning ability.
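
A rough sketch of this preprocessing is shown below: the passage is split into EDUs at discourse connectives and punctuation, and adjacent units are linked to form the graph. The connective list is a small illustrative subset, not the full PDTB 2.0 inventory, and real systems label edges with the relation types rather than simply chaining neighbors.

import re

CONNECTIVES = ["because", "therefore", "if", "but", "however"]

def split_into_edus(text: str):
    pattern = r"(?:,|;|\b(?:" + "|".join(CONNECTIVES) + r")\b)"
    parts = re.split(pattern, text, flags=re.IGNORECASE)
    return [p.strip() for p in parts if p.strip()]

def build_edges(edus):
    # link consecutive EDUs, mirroring how connectives join adjacent units
    return [(i, i + 1) for i in range(len(edus) - 1)]

passage = ("The experiment failed because the sample was contaminated, "
           "therefore the results were discarded.")
edus = split_into_edus(passage)
print(edus)                 # three EDUs
print(build_edges(edus))    # [(0, 1), (1, 2)]
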
Currently, graph neural networks (GNNs) are used successfully in logical reasoning tasks, but node-to-node message passing in these models is still inadequate, resulting in a continued lack of sufficient interaction between passages and options. To address these challenges, AdaLoGN, proposed by Li et al. [16], employs directed textual logic graphs with predefined logical relations, lets these relations reason over one another according to certain rules, and adaptively extends the already constructed discourse graphs so as to enhance symbolic reasoning capabilities. Logiformer, proposed by Xu et al. [30], uses a Graph Transformer to model the dependency relations in logical and syntactic graphs, respectively, and injects the structural information of the graph by introducing the corresponding adjacency matrix into the attention computation.
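
The idea of injecting graph structure into attention can be sketched as follows: attention scores between logical units are masked by the adjacency matrix, so that non-adjacent units receive negligible weight. This is a schematic simplification of graph-aware attention, not Logiformer's exact formulation.

import numpy as np

def graph_biased_attention(h: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """h: (n, d) node features; adj: (n, n) adjacency matrix (1 = edge, 0 = none)."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)
    scores = np.where(adj > 0, scores, -1e9)   # suppress non-neighbour pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ h

h = np.random.randn(4, 16)
adj = np.eye(4) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)  # chain graph with self-loops
print(graph_biased_attention(h, adj).shape)    # (4, 16)
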

References

  1. Rajpurkar, P.; Zhang, J.; Lopyrev, K.; Liang, P. SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv 2016, arXiv:1606.05250. Available online: http://arxiv.org/pdf/1606.05250v3 (accessed on 15 September 2021).
  2. Yang, Z.; Qi, P.; Zhang, S.; Bengio, Y.; Cohen, W.W.; Salakhutdinov, R.; Manning, C.D. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. arXiv 2018, arXiv:1809.09600. Available online: http://arxiv.org/pdf/1809.09600v1 (accessed on 16 September 2021).
  3. Dua, D.; Wang, Y.; Dasigi, P.; Stanovsky, G.; Singh, S.; Gardner, M. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. arXiv 2019, arXiv:1903.00161. Available online: http://arxiv.org/pdf/1903.00161v2 (accessed on 17 September 2021).
  4. Jiao, F.; Guo, Y.; Song, X.; Nie, L. MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning. arXiv 2022, arXiv:2203.00357. Available online: http://arxiv.org/pdf/2203.00357v1 (accessed on 14 February 2023).
  5. Liu, J.; Cui, L.; Liu, H.; Huang, D.; Wang, Y.; Zhang, Y. LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning. arXiv 2020, arXiv:2007.08124. Available online: http://arxiv.org/pdf/2007.08124v1 (accessed on 22 September 2021).
  6. Yu, W.; Jiang, Z.; Dong, Y.; Feng, J. ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning. arXiv 2020, arXiv:2002.04326. Available online: http://arxiv.org/pdf/2002.04326v3 (accessed on 25 September 2021).
  7. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805. Available online: http://arxiv.org/pdf/1810.04805v2 (accessed on 4 September 2021).
  8. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. Available online: http://arxiv.org/pdf/1706.03762v5 (accessed on 7 September 2021).
  9. Zhang, J.; Liu, L.; Gao, K.; Hu, D. Few-shot Class-incremental Pill Recognition. arXiv 2023, arXiv:2304.11959. Available online: http://arxiv.org/pdf/2304.11959v1 (accessed on 17 April 2023).
  10. Cui, L.; Wu, Y.; Liu, J.; Yang, S.; Zhang, Y. Template-Based Named Entity Recognition Using BART. arXiv 2021, arXiv:2106.01760. Available online: http://arxiv.org/pdf/2106.01760v1 (accessed on 23 June 2022).
  11. Qin, Y.; Lin, Y.; Takanobu, R.; Liu, Z.; Li, P.; Ji, H.; Huang, M.; Sun, M.; Zhou, J. ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. arXiv 2020, arXiv:2012.15022. Available online: http://arxiv.org/pdf/2012.15022v2 (accessed on 11 June 2022).
  12. Chen, G.; Ma, S.; Chen, Y.; Zhang, D.; Pan, J.; Wang, W.; Wei, F. Towards Making the Most of Cross-Lingual Transfer for Zero-Shot Neural Machine Translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 22–27 May 2022; (Volume 1: Long Papers). Muresan, S., Nakov, P., Villavicencio, A., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2022; pp. 142–157.
  13. Adelani, D.I.; Alabi, J.O.; Fan, A.; Kreutzer, J.; Shen, X.; Reid, M.; Ruiter, D.; Klakow, D.; Nabende, P.; Chang, E.; et al. A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation. arXiv 2022, arXiv:2205.02022. Available online: http://arxiv.org/pdf/2205.02022v2 (accessed on 6 June 2023).
  14. Sun, Y.; Guo, D.; Tang, D.; Duan, N.; Yan, Z.; Feng, X.; Qin, B. Knowledge Based Machine Reading Comprehension. arXiv 2018, arXiv:1809.04267. Available online: http://arxiv.org/pdf/1809.04267v1 (accessed on 20 May 2022).
  15. Tan, C.; Wei, F.; Zhou, Q.; Yang, N.; Lv, W.; Zhou, M. I Know There Is No Answer: Modeling Answer Validation for Machine Reading Comprehension. In Natural Language Processing and Chinese Computing; Zhang, M., Ng, V., Zhao, D., Li, S., Zan, H., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 85–97. ISBN 978-3-319-99494-9.
  16. Li, X.; Cheng, G.; Chen, Z.; Sun, Y.; Qu, Y. AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension. arXiv 2022, arXiv:2203.08992. Available online: http://arxiv.org/pdf/2203.08992v1 (accessed on 4 November 2022).
  17. Wang, S.; Zhong, W.; Tang, D.; Wei, Z.; Fan, Z.; Jiang, D.; Zhou, M.; Duan, N. Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text. arXiv 2021, arXiv:2105.03659. Available online: http://arxiv.org/pdf/2105.03659v1 (accessed on 19 October 2022).
  18. Huang, Y.; Fang, M.; Cao, Y.; Wang, L.; Liang, X. DAGN: Discourse-Aware Graph Network for Logical Reasoning. arXiv 2021, arXiv:2103.14349. Available online: http://arxiv.org/pdf/2103.14349v2 (accessed on 13 November 2022).
  19. Ouyang, S.; Zhang, Z.; Zhao, H. Fact-driven Logical Reasoning for Machine Reading Comprehension. arXiv 2021, arXiv:2105.10334. Available online: http://arxiv.org/pdf/2105.10334v2 (accessed on 22 November 2022).
  20. Gao, Y.; Wu, C.-S.; Li, J.; Joty, S.; Hoi, S.C.H.; Xiong, C.; King, I.; Lyu, M.R. Discern: Discourse-Aware Entailment Reasoning Network for Conversational Machine Reading. arXiv 2020, arXiv:2010.01838. Available online: http://arxiv.org/pdf/2010.01838v3 (accessed on 29 November 2022).
  21. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv 2019, arXiv:1907.11692. Available online: http://arxiv.org/pdf/1907.11692v1 (accessed on 23 September 2021).
  22. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; Le, Q.V. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv 2019, arXiv:1906.08237. Available online: http://arxiv.org/pdf/1906.08237v2 (accessed on 17 October 2021).
  23. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models are Few-Shot Learners. arXiv 2020, arXiv:2005.14165. Available online: http://arxiv.org/pdf/2005.14165v4 (accessed on 23 March 2021).
  24. Sugawara, S.; Aizawa, A. An Analysis of Prerequisite Skills for Reading Comprehension. In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods, Austin, TX, USA, 5 November 2016; Louis, A., Roth, M., Webber, B., White, M., Zettlemoyer, L., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2016; pp. 1–5.
  25. Richardson, M. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), Seattle, WA, USA, 18–21 October 2013.
  26. MacCartney, B.; Manning, C.D. Natural Logic for Textual Inference. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, Prague, Czech Republic, 28–29 June 2007; pp. 193–200.
  27. MacCartney, B.; Grenager, T.; de Marneffe, M.-C.; Cer, D.; Manning, C.D. Learning to recognize features of valid textual entailments. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, New York, NY, USA, 4–9 June 2006; pp. 41–48.
  28. Li, T.; Srikumar, V. Augmenting Neural Networks with First-order Logic. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; Korhonen, A., Traum, D., Màrquez, L., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA; pp. 292–302.
  29. Pi, X.; Zhong, W.; Gao, Y.; Duan, N.; Lou, J.-G. LogiGAN: Learning Logical Reasoning via Adversarial Pre-training. arXiv 2022, arXiv:2205.08794. Available online: http://arxiv.org/pdf/2205.08794v2 (accessed on 2 March 2023).
  30. Xu, F.; Liu, J.; Lin, Q.; Pan, Y.; Zhang, L. Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022; Volume 35, pp. 1055–1065.
  31. Zhao, X.; Zhang, T.; Lu, Y.; Liu, G. LoCSGN: Logic-Contrast Semantic Graph Network for Machine Reading Comprehension. In Natural Language Processing and Chinese Computing; Lu, W., Huang, S., Hong, Y., Zhou, X., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 405–417. ISBN 978-3-031-17119-2.
  32. Ying, C.; Cai, T.; Luo, S.; Zheng, S.; Ke, G.; He, D.; Shen, Y.; Liu, T.-Y. Do Transformers Really Perform Bad for Graph Representation? arXiv 2021, arXiv:2106.05234.
  33. Prasad, R.; Dinesh, N.; Lee, A.; Miltsakaki, E.; Robaldo, L.; Joshi, A.; Webber, B. The Penn Discourse TreeBank 2.0. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, Marrakech, Morocco, 26 May–1 June 2008.