Grammar Correction for Multiple Errors in Chinese

Grammatical Error Correction (GEC) is a key task in Natural Language Processing (NLP). Its purpose is to automatically detect and correct grammatical errors in sentences, and it holds considerable research value. The mainstream grammar correction methods rely primarily on two end-to-end approaches: sequence tagging and text generation. These methods perform well in low-error-density settings, but in high-error-density situations, where a single sentence contains multiple errors, they often fail to produce satisfactory results and tend to over-correct words that are already correct, leading to a high false-alarm rate.

Chinese grammar error correction; prompt templates; sequence labeling

1. Introduction

Grammatical error correction is an important application task, with roles in education, official document processing, and the preprocessing stages of many natural language processing tasks. Although grammatical errors occur in every language, this discussion focuses solely on the correction of Chinese texts. Shaped by the inherent characteristics and usage conventions of Chinese, Chinese Grammatical Error Correction (CGEC) exhibits distinctive and diverse error patterns. Furthermore, sentences written by non-native speakers often contain several types of errors at once. Under such high-error-density conditions, accurately detecting and correcting the diverse and complex grammatical errors of Chinese is a challenging task. Grammatical errors can be roughly classified by their characteristics into redundancy errors (R), omission errors (M), word-order errors (W), and incorrect-word errors (S) [1]. R-type errors refer to unnecessary or repetitive linguistic elements in a sentence, leading to verbosity or needless repetition. M-type errors indicate that essential linguistic elements or structures are missing, leaving the sentence incomplete or disfluent. W-type errors denote incorrect word or phrase order, obscuring the grammar or the intended meaning. S-type errors indicate misspelled or misused words, making the sentence inaccurate or hard to understand. Table 1 shows examples of these four error types in Chinese texts.
Table 1. Types of Chinese Grammatical Errors.
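The four-way taxonomy can also be written down explicitly, for example as a mapping from error-type codes to annotated sentence pairs. The sentences in the following sketch are hypothetical illustrations of each type and are not the examples shown in Table 1.

```python
# Hypothetical illustrations of the four CGEC error types (not the Table 1 examples).
ERROR_TYPES = {
    "R": {"erroneous": "他非常很高兴",     # redundancy: 非常 and 很 are both degree adverbs
          "corrected": "他非常高兴"},
    "M": {"erroneous": "我把书放桌子上",   # omission: the preposition 在 is missing
          "corrected": "我把书放在桌子上"},
    "W": {"erroneous": "我买了昨天一本书", # word order: the time word 昨天 is misplaced
          "corrected": "我昨天买了一本书"},
    "S": {"erroneous": "我很高新认识你",   # incorrect word: 高新 should be 高兴
          "corrected": "我很高兴认识你"},
}

for code, pair in ERROR_TYPES.items():
    print(code, pair["erroneous"], "->", pair["corrected"])
```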

2. Methods for Grammar Correction

This section introduces two approaches to grammar correction, the currently mainstream sequence labeling paradigm and the text generation paradigm, as well as related work on prompt learning and prompt templates.

2.1. Grammar Correction Methods Based on Sequence Labeling and Text Generation

Research on Chinese grammar error correction falls into two categories: methods based on sequence labeling and methods based on text generation. The fundamental idea of sequence labeling-based methods is to define operation tags such as 'delete', 'retain', and 'add' according to error types such as 'redundant', 'correct', and 'missing', and to attach these operation tags to the text sequence. The model learns the dependencies among the operation tags and predicts an operation tag for each character in the sequence, which is then used to perform the correction. This type of method was first proposed and applied in English error correction. Awasthi et al. [2] used sequence labeling for text correction by first marking characters in the sequence with self-defined tags and then predicting the corresponding operation tags through an iterative process of multiple rounds of prediction and refinement; however, their work provided only simple definitions of the operation tags. Later, Omelianchuk et al. [3] refined the design of the operation tags, defining 5000 tags, including 'add', 'delete', 'modify', and 'retain', and used a pre-trained Transformer with multi-round iterative sequence labeling to obtain the operation tags for the target sequence. Deng et al. [4] achieved Chinese text correction by combining a pre-trained Transformer encoder with an editing space comprising 8772 tags, also known as the operation tag set, where each tag represents a specific editing action such as adding, deleting, or modifying a character. Given the characteristics of Chinese text, some scholars have tried to integrate phonetic- and graphic-similarity knowledge into the correction model. Li et al. [5] proposed a correction model that integrates a pointer network with confusion-set knowledge: while predicting word editing operations, the pointer network can also choose words from a confusion set that incorporates phonetic and graphic similarity, improving the correction of substitution errors. Although sequence labeling methods offer fast inference and require relatively small datasets, they demand high-quality annotated data and are constrained by the size of the operation tag set, making it difficult to handle the complex problems encountered in real applications.
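To make the tagging paradigm concrete, the sketch below applies per-character edit tags to a sentence and materializes the corrected text. The tag names (KEEP, DELETE, REPLACE_x, APPEND_x) and the helper function are illustrative assumptions in the spirit of [2][3][4], not the actual tag set of any cited model.

```python
def apply_edits(chars, tags):
    """Apply one edit tag per character to produce the corrected sequence."""
    output = []
    for ch, tag in zip(chars, tags):
        if tag == "KEEP":                   # character is already correct
            output.append(ch)
        elif tag == "DELETE":               # redundancy (R-type): drop the character
            continue
        elif tag.startswith("REPLACE_"):    # wrong word (S-type): substitute it
            output.append(tag[len("REPLACE_"):])
        elif tag.startswith("APPEND_"):     # omission (M-type): keep it, then insert after
            output.append(ch)
            output.append(tag[len("APPEND_"):])
    return "".join(output)

# Toy example: delete the redundant degree adverb in "他非常很高兴".
chars = list("他非常很高兴")
tags = ["KEEP", "KEEP", "KEEP", "DELETE", "KEEP", "KEEP"]
print(apply_edits(chars, tags))  # -> 他非常高兴
```

In the iterative schemes of [2] and [3], the tagger is reapplied to its own output for several rounds, so corrections that depend on earlier edits can still be made.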
Text generation-based methods adopt the idea of neural machine translation, translating the original sentence directly into a correct one by learning the dependencies among the words of the input sequence. Unlike translation tasks, however, the input and target sequences of grammar correction are in the same language and share many identical characters, so characters can often be copied directly from the input sequence into the target sequence during generation. To exploit this, Wang et al. [6] proposed a grammar correction model that integrates a copy mechanism. Built on the Transformer architecture, the model predicts the character at the current position of the target sequence given the input sequence and uses a balancing factor to control whether to copy characters from the input sequence into the generated target sequence. Wang et al. [7] proposed a grammar correction model that combines a dynamic residual structure with the Transformer to better capture semantic information during target-sequence generation, and also used corrupted text for data augmentation. Fu et al. [8] proposed a three-stage method: they first eliminated shallow errors such as spelling or punctuation using a pre-trained language model and a set of similar characters, then built character-level and word-level Transformer models to handle grammatical errors, and finally re-ranked the results of the previous two stages in an ensemble stage to select the optimal output. Text generation methods only need to generate correct text from the input sequence using the learned dependencies, so they do not require specific error types to be defined; however, they still need improvement in controllability and interpretability.
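The balancing factor described for the copy mechanism can be sketched as a gated mixture of a vocabulary distribution and a copy distribution over the source characters. The tensor names and shapes below are assumptions for illustration and do not reproduce the architecture of [6].

```python
import torch
import torch.nn.functional as F

def copy_augmented_distribution(gen_logits, copy_attn, src_ids, gate):
    """
    gen_logits: (batch, vocab)    decoder scores over the vocabulary
    copy_attn:  (batch, src_len)  attention weights over source characters (sum to 1)
    src_ids:    (batch, src_len)  vocabulary ids of the source characters
    gate:       (batch, 1)        balancing factor in [0, 1]
    """
    p_gen = F.softmax(gen_logits, dim=-1)          # probability of generating from the vocabulary
    p_copy = torch.zeros_like(p_gen).scatter_add(  # scatter attention mass onto source token ids
        1, src_ids, copy_attn)
    return gate * p_gen + (1.0 - gate) * p_copy    # final output distribution
```

A gate close to 1 trusts the decoder's own generation, while a gate close to 0 copies characters from the input, which suits correction tasks where most characters should be kept unchanged.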

2.2. Prompt Learning and Prompt Templates

In recent years, with the emergence of various large-scale pre-trained models, research methodology has been gradually shifting from the traditional 'pre-training + fine-tuning' paradigm to the prompt-based 'pre-training + prompting + prediction' paradigm. The traditional 'pre-training + fine-tuning' paradigm trains the model on a large dataset (pre-training) and then optimizes it for a specific task (fine-tuning). It is usually necessary to set an objective function for the specific downstream task and to retrain on the corresponding domain corpus so that the parameters of the pre-trained model adapt to that task. For ultra-large-scale pre-trained models, however, such as the GPT-3 model [9] with 175 billion parameters, matching downstream tasks through the 'pre-training + fine-tuning' paradigm is often time-consuming and costly. Moreover, since the pre-trained model already performs well in its original domain, fine-tuning for domain transfer is constrained by that domain and may damage performance. The 'pre-training + prompting + prediction' paradigm of prompt learning therefore avoids modifying the pre-trained model; instead, prompt templates are constructed so that downstream tasks better fit the pre-trained model. As research on prompt learning flourishes, this paradigm is gradually becoming the fourth paradigm of natural language processing [10].
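As a minimal illustration of the 'pre-training + prompting + prediction' idea, the sketch below wraps a sentence in a cloze-style template and lets a frozen masked language model choose between two verbalizer tokens. The template wording, the verbalizers, and the use of Hugging Face's fill-mask pipeline are assumptions for illustration, not the setup of any work cited above.

```python
from transformers import pipeline

# Frozen pre-trained model; no task-specific fine-tuning is performed.
fill_mask = pipeline("fill-mask", model="bert-base-chinese")

sentence = "我把书放桌子上"
# Cloze-style prompt template; the model fills the blank with a verbalizer token.
prompt = f"“{sentence}”这句话的语法[MASK]正确。"

# Restrict predictions to two verbalizers: 很 (grammatical) vs. 不 (ungrammatical).
for candidate in fill_mask(prompt, targets=["很", "不"]):
    print(candidate["token_str"], round(candidate["score"], 4))
```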
In prompt learning, the design of prompt templates mainly concerns the position and number of prompts, and templates can be divided into manually designed and automatically learned ones. Manually designed prompt templates draw on human experience and expert knowledge of natural language. Petroni et al. [11] manually defined cloze templates for each relation in a knowledge source to probe the factual and commonsense knowledge contained in language models. Schick et al. [12] transformed input examples into cloze examples containing task-description information, successfully combining task descriptions with standard supervised learning. Manually designed templates are intuitive and fluent but depend heavily on human language expertise and frequent trial and error, so high-quality prompt templates are costly to obtain. Automatic learning of prompt templates has therefore been explored, and it can be divided into discrete and continuous approaches. Discrete prompts use distinct discrete tokens to generate prompt templates automatically; Ben-David et al. [13] proposed a domain-adaptation algorithm that trains models to generate unique domain-related features, which are then concatenated with the original input to form prompt templates. Continuous prompts construct soft prompt templates from a vector-embedding perspective and perform prompting directly in the model's embedding space; Li et al. [14] froze the model parameters and constructed task-specific continuous vector sequences as soft prompts by adding prefixes. Many scholars also combine the two approaches to obtain higher-quality prompt templates: Zhong et al. [15] first defined prompt templates with a discrete search method, then initialized virtual tokens according to the template and fine-tuned their embeddings, and Han et al. [16] proposed a rule-based prompt tuning method that combines manually designed sub-templates into a complete prompt template according to logical rules and inserts virtual tokens with tunable embeddings.
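The continuous-prompt idea behind prefix tuning [14] can be sketched as follows: the backbone model is frozen, and only a small matrix of virtual-token embeddings prepended to the input embeddings is trained. Class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable virtual-token embeddings prepended to the frozen model's input embeddings."""

    def __init__(self, prompt_length: int, hidden_size: int):
        super().__init__()
        # These virtual-token embeddings are the only trainable parameters.
        self.prompt = nn.Parameter(torch.randn(prompt_length, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen embedding layer
        batch = input_embeds.size(0)
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)  # (batch, prompt_len + seq_len, hidden)

# Usage sketch: freeze the backbone and optimize only the soft prompt.
# for p in pretrained_model.parameters():
#     p.requires_grad = False
# soft_prompt = SoftPrompt(prompt_length=20, hidden_size=768)
# optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```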

References

  1. Wang, H.; Zhang, Y.J.; Sun, X.M. Chinese grammatical error diagnosis based on sequence tagging methods. J. Phys. Conf. Ser. 2021, 1948, 12–27.
  2. Awasthi, A.; Sarawagi, S.; Goyal, R.; Ghosh, S.; Piratla, V. Parallel iterative edit models for local sequence transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 4260–4270.
  3. Omelianchuk, K.; Atrasevych, V.; Chernodub, A.; Skurzhanskyi, O. GECToR - Grammatical error correction: Tag, not rewrite. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, Seattle, WA, USA, 10 July 2020; pp. 163–170.
  4. Deng, L.; Chen, Z.; Lei, G.; Xin, C.; Xiong, X.Z.; Rong, H.Q.; Dong, J.P. BERT enhanced neural machine translation and sequence tagging model for Chinese grammatical error diagnosis. In Proceedings of the 6th workshop on Natural Language Processing Techniques for Educational Applications, Suzhou, China, 4 December 2020; pp. 57–66.
  5. Li, J.C.; Shen, J.Y.; Gong, C.; Li, Z.H.; Zhang, M. Chinese grammar correction based on pointer network and incorporating confused set knowledge. J. Chin. Inf. Process. 2022, 36, 29–38. (In Chinese)
  6. Wang, C.C.; Zhang, Y.S.; Huang, G.J. An end-to-end Chinese text error correction method based on attention mechanism. Comput. Appl. Softw. 2022, 39, 141–147. (In Chinese)
  7. Wang, C.C.; Yang, E.; Wang, Y.Y. Chinese grammatical error correction method based on Transformer enhanced architecture. J. Chin. Inf. Process. 2020, 34, 106–114.
  8. Fu, K.; Huang, J.; Duan, Y. Youdao’s Winning Solution to the NLPCC-2018 Task2 Challenge: A Neural Machine Translation Approach to Chinese Grammatical Error Correction. In Proceedings of the 2018 CCF International Conference on Natural Language Processing and Chinese Computing, Hohhot, China, 26–30 August 2018; Springer: Cham, Switzerland, 2018; pp. 341–350.
  9. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Amodei, D. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
  10. Liu, P.; Yuan, W.; Fu, J.; Hayashi, H.; Neubig, G. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv 2021, arXiv:2107.13586.
  11. Petroni, F.; Rocktäschel, T.; Lewis, P.; Bakhtin, A.; Wu, Y.; Miller, A.H.; Riedel, S. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, Hong Kong, China, 3–7 November 2019; Association for Computational Linguistics: Stroudsburg, PA, USA, 2019; pp. 2463–2473.
  12. Schick, T.; Schütze, H. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, Online, 19–23 April 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 255–269.
  13. Ben-David, E.; Oved, N.; Reichart, R. PADA: A prompt-based autoregressive approach for adaptation to unseen domains. arXiv 2021, arXiv:2102.12206.
  14. Li, X.L.; Liang, P. Prefix-Tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 1–6 August 2021; pp. 4582–4597.
  15. Zhong, Z.; Friedman, D.; Chen, D. Factual probing is [MASK]: Learning vs. learning to recall. arXiv 2021, arXiv:2104.05240.
  16. Han, X.; Zhao, W.; Ding, N.; Liu, Z.Y.; Sun, M.S. PTR: Prompt tuning with rules for text classification. AI Open 2022, 3, 182–192.