{ "File Number": "10", "Title": "CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers", "7 Limitations": "Limitation of the CSCD-NS dataset: The data source for the CSCD-NS dataset is derived from a Chinese social networking platform. Therefore, it may not fully represent the error distribution of native speakers, as there may be slight differences in other scenarios, such as formal document writing.\nLimitation of the pseudo-data construction: The employed method of input simulation via IME is relatively basic, and the actual input scenario is more complex. For instance, individuals may utilize abbreviated pinyin to input common phrases, entering only the initials of characters (e.g., \"wm\" for \"我们\") (Tan et al., 2022). Moreover, a substantial number of users prefer the T9-style keyboard when employing IME on mobile devices. These factors collectively contribute to the inability of our pseudo-data construction method to accurately simulate the realistic input scenario.", "abstractText": "In this paper, we present CSCD-NS, the first Chinese spelling check (CSC) dataset designed for native speakers, containing 40,000 samples from a Chinese social platform. Compared with existing CSC datasets aimed at Chinese learners, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution, with a significantly higher proportion of word-level errors. To further enhance the data resource, we propose a novel method that simulates the input process through an input method, generating large-scale and high-quality pseudo data that closely resembles the actual error distribution and outperforms existing methods. Moreover, we investigate the performance of various models in this scenario, including large language models (LLMs), such as ChatGPT. The result indicates that generative models underperform BERT-like classification models due to strict length and pronunciation constraints. 
The high prevalence of word-level errors also makes CSC for native speakers highly challenging, leaving substantial room for improvement. 1", "1 Introduction": "Chinese spelling check (CSC) is a task to detect and correct spelling errors in Chinese texts. There are two primary user groups for CSC: (1) Chinese learners, including teenage students and individuals who use Chinese as a second language, and (2) Chinese native speakers. It is obvious that the latter user group has a larger population and more diverse applications; therefore, this paper concentrates on CSC for native speakers.\nHowever, there is still no CSC dataset specifically designed for native speakers. Existing CSC datasets, such as SIGHAN13, 14, and 15 (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015), are all sourced from Chinese learners. Spelling errors made by Chinese learners differ greatly from those made by native speakers. This is because Chinese input relies on Chinese input methods (IMEs), and modern Chinese IMEs always have powerful language models, making it difficult to recommend candidates that clearly do not fit the context.\n1https://github.com/nghuyong/cscd-ns\nAs shown in Figure 1, native speakers using Chinese IMEs are unlikely to make such an unusual error.\nFurthermore, the size of existing datasets is limited. As shown in Table 1, across the three SIGHAN datasets, the training set contains an average of merely 2158 samples, the test set comprises an average of only 1054 samples, and no development set is provided. With such small-scale datasets, it is difficult for models to be trained sufficiently and for evaluation results to be reliable.\nTo address the aforementioned issues, we introduce CSCD-NS, a Chinese spelling check dataset designed for native speakers. The dataset is sourced from real Weibo (a Chinese social media platform) posts, which contain genuine spelling errors made by native speakers during their input process. 
Moreover, the dataset comprises 40,000 samples, ten times larger than previous datasets, making it also the largest dataset for the CSC task. To conduct an in-depth investigation into the distribution of spelling errors, we develop a tagging system that operates at phonetic and semantic levels.\narXiv:2211.08788v3 [cs.CL] 23 May 2024\nThe analysis indicates that native speakers make a higher proportion of homophonic and word-level errors compared to Chinese learners, with the proportion of word-level errors doubling.\nDue to the lack of labeled data, previous studies often build additional pseudo data to improve the performance of models. However, these methods, which rely on confusion sets (Liu et al., 2021; Zhang et al., 2020) or ASR transcriptions (Wang et al., 2018), do not align with the real-world input scenario. Therefore, we propose a novel method that directly simulates the input process through a Chinese IME and adds sampled noises to construct high-quality pseudo data. Experimental results show that our method better fits the real error distribution and brings greater improvements.\nWe conduct comprehensive experiments on CSCD-NS, with different model sizes (from 0.1B to 13B parameters), architectures (encoder-only, encoder-decoder, and decoder-only), and learning approaches (fine-tuning and in-context learning). We also evaluate the performance of ChatGPT and GPT4. The results demonstrate that BERT-like classification models outperform generative models, as the latter struggle with the simultaneous constraints of text length and pronunciation. Concurrently, the CSC task for native speakers is challenging due to the high proportion of word-level errors, leaving substantial room for improvement.\nIn summary, our contributions are as follows:\n• We introduce the first Chinese spelling check dataset for native speakers, which is also the largest dataset for the CSC task. 
Through quantitative analyses, we further unveil the specific error distribution for this scenario.\n• We propose a novel method for constructing high-quality and large-scale pseudo data through a Chinese IME. Experimental results show that our method can bring greater improvements than existing methods.\n• We explore the performance of different types of models in this scenario and analyze the challenges. To the best of our knowledge, we are the first to investigate the effectiveness and limitations of large language models (LLMs), such as ChatGPT, in addressing the CSC task.", "2 Related Work": "CSC Datasets: The existing CSC datasets, such as the SIGHAN series (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015), primarily cater to Chinese learners. However, these datasets suffer from limited data size and significant discrepancies in spelling errors compared to those made by native speakers. While there have been some efforts to develop Chinese grammatical error correction (CGEC) datasets for native speakers (Ma et al., 2022; Xu et al., 2022; Zhao et al., 2022; Wang et al., 2022), no such work has been undertaken for CSC datasets.\nCSC Data Augmentation: To compensate for the lack of labeled data, previous studies often create additional pseudo data to enhance performance. The mainstream method is based on confusion sets (Liu et al., 2021; Zhang et al., 2020); the pseudo data generated in this way is large in size but low in quality because context information is not considered. Another relatively high-quality construction method is based on ASR (Wang et al., 2018). However, this approach requires additional labeled ASR data, making it difficult to create large-scale datasets. Moreover, the spelling errors generated by these two methods differ greatly from those produced by native speakers, such as having a much smaller proportion of word-level errors. 
We provide a detailed analysis in Appendix A.\nCSC Models: In recent years, BERT-like (Devlin et al., 2019) classification models have dominated research on the CSC task (Hong et al., 2019; Zhu et al., 2022; Huang et al., 2021; Zhang et al., 2020; Liu et al., 2021, 2022). However, due to the lack of large-scale and high-quality datasets, the performance of these models is greatly limited.", "3 CSCD-NS": "In this section, we show how CSCD-NS is built and investigate its error distribution.", "3.1 Data Source": "We chose the LCSTS dataset (Hu et al., 2015) as our data source. This dataset is composed of authentic posts from Weibo, a popular Chinese social media platform. As shown in Figure 2, spelling errors found within these posts reflect the genuine mistakes made by native speakers during the input process. Furthermore, this dataset contains over 2 million posts and covers a wide range of fields, such as finance, sports, and entertainment. The substantial scale and scope of the LCSTS make it suitable to serve as the data source.", "3.2 Data Selection": "We split posts in LCSTS into sentences and obtain over 8 million sentences. It is not realistic to label all of these sentences, and most of them are completely correct. Therefore, we use an error detection model to filter out the correct sentences.\nDetection Model: Given a source sequence X = {x1, x2, ..., xN}, the detection model checks whether a token xi (1 ≤ i ≤ N) is correct or not. We use the labels 1 and 0 to mark misspelled and correct tokens, respectively. The detection model can be formalized as follows:\ny = sigmoid(W^T E(e)) (1)\nwhere e = {e1, e2, ..., eN} is the sequence of word embeddings and E(∗) is the pre-trained encoder. 
The output y = {y1, y2, ..., yN} is the sequence of probabilities, where yi ∈ (0, 1) denotes the probability that xi is erroneous.\nTraining: We follow the successful experience (Wang et al., 2020) of the NLPTEA2020 task (Rao et al., 2020) and use a Chinese ELECTRA-Large discriminator model 2 (Clark et al., 2020) to initialize the detection model. Following previous research, we train the detection model on SIGHAN13-15’s training data and Wang’s pseudo data (Wang et al., 2018) and select the best checkpoint on SIGHAN13-15’s test data 3.\nFiltering: We then use the trained detection model to filter out correct sentences. For the input sentence, we can obtain the error probability of each token y = {y1, y2, ..., yN}.\n2https://github.com/ymcui/Chinese-ELECTRA\n3SIGHAN datasets have no development set.\nPrevious research indicates that the detection model struggles with certain Chinese particles (的/地/得) due to the poor labeling of these words in SIGHAN datasets. Additionally, low-frequency entity words, such as person names, are also prone to over-checking. To address these issues, we utilize a Chinese lexical analysis tool (LAC) (Jiao et al., 2018) to identify these particles and entities in the input sentence. We categorize tokens into three groups: Cparticle, Centity, and Cothers. Then, we calculate the maximum error probability for tokens in each category. If a category is empty, its maximum error probability is 0. We only consider a sentence correct if the maximum error probability in every category is below the corresponding threshold. This can be formalized as follows:\nmax({yi | xi ∈ Cparticle}) < δparticle\nmax({yi | xi ∈ Centity}) < δentity\nmax({yi | xi ∈ Cothers}) < δothers\n(2)\nHere, δparticle, δentity, and δothers represent threshold values. 
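The category-wise filtering rule in Equation 2 can be sketched as follows. This is a minimal illustration: the per-token error probabilities are assumed to come from the trained detection model, and the per-token categories from a lexical analyzer such as LAC; the threshold values follow the paper.

```python
# Sketch of the category-wise filtering rule (Equation 2): a sentence is
# treated as correct only if, within every token category, the maximum
# per-token error probability stays below that category's threshold.
THRESHOLDS = {"particle": 0.05, "entity": 0.5, "others": 0.15}

def is_sentence_correct(error_probs, categories, thresholds=THRESHOLDS):
    """error_probs: per-token error probabilities from the detection model.
    categories: per-token labels in {'particle', 'entity', 'others'},
    e.g. produced by a lexical analyzer."""
    for category, delta in thresholds.items():
        probs = [p for p, c in zip(error_probs, categories) if c == category]
        # An empty category contributes a maximum error probability of 0.
        if max(probs, default=0.0) >= delta:
            return False
    return True
```

Lower thresholds for particles reflect that the detection model is less reliable on 的/地/得, so even weak error signals there are treated as suspicious.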
These thresholds are determined using a small manually labeled set and are set to 0.05, 0.5, and 0.15, respectively.\nBased on the above method, we filter out approximately 91.2% of sentences, retaining around 700,000 sentences that may contain spelling errors. To verify the accuracy of our filtering, we randomly select 2,000 filtered sentences and find that the accuracy is 99.2%, aligning with our expectations. For the remaining sentences, we randomly select a portion for manual annotation.", "3.3 Data Annotation": "We recruit a group of native speakers for manual annotation. The annotators are required to check whether the given sentence contains any spelling errors and provide the correct sentence. To ensure the quality of annotation, each sentence is annotated at least twice by different annotators. If the results of the two annotations are inconsistent, a senior annotator makes the final decision.\nTo clarify the annotation rules and reduce disputes during the annotation process, sentences that fall into the following three categories are directly discarded: (1) sentences with inherent ambiguity; (2) sentences whose errors admit multiple reasonable corrections; (3) sentences with complex grammatical errors. Therefore, each sentence retained in the annotation process is semantically clear and has a unique correction result.\nIn the end, we obtain 40,000 manually annotated sentences, which constitute the CSCD-NS dataset. After random partitioning, there are 30,000 samples in the training set, and 5,000 samples each in the development and test sets.", "3.4 Analysis on Basic Statistics": "As shown in Table 1, CSCD-NS is significantly larger in scale than existing datasets. Moreover, only CSCD-NS provides a development set, is in Simplified Chinese, and originates from daily input by native speakers. 
Additionally, the CSCD-NS exhibits a more balanced distribution of positive and negative samples, with fewer spelling errors per sentence on average, suggesting a lower error rate among native speakers compared to Chinese learners.", "3.5 Analysis on Error Distribution": "To conduct an in-depth study on the differences between native speakers and Chinese learners in terms of spelling errors, we design a tagging system for quantitative analyses.\nTag definition: We define three phonetic-level tags and two semantic-level tags. The phonetic tags consist of: (1) same phonetic error: the erroneous character has the same pronunciation as the correct one. (2) similar phonetic error: the erroneous character’s pronunciation has an edit distance of 1 from the correct character’s pronunciation. (3) dissimilar\nphonetic error: the erroneous character’s pronunciation has an edit distance greater than 1 from the correct character’s pronunciation. The semantic tags consist of: (1) word-level error: the erroneous word is a valid Chinese word. (2) character-level error: the erroneous word is not a valid Chinese word, or the length of the erroneous word is 1.\nAs shown in Table 2, we first tokenize the correct sentence using LAC (Jiao et al., 2018) to obtain word-level correction pairs. For each pair, we compute the pinyin edit distance and assign a phonetic-level tag. Simultaneously, we check the original word’s validity in Chinese and incorporate its length to assign a semantic tag.\nPhonetic-level analysis: As illustrated in Figure 3, the proportion of same phonetic errors is the largest, while the proportion of dissimilar phonetic errors is the smallest in all four datasets. This feature is more pronounced in the CSCD-NS dataset, where the proportion of dissimilar phonetic errors is only 2.2%, significantly lower than in the other datasets. Over 97% of the errors are either the same phonetic or similar phonetic errors. 
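The tag-assignment procedure described above can be sketched as follows. This is an illustrative sketch only: pinyin strings are assumed to be precomputed by a pinyin conversion tool, and word validity is checked against a hypothetical lexicon; neither helper is part of the paper's released code.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two pinyin strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def phonetic_tag(wrong_pinyin, right_pinyin):
    """Phonetic-level tag: distance 0 -> same, 1 -> similar, >1 -> dissimilar."""
    d = edit_distance(wrong_pinyin, right_pinyin)
    if d == 0:
        return "same"
    return "similar" if d == 1 else "dissimilar"

def semantic_tag(wrong_word, lexicon):
    """Semantic-level tag: a word-level error is a valid multi-character
    Chinese word; otherwise it is a character-level error."""
    if len(wrong_word) > 1 and wrong_word in lexicon:
        return "word-level"
    return "character-level"
```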
This is because even if users make slight mistakes in their pinyin input, the Chinese IME will auto-fix the input pinyin based on the context (Jia and Zhao, 2014).\nSemantic-level analysis: As shown in Figure 3, the proportion of word-level errors in CSCD-NS (49.4%) far exceeds that of existing datasets, which is twice the average value (23.3%) of the SIGHAN datasets. This is because native speakers rely on the IME to input Chinese texts, which tends to recommend relatively reasonable valid words rather than strange \"error words\", resulting in a lower proportion of character-level errors. Compared to character-level errors, word-level errors pose a greater challenge to CSC systems.", "4 Data Augmentation": "The manual annotation of CSC datasets is very expensive; therefore, how to construct pseudo data has always been a valuable topic. In this section, we introduce a novel method that can generate high-quality pseudo data on a large scale.", "4.1 Data Preparation": "The basic principle of pseudo-data construction is to add noise to accurate sentences. Therefore, it is necessary to first prepare completely correct sentences. Fortunately, such text data is readily available on the Internet, including Wikipedia articles and classic books. This availability also ensures the generation of a large-scale dataset.", "4.2 IME-based Pseudo Data Generation": "First, we analyze the annotated data to obtain the error distribution, including the distribution of the number of errors per sentence Dnum, the phonetic-level error distribution Dphonetic, and the semantic-level error distribution Dsemantic.\nAs illustrated in Figure 4, the IME-based generation of pseudo data involves eight steps.\n(1) Sample a noise vnum based on Dnum, which indicates the number of generated spelling errors. 
The following steps are performed for each error.\n(2) Sample a semantic noise vsemantic based on Dsemantic, which indicates whether the error is at the word level or the character level.\n(3) Randomly select a token from the original text based on the sampled vsemantic.\n(4) Sample a phonetic noise vphonetic based on Dphonetic, which indicates whether the error is a same, similar, or dissimilar phonetic error.\n(5) Generate the new pinyin p, based on the sampled phonetic noise vphonetic and the actual pronunciation of the selected token.\n(6) In a Chinese IME, input the correct text before the selected token t and enter the generated pinyin p. The IME then recommends reasonable candidates {c1, c2, ..., cn}. Leveraging the powerful language model of the IME, candidates are recommended by considering both the context before token t and the entered pinyin p.\n(7) Sample a candidate c from {c1, c2, ..., cn} to replace the original token t, yielding the noised sentence X′.\n(8) Filter the generated sample with a language model (LM post-filtering): the sample is retained only if the fluency gap between the noised sentence X′ and the original sentence X exceeds a threshold δ:\nPPL(X′) − PPL(X) > δ (4)\nThrough these steps, we can generate pseudo data that closely resembles the actual input process.", "4.3 LCSTS-IME-2M": "We apply the above method to construct a large-scale CSC pseudo dataset, LCSTS-IME-2M, consisting of about 2 million samples, based on the correct sentences filtered from LCSTS, the error distribution of CSCD-NS, and the Google IME 4.", "5 Experiments": "In this section, we evaluate the performance of different models on CSCD-NS and compare different pseudo-data construction methods.", "5.1 Basic Settings": "Data: We perform experiments based on the labeled data CSCD-NS and the pseudo data LCSTS-IME-2M.\n4https://www.google.com/inputtools/\nFor pseudo data, we pre-train the model on it first, then fine-tune the model on the labeled data.\nMetric: We compute detection and correction metrics at the sentence level and character level, including precision, recall, and F1 score. For sentence-level metrics, we use the calculation method in FASPell (Hong et al., 2019). 
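A simplified version of the sentence-level correction metric can be sketched as follows; this is an illustration of the general precision/recall scheme, not the exact FASPell calculation used in the paper.

```python
def sentence_level_prf(sources, predictions, targets):
    """Simplified sentence-level correction metrics: precision is computed
    over sentences the model modified, recall over sentences that truly
    contain errors; a hit requires an exact match with the target."""
    changed = [(p, t) for s, p, t in zip(sources, predictions, targets)
               if p != s]
    erroneous = [(p, t) for s, p, t in zip(sources, predictions, targets)
                 if s != t]
    hits = sum(p == t for p, t in changed)  # correctly corrected sentences
    precision = hits / len(changed) if changed else 0.0
    recall = hits / len(erroneous) if erroneous else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

A sentence counts as a hit only when the whole corrected sentence matches the reference, which is why sentence-level scores are stricter than character-level ones.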
For character-level metrics, we calculate over all characters instead of only the correctly detected ones.\nBaselines: As shown in Table 3, the baselines encompass a diverse range of model structures, sizes, and learning methods. (1) BERT (Devlin et al., 2019) directly fine-tunes the standard masked language model to generate fixed-length corrections. (2) Soft-Masked BERT (SM BERT) (Zhang et al., 2020) employs an error detection model to provide better correction guidance. (3) PLOME (Liu et al., 2021) integrates phonetic and visual features into the pre-trained model. It includes a pre-training step on a confusion set-based pseudo dataset. (4) BART (Lewis et al., 2020) models the CSC as a sequence-to-sequence task. We use the Chinese BART-large version here 5. (5) Baichuan2 (Baichuan, 2023) models the CSC as a text generation task based on instructions. We fine-tune the model by LoRA (Hu et al., 2021) and use the 7B and 13B versions here 6. (6) ChatGPT and GPT4 perform the CSC task in a few-shot setting (10 examples) through in-context learning (ICL) (Dong et al., 2022).\nTo ensure that the correction results are of the same length as the input text, we only extract equal-length substitution modifications for generative models (BART, Baichuan2, ChatGPT, and GPT4).\n5https://huggingface.co/fnlp/bart-large-chinese\n6https://github.com/baichuan-inc/Baichuan2\nFurther implementation details of these models can be found in Appendix B.
These strong constraints make it easy for generative models to produce over-corrections and incorrect corrections.\n(3) For generative models, performance tends to improve gradually as the parameter size increases. This trend can be observed from smaller models like BART (0.4B) to larger ones such as Baichuan2-13B. Similarly, GPT4 outperforms ChatGPT, yet even GPT4 with in-context learning only achieves performance comparable to Baichuan2-7B fine-tuned on CSCD-NS.\n(4) Large-scale and high-quality pseudo data is important for improving performance, bringing consistent improvements across all six models.\n(5) The task of CSC for native speakers is highly challenging, and the best F1 score of the baseline models is still below 80. A key characteristic of this scenario is the high proportion of word-level errors. As shown in Table 5, word-level errors are more difficult for models to handle than character-level errors, as they require understanding more complex contexts. The development of CSC models, from BERT to PLOME, has primarily focused on optimizing character-level errors, with little progress made in addressing word-level errors. Therefore, further efforts are required in this scenario.", "5.3 Better Data Augmentation Method": "In this part, we compare different pseudo-data construction methods. We conduct experiments on an existing ASR-based pseudo dataset (Wang et al., 2018), containing about 271K samples. We extract the correct sentences and construct new pseudo-data based on confusion sets and the IME, respectively.\nAs demonstrated in Table 7, our IME-based approach exhibits a substantial enhancement in performance compared to the other two methods. This improvement is even more pronounced when training exclusively on pseudo-data. The primary factor contributing to this success is the error distribution. 
As depicted in Figure 5, the pseudo-data generated via the IME-based method more accurately reflects the spelling errors made by native speakers. More analysis can be found in Appendix A.", "5.4 Discussions": "For generative models, it is difficult to ensure that the generated text satisfies constraints on length and pronunciation. In the original correction results produced by ChatGPT, a staggering 82.1% of modifications exhibit unequal length, while 35.4% display dissimilar pronunciation. As illustrated in Table 6, the replacement of \"处\" with \"处于\" (located in) disregards the length constraint by introducing an additional character. Similarly, the correction of \"仍旧\" to \"仍然\" (still) overlooks the pronunciation constraint. Although these alterations may appear reasonable, they fail to meet the CSC task’s requirements.\nBERT-like classification models have difficulty in addressing complex word-level errors and equal-length grammatical errors, as these require a strong contextual understanding. For example, the PLOME model shows a recall rate of only 60% for word-level errors and merely 44% for particle-related grammatical errors (的/地/得). Table 6 illustrates that the incorrect word \"报到\" (check-in) is a high-frequency term, necessitating the model to recognize its context and correct it to \"报道\" (report). Similarly, in the phrase \"尽快的打破\" (try to break), the model must comprehend the grammatical rule (the particle between the adjective and the verb should be \"地\" instead of \"的\") and apply the appropriate correction.\nMoreover, all baseline systems, which are based on pre-trained language models, exhibit a propensity to over-convert low-frequency expressions into more prevalent ones (Zhang et al., 2020; Liu et al., 2022). 
As demonstrated in Table 6, \"跟紧\" and \"跟进\" share similar meanings (follow-up); however, since \"跟进\" is more frequently used, the model is prone to over-correcting.\nConsequently, enabling controlled text generation, addressing complex word-level and grammatical errors, and enhancing the understanding of low-frequency or new words all represent valuable avenues for future research.", "6 Conclusion": "In this paper, we focus on CSC for native speakers. For this scenario, we propose a new dataset, CSCD-NS, which is also the largest dataset for CSC. We further unveil the specific error distribution, with a significantly higher proportion of word-level errors. Moreover, we introduce an IME-based pseudo-data construction approach, enabling large-scale generation of high-quality pseudo-data. We explore the performance of various models and are the first to evaluate ChatGPT and GPT4 on the CSC task. Our experiments demonstrate that BERT-like models exhibit better performance than generative models, but there is still considerable room for improvement. We hope these data resources and our findings can stimulate further research in this area.", "8 Ethics Statement": "License: CSCD-NS and the constructed pseudo-data LCSTS-IME-2M are based on LCSTS (Hu et al., 2015); we applied for and obtained the right to use this dataset and performed the academic research under the copyright.\nAnnotator Compensation: In this work, annotators are from a data labeling company in China. Through pre-labeling, we estimate that each annotator can label 80 samples per hour, and the labeling speed becomes faster as annotators grow skilled. In China, 60 yuan (8.76 dollars) per hour is a fair wage; therefore, we pay annotators 0.75 yuan (0.11 dollars) for each sentence.", "A Pseudo Data Analysis": "A.1 Impact of LM Post-Filtering\nIn this section, we investigate the influence of language model (LM) post-filtering, which constitutes the final stage of our proposed pseudo-data construction method. 
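The post-filtering decision can be sketched as follows. This is an illustrative reading of the method: `ppl` stands for a hypothetical language-model perplexity scorer (lower means more fluent), and a noised sentence is kept only when its fluency gap from the original exceeds a threshold.

```python
def passes_lm_filter(original, noised, ppl, delta):
    """Keep a noised sentence only if the language model judges it
    sufficiently less fluent than its source. A very low (even negative)
    threshold admits overly hard noise, while a very high threshold keeps
    only trivial errors. `ppl` is a hypothetical perplexity function."""
    return ppl(noised) - ppl(original) > delta
```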
We extract accurate sentences from the Wang271K dataset (Wang et al., 2018) and generate pseudo-data using the IME, incorporating various LM filtering strategies. We choose the basic BERT model to conduct the experiment and train the model only on the pseudo data to clearly distinguish the differences.\nAs demonstrated in Table 8, the lack of LM filtering results in the introduction of undesired noise. For example, the generated pseudo-data may consist of entirely accurate sentences. Meanwhile, when the threshold is excessively low (even below 0), the generated errors become more complex, leading to high recall but poor precision. Conversely, if the threshold is set too high, the generated errors tend to be relatively simple, resulting in better precision but lower recall. Therefore, LM filtering is necessary, and selecting an appropriate threshold is also very important.\nA.2 Error Distribution\nAs illustrated in Figure 5, we analyze the error distribution of pseudo-data generated by various methods at both phonetic and semantic levels. It is clear that our pseudo-data construction method demonstrates the highest consistency with the CSCD-NS dataset, suggesting that our approach closely resembles real input scenarios. In contrast, the confusion set-based method and the ASR-based method exhibit a significant deviation from the actual error distribution.\nA.3 Case Study\nWe present sampled examples in Table 9. It can be observed that the confusion set-based method is capable of producing similar phonetic errors; however, these errors are entirely out of context and cannot accurately represent the real input scenario. The ASR-based method performs better but primarily generates character-level errors. Moreover, since the ASR-based method lacks an LM filtering module, the generated noise may occasionally be correct, as demonstrated by the third case in Table 9. 
In contrast, our method can effectively generate high-quality pseudo data, encompassing both word-level and character-level errors.", "B Experimental Details": "In this section, we provide comprehensive descriptions of the experimental procedures and parameter settings for each model.\nNote that for each experiment, we select the best checkpoint based on the development set and evaluate its performance on the test set. We carry out three trials for each experiment and report the average results in the paper. The total training time is contingent upon the size of the training data and can be estimated based on the training speed.\n7https://huggingface.co/bert-base-chinese\n8https://www.pytorchlightning.ai/\n9The metric used to save the best model\n10https://share.weiyun.com/OREEY0H3\n11https://www.tensorflow.org/\nB.1 BERT-like Models\nSince there is no official implementation for BERT and SM BERT, we follow a widely-used open-source version 12. For PLOME, we directly utilize the official code 13. We adhere to the default hyperparameters, and the detailed configurations for these three models can be found in Table 10 and Table 11.\nB.2 BART\nWe choose the Chinese BART-large model as the base model and fine-tune it for the CSC task by treating it as a sequence-to-sequence task. The model takes the original sentence as input and produces the correct sentence as output. The decoding method employed is beam search with a beam size of 4. The specific model configuration can be found in Table 12.\n12https://github.com/gitabtion/BertBasedCorrectionModels\n13https://github.com/liushulinle/PLOME\n14https://huggingface.co/fnlp/bart-large-chinese\n15https://github.com/huggingface/transformers\nB.3 Baichuan2\nBaichuan2 (Baichuan, 2023) is a powerful Chinese language model that includes two open-source models, Baichuan2-7B and Baichuan2-13B. 
The CSC task is modeled as an instruction tuning task, with the instruction being \"纠正句子中的拼写错误\" (correct the spelling errors in the following sentence). We use LoRA (Hu et al., 2021) to fine-tune the model. During the decoding stage, random sampling is not performed, and the beam size is set to 1. Table 13 displays the specific configurations.\nB.4 ChatGPT and GPT4\nWe tested ChatGPT and GPT4 through OpenAI’s API on November 26, 2023; the model id for ChatGPT is gpt-3.5-turbo-1106 and for GPT4 is gpt-4-1106-preview. We set the temperature to 0 to reduce the influence of random sampling. As illustrated in Table 14, we devise three prompt templates, each comprising a task description, 10 examples, and a test sentence. These 10 examples encompass 5 positive instances (sentences containing spelling errors) and 5 negative instances (sentences without spelling errors), all of which are randomly chosen from the training set. As shown in Table 15, utilizing the same prompt template with varying example samples exerts a negligible effect on the outcomes. Likewise, employing different prompt templates also has a minor impact on the results.\n16https://github.com/baichuan-inc/Baichuan2\n17https://github.com/huggingface/transformers\nGiven that the outcomes obtained using \"prompt 3\" are slightly better, we present the average results derived from \"prompt 3\" in our paper." }