Status | Published
Title | Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model
Authors | Zhan, Runzhe1; Yang, Xinyi1; Wong, Derek F.1; Chao, Lidia S.1; Zhang, Yue2
Date Issued | 2024
Conference Name | 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 |
Source Publication | Proceedings of the Annual Meeting of the Association for Computational Linguistics |
Volume | Findings of the Association for Computational Linguistics: ACL 2024
Pages | 12131-12145 |
Conference Date | 11-16 August 2024 |
Conference Place | Bangkok, Thailand and virtual meeting |
Country | Thailand |
Publisher | Association for Computational Linguistics (ACL) |
Abstract | While supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of a foundation large language model (LLM) to specific preferences, concerns have been raised about the depth of this alignment, with some critiques suggesting it is merely “superficial”. We critically examine this hypothesis within the scope of cross-lingual generation tasks, proposing that the effectiveness of SFT may be constrained by its reliance on prior tokens to guide cross-lingual generation. Based on this crucial insight, and in response to the challenges posed by the costly and limited availability of non-English data for SFT, we introduce a novel training-free alignment method named PRETTY, which employs minimal task-related prior tokens to bridge the foundation LLM and the SFT LLM, achieving comparable performance without training. Experiments on machine translation and part-of-speech tagging across eight languages demonstrate the efficacy of PRETTY in cross-lingual settings. Remarkably, by initiating the decoding process with only one or two prior tokens, foundation LLMs can achieve performance comparable to their SFT counterparts. This method presents a cost-effective alternative to SFT and advances the democratization of multilingual LLMs.
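The core idea described in the abstract, seeding the decoder with one or two task-related prior tokens so a foundation (non-SFT) model continues in the aligned output space, can be illustrated with a minimal sketch. The sketch below assumes a Hugging Face causal LM; the model name, prompt, and the prior token are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch of prior-token ("prefix as yarn") decoding with a
# Hugging Face causal LM. Model name, prompt, and the prior token are
# hypothetical placeholders, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # any foundation (non-SFT) LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Cross-lingual task prompt (machine translation as an example).
prompt = ("Translate English to German.\n"
          "English: The weather is nice today.\n"
          "German:")

# Append one or two task-related prior tokens (e.g., a plausible first
# word of the target) so the foundation model's continuation is steered
# toward the target language rather than drifting back to English.
prior_tokens = " Das"  # hypothetical prior token in the target language

inputs = tokenizer(prompt + prior_tokens, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Decode only the newly generated continuation; the prior token itself
# remains part of the final translation.
generated = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(prior_tokens + generated)
```

No parameters are updated here: the prior tokens are simply prepended to the decoding context, which is what makes the approach training-free relative to SFT.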
DOI | 10.18653/v1/2024.findings-acl.722 |
Language | English
Scopus ID | 2-s2.0-85197193990 |
Document Type | Conference paper |
Collection | Faculty of Science and Technology > Department of Computer and Information Science
Corresponding Author | Wong, Derek F.; Zhang, Yue |
Affiliation | 1. NLP2CT Lab, Department of Computer and Information Science, University of Macau, Macao; 2. School of Engineering, Westlake University, China
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Zhan, Runzhe, Yang, Xinyi, Wong, Derek F., et al. Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model[C]. Association for Computational Linguistics (ACL), 2024: 12131-12145.
APA | Zhan, Runzhe, Yang, Xinyi, Wong, Derek F., Chao, Lidia S., & Zhang, Yue (2024). Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model. Proceedings of the Annual Meeting of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: ACL 2024, 12131-12145.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.