Residential College: false
Status: Published
Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model
Authors: Zhan, Runzhe [1]; Yang, Xinyi [1]; Wong, Derek F. [1]; Chao, Lidia S. [1]; Zhang, Yue [2]
Year: 2024
Conference Name: 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Source Publication: Proceedings of the Annual Meeting of the Association for Computational Linguistics
Volume: Findings of the Association for Computational Linguistics: ACL 2024
Pages: 12131-12145
Conference Date: 11-16 August 2024
Conference Place: Bangkok, Thailand, and virtual meeting
Country: Thailand
Publisher: Association for Computational Linguistics (ACL)
Abstract

While supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of foundation large language models (LLMs) to specific preferences, concerns have been raised about the depth of this alignment, with some critiques suggesting it is merely “superficial”. We critically examine this hypothesis within the scope of cross-lingual generation tasks, proposing that the effectiveness of SFT may be constrained by its reliance on prior tokens to guide cross-lingual generation. Based on this crucial insight, and in response to the challenges posed by the costly and limited availability of non-English data for SFT, we introduce a novel training-free alignment method named PRETTY, which employs minimal task-related prior tokens to bridge the foundation LLM and the SFT LLM, achieving comparable performance without training. Experiments on machine translation and part-of-speech tagging across eight languages demonstrate the efficacy of PRETTY in cross-lingual settings. Remarkably, by initiating the decoding process with only one or two prior tokens, foundation LLMs can achieve performance comparable to their SFT counterparts. This method presents a cost-effective alternative to SFT and advances the democratization of multilingual LLMs.
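The core mechanism described in the abstract, seeding a foundation LLM's decoding with one or two task-related prior tokens in the target language, can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the model name, prompt wording, and choice of prior token are assumptions made for this example.

# Minimal sketch of prior-token-initiated decoding in the spirit of PRETTY.
# Illustrative assumptions: the model, the prompt template, and the prior
# token " O" are not taken from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # hypothetical stand-in for a foundation (non-SFT) LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A zero-shot translation prompt for the foundation model.
prompt = ("Translate the following English sentence into Portuguese.\n"
          "English: The weather is nice today.\n"
          "Portuguese:")

# A single task-related "prior token" in the target language, appended so
# that decoding continues from it instead of starting from scratch.
prior = " O"  # a plausible first target-language word; illustrative only

inputs = tokenizer(prompt + prior, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Print the prior token followed by the newly generated continuation.
new_tokens = out[0][inputs["input_ids"].shape[1]:]
print(prior + tokenizer.decode(new_tokens, skip_special_tokens=True))

Under this reading, the paper's finding is that such minimal seeding lets the foundation model approach its SFT counterpart on translation and POS tagging; the sketch above shows only the decoding mechanics, not the evaluation.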

DOI: 10.18653/v1/2024.findings-acl.722
URL: View the original
Language: English
Scopus ID: 2-s2.0-85197193990
Document Type: Conference paper
Collection: Faculty of Science and Technology, Department of Computer and Information Science
Corresponding Authors: Wong, Derek F.; Zhang, Yue
Affiliations:
1. NLP2CT Lab, Department of Computer and Information Science, University of Macau, Macao
2. School of Engineering, Westlake University, China
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714: Zhan, Runzhe, Yang, Xinyi, Wong, Derek F., et al. Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model[C]. Association for Computational Linguistics (ACL), 2024: 12131-12145.
APA: Zhan, Runzhe, Yang, Xinyi, Wong, Derek F., Chao, Lidia S., & Zhang, Yue (2024). Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model. Proceedings of the Annual Meeting of the Association for Computational Linguistics, Findings of the Association for Computational Linguistics: ACL 2024, 12131-12145.
Files in This Item:
There are no files associated with this item.
Related Services
Similar articles in Google Scholar
Similar articles in Baidu academic
Similar articles in Bing Scholar
[Zhan, Runzhe]'s Articles
[Yang, Xinyi]'s Articles
[Wong, Derek F.]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.