Residential College: false
Status: Published
Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models
Yuyi Huang1,2; Runzhe Zhan1; Derek F. Wong1; Lidia S. Chao1; Ailin Tao2
2025-04
Conference Name: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics
Source Publication: Findings of the Association for Computational Linguistics: NAACL 2025
Conference Date: 2025-04-29
Conference Place: Albuquerque, New Mexico
Document Type: Conference paper
Collection: DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Affiliation: 1. NLP2CT Lab, Department of Computer and Information Science, University of Macau
2. The Second Affiliated Hospital, Guangdong Provincial Key Laboratory of Allergy and Clinical Immunology, Guangzhou Medical University
First Author Affiliation: University of Macau
Recommended Citation
GB/T 7714: Yuyi Huang, Runzhe Zhan, Derek F. Wong, et al. Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models[C], 2025.
APA: Yuyi Huang, Runzhe Zhan, Derek F. Wong, Lidia S. Chao, & Ailin Tao (2025). Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models. Findings of the Association for Computational Linguistics: NAACL 2025.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.