Residential College | false |
Status | Published |
Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models
Yuyi Huang1,2; Runzhe Zhan; Derek F. Wong; Lidia S. Chao; Ailin Tao
2025-04 | |
Conference Name | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics |
Source Publication | Findings of the Association for Computational Linguistics: NAACL 2025 |
Conference Date | 2025-04-29 |
Conference Place | Albuquerque, New Mexico |
Document Type | Conference paper |
Collection | DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Affiliation | 1.NLP2CT Lab, Department of Computer and Information Science, University of Macau 2.The Second Affiliated Hospital, Guangdong Provincial Key Laboratory of Allergy and Clinical Immunology, Guangzhou Medical University |
First Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Yuyi Huang, Runzhe Zhan, Derek F. Wong, et al. Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models[C], 2025. |
APA | Huang, Y., Zhan, R., Wong, D. F., Chao, L. S., & Tao, A. (2025). Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models. Findings of the Association for Computational Linguistics: NAACL 2025. |
Files in This Item: | There are no files associated with this item. |