Status: Published
Anchor-based Large Language Models
Jianhui Pang1; Fanghua Ye2; Derek F. Wong1; Xin He3; Wanshun Chen3; Longyue Wang3
2024
Conference Name: The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Source Publication: Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
Volume: Findings of the Association for Computational Linguistics ACL 2024
Pages: 4958-4976
Conference Date: 11-07-2024
Conference Place: Bangkok
Country: Thailand
Publisher: Association for Computational Linguistics (ACL)
Abstract

Large language models (LLMs) predominantly employ decoder-only transformer architectures, necessitating the retention of keys/values information for historical tokens to provide contextual information and avoid redundant computation. However, the substantial size and parameter volume of these LLMs require massive GPU memory. This memory demand grows with the length of the input text, leading to an urgent need for more efficient methods of information storage and processing. This study introduces Anchor-based LLMs (AnLLMs), which utilize an innovative anchor-based self-attention network (AnSAN) together with an anchor-based inference strategy. This approach enables LLMs to compress sequence information into an anchor token, reducing the keys/values cache and enhancing inference efficiency. Experiments on question-answering benchmarks reveal that AnLLMs maintain similar accuracy levels while achieving up to 99% keys/values cache reduction and up to 3.5 times faster inference. Despite a minor compromise in accuracy, the substantial enhancements of AnLLMs employing the AnSAN technique in resource utilization and computational efficiency underscore their potential for practical LLM applications.
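The core idea described above — compressing a text segment's keys/values into a single anchor token so that the other cache entries can be discarded — can be illustrated with a toy sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' implementation: the query/key/value projections are identity maps, segments are pre-split, and the last token of each segment is treated as the anchor, whereas the actual AnSAN designates trained anchor tokens and enforces this behaviour through the attention mask inside a transformer.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, keys, values):
    # Single-query scaled dot-product attention over the cached keys/values.
    d = q.shape[-1]
    scores = keys @ q / np.sqrt(d)       # shape: (num_cached,)
    weights = softmax(scores)
    return weights @ values              # shape: (d,)

def process_sequence(segments):
    # kv_cache holds only the entries that survive anchor-based compression.
    kv_cache = []
    for segment in segments:
        segment_kv = []                  # temporary per-segment cache
        for token_vec in segment:
            # Toy projections: identity maps stand in for learned Q/K/V matrices.
            q, k, v = token_vec, token_vec, token_vec
            cached = kv_cache + segment_kv
            keys = np.array([key for key, _ in cached] + [k])
            values = np.array([val for _, val in cached] + [v])
            _ = attend(q, keys, values)  # contextualised output (unused in this sketch)
            segment_kv.append((k, v))
        # Anchor-based compression: keep only the segment's final (anchor) token,
        # which has attended to, and thus aggregated, the whole segment.
        kv_cache.append(segment_kv[-1])
    return kv_cache

rng = np.random.default_rng(0)
segments = [rng.standard_normal((5, 8)) for _ in range(3)]  # 3 segments x 5 tokens
cache = process_sequence(segments)
print(f"cached keys/values: {len(cache)} entries (vs. {3 * 5} without anchoring)")

In this sketch the cache ends up holding one entry per segment (3) instead of one per token (15), mirroring at toy scale the cache reduction the abstract reports; accuracy effects cannot be seen here because the projections are not learned.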

DOI: 10.18653/v1/2024.findings-acl.295
Language: English
Scopus ID: 2-s2.0-85197202139
Document Type: Conference paper
Collection: Faculty of Science and Technology, Department of Computer and Information Science
Corresponding Author: Derek F. Wong; Longyue Wang
Affiliations:
1. University of Macau
2. University College London
3. Tencent AI Lab
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714: Jianhui Pang, Fanghua Ye, Derek F. Wong, et al. Anchor-based Large Language Models[C]. Association for Computational Linguistics (ACL), 2024: 4958-4976.
APA: Jianhui Pang, Fanghua Ye, Derek F. Wong, Xin He, Wanshun Chen, & Longyue Wang (2024). Anchor-based Large Language Models. Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), 4958-4976.
Files in This Item:
There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.