Status | Published
Title | Anchor-based Large Language Models
Authors | Jianhui Pang¹; Fanghua Ye²; Derek F. Wong¹; Xin He³; Wanshun Chen³; Longyue Wang³
Date Issued | 2024
Conference Name | The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) |
Source Publication | Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) |
Volume | Findings of the Association for Computational Linguistics ACL 2024 |
Pages | 4958-4976 |
Conference Date | 11-07-2024 |
Conference Place | Bangkok |
Country | Thailand |
Publisher | Association for Computational Linguistics (ACL) |
Abstract | Large language models (LLMs) predominantly employ decoder-only transformer architectures, necessitating the retention of keys/values information for historical tokens to provide contextual information and avoid redundant computation. However, the substantial size and parameter volume of these LLMs require massive GPU memory. This memory demand increases with the length of the input text, leading to an urgent need for more efficient methods of information storage and processing. This study introduces Anchor-based LLMs (AnLLMs), which utilize an innovative anchor-based self-attention network (AnSAN) together with an anchor-based inference strategy. This approach enables LLMs to compress sequence information into an anchor token, reducing the keys/values cache and enhancing inference efficiency. Experiments on question-answering benchmarks reveal that AnLLMs maintain similar accuracy levels while achieving up to 99% keys/values cache reduction and up to 3.5 times faster inference. Despite a minor compromise in accuracy, the substantial gains in resource utilization and computational efficiency achieved by AnLLMs with the AnSAN technique underscore their potential for practical LLM applications.
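The core mechanism the abstract describes, compressing a segment's context into an anchor token so the keys/values cache can discard the tokens that anchor summarizes, can be illustrated with a short sketch. This is a minimal illustration of the idea under stated assumptions, not the paper's implementation: the function names (build_anchor_mask, prune_kv_cache), the segment layout, and the choice of a segment's final token as its anchor are all illustrative.

```python
# Minimal sketch of anchor-based attention masking and KV-cache pruning.
# Assumption: each segment ends in an anchor token that aggregates the
# segment's information; later queries attend to anchors, not to the
# tokens the anchors summarize. Names and layout are hypothetical.
import torch

def build_anchor_mask(seq_len: int, anchor_positions: list[int]) -> torch.Tensor:
    """Boolean mask [seq_len, seq_len]; True means query may attend to key.

    Starts from a standard causal mask, then hides each closed segment's
    non-anchor tokens from all subsequent queries.
    """
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    seg_start = 0
    for a in anchor_positions:
        # Queries after anchor `a` lose direct access to the segment
        # [seg_start, a); they see only the anchor token itself.
        mask[a + 1:, seg_start:a] = False
        seg_start = a + 1
    return mask

def prune_kv_cache(keys, values, anchor_positions, tail_start):
    """Keep only anchor entries plus the still-open tail segment."""
    keep = anchor_positions + list(range(tail_start, keys.size(0)))
    idx = torch.tensor(keep)
    return keys[idx], values[idx]

# Toy usage: 8 tokens, anchors at positions 3 and 7 close two segments.
mask = build_anchor_mask(8, [3, 7])
k, v = torch.randn(8, 4), torch.randn(8, 4)
k_small, v_small = prune_kv_cache(k, v, [3, 7], tail_start=8)
print(mask.int())      # rows 4+ no longer see columns 0-2
print(k_small.shape)   # torch.Size([2, 4]): only the anchors remain cached
```

Because only anchor keys/values survive pruning, the cache footprint scales with the number of segments rather than the number of tokens, which is the source of the large cache reductions the abstract reports.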
DOI | 10.18653/v1/2024.findings-acl.295 |
Language | English
Scopus ID | 2-s2.0-85197202139 |
Document Type | Conference paper |
Collection | Faculty of Science and Technology; Department of Computer and Information Science
Corresponding Author | Derek F. Wong; Longyue Wang |
Affiliation | 1. University of Macau; 2. University College London; 3. Tencent AI Lab
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Jianhui Pang, Fanghua Ye, Derek F. Wong, et al. Anchor-based Large Language Models[C]. Association for Computational Linguistics (ACL), 2024: 4958-4976.
APA | Jianhui Pang, Fanghua Ye, Derek F. Wong, Xin He, Wanshun Chen, & Longyue Wang (2024). Anchor-based Large Language Models. Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), 4958-4976.