Residential College: false
Status: Published
Transformer in Transformer
Han, Kai1,2; Xiao, An2; Wu, Enhua1,3; Guo, Jianyuan2; Xu, Chunjing2; Wang, Yunhe2
2021
Conference Name: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Source Publication: Advances in Neural Information Processing Systems
Volume: 19
Pages: 15908-15919
Conference Date: 6 December 2021 through 14 December 2021
Conference Place: Virtual, Online
Abstract

The transformer is a new kind of neural architecture that encodes the input data as powerful features via the attention mechanism. Basically, visual transformers first divide the input images into several local patches and then calculate both their representations and their relationships. Since natural images are highly complex, with abundant detail and color information, the granularity of the patch division is not fine enough for extracting features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building high-performance visual transformers, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word is calculated with the other words in the given visual sentence at negligible computational cost. Features of both words and sentences are aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture; e.g., we achieve 81.5% top-1 accuracy on ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost. The PyTorch code is available at https://github.com/huawei-noah/CV-Backbones, and the MindSpore code is available at https://gitee.com/mindspore/models/tree/master/research/cv/TNT.
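
For intuition, the following is a minimal sketch of one TNT block in PyTorch (the framework of the released code). It is an illustrative toy, not the authors' reference implementation (see the GitHub link above for that): the dimensions, head counts, and names such as TNTBlock and word2sentence are assumptions made here for clarity.

import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    # One Transformer-iN-Transformer block: an inner transformer attends
    # among the "visual words" (sub-patches) of each "visual sentence"
    # (patch), and an outer transformer attends among the sentences.
    def __init__(self, word_dim=24, sentence_dim=384, num_words=16):
        super().__init__()
        # Inner layer: self-attention over the words of one sentence.
        self.inner = nn.TransformerEncoderLayer(
            d_model=word_dim, nhead=4, dim_feedforward=4 * word_dim,
            batch_first=True, norm_first=True)
        # Projects the concatenated word features into the sentence
        # embedding, aggregating word-level detail into the sentence.
        self.word2sentence = nn.Linear(num_words * word_dim, sentence_dim)
        # Outer layer: self-attention over the sentences.
        self.outer = nn.TransformerEncoderLayer(
            d_model=sentence_dim, nhead=6, dim_feedforward=4 * sentence_dim,
            batch_first=True, norm_first=True)

    def forward(self, words, sentences):
        # words:     (batch * num_sentences, num_words, word_dim)
        # sentences: (batch, num_sentences, sentence_dim)
        B, S, _ = sentences.shape
        words = self.inner(words)                    # word-level attention
        sentences = sentences + self.word2sentence(  # fold word features into
            words.reshape(B, S, -1))                 # their sentence (residual)
        sentences = self.outer(sentences)            # sentence-level attention
        return words, sentences

# Toy usage: a 224x224 image with 16x16 patches gives 14*14 = 196 sentences,
# each subdivided into 4x4-pixel words, i.e. 16 words per sentence.
block = TNTBlock()
words = torch.randn(2 * 196, 16, 24)        # word embeddings
sentences = torch.randn(2, 196, 384)        # sentence embeddings
words, sentences = block(words, sentences)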

URL: View the original
Language: English
Scopus ID: 2-s2.0-85123841524
Document Type: Conference paper
Collection: University of Macau
Affiliation:
1. State Key Lab of Computer Science, ISCAS & UCAS
2. Huawei Noah's Ark Lab
3. University of Macau, Macao
Recommended Citation
GB/T 7714
Han, Kai, Xiao, An, Wu, Enhua, et al. Transformer in Transformer[C], 2021: 15908-15919.
APA: Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., & Wang, Y. (2021). Transformer in Transformer. Advances in Neural Information Processing Systems, 19, 15908-15919.
Files in This Item:
There are no files associated with this item.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.