Status: Published
GhostNets on Heterogeneous Devices via Cheap Operations
Han, Kai1,2; Wang, Yunhe2; Xu, Chang3; Guo, Jianyuan2,3; Xu, Chunjing2; Wu, Enhua1,4; Tian, Qi2
2022-04-01
Source Publication: INTERNATIONAL JOURNAL OF COMPUTER VISION
ISSN: 0920-5691
Volume: 130, Issue: 4, Pages: 1050-1069
Abstract

Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-efficient Ghost (C-Ghost) module that generates more feature maps from cheap operations. Starting from a set of intrinsic feature maps, we apply a series of cheap linear transformations to generate many ghost feature maps that fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can serve as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, from which the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. To avoid involving too many GPU-inefficient operations (e.g., depth-wise convolution) in a building stage, we propose to exploit stage-wise feature redundancy to formulate the GPU-efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts: the first part is processed by the original block with fewer output channels to generate intrinsic features, and the others are generated by cheap operations that exploit stage-wise redundancy. Experiments on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet achieve an optimal accuracy-latency trade-off on CPUs and GPUs, respectively. MindSpore code is available at https://gitee.com/mindspore/models/pulls/1809, and PyTorch code is available at https://github.com/huawei-noah/CV-Backbones.
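The C-Ghost idea described in the abstract (a primary convolution producing a few intrinsic feature maps, followed by cheap depth-wise transformations producing the remaining "ghost" maps) can be sketched in PyTorch as follows. This is an illustrative sketch, not the authors' released implementation; the hyper-parameters `ratio` and `dw_size` and the BatchNorm/ReLU placement are assumptions for the example.

```python
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Sketch of the C-Ghost module: out_ch feature maps are produced as
    (out_ch / ratio) intrinsic maps from an ordinary convolution, plus
    ghost maps generated by a cheap depth-wise convolution."""

    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3):
        super().__init__()
        init_ch = out_ch // ratio        # intrinsic channels (expensive path)
        ghost_ch = out_ch - init_ch      # ghost channels (cheap path)
        # Primary convolution: standard 1x1 conv producing intrinsic maps.
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depth-wise conv applied to the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, ghost_ch, kernel_size=dw_size,
                      padding=dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        # Concatenate intrinsic and ghost maps to form the full output.
        return torch.cat([intrinsic, ghost], dim=1)
```

With `ratio=2`, half of the output channels come from the cheap depth-wise path, so the module replaces a standard convolution at roughly half the FLOPs. The G-Ghost stage applies the same split-and-cheap-generate idea at the granularity of a whole stage rather than a single layer.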

Keywords: Convolutional Neural Networks; Efficient Inference; Visual Recognition
DOI: 10.1007/s11263-022-01575-y
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000763208400001
Scopus ID: 2-s2.0-85125542520
Document Type: Journal article
Collection: DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author: Wang, Yunhe
Affiliations:
1. State Key Laboratory of Computer Science, ISCAS, University of Chinese Academy of Sciences, Beijing, China
2. Huawei Noah’s Ark Lab, Shenzhen, China
3. The University of Sydney, Sydney, Australia
4. University of Macau, Macao
Recommended Citation
GB/T 7714
Han, Kai, Wang, Yunhe, Xu, Chang, et al. GhostNets on Heterogeneous Devices via Cheap Operations[J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130(4): 1050-1069.
APA Han, K., Wang, Y., Xu, C., Guo, J., Xu, C., Wu, E., & Tian, Q. (2022). GhostNets on Heterogeneous Devices via Cheap Operations. INTERNATIONAL JOURNAL OF COMPUTER VISION, 130(4), 1050-1069.
MLA Han, Kai, et al. "GhostNets on Heterogeneous Devices via Cheap Operations." INTERNATIONAL JOURNAL OF COMPUTER VISION 130.4 (2022): 1050-1069.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Han, Kai]'s Articles
[Wang, Yunhe]'s Articles
[Xu, Chang]'s Articles
Baidu academic
Similar articles in Baidu academic
[Han, Kai]'s Articles
[Wang, Yunhe]'s Articles
[Xu, Chang]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Han, Kai]'s Articles
[Wang, Yunhe]'s Articles
[Xu, Chang]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.