Residential College | false |
Status | Published
Title | Multi-View CNN Feature Aggregation with ELM Auto-Encoder for 3D Shape Recognition
Author(s) | Yang, Zhi-Xin; Tang, Lulu; Zhang, Kun; Wong, Pak Kin
Date Issued | 2018-12
Source Publication | COGNITIVE COMPUTATION |
ISSN | 1866-9956 |
Volume | 10
Issue | 6
Pages | 908-921
Abstract | Fast and accurate detection of 3D shapes is a fundamental task of robotic systems for intelligent tracking and automatic control. View-based 3D shape recognition has attracted increasing attention because human perception of 3D objects mainly relies on multiple 2D observations from different viewpoints. However, most existing multi-view-based cognitive computation methods use straightforward pairwise comparisons among the projected images, followed by a weak aggregation mechanism, which results in high computational cost and low recognition accuracy. To address these problems, a novel network structure combining multi-view convolutional neural networks (M-CNNs), an extreme learning machine auto-encoder (ELM-AE), and an ELM classifier, named MCEA, is proposed for comprehensive feature learning, effective feature aggregation, and efficient classification of 3D shapes. This framework exploits the advantages of a deep CNN architecture together with the robust ELM-AE feature representation and the fast ELM classifier for 3D model recognition. Compared with existing set-to-set image comparison methods, the proposed shape-to-shape matching strategy converts each highly informative 3D model into a single compact feature descriptor via cognitive computation. Moreover, the proposed method runs much faster and achieves a good balance between classification accuracy and computational efficiency. Experimental results on the benchmark Princeton ModelNet, ShapeNet Core 55, and PSB datasets show that the proposed framework achieves higher classification and retrieval accuracy in a much shorter time than state-of-the-art methods.
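The abstract describes a three-stage pipeline: per-view CNN feature extraction, ELM-AE feature aggregation into a single compact descriptor per shape, and ELM classification. The following is a minimal, illustrative sketch of such a pipeline, not the authors' implementation: it assumes per-view CNN features are precomputed (e.g. fc7-style vectors), mean-pools the views before ELM-AE compression (the paper's exact aggregation mechanism may differ), and uses a basic ridge-regularized ELM classifier. All names, dimensions, and hyperparameters are hypothetical.

```python
# Minimal sketch (not the authors' code) of an M-CNN + ELM-AE + ELM-classifier pipeline.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ELMAutoEncoder:
    """ELM-AE: random hidden layer, closed-form ridge output weights, encode with beta^T."""
    def __init__(self, n_hidden, C=1.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def fit(self, X):
        d = X.shape[1]
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = sigmoid(X @ self.W + self.b)
        # Ridge-regularized output weights that reconstruct X from the hidden layer H
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ X)          # shape (n_hidden, d)
        return self

    def encode(self, X):
        # Project inputs through the learned output weights (ELM-AE encoding)
        return X @ self.beta.T                           # shape (n_samples, n_hidden)

class ELMClassifier:
    """Single-hidden-layer ELM classifier trained by one-hot ridge regression."""
    def __init__(self, n_hidden, C=1.0, seed=1):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def fit(self, X, y):
        d, n_cls = X.shape[1], int(y.max()) + 1
        self.W = self.rng.standard_normal((d, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        T = np.eye(n_cls)[y]                             # one-hot targets
        H = sigmoid(X @ self.W + self.b)
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return (sigmoid(X @ self.W + self.b) @ self.beta).argmax(axis=1)

# Toy usage: 100 shapes x 12 views x 4096-D precomputed CNN features -> one descriptor per shape.
views = np.random.randn(100, 12, 4096)                   # placeholder for per-view CNN features
labels = np.random.randint(0, 10, size=100)
pooled = views.mean(axis=1)                              # naive view aggregation (assumption)
descriptors = ELMAutoEncoder(n_hidden=512).fit(pooled).encode(pooled)
clf = ELMClassifier(n_hidden=1024).fit(descriptors, labels)
print("train accuracy:", (clf.predict(descriptors) == labels).mean())
```

The key design point reflected here is that the ELM-AE and ELM classifier are solved in closed form (a single regularized least-squares solve each), which is what makes this kind of aggregation and classification fast compared with iteratively trained set-to-set comparison methods.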
Keyword | ELM Auto-Encoder ; Convolutional Neural Networks ; 3D Shape Recognition ; Multi-View Feature Aggregation
DOI | 10.1007/s12559-018-9598-1 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science ; Neurosciences & Neurology |
WOS Subject | Computer Science, Artificial Intelligence ; Neurosciences |
WOS ID | WOS:000453344800003 |
Publisher | SPRINGER |
Scopus ID | 2-s2.0-85055340181 |
Document Type | Journal article |
Collection | Department of Electromechanical Engineering, Faculty of Science and Technology
Corresponding Author | Tang, Lulu |
Affiliation | Univ Macau, Fac Sci & Technol, Dept Electromech Engn, Macau, Peoples R China |
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Yang, Zhi-Xin, Tang, Lulu, Zhang, Kun, et al. Multi-View CNN Feature Aggregation with ELM Auto-Encoder for 3D Shape Recognition[J]. COGNITIVE COMPUTATION, 2018, 10(6): 908-921.
APA | Yang, Z.-X., Tang, L., Zhang, K., & Wong, P. K. (2018). Multi-View CNN Feature Aggregation with ELM Auto-Encoder for 3D Shape Recognition. COGNITIVE COMPUTATION, 10(6), 908-921.
MLA | Yang, Zhi-Xin, et al. "Multi-View CNN Feature Aggregation with ELM Auto-Encoder for 3D Shape Recognition." COGNITIVE COMPUTATION 10.6 (2018): 908-921.