Residential College | false |
Status | Published
Title | Empirical kernel map-based multilayer extreme learning machines for representation learning
Authors | Vong, Chi Man1; Chen, Chuangquan1; Wong, Pak Kin2
Date Issued | 2018-10-08
Source Publication | NEUROCOMPUTING |
ISSN | 0925-2312 |
Volume | 310
Pages | 265-276
Abstract | Recently, multilayer extreme learning machine (ML-ELM) and hierarchical extreme learning machine (H-ELM) were developed for representation learning, reducing training time from hours to seconds compared to the traditional stacked autoencoder (SAE). However, there are three practical issues in ML-ELM and H-ELM: (1) the random projection in every layer leads to unstable and suboptimal performance; (2) the manual tuning of the number of hidden nodes in every layer is time-consuming; and (3) with a large hidden layer, training becomes relatively slow and large storage is required. More recently, issues (1) and (2) have been resolved by a kernel method, namely multilayer kernel ELM (ML-KELM), which encodes the hidden layer in the form of a kernel matrix (computed by applying a kernel function to the input data), but the storage and computation costs of the kernel matrix pose a significant challenge in large-scale applications. In this paper, we empirically show that these issues can be alleviated by encoding the hidden layer in the form of an approximate empirical kernel map (EKM) computed from a low-rank approximation of the kernel matrix. The proposed method is called ML-EKM-ELM, and its contributions are: (1) stable and better performance is achieved without any random projection mechanism; (2) exhaustive manual tuning of the number of hidden nodes in every layer is eliminated; (3) EKM is scalable and produces a much smaller hidden layer, enabling fast training and low memory storage, and is thereby suitable for large-scale problems. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed ML-EKM-ELM. As an illustrative example, on the NORB dataset, ML-EKM-ELM is up to 16 times faster than ML-KELM for training and 37 times faster for testing, with only a 0.35% loss of accuracy, while memory storage is reduced to as little as 1/9. (C) 2018 Elsevier B.V. All rights reserved.
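The core idea in the abstract, replacing the full n x n kernel matrix with a low-rank empirical kernel map, can be illustrated with a minimal Nyström-style sketch. This is not the paper's exact ML-EKM-ELM algorithm; the `rbf_kernel` function, the random landmark selection, and the `gamma` value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian kernel via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def empirical_kernel_map(X, landmarks, gamma=0.1, eps=1e-10):
    """Map each sample to an m-dim feature vector so that
    Phi @ Phi.T approximates the full kernel matrix (Nystrom)."""
    W = rbf_kernel(landmarks, landmarks, gamma)   # m x m landmark kernel
    C = rbf_kernel(X, landmarks, gamma)           # n x m cross-kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, eps)                  # guard against tiny eigenvalues
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                         # n x m empirical kernel map

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
landmarks = X[rng.choice(500, 50, replace=False)]  # m = 50 << n = 500
Phi = empirical_kernel_map(X, landmarks)
# The 500 x 50 map Phi stands in for the 500 x 500 kernel matrix,
# which is the source of the storage and speed savings described above.
K_approx = Phi @ Phi.T
```

On the landmark points themselves the approximation is exact (Phi reduces to W^{1/2}), which is a quick sanity check for the map; accuracy elsewhere depends on how well the landmarks cover the data.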
Keyword | Kernel Learning; Multilayer Extreme Learning Machine (ML-ELM); Empirical Kernel Map (EKM); Representation Learning; Stacked Autoencoder (SAE)
DOI | 10.1016/j.neucom.2018.05.032 |
URL | View the original |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS ID | WOS:000437299800023 |
Publisher | ELSEVIER SCIENCE BV |
The Source to Article | WOS |
Scopus ID | 2-s2.0-85048832989 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; Department of Electromechanical Engineering
Corresponding Author | Vong, Chi Man |
Affiliation | 1. Department of Computer and Information Science, University of Macau, Macau; 2. Department of Electromechanical Engineering, University of Macau, Macau
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Vong, Chi Man,Chen, Chuangquan,Wong, Pak Kin. Empirical kernel map-based multilayer extreme learning machines for representation learning[J]. NEUROCOMPUTING, 2018, 310, 265-276. |
APA | Vong, Chi Man, Chen, Chuangquan, & Wong, Pak Kin. (2018). Empirical kernel map-based multilayer extreme learning machines for representation learning. NEUROCOMPUTING, 310, 265-276.
MLA | Vong, Chi Man, et al. "Empirical kernel map-based multilayer extreme learning machines for representation learning." NEUROCOMPUTING 310 (2018): 265-276.
Files in This Item: | There are no files associated with this item. |