Residential College | false |
Status | Published |
Easy Domain Adaptation for cross-subject multi-view emotion recognition |
Chen, Chuangquan1; Vong, Chi Man2 |
2021-12-21 |
Source Publication | Knowledge-Based Systems |
ISSN | 0950-7051 |
Volume | 239 |
Pages | 107982 |
Abstract | Existing domain adaptation methods for cross-subject emotion recognition focus primarily on accuracy and suffer from intensive hyperparameter tuning and high computational complexity. In this paper, we make the first attempt to address these issues by developing a domain-invariant classifier called Easy Domain Adaptation (EasyDA) based on multi-view emotion inputs (multiple modalities or multiple types of features). First, EasyDA uses both the source domain (training subjects) and the target domain (test subject) to generate domain-generalization features for each view by leveraging a fast, accurate, and low-memory approximate empirical kernel map (AEKM), followed by a parameterless weighted combination across views. Second, EasyDA simultaneously learns an optimal separating hyperplane and pseudo labels for the target domain such that (a) high classification accuracy is obtained on both the labeled source data and the pseudo-labeled target data; (b) the distribution distance between the source and target domains is reduced; and (c) the predicted output vector in the target domain changes little over short time intervals, based on the biological evidence that emotions vary gradually and smoothly. Finally, by combining these two steps through ridge regression theory and alternating optimization, EasyDA transfers knowledge across domains accurately, efficiently, and easily in a unified framework. Experimental results on the SEED and SEED-IV datasets demonstrate that EasyDA significantly outperforms multiple representative domain adaptation methods in terms of accuracy, computational time, and memory consumption. Notably, EasyDA achieves satisfactory performance under a wide range of parameter settings. |
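Below is a minimal Python sketch of the two-step scheme the abstract describes, assuming only NumPy and scikit-learn. It is not the authors' implementation: the AEKM is stood in by a generic Nystroem feature map, the multi-view weighting, distribution-alignment, and temporal-smoothness terms are omitted, and all names (`easyda_sketch`, `n_iters`, etc.) are illustrative. Only the ridge-regression / pseudo-label alternation over source and target data is shown.

```python
# Illustrative sketch of an EasyDA-style alternation (simplified from the abstract).
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import RidgeClassifier

def easyda_sketch(X_src, y_src, X_tgt, n_components=100, n_iters=5, alpha=1.0):
    """Alternate between fitting a ridge classifier on source + pseudo-labeled
    target data and refreshing the target pseudo labels from its predictions."""
    # Shared approximate kernel feature map fitted on both domains
    # (a stand-in for the paper's AEKM; the actual construction differs).
    feat = Nystroem(kernel="rbf", n_components=n_components, random_state=0)
    Z = feat.fit_transform(np.vstack([X_src, X_tgt]))
    Z_src, Z_tgt = Z[: len(X_src)], Z[len(X_src):]

    clf = RidgeClassifier(alpha=alpha)
    clf.fit(Z_src, y_src)                      # initialize on labeled source data
    y_pseudo = clf.predict(Z_tgt)              # initial pseudo labels for the target

    for _ in range(n_iters):
        # (a) refit the hyperplane on labeled source + current pseudo-labeled target data
        clf.fit(np.vstack([Z_src, Z_tgt]), np.concatenate([y_src, y_pseudo]))
        # (b) refresh the target pseudo labels from the updated hyperplane
        y_new = clf.predict(Z_tgt)
        if np.array_equal(y_new, y_pseudo):    # stop once the pseudo labels stabilize
            break
        y_pseudo = y_new
    return clf, y_pseudo
```

In the full method, the abstract's constraints (b) and (c) would enter as distribution-alignment and temporal-smoothness regularizers inside the ridge objective rather than through plain refitting as above.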
Keyword | Approximate Empirical Kernel Map; Distribution Alignment; Domain Adaptation; Emotion Recognition; Manifold Regularization |
DOI | 10.1016/j.knosys.2021.107982 |
URL | View the original |
Indexed By | SCIE |
Language | English |
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS ID | WOS:000788495000007 |
Publisher | Elsevier B.V. |
Scopus ID | 2-s2.0-85123233466 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE |
Corresponding Author | Vong, Chi Man |
Affiliation | 1. Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China; 2. Department of Computer and Information Science, University of Macau, Macao; 3. School of AI and Computer Science, Jiangnan University, Wuxi, China |
Corresponding Author Affiliation | University of Macau |
Recommended Citation GB/T 7714 | Chen, Chuangquan, Vong, Chi Man, Wang, Shitong, et al. Easy Domain Adaptation for cross-subject multi-view emotion recognition[J]. Knowledge-Based Systems, 2021, 239, 107982. |
APA | Chen, Chuangquan., Vong, Chi Man., Wang, Shitong., Wang, Hongtao., & Pang, Miaoqi (2021). Easy Domain Adaptation for cross-subject multi-view emotion recognition. Knowledge-Based Systems, 239, 107982. |
MLA | Chen, Chuangquan, et al. "Easy Domain Adaptation for cross-subject multi-view emotion recognition." Knowledge-Based Systems 239 (2021): 107982. |
Files in This Item: | There are no files associated with this item. |