Status: Published
Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities
Jiandian Zeng (1); Tianyi Liu (2); Jiantao Zhou (1)
2022-07-07
Conference Name: 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)
Source Publication: PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22)
Pages: 1545-1554
Conference Date: July 11th to 15th, 2022
Conference Place: Madrid, Spain
Country: Spain
Author of Source: Enrique Amigo; Pablo Castells; Julio Gonzalo
Publication Place: New York, United States
Publisher: Association for Computing Machinery
Abstract

Multimodal sentiment analysis has been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most multimodal fusion models may fail when partial modalities are missing. Several works have addressed the missing modality problem, but most of them only considered the single-modality missing case and ignored the practically more general case of multiple missing modalities. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of uncertain missing modalities. Specifically, we design a tag encoding module to cover both the single-modality and multiple-modality missing cases, so as to guide the network's attention to those missing modalities. Besides, we adopt a new space projection pattern to align common vectors. Then, a Transformer encoder-decoder network is utilized to learn the missing modality features. At last, the outputs of the Transformer encoder are used for the final sentiment classification. Extensive experiments are conducted on the CMU-MOSI and IEMOCAP datasets, showing that our method can achieve significant improvements compared with several baselines.
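To make the pipeline described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of the tag-assisted idea: each modality is projected into a common space, a learnable tag token encoding which modalities are missing is prepended, a Transformer encoder processes the sequence, and its output feeds the sentiment classifier. All class names, dimensions, and the tag scheme (one embedding per present/missing pattern) are illustrative assumptions, not the authors' released code; the reconstruction decoder used during training is omitted here.

```python
# Hypothetical sketch of a tag-assisted encoder; not the authors' implementation.
import torch
import torch.nn as nn


class TATESketch(nn.Module):
    def __init__(self, dim: int = 128, n_modalities: int = 3, n_classes: int = 2):
        super().__init__()
        # One projection per modality (e.g., text / audio / vision) into a common space.
        self.projections = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_modalities))
        # One learnable tag vector per present/missing pattern (2^M patterns),
        # intended to guide attention toward the missing modalities.
        self.tag_embedding = nn.Embedding(2 ** n_modalities, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, modality_feats, present_mask):
        # modality_feats: list of (batch, dim) tensors, zero-filled where a modality is missing.
        # present_mask:   (batch, n_modalities) 0/1 tensor, 1 = modality available.
        common = torch.stack(
            [proj(x) for proj, x in zip(self.projections, modality_feats)], dim=1
        )  # (batch, M, dim)
        # Encode the missing pattern as a single integer id, then look up its tag token.
        weights = 2 ** torch.arange(present_mask.size(1), device=present_mask.device)
        pattern_id = (present_mask.long() * weights).sum(dim=1)
        tag = self.tag_embedding(pattern_id).unsqueeze(1)           # (batch, 1, dim)
        encoded = self.encoder(torch.cat([tag, common], dim=1))     # (batch, 1+M, dim)
        # Sentiment prediction from the encoder output (mean-pooled here for simplicity).
        return self.classifier(encoded.mean(dim=1))


if __name__ == "__main__":
    # Toy usage: batch of 4 samples, three modalities, some missing per sample.
    feats = [torch.randn(4, 128) for _ in range(3)]
    mask = torch.tensor([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
    print(TATESketch()(feats, mask).shape)  # torch.Size([4, 2])
```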

Keywords: Multimodal Sentiment Analysis; Missing Modality; Joint Representation
DOI: 10.1145/3477495.3532064
Indexed By: CPCI-S
WOS Research Area: Computer Science
WOS Subject: Computer Science, Information Systems
WOS ID: WOS:000852715901058
Scopus ID: 2-s2.0-85135032569
Document Type: Conference paper
Collection: THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU); Faculty of Science and Technology; DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author: Jiantao Zhou
Affiliation: 1. State Key Laboratory of IoT for Smart City, University of Macau, Macau, China
2. Shanghai Jiao Tong University, Shanghai, China
First Author Affiliation: University of Macau
Corresponding Author Affiliation: University of Macau
Recommended Citation
GB/T 7714
Jiandian Zeng, Tianyi Liu, Jiantao Zhou. Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities[C]. Enrique Amigo, Pablo Castells, Julio Gonzalo, New York, United States: Association for Computing Machinery, 2022, 1545-1554.
APA: Jiandian Zeng, Tianyi Liu, & Jiantao Zhou (2022). Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities. PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 1545-1554.
Files in This Item:
File Name/Size: Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities.pdf (2207 KB)
Publications Version: Conference paper
Access: Open Access
License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.