Residential College | false |
Status | Published
Title | Exploring Semantic Relations for Social Media Sentiment Analysis
Authors | Zeng, Jiandian 1,2; Zhou, Jiantao 3; Huang, Caishi 3
Year | 2023
Source Publication | IEEE/ACM Transactions on Audio Speech and Language Processing |
ISSN | 2329-9290 |
Volume | 31
Pages | 2382-2394
Abstract | With the massive amount of social media data available online, conventional single-modality emotion classification has developed into more complex models of multimodal sentiment analysis. Most existing works extract image features only at a coarse level, so partially detailed visual features are missing. Besides, social media posts usually contain multiple images, while existing works consider the single-image case and use only one image to represent visual features. In fact, it is nontrivial to extend the single-image case to the multiple-image case, due to the complex relations among multiple images. To solve the above issues, in this article, we propose a Gated Fusion Semantic Relation (GFSR) network to explore semantic relations for social media sentiment analysis. In addition to inter-relations between visual and textual modalities, we also exploit intra-relations among multiple images, potentially improving sentiment analysis performance. Specifically, we design a gated fusion network to fuse global image embeddings and the corresponding local Adjective Noun Pair (ANP) embeddings. Then, apart from textual relations and cross-modal relations, we employ a multi-head cross attention mechanism between images and ANPs to capture similar semantic content. Eventually, the updated textual and visual representations are concatenated for the final sentiment prediction. Extensive experiments on the real-world Yelp and Flickr30k datasets show that GFSR improves accuracy by about 0.10% to 3.66% on the Yelp dataset with multiple images, and achieves the best accuracy for two classes and the best macro F1 for three classes on the Flickr30k dataset with a single image.
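The abstract describes two building blocks: a gated fusion of global image embeddings with local ANP embeddings, and multi-head cross attention between images and ANPs. The sketch below is a minimal, hedged illustration of those two steps in PyTorch; the class name, dimensions, and head count are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of the gated fusion step from the abstract (assumed design):
    a learned sigmoid gate blends a global image embedding with its
    local Adjective-Noun-Pair (ANP) embedding, feature by feature."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, img_emb: torch.Tensor, anp_emb: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per feature, how much of each embedding to keep
        g = torch.sigmoid(self.gate(torch.cat([img_emb, anp_emb], dim=-1)))
        return g * img_emb + (1 - g) * anp_emb

# Toy dimensions (assumptions): one post with 3 images and 3 ANP embeddings
dim, n_imgs, n_anps = 64, 3, 3
imgs = torch.randn(1, n_imgs, dim)   # global image embeddings
anps = torch.randn(1, n_anps, dim)   # local ANP embeddings

fuse = GatedFusion(dim)
fused = fuse(imgs, anps)             # gated image/ANP fusion, shape (1, 3, 64)

# Multi-head cross attention between images (queries) and ANPs (keys/values),
# using PyTorch's built-in MultiheadAttention as a stand-in.
xattn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
out, _ = xattn(fused, anps, anps)    # images attend to similar ANP content
print(out.shape)                     # torch.Size([1, 3, 64])
```

In the paper's full pipeline these visual representations would be combined with the updated textual representations before the final sentiment prediction; this fragment only covers the two fusion steps named in the abstract.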
Keyword | Multimodal Fusion; Sentiment Analysis; Social Media
DOI | 10.1109/TASLP.2023.3285238 |
Indexed By | SCIE |
Language | English
WOS Research Area | Acoustics ; Engineering |
WOS Subject | Acoustics ; Engineering, Electrical & Electronic |
WOS ID | WOS:001021220300001 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Scopus ID | 2-s2.0-85162691975 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; STANLEY HO EAST ASIA COLLEGE; DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author | Zhou, Jiantao
Affiliation | 1. Beijing Normal University, Institute of Artificial Intelligence and Future Networks, Zhuhai, 519087, China; 2. University of Macau, 999078, Macao; 3. University of Macau, State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, 999078, Macao
First Author Affiliation | University of Macau
Corresponding Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Zeng, Jiandian, Zhou, Jiantao, Huang, Caishi. Exploring Semantic Relations for Social Media Sentiment Analysis[J]. IEEE/ACM Transactions on Audio Speech and Language Processing, 2023, 31: 2382-2394.
APA | Zeng, Jiandian, Zhou, Jiantao, & Huang, Caishi (2023). Exploring Semantic Relations for Social Media Sentiment Analysis. IEEE/ACM Transactions on Audio Speech and Language Processing, 31, 2382-2394.
MLA | Zeng, Jiandian, et al. "Exploring Semantic Relations for Social Media Sentiment Analysis." IEEE/ACM Transactions on Audio Speech and Language Processing 31 (2023): 2382-2394.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.