Residential College | false
Status | Published
Collaborative group: Composed image retrieval via consensus learning from noisy annotations
Zhang, Xu1; Zheng, Zhedong2; Zhu, Linchao1; Yang, Yi1
2024-09-27
Source Publication | Knowledge-Based Systems |
ISSN | 0950-7051 |
Volume | 300
Pages | 112135
Abstract | Composed image retrieval extends content-based image retrieval systems by enabling users to search using reference images and captions that describe their intention. Despite great progress in developing image-text compositors to extract discriminative visual-linguistic features, we identify a hitherto overlooked issue, triplet ambiguity, which impedes robust feature extraction. Triplet ambiguity refers to a type of semantic ambiguity that arises between the reference image, the relative caption, and the target image. It is mainly due to the limited representation of the annotated text, resulting in many noisy triplets where multiple visually dissimilar candidate images can be matched to an identical reference pair (i.e., a reference image + a relative caption). To address this challenge, we propose the Consensus Network (Css-Net), inspired by the psychological concept that groups outperform individuals. Css-Net comprises two core components: (1) a consensus module with four diverse compositors, each generating distinct image-text embeddings, fostering complementary feature extraction and mitigating dependence on any single, potentially biased compositor; (2) a Kullback–Leibler divergence loss that encourages learning of inter-compositor interactions to promote consensual outputs. During evaluation, the decisions of the four compositors are combined through a weighting scheme, enhancing overall agreement. On benchmark datasets, particularly FashionIQ, Css-Net demonstrates marked improvements. Notably, it achieves significant recall gains, with a 2.77% increase in R@10 and a 6.67% boost in R@50, underscoring its competitiveness in addressing the fundamental limitations of existing methods.
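As a minimal illustration of the consensus idea described in the abstract (not the authors' released implementation): the sketch below assumes each compositor produces similarity logits over the candidate gallery, applies a pairwise Kullback–Leibler agreement loss during training, and combines the compositors' scores with a simple weighting scheme at evaluation. The function and variable names (consensus_kl_loss, weighted_ensemble) are illustrative assumptions, written in PyTorch.

# Minimal sketch, assuming each compositor yields [batch, num_candidates]
# similarity logits; illustrative only, not the authors' code.
import torch
import torch.nn.functional as F

def consensus_kl_loss(logits_list):
    """Pairwise KL-divergence loss that nudges the compositors' retrieval
    distributions toward agreement (hyperparameters omitted)."""
    log_probs = [F.log_softmax(l, dim=-1) for l in logits_list]
    probs = [lp.exp() for lp in log_probs]
    loss, pairs = 0.0, 0
    for i in range(len(logits_list)):
        for j in range(len(logits_list)):
            if i == j:
                continue
            # KL(p_j || p_i): pull compositor i's distribution toward compositor j's.
            loss = loss + F.kl_div(log_probs[i], probs[j], reduction="batchmean")
            pairs += 1
    return loss / max(pairs, 1)

def weighted_ensemble(logits_list, weights):
    """Combine the compositors' scores with fixed weights at evaluation time."""
    w = torch.tensor(weights, dtype=logits_list[0].dtype)
    w = w / w.sum()
    return sum(wi * l for wi, l in zip(w, logits_list))

if __name__ == "__main__":
    torch.manual_seed(0)
    # Four compositors, two queries, five candidate images.
    logits = [torch.randn(2, 5) for _ in range(4)]
    print("consensus loss:", consensus_kl_loss(logits).item())
    print("ensembled scores:", weighted_ensemble(logits, [1.0, 1.0, 1.0, 1.0]))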
Keyword | Compositional Image Retrieval; Data Ambiguity; Image Retrieval With Text Feedback; Multi-modal Retrieval; Noisy Annotation
DOI | 10.1016/j.knosys.2024.112135 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS ID | WOS:001261512500001 |
Publisher | ELSEVIER, RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS |
Scopus ID | 2-s2.0-85196822175 |
Document Type | Journal article |
Collection | Faculty of Science and Technology; Institute of Collaborative Innovation; Department of Computer and Information Science
Corresponding Author | Zheng, Zhedong |
Affiliation | 1. College of Computer Science and Technology, Zhejiang University, Hangzhou, 310058, China; 2. Faculty of Science and Technology, and Institute of Collaborative Innovation, University of Macau, 999078, China
Corresponding Author Affiliation | Institute of Collaborative Innovation
Recommended Citation GB/T 7714 | Zhang, Xu, Zheng, Zhedong, Zhu, Linchao, et al. Collaborative group: Composed image retrieval via consensus learning from noisy annotations[J]. Knowledge-Based Systems, 2024, 300: 112135.
APA | Zhang, Xu, Zheng, Zhedong, Zhu, Linchao, & Yang, Yi. (2024). Collaborative group: Composed image retrieval via consensus learning from noisy annotations. Knowledge-Based Systems, 300, 112135.
MLA | Zhang, Xu, et al. "Collaborative group: Composed image retrieval via consensus learning from noisy annotations." Knowledge-Based Systems 300 (2024): 112135.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.