Residential College | false |
Status | Published |
Double Correction Framework for Denoising Recommendation | |
He, Zhuangzhuang1; Wang, Yifan2; Yang, Yonghui1; Sun, Peijie2; Wu, Le3; Bai, Haoyue1; Gong, Jinqi4; Hong, Richang3; Zhang, Min2 | |
2024-08 | |
Conference Name | KDD '24: The 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining |
Source Publication | KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining |
Pages | 1062-1072 |
Conference Date | August 25-29, 2024 |
Conference Place | Barcelona |
Country | Spain |
Publication Place | New York, NY, USA |
Publisher | Association for Computing Machinery |
Abstract | Owing to its availability and generality in online services, implicit feedback is widely used in recommender systems. However, implicit feedback usually contains noisy samples in real-world recommendation scenarios (such as misclicks or non-preferential behaviors), which hinder precise user preference learning. To overcome the noisy-sample problem, a popular solution drops noisy samples during model training, following the observation that noisy samples have higher training losses than clean samples. Despite its effectiveness, we argue that this solution still has limitations. (1) High training losses can result from model optimization instability or hard samples, not just noisy samples. (2) Completely dropping noisy samples aggravates data sparsity and under-exploits the data. To tackle these limitations, we propose a Double Correction Framework for Denoising Recommendation (DCF), which contains two correction components: more precise sample dropping and avoiding aggravated data sparsity. In the sample dropping correction component, we use each sample's loss values over time to determine whether it is noisy, increasing dropping stability. Instead of averaging losses directly, we apply a damping function to reduce the bias introduced by outliers. Furthermore, because hard samples exhibit higher variance, we derive a lower bound for the loss via a concentration inequality to identify and reuse hard samples. In the progressive label correction component, we iteratively re-label highly deterministic noisy samples and retrain on them to further improve performance. Finally, extensive experiments on three datasets and four backbones demonstrate the effectiveness and generalization of our framework. Our code is available at https://github.com/bruno686/DCF. |
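The abstract's dropping-correction idea can be sketched in a few lines: track each sample's loss over time, average with a damping function so outliers do not bias the estimate, and drop a sample only when a concentration-inequality lower bound on its loss exceeds the threshold, so high-variance hard samples are kept. This is an illustrative sketch only, not the paper's implementation; the `tanh`-free damping weights, the Hoeffding-style bound, and all parameter values (`delta`, `loss_range`, `drop_threshold`) are assumptions.

```python
import numpy as np

def damped_mean(losses, c=1.0):
    # Damped average of a sample's loss history: losses far from the
    # median get smaller weights, reducing the bias effect of outliers.
    # (Illustrative damping function; the paper's exact form may differ.)
    losses = np.asarray(losses, dtype=float)
    m = np.median(losses)
    w = 1.0 / (1.0 + c * np.abs(losses - m))
    return float(np.sum(w * losses) / np.sum(w))

def loss_lower_bound(losses, delta=0.05, loss_range=10.0):
    # Hoeffding-style lower confidence bound on the expected loss.
    # With few observations or high variance the interval is wide, so a
    # hard sample's lower bound can stay below the dropping threshold.
    n = len(losses)
    mean = damped_mean(losses)
    radius = loss_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return mean - radius

def keep_sample(losses, drop_threshold=2.0):
    # Drop a sample only when even the lower bound of its loss exceeds
    # the threshold; otherwise treat it as clean or hard and keep it.
    return loss_lower_bound(losses) <= drop_threshold
```

For example, a sample with a short, low-loss history is kept, while one whose loss is persistently high across many epochs is dropped; the bound tightens as more loss observations accumulate.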
Keyword | Denoising; Implicit Feedback; Recommendation |
DOI | 10.1145/3637528.3671692 |
Language | English |
Scopus ID | 2-s2.0-85203713958 |
Document Type | Conference paper |
Collection | Faculty of Science and Technology |
Corresponding Author | Wu, Le; Zhang, Min |
Affiliation | 1. Hefei University of Technology, Hefei, China; 2. Tsinghua University, Beijing, China; 3. Hefei University of Technology, Institute of Dataspace, Hefei Comprehensive National Science Center, Hefei, China; 4. University of Macau, Macao |
Recommended Citation GB/T 7714 | He, Zhuangzhuang,Wang, Yifan,Yang, Yonghui,et al. Double Correction Framework for Denoising Recommendation[C], New York, NY, USA:Association for Computing Machinery, 2024, 1062-1072. |
APA | He, Zhuangzhuang, Wang, Yifan, Yang, Yonghui, Sun, Peijie, Wu, Le, Bai, Haoyue, Gong, Jinqi, Hong, Richang, & Zhang, Min. (2024). Double Correction Framework for Denoising Recommendation. KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1062-1072. |