Residential College | false
Status | Published
Title | Direction-Aware Video Demoiréing with Temporal-Guided Bilateral Learning
Authors | Xu, Shuning1; Song, Binbin1; Chen, Xiangyu1,2; Zhou, Jiantao1
Issue Date | 2024-03-25
Conference Name | 38th AAAI Conference on Artificial Intelligence, AAAI 2024 |
Source Publication | Proceedings of the AAAI Conference on Artificial Intelligence |
Volume | 38 |
Issue | 6 |
Pages | 6360-6368 |
Conference Date | 20 February 2024 through 27 February 2024
Conference Place | Vancouver |
Publisher | Association for the Advancement of Artificial Intelligence |
Abstract | Moiré patterns occur when capturing images or videos of screens, severely degrading the quality of the captured content. Despite recent progress, existing video demoiréing methods neglect the physical characteristics and formation process of moiré patterns, which significantly limits the effectiveness of video recovery. This paper presents a unified framework, DTNet, a direction-aware and temporal-guided bilateral learning network for video demoiréing. DTNet effectively integrates moiré pattern removal, alignment, color correction, and detail refinement. Our proposed DTNet comprises two primary stages: Frame-level Direction-aware Demoiréing and Alignment (FDDA) and Tone and Detail Refinement (TDR). In FDDA, we employ multiple directional DCT modes to remove moiré patterns in the frequency domain, effectively detecting the prominent moiré edges. Coarse- and fine-grained alignment is then applied to the demoiréd features to facilitate the use of neighboring information. In TDR, we propose a temporal-guided bilateral learning pipeline that mitigates the degradation of color and details caused by moiré patterns while preserving the frequency information restored in FDDA. Guided by the aligned temporal features from FDDA, TDR learns the affine transformations that recover the final clean frames. Extensive experiments demonstrate that our video demoiréing method outperforms state-of-the-art approaches by 2.3 dB in PSNR and delivers a superior visual experience.
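The FDDA stage described in the abstract suppresses moiré in the frequency domain using multiple directional DCT modes. As a rough illustration of that general idea only (not the authors' DTNet code, which learns these operations end-to-end), the sketch below attenuates 2D-DCT coefficients along chosen orientations of the frequency plane. The mask construction, attenuation factor, and angle choices are illustrative assumptions; it requires only numpy and scipy.

```python
# Hypothetical sketch of frequency-domain moire suppression via 2D DCT.
# This is NOT the DTNet implementation; it only illustrates attenuating
# directional frequency bands. Mask design, angles, and the 0.2 factor
# below are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # Orthonormal 2D DCT (type II), applied along each axis.
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    # Inverse 2D DCT, undoing dct2.
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def directional_mask(shape, angle_deg, width_deg=10.0):
    # Soft mask that attenuates DCT coefficients whose frequency-plane
    # orientation lies near angle_deg, while keeping the low-frequency
    # corner (overall tone) intact.
    h, w = shape
    v, u = np.meshgrid(np.arange(w), np.arange(h))   # horiz/vert indices
    theta = np.degrees(np.arctan2(v, u + 1e-8))      # orientation in [0, 90]
    mask = np.ones(shape)
    mask[np.abs(theta - angle_deg) < width_deg] = 0.2  # attenuate, not zero
    mask[:8, :8] = 1.0                                 # preserve low freqs
    return mask

def suppress_directional_moire(frame, angles=(30.0, 45.0, 60.0)):
    # frame: 2D float array in [0, 1]. The DCT quadrant only spans
    # orientations 0-90 degrees, so angles are chosen in that range.
    coeffs = dct2(frame)
    for a in angles:
        coeffs *= directional_mask(frame.shape, a)
    return np.clip(idct2(coeffs), 0.0, 1.0)

if __name__ == "__main__":
    # Toy usage: a synthetic frame with a diagonal interference pattern.
    x = np.linspace(0, 1, 256)
    xx, yy = np.meshgrid(x, x)
    frame = 0.5 + 0.1 * np.sin(2 * np.pi * 40 * (xx + yy))  # fake moire
    clean = suppress_directional_moire(frame)
    print(clean.shape, float(clean.min()), float(clean.max()))
```

In the paper itself this directional filtering is one component of a learned pipeline, followed by feature alignment and the TDR refinement stage; the fixed masks above merely stand in for what FDDA learns.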
DOI | 10.1609/aaai.v38i6.28455 |
Language | English
Scopus ID | 2-s2.0-85189559894 |
Document Type | Conference paper |
Collection | State Key Laboratory of Internet of Things for Smart City (University of Macau); Faculty of Science and Technology; Department of Computer and Information Science
Affiliation | 1. State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau, Macao; 2. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
First Author Affiliation | University of Macau
Recommended Citation GB/T 7714 | Xu, Shuning, Song, Binbin, Chen, Xiangyu, et al. Direction-Aware Video Demoiréing with Temporal-Guided Bilateral Learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence, 2024: 6360-6368.
APA | Xu, S., Song, B., Chen, X., & Zhou, J. (2024). Direction-Aware Video Demoiréing with Temporal-Guided Bilateral Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6360-6368. https://doi.org/10.1609/aaai.v38i6.28455