Residential College: false
Status: Published
Self-ensembling depth completion via density-aware consistency
Zhang, Xuanmeng1; Zheng, Zhedong3,4; Jiang, Minyue2; Ye, Xiaoqing2
2024-10-01
Source Publication: Pattern Recognition
ISSN: 0031-3203
Volume: 154
Pages: 110618
Abstract

Depth completion predicts a dense depth map from a sparse depth map and the aligned RGB image, but acquiring ground-truth annotations is labor-intensive and does not scale. We therefore resort to semi-supervised learning, in which only a few images need to be annotated while massive unlabeled data without ground-truth labels are leveraged to facilitate model learning. In this paper, we propose SEED, a SElf-Ensembling Depth completion framework that enhances the generalization of the model on unlabeled data. Specifically, SEED contains a pair of teacher and student models, which are given high-density and low-density sparse depth maps as input, respectively. The main idea underpinning SEED is to enforce density-aware consistency by encouraging consistent predictions across input depth maps of different densities. One empirical challenge is that the pseudo-depth labels produced by the teacher model inevitably contain wrong depth values, which would mislead the convergence of the student model. To resist these noisy labels, we propose an automatic method that adaptively measures the reliability of the generated pseudo-depth labels. By leveraging the discrepancy of prediction distributions, we model the pixel-wise uncertainty map as the prediction variance and explicitly rectify the training process against noisy labels. To our knowledge, we are among the early semi-supervised attempts on the depth completion task. Extensive experiments on both outdoor and indoor datasets demonstrate that SEED consistently improves the performance of the baseline model by a large margin and is even on par with several fully-supervised methods.
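The abstract's core recipe (subsample the sparse depth to build a low-density student input, use the teacher's prediction mean as a pseudo-depth label and its prediction variance as a pixel-wise uncertainty weight, and update the teacher by self-ensembling) can be sketched in a few lines. The function names, the exponential variance weighting, and the EMA-style teacher update below are illustrative assumptions based only on the abstract, not the paper's exact formulation:

```python
import numpy as np

def subsample_depth(sparse_depth, keep_ratio=0.5, rng=None):
    """Randomly drop valid depth points to build a lower-density input
    for the student (the teacher keeps the higher-density map)."""
    if rng is None:
        rng = np.random.default_rng(0)
    keep = (sparse_depth > 0) & (rng.random(sparse_depth.shape) < keep_ratio)
    return np.where(keep, sparse_depth, 0.0)

def uncertainty_weighted_consistency(student_pred, teacher_preds):
    """teacher_preds: stack of teacher predictions (e.g. under perturbations).
    The mean acts as the pseudo-depth label; the variance models the
    pixel-wise uncertainty used to down-weight unreliable pseudo-labels."""
    pseudo = teacher_preds.mean(axis=0)
    variance = teacher_preds.var(axis=0)
    weight = np.exp(-variance)  # high variance -> low weight (assumed form)
    return float((weight * np.abs(student_pred - pseudo)).mean())

def ema_update(teacher_w, student_w, momentum=0.999):
    """Self-ensembling: teacher weights track an exponential moving
    average of the student's weights."""
    return momentum * teacher_w + (1.0 - momentum) * student_w
```

When the teacher's stochastic predictions agree (zero variance), every pixel gets full weight and the loss reduces to a plain L1 consistency term; disagreement shrinks the weight toward zero, which is how the sketch resists noisy pseudo-labels.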

Keywords: Density-aware Consistency; Depth Completion; Semi-supervised Learning; Uncertainty Estimation
DOI: 10.1016/j.patcog.2024.110618
URL: View the original
Indexed By: SCIE
Language: English
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:001250349800001
Publisher: ELSEVIER SCI LTD, 125 London Wall, London EC2Y 5AS, ENGLAND
Scopus ID: 2-s2.0-85194953229
Document Type: Journal article
Collection: Faculty of Science and Technology; Institute of Collaborative Innovation; Department of Computer and Information Science
Corresponding Author: Zheng, Zhedong
Affiliation:
1. ReLER, AAII, University of Technology Sydney, Australia
2. Department of Computer Vision Technology, Baidu Inc., China
3. Faculty of Science and Technology, University of Macau, Macao
4. Institute of Collaborative Innovation, University of Macau, Macao
Corresponding Author Affiliation: Faculty of Science and Technology; Institute of Collaborative Innovation
Recommended Citation
GB/T 7714: Zhang, Xuanmeng, Zheng, Zhedong, Jiang, Minyue, et al. Self-ensembling depth completion via density-aware consistency[J]. Pattern Recognition, 2024, 154, 110618.
APA: Zhang, Xuanmeng, Zheng, Zhedong, Jiang, Minyue, & Ye, Xiaoqing (2024). Self-ensembling depth completion via density-aware consistency. Pattern Recognition, 154, 110618.
MLA: Zhang, Xuanmeng, et al. "Self-ensembling depth completion via density-aware consistency." Pattern Recognition 154 (2024): 110618.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.