Residential College | false
Status | Published
Title | Self-ensembling depth completion via density-aware consistency
Authors | Zhang, Xuanmeng1; Zheng, Zhedong3,4; Jiang, Minyue2; Ye, Xiaoqing2
Date Issued | 2024-10-01
Source Publication | Pattern Recognition |
ISSN | 0031-3203 |
Volume | 154
Pages | 110618
Abstract | Depth completion predicts a dense depth map from a sparse depth map and the aligned RGB image, but the acquisition of ground-truth annotations is labor-intensive and non-scalable. We therefore resort to semi-supervised learning, where only a few images need to be annotated and massive unlabeled data without ground-truth labels are leveraged to facilitate model learning. In this paper, we propose SEED, a SElf-Ensembling Depth completion framework that enhances the generalization of the model on unlabeled data. Specifically, SEED contains a pair of teacher and student models, which are given high-density and low-density sparse depth maps as input, respectively. The main idea underpinning SEED is to enforce density-aware consistency by encouraging consistent predictions across input depth maps of different densities. One empirical challenge is that the pseudo-depth labels produced by the teacher model inevitably contain wrong depth values, which would mislead the convergence of the student model. To resist such noisy labels, we propose an automatic method that adaptively measures the reliability of the generated pseudo-depth labels. By leveraging the discrepancy of prediction distributions, we model the pixel-wise uncertainty map as the prediction variance and explicitly rectify the training process against noisy labels. To our knowledge, we are among the early semi-supervised attempts at the depth completion task. Extensive experiments on both outdoor and indoor datasets demonstrate that SEED consistently improves the performance of the baseline model by a large margin and is even on par with several fully supervised methods.
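The teacher–student mechanism summarized in the abstract — a low-density input for the student, a higher-density input for the teacher, and a consistency loss down-weighted by the pixel-wise variance of the teacher's predictions — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names, the random subsampling scheme, and the exponential down-weighting of high-variance pixels are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def subsample_depth(sparse_depth, keep_ratio, rng):
    """Randomly drop valid (non-zero) depth pixels to create a
    lower-density sparse input for the student model."""
    valid = sparse_depth > 0
    drop = rng.random(sparse_depth.shape) > keep_ratio
    out = sparse_depth.copy()
    out[valid & drop] = 0.0
    return out

def density_aware_consistency_loss(student_pred, teacher_preds):
    """Uncertainty-rectified consistency loss.

    teacher_preds: stack of shape (K, H, W) holding K stochastic teacher
    predictions. The per-pixel variance across the K passes serves as an
    uncertainty map; pixels with high variance (unreliable pseudo-depth
    labels) are down-weighted before averaging the L1 gap between the
    student prediction and the mean teacher prediction.
    """
    teacher_mean = teacher_preds.mean(axis=0)
    variance = teacher_preds.var(axis=0)      # pixel-wise uncertainty
    weight = np.exp(-variance)                # down-weight noisy pixels
    per_pixel_gap = np.abs(student_pred - teacher_mean)
    return float((weight * per_pixel_gap).mean())
```

When the student prediction agrees with the mean teacher prediction the loss is zero, and pixels where teacher passes disagree contribute less to the gradient, which is the rectification role the uncertainty map plays in the abstract.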
Keyword | Density-aware Consistency ; Depth Completion ; Semi-supervised Learning ; Uncertainty Estimation
DOI | 10.1016/j.patcog.2024.110618 |
URL | View the original |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic |
WOS ID | WOS:001250349800001 |
Publisher | ELSEVIER SCI LTD, 125 London Wall, London EC2Y 5AS, ENGLAND |
Scopus ID | 2-s2.0-85194953229 |
Document Type | Journal article |
Collection | Faculty of Science and Technology ; INSTITUTE OF COLLABORATIVE INNOVATION ; DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author | Zheng, Zhedong |
Affiliation | 1. ReLER, AAII, University of Technology Sydney, Australia; 2. Department of Computer Vision Technology, Baidu Inc., China; 3. Faculty of Science and Technology, University of Macau, Macao; 4. Institute of Collaborative Innovation, University of Macau, Macao
Corresponding Author Affiliation | Faculty of Science and Technology; INSTITUTE OF COLLABORATIVE INNOVATION
Recommended Citation GB/T 7714 | Zhang, Xuanmeng, Zheng, Zhedong, Jiang, Minyue, et al. Self-ensembling depth completion via density-aware consistency[J]. Pattern Recognition, 2024, 154: 110618.
APA | Zhang, X., Zheng, Z., Jiang, M., & Ye, X. (2024). Self-ensembling depth completion via density-aware consistency. Pattern Recognition, 154, 110618.
MLA | Zhang, Xuanmeng, et al. "Self-ensembling depth completion via density-aware consistency." Pattern Recognition 154 (2024): 110618.
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.