Status: Forthcoming
Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene
Zhang, Ruiyang1; Zhang, Hu2; Yu, Hang3; Zheng, Zhedong1
2025
Conference Name: 18th European Conference on Computer Vision, ECCV 2024
Source Publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 15069 LNCS
Pages: 249-266
Conference Date: 29 September 2024 to 4 October 2024
Conference Place: Milan, Italy
Publisher: Springer Science and Business Media Deutschland GmbH
Abstract

Unsupervised 3D object detection aims to accurately detect objects in unstructured environments without explicit supervisory signals. Given only sparse LiDAR point clouds, this task often suffers degraded performance on distant or small objects due to the inherent sparsity and limited spatial resolution of the data. In this paper, we are among the early attempts to integrate LiDAR data with 2D images for unsupervised 3D detection and introduce a new method, dubbed LiDAR-2D Self-paced Learning (LiSe). We argue that RGB images serve as a valuable complement to LiDAR data, offering precise 2D localization cues, particularly when only scarce LiDAR points are available for certain objects. Considering the unique characteristics of both modalities, our framework devises a self-paced learning pipeline that incorporates adaptive sampling and weak model aggregation strategies. The adaptive sampling strategy dynamically tunes the distribution of pseudo labels during training, countering the tendency of models to overfit easily detected samples, such as nearby and large objects; by doing so, it ensures a balanced learning trajectory across varying object scales and distances. The weak model aggregation component consolidates the strengths of models trained under different pseudo label distributions, culminating in a robust and powerful final model. Experimental evaluations validate the efficacy of the proposed LiSe method, showing significant improvements of +7.1% AP and +3.4% AP on nuScenes, and +8.3% AP and +7.4% AP on Lyft compared to existing techniques.
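The adaptive sampling idea sketched in the abstract — up-weighting pseudo labels for hard, distant objects so training does not collapse onto easy, nearby ones — can be illustrated with a minimal sketch. Everything below (the function name, the distance threshold, and the boost factor) is hypothetical for illustration and is not taken from the paper:

```python
def adaptive_sampling_weights(pseudo_labels, near_threshold_m=30.0, far_boost=3.0):
    """Return normalized sampling probabilities for pseudo labels.

    Hypothetical re-weighting: pseudo labels beyond `near_threshold_m`
    (sparse LiDAR coverage, easily under-fit) get `far_boost` times the
    weight of nearby ones. The paper's actual self-paced schedule adapts
    this distribution over the course of training and is not reproduced here.
    """
    weights = [far_boost if lbl["distance_m"] > near_threshold_m else 1.0
               for lbl in pseudo_labels]
    total = sum(weights)
    return [w / total for w in weights]

# Two toy pseudo labels: one nearby (easy), one distant (hard).
labels = [
    {"id": 0, "distance_m": 10.0},
    {"id": 1, "distance_m": 45.0},
]
probs = adaptive_sampling_weights(labels)
print(probs)  # [0.25, 0.75] -> the distant object is sampled 3x as often
```

Under these assumed settings, the distant pseudo label is drawn three times as often as the nearby one, mimicking the "balanced learning trajectory across varying object scales and distances" the abstract describes.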

Keywords: 2D Scene Understanding; Self-paced Learning; Unsupervised 3D Object Detection; Unsupervised Learning
DOI: 10.1007/978-3-031-73247-8_15
Indexed By: CPCI-S
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Computer Science, Theory & Methods
WOS ID: WOS:001353688700015
Scopus ID: 2-s2.0-85209988423
Document Type: Conference paper
Collection: DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Author: Zheng, Zhedong
Affiliation:
1. FST and ICI, University of Macau, Macao
2. CSIRO Data61, Sydney, Australia
3. Shanghai University, Shanghai, China
First Author Affiliation: Faculty of Science and Technology
Corresponding Author Affiliation: Faculty of Science and Technology
Recommended Citation
GB/T 7714
Zhang, Ruiyang, Zhang, Hu, Yu, Hang, et al. Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene[C]. Springer Science and Business Media Deutschland GmbH, 2025: 249-266.
APA: Zhang, R., Zhang, H., Yu, H., & Zheng, Z. (2025). Approaching Outside: Scaling Unsupervised 3D Object Detection from 2D Scene. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 15069 LNCS, 249-266.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Zhang, Ruiyang]'s Articles
[Zhang, Hu]'s Articles
[Yu, Hang]'s Articles
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.