3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled this issue by learning inter-domain mappings, yet their transferability is constrained by the training configuration, and the training is susceptible to a peculiar lossy noise called ray-drop. To address these issues, this paper proposes a generative model of LiDAR range images applicable to data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial network along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation, demonstrating that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.
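As a rough illustration of this measurement model, the PyTorch sketch below pairs a coordinate-conditioned generator with a Gumbel-sigmoid (binary Concrete) relaxation so the lossy ray-drop mask stays differentiable during adversarial training. All names, layer sizes, and the relaxation choice are our assumptions for illustration, not the paper's exact implementation.

```python
# Hypothetical sketch of an implicit range-image generator with a
# differentiable ray-drop effect (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RangeImageGenerator(nn.Module):
    def __init__(self, latent_dim=512, hidden=256):
        super().__init__()
        # MLP over (elevation, azimuth) coordinates, modulated by a latent code.
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # -> (range, ray-drop logit) per pixel
        )

    def forward(self, coords, z, tau=1.0):
        # coords: (B, H, W, 2) angular grid in [-1, 1]; z: (B, latent_dim)
        B, H, W, _ = coords.shape
        z = z[:, None, None, :].expand(B, H, W, -1)
        out = self.net(torch.cat([coords, z], dim=-1))
        rng = F.softplus(out[..., 0])  # nonnegative range
        logit = out[..., 1]
        # Gumbel-sigmoid relaxation: a soft, differentiable keep/drop mask.
        u = torch.rand_like(logit).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)  # Logistic(0, 1) sample
        mask = torch.sigmoid((logit + noise) / tau)
        return rng, mask

# During GAN training, the discriminator would see the composite:
#   rng, mask = G(coords, z); fake = rng * mask
```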
Corrupted data can also be restored by exploring the learned scene priors.
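A minimal sketch of how such restoration can exploit the learned prior, assuming the frozen generator interface above and a validity mask marking the surviving pixels: optimize a latent code so the masked synthesis agrees with the remaining measurements, then read out the completed scan.

```python
# Hypothetical latent-optimization (GAN inversion) restoration sketch.
import torch

def restore(G, coords, corrupted, valid, steps=500, lr=0.05):
    # corrupted: (1, H, W) range image; valid: (1, H, W), 1 where measured.
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        rng, mask = G(coords, z)
        # Fit the masked synthesis to the surviving measurements only.
        loss = ((rng * mask - corrupted).abs() * valid).sum() / valid.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        rng, _ = G(coords, z)
    return rng  # completed range image, drops filled by the scene prior
```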
The 1x result was obtained by reconstruction. The 2x and 4x results can be obtained simply by changing the coordinate queries.
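Because the generator is a function of continuous angular coordinates, upsampling reduces to querying a denser grid. A hedged sketch, with the grid helper, the resolutions, and the interface of `G` assumed for illustration:

```python
# Hypothetical coordinate-grid helper for arbitrary-resolution queries.
import torch

def angular_grid(H, W):
    # Evenly spaced (elevation, azimuth) queries in [-1, 1].
    v = torch.linspace(-1, 1, H)
    u = torch.linspace(-1, 1, W)
    ev, az = torch.meshgrid(v, u, indexing="ij")
    return torch.stack([ev, az], dim=-1)[None]  # (1, H, W, 2)

# 1x reconstruction vs. 2x/4x upsampling with the same latent code z:
#   rng_1x, _ = G(angular_grid(64, 2048), z)
#   rng_2x, _ = G(angular_grid(128, 4096), z)
#   rng_4x, _ = G(angular_grid(256, 8192), z)
```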
Our model can be used as a ray-drop noise simulator!
We conducted semantic segmentation on KITTI annotated with car and pedestrian classes [Wu et al., ICRA'19].
The baseline was trained on GTA-LiDAR only (in-game simulation without noise), while ours was trained on GTA-LiDAR with our simulated ray-drop noise.
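One simple variant of this augmentation, sketched under the same assumed generator interface (the random latent and the thresholding step are our assumptions): sample a ray-drop mask from the trained model and apply it to a noise-free simulated scan before training the segmentation network.

```python
# Hypothetical Sim2Real ray-drop augmentation sketch.
import torch

@torch.no_grad()
def add_ray_drop(G, coords, sim_range, threshold=0.5):
    # sim_range: (B, H, W) noise-free simulated scan; coords: (1, H, W, 2).
    B = sim_range.shape[0]
    z = torch.randn(B, 512)                       # random scene latent
    grid = coords.expand(B, -1, -1, -1)
    _, soft_mask = G(grid, z)                     # per-pixel keep probability
    keep = (soft_mask > threshold).float()        # harden to binary drops
    return sim_range * keep                       # zero out dropped rays
```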
@inproceedings{nakashima2023generative,
author = {Nakashima, Kazuto and Iwashita, Yumi and Kurazume, Ryo},
title = {Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
pages = {},
year = {2023}
}
This work was partially supported by a Grant-in-Aid for JSPS Fellows Grant Number JP19J12159, JSPS KAKENHI Grant Number JP20H00230, and JST Moonshot R&D Grant Number JPMJMS2032.