DUSty (2021) → DUSty v2 (2023) → R2DM (2024)

LiDAR Data Synthesis
with Denoising Diffusion Probabilistic Models

Kazuto Nakashima     Ryo Kurazume
Kyushu University
ICRA 2024
TL;DR: A diffusion model for LiDAR data generation, named R2DM

Abstract

Generative modeling of 3D LiDAR data is an emerging task with promising applications for autonomous mobile robots, such as scalable simulation, scene manipulation, and sparse-to-dense completion of LiDAR point clouds. While existing approaches have demonstrated the feasibility of image-based LiDAR data generation using deep generative models, they still struggle with fidelity and training stability. In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity. Our method is built upon denoising diffusion probabilistic models (DDPMs), which have shown impressive results among generative model frameworks in recent years. To effectively train DDPMs in the LiDAR domain, we first conduct an in-depth analysis of data representation, loss functions, and spatial inductive biases. Leveraging our R2DM model, we also introduce a flexible LiDAR completion pipeline based on the powerful capabilities of DDPMs. We demonstrate that our method surpasses existing methods in generating tasks on the KITTI-360 and KITTI-Raw datasets, as well as in the completion task on the KITTI-360 dataset.

Overview Video

Approach

We cast our task as 360° image generation and train our modified DDPM on LiDAR range and reflectance images.
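As a rough illustration of this image representation, the sketch below projects a LiDAR point cloud onto a 360° range/reflectance image via spherical coordinates. The resolution and vertical field-of-view values are assumptions for a typical 64-beam sensor (as in KITTI), not the paper's exact configuration:

```python
import numpy as np

def to_range_image(points, intensity, h=64, w=1024,
                   fov_up=3.0, fov_down=-25.0):
    """Project a LiDAR point cloud onto a 360-degree range/reflectance image.

    points: (N, 3) xyz coordinates, intensity: (N,) reflectance in [0, 1].
    fov_up / fov_down: vertical field of view in degrees (assumed values
    typical for a 64-beam sensor; not necessarily the paper's settings).
    Returns a (2, h, w) array: channel 0 is range, channel 1 is reflectance.
    """
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    # Map angles to pixel indices: full 360 deg -> columns, FOV -> rows.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down_r) / fov) * h  # top row = fov_up
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    # Sort by depth descending so nearer points overwrite farther ones.
    order = np.argsort(-depth)
    image = np.zeros((2, h, w), dtype=np.float32)
    image[0, v[order], u[order]] = depth[order]
    image[1, v[order], u[order]] = intensity[order]
    return image
```

The range and reflectance channels produced this way can then be stacked and treated as an ordinary multi-channel image by the diffusion model.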

Generation

R2DM can generate diverse and high-fidelity 3D scene point clouds based on the image representation.
More demos will be added soon!
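To visualize a generated sample as a 3D scene, the image representation can be back-projected to a point cloud. A minimal sketch is below; the per-pixel angles and field-of-view values are assumptions for a typical 64-beam sensor, not the paper's exact settings:

```python
import numpy as np

def to_point_cloud(image, fov_up=3.0, fov_down=-25.0):
    """Back-project a (2, H, W) range/reflectance image to 3D points.

    Channel 0 is range, channel 1 is reflectance. The field-of-view
    values are assumed, matching a typical 64-beam sensor setup.
    Returns (points, reflectance) for all non-empty pixels.
    """
    _, h, w = image.shape
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    # Pixel-center angles: column index -> yaw, row index -> pitch.
    u = (np.arange(w) + 0.5) / w
    v = (np.arange(h) + 0.5) / h
    yaw = np.pi * (1.0 - 2.0 * u)   # [-pi, pi]
    pitch = fov_up_r - v * fov      # top row = fov_up

    yaw_g, pitch_g = np.meshgrid(yaw, pitch)
    depth = image[0]
    mask = depth > 0  # drop empty pixels with no return

    x = depth * np.cos(pitch_g) * np.cos(yaw_g)
    y = depth * np.cos(pitch_g) * np.sin(yaw_g)
    z = depth * np.sin(pitch_g)

    points = np.stack([x[mask], y[mask], z[mask]], axis=1)
    return points, image[1][mask]
```

This is the inverse of the spherical projection used to build the training images, so each valid pixel maps back to exactly one 3D point.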

Citation

@article{nakashima2023lidar,
    title   = {LiDAR Data Synthesis with Denoising Diffusion Probabilistic Models},
    author  = {Kazuto Nakashima and Ryo Kurazume},
    year    = 2023,
    journal = {arXiv:2309.09256}
}

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP23K16974 and JST Moonshot R&D Grant Number JPMJMS2032.