RPEFlow: Multimodal Fusion of RGB-PointCloud-Event for Joint Optical Flow and Scene Flow Estimation

IEEE International Conference on Computer Vision (ICCV 2023)


Zhexiong Wan1, Yuxin Mao1, Jing Zhang2, Yuchao Dai1#

1Northwestern Polytechnical University   2Australian National University  
# corresponding author
wanzhexiong@mail.nwpu.edu.cn, daiyuchao@nwpu.edu.cn

Abstract


Recently, methods that fuse RGB images and point clouds have been proposed to jointly estimate 2D optical flow and 3D scene flow. However, because both conventional RGB cameras and LiDAR sensors adopt a frame-based data acquisition mechanism, their performance is limited by fixed, low sampling rates, especially in highly dynamic scenes. By contrast, an event camera can asynchronously capture intensity changes with very high temporal resolution, providing complementary dynamic information about the observed scenes. In this paper, we incorporate RGB images, Point clouds and Events for joint optical flow and scene flow estimation with our proposed multi-stage multimodal fusion model, RPEFlow. First, we present an attention fusion module with a cross-attention mechanism to implicitly explore the internal cross-modal correlation for the 2D and 3D branches, respectively. Second, we introduce a mutual information regularization term to explicitly model the complementary information of the three modalities for effective multimodal feature learning. We also contribute a new synthetic dataset to advocate further research. Experiments on both synthetic and real datasets show that our model outperforms the existing state-of-the-art by a wide margin.
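To make the two ideas in the abstract more concrete, below is a minimal PyTorch sketch (not the released code) of what a cross-attention fusion block and a surrogate for the mutual-information regularizer could look like. All tensor shapes, layer sizes, and the choice of an InfoNCE-style lower bound as the MI estimator are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch (illustrative assumptions only) of cross-attention fusion
# between two modality feature streams and an InfoNCE-style stand-in for a
# mutual-information regularization term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionFusion(nn.Module):
    """Fuse a query modality with a context modality via cross-attention."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat: torch.Tensor, context_feat: torch.Tensor) -> torch.Tensor:
        # query_feat:   (B, N_q, C) tokens from one modality (e.g., RGB or points)
        # context_feat: (B, N_c, C) tokens from another modality (e.g., events)
        fused, _ = self.attn(query_feat, context_feat, context_feat)
        return self.norm(query_feat + fused)  # residual connection


def info_nce_mi_lower_bound(feat_a: torch.Tensor, feat_b: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style lower bound on the mutual information between paired
    per-sample features (B, C); minimizing this loss tightens the bound."""
    a = F.normalize(feat_a, dim=-1)
    b = F.normalize(feat_b, dim=-1)
    logits = a @ b.t() / temperature          # (B, B) cosine-similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, N, C = 2, 1024, 64
    rgb_tokens = torch.randn(B, N, C)
    event_tokens = torch.randn(B, N, C)
    fusion = CrossAttentionFusion(C)
    fused = fusion(rgb_tokens, event_tokens)
    mi_loss = info_nce_mi_lower_bound(fused.mean(dim=1), event_tokens.mean(dim=1))
    print(fused.shape, mi_loss.item())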


EKubric Dataset Overview



We use the Kubric and ESIM simulators to build our EKubric dataset, which contains 15,367 RGB-PointCloud-Event pairs with annotations (including optical flow, scene flow, surface normal, semantic segmentation, and object-coordinate ground truths). A hypothetical loader for one sample is sketched below.
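The sketch below only illustrates what reading one EKubric sample might look like; the directory layout, file names, and array keys are assumptions for illustration and may differ from the released format.

# Hypothetical EKubric sample loader; on-disk layout and keys are assumed.
from pathlib import Path

import numpy as np


def load_ekubric_sample(root: str, index: int) -> dict:
    """Load RGB frames, point clouds, events, and ground-truth labels for
    one frame pair, assuming a single .npz archive per sample."""
    sample_dir = Path(root) / f"{index:06d}"
    data = np.load(sample_dir / "sample.npz")      # assumed archive name
    return {
        "rgb_t0": data["rgb_t0"],                  # (H, W, 3) image at time t0
        "rgb_t1": data["rgb_t1"],                  # (H, W, 3) image at time t1
        "points_t0": data["points_t0"],            # (N, 3) point cloud at t0
        "events": data["events"],                  # (E, 4) events as (x, y, t, polarity)
        "optical_flow": data["optical_flow"],      # (H, W, 2) 2D flow ground truth
        "scene_flow": data["scene_flow"],          # (N, 3) 3D flow ground truth
        "surface_normal": data["surface_normal"],
        "segmentation": data["segmentation"],
        "object_coordinates": data["object_coordinates"],
    }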


Citation



 @InProceedings{Wan_RPEFlow_ICCV_2023,
  author    = {Wan, Zhexiong and Mao, Yuxin and Zhang, Jing and Dai, Yuchao},
  title     = {RPEFlow: Multimodal Fusion of RGB-PointCloud-Event for Joint Optical Flow and Scene Flow Estimation},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year      = {2023},
}

Acknowledgments


This research was sponsored by Zhejiang Lab. Zhexiong Wan is supported by a scholarship from the China Scholarship Council and the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University.

Thanks to the ACs and the reviewers for their comments, which were very helpful in improving our paper.

Thanks to the following helpful open-source projects: CamLiFlow, RAFT, RAFT-3D, kubric, esim_py, E-RAFT, DSEC, CFNet.