¹Northwestern Polytechnical University   ²Australian National University   ³Shanghai AI Laboratory
\# corresponding author
maoyuxin@mail.nwpu.edu.cn, daiyuchao@nwpu.edu.cn
We propose an Explicit Conditional Multimodal Variational Auto-Encoder (ECMVAE) for audio-visual segmentation (AVS), which aims to segment sound sources in a video sequence. Existing AVS methods focus on implicit feature fusion strategies, where models are trained to fit the discrete samples in the dataset. With a limited and less diverse dataset, the resulting performance is usually unsatisfactory. In contrast, we address this problem from an effective representation learning perspective, aiming to model the contribution of each modality explicitly. Specifically, we observe that audio contains critical category information about the sound producers, while visual data provides candidate sound producer(s); their shared information corresponds to the target sound producer(s) shown in the visual data. Cross-modal shared representation learning is therefore especially important for AVS. To achieve this, our ECMVAE factorizes the representation of each modality into a modality-shared representation and a modality-specific representation. An orthogonality constraint is applied between the shared and specific representations to keep the factorized latent codes mutually exclusive. Further, a mutual information maximization regularizer is introduced to encourage extensive exploration of each modality. Quantitative and qualitative evaluations on the AVSBench dataset demonstrate the effectiveness of our approach, leading to a new state of the art for AVS, with a 3.84 mIoU performance leap on the challenging MS3 subset for multiple sound source segmentation.
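The factorization described above relies on two regularizers: an orthogonality constraint between the shared and specific latent codes, and a mutual information maximization term. Below is a minimal PyTorch sketch of plausible forms of these two terms; the function names, the InfoNCE-style lower bound used as the MI term, and all hyperparameters are illustrative assumptions, not the paper's exact estimators.

```python
import torch
import torch.nn.functional as F


def orthogonality_loss(z_shared: torch.Tensor, z_specific: torch.Tensor) -> torch.Tensor:
    """Push the shared and specific latent codes of one modality apart.

    Both tensors have shape (batch, latent_dim); penalizing their squared
    cosine similarity encourages the two codes to carry non-overlapping
    information.
    """
    zs = F.normalize(z_shared, dim=-1)
    zp = F.normalize(z_specific, dim=-1)
    return (zs * zp).sum(dim=-1).pow(2).mean()


def mi_lower_bound_loss(z_audio_shared: torch.Tensor,
                        z_visual_shared: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style stand-in for the mutual information maximization term.

    Matched (audio, visual) shared codes within a batch are treated as
    positives and all other pairings as negatives; minimizing this loss
    maximizes a lower bound on the mutual information between the two
    shared codes.
    """
    za = F.normalize(z_audio_shared, dim=-1)
    zv = F.normalize(z_visual_shared, dim=-1)
    logits = za @ zv.t() / temperature                  # (batch, batch)
    targets = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, targets)
```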
Overview of the proposed ECMVAE for audio-visual segmentation. Feature extractors produce backbone features for the two modalities. Three latent encoders perform latent space factorization, yielding a task-driven shared representation and modality-specific representations, i.e., explicit multimodal representation learning. A decoder then produces the final segmentation maps, indicating the sound producer(s) in the audio-visual data.
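As a companion to this overview, the sketch below shows one way the components could be wired together: three latent encoders (shared, audio-specific, visual-specific) over the backbone features and a decoder producing the segmentation map. All module names, feature dimensions, and the simple concatenation-based fusion are assumptions for illustration, not the released architecture.

```python
import torch
import torch.nn as nn


class LatentEncoder(nn.Module):
    """Maps backbone features to a Gaussian latent code via reparameterization."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.logvar = nn.Linear(in_dim, latent_dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.mu(feat), self.logvar(feat)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


class ECMVAESketch(nn.Module):
    def __init__(self, a_dim=128, v_dim=256, latent_dim=64, num_classes=1):
        super().__init__()
        # Three latent encoders: one shared code conditioned on both
        # modalities, plus one specific encoder per modality.
        self.shared_enc = LatentEncoder(a_dim + v_dim, latent_dim)
        self.audio_enc = LatentEncoder(a_dim, latent_dim)
        self.visual_enc = LatentEncoder(v_dim, latent_dim)
        # Toy decoder: fuse latent codes with visual features and predict a mask.
        self.decoder = nn.Sequential(
            nn.Conv2d(v_dim + 2 * latent_dim, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, audio_feat: torch.Tensor, visual_feat: torch.Tensor):
        # audio_feat: (B, a_dim); visual_feat: (B, v_dim, H, W)
        v_pooled = visual_feat.mean(dim=(2, 3))                       # (B, v_dim)
        z_shared = self.shared_enc(torch.cat([audio_feat, v_pooled], dim=-1))
        z_audio = self.audio_enc(audio_feat)
        z_visual = self.visual_enc(v_pooled)
        # Broadcast the latent codes over the spatial grid and decode a mask.
        b, _, h, w = visual_feat.shape
        z = torch.cat([z_shared, z_visual], dim=-1)[:, :, None, None].expand(b, -1, h, w)
        mask = self.decoder(torch.cat([visual_feat, z], dim=1))
        return mask, (z_shared, z_audio, z_visual)
```

The latent codes returned alongside the mask are what the orthogonality and mutual information regularizers above would be applied to during training.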
Visualization of the modality-shared and modality-specific latent codes.
@InProceedings{Mao_ECMVAE_ICCV_2023,
  author    = {Yuxin Mao and Jing Zhang and Mochu Xiang and Yiran Zhong and Yuchao Dai},
  title     = {Multimodal Variational Auto-encoder based Audio-Visual Segmentation},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023},
}
This work was done while Yuxin Mao was an intern at Shanghai AI Laboratory (OpenNLPLab).
We thank the ACs and the reviewers for their comments, which were very helpful in improving our paper.