MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images

ECCV 2024

1Monash University,  2ETH Zurich,  3University of Tübingen, Tübingen AI Center, 
4VGG, University of Oxford,  5Microsoft,  6Nanyang Technological University

TL;DR: MVSplat builds a cost volume representation to efficiently predict 3D Gaussians from sparse multi-view images in a single forward pass.

Abstract

We introduce MVSplat, an efficient model that, given sparse multi-view images as input, predicts clean feed-forward 3D Gaussians. To accurately localize the Gaussian centers, we build a cost volume representation via plane sweeping, where the cross-view feature similarities stored in the cost volume provide valuable geometry cues for depth estimation. The other Gaussian parameters are learned jointly with the Gaussian centers, relying only on photometric supervision. We demonstrate the importance of the cost volume representation for learning feed-forward Gaussians through extensive experimental evaluations. On the large-scale RealEstate10K and ACID benchmarks, MVSplat achieves state-of-the-art performance with the fastest feed-forward inference speed (22 fps). More impressively, compared to the latest state-of-the-art method pixelSplat, MVSplat uses 10× fewer parameters and infers more than 2× faster, while providing higher appearance and geometry quality as well as better cross-dataset generalization.
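To make the plane-sweeping step concrete, the following is a minimal sketch of how such a cost volume can be built: source-view features are warped onto the reference view at a set of candidate depths, and their channel-wise similarity with the reference features is recorded per depth. The helper warp_to_depth_plane, the tensor shapes, and the mean-over-channels similarity are illustrative assumptions, not the released MVSplat implementation.

    import torch

    def build_cost_volume(feat_ref, feat_src, depth_candidates, warp_to_depth_plane):
        # Illustrative plane-sweep cost volume (not the official MVSplat code).
        #   feat_ref:            [B, C, H, W] reference-view features
        #   feat_src:            [B, C, H, W] source-view features
        #   depth_candidates:    iterable of D candidate depths
        #   warp_to_depth_plane: hypothetical helper that warps source features
        #                        onto the reference view at a given depth plane
        costs = []
        for d in depth_candidates:
            warped = warp_to_depth_plane(feat_src, d)        # [B, C, H, W]
            # Channel-wise similarity as the matching cost at this depth.
            costs.append((feat_ref * warped).mean(dim=1))    # [B, H, W]
        # Stack per-depth similarities into a cost volume of shape [B, D, H, W].
        return torch.stack(costs, dim=1)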

Overview

Overview of MVSplat. Given multiple posed images as input, we first extract multi-view image features with a multi-view Transformer, which contains self- and cross-attention layers to exchange information across views. Next, we construct per-view cost volumes using plane sweeping. The Transformer features and cost volumes are concatenated and fed to a 2D U-Net (with cross-view attention) that refines the cost volumes and predicts per-view depth maps. The per-view depth maps are unprojected to 3D and combined with a simple deterministic union operation to form the 3D Gaussian centers. The opacity, covariance, and color parameters of the Gaussians are predicted jointly with the depth maps. Finally, novel views are rendered from the predicted 3D Gaussians with the splatting operation.
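As a rough sketch of the final unprojection step described above, the snippet below back-projects per-view depth maps into world space and merges the points from all views with a simple deterministic union to form the Gaussian centers. Tensor shapes and the pinhole-camera conventions are simplifying assumptions for illustration, not the paper's exact implementation.

    import torch

    def depths_to_gaussian_centers(depths, intrinsics, cam_to_world):
        # Illustrative unprojection of per-view depth maps to Gaussian centers
        # (simplified pinhole-camera sketch, not the paper's exact code).
        #   depths:        [V, H, W]  per-view depth maps
        #   intrinsics:    [V, 3, 3]  camera intrinsics per view
        #   cam_to_world:  [V, 4, 4]  camera-to-world poses per view
        #   returns:       [V*H*W, 3] union of unprojected points
        V, H, W = depths.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1).float()
        centers = []
        for v in range(V):
            # Back-project pixels to camera space using the predicted depth.
            rays = torch.linalg.inv(intrinsics[v]) @ pix          # [3, H*W]
            pts_cam = rays * depths[v].reshape(1, -1)             # [3, H*W]
            # Transform to world space with the camera-to-world pose.
            pts_h = torch.cat([pts_cam, torch.ones(1, H * W)], dim=0)
            centers.append((cam_to_world[v] @ pts_h)[:3].T)       # [H*W, 3]
        # Simple deterministic union: concatenate the points from all views.
        return torch.cat(centers, dim=0)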

Comparisons with the State-of-the-art

We present qualitative comparisons with the following state-of-the-art models:

  • pixelSplat: The latest feed-forward 3D Gaussians model, which utilizes a purely data-driven regression architecture to predict Gaussian centers, leading to poor geometry reconstruction and limited cross-dataset generalization.
  • MuRF: The latest feed-forward NeRF model, which leverages a 3D volume and a (2+1)D CNN, making it expensive to train and comparatively slow to render.
Qualitative comparisons on the RealEstate10K and ACID datasets.

Comparisons of Geometry Reconstruction

Our MVSplat produces significantly higher-quality 3D Gaussian primitives than the latest state-of-the-art pixelSplat. Readers are invited to view the corresponding ".ply" files of the 3D Gaussians exported from both models, provided HERE. We recommend viewing them with online viewers, e.g., 3D Gaussian Splatting with Three.js (the camera up vector should be set to "0,0,1").

Comparisons of point clouds and depth maps.

Comparisons of Cross-dataset Generalization

Our MVSplat is inherently better at generalizing to out-of-distribution novel scenes, primarily because the cost volume captures the relative similarity between features, which stays stable across datasets, unlike the absolute scale of the features themselves. Here, we demonstrate cross-dataset generalization by training models solely on RealEstate10K (indoor scenes) and directly testing them on DTU (object-centric scenes) and ACID (outdoor scenes).
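A tiny numerical check (illustrative only, not the exact matching cost used in MVSplat) makes this intuition concrete: a normalized cross-view similarity is unchanged when all features are rescaled by a global factor, whereas the raw feature magnitudes shift.

    import torch

    torch.manual_seed(0)
    f1 = torch.randn(64)   # feature at a pixel in view 1
    f2 = torch.randn(64)   # feature at the matching pixel in view 2
    scale = 5.0            # pretend a new dataset changes feature magnitudes

    cos = torch.nn.functional.cosine_similarity
    print(cos(f1, f2, dim=0))                  # similarity on the original features
    print(cos(scale * f1, scale * f2, dim=0))  # identical: the relative cue is preserved
    print(f1.abs().mean(), (scale * f1).abs().mean())  # absolute magnitudes differ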

Trained on RealEstate10K, tested on DTU and ACID.

Acknowledgements

This research is supported by the Monash FIT Start-up Grant. Dr. Chuanxia Zheng is supported by EPSRC SYN3D EP/Z001811/1.

BibTeX

@article{chen2024mvsplat,
    title   = {MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images},
    author  = {Chen, Yuedong and Xu, Haofei and Zheng, Chuanxia and Zhuang, Bohan and Pollefeys, Marc and Geiger, Andreas and Cham, Tat-Jen and Cai, Jianfei},
    journal = {arXiv preprint arXiv:2403.14627},
    year    = {2024},
}