MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images

Yuedong Chen¹   Haofei Xu²,³   Chuanxia Zheng⁴   Bohan Zhuang¹
Marc Pollefeys²,⁵   Andreas Geiger³   Tat-Jen Cham⁶   Jianfei Cai¹
¹Monash University   ²ETH Zurich   ³University of Tübingen, Tübingen AI Center
⁴University of Oxford   ⁵Microsoft   ⁶Nanyang Technological University
Paper | Code

TL;DR

MVSplat builds a cost volume representation to efficiently predict 3D Gaussians from sparse multi-view images in a single forward pass.

Abstract

We propose MVSplat, an efficient feed-forward 3D Gaussian Splatting model learned from sparse multi-view images. To accurately localize the Gaussian centers, we propose to build a cost volume representation via plane sweeping in 3D space, where the cross-view feature similarities stored in the cost volume provide valuable geometry cues for depth estimation. We learn the Gaussian primitives' opacities, covariances, and spherical harmonics coefficients jointly with the Gaussian centers, relying only on photometric supervision. Extensive experiments demonstrate the importance of the cost volume representation for learning feed-forward Gaussian Splatting models. On the large-scale RealEstate10K and ACID benchmarks, our model achieves state-of-the-art performance with the fastest feed-forward inference speed (22 fps). Compared to the latest state-of-the-art method pixelSplat, our model uses 10× fewer parameters and infers more than 2× faster, while providing higher appearance and geometry quality as well as better cross-dataset generalization.

Architecture

[Figure: Overview of the MVSplat architecture]
Overview of MVSplat. Given multiple posed images as input, we first extract multi-view image features with a multi-view Transformer, which contains self- and cross-attention layers to exchange information across views. Next, we construct per-view cost volumes using plane sweeping. The Transformer features and cost volumes are concatenated and fed to a 2D U-Net (with cross-view attention) that refines the cost volumes and predicts per-view depth maps. The per-view depth maps are unprojected to 3D and combined via a simple deterministic union operation to form the 3D Gaussian centers. The remaining Gaussian parameters (opacity, covariance, and color) are predicted jointly with the depth maps. Finally, novel views are rendered from the predicted 3D Gaussians with the splatting operation.
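To make the geometry pipeline concrete, here is a minimal PyTorch sketch of the two operations described above: building a per-view cost volume by plane sweeping, and unprojecting a predicted depth map to 3D Gaussian centers. This is an illustrative reconstruction under simplifying assumptions (two views, shared intrinsics, dot-product similarity, and hypothetical function names), not the released code.

import torch
import torch.nn.functional as F

def build_cost_volume(feat_tgt, feat_ref, K, T_tgt2ref, depth_candidates):
    """Plane-sweep cost volume: correlate target-view features with
    reference-view features warped to each depth hypothesis.
    feat_tgt, feat_ref: [B, C, H, W]; K: [B, 3, 3] intrinsics;
    T_tgt2ref: [B, 4, 4] relative pose; depth_candidates: [D].
    Returns a cost volume of shape [B, D, H, W]."""
    B, C, H, W = feat_tgt.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32, device=feat_tgt.device),
        torch.arange(W, dtype=torch.float32, device=feat_tgt.device),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(1, 3, -1)  # [1, 3, H*W]
    K_inv = torch.inverse(K)
    R, t = T_tgt2ref[:, :3, :3], T_tgt2ref[:, :3, 3:]
    costs = []
    for d in depth_candidates:
        # Back-project target pixels to depth d, map into the reference
        # camera, and project to reference pixel coordinates.
        cam_pts = K_inv @ pix * d                       # [B, 3, H*W]
        ref_pts = K @ (R @ cam_pts + t)
        uv = ref_pts[:, :2] / ref_pts[:, 2:].clamp(min=1e-6)
        # Normalize to [-1, 1] and sample reference features at those locations.
        grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                            2 * uv[:, 1] / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)
        warped = F.grid_sample(feat_ref, grid, align_corners=True)
        # Cross-view feature similarity (dot product) at this depth plane.
        costs.append((feat_tgt * warped).sum(dim=1) / C**0.5)
    return torch.stack(costs, dim=1)                    # [B, D, H, W]

def unproject_to_centers(depth, K, T_cam2world):
    """Lift a per-view depth map [B, H, W] to Gaussian centers [B, H*W, 3]."""
    B, H, W = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32, device=depth.device),
        torch.arange(W, dtype=torch.float32, device=depth.device),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(1, 3, -1)
    cam_pts = torch.inverse(K) @ pix * depth.reshape(B, 1, -1)   # [B, 3, H*W]
    cam_pts = torch.cat([cam_pts, torch.ones_like(cam_pts[:, :1])], dim=1)
    return (T_cam2world @ cam_pts)[:, :3].transpose(1, 2)        # [B, H*W, 3]

In the full model, the cost volume is concatenated with the Transformer features and refined by the 2D U-Net before the per-view depth maps are predicted; the deterministic union of per-view centers then amounts to concatenating the unprojected points across views.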

Comparisons with the State-of-the-art

We present qualitative comparisons with state-of-the-art models on the RealEstate10K and ACID datasets:

[Figure: Qualitative comparisons on RealEstate10K and ACID]

Comparisons of Geometry Reconstruction

Our MVSplat produces significantly higher-quality 3D Gaussian primitives than the latest state-of-the-art pixelSplat. Readers are invited to inspect the ".ply" files of the 3D Gaussians exported from both models, provided HERE. We recommend viewing them with online viewers, e.g., 3D Gaussian Splatting with Three.js (the camera up vector should be set to "0,0,1").

[Figure: Comparisons of point clouds and depth maps]

Comparisons of Cross-dataset Generalization

Our MVSplat is inherently better at generalizing to out-of-distribution novel scenes, primarily because the cost volume captures the relative similarity between features, which is invariant to the absolute scale of the features. Here, we evaluate cross-dataset generalization by training models solely on RealEstate10K (indoor scenes) and testing them directly on DTU (object-centric scenes) and ACID (outdoor scenes).
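To illustrate this intuition with a toy sketch of ours (not the authors' code): with dot-product matching costs, globally rescaling the features multiplies the cost of every depth hypothesis by the same factor, so the best-matching depth is unchanged, whereas raw feature magnitudes can shift freely between datasets.

import torch

torch.manual_seed(0)
f = torch.randn(64)                # target-view feature at one pixel
g = torch.randn(8, 64)             # warped reference features for 8 depth planes
costs = g @ f                      # per-depth matching costs
rescaled = (2.5 * g) @ (2.5 * f)   # globally rescaled features
assert rescaled.argmax() == costs.argmax()   # same best depth hypothesis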

[Figure: Models trained on RealEstate10K, tested on DTU and ACID]

BibTeX

@article{chen2024mvsplat,
    title   = {MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images},
    author  = {Chen, Yuedong and Xu, Haofei and Zheng, Chuanxia and Zhuang, Bohan and Pollefeys, Marc and Geiger, Andreas and Cham, Tat-Jen and Cai, Jianfei},
    journal = {arXiv preprint arXiv:2403.14627},
    year    = {2024},
}