This is the code for **GraphSplat: Sparse-View Generalizable 3D Gaussian Splatting is Worth Graph of Nodes** by Zeyang Bai, Yunbiao Wang, Dongbo Yu, Jun Xiao, Lupeng Liu.
To get started, create a virtual environment using Python 3.12:
```shell
python3.12 -m venv venv
source venv/bin/activate
pip install wheel torch torchvision torchaudio
pip install -r requirements.txt
```

If your system does not use CUDA 12.1 by default, see the troubleshooting tips below.
Troubleshooting
The Gaussian splatting CUDA code (diff-gaussian-rasterization) must be compiled using the same version of CUDA that PyTorch was compiled with. As of December 2023, the version of PyTorch you get when doing pip install torch was built using CUDA 12.1. If your system does not use CUDA 12.1 by default, you can try the following:
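As a quick sanity check, you can compare the CUDA version PyTorch reports against your system toolkit before compiling. The helper below is an illustrative sketch, not part of this repository; in practice you would pass it `torch.version.cuda` and the version printed by `nvcc --version`:

```python
def cuda_versions_match(torch_cuda: str, system_cuda: str) -> bool:
    """Compare CUDA versions at major.minor granularity, e.g. '12.1' vs '12.1.105'."""
    return torch_cuda.split(".")[:2] == system_cuda.split(".")[:2]

# Example: PyTorch built with CUDA 12.1 vs. a system toolkit at CUDA 11.8.
print(cuda_versions_match("12.1", "12.1.105"))  # True
print(cuda_versions_match("12.1", "11.8"))      # False
```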
- Install a version of PyTorch that was built using your CUDA version. For example, to get PyTorch with CUDA 11.8, use the following command (more details here):
```shell
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

- Install CUDA Toolkit 12.1 on your system. One approach (try this at your own risk!) is to install a second CUDA Toolkit version using the `runfile (local)` option here. When you run the installer, disable the options that install GPU drivers and update the default CUDA symlinks. If you do this, you can point your system to CUDA 12.1 during installation as follows:
```shell
LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64 pip install -r requirements.txt

# If everything else was installed but you're missing diff-gaussian-rasterization, do:
LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64 pip install git+https://github.com/dcharatan/diff-gaussian-rasterization-modified
```

GraphSplat was trained on the same datasets as pixelSplat and MVSplat. Below we quote pixelSplat's detailed instructions on getting datasets.
pixelSplat was trained using versions of the RealEstate10k and ACID datasets that were split into ~100 MB chunks for use on server-cluster file systems. Small subsets of the RealEstate10k and ACID datasets in this format can be found here. To use them, simply unzip them into a newly created `datasets` folder in the project root directory. The preprocessed versions of the full datasets can be found here.
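After unzipping, a quick way to confirm the layout is to list the subfolders under `datasets`. This is a small illustrative sketch, not part of the codebase; the exact folder names depend on which datasets you downloaded:

```python
from pathlib import Path

def list_dataset_dirs(root: str = "datasets") -> list[str]:
    """Return the names of dataset subfolders under the project-root datasets dir."""
    p = Path(root)
    if not p.is_dir():
        raise FileNotFoundError(f"Expected a '{root}' folder in the project root")
    return sorted(d.name for d in p.iterdir() if d.is_dir())
```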
We use the same testing datasets as MVSplat and TranSplat. Below we quote MVSplat's detailed instructions on getting datasets.
- Download the preprocessed DTU data `dtu_training.rar`.
The main entry point is `src/main.py`. Call it via:

```shell
python3 -m src.main +experiment=re10k
```

Our code supports multi-GPU training.
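One common pattern for multi-GPU runs is to restrict the visible devices via `CUDA_VISIBLE_DEVICES` before launching the entry point. The helper below is a hypothetical sketch of that pattern; `build_launch` and its defaults are not part of the repository, and the exact multi-GPU behavior depends on the project's trainer configuration:

```python
import os
import subprocess

def build_launch(experiment: str, gpus: str = "0,1"):
    """Build the training command and environment; 'gpus' selects visible devices."""
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": gpus}
    cmd = ["python3", "-m", "src.main", f"+experiment={experiment}"]
    return cmd, env

cmd, env = build_launch("re10k")
# subprocess.run(cmd, env=env)  # uncomment to actually start training
```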
```bibtex
@inproceedings{bai2025graphsplat,
  author    = {Zeyang Bai and Yunbiao Wang and Dongbo Yu and Jun Xiao and Lupeng Liu},
  title     = {GraphSplat: Sparse-View Generalizable 3D Gaussian Splatting is Worth Graph of Nodes},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia (MM '25)},
  year      = {2025},
}
```
The project is largely based on pixelSplat and MVSplat, and incorporates numerous code snippets from UniMatch and GreedyViG. Many thanks to these four projects for their excellent contributions!