
EDGS: Eliminating Densification for Efficient Convergence of 3DGS

Dmytro Kotovenko* · Olga Grebenkova* · Björn Ommer

CompVis @ LMU Munich · Munich Center for Machine Learning (MCML)

* equal contribution

Project Page · Paper · Open In Colab · Hugging Face

3DGS initializes with a sparse set of Gaussians and progressively adds more in under-reconstructed regions. In contrast, EDGS starts with a dense initialization from triangulated 2D correspondences across training image pairs, requiring only minimal refinement. This leads to faster convergence and higher rendering quality. Our method reaches the original 3DGS LPIPS score in just 25% of the training time and uses only 60% of the splats. Renderings become nearly indistinguishable from ground truth after only 3,000 steps — without any densification.

(Demo video: 3D scene reconstruction using our method in 11 seconds.)


🚀 Quickstart

The fastest way to try our model is through the Hugging Face demo, which lets you upload images or a video and interactively rotate the resulting 3D scene. For broad accessibility, we currently support only forward-facing scenes.

Steps:

  1. Upload a list of photos or a single video.
  2. Click 📸 Preprocess Input to estimate 3D positions using COLMAP.
  3. Click 🚀 Start Reconstruction to run the model.

You can also explore the reconstructed scene in 3D directly in the browser.

⚡ Runtime: EDGS typically takes just 10–20 seconds, plus 5–10 seconds for COLMAP processing. Additional time may be needed to save outputs (model, video, 3D preview).

You can also run the same app locally on your machine with:

CUDA_VISIBLE_DEVICES=0 python gradio_demo.py --port 7862 --no_share

Without the --no_share flag, Gradio also prints a public address for the app that you can share with others, allowing them to process their data on your server.

Alternatively, check our Colab notebook.

🛠️ Installation

You can either run install.sh or install manually with the following commands:

git clone [email protected]:CompVis/EDGS.git --recursive
cd EDGS
git submodule update --init --recursive 

conda create -y -n edgs python=3.10 pip
conda activate edgs

# Set up the path to your CUDA installation. In our experience, similar versions such as 12.2 also work well.
export CUDA_HOME=/usr/local/cuda-12.1
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export PATH=$CUDA_HOME/bin:$PATH

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia -y
conda install nvidia/label/cuda-12.1.0::cuda-toolkit -y

pip install -e submodules/gaussian-splatting/submodules/diff-gaussian-rasterization
pip install -e submodules/gaussian-splatting/submodules/simple-knn

# For COLMAP and pycolmap
# Optionally install the original COLMAP, but pycolmap usually suffices:
# conda install conda-forge/label/colmap_dev::colmap
pip install pycolmap


pip install wandb hydra-core tqdm torchmetrics lpips matplotlib rich plyfile imageio imageio-ffmpeg
conda install numpy=1.26.4 -y -c conda-forge --override-channels

pip install -e submodules/RoMa
conda install anaconda::jupyter --yes

# Stuff necessary for gradio and visualizations
pip install gradio 
pip install plotly scikit-learn moviepy==2.1.1 ffmpeg
pip install open3d 
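
After installation, a quick import check can catch a broken CUDA setup or a failed submodule build early. The snippet below is a minimal sketch; the module names (diff_gaussian_rasterization, simple_knn._C) come from the 3DGS submodules and may differ across submodule versions:

# check_install.py - minimal post-install sanity check (module names assumed from the 3DGS submodules)
import torch
print(torch.__version__, "| CUDA available:", torch.cuda.is_available())

import pycolmap                          # COLMAP bindings
import diff_gaussian_rasterization       # CUDA rasterizer from the 3DGS submodule
from simple_knn._C import distCUDA2      # KNN kernel used by 3DGS

print("Core dependencies imported successfully.")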

📦 Data

We evaluated on the following datasets:

  • MipNeRF360 — download here. Unzip "Dataset Pt. 1" and "Dataset Pt. 2", then merge scenes.
  • Tanks & Temples + Deep Blending — from the original 3DGS repo.

Using Your Own Dataset

You can use the same data format as the 3DGS project. Please follow their guide to prepare your scene.

Expected folder structure:

scene_folder
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---sparse
    |---0
        |---cameras.bin
        |---images.bin
        |---points3D.bin

The NeRF synthetic format is also supported.

You can also use the functions provided in our code to convert a collection of images or a single video into the required format. However, this may require some tweaking, and processing can take a long time for large image collections with little overlap.
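
Once a scene is prepared, you can sanity-check that it matches the expected structure above by loading the sparse model with pycolmap. A minimal sketch, assuming a hypothetical scene_folder path:

# verify_scene.py - sanity-check a COLMAP scene (hypothetical path)
import pycolmap

rec = pycolmap.Reconstruction("scene_folder/sparse/0")
print(rec.summary())  # reports the number of cameras, images, and 3D points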

🏋️ Training

To optimize a single scene in COLMAP format, use:

python train.py \
  train.gs_epochs=30000 \
  train.no_densify=True \
  gs.dataset.source_path=<scene folder> \
  gs.dataset.model_path=<output folder> \
  init_wC.matches_per_ref=20000 \
  init_wC.nns_per_ref=3 \
  init_wC.num_refs=180

Command Line Arguments for train.py
  • train.gs_epochs Number of training iterations (steps) for Gaussian Splatting.

  • train.no_densify Disables densification. True by default.

  • gs.dataset.source_path Path to your input dataset directory. This should follow the same format as the original 3DGS dataset structure.

  • gs.dataset.model_path Output directory where the trained model, logs, and renderings will be saved.

  • init_wC.matches_per_ref Number of 2D feature correspondences to extract per reference view for initialization. More matches lead to more Gaussians.

  • init_wC.nns_per_ref Number of nearest neighbor images used per reference during matching.

  • init_wC.num_refs Total number of reference views sampled.

  • wandb.mode Specifies how Weights & Biases (W&B) logging is handled.

    • Default: "disabled"
    • Options:
      • "online" — log to the W&B server in real-time
      • "offline" — save logs locally to sync later
      • "disabled" — turn off W&B logging entirely

    If you want to enable W&B logging, make sure to also configure:

    • wandb.project — the name of your W&B project
    • wandb.entity — your W&B username or team name

Example override:

wandb.mode=online wandb.project=EDGS wandb.entity=your_username train.gs_epochs=15_000 init_wC.matches_per_ref=15_000

To run full evaluation on all datasets:

python full_eval.py -m360 <mipnerf360 folder> -tat <tanks and temples folder> -db <deep blending folder>

🏗️ Reusing Our Model

Our model is essentially a better initialization module for Gaussian Splatting. You can integrate it into your pipeline by calling:

source.corr_init.init_gaussians_with_corr(...)

Input arguments (see the sketch after this list):

  • A GaussianModel and Scene instance
  • A configuration namespace cfg.init_wC to specify parameters like the number of matches, neighbors, and reference views
  • A RoMa model (automatically instantiated if not provided)
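
As a rough illustration, an integration could look like the sketch below. The GaussianModel and Scene constructors follow the original 3DGS codebase, and the exact signature of init_gaussians_with_corr may differ; treat the argument names and config keys here as assumptions to verify against the repository:

# integration_sketch.py - hypothetical usage; verify signatures against the repo
from omegaconf import OmegaConf
from scene import Scene, GaussianModel                # from the gaussian-splatting submodule
from source.corr_init import init_gaussians_with_corr

# Config namespace mirroring the init_wC keys used in training
init_cfg = OmegaConf.create({
    "matches_per_ref": 20000,   # 2D correspondences per reference view
    "nns_per_ref": 3,           # nearest-neighbor images per reference
    "num_refs": 180,            # number of reference views
})

gaussians = GaussianModel(sh_degree=3)                # standard 3DGS model
scene = Scene(dataset_args, gaussians)                # dataset_args: your 3DGS dataset config (not shown)
init_gaussians_with_corr(gaussians, scene, init_cfg)  # a RoMa model is created internally if not passed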

📄 Citation

@misc{kotovenko2025edgseliminatingdensificationefficient,
      title={EDGS: Eliminating Densification for Efficient Convergence of 3DGS}, 
      author={Dmytro Kotovenko and Olga Grebenkova and Björn Ommer},
      year={2025},
      eprint={2504.13204},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2504.13204}, 
}

TODO:

  • Code for training and processing forward-facing scenes.
  • More data examples.