ICCV 2025
- Inference code for novel view synthesis
- Release of GAS checkpoints on Hugging Face
- Inference code for novel pose synthesis
- Inference code for synchronized multi-view video generation
- An NVIDIA GPU with CUDA support is required.
- We have tested on a single NVIDIA A4500 and on a single A100 GPU.
- Inference requires a minimum of 20 GB of GPU memory.
- Operating system: Linux
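As a quick sanity check (not part of the official instructions; it assumes the standard NVIDIA driver tooling and an existing PyTorch install), you can verify the GPU and its memory like this:

```bash
# Report the GPU name, total memory (needs at least 20 GB), and driver version.
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv

# Confirm that PyTorch can see a CUDA device (assumes PyTorch is already installed).
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"
```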
Installation instructions can be found in `INSTALL.md`.
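The authoritative steps live in `INSTALL.md`; as a rough, hypothetical sketch, a typical conda-based setup for a project like this might look as follows (the environment name, Python version, and requirements file are assumptions, not the actual instructions):

```bash
# Hypothetical environment setup; defer to INSTALL.md for the real steps.
conda create -n gas python=3.10 -y    # environment name and Python version are assumptions
conda activate gas
pip install -r requirements.txt       # requirements file name is an assumption
```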
Create a folder `demo_images` inside the project directory and move your test image into it, e.g., `GAS/demo_images/image.png`. Then run the novel view synthesis demo:
`bash demo_scripts/nv.sh`
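Putting these steps together, an end-to-end run could look like the following (the image name is just an example):

```bash
# Create the demo input folder inside the project directory and add a test image.
mkdir -p demo_images
cp /path/to/your/image.png demo_images/image.png   # example path; use your own image

# Run novel view synthesis on the image(s) in demo_images/.
bash demo_scripts/nv.sh
```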
TBD
TBD
If you find this code useful for your research, please cite it using the following BibTeX entry.
@article{lu2025gas,
title={GAS: Generative Avatar Synthesis from a Single Image},
author={Lu, Yixing and Dong, Junting and Kwon, Youngjoong and Zhao, Qin and Dai, Bo and De la Torre, Fernando},
journal={arXiv preprint arXiv:2502.06957},
year={2025}
}
This project builds on components implemented and released by SHERF, Champ, and MimicMotion. We thank the authors for releasing their code. Please also consider citing these works if you find them helpful.
This work is intended for research purposes only. The demo images and videos are sourced from community users, and their copyrights remain with the original owners. If any of them infringes your rights or is otherwise objectionable, please get in touch with us ([email protected]) and we will remove it promptly.