Use AnimeGANv2 to apply a Hayao Miyazaki anime-like style to a video.
Tested on platforms:
- Nvidia Turing, Ampere.
Demonstrated adapters:
- Multi-stream source adapter;
- Video/Metadata sink adapter.
git clone https://github.com/insight-platform/Savant.git
cd Savant
git lfs pull
./utils/check-environment-compatible
Note: the Ubuntu 22.04 runtime configuration guide describes how to set up the runtime for running Savant pipelines.
# you are expected to be in Savant/ directory
mkdir -p data
curl -o data/deepstream_sample_720p.mp4 https://eu-central-1.linodeobjects.com/savant-data/demo/deepstream_sample_720p.mp4
The demo uses models that are compiled into TensorRT engines the first time the demo runs, which takes some time. Optionally, you can build the engines in advance with:
# you are expected to be in Savant/ directory
./scripts/run_module.py --build-engines samples/animegan/module.yml
# you are expected to be in Savant/ directory
./samples/animegan/run.sh
The script waits for the module to complete processing and removes the containers afterward. While waiting for the script to finish, you can check the current progress by reading the container logs from a separate terminal:
docker logs -f animegan-source-1
# or
docker logs -f animegan-module-1
# or
docker logs -f animegan-video-sink-1
The result is written into Savant/data/results/animegan_result_0. The input video is expected to be located in the Savant/data directory. The file name can be set through the INPUT_FILENAME environment variable:
INPUT_FILENAME=input.mp4 ./samples/animegan/run.sh
The sample uses the generator_Hayao_weight AnimeGANv2 checkpoint that was:
- Converted to PyTorch with the help of the convert_weights.py script;
- Exported to ONNX using standard PyTorch methods with (dynamic, 3, 720, 1280) inference dimensions;
- Simplified using onnx-simplifier (a sketch of these last two steps follows the list).
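Below is a minimal sketch of the export and simplification steps, not the exact commands used to prepare the sample. It assumes the PyTorch port of AnimeGANv2 exposes a Generator module (the "from model import Generator" path is hypothetical) and that convert_weights.py produced a checkpoint named generator_Hayao_weight.pt; adjust the names to your setup.

import torch
import onnx
from onnxsim import simplify

# Hypothetical import: the Generator class from the PyTorch AnimeGANv2 port.
from model import Generator

# Load the converted checkpoint (file name is an assumption).
generator = Generator()
generator.load_state_dict(torch.load("generator_Hayao_weight.pt", map_location="cpu"))
generator.eval()

# Dummy input matching the (dynamic, 3, 720, 1280) inference dimensions.
dummy = torch.randn(1, 3, 720, 1280)

# Export to ONNX with a dynamic batch dimension.
torch.onnx.export(
    generator,
    dummy,
    "animegan_hayao.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=13,
)

# Simplify the exported graph with onnx-simplifier.
onnx_model = onnx.load("animegan_hayao.onnx")
simplified_model, ok = simplify(onnx_model)
assert ok, "onnx-simplifier could not validate the simplified model"
onnx.save(simplified_model, "animegan_hayao_simplified.onnx")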