Image-conditioned Computer-Aided Design Generation with Transformer-based Contrastive Representation and Diffusion Priors
Download the dataset from here and place it in the `data/` directory.

Download the checkpoints from here and place them in `data/ckpt/`.
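If the directories do not exist yet, they can be created up front. The paths below are the ones referenced by the commands elsewhere in this README; this is just a convenience sketch, not part of the repo:

```python
import os

# Directories referenced by the commands in this README.
for d in ("data/ckpt", "data/images", "data/embeddings", "assets", "results"):
    os.makedirs(d, exist_ok=True)   # no-op if the directory already exists
```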
First download the checkpoints and the dataset and put them in their respective directories.
- Clone the repo:

  ```shell
  git clone https://github.com/ferdous-alam/GenCAD
  cd GenCAD
  ```
- Build the Docker image:

  ```shell
  docker build -t gencad:latest .
  ```
- Run a script, for example CSR training:

  ```shell
  docker run -it gencad:latest conda run -n gencad_env python train_gencad.py csr -name test -gpu 0
  ```
- For headless visualization (inference), first enter the container with GPU access and mount the appropriate folders:

  ```shell
  docker run --gpus all \
      -v $(pwd)/data/images:/app/data/images \
      -v $(pwd)/assets:/app/assets \
      -v $(pwd)/results:/app/results \
      -it gencad:latest /bin/bash
  ```

  Then, inside the container, run:

  ```shell
  xvfb-run --server-args="-screen 0 2048x2048x24" python inference_gencad.py -image_path data/images -export_img
  ```
- Create and activate a virtual environment with GPU support:

  ```shell
  conda create -n gencad_env python=3.10 -y
  conda activate gencad_env
  ```
- Install `pythonocc-core` using conda:

  ```shell
  conda install -c conda-forge pythonocc-core=7.9.0
  ```
- Install the remaining dependencies via pip:

  ```shell
  pip install -r requirements.txt
  ```
- Now run training or inference, for example:

  ```shell
  python train_gencad.py csr -name test -gpu 0
  ```
Train the CSR autoencoder:

```shell
python train_gencad.py csr -name test -gpu 0
```

Optionally resume from a checkpoint:

```shell
python train_gencad.py csr -name test -gpu 0 -ckpt "model/ckpt/ae_ckpt_epoch1000.pth"
```

Train the CCIP contrastive encoder, pointing it at a trained CAD autoencoder checkpoint:

```shell
python train_gencad.py ccip -name test -gpu 0 -cad_ckpt "model/ckpt/ae_ckpt_epoch1000.pth"
```

Train the diffusion prior on the precomputed CAD and sketch embeddings:

```shell
python train_gencad.py dp -name test -gpu 0 -cad_emb 'data/embeddings/cad_embeddings.h5' -img_emb 'data/embeddings/sketch_embeddings.h5'
```
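The `ccip` stage trains CAD and image encoders contrastively so that matching sketch/CAD pairs land close together in embedding space. As a rough illustration of the kind of objective such contrastive pretraining typically optimizes (a CLIP-style symmetric InfoNCE loss; a hedged NumPy sketch, not GenCAD's actual implementation):

```python
import numpy as np

def clip_loss(cad_emb, img_emb, temperature=0.07):
    """CLIP-style symmetric contrastive loss over a batch of paired embeddings."""
    # L2-normalize each embedding so similarity is cosine similarity
    cad = cad_emb / np.linalg.norm(cad_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = cad @ img.T / temperature        # (N, N) pairwise similarities
    labels = np.arange(len(logits))           # matching pairs lie on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # stabilize softmax
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the CAD->image and image->CAD directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

For perfectly aligned pairs the loss approaches zero; shuffling one side of the batch drives it up, which is what pushes the two encoders toward a shared embedding space.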
For headless systems (e.g. servers), run inference under a virtual framebuffer:

```shell
xvfb-run python inference_gencad.py
```
Convert STL files to PNG images:

```shell
python stl2img.py -src path/to/stl/files -dst path/to/save/images
```
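`stl2img.py`'s implementation is not shown in this README, but the first step of any such converter is loading the mesh. A minimal pure-stdlib sketch of the binary STL layout it has to parse (the one-triangle round-trip file below is purely illustrative):

```python
import os
import struct
import tempfile

def read_binary_stl(path):
    """Return a list of triangles, each a tuple of three (x, y, z) vertices."""
    tris = []
    with open(path, "rb") as f:
        f.read(80)                                    # 80-byte header (ignored)
        (n,) = struct.unpack("<I", f.read(4))         # uint32 triangle count
        for _ in range(n):
            rec = struct.unpack("<12fH", f.read(50))  # normal, 3 vertices, attr
            v = rec[3:12]                             # drop the normal vector
            tris.append((v[0:3], v[3:6], v[6:9]))
    return tris

# Write a one-triangle binary STL to demonstrate the round trip.
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
with tempfile.NamedTemporaryFile(suffix=".stl", delete=False) as f:
    f.write(b"\0" * 80 + struct.pack("<I", 1))
    f.write(struct.pack("<12fH", 0.0, 0.0, 1.0, *tri[0], *tri[1], *tri[2], 0))
    path = f.name

tris = read_binary_stl(path)
print(tris)  # -> [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
os.remove(path)
```

From the triangle list, rendering to PNG is a projection plus rasterization step, which in practice is handed off to a library rather than written by hand.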
Coming soon.