SynDiff

Official PyTorch implementation of SynDiff, as described in the following paper:

Muzaffer Özbey*, Onat Dalmaz*, Salman UH Dar, Hasan A Bedel, Şaban Özturk, Alper Güngör, Tolga Çukur, "Unsupervised Medical Image Translation With Adversarial Diffusion Models," in IEEE Transactions on Medical Imaging, vol. 42, no. 12, pp. 3524-3539, Dec. 2023, doi: 10.1109/TMI.2023.3290149.

*: equal contribution

Dependencies

python>=3.6.9
torch>=1.7.1
torchvision>=0.8.2
cuda>=11.2
ninja
python3.x-dev (install via apt; x should match your Python 3 version, e.g., 3.8)

Installation

  • Clone this repo:
git clone https://github.com/icon-lab/SynDiff
cd SynDiff

Dataset

You should structure your aligned dataset in the following way:

input_path/
  ├── data_train_contrast1.mat
  ├── data_train_contrast2.mat
  ├── data_val_contrast1.mat
  ├── data_val_contrast2.mat
  ├── data_test_contrast1.mat
  └── data_test_contrast2.mat

where each .mat file has shape (#images, width, height) and image values are normalized to the range [0, 1].
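
As a concrete example, the sketch below writes that layout with scipy.io.savemat. It is a minimal sketch under two assumptions: the 'contrast1'/'contrast2' placeholders above are replaced by the names you pass via --contrast1/--contrast2 (e.g., T1, T2), and the variable key 'data_fs' inside each .mat file is hypothetical, so check the repository's data loader for the key it actually expects.

# Minimal sketch: write an aligned toy dataset in the expected layout.
# Assumptions (hypothetical, verify against the repo's data loader):
#   - filenames use the contrast names passed to --contrast1/--contrast2
#   - each .mat file stores its array under the key 'data_fs'
import numpy as np
from scipy.io import savemat

def save_split(images, contrast, split, input_path):
    # images: (#images, width, height) float array with values in [0, 1]
    assert images.ndim == 3 and 0.0 <= images.min() and images.max() <= 1.0
    savemat(f"{input_path}/data_{split}_{contrast}.mat",
            {"data_fs": images.astype(np.float32)})

rng = np.random.default_rng(0)  # random stand-ins for real, coregistered slices
for split, n in [("train", 100), ("val", 10), ("test", 10)]:
    for contrast in ["T1", "T2"]:
        save_split(rng.random((n, 256, 256)), contrast, split, "/input/path/for/data")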

Sample Data

Sample toy data can also be found under the 'SynDiff_sample_data' folder of the repository.

Train

For example, the following command trains a model for T1<->T2 translation:

python3 train.py --image_size 256 --exp exp_syndiff --num_channels 2 --num_channels_dae 64 --ch_mult 1 1 2 2 4 4 --num_timesteps 4 --num_res_blocks 2 --batch_size 1 --contrast1 T1 --contrast2 T2 --num_epoch 500 --ngf 64 --embedding_type positional --use_ema --ema_decay 0.999 --r1_gamma 1. --z_emb_dim 256 --lr_d 1e-4 --lr_g 1.6e-4 --lazy_reg 10 --num_process_per_node 1 --save_content --local_rank 0 --input_path /input/path/for/data --output_path /output/for/results

Pretrained Models

We have released pretrained diffusive generators for the T1->PD and PD->T1 tasks on the IXI dataset and the T1->T2 and T2->T1 tasks on the BRATS dataset. Save these weights in the relevant checkpoints folder to perform inference.
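
After downloading, a quick way to confirm a checkpoint loads cleanly is to inspect it with plain PyTorch. This is a minimal sketch; the path and filename below are hypothetical and should be matched to wherever test.py's --exp and --which_epoch arguments resolve.

# Minimal sketch: inspect a downloaded generator checkpoint on CPU.
# The path and filename are hypothetical; adjust to your checkpoints folder.
import torch

ckpt = torch.load("/output/for/results/exp_syndiff/gen_diffusive_1_50.pth",
                  map_location="cpu")
state = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
for name, tensor in list(state.items())[:5]:  # peek at the first few tensors
    print(name, tuple(tensor.shape))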

Test

For inference with a trained model at a given epoch, run:

python test.py --image_size 256 --exp exp_syndiff --num_channels 2 --num_channels_dae 64 --ch_mult 1 1 2 2 4 4 --num_timesteps 4 --num_res_blocks 2 --batch_size 1 --embedding_type positional --z_emb_dim 256 --contrast1 T1 --contrast2 T2 --which_epoch 50 --gpu_chose 0 --input_path /input/path/for/data --output_path /output/for/results
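
Once inference finishes, synthesized images can be scored against the held-out ground truth. The sketch below computes PSNR assuming both volumes are (#images, width, height) arrays in [0, 1]; the synthesized filename and the 'data_fs' key are assumptions, so adapt them to whatever test.py actually writes under --output_path.

# Minimal sketch: mean PSNR between ground-truth and synthesized test volumes.
# Filenames and the 'data_fs' key are hypothetical; adapt to your outputs.
import numpy as np
from scipy.io import loadmat

def psnr(ref, est, data_range=1.0):
    mse = np.mean((ref - est) ** 2)
    return 20 * np.log10(data_range) - 10 * np.log10(mse)

gt = loadmat("/input/path/for/data/data_test_T2.mat")["data_fs"]
syn = loadmat("/output/for/results/synthesized_T2.mat")["data_fs"]
print("mean PSNR:", np.mean([psnr(g, s) for g, s in zip(gt, syn)]))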


Citation

Preliminary versions of SynDiff were presented at the Medical Imaging Meets NeurIPS workshop and at IEEE ISBI 2023. You are encouraged to modify and distribute this code; however, please acknowledge this code and cite the paper appropriately.

@ARTICLE{ozbey_dalmaz_syndiff_2024,
  author={Özbey, Muzaffer and Dalmaz, Onat and Dar, Salman U. H. and Bedel, Hasan A. and Özturk, Şaban and Güngör, Alper and Çukur, Tolga},
  journal={IEEE Transactions on Medical Imaging}, 
  title={Unsupervised Medical Image Translation With Adversarial Diffusion Models}, 
  year={2023},
  volume={42},
  number={12},
  pages={3524-3539},
  keywords={Biological system modeling;Computational modeling;Training;Generative adversarial networks;Image synthesis;Task analysis;Generators;Medical image translation;synthesis;unsupervised;unpaired;adversarial;diffusion;generative},
  doi={10.1109/TMI.2023.3290149}}


For any questions, comments, and contributions, please contact Muzaffer Özbey (muzafferozbey94[at]gmail.com) or Onat Dalmaz (onat[at]stanford.edu).

(c) ICON Lab 2023


Acknowledgements

This code uses libraries from the pGAN, StyleGAN-2, and DD-GAN repositories.
