- Python 3.10+
- Clone the repository and enter it:

  ```bash
  git clone https://github.com/creatis-myriad/cLDM_project
  cd cLDM_project/bin/
  ```

- Create a conda environment from the provided file:

  ```bash
  conda env create -f environment.yaml
  ```

- Activate it:

  ```bash
  conda activate cLDM_env
  ```
- The data come from the MYOSAIQ challenge and were gathered in a single folder, organized by patient.
- The D8 subset was not used during training.
- In the code, `D_metrics` is a dictionary with specific information (such as the metrics) obtained from the MYOSAIQ database.
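  For illustration only, here is a minimal sketch of how `D_metrics` could be laid out. The metric names are taken from the training commands below; the exact nesting is an assumption, not the repo's documented format:

  ```python
  # Hypothetical layout of D_metrics (per patient, per slice) -- an assumption
  # for illustration; only the metric names come from this README.
  D_metrics = {
      "patient_001": {
          "slice_00": {
              "z_vals": 0.12,                 # slice position
              "transmurality": 0.45,
              "endo_surface_length": 38.2,
              "infarct_size_2D": 512.0,
          },
          # ... one entry per slice
      },
      # ... one entry per patient
  }
  ```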
- The configuration files used to run the experiments can be found in the folder `nn_lib/config/`. It is organized as follows:
  - `Config_VanillaVAE.yaml` is the main config file, which pulls in the other configuration files to run the VAE model.
  - The folder `config/model` contains the configuration files with the parameters for each model.
  - The folder `config/architecture` contains the configuration files to select a specific architecture depending on the model you want to run.
  - The folder `config/processing` holds the path used to load a dictionary containing the metrics of all patients for all slices.
  - The folders `config/dataset` and `config/datamodule` contain the configuration files to create the dataloaders and use them to train the model.
- When running a main configuration, the sub-configuration files it uses should have the same name (see the sketch below).
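The `+config_name=...` and `group=option` override syntax in the commands below suggests a Hydra-style configuration system. Purely as an illustration (an assumption about the setup, not the repo's documented API), a main configuration could be composed and overridden programmatically like this:

```python
# Hypothetical sketch assuming Hydra is used; the config path and keys mirror
# this README but are not verified against the repo.
from hydra import compose, initialize

with initialize(version_base=None, config_path="nn_lib/config"):
    cfg = compose(
        config_name="Config_VanillaVAE",
        overrides=[
            "model.train_params.batch_size=32",  # same overrides as the CLI below
            "model.net.lat_dims=8",
        ],
    )
print(cfg.model.net.lat_dims)  # -> 8
```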
- For strategy 1, the `cLDM` model is used. The conditioning (via cross-attention) was done with:
  - A vector of scalars (clinical attributes) derived from the segmentations (strategy 1.1).
  - The latent representation from the `VanillaVAE` model trained on images (strategy 1.2).
  - The latent representation from the `ARVAE` model trained on images and regularized with clinical attributes (strategy 1.3).

  A generic sketch of this cross-attention conditioning is given after this list.
- For strategy 2, the `ControlNet` architecture is employed with an `LDM` backbone.
- For strategy 3, the `cLDM_concat` architecture is conditioned on a 2D representation of the segmentation masks obtained with the `AE_KL` model.
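For readers unfamiliar with the mechanism, here is a generic, self-contained sketch of cross-attention conditioning as used in latent diffusion models. It is illustrative only; the class and tensor shapes are assumptions, not the repo's implementation:

```python
import torch
import torch.nn as nn

class CrossAttentionCond(nn.Module):
    """Image features attend to conditioning tokens (e.g. clinical scalars
    or VAE latents). Illustrative sketch, not the repo's code."""

    def __init__(self, dim: int, cond_dim: int, n_heads: int = 4):
        super().__init__()
        self.to_kv = nn.Linear(cond_dim, dim)  # project condition to feature dim
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (B, N, dim)      flattened spatial features from the denoising UNet
        # cond: (B, M, cond_dim) conditioning tokens
        kv = self.to_kv(cond)
        out, _ = self.attn(query=x, key=kv, value=kv)
        return x + out  # residual connection

# Example: condition 16x16 latent features on 4 clinical scalars as one token
feats = torch.randn(2, 16 * 16, 64)
scalars = torch.randn(2, 1, 4)  # e.g. z_vals, transmurality, ...
print(CrossAttentionCond(dim=64, cond_dim=4)(feats, scalars).shape)
# torch.Size([2, 256, 64])
```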
- Figures 2 and 3 were obtained using the file `fig_originalSeg_vs_generatedSeg.py`. It needs the original segmentations as well as the segmentations produced by the nnU-Net model when synthetic images are used as inputs.
- For Figure 2, we chose an arbitrary mask to illustrate our pipeline.
- For Figure 3, we selected specific masks with relevant characteristics. Synthetic images were then generated conditioned on those masks, as illustrated in the figure. For the final row, manual rotations of 90°, 180° and 270° were applied to the mask (see the sketch below).
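As an illustration of those manual rotations (assuming the mask is a 2D array; this is not the repo's code):

```python
import numpy as np

# Rotate a segmentation mask by 90, 180 and 270 degrees, as for the final
# row of Figure 3. np.rot90 rotates counter-clockwise by k quarter turns.
mask = np.zeros((128, 128), dtype=np.uint8)  # placeholder mask
rotated = {deg: np.rot90(mask, k=deg // 90) for deg in (90, 180, 270)}
for deg, m in rotated.items():
    print(deg, m.shape)
```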
Below are the command lines to run the models:
- `VanillaVAE`

  ```bash
  python train_VanillaVAE.py \
      +config_name=Config_VanillaVAE.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=500 \
      model.net.shape_data=[1,128,128] \
      model.net.lat_dims=8 \
      model.net.alpha=5 \
      model.net.beta=8e-3
  ```
- `ARVAE`

  ```bash
  python train_ARVAE.py \
      +config_name=Config_ARVAE.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=500 \
      model.net.shape_data=[1,128,128] \
      model.net.lat_dims=8 \
      model.net.alpha=5 \
      model.net.beta=8e-3 \
      model.net.gamma=3 \
      +model.net.keys_cond_data=["z_vals","transmurality","endo_surface_length","infarct_size_2D"]
  ```
- `AE_KL`

  ```bash
  python train_AE_KL.py \
      +config_name=ConfigAE_KL.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=1000 \
      model.net.lat_dims=1
  ```
- `cLDM`

  ```bash
  # Conditioning with scalars
  python train_cLDM.py \
      +config_name=Config_cLDM.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100 \
      model.path_model_cond=null \
      processing=processing_CompressLgeSegCond_Scalars \
      dataset=CompressLgeSegCond_Scalars_Dataset \
      datamodule=CompressLgeSegCond_Scalars_Datamodule \
      datamodule.keys_cond_data=["z_vals","transmurality","endo_surface_length","infarct_size_2D"] \
      architecture/unets=unet_cLDM_light
  ```

  ```bash
  # Conditioning with the latent representation from the VAE
  python train_cLDM.py \
      +config_name=Config_cLDM.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100 \
      model.path_model_cond="/home/deleat/Documents/RomainD/Working_space/NN_models/training_Pytorch/training_VAE/training_LgeMyosaiq_v2/2025-01-06 10:13:45_106e_img_base" \
      architecture/unets=unet_cLDM_light
  ```

  ```bash
  # Conditioning with the latent representation from the ARVAE
  python nn_models/bin/train_cLDM.py \
      +config_name=Config_cLDM.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100 \
      model.path_model_cond="/home/deleat/Documents/RomainD/Working_space/NN_models/training_Pytorch/training_ARVAE/training_LgeMyosaiq_v2/2025-01-06 14:23:29_72e_img_base" \
      architecture/unets=unet_cLDM_light
  ```
- `LDM`

  ```bash
  python train_LDM.py \
      +config_name=Config_LDM.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100 \
      architecture/unets=unet_LDM_light
  ```
- `ControlNet`

  ```bash
  python train_ControlNet.py \
      +config_name=Config_ControlNet.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100
  ```
- `cLDM_concat`

  ```bash
  python train_cLDM_concat.py \
      +config_name=Config_cLDM_concat.yaml \
      model.train_params.num_workers=24 \
      model.train_params.batch_size=32 \
      model.train_params.max_epoch=5100 \
      architecture/unets=unet_cLDM_concat_light
  ```


