diff --git a/containers/2_ApplicationSpecific/BindCraft/BUILD_with_PyRosetta.md b/containers/2_ApplicationSpecific/BindCraft/BUILD_with_PyRosetta.md
new file mode 100644
index 0000000..b63f065
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/BUILD_with_PyRosetta.md
@@ -0,0 +1,165 @@
+# Build the BindCraft container with PyRosetta
+
+To build and use the PyRosetta version you must have a PyRosetta license.
+You must either complete the application for a non-commercial license or purchase
+a commercial license - see [PyRosetta Licensing](https://els2.comotion.uw.edu/product/pyrosetta) for more info.
+
+## Building the container
+
+1. Start an interactive job
+
+```
+salloc --cluster=ub-hpc --partition=debug --qos=debug --account="[SlurmAccountName]" \
+ --mem=0 --exclusive --time=01:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 19781052
+> salloc: job 19781052 queued and waiting for resources
+> salloc: job 19781052 has been allocated resources
+> salloc: Granted job allocation 19781052
+> salloc: Nodes cpn-i14-39 are ready for job
+> CCRusername@cpn-i14-39:~$
+> ```
+
+2. Navigate to your build directory and use the Slurm job local temporary directory for cache
+
+You should now be on the compute node allocated to you. In this example we're using our project directory for our build directory.
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Download the BindCraft build files, FreeBindCraft_with_PyRosetta.def and docker-entrypoint.sh, to this directory:
+
+```
+curl -L -o FreeBindCraft_with_PyRosetta.def https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft_with_PyRosetta.def
+curl -L -o docker-entrypoint.sh https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/docker-entrypoint.sh
+```
+
+Sample output:
+
+> ```
+> % Total % Received % Xferd Average Speed Time Time Time Current
+> Dload Upload Total Spent Left Speed
+> 100 5408 100 5408 0 0 25764 0 --:--:-- --:--:-- --:--:-- 258
+> % Total % Received % Xferd Average Speed Time Time Time Current
+> Dload Upload Total Spent Left Speed
+> 100 404 100 404 0 0 2889 0 --:--:-- --:--:-- --:--:-- 2906
+> ```
+
+3. Build your container
+
+Set the apptainer cache dir:
+
+```
+export APPTAINER_CACHEDIR=${SLURMTMPDIR}
+```
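+
+The SLURMTMPDIR variable is only set inside a Slurm job; if you run this step
+elsewhere, APPTAINER_CACHEDIR would be set to an empty string. A defensive
+variant of the same export (a sketch, assuming a fallback to /tmp is acceptable
+for a one-off build) is:
+
+```
+# Use the Slurm job-local temporary directory when available,
+# otherwise fall back to /tmp.
+export APPTAINER_CACHEDIR="${SLURMTMPDIR:-/tmp}"
+echo "APPTAINER_CACHEDIR=${APPTAINER_CACHEDIR}"
+```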
+
+Building the BindCraft container takes about half an hour...
+
+```
+apptainer build BindCraft_with_PyRosetta-$(arch).sif FreeBindCraft_with_PyRosetta.def
+```
+
+Sample truncated output:
+
+> ```
+> [....]
+> INFO: Adding environment to container
+> INFO: Creating SIF file...
+> INFO:    Build complete: BindCraft_with_PyRosetta-x86_64.sif
+> ```
+
+## Running the container
+
+Start an interactive job with a single GPU, e.g.
+NOTE: BindCraft inference only uses one GPU
+
+```
+salloc --cluster=ub-hpc --partition=general-compute --qos=general-compute \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=05:00:00
+```
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+...then start the BindCraft container instance
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft_with_PyRosetta-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: Tesla T4 (UUID: GPU-fdc92258-5f5b-9ece-ceca-f1545184cd08)
+>
+> Apptainer>
+> ```
+
+Verify BindCraft is installed:
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+```
+python3 "/app/bindcraft.py" --help
+```
+
+Sample output:
+
+> ```
+> Open files limits OK (soft=65536, hard=65536)
+> usage: bindcraft.py [-h] [--settings SETTINGS] [--filters FILTERS] [--advanced ADVANCED] [--no-pyrosetta] [--verbose] [--no-plots] [--no-animations]
+> [--interactive]
+>
+> Script to run BindCraft binder design.
+>
+> options:
+> -h, --help show this help message and exit
+> --settings SETTINGS, -s SETTINGS
+> Path to the basic settings.json file. If omitted in a TTY, interactive mode is used.
+> --filters FILTERS, -f FILTERS
+> Path to the filters.json file used to filter design. If not provided, default will be used.
+> --advanced ADVANCED, -a ADVANCED
+> Path to the advanced.json file with additional design settings. If not provided, default will be used.
+> --no-pyrosetta Run without PyRosetta (skips relaxation and PyRosetta-based scoring)
+> --verbose Enable detailed timing/progress logs
+> --no-plots Disable saving design trajectory plots (overrides advanced settings)
+> --no-animations Disable saving design animations (overrides advanced settings)
+> --interactive Force interactive mode to collect target settings and options
+> ```
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/EXAMPLES.md b/containers/2_ApplicationSpecific/BindCraft/EXAMPLES.md
new file mode 100644
index 0000000..c10e2da
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/EXAMPLES.md
@@ -0,0 +1,450 @@
+# BindCraft Examples (without PyRosetta)
+
+## BindCraft
+
+The following examples are from the [BindCraft GitHub README.md](https://github.com/cytokineking/FreeBindCraft/blob/master/README.md)
+
+Start an interactive job with a GPU e.g.
+NOTE: BindCraft only uses one GPU
+
+```
+salloc --cluster=ub-hpc --partition=debug --qos=debug \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=01:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 21355872
+> salloc: job 21355872 queued and waiting for resources
+> salloc: job 21355872 has been allocated resources
+> salloc: Granted job allocation 21355872
+> salloc: Nodes cpn-d01-39 are ready for job
+> ```
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the top level input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+Start the container:
+Note that the AlphaFold 2 params files are mounted at /app/params inside the container.
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: NVIDIA A40 (UUID: GPU-52ba81f0-dd77-1e16-417f-8c7218ee8bc6)
+>
+> Apptainer>
+> ```
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+Copy the sample input file
+
+```
+cp /app/example/PDL1.pdb /work/input/
+```
+
+Verify the copy:
+
+```
+ls -l /work/input/PDL1.pdb
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] nogroup 74686 Sep 9 15:39 /work/input/PDL1.pdb
+> ```
+
+## Run OpenMM relax Sanity test
+
+```
+python3 /app/extras/test_openmm_relax.py /work/input/PDL1.pdb /work/output/relax_test
+```
+
+Sample output:
+
+> ```
+> --- Test Script: Starting Relaxations ---
+> Input PDB: /work/input/PDL1.pdb
+> Base Output PDB Path: /work/output/relax_test
+> Cleaning input PDB file: /work/input/PDL1.pdb...
+> Input PDB file cleaned successfully.
+> Target OpenMM output: /work/output/relax_test_openmm.pdb
+> Target PyRosetta output: /work/output/relax_test_pyrosetta.pdb
+>
+> --- OpenMM Relax Run: single ---
+> Platform: OpenCL
+> Total seconds: 9.11
+> Initial minimization seconds: 0.35
+> Ramp count: 3, MD steps/shake: 5000
+> Best energy (kJ/mol): -20371.66
+> Stage 1: md_steps=5000, md_s=0.88, min_calls=5, min_s=1.23, E0=-20220.387287139893, Emd=-14546.19973397255, Efin=-20352.85933494568
+> Stage 2: md_steps=5000, md_s=0.86, min_calls=5, min_s=1.43, E0=-20352.85933494568, Emd=-14468.224707603455, Efin=-20269.905647277832
+> Stage 3: md_steps=0, md_s=0.00, min_calls=5, min_s=1.23, E0=-20269.905647277832, Emd=None, Efin=-20371.66457605362
+> OpenMM Relaxed PDB saved to: /work/output/relax_test_openmm.pdb
+>
+> --- Starting PyRosetta Relaxation ---
+> PyRosetta is not available (PYROSETTA_AVAILABLE=False). Skipping PyRosetta relaxation.
+> If you intended to test PyRosetta relaxation, please ensure it's correctly installed and configured.
+>
+> --- Test Script: Finished ---
+> ```
+
+The output for the run is in the ./output directory
+
+```
+ls -l ./output/relax_test_openmm*
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] grp-ccradmintest 150101 Sep 9 15:43 relax_test_openmm.pdb
+> ```
+
+Exit the container
+
+```
+exit
+```
+
+Sample output:
+
+> ```
+> exit
+> [CCRusername]@cpn-d01-39$
+> ```
+
+Exit the Slurm job
+
+```
+exit
+```
+
+Sample output:
+
+> ```
+> logout
+> salloc: Relinquishing job allocation 21355872
+> salloc: Job allocation 21355872 has been revoked.
+> [CCRusername]@login1$
+> ```
+
+## Long Example run
+
+Start an interactive job with a GPU e.g.
+NOTE: BindCraft only uses one GPU (however, see [Using Multiple GPUs](#using-multiple-gpus) for
+workarounds with a Slurm job)
+
+For this example the runtime is set to eight hours; however, such a run
+is better suited to a Slurm job. The job will not complete in this time
+but, as per https://github.com/martinpacesa/BindCraft/issues/258,
+restarting the job with the same input & output parameters should continue
+the run.
+
+```
+salloc --cluster=ub-hpc --partition=general-compute --qos=general-compute \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=08:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 21357513
+> salloc: job 21357513 queued and waiting for resources
+> salloc: job 21357513 has been allocated resources
+> salloc: Granted job allocation 21357513
+> salloc: Nodes cpn-d01-39 are ready for job
+> ```
+
+Note the Slurm job id ("21357513" in this case) if you want to monitor the
+GPU utilization.
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the top level input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+Start the container:
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: NVIDIA A40 (UUID: GPU-52ba81f0-dd77-1e16-417f-8c7218ee8bc6)
+>
+> Apptainer>
+> ```
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+Copy the settings file to the input directory
+
+```
+cp /app/settings_target/PDL1.json /work/input/
+```
+
+Verify the copy:
+
+```
+ls -l /work/input/PDL1.json
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] nogroup 74686 Sep 9 16:20 /work/input/PDL1.json
+> ```
+
+The output path is set by the "design_path" variable in this file
+
+```
+grep design_path /work/input/PDL1.json
+```
+
+Sample output:
+
+> ```
+> "design_path": "/work/output/",
+> ```
+
+Create the output directory:
+
+```
+mkdir -p /work/output/pdl1
+```
+
+...then modify the output path by changing the "design_path" variable to use
+this path. You can use "vim" or "nano" to do this inside the container, however
+"sed" is used here so you can just cut & paste...
+
+```
+sed -E -i -e "/\"design_path\"/s|:.*$|: \"/work/output/pdl1/\",|" /work/input/PDL1.json
+```
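+
+Since the file is JSON, jq can also make this edit and guarantees the output
+stays syntactically valid. The following sketch runs on a scratch copy (the
+temporary directory and sample contents are illustrative), so you can try the
+approach before touching your real settings file:
+
+```
+# Build a scratch copy with the same design_path key, then rewrite it with jq.
+# jq cannot edit in place, so write to a new file and move it over the original.
+tmp=$(mktemp -d)
+printf '{"design_path": "/work/output/", "binder_name": "PDL1"}\n' > "$tmp/PDL1.json"
+jq '.design_path = "/work/output/pdl1/"' "$tmp/PDL1.json" > "$tmp/PDL1.json.new"
+mv "$tmp/PDL1.json.new" "$tmp/PDL1.json"
+jq -r .design_path "$tmp/PDL1.json"   # prints /work/output/pdl1/
+```
+
+The same jq filter applied to /work/input/PDL1.json achieves what the sed
+command above does.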
+
+Since this is a JSON file, it is important to maintain the syntax of the file,
+particularly the quotes " and the comma at the end of the line. You can verify
+the syntax by using the "jq" command to display the file, which will give errors
+if the syntax is incorrect.
+
+```
+jq . /work/input/PDL1.json
+```
+
+Expected output:
+
+> ```
+> {
+> "design_path": "/work/output/pdl1/",
+> "binder_name": "PDL1",
+> "starting_pdb": "/app/example/PDL1.pdb",
+> "chains": "A",
+> "target_hotspot_residues": "56",
+> "lengths": [
+> 65,
+> 150
+> ],
+> "number_of_final_designs": 100
+> }
+> ```
+
+Now run the binder design:
+
+```
+python3 /app/bindcraft.py \
+ --settings /work/input/PDL1.json \
+ --filters /app/settings_filters/default_filters.json \
+ --advanced /app/settings_advanced/default_4stage_multimer.json \
+ --no-pyrosetta
+```
+
+Sample output:
+
+> ```
+> Open files limits OK (soft=65536, hard=65536)
+> Available GPUs:
+> NVIDIA A401: gpu
+> Running in PyRosetta-free mode as requested by --no-pyrosetta flag.
+> Running binder design for target PDL1
+> Design settings used: default_4stage_multimer
+> Filtering designs based on default_filters
+> Starting trajectory: PDL1_l76_s683137
+> Stage 1: Test Logits
+> 1 models [2] recycles 1 hard 0 soft 0.02 temp 1 loss 16.46 helix 1.48 pae 0.90 i_pae 0.91 con 4.84 i_con 4.42 plddt 0.32 ptm 0.42 i_ptm 0.07 rg 23.59
+> 2 models [3] recycles 1 hard 0 soft 0.04 temp 1 loss 10.43 helix 1.00 pae 0.76 i_pae 0.80 con 4.16 i_con 4.18 plddt 0.41 ptm 0.44 i_ptm 0.10 rg 6.34
+> 3 models [4] recycles 1 hard 0 soft 0.05 temp 1 loss 11.81 helix 1.25 pae 0.82 i_pae 0.81 con 4.75 i_con 4.18 plddt 0.36 ptm 0.43 i_ptm 0.10 rg 9.11
+> [...]
+> ```
+
+
+Note that the output for the run will be in the "./output/pdl1/" directory tree
+
+```
+ls -l ./output/pdl1/
+```
+
+Sample output:
+
+> ```
+> total 12
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Accepted
+> -rw-rw-r-- 1 [CCRusername] nogroup 1116 Sep 9 16:35 failure_csv.csv
+> -rw-rw-r-- 1 [CCRusername] nogroup 3890 Sep 9 16:35 final_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 MPNN
+> -rw-rw-r-- 1 [CCRusername] nogroup 3885 Sep 9 16:35 mpnn_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Rejected
+> -rw-rw-r-- 1 [CCRusername] nogroup 1004 Sep 9 16:35 rejected_mpnn_full_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Trajectory
+> -rw-rw-r-- 1 [CCRusername] nogroup 592 Sep 9 16:35 trajectory_stats.csv
+> ```
+
+## Monitor the GPU activity of the job
+
+You can monitor the GPU utilization while the job is running. You need the
+Slurm job id for this ("squeue --me" will list your Slurm jobs.)
+
+Log into vortex in another terminal window and from there use the Slurm
+job id in the following command:
+
+```
+srun --jobid="21357513" --export=HOME,TERM,SHELL --pty /bin/bash --login
+```
+
+Sample output:
+
+> ```
+> CCRusername@cpn-d01-39:~$
+> ```
+
+Show the GPU in the Slurm job:
+
+```
+nvidia-smi -L
+```
+
+Sample output:
+
+> ```
+> GPU 0: NVIDIA A40 (UUID: GPU-52ba81f0-dd77-1e16-417f-8c7218ee8bc6)
+> ```
+
+Monitor the GPU activity:
+
+```
+nvidia-smi -l
+```
+
+Sample output:
+
+> ```
+> Tue Sep 9 16:43:12 2025
+> +-----------------------------------------------------------------------------------------+
+> | NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8 |
+> |-----------------------------------------+------------------------+----------------------+
+> | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+> | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+> | | | MIG M. |
+> |=========================================+========================+======================|
+> | 0 NVIDIA A40 On | 00000000:0D:00.0 Off | 0 |
+> | 0% 52C P0 296W / 300W | 34501MiB / 46068MiB | 100% Default |
+> | | | N/A |
+> +-----------------------------------------+------------------------+----------------------+
+>
+> +-----------------------------------------------------------------------------------------+
+> | Processes: |
+> | GPU GI CI PID Type Process name GPU Memory |
+> | ID ID Usage |
+> |=========================================================================================|
+> | 0 N/A N/A 1028955 C python3 34492MiB |
+> +-----------------------------------------------------------------------------------------+
+> [...]
+> ```
+
+The GPU utilization should be 100% for the majority of the time.
+
+## Using Multiple GPUs
+
+As per https://github.com/martinpacesa/BindCraft/issues/258,
+you can use Slurm job arrays to utilize multiple GPUs.
+HOWEVER, this requires a patch that is (at the time of writing) in neither
+FreeBindCraft nor the upstream BindCraft - see
+https://github.com/martinpacesa/BindCraft/pull/264
+Currently this patch needs changes before it can be merged into BindCraft.
+
+Once these changes are in the code, using the following lines in a BindCraft
+Slurm script will run 120 Slurm array jobs, running a maximum of 2 array jobs
+at a time, hence using 2 GPUs at the same time (if the necessary resources are
+available.)
+
+```
+[...]
+#SBATCH --gpus-per-node=1
+#SBATCH --array=0-119%2
+[...]
+```
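+
+Once such a patch lands, the pieces above might fit into a batch script like
+the following sketch. The resource settings mirror the interactive examples in
+this document, but the `apptainer exec ... bash -c` invocation and the settings
+paths are untested assumptions to be adapted, not confirmed values:
+
+```
+#!/bin/bash
+#SBATCH --cluster=ub-hpc
+#SBATCH --partition=general-compute
+#SBATCH --qos=general-compute
+#SBATCH --account=[SlurmAccountName]
+#SBATCH --mem=128GB
+#SBATCH --nodes=1
+#SBATCH --tasks-per-node=12
+#SBATCH --gpus-per-node=1
+#SBATCH --time=08:00:00
+#SBATCH --array=0-119%2
+
+# Each array task gets one GPU; the "%2" caps this at 2 concurrent tasks,
+# i.e. at most 2 GPUs in use at once.
+cd /projects/academic/[YourGroupName]/BindCraft
+
+apptainer exec \
+  -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+  -B /util/software/data/alphafold/params:/app/params \
+  -B ./input:/work/input,./output:/work/output \
+  --nv \
+  ./BindCraft-$(arch).sif \
+  bash -c "source /singularity && python3 /app/bindcraft.py \
+    --settings /work/input/PDL1.json \
+    --filters /app/settings_filters/default_filters.json \
+    --advanced /app/settings_advanced/default_4stage_multimer.json \
+    --no-pyrosetta"
+```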
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/EXAMPLES_WITH_PYROSETTA.md b/containers/2_ApplicationSpecific/BindCraft/EXAMPLES_WITH_PYROSETTA.md
new file mode 100644
index 0000000..58e435b
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/EXAMPLES_WITH_PYROSETTA.md
@@ -0,0 +1,472 @@
+# BindCraft Examples with PyRosetta
+
+## BindCraft
+
+The following examples are from the [BindCraft GitHub README.md](https://github.com/cytokineking/FreeBindCraft/blob/master/README.md)
+
+Start an interactive job with a GPU e.g.
+NOTE: BindCraft only uses one GPU
+
+```
+salloc --cluster=ub-hpc --partition=debug --qos=debug \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=01:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 21408161
+> salloc: job 21408161 queued and waiting for resources
+> salloc: job 21408161 has been allocated resources
+> salloc: Granted job allocation 21408161
+> salloc: Nodes cpn-d01-39 are ready for job
+> ```
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the top level input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+Start the container:
+Note that the AlphaFold 2 params files are mounted at /app/params inside the container.
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft_with_PyRosetta-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: NVIDIA A40 (UUID: GPU-52ba81f0-dd77-1e16-417f-8c7218ee8bc6)
+>
+> Apptainer>
+> ```
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+Copy the sample input file
+
+```
+cp /app/example/PDL1.pdb /work/input/
+```
+
+Verify the copy:
+
+```
+ls -l /work/input/PDL1.pdb
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] nogroup 74686 Sep 11 14:20 /work/input/PDL1.pdb
+> ```
+
+## Run OpenMM relax Sanity test
+
+```
+python3 /app/extras/test_openmm_relax.py /work/input/PDL1.pdb /work/output/relax_test
+```
+
+Sample output:
+
+> ```
+> --- Test Script: Starting Relaxations ---
+> Input PDB: /work/input/PDL1.pdb
+> Base Output PDB Path: /work/output/relax_test
+> Cleaning input PDB file: /work/input/PDL1.pdb...
+> Input PDB file cleaned successfully.
+> Target OpenMM output: /work/output/relax_test_openmm.pdb
+> Target PyRosetta output: /work/output/relax_test_pyrosetta.pdb
+>
+> --- OpenMM Relax Run: single ---
+> Platform: OpenCL
+> Total seconds: 8.79
+> Initial minimization seconds: 0.22
+> Ramp count: 3, MD steps/shake: 5000
+> Best energy (kJ/mol): -20230.12
+> Stage 1: md_steps=5000, md_s=0.87, min_calls=5, min_s=1.09, E0=-20218.36831188202, Emd=-14389.742311477661, Efin=-20187.55224609375
+> Stage 2: md_steps=5000, md_s=0.86, min_calls=5, min_s=1.32, E0=-20187.55224609375, Emd=-14475.73784828186, Efin=-20227.04603099823
+> Stage 3: md_steps=0, md_s=0.00, min_calls=5, min_s=1.18, E0=-20227.04603099823, Emd=None, Efin=-20230.118795394897
+> OpenMM Relaxed PDB saved to: /work/output/relax_test_openmm.pdb
+>
+> --- Starting PyRosetta Relaxation ---
+> Initializing PyRosetta...
+> ┌───────────────────────────────────────────────────────────────────────────────┐
+> │ PyRosetta-4 │
+> │ Created in JHU by Sergey Lyskov and PyRosetta Team │
+> │ (C) Copyright Rosetta Commons Member Institutions │
+> │ │
+> │ NOTE: USE OF PyRosetta FOR COMMERCIAL PURPOSES REQUIRES PURCHASE OF A LICENSE │
+> │ See LICENSE.PyRosetta.md or email license@uw.edu for details │
+> └───────────────────────────────────────────────────────────────────────────────┘
+> PyRosetta-4 2025 [Rosetta PyRosetta4.conda.ubuntu.cxx11thread.serialization.Ubuntu.python310.Release 2025.37+release.df75a9c48e763e52a7aa3f5dfba077f4da88dbf5 2025-09-03T12:23:30] retrieved from: http://www.pyrosetta.org
+> PyRosetta initialized successfully for the test script.
+> PyRosetta is available. Calling pr_relax function for /work/output/relax_test_pyrosetta.pdb...
+> pr_relax function completed.
+> PyRosetta Relaxed PDB saved to: /work/output/relax_test_pyrosetta.pdb
+>
+> --- Test Script: Finished ---
+>
+> ```
+
+The output for the run is in the ./output directory
+
+```
+ls -l ./output/relax_test_*
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] nogroup 150101 Sep 11 14:25 ./output/relax_test_openmm.pdb
+> -rw-rw-r-- 1 [CCRusername] nogroup 150093 Sep 11 14:25 ./output/relax_test_pyrosetta.pdb
+> ```
+
+Exit the container
+
+```
+exit
+```
+
+Sample output:
+
+> ```
+> exit
+> [CCRusername]@cpn-d01-39$
+> ```
+
+Exit the Slurm job
+
+```
+exit
+```
+
+Sample output:
+
+> ```
+> logout
+> salloc: Relinquishing job allocation 21408161
+> salloc: Job allocation 21408161 has been revoked.
+> [CCRusername]@login1$
+> ```
+
+## Long Example run
+
+Start an interactive job with a GPU e.g.
+NOTE: BindCraft only uses one GPU (however, see [Using Multiple GPUs](#using-multiple-gpus) for
+workarounds with a Slurm job)
+
+For this example the runtime is set to eight hours; however, such a run
+is better suited to a Slurm job. The job will not complete in this time
+but, as per https://github.com/martinpacesa/BindCraft/issues/258,
+restarting the job with the same input & output parameters should continue
+the run.
+
+```
+salloc --cluster=ub-hpc --partition=general-compute --qos=general-compute \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=08:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 21409013
+> salloc: job 21409013 queued and waiting for resources
+> salloc: job 21409013 has been allocated resources
+> salloc: Granted job allocation 21409013
+> salloc: Nodes cpn-q07-20 are ready for job
+> ```
+
+Note the Slurm job id ("21409013" in this case) if you want to monitor the
+GPU utilization.
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the top level input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+Start the container:
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft_with_PyRosetta-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-88d8d1ca-8d0d-7333-0585-00e3c88b669c)
+>
+> Apptainer>
+> ```
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+Copy the settings file to the input directory
+
+```
+cp /app/settings_target/PDL1.json /work/input/
+```
+
+Verify the copy:
+
+```
+ls -l /work/input/PDL1.json
+```
+
+Sample output:
+
+> ```
+> -rw-rw-r-- 1 [CCRusername] nogroup 74686 Sep 11 14:35 /work/input/PDL1.json
+> ```
+
+The output path is set by the "design_path" variable in this file
+
+```
+grep design_path /work/input/PDL1.json
+```
+
+Sample output:
+
+> ```
+> "design_path": "/work/output/",
+> ```
+
+Create the output directory:
+
+```
+mkdir -p /work/output/pdl1
+```
+
+...then modify the output path by changing the "design_path" variable to use
+this path. You can use "vim" or "nano" to do this inside the container, however
+"sed" is used here so you can just cut & paste...
+
+```
+sed -E -i -e "/\"design_path\"/s|:.*$|: \"/work/output/pdl1/\",|" /work/input/PDL1.json
+```
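+
+Since the file is JSON, jq can also make this edit and guarantees the output
+stays syntactically valid. The following sketch runs on a scratch copy (the
+temporary directory and sample contents are illustrative), so you can try the
+approach before touching your real settings file:
+
+```
+# Build a scratch copy with the same design_path key, then rewrite it with jq.
+# jq cannot edit in place, so write to a new file and move it over the original.
+tmp=$(mktemp -d)
+printf '{"design_path": "/work/output/", "binder_name": "PDL1"}\n' > "$tmp/PDL1.json"
+jq '.design_path = "/work/output/pdl1/"' "$tmp/PDL1.json" > "$tmp/PDL1.json.new"
+mv "$tmp/PDL1.json.new" "$tmp/PDL1.json"
+jq -r .design_path "$tmp/PDL1.json"   # prints /work/output/pdl1/
+```
+
+The same jq filter applied to /work/input/PDL1.json achieves what the sed
+command above does.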
+
+Since this is a JSON file, it is important to maintain the syntax of the file,
+particularly the quotes " and the comma at the end of the line. You can verify
+the syntax by using the "jq" command to display the file, which will give errors
+if the syntax is incorrect.
+
+```
+jq . /work/input/PDL1.json
+```
+
+Expected output:
+
+> ```
+> {
+> "design_path": "/work/output/pdl1/",
+> "binder_name": "PDL1",
+> "starting_pdb": "/app/example/PDL1.pdb",
+> "chains": "A",
+> "target_hotspot_residues": "56",
+> "lengths": [
+> 65,
+> 150
+> ],
+> "number_of_final_designs": 100
+> }
+> ```
+
+Now run the binder design:
+
+```
+python3 /app/bindcraft.py \
+ --settings /work/input/PDL1.json \
+ --filters /app/settings_filters/default_filters.json \
+ --advanced /app/settings_advanced/default_4stage_multimer.json
+```
+
+Sample output:
+
+> ```
+> Open files limits OK (soft=65536, hard=65536)
+> Available GPUs:
+> NVIDIA V100: gpu
+> ┌───────────────────────────────────────────────────────────────────────────────┐
+> │ PyRosetta-4 │
+> │ Created in JHU by Sergey Lyskov and PyRosetta Team │
+> │ (C) Copyright Rosetta Commons Member Institutions │
+> │ │
+> │ NOTE: USE OF PyRosetta FOR COMMERCIAL PURPOSES REQUIRES PURCHASE OF A LICENSE │
+> │ See LICENSE.PyRosetta.md or email license@uw.edu for details │
+> └───────────────────────────────────────────────────────────────────────────────┘
+> PyRosetta-4 2025 [Rosetta PyRosetta4.conda.ubuntu.cxx11thread.serialization.Ubuntu.python310.Release 2025.37+release.df75a9c48e763e52a7aa3f5dfba077f4da88dbf5 2025-09-03T12:23:30] retrieved from: http://www.pyrosetta.org
+> PyRosetta initialized successfully.
+> Running binder design for target PDL1
+> Design settings used: default_4stage_multimer
+> Filtering designs based on default_filters
+> Starting trajectory: PDL1_l89_s972985
+> Stage 1: Test Logits
+> 1 models [0] recycles 1 hard 0 soft 0.02 temp 1 loss 10.45 helix 1.42 pae 0.78 i_pae 0.77 con 4.55 i_con 4.25 plddt 0.30 ptm 0.54 i_ptm 0.11 rg 5.26
+> 2 models [2] recycles 1 hard 0 soft 0.04 temp 1 loss 13.38 helix 1.97 pae 0.84 i_pae 0.84 con 4.96 i_con 4.00 plddt 0.28 ptm 0.54 i_ptm 0.09 rg 14.91
+> 3 models [2] recycles 1 hard 0 soft 0.05 temp 1 loss 10.73 helix 1.08 pae 0.71 i_pae 0.70 con 4.59 i_con 3.94 plddt 0.42 ptm 0.54 i_ptm 0.14 rg 6.90
+> [...]
+> ```
+
+
+Note that the output for the run will be in the "./output/pdl1/" directory tree
+
+```
+ls -l ./output/pdl1/
+```
+
+Sample output:
+
+> ```
+> total 12
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Accepted
+> -rw-rw-r-- 1 [CCRusername] nogroup 1116 Sep 9 16:35 failure_csv.csv
+> -rw-rw-r-- 1 [CCRusername] nogroup 3890 Sep 9 16:35 final_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 MPNN
+> -rw-rw-r-- 1 [CCRusername] nogroup 3885 Sep 9 16:35 mpnn_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Rejected
+> -rw-rw-r-- 1 [CCRusername] nogroup 1004 Sep 9 16:35 rejected_mpnn_full_stats.csv
+> drwxrwsr-x 2 [CCRusername] nogroup 4096 Sep 9 16:35 Trajectory
+> -rw-rw-r-- 1 [CCRusername] nogroup 592 Sep 9 16:35 trajectory_stats.csv
+> ```
+
+## Monitor the GPU activity of the job
+
+You can monitor the GPU utilization while the job is running. You need the
+Slurm job id for this ("squeue --me" will list your Slurm jobs.)
+
+Log into vortex in another terminal window and from there use the Slurm
+job id in the following command:
+
+```
+srun --jobid="21409013" --export=HOME,TERM,SHELL --pty /bin/bash --login
+```
+
+Sample output:
+
+> ```
+> CCRusername@cpn-q07-20:~$
+> ```
+
+Show the GPU in the Slurm job:
+
+```
+nvidia-smi -L
+```
+
+Sample output:
+
+> ```
+> GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-88d8d1ca-8d0d-7333-0585-00e3c88b669c)
+> ```
+
+Monitor the GPU activity:
+
+```
+nvidia-smi -l
+```
+
+Sample output:
+
+> ```
+> Tue Sep 9 16:43:12 2025
+> +-----------------------------------------------------------------------------------------+
+> | NVIDIA-SMI 570.133.20 Driver Version: 570.133.20 CUDA Version: 12.8 |
+> |-----------------------------------------+------------------------+----------------------+
+> | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
+> | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
+> | | | MIG M. |
+> |=========================================+========================+======================|
+> | 0 Tesla V100-PCIE-32GB On | 00000000:3B:00.0 Off | 0 |
+> | 0% 58C P0 250W / 250W | 34501MiB / 32768MiB | 100% Default |
+> | | | N/A |
+> +-----------------------------------------+------------------------+----------------------+
+>
+> +-----------------------------------------------------------------------------------------+
+> | Processes: |
+> | GPU GI CI PID Type Process name GPU Memory |
+> | ID ID Usage |
+> |=========================================================================================|
+> | 0 N/A N/A 1028955 C python3 34492MiB |
+> +-----------------------------------------------------------------------------------------+
+> [...]
+> ```
+
+The GPU utilization should be 100% for the majority of the time.
+
+## Using Multiple GPUs
+
+As per https://github.com/martinpacesa/BindCraft/issues/258,
+you can use Slurm job arrays to utilize multiple GPUs.
+HOWEVER, this requires a patch that is (at the time of writing) in neither
+FreeBindCraft nor the upstream BindCraft - see
+https://github.com/martinpacesa/BindCraft/pull/264
+Currently this patch needs changes before it can be merged into BindCraft.
+
+Once these changes are in the code, using the following lines in a BindCraft
+Slurm script will run 120 Slurm array jobs, running a maximum of 2 array jobs
+at a time, hence using 2 GPUs at the same time (if the necessary resources are
+available.)
+
+```
+[...]
+#SBATCH --gpus-per-node=1
+#SBATCH --array=0-119%2
+[...]
+```
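+
+Once such a patch lands, the pieces above might fit into a batch script like
+the following sketch. The resource settings mirror the interactive examples in
+this document, but the `apptainer exec ... bash -c` invocation and the settings
+paths are untested assumptions to be adapted, not confirmed values:
+
+```
+#!/bin/bash
+#SBATCH --cluster=ub-hpc
+#SBATCH --partition=general-compute
+#SBATCH --qos=general-compute
+#SBATCH --account=[SlurmAccountName]
+#SBATCH --mem=128GB
+#SBATCH --nodes=1
+#SBATCH --tasks-per-node=12
+#SBATCH --gpus-per-node=1
+#SBATCH --time=08:00:00
+#SBATCH --array=0-119%2
+
+# Each array task gets one GPU; the "%2" caps this at 2 concurrent tasks,
+# i.e. at most 2 GPUs in use at once.
+cd /projects/academic/[YourGroupName]/BindCraft
+
+apptainer exec \
+  -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+  -B /util/software/data/alphafold/params:/app/params \
+  -B ./input:/work/input,./output:/work/output \
+  --nv \
+  ./BindCraft_with_PyRosetta-$(arch).sif \
+  bash -c "source /singularity && python3 /app/bindcraft.py \
+    --settings /work/input/PDL1.json \
+    --filters /app/settings_filters/default_filters.json \
+    --advanced /app/settings_advanced/default_4stage_multimer.json"
+```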
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft.def b/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft.def
new file mode 100644
index 0000000..cf83011
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft.def
@@ -0,0 +1,159 @@
+Bootstrap: docker
+From: nvidia/cuda:12.6.3-base-ubuntu24.04
+
+%labels
+ org.opencontainers.image.source https://github.com/cytokineking/FreeBindCraft
+ org.opencontainers.image.description FreeBindCraft GPU (no PyRosetta)
+ org.opencontainers.image.licenses MIT
+
+%files
+ docker-entrypoint.sh /usr/local/bin/bindcraft-entrypoint.sh
+
+%post -c /bin/bash
+ # Set the timezone, if unset
+ test -h /etc/localtime || ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime
+
+ # fix perms on the bindcraft-entrypoint.sh script
+ chmod 755 /usr/local/bin/bindcraft-entrypoint.sh
+
+ cp /etc/apt/sources.list /etc/apt/sources.list~
+ sed -E -i 's/^# deb-src /deb-src /' /etc/apt/sources.list
+ apt-get -y update
+
+ # Install man & man pages - this section can be removed if not needed
+ # NOTE: Do this before installing anything else so their man pages are installed
+ sed -e '\|/usr/share/man|s|^#*|#|g' -i /etc/dpkg/dpkg.cfg.d/excludes
+ DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils groff dialog man-db manpages manpages-posix manpages-dev
+ rm -f /usr/bin/man
+ dpkg-divert --quiet --remove --rename /usr/bin/man
+
+ # O/S package updates:
+ DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
+
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ tzdata \
+ locales \
+ unzip \
+ wget \
+ git \
+ rsync \
+ curl \
+ tmux \
+ build-essential \
+ pkg-config \
+ procps \
+ jq \
+ nano \
+ vim \
+ apt-file
+
+ # NOTE: apt-file is generally not needed to run, but can be useful during development
+ apt-file update
+
+ # These steps are necessary to configure Perl and can cause issues with Python if omitted
+ sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
+ dpkg-reconfigure --frontend=noninteractive locales
+ update-locale LANG=en_US.UTF-8
+
+ export CMAKE_BUILD_PARALLEL_LEVEL="$(nproc)"
+
+ # Install OpenCL ICD loader and tools; register NVIDIA OpenCL ICD
+ apt-get update && apt-get -y install ocl-icd-libopencl1 clinfo
+ mkdir -p /etc/OpenCL/vendors
+ echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
+
+ # Install Miniforge (Conda) at /miniforge3
+ export CONDA_DIR=/miniforge3
+
+ # set up Miniforge
+    wget -O /tmp/miniforge.sh \
+ "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-$(uname -m).sh"
+ bash /tmp/miniforge.sh -b -p ${CONDA_DIR}
+ rm -f /tmp/miniforge.sh
+
+ # add conda to the head of the PATH
+ export PATH="${CONDA_DIR}/bin:${PATH}"
+
+ # Improve conda robustness and cleanup
+ conda config --set channel_priority strict
+ conda config --set always_yes yes
+ conda update -n base -c conda-forge conda
+ conda clean -afy
+
+ # Create workdir and copy project
+ mkdir -p /app
+ cd /app
+ git clone https://github.com/cytokineking/FreeBindCraft.git .
+
+ # Ensure helper binaries are executable (also handled by installer)
+ chmod 755 /app/functions/dssp ||:
+ chmod 755 /app/functions/sc ||:
+
+ # To add PyRosetta support change the following line and the
+ # similar line in the %environment section below.
+ # NOTE: you must agree to the PyRosetta licensing terms - see:
+ # https://els2.comotion.uw.edu/product/pyrosetta
+ # https://github.com/RosettaCommons/rosetta/blob/main/LICENSE.md#rosetta-software-non-commercial-license-agreement
+ export WITH_PYROSETTA=false
+
+ # Build environment without PyRosetta
+ # Match CUDA to base image; installer pins jax/jaxlib=0.6.0
+ source ${CONDA_DIR}/etc/profile.d/conda.sh
+ EXTRA=""
+ if [ "${WITH_PYROSETTA}" != "true" ]
+ then
+ EXTRA="--no-pyrosetta"
+ fi
+### # fix tar extraction inside the container build
+### sed -E -i.old 's/(^|[[:space:]])tar[[:space:]]+-xvf/\1tar --no-same-owner -xvf/' /app/install_bindcraft.sh
+ # comment all the Alphafold 2 params download (bind mount the params
+ # on /app/params at runtime)
+ sed -E -i '/(params_file|params_model_)/s/^/###/' /app/install_bindcraft.sh
+ #
+ bash /app/install_bindcraft.sh --pkg_manager conda --cuda 12.6 ${EXTRA}
+
+ # Need to make "DAlphaBall.gcc" executable or "apptainer run" will complain
+ # about a "Read-only file system" error trying to fix the perms
+ chmod 755 /app/functions/DAlphaBall.gcc
+
+ # do NOT force /app to be the default directory
+ sed -E -i '/^[[:space:]]*cd[[:space:]]/s/^/#/' docker-entrypoint.sh /usr/local/bin/bindcraft-entrypoint.sh
+
+ # Create the input & output directories (bind mount them at runtime)
+ mkdir -p /work/input/ /work/output/
+ # ..and create links to the poor path choices in the GitHub examples:
+ ln -s /work/input/ /work/in
+ ln -s /work/output/ /work/out
+
+ # fix paths in the PDL1.json file
+ sed -E -i -e "/\"design_path\"/s|:.*$|: \"/work/output/\",|" \
+ -e "/\"starting_pdb\"/s|\./|/app/|" /app/settings_target/PDL1.json
+
+%environment
+ export WITH_PYROSETTA=false
+ export LANG=en_US.UTF-8
+ export PYTHONPATH="/app"
+ export PYTHONUNBUFFERED=1
+ export BINDCRAFT_HOME="/app"
+ export LD_LIBRARY_PATH="/miniforge3/envs/BindCraft/lib:${LD_LIBRARY_PATH}"
+ echo
+ if ! which nvidia-smi >/dev/null 2>&1
+ then
+        # Running on a non-GPU node
+ echo "No GPU detected!!!"
+ else
+ echo "GPU info:"
+ LD_LIBRARY_PATH=/.singularity.d/libs nvidia-smi -L
+ fi
+ echo
+
+%runscript
+ #!/bin/bash
+ export CONDA_DIR=/miniforge3
+ export PATH="${CONDA_DIR}/bin:${PATH}"
+ source "/miniforge3/etc/profile.d/conda.sh"
+ conda activate BindCraft
+ export PS1="BindCraft> "
+ source /usr/local/bin/bindcraft-entrypoint.sh
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft_with_PyRosetta.def b/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft_with_PyRosetta.def
new file mode 100644
index 0000000..2e1a028
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft_with_PyRosetta.def
@@ -0,0 +1,159 @@
+Bootstrap: docker
+From: nvidia/cuda:12.6.3-base-ubuntu24.04
+
+%labels
+ org.opencontainers.image.source https://github.com/cytokineking/FreeBindCraft
+    org.opencontainers.image.description FreeBindCraft GPU (with PyRosetta)
+ org.opencontainers.image.licenses MIT
+
+%files
+ docker-entrypoint.sh /usr/local/bin/bindcraft-entrypoint.sh
+
+%post -c /bin/bash
+ # Set the timezone, if unset
+ test -h /etc/localtime || ln -fs /usr/share/zoneinfo/America/New_York /etc/localtime
+
+ # fix perms on the bindcraft-entrypoint.sh script
+ chmod 755 /usr/local/bin/bindcraft-entrypoint.sh
+
+ cp /etc/apt/sources.list /etc/apt/sources.list~
+ sed -E -i 's/^# deb-src /deb-src /' /etc/apt/sources.list
+ apt-get -y update
+
+ # Install man & man pages - this section can be removed if not needed
+ # NOTE: Do this before installing anything else so their man pages are installed
+ sed -e '\|/usr/share/man|s|^#*|#|g' -i /etc/dpkg/dpkg.cfg.d/excludes
+ DEBIAN_FRONTEND=noninteractive apt-get -y install apt-utils groff dialog man-db manpages manpages-posix manpages-dev
+ rm -f /usr/bin/man
+ dpkg-divert --quiet --remove --rename /usr/bin/man
+
+ # O/S package updates:
+ DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
+
+ DEBIAN_FRONTEND=noninteractive apt-get -y install \
+ tzdata \
+ locales \
+ unzip \
+ wget \
+ git \
+ rsync \
+ curl \
+ tmux \
+ build-essential \
+ pkg-config \
+ procps \
+ jq \
+ nano \
+ vim \
+ apt-file
+
+ # NOTE: apt-file is generally not needed to run, but can be useful during development
+ apt-file update
+
+ # These steps are necessary to configure Perl and can cause issues with Python if omitted
+ sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
+ dpkg-reconfigure --frontend=noninteractive locales
+ update-locale LANG=en_US.UTF-8
+
+ export CMAKE_BUILD_PARALLEL_LEVEL="$(nproc)"
+
+ # Install OpenCL ICD loader and tools; register NVIDIA OpenCL ICD
+ apt-get update && apt-get -y install ocl-icd-libopencl1 clinfo
+ mkdir -p /etc/OpenCL/vendors
+ echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
+
+ # Install Miniforge (Conda) at /miniforge3
+ export CONDA_DIR=/miniforge3
+
+ # set up Miniforge
+    wget -O /tmp/miniforge.sh \
+ "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-$(uname -m).sh"
+ bash /tmp/miniforge.sh -b -p ${CONDA_DIR}
+ rm -f /tmp/miniforge.sh
+
+ # add conda to the head of the PATH
+ export PATH="${CONDA_DIR}/bin:${PATH}"
+
+ # Improve conda robustness and cleanup
+ conda config --set channel_priority strict
+ conda config --set always_yes yes
+ conda update -n base -c conda-forge conda
+ conda clean -afy
+
+ # Create workdir and copy project
+ mkdir -p /app
+ cd /app
+ git clone https://github.com/cytokineking/FreeBindCraft.git .
+
+ # Ensure helper binaries are executable (also handled by installer)
+ chmod 755 /app/functions/dssp ||:
+ chmod 755 /app/functions/sc ||:
+
+    # To remove PyRosetta support change the following line and the
+    # similar line in the %environment section below.
+ # NOTE: you must agree to the PyRosetta licensing terms - see:
+ # https://els2.comotion.uw.edu/product/pyrosetta
+ # https://github.com/RosettaCommons/rosetta/blob/main/LICENSE.md#rosetta-software-non-commercial-license-agreement
+ export WITH_PYROSETTA=true
+
+ # Build environment with PyRosetta
+ # Match CUDA to base image; installer pins jax/jaxlib=0.6.0
+ source ${CONDA_DIR}/etc/profile.d/conda.sh
+ EXTRA=""
+ if [ "${WITH_PYROSETTA}" != "true" ]
+ then
+ EXTRA="--no-pyrosetta"
+ fi
+### # fix tar extraction inside the container build
+### sed -E -i.old 's/(^|[[:space:]])tar[[:space:]]+-xvf/\1tar --no-same-owner -xvf/' /app/install_bindcraft.sh
+ # comment all the Alphafold 2 params download (bind mount the params
+ # on /app/params at runtime)
+ sed -E -i '/(params_file|params_model_)/s/^/###/' /app/install_bindcraft.sh
+ #
+ bash /app/install_bindcraft.sh --pkg_manager conda --cuda 12.6 ${EXTRA}
+
+ # Need to make "DAlphaBall.gcc" executable or "apptainer run" will complain
+ # about a "Read-only file system" error trying to fix the perms
+ chmod 755 /app/functions/DAlphaBall.gcc
+
+ # do NOT force /app to be the default directory
+ sed -E -i '/^[[:space:]]*cd[[:space:]]/s/^/#/' docker-entrypoint.sh /usr/local/bin/bindcraft-entrypoint.sh
+
+ # Create the input & output directories (bind mount them at runtime)
+ mkdir -p /work/input/ /work/output/
+ # ..and create links to the poor path choices in the GitHub examples:
+ ln -s /work/input/ /work/in
+ ln -s /work/output/ /work/out
+
+ # fix paths in the PDL1.json file
+ sed -E -i -e "/\"design_path\"/s|:.*$|: \"/work/output/\",|" \
+ -e "/\"starting_pdb\"/s|\./|/app/|" /app/settings_target/PDL1.json
+
+%environment
+ export WITH_PYROSETTA=true
+ export LANG=en_US.UTF-8
+ export PYTHONPATH="/app"
+ export PYTHONUNBUFFERED=1
+ export BINDCRAFT_HOME="/app"
+ export LD_LIBRARY_PATH="/miniforge3/envs/BindCraft/lib:${LD_LIBRARY_PATH}"
+ echo
+ if ! which nvidia-smi >/dev/null 2>&1
+ then
+        # Running on a non-GPU node
+ echo "No GPU detected!!!"
+ else
+ echo "GPU info:"
+ LD_LIBRARY_PATH=/.singularity.d/libs nvidia-smi -L
+ fi
+ echo
+
+%runscript
+ #!/bin/bash
+ export CONDA_DIR=/miniforge3
+ export PATH="${CONDA_DIR}/bin:${PATH}"
+ source "/miniforge3/etc/profile.d/conda.sh"
+ conda activate BindCraft
+ export PS1="BindCraft> "
+ source /usr/local/bin/bindcraft-entrypoint.sh
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/README.md b/containers/2_ApplicationSpecific/BindCraft/README.md
new file mode 100644
index 0000000..9df32e3
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/README.md
@@ -0,0 +1,206 @@
+# Example BindCraft container
+
+## Building the container
+
+A brief guide to building the BindCraft container follows:
+Please refer to CCR's [container documentation](https://docs.ccr.buffalo.edu/en/latest/howto/containerization/) for more detailed information on building and using Apptainer.
+
+NOTE: for building on the ARM64 platform see [BUILD-ARM64.md](./BUILD-ARM64.md)
+
+1. Start an interactive job
+
+Apptainer is not available on the CCR login nodes and the compile nodes may not provide enough resources for you to build a container. We recommend requesting an interactive job on a compute node to conduct this build process.
+Note: a GPU is NOT needed to build the BindCraft container
+See CCR docs for more info on [running jobs](https://docs.ccr.buffalo.edu/en/latest/hpc/jobs/#interactive-job-submission)
+
+```
+salloc --cluster=ub-hpc --partition=debug --qos=debug --account="[SlurmAccountName]" \
+ --mem=0 --exclusive --time=01:00:00
+```
+
+Sample output:
+
+> ```
+> salloc: Pending job allocation 19781052
+> salloc: job 19781052 queued and waiting for resources
+> salloc: job 19781052 has been allocated resources
+> salloc: Granted job allocation 19781052
+> salloc: Nodes cpn-i14-39 are ready for job
+> CCRusername@cpn-i14-39:~$
+> ```
+
+2. Navigate to your build directory and use the Slurm job local temporary directory for cache
+
+You should now be on the compute node allocated to you. In this example we're using our project directory for our build directory.
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+There are two sample .def files, FreeBindCraft.def and FreeBindCraft_with_PyRosetta.def.
+The former does not install PyRosetta; the latter does.
+To build and use the PyRosetta version you must have a PyRosetta license.
+You must either complete the application for a non-commercial license or purchase
+a commercial license - see [PyRosetta Licensing](https://els2.comotion.uw.edu/product/pyrosetta) for more info.
+
+The following example uses the non-PyRosetta version.
+To build with PyRosetta see [Build with PyRosetta](./BUILD_with_PyRosetta.md).
+
+Download the BindCraft build files, FreeBindCraft.def and docker-entrypoint.sh to this directory
+
+```
+curl -L -o FreeBindCraft.def https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/FreeBindCraft.def
+curl -L -o docker-entrypoint.sh https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/docker-entrypoint.sh
+```
+
+Sample output:
+
+> ```
+> % Total % Received % Xferd Average Speed Time Time Time Current
+> Dload Upload Total Spent Left Speed
+> 100 5413 100 5413 0 0 50616 0 --:--:-- --:--:-- --:--:-- 51066
+> % Total % Received % Xferd Average Speed Time Time Time Current
+> Dload Upload Total Spent Left Speed
+> 100 404 100 404 0 0 7185 0 --:--:-- --:--:-- --:--:-- 7214
+> ```
+
+3. Build your container
+
+Set the apptainer cache dir:
+
+```
+export APPTAINER_CACHEDIR=${SLURMTMPDIR}
+```
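+
+Note that ${SLURMTMPDIR} only exists inside a Slurm job. A slightly more
+defensive variant (the fallback path below is just an illustrative choice)
+guards against the variable being unset:
+
+```
+# Fall back to a per-user directory if SLURMTMPDIR is unset
+# (e.g. if this is ever run outside a Slurm job)
+export APPTAINER_CACHEDIR="${SLURMTMPDIR:-/tmp/${USER}-apptainer-cache}"
+mkdir -p "${APPTAINER_CACHEDIR}"
+```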
+
+Building the BindCraft container takes about half an hour...
+
+```
+apptainer build BindCraft-$(arch).sif FreeBindCraft.def
+```
+
+Sample truncated output:
+
+> ```
+> [....]
+> INFO: Adding environment to container
+> INFO: Creating SIF file...
+> INFO: Build complete: BindCraft-x86_64.sif
+> ```
+
+Exit the Slurm job
+
+```
+exit
+```
+
+Sample output:
+
+> ```
+> logout
+> salloc: Relinquishing job allocation 19781052
+> salloc: Job allocation 19781052 has been revoked.
+> [CCRusername]@login1$
+> ``
+
+## Running the container
+
+Start an interactive job with a single GPU (NOTE: BindCraft only uses
+one GPU), e.g.
+
+```
+salloc --cluster=ub-hpc --partition=general-compute --qos=general-compute \
+ --account="[SlurmAccountName]" --mem=128GB --nodes=1 --cpus-per-task=1 \
+ --tasks-per-node=12 --gpus-per-node=1 --time=05:00:00
+```
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Create the input and output directories
+
+```
+mkdir -p ./input ./output
+```
+
+...then start the BindCraft container instance
+
+```
+apptainer shell \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif
+```
+
+Sample output:
+
+> ```
+>
+> GPU info:
+> GPU 0: Tesla T4 (UUID: GPU-fdc92258-5f5b-9ece-ceca-f1545184cd08)
+>
+> Apptainer>
+> ```
+
+Verify BindCraft is installed:
+
+The following command is run from the "Apptainer> " prompt
+
+```
+source /singularity
+```
+
+Expected output:
+
+> ```
+> BindCraft>
+> ```
+
+All the following commands are run from the "BindCraft> " prompt
+
+```
+python3 "/app/bindcraft.py" --help
+```
+
+Sample output:
+
+> ```
+> Open files limits OK (soft=65536, hard=65536)
+> usage: bindcraft.py [-h] [--settings SETTINGS] [--filters FILTERS] [--advanced ADVANCED] [--no-pyrosetta] [--verbose] [--no-plots] [--no-animations]
+> [--interactive]
+>
+> Script to run BindCraft binder design.
+>
+> options:
+> -h, --help show this help message and exit
+> --settings SETTINGS, -s SETTINGS
+> Path to the basic settings.json file. If omitted in a TTY, interactive mode is used.
+> --filters FILTERS, -f FILTERS
+> Path to the filters.json file used to filter design. If not provided, default will be used.
+> --advanced ADVANCED, -a ADVANCED
+> Path to the advanced.json file with additional design settings. If not provided, default will be used.
+> --no-pyrosetta Run without PyRosetta (skips relaxation and PyRosetta-based scoring)
+> --verbose Enable detailed timing/progress logs
+> --no-plots Disable saving design trajectory plots (overrides advanced settings)
+> --no-animations Disable saving design animations (overrides advanced settings)
+> --interactive Force interactive mode to collect target settings and options
+> ```
+
+See the [EXAMPLE file](./EXAMPLE.md) for more info.
+
+## Sample Slurm scripts
+
+### x86_64 examples
+[BindCraft Slurm example script](https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_example.bash)
+[BindCraft with PyRosetta Slurm example script](https://raw.githubusercontent.com/tonykew/ccr-examples/refs/heads/BindCraft/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_with_PyRosetta_example.bash)
+
+## Documentation Resources
+
+For more information on BindCraft see the [Free BindCraft GitHub page](https://github.com/cytokineking/FreeBindCraft), [the BindCraft wiki](https://github.com/martinpacesa/BindCraft/wiki/De-novo-binder-design-with-BindCraft) and the [BindCraft GitHub page](https://github.com/martinpacesa/BindCraft)
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE.md b/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE.md
new file mode 100644
index 0000000..41bfcb2
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE.md
@@ -0,0 +1,182 @@
+# BindCraft Slurm Example
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Copy the sample Slurm script "slurm_BindCraft_example.bash" to this directory
+then modify for your use case.
+
+You should change the SLURM cluster, partition, qos and account; then change
+the "[YourGroupName]" in the cd command:
+
+for example:
+
+```
+cat slurm_BindCraft_example.bash
+```
+
+abridged sample output:
+
+> ```
+> [...]
+> ## Select a cluster, partition, qos and account that is appropriate for your use case
+> ## Available options and more details are provided in CCR's documentation:
+> ## https://docs.ccr.buffalo.edu/en/latest/hpc/jobs/#slurm-directives-partitions-qos
+> #SBATCH --cluster="ub-hpc"
+> #SBATCH --partition="general-compute"
+> #SBATCH --qos="general-compute"
+> #SBATCH --account="ccradmintest"
+>
+> [...]
+>
+> ## change to the BindCraft directory
+> cd /projects/academic/ccradmintest/BindCraft
+> [...]
+> ```
+
+NOTE: You can add other Slurm options to either script.
+For example, if you want to run on an H100 GPU (with 80GB RAM) add the
+following to the script:
+
+> ```
+> #SBATCH --constraint="H100"
+> ```
+
+The script sets the job runtime to three days (the "general-compute"
+partition maximum):
+
+> ```
+> #SBATCH --time=3-00:00:00
+> ```
+
+The job will be incomplete after this runtime, but resubmitting the job
+(i.e. re-running /app/bindcraft.py with the identical settings file)
+will continue the BindCraft run, which should complete on the next run.
+
+Since we know that the first Slurm job will run out of time, we make
+note of the Slurm Job ID and submit a second Slurm job to run after the
+first run completes:
+
+```
+sbatch ./slurm_BindCraft_example.bash
+```
+
+sample output:
+
+> ```
+> Submitted batch job 21435656 on cluster ub-hpc
+> ```
+
+Submit the "follow-on" job using the above Slurm Job ID:
+
+```
+sbatch --dependency=afterany:21435656 ./slurm_BindCraft_example.bash
+```
+
+sample output:
+
+> ```
+> Submitted batch job 21438459 on cluster ub-hpc
+> ```
+
+There will be two output files, one for each job, with names containing the
+Slurm Job IDs - in this case: "slurm-21435656.out" and "slurm-21438459.out"
+
+Once both Slurm jobs are completed, change to your BindCraft directory:
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+...then cat the job output files
+e.g.
+
+```
+cat slurm-21435656.out
+```
+
+sample output:
+
+> ```
+> Running BindCraft on compute node: cpn-q09-20
+> GPU info:
+> GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-54282c05-37f5-ccab-1be0-20bfd411efdd)
+> [...]
+> Stage 1: Test Logits
+> 1 models [0] recycles 1 hard 0 soft 0.02 temp 1 loss 14.04 helix 2.08 pae 0.86 i_pae 0.88 con 4.90 i_con 4.19 plddt 0.30 ptm 0.48 i_ptm 0.10 rg 16.75
+> 2 models [3] recycles 1 hard 0 soft 0.04 temp 1 loss 11.30 helix 0.97 pae 0.75 i_pae 0.79 con 4.18 i_con 4.01 plddt 0.43 ptm 0.49 i_ptm 0.11 rg 9.76
+> [...]
+> 70 models [4] recycles 1 hard 0 soft 0.80 temp 1 loss 5.04 helix 1.57 pae 0.32 i_pae 0.30 con 2.38 i_con 2.86 plddt 0.73 ptm 0.66 i_ptm 0.49 rg 0.22
+> 71 models [0] recycles 1 hard 0 soft 0.84 temp 1 loss 5.49 helix 1.54 pae 0.37 i_pae 0.45 con 2.35 i_con 3.27 plddt 0.73 ptm 0.58 i_ptm 0.31 rg 0.26
+> slurmstepd: error: *** JOB 21435656 ON cpn-q09-20 CANCELLED AT 2025-09-12T14:16:49 DUE TO TIME LIMIT ***
+> ```
+
+Note that, as expected, this job exceeded the three-day walltime.
+
+```
+cat slurm-21438459.out
+```
+
+sample output:
+
+> ```
+> Running BindCraft on compute node: cpn-q07-24
+> GPU info:
+> GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-b04a1b86-23ee-53c9-e4d1-80ac55520086)
+> [...]
+> Stage 1: Test Logits
+> 1 models [3] recycles 1 hard 0 soft 0.02 temp 1 loss 10.71 helix 1.56 pae 0.80 i_pae 0.78 con 4.55 i_con 4.06 plddt 0.27 ptm 0.54 i_ptm 0.11 rg 6.83
+> 2 models [4] recycles 1 hard 0 soft 0.04 temp 1 loss 9.42 helix 0.86 pae 0.71 i_pae 0.67 con 4.32 i_con 3.97 plddt 0.36 ptm 0.55 i_ptm 0.13 rg 3.10
+> [...]
+> BindCraft run completed successfully
+> ```
+
+The BindCraft output files are (in this case) in the ./output/pdl1/
+directory tree:
+
+```
+ls -laR ./output/pdl1/
+```
+
+Sample output:
+
+> ```
+> output/pdl1/:
+> total 1326
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 9 11:32 .
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 15 15:27 ..
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:31 Accepted
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 1137 Sep 13 11:09 failure_csv.csv
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 151849 Sep 13 11:32 final_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:31 MPNN
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 701197 Sep 13 11:31 mpnn_design_stats.csv
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:09 Rejected
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 428916 Sep 13 11:09 rejected_mpnn_full_stats.csv
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:32 Trajectory
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 73494 Sep 13 11:27 trajectory_stats.csv
+>
+> output/pdl1/Accepted:
+> total 29610
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:31 .
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 9 11:32 ..
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:30 Animation
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 291091 Sep 11 06:33 PDL1_l102_s374445_mpnn1_model1.pdb
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 291172 Sep 11 06:35 PDL1_l102_s374445_mpnn2_model1.pdb
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 292063 Sep 13 02:31 PDL1_l103_s967444_mpnn5_model1.pdb
+> [...]
+> output/pdl1/Trajectory/Relaxed:
+> total 33876
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:26 .
+> drwxrwsr-x 2 [CCRusername] [YourGroupName] 4096 Sep 13 11:32 ..
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 288094 Sep 13 03:19 PDL1_l101_s944021.pdb
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 290119 Sep 12 00:57 PDL1_l102_s245763.pdb
+> [...]
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 290686 Sep 9 12:49 PDL1_l99_s308971.pdb
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 279265 Sep 10 20:41 PDL1_l99_s968849.pdb
+> -rw-rw-r-- 1 [CCRusername] [YourGroupName] 287203 Sep 10 21:43 PDL1_l99_s978526.pdb
+> ```
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE_WITH_PYROSETT.md b/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE_WITH_PYROSETT.md
new file mode 100644
index 0000000..28cf50a
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/SLURM_EXAMPLE_WITH_PYROSETT.md
@@ -0,0 +1,263 @@
+# BindCraft with PyRosetta Slurm Example
+
+Change to your BindCraft directory
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+Copy the sample Slurm script "slurm_BindCraft_with_PyRosetta_example.bash" to
+this directory then modify for your use case.
+
+You should change the SLURM cluster, partition, qos and account; then change
+the "[YourGroupName]" in the cd command:
+
+for example:
+
+```
+cat slurm_BindCraft_with_PyRosetta_example.bash
+```
+
+abridged sample output:
+
+> ```
+> [...]
+> ## Select a cluster, partition, qos and account that is appropriate for your use case
+> ## Available options and more details are provided in CCR's documentation:
+> ## https://docs.ccr.buffalo.edu/en/latest/hpc/jobs/#slurm-directives-partitions-qos
+> #SBATCH --cluster="ub-hpc"
+> #SBATCH --partition="general-compute"
+> #SBATCH --qos="general-compute"
+> #SBATCH --account="ccradmintest"
+>
+> [...]
+>
+> ## change to the BindCraft directory
+> cd /projects/academic/ccradmintest/BindCraft
+> [...]
+> ```
+
+NOTE: You can add other Slurm options to either script.
+For example, if you want to run on an H100 GPU (with 80GB RAM) add the
+following to the script:
+
+> ```
+> #SBATCH --constraint="H100"
+> ```
+
+The script sets the job runtime to three days (the "general-compute"
+partition maximum):
+
+> ```
+> #SBATCH --time=3-00:00:00
+> ```
+
+The job will be incomplete after this runtime, but resubmitting the job
+(i.e. re-running /app/bindcraft.py with the identical settings file)
+will continue the BindCraft run, which should complete on the next run.
+
+Since we know that the first Slurm job will run out of time, we make
+note of the Slurm Job ID and submit a second Slurm job to run after the
+first run completes:
+
+```
+sbatch ./slurm_BindCraft_with_PyRosetta_example.bash
+```
+
+sample output:
+
+> ```
+> Submitted batch job 21507613 on cluster ub-hpc
+> ```
+
+Submit the "follow-on" job using the above Slurm Job ID:
+
+```
+sbatch --dependency=afterany:21507613 ./slurm_BindCraft_with_PyRosetta_example.bash
+```
+
+sample output:
+
+> ```
+> Submitted batch job 21507614 on cluster ub-hpc
+> ```
+
+There will be two output files, one for each job, with names containing the
+Slurm Job IDs - in this case: slurm-21507613.out and slurm-21507614.out
+
+Once both Slurm jobs are completed, change to your BindCraft directory:
+
+```
+cd /projects/academic/[YourGroupName]/BindCraft
+```
+
+...then cat the job output files
+e.g.
+
+```
+cat slurm-21507613.out
+```
+
+sample output:
+
+> ```
+> Running BindCraft on compute node: cpn-h25-29
+>
+> GPU info:
+> GPU 0: NVIDIA A100-PCIE-40GB (UUID: GPU-b5f2372f-bd99-dc66-1270-1b34ec7d2f1f)
+>
+> Open files limits OK (soft=65536, hard=65536)
+> Available GPUs:
+> NVIDIA A100-PCIE-40GB1: gpu
+> ┌───────────────────────────────────────────────────────────────────────────────┐
+> │ PyRosetta-4 │
+> │ Created in JHU by Sergey Lyskov and PyRosetta Team │
+> │ (C) Copyright Rosetta Commons Member Institutions │
+> │ │
+> │ NOTE: USE OF PyRosetta FOR COMMERCIAL PURPOSES REQUIRES PURCHASE OF A LICENSE │
+> │ See LICENSE.PyRosetta.md or email license@uw.edu for details │
+> └───────────────────────────────────────────────────────────────────────────────┘
+> PyRosetta-4 2025 [Rosetta PyRosetta4.conda.ubuntu.cxx11thread.serialization.Ubuntu.python310.Release 2025.37+release.df75a9c48e763e52a7aa3f5dfba077f4da88dbf5 2025-09-03T12:23:30] retrieved from: http://www.pyrosetta.org
+> PyRosetta initialized successfully.
+> Running binder design for target PDL1
+> Design settings used: default_4stage_multimer
+> Filtering designs based on default_filters
+> Starting trajectory: PDL1_l84_s935958
+> Stage 1: Test Logits
+> 1 models [1] recycles 1 hard 0 soft 0.02 temp 1 loss 9.91 helix 1.34 pae 0.79 i_pae 0.78 con 4.52 i_con 4.15 plddt 0.31 ptm 0.55 i_ptm 0.10 rg 3.80
+> 2 models [1] recycles 1 hard 0 soft 0.04 temp 1 loss 9.38 helix 0.65 pae 0.67 i_pae 0.69 con 4.06 i_con 4.09 plddt 0.45 ptm 0.56 i_ptm 0.13 rg 3.31
+> 3 models [2] recycles 1 hard 0 soft 0.05 temp 1 loss 8.69 helix 0.95 pae 0.65 i_pae 0.67 con 3.92 i_con 3.88 plddt 0.45 ptm 0.56 i_ptm 0.16 rg 2.51
+> [...]
+> Unmet filter conditions for PDL1_l88_s461145_mpnn7
+> Unmet filter conditions for PDL1_l88_s461145_mpnn8
+> slurmstepd: error: *** JOB 21507613 ON cpn-h25-29 CANCELLED AT 2025-09-18T18:57:26 DUE TO TIME LIMIT ***
+> ```
+
+
+```
+cat slurm-21507614.out
+```
+
+sample output:
+
+> ```
+> Running BindCraft on compute node: cpn-q07-20
+>
+> GPU info:
+> GPU 0: Tesla V100-PCIE-32GB (UUID: GPU-9aed1269-fe05-ca6a-6f44-56688afcbe50)
+>
+> Open files limits OK (soft=65536, hard=65536)
+> Available GPUs:
+> Tesla V100-PCIE-32GB1: gpu
+> ┌───────────────────────────────────────────────────────────────────────────────┐
+> │ PyRosetta-4 │
+> │ Created in JHU by Sergey Lyskov and PyRosetta Team │
+> │ (C) Copyright Rosetta Commons Member Institutions │
+> │ │
+> │ NOTE: USE OF PyRosetta FOR COMMERCIAL PURPOSES REQUIRES PURCHASE OF A LICENSE │
+> │ See LICENSE.PyRosetta.md or email license@uw.edu for details │
+> └───────────────────────────────────────────────────────────────────────────────┘
+> PyRosetta-4 2025 [Rosetta PyRosetta4.conda.ubuntu.cxx11thread.serialization.Ubuntu.python310.Release 2025.37+release.df75a9c48e763e52a7aa3f5dfba077f4da88dbf5 2025-09-03T12:23:30] retrieved from: http://www.pyrosetta.org
+> PyRosetta initialized successfully.
+> Running binder design for target PDL1
+> Design settings used: default_4stage_multimer
+> Filtering designs based on default_filters
+> Starting trajectory: PDL1_l133_s279404
+> Stage 1: Test Logits
+> 1 models [1] recycles 1 hard 0 soft 0.02 temp 1 loss 16.15 helix 2.11 pae 0.89 i_pae 0.88 con 5.05 i_con 3.99 plddt 0.28 ptm 0.46 i_ptm 0.09 rg 23.94
+> 2 models [1] recycles 1 hard 0 soft 0.04 temp 1 loss 15.37 helix 1.50 pae 0.84 i_pae 0.83 con 4.64 i_con 3.93 plddt 0.37 ptm 0.47 i_ptm 0.18 rg 22.42
+> 3 models [3] recycles 1 hard 0 soft 0.05 temp 1 loss 11.67 helix 1.12 pae 0.80 i_pae 0.83 con 4.42 i_con 4.24 plddt 0.39 ptm 0.46 i_ptm 0.10 rg 9.45
+> [...]
+> Base AF2 filters not passed for PDL1_l95_s421201_mpnn18, skipping full interface scoring.
+> Base AF2 filters not passed for PDL1_l95_s421201_mpnn19, skipping full interface scoring.
+> Unmet filter conditions for PDL1_l95_s421201_mpnn20
+> Found 1 MPNN designs passing filters
+>
+> Design and validation of trajectory PDL1_l95_s421201 took: 0 hours, 20 minutes, 26 seconds
+> Target number 100 of designs reached! Reranking...
+> Files in folder '/work/output/pdl1/Trajectory/Animation' have been zipped and removed.
+> Files in folder '/work/output/pdl1/Trajectory/Plots' have been zipped and removed.
+> Finished all designs. Script execution for 12 trajectories took: 5 hours, 8 minutes, 16 seconds
+>
+> BindCraft run completed successfully
+> ```
+
+We can look at the runtime for these two jobs:
+
+```
+sacct --format="Elapsed" -j 21507613
+```
+
+Job 21507613 runtime info:
+
+> ```
+> Elapsed
+> ----------
+> 3-00:00:20
+> 3-00:00:21
+> 3-00:00:21
+> ```
+
+```
+sacct --format="Elapsed" -j 21507614
+```
+
+Job 21507614 runtime info:
+
+> ```
+> Elapsed
+> ----------
+> 05:09:05
+> 05:09:05
+> 05:09:05
+> ```
+
+...so the full run needed only a few hours beyond the three-day queue limit.
+
+The BindCraft output files are (in this case) in the ./output/pdl1/
+directory tree:
+
+```
+ls -laR ./output/pdl1/
+```
+
+Sample output:
+
+> ```
+> output/pdl1/:
+> total 2091
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 15 15:28 .
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 15 15:27 ..
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:12 Accepted
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 1144 Sep 19 19:28 failure_csv.csv
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 154700 Sep 19 19:28 final_design_stats.csv
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:28 MPNN
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 1150297 Sep 19 19:28 mpnn_design_stats.csv
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:28 Rejected
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 726597 Sep 19 19:28 rejected_mpnn_full_stats.csv
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:29 Trajectory
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 106598 Sep 19 19:08 trajectory_stats.csv
+>
+> output/pdl1/Accepted:
+> total 29602
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:12 .
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 15 15:28 ..
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:12 Animation
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 291114 Sep 16 12:56 PDL1_l103_s466116_mpnn1_model2.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 292167 Sep 16 13:02 PDL1_l103_s466116_mpnn9_model2.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 292815 Sep 17 03:40 PDL1_l103_s530314_mpnn3_model1.pdb
+> [...]
+> output/pdl1/Trajectory/Relaxed:
+> total 49282
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:07 .
+> drwxrwsr-x 2 tkewtest grp-ccradmintest 4096 Sep 19 19:29 ..
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 281880 Sep 17 11:44 PDL1_l100_s775462.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 289899 Sep 17 10:29 PDL1_l101_s615526.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 284796 Sep 17 19:04 PDL1_l101_s96790.pdb
+> [...]
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 281394 Sep 18 08:09 PDL1_l98_s487657.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 280260 Sep 16 11:58 PDL1_l98_s617776.pdb
+> -rw-rw-r-- 1 tkewtest grp-ccradmintest 281637 Sep 16 01:27 PDL1_l98_s684300.pdb
+> ```
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/docker-entrypoint.sh b/containers/2_ApplicationSpecific/BindCraft/docker-entrypoint.sh
new file mode 100644
index 0000000..c77c2e2
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/docker-entrypoint.sh
@@ -0,0 +1,16 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+# Increase file descriptor limits if possible (BindCraft can open many files)
+if command -v prlimit >/dev/null 2>&1; then
+ prlimit --pid $$ --nofile=65536:65536 || true
+else
+ # Fallback: try ulimit within shell
+ ulimit -n 65536 || true
+fi
+
+# Ensure /app is working directory
+cd /app
+
+# Exec passed command (required for Modal ENTRYPOINT compatibility)
+exec "$@"
diff --git a/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_example.bash b/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_example.bash
new file mode 100644
index 0000000..7f7c65e
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_example.bash
@@ -0,0 +1,91 @@
+#!/bin/bash -l
+
+## This file is intended to serve as a template to be downloaded and modified for your use case.
+## For more information, refer to the following resources whenever referenced in the script-
+## README- https://github.com/ubccr/ccr-examples/tree/main/slurm/README.md
+## DOCUMENTATION- https://docs.ccr.buffalo.edu/en/latest/hpc/jobs
+
+## NOTE: This Slurm script was tested with the ccrsoft/2024.04 software release
+
+## Select a cluster, partition, qos and account that is appropriate for your use case
+## Available options and more details are provided in CCR's documentation:
+## https://docs.ccr.buffalo.edu/en/latest/hpc/jobs/#slurm-directives-partitions-qos
+#SBATCH --cluster="[cluster]"
+#SBATCH --partition="[partition]"
+#SBATCH --qos="[qos]"
+#SBATCH --account="[SlurmAccountName]"
+
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1
+#SBATCH --cpus-per-task=4
+## BindCraft only uses one GPU
+#SBATCH --gpus-per-node=1
+#SBATCH --mem=64GB
+
+## Job runtime limit, the job will be canceled once this limit is reached. Format- dd-hh:mm:ss
+#SBATCH --time=3-00:00:00
+
+## change to the BindCraft directory
+cd /projects/academic/[YourGroupName]/BindCraft
+
+## Make sure the top input and output directories exist
+mkdir -p ./input ./output
+
+echo "Running BindCraft on compute node: $(hostname -s)"
+
+settings_file="./input/PDL1.json"
+
+# create the settings JSON file
+cat > "${settings_file}" << EOF
+{
+ "design_path": "/work/output/pdl1/",
+ "binder_name": "PDL1",
+ "starting_pdb": "/app/example/PDL1.pdb",
+ "chains": "A",
+ "target_hotspot_residues": "56",
+ "lengths": [65, 150],
+ "number_of_final_designs": 100
+}
+EOF
+
+# make sure the output directory for this job exists using the
+# "design_path" entry from the settings file
+output_dir="$(jq -r .design_path "${settings_file}")"
+# jq -r prints the string "null" when the key is missing, so check for that too
+if [ -z "${output_dir}" ] || [ "${output_dir}" = "null" ]
+then
+ echo "Check the settings file \"${settings_file}\" - \"design_path\" not configured" >&2
+ exit 1
+fi
+apptainer run \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif \
+ mkdir -p "${output_dir}" > /dev/null
+if [ "$?" != "0" ]
+then
+ echo "Failed to create the BindCraft output dir \"${output_dir}\" - bailing" >&2
+ exit 1
+fi
+
+apptainer run \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif \
+ python3 /app/bindcraft.py \
+ --settings "${settings_file}" \
+ --filters /app/settings_filters/default_filters.json \
+ --advanced /app/settings_advanced/default_4stage_multimer.json \
+ --no-pyrosetta
+
+if [ "$?" = "0" ]
+then
+ echo
+ echo "BindCraft run completed successfully"
+else
+ echo
+ echo "WARNING: BindCraft run FAILED!" >&2
+fi
+
diff --git a/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_with_PyRosetta_example.bash b/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_with_PyRosetta_example.bash
new file mode 100644
index 0000000..d9fc3d7
--- /dev/null
+++ b/containers/2_ApplicationSpecific/BindCraft/slurm_BindCraft_with_PyRosetta_example.bash
@@ -0,0 +1,90 @@
+#!/bin/bash -l
+
+## This file is intended to serve as a template to be downloaded and modified for your use case.
+## For more information, refer to the following resources whenever referenced in the script-
+## README- https://github.com/ubccr/ccr-examples/tree/main/slurm/README.md
+## DOCUMENTATION- https://docs.ccr.buffalo.edu/en/latest/hpc/jobs
+
+## NOTE: This Slurm script was tested with the ccrsoft/2024.04 software release
+
+## Select a cluster, partition, qos and account that is appropriate for your use case
+## Available options and more details are provided in CCR's documentation:
+## https://docs.ccr.buffalo.edu/en/latest/hpc/jobs/#slurm-directives-partitions-qos
+#SBATCH --cluster="[cluster]"
+#SBATCH --partition="[partition]"
+#SBATCH --qos="[qos]"
+#SBATCH --account="[SlurmAccountName]"
+
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1
+#SBATCH --cpus-per-task=4
+## BindCraft only uses one GPU
+#SBATCH --gpus-per-node=1
+#SBATCH --mem=64GB
+
+## Job runtime limit, the job will be canceled once this limit is reached. Format- dd-hh:mm:ss
+#SBATCH --time=3-00:00:00
+
+## change to the BindCraft directory
+cd /projects/academic/[YourGroupName]/BindCraft
+
+## Make sure the top input and output directories exist
+mkdir -p ./input ./output
+
+echo "Running BindCraft on compute node: $(hostname -s)"
+
+settings_file="./input/PDL1.json"
+
+# create the settings JSON file
+cat > "${settings_file}" << EOF
+{
+ "design_path": "/work/output/pdl1/",
+ "binder_name": "PDL1",
+ "starting_pdb": "/app/example/PDL1.pdb",
+ "chains": "A",
+ "target_hotspot_residues": "56",
+ "lengths": [65, 150],
+ "number_of_final_designs": 100
+}
+EOF
+
+# make sure the output directory for this job exists using the
+# "design_path" entry from the settings file
+output_dir="$(jq -r .design_path "${settings_file}")"
+# jq -r prints the string "null" when the key is missing, so check for that too
+if [ -z "${output_dir}" ] || [ "${output_dir}" = "null" ]
+then
+ echo "Check the settings file \"${settings_file}\" - \"design_path\" not configured" >&2
+ exit 1
+fi
+apptainer run \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft-$(arch).sif \
+ mkdir -p "${output_dir}" > /dev/null
+if [ "$?" != "0" ]
+then
+ echo "Failed to create the BindCraft output dir \"${output_dir}\" - bailing" >&2
+ exit 1
+fi
+
+apptainer run \
+ -B /projects:/projects,/scratch:/scratch,/util:/util,/vscratch:/vscratch \
+ -B /util/software/data/alphafold/params:/app/params \
+ -B ./input:/work/input,./output:/work/output \
+ --nv \
+ ./BindCraft_with_PyRosetta-x86_64.sif \
+ python3 /app/bindcraft.py \
+ --settings "${settings_file}" \
+ --filters /app/settings_filters/default_filters.json \
+ --advanced /app/settings_advanced/default_4stage_multimer.json
+
+if [ "$?" = "0" ]
+then
+ echo
+ echo "BindCraft run completed successfully"
+else
+ echo
+ echo "WARNING: BindCraft run FAILED!" >&2
+fi
+