
Physics-constrained machine learning for rapid, high-resolution diffractive imaging

This repository contains the codebase for the methods presented in the papers "Physics Constrained Unsupervised Deep Learning for Rapid, High Resolution Scanning Coherent Diffraction Reconstruction" and "Towards generalizable deep ptychography neural networks".

Overview

PtychoPINN is a library of self-supervised neural networks for ptychographic reconstruction. Its main advantages are speed (relative to iterative solvers) and resolution (relative to other ML methods).

For Developers

Start with the Unified Developer Guide for architecture, data flow, and development conventions.

Dual-Backend Architecture

PtychoPINN supports both TensorFlow and PyTorch backends:

  • Default Backend: TensorFlow remains the default for backward compatibility.
  • PyTorch Backend: A PyTorch implementation is available via Lightning orchestration (ptycho_torch/workflows/components.py), covering training, checkpointing, inference, and stitching.
  • Backend Selection: Set the backend through the TrainingConfig.backend or InferenceConfig.backend field ('tensorflow' or 'pytorch'), or from the command line as sketched below.
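
For example, the backend can be selected at the command line with the --backend flag described under Workflow Status below. A minimal sketch using the same placeholder paths as the Usage section, with all other options left at their defaults:

# Train with the PyTorch backend instead of the TensorFlow default
ptycho_train --train_data_file <train_path.npz> --test_data_file <test_path.npz> --output_dir <my_run> --backend pytorch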

Architecture diagram

Installation

conda create -n ptycho python=3.11
conda activate ptycho
pip install .
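
To check that the console entry points landed on your PATH (assuming the CLIs expose the usual argparse --help flag):

# Should print the available training options
ptycho_train --help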

Usage

Ptychodus

Ptychodus supports PtychoPINN-CNN. See, for example: https://github.com/AdvancedPhotonSource/ptychodus/blob/misc/src/ptychodus/scripts/ptychopinn_tf_test.py

Training

ptycho_train --train_data_file <train_path.npz> --test_data_file <test_path.npz> --output_dir <my_run>

Inference

ptycho_inference --model_path <my_run> --test_data <test_path.npz> --output_dir <inference_out>

Workflow Status

Use These by Default

  • Train with scripts/training/train.py (or ptycho_train).
  • Run inference with scripts/inference/inference.py (or ptycho_inference).
  • Select the backend with --backend tensorflow or --backend pytorch.
  • Use --n_groups to set the sample count; add --n_subsample only when you need separate control over subsampling.
  • For PyTorch execution flags (a combined example follows this list):
    • Unified scripts: use --torch-accelerator and --torch-logger
    • PyTorch-native CLIs: use --accelerator and --logger
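
Putting these defaults together, a sketch of a full PyTorch-backend run through the default entry points. The paths, sample count, and accelerator value are placeholders, and the exact flag set accepted by each script may vary, so treat this as illustrative:

# Train on the PyTorch backend with an explicit sample count and accelerator
ptycho_train --train_data_file <train_path.npz> --test_data_file <test_path.npz> --output_dir <my_run> --backend pytorch --n_groups <n> --torch-accelerator gpu

# Run inference against the trained model
ptycho_inference --model_path <my_run> --test_data <test_path.npz> --output_dir <inference_out> --backend pytorch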

Also Supported

  • Grid-lines multi-model runs:
    • scripts/studies/grid_lines_compare_wrapper.py
  • Grid-lines Torch runner:
    • scripts/studies/grid_lines_torch_runner.py
    • Architectures: fno, hybrid, stable_hybrid, fno_vanilla, hybrid_resnet

Older Flags and Modes

  • The --n_images training flag is older; use --n_groups instead.
  • The PyTorch --device and --disable_mlflow flags are older; use --accelerator and --logger none instead (a rough mapping follows this list).
  • The MLflow-only inference mode in ptycho_torch/inference.py (--run_id, --infer_dir) is still available, but it is not the default path.
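
A rough mapping from the older flags to their current equivalents; the cuda/gpu values are illustrative assumptions and all other arguments are elided:

# Older training flag                ->  current flag
#   --n_images <n>                       --n_groups <n>
# Older PyTorch-native flags         ->  current flags
#   --device cuda --disable_mlflow       --accelerator gpu --logger none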

See examples and READMEs under scripts/.
