
DrEvalPy: Python Cancer Cell Line Drug Response Prediction Suite

Available on PyPI. Read the documentation at https://drevalpy.readthedocs.io/.

*Overview of the DrEval framework.* Via input options, implemented state-of-the-art models can be compared against baselines of varying complexity. We address obstacles to progress in the field at each point in our pipeline: Our framework is available on PyPI and nf-core, and we follow FAIReR standards for optimal reproducibility. DrEval is easily extendable, as demonstrated here with an implementation of a proteomics-based random forest. Custom viability data can be preprocessed with CurveCurator, leading to more consistent data and metrics. DrEval supports five widely used datasets with application-aware train/test splits that enable detecting weak generalization. Models are free to use the provided cell line and drug features or custom ones. The pipeline supports randomization-based ablation studies and performs robust hyperparameter tuning for all models. Evaluation is conducted using meaningful, bias-resistant metrics to avoid inflated results from artifacts such as Simpson's paradox. All results are compiled into an interactive HTML report.

Overview

Focus on innovating your models: DrEval handles the rest!

  - DrEval is a toolkit that ensures drug response prediction evaluations are statistically sound, biologically meaningful, and reproducible.
  - Focus on model innovation while using our automated, standardized evaluation protocols and preprocessing workflows.
  - A flexible model interface supports all model types (e.g., machine learning, statistical, and network-based approaches).
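To illustrate what plugging into such a model interface looks like, here is a minimal, self-contained sketch. The class and method names (`DrugResponseModel`, `train`, `predict`) are hypothetical stand-ins, not the actual drevalpy API; consult the documentation for the real base class.

```python
from abc import ABC, abstractmethod
from statistics import mean


class DrugResponseModel(ABC):
    """Hypothetical sketch of a DrEval-style model interface."""

    @abstractmethod
    def train(self, cell_line_features, drug_features, responses):
        ...

    @abstractmethod
    def predict(self, cell_line_features, drug_features):
        ...


class MeanBaseline(DrugResponseModel):
    """Simplest possible baseline: predict the global mean response."""

    def train(self, cell_line_features, drug_features, responses):
        self.mean_response = mean(responses)

    def predict(self, cell_line_features, drug_features):
        # One prediction per cell line/drug pair, ignoring all features.
        return [self.mean_response] * len(cell_line_features)


model = MeanBaseline()
model.train([[0.1], [0.2], [0.3]], [[1.0], [1.0], [0.5]], [0.4, 0.6, 0.8])
print(model.predict([[0.5], [0.9]], [[1.0], [0.5]]))
```

Because every model exposes the same `train`/`predict` shape, the surrounding pipeline can swap baselines and novel models freely during benchmarking.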

Use DrEval to Build Drug Response Models That Have an Impact

  1. A maintained, up-to-date baseline catalog, so literature models do not need to be re-implemented
  2. Gold-standard datasets for benchmarking
  3. Consistent, application-driven evaluation
  4. Ablation studies with permutation tests
  5. Cross-study evaluation for generalization analysis
  6. An optimized Nextflow pipeline for fast experiments
  7. Easy-to-use hyperparameter tuning
  8. Paper-ready visualizations to display performance
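The randomization-based ablation in item 4 can be illustrated with a short, self-contained sketch (this is not the drevalpy API): shuffle one feature across samples and measure how much prediction error grows; a large increase means the model genuinely relies on that feature.

```python
import random
from statistics import mean


def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return mean((t - p) ** 2 for t, p in zip(y_true, y_pred))


def predict(xs):
    """Toy stand-in for a trained model: identity on a single feature."""
    return xs


# Construct data where the response depends perfectly on the feature,
# so the un-ablated (baseline) error is exactly zero.
features = [i / 10 for i in range(20)]
targets = features[:]

baseline_mse = mse(targets, predict(features))

# Ablation: permute the feature across samples many times and record
# how much the error increases on average.
rng = random.Random(0)
ablated_errors = []
for _ in range(100):
    shuffled = features[:]
    rng.shuffle(shuffled)
    ablated_errors.append(mse(targets, predict(shuffled)))

mean_ablated_mse = mean(ablated_errors)
print(baseline_mse, mean_ablated_mse)  # ablated error is clearly larger
```

A permutation test then compares the observed performance drop against the distribution of drops under repeated shuffling to assess significance.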

This project is a collaboration of the Technical University of Munich (TUM, Germany) and the Freie Universität Berlin (FU, Germany).