Conversation

@Ulthran Ulthran commented Jun 12, 2025

Summary

  • optionally run OpenAI analysis when sunbeam run fails
  • include helpful message if openai isn't installed
  • test the new --ai flag

Testing

  • black --check --line-length=88 .
  • snakefmt --check sunbeam/workflow/Snakefile sunbeam/workflow/rules/*.smk
  • pytest tests/unit tests/e2e/test_sunbeam_run.py::test_sunbeam_run_ai_option -vv

https://chatgpt.com/codex/tasks/task_e_684b204d251c8323939eb91e01f4e71f

Copilot AI review requested due to automatic review settings June 12, 2025 20:03

Copilot AI left a comment


Pull Request Overview

This PR introduces an optional AI diagnostic feature for failed runs of the Sunbeam pipeline. The changes include the integration of a new --ai flag in the CLI, the addition of an analyze_failure function that leverages the OpenAI API, and an end-to-end test verifying this behavior.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File                          | Description
tests/e2e/test_sunbeam_run.py | Added an end-to-end test for the new --ai flag and its diagnostic behavior
sunbeam/scripts/run.py        | Introduced analyze_failure for AI-based failure diagnosis and updated the main logic
Comments suppressed due to low confidence (1)

tests/e2e/test_sunbeam_run.py:124

  • [nitpick] Consider renaming 'fake_analyze' to 'mock_analyze_failure' for improved clarity in the test context.
def fake_analyze(log):
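The stubbing pattern the test appears to use can be sketched as follows, with the stub renamed to mock_analyze_failure per the review suggestion. This is a self-contained illustration only: the real test would monkeypatch `sunbeam.scripts.run` itself, and the log text and return value below are invented.

```python
import types

# Stand-in for the sunbeam.scripts.run module (the real test would
# monkeypatch the actual module rather than a SimpleNamespace).
run = types.SimpleNamespace()


def _real_analyze_failure(log):
    raise RuntimeError("would hit the OpenAI API; never called in tests")


run.analyze_failure = _real_analyze_failure


def run_pipeline(ai: bool):
    # Simulates a failed `sunbeam run`: with --ai the failure log is passed
    # to the analyzer, otherwise no diagnosis is produced.
    log = "Error in rule qc: missing input files"
    return run.analyze_failure(log) if ai else None


def mock_analyze_failure(log):
    # Stub that records its argument so the test can assert on the log text.
    mock_analyze_failure.last_log = log
    return "AI diagnosis: check input file paths"


run.analyze_failure = mock_analyze_failure  # the "monkeypatch" step
```

Stubbing the analyzer keeps the e2e test hermetic: it verifies the --ai wiring without an API key or network access.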

