
Conversation

andrewgazelka

@andrewgazelka andrewgazelka commented Jul 13, 2025

Warning: I made this with AI. If you think this is a good design, I can manually refactor the code to make it more readable.

Inspiration: https://x.com/mitsuhiko/status/1939105797448872265 notes that LLMs perform better when they only need to rerun failed tests, and that there is no easy way to do this in Rust.

Similar to pytest's --last-failed feature, this adds the ability to rerun only tests that failed in the previous run. This helps developers iterate faster when fixing failing tests.

New CLI options:

  • --last-failed / --lf: Run only tests that failed in the previous run
  • --failed-last / --fl: Run all tests, but prioritize failed tests first
  • --clear-failed: Clear the failed test history

Failed tests are stored in target/nextest/<profile>/<profile>-last-failed.json and are automatically updated after each test run. Tests that pass are removed from the failed list.
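The load/update cycle described above can be sketched in Rust. This is a hypothetical illustration, not the PR's actual `last_failed` module: the real implementation stores JSON, but this dependency-free sketch uses one test name per line to avoid pulling in serde.

```rust
use std::collections::BTreeSet;
use std::fs;
use std::io;
use std::path::Path;

/// Load the set of previously failed tests; an empty set if no history exists.
/// (Hypothetical sketch: the PR uses a JSON file, this uses one name per line.)
fn load_failed(path: &Path) -> io::Result<BTreeSet<String>> {
    match fs::read_to_string(path) {
        Ok(s) => Ok(s.lines().map(str::to_owned).collect()),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(BTreeSet::new()),
        Err(e) => Err(e),
    }
}

/// After a run, overwrite the history with the current failures.
/// Tests that passed are simply absent from `failed_now`, so they drop out.
fn update_failed(path: &Path, failed_now: &BTreeSet<String>) -> io::Result<()> {
    let body = failed_now.iter().cloned().collect::<Vec<_>>().join("\n");
    fs::write(path, body)
}

fn main() -> io::Result<()> {
    let path = Path::new("last-failed.txt");
    // First run: two tests fail.
    let mut failed: BTreeSet<String> =
        ["tests::a".to_string(), "tests::b".to_string()].into();
    update_failed(path, &failed)?;
    // Second run: tests::a passes, so it is removed from the history.
    failed.remove("tests::a");
    update_failed(path, &failed)?;
    let remaining = load_failed(path)?;
    assert_eq!(remaining.len(), 1);
    assert!(remaining.contains("tests::b"));
    fs::remove_file(path)
}
```

A `--last-failed` run would then intersect `load_failed`'s result with the discovered test list to build its filter.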

This implementation:

  • Adds a new last_failed module in nextest-runner for data persistence
  • Integrates with the test execution flow to track failures
  • Uses the existing test filtering mechanism for --last-failed
  • Updates documentation to describe the new feature

Copy link

codecov bot commented Jul 13, 2025

Codecov Report

Attention: Patch coverage is 75.58685% with 52 lines in your changes missing coverage. Please review.

Project coverage is 79.38%. Comparing base (ec6ce90) to head (197ac74).

Files with missing lines                     Patch %   Missing
cargo-nextest/src/dispatch.rs                80.15%    25
nextest-runner/src/reporter/last_failed.rs   71.08%    24
cargo-nextest/src/errors.rs                  25.00%    3
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2472      +/-   ##
==========================================
- Coverage   79.42%   79.38%   -0.04%     
==========================================
  Files         107      108       +1     
  Lines       23964    24171     +207     
==========================================
+ Hits        19033    19188     +155     
- Misses       4931     4983      +52     


The integration tests were expecting specific output messages from the
--last-failed, --clear-failed, and --last-failed with no history cases.
Updated the messages to match test expectations.
@sunshowers
Member

sunshowers commented Jul 16, 2025

Thank you for the contribution! This is definitely a feature I would like to add to nextest. However, I would go about it in a more composable manner:

  1. Record a full log of events received by the reporter (see [WIP] [nextest-runner] initial support for recording runs #1265 for an initial attempt that can probably be resurrected).
  2. Allow loading the event dump in.
  3. Add a cargo nextest rerun command which looks at the current list of tests + what's completed, and does a set difference to find out which tests to run.

The goal is to use the event log not just for rerunning tests but also for replaying test runs, etc.
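The set difference in step 3 is straightforward once both sets are in hand. A minimal sketch, assuming the current test list and the set of completed tests have already been extracted from the event log (the function name `tests_to_rerun` is illustrative, not part of nextest):

```rust
use std::collections::HashSet;

/// Given the currently discovered tests and the tests that completed in the
/// recorded run, the tests to rerun are the set difference: current minus
/// completed. (Hypothetical sketch of the proposed `cargo nextest rerun`.)
fn tests_to_rerun<'a>(
    current: &'a HashSet<String>,
    completed: &'a HashSet<String>,
) -> HashSet<&'a String> {
    current.difference(completed).collect()
}

fn main() {
    let current: HashSet<String> =
        ["a", "b", "c"].iter().map(|s| s.to_string()).collect();
    let completed: HashSet<String> =
        ["a"].iter().map(|s| s.to_string()).collect();
    let rerun = tests_to_rerun(&current, &completed);
    // "b" and "c" did not complete, so they still need to run.
    assert_eq!(rerun.len(), 2);
}
```

A nice property of this formulation is that it handles interrupted runs for free: any test that never reported completion is automatically in the rerun set.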

I know it's more work, but would you be willing to do it? It would be really valuable.

@sunshowers
Member

Actually I like passing in --last-failed because it lets users say !! --last-failed or !! --lf in their shells. Something to consider though is what we expect to happen if the set of tests to run changes. It would be worth making a table to ensure we're reasoning about all the cases.
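One way to start on that table is to enumerate the combinations of "in the failed history" and "exists in the current test set". The policy below is purely illustrative, not what nextest (or this PR) implements:

```rust
/// Hypothetical enumeration of how --last-failed could treat each test
/// when the test set changes between runs. Names and policy are
/// illustrative assumptions, not nextest behavior.
#[derive(Debug, PartialEq)]
enum Case {
    FailedStillPresent, // in history, still exists -> rerun it
    FailedRemoved,      // in history, deleted since -> drop from history
    NewOrPassing,       // not in history -> skipped under --last-failed
}

fn classify(in_history: bool, exists_now: bool) -> Option<Case> {
    match (in_history, exists_now) {
        (true, true) => Some(Case::FailedStillPresent),
        (true, false) => Some(Case::FailedRemoved),
        (false, true) => Some(Case::NewOrPassing),
        (false, false) => None, // never seen by nextest at all
    }
}

fn main() {
    assert_eq!(classify(true, true), Some(Case::FailedStillPresent));
    assert_eq!(classify(true, false), Some(Case::FailedRemoved));
    assert_eq!(classify(false, true), Some(Case::NewOrPassing));
}
```

Trickier rows (renamed tests, tests whose filter expression no longer matches) would need explicit entries in the real table.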

@andrewgazelka
Author

> Actually I like passing in --last-failed because it lets users say !! --last-failed or !! --lf in their shells. Something to consider though is what we expect to happen if the set of tests to run changes. It would be worth making a table to ensure we're reasoning about all the cases.

I don't really have capacity for this right now, but perhaps I can work on it a bit later. I'd love to see something like this at some point, and I'd love bundling cargo nextest into the default tools our agent can use. 🥲
