Tusk is an AI testing platform that helps you catch blind spots, surface edge cases, and write verified tests for your commits.
This GitHub Action runs Tusk-generated tests on GitHub runners.
Onboard to the Tusk platform:
- Set your team up on Tusk [docs].
- Configure and validate your test execution environment (including a workflow that uses this action) through the setup wizard [docs].
When you push new commits, Tusk runs against your commit changes and generates tests. To ensure that test scenarios are meaningful and verified, Tusk will start this workflow and provision a runner (with a unique runId), using it as an ephemeral sandbox to run tests against your specific setup and dependencies. Essentially, this action polls for live commands emitted by Tusk based on the progress of the run, executes them, and sends the results back to Tusk for further processing.
Add the following workflow to your `.github/workflows` folder and adapt the inputs accordingly. If your repo requires additional setup steps (e.g., installing dependencies, setting up a Postgres database), add them before the Start runner step (see the sketch after the example workflow below). If your repo is a monorepo with multiple services, each workflow corresponds to one service sub-directory, configured when you set up Tusk.
Workflow examples:
- Testing for a single-service repo
- Testing for a particular service in a multi-service repo
- Setting up auxiliary testing dependencies (e.g., Postgres DB, Redis)
- Using external runners in workflows (e.g., Blacksmith)
```yaml
name: Tusk Test Runner

on:
  workflow_dispatch:
    inputs:
      runId:
        description: "Tusk Run ID"
        required: true
      tuskUrl:
        description: "Tusk server URL"
        required: true
      commitSha:
        description: "Commit SHA to checkout"
        required: true

jobs:
  test-action:
    name: Tusk Test Runner
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        id: checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commitSha }}

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Start runner
        id: test-action
        uses: Use-Tusk/test-runner@v1
        with:
          runId: ${{ github.event.inputs.runId }}
          tuskUrl: ${{ github.event.inputs.tuskUrl }}
          commitSha: ${{ github.event.inputs.commitSha }}
          authToken: ${{ secrets.TUSK_AUTH_TOKEN }}
          appDir: "backend"
          testFramework: "pytest"
          testFileRegex: "^backend/tests/.*(test_.*|.*_test).py$"
          lintScript: "black {{file}}"
          testScript: "pytest {{file}}"
          coverageScript: |
            coverage run -m pytest {{testFilePaths}}
            coverage json -o coverage.json
```
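If your service needs auxiliary testing dependencies such as a Postgres database or Redis, one option is a `services` block plus any schema setup before the Start runner step. A hedged sketch, slotted into the job above (the Postgres image, credentials, and migration command are assumptions; adapt them to your project):

```yaml
jobs:
  test-action:
    name: Tusk Test Runner
    runs-on: ubuntu-latest
    # Hypothetical Postgres service container for the test run.
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_USER: tusk
          POSTGRES_PASSWORD: tusk
          POSTGRES_DB: test
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commitSha }}
      - name: Install dependencies
        run: pip install -r requirements.txt
      # Hypothetical schema setup; run it before the Start runner step.
      - name: Run migrations
        run: alembic upgrade head
        env:
          DATABASE_URL: postgres://tusk:tusk@localhost:5432/test
      - name: Start runner
        uses: Use-Tusk/test-runner@v1
        with:
          runId: ${{ github.event.inputs.runId }}
          # ...remaining inputs as in the example above...
```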
The test runner step takes these parameters as input:

| Parameter | Description | Example | Notes |
|---|---|---|---|
| `runId` | Tusk Run ID | `${{ github.event.inputs.runId }}` | Required. Value passed in from workflow dispatch. |
| `tuskUrl` | Tusk server URL | `${{ github.event.inputs.tuskUrl }}` | Required. Value passed in from workflow dispatch. |
| `commitSha` | Commit SHA to checkout | `${{ github.event.inputs.commitSha }}` | Required. Value passed in from workflow dispatch. |
| `authToken` | Your Tusk API key | `${{ secrets.TUSK_AUTH_TOKEN }}` | Required. Recommended to store as a GitHub secret. |
| `appDir` | App directory for the service, if you have multiple services in the repo. | `backend` | Optional. |
| `testFramework` | Test framework used for your service. | `pytest` / `jest` / `vitest` / `rspec` / etc. | Required. |
| `testFileRegex` | Regex pattern to match test file paths in your repo (or app directory). | `^backend/tests/.*(test_.*\|.*_test)\.py$` | Required. This is relative to the root of the repo (i.e., the `appDir` will be included in it, if applicable). If your pattern includes backslashes (`\`) to escape certain characters, wrap your pattern in single quotes or omit quotes entirely. |
| `lintScript` | Command to execute to lint (fix) a file. Use `{{file}}` as a placeholder. | `black {{file}}` | Optional. |
| `testScript` | Command to execute to run tests in a file. Use `{{file}}` as a placeholder. | `pytest {{file}}` | Required. |
| `coverageScript` | Command to execute to obtain coverage gain based on newly generated test files. Use `{{testFilePaths}}` as a placeholder. | `coverage run -m pytest {{testFilePaths}}` | Optional. Only supported if `testFramework` is `pytest` or `jest` at the moment. |
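As a point of reference for non-Python services, the framework-specific inputs for a Node service tested with Jest might look like the hedged sketch below (the regex pattern and lint/test commands are assumptions; adjust them to your project's tooling):

```yaml
      - name: Start runner
        uses: Use-Tusk/test-runner@v1
        with:
          runId: ${{ github.event.inputs.runId }}
          tuskUrl: ${{ github.event.inputs.tuskUrl }}
          commitSha: ${{ github.event.inputs.commitSha }}
          authToken: ${{ secrets.TUSK_AUTH_TOKEN }}
          testFramework: "jest"
          # Assumed test file layout; adjust the pattern to your repo.
          testFileRegex: '^src/.*\.(test|spec)\.(ts|js)$'
          # Assumed commands; use whatever lint-fix and test commands your project provides.
          lintScript: "npx prettier --write {{file}}"
          testScript: "npx jest {{file}}"
```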
In your lint and test scripts, use `{{file}}` as a placeholder for where a specific file path will be inserted. If you provide a coverage script, use `{{testFilePaths}}` as a placeholder for where generated test file paths will be inserted. These placeholders are replaced at runtime with the actual paths of the test files Tusk is working on.

For calculating test coverage gains, we support Pytest and Jest at the moment.

- For Pytest, your coverage script should write coverage data into `coverage.json` (the default file for Pytest). In the above example, we assume the `coverage` package is installed as part of the project requirements.
- For Jest, your coverage script should write coverage data into `/coverage/coverage-summary.json` (the default file for Jest). Example:

  ```bash
  npm run test {{testFilePaths}} -- --coverage --coverageReporters=json-summary --coverageDirectory=./coverage
  ```
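Wired into the workflow, that command would go in the `coverageScript` input of the Start runner step, for example (a sketch, assuming your package.json `test` script runs Jest):

```yaml
        with:
          # ...other inputs as above...
          coverageScript: |
            npm run test {{testFilePaths}} -- --coverage --coverageReporters=json-summary --coverageDirectory=./coverage
```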
The test runner step also takes these optional inputs to further configure the command polling behavior. In general, we recommend leaving them at their defaults unless you have a specific reason to deviate.

- `pollingDuration`: How long to poll for commands (in seconds). Defaults to `"7200"`.
- `pollingInterval`: How often to poll for commands (in seconds). Defaults to `"2"`.
- `maxConcurrency`: Maximum number of commands to run concurrently. Defaults to `"5"`.
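If you do need to override them, they go in the same `with:` block as the other inputs. A sketch using the documented defaults, shown only for illustration:

```yaml
      - name: Start runner
        uses: Use-Tusk/test-runner@v1
        with:
          # ...required inputs as above...
          pollingDuration: "7200"   # poll for up to 2 hours
          pollingInterval: "2"      # check for new commands every 2 seconds
          maxConcurrency: "5"       # run at most 5 commands at once
```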
We support using `strategy: matrix` to set up concurrent jobs. Each job will serve as a runner for test execution.

Set:

- `on.workflow_dispatch.inputs.runnerIndexes`
- `jobs.<action>.strategy.matrix.runnerIndex`
- `jobs.<action>.steps.<Tusk step>.with.runnerIndex`

to the values in the example shown below.
Example:

```yaml
name: Tusk Test Runner

on:
  workflow_dispatch:
    inputs:
      runId:
        description: "Tusk Run ID"
        required: true
      tuskUrl:
        description: "Tusk server URL"
        required: true
      commitSha:
        description: "Commit SHA to checkout"
        required: true
      runnerIndexes:
        description: "Runner indexes"
        required: false
        default: "[\"1\"]"

jobs:
  test-action:
    name: Tusk Test Runner
    runs-on: ubuntu-latest
    strategy:
      matrix:
        runnerIndex: ${{ fromJson(github.event.inputs.runnerIndexes) }}
    steps:
      - name: Checkout
        id: checkout
        uses: actions/checkout@v4
        with:
          ref: ${{ github.event.inputs.commitSha }}

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Start runner
        id: test-action
        uses: Use-Tusk/test-runner@v1
        with:
          runId: ${{ github.event.inputs.runId }}
          tuskUrl: ${{ github.event.inputs.tuskUrl }}
          commitSha: ${{ github.event.inputs.commitSha }}
          authToken: ${{ secrets.TUSK_AUTH_TOKEN }}
          testFramework: "pytest"
          testFileRegex: "^tests/.*(test_.*|.*_test).py$"
          lintScript: "black {{file}}"
          testScript: "pytest {{file}}"
          runnerIndex: ${{ matrix.runnerIndex }}
```

You will also need to set the number of sandboxes to use in the Tusk app. Tusk will use that to determine the indexes to set during workflow dispatch.
Need help? Drop us an email at [email protected].