gpuci provides a GitHub Action for easy CI/CD integration. Test your CUDA kernels on every push or PR.
```yaml
# gpuci.yml
targets:
  - name: gpu-server
    provider: ssh
    host: gpu.yourcompany.com
    user: ubuntu
    gpu: A100
```

- Go to your repo → Settings → Secrets → Actions
- Add `GPU_SSH_KEY` with your private SSH key
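Since `targets` is a list, a config can presumably describe several machines at once. A sketch reusing only the fields shown above (the hostnames and second GPU are placeholders):

```yaml
# gpuci.yml — hypothetical multi-target setup
targets:
  - name: h100-server
    provider: ssh
    host: h100.yourcompany.com
    user: ubuntu
    gpu: H100
  - name: a100-server
    provider: ssh
    host: a100.yourcompany.com
    user: ubuntu
    gpu: A100
```

With more than one target defined, the `target` input can filter a run down to a single machine.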
```yaml
# .github/workflows/gpu-test.yml
name: GPU Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rightnow-ai/gpuci@v1
        with:
          kernel: 'kernels/*.cu'
          ssh-key: ${{ secrets.GPU_SSH_KEY }}
```

That's it! GPU tests will run on every push and PR.
| Input | Required | Default | Description |
|---|---|---|---|
| `kernel` | Yes | - | Path to kernel file(s). Supports globs: `src/*.cu` |
| `config` | No | `gpuci.yml` | Path to config file |
| `target` | No | - | Filter to specific target(s) |
| `runs` | No | `10` | Number of benchmark iterations |
| `warmup` | No | `3` | Number of warmup iterations |
| `ssh-key` | No | - | SSH private key for GPU machines |
| `brev-token` | No | - | Brev.dev API token |
| `post-comment` | No | `true` | Post results as PR comment |
| `fail-on-error` | No | `true` | Fail workflow if tests fail |
| Output | Description |
|---|---|
| `results` | Full text output from gpuci |
| `passed` | `true` if all tests passed, `false` otherwise |
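These outputs can drive later workflow steps. A minimal sketch — the step `id` and the summary step are illustrative, not part of the action:

```yaml
- uses: rightnow-ai/gpuci@v1
  id: gpuci
  with:
    kernel: 'kernels/*.cu'
    ssh-key: ${{ secrets.GPU_SSH_KEY }}
# Read the action's outputs in a follow-up step
- name: Summarize
  if: always()
  run: |
    echo "Passed: ${{ steps.gpuci.outputs.passed }}"
    echo "${{ steps.gpuci.outputs.results }}"
```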
```yaml
- uses: rightnow-ai/gpuci@v1
  with:
    kernel: 'src/kernels/*.cu'
    config: 'config/gpuci.yml'
    target: 'h100'
    runs: '20'
    warmup: '5'
    ssh-key: ${{ secrets.GPU_SSH_KEY }}
    post-comment: 'true'
    fail-on-error: 'true'
```

- Generate a key pair (if you don't have one):

  ```bash
  ssh-keygen -t ed25519 -f gpuci_key -N ""
  ```

- Add the public key to your GPU server:

  ```bash
  ssh-copy-id -i gpuci_key.pub [email protected]
  ```

- Add the private key to GitHub:
  - Go to: Repo → Settings → Secrets → Actions
  - Click "New repository secret"
  - Name: `GPU_SSH_KEY`
  - Value: contents of `gpuci_key` (the private key)
- Get your token from the brev.dev dashboard
- Add it to GitHub Secrets as `BREV_TOKEN`
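The token is then passed through the documented `brev-token` input instead of an SSH key. A minimal sketch:

```yaml
- uses: rightnow-ai/gpuci@v1
  with:
    kernel: 'kernels/*.cu'
    brev-token: ${{ secrets.BREV_TOKEN }}
```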
Run tests on pushes to main and on all pull requests:

```yaml
name: GPU Tests
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rightnow-ai/gpuci@v1
        with:
          kernel: '**/*.cu'
          ssh-key: ${{ secrets.GPU_SSH_KEY }}
```

Test against multiple GPU targets with a matrix:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        gpu: [h100, a100, rtx4090]
    steps:
      - uses: actions/checkout@v4
      - uses: rightnow-ai/gpuci@v1
        with:
          kernel: 'kernels/*.cu'
          target: ${{ matrix.gpu }}
          ssh-key: ${{ secrets.GPU_SSH_KEY }}
```

Run only when CUDA sources or the config change:

```yaml
on:
  push:
    paths:
      - '**/*.cu'
      - '**/*.cuh'
      - 'gpuci.yml'
```

Trigger tests manually with a chosen kernel:

```yaml
on:
  workflow_dispatch:
    inputs:
      kernel:
        description: 'Kernel to test'
        required: true
        default: 'kernels/matmul.cu'
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rightnow-ai/gpuci@v1
        with:
          kernel: ${{ inputs.kernel }}
          ssh-key: ${{ secrets.GPU_SSH_KEY }}
```

When `post-comment: true` (the default), gpuci posts results as a PR comment:
## GPU Kernel Test Results

✅ All GPU tests passed

<details>
<summary>Full Results</summary>

```
┌─────────────┬───────────────────┬────────┬──────────┐
│ Target      │ GPU               │ Status │ Median   │
├─────────────┼───────────────────┼────────┼──────────┤
│ h100-server │ NVIDIA H100       │ PASS   │ 0.42ms   │
│ a100-server │ NVIDIA A100       │ PASS   │ 0.61ms   │
└─────────────┴───────────────────┴────────┴──────────┘
```

</details>
The comment is updated on each push, not duplicated.
If you have GPU machines, you can run the GitHub Actions runner directly on them:
- Install the runner on your GPU machine:

  ```bash
  # Follow GitHub's instructions for self-hosted runners
  # Add labels: self-hosted, gpu, cuda
  ```

- Use it in your workflow:

  ```yaml
  jobs:
    test:
      runs-on: [self-hosted, gpu, cuda]
      steps:
        - uses: actions/checkout@v4
        - run: |
            nvcc -O3 -o test kernel.cu
            ./test
  ```
- Check that the SSH key is correct
- Verify the host is reachable from GitHub Actions
- Try adding the host to known_hosts
- Ensure CUDA toolkit is installed on the target machine
- Check that nvcc is in PATH
- Check SSH key permissions
- Verify the user has access to run CUDA programs
- Increase the `timeout` in `gpuci.yml`
- Check network connectivity to GPU machines
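For SSH and permission issues, it often helps to reproduce the setup locally before debugging in CI. A minimal sketch — the key filename is an example, and `stat -c` is GNU/Linux-specific:

```shell
# Generate a dedicated key pair with no passphrase (example filename)
ssh-keygen -t ed25519 -f gpuci_key -N "" -q
# ssh refuses keys readable by others; enforce owner-only permissions
chmod 600 gpuci_key
stat -c '%a' gpuci_key   # prints 600
# Then confirm the target is reachable and nvcc is on PATH (host is a placeholder):
# ssh -i gpuci_key -o BatchMode=yes [email protected] 'which nvcc && nvcc --version'
```

If the commented connectivity check fails locally, it will fail from GitHub Actions too, so fix it here first.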
- SSH keys are stored as GitHub Secrets (encrypted)
- Keys are written to disk only during workflow execution
- Keys are cleaned up after the job completes
- Use deploy keys with minimal permissions when possible