
docs(skills): add 9 task-focused skills under skills/usecase/ #249

Merged
Leechael merged 2 commits into main from feat/phala-cli-task-skills on May 6, 2026

Conversation

Contributor

@Marvin-Cypher Marvin-Cypher commented May 6, 2026

Summary

Adds 9 task-focused skill markdowns under `skills/usecase/` that AI coding agents (Claude Code, Cursor, Codex) fetch to scaffold Phala Cloud projects end-to-end. Builds on the foundational `skills/phala-cli/SKILL.md` (CLI reference).

All commands validated against live `phala` CLI v1.1.9 + live API (`cloud-api.phala.com/api/v1/attestations/verify` returned `verified: true` with matching `mr_config` ↔ `compose_hash` against a real running CVM).
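The verification check described above can be sketched in Python. The endpoint URL and the field names `verified`, `mr_config`, and `compose_hash` come from this PR description; the request payload shape and the flat response layout are assumptions, so treat this as a minimal sketch rather than the canonical client:

```python
import json
import urllib.request

# Endpoint named in this PR; payload/response shapes below are assumed.
VERIFY_URL = "https://cloud-api.phala.com/api/v1/attestations/verify"


def check_verification(resp: dict) -> bool:
    """Check the two conditions validated in this PR:
    the API reports verified=true, and the measured mr_config
    matches the compose_hash of the running CVM.
    Field names are from the PR text; flat nesting is an assumption."""
    return bool(resp.get("verified")) and (
        resp.get("mr_config") == resp.get("compose_hash")
    )


def verify_quote(quote_hex: str) -> bool:
    """POST an attestation quote to the verify endpoint.
    The {"quote": ...} body is a hypothetical payload shape."""
    req = urllib.request.Request(
        VERIFY_URL,
        data=json.dumps({"quote": quote_hex}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return check_verification(json.load(r))
```

In practice the quote would come from `phala cvms attestation` output rather than being constructed by hand.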

Skills

| Skill | What it covers |
| --- | --- |
| `agent-deploy.md` | Confidential AI agent — sealed credentials via `-e`, RA-TLS, Sign-RPC action log, multi-agent fleet |
| `gpu-vllm-deploy.md` | Self-hosted vLLM on H200 GPU TEE — model catalog, multi-GPU tensor-parallel, GPU CC verification |
| `inference-call.md` | Call the hosted Confidential AI API at `api.redpill.ai/v1` — cURL/Python/TS, streaming, tool calling |
| `training-run.md` | SFT / DPO / LoRA / continued PT on GPU TEE with TRL or Unsloth — sealed dataset, signed manifest |
| `data-coanalysis.md` | Multi-party cohort analysis — owner sealing, multi-sig DstackApp, prepare-only/commit, DP-aggregate output |
| `gpu-tee-custom.md` | Generic custom workload on GPU TEE — Jupyter quickstart + common Docker images |
| `dstack-self-host.md` | Self-host `dstack-vmm` + `dstack-kms` + `dstack-gateway` on bare-metal TDX with `vmm-cli.py` |
| `cloud-migration.md` | Migrate from AWS Nitro / GCP CC VM / Tinfoil — diff map, shadow traffic, cutover |
| `verify-attestation.md` | End-to-end TEE verification — NVIDIA NRAS + Intel TDX + report-data binding + compose-hash + Sigstore |

Folder layout

```
skills/
├── phala-cli/               ← existing CLI reference (SKILL.md)
│   ├── SKILL.md
│   └── references/
└── usecase/                 ← new: task-focused recipes (9 files)
    ├── agent-deploy.md
    ├── gpu-vllm-deploy.md
    ├── inference-call.md
    ├── training-run.md
    ├── data-coanalysis.md
    ├── gpu-tee-custom.md
    ├── dstack-self-host.md
    ├── cloud-migration.md
    └── verify-attestation.md
```

The use-case skills cross-reference `../phala-cli/SKILL.md` for foundational steps (install, login, debug, SSH).

Use case

Marketing pages on `phala.com` will embed 2-line "vibe-code" prompts pointing at the raw URLs:

```
Set up a confidential AI agent on Phala Cloud.
Fetch https://raw.githubusercontent.com/Phala-Network/phala-cloud/main/skills/usecase/agent-deploy.md
and follow the steps. Confirm with `phala cvms attestation `.
```

The user pastes this into their AI coding agent, which fetches the skill and walks through scaffolding + deploy + verification.

Format

Each skill matches the existing `SKILL.md` convention:

  • YAML front-matter with `name` + multi-line `description`
  • H1 title + 1-line tool intro
  • `## Operations` decision matrix
  • Task sections as `## TaskName` with numbered `### Step N` substeps
  • Bash code blocks with comments
  • Reference tables for instance types / models / failure modes
  • Cross-references to other skills (e.g., `agent-deploy.md` references `verify-attestation.md` for the full verification flow)
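Putting the conventions above together, a skill file follows roughly this skeleton. The field names match the convention listed above; all concrete values here are hypothetical placeholders, not taken from the actual skill files:

````markdown
---
name: example-skill        # hypothetical name
description: |
  Multi-line description of the task this skill scaffolds.
---

# Example Skill

One-line tool intro.

## Operations

| Operation | When to use |
| --- | --- |
| ... | ... |

## TaskName

### Step 1

```bash
# real CLI commands with comments
# (foundational steps are cross-referenced from ../phala-cli/SKILL.md)
```
````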

Test plan

  • The `Reference: minimal end-to-end` snippet at the bottom of each skill copy-pastes cleanly into a terminal
  • Cross-references between skills resolve once merged to `main`
  • `verify-attestation.md` minimal Python end-to-end runs against a live deploy

🤖 Generated with Claude Code

Marvin-Cypher and others added 2 commits May 5, 2026 22:41
Each skill matches the existing SKILL.md format (YAML front-matter,
Operations decision matrix, task sections, real CLI commands) and is
designed to be fetched by AI coding agents (Claude Code / Cursor /
Codex) when scaffolding a Phala Cloud project.

Skills:
- agent-deploy.md       — confidential AI agent (sealed creds, RA-TLS, action log)
- gpu-vllm-deploy.md    — self-hosted vLLM on H200 GPU TEE
- inference-call.md     — call the hosted Confidential AI API at api.redpill.ai
- training-run.md       — TRL/Unsloth fine-tuning on GPU TEE
- data-coanalysis.md    — multi-party cohort analysis with multi-sig DstackApp
- gpu-tee-custom.md     — generic custom workload on GPU TEE
- dstack-self-host.md   — self-host dstack-vmm + dstack-kms + dstack-gateway
- cloud-migration.md    — migrate from AWS Nitro / GCP CC VM / Tinfoil
- verify-attestation.md — end-to-end TEE verification (NVIDIA NRAS + Intel TDX
                          + report-data binding + compose-hash + Sigstore)

Validated against live phala CLI v1.1.9 and live API responses
(phala cvms attestation + cloud-api.phala.com/api/v1/attestations/verify
returned verified=true with matching mr_config / compose_hash).

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
The 9 use-case recipe skills belong under skills/usecase/ rather than
skills/phala-cli/, which is reserved for the foundational `phala` CLI
reference (SKILL.md). The use-case skills build on top of phala-cli
SKILL.md and reference it via ../phala-cli/SKILL.md.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
@Marvin-Cypher Marvin-Cypher changed the title from "docs(skills): add 9 task-focused phala-cli skills" to "docs(skills): add 9 task-focused skills under skills/usecase/" on May 6, 2026
@Leechael Leechael merged commit cb85ca2 into main May 6, 2026
1 check passed
@Leechael Leechael deleted the feat/phala-cli-task-skills branch May 6, 2026 07:20
