Reusable AI agent skills for idea validation, advanced brainstorming, closure retrospectives, startup research, and workspace memory.
This repository contains installable skills for the skills.sh ecosystem and for coding agents such as Codex, Claude Code, Cursor, Cline, OpenCode, and Goose. The current skills focus on four practical jobs:
- validating whether a product, startup, feature, or workflow idea is worth pursuing
- expanding an idea into broader, less conservative, more imaginative directions
- reflecting at task closure to decide whether reusable guidance should be codified
- preserving durable workspace memory across repeated agent sessions inside a repository
Install from GitHub with `npx skills add fightZy/simple-skills`.
Many agent workflows fail for one of four reasons:
- teams build ideas before checking whether the market is too crowded or the positioning is too weak
- teams brainstorm but still collapse too quickly into safe or conventional options
- teams finish work without converting repeated friction into reusable guidance
- teams lose project context between sessions and keep re-explaining the same decisions, conventions, and follow-ups
This repo packages these workflows as reusable agent skills so they can be installed, shared, and reused across projects.
Evaluate whether an idea is worth pursuing before building.
Use it when you want a sharper answer than brainstorming alone, especially for differentiation, alternatives, market crowdedness, or a continue / pivot / stop call.
Docs: ICA-EN, ICA-ZH, ICA-SKILL
Expand a proposal into broader, less conservative, more imaginative directions.
Use it when the user wants higher-order ideation instead of the safest recommendation, a shallow idea list, or an early MVP plan.
Review a task at wrap-up and decide whether any reusable lesson is worth codifying.
Use it when a non-trivial task is reaching closure and the work may justify a suggested update to an existing skill, a new skill, or an addition to AGENTS.md / CLAUDE.md.
Preserve repo-local project context across repeated agent sessions.
Use it when a repository needs durable memory for decisions, conventions, rationale, summaries, and follow-ups instead of re-explaining the same context every time.
Docs: WMS-EN, WMS-ZH, WMS-SKILL
List the installable skills available in this GitHub repository:
```bash
npx skills add fightZy/simple-skills --list
```

Install the full repository:

```bash
npx skills add fightZy/simple-skills
```

Install a specific skill from the repo:

```bash
npx skills add fightZy/simple-skills --skill idea-credibility-analyst
npx skills add fightZy/simple-skills --skill advanced-brainstorming
npx skills add fightZy/simple-skills --skill closure-retrospective
npx skills add fightZy/simple-skills --skill workspace-memory-skill
```

Repository-level tests live under tests/. Installable skill payloads stay under .agents/skills/ and should not include development-only test files.
These commands work with the skills CLI and are intended for skill-compatible agents and editors.
After installation, each skill runs according to its own SKILL.md. Use the doc links above to understand scope, capabilities, and scenarios before installing or invoking a skill.
The workspace-memory skill now includes a repository-level benchmark harness for retrieval evaluation. This harness lives outside the installable skill payload and exercises the real runtime query script through subprocess calls.
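The subprocess-driven pattern described above can be sketched roughly as follows. This is an illustration only: the command being invoked, its flags, and the JSON output shape are assumptions, not the repository's actual interface, and a small stand-in script is used in place of the real runtime query script so the sketch runs end to end.

```python
import json
import subprocess
import sys

def run_query_script(script_args, query):
    """Invoke a retrieval script as a subprocess and parse its JSON stdout.

    `script_args` is the command to run (hypothetical; the real harness
    points at the skill's runtime query script). The script is expected
    to print a JSON list of retrieved records.
    """
    result = subprocess.run(
        [*script_args, query],
        capture_output=True,
        text=True,
        check=True,  # raise if the script exits non-zero
    )
    return json.loads(result.stdout)

# Stand-in "query script" so the sketch is runnable: it echoes the
# query back inside a one-record JSON list.
stub = [sys.executable, "-c",
        "import json,sys; print(json.dumps([{'id': 'rec-1', 'query': sys.argv[1]}]))"]
records = run_query_script(stub, "current-state: build status")
print(records[0]["id"])  # rec-1
```

Driving the real script through a subprocess boundary, as the harness does, keeps the benchmark honest: it measures the same entry point an agent would call, not an in-process shortcut.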
Repo-owned benchmark fixtures live in tests/workspace-memory-skill/benchmark_fixtures/. They cover:
- current-state layered retrieval
- experience retrieval with lineage promotion
- norms queries that should prefer crystals
- exact-id lookup
- sparse or negative retrieval behavior
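As an illustration of what the categories above exercise, a single benchmark case might look something like this. The field names and the metric are hypothetical sketches, not the repository's actual fixture schema or scoring code.

```python
# Hypothetical benchmark case; field names are illustrative only.
case = {
    "case_id": "norms-prefers-crystals-001",
    "query": "what naming convention do we use for branches?",
    "category": "norms",                        # one of the covered categories
    "expected_ids": ["crystal-branch-naming"],  # records that should be retrieved
    "negative": False,                          # True for sparse/negative cases
}

def hit_rate(retrieved_ids, expected_ids):
    """Simple recall-style metric: fraction of expected records retrieved."""
    if not expected_ids:
        return 0.0
    hits = sum(1 for rid in expected_ids if rid in retrieved_ids)
    return hits / len(expected_ids)

score = hit_rate(["crystal-branch-naming", "exp-42"], case["expected_ids"])
print(score)  # 1.0
```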
Run the retrieval benchmark suite:
```bash
python -m scripts.benchmarks.workspace_memory.runner tests/workspace-memory-skill/benchmark_fixtures
```

Run the focused benchmark tests:

```bash
python -m pytest tests/workspace-memory-skill/test_workspace_memory_benchmark.py -q
```

The benchmark harness also includes:
- an optional fixed-LLM QA adapter boundary for end-to-end evaluation
- a LoCoMo adapter that converts external records into the repository's internal benchmark case schema
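A record converter of the kind the LoCoMo adapter performs could be sketched as below. The external record shape and the internal field names are assumptions for illustration; the repository's actual schemas may differ.

```python
def adapt_external_record(record):
    """Map one external LoCoMo-style record onto a hypothetical internal
    benchmark case. Field names on both sides are illustrative."""
    return {
        "case_id": f"locomo-{record['sample_id']}",
        "query": record["question"],
        "expected_answer": record["answer"],
        "category": "external-qa",
    }

external = {"sample_id": "17", "question": "Where did Alice move?", "answer": "Berlin"}
case = adapt_external_record(external)
print(case["case_id"])  # locomo-17
```

Keeping the conversion in a thin adapter like this is what allows staged integration: the internal harness never needs to know the external dataset's format.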
Current boundary:
- retrieval benchmarking is implemented and covered by tests
- fixed-LLM QA is scaffolded but only runs when model configuration is supplied
- LoCoMo support is adapter-based and intended for staged integration, not leaderboard-compatible claims in this first batch
Useful search terms for this repository:
- AI agent skills
- skills.sh repository
- Codex skills
- Claude Code skills
- Cursor skills
- Cline skills
- idea validation skill
- advanced brainstorming skill
- ideation skill
- frame-breaking brainstorming
- closure retrospective skill
- agent wrap-up reflection
- codify reusable workflow lessons
- startup research skill
- competitor analysis skill
- workspace memory skill
- workspace memory benchmark
- project memory for coding agents
- reusable prompt engineering workflows
Abbreviation guide:
- ICA = Idea Credibility Analyst
- AB = Advanced Brainstorming
- CR = Closure Retrospective
- WMS = Workspace Memory Skill
- EN = English README
- ZH = Chinese README
This repository is licensed under the MIT License.