A standardized, templated LLM selection tree for AI agents, code generators, and automated systems.
With the proliferation of LLMs available through cloud APIs and the wide selection of locally run models, it is increasingly important to steer AI agents toward your preferred models. This repository provides a structured approach that avoids repetitive model-selection decisions when working on projects.
This is an experimental approach that provides LLM selection preferences to agents in easily parsable formats, enabling consistent and intelligent model routing based on task requirements.
Modern AI workflows often involve:
- Multiple model options (cloud vs local)
- Different providers (OpenAI, Anthropic, Google, etc.)
- Varying cost/performance trade-offs
- Task-specific optimization needs
Without standardized preferences, agents repeatedly ask for model selection guidance or make suboptimal choices.
This repository provides:
- Structured decision logic for model selection
- Machine-readable format (YAML) for agent integration
- Human-readable documentation (Markdown) for reference
- Cost-optimization guidelines and fallback strategies
Machine-readable LLM selection logic that agents can parse directly:
- Primary deployment preference (cloud-first)
- Task categorization (cost-effective, deep reasoning, flagship)
- Provider routing and access methods
- Model upgrade policies
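As a rough illustration of the kind of structure such a file could take, here is a hypothetical sketch; the field names and layout below are illustrative assumptions, not the repository's actual schema:

```yaml
# Hypothetical sketch of a selection tree.
# Field names are illustrative, not the repository's actual schema.
deployment:
  default: cloud
tiers:
  cost_effective:
    model: gpt-5.1-mini
    use_for: [summarization, formatting, simple-extraction]
  deep_reasoning:
    model: claude-3.5-sonnet
    use_for: [architecture, debugging, analysis]
  flagship:
    model: latest-premium
    use_for: [novel-capabilities, hardest-tasks]
routing:
  cloud_provider: openrouter
  local_fallback: ollama
```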
Human-readable version with instructions for agents, including the complete YAML structure with contextual explanations.
Include the contents of `tree.yaml` or `tree.md` in your agent's context to enable automatic model selection based on task requirements.
Reference this structure when building systems that need to make LLM routing decisions programmatically.
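As a minimal sketch of programmatic routing, a system could map a task category to a preferred model as follows; the dictionary stands in for a parsed `tree.yaml`, and the model names and category keys are assumptions mirroring the tiers described below:

```python
# Hypothetical routing table standing in for a parsed tree.yaml.
# Model names and category keys are illustrative assumptions.
PREFERENCES = {
    "cost_effective": "gpt-5.1-mini",
    "deep_reasoning": "claude-3.5-sonnet",
    "flagship": "latest-premium",
}

def select_model(category: str, default: str = "gpt-5.1-mini") -> str:
    """Return the preferred model for a task category, falling back to a default."""
    return PREFERENCES.get(category, default)
```

For example, `select_model("deep_reasoning")` resolves to `claude-3.5-sonnet`, while an unrecognized category falls back to the cost-effective default.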
```bash
# For agents that can read files directly
cat tree.yaml | your-agent --context-file -

# For systems that need the decision logic
curl -s https://raw.githubusercontent.com/danielrosehill/LLM-Preferences-Guide/main/tree.yaml
```
- Default to cloud unless compelling local reasons exist
- Task categorization determines model tier:
  - Cost-effective: Simple tasks → `gpt-5.1-mini`
  - Deep reasoning: Complex problems → `claude-3.5-sonnet`
  - Flagship: Cutting-edge capabilities → Latest premium models
- Provider routing through OpenRouter for cloud access
- Local fallback via Ollama when needed
- Privacy/security requirements
- Offline operation needed
- Specific local model advantages
- Cost constraints for high-volume tasks
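The cloud-first default with a local fallback can be expressed as a simple predicate; the flag names here are hypothetical, mirroring the reasons listed above:

```python
# Decide between cloud and local deployment.
# Flag names are hypothetical, mirroring the fallback reasons above.
def prefer_local(privacy_required: bool = False,
                 offline: bool = False,
                 high_volume: bool = False,
                 local_model_advantage: bool = False) -> str:
    """Default to cloud unless a compelling local reason exists."""
    if privacy_required or offline or high_volume or local_model_advantage:
        return "local"   # route via Ollama
    return "cloud"       # route via OpenRouter
```

Absent any of these flags, the function returns `"cloud"`, matching the cloud-first default.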
Fork this repository and modify the YAML structure to match your preferences:
- Update model names as new versions are released
- Adjust cost/performance thresholds
- Add provider-specific configurations
- Include custom local model preferences