LLM Preferences Guide

A standardized, templated LLM selection tree for AI agents, code generators, and automated systems.

Purpose

With the proliferation of LLMs available through cloud APIs and the wide selection of locally runnable models, it is increasingly useful to steer AI agents toward your preferred models. This repository provides a structured approach that avoids repetitive model-selection decisions when working on projects.

This is an experimental approach that provides LLM selection preferences to agents in easily parsable formats, enabling consistent and intelligent model routing based on task requirements.

Problem Statement

Modern AI workflows often involve:

  • Multiple model options (cloud vs local)
  • Different providers (OpenAI, Anthropic, Google, etc.)
  • Varying cost/performance trade-offs
  • Task-specific optimization needs

Without standardized preferences, agents repeatedly ask for model selection guidance or make suboptimal choices.

Solution

This repository provides:

  • Structured decision logic for model selection
  • Machine-readable format (YAML) for agent integration
  • Human-readable documentation (Markdown) for reference
  • Cost-optimization guidelines and fallback strategies

Files

tree.yaml

Machine-readable LLM selection logic that agents can parse directly:

  • Primary deployment preference (cloud-first)
  • Task categorization (cost-effective, deep reasoning, flagship)
  • Provider routing and access methods
  • Model upgrade policies
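As a rough illustration, the preferences above might be expressed along these lines (the key names here are hypothetical; the actual schema is defined in the repository's tree.yaml):

```yaml
# Hypothetical sketch of a tree.yaml structure -- key names are
# illustrative, not the repository's actual schema.
deployment:
  default: cloud            # cloud-first unless compelling local reasons exist
  local_fallback: ollama    # local serving when needed
routing:
  provider: openrouter      # cloud access method
tasks:
  cost_effective:
    description: simple tasks
    model: gpt-5.1-mini
  deep_reasoning:
    description: complex problems
    model: claude-3.5-sonnet
  flagship:
    description: cutting-edge capabilities
    model: latest premium models
upgrade_policy: adopt newer model versions as they release
```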

tree.md

Human-readable version with instructions for agents, including the complete YAML structure with contextual explanations.

Usage

For AI Agents

Include the contents of tree.yaml or tree.md in your agent's context to enable automatic model selection based on task requirements.

For Code Generators

Reference this structure when building systems that need to make LLM routing decisions programmatically.

Integration Examples

```shell
# For agents that can read files directly
cat tree.yaml | your-agent --context-file -

# For systems that need the decision logic
curl -s https://raw.githubusercontent.com/danielrosehill/LLM-Preferences-Guide/main/tree.yaml
```

Model Selection Logic

Primary Decision Tree

  1. Default to cloud unless compelling local reasons exist
  2. Task categorization determines model tier:
    • Cost-effective: Simple tasks → gpt-5.1-mini
    • Deep reasoning: Complex problems → claude-3.5-sonnet
    • Flagship: Cutting-edge capabilities → Latest premium models
  3. Provider routing through OpenRouter for cloud access
  4. Local fallback via Ollama when needed

Compelling Local Reasons

  • Privacy/security requirements
  • Offline operation needed
  • Specific local model advantages
  • Cost constraints for high-volume tasks

Customization

Fork this repository and modify the YAML structure to match your preferences:

  • Update model names as new versions release
  • Adjust cost/performance thresholds
  • Add provider-specific configurations
  • Include custom local model preferences
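For example, a fork that routes cost-sensitive, high-volume work to a local model might override just the relevant entry (key names and model tag are hypothetical):

```yaml
# Hypothetical fork override: serve cost-sensitive tasks from a
# local model via Ollama instead of a cloud provider.
tasks:
  cost_effective:
    description: simple, high-volume tasks
    model: llama3.1:8b    # example local model tag
    deployment: local
```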
