uv run gaia-emr init
warning: No `requires-python` value found in the workspace. Defaulting to `>=3.12`.
╭─────────────────────────────────────────╮
│ EMR Agent Setup │
│ Downloading and loading required models │
╰─────────────────────────────────────────╯
Step 1: Checking Lemonade server...
✓ Lemonade server is running
✓ Context size: 32,768 tokens (recommended: 32,768)
Step 2: Checking required models...
✓ VLM: Qwen3-VL-4B-Instruct-GGUF
✓ LLM: Qwen3-Coder-30B-A3B-Instruct-GGUF
✓ Embedding: nomic-embed-text-v2-moe-GGUF
Step 3: Loading required models...
Loading models into memory for fast inference...
Loading VLM: Qwen3-VL-4B-Instruct-GGUF...
✓ VLM loaded (4.0s)
Loading LLM: Qwen3-Coder-30B-A3B-Instruct-GGUF...
✓ LLM loaded (8.2s)
Loading Embedding: nomic-embed-text-v2-moe-GGUF...
✓ Embedding loaded (0.1s)
Clearing VLM context for clean memory...
[2026-01-08 10:31:00] | INFO | gaia.llm.lemonade_client.unload_model | lemonade_client.py:2275 | Model unloaded successfully: {'message': 'All models unloaded successfully', 'status': 'success'}
✓ VLM context cleared
Step 4: Verifying models are ready...
✓ VLM: Ready for form extraction
✓ LLM: Ready for chat queries
✓ Embedding: Ready for search
✓ Context size: 32,768 tokens
Step 5: Model inventory...
VLM Models: Qwen3-VL-4B-Instruct-GGUF
LLM Models: Qwen2.5-0.5B-Instruct-CPU, Qwen3-1.7B-GGUF, Qwen3-4B-Instruct-2507-GGUF (+2 more)
Embedding Models: nomic-embed-text-v2-moe-GGUF
Total models available: 7
....
Quick Check ✨
What's on your mind?
GAIA code should follow the same CLI setup path as EMR by running gaia-code init. For example:
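A minimal sketch of the intended invocation, assuming gaia-code exposes an init subcommand analogous to gaia-emr init (the subcommand and its output are not confirmed by this transcript):

```shell
# Hypothetical: mirror the EMR agent setup flow for the code agent.
# Assumes the gaia-code entry point provides an init subcommand that
# checks the Lemonade server, pulls required models, and loads them,
# just as gaia-emr init does above.
uv run gaia-code init
```

If the same setup steps apply, this would print the equivalent server check, model download, and load/verify stages shown in the EMR transcript.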