Description
System Info
Environment Information
Python: 3.11
OS: macOS
llama-prompt-ops: 0.0.7
dspy: Latest compatible version
Installation Method: pip
To capture full environment info, run:
python -m torch.utils.collect_env
Information
- The official example scripts
- My own modified scripts
🐛 Describe the bug
The `llama-prompt-ops` library (version 0.0.7) consistently fails with `OptimizationError: 'str' object has no attribute 'kwargs'` when attempting to run prompt optimization. This is a persistent library bug that prevents the core optimization functionality from working.
Root Cause: The error occurs in `llama_prompt_ops/core/prompt_strategies.py` around line 383, where a scope issue causes an `optimizer` variable to be accessed as a string before it is properly defined as an object with the expected `kwargs` attribute.
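The class of bug described above can be illustrated with a minimal, self-contained sketch. This is hypothetical code (the names `Optimizer`, `build_strategy`, and `propose` are illustrative, not the library's actual internals): a nested function captures a name that, at call time, is still bound to a string rather than an optimizer object, so attribute access fails.

```python
# Hypothetical sketch of the reported bug class -- NOT the library's real code.
class Optimizer:
    def __init__(self):
        self.proposer_kwargs = {"tip": "be concise"}


def build_strategy(optimizer="basic"):
    # 'optimizer' starts life as the string "basic" and is never
    # resolved to an Optimizer object before the closure is created.
    def propose():
        # At call time, 'optimizer' is still a str, so this raises
        # AttributeError, mirroring the error reported in the issue.
        return optimizer.proposer_kwargs.get("tip")

    return propose


propose = build_strategy()
try:
    propose()
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'proposer_kwargs'
```

The same pattern would explain why the traceback points at a nested function: the closure is defined before the string is ever replaced by a real optimizer object.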
Impact: This bug makes the library completely unusable for its primary purpose - prompt optimization.
```shell
# Minimal reproduction command
python run_llama_prompt_ops.py
```
Results: The optimization process crashes immediately with the kwargs error, preventing any optimization from completing.
Minimal Reproduction Code:

```python
from llama_prompt_ops.core.migrator import PromptMigrator
from llama_prompt_ops.core.prompt_strategies import BasicOptimizationStrategy

# Configuration
config = {
    'model': {
        'name': 'openai/gpt-4o-mini',
        'api_base': 'https://oai.helicone.ai/v1',
        'temperature': 0.0
    },
    'optimization': {
        'strategy': 'basic',
        'max_rounds': 2
    }
}

# This triggers the error
strategy = BasicOptimizationStrategy(
    max_rounds=config['optimization']['max_rounds']
)
migrator = PromptMigrator(strategy=strategy)

# Error occurs here
result = migrator.optimize(
    prompt_data={'text': 'Test prompt', 'inputs': ['input'], 'outputs': ['output']},
    trainset=[],
    save_to_file=False
)
```
Expected behavior
The optimization should complete successfully and return an optimized prompt with performance metrics, or fail gracefully with a meaningful error message if there are configuration issues.
Instead: The library crashes with an unclear error that points to an internal scope issue, making it impossible to use the core functionality.
Additional Context
- Reproducibility: 100% reproducible across different environments
- Fix Attempts: Multiple comprehensive fixes attempted (scope fixes, patches, clean reinstalls) - all failed
- Severity: High - blocks core functionality
- Workaround: None available - recommend using DSPy directly instead
- Repository: https://github.com/meta-llama/llama-prompt-ops
This appears to be a fundamental library bug requiring maintainer intervention to fix the scope issue in `prompt_strategies.py`.
Error logs
```
OptimizationError: 'str' object has no attribute 'kwargs'
```

The error originates from:
- File: `/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/llama_prompt_ops/core/prompt_strategies.py`
- Location: around line 383
- Issue: a nested function accesses `optimizer.proposer_kwargs.get('tip')` while `optimizer` is still a string
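If the diagnosis above is right, one plausible fix pattern is to resolve the string name to an object before the nested function is defined, so the closure captures the resolved object rather than the string. The sketch below is an assumption about the shape of a fix, not the maintainers' actual patch; `Optimizer`, `OPTIMIZERS`, and `build_strategy` are hypothetical names.

```python
# Hypothetical fix pattern (assumption, not the library's real code):
# resolve the string to an optimizer object *before* creating the closure.
class Optimizer:
    def __init__(self, name):
        self.proposer_kwargs = {"tip": f"tip for {name}"}


OPTIMIZERS = {"basic": Optimizer}


def build_strategy(optimizer="basic"):
    if isinstance(optimizer, str):
        # Resolve the name to an object first, so the nested function
        # never sees the bare string.
        optimizer = OPTIMIZERS[optimizer](optimizer)

    def propose(opt=optimizer):  # bind the resolved object explicitly
        return opt.proposer_kwargs.get("tip")

    return propose


print(build_strategy()())  # tip for basic
```

Binding the resolved object as a default argument also guards against the name being rebound later in the enclosing scope.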