A friendly tool that helps you adapt your content for different platforms. It keeps your message clear while changing the style to fit where you're posting.
Note: Right now it runs as a guided pipeline rather than a fully autonomous agent; we're working on letting it choose models and make decisions on its own.
- Multi-platform Content Generation: Create content for Twitter, LinkedIn, and blog posts
- Style Transfer: Change your writing style from formal to casual, technical to simple, and more
- Flexible Input Sources: Works with URLs, markdown files, PDFs, and other formats
- Multiple AI Providers: Choose from Google, OpenAI, and Anthropic
- Structured Output: Get content formatted exactly how you need it
- Content Evaluation: Built-in system to check quality and style
- Interactive CLI: Easy-to-use command line interface
- Batch Processing: Handle multiple pieces of content at once
Install the dependencies:

```bash
uv sync
```

That's it!
Create a `.env` file with your API keys:
```env
# Required for Google AI
GOOGLE_API_KEY=your_google_api_key

# Optional for OpenAI
OPENAI_API_KEY=your_openai_api_key

# Optional for Anthropic
ANTHROPIC_API_KEY=your_anthropic_api_key
```
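If you want to confirm the keys are visible to Python before running the CLI, here is a minimal sketch assuming the standard python-dotenv package; it is a convenience check, not part of the project's documented setup:

```python
# check_env.py -- hypothetical helper, not part of the package
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the current directory

for key in ("GOOGLE_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    status = "set" if os.getenv(key) else "missing"
    print(f"{key}: {status}")
```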
For testing and development, you can use encrypted environment files:
- Install all dependencies:

  ```bash
  uv sync --active --all-extras
  ```

- Create your `.env.test` file:

  ```env
  # .env.test
  OPENAI_API_KEY=your_openai_test_key_here
  ANTHROPIC_API_KEY=your_anthropic_test_key_here
  GOOGLE_API_KEY=your_google_test_key_here
  ```

- Encrypt the file:

  ```bash
  uv run python scripts/env_vault.py encrypt
  ```

- For team collaboration:
  - Commit the `.env.test.vault` and `.env.key` files
  - Share both files with your team
  - Other developers can decrypt with:

    ```bash
    uv run python scripts/env_vault.py decrypt
    ```
- `.env.test.vault` is safe to commit to version control
- `.env.key` should be in `.gitignore` but shared with your team
- `.env.test` should be in `.gitignore`
- For production, use proper secret management (GitHub Secrets, etc.)
Run the interactive CLI:

```bash
python main.py
```
The interface supports three operations:
- Generate content only (default)
- Evaluate existing content only
- Generate content and evaluate
An example session:

```text
Style Transfer Agent with Evaluation
========================================

Choose operation:
1. Generate content only (default)
2. Evaluate existing content only
3. Generate content and evaluate
Operation (1-3, default=1): 1

Directory to browse (default: fixtures): fixtures

Available requests in fixtures:
1. linkedin-request.json
2. twitter-request.json
3. blog-request.json
4. Enter custom path
Select request (1-4): 1

Selected: fixtures/linkedin-request.json
Loaded JSON from fixtures/linkedin-request.json
Parsed StyleTransferRequest with 1 target documents

Choose AI provider:
1. Google - Free tier available
2. OpenAI - Requires billing
3. Anthropic - Requires credits
Provider (1-3, default=1): 1

Available google_genai models:
1. gemini-1.5-flash (default)
2. gemini-1.5-pro
3. gemini-pro
Model (1-3, default=1): 1

Temperature controls creativity:
  0.0-0.3 = Very focused/conservative
  0.4-0.7 = Balanced (recommended)
  0.8-1.0 = Very creative/random
Temperature (0.0-1.0, default=0.7): 0.8

Request Summary:
- Reference styles: 1
- Target schemas: 1
- LLM Provider: google_genai
- Model: gemini-1.5-flash
- Temperature: 0.8

Processing with google_genai/gemini-1.5-flash (temp: 0.8)...

Generated 1 response(s):

--- Response 1: LinkedIn Professional Post ---
Style: LinkedIn Tech Thought Leader
Content:
{
"text": "\"2024 Full-Stack Developer Skills Report: What Employers Are Actually Looking For\"\n\nThe landscape of full-stack development is constantly evolving. To help you navigate this dynamic environment, we analyzed 50,000+ job postings from LinkedIn, Indeed, and Stack Overflow to identify the most in-demand skills for 2024. Our findings reveal some key trends that full-stack developers should prioritize to remain competitive.\n\n**Key Skills in High Demand:**\n\n* **Frontend Development:** React, Angular, Vue.js continue to dominate, with a strong emphasis on component-based architecture and performance optimization. Experience with modern JavaScript frameworks and libraries is essential.\n* **Backend Development:** Node.js, Python (Django/Flask), and Java remain popular choices. Cloud-native development skills (AWS, Azure, GCP) are increasingly important, alongside proficiency in containerization (Docker, Kubernetes).\n* **Databases:** SQL and NoSQL databases are both crucial. Expertise in database design, optimization, and querying is highly valued.\n* **DevOps:** Understanding CI/CD pipelines, infrastructure-as-code, and cloud deployment strategies is becoming a non-negotiable skill for full-stack developers.\n* **Testing and Quality Assurance:** Proficiency in automated testing methodologies and frameworks is essential for ensuring high-quality software.\n\n**Emerging Trends:**\n\n* **AI/ML Integration:** Incorporating AI and machine learning capabilities into applications is gaining significant traction. Familiarity with relevant libraries and frameworks is advantageous.\n* **Web3 Development:** While still emerging, skills in blockchain technologies and decentralized applications are becoming increasingly sought after.\n* **Security Best Practices:** Developers must demonstrate a strong understanding of security principles and practices to protect applications from vulnerabilities.\n\n**Actionable Takeaways:**\n\nBased on our analysis, here's what you can do to enhance your skillset and boost your job prospects:\n\n* **Upskill/Reskill:** Identify skill gaps based on the analysis above and focus on acquiring the most in-demand skills. Numerous online courses and bootcamps can help with this.\n* **Build a Strong Portfolio:** Showcase your expertise by building compelling projects that demonstrate your mastery of these skills.\n* **Network Strategically:** Attend industry events and connect with professionals to stay informed about emerging trends and opportunities.\n\nThe full-stack development landscape is competitive, but with focused effort and a strategic approach to upskilling, you can significantly improve your chances of success. Start building your future-proof skillset today!\n",
"multimedia_url": null
}
Save results to file? (y/n, default=n): y
Save to fixtures:
Output filename (default: results.json): my-linkedin-content.json
Results saved to fixtures/my-linkedin-content.json
```
Temperature controls how creative the AI gets:
| Range | Description | Use Case |
|---|---|---|
| 0.0-0.3 | Very focused/conservative | Follows the style closely, very predictable |
| 0.4-0.7 | Balanced (default: 0.7) | Good mix of creativity and consistency |
| 0.8-1.0 | Very creative/random | More creative, might surprise you |
Create a JSON file with the following structure:
```json
{
"reference_style": [
{
"name": "Style Name",
"description": "Description of the style",
"style_definition": {
"tone": "casual and engaging",
"formality_level": 0.3,
"sentence_structure": "short and punchy",
"vocabulary_level": "simple",
"personality_traits": ["enthusiastic", "knowledgeable"],
"writing_patterns": {
"use_emojis": true,
"hashtag_frequency": "moderate"
}
}
}
],
"intent": "Your content goal",
"focus": "How to process the content",
"target_content": [
{
"url": "https://example.com/source-content",
"type": "Blog",
"category": "Technical",
"title": "Source Content Title"
}
],
"target_schemas": [
{
"name": "Output Name",
"output_type": "tweet_single",
"max_length": 280,
"tweet_single": {
"text": "",
"url_allowed": true
}
}
]
}
```
- `name`: Identifier for the style
- `style_definition`: Writing characteristics including:
  - `tone`: Overall tone (casual, formal, professional, etc.)
  - `formality_level`: 0.0 (very casual) to 1.0 (very formal)
  - `sentence_structure`: short, long, varied, etc.
  - `vocabulary_level`: simple, moderate, advanced, technical
  - `personality_traits`: Array of traits like ["confident", "humble"]
  - `writing_patterns`: Platform-specific patterns (emojis, hashtags, etc.)
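For contrast with the casual style in the JSON example above, here is a sketch of a more formal `reference_style` entry. It uses only the fields documented in this list; the specific values (and the free-form `writing_patterns` keys) are illustrative, not required defaults:

```json
{
  "name": "Formal Technical Brief",
  "description": "Measured, precise tone for engineering leadership updates",
  "style_definition": {
    "tone": "formal and precise",
    "formality_level": 0.8,
    "sentence_structure": "varied, with longer explanatory sentences",
    "vocabulary_level": "technical",
    "personality_traits": ["confident", "measured"],
    "writing_patterns": {
      "use_emojis": false,
      "hashtag_frequency": "none"
    }
  }
}
```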
- `url`: Source content URL
- `type`: Content type (Blog, Twitter, LinkedIn, etc.)
- `category`: Content category (Technical, Casual, Formal, etc.)
- `title`: Content title
- `author`: Content author (optional)
- `date_published`: Publication date (optional)
- `name`: Output identifier
- `output_type`: One of:
  - `tweet_single`: Single Twitter post
  - `tweet_thread`: Twitter thread
  - `linkedin_post`: LinkedIn post
  - `linkedin_comment`: LinkedIn comment
  - `blog_post`: Blog article
- `max_length`: Maximum word count
- `min_length`: Minimum word count (optional)
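For another illustration, a `target_schemas` entry for a blog article might look like the sketch below. It sticks to the fields documented above; by analogy with the `tweet_single` example, the real schema may also expect a platform-specific sub-object (for example a `blog_post` block), so treat this as a starting point rather than a complete definition:

```json
{
  "name": "Technical Blog Article",
  "output_type": "blog_post",
  "max_length": 1200,
  "min_length": 600
}
```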
You can choose from three AI providers:
- Google (`google_genai`)
  - Default: `gemini-1.5-flash`
  - Options: `gemini-1.5-flash`, `gemini-1.5-pro`, `gemini-pro`
- OpenAI
  - Default: `gpt-3.5-turbo`
  - Options: `gpt-3.5-turbo`, `gpt-4`, `gpt-4-turbo`
- Anthropic
  - Default: `claude-3-haiku-20240307`
  - Options: `claude-3-haiku-20240307`, `claude-3-sonnet-20240229`, `claude-3-opus-20240229`
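The provider id `google_genai` shown in the CLI summary matches LangChain's chat-model naming, so the models are presumably initialized through something like LangChain's `init_chat_model` factory. The sketch below is an assumption about those internals, not this package's documented API:

```python
# Hypothetical illustration: initializing one of the models listed above via
# LangChain's init_chat_model helper (assumed, not confirmed by this README).
# Requires GOOGLE_API_KEY in the environment and langchain-google-genai installed.
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "gemini-1.5-flash",             # default Google model
    model_provider="google_genai",  # provider id shown in the CLI summary
    temperature=0.7,                # default temperature used by the CLI
)

print(llm.invoke("Say hello in one short sentence.").content)
```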
The project includes a comprehensive custom evaluation system that assesses generated content across multiple dimensions:
- Style Adherence: How well the content matches the target style
- Content Quality: Overall writing quality and coherence
- Platform Appropriateness: Suitability for the target platform
- Engagement Potential: Likelihood of audience engagement
- Technical Accuracy: Factual correctness and technical precision
Why Custom Evaluation? The evaluation system uses a custom implementation rather than established frameworks like LangSmith, OpenEval, or AgentEvals due to incompatibility issues with model formatting requirements.
```
agent_style_transfer/
├── evaluation.py                    # Main entry point for evaluations
├── evals/                           # Repository of available evaluations
│   ├── __init__.py                  # Exports all evaluation functions
│   ├── style_fidelity.py            # Style adherence evaluation
│   ├── content_preservation.py      # Content preservation check
│   ├── quality.py                   # Overall quality assessment
│   └── platform_appropriateness.py  # Platform suitability
└── utils/
    ├── evaluation.py                # Shared evaluation utilities
    ├── content_extractor.py         # Content extraction helpers
    └── pydantic_utils.py            # Pydantic schema utilities
```
| Evaluation | Purpose | Focus |
|---|---|---|
| Style Fidelity | Style adherence evaluation | Tone, formality, vocabulary, writing patterns |
| Content Preservation | Content preservation check | Key information, factual accuracy, core message |
| Quality Assessment | Overall quality evaluation | Grammar, coherence, engagement, readability |
| Platform Appropriateness | Platform suitability | Platform-specific requirements and conventions |
Single Response Evaluation:

```python
from agent_style_transfer.evaluation import evaluate

results = evaluate(request, response, provider="openai", model="gpt-4")
```

Batch Evaluation:

```python
results = evaluate(request, responses, provider="anthropic", model="claude-3-haiku")
```

Individual Evaluations:

```python
from agent_style_transfer.evals import evaluate_style_fidelity, evaluate_quality

style_score = evaluate_style_fidelity(request, response, "openai", "gpt-4")
quality_score = evaluate_quality(request, response, "anthropic", "claude-3-haiku")
```
This style transfer system works well with other tools and systems. It accepts JSON objects for easy integration.
- JSON Input Only: Send JSON strings or objects, not Python objects
- Pydantic Validation: JSON must follow the `StyleTransferRequest` schema (see the sketch after this list)
- Required Fields: Include all required fields in your JSON
- Error Handling: Returns clear error messages for invalid input
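As a minimal sketch of that JSON contract, the snippet below validates a fixture file against `StyleTransferRequest` before handing it to the pipeline. It assumes the Pydantic v2 `model_validate_json` API and the `target_schemas` field shown in the request structure above:

```python
from pathlib import Path

from pydantic import ValidationError

from agent_style_transfer.schemas import StyleTransferRequest

raw_json = Path("fixtures/linkedin-request.json").read_text()

try:
    request = StyleTransferRequest.model_validate_json(raw_json)
    print(f"Parsed request with {len(request.target_schemas)} target schema(s)")
except ValidationError as exc:
    # Invalid input surfaces as a structured Pydantic error report
    print(exc)
```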
Check the `fixtures/` directory for ready-to-use templates:
- `linkedin-request.json`: Professional LinkedIn content generation
- `twitter-request.json`: Twitter post creation
- `blog-request.json`: Blog article generation
For detailed input/output examples showing how the style transfer works in practice, see examples.md.
The project uses a structured file organization:
- `fixtures/`: Contains example request files and generated results
  - Files ending with `-request.json`: Input files for content generation
  - Files ending with `-response.json`: Generated content files
  - Other JSON files: Evaluation results and other outputs
- `agent_style_transfer/`: Core package with all functionality
- `tests/`: Test suite with comprehensive coverage
The system uses comprehensive Pydantic models for type safety and validation. All schemas are defined in `agent_style_transfer/schemas.py`.

Key models include `StyleTransferRequest`, `StyleTransferResponse`, `Document`, `ReferenceStyle`, and various output schemas for different platforms. See the schema file for complete definitions and validation rules.
Run the test suite:

```bash
pytest
```

Tests use VCR.py to record and replay API interactions, ensuring consistent test results.
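For illustration, the snippet below shows the generic VCR.py record/replay pattern on a plain HTTP call; the URL and cassette path are arbitrary examples, not the project's actual fixtures:

```python
# Standalone illustration of the VCR.py record/replay pattern used by the test suite.
import urllib.request

import vcr


@vcr.use_cassette("tests/cassettes/example_http.yaml")
def test_http_interaction_is_recorded_and_replayed():
    # First run: the real request is made and saved to the cassette.
    # Later runs: vcrpy replays the stored response, so the test is deterministic
    # and needs no network access or API keys.
    with urllib.request.urlopen("https://httpbin.org/get") as resp:
        assert resp.status == 200
```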
- Twitter: Single tweets and threads
- LinkedIn: Posts and comments
- Blog: Articles with markdown formatting
The system can read content from various sources (Twitter, LinkedIn, Reddit, Facebook, Instagram, TikTok, blogs) but currently generates output for Twitter, LinkedIn, and blog posts only.
This project is complete and production-ready. All core functionality has been implemented and tested:
- Multi-platform content generation (Twitter, LinkedIn, Blog)
- Style transfer with customizable writing styles
- Multiple AI provider support (Google, OpenAI, Anthropic)
- Interactive CLI with guided workflows
- Custom evaluation system with detailed scoring
- Comprehensive test suite with VCR.py for consistent testing
- Pydantic schemas for type safety and validation
- Error handling and user-friendly error messages
- Documentation and examples
- Modular design for easy extension
- Agent chaining compatibility with JSON interfaces
- Clean separation of concerns
- Scalable evaluation framework
The project is designed to evolve and can be easily extended with:
- True Agent Capabilities: Intelligent model selection, autonomous decision-making, adaptive behavior
- New AI providers and models
- Additional content platforms
- Enhanced evaluation metrics
- Custom style definitions
- Integration with external systems
- Tool usage and external API integration
- Memory and learning capabilities
See LICENSE file for details.