A local, privacy-focused prompt generator for image and video generation using Ollama. Supports reference image analysis and generates detailed, uncensored prompts.
- 🔓 Uncensored: Runs locally with no content restrictions
- 🖼️ Image Reference: Analyze and incorporate reference images
- 🎬 Multi-Format: Generate prompts for both images and videos
- 🏠 100% Local: All processing happens on your machine
- 🚀 Interactive & CLI modes: Use interactively or in scripts
- Python 3.7+
- Ollama installed and running
Windows: Download from ollama.com
Linux/Mac:
curl -fsSL https://ollama.com/install.sh | sh

Install the Python dependencies:

pip install -r requirements.txt
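If you want to confirm the Ollama server is reachable before pulling models, a quick check like the sketch below works. This is a minimal sketch, assuming Ollama is listening on its default address (http://localhost:11434); adjust the URL if you have set a custom OLLAMA_HOST.

```python
# Minimal connectivity check for a local Ollama server (default port 11434).
# Assumes no custom OLLAMA_HOST; adjust OLLAMA_URL if yours differs.
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"

try:
    with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
        # The root endpoint of a running server returns a short status message.
        print(resp.read().decode().strip())  # e.g. "Ollama is running"
except urllib.error.URLError as err:
    print(f"Ollama does not appear to be running at {OLLAMA_URL}: {err}")
```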
Uncensored text models (choose one or more):

# Dolphin Mistral (Recommended - highly uncensored)
ollama pull dolphin-mistral
# Dolphin Mixtral (Larger, more capable)
ollama pull dolphin-mixtral
# Nous Hermes (Alternative uncensored model)
ollama pull nous-hermes2
# WizardLM Uncensored
ollama pull wizardlm-uncensored

Vision models (for image reference support):
# LLaVA (Recommended)
ollama pull llava
# BakLLaVA (Alternative)
ollama pull bakllava
# LLaVA 34B (Larger, more detailed)
ollama pull llava:34b
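To check which of these models are already pulled, you can also query Ollama's local API from Python. The sketch below is a convenience, not part of the tool; it assumes the default endpoint and uses the model names from this README.

```python
# List locally installed Ollama models and flag missing recommended ones.
# Assumes the default endpoint http://localhost:11434; model names follow this README.
import json
import urllib.request

RECOMMENDED = ["dolphin-mistral", "llava"]

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    installed = [m["name"] for m in json.load(resp)["models"]]

print("Installed:", ", ".join(installed) or "none")
for name in RECOMMENDED:
    # Installed names include a tag (e.g. "llava:latest"), so compare base names.
    if not any(tag.split(":")[0] == name for tag in installed):
        print(f"Missing: {name} -> run `ollama pull {name}`")
```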
Simply run the script:

python prompt_generator.py

Commands:
- Type your request directly to generate prompts
- /image [path] - Set a reference image
- /video - Switch to video prompt mode
- /img - Switch to image prompt mode
- /model [name] - Override the default model
- /clear - Clear the reference image
- /quit - Exit
Example session:
[IMAGE] > a cyberpunk city at night
🔄 Generating prompt...
============================================================
GENERATED PROMPT:
============================================================
A sprawling cyberpunk metropolis under a neon-lit night sky...
============================================================
[IMAGE] > /image reference.jpg
✓ Reference image set: reference.jpg
[IMAGE] > create similar but with rain
Use it in scripts or for one-off generations:
# Basic usage
python prompt_generator.py "a fantasy landscape"
# Video prompt
python prompt_generator.py "epic battle scene" --type video
# With reference image
python prompt_generator.py "enhance this style" --image reference.jpg
# Override model
python prompt_generator.py "portrait" --model dolphin-mixtral
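If you call the CLI from another script, capturing stdout is usually all you need. The sketch below is hedged: it assumes prompt_generator.py prints the generated prompt to standard output, as the interactive session above suggests, and that the flags match the examples in this section.

```python
# Call the CLI from Python and capture the generated prompt.
# Assumes the script prints the prompt to stdout, as in the examples above.
import subprocess

result = subprocess.run(
    ["python", "prompt_generator.py", "a fantasy landscape", "--type", "video"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```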
Recommended text models:

- dolphin-mistral - Fast, highly uncensored, great for prompts
- dolphin-mixtral - More capable but slower
- nous-hermes2 - Good alternative
- wizardlm-uncensored - Reliable uncensored option
Vision models:

- llava - Standard vision model
- llava:34b - Higher quality analysis (requires more RAM)
- bakllava - Alternative vision model
Edit the script to change default models:
self.text_model = "dolphin-mistral" # Your preferred text model
self.vision_model = "llava"        # Your preferred vision model

Or use the /model command in interactive mode.
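Because the defaults are plain instance attributes, you can also override them per run from Python without touching the script. A minimal sketch, assuming the attribute names shown above:

```python
# Override the default models at runtime instead of editing the script.
# Assumes PromptGenerator exposes text_model / vision_model as shown above.
from prompt_generator import PromptGenerator

generator = PromptGenerator()
generator.text_model = "dolphin-mixtral"   # larger model for more detailed prompts
generator.vision_model = "llava:34b"       # higher-quality image analysis, needs more RAM
```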
- Be Specific: The more details you provide, the better the output
- Use References: Image references help maintain style consistency
- Iterate: Generate, refine, and regenerate for best results
- Model Selection: Larger models (mixtral, 34b) give more detailed prompts but are slower
Make sure Ollama is running:
ollama serve

Pull the model first:
ollama pull dolphin-mistral
ollama pull llava

If generation is slow:

- Use smaller models (mistral instead of mixtral)
- Close other applications
- Check if GPU acceleration is working:
ollama ps
If you run out of memory:

- Use smaller models
- For vision models, try llava:7b instead of llava:34b
- Close other applications
- ✅ 100% local processing
- ✅ No data sent to external servers
- ✅ No logging or tracking
- ✅ Full control over content generation
Create a script to generate multiple prompts:
from prompt_generator import PromptGenerator
generator = PromptGenerator()
prompts = [
"fantasy castle",
"sci-fi spaceship",
"portrait of warrior"
]
for prompt in prompts:
    result = generator.generate_prompt(prompt)
    print(f"{prompt} -> {result}\n")

Modify the system prompts in the script for different output styles.
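To feed the results into an image-generation pipeline, the batch script can write each prompt to a file instead of printing it. A small sketch, assuming only the generate_prompt(prompt) call shown above:

```python
# Batch-generate prompts and save them to a text file, one per line.
# Assumes generate_prompt() returns the prompt as a string, as in the example above.
from prompt_generator import PromptGenerator

generator = PromptGenerator()
ideas = ["fantasy castle", "sci-fi spaceship", "portrait of warrior"]

with open("prompts.txt", "w", encoding="utf-8") as f:
    for idea in ideas:
        f.write(generator.generate_prompt(idea) + "\n")
```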
Free to use and modify for any purpose.
This tool is for creative and artistic purposes. Users are responsible for ensuring their use complies with applicable laws and the terms of service of any platforms where generated content is used.