This Streamlit application helps users refine their system prompts for AI assistants using either a local Ollama LLM or Azure OpenAI.
- Prompt Refinement: Submit an initial system prompt and receive a more effective and well-structured version generated by a Large Language Model (LLM); a sketch of what such a call looks like follows this list.
- Provider Selection: Choose between Ollama (local LLM) and Azure OpenAI (cloud-based LLM) as your AI provider.
- Model Selection: Select from a list of available models for the chosen provider directly within the application.
- Configurable System Prompt: The core system prompt for the prompt builder itself is externalized in `config.yaml`, allowing for easy modification and versioning.
- LLM Call Logging: Every interaction with the LLM is logged in a structured JSON format (`llm_calls.log`), providing details such as timestamp, prompt ID, model used, tokens, latency, and outcome.
- Secure Credential Management: Azure OpenAI API keys and endpoints are loaded securely from a `.env` file, keeping sensitive information out of the codebase.
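Under the hood, a refinement request to the Ollama provider reduces to a single chat call that pairs the configured system prompt with the user's draft. The sketch below uses Ollama's documented `/api/chat` endpoint; it is an illustration only, not the app's actual code, and the draft text and prompt excerpt are invented:

```python
import requests

# Hypothetical refinement round-trip against a local Ollama server.
SYSTEM_PROMPT = "You are a prompt engineering assistant. ..."  # from config.yaml
user_draft = "You are a helpful bot. Answer questions."

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",
        "stream": False,  # return one JSON object instead of a token stream
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_draft},
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # the refined prompt
```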
Follow these steps to get the AI Prompt Builder running on your local machine.
- Python 3.8+: Ensure you have Python installed.
- Ollama (Optional, for local LLM): If you plan to use Ollama, you need to have it installed and running. Download it from ollama.ai.
- Ollama Models (Optional): If using Ollama, download the required LLM models (e.g., `llama2`, `mistral`, `codellama`) using the Ollama CLI; a quick check for already-pulled models appears after this list. For example:

  ```bash
  ollama pull llama2
  ollama pull mistral
  ollama pull codellama
  ```
- Azure OpenAI Account (Optional, for cloud LLM): If you plan to use Azure OpenAI, you will need an active subscription, an Azure OpenAI resource, and a deployed model.
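To confirm which models your local Ollama instance has pulled before launching the app, you can query Ollama's local REST API, which lists them at `/api/tags`. A minimal check, assuming Ollama is running on its default port 11434:

```python
import requests

# List the models already pulled into the local Ollama instance.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])
```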
- Clone the repository (if applicable) or navigate to the project directory:

  ```bash
  cd /Users/m/projects/work/od/promptER
  ```

- Create a Python virtual environment:

  ```bash
  python3 -m venv .venv
  ```

- Activate the virtual environment:

  - On macOS/Linux:

    ```bash
    source .venv/bin/activate
    ```

  - On Windows:

    ```bash
    .venv\Scripts\activate
    ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
The application's behavior can be configured via the `config.yaml` file:
```yaml
provider: ollama
ollama:
  model: llama2
  models:
    - llama2
    - mistral
    - codellama
azure_openai:
  models:
    - gpt-4
    - gpt-35-turbo
system_prompt:
  id: sp-4f8c2e
  version: "0.1.0"
  purpose: To assist users in refining system prompts for AI assistants.
  owner: Gemini
  date_created: "2025-10-23"
  intended_model: llama2
  tags:
    - prompt-engineering
    - ai-assistant
    - refinement
  notes: This is the initial version of the system prompt.
  content: >-
    You are a prompt engineering assistant. Your task is to take a user's
    system prompt and rewrite it to be a more effective and well-structured
    prompt for an AI assistant. Provide only the refined prompt in your
    response.
```

- `provider`: The default LLM provider to use (`ollama` or `azure_openai`).
- `ollama.model`: The default Ollama model to be selected.
- `ollama.models`: A list of available Ollama models that the user can select from. Ensure these models are pulled in your Ollama instance.
- `azure_openai.models`: A list of available Azure OpenAI models (deployment names) that the user can select from.
- `system_prompt`: An object containing metadata and the content of the system prompt used by the prompt builder itself.
  - `id`: Unique identifier for the system prompt.
  - `version`: Semantic version of the system prompt.
  - `purpose`: Description of the system prompt's role.
  - `owner`: Creator of the system prompt.
  - `date_created`: Date the system prompt was created.
  - `intended_model`: The model this system prompt was designed for.
  - `tags`: Keywords for categorization.
  - `notes`: Any additional notes.
  - `content`: The actual system prompt text that guides the LLM's behavior in refining user prompts.
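For reference, loading such a file takes only a few lines with PyYAML. This is a sketch of the general pattern, not necessarily how `app.py` does it:

```python
import yaml

# Read config.yaml and pull out the pieces described above.
with open("config.yaml") as f:
    config = yaml.safe_load(f)

provider = config["provider"]                       # "ollama" or "azure_openai"
available_models = config[provider]["models"]       # choices offered in the UI
system_prompt = config["system_prompt"]["content"]  # guides the refinement LLM
```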
If you are using Azure OpenAI, create a file named `.env` in the root directory of the project (the same directory as `app.py` and `config.yaml`). This file will store your sensitive API credentials.
`.env.template`:

```
AZURE_OPENAI_API_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_DEPLOYMENT_NAME=
```
Example `.env` file (replace with your actual credentials):

```
AZURE_OPENAI_API_KEY="your_azure_openai_api_key_here"
AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"
AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment-name"
```
Important: Do not commit your `.env` file to version control. Add it to your `.gitignore` file.
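At startup, these values can be pulled in with `python-dotenv` and handed to the Azure OpenAI client. A minimal sketch, assuming the `openai` (v1+) and `python-dotenv` packages; the `api_version` value is an assumption, so use whichever version your resource supports:

```python
import os

from dotenv import load_dotenv
from openai import AzureOpenAI

load_dotenv()  # reads .env from the current working directory

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_version="2024-02-01",  # assumption: pick a version your resource supports
)

# On Azure, requests are addressed to the deployment name, not the base model name.
deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
```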
- Ensure your virtual environment is activated.

- Ensure your Ollama server is running (if using Ollama).

- Start the Streamlit application:

  ```bash
  streamlit run app.py
  ```

  (Note: If you ran the app in the background previously, you may need to kill the old process first: `kill $(ps aux | grep streamlit | grep -v grep | awk '{print $2}')`.)

- Open your web browser and navigate to the URL provided by Streamlit (usually `http://localhost:8501`).
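Once the page loads, the interaction surface corresponds to a few standard Streamlit widgets: provider and model pickers plus a text area for the draft prompt. A toy reconstruction for orientation only; the labels, layout, and `refine` call are invented, and only the feature set mirrors the description above:

```python
import streamlit as st

# Invented labels; the real app reads its choices from config.yaml.
provider = st.selectbox("Provider", ["ollama", "azure_openai"])
model = st.selectbox(
    "Model",
    {"ollama": ["llama2", "mistral", "codellama"],
     "azure_openai": ["gpt-4", "gpt-35-turbo"]}[provider],
)
draft = st.text_area("System prompt to refine")

if st.button("Refine") and draft:
    # In the real app, this is where the selected provider would be called.
    st.code(f"refine(provider={provider!r}, model={model!r})")
```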
All LLM calls are logged to `llm_calls.log` in JSON format. This file can be used for monitoring, debugging, and analysis of LLM interactions.
{"message": "LLM call successful", "prompt_id": "sp-4f8c2e", "prompt_version": "0.1.0", "model_used": "llama2", "tokens_used": 123, "latency": 0.543, "outcome": "success", "time": "2025-10-23T10:30:00.123456"}