An agentic application that analyzes user comments for profanity, sentiment, and relevance to movies or books. It fetches additional information from OMDb and provides a simple API and UI to interact with the system.
- `src/`: Contains the core application logic.
  - `main.py`: The FastAPI application.
  - `agent.py`: The Pydantic AI agent for comment analysis.
  - `database.py`: A simple in-memory database to store comment analysis results.
- `gradio_app.py`: The Gradio web interface.
- `mcp_client.py`: The Model Context Protocol (MCP) client used to test the API.
- `requirements.txt`: Python dependencies.
- `.env.example`: Example environment file. Copy it to `.env` and fill in your details.
- `start.sh`: Script to start the FastAPI server and Gradio app.
- `stop.sh`: Script to stop the running processes.
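Since `database.py` is described as a simple in-memory store, it likely amounts to little more than a dict keyed by a generated ID. A minimal sketch of that idea — the field names and the `save_analysis`/`get_analysis` helpers here are illustrative, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommentAnalysis:
    # Illustrative fields; the real model lives in src/agent.py.
    comment: str
    has_profanity: bool
    sentiment: str
    is_relevant: bool

# Module-level dict acting as the in-memory "database".
_store: dict[int, CommentAnalysis] = {}
_next_id = 0

def save_analysis(analysis: CommentAnalysis) -> int:
    """Store a result and return its generated ID."""
    global _next_id
    _next_id += 1
    _store[_next_id] = analysis
    return _next_id

def get_analysis(analysis_id: int) -> Optional[CommentAnalysis]:
    """Look up a stored result, or None if it does not exist."""
    return _store.get(analysis_id)
```

An in-memory store like this keeps the project dependency-free, at the cost of losing all results on restart.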
- Install dependencies:
Option A: Using uv (Recommended - Faster)
uv is an extremely fast Python package installer and resolver.
  - Install uv:

    ```sh
    # macOS/Linux
    curl -LsSf https://astral.sh/uv/install.sh | sh

    # Windows
    powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

    # Or with pip
    pip install uv
    ```
  - Create and activate a virtual environment with uv:

    ```sh
    uv venv
    source .venv/bin/activate   # macOS/Linux
    # .venv\Scripts\activate    # Windows
    ```

  - Install dependencies with uv:

    ```sh
    uv pip install -r requirements.txt
    ```
Option B: Using pip (Traditional)

```sh
pip install -r requirements.txt
```
- Set up Ollama (Local LLM):
This project uses Ollama to run a local LLM (llama3.1) for comment analysis.
  - Install Ollama:
    - macOS/Linux:

      ```sh
      curl -fsSL https://ollama.ai/install.sh | sh
      ```

    - Windows: download from https://ollama.ai/download
    - Or visit https://ollama.ai for other installation methods
  - Pull the llama3.1 model:

    ```sh
    ollama pull llama3.1:latest
    ```
  - Start Ollama (if not already running):

    ```sh
    ollama serve
    ```

    Ollama will run on `http://localhost:11434` by default.

  - Verify Ollama is running:

    ```sh
    curl http://localhost:11434/api/tags
    ```

    You should see a list of available models, including `llama3.1:latest`.
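The `/api/tags` endpoint returns JSON with a `models` array whose entries carry a `name` field, so the verification step can also be scripted. A small sketch — only that response shape is assumed here:

```python
import json
import urllib.request

def has_model(tags: dict, name: str) -> bool:
    """Return True if a model with the given name appears in an /api/tags response."""
    return any(m.get("name") == name for m in tags.get("models", []))

def ollama_has_model(name: str, base_url: str = "http://localhost:11434") -> bool:
    """Query a running Ollama instance for the given model."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return has_model(json.load(resp), name)
```

With `ollama serve` running, `ollama_has_model("llama3.1:latest")` should return `True` after the pull above.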
Alternative: Using OpenAI instead of Ollama
If you prefer to use OpenAI's API instead of a local model, modify `src/agent.py`:

```python
# Replace the ollama_model configuration with:
ai_analyzer = Agent(
    "openai:gpt-4-turbo",  # or "openai:gpt-3.5-turbo"
    output_type=CommentAnalysis,
)
```
And set your OpenAI API key in `.env`:

```
OPENAI_API_KEY=your_openai_api_key_here
```

- Set up environment variables:
  - Copy `.env.example` to a new file named `.env`.
  - Get a free API key from OMDb API.
  - Add your API key to the `.env` file:

    ```
    OMDB_API_KEY=YOUR_API_KEY_HERE
    ```
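Once `.env` is populated, the application can read the key from the environment. A sketch using only the standard library — the project itself may instead load `.env` through a helper such as python-dotenv:

```python
import os

def get_omdb_key() -> str:
    """Read the OMDb API key from the environment, failing loudly if it is unset."""
    key = os.environ.get("OMDB_API_KEY")
    if not key:
        raise RuntimeError(
            "OMDB_API_KEY is not set; copy .env.example to .env and fill it in"
        )
    return key
```

Failing at startup with a clear message beats a cryptic OMDb "invalid key" response later in a request.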
- To start everything:
  ```sh
  ./start.sh
  ```

  - FastAPI will be available at `http://127.0.0.1:8000`.
  - Gradio will be available at `http://127.0.0.1:7860`.
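With the server up, you can exercise the API over plain HTTP. The sketch below builds a POST request with the standard library; the `/analyze` route and the `comment` payload field are assumptions, so check `src/main.py` (or FastAPI's auto-generated docs at `http://127.0.0.1:8000/docs`) for the real routes:

```python
import json
import urllib.request

def build_analyze_request(comment: str, base_url: str = "http://127.0.0.1:8000"):
    """Construct (but do not send) a POST request to the hypothetical /analyze route."""
    return urllib.request.Request(
        f"{base_url}/analyze",
        data=json.dumps({"comment": comment}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(build_analyze_request("Loved the movie!"))
# sends the request once ./start.sh has the server running.
```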
- To stop everything:
  ```sh
  ./stop.sh
  ```