- Install Ollama
  - Download from Ollama.com
  - Pull your desired models from the Ollama library
- Edit my-ai.py
  - Open `my-ai.py` and update the `model_list` with the models you want to use
  - Set your preferred `multimodal_model` (for image processing)
  - Update the `basepath` to your desired folders for logs, images, text, and context files
- Install the Python dependencies
  - Open a terminal and run:

    ```bash
    sudo apt install python3-pip
    pip install ollama
    ```

  - `textwrap` and `time` are part of the Python standard library and do not need to be installed with pip

To use the models configured in this project, download them with:
```bash
ollama pull llama3
ollama pull llava
ollama pull gemma2
ollama pull qwen2
ollama pull phi3:medium
ollama pull codellama
ollama pull dolphin-llama3
```

Run the program with:

```bash
python3 my-ai.py
```

- Multi-model support - Choose from multiple AI models
- Multi-line input - Enter multiple lines by pressing Enter twice
- Image processing - Upload images with the `/i` command (requires a multimodal model)
- Context/RAG - Upload text context with the `/c` command
- Session customization - Fine-tune the AI role and behavior
- Conversation history - Automatic logging and context management
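The program presumably drives these features through the `ollama` Python library installed above. A minimal sketch of one chat turn, where `build_messages` and `ask` are hypothetical helper names rather than the actual code in `my-ai.py`:

```python
def build_messages(prompt, history=None, image_path=None):
    """Assemble the message list for one chat turn.

    `history` is a list of prior {'role': ..., 'content': ...} turns; an image
    path is attached only when a multimodal model (e.g. llava) will be used.
    """
    messages = list(history or [])
    turn = {"role": "user", "content": prompt}
    if image_path:
        turn["images"] = [image_path]  # the ollama library accepts file paths here
    messages.append(turn)
    return messages


def ask(model, prompt, history=None, image_path=None):
    """Send one prompt to the local Ollama server and return the reply text."""
    import ollama  # pip install ollama; imported lazily so build_messages works without it

    response = ollama.chat(model=model, messages=build_messages(prompt, history, image_path))
    return response["message"]["content"]


if __name__ == "__main__":
    # Requires a running Ollama server on localhost:11434 and a pulled model.
    print(ask("llama3", "Say hello in one sentence."))
```

Passing the full `history` back on each call is what keeps the conversation context alive within a session.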
| Command | Function |
|---|---|
| `/i` | Upload an image (multimodal models only) |
| `/c` | Upload context text file |
| `/x` | Clear context and start fresh |
| `/r` | Redo current entry |
| `/?` | Show help information |
| `/bye` | Exit and optionally save context |
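Commands like these are typically routed through a small dispatch table. A sketch under the assumption that each slash command maps to an action; the names here are hypothetical and stand in for the handlers in `my-ai.py`:

```python
# Maps each slash command from the table above to an action name;
# in the real program each entry would invoke a handler function instead.
COMMANDS = {
    "/i": "upload image",
    "/c": "upload context file",
    "/x": "clear context",
    "/r": "redo entry",
    "/?": "show help",
    "/bye": "exit",
}


def dispatch(line):
    """Return the action for a slash command, or None for an ordinary prompt."""
    stripped = line.strip()
    cmd = stripped.split(maxsplit=1)[0] if stripped else ""
    return COMMANDS.get(cmd)
```

A lookup table keeps the input loop short and makes adding a new command a one-line change.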
Edit the following variables in `my-ai.py`:

- `model_list` - Add or remove your preferred models
- `multimodal_model` - Set which model to use for image processing
- `basepath` - Change to your desired working directory
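Near the top of `my-ai.py`, the three variables might look like this; the values shown are illustrative examples, not the shipped defaults:

```python
# Models offered at startup -- each must already be pulled with `ollama pull`
model_list = ["llama3", "gemma2", "qwen2", "phi3:medium", "codellama", "dolphin-llama3"]

# Model used when an image is attached with /i
multimodal_model = "llava"

# Root folder that holds the images/, logs/, text/ and context/ subfolders
basepath = "/home/user/my-ai"
```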
Default folder structure created:

- `images/` - For image uploads
- `logs/` - For query logs
- `text/` - For text files
- `context/` - For context summaries
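Creating that structure on first run can be sketched as follows, assuming the `basepath` variable from the configuration above; `ensure_folders` is a hypothetical name, not necessarily the function in `my-ai.py`:

```python
import os

SUBFOLDERS = ("images", "logs", "text", "context")


def ensure_folders(basepath):
    """Create the working subfolders under basepath if they do not already exist."""
    paths = [os.path.join(basepath, name) for name in SUBFOLDERS]
    for path in paths:
        os.makedirs(path, exist_ok=True)  # no error if the folder is already there
    return paths
```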
- The program uses local Ollama models running on `http://localhost:11434`
- Context is maintained throughout your session and can be saved upon exit
- Query logs are automatically saved with timestamps
- Multiline input requires hitting Enter twice to send
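The Enter-twice behavior can be sketched as a loop that collects lines until a blank one arrives; this is a simplified stand-in for the actual input handling in `my-ai.py`:

```python
def read_multiline(readline=input):
    """Collect lines until a blank line (i.e. Enter pressed twice), then join them."""
    lines = []
    while True:
        line = readline()
        if line == "":  # the second Enter produces an empty line, ending the message
            break
        lines.append(line)
    return "\n".join(lines)
```

Taking the read function as a parameter keeps the loop testable without a terminal attached.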