# How to Use Your Local AI
1. Pre-install Ollama from Ollama.ai and pre-load your desired models.
2. Then edit the my-ai.py file to include the models you want to use.
3. Run the program in your terminal.
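
Step 2 asks you to edit `my-ai.py`, whose contents are not shown here. As a hypothetical sketch, its model list presumably looks something like this (the names `MODELS` and `choose_model` are illustrative assumptions, not the file's actual code):

```python
# Hypothetical sketch of the model table assumed to live in my-ai.py.
# Edit the entries to match the models you have pulled with Ollama.
MODELS = {
    "1": "llama3",          # general-purpose chat
    "2": "llava",           # image understanding
    "3": "phi3:medium",     # smaller, faster model
    "4": "codellama",       # code generation
    "5": "dolphin-llama3",  # llama3 fine-tune
}

def choose_model(key: str) -> str:
    """Return the model name for a menu key, falling back to llama3."""
    return MODELS.get(key, "llama3")
```

Any model you list must first be pulled with `ollama pull <name>`, as described below.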

## Pre-installing Ollama
Install these from the command line.
```bash
# Install Ollama itself (check Ollama.ai for the current install command):
curl -fsSL https://ollama.com/install.sh | sh

# Install Python's package manager and the Ollama Python client:
sudo apt install python3-pip
pip install ollama

# Note: textwrap and time are part of Python's standard library,
# so they need no separate installation.
```

## Pre-Load Desired Models
After installing Ollama, to match the pre-defined models in the my-ai.py file, download the models as follows:
```bash
ollama pull llama3
ollama pull llava
ollama pull phi3:medium
ollama pull codellama
ollama pull dolphin-llama3
```

You can browse [the Ollama library](https://ollama.com/library) for more models.

### Note regarding model sizes
You will not be able to use a model size larger than your available RAM.
For example, if you have 32GB of RAM but 16GB (50%) of it is in use by other tasks — say a few browser windows with multiple tabs, a couple of apps, and VS Code — then you can only use a model that is under 16GB.
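
The arithmetic above can be sketched as a small check (the function `fits_in_ram` is our illustration, not part of my-ai.py):

```python
def fits_in_ram(model_gb: float, total_gb: float, used_gb: float) -> bool:
    """Rough check: the model must fit in the RAM not used by other tasks."""
    return model_gb < total_gb - used_gb

# The README's example: 32 GB total, 16 GB already in use by other tasks.
fits_in_ram(10, 32, 16)  # True  — a 10 GB model fits in the free 16 GB
fits_in_ram(20, 32, 16)  # False — a 20 GB model does not
```

On Linux you can see your free memory with `free -h`. Note this is only a rough lower bound: inference also needs working memory beyond the model weights.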

### Running a large model
If your model takes up a significant portion of your available RAM, it will operate quite slowly.

For example, a 10GB model on a 16GB-RAM computer that only has 12GB free will run, but may take a couple of minutes to complete each prompt.

Having or setting up a private, dedicated server (preferably offline, to keep everything local) is ideal for heavier use.

## Running the program
```bash
python3 my-ai.py
```

Then follow the on-screen prompts.
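
Under the hood, my-ai.py presumably sends each prompt through the Ollama Python client. A minimal sketch of that round trip, assuming the standard `ollama.chat` call — `build_chat_payload` is a hypothetical helper, not the program's actual code:

```python
def build_chat_payload(model: str, prompt: str) -> dict:
    """Assemble the arguments for a single-turn ollama.chat() call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# With Ollama running and the model pulled, the call would look like:
#   import ollama
#   response = ollama.chat(**build_chat_payload("llama3", "Say hello."))
#   print(response["message"]["content"])
```

The `messages` list can be extended with prior turns (alternating `user` and `assistant` roles) to keep conversational context across prompts.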