Porting the Getting Started example to Python.
The following steps will get you up and running on your machine.
- Install Ollama and run Llama 3.2:
- Download and install: https://ollama.com
- Verify the install and start Llama in a terminal window:
~ % ollama run llama3.2
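Beyond the interactive `ollama run` check above, you can also confirm the server is answering over HTTP. This is a minimal sketch against Ollama's documented `/api/generate` endpoint on its default port 11434; the prompt text is just an example.

```python
import json
import urllib.request

# Smoke-test payload for Ollama's /api/generate endpoint.
# "stream": False asks for one JSON object instead of a chunked stream.
payload = {
    "model": "llama3.2",
    "prompt": "Say hello in one word.",
    "stream": False,
}
request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

if __name__ == "__main__":
    try:
        with urllib.request.urlopen(request, timeout=2) as response:
            # The generated text lives under the "response" key.
            print(json.loads(response.read())["response"])
    except OSError as exc:
        print(f"Ollama not reachable: {exc}")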
- Clone this project into a local directory:
~ % git clone <url>
- Install Python (if necessary)
It's recommended that you set up a virtual environment before installing Python modules. You can see how to do that here:
If you would rather install Python globally, follow the instructions here:
- Download and install: https://www.python.org/downloads/
- Verify the install in your terminal:
~ % python -V
- If the installation succeeded, the installed version number will be printed
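If you prefer to check from code rather than the terminal, here is a small sketch using the standard library. The minimum version shown is illustrative, not a requirement stated by this project.

```python
import sys

# Illustrative minimum; adjust to whatever your dependencies require.
MINIMUM = (3, 10)

def check_python(minimum=MINIMUM):
    """Return True when the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

if __name__ == "__main__":
    status = "OK" if check_python() else "too old, please upgrade"
    print(f"Python {sys.version.split()[0]}: {status}")
```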
- Install Python modules
- Navigate to the project `part1/getting_started_python` directory in your terminal and run:
~/developers-guide-to-ai/part1/getting_started_python % pip install -r requirements.txt
- Navigate to the project `part1/client` directory in your terminal and run:
~/developers-guide-to-ai/part1/client % npm install
- Launch the server
- In a terminal, navigate to the `part1/getting_started_python` directory and run the following command:
~/developers-guide-to-ai/part1/getting_started_python % fastapi dev main.py
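The server's job is to stream the model's answer to the client piece by piece rather than waiting for the full response. The sketch below illustrates that producer/consumer idea in plain Python; the function names are hypothetical, and the real implementation lives in `main.py`.

```python
def stream_tokens(reply):
    """Yield a reply chunk by chunk, the way the server streams tokens
    from the model instead of returning one complete answer."""
    for word in reply.split():
        yield word + " "

def consume(chunks):
    """Stand-in for the browser client: append each chunk as it arrives."""
    received = []
    for chunk in chunks:
        received.append(chunk)
    return "".join(received)

if __name__ == "__main__":
    print(consume(stream_tokens("Streaming sends partial output early")))
```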
- Launch the client
- In a separate terminal, navigate to the `part1/client` directory and run the following command:
~/developers-guide-to-ai/part1/client % npm run dev
- Open your web browser and visit: http://localhost:5173
- Input a question and click `Call your API` to see the response streamed from Llama 3.2 3B