Your personal task breakdown assistant, powered by FastAPI and a local LLM (Llama 3.2 via Ollama).
- FastAPI backend
- Local LLM (Ollama) integration
- Smart prompt engineering (goal → steps)
- Web frontend (React Native + Expo export)
- Runs 100% offline on a local server
- Deployable to Render, Vercel, or your own server
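The "goal → steps" idea can be sketched in plain Python: wrap the user's goal in an instruction that asks the model for a numbered list, then parse that list out of the raw response. This is an illustrative sketch, not the actual code in `main.py` — the prompt wording and the `build_prompt`/`parse_steps` helpers are assumptions.

```python
import re

def build_prompt(goal: str) -> str:
    """Wrap the user's goal in an instruction asking for a numbered list of steps.
    (Hypothetical prompt wording, for illustration only.)"""
    return (
        "Break the following goal into a short numbered list of concrete steps.\n"
        f"Goal: {goal}\n"
        "Respond with one step per line, formatted as '1. ...', '2. ...'."
    )

def parse_steps(llm_response: str) -> list[str]:
    """Extract the step text from a numbered-list response."""
    steps = []
    for line in llm_response.splitlines():
        match = re.match(r"\s*\d+[.)]\s+(.*)", line)
        if match:
            steps.append(match.group(1).strip())
    return steps

# Parsing a sample model response:
sample = "1. Choose a trail\n2. Check the weather\n3. Pack supplies"
print(parse_steps(sample))  # ['Choose a trail', 'Check the weather', 'Pack supplies']
```

Parsing the numbered list (rather than trusting free-form output) keeps the API response structured even when the model adds extra chatter around the steps.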
```
taskwizard-server/
├── main.py            # FastAPI server logic
├── requirements.txt   # Python dependencies
├── .gitignore
├── LICENSE
└── README.md
```
- Python 3.10+
- Ollama (local LLM runner)
- FastAPI, Uvicorn, Requests
```bash
# Clone the repo
git clone https://github.com/prjwrld/taskwizard-server.git
cd taskwizard-server

# Install dependencies
pip install -r requirements.txt

# Start Ollama (make sure it's running)
ollama run llama3.2

# Run the FastAPI server
uvicorn main:app --reload --port 8000
```
- Visit http://localhost:8000/docs to test the API in the Swagger UI.
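Besides the Swagger UI, the server can be called programmatically. A minimal stdlib-only sketch is below; note that the `/breakdown` route and the `{"goal": ...}` payload are assumptions, since `main.py` isn't shown here — check the Swagger UI for the actual route and schema.

```python
import json
import urllib.request

def build_request(goal: str, url: str = "http://localhost:8000/breakdown") -> urllib.request.Request:
    """Build a POST request for the server.
    The /breakdown path and payload shape are hypothetical examples."""
    body = json.dumps({"goal": goal}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Plan a weekend hiking trip")
print(req.get_full_url())  # http://localhost:8000/breakdown

# To actually send it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```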
Credits
- FastAPI
- Ollama
- Llama 3.2 model