
A complete starter project for building voice AI apps with LiveKit Agents for Python.
The starter project includes:
- A simple voice AI assistant based on the Voice AI quickstart
- Voice AI pipeline based on OpenAI, Cartesia, and Deepgram
- Simple integration points to swap in your preferred LLM, STT, and TTS, or a realtime model such as the OpenAI Realtime API
- Eval suite based on the LiveKit Agents testing & evaluation framework
- LiveKit Turn Detector for contextually-aware speaker detection, with multilingual support
- LiveKit Cloud enhanced noise cancellation
- Integrated metrics and logging
This starter app is compatible with any custom web/mobile frontend or SIP-based telephony.
Clone the repository and install dependencies to a virtual environment:

```shell
cd agent-starter-python
uv sync
```
Set up the environment by copying `.env.example` to `.env` and filling in the required values:

- `LIVEKIT_URL`: Use LiveKit Cloud or run your own
- `LIVEKIT_API_KEY`
- `LIVEKIT_API_SECRET`
- `OPENAI_API_KEY`: Get a key or use your preferred LLM provider
- `DEEPGRAM_API_KEY`: Get a key or use your preferred STT provider
- `CARTESIA_API_KEY`: Get a key or use your preferred TTS provider
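A missing or empty key typically surfaces only as a runtime failure, so it can help to fail fast at startup. Below is a minimal, hypothetical helper (not part of the starter project) that checks the variables above using only the standard library:

```python
import os

# Variables required by the starter's default pipeline (see .env.example).
REQUIRED_VARS = [
    "LIVEKIT_URL",
    "LIVEKIT_API_KEY",
    "LIVEKIT_API_SECRET",
    "OPENAI_API_KEY",
    "DEEPGRAM_API_KEY",
    "CARTESIA_API_KEY",
]

def missing_env(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

If you swap providers (for example, replacing Deepgram or Cartesia), adjust the list to match the keys your plugins actually read.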
You can load the LiveKit environment automatically using the LiveKit CLI:
```shell
lk app env -w .env
```
Before your first run, you must download certain models, such as Silero VAD and the LiveKit turn detector:

```shell
uv run python src/agent.py download-files
```
Next, run this command to speak to your agent directly in your terminal:

```shell
uv run python src/agent.py console
```
To run the agent for use with a frontend or telephony, use the `dev` command:

```shell
uv run python src/agent.py dev
```
In production, use the `start` command:

```shell
uv run python src/agent.py start
```
Get started quickly with our pre-built frontend starter apps, or add telephony support:
| Platform | Link | Description |
|---|---|---|
| Web | livekit-examples/agent-starter-react | Web voice AI assistant with React & Next.js |
| iOS/macOS | livekit-examples/agent-starter-swift | Native iOS, macOS, and visionOS voice AI assistant |
| Flutter | livekit-examples/agent-starter-flutter | Cross-platform voice AI assistant app |
| React Native | livekit-examples/voice-assistant-react-native | Native mobile app with React Native & Expo |
| Android | livekit-examples/agent-starter-android | Native Android app with Kotlin & Jetpack Compose |
| Web Embed | livekit-examples/agent-starter-embed | Voice AI widget for any website |
| Telephony | 📚 Documentation | Add inbound or outbound calling to your agent |
For advanced customization, see the complete frontend guide.
This project includes a complete suite of evals, based on the LiveKit Agents testing & evaluation framework. To run them, use `pytest`:

```shell
uv run pytest
```
Once you've started your own project based on this repo, you should:

- **Check in your `uv.lock`**: This file is currently untracked in the template, but you should commit it to your repository for reproducible builds and proper configuration management. (The same applies to `livekit.toml`, if you run your agents in LiveKit Cloud.)
- **Remove the git tracking test**: Delete the "Check files not tracked in git" step from `.github/workflows/tests.yml`, since you'll now want `uv.lock` to be tracked. The step exists only for development purposes in the template repo itself.
- **Add your own repository secrets**: You must add a secret for `OPENAI_API_KEY` (or your other LLM provider) so that the tests can run in CI.
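GitHub Actions exposes repository secrets to a job through the `secrets` context. As a rough sketch (the actual step names in the template's `.github/workflows/tests.yml` may differ), the eval step would pass the secret in as an environment variable like this:

```yaml
# Hypothetical excerpt from .github/workflows/tests.yml
- name: Run evals
  run: uv run pytest
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```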
This project is production-ready and includes a working `Dockerfile`. To deploy it to LiveKit Cloud or another environment, see the deploying to production guide.
This project is licensed under the MIT License - see the LICENSE file for details.