CodeGate is a local gateway that makes AI coding assistants safer. It ensures AI-generated recommendations adhere to best practices while safeguarding your code's integrity and protecting your privacy. With CodeGate, you can confidently leverage AI in your development workflow without compromising security or productivity. CodeGate is designed to work seamlessly with coding assistants, so you can safely enjoy all the benefits of AI code generation.
CodeGate is developed by Stacklok, a group of security experts with many years of experience building developer-friendly, open source security software tools and platforms.
Check out the CodeGate website and documentation to learn more.
CodeGate is in active development and subject to rapid change.
- Features may change frequently
- Expect possible bugs and breaking changes
- Contributions, feedback, and testing are highly encouraged and welcomed!
In today's world where AI coding assistants are becoming ubiquitous, security can't be an afterthought. CodeGate sits between you and AI, actively protecting your development process by:
- 🔒 Preventing accidental exposure of secrets and sensitive data
- 🛡️ Ensuring AI suggestions follow secure coding practices
- ⚠️ Blocking recommendations of known malicious or deprecated libraries
- 🔍 Providing real-time security analysis of AI suggestions
CodeGate works with multiple development environments and AI providers.
- GitHub Copilot with Visual Studio Code and JetBrains IDEs
- Continue with Visual Studio Code and JetBrains IDEs
With Continue, you can choose from several leading AI model providers:
- 💻 Local LLMs with Ollama and llama.cpp (run AI completely offline!)
- ⚡ vLLM (OpenAI-compatible mode, including OpenRouter)
- 🤖 Anthropic API
- 🧠 OpenAI API
🔮 Many more on the way!
Unlike E.T., your code never phones home! 🛸 CodeGate is designed with privacy at its core:
- 🏠 Everything stays on your machine
- 🚫 No external data collection
- 🔐 No calling home or telemetry
- 💪 Complete control over your data
Check out the quickstart guides to get up and running quickly!
- Quickstart guide for GitHub Copilot with VS Code
- Quickstart guide for Continue with VS Code and Ollama
Simply open the Continue or Copilot chat in your IDE to start interacting with your AI assistant - now protected by CodeGate!
Refer to the CodeGate docs for more information:
Check out the developer reference guides:
# Get the code
git clone https://github.com/stacklok/codegate.git
cd codegate
# Set up virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dev dependencies
pip install -e ".[dev]"
By default, Weaviate picks the default route as the IP address for cluster nodes, which can cause problems on machines with multiple network interfaces. As a workaround, make localhost the default route and re-add two /1 routes that together cover the entire IPv4 range, so regular traffic still reaches your real gateway:
sudo route delete default
sudo route add default 127.0.0.1
sudo route add -net 0.0.0.0/1 <public_ip_gateway>
sudo route add -net 128.0.0.0/1 <public_ip_gateway>
To run the unit tests, execute this command:
pytest
To run the integration tests, create a .env file in the repo root directory and add the following properties to it:
ENV_OPENAI_KEY=<YOUR_KEY>
ENV_VLLM_KEY=<YOUR_KEY>
ENV_ANTHROPIC_KEY=<YOUR_KEY>
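These three properties can be written in one step from the repo root; this is just a convenience sketch, and the placeholder values must be replaced with your real API keys:

```shell
# Write the .env file expected by the integration tests
# (replace the <YOUR_KEY> placeholders with real API keys)
cat > .env <<'EOF'
ENV_OPENAI_KEY=<YOUR_KEY>
ENV_VLLM_KEY=<YOUR_KEY>
ENV_ANTHROPIC_KEY=<YOUR_KEY>
EOF
```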
Then the integration tests can be executed by running:
python tests/integration/integration_tests.py
To build the CodeGate container image locally, run:
make image-build
# Basic usage with local image
docker run -p 8989:8989 -p 9090:80 codegate:latest
# With pre-built pulled image
docker pull ghcr.io/stacklok/codegate:latest
docker run --name codegate -d -p 8989:8989 -p 9090:80 ghcr.io/stacklok/codegate:latest
# Mount a volume at /app/codegate_volume to persist data
# Llama.cpp models can be stored under the /models subdirectory
# A SQLite DB with the messages and alerts is stored under the /db subdirectory
docker run --name codegate -d -v /path/to/volume:/app/codegate_volume -p 8989:8989 -p 9090:80 ghcr.io/stacklok/codegate:latest
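If you want to pre-create the host directory before mounting it, a minimal sketch (the `./codegate_volume` path is just an example; CodeGate creates the SQLite database itself on first run):

```shell
# Prepare an example host directory to mount at /app/codegate_volume:
#   models/ - optional llama.cpp model files
#   db/     - SQLite database written by CodeGate
mkdir -p ./codegate_volume/models ./codegate_volume/db
ls ./codegate_volume
```

Then pass it to docker with `-v $(pwd)/codegate_volume:/app/codegate_volume`.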
- CODEGATE_VLLM_URL: URL for the inference engine (defaults to https://inference.codegate.ai)
- CODEGATE_OPENAI_URL: URL for OpenAI inference engine (defaults to https://api.openai.com/v1)
- CODEGATE_ANTHROPIC_URL: URL for Anthropic inference engine (defaults to https://api.anthropic.com/v1)
- CODEGATE_OLLAMA_URL: URL for Ollama inference engine (defaults to http://localhost:11434/api)
- CODEGATE_APP_LOG_LEVEL: log verbosity for the CodeGate server (defaults to WARNING; can be ERROR/WARNING/INFO/DEBUG)
- CODEGATE_LOG_FORMAT: log output format for the CodeGate server (defaults to TEXT; can be JSON/TEXT)
docker run -p 8989:8989 -p 9090:80 -e CODEGATE_OLLAMA_URL=http://1.2.3.4:11434/api ghcr.io/stacklok/codegate:latest
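When overriding several settings at once, collecting them in a file can be tidier than repeating `-e` flags; this sketch uses docker's standard `--env-file` option (the file name `codegate.env` is arbitrary):

```shell
# Gather configuration overrides in one file
cat > codegate.env <<'EOF'
CODEGATE_OLLAMA_URL=http://1.2.3.4:11434/api
CODEGATE_APP_LOG_LEVEL=DEBUG
CODEGATE_LOG_FORMAT=JSON
EOF
# Then start the container with it:
# docker run -p 8989:8989 -p 9090:80 --env-file codegate.env ghcr.io/stacklok/codegate:latest
```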
We welcome contributions! Whether it's bug reports, feature requests, or code contributions, please feel free to contribute to making CodeGate better.
Start by reading the Contributor Guidelines.
This project is licensed under the terms specified in the LICENSE file.