🤖 Copilot OpenAI API


A FastAPI proxy server that seamlessly turns GitHub Copilot's chat completion/embeddings capabilities into an OpenAI-compatible API service.

✨ Key Features

🚀 Advanced Integration

  • Seamless GitHub Copilot chat completion API proxy
  • Real-time streaming response support
  • High-performance request handling

πŸ” Security & Reliability

  • Secure authentication middleware
  • Automatic token management and refresh
  • Built-in CORS support for web applications

💻 Universal Compatibility

  • Cross-platform support (Windows and Unix-based systems)
  • Docker containerization ready
  • Flexible deployment options

🧪 Experimental Features

  • Anthropic API compatibility

🚀 Prerequisites

  • Python 3.10+
  • pip (Python package manager)
  • GitHub Copilot subscription
  • GitHub authentication token

📦 Installation

  1. Clone the repository:
git clone https://github.com/yuchanns/copilot-openai-api.git
cd copilot-openai-api
  2. Install dependencies:
# Install PDM first if you haven't
pip install -U pdm

# Install project dependencies using PDM
pdm install --prod

⚙️ Configuration

  1. Set up environment variables:
# Windows
set COPILOT_TOKEN=your_access_token_here
set COPILOT_SERVER_PORT=9191          # Optional: Server port (default: 9191)
set COPILOT_SERVER_WORKERS=4          # Optional: Number of workers (default: min(CPU_COUNT, 4))

# Unix/Linux/macOS
export COPILOT_TOKEN=your_access_token_here
export COPILOT_SERVER_PORT=9191       # Optional: Server port (default: 9191)
export COPILOT_SERVER_WORKERS=4       # Optional: Number of workers (default: min(CPU_COUNT, 4))

πŸ“ Note:

  • COPILOT_TOKEN: Access token that clients must present for authentication. If not set, a random token will be generated.
  • COPILOT_SERVER_PORT: Optional. Controls which port the server listens on.
  • COPILOT_SERVER_WORKERS: Optional. Controls the number of worker processes.
  2. Configure GitHub Copilot:
    • Windows users: Check %LOCALAPPDATA%/github-copilot/
    • Unix/Linux/macOS users: Check ~/.config/github-copilot/

Required configuration files:

  • apps.json or hosts.json (containing GitHub OAuth token)
  • token.json (will be created automatically)

💡 How to get a valid GitHub Copilot configuration?

Install and sign in with any official GitHub Copilot plugin. After signing in, the configuration files will be created automatically in your system's config directory.

🚀 Usage

Choose between local or Docker deployment:

🖥️ Local Run

Start the server with:

pdm dev

🐳 Docker Run

Launch the containerized version:

# Unix/Linux/macOS
docker run --rm -p 9191:9191 \
    -v ~/.config/github-copilot:/home/appuser/.config/github-copilot \
    ghcr.io/yuchanns/copilot-openai-api

# Windows
docker run --rm -p 9191:9191 ^
    -v %LOCALAPPDATA%/github-copilot:/home/appuser/.config/github-copilot ^
    ghcr.io/yuchanns/copilot-openai-api

The Docker setup:

  • Maps port 9191 to your host
  • Mounts your Copilot configuration
  • Provides identical functionality to local deployment

🔄 Making API Requests

Access the chat completion endpoint:

curl -X POST http://localhost:9191/v1/chat/completions \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello, Copilot!"}]
  }'
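
Because the endpoint is OpenAI compatible, the official openai Python SDK can also be pointed at the proxy. A minimal sketch, assuming the server runs on localhost:9191; the model name "gpt-4o" is only a placeholder, not confirmed by this repository:

from openai import OpenAI

# Point the OpenAI SDK at the local proxy instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:9191/v1",
    api_key="your_access_token_here",  # the COPILOT_TOKEN you configured
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; use whatever your Copilot plan exposes
    messages=[{"role": "user", "content": "Hello, Copilot!"}],
)
print(response.choices[0].message.content)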

Access the embeddings endpoint:

curl -X POST http://localhost:9191/v1/embeddings \
  -H "Authorization: Bearer your_access_token_here" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "copilot-text-embedding-ada-002",
    "input": ["The quick brown fox", "Jumped over the lazy dog"]
  }'
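
The embeddings endpoint works the same way through the SDK. A minimal sketch, reusing the model name from the curl example above:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:9191/v1", api_key="your_access_token_here")

# Mirrors the curl request above; each input string yields one embedding vector.
embeddings = client.embeddings.create(
    model="copilot-text-embedding-ada-002",
    input=["The quick brown fox", "Jumped over the lazy dog"],
)
print(len(embeddings.data), len(embeddings.data[0].embedding))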

🔌 API Reference

POST /v1/chat/completions

Proxies requests to GitHub Copilot's Completions API.

Required Headers:

  • Authorization: Bearer <your_access_token>
  • Content-Type: application/json

Request Body:

  • Follow GitHub Copilot chat completion API format

Response:

  • Streams responses directly from GitHub Copilot's API
  • Supports both streaming and non-streaming modes
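
To use streaming mode, set "stream": true in the request body (or stream=True with the openai SDK) and consume the chunks as they arrive. A minimal sketch; the model name is again a placeholder:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:9191/v1", api_key="your_access_token_here")

stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Write a short haiku about proxies."}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant message.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)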

POST /v1/embeddings

Proxies requests to GitHub Copilot's Embeddings API.

Required Headers:

  • Authorization: Bearer <your_access_token>
  • Content-Type: application/json

Request Body:

  • Follow GitHub Copilot embeddings API format

Response:

  • JSON response from GitHub Copilot's embeddings API

POST /v1/messages

Converts Anthropic API format to GitHub Copilot chat completion format.

Required Headers:

  • Authorization: Bearer <your_access_token>
  • Content-Type: application/json

Request Body:

  • Follow Anthropic API message format

Response:

  • Follow Anthropic API response format
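
A minimal request sketch in Anthropic message format, using Python's requests library; the model name reuses "claude-sonnet-4" from the Claude Code example below, and max_tokens is part of the Anthropic request format:

import requests

# Anthropic-style request proxied through the /v1/messages endpoint.
resp = requests.post(
    "http://localhost:9191/v1/messages",
    headers={
        "Authorization": "Bearer your_access_token_here",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-sonnet-4",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello, Copilot!"}],
    },
)
print(resp.json())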

✨ Integrate with Claude Code (experimental):

export ANTHROPIC_BASE_URL="http://localhost:9191" # Your Copilot OpenAI API server URL
export ANTHROPIC_AUTH_TOKEN="<your_access_token>"  # Your access token
export ANTHROPIC_MODEL="claude-sonnet-4"
export ANTHROPIC_SMALL_FAST_MODEL="claude-sonnet-4"

🔒 Authentication

Secure your endpoints:

  1. Set COPILOT_TOKEN in your environment
  2. Include in request headers:
    Authorization: Bearer your_access_token_here
    

⚠️ Error Handling

The server provides clear error responses:

  • 401: Missing/invalid authorization header
  • 403: Invalid access token
  • Other errors are propagated from GitHub Copilot API
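
Programmatic clients can branch on these status codes before reading the body. A small sketch with requests, assuming an intentionally wrong token:

import requests

resp = requests.post(
    "http://localhost:9191/v1/chat/completions",
    headers={"Authorization": "Bearer wrong_token", "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
if resp.status_code == 401:
    print("Missing or invalid authorization header")
elif resp.status_code == 403:
    print("Access token was rejected")
else:
    resp.raise_for_status()  # other upstream errors are propagated as-is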

🛡️ Security Best Practices

  • Configure CORS for your specific domains (default: *)
  • Safeguard your COPILOT_TOKEN and GitHub OAuth token
  • Built-in token management with concurrent access protection

📄 License

Licensed under the Apache License 2.0 - see the LICENSE file for details.
