A lightweight OpenAI-compatible proxy server for Maple/OpenSecret's TEE infrastructure. Works with any OpenAI client library while providing the security and privacy benefits of Trusted Execution Environment (TEE) processing.
- 100% OpenAI Compatible - Drop-in replacement for OpenAI API
- Secure TEE Processing - All requests processed in secure enclaves
- Streaming Support - Full Server-Sent Events streaming for chat completions
- Flexible Authentication - Environment variables or per-request API keys
- Zero Client Changes - Works with existing OpenAI client code
- Lightweight - Minimal overhead, maximum performance
- CORS Support - Ready for web applications
git clone <repository>
cd maple-proxy
cargo build --release
Add to your Cargo.toml:
[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
# Or if published to crates.io:
# maple-proxy = "0.1.0"
Set environment variables or use command-line arguments:
# Environment Variables
export MAPLE_HOST=127.0.0.1 # Server host (default: 127.0.0.1)
export MAPLE_PORT=3000 # Server port (default: 3000)
export MAPLE_BACKEND_URL=http://localhost:3000 # Maple backend URL (prod: https://enclave.trymaple.ai)
export MAPLE_API_KEY=your-maple-api-key # Default API key (optional)
export MAPLE_DEBUG=true # Enable debug logging
export MAPLE_ENABLE_CORS=true # Enable CORS
Or use CLI arguments:
cargo run -- --host 0.0.0.0 --port 8080 --backend-url https://enclave.trymaple.ai
Start the server:
cargo run
You should see:
Maple Proxy Server started successfully!
Available endpoints:
GET /health - Health check
GET /v1/models - List available models
POST /v1/chat/completions - Create chat completions (streaming)
curl http://localhost:8080/v1/models \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY"curl -N http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama3-3-70b",
"messages": [
{"role": "user", "content": "Write a haiku about technology"}
],
"stream": true
}'
Note: Maple currently only supports streaming responses.
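If a client needs the complete reply as one string, it can accumulate the streamed deltas itself. A minimal sketch using the official OpenAI Node.js client, assuming the proxy is running on localhost:8080 as in the example above:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_MAPLE_API_KEY',
  baseURL: 'http://localhost:8080/v1',
});

// Collect the streamed deltas into a single string.
async function completeOnce(prompt: string): Promise<string> {
  const stream = await openai.chat.completions.create({
    model: 'llama3-3-70b',
    messages: [{ role: 'user', content: prompt }],
    stream: true, // the backend only streams, so always request a stream
  });
  let text = '';
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? '';
  }
  return text;
}

completeOnce('Write a haiku about technology').then(console.log);
```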
You can also embed Maple Proxy in your own Rust application:
use maple_proxy::{Config, create_app};
use tokio::net::TcpListener;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize tracing
tracing_subscriber::fmt::init();
// Create config programmatically
let config = Config::new(
"127.0.0.1".to_string(),
8081, // Custom port
"https://enclave.trymaple.ai".to_string(),
)
.with_api_key("your-api-key-here".to_string())
.with_debug(true)
.with_cors(true);
// Create the app
let app = create_app(config.clone());
// Start the server
let addr = config.socket_addr()?;
let listener = TcpListener::bind(addr).await?;
println!("Maple proxy server running on http://{}", addr);
axum::serve(listener, app).await?;
Ok(())
}
Run the example:
cargo run --example library_usage
Using the OpenAI Python client:
import openai
client = openai.OpenAI(
api_key="YOUR_MAPLE_API_KEY",
base_url="http://localhost:8080/v1"
)
# Streaming chat completion
stream = client.chat.completions.create(
model="llama3-3-70b",
messages=[{"role": "user", "content": "Hello, world!"}],
stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
Using the OpenAI Node.js client:
import OpenAI from 'openai';
const openai = new OpenAI({
apiKey: 'YOUR_MAPLE_API_KEY',
baseURL: 'http://localhost:8080/v1',
});
const stream = await openai.chat.completions.create({
model: 'llama3-3-70b',
messages: [{ role: 'user', content: 'Hello!' }],
stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Or call the endpoints directly with curl:
# Health check
curl http://localhost:8080/health
# List models
curl http://localhost:8080/v1/models \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY"
# Streaming chat completion
curl -N http://localhost:8080/v1/chat/completions \
-H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama3-3-70b",
"messages": [{"role": "user", "content": "Tell me a joke"}],
"stream": true
}'
Maple Proxy supports two authentication methods:
Set MAPLE_API_KEY - all requests will use this key by default:
export MAPLE_API_KEY=your-maple-api-key
cargo run
Override the default key or provide one if not set:
curl -H "Authorization: Bearer different-api-key" ...Enable CORS for web applications:
export MAPLE_ENABLE_CORS=true
cargo run
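With CORS enabled, a browser app can call the proxy directly using the Fetch API. A rough sketch, assuming the proxy is reachable at http://localhost:8080, each user supplies their own Maple API key, and the stream uses the standard OpenAI-style data:/[DONE] SSE framing:

```typescript
// Browser-side sketch: stream a chat completion straight from the proxy.
async function streamFromProxy(apiKey: string, prompt: string): Promise<void> {
  const response = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'llama3-3-70b',
      messages: [{ role: 'user', content: prompt }],
      stream: true,
    }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`Request failed: ${response.status}`);
  }

  // Read the Server-Sent Events stream and print each content delta.
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial line for the next chunk
    for (const line of lines) {
      const data = line.trim();
      if (!data.startsWith('data:')) continue;
      const payload = data.slice(5).trim();
      if (payload === '[DONE]') return;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) console.log(delta);
    }
  }
}
```

Pull and run the official image from GitHub Container Registry: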
# Pull the latest image
docker pull ghcr.io/opensecretcloud/maple-proxy:latest
# Run with your API key
docker run -p 8080:8080 \
-e MAPLE_BACKEND_URL=https://enclave.trymaple.ai \
  ghcr.io/opensecretcloud/maple-proxy:latest
# Build the image locally
just docker-build
# Run the container
just docker-run
- Option A: Use pre-built image from GHCR
# In your docker-compose.yml, use:
image: ghcr.io/opensecretcloud/maple-proxy:latest
- Option B: Build your own image
docker build -t maple-proxy:latest .
- Run with docker-compose:
# Copy the example environment file
cp .env.example .env
# Edit .env with your configuration
vim .env
# Start the service
docker-compose up -d
When deploying Maple Proxy on a public network:
- DO NOT set MAPLE_API_KEY in the container environment
- Instead, require clients to pass their API key with each request:
# Client-side authentication for public proxy
client = OpenAI(
base_url="https://your-proxy.example.com/v1",
api_key="user-specific-maple-api-key" # Each user provides their own key
)
This ensures:
- Users' API keys remain private
- Multiple users can share the same proxy instance
- No API keys are exposed in container configurations
# Build image
just docker-build
# Run interactively
just docker-run
# Run in background
just docker-run-detached
# View logs
just docker-logs
# Stop container
just docker-stop
# Use docker-compose
just compose-up
just compose-logs
just compose-down
The Docker image:
- Uses multi-stage builds for minimal size (~130MB)
- Runs as non-root user for security
- Includes health checks
- Optimizes dependency caching with cargo-chef
- Supports both x86_64 and ARM architectures
# docker-compose.yml environment section
environment:
- MAPLE_BACKEND_URL=https://enclave.trymaple.ai # Production backend
- MAPLE_ENABLE_CORS=true # Enable for web apps
- RUST_LOG=info # Logging level
# - MAPLE_API_KEY=xxx # Only for private deployments!
Automated Builds (GitHub Actions)
- Every push to master automatically builds and publishes to ghcr.io/opensecretcloud/maple-proxy:latest
- Git tags (e.g., v1.0.0) trigger versioned releases
- Multi-platform images (linux/amd64, linux/arm64) built automatically
- No manual intervention needed - just push your code!
Local Development (Justfile)
# For local testing and debugging
just docker-build # Build locally
just docker-run # Test locally
just ghcr-push v1.2.3 # Manual push (requires login)
Use GitHub Actions for production releases, Justfile for local development.
cargo build
Run with debug logging:
export MAPLE_DEBUG=true
cargo run
Run the tests:
cargo test
Maple Proxy supports all models available in the Maple/OpenSecret platform, including:
- llama3-3-70b - Llama 3.3 70B parameter model
- And many others - check the /v1/models endpoint for the current list
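Any OpenAI client can query that endpoint. A small sketch with the Node.js SDK, assuming the proxy is running on localhost:8080:

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_MAPLE_API_KEY',
  baseURL: 'http://localhost:8080/v1',
});

// Print the id of every model the backend currently exposes.
async function listModels(): Promise<void> {
  const page = await openai.models.list();
  for (const model of page.data) {
    console.log(model.id);
  }
}

listModels();
```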
"No API key provided"
- Set the MAPLE_API_KEY environment variable or provide an Authorization: Bearer <key> header
"Failed to establish secure connection"
- Check your MAPLE_BACKEND_URL is correct
- Ensure your API key is valid
- Check network connectivity
Connection refused
- Make sure the server is running on the specified host/port
- Check firewall settings
Enable debug logging for detailed information:
export MAPLE_DEBUG=true
cargo run
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  OpenAI Client  │────▶│   Maple Proxy   │────▶│  Maple Backend  │
│  (Python/JS)    │     │   (localhost)   │     │      (TEE)      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
- Client makes standard OpenAI API calls to localhost
- Maple Proxy handles authentication and TEE handshake
- Requests are securely forwarded to Maple's TEE infrastructure
- Responses are streamed back to the client in OpenAI format
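Because a client only ever talks to the local proxy, a readiness check against the /health endpoint is enough before sending traffic. A small sketch, assuming the proxy listens on localhost:8080:

```typescript
// Poll the proxy's health endpoint until it responds, then start sending requests.
async function waitForProxy(url = 'http://localhost:8080/health', attempts = 10): Promise<void> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url);
      if (res.ok) return; // proxy is up and ready to forward requests
    } catch {
      // proxy not reachable yet; fall through and retry below
    }
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
  throw new Error('Maple proxy did not become healthy in time');
}
```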
MIT License - see LICENSE file for details.
Contributions welcome! Please feel free to submit a Pull Request.