
🍁 Maple Proxy

A lightweight OpenAI-compatible proxy server for Maple/OpenSecret's TEE infrastructure. Works with any OpenAI client library while providing the security and privacy benefits of Trusted Execution Environment (TEE) processing.

🚀 Features

  • 100% OpenAI Compatible - Drop-in replacement for OpenAI API
  • Secure TEE Processing - All requests processed in secure enclaves
  • Streaming Support - Full Server-Sent Events streaming for chat completions
  • Flexible Authentication - Environment variables or per-request API keys
  • Zero Client Changes - Works with existing OpenAI client code
  • Lightweight - Minimal overhead, maximum performance
  • CORS Support - Ready for web applications

📦 Installation

As a Binary

git clone https://github.com/opensecretcloud/maple-proxy
cd maple-proxy
cargo build --release

As a Library

Add to your Cargo.toml:

[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
# Or if published to crates.io:
# maple-proxy = "0.1.0"

βš™οΈ Configuration

Set environment variables or use command-line arguments:

# Environment Variables
export MAPLE_HOST=127.0.0.1                    # Server host (default: 127.0.0.1)
export MAPLE_PORT=3000                         # Server port (default: 3000)
export MAPLE_BACKEND_URL=http://localhost:3000         # Maple backend URL (prod: https://enclave.trymaple.ai)
export MAPLE_API_KEY=your-maple-api-key        # Default API key (optional)
export MAPLE_DEBUG=true                        # Enable debug logging
export MAPLE_ENABLE_CORS=true                  # Enable CORS

Or use CLI arguments:

cargo run -- --host 0.0.0.0 --port 8080 --backend-url https://enclave.trymaple.ai

πŸ› οΈ Usage

Using as a Binary

Start the Server

cargo run

You should see:

🚀 Maple Proxy Server started successfully!
📋 Available endpoints:
   GET  /health              - Health check
   GET  /v1/models           - List available models
   POST /v1/chat/completions - Create chat completions (streaming)

API Endpoints

Note: the examples below assume the proxy is listening on port 8080 (for example, started with --port 8080); with the default configuration, use port 3000 instead.

List Models

curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY"

Chat Completions (Streaming)

curl -N http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [
      {"role": "user", "content": "Write a haiku about technology"}
    ],
    "stream": true
  }'

Note: Maple currently only supports streaming responses.
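
Because only streaming is supported, clients that want the complete text can simply accumulate the streamed chunks. A minimal Python sketch using the OpenAI client (URL, key, and model are placeholders matching the examples below):

import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1"
)

# Accumulate the streamed deltas into one complete response string
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Write a haiku about technology"}],
    stream=True,
)
full_text = "".join(
    chunk.choices[0].delta.content or ""
    for chunk in stream
    if chunk.choices
)
print(full_text)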

Using as a Library

You can also embed Maple Proxy in your own Rust application:

use maple_proxy::{Config, create_app};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize tracing
    tracing_subscriber::fmt::init();

    // Create config programmatically
    let config = Config::new(
        "127.0.0.1".to_string(),
        8081,  // Custom port
        "https://enclave.trymaple.ai".to_string(),
    )
    .with_api_key("your-api-key-here".to_string())
    .with_debug(true)
    .with_cors(true);

    // Create the app
    let app = create_app(config.clone());

    // Start the server
    let addr = config.socket_addr()?;
    let listener = TcpListener::bind(addr).await?;
    
    println!("Maple proxy server running on http://{}", addr);
    
    axum::serve(listener, app).await?;

    Ok(())
}

Run the example:

cargo run --example library_usage

💻 Client Examples

Python (OpenAI Library)

import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1"
)

# Streaming chat completion
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Hello, world!"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")

JavaScript/Node.js

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_MAPLE_API_KEY',
  baseURL: 'http://localhost:8080/v1',
});

const stream = await openai.chat.completions.create({
  model: 'llama3-3-70b',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

cURL

# Health check
curl http://localhost:8080/health

# List models
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY"

# Streaming chat completion
curl -N http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [{"role": "user", "content": "Tell me a joke"}],
    "stream": true
  }'

πŸ” Authentication

Maple Proxy supports two authentication methods:

1. Environment Variable (Default)

Set MAPLE_API_KEY - all requests will use this key by default:

export MAPLE_API_KEY=your-maple-api-key
cargo run

2. Per-Request Authorization Header

Override the default key or provide one if not set:

curl -H "Authorization: Bearer different-api-key" ...

🌐 CORS Support

Enable CORS for web applications:

export MAPLE_ENABLE_CORS=true
cargo run
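
To verify that CORS is active, you can send a browser-style preflight request and inspect the response headers; a quick sketch using Python's third-party requests library (the origin and port are placeholders, and the exact headers returned depend on the proxy's CORS configuration):

import requests

# Simulate the preflight request a browser would send before a cross-origin POST
resp = requests.options(
    "http://localhost:8080/v1/chat/completions",
    headers={
        "Origin": "https://app.example.com",
        "Access-Control-Request-Method": "POST",
        "Access-Control-Request-Headers": "authorization,content-type",
    },
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))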

🐳 Docker Deployment

Quick Start with Pre-built Image

Pull and run the official image from GitHub Container Registry:

# Pull the latest image
docker pull ghcr.io/opensecretcloud/maple-proxy:latest

# Run the proxy (see the security note below before setting an API key in the environment)
docker run -p 8080:8080 \
  -e MAPLE_BACKEND_URL=https://enclave.trymaple.ai \
  ghcr.io/opensecretcloud/maple-proxy:latest

Build from Source

# Build the image locally
just docker-build

# Run the container
just docker-run

Production Docker Setup

  1. Option A: Use the pre-built image from GHCR
# In your docker-compose.yml, use:
image: ghcr.io/opensecretcloud/maple-proxy:latest
  2. Option B: Build your own image
docker build -t maple-proxy:latest .
  3. Run with docker-compose:
# Copy the example environment file
cp .env.example .env

# Edit .env with your configuration
vim .env

# Start the service
docker-compose up -d

🔒 Security Note for Public Deployments

When deploying Maple Proxy on a public network:

  • DO NOT set MAPLE_API_KEY in the container environment
  • Instead, require clients to pass their API key with each request:
# Client-side authentication for a public proxy
from openai import OpenAI

client = OpenAI(
    base_url="https://your-proxy.example.com/v1",
    api_key="user-specific-maple-api-key"  # Each user provides their own key
)

This ensures:

  • Users' API keys remain private
  • Multiple users can share the same proxy instance
  • No API keys are exposed in container configurations

Docker Commands

# Build image
just docker-build

# Run interactively
just docker-run

# Run in background
just docker-run-detached

# View logs
just docker-logs

# Stop container
just docker-stop

# Use docker-compose
just compose-up
just compose-logs
just compose-down

Container Configuration

The Docker image:

  • Uses multi-stage builds for minimal size (~130MB)
  • Runs as non-root user for security
  • Includes health checks
  • Optimizes dependency caching with cargo-chef
  • Supports both x86_64 and ARM architectures

Environment Variables for Docker

# docker-compose.yml environment section
environment:
  - MAPLE_BACKEND_URL=https://enclave.trymaple.ai  # Production backend
  - MAPLE_ENABLE_CORS=true                         # Enable for web apps
  - RUST_LOG=info                                  # Logging level
  # - MAPLE_API_KEY=xxx                            # Only for private deployments!

🔧 Development

Docker Images & CI/CD

Automated Builds (GitHub Actions)

  • Every push to master automatically builds and publishes to ghcr.io/opensecretcloud/maple-proxy:latest
  • Git tags (e.g., v1.0.0) trigger versioned releases
  • Multi-platform images (linux/amd64, linux/arm64) built automatically
  • No manual intervention needed - just push your code!

Local Development (Justfile)

# For local testing and debugging
just docker-build        # Build locally
just docker-run          # Test locally
just ghcr-push v1.2.3   # Manual push (requires login)

Use GitHub Actions for production releases and the Justfile for local development.

Build from Source

cargo build

Run with Debug Logging

export MAPLE_DEBUG=true
cargo run

Run Tests

cargo test

📊 Supported Models

Maple Proxy supports all models available in the Maple/OpenSecret platform, including:

  • llama3-3-70b - Llama 3.3 70B parameter model
  • And many others - query the /v1/models endpoint for the current list (see the sketch below)
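
A minimal Python sketch for listing the models the proxy currently exposes (the key and port are placeholders):

import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1"
)

# Iterating the list response yields each model entry
for model in client.models.list():
    print(model.id)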

πŸ” Troubleshooting

Common Issues

"No API key provided"

  • Set MAPLE_API_KEY environment variable or provide Authorization: Bearer <key> header

"Failed to establish secure connection"

  • Check your MAPLE_BACKEND_URL is correct
  • Ensure your API key is valid
  • Check network connectivity

Connection refused

  • Make sure the server is running on the specified host/port
  • Check firewall settings
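
When scripting against the proxy, these failures surface as typed exceptions in the OpenAI Python client; a minimal sketch that distinguishes the common cases (assuming an invalid key is reported as an authentication error):

import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1",
)

try:
    for model in client.models.list():
        print(model.id)
except openai.AuthenticationError:
    # No or invalid API key: set MAPLE_API_KEY or pass Authorization: Bearer <key>
    print("Check your Maple API key")
except openai.APIConnectionError:
    # Server not running on the expected host/port, or a network/firewall issue
    print("Could not reach the proxy")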

Debug Mode

Enable debug logging for detailed information:

export MAPLE_DEBUG=true
cargo run

πŸ—οΈ Architecture

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   OpenAI Client │───▶│   Maple Proxy   │───▶│  Maple Backend  │
│   (Python/JS)   │    │   (localhost)   │    │      (TEE)      │
└─────────────────┘    └─────────────────┘    └─────────────────┘
  1. Client makes standard OpenAI API calls to localhost
  2. Maple Proxy handles authentication and TEE handshake
  3. Requests are securely forwarded to Maple's TEE infrastructure
  4. Responses are streamed back to the client in OpenAI format

πŸ“ License

MIT License - see LICENSE file for details.

🤝 Contributing

Contributions welcome! Please feel free to submit a Pull Request.
