
# Providers

Raw HTTP clients for LLM backends. No external AI SDKs: all providers speak the same contract.

Note: This SDK is designed to work with the Autohand Code CLI. While the SDK can be used standalone, we recommend installing the CLI for the best experience.

```python
from autohand_agents.providers import Provider, ProviderFactory, ProviderNotConfiguredError
```

## Provider (Abstract Base)

The interface every provider implements.

### Members

| Method | Signature | Description |
| --- | --- | --- |
| `model_name` | `(model: str) -> str` | Returns the fully qualified model identifier (e.g., `anthropic/your-modelcard-id-here`) |
| `chat` | `async (messages, model, tools=None, **kwargs) -> ChatResponse` | Single-shot completion |
| `chat_stream` | `async (messages, model, tools=None, **kwargs) -> AsyncIterator[ChatResponse]` | Streaming completion |
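The contract can be illustrated with a minimal stand-in. This is a hypothetical sketch, not the SDK's actual base class: the real ABC lives in `autohand_agents.providers`, and `chat_stream` is omitted here for brevity.

```python
import asyncio
from abc import ABC, abstractmethod


class Provider(ABC):
    """Simplified stand-in for the SDK's abstract base (assumption)."""

    @abstractmethod
    def model_name(self, model: str) -> str: ...

    @abstractmethod
    async def chat(self, messages, model, tools=None, **kwargs): ...


class EchoProvider(Provider):
    """Toy implementation: echoes the last user message back."""

    def model_name(self, model: str) -> str:
        return f"echo/{model}"

    async def chat(self, messages, model, tools=None, **kwargs):
        return {"role": "assistant", "content": messages[-1]["content"]}


reply = asyncio.run(EchoProvider().chat([{"role": "user", "content": "hi"}], "m1"))
```

Any class that satisfies this surface can be dropped in wherever a provider is expected.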

## ProviderNotConfiguredError

Raised when a provider is requested but not configured.

```python
try:
    provider = ProviderFactory.create(config)
except ProviderNotConfiguredError as e:
    print(f"Provider '{e.provider_name}' needs an API key")
```

## ProviderFactory

Resolves provider names to instances from a configuration dict.

```python
from autohand_agents.providers import ProviderFactory

config = {
    "provider": "openrouter",
    "openrouter": {"apiKey": "sk-or-v1-...", "model": "your-modelcard-id-here"},
}
provider = ProviderFactory.create(config)
```

### VALID_PROVIDERS

```python
ProviderFactory.is_valid_provider("openrouter")  # True
ProviderFactory.is_valid_provider("vertex")      # False
```

## OpenRouterProvider

| Param | Type | Default |
| --- | --- | --- |
| `api_key` | `str` | *(required)* |
| `model` | `str` | `"your-modelcard-id-here"` |
| `base_url` | `str \| None` | `None` |

Base URL: `https://openrouter.ai/api/v1`

Automatically prefixes model names with `anthropic/` unless they are already qualified. Adds `HTTP-Referer` and `X-Title` headers per OpenRouter's requirements.

```python
from autohand_agents.providers.openrouter import OpenRouterProvider

p = OpenRouterProvider(api_key="sk-or-v1-...")
p.model_name("your-modelcard-id-here")  # "anthropic/your-modelcard-id-here"
p.model_name("google/gemini-pro")       # "google/gemini-pro" (already qualified)
```
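The prefixing rule itself is simple enough to sketch as a standalone function. This is an illustrative reimplementation of the behavior described above, not the SDK's actual code:

```python
def qualify_model(model: str) -> str:
    """Sketch of OpenRouter model qualification: names that already contain
    a provider prefix (a "/") pass through; bare names get "anthropic/"."""
    return model if "/" in model else f"anthropic/{model}"
```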

## OpenAIProvider

| Param | Type | Default |
| --- | --- | --- |
| `api_key` | `str` | *(required)* |
| `model` | `str` | `"gpt-4o"` |

Base URL: `https://api.openai.com/v1`
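The OpenAI API authenticates with a Bearer token, so the provider presumably sends headers of this shape (whether it adds further headers is an assumption):

```python
def openai_headers(api_key: str) -> dict:
    # Standard auth headers for https://api.openai.com/v1; illustrative
    # helper, not part of the SDK.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```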

## OllamaProvider

| Param | Type | Default |
| --- | --- | --- |
| `base_url` | `str \| None` | `http://localhost:11434/api` |

No API key required. Uses the `/chat` endpoint.
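A request to Ollama's `/chat` endpoint can be sketched as follows. The `model`, `messages`, and `stream` fields are Ollama's documented request fields; the exact payload this provider builds is an assumption:

```python
def ollama_chat_request(messages, model, base_url="http://localhost:11434/api"):
    """Sketch of an Ollama chat request (illustrative, not the SDK's code)."""
    return {
        "url": f"{base_url}/chat",
        "json": {"model": model, "messages": messages, "stream": False},
    }
```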

## AzureProvider

| Param | Type | Default |
| --- | --- | --- |
| `api_key` | `str` | *(required)* |
| `resource_name` | `str` | *(required)* |
| `deployment_name` | `str` | *(required)* |
| `api_version` | `str` | `"2024-02-01"` |

Endpoint: `https://{resource_name}.openai.azure.com/openai/deployments/{deployment_name}/chat/completions?api-version={api_version}`
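The endpoint template above can be expressed as a small builder. The helper itself is illustrative, not part of the SDK:

```python
def azure_endpoint(resource_name: str, deployment_name: str,
                   api_version: str = "2024-02-01") -> str:
    # Expands the Azure OpenAI endpoint template documented above.
    return (
        f"https://{resource_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )
```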

## LlamaCppProvider

| Param | Type | Default |
| --- | --- | --- |
| `base_url` | `str \| None` | `http://localhost:8080` |

Prompt-based rather than message-based. Uses the `/completion` endpoint, mapping `max_tokens` to llama.cpp's `n_predict` parameter.
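The `max_tokens` to `n_predict` mapping can be sketched like this. `/completion` and `n_predict` are the llama.cpp server's documented names; the rest of the payload shape is an assumption:

```python
def llamacpp_completion_request(prompt: str, max_tokens: int = 256,
                                base_url: str = "http://localhost:8080") -> dict:
    """Sketch of a llama.cpp server request (illustrative, not the SDK's code)."""
    return {
        "url": f"{base_url}/completion",
        "json": {"prompt": prompt, "n_predict": max_tokens},
    }
```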

## LLMGatewayProvider

| Param | Type | Default |
| --- | --- | --- |
| `base_url` | `str \| None` | `http://localhost:8000` |
| `api_key` | `str \| None` | `None` |

OpenAI-compatible internal gateway.
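Since the gateway is OpenAI-compatible, requests presumably follow the `/v1/chat/completions` shape. A sketch under that assumption (the path and the optional Bearer auth are both assumptions, not confirmed SDK behavior):

```python
def gateway_request(messages, model, base_url="http://localhost:8000",
                    api_key=None) -> dict:
    # Assumption: the gateway exposes the OpenAI-style chat completions
    # path and accepts an optional Bearer token when api_key is set.
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": headers,
        "json": {"model": model, "messages": messages},
    }
```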

## MLXProvider

| Param | Type | Default |
| --- | --- | --- |
| `model_path` | `str` | *(required)* |
| `port` | `int` | `9898` |

Local inference on Apple Silicon. Lazily starts an HTTP client pointed at `localhost:{port}`.
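Lazy initialization means nothing is created until the first request needs the client. A sketch of the pattern (illustrative only; the SDK's actual class and attributes are not shown here):

```python
class LazyHTTPClient:
    """Illustrative lazy-initialization pattern: the base URL (standing in
    for a real HTTP client) is built on first access, not in __init__."""

    def __init__(self, port: int = 9898):
        self.port = port
        self._base_url = None  # nothing started yet

    @property
    def base_url(self) -> str:
        if self._base_url is None:  # first access triggers setup
            self._base_url = f"http://localhost:{self.port}"
        return self._base_url
```

This keeps construction cheap: a provider can be instantiated eagerly by the factory without paying any startup cost until it is actually used.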