diff --git a/README.md b/README.md
index da0f2fd..12efbeb 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,26 @@ Convert CLI-based AI agents (Claude Code, etc.) to OpenAI-compatible API endpoin
 
 This adapter allows you to use local CLI tools like Claude Code as drop-in replacements for OpenAI's API in your development environment, while keeping the same code structure for production.
 
+**LangChain Demos:**
+- JavaScript/TypeScript (Node): see `examples/langchain-js`. Minimal usage:
+  ```ts
+  import { ChatOpenAI } from "@langchain/openai";
+  const llm = new ChatOpenAI({
+    apiKey: "dummy-key",
+    model: "claude-code",
+    configuration: { baseURL: "http://localhost:8000/v1" },
+  });
+  const resp = await llm.invoke("hi!");
+  console.log(resp.content);
+  ```
+
+- Python: see `examples/langchain-py`. Minimal usage:
+  ```py
+  from langchain_openai import ChatOpenAI
+  llm = ChatOpenAI(api_key="dummy-key", base_url="http://localhost:8000/v1", model="claude-code")
+  print(llm.invoke("hi!").content)
+  ```
+
 **Use Cases:**
 - **Production**: Use OpenAI API (pay per token)
 - **Development**: Use local Claude Code with Haiku model (reduce costs)
diff --git a/examples/langchain-js/README.md b/examples/langchain-js/README.md
new file mode 100644
index 0000000..4577ccd
--- /dev/null
+++ b/examples/langchain-js/README.md
@@ -0,0 +1,23 @@
+LangChain JS Demo (OpenAI-compatible Adapter)
+
+This example shows how to call the local adapter using LangChain JS.
+
+Prerequisites
+- Adapter server running locally: `npm run dev` (defaults to http://localhost:8000)
+- Node.js >= 20
+
+Setup
+```
+cd examples/langchain-js
+npm init -y
+npm i @langchain/openai langchain
+```
+
+Run
+```
+node chat.mjs
+```
+
+Files
+- `chat.mjs`: Minimal chat completion using `ChatOpenAI` with a custom base URL.
+
diff --git a/examples/langchain-js/chat.mjs b/examples/langchain-js/chat.mjs
new file mode 100644
index 0000000..ffcf50b
--- /dev/null
+++ b/examples/langchain-js/chat.mjs
@@ -0,0 +1,12 @@
+import { ChatOpenAI } from "@langchain/openai";
+
+// Point LangChain to the local adapter's OpenAI-compatible endpoint.
+// In LangChain JS, a custom base URL is passed via `configuration`.
+const llm = new ChatOpenAI({
+  apiKey: process.env.OPENAI_API_KEY || "dummy-key", // not used by local adapter
+  model: process.env.ADAPTER_MODEL || "claude-code",
+  configuration: { baseURL: process.env.ADAPTER_BASE_URL || "http://localhost:8000/v1" },
+});
+
+const resp = await llm.invoke("hi! Reply in one short sentence.");
+console.log(resp.content);
diff --git a/examples/langchain-py/README.md b/examples/langchain-py/README.md
new file mode 100644
index 0000000..15fe0e8
--- /dev/null
+++ b/examples/langchain-py/README.md
@@ -0,0 +1,24 @@
+LangChain Python Demo (OpenAI-compatible Adapter)
+
+This example shows how to call the local adapter using LangChain for Python.
+
+Prerequisites
+- Adapter server running locally: `npm run dev` (defaults to http://localhost:8000)
+- Python 3.9+
+
+Setup
+```
+cd examples/langchain-py
+python -m venv .venv
+source .venv/bin/activate  # Windows: .venv\Scripts\activate
+pip install langchain langchain-openai
+```
+
+Run
+```
+python demo.py
+```
+
+Files
+- `demo.py`: Minimal chat completion using `ChatOpenAI` with a custom `base_url`.
+
diff --git a/examples/langchain-py/demo.py b/examples/langchain-py/demo.py
new file mode 100644
index 0000000..79d5c4f
--- /dev/null
+++ b/examples/langchain-py/demo.py
@@ -0,0 +1,12 @@
+from langchain_openai import ChatOpenAI
+
+# Point LangChain to the local adapter's OpenAI-compatible endpoint
+llm = ChatOpenAI(
+    api_key="dummy-key",  # not used by local adapter
+    base_url="http://localhost:8000/v1",
+    model="claude-code",
+)
+
+resp = llm.invoke("hi! Reply in one short sentence.")
+print(resp.content)
+
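Note: a natural follow-up to `chat.mjs` is a streaming check. The sketch below is hypothetical and not part of this diff — it assumes the adapter implements OpenAI-style streamed chat completions (`stream: true`), and it reuses the same endpoint and model name as the examples above. `llm.stream(...)` is LangChain JS's standard Runnable streaming API.

```ts
import { ChatOpenAI } from "@langchain/openai";

// Hypothetical streaming sketch against the same local adapter.
// Assumes the adapter supports OpenAI-style streamed responses.
const llm = new ChatOpenAI({
  apiKey: "dummy-key", // not validated by the local adapter
  model: "claude-code",
  configuration: { baseURL: "http://localhost:8000/v1" },
});

// stream() yields AIMessageChunk objects as tokens arrive.
const stream = await llm.stream("hi! Reply in one short sentence.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
process.stdout.write("\n");
```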