feat: examples in monorepo (CopilotKit#590)
* add all demo apps under "examples"

* add renovate

* remove todos and  textarea example

* fix docs

* add examples to gitignore
arielweinberger authored Sep 24, 2024
1 parent 6a1de69 commit 3fb3f04
Showing 216 changed files with 66,775 additions and 3 deletions.
2 changes: 2 additions & 0 deletions .github/workflows/ci.yml
Original file line number Diff line number Diff line change
@@ -6,11 +6,13 @@ on:
paths-ignore:
- 'docs/**'
- 'README.md'
- 'examples/**'
pull_request:
branches: [main]
paths-ignore:
- 'docs/**'
- 'README.md'
- 'examples/**'
jobs:
test:
name: 'Test'
3 changes: 3 additions & 0 deletions .github/workflows/quality.yml
@@ -6,11 +6,14 @@ on:
paths-ignore:
- 'docs/**'
- 'README.md'
- 'examples/**'
pull_request:
branches: [main]
paths-ignore:
- 'docs/**'
- 'README.md'
- 'examples/**'

jobs:
prettier:
name: 'Prettier'
1 change: 1 addition & 0 deletions .github/workflows/release.yml
@@ -7,6 +7,7 @@ on:
- main
paths-ignore:
- 'docs/**'
- 'examples/**'

concurrency: ${{ github.workflow }}-${{ github.ref }}

2 changes: 1 addition & 1 deletion docs/pages/coagents/coagent-demo.mdx
@@ -16,4 +16,4 @@ into a comprehensive answer, including references to the sources used.
The agent state is streamed live to the frontend, so that the user can see the work progressing.

Both the LangGraph and the CopilotKit-specific code for this demo are [available
on GitHub](https://github.com/CopilotKit/examples/tree/main/ai-researcher).
on GitHub](https://github.com/CopilotKit/CopilotKit/tree/main/examples/coagents-ai-researcher).
4 changes: 2 additions & 2 deletions docs/pages/coagents/index.mdx
@@ -229,8 +229,8 @@ In the meantime, you can achieve agent Q&A in the following way:
We've built a simple example with everything you need to get started:

```bash
git clone https://github.com/copilotkit/examples ./copilotkit-examples
cd ./copilotkit-examples/bootstrap
git clone https://github.com/CopilotKit/CopilotKit.git
cd ./CopilotKit/examples/coagents-starter

# Install Python dependencies
pip install -r requirements.txt
58 changes: 58 additions & 0 deletions examples/coagents-ai-researcher/README.md
@@ -0,0 +1,58 @@
# AI Researcher Example

**These instructions assume you are in the `coagents-ai-researcher/` directory**

## Running the Agent

First, install the dependencies:

```sh
cd agent
poetry install
```

Then, create a `.env` file inside `./agent` with the following:
```
OPENAI_API_KEY=...
TAVILY_API_KEY=...
```

IMPORTANT: Make sure the OpenAI API key you provide supports gpt-4o.

Then, run the demo:

```sh
poetry run demo
```

## Running the UI

First, install the dependencies:

```sh
cd ./ui
pnpm i
```

Then, create a `.env` file inside `./ui` with the following:
```
OPENAI_API_KEY=...
```

Then, run the Next.js project:

```sh
pnpm run dev
```

## Usage

Navigate to [http://localhost:3000](http://localhost:3000).


## LangGraph Studio

Run LangGraph Studio, then load the `./agent` folder into it.

Make sure to create the `.env` file mentioned above first!
5 changes: 5 additions & 0 deletions examples/coagents-ai-researcher/agent/.gitignore
@@ -0,0 +1,5 @@
venv/
__pycache__/
*.pyc
.env
.vercel
17 changes: 17 additions & 0 deletions examples/coagents-ai-researcher/agent/.vscode/cspell.json
@@ -0,0 +1,17 @@
{
"version": "0.2",
"language": "en",
"words": [
"langgraph",
"langchain",
"perplexity",
"openai",
"ainvoke",
"pydantic",
"tavily",
"copilotkit",
"fastapi",
"uvicorn",
"checkpointer"
]
}
File renamed without changes.
58 changes: 58 additions & 0 deletions examples/coagents-ai-researcher/agent/ai_researcher/agent.py
@@ -0,0 +1,58 @@
"""
This is the main entry point for the AI.
It defines the workflow graph and the entry point for the agent.
"""
# pylint: disable=line-too-long, unused-import
import json

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

from ai_researcher.state import AgentState
from ai_researcher.steps import steps_node
from ai_researcher.search import search_node
from ai_researcher.summarize import summarize_node
from ai_researcher.extract import extract_node

def route(state):
"""Route to research nodes."""
if not state.get("steps", None):
return END

current_step = next((step for step in state["steps"] if step["status"] == "pending"), None)

if not current_step:
return "summarize_node"

if current_step["type"] == "search":
return "search_node"

raise ValueError(f"Unknown step type: {current_step['type']}")

# Define a new graph
workflow = StateGraph(AgentState)
workflow.add_node("steps_node", steps_node)
workflow.add_node("search_node", search_node)
workflow.add_node("summarize_node", summarize_node)
workflow.add_node("extract_node", extract_node)
# Chatbot
workflow.set_entry_point("steps_node")

workflow.add_conditional_edges(
"steps_node",
route,
["summarize_node", "search_node", END]
)

workflow.add_edge("search_node", "extract_node")

workflow.add_conditional_edges(
"extract_node",
route,
["summarize_node", "search_node"]
)

workflow.add_edge("summarize_node", END)

memory = MemorySaver()
graph = workflow.compile(checkpointer=memory)
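As a self-contained sketch (no LangGraph install required), the routing decision above can be exercised directly. `END` is stubbed as a plain string here, standing in for `langgraph.graph.END`; the step dicts are made-up examples:

```python
# Stand-alone sketch of the route() decision logic from agent.py.
# END is stubbed as a string; in the real graph it is langgraph.graph.END.
END = "__end__"

def route(state):
    """Route to research nodes."""
    if not state.get("steps"):
        return END

    current_step = next(
        (step for step in state["steps"] if step["status"] == "pending"), None
    )
    if current_step is None:
        # Every step is done: hand off to summarization.
        return "summarize_node"
    if current_step["type"] == "search":
        return "search_node"
    raise ValueError(f"Unknown step type: {current_step['type']}")

# No steps yet: the graph ends immediately.
assert route({}) == END
# All steps complete: summarize.
assert route({"steps": [{"status": "complete", "type": "search"}]}) == "summarize_node"
# A pending search step: run the search node next.
assert route({"steps": [{"status": "pending", "type": "search"}]}) == "search_node"
```

Because `route` is used as the conditional edge out of both `steps_node` and `extract_node`, the graph keeps looping through search/extract until no pending steps remain, then summarizes.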
27 changes: 27 additions & 0 deletions examples/coagents-ai-researcher/agent/ai_researcher/demo.py
@@ -0,0 +1,27 @@
"""Demo"""

from dotenv import load_dotenv
load_dotenv()

from fastapi import FastAPI
import uvicorn
from copilotkit.integrations.fastapi import add_fastapi_endpoint
from copilotkit import CopilotKitSDK, LangGraphAgent
from ai_researcher.agent import graph

app = FastAPI()
sdk = CopilotKitSDK(
agents=[
LangGraphAgent(
name="search_agent",
description="Search agent.",
agent=graph,
)
],
)

add_fastapi_endpoint(app, sdk, "/copilotkit")

def main():
"""Run the uvicorn server."""
uvicorn.run("ai_researcher.demo:app", host="127.0.0.1", port=8000, reload=True)
57 changes: 57 additions & 0 deletions examples/coagents-ai-researcher/agent/ai_researcher/extract.py
@@ -0,0 +1,57 @@
"""
The extract node is responsible for extracting information from a tavily search.
"""
import json

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

from langchain_core.runnables import RunnableConfig

from ai_researcher.state import AgentState

async def extract_node(state: AgentState, config: RunnableConfig):
"""
The extract node is responsible for extracting information from a tavily search.
"""

current_step = next((step for step in state["steps"] if step["status"] == "pending"), None)

if current_step is None:
raise ValueError("No current step")

if current_step["type"] != "search":
raise ValueError("Current step is not of type search")

system_message = f"""
This step was just executed: {json.dumps(current_step)}
This is the result of the search: {json.dumps(current_step["search_result"])}
Please summarize ONLY the result of the search and include all relevant information from the search and reference links.
DO NOT INCLUDE ANY EXTRA INFORMATION. ALL OF THE INFORMATION YOU ARE LOOKING FOR IS IN THE SEARCH RESULTS.
DO NOT answer the user's query yet. Just summarize the search results.
Use markdown formatting and put the references inline and the links at the end.
Like this:
This is a sentence with a reference to a source [source 1][1] and another reference [source 2][2].
[1]: http://example.com/source1 "Title of Source 1"
[2]: http://example.com/source2 "Title of Source 2"
"""

response = await ChatOpenAI(model="gpt-4o").ainvoke([
*state["messages"],
SystemMessage(content=system_message)
], config)

current_step["result"] = response.content
current_step["search_result"] = None
current_step["status"] = "complete"
current_step["updates"] = [*current_step["updates"], "Done."]

next_step = next((step for step in state["steps"] if step["status"] == "pending"), None)
if next_step:
next_step["updates"] = ["Searching the web..."]

return state
62 changes: 62 additions & 0 deletions examples/coagents-ai-researcher/agent/ai_researcher/search.py
@@ -0,0 +1,62 @@
"""
The search node is responsible for searching the internet for information.
"""
import json
from datetime import datetime
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage

from langchain_core.runnables import RunnableConfig
from langchain_community.tools import TavilySearchResults

from ai_researcher.state import AgentState

async def search_node(state: AgentState, config: RunnableConfig):
"""
The search node is responsible for searching the internet for information.
"""
tavily_tool = TavilySearchResults(
max_results=10,
search_depth="advanced",
include_answer=True,
include_raw_content=True,
include_images=True,
)

current_step = next((step for step in state["steps"] if step["status"] == "pending"), None)

if current_step is None:
raise ValueError("No step to search for")

if current_step["type"] != "search":
raise ValueError("Current step is not a search step")

system_message = f"""
This is a step in a series of steps that are being executed to answer the user's query.
These are all of the steps: {json.dumps(state["steps"])}
You are responsible for carrying out the step: {json.dumps(current_step)}
The current date is {datetime.now().strftime("%Y-%m-%d")}.
This is what you need to search for, please come up with a good search query: {current_step["description"]}
"""
model = ChatOpenAI(model="gpt-4o").bind_tools(
[tavily_tool],
parallel_tool_calls=False,
tool_choice=tavily_tool.name
)

response = await model.ainvoke([
*state["messages"],
SystemMessage(
content=system_message
)
], config)

tool_msg = tavily_tool.invoke(response.tool_calls[0])

current_step["search_result"] = json.loads(tool_msg.content)
current_step["updates"] = [*current_step["updates"], "Extracting information..."]

return state
28 changes: 28 additions & 0 deletions examples/coagents-ai-researcher/agent/ai_researcher/state.py
@@ -0,0 +1,28 @@
"""
This is the state definition for the AI.
It defines the state of the agent and the state of the conversation.
"""

from typing import List, TypedDict, Optional
from langgraph.graph import MessagesState

class Step(TypedDict):
"""
Represents a step taken in the research process.
"""
id: str
description: str
status: str
type: str
search_result: Optional[str]
result: Optional[str]
updates: Optional[List[str]]

class AgentState(MessagesState):
"""
This is the state of the agent.
It is a subclass of the MessagesState class from langgraph.
"""
steps: List[Step]
answer: Optional[str]
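To illustrate the lifecycle these fields imply, here is a sketch with a made-up step, mirroring how `search_node` and `extract_node` mutate a step (the `Step` class is reproduced for self-containment):

```python
from typing import List, Optional, TypedDict

class Step(TypedDict):
    """Mirror of the Step definition above, for illustration."""
    id: str
    description: str
    status: str
    type: str
    search_result: Optional[str]
    result: Optional[str]
    updates: Optional[List[str]]

# A search step starts out pending, with no results yet.
step: Step = {
    "id": "1",
    "description": "Find recent sources on the topic",  # made-up example
    "status": "pending",
    "type": "search",
    "search_result": None,
    "result": None,
    "updates": ["Searching the web..."],
}

# After extraction, the node stores the summary, clears the raw
# search payload, and marks the step complete (as extract_node does).
step["result"] = "Summary of the search results."
step["search_result"] = None
step["status"] = "complete"
step["updates"] = [*step["updates"], "Done."]

assert step["status"] == "complete"
```

The route function then sees no pending steps and advances the graph to `summarize_node`.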