Conversation

@tcdent (Contributor) commented Dec 1, 2025

Implements LangChainRunner to support LangChain agents with the same orchestration capabilities as OpenAIRunner.

Changes:

  • Add LangChainRunner class with support for AgentExecutor
  • Implement async invoke and streaming via astream_events
  • Add max iterations recovery similar to OpenAI's max turns
  • Include report_status tool injection for activity tracking (see the sketch after this list)
  • Add optional langchain dependencies to pyproject.toml
  • Create comprehensive LangChain example in examples/langchain-agents-fastapi/
  • Update main README with LangChain documentation and examples
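
For reference, a minimal sketch of what an injected status-reporting tool could look like, using the standard langchain_core @tool decorator (the tool name, argument, and the record_activity callback are assumptions for illustration, not the PR's actual implementation):

```python
from langchain_core.tools import tool


def make_report_status_tool(record_activity):
    """Build a status-reporting tool the runner can append to the agent's tools.

    `record_activity` is a hypothetical callback supplied by the orchestrator
    to persist activity updates.
    """

    @tool
    def report_status(status: str) -> str:
        """Report what you are currently working on."""
        record_activity(status)
        return "Status recorded."

    return report_status
```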

The LangChainRunner provides the same features as OpenAIRunner:

  • Automatic activity tracking
  • Agent self-reporting via report_activity tool
  • Max iterations recovery with wrap-up prompts
  • Streaming support (illustrated in the sketch below)
  • Compatible with ReAct, tool-calling, and LangGraph agents
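
To make the streaming path concrete, here is a minimal sketch of consuming an AgentExecutor through astream_events, roughly what a runner like this does under the hood (the event handling shown is an assumption for illustration, not the PR's code):

```python
from langchain.agents import AgentExecutor


async def stream_agent(executor: AgentExecutor, prompt: str) -> str:
    """Illustrative only: stream an AgentExecutor run and collect the final answer."""
    final_answer = ""
    # astream_events yields structured events for LLM tokens, tool calls, and chain ends
    async for event in executor.astream_events({"input": prompt}, version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            # Token-level chunks from the underlying chat model
            print(event["data"]["chunk"].content, end="", flush=True)
        elif kind == "on_tool_start":
            print(f"\n[tool] {event['name']}")
        elif kind == "on_chain_end" and event["name"] == "AgentExecutor":
            # The executor's final output, e.g. {"output": "..."}
            final_answer = event["data"]["output"].get("output", "")
    return final_answer
```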

Refactored LangChain integration to align with:
1. Latest agentexec architecture (v0.1.0+ changes)
2. LangChain's native early_stopping_method feature
3. Latest LangChain conventions and best practices

Changes to LangChain Runner:
- Leverage LangChain's early_stopping_method='generate' instead of manual exception handling (see the sketch after this list)
- Simplified run() and run_streamed() methods - no try/except needed
- AgentExecutor handles max iterations gracefully without throwing exceptions
- Better documentation explaining the difference from OpenAI's approach
- Added notes about LangGraph's GraphRecursionError for future support
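
A minimal sketch of the configuration described above (agent and tools are placeholders; the point is max_iterations plus early_stopping_method):

```python
from langchain.agents import AgentExecutor


async def run_with_iteration_cap(agent, tools, question: str) -> str:
    # With early_stopping_method="generate", hitting max_iterations prompts the
    # LLM for one final wrap-up answer instead of raising an exception, which is
    # why the runner described above no longer needs manual try/except recovery.
    executor = AgentExecutor(
        agent=agent,      # placeholder: any ReAct or tool-calling agent
        tools=tools,      # placeholder: the agent's tools
        max_iterations=5,
        early_stopping_method="generate",
    )
    result = await executor.ainvoke({"input": question})
    return result["output"]
```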

Changes to LangChain Example:
- Updated to match new agentexec architecture with typed contexts
- Context now uses Pydantic BaseModel (ResearchCompanyContext)
- Task handler signature: async def handler(agent_id, context) -> Result (sketched below)
- Added typed return values (ResearchCompanyResult)
- Created context.py and db.py following OpenAI example pattern
- Updated views.py to use typed context instead of generic payload dict
- Updated main.py to use new imports and patterns
- Changed pool.start()/shutdown() to pool.run()
- Updated README with corrected examples and curl commands
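
A sketch of the typed-context pattern this describes (field names, the result model, and the handler body are assumptions; the real definitions live in examples/langchain-agents-fastapi/):

```python
from pydantic import BaseModel


class ResearchCompanyContext(BaseModel):
    # Illustrative fields; the example defines its own
    company_name: str
    focus: str = "general overview"


class ResearchCompanyResult(BaseModel):
    summary: str


async def research_company(
    agent_id: str, context: ResearchCompanyContext
) -> ResearchCompanyResult:
    """Task handler following the signature described above."""
    # ... run the LangChain agent using context.company_name ...
    return ResearchCompanyResult(summary=f"Findings for {context.company_name}")
```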

Key Improvements:
- Defers to LangChain's built-in max iterations recovery
- Cleaner, simpler code without manual error handling
- Better alignment with LangChain conventions
- Matches latest agentexec patterns from main branch

Sources:
- https://python.langchain.com/docs/modules/agents/how_to/max_iterations/
- https://python.langchain.com/api_reference/langchain/agents/langchain.agents.agent.AgentExecutor.html