5 changes: 4 additions & 1 deletion Makefile
@@ -39,6 +39,9 @@ setup-temporal-mac:
brew install temporal
temporal server start-dev

start-temporal-server:
temporal server start-dev

# Run all development services
run-dev:
@echo "Starting all development services..."
@@ -60,4 +63,4 @@ help:
@echo " make run-legacy-worker - Start the legacy worker"
@echo " make run-enterprise - Build and run the enterprise .NET worker"
@echo " make setup-temporal-mac - Install and start Temporal server on Mac"
@echo " make run-dev - Start all development services (worker, API, frontend) in parallel"
@echo " make run-dev - Start all development services (worker, API, frontend) in parallel"
5 changes: 5 additions & 0 deletions activities/tool_activities.py
@@ -40,10 +40,13 @@ def __init__(self, mcp_client_manager: MCPClientManager = None):
self.llm_model = os.environ.get("LLM_MODEL", "openai/gpt-4")
self.llm_key = os.environ.get("LLM_KEY")
self.llm_base_url = os.environ.get("LLM_BASE_URL")
self.llm_provider = os.environ.get("LLM_PROVIDER")
self.mcp_client_manager = mcp_client_manager
print(f"Initializing ToolActivities with LLM model: {self.llm_model}")
if self.llm_base_url:
print(f"Using custom base URL: {self.llm_base_url}")
if self.llm_provider:
print(f"Using LLM provider: {self.llm_provider}")
if self.mcp_client_manager:
print("MCP client manager enabled for connection pooling")

@@ -134,6 +137,8 @@ async def agent_toolPlanner(self, input: ToolPromptInput) -> dict:
if self.llm_base_url:
completion_kwargs["base_url"] = self.llm_base_url

if self.llm_provider:
completion_kwargs["provider"] = self.llm_provider
response = completion(**completion_kwargs)

response_content = response.choices[0].message.content
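The environment-driven configuration in this diff can be summarized as a small helper that assembles the keyword arguments eventually passed to LiteLLM's `completion()`. This is a sketch for illustration only — `build_completion_kwargs` is a hypothetical helper, not part of the repo, and it only builds the dict without calling the provider:

```python
import os


def build_completion_kwargs(messages):
    """Assemble completion() kwargs from environment variables.

    Optional settings (API key, base URL, provider) are included only
    when the corresponding variable is set, mirroring the conditional
    logic added to ToolActivities in this PR.
    """
    kwargs = {
        "model": os.environ.get("LLM_MODEL", "openai/gpt-4"),
        "messages": messages,
    }
    if os.environ.get("LLM_KEY"):
        kwargs["api_key"] = os.environ["LLM_KEY"]
    if os.environ.get("LLM_BASE_URL"):
        kwargs["base_url"] = os.environ["LLM_BASE_URL"]
    if os.environ.get("LLM_PROVIDER"):
        kwargs["provider"] = os.environ["LLM_PROVIDER"]
    return kwargs
```

With `LLM_PROVIDER` set, the resulting dict carries the extra `provider` key through to the `completion(**completion_kwargs)` call shown above.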
57 changes: 57 additions & 0 deletions notes/README.md
@@ -0,0 +1,57 @@
# Learning Journey: Temporal AI Agent Experiments

This directory contains my personal notes, experiments, and learnings while exploring the Temporal AI Agent system. The goal is to understand how to effectively use agentic systems within Temporal's durable workflow model.

## Directory Structure

- [`experiments/`](./experiments/): Detailed documentation of specific experiments and tests
- Each experiment is numbered for easy reference
- Contains setup, observations, and results

- [`learnings/`](./learnings/): Key concepts and patterns discovered
- Temporal concepts
- Agent patterns
- Challenges and solutions

- [`resources/`](./resources/): Useful references and links
- External documentation
- Relevant articles
- Community discussions

## Quick Start

The experiments are numbered sequentially (001, 002, etc.) and can be found in the `experiments/` directory. Each experiment includes:
- Objective
- Setup instructions
- Code changes/additions
- Results and observations
- Learnings and takeaways

## Key Topics

1. Temporal Workflows with AI Agents
2. Agent-based Decision Making
3. Durable Execution Patterns
4. Error Handling and Recovery
5. Testing and Debugging Strategies

## Progress Tracking

- [ ] Basic workflow setup and execution
- [ ] Agent integration and configuration
- [ ] Complex decision-making scenarios
- [ ] Error handling and recovery patterns
- [ ] Performance optimization
- [ ] Production-ready considerations

## Notes

- This is a learning repository forked from the original temporal-ai-agent demo
- These notes are personal and reflect my learning journey
- Feel free to adapt and modify the structure as needed

## References

- [Original Repository](https://github.com/temporalio/temporal-ai-agent)
- [Temporal Documentation](https://docs.temporal.io/)
- [AI Agent Documentation](./docs/README.md)
24 changes: 24 additions & 0 deletions notes/experiments/000-run-tests.md
@@ -0,0 +1,24 @@
# Experiment 000 - Run Tests
## Objective
- Run tests for the Temporal AI Agent project

## Steps
1. Install poetry if not already installed
   `brew install poetry`
2. Install development dependencies
   `poetry install --with dev`
3. Run the tests: `poetry run pytest`

## Issues and fixes

### No module named pytest_asyncio
**Error**
```
ImportError while loading conftest '/Users/joeszodfridt/src/temporal/temporal-ai-agent/tests/conftest.py'.
tests/conftest.py:7: in <module>
import pytest_asyncio
E ModuleNotFoundError: No module named 'pytest_asyncio'
```
**Remediation**
- Install development dependencies
`poetry install --with dev`
87 changes: 87 additions & 0 deletions notes/experiments/001-basic-workflow.md
@@ -0,0 +1,87 @@
# Experiment 001: Setup and run locally out of the box demo

**Date**:

## Objective
To understand what's involved in setting up and running the Temporal AI Agent demo locally, out of the box.

## Setup

### Environment
- Local development environment
- I'm using asdf (already installed) to manage Python versions
- Temporal is already installed via brew: `brew install temporal`

- Temporal server running
- Start the server using the make target: `make start-temporal-server`

### Configuration
Copy `.env.example` to `.env` and configure
- Set `LLM_KEY=YOUR_API_KEY`
- Set `LLM_MODEL=YOUR_MODEL_NAME`, e.g. `anthropic/claude-3-5-sonnet-20240620`
- Set `STRIPE_API_KEY=` (leave it empty to generate mock data, or use your Stripe API key if you have one)

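Putting the settings above together, a minimal `.env` might look like this (all values are placeholders; the authoritative variable names live in `.env.example`):

```
LLM_KEY=YOUR_API_KEY
LLM_MODEL=anthropic/claude-3-5-sonnet-20240620
STRIPE_API_KEY=
```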

## Implementation Steps

Start local Temporal server
`make start-temporal-server`

Start the API, workers, workflow, and UI
`make run-dev`

Temporal workflow monitor (Web UI)
- The URL is displayed in the output when the server starts

Application UI


## Observations

### What Worked Well
- Demo runs when using Anthropic and my personal key; the NF key is always out of capacity
- Demo fails when running with Ollama locally; some code changes will be needed to support Ollama

### Challenges Encountered
- Challenge 1
- Solution/Workaround:
- Challenge 2
- Solution/Workaround:

## Key Learnings

1. Learning 1
- Details...
2. Learning 2
- Details...

## Questions to Explore
- [ ] Question 1
- [ ] Question 2

## Next Steps
- [ ] What to try next
- [ ] Areas to improve

## Resources Used
- Link 1
- Link 2

## Code Snippets and Examples

### Example 1: Description
```python
# Add example code here
```

### Example 2: Description
```python
# Add example code here
```

## Notes for Future Reference
- Important note 1
- Important note 2

---
Last Updated:
73 changes: 73 additions & 0 deletions notes/experiments/002-basic-workflow-groq.md
@@ -0,0 +1,73 @@
# Experiment 002: Run basic demo using Groq models

**Date**: 6/26/25

## Objective
Get the out-of-the-box demo running using Groq-hosted LLM models.


## Setup

### Environment
- Same as the basic workflow experiment (001); no changes

### Configuration
- Uncomment the Groq LLM configuration in .env

## Implementation Steps
- Start the Temporal server: `make start-temporal-server`
- Start the APIs, workers, etc.: `make run-dev`
- Run the demo

## Observations
- Groq is fast!!!
- Quickly burned through the free tokens, as Groq rate-limits requests on the free plan
- The demo handles the rate limiting without crashing
- The workflow times out gracefully after 30 minutes, and the app stays alive
- The demo doesn't surface any messaging when the provider hits rate limits
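The graceful behavior observed above is consistent with retry-with-backoff on rate-limit errors (in Temporal, the activity retry policy handles this). A minimal standalone sketch of the pattern — not the demo's actual code, and using `RuntimeError` as a stand-in for a provider's 429 error — might look like:

```python
import time


def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn() with exponential backoff, e.g. when a provider returns 429."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for a rate-limit error type
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))


# Example: a fake provider that fails twice before succeeding.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"
```

In the real system, Temporal's durable retries mean the workflow itself survives provider outages without any such hand-rolled loop in application code.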

### What Worked Well

### Challenges Encountered
- Challenge 1
- Solution/Workaround:
- Challenge 2
- Solution/Workaround:

## Key Learnings

1. Learning 1
- Details...
2. Learning 2
- Details...

## Questions to Explore
- [ ] Question 1
- [ ] Question 2

## Next Steps
- [ ] What to try next
- [ ] Areas to improve

## Resources Used
- Link 1
- Link 2

## Code Snippets and Examples

### Example 1: Description
```python
# Add example code here
```

### Example 2: Description
```python
# Add example code here
```

## Notes for Future Reference
- Important note 1
- Important note 2

---
Last Updated:
65 changes: 65 additions & 0 deletions notes/experiments/experiment-template.md
@@ -0,0 +1,65 @@
# Experiment 000: Title

**Date**:

## Objective


## Setup

### Environment


### Configuration


## Implementation Steps


## Observations

### What Worked Well

### Challenges Encountered
- Challenge 1
- Solution/Workaround:
- Challenge 2
- Solution/Workaround:

## Key Learnings

1. Learning 1
- Details...
2. Learning 2
- Details...

## Questions to Explore
- [ ] Question 1
- [ ] Question 2

## Next Steps
- [ ] What to try next
- [ ] Areas to improve

## Resources Used
- Link 1
- Link 2

## Code Snippets and Examples

### Example 1: Description
```python
# Add example code here
```

### Example 2: Description
```python
# Add example code here
```

## Notes for Future Reference
- Important note 1
- Important note 2

---
Last Updated: