# Trusera Agent SDK

Python SDK for monitoring AI agents with Trusera. Track LLM invocations, tool calls, data access, and more to ensure your AI agents are secure and compliant.
## Installation

```bash
pip install trusera-sdk
```

For framework-specific integrations:

```bash
# LangChain integration
pip install "trusera-sdk[langchain]"

# CrewAI integration
pip install "trusera-sdk[crewai]"

# AutoGen integration
pip install "trusera-sdk[autogen]"

# Development tools
pip install "trusera-sdk[dev]"
```

## Quick Start

```python
from trusera_sdk import TruseraClient, Event, EventType

# Initialize the client
client = TruseraClient(api_key="tsk_your_api_key")

# Register your agent
agent_id = client.register_agent(
    name="my-agent",
    framework="custom",
    metadata={"version": "1.0.0"}
)

# Track events
client.track(Event(
    type=EventType.TOOL_CALL,
    name="web_search",
    payload={"query": "latest AI news"},
    metadata={"duration_ms": 250}
))

# Events are automatically flushed in batches;
# flush manually if needed
client.flush()

# Clean up
client.close()
```

## The `@monitor` Decorator

The `@monitor` decorator automatically tracks function calls:
```python
from trusera_sdk import TruseraClient, monitor, set_default_client, EventType

# Set up the client
client = TruseraClient(api_key="tsk_your_api_key")
client.register_agent("my-agent", "custom")
set_default_client(client)

# Decorate your functions
@monitor(event_type=EventType.TOOL_CALL)
def search_database(query: str) -> list[dict]:
    # Your implementation
    return [{"id": 1, "title": "Result"}]

@monitor(event_type=EventType.LLM_INVOKE, name="gpt4_call")
async def call_llm(prompt: str) -> str:
    # Works with async functions too
    return "AI response"

# Function calls are automatically tracked
results = search_database("user query")
response = await call_llm("What is AI?")
```

## LangChain Integration

```python
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, Tool
from trusera_sdk import TruseraClient
from trusera_sdk.integrations.langchain import TruseraCallbackHandler

# Initialize Trusera
client = TruseraClient(api_key="tsk_your_api_key")
client.register_agent("langchain-agent", "langchain")
handler = TruseraCallbackHandler(client)

# Use with LangChain
llm = OpenAI(callbacks=[handler])
agent = initialize_agent(
    tools=[...],
    llm=llm,
    callbacks=[handler]
)

# All LLM calls and tool usage are tracked
agent.run("Your query here")
```

## CrewAI Integration

```python
from crewai import Crew, Agent, Task
from trusera_sdk import TruseraClient
from trusera_sdk.integrations.crewai import TruseraCrewCallback

# Initialize Trusera
client = TruseraClient(api_key="tsk_your_api_key")
client.register_agent("crew-agent", "crewai")
callback = TruseraCrewCallback(client)

# Create your crew
researcher = Agent(role="Researcher", goal="Research topics")
task = Task(description="Research AI trends", agent=researcher)
crew = Crew(
    agents=[researcher],
    tasks=[task],
    step_callback=callback.step_callback
)

# Execute with tracking
result = crew.kickoff()
```

## AutoGen Integration

```python
import autogen
from trusera_sdk import TruseraClient
from trusera_sdk.integrations.autogen import TruseraAutoGenHook

# Initialize Trusera
client = TruseraClient(api_key="tsk_your_api_key")
client.register_agent("autogen-agent", "autogen")
hook = TruseraAutoGenHook(client)

# Create AutoGen agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"model": "gpt-4"}
)

# Register the hook
hook.setup_agent(assistant)

# All interactions are tracked
user_proxy = autogen.UserProxyAgent(name="user")
user_proxy.initiate_chat(assistant, message="Hello")
```

## Event Types

The SDK supports tracking various types of agent activities:

- `EventType.TOOL_CALL` - Tool or function invocations
- `EventType.LLM_INVOKE` - LLM API calls
- `EventType.DATA_ACCESS` - Database queries, file reads
- `EventType.API_CALL` - External API requests
- `EventType.FILE_WRITE` - File system modifications
- `EventType.DECISION` - Agent decision points
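To make the taxonomy concrete, here is a framework-free sketch of wrapping a data-layer call so it emits a `DATA_ACCESS`-style record. The `record_event` helper and `EVENTS` buffer are illustrative stand-ins, not part of the SDK; in real code you would call `client.track(...)` instead:

```python
import time

# Stand-in event buffer; a real integration would call client.track(...)
EVENTS: list[dict] = []

def record_event(event_type: str, name: str, payload: dict, duration_ms: float) -> None:
    """Append a minimal event record (illustrative stand-in for client.track)."""
    EVENTS.append({
        "type": event_type,
        "name": name,
        "payload": payload,
        "metadata": {"duration_ms": round(duration_ms, 2)},
    })

def fetch_user(user_id: int) -> dict:
    """Pretend database read, wrapped so it emits a DATA_ACCESS-style event."""
    start = time.perf_counter()
    row = {"id": user_id, "name": "Ada"}  # stand-in for a real query
    record_event(
        "DATA_ACCESS",
        "fetch_user",
        {"user_id": user_id},
        (time.perf_counter() - start) * 1000,
    )
    return row

fetch_user(42)
print(EVENTS[0]["type"], EVENTS[0]["payload"])  # DATA_ACCESS {'user_id': 42}
```

The same shape applies to the other event types: time the operation, describe it in the payload, and record it alongside the result.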
## Configuration

```python
client = TruseraClient(
    api_key="tsk_your_api_key",
    base_url="https://api.trusera.dev",  # Optional, defaults to production
    flush_interval=5.0,  # Seconds between auto-flushes
    batch_size=100,      # Events per batch
    timeout=10.0,        # HTTP request timeout
    max_retries=3        # Retries before dropping events
)
```

## Context Manager

Use the client as a context manager for automatic cleanup:

```python
with TruseraClient(api_key="tsk_your_api_key") as client:
    client.register_agent("my-agent", "custom")
    # ... track events ...
# Automatically flushed and closed
```

## Async Support

For asyncio applications, use `AsyncTruseraClient`:

```python
from trusera_sdk import AsyncTruseraClient, Event, EventType

async with AsyncTruseraClient(api_key="tsk_your_api_key") as client:
    await client.register_agent("my-agent", "custom")
    client.track(Event(
        type=EventType.TOOL_CALL,
        name="async_search",
        payload={"query": "test"}
    ))
    await client.flush()
```

## Best Practices

- **Use a context manager**: Ensures events are flushed on exit
- **Set the agent ID early**: Call `register_agent()` or `set_agent_id()` before tracking
- **Batch operations**: Let the SDK handle batching automatically
- **Sensitive data**: Use `capture_args=False` in `@monitor` for sensitive functions
- **Error handling**: The SDK logs errors but won't crash your application
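The batching behavior described above (buffer events, flush whenever `batch_size` is reached, drain the rest on close) can be sketched in plain Python. This `EventBatcher` class is an illustration of the pattern under those assumptions, not the SDK's actual internals:

```python
class EventBatcher:
    """Buffers events and flushes them in fixed-size batches (illustrative sketch)."""

    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self.buffer: list[dict] = []
        self.sent_batches: list[list[dict]] = []  # stand-in for HTTP delivery

    def track(self, event: dict) -> None:
        """Buffer an event, flushing automatically when the batch fills up."""
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Send whatever is buffered; a real client would POST with retries here."""
        if self.buffer:
            self.sent_batches.append(self.buffer)
            self.buffer = []

batcher = EventBatcher(batch_size=3)
for i in range(7):
    batcher.track({"type": "TOOL_CALL", "name": f"call_{i}"})

batcher.flush()  # drain the remaining partial batch, as on close()
print([len(b) for b in batcher.sent_batches])  # [3, 3, 1]
```

A real client would also flush on a timer (`flush_interval`) from a background thread, which is why explicit `flush()` is only needed before shutdown.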
## Environment Variables

```bash
# Optional: Set default API key
export TRUSERA_API_KEY=tsk_your_api_key

# Optional: Custom API endpoint
export TRUSERA_API_URL=https://api.trusera.dev
```

## Development

```bash
# Clone the repository
git clone https://github.com/Trusera/trusera-agent-sdk.git
cd trusera-agent-sdk

# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run the linter
ruff check .

# Type checking
mypy trusera_sdk
```

## Documentation

Full documentation is available at docs.trusera.dev/sdk/python.
## Support

- Website: trusera.dev
- Documentation: docs.trusera.dev
- Issues: GitHub Issues
- Email: dev@trusera.dev

## License

Apache License 2.0 - see the LICENSE file for details.

## Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
Built with care by the Trusera team. Making AI agents secure and trustworthy.