Conversation

@MagellaX MagellaX commented Sep 30, 2025

Summary

  • wire the new MCP-backed agent runner into chat routing with feature-flag control
  • expose the dedicated MCP tool server for conversations, memories, and action items
  • include regression tests and e2e harness for agent streaming

covers #2935

@MagellaX
Author

hey @Arshroop-Saini, what do you think? Any takes?

@aaravgarg aaravgarg requested review from beastoin and mdmohsin7 and removed request for mdmohsin7 October 2, 2025 00:54
@Arshroop-Saini
Collaborator

@MagellaX I already built this, check PR #3048

@MagellaX
Copy link
Author

MagellaX commented Oct 2, 2025

@MagellaX I already built this, check PR #3048

Nice work on #3048, the direction is solid! That said, a few parts feel verbose or brittle: a global app_state with a cross-module import from main, heavy print/emoji logging, the graph compiled in two places, a string-based guard on the tool loop, and a metadata-only vector "full scan" that will get expensive. My PR delivers the same agentic behavior but cleaner and lower-risk: the MCP tools and agent flow sit behind a feature flag with automatic fallback, the tools and graph are stored on FastAPI app.state (no globals), prompts and logging are slimmer, dependencies are injected instead of importing main, and there is full unit/E2E coverage with streaming plus agent-action logging. It's a smaller, surgical diff that stays backward compatible and is easier to operate and monitor. I'm happy to fold in anything missing, but merging this version should cut noise, reduce regressions, and keep iteration fast.
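To make the "feature flag with automatic fallback" point concrete, here is a minimal sketch of that routing pattern. All names (`AGENT_V2_ENABLED`, `run_agent_chat`, `run_legacy_chat`, `route_chat`) are illustrative, not taken from the PR:

```python
# Hypothetical sketch of feature-flag routing with automatic fallback.
# The real PR wires this into FastAPI chat routing; this strips it down
# to the control flow only.
import os


def run_legacy_chat(message: str) -> str:
    # Existing non-agent chat path; always available.
    return f"legacy:{message}"


def run_agent_chat(message: str) -> str:
    # New MCP-backed agent runner; may fail while the feature is young.
    if message == "boom":  # stand-in for a backend error
        raise RuntimeError("agent backend unavailable")
    return f"agent:{message}"


def route_chat(message: str) -> str:
    """Use the agent runner when the flag is on; fall back on any error."""
    if os.environ.get("AGENT_V2_ENABLED") == "1":
        try:
            return run_agent_chat(message)
        except Exception:
            pass  # automatic fallback keeps chat working
    return run_legacy_chat(message)
```

With the flag off (or on any agent-side error), users silently get the legacy path, which is what makes the rollout low-risk.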

@beastoin
Collaborator

beastoin commented Oct 8, 2025

@MagellaX interesting.

btw, why do we need the mcp when we could build function_tool(s) to achieve better performance?

and, do we have any test results, sir?

@MagellaX
Author

MagellaX commented Oct 8, 2025

@MagellaX interesting.

btw, why do we need the mcp when we could build function_tool(s) to achieve better performance?

and, do we have any test results, sir?

hey, thanks for digging in. I kept the MCP layer because we already expose the same 14 tools (memories, action items, Pinecone search, etc.) to the wearable, the CLI, and now chat.
By going through MCP, we maintain the tool logic only once, we can gate and rotate credentials server-side, and we get consistent usage telemetry across surfaces. If we re-implemented them as function tools inside LangGraph, it would duplicate the adapters, drift from the other clients, and require touching multiple stacks for every future tool addition (doable, but it would take a bit of time). The streaming hop adds only a few ms in my profiling, so the practical performance cost is tiny compared with the reuse we get.
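The "maintain the tool logic only once" argument can be sketched as a single registry that every surface dispatches through. The tool name and the decorator/dispatch helpers below are hypothetical stand-ins, not the PR's actual code:

```python
# Hypothetical sketch: one shared tool registry, so wearable, CLI, and chat
# all hit the same implementation and the same dispatch choke point.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., object]] = {}


def tool(name: str):
    """Register a function as a named tool exactly once."""
    def decorator(fn: Callable[..., object]):
        TOOLS[name] = fn
        return fn
    return decorator


@tool("list_action_items")
def list_action_items(user_id: str) -> list:
    # Single implementation shared by every client surface.
    return [f"action-item-for-{user_id}"]


def call_tool(name: str, **kwargs):
    """Uniform dispatch point: one place to add auth, telemetry, rate limits."""
    return TOOLS[name](**kwargs)
```

The point is that credential gating and telemetry live in `call_tool` once, instead of being re-implemented per client.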

@beastoin beastoin mentioned this pull request Oct 9, 2025
@beastoin
Collaborator

beastoin commented Oct 9, 2025

#3092

@beastoin
Collaborator

beastoin commented Oct 9, 2025

@MagellaX @Romamo would be great if you guys could collaborate and help make omi chat great // also feel free to ping me @thinh via discord http://discord.omi.me
