
Show how the LLM is involved in the MCP workflow #53

Open
hgmuc opened this issue Nov 27, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@hgmuc

hgmuc commented Nov 27, 2024

I would appreciate it if the high-level architecture diagrams also showed the data flow (requests, responses, and message payloads from the context) to and from the LLM. From the weather-server example with Claude Desktop, it seems that Claude Desktop calls the LLM to create the tool call with the correct arguments. But I can't see how the returned forecast data gets formatted in a user-friendly way. Does Claude Desktop pass the forecast data back to the LLM to get a nicely formatted answer, or is Claude Desktop able to show this data in a nicely formatted way (e.g. a table) out of the box?

  • Which component (host, server) calls the LLM, and when?
  • Or is it up to the developer to choose whether the host or the server calls the LLM, whichever seems best from the workflow perspective?
  • Are there any recommendations or best practices for getting the LLM involved?

Finally, adding the LLM to the high-level diagrams would also help in understanding potential risks when using MCP with sensitive data. Claude Desktop obviously uses Claude LLMs over the internet. But with sensitive data, one might consider using a local LLM, which might then be called by the MCP server.
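To make the question concrete, here is a minimal, hypothetical sketch of the data flow being asked about: the host calls the LLM, the LLM produces the tool call, the host executes it against the MCP server, and the raw result goes back to the LLM for user-friendly formatting. All class and function names here are illustrative stubs, not the real MCP SDK or Claude API.

```python
class StubLLM:
    """Stands in for the model API (e.g. the Claude API)."""
    def complete(self, messages, tools=None):
        last = messages[-1]
        if last.get("role") == "tool":
            # Second pass: format the raw tool result for the user.
            return {"text": f"Forecast: {last['content']}", "tool_call": None}
        # First pass: decide to call the weather tool with arguments.
        return {"text": None,
                "tool_call": {"name": "get_forecast",
                              "arguments": {"city": "Munich"}}}

class StubMCPServer:
    """Stands in for a weather MCP server."""
    def list_tools(self):
        return [{"name": "get_forecast"}]
    def call_tool(self, name, arguments):
        return f"{arguments['city']}: sunny, 21°C"

def run_turn(user_message, llm, server):
    # 1. The HOST sends the user message plus the server's tool list to the LLM.
    reply = llm.complete([user_message], tools=server.list_tools())
    # 2. The HOST (not the LLM, not the server) executes the tool call
    #    that the LLM produced.
    if reply["tool_call"]:
        call = reply["tool_call"]
        result = server.call_tool(call["name"], call["arguments"])
        # 3. The raw result goes back to the LLM, which formats the
        #    final user-facing answer.
        reply = llm.complete([user_message,
                              {"role": "tool", "content": result}])
    return reply["text"]

print(run_turn({"role": "user", "content": "Weather in Munich?"},
               StubLLM(), StubMCPServer()))
```

If this sketch matches reality, the server never talks to the LLM at all; the host mediates every exchange, which is exactly the detail the diagrams could make explicit.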

@hgmuc hgmuc changed the title Show how the LLM is involved in this workflow Show how the LLM is involved in the MCP workflow Nov 27, 2024
@jspahrsummers jspahrsummers added the enhancement New feature or request label Nov 27, 2024
@JonasHelming
Contributor

I was about to write the exact same text, this would be really interesting!

@jimlloyd

See also #231. The fact that the LLM is not clearly identified makes it awkward to talk about, especially in topologies where an MCP Server is a gateway to another LLM. There are already MCP Servers that use the OpenAI or Gemini LLM APIs. This means that the user on the Host can communicate directly with one LLM and indirectly with multiple secondary LLMs.
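A minimal sketch of the "gateway" topology described above, assuming an MCP server whose tool is itself backed by a secondary LLM (e.g. an OpenAI or Gemini API call). Names are illustrative stubs, not real SDK APIs.

```python
class SecondaryLLM:
    """Stands in for a second model API (e.g. OpenAI or Gemini)
    sitting behind the MCP server."""
    def complete(self, prompt):
        return f"[secondary-LLM answer to: {prompt}]"

class GatewayMCPServer:
    """An MCP server acting as a gateway to another LLM: the host's
    primary LLM issues the tool call, but the tool is answered by a
    different model entirely."""
    def __init__(self):
        self.backend = SecondaryLLM()
    def call_tool(self, name, arguments):
        if name == "ask_model":
            return self.backend.complete(arguments["prompt"])
        raise ValueError(f"unknown tool: {name}")

# The user talks directly to the primary LLM, and indirectly
# (through this tool) to the secondary one:
server = GatewayMCPServer()
print(server.call_tool("ask_model", {"prompt": "Summarize MCP"}))
```

In a diagram, this would appear as two distinct LLM boxes with different trust boundaries, which is what makes the unnamed-LLM ambiguity awkward.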
