This repository was archived by the owner on Apr 8, 2025. It is now read-only.
Describe the bug
This is a bug only with respect to documentation and perhaps the specification.
To Reproduce
Steps to reproduce the behavior:
Try asking questions (on Perplexity, or on Claude Desktop with search enabled) about MCP Hosts that are already enabled to work with non-Claude models such as OpenAI or Google Gemini.
Note that the initial response will give examples of MCP Servers that can use the non-Claude API to provide data, via the MCP client/server protocol, to the LLM that the MCP Host is communicating with.
Meta: the sentence above is awkward because there is no established term for the LLM that the MCP Host is connected to and that the user is conversing with.
Expected behavior
It should be relatively easy to talk about the different components of a system that uses MCP in different configurations.
Additional context
I have been thinking about more advanced topologies for MCP use. I am considering use cases where there is more than one MCP Host, with the hosts connected to different LLM APIs. Describing these configurations is more difficult than it should be.
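To make the terminology gap concrete, here is a minimal sketch of such a topology in plain Python. All names here (Llm, McpServer, McpHost, the method names) are entirely hypothetical and illustrative — this is not the real MCP SDK, just a model of the roles involved:

```python
from dataclasses import dataclass, field

# Hypothetical model of the MCP roles; names are illustrative, not from the spec.

@dataclass
class Llm:
    """The model the host forwards conversation turns to -- the role that
    currently has no agreed-upon name in the MCP documentation."""
    api: str  # e.g. "anthropic", "openai", "gemini"

    def complete(self, prompt: str) -> str:
        return f"[{self.api}] response to: {prompt}"

@dataclass
class McpServer:
    name: str

    def call_tool(self, tool: str) -> str:
        return f"{self.name}.{tool} result"

@dataclass
class McpHost:
    """Embeds one MCP client per server and talks to exactly one LLM."""
    llm: Llm
    servers: list[McpServer] = field(default_factory=list)

    def handle_user_turn(self, text: str) -> str:
        # Gather tool context from every attached server, then query the LLM.
        context = "; ".join(s.call_tool("fetch") for s in self.servers)
        return self.llm.complete(f"{text} (context: {context})")

# Two hosts wired to different LLM APIs -- the topology that is hard to
# describe because "the LLM behind the host" has no standard name.
host_a = McpHost(Llm("openai"), [McpServer("files")])
host_b = McpHost(Llm("gemini"), [McpServer("web")])
print(host_a.handle_user_turn("summarize"))
print(host_b.handle_user_turn("search"))
```

Even in this toy form, the docstring on `Llm` has to paraphrase its role rather than name it, which is exactly the problem.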
p.s. Congratulations on OpenAI endorsing MCP. That is huge. I can't wait for Google to follow suit.
The terminology is unclear partly because the LLM is not identified in the workflow. See #53. But just adding a box for the LLM in the diagram and labelling the box "LLM" is not sufficient.
Even the user is not shown in the diagrams, and perhaps should be in diagrams where the topology is more complex. Consider an MCP Server that also functions as an MCP Host: it communicates with a secondary LLM, but has its own set of MCP Server tools that it can use. In this case, the MCP Server used by the main Host acts in the role of the user for the conversation carried through the secondary Host to the secondary LLM. So "user" is a role that is sometimes performed by a real person and other times by an agent.
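That nested topology can be sketched as follows. Again, all names are hypothetical (this is not the real MCP SDK): the point is only that the tool handler on the primary server is itself a conversational counterpart to a secondary LLM, so an agent, not a person, fills the "user" role there:

```python
class Llm:
    """A model endpoint; the api tag stands in for a real provider API."""
    def __init__(self, api: str):
        self.api = api

    def complete(self, prompt: str) -> str:
        return f"[{self.api}] {prompt}"

class Host:
    """Connects whoever fills the 'user' role to exactly one LLM."""
    def __init__(self, llm: Llm):
        self.llm = llm

    def turn(self, text: str) -> str:
        return self.llm.complete(text)

class HostingServer:
    """An MCP server that is itself a host: its tool handler converses
    with a secondary LLM, so the primary side fills the user role."""
    def __init__(self, secondary: Host):
        self.secondary = secondary

    def call_tool(self, query: str) -> str:
        # Here the calling server, not a human, is the conversational
        # counterpart of the secondary host's LLM.
        return self.secondary.turn(query)

primary = Host(Llm("anthropic"))
nested = HostingServer(Host(Llm("openai")))
# The primary host consults the nested server, which delegates to its own LLM:
print(primary.turn(nested.call_tool("look this up")))
```

The asymmetry the comment above describes shows up directly: `HostingServer.call_tool` plays "user" toward the secondary host while playing "server" toward the primary one.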
Logs
As an example, consider this Perplexity session: https://www.perplexity.ai/search/what-mcp-hosts-are-able-to-use-old4.5WpStihTUON9v9aOg