
langchain-ollama (partners): allow passing ChatMessage (with custom 'role') to ChatOllama #30191

Open · wants to merge 5 commits into master
Conversation

rylativity

Description: currently, ChatOllama will raise a ValueError if a ChatMessage is passed to it, as described in the linked issue. This PR enables passing a LangChain ChatMessage with a custom 'role' attribute through the LangChain ChatOllama class.
Issue: resolves #30122 (also related to PR #30147 and ollama-python issue ollama/ollama#8955)
Dependencies: no new dependencies

  • PR title
  • PR message
  • Lint and test: format, lint, and test all running successfully and passing

@lemassykoi

From my initial stack trace in #30122, I'm afraid that I will not be able to use create_react_agent with granite3.2.
Could you confirm? Thanks for your involvement.

@rylativity
Author

rylativity commented Mar 10, 2025

@lemassykoi you will have to explicitly use a ChatMessage instance for the "control/thinking" message like so:

from langchain_core.messages import ChatMessage

...

messages = {
    "messages": [
        ChatMessage(role="control", content="thinking"),
        ("user", "explain catdog"),
    ]
}

Or, for example with more context:

from langchain_ollama import ChatOllama
from langchain_core.messages import ChatMessage
from langgraph.prebuilt import create_react_agent

llm = ChatOllama(
        base_url    = "http://localhost:11434",
        model       = "granite3.2",
        verbose     = True,
        disable_streaming = True,
    )
agent_supervisor = create_react_agent(
        model        = llm,
        tools        = [],
        name         = "ReAct_Agent_WIP",
        debug        = True
    )

messages = {
    "messages": [
        ChatMessage(role="control", content="thinking"),
        {"role": "user", "content": "explain options trading"},
    ]
}

for event in agent_supervisor.stream(
    input = messages,
    # config = config,
    stream_mode = "values",
    debug = True
):
    print(event)

For now, I'd recommend using a ChatMessage instance as described above. I'm looking into whether we can have it accept a dictionary message like this:

...

messages = {
    "messages": [
        {"type": "chat", "role": "control", "content": "thinking"},
        {"role": "user", "content": "explain options trading"},
    ]
}

...

@rylativity
Author

In order to make this also work with dict-type messages, something would need to be fixed in langchain-core. Right now, the `_convert_to_message` function in `langchain_core/messages/utils.py` implicitly assumes that 'role' and 'type' values are interchangeable in dict-type messages (see https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/messages/utils.py#L325-L328), which I don't think is the desired or intended behavior. Handling this will require a deeper look at langchain-core to avoid breaking something else, but that issue already exists and is not something introduced by this PR.
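To illustrate the ambiguity, here is a simplified sketch, not the actual `_convert_to_message` implementation (the `KNOWN_TYPES` set and `convert` function are invented for the example): if a converter falls back to treating a dict's 'role' key as the message 'type', any custom role that is not a recognized type is rejected, while explicitly tagging the dict as a 'chat' message gets through.

```python
# Simplified sketch of the 'role'/'type' ambiguity; not the real
# langchain_core.messages.utils._convert_to_message code.
KNOWN_TYPES = {"human", "ai", "system", "chat"}

def convert(message: dict) -> dict:
    # The implicit assumption under discussion: when no 'type' key is
    # present, fall back to interpreting 'role' as the message type.
    msg_type = message.get("type") or message.get("role")
    if msg_type not in KNOWN_TYPES:
        raise ValueError(f"Unexpected message type: {msg_type!r}")
    return {"type": msg_type, **message}

# A dict with a custom role but no explicit 'type' is rejected:
try:
    convert({"role": "control", "content": "thinking"})
except ValueError as e:
    print(e)  # Unexpected message type: 'control'

# Explicitly tagging it as a 'chat' message succeeds:
print(convert({"type": "chat", "role": "control", "content": "thinking"}))
```

Under this fallback, 'role' silently doubles as 'type', which is why accepting custom-role dicts cleanly would need a change in langchain-core rather than in this PR.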

@ccurme my suggestion would be to merge this PR, which allows using ChatMessage with custom 'role' attributes with ChatOllama and closes #30122. And I can open a separate discussion and/or issue regarding how to handle 'role' vs 'type' keys in dict type messages in langchain-core (described above).
