
Cohere Command R and R+ throw botocore.errorfactory.ValidationException when invoked with tools #566

@irenelavopa

DESCRIPTION
When invoking the Converse operation with bound tools using cohere.command-r-plus-v1:0 or cohere.command-r-v1:0, an error is thrown and no response is returned. The error is: botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: The model returned the following errors: Invalid parameter combination. Please check and try again.

PREMISE

  • we have a chatbot with tools available (in this case just one tool is accessible by the LLM)
  • the code has been tested with other Bedrock models and works (anthropic.claude-3-7-sonnet-20250219-v1:0)
  • the code has been tested against the Cohere API directly, integrated via langchain_cohere, and works
  • the code has been tested with no tools bound to the LLM and works
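To isolate whether the failure comes from LangChain's request construction or from the Converse API itself, the same tool call can be sketched at the raw boto3 level. This is a hypothetical minimal reproduction: the tool name and schema below are placeholders, not the real search assistant.

```python
# Hypothetical minimal repro at the raw boto3 level, bypassing LangChain,
# to check whether Converse itself rejects the tool config for Cohere
# Command R. Tool name and schema are placeholders.
import json


def build_converse_request(model_id: str) -> dict:
    """Build the kwargs for bedrock-runtime client.converse()."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": "Find docs about pricing"}]}
        ],
        "toolConfig": {
            "tools": [
                {
                    "toolSpec": {
                        "name": "search_assistant",
                        "description": "Search the document index.",
                        "inputSchema": {
                            "json": {
                                "type": "object",
                                "properties": {
                                    "query": {"type": "string"},
                                    "items_from": {"type": "integer"},
                                },
                                "required": ["query"],
                            }
                        },
                    }
                }
            ]
        },
        "inferenceConfig": {"temperature": 0},
    }


if __name__ == "__main__":
    import boto3  # requires AWS credentials and Bedrock model access

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.converse(**build_converse_request("cohere.command-r-v1:0"))
    print(json.dumps(resp["output"], indent=2))
```

If this raw call fails with the same ValidationException, the problem is on the Bedrock/Cohere side rather than in langchain-aws's message translation.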

CODE SNIPPETS

def search_assistant_tool(
    cfg: AgentChatbotCfg,
    search_assistant_repo: SearchAssistantRepo,
    input_index: Optional[str] = None,
) -> StructuredTool:
    def _search_adapter(query: str, items_from: int):
        """Adapter method that lets the LLM call the search assistant."""
        opts = {"query": query, "items_from": items_from}

        merged_opts = opts | (cfg.search_options or {})

        if input_index:
            merged_opts["index_name"] = input_index  # add index if provided

        with tracer.start_as_current_span("search_assistant_call") as span:
            try:
                response = search_assistant_repo.search(options=merged_opts)

                span.set_status(Status(StatusCode.OK))
                span.set_attribute("agent.name", cfg.name)
            except Exception as exc:
                span.set_status(Status(StatusCode.ERROR, str(exc)))
                span.record_exception(exc)
                raise

        result = response.get("result", {})  # default to an empty dict, not the dict class
        items = result.get("items", [])
        content = json.dumps(items)
        return content, result
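One side note on the adapter: `opts | (cfg.search_options or {})` uses the dict union operator, where keys from the right-hand operand win, so configured search options silently override the arguments the model supplied. A minimal illustration with made-up values:

```python
# With the dict union operator (PEP 584), right-hand keys take
# precedence: configured search options override the LLM-supplied
# arguments when both define the same key.
opts = {"query": "pricing", "items_from": 0}      # from the LLM
search_options = {"items_from": 10, "size": 5}    # from cfg.search_options
merged = opts | search_options
print(merged)  # {'query': 'pricing', 'items_from': 10, 'size': 5}
```

If the intent is the opposite precedence (model arguments win), the operands would need to be swapped.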

def _query_or_respond(self, state: MessagesState):
    """Generate a tool call for retrieval, or respond directly."""
    messages = state["messages"]

    tool_llm = LLMProvider.get_llm(
        options=LLMProviderInput(
            provider="bedrock",
            model_id="cohere.command-r-v1:0",
            aws_region="us-east-1",
            model_kwargs={"temperature": 0},
        )
    )

    llm_with_tools = tool_llm.bind_tools([self.search_assistant_tool])

    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

The LLM is created as:

ChatBedrockConverse(
    client=client,
    model=options.model_id,
    max_tokens=model_kwargs.pop("max_tokens", None),
    temperature=model_kwargs.pop("temperature", 0),
    additional_model_request_fields=model_kwargs,
)

ERROR MESSAGE AND STACK TRACE


Traceback (most recent call last):

  File "...", line 140, in run_chatbot_agent
    ).run(body)
      ~~~^^^^^^
  File "....", line 131, in run
    for step in graph.stream(
                ~~~~~~~~~~~~^
        {"messages": messages},
        ^^^^^^^^^^^^^^^^^^^^^^^
        thread,
        ^^^^^^^
        stream_mode="values",
        ^^^^^^^^^^^^^^^^^^^^^
    ):
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langgraph/pregel/main.py", line 2644, in stream
    for _ in runner.tick(
             ~~~~~~~~~~~^
        [t for t in loop.tasks.values() if not t.writes],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<2 lines>...
        schedule_task=loop.accept_push,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ):
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langgraph/pregel/_runner.py", line 162, in tick
    run_with_retry(
    ~~~~~~~~~~~~~~^
        t,
        ^^
    ...<10 lines>...
        },
        ^^
    )
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langgraph/pregel/_retry.py", line 42, in run_with_retry
    return task.proc.invoke(task.input, config)
           ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 640, in invoke
    input = context.run(step.invoke, input, config, **kwargs)
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langgraph/_internal/_runnable.py", line 384, in invoke
    ret = self.func(*args, **kwargs)
  File "...", line 205, in _query_or_respond
    response = llm_with_tools.invoke(messages)
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 3046, in invoke
    input_ = context.run(step.invoke, input_, config)
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/runnables/base.py", line 5434, in invoke
    return self.bound.invoke(
           ~~~~~~~~~~~~~~~~~^
        input,
        ^^^^^^
        self._merge_configs(config),
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        **{**self.kwargs, **kwargs},
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 395, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 980, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 799, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_core/language_models/chat_models.py", line 1045, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/langchain_aws/chat_models/bedrock_converse.py", line 661, in _generate
    response = self.client.converse(
        messages=bedrock_messages, system=system, **params
    )
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/botocore/client.py", line 601, in _api_call
    return self._make_api_call(operation_name, kwargs)
           ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/botocore/context.py", line 123, in wrapper
    return func(*args, **kwargs)
  File "../.pyenv/versions/agent-assistant-3.13/lib/python3.13/site-packages/botocore/client.py", line 1074, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: The model returned the following errors: Invalid parameter combination. Please check and try again.
During task with name 'query_or_respond' and id '2541467c-f1d7-7a7e-843c-94eca81dc548'

SYSTEM INFO

OS: macOS Sequoia 15.5
Python Version: 3.13.1

PACKAGE INFO

boto3==1.38.19
botocore==1.38.19
langchain==0.3.25
langchain-aws==0.2.23
langchain-community==0.3.24
langchain-core==0.3.60
langgraph==0.4.5
