use_responses_api= True gives error with AzureChatOpenAI #31653

@sneharosegeorge1

Checked other resources

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import AzureChatOpenAI

az_llm_chain = AzureChatOpenAI(
    azure_endpoint="my-azure-endpoint",
    openai_api_version="2025-04-01-preview",
    deployment_name="my_depl_name",
    openai_api_type="my_depl_type",
    openai_api_key="*****",
    streaming=True,
    use_responses_api=True,
)

messages = [
    HumanMessagePromptTemplate.from_template("{question}")
]

prompt = ChatPromptTemplate.from_messages(messages)
llm_chain = prompt | az_llm_chain
response = llm_chain.invoke(prompt_vars, config=RunnableConfig(callbacks=[callback]))

Error Message and Stack Trace (if applicable)

File "venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3047, in invoke
    input_ = context.run(step.invoke, input_, config)
File "venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 372, in invoke
    self.generate_prompt(
File "venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 957, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 776, in generate
    self._generate_with_cache(
File "venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 1022, in _generate_with_cache
    result = self._generate(
File "venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 1036, in _generate
    return generate_from_stream(stream_iter)
File "venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 156, in generate_from_stream
    generation = next(stream, None)
File "venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 846, in _stream_responses
    context_manager = self.root_client.responses.create(**payload)
File "venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
File "venv/lib/python3.11/site-packages/openai/resources/responses/responses.py", line 690, in create
    return self._post(
File "venv/lib/python3.11/site-packages/openai/_base_client.py", line 1242, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "venv/lib/python3.11/site-packages/openai/_base_client.py", line 1037, in request
    raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Method Not Allowed

Description

I am trying to use the OpenAI Responses API via LangChain with Azure OpenAI. I passed use_responses_api=True to AzureChatOpenAI, but invoking the chain fails with APIStatusError: Method Not Allowed.
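An HTTP 405 Method Not Allowed usually means the request reached the Azure endpoint but the /responses route is not available there, which commonly points at the api-version rather than LangChain itself. As a hedged sketch only: assuming Azure OpenAI first exposed the Responses API in the 2025-03-01-preview api-version (an assumption based on Azure preview documentation, not something confirmed in this issue), a quick local check of the configured version would look like this:

```python
def supports_responses_api(api_version: str) -> bool:
    """Rough local check: does this Azure OpenAI api-version predate the
    Responses API? Assumes (not confirmed here) that /responses first
    shipped in api-version 2025-03-01-preview."""
    # Strip a trailing "-preview" suffix so only the ISO date remains;
    # ISO dates compare correctly as strings.
    date = api_version.split("-preview")[0]
    return date >= "2025-03-01"

# The api-version from the repro above should pass this check,
# so the 405 likely needs investigation elsewhere (endpoint URL,
# region/deployment support, or the SDK's routing).
print(supports_responses_api("2025-04-01-preview"))
print(supports_responses_api("2024-10-21"))
```

If the version check passes, a useful next isolation step is to call `client.responses.create(...)` directly with the bare `openai` SDK's `AzureOpenAI` client against the same endpoint and deployment, to determine whether the 405 comes from Azure or from how langchain-openai routes the request.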

System Info

langchain-core==0.3.65
langchain==0.3.25
langchain_community==0.3.20
langchain-openai==0.3.24

Model used: GPT-4.1

Labels

bug (Related to a bug, vulnerability, unexpected error with an existing feature), investigate (Flagged for investigation.)
