How does llm.bind_tools work in langchain?
#29345
anmol-aidora started this conversation in General
I need to understand how exactly LangChain converts information from code into the LLM prompt, because at the end of the day, only text can be passed to the LLM.
If I am incorrect somewhere in my understanding, kindly point that out as well when answering the question.
This is the bind_tools function:
```python
class BaseChatOpenAI(BaseChatModel):
    ...

    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type, Callable, BaseTool]],
        ...
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        """Bind tool-like objects to this chat model."""
```
Going to the definition of the `bind` method that `super().bind(...)` resolves to:
```python
class Runnable(Generic[Input, Output], ABC):
    ...

    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        """Bind arguments to a Runnable, returning a new Runnable."""
```
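If I read this correctly, `bind` does no work at call time; it just wraps the runnable in a `RunnableBinding` that remembers the kwargs. A small standalone check (using `RunnableLambda` only because it is easy to construct):

```python
from langchain_core.runnables import RunnableLambda

base = RunnableLambda(lambda x: x.upper())

# bind() returns a new runnable that wraps `base` and stores the kwargs;
# nothing is invoked or serialized at this point.
bound = base.bind(tools=[{"type": "function", "function": {"name": "noop"}}])

print(type(bound).__name__)  # RunnableBinding
print(bound.kwargs)          # the stored kwargs, e.g. {'tools': [...]}
```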
Going to `RunnableBinding`:
```python
class RunnableBinding(RunnableBindingBase[Input, Output]):
    """Wrap a Runnable with additional functionality."""
```
After this, I am not able to understand how exactly the bind_tools function passes the tool information to the LLM.
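My current best guess at the last hop: the chat model's `invoke` eventually reaches `BaseChatOpenAI._generate`, which folds those kwargs into the request payload, so `tools` travels as JSON in the HTTP body of the chat completions call rather than as text spliced into the prompt client-side (the provider injects the schemas into the model's context on the server). The roughly equivalent raw call, reusing my `multiply` example, assuming `OPENAI_API_KEY` is set and with the model name as a placeholder:

```python
from openai import OpenAI
from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_tool

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# What bind_tools ultimately amounts to on the wire: the tool schemas
# go into a "tools" field of the JSON request body, not prompt text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "What is 3 * 4?"}],
    tools=[convert_to_openai_tool(multiply)],
)
print(response.choices[0].message.tool_calls)
```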