Simplify LLM config #120
Conversation
llm_model: "anthropic.claude-3-5-sonnet-20241022-v2:0" | ||
# llm_provider: openai | ||
# llm_model: gpt-4o-2024-08-06 | ||
llm_provider: ${llm.provider} |
how and where is llm.provider defined?
It is defined in the config file under the llm section:
```yaml
provider: bedrock
```
and similarly for llm.model.
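For context, a minimal sketch of how that interpolation resolves, assuming an OmegaConf-style config. Only the llm keys and the ${llm.provider} / ${llm.model} references come from this PR; the section path around the CAAFE block is illustrative:

```yaml
llm:
  provider: bedrock                                      # defined once, under the llm section
  model: "anthropic.claude-3-5-sonnet-20241022-v2:0"
  # provider: openai
  # model: gpt-4o-2024-08-06

feature_transformers:                                    # illustrative path for the CAAFE step
  caafe:
    llm_provider: ${llm.provider}                        # resolves to "bedrock" above
    llm_model: ${llm.model}                              # resolves to the model above
```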
So we define them only once, and use the same model and provider defined for task inference.
Why are we not supporting different models again?
Could you please elaborate on "not supporting different models again"? We will still support all the models that were supported before. It is just easier for the user to pass their model and provider once instead of twice.
Why are we not supporting different models for task inference and feature generation?
Is that a use case? If so, we don't have that option in the UI. I can revert this change and keep the redundancy if that is a use case we foresee.
And by the way, if someone wants to do that, it is possible via the overrides. It's just the defaults that will be the same, and they were already the same.
Example: CAAFE may be more expensive, so the customer may want to use a cheaper LLM for it. But I think I'm fine for now.
Yes, that is possible; you just have to override the parameter and provide it explicitly.
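To illustrate, a rough sketch of such an override, assuming the user can replace the interpolation with an explicit value in their config. The caafe section path and the cheaper model ID are illustrative, not taken from the PR:

```yaml
llm:                                # defaults, used for task inference
  provider: bedrock
  model: "anthropic.claude-3-5-sonnet-20241022-v2:0"

feature_transformers:               # illustrative path for the CAAFE step
  caafe:
    llm_provider: bedrock
    llm_model: "anthropic.claude-3-haiku-20240307-v1:0"   # explicit cheaper model instead of ${llm.model}
```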
Description of changes:
This PR makes the Bedrock & OpenAI config setup similar and removes the requirement of passing in the api_key_location parameter. It also removes the redundancy of specifying the LLM model and provider again when CAAFE is used, by loading the same LLM parameters defined for task inference.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
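For a concrete picture of the change described above, a hedged before/after sketch of the provider setup. The api_key_location value shown and the exact post-change credential handling are illustrative assumptions, not taken from the PR:

```yaml
# Before (illustrative): OpenAI additionally required an api_key_location
llm:
  provider: openai
  model: gpt-4o-2024-08-06
  api_key_location: /path/to/openai_key.txt   # no longer required
---
# After: Bedrock and OpenAI both use the same two fields
llm:
  provider: bedrock            # or: openai
  model: "anthropic.claude-3-5-sonnet-20241022-v2:0"
```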