forked from langchain-ai/langchain
Patch 0 #1
Open
Softwilft wants to merge 2,247 commits into Anshler:master from Softwilft:patch-1
Conversation
…in-ai#14765)

## Description
Similar to langchain-ai#5861, I've experienced `KeyError`s resulting from unsafe lookups in the `convert_dict_to_message` function in [this file](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/adapters/openai.py). While that issue focused on `KeyError 'content'`, I've opened another issue (langchain-ai#14764) about how the problem still exists in the same function, but with `KeyError 'role'`. The fix for langchain-ai#5861 only added a safe lookup to the specific line that was causing trouble. This PR fixes the unsafe lookups in the rest of the function, but the problem still exists across the repo.

## Issues
* langchain-ai#14764
* langchain-ai#5861

## Dependencies
* None

## Checklist
- [x] make format
- [x] make lint
- [ ] make test - results in `make: *** No rule to make target 'test'. Stop.`

## Maintainers
* @hinthornw

--------- Co-authored-by: Bagatur <[email protected]>
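A minimal sketch of the kind of safe lookup the PR describes (hypothetical function name and behavior, not the actual `convert_dict_to_message` implementation):

```python
# Hypothetical illustration of replacing unsafe indexing with .get() lookups.
def convert_dict_to_message_safe(message: dict) -> dict:
    role = message.get("role")  # message["role"] would raise KeyError 'role'
    content = message.get("content", "")  # avoids KeyError 'content' (see #5861)
    if role is None:
        raise ValueError(f"Message is missing a 'role' key: {message}")
    return {"role": role, "content": content}
```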
…ore (langchain-ai#14914) - **Description:** This PR fixes an issue with duplicate input IDs in the Clarifai vectorstore class when ingesting more documents than the batch size. --------- Co-authored-by: Bagatur <[email protected]>
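A rough sketch of the idea behind this kind of fix, assuming each document gets a globally unique ID rather than an ID derived from its position in the batch (hypothetical helper, not the Clarifai code):

```python
import uuid

# Hypothetical: give every document a unique input id so that ingesting more
# documents than the batch size cannot reuse ids across batches.
def ingest_in_batches(docs, batch_size, upload_batch):
    for start in range(0, len(docs), batch_size):
        batch = docs[start:start + batch_size]
        ids = [uuid.uuid4().hex for _ in batch]
        upload_batch(list(zip(ids, batch)))
```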
## Description This PR intends to add support for Qdrant's new [sparse vector retrieval](https://qdrant.tech/articles/sparse-vectors/) by introducing a new retriever class, `QdrantSparseVectorRetriever`. Necessary usage docs and integration tests have been added for the retriever. --------- Co-authored-by: Bagatur <[email protected]>
…erialization of transform_output_fn (langchain-ai#14933)

**What is the reproduce code?**

```python
from langchain.chains import LLMChain, load_chain
from langchain.llms import Databricks
from langchain.prompts import PromptTemplate

def transform_output(response):
    # Extract the answer from the responses.
    return str(response["candidates"][0]["text"])

def transform_input(**request):
    full_prompt = f"""{request["prompt"]}
    Be Concise.
    """
    request["prompt"] = full_prompt
    return request

chat_model = Databricks(
    endpoint_name="llama2-13B-chat-Brambles",
    transform_input_fn=transform_input,
    transform_output_fn=transform_output,
    verbose=True,
)
print(f"Test chat model: {chat_model('What is Apache Spark')}")  # This works

llm_chain = LLMChain(llm=chat_model, prompt=PromptTemplate.from_template("{chat_input}"))
llm_chain("colorful socks")  # this works
llm_chain.save("databricks_llm_chain.yaml")  # transform_input_fn and transform_output_fn are not serialized into the model yaml file
loaded_chain = load_chain("databricks_llm_chain.yaml")  # The Databricks LLM is recreated with transform_input_fn=None, transform_output_fn=None.
loaded_chain("colorful socks")  # Thus this errors. The transform_output_fn is needed to produce the correct output
```

Error:

```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-6c34afab-3473-421d-877f-1ef18930ef4d/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
request payload: {'query': 'What is a databricks notebook?'}
```

**What does the error mean?**

When the LLM generates an answer, it is represented by a `Generation` data object. The `Generation` object takes a str field called `text`, e.g. `Generation(text="blah")`. However, the Databricks LLM tried to put a non-str value into `text`, e.g. `Generation(text={"candidates": [{"text": "blah"}]})`, so pydantic raises an error.

**Why does the output format become incorrect after saving and loading the Databricks LLM?**

The Databricks LLM does not support serializing `transform_input_fn` and `transform_output_fn`, so they are not serialized into the model yaml file. When the Databricks LLM is loaded, it is recreated with `transform_input_fn=None, transform_output_fn=None`. Without `transform_output_fn`, the output text is not unwrapped, which causes the error. The missing `transform_input_fn` causes the additional prompt "Be Concise." to be lost after saving and loading.

--------- Co-authored-by: Bagatur <[email protected]>
Description: Adding summarization to Vectara, to reflect that it provides not only vector-store functionality but can also return a summary. Also added: MMR capability (on the Vectara platform side), updated templates, and updated documentation and IPYNB examples. Tag maintainer: @baskaryan Twitter handle: @ofermend --------- Co-authored-by: Ofer Mendelevitch <[email protected]>
- **Description:** [OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in Oracle Cloud Infrastructure. This PR adds an integration for using LangChain with an LLM hosted on an [OCI Data Science Model Deployment](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). For authentication, [oracle-ads](https://accelerated-data-science.readthedocs.io/en/latest/user_guide/cli/authentication.html) is used to automatically load the credentials for invoking the endpoint. - **Issue:** None - **Dependencies:** `oracle-ads` - **Tag maintainer:** @baskaryan - **Twitter handle:** None --------- Co-authored-by: Erick Friis <[email protected]>
The [provider page](https://python.langchain.com/docs/integrations/providers/alibabacloud_opensearch) holds the vector store information. The [Chat example](https://python.langchain.com/docs/integrations/chat/pai_eas_chat_endpoint) was incorrectly sorted in the navbar because of a wrong file name. - Recreated the provider page - Added missing links and descriptions - Consolidated the vector store information from two pages into one - Fixed the file name
…in-ai#14805) * This PR adds `stream` implementations to Runnable Branch. * Runnable Branch still does not support `transform`, so streaming will break if the branch occurs in the middle or at the end of a sequence, but it will work if the branch is at the beginning of the sequence. * Fixes: use the async callback manager for async methods. * Handle `BaseException` rather than `Exception`, so more errors can be logged as errors when they are encountered. --------- Co-authored-by: Eugene Yurtsev <[email protected]>
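A small usage sketch of streaming through a `RunnableBranch`; the condition and branch runnables below are made-up placeholders, not taken from the PR:

```python
from langchain_core.runnables import RunnableBranch, RunnableLambda

# Two toy branches plus a default; the branch selected by the condition is
# streamed chunk by chunk when the branch sits at the start of a sequence.
branch = RunnableBranch(
    (lambda x: x.startswith("hi"), RunnableLambda(lambda x: x.upper())),
    RunnableLambda(lambda x: x[::-1]),  # default branch
)

for chunk in branch.stream("hi there"):
    print(chunk)
```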
…#14884) Co-authored-by: Erick Friis <[email protected]>
Templates for [local multi-modal LLMs](https://llava-vl.github.io/llava-interactive/) using: * Image summaries * Multi-modal embeddings --------- Co-authored-by: Erick Friis <[email protected]>
- **Description:** Fix a typo in the class docstring, replacing AZURE_OPENAI_API_ENDPOINT with AZURE_OPENAI_ENDPOINT - **Issue:** langchain-ai#14901 - **Dependencies:** NA - **Twitter handle:** Co-authored-by: Yacine Bouakkaz <[email protected]>
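For reference, a sketch of the corrected environment variable name in use (the class that reads it depends on the docstring being fixed; the endpoint value is a placeholder):

```python
import os

# AZURE_OPENAI_ENDPOINT (not AZURE_OPENAI_API_ENDPOINT) is the variable name
# referenced by the corrected docstring.
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<your-resource>.openai.azure.com/"
```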
…ngchain-ai#14978) **Description** For the Momento Vector Index (MVI) vector store implementation, pass through `filter_expression` kwarg to the MVI client, if specified. This change will enable the MVI self query implementation in a future PR. Also fixes some integration tests.
…4985) - **Description:** Fixed jaguar.py to import JaguarHttpClient with try/except - **Issue:** unable to use the JaguarHttpClient at run time - **Dependencies:** requires `pip install -U jaguardb-http-client` - **Twitter handle:** workbot --------- Co-authored-by: JY <jyjy@jaguardb> Co-authored-by: Bagatur <[email protected]>
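A sketch of the try/except import pattern described above (the module path is assumed, not copied from jaguar.py):

```python
# Hypothetical guard: surface a clear install hint instead of an opaque
# ImportError at run time.
try:
    from jaguardb_http_client.JaguarHttpClient import JaguarHttpClient
except ImportError as exc:
    raise ImportError(
        "Could not import jaguardb-http-client. "
        "Install it with `pip install -U jaguardb-http-client`."
    ) from exc
```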
Mulitple -> Multiple
… metadata (langchain-ai#14997) SurrealDB client changes from 0.3.1 to 0.3.2 broke the SurrealDB vector store integration. This PR updates the code to work with the updated client. The change is backwards compatible with previous versions of the SurrealDB client. Also expanded the vector store implementation to store and retrieve the metadata that's included with the document object.
…i#14614) - **Description:** @kurtisvg has raised the point that it's a good idea to have a fixed version for embeddings (since otherwise a user might run a query with one version against a vectorstore where another version was used). In order to avoid breaking changes, I'd suggest giving users a warning and making `model_name` a required argument in 1.5 months.
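A sketch of the warn-then-require transition suggested above; the function name, message, and default model are illustrative assumptions:

```python
import warnings
from typing import Optional

# Illustrative only: warn when model_name is omitted today, so that making it
# a required argument later is not a silent breaking change.
def resolve_model_name(model_name: Optional[str] = None) -> str:
    if model_name is None:
        warnings.warn(
            "model_name will become a required argument; relying on the default "
            "may pair queries with a vectorstore built by a different model version.",
            DeprecationWarning,
            stacklevel=2,
        )
        model_name = "textembedding-gecko"  # assumed current default
    return model_name
```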
Builds on langchain-ai#14040 with community refactor merged and notebook updated. Note that with this refactor, models will be imported from `langchain_community.chat_models.huggingface` rather than the main `langchain` repo. --------- Signed-off-by: harupy <[email protected]> Signed-off-by: ugm2 <[email protected]> Signed-off-by: Yuchen Liang <[email protected]> Co-authored-by: Andrew Reed <[email protected]> Co-authored-by: Andrew Reed <[email protected]> Co-authored-by: A-Roucher <[email protected]> Co-authored-by: Aymeric Roucher <[email protected]>
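After the community refactor, the import would look roughly like this (class name per the current `langchain_community` layout; treat as a sketch):

```python
# The Hugging Face chat wrapper now lives in langchain_community,
# not in the main langchain package.
from langchain_community.chat_models.huggingface import ChatHuggingFace
```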
Jobs like https://github.com/langchain-ai/langchain/actions/runs/7389187843/job/20101494206 only receive the first 300 changed files. Because of the risk of missing packages, it's better to auto-fail and run manually. Checking that it does what I expect in langchain-ai#15424
Includes code from this PR: langchain-ai/langchain@HEAD...m0kr4n3:security/fix_ssrf, with additional fixes. Unit tests cover the new test cases.
…-ai#15505) Should be imported from community directly
Removed the deprecated model from the text embedding page of the OpenAI notebook and added the suggested model from the OpenAI page.
**Description**: Fixed a minor typo in the RAG Docs: - ~~This usually happen offline~~ -> This usually happen**s** offline
There are still a few broken ones: - some in the chains docs, which I will delete soon :) - some pointing to a sqlite tool, which we should add
Co-authored-by: Bagatur <[email protected]>
…i#15218) Added the langchain_google_vertexai package --------- Co-authored-by: Erick Friis <[email protected]>
Todo:
- [x] copy over integration tests
- [x] update docs with new instructions in langchain-ai#15513
- [x] add linear ticket to bump core -> community, community -> langchain, and core -> openai deps
- [ ] (optional): add `pip install langchain-openai` command to each notebook using it (see the sketch after this list)
- [x] Update docstrings to not need `openai` install
- [x] Add serialization
- [x] deprecate old models

Contributor steps:
- [x] Add secret names to manual integrations workflow in .github/workflows/_integration_test.yml
- [x] Add secrets to release workflow (for pre-release testing) in .github/workflows/_release.yml

Maintainer steps (Contributors should not do these):
- [x] set up pypi and test pypi projects
- [x] add credential secrets to GitHub Actions
- [ ] add package to conda-forge

Functional changes to existing classes:
- now relies on openai client v1 (1.6.1) via a concrete dep in the langchain-openai package

Codebase organization:
- some function calling stuff moved to `langchain_core.utils.function_calling` in order to be used in both community and langchain-openai
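For notebooks that adopt the new partner package, the install and import pattern is roughly as follows (a sketch; the model name is only an example):

```python
# pip install langchain-openai
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-3.5-turbo")  # example model name
embeddings = OpenAIEmbeddings()
```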