[Issue]: ZeroDivisionError: Weights sum to zero, can't be normalized #619
Comments
It is likely that something went wrong in your index phase. You can check the logs from the indexing phase.
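A quick way to confirm that the index phase actually produced embeddings is to inspect the entities parquet it wrote. The sketch below assumes the default GraphRAG output layout and the `create_final_entities.parquet` artifact with a `description_embedding` column; adjust the path, run id, and column name to your project:

```python
import pandas as pd

# Assumed default GraphRAG output layout; substitute the run id of your index run.
df = pd.read_parquet("output/<run-id>/artifacts/create_final_entities.parquet")

# If the description_embedding column is missing, empty, or full of None values,
# local search has nothing to rank against, while global search can still work.
print(df.columns.tolist())
print(df["description_embedding"].head())
```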
Yes, this is due to your locally run embedding model not returning the embeddings in the expected format. OpenAI internally uses base64-encoded floats, while most other models return the floats as plain numbers. I've hacked the encoding_format into this piece of code to make local search work:

```python
def map_query_to_entities(
    query: str,
    text_embedding_vectorstore: BaseVectorStore,
    text_embedder: BaseTextEmbedding,
    all_entities: list[Entity],
    embedding_vectorstore_key: str = EntityVectorStoreKey.ID,
    include_entity_names: list[str] | None = None,
    exclude_entity_names: list[str] | None = None,
    k: int = 10,
    oversample_scaler: int = 2,
) -> list[Entity]:
    """Extract entities that match a given query using semantic similarity of text embeddings of query and entity descriptions."""
    if include_entity_names is None:
        include_entity_names = []
    if exclude_entity_names is None:
        exclude_entity_names = []
    matched_entities = []
    if query != "":
        # get entities with highest semantic similarity to query
        # oversample to account for excluded entities
        search_results = text_embedding_vectorstore.similarity_search_by_text(
            text=query,
            # added to make the embedding API work; OpenAI uses base64 by default
            text_embedder=lambda t: text_embedder.embed(t, encoding_format="float"),
            k=k * oversample_scaler,
        )
        for result in search_results:
            matched = get_entity_by_key(
                entities=all_entities,
                key=embedding_vectorstore_key,
                value=result.document.id,
            )
            if matched:
                matched_entities.append(matched)
    else:
        all_entities.sort(key=lambda x: x.rank if x.rank else 0, reverse=True)
        matched_entities = all_entities[:k]

    # filter out excluded entities
    if exclude_entity_names:
        matched_entities = [
            entity
            for entity in matched_entities
            if entity.title not in exclude_entity_names
        ]

    # add entities in the include_entity list
    included_entities = []
    for entity_name in include_entity_names:
        included_entities.extend(get_entity_by_name(all_entities, entity_name))
    return included_entities + matched_entities
```
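Some context on the encoding_format hack above: when the client requests base64-encoded embeddings (the OpenAI SDK's default behavior), the API returns each vector as a base64 string of packed float32 bytes, which the client then decodes. A local OpenAI-compatible server that ignores that parameter and returns plain float lists can be misparsed into empty or zero vectors, which is how the weights later sum to zero. A minimal sketch of just the decoding step, assuming NumPy; the `decode_base64_embedding` helper is illustrative and not part of GraphRAG:

```python
import base64

import numpy as np


def decode_base64_embedding(b64: str) -> np.ndarray:
    """Illustrative: decode one base64-encoded embedding into its float32 values."""
    return np.frombuffer(base64.b64decode(b64), dtype="float32")


# Round-trip demonstration with a fake vector encoded the way OpenAI would send it.
fake = base64.b64encode(np.asarray([0.12, -0.34, 0.56], dtype="float32").tobytes()).decode()
print(decode_base64_embedding(fake))  # -> [ 0.12 -0.34  0.56]

# A plain-float response (what many local models return) needs no decoding at all;
# forcing encoding_format="float" keeps the client from attempting a base64 decode.
```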
It seems that even after making this change, it still doesn't work.
It's because you're using a local model.
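For background on the message itself: "Weights sum to zero, can't be normalized" is the error numpy.average raises when every weight it receives is zero, which can happen downstream in local search when the query fails to match any entities (for example, because embeddings were never stored or were misparsed). A minimal reproduction of just the NumPy part:

```python
import numpy as np

values = np.ones(3)
weights = np.zeros(3)  # nothing matched, so every weight is zero

# Raises: ZeroDivisionError: Weights sum to zero, can't be normalized
np.average(values, weights=weights)
```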
Consolidating alternate model issues here: #657 |
Hello, I'm currently running into the same problem. I am using an Azure OpenAI instance and it returns this error.
I also encountered this situation, but I am not connecting to OpenAI. I checked the api_base and api_key, and there was no problem with them.
Where should I place this code?
This may be caused by an invalid api_key.
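One way to rule out api_key/api_base problems is to call the configured embedding endpoint directly with the same credentials GraphRAG uses. The sketch below assumes an OpenAI-compatible endpoint and the openai Python client, with placeholder values for base_url, api_key, and model; for Azure OpenAI, use openai.AzureOpenAI with your deployment name instead:

```python
from openai import OpenAI

# Placeholders: substitute the embeddings settings from your settings.yaml.
client = OpenAI(base_url="https://<your-endpoint>/v1", api_key="<your-api-key>")

resp = client.embeddings.create(model="<your-embedding-model>", input="test sentence")
vector = resp.data[0].embedding
print(f"got a vector of length {len(vector)}; first values: {vector[:5]}")
```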
Describe the issue
When I run a query with local search, I get the error ZeroDivisionError: Weights sum to zero, can't be normalized, but global search works correctly. If anyone has an idea, please share the solution.
Steps to reproduce
No response
GraphRAG Config Used
No response
Logs and screenshots
No response
Additional Information