error pops on local search while embedding #39

Open
navinshah opened this issue Aug 6, 2024 · 1 comment
Comments

@navinshah

An error pops up on local search while embedding; global search works fine.

creating embedding llm client with {'api_key': 'REDACTED,len=32', 'type': "openai_embedding", 'model': 'nomic-embed-text', 'max_tokens': 4000, 'temperature': 0, 'top_p': 1, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/api', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': None, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 50}
Error embedding chunk {'OpenAIEmbedding': "'NoneType' object is not iterable"}
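For context, the `'NoneType' object is not iterable` line is consistent with the embedding endpoint returning `None` for every chunk. A minimal standalone sketch (hypothetical, not the project's code) of a per-chunk embed loop that logs such failures, mirroring the log line above:

```python
def collect_embeddings(responses):
    """Hypothetical sketch of a per-chunk embed loop that logs failures."""
    chunk_embeddings, chunk_lens = [], []
    for resp in responses:
        try:
            # If the endpoint returned None, iterating it raises
            # TypeError: 'NoneType' object is not iterable
            chunk_embeddings.append([float(x) for x in resp])
            chunk_lens.append(len(resp))
        except TypeError as e:
            print({"OpenAIEmbedding": str(e)})
    return chunk_embeddings, chunk_lens

collect_embeddings([None, None])  # every chunk fails, so both lists stay empty
```

When every response is `None`, both lists come back empty, which sets up the ZeroDivisionError further down the traceback.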
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/__main__.py", line 76, in <module>
run_local_search(
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/cli.py", line 154, in run_local_search
result = search_engine.search(query=query)
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/structured_search/local_search/search.py", line 118, in search
context_text, context_records = self.context_builder.build_context(
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/structured_search/local_search/mixed_context.py", line 139, in build_context
selected_entities = map_query_to_entities(
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/context_builder/entity_extraction.py", line 55, in map_query_to_entities
search_results = text_embedding_vectorstore.similarity_search_by_text(
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/vector_stores/lancedb.py", line 118, in similarity_search_by_text
query_embedding = text_embedder(text)
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/context_builder/entity_extraction.py", line 57, in <lambda>
text_embedder=lambda t: text_embedder.embed(t),
File "/home/ec2-user/SageMaker/DataScienceSharedRepo/Kbot/supplierKbot/graphrag-local-ollama/graphrag/query/llm/oai/embedding.py", line 96, in embed
chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
File "/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/numpy/lib/function_base.py", line 550, in average
raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
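The final ZeroDivisionError follows directly from the earlier "Error embedding chunk" line: since every chunk failed, both `chunk_embeddings` and `chunk_lens` are empty when `embed` averages them, and `np.average` refuses zero-sum weights. A minimal standalone reproduction:

```python
import numpy as np

chunk_embeddings, chunk_lens = [], []  # nothing was embedded upstream

try:
    np.average(np.array(chunk_embeddings), axis=0, weights=chunk_lens)
except ZeroDivisionError as e:
    print(e)  # Weights sum to zero, can't be normalized
```

So the ZeroDivisionError is a symptom; the root cause is that the embedding calls against `http://localhost:11434/api` are not returning vectors.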

jia95812 commented Aug 7, 2024

I ran into a similar problem:
creating llm client with {'api_key': 'REDACTED,len=6', 'type': "openai_chat", 'model': 'mistral', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/v1', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
creating embedding llm client with {'api_key': 'REDACTED,len=6', 'type': "openai_embedding", 'model': 'nomic-embed-text', 'max_tokens': 4000, 'temperature': 0, 'top_p': 1, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/api', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': None, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
Error embedding chunk {'OpenAIEmbedding': "'NoneType' object is not iterable"}
Traceback (most recent call last):
File "D:\ProgramData\anaconda3\envs\graphrag-ollama-local\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\ProgramData\anaconda3\envs\graphrag-ollama-local\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "E:\graphrag-local-ollama-main\graphrag\query\__main__.py", line 76, in <module>
run_local_search(
File "E:\graphrag-local-ollama-main\graphrag\query\cli.py", line 154, in run_local_search
result = search_engine.search(query=query)
File "E:\graphrag-local-ollama-main\graphrag\query\structured_search\local_search\search.py", line 118, in search
context_text, context_records = self.context_builder.build_context(
File "E:\graphrag-local-ollama-main\graphrag\query\structured_search\local_search\mixed_context.py", line 139, in build_context
selected_entities = map_query_to_entities(
File "E:\graphrag-local-ollama-main\graphrag\query\context_builder\entity_extraction.py", line 55, in map_query_to_entities
search_results = text_embedding_vectorstore.similarity_search_by_text(
File "E:\graphrag-local-ollama-main\graphrag\vector_stores\lancedb.py", line 118, in similarity_search_by_text
query_embedding = text_embedder(text)
File "E:\graphrag-local-ollama-main\graphrag\query\context_builder\entity_extraction.py", line 57, in <lambda>
text_embedder=lambda t: text_embedder.embed(t),
File "E:\graphrag-local-ollama-main\graphrag\query\llm\oai\embedding.py", line 96, in embed
chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
File "D:\ProgramData\anaconda3\envs\graphrag-ollama-local\lib\site-packages\numpy\lib\function_base.py", line 550, in average
raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
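Until the endpoint issue is fixed, a defensive guard around the failing `np.average` call (a hypothetical patch sketch, not the project's code) would turn the opaque ZeroDivisionError into an actionable message:

```python
import numpy as np

def average_chunks(chunk_embeddings, chunk_lens):
    # Hypothetical guard: fail loudly when no chunk was embedded
    # instead of letting np.average divide by zero-sum weights.
    if not chunk_embeddings or sum(chunk_lens) == 0:
        raise RuntimeError(
            "no chunks were embedded; verify the embedding api_base and model"
        )
    avg = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
    return (avg / np.linalg.norm(avg)).tolist()
```

This only improves the error message; the underlying question is why the calls against `http://localhost:11434/api` return nothing while indexing (global search) works.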
