Has the `--method local` error been solved yet? #51

Open
Chenboshi114514 opened this issue Aug 23, 2024 · 2 comments

Comments

@Chenboshi114514

(graphrag) D:\anaconda\env\graphrag>python -m graphrag.query --root ./ragtest --method local "本文的主旨是什么?"
(the query asks: "What is the main idea of this document?")

INFO: Reading settings from ragtest\settings.yaml

INFO: Vector Store Args: {}
creating llm client with {'api_key': 'REDACTED,len=9', 'type': "openai_chat", 'model': 'llama3.1:latest', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/v1', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
creating embedding llm client with {'api_key': 'REDACTED,len=9', 'type': "openai_embedding", 'model': 'nomic-embed-text', 'max_tokens': 4000, 'temperature': 0, 'top_p': 1, 'n': 1, 'request_timeout': 180.0, 'api_base': 'http://localhost:11434/api', 'api_version': None, 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': None, 'model_supports_json': None, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}
Error embedding chunk {'OpenAIEmbedding': "'NoneType' object is not iterable"}
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in run_code
File "D:\anaconda\env\graphrag\graphrag\query_main
.py", line 86, in
run_local_search(
File "D:\anaconda\env\graphrag\graphrag\query\cli.py", line 98, in run_local_search
return asyncio.run(
^^^^^^^^^^^^
File "D:\anaconda\miniconda3\envs\graphrag\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "D:\anaconda\miniconda3\envs\graphrag\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\miniconda3\envs\graphrag\Lib\asyncio\base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\api.py", line 190, in local_search
result = await search_engine.asearch(query=query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\structured_search\local_search\search.py", line 66, in asearch
context_text, context_records = self.context_builder.build_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\structured_search\local_search\mixed_context.py", line 139, in build_context
selected_entities = map_query_to_entities(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\context_builder\entity_extraction.py", line 55, in map_query_to_entities
search_results = text_embedding_vectorstore.similarity_search_by_text(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\vector_stores\lancedb.py", line 118, in similarity_search_by_text
query_embedding = text_embedder(text)
^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\context_builder\entity_extraction.py", line 57, in
text_embedder=lambda t: text_embedder.embed(t),
^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\env\graphrag\graphrag\query\llm\oai\embedding.py", line 96, in embed
chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda\miniconda3\envs\graphrag\Lib\site-packages\numpy\lib\function_base.py", line 550, in average
raise ZeroDivisionError(
ZeroDivisionError: Weights sum to zero, can't be normalized
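For context when reading the traceback: the ZeroDivisionError is a downstream symptom. Every chunk embedding request fails first (the earlier "Error embedding chunk {'OpenAIEmbedding': \"'NoneType' object is not iterable\"}" line), so there are no successful chunk embeddings and the weights passed to np.average sum to zero. A minimal sketch of just that failing numpy call (not graphrag code, only an illustration of the error at the bottom of the traceback):

```python
import numpy as np

# When every embedding request fails, there are no usable chunk embeddings
# and no chunk lengths to weight by. Averaging with weights that sum to
# zero is exactly what raises the error at the end of the traceback.
chunk_embeddings = []  # no successfully embedded chunks
chunk_lens = []        # hence no weights

try:
    np.average(chunk_embeddings, axis=0, weights=chunk_lens)
except ZeroDivisionError as exc:
    print(exc)  # Weights sum to zero, can't be normalized
```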

@kilimanj4r0

Hi! Changing api_base: 'http://localhost:11434/api' to api_base: 'http://localhost:11434/v1' for embedding model config (in settings.yaml) solved the issue for me. Also, refer to this thread which explains what line to add to graphrag\graphrag\query\llm\oai\embedding.py.
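For anyone else running Ollama locally: http://localhost:11434/api is Ollama's native API, while the OpenAI-compatible endpoints that the openai_embedding client calls live under /v1, which is why the /api base URL returns a response the client can't parse. A sketch of the relevant settings.yaml section (assuming the default graphrag settings layout and the model names from the log above):

```yaml
embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}          # a local Ollama server ignores the value
    type: openai_embedding
    model: nomic-embed-text
    api_base: http://localhost:11434/v1   # was http://localhost:11434/api
```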

@Chenboshi114514
Author


thx!
