SUCCESS: Global Search Response: I am sorry but I am unable to answer this question given the provided data. #44
Comments
Same issue here; I'm also following the tutorial.
This error seems to happen because the generated response is not JSON serializable (it's not in the expected format). I didn't continue my investigation, but my guess is that if the output of the LLM doesn't follow the imposed JSON format, it will simply fail. @TheAiSingularity, if you have any idea to confirm or deny this, it would be very helpful, as I'm seeing multiple users hitting the same error while reproducing the tutorial. Thanks a lot!
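The failure mode described above is easy to reproduce in isolation. The sketch below uses a hypothetical helper (`parse_llm_json` is not part of GraphRAG) to show what happens when a model returns prose instead of JSON: `json.loads` raises, and any code expecting a parsed object gets nothing to work with.

```python
import json

def parse_llm_json(raw: str):
    """Try to parse an LLM response as JSON; return None on failure.

    Hypothetical helper -- GraphRAG's actual parsing code differs, but the
    underlying failure is the same: json.loads raises on non-JSON text.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

# A well-formed JSON response parses...
structured = parse_llm_json('{"points": [{"description": "summary", "score": 80}]}')
# ...but plain prose from the model does not.
prose = parse_llm_json("I am sorry but I am unable to answer this question.")
```

If `parse_llm_json` returns `None`, the caller has no structured answer to render, which matches the canned "unable to answer" response users are seeing.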
Well, the issue is not with the repo implementation. It works fine if you follow the mentioned steps exactly; we have tested it several times in our environments with different settings. You can play around with the parameters in the settings.yaml file and see. I suspect the issue lies with the user's other configurations on their system. The same setup has worked for a significant number of people, hence the stars on the repo. We have observed one pattern, though: if we just change the question and query the GraphRAG again, we were able to retrieve a response from the same successfully generated database. Hope this helps.
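For reference, the relevant knobs live in the `llm` section of settings.yaml. The fragment below is a sketch based on the field names GraphRAG's `init` generates (verify against your own file); the values reflect the Ollama setup described in this thread, and `model_supports_json` is the flag most relevant to the JSON-format failures discussed here.

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}        # unused by a local Ollama endpoint, but the field is expected
  type: openai_chat
  model: mistral                      # the model pulled via Ollama in this thread
  model_supports_json: true           # hint that the model should emit JSON, if your model supports it
  api_base: http://localhost:11434/v1 # Ollama's OpenAI-compatible endpoint
  max_tokens: 4000
  temperature: 0
```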
I added a `print(search_response)` and got the search_response below: it is plain text instead of JSON. I tried mistral and qwen2:72b and got the same result.
Seems some LLMs (mistral or qwen2:72b) cannot follow the JSON format instruction, so I added it to the user role's content as well, simply like this:
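A minimal sketch of that workaround, assuming the usual OpenAI-style message list (`add_json_reminder` and the reminder wording are hypothetical, not the commenter's exact patch): repeat the JSON-format instruction inside the last user message, since some models ignore it when it only appears in the system prompt.

```python
JSON_REMINDER = "Respond ONLY with valid JSON matching the requested schema."

def add_json_reminder(messages: list[dict]) -> list[dict]:
    """Return a copy of the messages with a JSON-format reminder appended
    to the last user message (hypothetical helper, not GraphRAG code)."""
    patched = [dict(m) for m in messages]
    for m in reversed(patched):
        if m.get("role") == "user":
            m["content"] = m["content"].rstrip() + "\n\n" + JSON_REMINDER
            break
    return patched

msgs = [
    {"role": "system", "content": "You are a helpful assistant. Output JSON."},
    {"role": "user", "content": "Summarize the community report."},
]
patched = add_json_reminder(msgs)
```

Duplicating the instruction in the user turn is a common mitigation for smaller models that weight the most recent message more heavily than the system prompt.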
Thanks for the help! I really appreciate it. |
The reason this error occurs is that mistral cannot reliably understand the demand: it fails to output the JSON format.
I am using Anaconda to build my own project. I am using Python version 3.10.14 and downloaded Ollama, pulled Mistral for my LLM, and pulled Nomic-Embed-Text for my embedding model. I followed the instructions step by step. The following are four screenshots from when I launched the command python -m graphrag.index --root ./ragtest.
Everything seems fine when I run the GraphRAG indexing step. However, when I use the query command, it doesn't produce the expected output.
Has anyone had the same or a similar experience? Big thanks.
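One way to confirm you are hitting the same failure is to check the query output for the canned fallback answer quoted in this issue's title. The helper below is hypothetical (a plain substring match on the text reported here, not part of GraphRAG):

```python
# The fallback text reported in this issue when the LLM's output
# cannot be parsed into the expected JSON structure.
FALLBACK_TEXT = (
    "I am sorry but I am unable to answer this question given the provided data."
)

def is_fallback_response(response: str) -> bool:
    """Return True if the query output is GraphRAG's canned fallback answer
    (hypothetical check based on the text quoted in this issue)."""
    return FALLBACK_TEXT in response

bad = is_fallback_response("SUCCESS: Global Search Response: " + FALLBACK_TEXT)
ok = is_fallback_response("SUCCESS: Global Search Response: The dataset covers ...")
```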