Error when running LlamaIndex_Mistral7B-RAG.ipynb in Jupyter Notebook #1
Comments
Additional information: I tried again by installing PyTorch stable 2.2, then went back to the current version specified in environment.yml. Please help, thank you.
Hi @chanwitkepha, would you be able to run conda list and share the output?
Here is the list of packages shown by conda list.
Thanks @chanwitkepha, is it possible to run the following in your conda environment to downgrade from Python 3.12 to 3.11?
I have seen an issue suggesting that PyTorch is about to support Python 3.12 but doesn't yet, though I don't know why it is working for me. I will double-check, but in the meantime please try 3.11 if you can.
OK, I will try it again.
After trying that, could you change your environment.yml file to be compatible with Python 3.11 so I can try installing the conda environment again? Thank you.
Hi @chanwitkepha, I've just run the Mistral 7B notebook with the environment as it was (Python 3.12.1 and PyTorch 2.1.0) and it ran okay. I am running it in Visual Studio Code and selecting the conda environment as the kernel. I was wondering if it is an NVIDIA CUDA library error - can you run nvidia-smi and share what it reports?
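For reference, a quick way to check from inside the same notebook kernel whether PyTorch can see the CUDA devices is something like the following (this snippet is my addition, not part of the original exchange):

```python
# Sanity-check the CUDA setup from the notebook kernel provided by the conda environment.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
print("CUDA build:     ", torch.version.cuda)
print("Device count:   ", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  Device {i}: {torch.cuda.get_device_name(i)}")
```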
My issue was a llama-index failure after installing PyTorch with conda. It was solved by installing everything with pip only.
Thanks for sharing your experience @linuaries, that could be tried - just a note that the recent LlamaIndex libraries have been split into legacy and core, so the code won't run as is. I am looking into updating it and will update the repository with a working version. @chanwitkepha, the error does look like a CUDA library issue; I'm wondering if there's a difference between CUDA v12.1, which you have, and v12.2, which I have? Is it possible to try updating the NVIDIA CUDA drivers?
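To illustrate the legacy/core split: from llama-index 0.10 onwards, core functionality moved into the llama_index.core namespace and integrations (such as the llama.cpp LLM) into separate packages, so older import lines need updating along these lines (a general sketch of the pattern, not the exact changes made to this repository):

```python
# Old layout (llama-index < 0.10): everything imported from the top-level package.
# from llama_index import VectorStoreIndex, SimpleDirectoryReader
# from llama_index.llms import LlamaCPP

# New layout (llama-index >= 0.10): core classes live in llama_index.core,
# and the llama.cpp LLM comes from the llama-index-llms-llama-cpp package.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.llama_cpp import LlamaCPP
```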
Thank you, I will upgrade this old machine from Ubuntu 18.04.6 to 22.04 + CUDA 12.2 (or 12.3) and try again. |
Thanks @chanwitkepha, please try 12.2 so it matches the libraries |
Hi @chanwitkepha (and @linuaries) - I'm close to finishing the update to the latest llama-index version (core), and that includes an update to llama-cpp-python as well. LlamaIndex advises creating a brand new environment; I did this and installed the NVIDIA CUDA Toolkit (Base Installer instructions here: https://developer.nvidia.com/cuda-downloads). This is now CUDA v12.3. I will update the environment.yml file as well. It is working for me and hopefully will work for you, too. Please wait for the update over the next hour before cloning.
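Once the updated repository is pulled and the new environment is built, a quick way to confirm the notebook kernel is actually using the updated packages is to print their versions (my suggestion, not part of the original reply):

```python
# Print the versions of the key packages the updated notebook depends on.
import llama_index.core
import llama_cpp

print("llama-index-core:", llama_index.core.__version__)
print("llama-cpp-python:", llama_cpp.__version__)
```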
Hi, I did a git pull and tried again. It seems this environment.yml can work with CUDA 12.1 on my old Ubuntu 18.04.6 LTS. Now I can get past step 3 but still have an error at step 8. It seems my first GPU has too little memory, but my PC has 2 GPUs (GTX 1080). How can I use both GPUs for processing this Mistral 7B model?
Hi @chanwitkepha, it may be trying to load too many layers onto the GPU. Try setting the n_gpu_layers value lower, such as 10, and then, if that leaves you a lot of free VRAM (check with nvidia-smi), you can increase it. Note: the last line visible in my screenshot above says it found 1 CUDA device; I would think you should see that line saying 2 CUDA devices, so I'm not 100% sure it is using your cards correctly.
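As a rough sketch of what that could look like in the notebook - the model path and split ratios below are assumptions, and tensor_split/main_gpu are llama-cpp-python options for spreading a model across multiple GPUs, not settings taken from the original reply:

```python
from llama_index.llms.llama_cpp import LlamaCPP

llm = LlamaCPP(
    model_path="./Models/mistral-7b-instruct.Q6_K.gguf",  # assumed path; point at your local GGUF file
    temperature=0.1,
    max_new_tokens=512,
    context_window=3900,
    model_kwargs={
        "n_gpu_layers": 10,          # start low; raise it while nvidia-smi still shows free VRAM
        "tensor_split": [0.5, 0.5],  # split the weights evenly across the two GTX 1080s
        "main_gpu": 0,               # GPU used for scratch buffers and small tensors
    },
    verbose=True,
)
```

Watching nvidia-smi on both devices while the model loads should show whether the split is taking effect.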
According to the README file
https://github.com/marklysze/LlamaIndex-RAG-Linux-CUDA/blob/main/README.md
I installed from the environment files and activated the conda environment. I then installed Jupyter Notebook and ran it. Next I opened LlamaIndex_Mistral7B-RAG.ipynb in Jupyter Notebook and selected the kernel from the preferred session (LlamaIndex_Mistral7B-RAG.ipynb). When I run the code in the notebook at step 3, it shows an error output. Please advise how to solve this error, thank you.