Unable to install llama-cpp-python with poetry #9656
Comments
But the conclusion in #9323 was that environment variables are passed through, and so this already worked? The request for --config-settings duplicates #8909 and python-poetry/poetry-core#715.
First of all, sorry for the duplicate; I missed that. Based on my tests, and on what you can read here, it doesn't work. These are my experiments:

With pip:

conda activate base && conda env remove --name llmdoc -y && conda create --name llmdoc -y python=3.10 && conda activate llmdoc
CMAKE_ARGS="-DLLAVA_BUILD=OFF -DGGML_CUDA=ON" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
python -c "from llama_cpp.llama_cpp import _load_shared_library; print(bool(_load_shared_library('llama').llama_supports_gpu_offload()))"

Response: True

With poetry:

conda activate base && conda env remove --name llmdoc -y && conda create --name llmdoc -y python=3.10 && conda activate llmdoc && pip install poetry
CMAKE_ARGS="-DLLAVA_BUILD=OFF -DGGML_CUDA=ON" poetry add llama-cpp-python --no-cache
python -c "from llama_cpp.llama_cpp import _load_shared_library; print(bool(_load_shared_library('llama').llama_supports_gpu_offload()))"

Response: False

The last line is used to check whether GPU support is enabled. I hope I haven't missed any details...
"perhaps have an already-built wheel in your cache" -> I thought that if I reset the environment and set the "--no-cache" flag, this wouldn't happen. Can you tell me more ? I will test it with docker to be sure and post my results here in the next few days |
This is now just a straight duplicate of #9323.

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Issue Kind
Brand new capability
Description
Based on the llama-cpp-python installation documentation, if we want to install the lib with CUDA support (for example), we have two options:

Pass a CMAKE_ARGS environment variable:

CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python

Or use the --config-settings argument of pip, like this:

pip install llama-cpp-python --config-settings cmake.args="-DGGML_CUDA=on"
As far as I know, it's not possible to do something equivalent with Poetry, because it does not expose pip's --config-settings option.
I saw that there have already been conversations on similar subjects here, but they date back a while, and maybe things have changed in the meantime?
I understand that pip and poetry are two different projects with different objectives, but it would be really useful (from my point of view) to be able to handle this kind of installation.
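To make the request concrete, the capability might look something like the following (purely hypothetical syntax, for illustration only; no such flag exists in current Poetry releases):

# Hypothetical: a Poetry counterpart to pip's --config-settings.
poetry add llama-cpp-python --config-settings cmake.args="-DGGML_CUDA=on"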
Impact
As I see it, llama-cpp-python will become an important lib in the Python ecosystem (it probably already is, to some extent).
In addition, llama-cpp-python is not the only package to use the --config-settings functionality for installation. That's why I think it would be worthwhile to allow a smooth installation with Poetry.
Workarounds
There is a workaround, as explained here, but it's not very practical because it breaks the Poetry workflow.
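For reference, a commonly cited pattern is to build the wheel outside Poetry (where pip honors CMAKE_ARGS) and then add the resulting file to the project. This is only a sketch; the ./wheels directory and the placeholder filename are illustrative, and the linked workaround may differ in detail:

# Build a CUDA-enabled wheel with pip, which honors CMAKE_ARGS:
CMAKE_ARGS="-DGGML_CUDA=on" pip wheel llama-cpp-python --no-cache-dir -w ./wheels
# Point Poetry at the built wheel (the filename varies by version and platform):
poetry add ./wheels/llama_cpp_python-<version>-<tag>.whl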