Linking of cudart #579
Comments
Heya, admittedly I don't know why this was changed; it always worked for me. I wonder if it is for the cases where users install CUDA but don't add it to PATH? Is there maybe an alternative we can add that joins the two things? Something like
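(The inline suggestion that originally followed is not preserved in this excerpt. Purely as a hedged sketch, a lookup that joins the two sources, an explicit CUDA_PATH/CUDA_HOME and whatever is already reachable through PATH, might look like the following; the helper name `find_cudart_dir` is hypothetical, not existing TIGRE code.)

```python
import glob
import os

def find_cudart_dir():
    """Hypothetical helper: prefer an explicit CUDA install, else scan PATH."""
    # 1) Explicit variables set by the NVIDIA installer.
    for var in ("CUDA_PATH", "CUDA_HOME"):
        root = os.environ.get(var)
        if root and os.path.isdir(os.path.join(root, "bin")):
            return os.path.join(root, "bin")
    # 2) Fall back to PATH, which also covers a conda env's Library\bin on Windows.
    for entry in os.environ.get("PATH", "").split(os.pathsep):
        if glob.glob(os.path.join(entry, "cudart64_*.dll")):
            return entry
    return None
```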
Hi, I'm running into this issue with distributable builds of WebCT: the embedded cudatoolkit isn't being correctly picked up by TIGRE, resulting in a crash on startup on systems that don't already have the CUDA libraries installed.

2024-11-21 11:16:33,520 [INFO] root: Welcome to WebCT 0.1.3
Traceback (most recent call last):
File "app.py", line 24, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\__init__.py", line 223, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\blueprints\app\__init__.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\blueprints\app\routes.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "tigre\__init__.py", line 18, in <module>
File "os.py", line 680, in __getitem__
KeyError: 'CUDA_PATH'

This is an issue since although
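(For reference, the crash corresponds to indexing os.environ directly; below is a minimal sketch of a more defensive lookup. This is only an illustration of the failure mode, not the actual tigre/__init__.py code.)

```python
import os

# The traceback above comes from indexing the environment mapping directly:
#     cuda_path = os.environ["CUDA_PATH"]   # raises KeyError when unset
#
# A defensive lookup degrades gracefully instead (illustrative only):
cuda_path = os.environ.get("CUDA_PATH")
if cuda_path is not None and os.name == "nt":
    # os.add_dll_directory is available on Windows for Python 3.8+.
    os.add_dll_directory(os.path.join(cuda_path, "bin"))
```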
@gfardell @WYVERN2742 I removed the code as suggested; have a test to see whether this fixes the issue. I'll reopen it if it doesn't.
Thanks for the fast response! I'll test when I get more time later 👍
Finally got around to testing, and I can confirm this works: I'm able to run TIGRE from a conda environment with cudatoolkit, without requiring the host to have CUDA installed! 🎉
Reopening this. Since I fixed it a month and a half ago, I've had 4 different people report that DLLs are not found, so this solution fixes it for people who have CUDA via conda, but the majority of my users don't. I won't revert the changes yet, but I do need to find a way to cater to both cases. I have CUDA installed on the host (because I need it for GPU code shenanigans), so it's a bit hard to test, but maybe @gfardell you can give me a hand? Is there any OS/conda parameter that we can pull in the code that you mentioned above, so I can add it in the
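(One candidate, offered only as an untested sketch: conda exposes the active environment root through the CONDA_PREFIX environment variable, which could be checked alongside CUDA_PATH/CUDA_HOME. The helper name `candidate_cuda_dirs` is hypothetical, not existing TIGRE code.)

```python
import os

def candidate_cuda_dirs():
    """Hypothetical helper: gather plausible cudart locations from both worlds."""
    dirs = []
    # Conda sets CONDA_PREFIX to the active environment's root directory.
    conda_prefix = os.environ.get("CONDA_PREFIX")
    if conda_prefix:
        dirs.append(os.path.join(conda_prefix, "Library", "bin"))  # Windows layout
        dirs.append(os.path.join(conda_prefix, "lib"))             # Linux layout
    # System installs advertise themselves via CUDA_PATH / CUDA_HOME.
    for var in ("CUDA_PATH", "CUDA_HOME"):
        root = os.environ.get(var)
        if root:
            dirs.append(os.path.join(root, "bin"))
    return [d for d in dirs if os.path.isdir(d)]
```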
As you know, for use with CIL we build tigre with conda and host it under the ccpi channel.
The environment we build in contains the right version of the cudart redistributable shared library, and this should be automatically found and linked when running within the virtual environment.
However, we have an issue where users need to have the CUDA SDK installed in order to run tigre.
Commenting out these lines:
TIGRE/Python/tigre/__init__.py, lines 9 to 19 at 729f146
The correct version of cudart is then found automatically. With conda it's somewhere like
C:\Users\[USER]\miniforge3\envs\[ENV]\Library\bin
With the lines, it forces it to use the system installation, which isn't an issue if you're building and running on the same system, but obviously our aim is easy redistribution, and CUDA_HOME is therefore often None. Even if it's not None, it may not point to the right CUDA version.

I've read through the issues linked in the code, and still struggle to see why it would be necessary if PATH was set correctly to something like:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin

which contains cudart.
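(As a hedged illustration of that point: on Windows, whether cudart is already resolvable through PATH can be probed without consulting CUDA_PATH at all. The DLL names below are examples; the version suffix differs per CUDA release.)

```python
import ctypes.util

# On Windows, ctypes.util.find_library scans the directories on PATH,
# so a hit here means the loader can already resolve cudart on its own.
for name in ("cudart64_110", "cudart64_12", "cudart"):
    found = ctypes.util.find_library(name)
    if found:
        print(f"cudart resolvable via PATH: {found}")
        break
else:
    print("cudart not found on PATH; an explicit CUDA_PATH hint may be needed")
```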
Specifications