Splayed layout support #46
Comments
Another option is to support …
Thanks Rob! 🙏 Converted the ideas to an enumerated list and added that idea to the list.
cc @bdice
Would it make sense to standardize a tool that uses the path to nvcc to parse information out of nvcc.profile? Something that could be used by anyone, not just cuda-python?
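For illustration, here is a hedged sketch of what such a tool might look like. It assumes nvcc.profile sits next to the nvcc binary and uses Makefile-style `NAME = value` / `NAME += value` assignments (the exact contents vary across toolkit versions); it only collects the raw key/value pairs and leaves expansion of variables like `$(TOP)` or `$(_HERE_)` to the caller.

```python
# Hedged sketch, not an existing tool: given the path to nvcc, read the nvcc.profile
# assumed to sit next to it and collect its raw assignments.
import os
import re
import shutil


def read_nvcc_profile(nvcc_path=None):
    nvcc_path = nvcc_path or shutil.which("nvcc")
    if nvcc_path is None:
        return {}
    profile = os.path.join(os.path.dirname(nvcc_path), "nvcc.profile")
    values = {}
    if not os.path.isfile(profile):
        return values
    with open(profile) as f:
        for line in f:
            # Lines look roughly like: NAME = value, NAME += value, NAME =+ value
            m = re.match(r"\s*([A-Za-z_]+)\s*(\+=|=\+|=)\s*(.*)", line)
            if m:
                name, _op, value = m.groups()
                values.setdefault(name, []).append(value.strip())
    return values


if __name__ == "__main__":
    print(read_nvcc_profile())
```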
Sorry for the lack of response. Is this still an issue? It seems we have been able to build cuda-python on conda-forge?
Yep, it is still an issue. We work around it in conda-forge. Through improvements on the packaging side, we have minimized the workarounds needed (for example: conda-forge/cuda-python-feedstock#75), but we are unable to eliminate them without this fix.
@jakirkham I am sorry, but I don't get it. In any case we need a way to specify where the CUDA headers are, and setting …
I think an ideal solution (from my perspective) would be moving to a CMake-based build system here, as CMake already understands how to work with splayed layouts correctly. For example, moving to scikit-build-core would very likely solve this issue. This may be desirable anyway to simplify the build process and make it less bespoke.
I still don't understand what specific changes are needed to support splayed layouts. @jakirkham, this issue might have been outdated since the earlier (CTK 12.0) efforts? Right now, all headers are expected by CUDA Python to be located in one place, for the purposes of both parsing and compiling. So, it doesn't really matter if headers are in …

Also, there are no C++ components in this project (at least not yet; there will be once we start working on memory management, but that would not happen before this Fall at the earliest), and we need to balance against the currently limited engineering resources we have. I don't see the immediate benefit of moving away from setuptools to CMake (plus something that understands CMake, like scikit-build-core) when things are already working.

That said, if I can enlist a RAPIDS build expert to help with the build system migration, I would not object to the change to CMake 😉
Nope. This issue was opened because of issues encountered adding CUDA 12 support. Yes, CUDA-Python doesn't support splayed layouts. We agree 🙂

The issue is the assumption that the CUDA compiler and its tightly coupled headers and libraries live together. In Conda this means …

As the wheel ecosystem continues to build out CUDA library wheels, it will likely wind up in the same situation with the same problems.

Hopefully that explanation makes more sense. Please feel free to ask more questions.
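To make the splayed situation more concrete, here is an illustrative snippet. The `PREFIX`/`CONDA_PREFIX` variables and the `targets/<arch>/include` layout are assumptions about a typical conda-build CUDA 12 setup, not something stated in this thread.

```python
# Illustrative only: nvcc often comes from the build environment (found via PATH
# when cross-compiling), while target-architecture headers/libraries sit under a
# splayed targets/ tree in the host prefix, so one CUDA_HOME cannot describe both.
import glob
import os
import shutil

nvcc = shutil.which("nvcc")
prefix = os.environ.get("PREFIX") or os.environ.get("CONDA_PREFIX", "")
target_includes = glob.glob(os.path.join(prefix, "targets", "*", "include"))

print("nvcc:            ", nvcc)
print("header locations:", target_includes)
```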
Could you tell me where this statement comes from? Sorry, but I just don't think we are getting the points across, and we don't understand each other through this medium. Please feel free to arrange an offline meeting.
I expect the problem is this: cuda-python/examples/common/common.py, line 22 in 81e6f86.
Right, like this line: line 81 in 2f9d31c.
Though I am now noticing that in CUDA-Python 12.4.0 there may have been related changes for splayed layouts: line 30 in 2be0aac and line 77 in 2be0aac.
So maybe we should give this a try to see if it helps.
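For context, the kind of single-root lookup being referenced above tends to look roughly like the following hypothetical sketch (this is not the actual cuda-python code); every derived path breaks once the toolkit is splayed.

```python
# Hypothetical sketch only: a single root is assumed to contain bin/, include/ and
# lib64/, which is exactly the assumption a splayed layout violates.
import os

CUDA_HOME = os.environ.get("CUDA_HOME", os.environ.get("CUDA_PATH", "/usr/local/cuda"))

nvcc = os.path.join(CUDA_HOME, "bin", "nvcc")      # wrong if nvcc lives in a build env
include_dir = os.path.join(CUDA_HOME, "include")   # wrong if headers are under targets/<arch>/include
library_dir = os.path.join(CUDA_HOME, "lib64")     # wrong if libraries are under targets/<arch>/lib
```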
Ok, made some changes to the conda-forge build to take advantage of the changes in how … If someone could take a look, that would be appreciated 🙂
Currently `cuda-python` relies on all binaries (like `nvcc`), all headers, and all libraries living in a single directory (specified by `$CUDA_HOME` or similar).

However, there are use cases (like cross-compilation, as with conda-build) where the build tools may live in one location (and perform builds on that architecture) whereas the headers and libraries may live in a different location (and target a different architecture). In this case not everything lives in `$CUDA_HOME`.

It would be helpful to have a way of specifying where these different components come from. Here are some options (a rough sketch of how they might be combined follows after the list):

1. `$NVCC` for the `nvcc` location
2. `$CUDA_BIN` (if specified) to get the build tool directory
3. `$CUDA_HOME`
Maybe there are other reasonable options worth considering?
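As a rough illustration of how the options above might be combined, here is a hedged sketch. The variable names (`$NVCC`, `$CUDA_BIN`, `$CUDA_HOME`) come from the list; the fallback order shown is illustrative, not something decided in this issue.

```python
# Rough sketch of resolving the nvcc location from the proposed environment
# variables; the precedence is an assumption for illustration.
import os
import shutil


def find_nvcc():
    # 1. An explicit $NVCC wins.
    nvcc = os.environ.get("NVCC")
    if nvcc:
        return nvcc
    # 2. $CUDA_BIN (if specified) gives the build-tool directory.
    cuda_bin = os.environ.get("CUDA_BIN")
    if cuda_bin:
        return os.path.join(cuda_bin, "nvcc")
    # 3. Otherwise fall back to $CUDA_HOME/bin, then to whatever is on PATH.
    cuda_home = os.environ.get("CUDA_HOME")
    if cuda_home:
        return os.path.join(cuda_home, "bin", "nvcc")
    return shutil.which("nvcc")
```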