Disallow cuda-python 12.6.1 and 11.8.4 #1720
Conversation
Looks like cuda-python=12.6.1 is still making it into the test environment for rmm conda builds on CUDA 12.
TEST START: /tmp/conda-bld-output/linux-64/rmm-24.12.00a25-cuda12_py311_241106_gec071874_25.conda
...
The following NEW packages will be INSTALLED:
...
cuda-python: 12.6.1-py311h817de4b_0 conda-forge
...
AttributeError: module 'cuda.ccudart' has no attribute '__pyx_capi__'
I guess because cuda-python has a run export like this:
run_exports:
  - {{ pin_subpackage('cuda-python', min_pin='x', max_pin='x') }}
Think we probably need to ignore run exports from cuda-python and make the run: dependency explicit? Or maybe we can just add the != pins explicitly in run: and leave the run export as-is? I'm not sure if those things can be mixed like that.
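A minimal sketch of that first idea, assuming conda-build's ignore_run_exports_from key; the section contents are illustrative and not rmm's actual recipe. The run export coming from cuda-python is dropped, and the run: dependency is declared explicitly instead:

build:
  ignore_run_exports_from:
    - cuda-python            # drop the pin_subpackage run export shown above

requirements:
  host:
    - cuda-python            # still needed at build time
  run:
    - cuda-python            # declared explicitly, so any != pins can be added here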
My guess is you just need a wildcard, like ...
With the updated pins, conda build jobs are now getting
I strongly suspect that there's some other import error just not making its way to the logs. Investigating. Wheel tests also are now failing because of this new deprecation warning treated as an error in CI, coming from
I'll push a fix to ignore that (for CI purposes) for now, and open an issue about updating if we don't already have one.
Looking more closely... looks like all Python 3.10 / 3.11
I can reproduce the conda build failure locally (on an x86_64 machine):
docker run \
--rm \
-v $(pwd):/opt/work \
-w /opt/work \
--env CMAKE_GENERATOR=Ninja \
--env RAPIDS_PACKAGE_VERSION=24.12.00a24 \
--env RAPIDS_BUILD_TYPE=nightly \
--env RAPIDS_REPOSITORY=rapidsai/rmm \
--env RAPIDS_REF_NAME=branch-24.12 \
--env RAPIDS_SHA=dbae8c0 \
--env RAPIDS_NIGHTLY_DATE=2024-11-05 \
-it rapidsai/ci-conda:cuda11.8.0-rockylinux8-py3.10 \
bash
source rapids-date-string
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
conda mambabuild \
--channel "${CPP_CHANNEL}" \
conda/recipes/rmm

Tried instead running that build with
conda mambabuild \
--channel "${CPP_CHANNEL}" \
conda/recipes/rmm
conda install \
--channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
rmm="${RAPIDS_PACKAGE_VERSION}" That's definitely picking up the package just built locally...
... but I can't reproduce the import error. python -c "import rmm; print(rmm.__git_commit__)"
# 84765d347813b0296ed66daf81cf33ad1639d46a So I'm thinking it has to be something specific to the test environment conda-build is creating. |
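For context on where that environment comes from: conda-build solves a fresh environment for its test phase from the freshly built package, its run: dependencies, and the recipe's test: requirements, which is where an unconstrained cuda-python can still be pulled in even if the local build and install above look fine. A generic sketch of such a test: section (illustrative only, not rmm's actual recipe):

test:
  requires:
    - pytest                 # extra test-only dependencies
  imports:
    - rmm                    # conda-build runs these imports in the fresh test env
  commands:
    - python -c "import rmm; print(rmm.__git_commit__)"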
blegh there are even more deprecation warnings causing the
Notice that's about ...
/merge
Thanks all! 🙏
Follow-up to #1720. Contributes to rapidsai/build-planning#116.

That PR used `!=` requirements to skip a particular version of `cuda-python` that `rmm` was incompatible with. A newer version of `cuda-python` (12.6.2 for CUDA 12, 11.8.5 for CUDA 11) was just released, and it also causes some build issues for RAPIDS libraries: rapidsai/cuvs#445 (comment)

To unblock CI across RAPIDS, this proposes **temporarily** switching to ceilings on `rmm`'s `cuda-python` dependency.

Authors:
- James Lamb (https://github.com/jameslamb)

Approvers:
- Vyas Ramasubramani (https://github.com/vyasr)

URL: #1723
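For illustration, a ceiling-style constraint replaces the != exclusions with an upper bound below the first problematic release. The bounds and layout below are assumptions for the CUDA 12 recipe (the CUDA 11 case would be analogous, e.g. an upper bound below 11.8.4), not rmm's actual pins:

requirements:
  host:
    - cuda-python >=12.0,<12.6.1    # illustrative ceiling, not the exact bound used
  run:
    - cuda-python >=12.0,<12.6.1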
Due to a bug in cuda-python we must disallow cuda-python 12.6.1 and 11.8.4. See rapidsai/build-planning#116 for more information.
This PR disallows those versions, and makes other changes following from that:
- constrains cuda-python in both host: and run: dependencies for the rmm conda package (see the sketch below)
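For illustration, != exclusions in both sections might look like the following (CUDA 12 shown; the CUDA 11 recipe would exclude 11.8.4). The surrounding bounds are assumptions, not the exact lines from rmm's meta.yaml:

requirements:
  host:
    - cuda-python >=12.0,!=12.6.1,<13.0a0   # excludes the known-bad release
  run:
    - cuda-python >=12.0,!=12.6.1,<13.0a0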