Add GitHub Actions Workflows #1062
Conversation
Codecov Report

Base: 0.00% // Head: 0.00% // No change to project coverage 👍

Additional details and impacted files:

@@            Coverage Diff             @@
##           branch-23.02    #1062   +/-   ##
=============================================
  Coverage        0.00%      0.00%
=============================================
  Files              26         26
  Lines            3439       3433      -6
=============================================
+ Misses           3439       3433      -6
☔ View full report at Codecov.
Thanks @pentschev for helping with the libnuma questions and @ajschmidt8 for updating the base images. Tests appear to be passing now. I’m going to open this for review but it should not be merged until after the 22.12 release, when we will migrate most repos to GitHub Actions. Can someone with privileges mark this DO NOT MERGE?
- `strategy.fail-fast` is only necessary for matrix builds
- `runs-on` can be changed to use GH hosted runners since the respective builds are quick
  - GH hosted runners are cheaper and are usually available a bit quicker than our self-hosted nodes
- `env` vars aren't necessary

(See the sketch below for what such a job could look like.)
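As an illustration only, here is a minimal sketch of a small job after those changes. The job name, runner label, and script path are assumptions for the example, not the actual contents of this PR's workflows:

```yaml
jobs:
  checks:
    # GH-hosted runner instead of a self-hosted node; quick builds don't need our own hardware
    runs-on: ubuntu-latest
    # no strategy.fail-fast: this is not a matrix build
    steps:
      - uses: actions/checkout@v3
      # no env block: this job never uploads/downloads artifacts from S3,
      # so the gha-tools environment variables aren't needed
      - name: Run checks
        run: ci/check_style.sh
```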
The environment variables standardized by these steps are only used in the `gha-tools` scripts that upload/download artifacts from S3. Since these jobs don't upload/download artifacts from S3, these environment variables are not necessary.
`shellcheck` prefers `$()` over backticks for command substitution.
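For example (a hypothetical step, not one from this PR), `shellcheck` flags backtick command substitution (SC2006) but accepts `$()`:

```yaml
- name: Show latest tag
  run: |
    # shellcheck-friendly: $(...) instead of legacy backticks
    LATEST_TAG="$(git describe --tags --abbrev=0)"
    echo "Latest tag: ${LATEST_TAG}"
```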
Using a more generic matching pattern will help prevent potential issues in the future if the tags somehow get out of sync.
And remove `external_contributors`, since it's not a valid entry.
I don't think the latest […] They also seem to be appearing in #1069.
Thanks @bdice and @ajschmidt8 for the hard work you've put into this. I've added a few comments.
# Usage:
# conda build -c conda-forge .
{% set data = load_file_data("pyproject.toml") %}

{% set version = environ.get('GIT_DESCRIBE_TAG', '0.0.0.dev').lstrip('v') + environ.get('VERSION_SUFFIX', '') %}
{% set number = environ.get('GIT_DESCRIBE_NUMBER', 0) %}
-{% set py_version = environ.get('CONDA_PY', 36) %}
+{% set py_version = environ['CONDA_PY'] %}
I understand that we're probably sure now that `CONDA_PY` is set, but what if there's a regression? Wouldn't it be better to keep a safe default like the other variables have?
This is set by conda, not by RAPIDS. This should be safe, and if it's not, we want to have a hard failure. Also, we want to avoid setting defaults that need to be continually updated in the recipes as we change Python version support.
echo "FAILED: Local benchmark with explicit comms" | ||
fi | ||
|
||
exit ${SUITEERROR} |
This will only capture the latest error, which may or may not be fine depending on how this is handled outside. Since we can't return multiple error codes from a single script, I think it might make sense to either:

- Have one script per executed test (in this particular case we would need 3: the first `pytest` run and the two benchmarks); or
- Log the error code for each failing test and return a generic error code that the caller of this script will be able to parse (e.g., `0` for "success" and `1` for "error(s) occurred").

A sketch of the second option is shown below.
We have plans to revisit and simplify some of this logic in follow-up PRs, but ultimately I think we only care to distinguish zero (success) from non-zero (failure) exit codes for the purposes of GitHub Actions calling this script.
  - rapidsai
  - rapidsai-nightly
Do we really want both `rapidsai` and `rapidsai-nightly` here?
Yes. The tests require cudf (among other RAPIDS packages), and the pinning of `cudf=23.02` requires `rapidsai-nightly`. For any dependencies on other RAPIDS packages, we want to test against nightlies so we are aware of breakage before release time.
Lines 87 to 88 in 297accb:

  - cudf=23.02
  - dask-cudf=23.02
rerun tests
.github/workflows/build.yaml
      - name: Build wheel
        run: ci/build_python_pypi.sh
      - name: Publish distribution 📦 to PyPI
        if: inputs.build_type == 'branch'
-        if: inputs.build_type == 'branch'
+        if: inputs.build_type == 'nightly'
For parity with the Jenkins setup, we should only publish this once a day (at night).
We'll also need to add this repo to the RAPIDS nightly pipeline below.
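For reference, a hedged sketch of what the gated publish step could look like; the publish action and secret name here are assumptions, not necessarily what this PR ends up using:

```yaml
      - name: Publish distribution 📦 to PyPI
        # publish only from the nightly pipeline, mirroring the once-a-day Jenkins job
        if: inputs.build_type == 'nightly'
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}
```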
/merge
Description

This PR adds GitHub Actions workflows to `dask-cuda`.

Task list

Coverage required for this PR:

Future work required: