Reduce error handling verbosity in CI tests scripts #1113
Conversation
Unless I'm missing some detail, I must say I don't like this approach. The problem is, and correct me if I'm wrong, that the script will terminate upon the first error, and thus we would miss running all the other tests. IMO that is not something we want: if there's a problem, we would like to see everything that is failing, not just the first failure.
@pentschev from what I understand, the `trap` only records the error code and does not terminate the script. So in essence, the behavior of the test scripts should not change with these changes; it's just a less verbose method of doing the same thing. @ajschmidt8 correct me if I am wrong.
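To make the "less verbose" part concrete, here is a hypothetical sketch of the hand-rolled style this kind of change typically replaces (the test commands and variable names are made up, not taken from this PR; the `trap`-based replacement is sketched under the PR description at the bottom):

```sh
#!/bin/bash
set +e

EXITCODE=0
pytest tests/test_a.py
EXITCODE=$((EXITCODE | $?))   # remember this failure, but keep going
pytest tests/test_b.py
EXITCODE=$((EXITCODE | $?))   # same boilerplate after every command

exit ${EXITCODE}
```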
@AjayThorve I think you meant `trap "echo CTRL+C pressed, exiting; break" SIGINT`.

With `set +e`:

```sh
set +e
while true
do
    echo "Press CTRL+C to stop"
    sleep 1
done
echo Trapped
```

With `trap`:

```sh
trap "echo CTRL+C pressed, going to next loop; break" SIGINT
while true
do
    echo "Press CTRL+C to stop first trap"
    sleep 1
done

trap "echo CTRL+C pressed again, exiting; break" SIGINT
while true
do
    echo "Press CTRL+C to stop second trap"
    sleep 1
done
echo Trapped
```

The code above would output the following:
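The output itself did not survive here; reconstructing it from the `echo` statements above (not quoted from the original comment), a session where the user presses CTRL+C once per loop would look roughly like:

```
Press CTRL+C to stop
Press CTRL+C to stop
^C                                    <- script exits immediately; "Trapped" never prints

Press CTRL+C to stop first trap
^CCTRL+C pressed, going to next loop
Press CTRL+C to stop second trap
^CCTRL+C pressed again, exiting
Trapped
```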
I'm not sure this is absolutely necessary, but I'm throwing that out as an idea to see what you think.
@pentschev, I'd like to keep this PR as it is for consistency with other projects. To your point about easily discerning test failures: our shared workflow for Python tests makes use of the test-summary/action (see here for where we implement it). This means that test failures are surfaced directly in the workflow's summary page. Here's a quick example of what that looks like for a failing pytest for cuml.
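For reference, and as an assumption about the setup rather than a quote from the workflow, summary actions like that one typically consume JUnit XML emitted by the test run, e.g.:

```sh
# Write machine-readable results that a summary action can pick up.
pytest --junitxml=test-results/junit.xml tests/
```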
@pentschev, I'll wait for your reply before merging. Or feel free to merge this yourself if you're okay with the current changes.
In our CI, not everything is a `pytest`, though.
@pentschev, what's your ideal way to report? GitHub Summary Cards are markdown files that you can redirect output to, so you could effectively print anything you want to them. In your shell script, you would just have to do something like:

```sh
echo "this is my output" >> "$GITHUB_STEP_SUMMARY"
```
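For non-pytest results such as benchmark numbers, the same redirection works for anything that prints markdown; a minimal sketch (the section title and values here are placeholders for illustration):

```sh
# Append a small markdown table to the job's summary card.
{
  echo "## Benchmark results"
  echo ""
  echo "| benchmark | result |"
  echo "| --- | --- |"
  echo "| send-recv | 42 GB/s |"   # placeholder value
} >> "$GITHUB_STEP_SUMMARY"
```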
This looks super cool. I think we could print a summary of our results there, but we'd probably have to rethink how we run those benchmarks, and whether it makes sense to run them here. In any case, I think this is a little beyond this PR, so let's merge it as is for now and I'll try to rethink this a bit later. Thanks so much for the suggestion, I was not aware of that!
/merge
This PR adds a less verbose `trap` method for error handling, to help ensure that we capture all potential error codes in our test scripts. It works as follows:
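A minimal sketch of the pattern, assuming RAPIDS-style test scripts (the `pytest` invocations are placeholders, not this repo's actual commands):

```sh
#!/bin/bash
# Any command that fails flips EXITCODE, but the script keeps running,
# so every test suite still gets executed and the script's final exit
# status still reflects whether anything failed.
EXITCODE=0
trap "EXITCODE=1" ERR
set +e

pytest tests/
pytest benchmarks/

exit ${EXITCODE}
```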
cc @ajschmidt8