[Merged by Bors] - Prevent PR benchmarks to run on label changes #2119
Conversation
Test262 conformance changes (VM implementation)
Codecov Report
@@           Coverage Diff           @@
##             main    #2119   +/-   ##
=======================================
  Coverage   43.57%   43.57%
=======================================
  Files         217      217
  Lines       19695    19695
=======================================
  Hits         8583     8583
  Misses      11112    11112

Continue to review the full report at Codecov.
Unfortunate. Still pretty usable though
bors r+
This changes the trigger types for PR benchmarks back to the defaults (`opened`, `synchronize`, `reopened`). As part of #2114 I added the `labeled` trigger type, which causes the benchmarks to run whenever the `run-benchmark` label is present and another label is added.

For example, in #2116 I added the `run-benchmark` label while creating the PR. The benchmarks were then triggered six times: once for the PR creation (`opened`) and once for each of the five labels that I initially added to the PR.

The only drawback is that the benchmarks are not triggered when we just add the label, but unfortunately I don't have a clever idea for how to achieve that right now. We will have to add the label and then trigger the run via a `synchronize` (push).
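For reference, a minimal sketch of what this trigger setup could look like in a GitHub Actions workflow. The workflow name, job name, and steps below are illustrative assumptions, not the repository's actual benchmark workflow; only the `types` list and the label check reflect the behaviour described above:

```yaml
# Hypothetical benchmark workflow sketch (actual file and steps may differ).
name: Benchmarks

on:
  pull_request:
    # Back to the default trigger types; `labeled` is intentionally omitted,
    # so adding or removing labels no longer re-runs the benchmarks.
    types: [opened, synchronize, reopened]

jobs:
  benchmark:
    # Only run when the PR carries the `run-benchmark` label.
    if: contains(github.event.pull_request.labels.*.name, 'run-benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Placeholder benchmark step.
      - run: cargo bench
```

With this configuration, adding the label alone does not start a run; the label must already be present when an `opened`, `synchronize`, or `reopened` event fires, which matches the drawback noted above.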
Pull request successfully merged into main. Build succeeded.