chore(NA): rebalance x-pack cigroups #84099
Conversation
Pinging @elastic/kibana-operations (Team:Operations)
LGTM
There's a chance that this is going to cause stability problems / flaky failures for the ES snapshot and code coverage jobs. They aren't using the "tasks" framework that the tracked-branch CI jobs are using, and will have fewer resources available for running the x-pack ciGroups in parallel. I suppose we could make the machines for those jobs bigger if we need to; they don't run very often compared to normal CI and PRs.
@brianseeders in that case, is your suggestion to go ahead with it and resize the machines for both the snapshot and coverage jobs later if we need to? Or should we resize those machines right now?
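As a toy illustration of the resource concern discussed above (this is not Kibana's pipeline code; the function, group names, and executor model are made up), distributing eleven ciGroups over a fixed pool of parallel executors means one executor has to run two groups back to back:

```typescript
// Toy model only: a round-robin assignment of ciGroups to parallel executors.
// Names and counts are illustrative; this is not Kibana's pipeline code.
function assignGroups(groups: string[], executors: number): string[][] {
  const slots: string[][] = Array.from({ length: executors }, () => []);
  groups.forEach((group, i) => slots[i % executors].push(group));
  return slots;
}

const xpackGroups = Array.from({ length: 11 }, (_, i) => `xpack-ciGroup${i + 1}`);

// With 10 parallel executors, one slot gets two groups, so a job that runs
// everything on a single machine takes roughly one extra group's worth of time.
console.log(assignGroups(xpackGroups, 10));
```

That extra serialized group is the reason the smaller machines used by the snapshot and coverage jobs might eventually need resizing.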
SIEM/Endpoint LGTM
LGTM
@mistic That's probably okay. I would kick off some jobs after merging this one and keep an eye out.
@elasticmachine merge upstream
💚 Build Succeeded
To update your PR or re-run it, just comment with:
7.x: a357416
* chore(NA): rebalance cigroup1 into cigroup5
* chore(NA): get list api integration into cigropup1 again
* chore(NA): get apm integration basic into cigropup1 again
* chore(NA): move back apm_api_integration trial tests into ciGroup1
* chore(NA): move exception operators data types into ciGroup1 again
* chore(NA): move detection engine api security and spaces back into ciGroup1
* chore(NA): add a new xpack cigroup11
* chore(NA): correctly create 11 xpack ci groups
* chore(NA): try to balance ciGroup2 and 8
* chore(NA): reset number of xpack parallel worker builds to 10

Co-authored-by: Kibana Machine <[email protected]>

# Conflicts:
#	vars/kibanaCoverage.groovy
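Most of those commits move functional test suites between groups. As a hedged sketch of what such a move typically looks like, assuming the common Kibana FTR pattern of tagging a suite's top-level describe block (the file path, suite name, and test file below are illustrative, not taken from this PR):

```typescript
// Illustrative only: a typical Kibana FTR suite index that opts into a CI
// group via `this.tags(...)`. Moving a long-running suite into the new
// x-pack ciGroup11 is a one-line tag change.
import { FtrProviderContext } from '../ftr_provider_context';

export default function ({ loadTestFile }: FtrProviderContext) {
  describe('hypothetical long-running x-pack api integration suite', function () {
    this.tags('ciGroup11'); // previously 'ciGroup1'

    loadTestFile(require.resolve('./some_tests'));
  });
}
```

The CI job for a given group then selects suites by that tag, so the suite runs in ciGroup11 instead of ciGroup1 without any other changes.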
The current CI groups are not well balanced, with ciGroup1 taking roughly double the time of ciGroup5. I tried to balance them without creating a new ciGroup, but that ended up not being possible because either ciGroup1 or ciGroup5 would become too long. I've created a new x-pack ciGroup11 and moved two long tasks from ciGroup1 into ciGroup11.
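As a rough sketch of the balancing arithmetic behind that decision (the durations below are made-up placeholders, not measured CI timings):

```typescript
// Placeholder numbers only, to illustrate why a new group beats reshuffling.
const total = (tasks: number[]) => tasks.reduce((sum, t) => sum + t, 0);

const ciGroup1 = [30, 25, 10, 10]; // ~75, roughly double ciGroup5
const ciGroup5 = [20, 15];         // ~35

// Moving the two long tasks into ciGroup5 just relocates the bottleneck:
const reshuffledGroup5 = total([...ciGroup5, 30, 25]); // 90

// Splitting them into a new ciGroup11 keeps every group moderate:
const rebalanced = {
  ciGroup1: total([10, 10]),   // 20
  ciGroup5: total(ciGroup5),   // 35
  ciGroup11: total([30, 25]),  // 55
};
console.log({ before: total(ciGroup1), reshuffledGroup5, rebalanced });
```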