[E2E] Tests for MachinePool failing in open PRs #3295
Comments
@Ankitasw: This issue is currently awaiting triage. If CAPA/CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @shivi28
@Ankitasw: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind failing-test
I think it might be flaky, as it didn't fail for this PR.
Hi @shivi28, I don't have clear evidence that this failure is happening only because of the PR mentioned above; that's why I raised this issue. The reason I suspect it is that we didn't run the E2E tests on that PR before merging, and its changes are related to the machinepool controller. So, as part of this issue, we can find out whether the E2E upgrade tests are failing because of that merge.
Actually, the test was failing only in the upgrade PR, because CAPI has enabled scaling MachinePools to zero in its latest release. To handle this change, we need to allow zero as the minimum size for AWSMachinePools in CAPA, which is in progress. That's why the tests were failing only in that PR; it's already on hold, and we can merge it after this fix. Maybe we can close this issue, as I can't see any other PR facing the test failure. cc: @sedefsavas
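For context, CAPI's scale-to-zero support means an AWSMachinePool must be able to declare a minimum size of zero, which CAPA's validation previously rejected. A minimal sketch of such a manifest (the resource name, namespace, and instance type below are placeholders, and the exact API version/field names should be checked against the CAPA release in use):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachinePool
metadata:
  name: example-machinepool   # placeholder name
  namespace: default
spec:
  minSize: 0                  # the new minimum CAPA needs to accept for scale-to-zero
  maxSize: 3
  awsLaunchTemplate:
    instanceType: t3.medium   # placeholder instance type
```

Until CAPA accepts `minSize: 0`, upgrade tests that exercise CAPI's scale-to-zero path will fail at the webhook/validation layer rather than in the controller itself.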
Thanks @shivi28. If we have found the root cause and it can be fixed in another PR, we can attach this issue to that open PR so that it gets closed with the PR. Wdyt?
@shivi28
Yes @sedefsavas, they have added a new test case.
making sure this issue is also updated: kubernetes-sigs/cluster-api#6312 is a required fix for #3253 to pass. Not sure if we should skip that E2E for now, until that other PR is merged and released? |
@mweibel I will leave this to you (since I was not involved in the discussions) and let @sedefsavas take a look.
We can wait for kubernetes-sigs/cluster-api#6312 to be in before merging the CAPI version bump, if there are no objections to that. I'm trying to avoid the risk of forgetting to re-enable the E2E test once it's disabled.
/kind bug
What steps did you take and what happened:
E2E tests for MachinePools are failing with the error below.
What did you expect to happen:
E2E tests should pass
Anything else you would like to add:
This probably happened because of #3255.
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):