Query AWS to find subnets with explicitly specified subnet IDs #2864
Conversation
Hi @codablock. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 6c536a6 to 3c73384
Thanks for this. I'm sure you're right, but I need to have a think about it, as ever, with any of the networking code.
Still not had time to review this. Will look on Monday. In the meantime, /ok-to-test
Force-pushed from 3c73384 to 177cf73
FYI, the note about not needing to check for AZ/public IPs that I made in the initial commit and in the PR description is not really true anymore. One commit later I decided to re-add comparable checks to the filter code so that error messages are more useful. Always being told that a subnet does not exist when in reality it simply doesn't match the AZ/IP criteria is very confusing.
This makes sense to me.
@randomvariable Sorry to bother you, but did you find time to look into this? :)
cc @richardcase @sedefsavas for awareness. Any objections to proceeding with this?
Will take a look @enxebre
There's a bug in the CAPI AWS provider which prevents Machines from targeting subnet IDs which are not present on the infra CR. This PR is a workaround by keeping the infra CR subnets in sync with what is required by NodePools. This should be dropped once kubernetes-sigs/cluster-api-provider-aws#2864 gets merged.
This basically unifies the code used to find subnets with filters and subnets by ID. This is done by handling the ID case as a simple filter instead of looking into the cluster network spec. The reason is that explicitly specified subnets are not necessarily part of the cluster network spec. For example, it might be desired to run an EKS control plane inside 3 specific subnets (so they are part of the network spec) while running workers in different subnets from the same VPC. A side effect of this change is that we don't have to check for the correct failure domain and/or public/private subnets anymore, as this is implicitly handled by the criteria.
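The unification described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual CAPA code: the local Filter type and the subnetFilters helper are hypothetical stand-ins for the AWS SDK's ec2.Filter and for whatever function builds the DescribeSubnets query. The idea is simply that explicitly specified subnet IDs become a plain "subnet-id" filter (a documented EC2 DescribeSubnets filter name), so both lookup paths go through the same query.

```go
package main

import "fmt"

// Filter mirrors the shape of an EC2 DescribeSubnets filter.
// (Hypothetical local type; the real code would use the AWS SDK's ec2.Filter.)
type Filter struct {
	Name   string
	Values []string
}

// subnetFilters unifies the two lookup paths: explicitly specified subnet
// IDs are appended as a "subnet-id" filter alongside any tag-based filters,
// instead of being resolved separately against the cluster network spec.
func subnetFilters(subnetIDs []string, tagFilters []Filter) []Filter {
	filters := append([]Filter{}, tagFilters...)
	if len(subnetIDs) > 0 {
		filters = append(filters, Filter{Name: "subnet-id", Values: subnetIDs})
	}
	return filters
}

func main() {
	f := subnetFilters(
		[]string{"subnet-0abc", "subnet-0def"},
		[]Filter{{Name: "vpc-id", Values: []string{"vpc-123"}}},
	)
	for _, flt := range f {
		fmt.Println(flt.Name, flt.Values)
	}
}
```

Because the IDs now flow through the same DescribeSubnets query as the filters, a subnet that does not exist and a subnet that does not match the other criteria are distinguished by the query result rather than by separate validation code.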
f1950e6
to
582357a
Compare
@pydctw @sedefsavas Thanks for your reviews :)
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: sedefsavas. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Removed two TODOs waiting on an upstream CAPA PR to be merged and included: kubernetes-sigs/cluster-api-provider-aws#2864. Signed-off-by: Bryan Cox <[email protected]>
What type of PR is this?
/kind feature
What this PR does / why we need it:
This basically unifies the code used to find subnets with filters and subnets
by ID. This is done by handling the ID case as a simple filter instead of
looking into the cluster network spec.
The reason is that explicitly specified subnets are not necessarily part
of the cluster network spec. For example, it might be desired to run an
EKS control plane inside 3 specific subnets (so they are part of the
network spec) while running workers in different subnets from the same
VPC.
A side effect of this change is that we don't have to check for the correct
failure domain and/or public/private subnets anymore, as this is implicitly
handled by the criteria.
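The failure-domain and public/private checks mentioned above can likewise be expressed as query criteria rather than post-hoc validation. The sketch below is illustrative only: the Filter type and withPlacementCriteria helper are hypothetical, but "availability-zone" and "map-public-ip-on-launch" are both documented EC2 DescribeSubnets filter names.

```go
package main

import "fmt"

// Filter mirrors the shape of an EC2 DescribeSubnets filter.
// (Hypothetical local type standing in for the AWS SDK's ec2.Filter.)
type Filter struct {
	Name   string
	Values []string
}

// withPlacementCriteria appends the failure-domain and public-subnet
// requirements as DescribeSubnets filters, so a subnet in the wrong AZ
// simply fails to match the query instead of needing a separate check.
func withPlacementCriteria(filters []Filter, az string, public bool) []Filter {
	out := append([]Filter{}, filters...)
	if az != "" {
		out = append(out, Filter{Name: "availability-zone", Values: []string{az}})
	}
	if public {
		out = append(out, Filter{Name: "map-public-ip-on-launch", Values: []string{"true"}})
	}
	return out
}

func main() {
	f := withPlacementCriteria(
		[]Filter{{Name: "subnet-id", Values: []string{"subnet-0abc"}}},
		"eu-central-1a", true,
	)
	for _, flt := range f {
		fmt.Println(flt.Name, flt.Values)
	}
}
```

As the later review comment notes, the error-message trade-off of this approach is that "no matching subnet" conflates "does not exist" with "does not match the AZ/IP criteria", which is why comparable explicit checks were re-added in a follow-up commit.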
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Checklist:
Release note: