feat: Allow communication between pods on workers and pods using the primary cluster security group #892
Conversation
This is a potentially breaking change as it may conflict with rules put in place by users already running mixed clusters. I think there should be a variable, default false, to enable creation of these rules. We can then remove it in a future major release.
Agree @dpiddockcmp and done.
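For illustration, a minimal sketch of what such an opt-in flag could look like, using the worker_create_cluster_primary_security_group_rules name that appears later in this thread; the description text is mine, not the module's:

variable "worker_create_cluster_primary_security_group_rules" {
  description = "Whether to create security group rules between workers and the cluster primary security group (opt-in to avoid breaking existing mixed clusters)"
  type        = bool
  default     = false
}

# Each new rule is then guarded with a count such as:
#   count = var.worker_create_cluster_primary_security_group_rules ? 1 : 0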
Was playing around testing this PR.
Created a 1.13 cluster with worker_create_cluster_primary_security_group_rules = true. Everything worked great. I then tried to upgrade to 1.14, and after finishing the eks_cluster upgrade Terraform threw an error:
Error: Provider produced inconsistent final plan
When expanding the plan for
module.eks.aws_security_group_rule.cluster_primary_ingress_workers[0] to
include new values learned so far during apply, provider
"registry.terraform.io/-/aws" produced an invalid new value for
.security_group_id: was cty.StringVal(""), but now
cty.StringVal("sg-0892893c8410071f4").
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
Applying a second time was fine.
I don't think this is something to worry about long term. The variable that creates these security group rules defaults to false, so users upgrading 1.13 clusters won't naturally hit the bug. And 1.13 support is being removed at the end of June, just a month away.
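For reference, the test above roughly corresponds to a module configuration like this minimal sketch; the cluster name, VPC and subnet IDs are placeholders, and every argument other than the new flag is my assumption about the module's inputs at the time:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "test-cluster"                          # placeholder
  cluster_version = "1.13"                                  # later bumped to "1.14" to reproduce the error
  vpc_id          = "vpc-00000000"                          # placeholder
  subnets         = ["subnet-00000001", "subnet-00000002"]  # placeholders

  # The new opt-in flag from this PR
  worker_create_cluster_primary_security_group_rules = true
}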
That means I think it's because of:

cluster_primary_security_group_id = var.cluster_version >= 1.14 ? element(concat(aws_eks_cluster.this[*].vpc_config[0].cluster_security_group_id, list("")), 0) : null

You can notice with a terraform plan that this value is not marked as computed (known after apply); it is planned as an empty string.

So we have to ensure a correct ordering during resource creation to let Terraform wait for the SG creation, or at least mark the value as computed. Here is how I made it work (but I don't know if it breaks something else):

locals {
  cluster_primary_security_group_id = var.cluster_version >= 1.14 ? coalescelist(data.null_data_source.cluster[*].outputs["cluster_primary_security_group_id"], [""])[0] : null
}

data "null_data_source" "cluster" {
  count = var.create_eks ? 1 : 0

  inputs = {
    cluster_primary_security_group_id = aws_eks_cluster.this[0].vpc_config[0].cluster_security_group_id
  }
}

With the null_data_source, Terraform waits for the cluster before resolving this value, and you can see the difference with a plan. This will also affect the corresponding output in the same way.

@dpiddockcmp @itssimon Does this work for you? Do you see any downsides or side effects of doing this?

@dpiddockcmp I assigned this PR to you, so you can squash and merge it when it's ready ^^
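On that last point, a minimal sketch of how the module output could be wired through this local; the exact output name and description are my assumption, not taken from the module:

output "cluster_primary_security_group_id" {
  description = "The cluster primary security group ID created by EKS on 1.14 or later"
  value       = local.cluster_primary_security_group_id
}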
I don't think the added complexity of using a null_data_source is worth it. I guess don't enable this on a 1.13 cluster 😉
Ohh that was the first thing I tried, but no luck. Maybe I didn't test it correctly ^^
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock
Description
Adds two security group rules allowing pods running on worker nodes to communicate with pods using the primary cluster security group (such as pods running on Fargate).
See #866 (comment).
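For readers skimming the description, a minimal sketch of what the two rules look like conceptually. Only the resource name cluster_primary_ingress_workers is taken from the error message earlier in this thread; the second resource name, the local names, and the port/protocol arguments are illustrative assumptions, not the module's actual implementation:

# Pods using the primary cluster security group (e.g. on Fargate) accept traffic from worker pods
resource "aws_security_group_rule" "cluster_primary_ingress_workers" {
  count = var.worker_create_cluster_primary_security_group_rules ? 1 : 0

  type                     = "ingress"
  protocol                 = "-1"
  from_port                = 0
  to_port                  = 0
  security_group_id        = local.cluster_primary_security_group_id
  source_security_group_id = local.worker_security_group_id # illustrative name
  description              = "Allow pods using the primary cluster SG to receive traffic from worker pods"
}

# Worker pods accept traffic from pods using the primary cluster security group
resource "aws_security_group_rule" "workers_ingress_cluster_primary" {
  count = var.worker_create_cluster_primary_security_group_rules ? 1 : 0

  type                     = "ingress"
  protocol                 = "-1"
  from_port                = 0
  to_port                  = 0
  security_group_id        = local.worker_security_group_id # illustrative name
  source_security_group_id = local.cluster_primary_security_group_id
  description              = "Allow worker pods to receive traffic from pods using the primary cluster SG"
}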
Checklist