Controller can't attach ALB security group to EKS cluster security group #1791
I have the same issue. The docs provide an explanation for this: the annotation supports attaching multiple security groups to the ALB, so I can see why the controller does not automatically apply them to the nodes' security group. However, the controller should automatically add the specified SG if there is only one in the annotation, AND it should support having some SGs that are added automatically and some that are not. This could be done with a second, auto-add annotation.
If auto-add is not given, all security groups in the alb.ingress.kubernetes.io/security-groups list would be auto-added. Then annotations like the sketch below on an Ingress would cause the ALB to have SGs A, B, and C, but only B to be added to the node security group. Ingress groups still work with this approach.
For now it seems that the workaround is to manually (or via terraform or a similar automation tool) add the ALB SG to the cluster SG.
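As a concrete illustration of that proposal: the security-groups annotation below is real, but the auto-add annotation name is invented here purely for illustration and does not exist in the controller.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Real annotation: all three SGs would be attached to the ALB.
    alb.ingress.kubernetes.io/security-groups: sgA, sgB, sgC
    # Hypothetical annotation (name invented for illustration):
    # only sgB would also be granted access on the node SG.
    alb.ingress.kubernetes.io/security-groups-auto-add: sgB
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: my-service  # placeholder backend
      port:
        number: 80
```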
@M00nF1sh Cool, I'd be happy to help.
@M00nF1sh Great, my pleasure to help.
Also, I think we can change the default behavior so that we reuse a single LB security group shared by all ALBs in the cluster, and create an additional LB security group per ALB. The shared LB security group would be granted access to the worker nodes on any port in your cluster, while the additional per-ALB LB security group would encode the inbound client CIDRs allowed to access that ALB (the client CIDRs might differ per ALB). The benefit of this approach is that we'd only need a single worker node security group rule no matter how many ALBs we have. The only downside is that we'd have a security group allowed to access the worker nodes on any port, which some customers might treat as a security threat; however, it looks secure to me since we'd only attach this security group to load balancers. We can use the same approach for NLBs too and share the same shared LB security group for both ALB and NLB.
@M00nF1sh I understand the possible need for different CIDRs per ALB, that makes sense, but can you clarify (maybe with an example) the ingress and egress rules on the shared LB SG?
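For what it's worth, based only on the description in the comment above, the rule layout might look like the following CloudFormation sketch. Everything here is an assumption, not the controller's actual behavior; resource names, the VPC parameter, and the worker-node SG ID are placeholders.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  SharedLBSecurityGroup:
    # One per cluster, attached to every ALB/NLB; carries no client
    # rules itself (default egress allows all outbound traffic).
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Shared LB SG referenced by the node SG rule
      VpcId: !Ref VpcId
  PerALBSecurityGroup:
    # One per ALB; encodes the client CIDRs allowed to reach that ALB.
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Per-ALB SG holding inbound client CIDRs
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 203.0.113.0/24  # example client CIDR, differs per ALB
  NodeIngressFromSharedLB:
    # The single worker-node rule, regardless of how many ALBs exist.
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: sg-0123456789abcdef0  # placeholder worker-node SG ID
      IpProtocol: "-1"               # any port, per the proposal
      SourceSecurityGroupId: !Ref SharedLBSecurityGroup
```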
Any update on this? I tried to add the ALB SG in …
Hi, we are experiencing the same issue, as we now need to annotate the attachment of multiple SGs using the "security-groups" annotation to get ingress to our cluster via the worker node security group. I would have thought this would be default behaviour; otherwise I do not see the point of annotating SGs that ultimately do not have access/ingress to the cluster. Is there a clear plan to implement this, and if so, do you have any idea when it might make its way out to us? Many thanks.
@cmsmith7 …
Thanks for the confirmation. Sad that it's not going to be here sooner (from a selfish point of view, lol), but all the same I'm incredibly pleased that it is now included in your roadmap and will be incorporated in a later version. Any inkling of an estimated target release date for 2.3.0? 😄
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
I think I am bumping into this exact issue. I'm defining "by hand" (via terraform) some large SGs with many CIDR rules, and I thought I could use the annotation. I think the ALB controller is managing one or more additional rules required for health checks to pass, and I think it has to do with the SG described as …
/remove-kind design
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@kishorj: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Starting with the v2.3 release, the controller supports the annotation. Closing the issue.
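A minimal sketch of the v2.3 behavior, assuming the annotation in question is alb.ingress.kubernetes.io/manage-backend-security-group-rules as described in the v2.3 docs (SG ID and backend are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    alb.ingress.kubernetes.io/security-groups: sg-0123456789abcdef0  # placeholder
    # Assumption: this is the v2.3 annotation meant above; it asks the
    # controller to manage the node SG rules for traffic from the ALB SG.
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true"
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: my-service  # placeholder backend
      port:
        number: 80
```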
it works perfectly!
Sheez, yeah, this solved my headache. I feel like it should be the default behavior.
Hi,
The ALB controller provides two ingress annotations for working with AWS security groups to restrict traffic.
In my scenario, alb.ingress.kubernetes.io/security-groups is applied. However, when navigating to the ingress URL, the web application was not accessible and returned a 504 error. So I checked the target group health check and found that all the backends were unhealthy.
I noticed that the cause is that the security group of the EKS cluster did not include the ALB security group, so the health check was unable to work normally.
On the other hand, if alb.ingress.kubernetes.io/inbound-cidrs is applied, the ALB works well, since the ALB security group is covered by the security group of the EKS cluster. Is there any roadmap for the ALB controller project to fix this problem?
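A minimal sketch of the two scenarios described above (names, SG ID, and CIDR are placeholders):

```yaml
# Scenario 1: a custom SG attached via security-groups; the controller
# (pre-v2.3) does not open the EKS cluster/node SG to it, so target
# health checks fail and the ALB returns the 504 described above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-with-custom-sg
  annotations:
    alb.ingress.kubernetes.io/security-groups: sg-0123456789abcdef0  # placeholder
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: my-service  # placeholder backend
      port:
        number: 80
---
# Scenario 2: inbound-cidrs on a controller-managed ALB SG; that SG is
# already allowed by the cluster SG, so health checks pass.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-with-inbound-cidrs
  annotations:
    alb.ingress.kubernetes.io/inbound-cidrs: 203.0.113.0/24  # placeholder CIDR
spec:
  ingressClassName: alb
  defaultBackend:
    service:
      name: my-service  # placeholder backend
      port:
        number: 80
```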