feat: Add EKS Fargate support #866
Conversation
@itssimon Thanks for working on this. But since this module is getting more complex, as discussed in #635 and #774, we decided to split functionality into different submodules. We started recently with managed node groups. So my point here is that this feature should be added as a submodule. Can you please work in that direction?
Done @barryib
Is there anything else required for this to be approved and merged?
Sorry for the late answer, I was busy the last few days. Reviewed. Thanks again for working on this.
Thanks @barryib. I addressed all your review notes.
Hey all, can you confirm that this does allow for fargate-only cluster support too? With this, it's creating an EKS cluster (with security groups, IAM roles, etc.), but not any of the aws-auth or fargate policies. Am I missing something?
I think I addressed all review notes.
@@ -0,0 +1,10 @@
module "fargate" {
This needs a dependency on the aws-auth configmap. Otherwise, spinning up fresh clusters with fargate enabled may fail due to the race condition of who creates the configmap first.
Could you provide some guidance on how to best achieve that?
`node_groups.tf` in the root of the module has one example. But honestly I think the rejected work in #867 is a more "terraform native" way of implementing it. Makes the module easier to use stand-alone as well. My mistake as I didn't think depending on vars had been implemented yet when I moved managed node groups to a module.
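For context, the shape of that pattern is the root module handing the submodule a list of values to wait on. A hedged sketch of a fargate equivalent, with illustrative names (`fargate_depends_on` is not existing code in this PR):

```hcl
# Root-module sketch: pass the aws-auth ConfigMap into the submodule so the
# Fargate profiles are only created after it exists. Names are illustrative.
module "fargate" {
  source       = "./modules/fargate"
  cluster_name = coalescelist(aws_eks_cluster.this[*].name, [""])[0]
  subnets      = var.subnets

  fargate_profiles = var.fargate_profiles

  # Everything listed here is created before the Fargate profiles.
  fargate_depends_on = [kubernetes_config_map.aws_auth]
}
```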
Adding this dependency creates a cycle. Are you sure the fargate module would cause the aws-auth ConfigMap to be created?
Error: Cycle: data.null_data_source.fargate, module.fargate.var.cluster_name, module.fargate.aws_iam_role.eks_fargate_pod, module.fargate.output.aws_auth_roles, local.configmap_roles, kubernetes_config_map.aws_auth
Yes, AWS automatically creates the aws-auth configmap when you create a fargate profile on a cluster that does not have the configmap. This is a race condition that caused some pain for early adopters of the managed node groups in this module.
The problem with the cycle is using the cluster_name from the null_resource to create the IAM role. This could be avoided by using a dependency variable instead of the null_resource. Then only the `aws_eks_fargate_profile` needs to block on the configmap.
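A minimal sketch of what that could look like on the submodule side, assuming a list-shaped `fargate_profiles` input with `name` and `namespace` keys; all names here are illustrative, not the module's actual interface:

```hcl
# Submodule sketch (illustrative names). Only the profile resource waits on
# whatever the caller passes in, so the IAM role no longer participates in
# the cycle shown above.
variable "fargate_depends_on" {
  description = "Any value the Fargate profiles should wait on, e.g. the aws-auth ConfigMap."
  type        = any
  default     = null
}

resource "aws_eks_fargate_profile" "this" {
  count = var.create_eks ? length(var.fargate_profiles) : 0

  cluster_name           = var.cluster_name
  fargate_profile_name   = var.fargate_profiles[count.index]["name"]
  pod_execution_role_arn = aws_iam_role.eks_fargate_pod[0].arn
  subnet_ids             = var.subnets

  selector {
    namespace = var.fargate_profiles[count.index]["namespace"]
  }

  # Depending on an input variable keeps the dependency off the IAM role.
  depends_on = [var.fargate_depends_on]
}
```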
resource "aws_iam_role" "eks_fargate_pod" {
  count = var.create_eks ? 1 : 0
  name  = format("%s-fargate", var.cluster_name)
The cluster and worker roles are created using `name_prefix` by default. This allows a cluster with the same name to be created in different regions, as IAM is global. I have no idea if anybody actually does that. There is also the complexity around `name_prefix` vs `name` and `var.workers_role_name`. Do we want to replicate that here? @barryib
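If we do replicate it, a rough sketch might look like this; the `fargate_pod_execution_role_name` variable is hypothetical, and the assume-role policy is inlined only to keep the example self-contained:

```hcl
variable "fargate_pod_execution_role_name" {
  description = "Explicit name for the pod execution role. Leave empty to use a name_prefix derived from the cluster name."
  type        = string
  default     = ""
}

resource "aws_iam_role" "eks_fargate_pod" {
  count = var.create_eks ? 1 : 0

  # Prefer an explicit name when given, otherwise use a prefix so the same
  # cluster name can be reused in several regions (IAM is global).
  name        = var.fargate_pod_execution_role_name != "" ? var.fargate_pod_execution_role_name : null
  name_prefix = var.fargate_pod_execution_role_name != "" ? null : format("%s-fargate", var.cluster_name)

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
    }]
  })
}
```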
This is the only open topic to resolve now. How would you like to handle the naming here, @barryib?
Co-authored-by: Daniel Piddock <[email protected]>
Please add in a README.md for the module. See `modules/node_groups`.
}
}

resource "aws_iam_role" "eks_fargate_pod" {
Can you add in the `iam_path` variable from the parent too?
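Something like this minimal sketch, mirroring the parent's variable (the description text is paraphrased, not copied verbatim):

```hcl
# Sketch: expose the parent's iam_path and apply it to the role.
variable "iam_path" {
  description = "If provided, IAM roles will be created on this path."
  type        = string
  default     = "/"
}

# ...and on aws_iam_role.eks_fargate_pod:
#   path = var.iam_path
```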
variable "create_fargate_pod_execution_role" { | ||
description = "Controls if the EKS Fargate pod execution IAM role should be created." | ||
type = bool | ||
default = false |
Could creation of the IAM role be based on the length of `fargate_profiles`? That way this variable could default to true.
And if we're giving the option of not creating the fargate role, we should provide the ability to pass in an externally created role ARN.
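A rough sketch of that suggestion; the variable and local names are illustrative rather than the module's final interface, and `var.fargate_profiles` is assumed to be the submodule's profile input:

```hcl
variable "fargate_pod_execution_role_arn" {
  description = "ARN of an existing pod execution role to use instead of creating one."
  type        = string
  default     = ""
}

locals {
  # Only create the role when profiles are defined and no external ARN is supplied.
  create_pod_execution_role = var.create_eks && var.create_fargate_pod_execution_role && length(var.fargate_profiles) > 0 && var.fargate_pod_execution_role_arn == ""

  # Resolve to whichever role the Fargate profiles should actually use.
  pod_execution_role_arn = local.create_pod_execution_role ? join("", aws_iam_role.eks_fargate_pod[*].arn) : var.fargate_pod_execution_role_arn
}

# The role's count would then become:
#   count = local.create_pod_execution_role ? 1 : 0
```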
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
A vote was kind of taken on #774, where a majority supported going all in on TF 0.13's for_each module support instead of looping inside the module. But 0.13 isn't exactly working too well with the module currently. This raises design questions for this PR, as the fargate submodule contains an IAM role.
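For reference, the 0.13 direction under discussion would look roughly like this from a caller's point of view; the interface names are hypothetical:

```hcl
# One submodule instance per profile via module-level for_each (TF >= 0.13).
# Where the shared pod execution IAM role would live is the open design question.
module "fargate_profile" {
  source   = "./modules/fargate"
  for_each = var.fargate_profiles

  cluster_name = module.eks.cluster_id
  name         = each.key
  namespace    = each.value.namespace
  subnets      = var.subnets
}
```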
Thanks @itssimon for working on this. We have a terraform-aws-modules working session this Friday. We'll discuss the direction we want to take with this feature. We'll come back to you pretty soon.
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock

Description

Adds EKS Fargate support, configured via `eks_fargate_profiles = [...]`. This also creates an IAM role for pod execution and extends the aws-auth ConfigMap.

Checklist