configmaps "aws-auth" already exists #852
Which version of the module are you using? This shouldn't be possible if you have ... What is your full ...?
Curious how ... You can see in the guide here: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html. In the ...
To reproduce this: ...
Yes, AWS does add to the aws-auth configmap when creating managed nodes. However, there is dependency management in the module to ensure that the aws-auth configmap is applied by Terraform in new clusters before it attempts to create the managed node groups. It happens via the null_data_source in node_groups.tf.
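(To make that mechanism concrete, here is a simplified, hypothetical sketch of the ordering trick. It is not the module's actual code; the resource and variable names are made up.)

# Simplified illustration only - names are hypothetical, not the module's internals.
data "null_data_source" "node_groups" {
  inputs = {
    cluster_name = aws_eks_cluster.this.name
    # Referencing the configmap's ID makes this data source wait for it.
    aws_auth     = kubernetes_config_map.aws_auth[0].id
  }
}

resource "aws_eks_node_group" "workers" {
  # Reading the cluster name through the data source gives the node group an
  # implicit dependency: cluster -> aws-auth configmap -> managed node group.
  cluster_name    = data.null_data_source.node_groups.outputs["cluster_name"]
  node_group_name = "example"
  node_role_arn   = aws_iam_role.workers.arn
  subnet_ids      = var.subnets

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }
}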
Using module version v12.0.0 and ...
Same error using module version v11.1.0 + ...
My fault, missed ...
Same mistake; apologies, I didn't realise the provider block was required.
I'm experiencing the same issue with the latest module: 12.0.0
I am using the provider already & have ... What am I doing wrong here?
FYI - just a follow-up on this: I am not using managed nodes and was attempting to add a Fargate profile to the same EKS cluster, so that is what was causing my error. Once I removed the Fargate profile and IAM role, everything worked with the latest version (12.0.0).
Same issue here with using the ...
Looking at the codebase, if I add a dependency in the ...
Using v12.0.0 with worker_groups_launch_template and getting the below error: ...
@ibratoev - did you just add a ...? Or is there a dependencies variable for the module, or is this a PR?
Having the same issue with v12.1.0
This is on a new cluster being created.
Just a note from my experience (even though I'm not using the module): if you place the aws-auth creation between the cluster creation and the node group creation, it works. My dependency chain is "Create Cluster --> Create Auth --> Create Node Groups".
I have found that if one already has a Kubernetes cluster and a ~/.kube/config file pointing to that cluster, the aws-auth is set up there and not in your AWS EKS cluster. The code does not even check whether it's the right cluster; it simply assumes that the current kube config is correct, which is strange given that the cluster is being created. My solution was to remove the unwanted aws-auth entity from my other cluster and temporarily remove the kube config file while creating the AWS EKS cluster, and all seemed fine. Seems like one of those use cases where nobody thought about someone managing several clusters and already having a kube config file pointing to a running system.
One more note regarding the same error. We had pinned our EKS module to an older version, v7.0.0. Once we upgraded to v12.1.0, the same thing happened to us, since our clusters had already existed there for a while anyway...
Same error here; solved it by configuring the kubernetes provider to point at my cluster: ...
Basic example reference: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/examples/basic/main.tf
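(For reference, the provider wiring in that example looks roughly like the following. This is a simplified sketch; defer to the linked main.tf for the authoritative version.)

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  # load_config_file only exists on the 1.x kubernetes provider; it keeps
  # Terraform from picking up whatever ~/.kube/config happens to point at.
  load_config_file       = false
}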
What's the correct answer for this? I don't understand.
Same issue here, and I have included the basic example reference. Module version: 12.2.0.
I'm using the worker_groups_launch_template configuration.
I have other clusters created with a much older version of this module (5.0.0). Maybe this configuration is overlapping with my other cluster?
This error happens when I use Linux but not when I use macOS.
OK, I solved my issue using this configuration: ...
I hope it helps someone.
Maybe my experience will help someone... I had an existing EKS cluster created using module version ...
I had correctly set up ... I resolved it very easily - I imported the existing aws-auth configmap into my Terraform state.
That solved it for me. Looks like when this resource was added, the possibility of a pre-existing aws-auth config map did not come to mind :)
Thank you @ivanmartos, it worked for me. In my case I had this config: ... The first TF run went fine without errors. However, from the next run it was throwing the error: ...
What might help solve at least some people's problems: make sure that your kubernetes provider / kube context actually points at the cluster you are creating, so TF doesn't use the wrong one (or the default one) and see a configmap that truly exists, but in the wrong cluster.
I am doing everything as prescribed: using aliases, using the kubernetes provider, et cetera. I run into this error whenever I start a cluster with manage_aws_auth = false and then at a later date try to add ... I have a theory about what the issue is. This provider didn't create the configmap, but AWS EKS must have some background jobs that run. When one starts a cluster without creating a configmap for ... One can simply do this: ...
Facing the same issue with the below code snippet, used to add custom users to aws-auth while creating an EKS cluster with Terraform:

provider "aws" { ... }

data "aws_eks_cluster_auth" "cluster" { ... }

module "my-cluster" {
  worker_groups = [ ... ]
}
I think the problem is that, with managed node groups, the aws-auth configmap is already created, and the Terraform Kubernetes provider resource kubernetes_config_map does not support "upsert".
I just launched using 13.1.0 of ...
A colleague working with me tonight came up with steps to reproduce this issue with 13.1.0: ...
You will then see the error described in this issue's description. Quoting my colleague's explanation of what's apparently happening under the hood with EKS: ...
This is problematic for us (and no doubt for many others who encounter this ticket in a Google search) because the API key and secret used by Terraform are often (especially if it's the TFE cloud service) different from the ones a user might have access to when invoking ...
I'm probably missing something. I have run the MNG example to start managed node groups almost 5 times without errors.
What do you mean by your third point? Are you doing another terraform apply with your own ...
Yes. AWS creates the aws-auth configmap for managed node groups. That's why you have to ensure correct dependencies during your resource creation:
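(The list that followed is trimmed in this view. As a rough, hypothetical sketch of the ordering being described - cluster first, then the aws-auth configmap, then the managed node group:)

# Illustrative only; IAM roles, subnets and the provider setup are assumed to exist.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([{
      rolearn  = aws_iam_role.workers.arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }])
  }
}

resource "aws_eks_node_group" "this" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "example"
  node_role_arn   = aws_iam_role.workers.arn
  subnet_ids      = var.subnets

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Make Terraform apply the aws-auth configmap before AWS gets a chance to
  # create it itself while bootstrapping the managed node group.
  depends_on = [kubernetes_config_map.aws_auth]
}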
For the record, there was probably a race condition with dependencies in pre-v12.2.0, but it should be solved by #867. That PR adds an explicit depends_on on the aws-auth configmap. That means Terraform will start creating managed node groups only once the aws-auth configmap has been created by Terraform. FWIW, there are 2 ways to manage the aws-auth configmap: ...
In both cases, if you use the kubernetes provider, don't forget that Terraform can't manage existing resources if they don't exist in its state. So you have to ensure that the configmap doesn't exist (this is the case for a new cluster), or you have to import it first (if it already exists). See https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map#import.
I'll repeat: if you just give it a tf definition that includes a node group, you can reproduce the problem (regardless of whether or how you set the ...
What you're calling a 3-step manual procedure is already done by this module. I just explained step by step what this module does for worker groups, managed node groups, and Fargate profiles: ...
My English is probably not very good, but I think the meaning is there.
Plus, if you re-read my previous comment: if you set manage_aws_auth to false, you have to manage the aws-auth configmap yourself before the MNG creation. Otherwise AWS creates it during the MNG creation (in that case, you can't use the kubernetes provider directly; you have to import the configmap first or use kubectl).
I can confirm that when using a managed node group (MNG), setting ... Once I deleted the configmap from the cluster, it worked (this should be safe for a non-prod node-group EKS cluster, since the map will be re-created within a minute, but see @barryib's post below; he makes a good point):
Then ... If instead I delete the cm from the terraform state (e.g. ...
@schollii deleting the configmap can be dangerous, because you can lose access to your cluster. In that situation, I'd suggest importing the configmap instead: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map#import and #852 (comment)

$ terraform import module.MODULE_NAME.kubernetes_config_map.aws_auth[0] kube-system/aws-auth

See also #852 (comment), because Terraform works that way: ...
FYI, I filed this in the AWS provider GitHub repository, which may fix it: hashicorp/terraform-provider-aws#17333
Thanks @barryib, the import worked for me; safer than deleting the configmap and perhaps a better success rate than deleting the tf resource from state (that approach did not work for me).
Prepare for production downtime due to this bug.
Starting from that code sample and changing it for my use case fixed both this issue and the symptoms described in #699 for me after upgrading. Given how many tickets get created around these topics, I would humbly suggest adding an example for configuring multiple providers to the project's documentation. (Hopefully it's not just that I kept missing information that is already somewhere.)
@francoisfaubert you're probably right. Feel free to improve the docs. I'll be happy to review it.
Can anyone please help me? I'm using ... to create the EKS cluster.
mydata.tf: ...
varible.tf: ...
I am able to create the cluster but am getting the error when re-applying some changes: ...
Note: while re-applying, I'm updating the value in ... Has anyone faced this error before? Thanks in advance.
@harsh-cldcvr It looks like you're using the cloudposse EKS module, not this module. You might want to try the issue tracker in that repository instead. |
I have gone through this issue, and it seems the main problem is a configuration where you create multiple clusters, with IRSA management, from a single codebase, which leads to a wrong kubernetes provider setup. Could somebody confirm my understanding?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This issue has been automatically closed because it has not had recent activity since being marked as stale. |
@daroga0002 that was the problem in my case. The solution was to use multiple kubernetes providers (one per EKS cluster), each aliased and using its own set of aws_eks_cluster and aws_eks_cluster_auth data sources.
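(As a rough illustration of that setup - module and resource names here are hypothetical, and the real cluster settings are elided - one aliased kubernetes provider per cluster might look like this:)

# Hypothetical two-cluster layout.
data "aws_eks_cluster" "cluster1" {
  name = module.cluster1.cluster_id
}

data "aws_eks_cluster_auth" "cluster1" {
  name = module.cluster1.cluster_id
}

data "aws_eks_cluster" "cluster2" {
  name = module.cluster2.cluster_id
}

data "aws_eks_cluster_auth" "cluster2" {
  name = module.cluster2.cluster_id
}

provider "kubernetes" {
  alias                  = "cluster1"
  host                   = data.aws_eks_cluster.cluster1.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster1.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster1.token
}

provider "kubernetes" {
  alias                  = "cluster2"
  host                   = data.aws_eks_cluster.cluster2.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster2.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster2.token
}

# Each module instance gets the provider for its own cluster.
module "cluster1" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster settings elided ...
  providers = {
    kubernetes = kubernetes.cluster1
  }
}

module "cluster2" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster settings elided ...
  providers = {
    kubernetes = kubernetes.cluster2
  }
}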
Any workaround for this? One cluster is online, the second is created, the node group is created, and then it says the configmap is already created...
I confirmed that role-cluster2 is present in cluster2, but nevertheless it's still looking at the first one and seeing a misconfiguration.
We faced a similar problem at Square: how do we change ... The solution was not to use TF; rather, we invented a ... We used it to add/remove business services, using Kubernetes namespaces as sources/inputs for the mapRoles and mapUsers. That way we could use TF (or Flux/Argo CD) to provision the new namespace, and a configmap INSIDE the namespace, and the CMMC would pick this up and merge it into aws-auth. If we delete the namespace, the configmap goes away, and the CMMC then prunes it from aws-auth as well. We had also intended to use this to do blue/green EKS cluster transitions, which I believe would solve your problem here.
If you still face this issue after creating the EKS cluster, and you want full control from Terraform, then you can solve it with the following (note: we are using Fargate):

# Cluster authorization access level
resource "kubernetes_config_map_v1_data" "aws-auth" {
  # kubernetes_config_map_v1_data manages the data of an existing ConfigMap;
  # force = true takes ownership of fields managed elsewhere (e.g. by EKS).
  force = true

  data = {
    "mapRoles" = templatefile("${path.module}/data/role-map-config.tftpl", {
      execution_role_arn = var.execution_role_arn
      admin_role_arn     = var.admin_role_arn
    })
  }

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm submitting a...
What is the current behavior?
When deploying a cluster and using only managed node_groups, I believe that because they're managed, AWS creates the aws-auth configmap automatically and joins them to the cluster. This means that terraform throws the error configmaps "aws-auth" already exists. So the kubernetes_config_map should update and not throw an error saying the configmap already exists.

If this is a bug, how to reproduce? Please include a code sample if relevant.
Deploy the cluster using managed node_groups.

What's the expected behavior?
The aws-auth config map should not already exist.
kubernetes_config_map should apply/update-in-place aws-auth.
Are you able to fix this problem and submit a PR? Link here if you have already.
N/A
Environment details