
map_additional_iam_users does not work #63

Closed
ismailyenigul opened this issue Jun 8, 2020 · 11 comments · Fixed by #119
Labels
bug 🐛 An issue with the system

Comments

@ismailyenigul (Contributor)

Describe the Bug

First I created an EKS cluster without the map_additional_iam_users variable, then added the following lines to terraform.tfvars:

map_additional_iam_users = [
  {
    userarn  = "arn:aws:iam::xyz:user/myuser"
    username = "myuser"
    groups   = ["system:masters"]
  }
]

then ran:

$ terraform plan

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

It does not show any change for the added users.

ismailyenigul added the bug 🐛 label Jun 8, 2020
@ismailyenigul (Contributor, Author)

I deployed the EKS cluster with kubernetes_config_map_ignore_role_changes = true, then set kubernetes_config_map_ignore_role_changes to false to update aws-auth with the map_additional_iam_users values, but I got a configmaps "aws-auth" already exists error.

module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Destroying... [id=kube-system/aws-auth]
module.eks_cluster.kubernetes_config_map.aws_auth[0]: Creating...
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Destruction complete after 0s

Error: configmaps "aws-auth" already exists

  on .terraform/modules/eks_cluster/auth.tf line 104, in resource "kubernetes_config_map" "aws_auth":
 104: resource "kubernetes_config_map" "aws_auth" {
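
For what it's worth, this failure pattern is what you would expect if the module toggles between two mutually exclusive resources that both own the same configmap. Below is a minimal sketch of that mechanism, assuming (based only on the resource addresses in the error output, not the module's actual code) a count driven by the variable:

variable "kubernetes_config_map_ignore_role_changes" {
  type    = bool
  default = true
}

# Flipping the variable plans a destroy of one resource and a create of the
# other, but both target the same kube-system/aws-auth object. If the create
# is attempted while the old object still exists, the Kubernetes API rejects
# it with `configmaps "aws-auth" already exists`.
resource "kubernetes_config_map" "aws_auth_ignore_changes" {
  count = var.kubernetes_config_map_ignore_role_changes ? 1 : 0

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapUsers = "[]" # illustrative placeholder
  }
}

resource "kubernetes_config_map" "aws_auth" {
  count = var.kubernetes_config_map_ignore_role_changes ? 0 : 1

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapUsers = "[]" # illustrative placeholder
  }
}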



@ismailyenigul (Contributor, Author)

I managed to apply map_additional_iam_users to the aws-auth configmap by keeping kubernetes_config_map_ignore_role_changes = true and adding:

data "null_data_source" "wait_for_cluster_and_kubernetes_configmap" {
  inputs = {
    cluster_name             = module.eks_cluster.eks_cluster_id
    kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
  }
}

and then using

cluster_name = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]

in the "eks_node_group" module.
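
Spelled out, the node group wiring looks roughly like this (a sketch; every argument other than cluster_name is elided or illustrative):

module "eks_node_group" {
  source = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.5.0"

  # Going through the null_data_source, rather than referencing
  # module.eks_cluster.eks_cluster_id directly, makes the node group wait
  # until both the cluster and the aws-auth configmap exist.
  cluster_name = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]

  # ... other node group arguments ...
}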

From your README

If you want to modify the Node Group (e.g. add more Node Groups to the cluster) or need to map other IAM roles to 
Kubernetes groups, set the variable kubernetes_config_map_ignore_role_changes to false and re-provision the module. 
Then set kubernetes_config_map_ignore_role_changes back to true.

If I set kubernetes_config_map_ignore_role_changes = false, Terraform destroys and re-creates the current node group, and I get the following error:

module.eks_node_group.aws_eks_node_group.default[0]: Destruction complete after 3m23s
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Destroying... [id=kube-system/aws-auth]
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Destruction complete after 0s
Error: configmaps "aws-auth" already exists

  on .terraform/modules/eks_cluster/auth.tf line 104, in resource "kubernetes_config_map" "aws_auth":
 104: resource "kubernetes_config_map" "aws_auth" {

@ismailyenigul (Contributor, Author)

While kubernetes_config_map_ignore_role_changes = true, I created another node group with the following module, and I can see the mapped users in the configmap:

module "eks_node_group2" {
  source         = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.5.0"
kubectl get configmaps -n kube-system aws-au
th -o yaml
apiVersion: v1
data:
  mapAccounts: |
    []
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::myawsid:role/my-worker-node-1
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::myawsid:role/my-worker-node-2
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - "groups":
      - "system:masters"
      "userarn": "arn:aws:iam::myawsid:user/iy"
      "username": "iy"
    - "groups":

Everything seems to work fine if I set kubernetes_config_map_ignore_role_changes = true. I could not find a use case for setting it to false.

@rewt

rewt commented Feb 19, 2021

I also had an issue when setting kubernetes_config_map_ignore_role_changes = false.

On the initial build the setting worked correctly, but on the next terraform plan Terraform wanted to remove the EKS worker group role, since it wasn't specified.

When I set kubernetes_config_map_ignore_role_changes = true again, it wanted to remove the changes I had made to add IAM users, effectively locking me out of the cluster.

Using kubernetes_config_map_ignore_role_changes = true works fine otherwise, and I agree: I cannot find a use case for kubernetes_config_map_ignore_role_changes = false.

@marcelloromani

I'm glad you filed this bug. I had failed to notice the line about kubernetes_config_map_ignore_role_changes in the README, and was struggling to understand why a change to the value of map_additional_iam_roles would only cause an in-place update of the cluster definition but no change whatsoever in the auth configmap.

@marcelloromani

Is there any drawback to leaving this parameter set to true?

@marcelloromani

marcelloromani commented Apr 19, 2021

After setting the aforementioned variable to true and adding a role to map_additional_iam_roles, I encountered the following error:

module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Destruction complete after 0s

Error: configmaps "aws-auth" already exists

  on .terraform/modules/eks/auth.tf line 103, in resource "kubernetes_config_map" "aws_auth":
 103: resource "kubernetes_config_map" "aws_auth" {

Is toggling kubernetes_config_map_ignore_role_changes between false and true meant to destroy the aws-auth configmap, to prevent this issue?

Edit: I ran terraform apply a second time and it succeeded.

@marcelloromani

Update: I encountered the same error configmaps "aws-auth" already exists when creating a fresh cluster. Re-applying did not solve the problem.

The only workaround I could find was to disable parallelism: terraform apply -parallelism=1

I think there's a race condition or a missing dependency lurking somewhere.
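
In case it helps: on Terraform 0.13 or later, an explicit module-level dependency should express the same ordering without serializing the whole apply. A sketch using plain Terraform, not anything this module documents:

module "eks_node_group" {
  source = "git::https://github.com/cloudposse/terraform-aws-eks-node-group.git?ref=tags/0.5.0"

  cluster_name = module.eks_cluster.eks_cluster_id
  # ... other node group arguments ...

  # Wait for everything in the cluster module, including the aws-auth
  # configmap, before creating the node group (requires Terraform >= 0.13).
  depends_on = [module.eks_cluster]
}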

@reixd

reixd commented Jul 5, 2021

I'm also seeing this error.

@marcelloromani

The solution we eventually went for (at least in one of our projects) was to disable aws-auth management in the module altogether and manage the configmap ourselves. The downside is that you have to remember to include the node role yourself; the upside is much more predictable behaviour.
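
A minimal sketch of that approach, assuming the node group module exports the role ARN under an output named eks_node_group_role_arn (illustrative; check your module's actual outputs):

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        # Without this entry, worker nodes cannot join the cluster.
        rolearn  = module.eks_node_group.eks_node_group_role_arn # illustrative output name
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
    mapUsers = yamlencode([
      {
        userarn  = "arn:aws:iam::xyz:user/myuser"
        username = "myuser"
        groups   = ["system:masters"]
      }
    ])
  }
}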

@Nuru (Contributor)

Nuru commented Jul 14, 2021

By default, kubernetes_config_map_ignore_role_changes is set to true, ignoring changes to roles. It is meant to ignore only changes to roles, including var.map_additional_iam_roles, but not changes to var.map_additional_iam_users. However, I cannot reproduce the problem with map_additional_iam_users.
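
Concretely, "ignoring changes to roles" comes down to a lifecycle block scoped to the mapRoles key only, so edits to mapUsers still produce a plan diff. A sketch of that mechanism (illustrative, not the module's exact code):

resource "kubernetes_config_map" "aws_auth_ignore_changes" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = "[]" # drift in this key is ignored after creation
    mapUsers = "[]" # changes here are still planned and applied
  }

  lifecycle {
    # Ignore only the mapRoles key; mapUsers and mapAccounts stay managed.
    ignore_changes = [data["mapRoles"]]
  }
}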

In PR #119 I have corrected the README to address the issue of changing kubernetes_config_map_ignore_role_changes, and made other changes which should fix the remaining issues raised in this thread. If there are still problems, please open a new issue.

Nuru closed this as completed Jul 14, 2021