
Error: Unauthorized on .terraform/modules/eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth": #1287

Closed
kaykhancheckpoint opened this issue Mar 27, 2021 · 11 comments

Comments

kaykhancheckpoint commented Mar 27, 2021

Description

Fails to create module.eks.kubernetes_config_map.aws_auth because of an Unauthorized error.

Versions

  • Terraform: v0.14.8
  • Provider(s):
+ provider registry.terraform.io/hashicorp/aws v3.34.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.3
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

Reproduction

Steps to reproduce the behavior:
Are you using workspaces? Yes.
Have you cleared the local cache (see Notice section above)? Yes.
List steps in order that led up to the issue you encountered
terraform apply -var-file=prod.tfvars

Code Snippet to Reproduce

eks-cluster.tf

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.k8s_name
  cluster_version = "1.18"
  subnets         = data.terraform_remote_state.networking.outputs.private_subnets

  tags = {
    Terraform = "true"
    Environment = local.workspace
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }

  vpc_id = data.terraform_remote_state.networking.outputs.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [data.terraform_remote_state.networking.outputs.worker_group_mgmt_one_id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [data.terraform_remote_state.networking.outputs.worker_group_mgmt_two_id]
      asg_desired_capacity          = 1
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

kubernetes.tf

# Kubernetes provider
# https://learn.hashicorp.com/terraform/kubernetes/provision-eks-cluster#optional-configure-terraform-kubernetes-provider
# To learn how to schedule deployments and services using the provider, go here: https://learn.hashicorp.com/terraform/kubernetes/deploy-nginx-kubernetes

# The Kubernetes provider is included in this file so the EKS module can complete successfully. Otherwise, it throws an error when creating `kubernetes_config_map.aws_auth`.
# You should **not** schedule deployments and services in this workspace. This keeps workspaces modular (one for provisioning EKS, another for scheduling Kubernetes resources) as per best practices.

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args = [
      "eks",
      "get-token",
      "--cluster-name",
      data.aws_eks_cluster.cluster.name
    ]
  }
}

Expected behavior

Create module.eks.kubernetes_config_map.aws_auth[0]

Actual behavior

Fails to create; terraform apply errors with "Error: Unauthorized" (see below).

Terminal Output Screenshot(s)

Error: Unauthorized

  on .terraform/modules/eks/aws_auth.tf line 65, in resource "kubernetes_config_map" "aws_auth":
  65: resource "kubernetes_config_map" "aws_auth" {

Additional context

aws provider is pointing to a profile that has AdministratorAccess.

provider "aws" {
  region = var.region
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = "terraform"
}

schollii commented Mar 28, 2021

If you run the aws eks command from the command line, what happens? Also, in your kubernetes provider block, try using

  token                  = data.aws_eks_cluster_auth.default.token

(also see #1280)
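
For what it's worth, a quick way to check from the shell whether the same credentials Terraform is using can actually authenticate against the cluster (the cluster name is a placeholder and the "terraform" profile is the one from the aws provider block above; adjust to your setup):

aws eks get-token --cluster-name <your-cluster-name> --profile terraform
# if a token is issued, updating kubeconfig and reading the ConfigMap should also work
aws eks update-kubeconfig --name <your-cluster-name> --profile terraform
kubectl -n kube-system get configmap aws-auth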

@kaykhancheckpoint (Author)

@schollii

I tried adding that token to the provider "kubernetes" { } block, but I get the following error:

Error: Reference to undeclared resource

  on kubernetes.tf line 21, in provider "kubernetes":
  21:   token = data.aws_eks_cluster_auth.default.token

A data resource "aws_eks_cluster_auth" "default" has not been declared in the
root module.

@kaykhancheckpoint (Author)

Adding this to my kubernetes.tf provider block works. Thank you.

  token = data.aws_eks_cluster_auth.cluster.token
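
For anyone else hitting this, a sketch of what the full provider block in kubernetes.tf looks like with that line added (it reuses the aws_eks_cluster and aws_eks_cluster_auth data sources declared in eks-cluster.tf above):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  # the token comes from the aws_eks_cluster_auth data source, so the provider
  # no longer depends solely on the exec-based aws eks get-token call
  token                  = data.aws_eks_cluster_auth.cluster.token
}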


bil9000 commented Jul 2, 2021

Hi. I am not having the same luck as @kaykhancheckpoint. I just ran the same Terraform in a different account: it worked in the first account, but in the second account it fails. I even erased the contents of .kube/config in case there was something stale, and tried enabling and disabling tokens.


bil9000 commented Jul 2, 2021

You know what, I think it was a capacity issue with us-east-1. Sorry to bother y'all.


serhiiromaniuk commented Dec 2, 2021

Same here

  required_providers {
    aws        = "3.64.2"
    local      = ">= 1.4"
    random     = ">= 2.1"
    kubernetes = "1.13.4"
    external   = "~> 2.1.0"
    helm       = "~> 2.4.1"
    null       = "~> 3.1.0"
    tls        = "~> 3.1.0"
    vault      = "~> 3.0.1"
  }


dancave72 commented Dec 14, 2021

I had this same issue with eu-west-2 (London). When I deployed the same code last Thursday it worked, but when I tried it again today it fails.

I didn't change any code, but it seems to have been an issue with the Terraform state file and resources from my last apply. It seems the IAM role for my cluster didn't create properly. :/

When I looked at the AWS EKS console, I saw an error with a link to the troubleshooting docs.
(screenshot: Screenshot 2021-12-14 at 17 39 01)

When I clicked the link, it took me to https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html#security-iam-troubleshoot-cannot-view-nodes-or-workloads

So basically there was no role for the cluster.


oesah commented Dec 29, 2021

I had the same issue and solved it by adding proper AWS credentials to the environment. The one I added before didn't have enough permissions, but after adding another one with admin permissions, it worked.
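
For reference, a minimal sketch of what "adding credentials to the environment" can look like, using the standard AWS environment variables (the profile name and key values are placeholders):

# either point the CLI and Terraform at a profile that has sufficient permissions...
export AWS_PROFILE=terraform
# ...or export a key pair with admin permissions directly (values are placeholders)
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...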

@govardha

I am definitely new to AWS, EKS, and Terraform, but I think I know what the problem is; I ran into the same issue.

Initially, the Kubernetes cluster I was trying to work on was created by a federated Administrator user. I then tried to work on this cluster as another user who also had the "Administrator" role, but, as per the AWS snippet below, and to keep things simple, I had to basically destroy and re-create the cluster as the new user for things to work properly.

When you create an Amazon EKS cluster, the AWS Identity and Access Management (IAM) entity user or role, such as a federated user that creates the cluster, is automatically granted system:masters permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane. This IAM entity doesn't appear in any visible configuration, so make sure to keep track of which IAM entity originally created the cluster. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes and create a Kubernetes rolebinding or clusterrolebinding with the name of a group that you specify in the aws-auth ConfigMap.
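
If re-creating the cluster is not an option, extra users or roles can also be mapped into the aws-auth ConfigMap from this module. A hedged sketch, assuming the map_users input of the module version used in this issue (the ARN and username are placeholders):

module "eks" {
  # ... existing arguments from the snippet above ...

  # map an additional IAM user into aws-auth so it is not only the
  # cluster creator that gets access
  map_users = [
    {
      userarn  = "arn:aws:iam::111122223333:user/other-admin"  # placeholder ARN
      username = "other-admin"
      groups   = ["system:masters"]
    },
  ]
}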

@jlsvieira

This is the solution. Then you update your $HOME/.aws/cred.yaml file with the access key, secret key, and session token:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/role-name --role-session-name "RoleSession1" --profile IAM-user-name > assume-role-output.txt

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html#using-temp-creds-sdk-ec2-instances
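
A sketch of getting those temporary credentials from the assume-role output into the environment, so that both the aws CLI and the kubernetes provider's exec block pick them up (assumes jq is installed and assume-role-output.txt holds the JSON produced by the command above):

# extract the temporary credentials returned by aws sts assume-role
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' assume-role-output.txt)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' assume-role-output.txt)
export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' assume-role-output.txt)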


github-actions bot commented Nov 9, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 9, 2022