
Error: error creating KMS Key: MalformedPolicyDocumentException #2325

Closed
rajesh174u opened this issue Dec 7, 2022 · 3 comments · Fixed by #2328

Comments

rajesh174u commented Dec 7, 2022

Description

I upgraded from 18.20.5 to 19.0.0 to use iam_role_additional_policies in self_managed_node_groups.
Running terraform apply now fails with the errors below:

│ Error: error creating KMS Key: MalformedPolicyDocumentException: The new key policy will not allow you to update the key policy in the future.
│
│   with module.eks.module.eks.module.kms.aws_kms_key.this[0],
│   on .terraform/modules/eks.eks.kms/main.tf line 8, in resource "aws_kms_key" "this":
│    8: resource "aws_kms_key" "this" {
│ 
╵
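For context on the first error: KMS rejects any key policy that would leave the caller unable to update the policy later, and in v19 the module creates the cluster-encryption KMS key (and its policy) itself. A commonly reported workaround, sketched below under the assumption that the v19 inputs `kms_key_enable_default_policy` and `kms_key_administrators` are available in your module version, is to make sure the generated policy keeps an administrator able to manage the key:

```hcl
# Sketch of a possible workaround, not the confirmed fix for this issue.
# The account ID and role ARN below are illustrative placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.3"

  # ... existing configuration ...

  # Keep the default statement granting the account root full access to
  # the key, so KMS does not reject the policy as self-locking.
  kms_key_enable_default_policy = true

  # Or name explicit key administrators (e.g. the role running Terraform).
  kms_key_administrators = ["arn:aws:iam::111122223333:role/terraform"]
}
```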
╷
│ Error: [WARN] A duplicate Security Group rule was found on (sg-06f2aceb128c12062). This may be
│ a side effect of a now-fixed Terraform issue causing two security groups with
│ identical attributes but different source_security_group_ids to overwrite each
│ other in the state. See https://github.com/hashicorp/terraform/pull/2376 for more
│ information and instructions for recovery. Error: InvalidPermission.Duplicate: the specified rule "peer: sg-0a0e08d228eaa157e, TCP, from port: 8443, to port: 8443, ALLOW" already exists
│       status code: 400, request id: 84765c23-9ba1-48ca-b1e2-d5a5cdd8784b
│ 
│   with module.eks.module.eks.aws_security_group_rule.node["ingress_admission_webhook_controller"],
│   on .terraform/modules/eks.eks/node_groups.tf line 168, in resource "aws_security_group_rule" "node":
│  168: resource "aws_security_group_rule" "node" {
│ 
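The second error is likely a collision with rules the module now creates by default: v19 extended the default node security group to allow the cluster security group to reach nodes on common webhook/metrics ports (an assumption based on the v19 changelog, worth verifying against your module version). If so, custom rules such as `ingress_admission_webhook_controller` on 8443 duplicate a default rule, and a sketch of the fix is simply to drop the now-redundant entries:

```hcl
# Sketch: if the v19 defaults already allow cluster -> node traffic on
# 4443 and 8443, the custom duplicates can be removed from the map.
node_security_group_additional_rules = {
  ingress_self_all = {
    description = "Node to node all ports/protocols"
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    type        = "ingress"
    self        = true
  }
  # "ingress" (4443) and "ingress_admission_webhook_controller" (8443)
  # removed: they may now collide with the module's default rules.
  egress_all = {
    description      = "Node all egress"
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    type             = "egress"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```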

Versions

  • Module version [Required]: 19.0.0

  • Terraform version: Terraform v1.3.4

  • Provider version(s):
    provider registry.terraform.io/hashicorp/aws v4.45.0
    provider registry.terraform.io/hashicorp/cloudinit v2.2.0
    provider registry.terraform.io/hashicorp/helm v2.7.1
    provider registry.terraform.io/hashicorp/kubernetes v2.16.1
    provider registry.terraform.io/hashicorp/tls v4.0.4

Reproduction Code [Required]

data "aws_iam_policy_document" "eks_inline_additional_policy" {
  statement {
    actions = [
      "ssm:*",
      "quicksight:*",
      "cognito-idp:*",
      "cognito-sync:*",
      "cognito-identity:*",
      "workdocs:*",
      "sqs:*",
      "transfer:*",
      "kms:*",
      "s3:*",
      "lambda:*",
      "cloudfront:*",
      "iam:PassRole",
      "route53:ListHostedZones",
      "route53:GetHostedZoneCount",
      "route53:ListHostedZonesByName",
      "route53:GetHostedZone",
      "route53:ChangeResourceRecordSets",
      "route53:ListResourceRecordSets",
      "events:*"
    ]
    resources = ["*"]
  }
}

data "aws_iam_policy_document" "cognito_create_tags_policy" {
  statement {
    actions = ["ec2:CreateTags"]
    resources = [
      "arn:aws:iam::*:role/aws-service-role/email.cognito-idp.amazonaws.com/AWSServiceRoleForAmazonCognitoIdpEmail*",
    "arn:aws:ec2:*:*:network-interface/*"]
  }
}

data "aws_iam_policy_document" "cognito_createservicelinkedrole_policy" {
  statement {
    actions   = ["iam:CreateServiceLinkedRole"]
    resources = ["*"]
    condition {
      test     = "StringEquals"
      variable = "iam:AWSServiceName"
      values   = ["mail.cognito-idp.amazonaws.com"]
    }
  }
}

data "aws_iam_policy_document" "cognito_servicelinkedrole_policy" {
  statement {
    actions = [
      "iam:GetServiceLinkedRoleDeletionStatus",
      "iam:DeleteServiceLinkedRole"
    ]
    resources = ["arn:aws:iam::*:role/aws-service-role/email.cognito-idp.amazonaws.com/AWSServiceRoleForAmazonCognitoIdpEmail*"]
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.0.3"

  cluster_name                         = format("%s-%s-%s", var.meta["project"], var.meta["environment"], var.aws["region"])
  cluster_version                      = var.cluster_version
  cluster_endpoint_private_access      = var.cluster_endpoint_private_access
  cluster_endpoint_public_access       = var.cluster_endpoint_public_access
  cluster_endpoint_public_access_cidrs = var.cluster_endpoint_public_access_cidrs

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
    }
    kube-proxy = {}
    vpc-cni = {
      resolve_conflicts = "OVERWRITE"
    }
  }

  cluster_encryption_config = var.cluster_encryption_config

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  # aws-auth configmap
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::${var.aws.account_id}:role/Jenkins"
      username = "jenkins"
      groups   = ["aml:jenkins"]
    },
    {
      rolearn  = "arn:aws:iam::${var.aws.account_id}:role/SRE"
      username = "sre"
      groups   = ["aml:sre"]
    },
    {
      rolearn  = "arn:aws:iam::${var.aws.account_id}:role/DEV"
      username = "dev"
      groups   = ["aml:dev"]
    },
    {
      rolearn  = "arn:aws:iam::${var.aws.account_id}:role/QA"
      username = "qa"
      groups   = ["aml:qa"]
    },
  ]

  # Extend cluster security group rules
  cluster_security_group_additional_rules = {
    egress_nodes_ephemeral_ports_tcp = {
      description                = "Cluster API to node pods"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }

  # Extend node-to-node security group rules
  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    },
    ingress = {
      description                   = "Allow EKS cluster to call metrics-server"
      protocol                      = "tcp"
      from_port                     = 4443
      to_port                       = 4443
      type                          = "ingress"
      source_cluster_security_group = true
    }
    ingress_admission_webhook_controller = {
      description                   = "AWS NLB Admission Webhook Controller"
      protocol                      = "tcp"
      from_port                     = 8443
      to_port                       = 8443
      type                          = "ingress"
      source_cluster_security_group = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  self_managed_node_groups = {
    worker_group = {
      name            = format("%s-%s-worker-group", var.meta["project"], var.meta["environment"])
      use_name_prefix = true

      subnet_ids = var.private_subnet_ids

      min_size     = var.min_size
      max_size     = var.max_size
      desired_size = var.desired_size

      ami_id               = var.aws_ami_id
      bootstrap_extra_args = var.bootstrap_extra_args

      pre_bootstrap_user_data = <<-EOT
      export USE_MAX_PODS=false
      EOT

      post_bootstrap_user_data = <<-EOT
      cd /tmp

      # Install AWSLogs
      yum install -y awslogs
      echo "[/var/log/secure]
      datetime_format = %b %d %H:%M:%S
      file = /var/log/secure
      buffer_duration = 5000
      log_stream_name = {instance_id}
      initial_position = start_of_file
      log_group_name = /var/log/secure" > /etc/awslogs/config/secure.conf
      systemctl restart awslogsd.service && echo $? || service awslogs start

      # Install ssm agent
      sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
      sudo systemctl enable amazon-ssm-agent
      sudo systemctl start amazon-ssm-agent
      EOT

      instance_type = var.instance_type

      launch_template_name            = format("%s-%s-worker-group", var.meta["project"], var.meta["environment"])
      launch_template_use_name_prefix = true

      ebs_optimized = true
      # vpc_security_group_ids = [aws_security_group.additional.id]
      enable_monitoring = true

      block_device_mappings = {
        xvda = {
          device_name = "/dev/xvda"
          ebs = {
            volume_size = 20
            volume_type = "gp3"
            iops        = 3000
            throughput  = 125
            # encrypted             = true
            # kms_key_id            = aws_kms_key.ebs.arn
            delete_on_termination = true
          }
        }
      }

      create_iam_role          = true
      iam_role_name            = format("%s-%s-worker-group", var.meta["project"], var.meta["environment"])
      iam_role_use_name_prefix = false
      iam_role_tags            = local.tags
      iam_role_additional_policies = {
        AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
        additional                   = aws_iam_policy.eks-inline-ssm-policy.arn
      }

      create_security_group          = true
      security_group_name            = format("%s-%s-worker-group", var.meta["project"], var.meta["environment"])
      security_group_use_name_prefix = false

      security_group_tags = local.tags

      timeouts = {
        create = "80m"
        update = "80m"
        delete = "80m"
      }

      tags = merge(local.tags, { "k8s.io/cluster-autoscaler/${var.meta["project"]}-${var.meta["environment"]}" = "owned", "k8s.io/cluster-autoscaler/enabled" = "TRUE" })
    }
  }

  tags = local.tags
}

resource "aws_iam_policy" "eks-inline-ssm-policy" {
  name   = "eks-inline-ssm-policy"
  policy = data.aws_iam_policy_document.eks-inline-eks-policy.json

  tags = local.tags
}

data "aws_iam_policy_document" "eks-inline-eks-policy" {
  source_policy_documents = [
    data.aws_iam_policy_document.eks_inline_additional_policy.json,
    data.aws_iam_policy_document.cognito_create_tags_policy.json,
    data.aws_iam_policy_document.cognito_createservicelinkedrole_policy.json,
    data.aws_iam_policy_document.cognito_servicelinkedrole_policy.json
  ]
}

Steps to reproduce the behavior:

  • Are you using workspaces? Yes

  • Have you cleared the local cache (see Notice section above)? Yes

  • List steps in order that led up to the issue you encountered: terraform apply

Expected behavior

terraform apply completes without the KMS key policy or duplicate security group rule errors.

@antonbabenko (Member) commented:

This issue has been resolved in version 19.0.4 🎉
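Picking up the fix means bumping the pinned module version; a minimal sketch (the constraint style is illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0.4" # release carrying the fix from #2328
  # ... existing configuration ...
}
```

Then run `terraform init -upgrade` so Terraform pulls the newer module version.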

@jurgen-weber-deltatre commented:
#2328 (comment)

@github-actions (bot) commented:
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 13, 2023