
Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: #67

Closed
osterman opened this issue Jul 3, 2020 · 5 comments · Fixed by #132
Labels
bug 🐛 An issue with the system

Comments

osterman (Member) commented Jul 3, 2020

what

Error: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists:  The CloudWatch Log Group '/aws/eks/eg-test-eks-cluster/cluster' already exists

why

See: hashicorp/terraform#14750, terraform-aws-modules/terraform-aws-eks#920

This is happening because the EKS cluster gets destroyed after Terraform deletes the CloudWatch Log Group. The AmazonEKSServicePolicy IAM policy (attached to the EKS cluster role by default within this module) has permission to CreateLogGroup, plus everything else needed to keep logging. When Terraform destroys the CloudWatch Log Group, the still-running EKS cluster creates it again. Then, when you run terraform apply again, the CloudWatch Log Group no longer exists in your state (Terraform really did destroy it), and Terraform doesn't know about the copy that was created outside of it. See terraform-aws-modules/terraform-aws-eks#920.
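
To make the mechanism concrete, here is a minimal sketch of the two resources involved (resource and variable names are illustrative, not this module's actual identifiers):

# Terraform owns the log group; EKS always writes control plane logs
# to this exact name.
resource "aws_cloudwatch_log_group" "default" {
  name              = "/aws/eks/${var.cluster_name}/cluster"
  retention_in_days = 90
}

resource "aws_eks_cluster" "default" {
  name = var.cluster_name
  # The role carries AmazonEKSServicePolicy, which includes
  # logs:CreateLogGroup.
  role_arn                  = aws_iam_role.default.arn
  enabled_cluster_log_types = ["api", "audit"]

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# On destroy, the control plane can still be running (and logging) when
# Terraform deletes the log group, so it recreates the group outside of
# Terraform state.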

fix for tests

  • add a random attribute to tests, as sketched below
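
A minimal sketch of that test change (the random_id usage and input names are assumptions, not the actual test fixture):

# Give every test run a unique name suffix so a log group left behind by
# a previous run can never collide with the new one.
resource "random_id" "suffix" {
  byte_length = 4
}

module "eks_cluster" {
  source = "../../"

  name       = "eks"
  attributes = [random_id.suffix.hex]
}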
osterman added the bug 🐛 An issue with the system label Jul 3, 2020

luisllm commented Jul 17, 2020

I am facing this same issue. I can see in the logs that Terraform is able to destroy the EKS cluster and the CWLogGroup. I can also see those delete operations in CloudTrail. But just 1 second after the DeleteLogGroup operation, EKS automatically re-creates the CWLogGroup.

CloudTrail message - CWLogGroup deleted by Terraform:

"eventTime": "2020-07-16T12:54:49Z"
"eventName": "DeleteLogGroup"
"userAgent": "aws-sdk-go/1.32.12 (go1.13.7; linux; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.12.25 (+https://www.terraform.io)"

CloudTrail message - CWLogGroup created by EKS (just 1 second after TF deleted it):

"eventTime": "2020-07-16T12:54:50Z"
"eventName": "CreateLogGroup"
"userAgent": "eks.amazonaws.com"


tatitati commented Jul 9, 2021

Same problem here

nitrocode (Member) commented Aug 10, 2021

Looks like the solution may be to remove the CreateLogGroup permission from the IAM role for VPC flow logs, which is most likely what's recreating the log group.

This module already creates the log group with Terraform, and the EKS cluster already depends on that log group.

See hashicorp/terraform#14750 (comment)

@tatitati @luisllm try removing CreateLogGroup from the IAM role used by your VPC flow logs, as recommended in the above comment. I'm curious whether that fixes it.

Note: the AWS docs' example shows an IAM policy with the above permission (also noted by the linked comment). A trimmed version is sketched below.
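
A hedged sketch of the trimmed flow-logs policy (the statement shape follows the AWS example referenced above, minus the one permission):

# Flow-logs role policy without logs:CreateLogGroup, so only Terraform
# can (re)create the log group.
data "aws_iam_policy_document" "flow_logs" {
  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:DescribeLogGroups",
      "logs:DescribeLogStreams",
      # intentionally omitted: "logs:CreateLogGroup"
    ]
    resources = ["*"]
  }
}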

Nuru (Contributor) commented Nov 2, 2021

Looks like we can copy the fix from terraform-aws-modules/terraform-aws-eks#1594
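
The gist of that fix, as a hedged sketch (see the linked PR for the exact change): attach an explicit Deny for logs:CreateLogGroup to the cluster role, so that even while the control plane is still running during destroy it cannot recreate the group Terraform just deleted. Names here are illustrative:

data "aws_iam_policy_document" "cluster_deny_log_group" {
  statement {
    effect    = "Deny"
    actions   = ["logs:CreateLogGroup"]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "cluster_deny_log_group" {
  name   = "deny-log-group"
  role   = aws_iam_role.default.name
  policy = data.aws_iam_policy_document.cluster_deny_log_group.json
}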


govind-bt commented Feb 22, 2022

@nitrocode @Nuru Currently working with https://github.com/gruntwork-io/terraform-aws-eks/releases/tag/v0.48.0 and seeing the below error:

  # module.eks_cluster.aws_cloudwatch_log_group.control_plane_logs[0] will be created
  + resource "aws_cloudwatch_log_group" "control_plane_logs" {
      + arn               = (known after apply)
      + id                = (known after apply)
      + name              = "/aws/eks/eks-gvenkatesan/cluster"
      + retention_in_days = 0
      + tags_all          = (known after apply)
    }

  # module.eks_cluster.null_resource.customize_aws_vpc_cni[0] will be created
  + resource "null_resource" "customize_aws_vpc_cni" {
      + id       = (known after apply)

      + triggers = {
          + "eks_cluster_endpoint"           = "https://BD3154E241F49386AA86A3959E915DB5.gr7.us-east-1.eks.amazonaws.com"
          + "enable_prefix_delegation"       = "false"
          + "sync_core_components_action_id" = "5862677358994836770"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.


Error: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists:  The CloudWatch Log Group '/aws/eks/eks-gvenkatesan/cluster' already exists.

  on .terraform/modules/eks_cluster/modules/eks-cluster-control-plane/main.tf line 212, in resource "aws_cloudwatch_log_group" "control_plane_logs":
 212: resource "aws_cloudwatch_log_group" "control_plane_logs" {

I don't see any existing resource with that name in the state to import, either:

aws-vault exec 'gvenkatesan' -- terragrunt state list | grep -i aws_cloudwatch_log_group
(no results)

 aws-vault exec 'gvenkatesan' -- terragrunt state list | grep -i module
module.cloudwatch_log_aggregation.data.aws_iam_policy_document.cloudwatch_logs_permissions
module.cloudwatch_log_aggregation.aws_iam_policy.cloudwatch_log_aggregation[0]
module.eks_cluster.data.aws_iam_policy_document.allow_eks_to_assume_role
module.eks_cluster.data.aws_iam_policy_document.allow_fargate_to_assume_role
module.eks_cluster.data.aws_region.current
module.eks_cluster.data.tls_certificate.oidc_thumbprint[0]
module.eks_cluster.aws_eks_cluster.eks
module.eks_cluster.aws_iam_openid_connect_provider.eks[0]
module.eks_cluster.aws_iam_role.eks
module.eks_cluster.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy
module.eks_cluster.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy
module.eks_cluster.aws_security_group.eks
module.eks_cluster.aws_security_group_rule.allow_outbound_all
module.eks_cluster.null_resource.customize_aws_vpc_cni[0]
module.eks_cluster.null_resource.fargate_profile_dependencies
module.eks_cluster.null_resource.sync_core_components[0]
module.eks_cluster.null_resource.wait_for_api
module.eks_k8s_role_mapping.kubernetes_config_map.eks_to_k8s_role_mapping
module.eks_workers_next_version.data.aws_eks_cluster.eks[0]
module.eks_workers_next_version.data.aws_iam_policy_document.allow_describe_ec2_tags
module.eks_workers_next_version.data.aws_iam_policy_document.allow_ec2_instances_to_assume_role
module.eks_workers_next_version.aws_autoscaling_group.eks_worker["asg"]
module.eks_workers_next_version.aws_iam_instance_profile.eks_worker[0]
module.eks_workers_next_version.aws_iam_role.eks_worker[0]
module.eks_workers_next_version.aws_iam_role_policy.allow_describe_ec2_tags[0]
module.eks_workers_next_version.aws_iam_role_policy_attachment.worker_AmazonEC2ContainerRegistryReadOnly[0]
module.eks_workers_next_version.aws_iam_role_policy_attachment.worker_AmazonEKSWorkerNodePolicy[0]
module.eks_workers_next_version.aws_iam_role_policy_attachment.worker_AmazonEKS_CNI_Policy[0]
module.eks_workers_next_version.aws_launch_template.eks_worker["asg"]
module.eks_workers_next_version.aws_security_group.eks_worker[0]
module.irsa_assume_role_policy.data.aws_iam_policy_document.eks_assume_role_policy
module.eks_cluster.module.install_kubergrunt.data.external.executable[0]
module.eks_cluster.module.require_kubergrunt.data.external.required_executable

I'm not sure if this is the right way to import it:

 aws-vault exec 'gvenkatesan' -- terragrunt import module.eks_cluster.aws_cloudwatch_log_group.control_plane_logs[0] /aws/eks/eks-gvenkatesan/cluster
Error: resource address "module.eks_cluster.aws_cloudwatch_log_group.control_plane_logs[0]" does not exist in the configuration.

Before importing this resource, please create its configuration in module.eks_cluster. For example:

resource "aws_cloudwatch_log_group" "control_plane_logs" {
  # (resource arguments)
}

ERRO[0009] 1 error occurred:
        * exit status 1
        

Can you please suggest? Thanks
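
One hedged way to double-check before importing (log group name copied from the plan above): confirm the group really exists in CloudWatch, then quote the indexed address so the shell doesn't mangle the brackets, and run the import from the same terragrunt working directory the plan came from.

# Does the group exist in CloudWatch (outside of state)?
aws logs describe-log-groups --log-group-name-prefix /aws/eks/eks-gvenkatesan/cluster

# If it shows up, import it with the address quoted:
aws-vault exec 'gvenkatesan' -- terragrunt import \
  'module.eks_cluster.aws_cloudwatch_log_group.control_plane_logs[0]' \
  /aws/eks/eks-gvenkatesan/cluster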
