
Destroy never succeeds, DependencyViolation for Security Group #285

Closed
3 of 4 tasks
diversario opened this issue Feb 25, 2019 · 38 comments

Comments

diversario commented Feb 25, 2019

I have issues

I'm submitting a...

  • bug report
  • feature request
  • support request
  • kudos, thank you, warm fuzzy

What is the current behavior?

A cluster cannot be destroyed without manual intervention

If this is a bug, how to reproduce? Please include a code sample if relevant.

Given this (stripped down but working version) of the cluster:

data "aws_region" "current" {}

data "aws_availability_zones" "az" {}

locals {
  worker_groups_launch_template = [
    {
      instance_type        = "t2.small"
      subnets              = "${join(",", module.vpc.private_subnets)}"
      asg_desired_capacity = "2"
    },
    {
      instance_type                            = "t2.small"
      subnets                                  = "${join(",", module.vpc.private_subnets)}"
      override_instance_type                   = "t3.small"
      asg_desired_capacity                     = "2"
      spot_instance_pools                      = 10
      on_demand_percentage_above_base_capacity = "0"
    },
  ]
}

module "vpc" {
  source             = "terraform-aws-modules/vpc/aws"
  version            = "1.57.0"
  name               = "test-eks-thingy"
  cidr               = "192.168.0.0/16"
  azs                = "${data.aws_availability_zones.az.names}"
  private_subnets    = ["192.168.1.0/24", "192.168.2.0/24", "192.168.3.0/24"]
  public_subnets     = ["192.168.4.0/24", "192.168.5.0/24", "192.168.6.0/24"]
  enable_nat_gateway = true
  single_nat_gateway = true
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "2.2.1"

  cluster_version = "1.11"
  cluster_name    = "test-eks-cluster"
  subnets         = ["${module.vpc.private_subnets}"]
  vpc_id          = "${module.vpc.vpc_id}"

  worker_groups_launch_template        = "${local.worker_groups_launch_template}"
  worker_group_launch_template_count   = "1"

  map_roles          = []
  map_roles_count    = 0
  map_users          = []
  map_users_count    = 0
  map_accounts       = []
  map_accounts_count = 0
}

terraform apply completes fine; however, terraform destroy fails with:

module.eks.aws_security_group.workers: Still destroying... (ID: sg-0e5f0b620ea6e8bc0, 9m50s elapsed)
module.eks.aws_security_group.workers: Still destroying... (ID: sg-0e5f0b620ea6e8bc0, 10m0s elapsed)

Error: Error applying plan:

1 error(s) occurred:

* module.eks.aws_security_group.workers (destroy): 1 error(s) occurred:

* aws_security_group.workers: DependencyViolation: resource sg-0e5f0b620ea6e8bc0 has a dependent object
        status code: 400, request id: bc0096af-9940-42bf-a727-e1a51c9d21b3

Network Interfaces in the AWS console ends up looking like this:
[screenshot: Network Interfaces console view showing the leftover interfaces]

Manually detaching the green interfaces and deleting them all allows terraform to complete destruction.
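For anyone hitting this repeatedly, the manual cleanup can be scripted. A minimal sketch of the selection logic (not part of the module; the input is shaped like EC2's DescribeNetworkInterfaces response, and the boto3 calls in the comment are illustrative only — attached ENIs would still need detaching first):

```python
# Sketch: find ENIs that still reference a security group and are already
# detached ("available"), so they can be deleted to unblock the SG.

def enis_blocking_sg(network_interfaces, sg_id):
    """Return IDs of detached ENIs that reference sg_id."""
    blocking = []
    for eni in network_interfaces:
        groups = {g["GroupId"] for g in eni.get("Groups", [])}
        if sg_id in groups and eni.get("Status") == "available":
            blocking.append(eni["NetworkInterfaceId"])
    return blocking

# Illustrative real-world usage (requires AWS credentials):
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.describe_network_interfaces(
#     Filters=[{"Name": "group-id", "Values": ["sg-0e5f0b620ea6e8bc0"]}])
# for eni_id in enis_blocking_sg(resp["NetworkInterfaces"], "sg-0e5f0b620ea6e8bc0"):
#     ec2.delete_network_interface(NetworkInterfaceId=eni_id)

if __name__ == "__main__":
    sample = [
        {"NetworkInterfaceId": "eni-aaa", "Status": "available",
         "Groups": [{"GroupId": "sg-123"}]},
        {"NetworkInterfaceId": "eni-bbb", "Status": "in-use",
         "Groups": [{"GroupId": "sg-123"}]},
    ]
    print(enis_blocking_sg(sample, "sg-123"))  # only the available ENI
```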

What's the expected behavior?

Cluster can be destroyed entirely by terraform itself.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: 2.2.1
  • OS: macOS 10.14.3
  • Terraform version:
Terraform v0.11.11
+ provider.aws v1.60.0
+ provider.local v1.1.0
+ provider.null v2.0.0
+ provider.template v2.0.0

Any other relevant info

Resources remaining after attempted destroy:

Terraform will perform the following actions:

  - module.eks.aws_eks_cluster.this

  - module.eks.aws_iam_role.cluster

  - module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy

  - module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy

  - module.eks.aws_security_group.cluster

  - module.eks.aws_security_group.workers

  - module.vpc.aws_subnet.private[0]

  - module.vpc.aws_subnet.private[1]

  - module.vpc.aws_subnet.private[2]

  - module.vpc.aws_vpc.this
@uday1bhanu

I'm also facing this issue and can attest to the same behavior.

@luisc009

Not sure if this is an issue with the module itself. I had the same problem and to solve it I had to remove all the kubernetes services before destroying it.

@diversario
Author

@LuisC09 manually?

@max-rocket-internet
Contributor

Yes manually. If you create a service with type ELB then k8s will create a security group for this ELB. And this will stop the destroy process.
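One way to make the manual step less error-prone is to enumerate every LoadBalancer Service before running terraform destroy. A small sketch of the selection logic (the input is shaped like the `items` array of `kubectl get services --all-namespaces -o json`; actually deleting them would be a `kubectl delete svc` per pair):

```python
# Sketch: pick out Services of type LoadBalancer so their ELBs (and the
# security groups k8s creates for them) can be removed before destroy.

def loadbalancer_services(items):
    """Return (namespace, name) pairs for every Service of type LoadBalancer."""
    return [
        (svc["metadata"]["namespace"], svc["metadata"]["name"])
        for svc in items
        if svc.get("spec", {}).get("type") == "LoadBalancer"
    ]

if __name__ == "__main__":
    sample = [
        {"metadata": {"namespace": "default", "name": "web"},
         "spec": {"type": "LoadBalancer"}},
        {"metadata": {"namespace": "default", "name": "db"},
         "spec": {"type": "ClusterIP"}},
    ]
    print(loadbalancer_services(sample))  # [('default', 'web')]
```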

uday1bhanu commented Mar 7, 2019

Latest run of EKS cluster creation followed by destroy was successful. Not sure what has changed, but this didn't work before.

Note: in my case, I haven't deployed any apps after provisioning the cluster.

@max-rocket-internet
Contributor

Not sure what has changed, but this didnt work before.

OK no worries. Feel free to debug and add info here.

jsa4000 commented Mar 28, 2019

I have just used this module, having moved from on-premises, to try creating an EKS cluster with Terraform. In my case I used a slightly modified version of the example fixture: apply and then destroy, with no other interaction with the EKS cluster. I got two DependencyViolation errors for security groups attached to interfaces.

jsa4000 commented Mar 29, 2019

Hi,
I just tested merge request #311. It fixes my issue with the DependencyViolation, so I can destroy the cluster without any problem.

@max-rocket-internet
Contributor

OK cool, then perhaps we merge that PR to solve this issue. It sounds like it would be a popular option.

Question: Don't you have left over ENIs and security groups after cluster is destroyed?

jsa4000 commented Apr 5, 2019

Hi, I don't think so. The setup is very simple since it was just for a PoC.
You can get more into the code here.

Below are some fragments of the Terraform code:

locals {
  tags = {
    Environment = "${var.environment}"
    Owner       = "${var.owner}"
    Workspace   = "${var.cluster_name}"
  }
  worker_groups = [
    {
      instance_type        = "${var.instance_type}"
      key_name             = "${var.key_name}"
      subnets              = "${join(",", var.subnets)}"
      additional_userdata  = "${file("${path.module}/user_data.sh")}"
      asg_desired_capacity = "${var.asg_desired_capacity}"
    },
  ]
  worker_groups_launch_template = [
    {
      instance_type                            = "${var.instance_type}"
      key_name                                 = "${var.key_name}"
      subnets                                  = "${join(",", var.subnets)}"
      additional_userdata                      = "${file("${path.module}/user_data.sh")}"
      asg_desired_capacity                     = "${var.asg_spot_desired_capacity}"
      spot_instance_pools                      = "${var.spot_instance_pools}"
      on_demand_percentage_above_base_capacity = "0"
    },
]
}

module "eks" {
  source                               = "./terraform-aws-eks"
  #source                               = "terraform-aws-modules/eks/aws"
  #version                              = "2.3.1" 
  cluster_name                         = "${var.cluster_name}"
  subnets                              = ["${var.subnets}"]
  vpc_id                               = "${var.vpc_id}"
  worker_groups                        = "${local.worker_groups}"
  worker_groups_launch_template        = "${local.worker_groups_launch_template}"
  worker_group_count                   = 1
  worker_group_launch_template_count   = 1
  worker_additional_security_group_ids = ["${aws_security_group.eks_sec_group.id}"]

  tags                                 = "${local.tags}"
}

resource "aws_security_group" "eks_sec_group" {
  name_prefix             = "eks-sec-group"
  description             = "Security to be applied for eks nodes"
  vpc_id                  = "${var.vpc_id}"
  
  ingress {
    from_port             = 22
    to_port               = 22
    protocol              = "tcp"
    cidr_blocks           = [
      "10.0.0.0/8",
      "172.16.0.0/12",
      "192.168.0.0/16",
    ]
  }
  tags                    = "${merge(local.tags, map("Name", "${var.cluster_name}-database_sec_group"))}"      
}
data "aws_availability_zones" "available" {}

locals {
  network_count = "${length(data.aws_availability_zones.available.names)}"

  tags = {
    Environment = "${var.environment}"
    Owner       = "${var.owner}"
    Workspace   = "${var.cluster_name}"
  }
}

resource "aws_route53_zone" "hosted_zone" {
  name      = "eks-lab.com"
  comment   = "Private hosted zone for eks cluster"

  vpc {
    vpc_id  = "${module.vpc.vpc_id}"
  }

  tags      = "${local.tags}"
}

module "vpc" {
  source               = "terraform-aws-modules/vpc/aws"
  version              = "1.60.0"
  name                 = "${var.cluster_name}"
  cidr                 = "${var.cidr_block}"
  azs                  = ["${data.aws_availability_zones.available.names[0]}", "${data.aws_availability_zones.available.names[1]}", "${data.aws_availability_zones.available.names[2]}"]
  public_subnets      = [
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, 0)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, 1)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, 2)}"
  ]
  private_subnets       = [
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count + 1)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count + 2)}"
  ]
  database_subnets  = [
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count + 3)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count + 4)}", 
    "${cidrsubnet(var.cidr_block, var.cidr_subnet_bits, local.network_count + 5)}"
  ]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
  tags                 = "${merge(local.tags, map("kubernetes.io/cluster/${var.cluster_name}", "shared"))}"
}
terraform {
  required_version = ">= 0.11.8"
}

provider "aws" {
  version    = ">= 1.47.0"
  region     = "${var.region}"
}

module "vpc" {
  source           = "./vpc"
  owner            = "${var.owner}"
  environment      = "${var.environment}"
  cluster_name     = "${var.cluster_name}"
  cidr_block       = "${var.cidr_block}"
  cidr_subnet_bits = "${var.cidr_subnet_bits}"
}

module "rds" {
...
}

module "eks" {
  source                    = "./eks"
  owner                     = "${var.owner}"
  environment               = "${var.environment}"
  cluster_name              = "${var.cluster_name}"
  vpc_id                    = "${module.vpc.vpc_id}"
  key_name                  = "${module.bastion.key_name}"
  subnets                   = "${module.vpc.private_subnets}"
  instance_type             = "${var.eks_instance_type}"
  asg_desired_capacity      = "${var.eks_asg_desired_capacity}"
  asg_spot_desired_capacity = "${var.eks_asg_spot_desired_capacity}"
}


module "bastion" {
 ...
}

danielsiwiec commented May 7, 2019

Also experiencing this issue in 3.0.0:

Error: Error applying plan:

2 error(s) occurred:

* aws_security_group.all_worker_mgmt (destroy): 1 error(s) occurred:

* aws_security_group.all_worker_mgmt: DependencyViolation: resource sg-0caaa8517b45c88af has a dependent object
	status code: 400, request id: 075224e9-6732-40ac-a77d-e0935b7b1bed
* module.eks.aws_security_group.workers (destroy): 1 error(s) occurred:

* aws_security_group.workers: DependencyViolation: resource sg-0c333ddbea0342038 has a dependent object
	status code: 400, request id: a107a26c-bac7-498b-af73-8b76e4e52c58

Running it the second time was successful.

@AirbornePorcine

This is happening for me too, and I think I know why.

I set up my cluster to have private access only. The ENIs that hang around and prevent deletion of the SG are created by Amazon accounts. I suspect they're created to allow access from the workers to the endpoint via private IPs.

In any case, it seems to be an order-of-operations issue: if you first manually destroy the EKS cluster (via console or CLI), the ENIs disappear and destruction of all other resources proceeds without issue. Of course, that confuses things, because destroying the cluster first and then the workers doesn't make much sense. Or maybe it doesn't make a difference? That could be a solution to this.

sjmiller609 commented Jul 11, 2019

This issue occurs for me in 5.0.0 https://cloud.drone.io/astronomer/terraform-kubernetes-astronomer/8/1/4

I think it's because I am using the parameter worker_additional_security_group_ids

@petrikero
Contributor

I'm getting the same with 5.1.0 as well, if I use 'worker_groups' to create the worker node pools. The ENIs don't get destroyed with the instances, which prevents the destruction of the worker node security group. But if I use 'worker_groups_launch_template' to create the worker node pools, then the ENIs get destroyed with the instances, and the SG destruction works as expected.

Is there a down side to using worker_groups_launch_template? Maybe it could be the default or recommended way of creating worker node pools?
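The observation above can be checked directly: an attached ENI carries a DeleteOnTermination flag on its attachment. A sketch of that check (the input is shaped like EC2's DescribeNetworkInterfaces response; field names follow the EC2 API):

```python
# Sketch: flag attached ENIs whose attachment will NOT be deleted when the
# instance terminates -- these are the candidates for leaking and blocking
# the security group on destroy.

def non_self_terminating(network_interfaces):
    """Return IDs of attached ENIs with DeleteOnTermination unset or false."""
    leaky = []
    for eni in network_interfaces:
        attachment = eni.get("Attachment")
        if attachment and not attachment.get("DeleteOnTermination", False):
            leaky.append(eni["NetworkInterfaceId"])
    return leaky

if __name__ == "__main__":
    sample = [
        {"NetworkInterfaceId": "eni-1", "Attachment": {"DeleteOnTermination": False}},
        {"NetworkInterfaceId": "eni-2", "Attachment": {"DeleteOnTermination": True}},
        {"NetworkInterfaceId": "eni-3"},  # detached: nothing to terminate with
    ]
    print(non_self_terminating(sample))  # only the first ENI is flagged
```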

stale bot commented Jan 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jan 3, 2020
stale bot commented Feb 2, 2020

This issue has been automatically closed because it has not had recent activity since being marked as stale.

@stale stale bot closed this as completed Feb 2, 2020
canhnt commented Apr 13, 2020

/remove-lifecycle stale

@barryib barryib reopened this Apr 13, 2020
@stale stale bot removed the stale label Apr 13, 2020
barryib (Member) commented Apr 13, 2020

@canhnt are you still experiencing this issue ? Is your issue related to this PR #815 ?

canhnt commented Apr 14, 2020

@canhnt are you still experiencing this issue ? Is your issue related to this PR #815 ?

I got the following error when destroying eks with the module terraform-aws-eks:v11.0.0:

Error: Error deleting security group: DependencyViolation: resource sg-017efb07a174d33dc has a dependent object

I think it may not be related to #815 because the cluster is created with a public access endpoint.

@dpiddockcmp
Contributor

DependencyViolation is an error returned by the AWS API when a resource is still in use.

Can you find out what was still using the security group when terraform tried to delete it?

canhnt commented Apr 14, 2020

DependencyViolation is an error returned by the AWS API when a resource is still in use.

Can you find out what was still using the security group when terraform tried to delete it?

We created EKS with custom networking (pod IPs are in different subnets).

The security group that TF tried and failed to delete refers to the ENI of the pod subnets. After the failure, I checked and the ENI was in the "available" state and could be deleted manually; after that I could delete the security group as well.

I suspect this bug may be related to the leaking-ENI issue, where additional ENIs are not deleted when a worker node is decommissioned.

Update: I can reproduce the issue. When a worker node is deleted and in the terminated state, two ENIs with the tag node.k8s.amazonaws.com/instance_id=<id> remain in the available state and are not deleted. This prevents the worker node SG (with description "Security group for all nodes in the cluster.") from being deleted.
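The leaked ENIs described above can be located by that CNI tag. A sketch of the filter (the input is shaped like EC2's DescribeNetworkInterfaces response, where ENI tags appear under TagSet; the tag key is the one observed above):

```python
# Sketch: find ENIs left behind by the AWS CNI plugin -- tagged with
# node.k8s.amazonaws.com/instance_id but sitting in the 'available' state
# after their worker node was terminated.

CNI_TAG = "node.k8s.amazonaws.com/instance_id"

def leaked_cni_enis(network_interfaces):
    """Return IDs of available ENIs carrying the CNI instance-id tag."""
    leaked = []
    for eni in network_interfaces:
        tags = {t["Key"] for t in eni.get("TagSet", [])}
        if eni.get("Status") == "available" and CNI_TAG in tags:
            leaked.append(eni["NetworkInterfaceId"])
    return leaked

if __name__ == "__main__":
    sample = [
        {"NetworkInterfaceId": "eni-1", "Status": "available",
         "TagSet": [{"Key": CNI_TAG, "Value": "i-0abc"}]},
        {"NetworkInterfaceId": "eni-2", "Status": "in-use",
         "TagSet": [{"Key": CNI_TAG, "Value": "i-0def"}]},
    ]
    print(leaked_cni_enis(sample))  # only the available, tagged ENI
```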

@rorybyrne

I think I ran into this today, but I'm not using this module (I use the AWS provider directly).

My destroy job was stuck destroying a public subnet and timed out. I found an EKS security group which was attached to an ENI, so I deleted both and then the subnet was destroyed normally on the second attempt. I guess the subnet was waiting on the security group, and the security group was waiting on the ENI like @canhnt mentioned?

For context, I had a LoadBalancer deployed via Kubernetes when I started the Terraform destroy, and I used aws_eks_node_group to provision the workers.

Hope this helps.

haofeif commented May 26, 2020

Same here. Still experiencing this during destroy, but I am using the private endpoint.

@sighupper

I ran into this as well: private ENIs lingering after a terraform destroy on a vanilla/fresh cluster.
It seems to be controlled by:

delete_on_termination = lookup(

So I added eni_delete to my worker_groups config. That is:

module "eks-cluster" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "foobar"
  cluster_version = "1.16"
  ...

  worker_groups = [
    {
      ...
      eni_delete = "true"
    }
  ]
}

This seems to have corrected the issue. I am not using templates in my code explicitly (only the ones in the module, implicitly). What I don't understand is: if local.tf has eni_delete = "true", why did I have to set it explicitly?

@dpiddockcmp
Contributor

Hi @sighupper .

The eni_delete setting only applies to launch templates. Setting the value in worker_groups will make no changes to how terraform runs.

Tokynet commented Aug 11, 2020

Today I ran into this issue; I will troubleshoot and add more details. I do think it's the deployment of Kubernetes resources into the cluster, which then creates AWS resources, that is making this hang.

terraform-aws-modules/eks/aws =>  v12.2.0
cluster_version => 1.17


➜ terraform --version
Terraform v0.12.21
+ provider.aws v2.70.0
+ provider.external v1.2.0
+ provider.helm v1.2.4
+ provider.kubernetes v1.12.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.random v2.3.0
+ provider.template v2.1.2
+ provider.tls v2.2.0

Tokynet commented Aug 14, 2020

I found my issue: I had a null_resource creating an IngressRoute which, in turn, created more resources. Although I was running terraform destroy in the directory that created these resources, the null_resource was only for creating, so it had no way to destroy what it created.

stale bot commented Nov 12, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Nov 12, 2020
stale bot commented Dec 12, 2020

This issue has been automatically closed because it has not had recent activity since being marked as stale.

@stale stale bot closed this as completed Dec 12, 2020
TjeuKayim (Contributor) commented Mar 4, 2021

I occasionally ran into this same issue over the past months. When running terraform destroy, one security group can't be deleted because of a dangling elastic network interface (ENI) from an EKS worker node. Since February, I have experienced this issue 10 out of 10 runs (often 1 ENI remains behind, sometimes 2), probably because I updated the Terraform version to 0.14 or updated a module or provider (I'm still tracing down exactly which change in my code caused this).

What I already tried: my Terraform definitions don't use a null_resource. Checked CNI version v1.7.5. Checked that the ENIs are marked as delete_on_termination. Tried the latest Terraform 0.15 beta.
Unfortunately, I'm still facing the same problem.

TjeuKayim (Contributor) commented Mar 8, 2021

In the table below I recorded 19 runs of terraform apply and terraform destroy on the same Terraform declarations with different Terraform versions. Sometimes the destroy succeeded; other times it failed with the DependencyViolation for Security Group error because 1 or 2 elastic network interfaces remained behind. For my infrastructure as code, older Terraform versions seem to have a lower chance of throwing this error, but it is a bit random.
Should I open a new GitHub issue or can this issue be reopened?

| Terraform version | Test count | Failed runs |
| --- | --- | --- |
| v0.15-0-dev e9c7f37b8 | 4 | 100% |
| v0.14.5 | 3 | 100% |
| v0.14.0-beta2 | 1 | 100% |
| v0.14.0-beta1 | 1 | 100% |
| v0.14.0-alpha20201007 | 2 | 50% |
| v0.14.0-alpha20200923 | 2 | 100% |
| v0.14.0-alpha20200910 | 3 | 100% |
| v0.14.0-dev a176aaa4d | 2 | 50% |
| v0.13.5 | 3 | 67% |
| v0.13.2 | 4 | 75% |

@sujeetkp

I am still getting the same issue, using Terraform version 1.0.3 and AWS provider version 3.50.0.
Is this issue resolved, or is there any workaround?

@sujeetkp

/remove-lifecycle stale

@harish422


I am still seeing this issue. Not sure why the security group keeps lingering around, since there were attached ENIs; once I delete those ENIs, it works fine. Has anyone got it working, or found any workarounds?

Below is the destroy console output:

aws_security_group_rule.HRIT-cluster-ingress-workstation-https: Destroying... [id=sgrule-1571703115]
aws_route_table_association.HR_K8s_2: Destroying... [id=rtbassoc-03f94311c7ba33161]
aws_security_group_rule.Marketing-cluster-ingress-workstation-https: Destroying... [id=sgrule-51171598]
aws_security_group_rule.Sales-cluster-ingress-workstation-https: Destroying... [id=sgrule-497293064]
aws_route_table_association.HR_K8s_1: Destroying... [id=rtbassoc-09e0dc9234365d0f2]
aws_route_table_association.IT_K8s_1: Destroying... [id=rtbassoc-097e9aa1644f674bf]
aws_route_table_association.Marketing_K8s_1: Destroying... [id=rtbassoc-0e1e95ac378434207]
aws_eks_node_group.CustA-HR: Destroying... [id=HRIT:CustA-HR]
aws_route_table_association.Sales_K8s_1: Destroying... [id=rtbassoc-035d13faffdba06a1]
aws_eks_node_group.CustA-Marketing: Destroying... [id=Marketing:CustA-Marketing]
aws_route_table_association.HR_K8s_1: Destruction complete after 6s
aws_route_table_association.Sales_K8s_2: Destroying... [id=rtbassoc-0e42b4df7a4336f90]
aws_route_table_association.HR_K8s_2: Destruction complete after 6s
aws_route_table_association.IT_K8s_1: Destruction complete after 6s
aws_route_table_association.IT_K8s_2: Destroying... [id=rtbassoc-009417f8c59015db1]
aws_eks_node_group.CustA-IT: Destroying... [id=HRIT:CustA-IT]
aws_route_table_association.Marketing_K8s_1: Destruction complete after 6s
aws_route_table_association.Marketing_K8s_2: Destroying... [id=rtbassoc-04c3457f8677c663f]
aws_route_table_association.Sales_K8s_1: Destruction complete after 6s
aws_eks_node_group.CustA-Sales: Destroying... [id=Sales:CustA-Sales]
aws_security_group_rule.Sales-cluster-ingress-workstation-https: Destruction complete after 7s
aws_security_group_rule.Marketing-cluster-ingress-workstation-https: Destruction complete after 7s
aws_security_group_rule.HRIT-cluster-ingress-workstation-https: Destruction complete after 7s
aws_route_table_association.Marketing_K8s_2: Destruction complete after 1s
aws_route_table_association.IT_K8s_2: Destruction complete after 1s
aws_route_table_association.Sales_K8s_2: Destruction complete after 1s
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 1m50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 1m50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 1m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 1m50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m10s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m20s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m20s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m30s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m30s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m30s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m40s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m40s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m40s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 2m50s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 2m50s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 2m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 2m50s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m0s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 3m0s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m0s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m10s elapsed]
aws_eks_node_group.CustA-Marketing: Still destroying... [id=Marketing:CustA-Marketing, 3m10s elapsed]
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m10s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m10s elapsed]
aws_eks_node_group.CustA-Marketing: Destruction complete after 3m19s
aws_iam_role_policy_attachment.Marketing-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-Marketing-node-2021073016173864380000000b]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-Marketing-node-2021073016173864660000000c]
aws_eks_cluster.Marketing: Destroying... [id=Marketing]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-Marketing-node-2021073016173861290000000a]
aws_iam_role_policy_attachment.Marketing-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-node-AmazonEKSWorkerNodePolicy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role.Marketing-node: Destroying... [id=terraform-eks-Marketing-node]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m20s elapsed]
aws_iam_role.Marketing-node: Destruction complete after 2s
aws_eks_node_group.CustA-IT: Still destroying... [id=HRIT:CustA-IT, 3m20s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m20s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 10s elapsed]
aws_eks_node_group.CustA-IT: Destruction complete after 3m24s
aws_iam_role_policy_attachment.IT-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-IT-node-2021073016174542760000000d]
aws_iam_role_policy_attachment.IT-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-IT-node-2021073016174543030000000e]
aws_iam_role_policy_attachment.IT-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-IT-node-2021073016175061740000000f]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m30s elapsed]
aws_iam_role_policy_attachment.IT-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.IT-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role_policy_attachment.IT-node-AmazonEKSWorkerNodePolicy: Destruction complete after 1s
aws_iam_role.IT-node: Destroying... [id=terraform-eks-IT-node]
aws_iam_role.IT-node: Destruction complete after 2s
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m30s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 20s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m40s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 30s elapsed]
aws_eks_node_group.CustA-HR: Still destroying... [id=HRIT:CustA-HR, 3m50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 3m50s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 40s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m0s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 50s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m10s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m0s elapsed]
aws_eks_node_group.CustA-Sales: Still destroying... [id=Sales:CustA-Sales, 4m20s elapsed]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m10s elapsed]
aws_eks_node_group.CustA-Sales: Destruction complete after 4m30s
aws_iam_role_policy_attachment.Sales-node-AmazonEKS_CNI_Policy: Destroying... [id=terraform-eks-Sales-node-20210730161737943500000006]
aws_iam_role_policy_attachment.Sales-node-AmazonEKSWorkerNodePolicy: Destroying... [id=terraform-eks-Sales-node-20210730161737934000000005]
aws_iam_role_policy_attachment.Sales-node-AmazonEC2ContainerRegistryReadOnly: Destroying... [id=terraform-eks-Sales-node-20210730161737944100000008]
aws_eks_cluster.Sales: Destroying... [id=Sales]
aws_iam_role_policy_attachment.Sales-node-AmazonEKS_CNI_Policy: Destruction complete after 1s
aws_iam_role_policy_attachment.Sales-node-AmazonEC2ContainerRegistryReadOnly: Destruction complete after 1s
aws_iam_role_policy_attachment.Sales-node-AmazonEKSWorkerNodePolicy: Destruction complete after 2s
aws_iam_role.Sales-node: Destroying... [id=terraform-eks-Sales-node]
aws_eks_cluster.Marketing: Still destroying... [id=Marketing, 1m20s elapsed]
aws_iam_role.Sales-node: Destruction complete after 2s
aws_eks_cluster.Sales: Still destroying... [id=Sales, 10s elapsed]
aws_eks_cluster.Marketing: Destruction complete after 1m28s
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSServicePolicy: Destroying... [id=terraform-eks-Marketing-cluster-20210730161732278700000002]
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSClusterPolicy: Destroying... [id=terraform-eks-Marketing-cluster-20210730161732274700000001]
aws_security_group.Marketing-cluster: Destroying... [id=sg-0b666285a10238f62]
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSServicePolicy: Destruction complete after 1s
aws_iam_role_policy_attachment.Marketing-cluster-AmazonEKSClusterPolicy: Destruction complete after 1s
aws_iam_role.Marketing-cluster: Destroying... [id=terraform-eks-Marketing-cluster]
aws_security_group.Marketing-cluster: Destruction complete after 2s
aws_iam_role.Marketing-cluster: Destruction complete after 3s
aws_eks_cluster.Sales: Still destroying... [id=Sales, 20s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 30s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 40s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 50s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m0s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m10s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m20s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m30s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m40s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 1m50s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m0s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m10s elapsed]
aws_eks_cluster.Sales: Still destroying... [id=Sales, 2m20s elapsed]
aws_eks_cluster.Sales: Destruction complete after 2m22s
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSServicePolicy: Destroying... [id=terraform-eks-Sales-cluster-20210730161732303600000004]
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSClusterPolicy: Destroying... [id=terraform-eks-Sales-cluster-20210730161732301500000003]
aws_security_group.Sales-cluster: Destroying... [id=sg-086007437e95e69da]
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSServicePolicy: Destruction complete after 0s
aws_iam_role_policy_attachment.Sales-cluster-AmazonEKSClusterPolicy: Destruction complete after 0s
aws_iam_role.Sales-cluster: Destroying... [id=terraform-eks-Sales-cluster]
aws_security_group.Sales-cluster: Destruction complete after 1s
aws_iam_role.Sales-cluster: Destruction complete after 3s

Error: error waiting for EKS Node Group (HRIT:CustA-HR) deletion: Ec2SecurityGroupDeletionFailure: DependencyViolation - resource has a dependent object. Resource IDs: [sg-0f2eef7aff2bb7765]
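The root cause behind this class of failure is that EKS leaves behind network interfaces (ENIs) attached to the cluster security group, so the group deletion fails with DependencyViolation. Not from this thread's config, but one workaround is a destroy-time provisioner that deletes leftover ENIs before the security group is destroyed. A sketch only: it assumes the AWS CLI is installed and configured for the right region, and `aws_security_group.cluster` is a placeholder for whichever security group resource fails to delete in your configuration:

```hcl
# Hypothetical cleanup helper: runs on destroy, before the referenced
# security group is deleted, and removes any ENIs still attached to it.
resource "null_resource" "sg_eni_cleanup" {
  triggers = {
    sg_id = aws_security_group.cluster.id
  }

  provisioner "local-exec" {
    when    = destroy
    # Destroy-time provisioners may only reference self, so the SG id
    # is passed through triggers.
    command = <<-EOT
      for eni in $(aws ec2 describe-network-interfaces \
          --filters Name=group-id,Values=${self.triggers.sg_id} \
          --query 'NetworkInterfaces[].NetworkInterfaceId' --output text); do
        aws ec2 delete-network-interface --network-interface-id "$eni" || true
      done
    EOT
  }
}
```

The same ENI cleanup can also be run by hand (`aws ec2 describe-network-interfaces --filters Name=group-id,Values=sg-...`, then `delete-network-interface`) to unblock a stuck `terraform destroy` without changing the configuration.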

@ddvdozuki

I am also having this issue. Not sure why this issue is closed; it still seems to be a problem.

@lanejlanej

Also still seeing this.

@Loag commented Nov 1, 2022

#1267 & #2004 why keep closing this issue?

@github-actions bot commented Dec 2, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Dec 2, 2022