Example IAM policy is insufficient #935

Closed
WojciechKarpiel opened this issue Jun 15, 2021 · 17 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@WojciechKarpiel

/kind bug

What happened?
I was following this guide, and when my pod attempted to use a restored PVC (with a snapshot dataSource), I got the following error:

0s          Warning   ProvisioningFailed              persistentvolumeclaim/mongo6-mongod-persistent-storage-claim-mongo6-mongod-0   failed to provision volume with StorageClass "ssd-xfs": rpc error: code = Internal desc = Could not create volume "pvc-29a86a12-d64c-4ffe-b799-a63209267737": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-04da06270c9fd721e' does not exist.
            status code: 400, request id: 4f9dfe64-23dd-428f-8fbc-15b5a84bb444

What you expected to happen?
I expected the PVC to be created and bound successfully.

How to reproduce it (as minimally and precisely as possible)?
Just follow the AWS guide: https://aws.amazon.com/blogs/containers/using-ebs-snapshots-for-persistent-storage-with-your-eks-cluster/
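
For context, the restore step in that guide boils down to a PVC that points at a VolumeSnapshot as its dataSource, roughly like this (a sketch only; the names and size here are illustrative, not the exact manifests from the guide):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-claim
spec:
  storageClassName: ssd-xfs
  dataSource:
    name: new-snapshot-test7
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi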

Anything else we need to know?:
I suspect this is an insufficient-permissions problem. I used this IAM policy.
After I added the entire universe to my permission list (as below), I was able to create and restore snapshots successfully:

      {
        "Effect": "Allow",
        "Action": ["*"],
        "Resource": "*"
      },

Environment

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-eks-6b7464", GitCommit:"6b746440c04cb81db4426842b4ae65c3f7035e53", GitTreeState:"clean", BuildDate:"2021-03-19T19:33:03Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  • Driver version:
    1.0.0 (Helm release)
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 15, 2021
@wongma7
Contributor

wongma7 commented Jun 15, 2021

Is the error message accurate, though? "InvalidVolume.NotFound: The volume 'vol-04da06270c9fd721e' does not exist." Why is EC2 returning this error; does this volume actually exist or not?

@wongma7
Contributor

wongma7 commented Jun 15, 2021

Somewhat related to this issue: I think we do need to provide more examples, with more documentation. The current example https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/example-iam-policy.json is a bit too restrictive because it breaks in the migration scenario. It also doesn't match the policy we use for e2e testing, https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/hack/kops-patch.yaml#L6, which looks a lot like the 0.6.0 one. However, I am still a bit baffled as to how even the 0.6.0 policy was not sufficient in this specific case.

@WojciechKarpiel
Author

WojciechKarpiel commented Jun 17, 2021

I've tried again with the current policy; the results are the same.

Sorry, I missed a crucial detail:
the Kubernetes events say the snapshot creation failed.

17s         Normal    CreatingSnapshot                   volumesnapshot/new-snapshot-test7                                              Waiting for a snapshot mongotest1/new-snapshot-test7 to be created by the CSI driver.
17s         Warning   SnapshotFinalizerError             volumesnapshot/new-snapshot-test7                                              Failed to check and update snapshot: snapshot controller failed to update mongotest1/new-snapshot-test7 on API server: Operation cannot be fulfilled on volumesnapshots.snapshot.storage.k8s.io "new-snapshot-test7": the object has been modified; please apply your changes to the latest version and try again
17s         Warning   SnapshotContentCreationFailed      volumesnapshot/new-snapshot-test7                                              Failed to create snapshot content with error snapshot controller failed to update elopvc9 on API server: Operation cannot be fulfilled on persistentvolumeclaims "elopvc9": the object has been modified; please apply your changes to the latest version and try again

Despite this, the VolumeSnapshot is in the ready: true state, but creating a PVC from it fails (as described above).
Do you know why the PVC is modified? The modified-but-a-newer-version-exists error makes little sense to me, because everything works when I have all possible permissions.

Why is ec2 returning this error, does this volume actually exist or not?

The volume is missing from the volume list (I'm an AWS noob; I think it should appear here, next to the volumes underlying the other PVs: https://eu-west-1.console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Volumes:sort=desc:createTime ).
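
A quick way to double-check from the CLI, assuming your credentials and region point at the same account as the cluster:

aws ec2 describe-volumes --volume-ids vol-04da06270c9fd721e --region eu-west-1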

Is there another way to check what permissions are required for the task?
Something like:

  1. Give yourself all permissions.
  2. Click some button in AWS to record all permissions used from now on.
  3. Restore the snapshot and see what was used.

@wongma7
Contributor

wongma7 commented Jun 17, 2021

Is there another way to check what permissions are required for the task?

There is this new feature called IAM Access Analyzer. I have never tried it, but in theory, if you analyze the role that the policy is attached to, it will spit out exactly what you want, and we can diff it against the example.
https://aws.amazon.com/blogs/security/iam-access-analyzer-makes-it-easier-to-implement-least-privilege-permissions-by-generating-iam-policies-based-on-access-activity/
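
If you want to try it, the flow is roughly the following. This is a sketch from memory and I have not verified the exact parameter shapes, so check aws accessanalyzer help before running it:

# Assumptions: a recent AWS CLI v2, and a cloud-trail-details.json file that
# describes the trail to analyze, the access role, and the time window.
aws accessanalyzer start-policy-generation \
  --policy-generation-details principalArn=arn:aws:iam::<ACCOUNT_ID>:role/<EBS_CSI_CONTROLLER_ROLE> \
  --cloud-trail-details file://cloud-trail-details.json

# Once the job finishes, fetch the generated policy by job id:
aws accessanalyzer get-generated-policy --job-id <JOB_ID>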

As for the events: in my experience, "the object has been modified; please apply your changes to the latest version and try again" errors are usually intermittent; they indicate that the snapshotter's internal cache is somehow out of date. But like you said, it doesn't really make sense given that changing IAM permissions fixes the issue, so I think it's a red herring.

@WojciechKarpiel
Author

I've used IAM Access Analyzer; it generated an insufficient policy (same problem as with the example policy).

For reference, the IAM policy generated by the tool:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"ec2:DescribeInstances",
				"ec2:DescribeSnapshots",
				"ec2:DescribeVolumes"
			],
			"Resource": "*"
		},
		{
			"Effect": "Allow",
			"Action": "ec2:AttachVolume",
			"Resource": "arn:aws:ec2:${Region}:${Account}:instance/${InstanceId}"
		},
		{
			"Effect": "Allow",
			"Action": [
				"ec2:AttachVolume",
				"ec2:CreateSnapshot",
				"ec2:CreateVolume",
				"ec2:DetachVolume"
			],
			"Resource": "arn:aws:ec2:${Region}:${Account}:volume/${VolumeId}"
		},
		{
			"Effect": "Allow",
			"Action": "ec2:CreateSnapshot",
			"Resource": "arn:aws:ec2:${Region}::snapshot/${SnapshotId}"
		},
		{
			"Effect": "Allow",
			"Action": [
				"kms:CreateGrant",
				"kms:GenerateDataKeyWithoutPlaintext"
			],
			"Resource": "arn:aws:kms:${Region}:${Account}:key/${KeyId}"
		}
	]
}

@wongma7
Contributor

wongma7 commented Jun 22, 2021

Can you share logs or excerpts from the driver from around the time of the error?

kubectl logs -n kube-system $(kubectl get lease -n kube-system ebs-csi-aws-com -o=jsonpath="{.spec.holderIdentity}") ebs-plugin

The kubectl get lease -n kube-system ebs-csi-aws-com -o=jsonpath="{.spec.holderIdentity}" part is there to find out which Pod replica is the leader. From that Pod we want the ebs-plugin container logs, because that is the only thing making AWS API calls, and I am hoping it logs a helpful error.

kubectl logs -n kube-system $(kubectl get lease -n kube-system ebs-csi-aws-com -o=jsonpath="{.spec.holderIdentity}") csi-snapshotter

Also, does the StorageClass have WaitForFirstConsumer like here?

One explanation for the original volume NotFound error is that the Pod trying to use the PVC got scheduled to a different zone than the one vol-04da06270c9fd721e was in.
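
For reference, the kind of StorageClass I mean looks roughly like this (a minimal sketch; only the fstype is taken from your logs, everything else is illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-xfs
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  csi.storage.k8s.io/fstype: xfs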

@WojciechKarpiel
Author

Hi!

Logs from csi-snapshotter (none except for initialization logs):

I0623 05:41:48.689256       1 main.go:87] Version: v3.0.3
I0623 05:41:48.690774       1 connection.go:153] Connecting to unix:///var/lib/csi/sockets/pluginproxy/csi.sock
W0623 05:41:48.695108       1 metrics.go:333] metrics endpoint will not be started because `metrics-address` was not specified.
I0623 05:41:48.695124       1 common.go:111] Probing CSI driver for readiness
I0623 05:41:48.696977       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/external-snapshotter-leader-ebs-csi-aws-com...

Excerpt from the logs of ebs-plugin (I've separated creating the original volume and restoring the snapshot with a blank line for readability; there were no actual logs in between. The error repeats over and over):

I0623 05:45:40.370682       1 controller.go:101] CreateVolume: called with args {Name:pvc-5581ccd3-96f4-4499-9bc1-fc87f17e25e3 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0623 05:45:43.838180       1 inflight.go:69] Node Service: volume="name:\"pvc-5581ccd3-96f4-4499-9bc1-fc87f17e25e3\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > > " operation finished

I0623 05:47:25.139136       1 controller.go:101] CreateVolume: called with args {Name:pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:snapshot:<snapshot_id:"snap-0a5ffa2db717fee49" >  AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0623 05:47:28.745915       1 cloud.go:364] vol-0f5945a1ece01a393 failed to be deleted, this may cause volume leak
I0623 05:47:28.745964       1 inflight.go:69] Node Service: volume="name:\"pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_content_source:<snapshot:<snapshot_id:\"snap-0a5ffa2db717fee49\" > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > > " operation finished
E0623 05:47:28.745989       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not create volume "pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-0f5945a1ece01a393' does not exist.
	status code: 400, request id: b4df307b-eaf6-4cfe-8590-9c036c8f203b
I0623 05:47:29.764295       1 controller.go:101] CreateVolume: called with args {Name:pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[] Secrets:map[] VolumeContentSource:snapshot:<snapshot_id:"snap-0a5ffa2db717fee49" >  AccessibilityRequirements:requisite:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > > preferred:<segments:<key:"topology.ebs.csi.aws.com/zone" value:"eu-west-1c" > segments:<key:"topology.kubernetes.io/zone" value:"eu-west-1c" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
E0623 05:47:33.488936       1 cloud.go:364] vol-0a5e35023ecce3ba6 failed to be deleted, this may cause volume leak
I0623 05:47:33.488980       1 inflight.go:69] Node Service: volume="name:\"pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1\" capacity_range:<required_bytes:1073741824 > volume_capabilities:<mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > > volume_content_source:<snapshot:<snapshot_id:\"snap-0a5ffa2db717fee49\" > > accessibility_requirements:<requisite:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > preferred:<segments:<key:\"topology.ebs.csi.aws.com/zone\" value:\"eu-west-1c\" > segments:<key:\"topology.kubernetes.io/zone\" value:\"eu-west-1c\" > > > " operation finished
E0623 05:47:33.489007       1 driver.go:119] GRPC error: rpc error: code = Internal desc = Could not create volume "pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-0a5e35023ecce3ba6' does not exist.
	status code: 400, request id: 84dcdaa4-c44f-409b-b28b-d92b98a1c864
I0623 05:47:35.507908       1 controller.go:101] CreateVolume: called with args {Name:pvc-8eff8be4-041d-4723-bf6f-d9c98324cbb1 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"xfs" > 

Also, does the StorageClass have WaitForFirstConsumer like here ?

Yes, I've used WaitForFirstConsumer. I've switched to Immediate now, but the results are the same, except that the error appears when creating the PVC rather than when binding it ;)

I thought the lack of logs from csi-snapshotter meant that the snapshot wasn't created, but when I kubectl describe volumesnapshotcontent/C I can see a reference to an AWS snapshot, and I can see the snapshot in the AWS console. It is marked "completed" and "available" and has a description saying "Created by AWS EBS CSI driver for volume vol-0b1999f60a776b2b2". It seems the problem is with creating the restored volume, not with creating the snapshot.
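
For completeness, the snapshot can also be verified from the CLI (assuming the same account and region):

aws ec2 describe-snapshots --snapshot-ids snap-0a5ffa2db717fee49 --region eu-west-1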

@robsonvn

robsonvn commented Jul 8, 2021

I had the same issue, and it turns out it was related to KMS permissions: the driver was using the default KMS key, but the role had no access to it, so I added the missing permissions to the example policy.

Perhaps setting the encrypted parameter of the StorageClass to false might be enough for some.

Note: I'm using Terraform to substitute the variables.

Helm Values

enableVolumeSnapshot: true
serviceAccount:
  controller:
    create: true
    name: "ebs-csi-controller-sa"
    annotations:
      eks.amazonaws.com/role-arn: "${serviceAccountRoleArn}"
storageClasses:
  - name: ebs-sc
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
    volumeBindingMode: WaitForFirstConsumer
    parameters:
      encrypted: "true"
      kmsKeyId: "${kmsKeyId}"

Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSnapshot",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:ModifyVolume",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances",
        "ec2:DescribeSnapshots",
        "ec2:DescribeTags",
        "ec2:DescribeVolumes",
        "ec2:DescribeVolumesModifications"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:snapshot/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": [
            "CreateVolume",
            "CreateSnapshot"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteTags"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:snapshot/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:RequestTag/ebs.csi.aws.com/cluster": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateVolume"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:RequestTag/CSIVolumeName": "*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteVolume"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/CSIVolumeName": "*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteVolume"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/ebs.csi.aws.com/cluster": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/CSIVolumeSnapshotName": "*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteSnapshot"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/ebs.csi.aws.com/cluster": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": ["${kmsKeyId}"],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": ["${kmsKeyId}"]
    }
  ]
}

@WojciechKarpiel
Author

Thanks a lot, adding the KMS permissions solved the issue for me!
How did you figure it out?
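
For anyone landing here later: the statements I ended up adding on top of the example policy are essentially the two KMS blocks from the policy above, i.e. (with the key ARN replaced by your own; the placeholders below are illustrative):

    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": ["arn:aws:kms:<region>:<account-id>:key/<key-id>"],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": ["arn:aws:kms:<region>:<account-id>:key/<key-id>"]
    }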

@robsonvn

Thanks a lot, adding the KMS permissions solved the issue for me!
How did you figure it out?

I analysed the events in CloudTrail and noticed it was stuck in a loop of creating and deleting volumes. By analysing the CreateVolume events, I saw that the volume was being created with the default KMS key. I created another KMS key, gave access to the role being assumed by the Service Account, and voilà, it worked.
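
The lookup itself is just something along these lines (event name as it shows up in CloudTrail; adjust region and time window to when the PVC was created):

aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateVolume \
  --max-results 20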

IMO this is still a bug; I just don't know whether it's in this project or in AWS, as I would assume CloudTrail should have logged the event of failing to use the KMS key.

Meanwhile, we need to figure out the best way of improving the documentation.

@ArchiFleKs

Same issue here: it works out of the box with encrypted: true using the default KMS key, but not with a custom KMS key.

@ArchiFleKs

Managed to get it to work here, with the KMS key handling in Terraform: https://github.com/particuleio/terraform-kubernetes-addons/blob/main/modules/aws/aws-ebs-csi-driver.tf

@niroowns

niroowns commented Sep 8, 2021

Can anyone point to which key actually gets selected by the provisioner when only encrypted: true is specified? I'm having a hard time finding documented behavior here, and my clusters seem to select KMS keys at random.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 7, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
