Cannot delete keypair secrets with multiple id's #5318

Closed
jtolsma opened this issue Jun 12, 2018 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


jtolsma commented Jun 12, 2018

Thanks for submitting an issue! Please fill in as much of the template below as
you can.

------------- BUG REPORT TEMPLATE --------------------

  1. What kops version are you running? The command kops version will display this information.

kops version
Version 1.9.1

  2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.

Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

  3. What cloud provider are you using?

AWS

I'm trying to reset the kubecfg credential for a cluster by following this document: https://github.com/kubernetes/kops/blob/master/docs/rotate-secrets.md

The delete secret secret step worked fine, but the delete secret keypair step did not. I have multiple Keypair IDs listed and it won't let me delete any of them. I tried deleting just the secrets and recreating them, but I was still able to connect with my original kubecfg. What is the simplest way to revoke the kubecfg client credentials and roll those out to the cluster?

Example:

kops get secret
Keypair kube-controller-manager 6535036977980663811654852636
Keypair kube-controller-manager keyset.yaml

kops delete secret keypair kube-controller-manager
found multiple matching secrets; specify the id of the key

kops delete secret keypair kube-controller-manager 6535036977980663811654852636
I0612 16:02:31.789595 39168 certificate.go:106] Ignoring unexpected PEM block: "RSA PRIVATE KEY"
error deleting secret: error deleting certificate: error loading certificate "s3://foo/bar/pki/private/kube-controller-manager/6535036977980663811654852636.key": could not parse certificate

The file s3://foo/bar/pki/private/kube-controller-manager/6535036977980663811654852636.key exists, but it starts with -----BEGIN RSA PRIVATE KEY-----; the code seems to be looking for a CERTIFICATE block instead?
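
For reference, this is roughly how I confirmed the mismatch locally (the bucket path is the placeholder used above, and the local filename is arbitrary):

# Pull the key down from the state store
aws s3 cp s3://foo/bar/pki/private/kube-controller-manager/6535036977980663811654852636.key ./kcm.key

# The PEM header shows a private key, not a certificate
head -1 ./kcm.key
# -----BEGIN RSA PRIVATE KEY-----

# openssl also refuses to read it as an x509 certificate, which matches the kops error
openssl x509 -in ./kcm.key -noout -text
# unable to load certificate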

kops delete secret keypair kube-controller-manager keyset.yaml
error deleting secret: keypair had non-integer version: "keyset.yaml"

How do I delete the keyset.yaml?

Thanks in advance.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 10, 2018

fcortijo-waldo commented Sep 17, 2018

I have the same problem. @jtolsma, did you find a solution for this?


gbird3 commented Sep 27, 2018

@fcortijo-waldo From initial testing, I was able to delete the keypairs by going into the state store (s3 bucket) and deleting the "issued" and "private" folders under the pki folder.

After doing that, I ran kops update cluster --yes and it recreated the keypairs. I ran into a few quirks, like having to perform a rolling-update twice for some reason, but in the end it seemed to work. I recommend doing a few trial runs before running this on a prod cluster, though.
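
A rough sketch of those steps, assuming the placeholder state-store path from the original report (double-check the paths against your own bucket before deleting anything, and trial on a non-prod cluster first):

# Delete the issued certificates and private keys from the state store
aws s3 rm s3://foo/bar/pki/issued/ --recursive
aws s3 rm s3://foo/bar/pki/private/ --recursive

# Recreate the keypairs
kops update cluster --yes

# Roll the nodes so they pick up the new credentials (this may need to run twice)
kops rolling-update cluster --yes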

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 27, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


Overbryd commented Feb 19, 2019

Your bots are closing an issue that is still an issue. I will open a new one. P.S. Where can I buy one of these? It would come in handy here at work.
