Unable to access pod logs when using Webhook authorizationMode for Kubelet in cluster with RBAC #6280
Comments
I managed to get around this using the ClusterRoleBinding described here.

I think this is a straight duplicate of #5706.
1. What kops version are you running?
Version 1.10.0
2. What Kubernetes version are you running?
Both client (kubectl) and server are running v1.11.6.
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
We are trying to disable anonymous auth on the kubelets in a cluster with RBAC authorization and have added the following to our cluster spec:
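The snippet itself is not shown above; a change of this sort is presumably along the following lines (a minimal sketch, assuming the standard kops `spec.kubelet` fields; exact field availability depends on the kops version):

```yaml
# Sketch of the relevant kops cluster-spec settings -- a hypothetical
# reconstruction, since the issue's actual snippet is not shown above.
spec:
  kubelet:
    anonymousAuth: false              # reject unauthenticated kubelet API requests
    authorizationMode: Webhook        # delegate kubelet authorization to the API server
    authenticationTokenWebhook: true  # validate bearer tokens via the TokenReview API
```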
We updated the cluster (kops update cluster --yes) but did not roll the update out to the entire cluster due to the risk of breakage. Instead, we manually terminated a node, and the auto-scaling group brought up a replacement node with anonymous auth disabled.

5. What happened after the commands executed?
We are now unable to access logs for pods on the new node, even after creating the ClusterRoleBindings outlined in #5176 (comment).
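For reference, the binding discussed in that thread is along these lines (a sketch, assuming the API server authenticates to the kubelet as the kubelet-api user, which is what kops configures by default; system:kubelet-api-admin is the built-in ClusterRole covering the kubelet API, including logs):

```yaml
# Sketch: grant the API server's kubelet-client identity full kubelet API access.
# The subject name "kubelet-api" is an assumption -- it must match the CN of the
# client certificate the API server presents to the kubelet.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin  # built-in role: nodes/proxy, nodes/log, nodes/stats, ...
```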
When running kubectl logs <pod_name> we get the following error:

6. What did you expect to happen?
We expected to be able to access logs for pods on the new node without any issues.
7. Please provide your cluster manifest.
8. Please run the commands with the most verbose logging by adding the -v 10 flag.

9. Anything else do we need to know?
The master node is not running with the updated kubelet configuration. It seems like running RBAC with webhook kubelet auth is still under development, according to #5176. I tried creating the ClusterRoleBindings described in #5176 (comment), but that did not fix the issue.

The older issue #3891 mentions specifying certificate files. Is there something we have to do there?
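On the certificate question: with webhook auth, the kubelet authenticates callers via x509 client certificates and/or bearer tokens, then authorizes them through the API server, so the API server must present a client certificate signed by a CA the kubelet trusts. A sketch of the equivalent upstream KubeletConfiguration, with a hypothetical CA path, for comparison with what kops generates:

```yaml
# Sketch of the upstream KubeletConfiguration equivalent to the kops settings above.
# The clientCAFile path is hypothetical; it must point at the CA that signed the
# API server's kubelet-client certificate.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # same effect as anonymousAuth: false
  webhook:
    enabled: true       # validate bearer tokens via TokenReview
  x509:
    clientCAFile: /srv/kubernetes/ca.crt   # hypothetical path
authorization:
  mode: Webhook         # authorize via SubjectAccessReview
```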