
Unable to access pod logs when using Webhook authorizationMode for Kubelet in cluster with RBAC #6280

Closed
PaulJuliusMartinez opened this issue Dec 31, 2018 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


PaulJuliusMartinez commented Dec 31, 2018

1. What kops version are you running?
Version 1.10.0

2. What Kubernetes version are you running?
Both client (kubectl) and server are running v1.11.6

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
We are trying to disable anonymous auth on our kubelets in a cluster with RBAC authorization, and have added the following to our cluster spec:

  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook

We updated the cluster (kops update cluster --yes), but did not roll the update out to the entire cluster due to the risk of breakage. We manually terminated a node and the auto-scaling group brought up a new node with anonymous auth disabled.

5. What happened after the commands executed?

We are now unable to access logs for pods on that node, even after creating the ClusterRoleBindings outlined here:

#5176 (comment)
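For reference, the binding discussed in that comment generally takes this shape: a ClusterRoleBinding that grants the apiserver's kubelet-client identity the built-in system:kubelet-api-admin role, which covers pods/log, exec, and metrics. This is only a sketch; the subject name must match the CN of the client certificate the apiserver presents to the kubelet, and "kubelet-api" is an assumption here that may differ per cluster.

```yaml
# Sketch of the ClusterRoleBinding. The subject name "kubelet-api" is an
# assumption: it must match the CN of the apiserver's kubelet client cert.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-api-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin  # built-in role for full kubelet API access
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubelet-api
```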

When running kubectl logs <pod_name> we get the following error:

error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <pod_name> ))

6. What did you expect to happen?

We expect to be able to access logs on the new node without any issues.

7. Please provide your cluster manifest.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-03-17T16:47:08Z
  name: <REDACTED>
spec:
  api:
    loadBalancer:
      additionalSecurityGroups:
      - <REDACTED>
      idleTimeoutSeconds: 600
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: <REDACTED>
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: false
      instanceGroup: master-<REDACTED>
      name: a
    name: main
  - etcdMembers:
    - encryptedVolume: false
      instanceGroup: master-<REDACTED>
      name: a
    name: events
  iam:
    legacy: true
  kubeAPIServer:
    admissionControl:
    - AlwaysPullImages
    - DenyEscalatingExec
    runtimeConfig:
      batch/v2alpha1: "true"

  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook

  kubernetesApiAccess:
  - <REDACTED>
  kubernetesVersion: 1.11.6
  masterInternalName: <REDACTED>
  masterPublicName: <REDACTED>
  networkCIDR: <REDACTED>
  networking:
    weave: {}
  nonMasqueradeCIDR: <REDACTED>
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: <REDACTED>
    name: <REDACTED>
    type: Private
    zone: <REDACTED>
  - cidr: <REDACTED>
    name: <REDACTED>
    type: Utility
    zone: <REDACTED>
  topology:
    bastion:
      bastionPublicName: <REDACTED>
    dns:
      type: Public
    masters: private
    nodes: private

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-03-17T16:47:08Z
  labels:
    kops.k8s.io/cluster: <REDACTED>
  name: <REDACTED>
spec:
  additionalSecurityGroups:
  - <REDACTED>
  image: <REDACTED>
  machineType: m5.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - <REDACTED>

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-03-17T16:47:08Z
  labels:
    kops.k8s.io/cluster: <REDACTED>
  name: nodes
spec:
  additionalSecurityGroups:
  - <REDACTED>
  image: <REDACTED>
  machineType: m5.large
  maxSize: <REDACTED>
  minSize: <REDACTED>
  role: Node
  subnets:
  - <REDACTED>

8. Please run the commands with most verbose logging by adding the -v 10 flag.

$ kubectl -v 10 logs <pod_name>
I1230 23:14:31.672355    9524 round_trippers.go:405] GET https://<REDACTED>/api/v1/namespaces/default/pods/<pod_name>/log 401 Unauthorized in 108 milliseconds
I1230 23:14:31.672384    9524 round_trippers.go:411] Response Headers:
I1230 23:14:31.672390    9524 round_trippers.go:414]     Date: Mon, 31 Dec 2018 05:14:31 GMT
I1230 23:14:31.672396    9524 round_trippers.go:414]     Content-Type: application/json
I1230 23:14:31.672402    9524 round_trippers.go:414]     Content-Length: 286
I1230 23:14:31.672441    9524 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"the server has asked for the client to provide credentials ( pods/log <pod_name>)","reason":"Unauthorized","details":{"name":"<pod_name>","kind":"pods/log"},"code":401}
I1230 23:14:31.673605    9524 helpers.go:198] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server has asked for the client to provide credentials ( pods/log <pod_name>)",
  "reason": "Unauthorized",
  "details": {
    "name": "<pod_name>",
    "kind": "pods/log"
  },
  "code": 401
}]
F1230 23:14:31.673646    9524 helpers.go:116] error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log <pod_name>))

9. Anything else we need to know?

The master node is not running with the updated kubelet configuration.

It seems like running RBAC with Webhook Kubelet auth is still under development according to #5176. I tried creating the ClusterRoleBindings described in #5176 (comment), but it did not fix the issue.

This older issue #3891 mentions specifying things relating to certificate files. Is there something we have to do there?


tmlbl commented Feb 25, 2019

I managed to get around this using the ClusterRoleBinding described here


Smirl commented Mar 19, 2019

I think this is a straight duplicate of #5706
Should we close this one?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 17, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 17, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
