kubectl top pods seems not to be working:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)
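A ServiceUnavailable error on pods.metrics.k8s.io usually means the aggregated metrics.k8s.io API is not reporting as Available. A minimal check, assuming a stock metrics-server install that registers v1beta1.metrics.k8s.io and runs in kube-system (adjust names if your manifests differ):

# Is the Metrics API Available? If not, describe shows the reason (e.g. FailedDiscoveryCheck).
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io

# Is the metrics-server pod running, and does its Service have endpoints?
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system get endpoints metrics-server

# Query the Metrics API directly; kubectl top uses the same endpoint.
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes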
Images in use:
kubernetesui/dashboard:v2.0.0-rc3
k8s.gcr.io/metrics-server-arm64:v0.3.6
kubernetesui/metrics-scraper:v1.0.3

watch kubectl get pods --all-namespaces

metrics server log:

dashboard-metrics-scrapper log:

kubernetes-dashboard log:
2020/02/01 23:01:20 [2020-02-01T23:01:20Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:21 [2020-02-01T23:01:21Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 169.254.219.245:36052:
2020/02/01 23:01:21 Getting list of namespaces
2020/02/01 23:01:21 [2020-02-01T23:01:21Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:25 [2020-02-01T23:01:25Z] Incoming HTTP/2.0 GET /api/v1/pod/%!?(MISSING)itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 169.254.219.245:36052:
2020/02/01 23:01:25 Getting list of all pods in the cluster
2020/02/01 23:01:25 received 0 resources from sidecar instead of 10
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 10
2020/02/01 23:01:25 Getting pod metrics
2020/02/01 23:01:25 received 0 resources from sidecar instead of 8
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 received 0 resources from sidecar instead of 8
2020/02/01 23:01:25 received 0 resources from sidecar instead of 2
2020/02/01 23:01:25 Skipping metric because of error: Metric label not set.
(this line appears 20 times in a row in the log)
2020/02/01 23:01:25 [2020-02-01T23:01:25Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:26 [2020-02-01T23:01:26Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 169.254.219.245:36052:
2020/02/01 23:01:26 Getting list of namespaces
2020/02/01 23:01:26 [2020-02-01T23:01:26Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 169.254.219.245:36052:
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 169.254.219.245:36052:
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6 request from 169.254.219.245:36052:
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6/event?itemsPerPage=10&page=1 request from 169.254.219.245:36052:
2020/02/01 23:01:27 Getting events related to a pod in namespace
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 169.254.219.245:36052: { contents hidden }
2020/02/01 23:01:27 Getting details of kubernetes-dashboard-7867cbccbb-5s6h6 pod in kubernetes-dashboard namespace
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 received 0 resources from sidecar instead of 1
2020/02/01 23:01:27 received 0 resources from sidecar instead of 1
2020/02/01 23:01:27 No persistentvolumeclaims found related to kubernetes-dashboard-7867cbccbb-5s6h6 pod
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Incoming HTTP/2.0 GET /api/v1/pod/kubernetes-dashboard/kubernetes-dashboard-7867cbccbb-5s6h6/persistentvolumeclaim?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 169.254.219.245:36052:
2020/02/01 23:01:27 No persistentvolumeclaims found related to kubernetes-dashboard-7867cbccbb-5s6h6 pod
2020/02/01 23:01:27 [2020-02-01T23:01:27Z] Outcoming response to 169.254.219.245:36052 with 200 status code
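The "received 0 resources from sidecar" and "Metric label not set" lines above mean the dashboard is getting no metrics back from dashboard-metrics-scraper. A quick look at that side, assuming the scraper runs under its default name in the kubernetes-dashboard namespace (as in the stock recommended.yaml):

# Check the scraper pod and its log.
kubectl -n kubernetes-dashboard get pods
kubectl -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper

The scraper only relays what it reads from metrics.k8s.io, so it has nothing to serve for as long as metrics-server itself is broken.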
During the deployment I had a problem where metrics-server ran into an error with --kubelet-insecure-tls: the log said --kubelet-insecure-tls is not supported. I removed the flag, restarted, added it back, and now there is no error in the log, so it seems to be working, but obviously it is not. (Sorry for my bad English.)
Before that I also had the problem that it downloaded the amd64 version; I fixed that in the deployment.yaml.
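For reference, this is roughly how the relevant part of the metrics-server deployment.yaml can look for arm64 with the flag applied. It is only a sketch based on the image version mentioned in this issue and the usual metrics-server 0.3.x manifest layout; the --kubelet-preferred-address-types value is an assumption, not something taken from this report:

      containers:
        - name: metrics-server
          # arm64 image instead of the default amd64 one
          image: k8s.gcr.io/metrics-server-arm64:v0.3.6
          args:
            # do not verify the kubelets' self-signed serving certificates
            - --kubelet-insecure-tls
            # assumed: commonly paired with the flag above when node names
            # are not resolvable from inside the cluster
            - --kubelet-preferred-address-types=InternalIP

If the container still logs that the flag is not supported, it is worth checking that the flag really ended up under args of the metrics-server container after editing.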
I have the same issue; reinstalling does not help. I have it installed in kube-system, but I suspect that should not make a difference operationally.
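Since metrics-server runs in kube-system in this setup, its own log is the quickest place to see why the API stays unavailable; assuming the default deployment name (adjust if yours differs):

# metrics-server usually logs the concrete reason it cannot scrape the kubelets
kubectl -n kube-system logs deploy/metrics-server

# kubectl top talks to the same aggregated API, so it should keep failing
# with ServiceUnavailable until that log is clean
kubectl top nodes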