Commit: Note that Heapster is deprecated (#8827)
* Note that Heapster is deprecated

This notes that Heapster is deprecated, and migrates the relevant
docs to talk about metrics-server or other solutions by default.

* Copyedits and improvements

Signed-off-by: Misty Stanley-Jones <[email protected]>

* Address feedback
DirectXMan12 authored and Misty Stanley-Jones committed Jun 20, 2018
1 parent c1adc37 commit fe8235c
Showing 9 changed files with 121 additions and 111 deletions.
@@ -12,7 +12,7 @@ title: Guaranteed Scheduling For Critical Add-On Pods

In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine
there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master).
Some of these add-ons are critical to a fully functional cluster, such as metrics-server, DNS, and UI.
A cluster may stop working properly if a critical add-on is evicted (either manually or as a side effect of another operation like upgrade)
and becomes pending (for example when the cluster is highly utilized and either there are other pending pods that schedule into the space
vacated by the evicted critical add-on pod or the amount of resources available on the node changed for some other reason).
@@ -20,28 +20,32 @@ but is not allowed to use more CPU than its limit.

Each node in your cluster must have at least 1 cpu.

A few of the steps on this page require you to run the
[metrics-server](https://github.com/kubernetes-incubator/metrics-server)
service in your cluster. If you don't have metrics-server
running, you can skip those steps.

If you are running minikube, run the following command to enable
metrics-server:

```shell
minikube addons enable metrics-server
```

To see whether metrics-server (or another provider of the resource metrics
API, `metrics.k8s.io`) is running, enter this command:

```shell
kubectl get apiservices
```

If the resource metrics API is available, the output will include a
reference to `metrics.k8s.io`.


```shell
NAME
v1beta1.metrics.k8s.io
```

{{% /capture %}}
Expand Down Expand Up @@ -101,26 +105,18 @@ resources:
cpu: 500m
```

Use `kubectl top` to fetch the metrics for the pod:

```shell
kubectl top pod cpu-demo --namespace=cpu-example
```

The output shows that the Pod is using 974 millicpu, which is just a bit less than
the limit of 1 cpu specified in the Pod's configuration file.

```
NAME       CPU(cores)   MEMORY(bytes)
cpu-demo   974m         <something>
```
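Kubernetes reports CPU usage in millicores, where `974m` means 0.974 of a CPU. As a rough illustration of how these quantity strings relate to the 1-cpu limit, here is a small hypothetical helper (not part of kubectl):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity string to a number of cores.

    "974m" means 974 millicores (0.974 cores); a bare number means whole cores.
    """
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

usage = parse_cpu("974m")  # the usage reported by `kubectl top`
limit = parse_cpu("1")     # the limit from the Pod spec
print(f"using {usage} of {limit} cores ({usage / limit:.0%} of the limit)")
```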

Recall that by setting `-cpu "2"`, you configured the Container to attempt to use 2 cpus.
@@ -19,28 +19,33 @@ but is not allowed to use more memory than its limit.

Each node in your cluster must have at least 300 MiB of memory.

A few of the steps on this page require you to run the
[metrics-server](https://github.com/kubernetes-incubator/metrics-server)
service in your cluster. If you don't have metrics-server
running, you can skip those steps.

If you are running minikube, run the following command to enable
metrics-server:

```shell
minikube addons enable metrics-server
```

To see whether metrics-server (or another provider of the resource metrics
API, `metrics.k8s.io`) is running, enter this command:

```shell
kubectl get apiservices
```

If the resource metrics API is available, the output will include a
reference to `metrics.k8s.io`.

```shell
NAME
v1beta1.metrics.k8s.io
```

{{% /capture %}}
Expand Down Expand Up @@ -103,29 +108,20 @@ resources:
...
```

Use `kubectl top` to fetch the metrics for the pod:

```shell
kubectl top pod memory-demo --namespace=mem-example
```

The output shows that the Pod is using about 150 MiB of memory. This is
greater than the Pod's 100 MiB request, but within the Pod's 200 MiB limit.

```
NAME          CPU(cores)   MEMORY(bytes)
memory-demo   <something>  150Mi
```
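The `Mi` suffix in the output is a binary unit: `150Mi` means 150 × 2²⁰ bytes. A small hypothetical helper (not part of kubectl) to convert such memory quantity strings to bytes:

```python
def parse_memory(quantity: str) -> int:
    """Convert a Kubernetes memory quantity (e.g. "150Mi") to bytes."""
    units = {"Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a bare number is plain bytes

usage = parse_memory("150Mi")
request, limit = parse_memory("100Mi"), parse_memory("200Mi")
print(usage, request < usage < limit)  # 157286400 True
```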

Delete your Pod:

@@ -4,15 +4,42 @@ reviewers:
title: Tools for Monitoring Compute, Storage, and Network Resources
---

To scale an application and provide a reliable service, you need to
understand how the application behaves when it is deployed. You can examine
application performance in a Kubernetes cluster by examining the containers,
[pods](/docs/user-guide/pods), [services](/docs/user-guide/services), and
the characteristics of the overall cluster. Kubernetes provides detailed
information about an application's resource usage at each of these levels.
This information allows you to evaluate your application's performance and
identify where bottlenecks can be removed to improve overall performance.

## Overview

In Kubernetes, application monitoring does not depend on a single monitoring
solution. On new clusters, you can use two separate pipelines to collect
monitoring statistics by default:

- The **resource metrics pipeline** provides a limited set of metrics related
to cluster components such as the HorizontalPodAutoscaler controller, as well
as the `kubectl top` utility. These metrics are collected by
[metrics-server](https://github.com/kubernetes-incubator/metrics-server)
and are exposed via the `metrics.k8s.io` API. `metrics-server` discovers
all nodes on the cluster and queries each node's [Kubelet](/docs/admin/kubelet)
for CPU and memory usage. The Kubelet fetches the data from
[cAdvisor](https://github.com/google/cadvisor). `metrics-server` is a
lightweight short-term in-memory store.

- A **full monitoring pipeline**, such as Prometheus, gives you access to richer
metrics. In addition, Kubernetes can respond to these metrics by automatically
scaling or adapting the cluster based on its current state, using mechanisms
such as the Horizontal Pod Autoscaler. The monitoring pipeline fetches
metrics from the Kubelet, and then exposes them to Kubernetes via an adapter
by implementing either the `custom.metrics.k8s.io` or
`external.metrics.k8s.io` API. See
[Full metrics pipeline](#full-metrics-pipelines) for more information about
some popular pipelines that implement these APIs and enable these
capabilities.
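For example, the resource metrics API serves PodMetrics objects. The sketch below parses a sample response of the shape served by `metrics.k8s.io/v1beta1`; the pod name and usage values are made up for illustration:

```python
import json

# A sample PodMetrics object in the shape served by metrics.k8s.io/v1beta1
# (the pod name and usage values here are illustrative, not real data).
response = json.loads("""
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {"name": "cpu-demo", "namespace": "cpu-example"},
  "containers": [
    {"name": "cpu-demo-ctr", "usage": {"cpu": "974m", "memory": "150Mi"}}
  ]
}
""")

# Print per-container usage, as `kubectl top` would summarize it.
for container in response["containers"]:
    usage = container["usage"]
    print(container["name"], usage["cpu"], usage["memory"])
```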

![overall monitoring architecture](/images/docs/monitoring-architecture.png)

Let's look at some of the other components in more detail.

### cAdvisor

Expand All @@ -26,38 +53,35 @@ On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine contain

The Kubelet acts as a bridge between the Kubernetes master and the nodes. It manages the pods and containers running on a machine. Kubelet translates each pod into its constituent containers and fetches individual container usage statistics from cAdvisor. It then exposes the aggregated pod resource usage statistics via a REST API.

## Full Metrics Pipelines

Many full metrics solutions exist for Kubernetes. Prometheus and Google Cloud
Monitoring are two of the most popular.

### Prometheus

[Prometheus](https://prometheus.io) natively monitors Kubernetes, nodes, and Prometheus itself.
The [Prometheus Operator](https://coreos.com/operators/prometheus/docs/latest/)
simplifies Prometheus setup on Kubernetes, and allows you to serve the
custom metrics API using the
[Prometheus adapter](https://github.com/directxman12/k8s-prometheus-adapter).
Prometheus provides a robust query language and a built-in dashboard for
querying and visualizing your data. Prometheus is also a supported
data source for [Grafana](https://prometheus.io/docs/visualization/grafana/).
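Much of that query language's power comes from functions like `rate()`, which turns a monotonically increasing counter into a per-second rate. Conceptually (a simplified sketch that ignores counter resets and extrapolation):

```python
# Two (timestamp, value) samples of a monotonically increasing counter,
# e.g. a CPU-seconds counter scraped 60 seconds apart (illustrative values).
samples = [(1000.0, 250.0), (1060.0, 280.0)]

(t0, v0), (t1, v1) = samples[0], samples[-1]
per_second_rate = (v1 - v0) / (t1 - t0)  # roughly what PromQL's rate() computes
print(per_second_rate)  # 0.5 counter units per second
```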

### Google Cloud Monitoring

Google Cloud Monitoring is a hosted monitoring service you can use to
visualize and alert on important metrics in your application. It can collect
metrics from Kubernetes, and you can access them
using the [Cloud Monitoring Console](https://app.google.stackdriver.com/).
You can create and customize dashboards to visualize the data gathered
from your Kubernetes cluster.

This video shows how to configure and run a Google Cloud Monitoring backed Heapster:

[![how to setup and run a Google Cloud Monitoring backed Heapster](http://img.youtube.com/vi/xSMNR2fcoLs/0.jpg)](http://www.youtube.com/watch?v=xSMNR2fcoLs)


{{< figure src="/images/docs/gcm.png" alt="Google Cloud Monitoring dashboard example" title="Google Cloud Monitoring dashboard example" caption="This dashboard shows cluster-wide resource usage." >}}

***
*Authors: Vishnu Kannan and Victor Marmol, Google Software Engineers.*
*This article was originally posted in [Kubernetes Blog](https://kubernetes.io/blog/2015/05/resource-usage-monitoring-kubernetes).*
@@ -19,10 +19,10 @@ This document walks you through an example of enabling Horizontal Pod Autoscaler
## Prerequisites

This example requires a running Kubernetes cluster and kubectl, version 1.2 or later.
[metrics-server](https://github.com/kubernetes-incubator/metrics-server) monitoring needs to be deployed in the cluster
to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics
(if you followed the [getting started on GCE guide](/docs/getting-started-guides/gce),
metrics-server monitoring is turned on by default).

To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster
and kubectl at version 1.6 or later. Furthermore, in order to make use of custom metrics, your cluster
@@ -196,7 +196,7 @@ Notice that the `targetCPUUtilizationPercentage` field has been replaced with an
The CPU utilization metric is a *resource metric*, since it is represented as a percentage of a resource
specified on pod containers. Notice that you can specify other resource metrics besides CPU. By default,
the only other supported resource metric is memory. These resources do not change names from cluster
to cluster, and should always be available, as long as the `metrics.k8s.io` API is available.
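The scaling decision itself follows a simple rule: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up. A minimal sketch of that calculation (omitting the controller's tolerance and stabilization logic):

```python
import math

def desired_replicas(current_replicas: int, current_value: float,
                     target_value: float) -> int:
    """desiredReplicas = ceil(currentReplicas * currentValue / targetValue)."""
    return math.ceil(current_replicas * current_value / target_value)

# 3 replicas averaging 80% CPU utilization against a 50% target scale to 5:
print(desired_replicas(3, current_value=80, target_value=50))
```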

You can also specify resource metrics in terms of direct values, instead of as percentages of the
requested value. To do so, use the `targetAverageValue` field instead of the `targetAverageUtilization`