🐛 default service monitor configuration to use https #2065

Merged
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
Member

I do not think that we should move forward here. See the doc: https://book.kubebuilder.io/reference/metrics.html

These metrics are protected by kube-rbac-proxy by default if using kubebuilder. Kubebuilder v2.2.0+ scaffolds a clusterrole, which can be found at config/rbac/auth_proxy_client_clusterrole.yaml.

You will need to grant permissions to your Prometheus server so that it can scrape the protected metrics. To achieve that, you can create a clusterRoleBinding to bind the clusterRole to the service account that your Prometheus server uses.

You can run the following kubectl command to create it. If using kubebuilder, `<project-prefix>` is the `namePrefix` field in `config/default/kustomization.yaml`.

kubectl create clusterrolebinding metrics --clusterrole=<project-prefix>-metrics-reader --s

Did you follow that?
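
For reference, a minimal sketch of the binding that command creates, written as a manifest. The subject's name and namespace are assumptions matching a default kube-prometheus install, not something specified in this thread:

```
# Hypothetical YAML equivalent of the kubectl command above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <project-prefix>-metrics-reader
subjects:
- kind: ServiceAccount
  name: prometheus-k8s   # assumed: kube-prometheus's Prometheus service account
  namespace: monitoring  # assumed: kube-prometheus's default namespace
```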

Member

/hold

Contributor Author

Yes, but I'm not sure how your comment is related.

The proxy is exposed only via HTTPS by default, so scraping with `scheme: http` results in a 400 error. Annoyingly, this is implemented directly in Go, so the rbac proxy doesn't log anything and Prometheus doesn't log the response body.

The `bearerTokenFile` is required for Prometheus to actually include an `Authorization` header.

This header is required so the proxy can actually validate whether the request has the necessary permissions (you get a 401 otherwise, while a missing binding would result in a 403).

The `insecureSkipVerify` is required as the rbac proxy uses a self-generated certificate in the current configuration: https://github.com/brancz/kube-rbac-proxy/blob/e4b31758aedb3d29d8f50d615828401a30d28c1e/main.go#L261

@estroz proposed using the webhook certificate (#2065 (comment)), but this seems out of scope for this PR.
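
Putting those points together, a commented version of the four added lines, mapping each field to the failure mode described above:

```
endpoints:
- path: /metrics
  port: https
  scheme: https  # plain-HTTP scrapes against the TLS port fail with a 400
  # Without this token Prometheus sends no Authorization header -> 401;
  # a token whose service account lacks the metrics-reader binding -> 403.
  bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  tlsConfig:
    insecureSkipVerify: true  # the proxy serves a self-generated certificate
```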

Member

I may be missing something here or misunderstanding. However, see https://github.com/kubernetes-sigs/kubebuilder/pull/1317/files#diff-20a08dcfd7ad4ed3343d00cf32c7844ce41a2d8d5f5df5b6de56852a510c5517R46 from PR #1317, which via the rbac rules allows accessing the metrics behind kube-rbac-proxy.

Then, see that this is checked in the e2e tests.

So, I am trying to understand why in the e2e tests we get HTTP 200 from the metrics endpoint while you are facing 400 or 403. What is missing? What has been working in our e2e tests but not for you?

Is it because we are getting the token and doing the handshake in the check, and you are not?

Also, I understand that the default configuration has been allowing Prometheus to get the metrics. Note that the problem you point out was solved in the past with the rbac roles in #1317.

Anyway, it shows that there is also some difference between using Prometheus Operator instead of kube-prometheus, which has been solved by the change applied here, as you point out: #1253 (comment)

So, I will

/hold cancel

However, it would be very nice to have a new issue with the steps required to reproduce the scenario, so that we are able to check it in the future: describing the cluster and Prometheus setup used, whether the rbac roles allowing Prometheus to get the metrics were used or not, and the steps to check the error.

Contributor

Actually, Camila's point is pretty sound. If e2e tests are working and this is being tested, we need an issue describing the steps to get this error so that we can reproduce and check that it is an actual bug and not just an error on your side.

Contributor

Go 1.16? Not supported yet. There are some module-related changes (like some commands not tidying up go.mod anymore) that haven't been taken into account in kubebuilder or any of its dependencies.

Contributor

Ah, that is unfortunate. If I have some time, I can update the e2e code after this is merged.

Member

Just to highlight: we might be unable to reproduce the issue in the e2e tests as they are now, if it is specific to using Prometheus Operator instead of kube-prometheus.

Contributor

If you create an issue describing the need for an e2e test that covers this case, I'm OK merging this.

Contributor Author

Opened #2087

matchLabels:
control-plane: controller-manager
16 changes: 8 additions & 8 deletions docs/book/src/reference/metrics.md
@@ -12,7 +12,7 @@ can be found at `config/rbac/auth_proxy_client_clusterrole.yaml`
You will need to grant permissions to your Prometheus server so that it can
scrape the protected metrics. To achieve that, you can create a
`clusterRoleBinding` to bind the `clusterRole` to the service account that your
Prometheus server uses. If you are using `kube-prometheus`, this cluster binding already exists.

You can run the following kubectl command to create it. If using kubebuilder
`<project-prefix>` is the `namePrefix` field in `config/default/kustomization.yaml`.
@@ -26,7 +26,7 @@ kubectl create clusterrolebinding metrics --clusterrole=<project-prefix>-metrics
Follow the steps below to export the metrics using the Prometheus Operator:

1. Install Prometheus and Prometheus Operator.
We recommend using [kube-prometheus](https://github.com/coreos/kube-prometheus#installing)
in production if you don't have your own monitoring system.
If you are just experimenting, you can install only Prometheus and Prometheus Operator.
2. Uncomment the line `- ../prometheus` in the `config/default/kustomization.yaml`.
@@ -38,21 +38,21 @@ It creates the `ServiceMonitor` resource which enables exporting the metrics.
```

Note that, when you install your project in the cluster, it will create the
`ServiceMonitor` to export the metrics. To check the ServiceMonitor,
run `kubectl get ServiceMonitor -n <project>-system`. See an example:

```
$ kubectl get ServiceMonitor -n monitor-system
NAME                                         AGE
monitor-controller-manager-metrics-monitor   2m8s
```

Also, notice that the metrics are exported by default through port `8443`. In this way,
you are able to check the Prometheus metrics in its dashboard. To verify it, search
for the metrics exported from the namespace where the project is running
`{namespace="<project>-system"}`. See an example:

<img width="1680" alt="Screenshot 2019-10-02 at 13 07 13" src="https://user-images.githubusercontent.com/7708031/66042888-a497da80-e515-11e9-9d77-d8a9fc1159a5.png">

## Publishing Additional Metrics

@@ -53,6 +53,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
Comment on lines +56 to +59
Contributor

At this point, no new features should go into go/v2

Suggested change:
- scheme: https
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
- tlsConfig:
-   insecureSkipVerify: true

Contributor Author

As the current configuration doesn't work, it could also be seen as a bug fix, but I'm happy to remove this part.

Contributor

If that's true, then it can be left in, I suppose.

Contributor

If it is a bugfix (haven't checked that) it should have the 🐛 (:bug:) emoji.

Contributor Author (@johanneswuerbach, Mar 6, 2021)

I wasn't sure, but as the default configuration just doesn't work, I think it makes sense to treat this as a bugfix.

selector:
matchLabels:
control-plane: controller-manager
@@ -53,6 +53,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
Comment on lines +58 to +59
Contributor

The webhook server already sets up TLS. I wonder if that cert/key can be used here.

Contributor Author

This seems out of scope for this PR, but seems like a good idea otherwise.
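
For illustration only, a rough sketch of what verifying the proxy's certificate could look like if it were signed by a CA that Prometheus can read; the file path and server name below are hypothetical, not part of this PR:

```
endpoints:
- path: /metrics
  port: https
  scheme: https
  bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  tlsConfig:
    # Assumed mount of the signing CA into the Prometheus pod (hypothetical path).
    caFile: /etc/prometheus/certs/ca.crt
    # Must match a SAN in the certificate served by kube-rbac-proxy.
    serverName: <service-name>.<namespace>.svc
```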

selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v2-addon/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v2-multigroup/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v2/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v3-addon/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v3-config/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v3-multigroup/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
selector:
matchLabels:
control-plane: controller-manager
4 changes: 4 additions & 0 deletions testdata/project-v3/config/prometheus/monitor.yaml
@@ -11,6 +11,10 @@ spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
Member

Wouldn't it remove the need to get the token in the e2e tests?

If yes, I think we need to remove that from the e2e tests to validate your solution as well.

Contributor Author

No, this only instructs Prometheus how to scrape the service; it doesn't change anything about the service itself. As Prometheus isn't used in the e2e tests, this doesn't really change anything there, since this part is not covered by tests yet.
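
For context, a sketch of the scaffolded metrics Service that this monitor selects, paraphrased from kubebuilder's default config/rbac/auth_proxy_service.yaml; exact names can vary per project:

```
apiVersion: v1
kind: Service
metadata:
  labels:
    control-plane: controller-manager
  name: controller-manager-metrics-service
  namespace: system
spec:
  ports:
  - name: https     # the named port that `port: https` in the ServiceMonitor refers to
    port: 8443
    targetPort: https
  selector:
    control-plane: controller-manager
```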

selector:
matchLabels:
control-plane: controller-manager