
Prometheus.service.annotations in the default values.yaml should be unbiased #157

Open
den-is opened this issue Dec 28, 2023 · 5 comments · May be fixed by #158
@den-is (Contributor) commented Dec 28, 2023

Right now the default values.yaml sets biased annotations on the CoreDNS Prometheus metrics Service object.

prometheus:
  service:
    enabled: false
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9153"
  • default values should in most cases be blank literals
  • the current annotations setup is biased, since it bakes in these very specific annotations

What if someone does not want any scraping at all?
What if someone is not using this native Prometheus "hack" and instead uses the Prometheus Operator workflow with ServiceMonitors?

In both of the above cases I have to override the default value for these annotations with null to get rid of them completely.

prometheus:
  service:
    enabled: true
    annotations: null
  monitor:
    enabled: true

Yep, these annotations won't hurt in most cases and can simply be ignored. But that brings us back to the main point: default values in the chart's values.yaml should be blank.

If someone wants to scrape CoreDNS metrics using these annotations, they have to explicitly enable them for their specific setup. That also makes the setup more explicit and trackable in a GitOps workflow.
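To make the proposal concrete, here is a rough sketch of what that could look like (hypothetical values, not what the chart currently ships): the chart default would leave annotations blank, and users relying on annotation-based discovery would opt in explicitly.

# Hypothetical chart default (values.yaml) under this proposal:
prometheus:
  service:
    enabled: false
    annotations: {}

# Explicit opt-in by a user who scrapes via these annotations:
prometheus:
  service:
    enabled: true
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9153"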

den-is linked a pull request (#158) on Dec 28, 2023 that will close this issue
@hagaibarel (Collaborator) commented:

I don't see this as biased; it's more of "batteries included but replaceable". We provide sane defaults that fit most use cases, with an option to override/change them if the need arises.

hagaibarel self-assigned this on Dec 28, 2023
@den-is (Contributor, Author) commented Dec 28, 2023

Another case.
It is hard to override map/list values in Helm.

Let's say I want to add annotations to the metrics service, and I don't want to keep the Prometheus annotations.

prometheus:
  service:
    enabled: true
    annotations:
      testannotation: "test"

Helm merges the default map values with the user-provided values, so the resulting resource will have these annotations:

# helm template coredns ./helm/charts/coredns -f ~/projects/kublab/apps/coredns/values.yaml --dry-run --debug

apiVersion: v1
kind: Service
metadata:
  name: coredns-metrics
  namespace: kube-system
  labels:
    # omitted
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
    testannotation: test
# omitted

So right now there are two options:

  • completely disable annotations for the metrics service with prometheus.service.annotations=null
  • or accept a combination of the annotations from the default values.yaml and the user-provided annotations.
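For what it's worth, Helm also documents removing individual keys from a default map by setting them to null in the user-supplied values; how consistently this behaves has varied between Helm versions, so treat the following as a hedged sketch rather than a guaranteed third option:

prometheus:
  service:
    enabled: true
    annotations:
      # null-ing the chart defaults should drop them from the merged values
      prometheus.io/scrape: null
      prometheus.io/port: null
      testannotation: "test"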

@den-is (Contributor, Author) commented Dec 28, 2023

I also did not want to bring up this unique and rare example.
But then there will be someone who does something like this: https://github.com/istio/istio/blob/fe4b8d8cd5d55ce9db7c87a129bdcd7c2bb6baf7/samples/addons/extras/prometheus-operator.yaml#L23-L31

Personally, I had issues with this specific case and spent some time trying to find the root cause.

@hagaibarel (Collaborator) commented:

> I also did not want to bring up this unique and rare example.
> But then there will be someone who does something like this: https://github.com/istio/istio/blob/fe4b8d8cd5d55ce9db7c87a129bdcd7c2bb6baf7/samples/addons/extras/prometheus-operator.yaml#L23-L31

That's a different case: that example is overwriting the pod annotations, while here we're discussing the metrics-service annotations specifically. It's also not as rare as you might think; a lot of places that don't use prometheus-operator have some kind of similar rules to use annotations for discovery.
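For context, the kind of annotation-based discovery referred to here typically looks like the snippet below. This is a generic sketch of a plain-Prometheus kubernetes_sd_configs job, not taken from this thread or from any particular chart:

scrape_configs:
  - job_name: kubernetes-service-endpoints
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only targets whose Service carries prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # if prometheus.io/port is set, scrape that port instead of the discovered one
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__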

Regarding your comment (#157 (comment)) and my comment in #158 (comment): can you provide a real-world example of why the current situation is limiting or prevented you from achieving your desired setup (not a theoretical example or something an anonymous person might want to do)?

@den-is (Contributor, Author) commented Dec 28, 2023

> That's a different case: that example is overwriting the pod annotations, while here we're discussing the metrics-service annotations specifically. It's also not as rare as you might think; a lot of places that don't use prometheus-operator have some kind of similar rules to use annotations for discovery.

IMHO this is not a different example; this is exactly what can go wrong. That example creates a ServiceMonitor for Prometheus Operator but, for some "great reason" (I understand the reason and can see it applying to some exotic cases), overrides the discovered values with values from the annotations of some pod X. Imagine that for some reason you enable istio-sidecar injection in CoreDNS's namespace: instead of the istio-sidecar's metrics port you will be scraping the CoreDNS metrics port, even though the job asks for istio-sidecar metrics.

Also, I'm well aware of the unofficial "hack" the community came up with long ago for dynamically discovering scrape targets using these annotations plus relabelings in non-Prometheus-Operator setups.

> Regarding your comment (#157 (comment)) and my comment in #158 (comment): can you provide a real-world example of why the current situation is limiting or prevented you from achieving your desired setup (not a theoretical example or something an anonymous person might want to do)?

I do not have any additional real-world examples beyond those I have shown above, because they are already real-world examples, from our real time-space continuum.

This enhancement is not critical. It was just my attempt to make the chart better for the community, even if, yes, it would require some admins to re-add the annotations to their CoreDNS deployments.

Curious how my previous PR was merged, then. What if some "theoretical" person runs PromQL queries on pod/service metadata.name to get specific metrics?
