
Missing prometheus metrics in versions 0.20.0 and later #4066

Closed
sahilpanjwani opened this issue May 7, 2019 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@sahilpanjwani

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: 0.20.0 and all later versions

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure bare metal.
  • OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)
  • Kernel (e.g. uname -a): Linux machine-1 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others:

What happened:
We are setting up monitoring for our ingress controller. All the required metrics appear when using versions up to 0.19.0; however, as soon as we switch to any later version with the exact same setup, several important metrics stop appearing.

Here are the metrics that appear in v0.19.0 but are absent in v0.24.1 (a quick way to verify is sketched after the list):
nginx_ingress_controller_bytes_sent
nginx_ingress_controller_ingress_upstream_latency_seconds
nginx_ingress_controller_request_duration_seconds
nginx_ingress_controller_request_size
nginx_ingress_controller_requests
nginx_ingress_controller_response_duration_seconds
nginx_ingress_controller_response_size
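
One quick way to compare the two versions is to grep the scrape output for one of the affected metric families. A minimal sketch, assuming the pod placeholder and namespace from the reproduction steps below:

# Count samples of one affected metric family in the scrape output.
# <ingress-controller-pod> is a placeholder for the actual pod name.
# On the affected versions this prints 0; on 0.19.0 it prints a positive count.
kubectl exec -n ingress-nginx <ingress-controller-pod> -- \
  curl -s http://127.0.0.1:10254/metrics \
  | grep -c '^nginx_ingress_controller_requests'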

What you expected to happen: We expect the above-mentioned metrics to be present in all versions after 0.17.1.

How to reproduce it (as minimally and precisely as possible):
Create a test pod and service by applying the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"

---

kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678

Create a test ingress by applying the following YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
        - path: /apple
          backend:
            serviceName: apple-service
            servicePort: 5678

Deploy the ingress controller, with the necessary ConfigMaps and RBAC, by applying the manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
This deploys version 0.24.1. Call the test service a couple of times:
kubectl exec -it <ingress-controller-pod> -n ingress-nginx -- curl 0.0.0.0/apple
Then check the metrics:
kubectl exec -it <ingress-controller-pod> -n ingress-nginx -- curl 0.0.0.0:10254/metrics

Now try version 0.19.0: change the image tag in the deployment, add the flag --default-backend-service=default/apple-service to the container args, and repeat the service hits and metrics scrape (a sketch of the switch follows below).
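
A minimal sketch of that version switch, assuming the deployment and container from the stock mandatory.yaml are both named nginx-ingress-controller (adjust the names to your cluster):

# Roll the controller image back to 0.19.0.
kubectl -n ingress-nginx set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0

# Add the default-backend flag to the container args, e.g. manually:
kubectl -n ingress-nginx edit deployment nginx-ingress-controller
# under spec.template.spec.containers[0].args add:
#   - --default-backend-service=default/apple-service

After the rollout finishes, repeat the curl and metrics scrape above; the listed metric families should be present again on 0.19.0.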

Anything else we need to know:

@baabel

baabel commented May 10, 2019

I am experiencing the same issue; after 0.19.0, Prometheus queries for the above metrics return no results.

@curantes

duplicate of #3713

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 29, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 28, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
