ServiceMonitor for Prometheus exporter is referring to cluster port instead of metrics pod port #483
Comments
Can you provide the yaml for the ServiceMonitor you created?
Hello @HoustonPutman, below is the ServiceMonitor yaml; I used the default provided in the Solr Operator documentation. It begins with `apiVersion: monitoring.coreos.com/v1`.
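The rest of that manifest was cut off above. As a rough sketch, the default ServiceMonitor from the Solr Operator documentation looks something like this; the resource name, labels, and namespace below are assumptions, not the reporter's exact values:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: solr-metrics              # assumed name
  labels:
    release: prometheus-stack     # assumed; must match your Prometheus's ServiceMonitor selector
spec:
  selector:
    matchLabels:
      solr-prometheus-exporter: example-solr-metrics   # assumed label on the metrics Service
  namespaceSelector:
    matchNames:
      - default                   # assumed namespace
  endpoints:
    - interval: 20s
      scheme: http
      path: /metrics
```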
So you are using a ServiceMonitor, and the Solr metrics service is listening on port 80, or at least it should be... The pod is listening on port 8080, but the service forwards 80 -> 8080 when sending the request to the pod. I have almost the exact same thing working correctly. What version of the Prometheus stack are you running? Also, can you provide information on your Kube cluster (version, vendor, etc.)? I have a feeling there's an issue with your networking.
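To make the intended port mapping concrete, the metrics Service is expected to look roughly like the sketch below; the names and labels are placeholders, only the 80 -> 8080 mapping reflects what is described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-solr-metrics                          # placeholder name
spec:
  selector:
    solr-prometheus-exporter: example-solr-metrics    # placeholder pod label
  ports:
    - name: solr-metrics
      port: 80          # port the Service exposes
      targetPort: 8080  # port the exporter container actually listens on
```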
You are right, that's how it's supposed to work. However, the service endpoint in the Prometheus targets is referring to http://podIP:80/metrics and, because of that, the connection is getting refused. My other default service endpoints for Prometheus are working as expected. Prometheus:
Are you sure you don't have a podMonitor defined as well? Looks like there might be a bug in the Prometheus operator? In the meantime you can use …
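The end of that suggestion did not survive here. One possible interim approach, an assumption on my part rather than the original advice, is to point the ServiceMonitor endpoint straight at the pod port with `targetPort`:

```yaml
# ServiceMonitor endpoint section scraping the container port directly (sketch)
endpoints:
  - targetPort: 8080   # bypass the Service port and hit the pod port
    path: /metrics
    interval: 20s
```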
We have the same problem here. We are using solr-operator 0.6 and Prometheus 2.39.1, hosted on GKE version 1.21.
As you can see in the screenshot, Prometheus tries to connect to the pod on port 80, which is the wrong port. Our workaround is to add a Prometheus scraping annotation to the exporter pod:
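The annotation block itself was lost above. Assuming the conventional kubernetes-pods scrape job is configured in Prometheus, the annotations on the exporter pod would look roughly like this (in the SolrPrometheusExporter resource they would be set through its pod options; the values are assumptions):

```yaml
# Prometheus scrape annotations on the exporter pod (assumed values)
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "8080"
```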
In that screenshot, is the IP the pod IP or the service IP?
Even after adding the pod annotation, Prometheus is still looking at port 80 on the pod IP in my case. Something is seriously wrong with this. Below is my exporter config:
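The exporter config was also cut off. A minimal SolrPrometheusExporter along the lines of the documentation, with the names and the SolrCloud reference as assumptions, looks like:

```yaml
apiVersion: solr.apache.org/v1beta1
kind: SolrPrometheusExporter
metadata:
  name: example                 # assumed name
spec:
  solrReference:
    cloud:
      name: example             # assumed SolrCloud name
  numThreads: 4
```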
It is the pod IP.
The old failed target will still exist, but there should be a new target which should work.
Can you share your Prometheus scraping config? This seems to be a Prometheus issue...
We are having the same issue. We've also bypassed the problem by enabling scraping of the pods directly:
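The config that followed was not captured in this thread. One way to scrape the exporter pods directly, shown here as a PodMonitor, which is an assumption about the approach actually used, would be:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: solr-metrics-pods               # placeholder name
  labels:
    release: prometheus-stack           # assumed; must match your Prometheus's PodMonitor selector
spec:
  selector:
    matchLabels:
      solr-prometheus-exporter: example-solr-metrics   # assumed exporter pod label
  podMetricsEndpoints:
    - port: solr-metrics                # assumed container port name
      path: /metrics
```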
The Prometheus scraping config we use is the default one.
Looking at the code, it looks like the metrics service port is fixed at 80. Any attempts to overwrite this by using custom …
We have exactly the same issue.
Indeed, this is a valid workaround.
So it seems like everyone is using … I think the issue is that this feature was designed with the … I will try to test this locally, but it might be difficult. I'm happy to create a test Docker image for anyone else to try out (based on v0.6.0) and see if it fixes things for them.
Situation before Solr: …
Situation after Solr: …
No metrics are scraped from Solr, as it seems Prometheus is using the endpoints by default?
I have a patch that I think should work: #539. Steps to try it:
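The concrete steps were truncated here. In general terms, trying the patch means pointing the solr-operator Helm release at the test image; the repository and tag below are placeholders, not the actual test image, and the value names are an assumption about the chart:

```yaml
# Helm values override for the solr-operator chart (field names assumed)
image:
  repository: <test-repo>/solr-operator   # placeholder, not the real test image
  tag: <test-tag>                          # placeholder
```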
If it does work, we can get this into the next release.
It seems to be working.
Cool, I will go ahead and merge then!
I have followed the Solr Operator documentation to configure the SolrPrometheusExporter; however, after creating the ServiceMonitor, the service endpoint is going inactive. After further troubleshooting, I realized Prometheus is trying to connect to port 80 whereas the metrics server is running on port 8080. Is it possible to pass the port into the ServiceMonitor?
Get "http://x.x.x.x:80/metrics": dial tcp x.x.x.x:80: connect: connection refused