Kube-proxy support for Openshift #17863

Closed
ChrsMark opened this issue Apr 21, 2020 · 9 comments · Fixed by #30054
Labels: enhancement, Team:Integrations, Team:Platforms

Comments

@ChrsMark
Member

ChrsMark commented Apr 21, 2020

Describe the enhancement:
Right now the proxy metricset does not work out of the box on Openshift installations. It would be nice to investigate how we can make it work.

Extra info

Logs collected while trying to deploy on Openshift:

2020-04-06T11:35:37.554Z INFO module/wrapper.go:252 Error fetching data for metricset kubernetes.proxy: error getting processed metrics: error making http request: Get http://localhost:10249/metrics: dial tcp 127.0.0.1:10249: connect: connection refused
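For context, the proxy metricset ships with a default configuration along the following lines, assuming kube-proxy's stock metrics port (a minimal sketch of the relevant part):

- module: kubernetes
  metricsets:
    - proxy
  # default kube-proxy metrics endpoint; nothing is listening here on Openshift
  hosts: ["localhost:10249"]
  period: 10s

So the connection refused error above just means nothing is serving metrics on port 10249 on the Openshift node.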
Notes:

The kube-proxy pod runs in the kube-proxy namespace and not kube-system. Not that this would be the issue, since we are trying to access the host's network.
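For reference, the reason localhost is expected to reach the node's kube-proxy at all is that the Metricbeat DaemonSet runs on the host network; the relevant fragment of the pod spec looks roughly like this (sketch):

spec:
  template:
    spec:
      # share the node's network namespace so localhost targets the host itself
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet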

What is actually running in this pod:

kubectl -n kube-proxy exec -it kube-proxy-hj5zb /bin/bash           
[root@minishift origin]# curl http://0.0.0.0:10249/metrics
curl: (7) Failed connect to 0.0.0.0:10249; Connection refused
[root@minishift origin]# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.1  0.6 475736 28220 ?        Ssl  10:04   0:10 openshift start network --enable=proxy --listen=https://0.0.0.0:8444 --config=/etc/origin/node/node-config.yaml
root      5784  0.4  0.1  20244  7036 ?        Ss   11:55   0:00 /bin/bash
root      5831  0.0  0.0  55184  1844 ?        R+   11:55   0:00 ps aux
Info:

Container image: openshift/origin-control-plane:v3.11.0
See also: https://github.com/openshift/sdn

@ChrsMark added the [zube]: Investigate, enhancement, and Team:Platforms labels on Apr 21, 2020
@elasticmachine
Collaborator

Pinging @elastic/integrations-platforms (Team:Platforms)

@ChrsMark added the containers label on Apr 21, 2020
@zube bot removed the containers label on Apr 21, 2020
@botelastic

botelastic bot commented Mar 25, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@botelastic bot added the Stalled label on Mar 25, 2021
@exekias
Contributor

exekias commented Mar 25, 2021

This one is still valid, please don't close it 🙂

@botelastic bot removed the Stalled label on Mar 25, 2021
@Protopopys

@exekias, I have the same problem with my OpenShift cluster.
I have read the documentation and it looks like metricsBindAddress: 0.0.0.0:29101 in the sdn-config ConfigMap (namespace openshift-sdn) is the solution to the problem. I am currently testing an elastic-agent configuration with the following settings:

          - data_stream:
              dataset: kubernetes.proxy
              type: metrics
            metricsets:
              - proxy
            hosts:
              # Kubernetes
              # - 'localhost:10249'
              # Openshift
              - 'localhost:29101'
            period: 10s

@ChrsMark added the Team:Integrations label on Jan 20, 2022
@ChrsMark
Member Author

ChrsMark commented Jan 20, 2022

Thanks for your feedback @Protopopys! So, if I understand correctly, did you need to change kube-proxy's config? Also, could you provide more details about your Openshift installation: version, bare-metal/cloud, etc.?

FYI, we are running some verification for Agent these days. You can find more info at elastic/integrations#2065. cc: @tetianakravchenko

@Protopopys

Protopopys commented Jan 21, 2022

Hi @ChrsMark, for local testing I use "CodeReady Containers":

CodeReady Containers version: 1.38.0+659b2cbd
OpenShift version: 4.9.12 (embedded in executable)

We use a bare-metal installation of Openshift 4.9 in development and production.

You don't need to change the kube-proxy ConfigMap; metricsBindAddress: 0.0.0.0:29101 is its default value.
In CRC, you can find the ConfigMap at https://console-openshift-console.apps-crc.testing/k8s/ns/openshift-sdn/configmaps/sdn-config

kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:29101
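For anyone running standalone Metricbeat rather than Agent, the equivalent module configuration should presumably just point at the same port (a sketch; I have only tested the Agent configuration above):

- module: kubernetes
  metricsets:
    - proxy
  # Openshift: openshift-sdn serves the kube-proxy metrics on 29101 by default
  hosts: ["localhost:29101"]
  period: 10s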

My Elastic-Agent configMap - https://gist.github.com/Protopopys/024679185b7a7e24fd3b551b1e343c75

1 - kubernetes-cluster-metrics - https://gist.github.com/Protopopys/024679185b7a7e24fd3b551b1e343c75#file-gistfile1-txt-L191

2 - kubernetes-node-metrics - https://gist.github.com/Protopopys/024679185b7a7e24fd3b551b1e343c75#file-gistfile1-txt-L390

3 - container-log - https://gist.github.com/Protopopys/024679185b7a7e24fd3b551b1e343c75#file-gistfile1-txt-L502

@ChrsMark
Member Author

Thanks a lot for these details @Protopopys! We will take care of these issues, and of proxy specifically, as part of elastic/integrations#2065.

@tetianakravchenko
Contributor

tetianakravchenko commented Jan 26, 2022

Hi @Protopopys, thank you for the detailed reply!

I have a few questions regarding your setup:

In your config you have 'https://kubernetes.default.svc.cluster.local:443' - https://gist.github.com/Protopopys/024679185b7a7e24fd3b551b1e343c75#file-gistfile1-txt-L210 - instead of 'https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}' as defined here: https://github.com/elastic/beats/blob/master/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml#L46. Did you have any issue with 'https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}'? (Both variants are shown side by side below for clarity.)

Did you need to make any changes to the ClusterRole/Role(s) - https://github.com/elastic/beats/blob/master/deploy/kubernetes/elastic-agent-standalone-kubernetes.yaml#L640-L717 - to have elastic-agent running?
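To make the first question concrete, the two hosts variants side by side (illustrative fragment only):

hosts:
  # variant from the reference manifest (resolved from the in-cluster env vars)
  # - 'https://${env.KUBERNETES_SERVICE_HOST}:${env.KUBERNETES_SERVICE_PORT}'
  # variant from the linked gist (in-cluster DNS name of the API server)
  - 'https://kubernetes.default.svc.cluster.local:443'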

@Protopopys

Hi @tetianakravchenko!
1 - We can use a DNS record or an IP address (KUBERNETES_SERVICE_HOST). There were no problems in either case.
2 - My ClusterRole/Role(s) are identical to the ones you linked.

PS. I'm still testing and if I find anything, I'll let you know.
