
Proxy exec problem #8119

Open
sanjesss opened this issue Nov 7, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@sanjesss

sanjesss commented Nov 7, 2024

Hi, I have a problem with exec inside a pod.
I have a Kubernetes cluster on bare metal. When connecting to it directly on the master node via Lens, exec inside a pod works fine.
I can also connect to this cluster via HAProxy, but in that case exec inside the pod via Lens does not work.
HAProxy runs in TCP mode, proxying all requests from *:6443 to kube-master:6443 without any processing.
At the same time, exec inside the pod works through k9s and kubectl both directly and through HAProxy.
The same thing happens with exec into a node.
I suspect that Lens uses some additional port for exec, which I also need to proxy.
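
As far as I understand, Kubernetes exec/attach/port-forward go over the same API server port (6443 here) as ordinary requests, only with a protocol upgrade (SPDY/WebSocket) on the pod's exec subresource, so there should be no separate port to proxy. A quick way to double-check this through the proxied endpoint (a sketch, assuming the kubeconfig's server points at the HAProxy frontend, e.g. https://cluster0.eu:6443):

  # Same exec as in the logs below, but with verbose API logging
  kubectl -v=8 exec -i -t -n namespace pod-name-797fcdf95b-gpfl1 -c pod-name -- sh
  # The request should show up as a POST to
  # https://<proxy-host>:6443/api/v1/namespaces/namespace/pods/pod-name-797fcdf95b-gpfl1/exec?...
  # i.e. the same host:port as every other request, just with a connection upgrade.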

Environment (please complete the following information):

  • Lens Version:
    Lens: 2024.10.171859-latest
    Electron: 31.6.0
    Chrome: 126.0.6478.234
    Node: 20.17.0
  • OS: Windows 11, MacOS Sequoia
  • Installation method (e.g. snap or AppImage in Linux): *.dmg and *.exe from the Lens website

Logs exec into pod:

PS C:\Users\sanje> kubectl exec -i -t -n namespace pod-name-797fcdf95b-gpfl1 -c  pod-name -- sh -c "clear; (bash || ash || sh)"
Error from server:

Logs exec into node:

Error from server:
Terminal will auto-close in 15 seconds ...

HAProxy config:

frontend kubernetes_api
    mode tcp
    bind *:6443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    acl host_cluster0 req_ssl_sni -i cluster0.eu
    acl host_cluster1 req_ssl_sni -i cluster1.eu
    acl host_cluster2 req_ssl_sni -i cluster2.eu
    use_backend kubeapi_cluster0 if host_cluster0
    use_backend kubeapi_cluster1 if host_cluster1
    use_backend kubeapi_cluster2 if host_cluster2

backend kubeapi_cluster0
    mode tcp
    balance roundrobin
    server cluster0_master master.cluster0:6443 check

backend kubeapi_cluster1
    mode tcp
    balance roundrobin
    server cluster1_master master.cluster1:6443 check

backend kubeapi_cluster2
    mode tcp
    balance roundrobin
    server cluster2_master master.cluster2:6443 check
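
One more thing that might help narrow it down: the frontend above routes purely on the TLS SNI in the ClientHello and has no default_backend, so a connection that does not present one of those three server names never matches a backend and is simply closed. A minimal logging sketch (assuming a log target is already defined in the global section) to see whether the failing connections reach a backend at all:

frontend kubernetes_api
    mode tcp
    bind *:6443
    log global                        # reuse the log target from the global section
    option tcplog                     # log each connection with the chosen backend and termination flags
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    # ... existing req_ssl_sni ACLs and use_backend lines unchanged ...
    default_backend kubeapi_cluster0  # optional catch-all for connections with no matching SNI

The termination flags in the tcplog lines should show whether a connection was routed and, if not, where it was cut off.
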
sanjesss added the bug label Nov 7, 2024
@Christiaanvdl

Hello @sanjesss, thank you for reaching out. I have contacted our development team regarding your issue.

@dechegaray

There is an open issue related to this topic. The problem is that Lens uses an internal proxy to route requests to the Kube API of each cluster, and when other proxies are put in front of it (e.g. HAProxy, Tailscale, etc.), ours is disrupted and cannot properly execute node shells, pod shells, or port-forwards.

We are currently working on a solution for this. Please stay tuned for an update that will patch this issue in the next couple of weeks.
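
In the meantime, since exec reportedly works when Lens connects to the master node directly, one possible stopgap (just a sketch, not an official workaround) is an extra kubeconfig context that bypasses the HAProxy frontend and points straight at the master:

  # Hostnames below are taken from the HAProxy config in this issue; <existing-user> is a placeholder
  kubectl config set-cluster cluster0-direct --server=https://master.cluster0:6443 \
    --certificate-authority=/path/to/ca.crt
  kubectl config set-context cluster0-direct --cluster=cluster0-direct --user=<existing-user>
  # Note: the API server certificate must include master.cluster0 as a SAN for TLS verification to pass.
  # Select this context in Lens until the internal-proxy fix ships.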
