
How to terminate TLS MQTT to TCP with the Kubernetes ingress-nginx controller. ERROR: Client network socket disconnected before secure TLS connection #369

Closed · xtianus79 opened this issue Sep 2, 2022 · 5 comments
Labels: question (Further information is requested), triage/stale

xtianus79 commented Sep 2, 2022

Not sure if this is a bug, but it is something I am trying to work through and I am not sure what I can do to get past the error.

I feel like I am close, but I cannot tell whether I am simply terminating incorrectly with the ingress controller (and what I am attempting is not possible), or whether the certificate is in the way and I need to do something to resolve the issue.

Another idea I am considering is putting an HAProxy LB in front of the ingress-nginx controller.

First, I am using the ingress-nginx TCP/UDP service exposure for Kubernetes described here, with a Let's Encrypt CA:

https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/

This allows me, through my Helm installation, to create a TCP config and mapping.

```sh
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace --namespace $NAMESPACE \
  --set controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz \
  --set controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name=$DNS_LABEL \
  --set controller.service.loadBalancerIP=$STATIC_IP \
  --set tcp.18083=$NAMESPACE/emqx-ee:18083 \
  --set tcp.8883=$NAMESPACE/emqx-ee:8883
```
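For readers following along, those `--set tcp.*` flags render a tcp-services ConfigMap along these lines. This is a sketch only: the ConfigMap name follows the chart's fullname convention, and the namespace is inferred from the mappings quoted later in this issue.

```yaml
# Sketch of the tcp-services ConfigMap rendered from the --set tcp.* flags.
# Name and namespace are assumptions based on the release name and the
# ingress-emqx/emqx-ee mappings shown below in this issue.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-emqx
data:
  "18083": ingress-emqx/emqx-ee:18083  # EMQX dashboard, plain TCP
  "8883": ingress-emqx/emqx-ee:8883    # MQTT/TLS, proxied as opaque TCP (no termination)
```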

That does allow an mqtts:// SSL connection to go through all the way to the backend pod on 8883. The issue is that I want to terminate the TLS at the load balancer and send the resulting incoming traffic on to the plain TCP 1883 port.

To try this, I changed the TCP port definition

from this:

```yaml
'8883': ingress-emqx/emqx-ee:8883
```

to this:

```yaml
'8883': ingress-emqx/emqx-ee:1883
```

When I connect the MQTTX client to 8883 with the CA certificate, I get the following error:

```
Error: Client network socket disconnected before secure TLS connection was established
```
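One way to confirm what is going on at the wire level (a hedged diagnostic sketch, with `<LB_IP>` standing in for the load balancer address): the tcp-services mapping proxies opaque TCP, so with the 8883 -> 1883 mapping the client's TLS ClientHello lands on EMQX's plaintext 1883 listener and nothing ever answers the handshake.

```sh
# Hedged diagnostic: check whether anything actually speaks TLS on 8883.
# <LB_IP> is a placeholder for the ingress-nginx load balancer address.
openssl s_client -connect <LB_IP>:8883 </dev/null
```

With the 8883 -> 8883 mapping this should print EMQX's certificate chain; with 8883 -> 1883 it should fail to complete the handshake, matching the client error above.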

What exactly performs the termination for ingress-nginx? Is it only the Ingress rule, or is it the controller's TCP proxy / PROXY protocol?

https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
https://kubernetes.github.io/ingress-nginx/examples/tls-termination/

Is the only way to terminate to use the Ingress rule/rewrite rule? If so, I think this is where the controller won't work, because Ingress routing rules only work with HTTP and won't act on the layer-7 MQTT protocol.

Is there a way for an Ingress resource to align with the ingress-controller Service? I'm not sure it even matters for the previous issue. I simply want to take the incoming 8883 port and switch it to the resulting 1883 port and IP.

xtianus79 added the bug (Something isn't working) label Sep 2, 2022
Gala-R (Contributor) commented Sep 2, 2022

You can take a look at this discussion: kubernetes/ingress-nginx#636. As I said before, ingress-nginx is not friendly to TCP and currently does not support TLS offloading for TCP services. @xtianus79
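For context, the workaround in that discussion amounts to patching the controller's nginx.tmpl so the generated stream block terminates TLS itself. Below is a minimal sketch of what the resulting config would have to contain; stock ingress-nginx does not emit this for tcp-services, and the certificate paths and upstream name are illustrative assumptions.

```nginx
# Hedged sketch only: stock ingress-nginx does NOT generate this for tcp-services.
stream {
    upstream emqx_mqtt {
        # plaintext MQTT backend (service address is an assumption)
        server emqx-ee.ingress-emqx.svc.cluster.local:1883;
    }
    server {
        listen 8883 ssl;                              # terminate TLS here
        ssl_certificate     /etc/nginx/tls/tls.crt;   # illustrative cert paths
        ssl_certificate_key /etc/nginx/tls/tls.key;
        proxy_pass emqx_mqtt;                         # forward decrypted bytes as raw TCP
    }
}
```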

xtianus79 (Author) commented Sep 2, 2022

@Gala-R I think part of my issue is that I don't understand what the "offloading" or "termination" is technically doing, both in a general sense and in the context of the ingress-nginx setup. With that said, three questions:

1. Can I do the solution from that link? Would you recommend it? Is there a good way to change that file across all 3 emqx pod instances? The linked steps say to temporarily run an ingress-nginx pod with default settings and save a copy of its /etc/nginx/template/nginx.tmpl file. This is important because I've noticed there are some variations across versions and they can cause issues too, so it's important to make your edits over the default for your version.
2. Can I do the HAProxy setup you did, keep the ingress-nginx controller load balancer only to offload the dashboard (443 -> 18083), and put an HAProxy like you've described in front of the NGINX controller for the pod setup that is already there? (A sketch of that HAProxy configuration is below.)
3. Or should I just remove the NGINX controller and go for HAProxy?

I am trying to think whether there is a reason to keep the controller with its load balancer if I am only going to use it for the dashboard.
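For reference, a minimal sketch of the HAProxy option from question 2: terminate TLS on 8883 and forward plain TCP to EMQX's 1883 listener. The certificate bundle path and backend service address are illustrative assumptions, not values from this thread.

```
frontend mqtts
    mode tcp
    # terminate TLS here; the pem must contain cert + key (path is an assumption)
    bind *:8883 ssl crt /etc/haproxy/certs/emqx.pem
    default_backend emqx_tcp

backend emqx_tcp
    mode tcp
    # forward the decrypted stream to the plaintext MQTT listener; adding
    # send-proxy here (with proxy_protocol enabled on the EMQX listener)
    # is what would preserve the client IP end to end
    server emqx emqx-ee.ingress-emqx.svc.cluster.local:1883 check
```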

xtianus79 (Author) commented
Also, @Gala-R, do you think it would be worth trying to figure out what is going on here? It seems like if the cert issue were not there, it would go through as expected. I get that the Ingress rule won't work, but I feel like it also shouldn't matter. Shouldn't the load balancer, which is what Helm installs and where the TCP mapping is set, be enough? Shouldn't the 8883 -> 1883 switch also be considered termination?

You make a good point in the HAProxy setup that the IP needs to be "known" from beginning to end, the end being the pod address (emqx-ee-0...). The ingress-nginx load balancer should be able to do this as well, and it does fine for 8883 -> 8883 or 1883 -> 1883. It is the switch that is causing the cert issue. Something tells me that the socket disconnecting before the connection could be established is close to what I want to happen: connect via TLS, terminate, and then proceed to a port on the same IP without pause.

Gala-R (Contributor) commented Sep 6, 2022

1. Termination means processing mTLS on the LB, not on the EMQX side.
2. I don't recommend the solution from the link above.
3. I would like to know what your needs are. @xtianus79
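To make the contrast in point 1 concrete: the working 8883 -> 8883 path means EMQX itself is terminating TLS. A hedged sketch of what that looks like in EMQX 4.x-style configuration, as shipped in the emqx-ee image (certificate paths are illustrative assumptions):

```
# Hedged sketch (EMQX 4.x emqx.conf style): EMQX terminates TLS itself on 8883.
# Certificate paths are illustrative assumptions.
listener.ssl.external = 8883
listener.ssl.external.keyfile = etc/certs/key.pem
listener.ssl.external.certfile = etc/certs/cert.pem
listener.ssl.external.cacertfile = etc/certs/cacert.pem
```

"Termination on the LB" moves that handshake (and the certificates) off EMQX and onto the proxy, which then speaks plain TCP to 1883.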

Rory-Z added the question (Further information is requested) label and removed the bug (Something isn't working) label Sep 13, 2022
github-actions bot commented
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
