[nginx] Client remote address is discarded when using TLS and proxy-protocol #672
Comments
I just ran into this issue as well with our setup using an AWS ELB in TCP mode in front of the NGINX Ingress Controller. 0.9.0-beta.3 works as expected, but proxy-protocol support in 0.9.0-beta.4 & 0.9.0-beta.5 is broken.
I've submitted #675, which fixes this.
What's the usual workflow for closing out issues? Since #675 is merged, it would seem this is resolved.
@arjanschaaf @dpratt please test if the image
@arjanschaaf do you see the correct IP in the logs?
@arjanschaaf 0.113 is just a test image with a fix for that
Yes, I do see the correct IP address in the log file. So that part has been fixed! I wasn't sure if I should expect this image to be fully functional 😄 Like I stated before: this image isn't functioning correctly in my setup, but it does log the correct IP address! FYI: my test setup is AWS based, with a single ELB (TCP mode with proxy-protocol support) in front of the NGINX Ingress Controller.
Please update the image to
Closing. Please reopen if the issue persists after the upgrade to 0.9.0-beta.6.
0.9.0-beta.7 works for me: thanks!
Confirm.
Just a follow-up: 10.46.0.0 is still some internal k8s address. Also trying
+1
Could you tell me which version you were using that works?
I can confirm that 0.9.0-beta.7 works; I had to roll back to that version.
That's strange. I just tested 0.9.0-beta.7 this week and, even though it's not passing 127.0.0.1, it's still passing some k8s internal IP.
In my case, it only works setting
Latest working version is 0.9.0-beta.8
@juliohm1978 Sorry, was out for a few days. I can confirm both beta.10 and beta.11 are broken.
I was looking into this myself. What changed after b8 was #890, fixing #885. The somewhat unfortunate thing about #890 is that many people were relying on that funny default for this functionality. But for a quick fix just add
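The snippet that followed was lost in extraction. Judging from the rest of this thread, the quick fix presumably refers to the `use-proxy-protocol` key in the controller's ConfigMap; a sketch, with the ConfigMap name and namespace as assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller   # assumption: must match the controller's --configmap=<ns>/<name> flag
  namespace: kube-system           # assumption
data:
  # trust the PROXY protocol header for the client address,
  # restoring the pre-#890 behaviour
  use-proxy-protocol: "true"
```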
The reasoning behind the previous behaviour is the 443->442 trick we do for TLS SNI. @aledbf I don't think the
@n1koo Sorry for going a little off-topic, but can you point me somewhere where I can read more about this hack? As I understand it, Nginx has good TLS SNI support itself, and I'm confused about why it was implemented here the way it is now. It would be really great if you can. Thank you!
BTW, in my case that breaks HTTP support, obviously, as I do not have any load balancer in front of the Nginx Ingress controller that can proxy HTTP connections to me via proxy-protocol.
I just wanted to confirm that
Could you please provide an example of your configuration? The Service/Deployment objects you used? I'd like to compare it with my YAML. I'm also using
@juliohm1978 sure! Please note that an important detail in my setup is that I have a classic ELB with proxy-protocol support enabled in front of my NGINX Ingress Controller. Enabling proxy-protocol support isn't possible in the AWS web UI; see http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html for how to do it. For the configuration of my ingress controller I have a ConfigMap with some additional configuration which looks like:
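The ConfigMap contents did not survive extraction; given the ELB-in-TCP-mode setup described above, it plausibly contained the proxy-protocol switch (a reconstruction, object names are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
data:
  # the ELB prepends a PROXY protocol header; tell NGINX to parse it
  use-proxy-protocol: "true"
```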
The service configuration:
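The Service YAML is also missing; reconstructing from the nodePorts named later in this thread (31111 for HTTP, 31112 for HTTPS) and the health-check port described in the next paragraph, it might have looked like this (the health-check ports and anything other than 31111/31112 are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    nodePort: 31111   # ELB TCP listener on :80 forwards here
  - name: https
    port: 443
    nodePort: 31112   # ELB TCP listener on :443 forwards here
  - name: healthz
    port: 10254       # controller /healthz endpoint
    nodePort: 31113   # assumption: the ELB health check targets this
```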
Ports 80 & 443 are exposed by the ELB; the health-check port is used by the ELB to determine which nodes in the EC2 autoscaling group actually run an Ingress Controller. The ELB only sends traffic to the nodes which run such a pod. The Ingress Controller Deployment config:
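The Deployment spec was dropped as well; a minimal sketch for a beta-era controller, using flags the controller actually supports (image tag, labels, and the default backend service name are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress-controller
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
      - name: nginx-ingress-controller
        # tag is an assumption; the thread mentions several 0.9.0 betas
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.8
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443
        - containerPort: 10254   # health endpoint
```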
Let me know if this helps out.
I was also able to "fix" the problem by adding an LB (non proxy-protocol though) in front of the Nginx Ingress controller. But the problem remains if the Nginx Ingress controller is the outermost entry point.
@arjanschaaf, Thank you. Sorry for the late response, it's been busy around here. The main difference I see is that I'm using a ClusterIP Service, instead of NodePort.
I'll admit some ignorance on my part. In the case of a ClusterIP Service, clients get NATed into the cluster through the host's
I'm having trouble finding out the correct way to expose ports 443 and 80 to the external world using NodePort. What kind of ELB am I supposed to set up outside the k8s cluster that understands which host ports represent 80 and 443? Can anyone give me an example?
@juliohm1978 I'm using a classic ELB which points HTTP traffic coming in on port 80 towards nodePort 31111 and HTTPS traffic coming in on 443 towards nodePort 31112.
@dragonsmith are you sure about this?
I have been looking into this too. We currently run on GKE, and after the change in #890 we lost the ability to access the real client IP. Enabling
We can get the old behaviour back by simply using
@aledbf, since #890 didn't have too much information attached to it, would you be able to provide more insight into the change? In its current state, without
@arjanschaaf @dragonsmith when we create a service with type=LoadBalancer, by default the ports are TCP, and that's the reason why we don't see the real source IP. In AWS the solution is to add additional annotations to the service:
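The annotation snippet did not survive extraction. The stock AWS annotations that move a type=LoadBalancer Service off plain TCP are along these lines (a reconstruction, not necessarily the exact snippet posted; pick one of the two approaches, and the Service name is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  annotations:
    # option 1: have the ELB speak HTTP to the backends
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # option 2: keep TCP listeners but have the ELB send the PROXY
    # protocol header (pair with use-proxy-protocol: "true" above)
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```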
@aledbf not sure if I understand your solution for AWS. Will this lead to an ELB being created by the k8s LoadBalancer service with HTTP-based ports instead of TCP-based ports? Personally, I bypass the creation of ELB instances by k8s LoadBalancer services: I created our ELB "by hand" with TCP-based ports and proxy-protocol enabled, and coupled it with a simple NodePort-based k8s service.
That works too :)
@aledbf one of the disadvantages of using the ELB in HTTP mode instead of TCP is that you need to configure your SSL certificate on the ELB, whereas we currently use the NGINX ingress controller for SSL offloading. It all depends on your specific situation of course, but we currently use multiple wildcard certificates for different domains simultaneously. This works perfectly with one Ingress Controller + one ELB in TCP mode, but to facilitate this with Amazon ELB I would need to create an ELB for every SSL certificate I deploy, because ELB only supports one SSL certificate per instance. So in our situation we would need to create many ELBs and incur the additional costs (and hassle).
This is still a problem for people using HTTPS and a TCP load balancer. I have described the details that I have found in my own testing in #1067 (comment)
Wouldn't it be possible to skip the 442/443 port hack for the nginx controller and just use ssl_preread instead?
We tried that without success.
I confirm that @arjanschaaf's suggestion fixes the issue in
Note: I'm using it on GKE with a LoadBalancer service and a static IP.
@arjanschaaf Sorry, I was inaccurate in my terms. In my case, there is no AWS at all: I'm running clusters on DigitalOcean & bare metal. So adding one more Nginx as an LB in front of the Nginx Ingress Controller solves the problem, as it can set
@aledbf The problem is still present even if there is no
Sorry again, I've concentrated too much on my end of the problem and did not mention clearly that I'm not talking in an AWS context.
Closing. In current master the ssl-passthrough feature is behind a flag and disabled by default.
@dragonsmith please open a new issue describing the problem you have. Please include the k8s version, where you are running the cluster, and how to reproduce the issue.
@aledbf I'll test the beta-12 release first. At first glance, it seems
I have set use-proxy-protocol: "true", and in the log-format I have "remote_addr": "$proxy_protocol_addr", in ingress controller version gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 inside GKE Kubernetes. But I am not seeing the client IP; it shows only a private IP. Could you help here? The GKE cluster version is 1.9.2.
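For reference, the two settings quoted above live in the controller's ConfigMap, roughly like this (only `use-proxy-protocol` and the `$proxy_protocol_addr` field are from the comment; the key name `log-format-upstream` and the rest of the format string are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
  # $proxy_protocol_addr is only populated when whatever sits in front
  # of NGINX actually sends the PROXY protocol header; behind a plain
  # TCP hop it stays empty or shows an internal address
  log-format-upstream: '{"remote_addr": "$proxy_protocol_addr", "request": "$request", "status": $status}'
```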
@Karthickarumugam check out the quay.io repo and upgrade; that's already quite an old version you have there.
Thanks @YouriT. externalTrafficPolicy: Local
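For anyone landing here: `externalTrafficPolicy: Local` is a standard Service field that keeps external traffic on the node that received it, so kube-proxy does not SNAT it and the controller sees the real client IP. A minimal sketch (names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  # deliver external traffic only to pods on the receiving node and
  # preserve the client source IP (no cross-node SNAT); nodes without
  # a controller pod fail the LB health check and get no traffic
  externalTrafficPolicy: Local
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```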
Hi @Karthickarumugam, how did you get it to work?
It looks like the nginx ingress controller has recently added some bits that directly handle SNI and proxy-protocol for port 443 in the golang wrapper instead of nginx.
This unfortunately has the side effect of discarding the remote address in nginx and messing up the $proxy_add_x_forwarded_for variable. From nginx's point of view, all incoming requests that come in over its SSL port (442) have an origin address of 127.0.0.1. This breaks a lot of stuff for us internally, not the least of which are HTTP request logs: every SSL request (which is nearly all of them) always shows up with a blank remote address.
The short-term solution would be to have the golang post-SNI TCP proxy also start its new connections to the backend with a proxy-protocol header.
In the long term, this is likely going to be a problem for SSL-passthrough containers as well, since they will see a source IP of the nginx ingress controller - it might be a good idea to optionally implement proxy-protocol for L4 backend connections as well.