
Upgrade to version 2.4.0 with "-disable-ipv6=true" cannot reach upstream. #3138

Closed
jppitout opened this issue Oct 11, 2022 · 8 comments · Fixed by #3139
Comments

@jppitout

jppitout commented Oct 11, 2022

Describe the bug
We've upgraded our nginx-ingress controller from version 2.1.2 to 2.4.0 and included the -disable-ipv6=true arg in our deployment manifest. TKGi clusters don't allow IPv6. (see also #2970)

The nginx-ingress pods start but cannot reach any upstream pods. Requests fail with 404 responses.

We've set the log verbosity to -v=3 and configured log-format as:

log-format: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" ua="$upstream_addr"'
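For context, that log-format is set via the controller's ConfigMap; a minimal sketch, assuming the name and namespace from the stock kubernetes-ingress manifests (nginx-config in nginx-ingress), which may differ in your deployment:

```yaml
# Sketch only: ConfigMap name/namespace are assumptions based on the
# stock kubernetes-ingress manifests.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  log-format: '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" ua="$upstream_addr"'
```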

Which produces these logs:

10.200.57.53 [11/Oct/2022:12:54:45 +0000] TCP 200 5625 1389 0.004 "vault.cluster.example.com"
10.200.57.53 - - [11/Oct/2022:12:54:46 +0000] "POST /v1/auth/kubernetes-cluster/login HTTP/1.1" 404 153 "-" "-" "-" ua="-"

On the pods themselves we're still seeing IPv6 listeners configured (along with the IPv4 listeners):

nginx@nginx-ingress-64556df554-4t44r:/etc/nginx/conf.d$ grep listen vault-vault.conf 
        listen 80;
        listen [::]:80;
        listen unix:/var/lib/nginx/passthrough-https.sock ssl proxy_protocol;
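With -disable-ipv6=true actually taking effect, we'd expect the generated server block to contain no [::] entries, i.e. roughly (a sketch based on the output above, minus the IPv6 listener):

```nginx
# Expected listen directives once -disable-ipv6=true is honored
# for Ingress resources:
listen 80;
listen unix:/var/lib/nginx/passthrough-https.sock ssl proxy_protocol;
```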

To Reproduce
Steps to reproduce the behavior:

  1. Update kubernetes-ingress deployment manifest from v2.1.2 to v2.4.0
  2. Include -disable-ipv6=true argument
  3. Update RBAC manifests
  4. Deploy manifests
  5. Check pod logs.
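For completeness, the flag is passed in the controller container's args in the Deployment manifest; a sketch, with container and image names assumed from the stock kubernetes-ingress manifests:

```yaml
# Sketch of the relevant container spec; names are assumptions based on
# the stock kubernetes-ingress deployment manifest.
containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:2.4.0
    args:
      - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
      - -disable-ipv6=true
      - -v=3
```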

Expected behavior
kubernetes-ingress is able to communicate with upstream pods/services.

Environment

  • Ingress Controller: v2.4.0
  • Kubernetes: v1.23.7
  • Kubernetes platform: TKGi
  • Using NGINX (no PLUS)

Additional context
See also #2970

@github-actions

Hi @jppitout thanks for reporting!

Be sure to check out the docs while you wait for a human to take a look at this 🙂

Cheers!

@tomasohaodha
Contributor

Thanks for reporting this @jppitout - we are investigating.

@haywoodsh
Contributor

Hi @jppitout, I am trying to reproduce the issue. Would you mind sharing your deployment config files and the generated NGINX conf file, please?

@jppitout
Author

jppitout commented Oct 11, 2022

@haywoodsh
Contributor

Hi @jppitout, thank you for the info. I was able to reproduce the issue. The command-line argument works as expected for VirtualServer and TransportServer resources, but not for Ingress resources at the moment. We will publish a fix soon.

@haywoodsh
Contributor

We merged a fix to the main branch and pushed a new image to Docker Hub. Please try to deploy again with the nginx/nginx-ingress:edge image, and let us know if the issue has been resolved.
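To pick up the fix, the controller image in the Deployment can be pointed at the edge tag; a sketch, with the container name assumed from the stock manifests:

```yaml
# Swap the controller image to the edge build containing the fix;
# the container name here is an assumption.
containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:edge
```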

@vepatel
Contributor

vepatel commented Oct 14, 2022

@jppitout can you please give us the full pod log along with the deployment YAML? From what I can see, your pod was starting earlier, but the issue was in the Ingress resource.

@jppitout
Author

jppitout commented Oct 14, 2022

My apologies, it is in fact working. A change had been made to the deployment manifest for another test that was being performed.

I can confirm this bug fix is working and the ingress controller is working as expected!
Many thanks!
