Catch-all server_name _ block of /etc/nginx/nginx.conf is being set to the upstream of the last ingress processed #8823
@ericdstein: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@harry1064 or @Volatus, this may interest you.

@longwuyuan
It looks like we are running into this because we are defining both a default backend and rules in the same ingress spec. As I mentioned, it was actually #6576 that affected us, as that is where the assignment to servers[defServerName].Locations[0] changed.
I believe #8473 is the same issue.
In your ingress spec, if you change the path below from "/" to something else, then what is the behavior?
Also, I think you need to check under the respective server_name in nginx.conf.
@harry1064, are you on Kubernetes Slack in the ingress-nginx-dev channel? If yes, can you ping me there please? Thanks.
@longwuyuan Yes, sure.

Thank you for opening this issue @ericdstein; I was going to do it myself today if no one was going to pick up #8473 again after my comment :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

/lifecycle rotten

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What happened:

The ingress variables, including $proxy_upstream_name, for the catch-all server_name _ block of /etc/nginx/nginx.conf are set from the upstream of the last ingress processed that has a default backend defined in its ingress spec. This causes requests for any URL paths or hosts that the Ingress-NGINX controller doesn't understand to be sent to that upstream instead of to a global catch-all default backend.

Narrowed this down to the change in PR #1379, but it started affecting us after PR #6576.
In internal/ingress/controller/controller.go, in the createServers func, the default server and root location (server_name _) is initialized and added to the server map. Then each ingress is processed and added to the server map. While handling the special "catch all" case of each ingress (an Ingress with a backend but no rule), the pointer servers[defServerName].Locations[0] is assigned to the defLoc var. defLoc is then updated with the backend, Upstream, and Ingress of the ingress currently being processed. Since defLoc is a pointer, the original servers[defServerName].Locations[0] is updated as well, resulting in the global default catch-all backend (server_name _) being assigned the upstream of the last ingress processed.

What you expected to happen:
The catch-all server_name _ block of /etc/nginx/nginx.conf should be set so that any traffic the Ingress-NGINX controller doesn't understand is sent to the global catch-all default backend.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version): v1.2.0
Kubernetes version (use kubectl version): v1.20.12

Environment:
Cloud provider or hardware configuration: QEMU VM on bare metal
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a): Linux t7819mws0001 5.15.13-1.el7.elrepo.x86_64 #1 SMP Tue Jan 4 17:33:28 EST 2022 x86_64 x86_64 x86_64 GNU/Linux
How was the ingress-nginx-controller installed:
Others: kubectl describe ... of any custom configmap(s) created and in use

How to reproduce this issue:
I will look at reproducing this in minikube. I believe the cause is fairly clear in that the values of servers[defServerName].Locations[0] are reassigned to values from each ingress while they are being processed.

Anything else we need to know:
All our ingresses are standardized and are defined similarly to: