
Ingress NGINX v1.10.2 & v1.11.0 throw core dumps #11588

Closed
auwaerter opened this issue Jul 9, 2024 · 57 comments · Fixed by #11594
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@auwaerter

auwaerter commented Jul 9, 2024

What happened:

I was updating from ingress-nginx 1.10.1 to 1.11.0 using helm upgrade ingress-nginx /helm/ingress-nginx --install -f /helm/ingress-nginx/custom-values.yaml

All five ingresses assigned to that ingress-nginx instance went unresponsive, came up a few times but then crashed constantly.

When opening the ingress hosts in the browser, I see lots of log messages such as:

2024/07/09 07:00:34 [alert] 23#23: worker process 229 exited on signal 11 (core dumped)
2024/07/09 07:00:35 [alert] 23#23: worker process 29 exited on signal 11 (core dumped)
2024/07/09 07:02:17 [alert] 23#23: worker process 328 exited on signal 11 (core dumped)
2024/07/09 07:02:22 [alert] 23#23: worker process 295 exited on signal 11 (core dumped)

A fallback to 1.10.1 fixed the issue.
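
For anyone needing the same escape hatch: the fallback can be done through the Helm release history; a rough sketch (the revision number is a placeholder, pick the last revision that deployed chart 4.10.1):

helm -n ingress-nginx history ingress-nginx
helm -n ingress-nginx rollback ingress-nginx <REVISION>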

What you expected to happen:

A normal (rolling) deployment with responsive hosts and no core dumps.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

NGINX Ingress controller
Release: v1.11.0
Build: 96dea88
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.5
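
For reference, a sketch of how this output can be collected; the namespace and label selector are assumptions based on a default Helm install:

kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
kubectl -n ingress-nginx exec -it <controller-pod> -- nginx-ingress-controller --version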

Kubernetes version (use kubectl version):

Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.6

Environment:

  • Cloud provider or hardware configuration: AWS

  • OS (e.g. from /etc/os-release): Ubuntu 20.04.6 LTS / containerd://1.7.18

  • Kernel (e.g. uname -a): 5.15.0-107-generic

  • Basic cluster related info:

    • kubectl version: Already described, see above...
    • kubectl get nodes -o wide: Already described, see above...
  • How was the ingress-nginx-controller installed:

    • If helm was used then please show output of helm ls -A | grep -i ingress
ingress-nginx           ingress-nginx           19              2024-07-09 09:12:31.294492684 +0200 CEST        deployed        ingress-nginx-4.11.0                    1.11.0
ingress-nginx-internet  ingress-nginx-internet  15              2024-05-02 08:55:37.339164187 +0200 CEST        deployed        ingress-nginx-4.10.1                    1.10.1
  • If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
    values.json

  • if you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances

  • Current State of the controller:

How to reproduce this issue:

  • Have an ingress-nginx controller running 1.10.1.
  • Update to 1.11.0 using helm with the given configuration.
  • Check the logs / ingress hosts assigned.

Anything else we need to know:

Attached logs:

values.json
ingressclasses.json
getallwide.log
pod-describe.log
svc-describe.log
ingress-nginx-controller-5bd44bf869-4k9kf.log

@auwaerter auwaerter added the kind/bug Categorizes issue or PR as related to a bug. label Jul 9, 2024
@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Jul 9, 2024
@zeeZ
Contributor

zeeZ commented Jul 9, 2024

Cluster getting core dumps with 1.11.0 (OVH):

  • K8S v1.29.3
  • Ubuntu 22.04.4 LTS
  • 5.15.0-107-generic
  • containerd://1.6.32

Cluster with zero dumps so far (self-hosted):

  • K8S v1.27.10
  • Flatcar Container Linux by Kinvolk 3815.2.5 (Oklo)
  • 6.1.96-flatcar
  • containerd://1.7.11

Sadly the cluster where I could easily access core dumps is the one that's stable.

@thomaspeitz
Contributor

thomaspeitz commented Jul 9, 2024

Setup:

  • EKS / AWS
  • 1.29

What makes the problem worse in our case is that the core dumps completely filled our nodes' disks.
So not only did the ingress controller stop working, the nodes got completely destroyed.
In the cloud this is no issue since the nodes get rotated, but I am afraid of this hitting on-prem clusters.

We already have 9 upvotes on this one.
Could you put a warning into the release notes until this gets resolved?
Is there an option to remove the release from the releases page until it is properly debugged?

@ConnorJC3

Kernel version appears to affect the bug - I'm unsure of the exact cutoff but all working nodes have version 6.x and all broken have 5.x (typically 5.15) in my testing.
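
A quick way to compare this across a cluster is to list the kernel version the kubelet reports for each node, e.g.:

kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion,OS:.status.nodeInfo.osImage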

@bootc

bootc commented Jul 9, 2024

Kernel version appears to affect the bug - I'm unsure of the exact cutoff but all working nodes have version 6.x and all broken have 5.x (typically 5.15) in my testing.

Sorry to rain on this, but my cluster with kernel 6.9.7 was failing in the same way. Debian Trixie (testing), RKE2, K8s 1.30.2, containerd 1.7.17.

@Gacko
Member

Gacko commented Jul 9, 2024

Does the same happen with v1.10.2? That would help narrowing down the root cause.

@bootc

bootc commented Jul 9, 2024

Does the same happen with v1.10.2? That would help narrowing down the root cause.

Yes, it did on my cluster. 1.10.1 OK, 1.10.2 and 1.11.0 both bad.

@zeeZ
Contributor

zeeZ commented Jul 9, 2024

Does the same happen with v1.10.2? That would help narrowing down the root cause.

Yes.

@Gacko
Member

Gacko commented Jul 9, 2024

Ok, thanks! That really helps as we "only" introduced patches to v1.10.2.

@rouke-broersma

rouke-broersma commented Jul 9, 2024

I have the following environments:

1 (onprem):
Ingress-nginx: 1.11.0
Kubernetes: Talos v1.30.0
Kernel: 6.6.29-talos
Issue: Yes, however due to my specific setup I think my replicas are not exiting at the same time which means I still have inbound connectivity
Error Message: worker process 2362 exited on signal 11

2 (onprem, not exactly the same as 1 however no significant differences..):
Ingress-nginx: 1.10.2
Kubernetes: Talos v1.30.0
Kernel: 6.6.29-talos
Issue: Yes, and there is no inbound connectivity due to exiting at the same time
Error Message: worker process 2362 exited on signal 11

3 (Azure):
Ingress-nginx: 1.11.0
Kubernetes: AKS v1.29.2
Kernel: 5.15.158.2-1.cm2
Issue: No errors at all

@strongjz
Member

strongjz commented Jul 9, 2024

Thanks, I'll discuss pulling the release on GitHub with @Gacko, but we cannot remove the 1.10.2 and 1.11.0 images from the Kubernetes registry.

@koehn

koehn commented Jul 9, 2024

I’m seeing the same when using 1.11.0 on my hybrid ARM64/AMD64 cluster running k3s 1.29.6, kernel 6.1:

k3s-01   Ready    control-plane,etcd,master   240d   v1.29.6+k3s2   10.0.1.234    <none>        Armbian 24.5.1 bookworm          6.1.43-vendor-rk35xx   containerd://1.7.17-k3s1
k3s-02   Ready    control-plane,etcd,master   90d    v1.29.6+k3s2   10.0.1.235    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-22-amd64         containerd://1.7.17-k3s1
k3s-03   Ready    control-plane,etcd,master   45d    v1.29.6+k3s2   10.0.1.236    <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64         containerd://1.7.17-k3s1

@strongjz
Member

strongjz commented Jul 9, 2024

We updated the release notes with a warning.

There are known issues with this release, some folks are experiencing core dumps. Please see https://github.com/kubernetes/ingress-nginx/issues/11588 for more information and comment if you are experiencing issues.

@strongjz
Member

strongjz commented Jul 9, 2024

I am in transit right now, but it is running on kind on my laptop.

k get nodes
NAME                              STATUS   ROLES           AGE   VERSION
ingress-nginx-dev-control-plane   Ready    control-plane   41s   v1.29.2
STRONGJZ-M-JVJM:ingress-nginx strongjz$ k get pods -n kube-system
NAME                                                      READY   STATUS    RESTARTS   AGE
coredns-76f75df574-2mqjv                                  1/1     Running   0          36s
coredns-76f75df574-kp9zg                                  1/1     Running   0          36s
etcd-ingress-nginx-dev-control-plane                      1/1     Running   0          49s
kindnet-2bld5                                             1/1     Running   0          36s
kube-apiserver-ingress-nginx-dev-control-plane            1/1     Running   0          49s
kube-controller-manager-ingress-nginx-dev-control-plane   1/1     Running   0          50s
kube-proxy-jqm95                                          1/1     Running   0          36s
kube-scheduler-ingress-nginx-dev-control-plane            1/1     Running   0          49s

@bootc

bootc commented Jul 9, 2024

The issue as I experienced it is that the pods remain running and healthy, but requests through them end up crashing Nginx with a segfault. This doesn't happen on every request, but at least 50% of the time. Here is a sample of what came out of my nodes' dmesg:

[166489.439128] nginx[2643326]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 4 (core 4, socket 0)
[166489.439200] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166489.883911] nginx[2644496]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 3 (core 3, socket 0)
[166489.883940] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166492.277424] nginx[2644823]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 1 (core 1, socket 0)
[166492.277452] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166493.592942] nginx[2644590]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 4 (core 4, socket 0)
[166493.592974] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166494.078852] nginx[2644331]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 3 (core 3, socket 0)
[166494.078880] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166494.349418] nginx[2645034]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 5 (core 5, socket 0)
[166494.349447] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166495.810372] nginx[2644497]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 4 (core 4, socket 0)
[166495.810405] Code: 00 00 00 4c 89 3c 24 e8 a0 7e ec ff 48 8b 7c 24 60 48 89 c6 e8 b3 1b ed ff 48 8b 3c 24 49 89 c7 e8 c7 7d ec ff 48 8b 44 24 20 <4c> 89 38 49 83 ff ff 0f 84 e3 01 00 00 4c 89 ef e8 1d 7d ec ff 48
[166496.101377] nginx[2645157]: segfault at 600 ip 0000557619ccfd3e sp 00007ffeac356070 error 6 in nginx[557619b96000+176000] likely on CPU 5 (core 5, socket 0)

@HubbeKing

Running into this as well in my cluster, both with 1.10.2 and 1.11.0. Rolling back to 1.10.1 fixed the issue.

  • OS: Debian 12
  • Kernel: 6.1.0-22-amd64
  • Runtime: containerd://1.6.33
  • K8s: kubeadm, v1.30.2

@bmv126

bmv126 commented Jul 9, 2024

Could this be related to the SSL patches applied in 1.10.2?

@zeeZ
Contributor

zeeZ commented Jul 9, 2024

I tried pushing just an extra 1.10.2 deployment with a different class and shared config to the cluster that's crashing, using the bug template repro service and ingress. No core dumps, so triggering this might require a more complex setup than just one ingress with TLS and curling to localhost with proxy-protocol.

@Gacko
Member

Gacko commented Jul 9, 2024

Could this be related to ssl patches applied on 1.10.2 ?

I couldn't verify it yet. But it might be interesting to know if some of you are using TLS offloading / pure HTTP and therefore do not face this issue.

@strongjz
Member

strongjz commented Jul 9, 2024

/triage accepted
/priority critical-urgent

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Jul 9, 2024
@koehn

koehn commented Jul 9, 2024

FWIW I’m doing TLS termination inside ingress-nginx (TLS 1.2/1.3), the cluster is behind a load balancer using proxy protocol v2. Again, kernel 6.1, arm64/amd64, k3s 1.29.6 on bare metal Armbian/Debian.

@longwuyuan
Contributor

Hello,

Any chance that anyone can attach even one core dump here?
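
A rough sketch of how one could be grabbed, assuming shell access to an affected node; the paths and file names are placeholders and depend on the node's core_pattern and the worker's working directory:

# On the node: where does the kernel write core files?
cat /proc/sys/kernel/core_pattern

# Inside the controller pod: check the core size limit and look for dumps
kubectl -n ingress-nginx exec -it <controller-pod> -- sh -c 'ulimit -c; ls -l / /tmp 2>/dev/null'

# Copy a dump off the pod for analysis
kubectl -n ingress-nginx cp <controller-pod>:/tmp/core.12345 ./core.12345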

@strongjz
Member

I deployed an ingress with TLS via cert-manager, with OCSP enabled, and it core dumped on me; as soon as I disabled it, the ingress worked just fine.

I am going to update the nginx base image and remove that last change: #11590

I'll test it in this same cluster and see if that fixes the issue. It could still be the patches and the nginx version.

Thank you @thomaspeitz for pointing us in the right direction.
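
For anyone needing a stop-gap until a fixed image is out: OCSP stapling is toggled by the enable-ocsp key in the controller ConfigMap (it defaults to off), so a rough workaround sketch via Helm values, assuming you had enabled it there, would be:

controller:
  config:
    enable-ocsp: "false"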

@rikatz
Contributor

rikatz commented Jul 10, 2024

Apparently there is a bug in the OCSP code from the latest Lua or LuaJIT:

sudo gdb ./nginx core/core.99

...
#0  0x00005626f775dd3e in ngx_http_lua_ffi_ssl_validate_ocsp_response (resp=<optimized out>, resp_len=<optimized out>, chain_data=<optimized out>, chain_len=<optimized out>, errbuf=0x7fa407769500 "schemeupstream_name\005", errbuf_size=0x7fa4077ad818, valid=0x600)
    at /tmp/build/lua-nginx-module/src/ngx_http_lua_ssl_ocsp.c:483

...
(gdb) backtrace
#18 0x00005626f775db20 in ?? () at /tmp/build/lua-nginx-module/src/ngx_http_lua_ssl_ocsp.c:235

@strongjz
Member

Yep, I see the same thing in several other dumps.

We need to revert the lua-nginx-module version upgrade from #11470

and file an issue at https://github.com/openresty/lua-nginx-module/issues

lldb -c core.766
(lldb) target create --core "core.766"
Core file '/Users/strongjz/go/src/github/kubernetes/ingress-nginx/core.766' (x86_64) was loaded.
(lldb) bt all
* thread #1, name = 'nginx', stop reason = signal SIGSEGV: address not mapped to object
  * frame #0: 0x00005626f775dd3e nginx`ngx_http_lua_ffi_ssl_validate_ocsp_response(resp=<unavailable>, resp_len=<unavailable>, chain_data=<unavailable>, chain_len=<unavailable>, errbuf="0S0Q0O0M0K0\t\U00000006\U00000005+\U0000000e\U00000003\U00000002\U0000001a\U00000005", errbuf_size=0x00007fa4077ad818, valid=0x0000000000000600) at ngx_http_lua_ssl_ocsp.c:388:16
    frame #1: 0x00007fa40eb68f92

@strongjz
Member

strongjz commented Jul 11, 2024

/reopen

Didn't mean to auto-close this until we confirm the new nginx build fixes the issue.

@k8s-ci-robot k8s-ci-robot reopened this Jul 11, 2024
@k8s-ci-robot
Contributor

@strongjz: Reopened this issue.

In response to this:

/reopen

Didn't mean to auto-close this till we confirm the new nginx build fixes the issues.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@strongjz
Member

I think we found out why it passed in CI: the test was skipped due to a bug a little while ago. Adding the test back and trying again in #11606

@strongjz
Member

Looks like we disabled the OCSP tests a while back because of a bug. From the failures, it seems that cfssl needs sqlite-dev.

This PR turns the OCSP e2e tests back on and adds sqlite-dev back into our testing:

#11606

I'm going to run the tests against the nginx:0.0.8 image that has the newer version of the lua-nginx-module and see if that catches what is causing the core dump, mostly so we can file an issue in the lua module repo.

If the tests in #11606 pass, we will move forward with the revert and wait for a release of the lua module.

@Gacko
Member

Gacko commented Jul 18, 2024

We just released controller v1.11.1 & v1.10.3 with chart v4.11.1 & v4.10.3. These releases should ship a fix for this issue.
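
For reference, a rough sketch of picking up the fix with Helm; the release name, namespace and repo alias are assumptions, and chart v4.11.1 ships controller v1.11.1 while v4.10.3 ships v1.10.3:

helm repo update
helm -n ingress-nginx upgrade ingress-nginx ingress-nginx/ingress-nginx --version 4.11.1 -f custom-values.yaml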

@jessebot
Contributor

To confirm, I rolled out v1.11.1 today, enabled OCSP again, and everything is working great :) Thanks team!

@strongjz
Member

So, a couple of things we also discussed at the community meeting: the e2e test for OCSP is fixed, which would have caught this issue, and we learned that we should stick with released versions of lua-nginx-module rather than commits. I have also reviewed the e2e tests to make sure we are not skipping others.

Thank you to @thomaspeitz for finding the root cause, and to others who helped confirm it or provided more details.

We opened an issue for the lua-nginx-module folks to review at openresty/lua-nginx-module#2339

We apologize for causing this issue and will continue to work on making releases stable. There is a lot of 3rd party software that goes into making ingress-nginx work, and we do our best to test all the components. If you are interested in helping us out, please join us every other Thursday at 11 am Eastern in our community meetings or in #ingress-nginx-dev on kubernetes.slack.com.

lambchop4prez referenced this issue in lambchop4prez/network Aug 11, 2024
renovate bot referenced this issue in anza-labs/infra Aug 11, 2024
@Faq

Faq commented Aug 14, 2024

By the way, there is a patch that awaits feedback: openresty/lua-nginx-module#2339 (comment)

renovate bot referenced this issue in anza-labs/manifests Aug 25, 2024