LimitRange added in 0.26.2 breaks default cert-manager Issuer configuration #5210
Comments
@edmorley thank you for the report. We will adjust the LimitRange to apply only to the ingress controller pod, not everything in the same namespace. Edit: that said, I run several clusters using ingress-nginx and cert-manager without issues.
@edmorley why are you creating a Certificate in the same namespace as the ingress controller?
@aledbf Hi! Thank you for the fast reply :-) We are creating the Certificate in the ingress-nginx namespace so that the TLS secret ends up there, since ingress-nginx needs to access it. Or is that not correct? One of the problems is that there is a gap between the topics covered by the ingress-nginx and cert-manager docs -- and "how one should make use of namespaces when using both" doesn't seem to be covered anywhere? :-)
This applies to secrets referenced by Ingresses. The default SSL certificate is a special case: you can put that certificate in any namespace. That said, this is still a bug.
Hi, I just wanted to point out this is affecting Linkerd as well, which attempts to inject an init-container into the ingress-nginx controller pod. It only requests 10m CPU and 10Mi memory, so the injection is refused given this LimitRange. I'm guessing other service meshes and other projects using sidecars are having the same problem.
The next version removes the LimitRange, adding the resource definitions directly in the YAML files.
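If I've understood that correctly, a minimal sketch of what that could look like in the controller Deployment (a fragment, with illustrative values taken from the LimitRange minimums discussed below):

```yaml
# Fragment of the controller Deployment's pod spec: resources declared on the
# container itself instead of via a namespace-wide LimitRange (illustrative values).
containers:
  - name: nginx-ingress-controller
    resources:
      requests:
        cpu: 100m
        memory: 90Mi
```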
For anyone else hitting this, I spent a good amount of time on it. cert-manager fails with:

cert-manager Error presenting challenge: pods "cm-acme-http-solver-jkvzm" is forbidden: [minimum cpu usage per Container is 100m, but request is 10m, minimum memory usage per Container is 90Mi, but request is 64Mi]

My workaround:
- kubectl get LimitRange -n <namespace_name>
- change the minimum limits to values below the failing thresholds
- recreate the cert-manager CRDs after clearing the existing ones
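A sketch of that workaround as a manifest, assuming the LimitRange name and namespace used by the default ingress-nginx deploy (adjust to whatever kubectl get LimitRange shows in your cluster), with the minimums lowered below the solver pod's 10m CPU / 64Mi memory requests:

```yaml
# Relaxed LimitRange sketch: per-container minimums lowered below the
# cm-acme-http-solver pod's requests quoted in the error above.
# Apply with: kubectl apply -f limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  limits:
    - type: Container
      min:
        cpu: 10m
        memory: 64Mi
```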
Hi :-)
NGINX Ingress controller version: master (currently at 99419c7)

Kubernetes version (use kubectl version):

Environment:
What happened:
In #4843 (released in v0.26.2), a new LimitRange was applied to the ingress-nginx namespace in order to fix #4735. This LimitRange causes cert-manager Issuers to fail in their default configuration, with errors like the "forbidden: minimum cpu usage per Container is 100m, but request is 10m" message quoted in the comments above.

The Certificate (and the pod created by the Issuer) has to be created in the ingress-nginx namespace, so that the TLS secret is created in that namespace (since ingress-nginx will need to access the secret).

If the LimitRange is deleted (eg kubectl delete limitrange ingress-nginx -n ingress-nginx), these pod scheduling errors go away.
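For reference, roughly what that LimitRange appears to contain, reconstructed here from the per-container minimums quoted in the error message in the comments above (see #4843 for the actual manifest):

```yaml
# Approximate shape of the LimitRange added in #4843; values inferred from the
# "minimum cpu usage per Container is 100m ... minimum memory usage per
# Container is 90Mi" error, so the real manifest may differ.
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  limits:
    - type: Container
      min:
        cpu: 100m
        memory: 90Mi
```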
What you expected to happen:

For the default ingress-nginx configuration to work with the default cert-manager Issuer configuration's pod resource requests.

Specifically, cert-manager appears to already do the right thing by setting pod resource requests and limits - and in fact the requested resources are actually less intensive than those in the LimitRange, which seems like a good thing, not something that should be prevented?
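For context, the requests the cm-acme-http-solver pods are created with, per the error message quoted in the comments above, fall below the LimitRange's 100m CPU / 90Mi memory minimums (fragment; limits are not shown in that error, so they are omitted here):

```yaml
# Fragment of the solver pod's container spec, using the request figures from
# the "forbidden" error above (10m CPU, 64Mi memory).
resources:
  requests:
    cpu: 10m
    memory: 64Mi
```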
How to reproduce it:

1. Deploy ingress-nginx:
   kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
2. Deploy cert-manager:
   kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml
3. Wait until kubectl get deployment cert-manager-webhook -n cert-manager reports the webhook as available (takes a while, due to cert-manager/cert-manager#2537: cert-manager-webhook deployment tooks too long to start.. secret "cert-manager-webhook-tls" not found).
4. Create a ClusterIssuer (per the cert-manager docs), making sure to substitute in a valid email address; see the sketch after this list.
5. Create a Certificate (per the cert-manager docs); see the sketch after this list.
6. Inspect the resulting challenge with kubectl describe challenges -A.
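The ClusterIssuer and Certificate manifests themselves were not captured above. The following are sketches based on the cert-manager v0.13 docs rather than the exact manifests from this report; the names, email address, and domain are placeholders:

```yaml
# Sketch of a Let's Encrypt ClusterIssuer using the HTTP-01 solver with the
# nginx ingress class (placeholder email; adjust to your environment).
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
# Sketch of a Certificate created in the ingress-nginx namespace, so that the
# resulting TLS secret lives where the controller can read it (placeholder
# domain and names).
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: example-com
  namespace: ingress-nginx
spec:
  secretName: example-com-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
    - example.com
```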
/kind bug