nginx-ingress is an Ingress controller that uses a ConfigMap to store the NGINX configuration. To use it, add the `kubernetes.io/ingress.class: nginx` annotation to your Ingress resources.
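For example, a minimal Ingress routed through this controller might look like the following sketch — the name, host, and backend service are placeholders, and the `apiVersion` should match what your cluster version serves:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    # route this Ingress through the nginx controller
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80
```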
```console
$ helm install stable/nginx-ingress
```
This chart bootstraps an nginx-ingress deployment on a Kubernetes cluster using the Helm package manager.
Prerequisites:

- Kubernetes 1.6+
To install the chart with the release name `my-release`:

```console
$ helm install --name my-release stable/nginx-ingress
```
The command deploys nginx-ingress on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
Tip: List all releases using `helm list`.
To uninstall/delete the `my-release` deployment:

```console
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
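Note that with Helm 2 (used throughout this README, as the `--name` flag indicates) the release name stays registered in the release history after deletion; pass `--purge` if you want to free the name for reuse:

```console
$ helm delete --purge my-release
```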
The following table lists the configurable parameters of the nginx-ingress chart and their default values.
Parameter | Description | Default |
---|---|---|
`controller.name` | name of the controller component | `controller` |
`controller.image.repository` | controller container image repository | `quay.io/kubernetes-ingress-controller/nginx-ingress-controller` |
`controller.image.tag` | controller container image tag | `0.30.0` |
`controller.image.pullPolicy` | controller container image pull policy | `IfNotPresent` |
`controller.image.runAsUser` | User ID of the controller process. Value depends on the Linux distribution used inside of the container image. | `101` |
`controller.useComponentLabel` | Whether to add the component label so the HPA can work separately for controller and defaultBackend. Note: don't change this on an already running deployment, as it requires recreating the controller deployment. | `false` |
`controller.containerPort.http` | The port that the controller container listens on for HTTP connections. | `80` |
`controller.containerPort.https` | The port that the controller container listens on for HTTPS connections. | `443` |
`controller.config` | nginx ConfigMap entries | none |
`controller.hostNetwork` | If the nginx deployment/daemonset should run in the host's network namespace. Do not set this when `controller.service.externalIPs` is set and kube-proxy is used, as there will be a port conflict on port 80. | `false` |
`controller.defaultBackendService` | default 404 backend service; needed only if `defaultBackend.enabled = false` and version < 0.21.0 | `""` |
`controller.dnsPolicy` | If using `hostNetwork=true`, change to `ClusterFirstWithHostNet`. See the pod DNS policy documentation for details. | `ClusterFirst` |
`controller.dnsConfig` | custom pod dnsConfig. See the pod DNS config documentation for details. | `{}` |
`controller.reportNodeInternalIp` | If using `hostNetwork=true`, setting `reportNodeInternalIp=true` will pass the flag `report-node-internal-ip-address` to nginx-ingress. This sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. | |
`controller.electionID` | election ID to use for the status update | `ingress-controller-leader` |
`controller.extraEnvs` | any additional environment variables to set in the pods | `{}` |
`controller.extraContainers` | Sidecar containers to add to the controller pod. See the LemonLDAP::NG controller as an example. | `{}` |
`controller.extraVolumeMounts` | Additional volumeMounts for the controller main container | `{}` |
`controller.extraVolumes` | Additional volumes for the controller pod | `{}` |
`controller.extraInitContainers` | Containers that are run before the app containers are started | `[]` |
`controller.ingressClass` | name of the ingress class to route through this controller | `nginx` |
`controller.maxmindLicenseKey` | MaxMind license key to download GeoLite2 databases. See "Accessing and using GeoLite2 database". | `""` |
`controller.scope.enabled` | limit the scope of the ingress controller | `false` (watch all namespaces) |
`controller.scope.namespace` | namespace to watch for ingress | `""` (use the release namespace) |
`controller.extraArgs` | Additional controller container arguments | `{}` |
`controller.kind` | install as `Deployment`, `DaemonSet` or `Both` | `Deployment` |
`controller.deploymentAnnotations` | annotations to be added to the deployment | `{}` |
`controller.autoscaling.enabled` | If true, creates a Horizontal Pod Autoscaler | `false` |
`controller.autoscaling.minReplicas` | If autoscaling is enabled, this field sets the minimum replica count | `2` |
`controller.autoscaling.maxReplicas` | If autoscaling is enabled, this field sets the maximum replica count | `11` |
`controller.autoscaling.targetCPUUtilizationPercentage` | Target CPU utilization percentage to scale | `"50"` |
`controller.autoscaling.targetMemoryUtilizationPercentage` | Target memory utilization percentage to scale | `"50"` |
`controller.daemonset.useHostPort` | If `controller.kind` is `DaemonSet`, this will enable `hostPort` for TCP/80 and TCP/443 | `false` |
`controller.daemonset.hostPorts.http` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the HTTP `hostPort` | `"80"` |
`controller.daemonset.hostPorts.https` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the HTTPS `hostPort` | `"443"` |
`controller.tolerations` | node taints to tolerate (requires Kubernetes >= 1.6) | `[]` |
`controller.affinity` | node/pod affinities (requires Kubernetes >= 1.6) | `{}` |
`controller.terminationGracePeriodSeconds` | how many seconds to wait before terminating a pod | `60` |
`controller.minReadySeconds` | how many seconds a pod needs to be ready before killing the next, during update | `0` |
`controller.nodeSelector` | node labels for pod assignment | `{}` |
`controller.podAnnotations` | annotations to be added to pods | `{}` |
`controller.deploymentLabels` | labels to add to the deployment metadata | `{}` |
`controller.podLabels` | labels to add to the pod container metadata | `{}` |
`controller.podSecurityContext` | Security context policies to add to the controller pod | `{}` |
`controller.replicaCount` | desired number of controller pods | `1` |
`controller.minAvailable` | minimum number of available controller pods for PodDisruptionBudget | `1` |
`controller.resources` | controller pod resource requests & limits | `{}` |
`controller.priorityClassName` | controller priorityClassName | `nil` |
`controller.lifecycle` | controller pod lifecycle hooks | `{}` |
`controller.service.annotations` | annotations for the controller service | `{}` |
`controller.service.labels` | labels for the controller service | `{}` |
`controller.publishService.enabled` | if true, the controller will set the endpoint records on the ingress objects to reflect those on the service | `false` |
`controller.publishService.pathOverride` | override of the default publish-service name | `""` |
`controller.service.enabled` | if disabled, no service will be created. This is especially useful when `controller.kind` is set to `DaemonSet` and `controller.daemonset.useHostPorts` is `true` | `true` |
`controller.service.clusterIP` | internal controller cluster service IP (set to `"-"` to pass an empty value) | `nil` |
`controller.service.omitClusterIP` | (Deprecated) To omit the clusterIP from the controller service | `false` |
`controller.service.externalIPs` | controller service external IP addresses. Do not set this when `controller.hostNetwork` is set to `true` and kube-proxy is used, as there will be a port conflict on port 80. | `[]` |
`controller.service.externalTrafficPolicy` | If `controller.service.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable source IP preservation | `"Cluster"` |
`controller.service.sessionAffinity` | Enables client-IP-based session affinity. Must be `ClientIP` or `None` if set. | `""` |
`controller.service.healthCheckNodePort` | If `controller.service.type` is `NodePort` or `LoadBalancer` and `controller.service.externalTrafficPolicy` is set to `Local`, set this to the managed health-check port the kube-proxy will expose. If blank, a random port in the NodePort range will be assigned. | `""` |
`controller.service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
`controller.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to the load balancer (if supported) | `[]` |
`controller.service.enableHttp` | if port 80 should be opened for the service | `true` |
`controller.service.enableHttps` | if port 443 should be opened for the service | `true` |
`controller.service.targetPorts.http` | Sets the targetPort that maps to the Ingress' port 80 | `80` |
`controller.service.targetPorts.https` | Sets the targetPort that maps to the Ingress' port 443 | `443` |
`controller.service.ports.http` | Sets the service HTTP port | `80` |
`controller.service.ports.https` | Sets the service HTTPS port | `443` |
`controller.service.type` | type of controller service to create | `LoadBalancer` |
`controller.service.nodePorts.http` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""` |
`controller.service.nodePorts.https` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""` |
`controller.service.nodePorts.tcp` | Sets the nodePort for an entry referenced by its key from `tcp` | `{}` |
`controller.service.nodePorts.udp` | Sets the nodePort for an entry referenced by its key from `udp` | `{}` |
`controller.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `10` |
`controller.livenessProbe.periodSeconds` | How often to perform the probe | `10` |
`controller.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
`controller.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
`controller.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
`controller.livenessProbe.port` | The port number that the liveness probe will listen on | `10254` |
`controller.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `10` |
`controller.readinessProbe.periodSeconds` | How often to perform the probe | `10` |
`controller.readinessProbe.timeoutSeconds` | When the probe times out | `1` |
`controller.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
`controller.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
`controller.readinessProbe.port` | The port number that the readiness probe will listen on | `10254` |
`controller.metrics.enabled` | if `true`, enable Prometheus metrics | `false` |
`controller.metrics.service.annotations` | annotations for the Prometheus metrics service | `{}` |
`controller.metrics.service.clusterIP` | cluster IP address to assign to the service (set to `"-"` to pass an empty value) | `nil` |
`controller.metrics.service.omitClusterIP` | (Deprecated) To omit the clusterIP from the metrics service | `false` |
`controller.metrics.service.externalIPs` | Prometheus metrics service external IP addresses | `[]` |
`controller.metrics.service.labels` | labels for the metrics service | `{}` |
`controller.metrics.service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
`controller.metrics.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to the load balancer (if supported) | `[]` |
`controller.metrics.service.servicePort` | Prometheus metrics service port | `9913` |
`controller.metrics.service.type` | type of Prometheus metrics service to create | `ClusterIP` |
`controller.metrics.serviceMonitor.enabled` | Set this to `true` to create a ServiceMonitor for the Prometheus Operator | `false` |
`controller.metrics.serviceMonitor.additionalLabels` | Additional labels so the ServiceMonitor will be discovered by Prometheus | `{}` |
`controller.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels | `false` |
`controller.metrics.serviceMonitor.namespace` | namespace where the ServiceMonitor resource should be created | the same namespace as nginx-ingress |
`controller.metrics.serviceMonitor.namespaceSelector` | namespaceSelector to configure which namespaces to scrape | will scrape the Helm release namespace only |
`controller.metrics.serviceMonitor.scrapeInterval` | interval between Prometheus scrapes | `30s` |
`controller.metrics.prometheusRule.enabled` | Set this to `true` to create PrometheusRules for the Prometheus Operator | `false` |
`controller.metrics.prometheusRule.additionalLabels` | Additional labels so the PrometheusRules will be discovered by Prometheus | `{}` |
`controller.metrics.prometheusRule.namespace` | namespace where the PrometheusRule resource should be created | the same namespace as nginx-ingress |
`controller.metrics.prometheusRule.rules` | Prometheus rules in YAML format; check the values file for an example | `[]` |
`controller.admissionWebhooks.enabled` | Create Ingress admission webhooks. The validating webhook will check the ingress syntax. | `false` |
`controller.admissionWebhooks.failurePolicy` | Failure policy for admission webhooks | `Fail` |
`controller.admissionWebhooks.port` | Admission webhook port | `8080` |
`controller.admissionWebhooks.service.annotations` | Annotations for the admission webhook service | `{}` |
`controller.admissionWebhooks.service.omitClusterIP` | (Deprecated) To omit the clusterIP from the admission webhook service | `false` |
`controller.admissionWebhooks.service.clusterIP` | cluster IP address to assign to the admission webhook service (set to `"-"` to pass an empty value) | `nil` |
`controller.admissionWebhooks.service.externalIPs` | Admission webhook service external IP addresses | `[]` |
`controller.admissionWebhooks.service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
`controller.admissionWebhooks.service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to the load balancer (if supported) | `[]` |
`controller.admissionWebhooks.service.servicePort` | Admission webhook service port | `443` |
`controller.admissionWebhooks.service.type` | Type of admission webhook service to create | `ClusterIP` |
`controller.admissionWebhooks.patch.enabled` | If true, will use pre- and post-install hooks to generate a CA and certificate for the validating webhook endpoint, and patch the created webhooks with the CA | `true` |
`controller.admissionWebhooks.patch.image.repository` | Repository to use for the webhook integration jobs | `jettech/kube-webhook-certgen` |
`controller.admissionWebhooks.patch.image.tag` | Tag to use for the webhook integration jobs | `v1.0.0` |
`controller.admissionWebhooks.patch.image.pullPolicy` | Image pull policy for the webhook integration jobs | `IfNotPresent` |
`controller.admissionWebhooks.patch.priorityClassName` | Priority class for the webhook integration jobs | `""` |
`controller.admissionWebhooks.patch.podAnnotations` | Annotations for the webhook job pods | `{}` |
`controller.admissionWebhooks.patch.nodeSelector` | Node selector for running the admission hook patch jobs | `{}` |
`controller.customTemplate.configMapName` | ConfigMap containing a custom nginx template | `""` |
`controller.customTemplate.configMapKey` | ConfigMap key containing the nginx template | `""` |
`controller.addHeaders` | ConfigMap key:value pairs containing custom headers added before sending the response to the client | `{}` |
`controller.proxySetHeaders` | ConfigMap key:value pairs containing custom headers added before sending the request to the backends | `{}` |
`controller.headers` | DEPRECATED: use `controller.proxySetHeaders` instead | `{}` |
`controller.updateStrategy` | allows setting the RollingUpdate strategy | `{}` |
`controller.configMapNamespace` | The nginx-configmap namespace name | `""` |
`controller.tcp.configMapNamespace` | The tcp-services-configmap namespace name | `""` |
`controller.udp.configMapNamespace` | The udp-services-configmap namespace name | `""` |
`defaultBackend.enabled` | Use the default backend component | `true` |
`defaultBackend.name` | name of the default backend component | `default-backend` |
`defaultBackend.image.repository` | default backend container image repository | `k8s.gcr.io/defaultbackend-amd64` |
`defaultBackend.image.tag` | default backend container image tag | `1.5` |
`defaultBackend.image.pullPolicy` | default backend container image pull policy | `IfNotPresent` |
`defaultBackend.image.runAsUser` | User ID of the default backend process. Value depends on the Linux distribution used inside of the container image. Uses the nobody user by default. | `65534` |
`defaultBackend.useComponentLabel` | Whether to add the component label so the HPA can work separately for controller and defaultBackend. Note: don't change this on an already running deployment, as it requires recreating the defaultBackend deployment. | `false` |
`defaultBackend.extraArgs` | Additional default backend container arguments | `{}` |
`defaultBackend.extraEnvs` | any additional environment variables to set in the defaultBackend pods | `[]` |
`defaultBackend.port` | HTTP port number | `8080` |
`defaultBackend.livenessProbe.initialDelaySeconds` | Delay before the liveness probe is initiated | `30` |
`defaultBackend.livenessProbe.periodSeconds` | How often to perform the probe | `10` |
`defaultBackend.livenessProbe.timeoutSeconds` | When the probe times out | `5` |
`defaultBackend.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
`defaultBackend.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
`defaultBackend.readinessProbe.initialDelaySeconds` | Delay before the readiness probe is initiated | `0` |
`defaultBackend.readinessProbe.periodSeconds` | How often to perform the probe | `5` |
`defaultBackend.readinessProbe.timeoutSeconds` | When the probe times out | `5` |
`defaultBackend.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
`defaultBackend.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `6` |
`defaultBackend.tolerations` | node taints to tolerate (requires Kubernetes >= 1.6) | `[]` |
`defaultBackend.affinity` | node/pod affinities (requires Kubernetes >= 1.6) | `{}` |
`defaultBackend.nodeSelector` | node labels for pod assignment | `{}` |
`defaultBackend.podAnnotations` | annotations to be added to pods | `{}` |
`defaultBackend.deploymentLabels` | labels to add to the deployment metadata | `{}` |
`defaultBackend.podLabels` | labels to add to the pod container metadata | `{}` |
`defaultBackend.replicaCount` | desired number of default backend pods | `1` |
`defaultBackend.minAvailable` | minimum number of available default backend pods for PodDisruptionBudget | `1` |
`defaultBackend.resources` | default backend pod resource requests & limits | `{}` |
`defaultBackend.priorityClassName` | default backend priorityClassName | `nil` |
`defaultBackend.podSecurityContext` | Security context policies to add to the default backend | `{}` |
`defaultBackend.service.annotations` | annotations for the default backend service | `{}` |
`defaultBackend.service.clusterIP` | internal default backend cluster service IP (set to `"-"` to pass an empty value) | `nil` |
`defaultBackend.service.omitClusterIP` | (Deprecated) To omit the clusterIP from the default backend service | `false` |
`defaultBackend.service.externalIPs` | default backend service external IP addresses | `[]` |
`defaultBackend.service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
`defaultBackend.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to the load balancer (if supported) | `[]` |
`defaultBackend.service.type` | type of default backend service to create | `ClusterIP` |
`defaultBackend.serviceAccount.create` | if `true`, create a backend service account. Only useful if you need a pod security policy to run the backend. | `true` |
`defaultBackend.serviceAccount.name` | The name of the backend service account to use. If not set and `create` is `true`, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | `` |
`imagePullSecrets` | name of the Secret resource containing private registry credentials | `nil` |
`rbac.create` | if `true`, create & use RBAC resources | `true` |
`rbac.scope` | if `true`, do not create & use a clusterrole and -binding. Set to `true` in combination with `controller.scope.enabled=true` to disable load-balancer status updates and scope the ingress entirely. | `false` |
`podSecurityPolicy.enabled` | if `true`, create & use Pod Security Policy resources | `false` |
`serviceAccount.create` | if `true`, create a service account for the controller | `true` |
`serviceAccount.name` | The name of the controller service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `` |
`revisionHistoryLimit` | The number of old revisions to retain to allow rollback | `10` |
`tcp` | TCP service key:value pairs. The value is evaluated as a template. | `{}` |
`udp` | UDP service key:value pairs. The value is evaluated as a template. | `{}` |
`releaseLabelOverride` | If provided, the value will be used as the release label instead of `.Release.Name` | `""` |
These parameters can be passed via Helm's `--set` option:

```console
$ helm install stable/nginx-ingress --name my-release \
    --set controller.metrics.enabled=true
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example:

```console
$ helm install stable/nginx-ingress --name my-release -f values.yaml
```
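For reference, a small values.yaml combining a few of the parameters from the table above might look like this — the values themselves are only illustrative:

```yaml
controller:
  ingressClass: nginx
  service:
    type: LoadBalancer
    # preserve client source IPs (NodePort/LoadBalancer only)
    externalTrafficPolicy: Local
  metrics:
    enabled: true
defaultBackend:
  enabled: true
```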
A useful trick to debug issues with ingress is to increase the log level via `controller.extraArgs.v`:

```console
$ helm install stable/nginx-ingress --set controller.extraArgs.v=2
```
Tip: You can use the default values.yaml as a starting point for your own overrides.
Note that the PodDisruptionBudget resource will only be defined if the replica count is greater than one; otherwise it would make it impossible to evacuate a node. See gh issue #7127 for more info.
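For instance, running two controller replicas while keeping at least one available during disruptions (both keys are in the table above) yields a PodDisruptionBudget:

```yaml
controller:
  replicaCount: 2
  # the PDB is only rendered because replicaCount > 1
  minAvailable: 1
```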
The Nginx ingress controller can export Prometheus metrics:

```console
$ helm install stable/nginx-ingress --name my-release \
    --set controller.metrics.enabled=true
```
You can add Prometheus annotations to the metrics service using `controller.metrics.service.annotations`. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation using `controller.metrics.serviceMonitor.enabled`.
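For example, with a plain Prometheus set up for annotation-based discovery, you might annotate the metrics service like this — the `prometheus.io/*` keys are the common scrape-annotation convention, not something this chart mandates, and the port matches the `controller.metrics.service.servicePort` default:

```yaml
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9913"
```

With the Prometheus Operator instead, set `controller.metrics.serviceMonitor.enabled=true` and, if needed, `controller.metrics.serviceMonitor.additionalLabels` so your Prometheus instance discovers it.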
Previous versions of this chart had a `controller.stats.*` configuration block, which is now obsolete due to the following changes in the nginx ingress controller:

- In 0.16.1, the vts (virtual host traffic status) dashboard was removed.
- In 0.23.0, the status page at port 18080 was replaced by a Unix socket web server that is only available on localhost.

You can use `curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status` inside the controller container to access it locally, or use the snippet from the nginx-ingress changelog to re-enable the HTTP server.
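For example, something like the following reaches the status socket from outside the pod — the pod name and namespace are placeholders for your own release:

```console
$ kubectl exec -n <namespace> <nginx-ingress-controller-pod> -- \
    curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status
```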
Add an ExternalDNS annotation to the LoadBalancer service:

```yaml
controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
```
Annotate the controller as shown in the nginx-ingress l7 patch:

```yaml
controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
```
The `ssl-redirect` and `force-ssl-redirect` flags do not work with the AWS Network Load Balancer. You need to turn them off and add an additional port with `server-snippet` in order to make redirection work.

NLB port 80 will be mapped to nginx container port 80, and NLB port 443 will be mapped to nginx container port 8000 (`special`). We then use `$server_port` to manage redirection on port 80:
```yaml
controller:
  config:
    ssl-redirect: "false" # we use the `special` port to control ssl redirection
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
        return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```
To configure the LoadBalancer service with the route53-mapper addon, add the `domainName` annotation and `dns` label:

```yaml
controller:
  service:
    labels:
      dns: "route53"
    annotations:
      domainName: "kubernetes-example.com"
```
With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that integrates with the `validatingwebhookconfiguration` Kubernetes feature to prevent bad ingress resources from being added to the cluster. Note that nginx-ingress-controller 0.25.* only works with Kubernetes 1.14+; 0.26 fixes this issue.
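To turn the validation on, enable the admission webhooks in your values (the defaults for the certificate patch job are listed in the table above):

```yaml
controller:
  admissionWebhooks:
    # rejects syntactically invalid Ingress resources at admission time
    enabled: true
```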
If you are upgrading this chart from a version between 0.31.0 and 1.2.2, you may get an error like this:

```console
Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```

Details of how and why are in this issue, but to resolve it you can set `xxxx.service.omitClusterIP` to `true`, where `xxxx` is the service referenced in the error.
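For example, if the error references the controller service, a sketch of the upgrade command would be:

```console
$ helm upgrade my-release stable/nginx-ingress \
    --set controller.service.omitClusterIP=true
```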
As of version 1.26.0 of this chart, simply not providing any clusterIP value will prevent `invalid: spec.clusterIP: Invalid value: "": field is immutable` from occurring, since `clusterIP: ""` will no longer be rendered.