To expose an HTTP application running on MKE to either inside or outside the DC/OS cluster, a Kubernetes `Ingress` resource must be created. Furthermore, said `Ingress` resource must be explicitly annotated for provisioning with EdgeLB:

```yaml
kubernetes.io/ingress.class: edgelb
```
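In other words, the annotation is set under the `Ingress` resource's `metadata`, as sketched in the following fragment (not a complete manifest; a full example is given at the end of this section):

```yaml
metadata:
  annotations:
    # Required so that dklb/EdgeLB (and not another ingress controller)
    # provisions this Ingress resource.
    kubernetes.io/ingress.class: edgelb
```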
`dklb` will react to the creation of such an `Ingress` resource by provisioning an EdgeLB pool (henceforth referred to as the target EdgeLB pool) for the `Ingress` resource. This EdgeLB pool is provisioned using sane default values for its name, its CPU, memory and size requirements, and other options.

After provisioning said EdgeLB pool, `dklb` will periodically query EdgeLB in order to obtain the list of hostnames and IPs at which the ingress can be reached. These will eventually be reported on the `.status` field of the `Ingress` resource. It should be noted that, due to the way EdgeLB pool scheduling and metadata reporting work, it may take from a few seconds to several minutes for these hostnames and IPs to be reported.
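One way to check whether reporting has already happened is to poll the status subresource directly (a sketch relying on standard kubectl JSONPath support; the `Ingress` name is illustrative):

```shell
# Prints the list of hostnames/IPs reported so far (empty until dklb
# has obtained them from EdgeLB).
$ kubectl get ingress my-ingress -o jsonpath='{.status.loadBalancer.ingress}'
```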
All Kubernetes services used as backends in an `Ingress` resource annotated for provisioning with EdgeLB MUST be of type `NodePort` or `LoadBalancer`. In particular, services of type `ClusterIP` and headless services cannot be used as backends for `Ingress` resources to be provisioned by EdgeLB.
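For instance, a valid backend could be declared as follows (a minimal sketch; the name, selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # illustrative name
spec:
  type: NodePort     # must be NodePort or LoadBalancer
  selector:
    app: my-app      # illustrative selector
  ports:
  - port: 80
    targetPort: 80
```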
In case an invalid `Service` resource is specified as a backend for a given `Ingress` resource, or whenever a default backend is not explicitly defined, `dklb` will be used as the (default) backend instead. `dklb` will respond to requests arriving at the default backend with `503 SERVICE UNAVAILABLE` and a short error message. Whenever `dklb` gets used as a backend, a Kubernetes event will be emitted and associated with the `Ingress` resource being processed. This event contains useful information about the reason why `dklb` is being used instead of the intended backend, and may be used for diagnosing problems.
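Such events can be inspected with standard kubectl tooling, for example (the `Ingress` name below is illustrative):

```shell
# Describing the Ingress resource prints, among other details,
# any events that dklb has associated with it.
$ kubectl describe ingress my-ingress
```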
As mentioned before, `dklb` uses sane defaults when provisioning EdgeLB pools for `Ingress` resources annotated for provisioning with EdgeLB. It is, however, possible to customize the target EdgeLB pool for a given `Ingress` resource by using the `kubernetes.dcos.io/dklb-config` annotation. This annotation accepts a YAML object (henceforth called the configuration object) with the following structure:

```yaml
kubernetes.dcos.io/dklb-config: | # NOTE: The "|" character is mandatory.
  name: "dklb-pool-0"
  role: "*"
  network: "dcos"
  size: 3
  cpus: 0.2
  memory: 512
  frontends:
    http:
      port: 8080
```
The subsections below provide more insight into each of the fields of the configuration object.
ℹ️ The `|` character in `kubernetes.dcos.io/dklb-config: |` is mandatory, as it makes the annotation's multi-line value be parsed as a single YAML string.

❗ The `kubernetes.dcos.io/dklb-config` annotation cannot be removed after the `Ingress` resource is created.
By default, `dklb` uses the MKE cluster's name and a randomly-generated five-digit suffix to compute the name of the target EdgeLB pool. To specify a custom name for said EdgeLB pool, one may use the `.name` field of the configuration object:

```yaml
kubernetes.dcos.io/dklb-config: |
  name: "<edgelb-pool-name>"
```

Depending on whether the `<edgelb-pool-name>` EdgeLB pool already exists, `dklb` will create or update it in order to expose all rules defined in the `Ingress` resource.
❗ This field cannot be changed or removed after the `Ingress` resource is created.
By default, `dklb` exposes ingresses to outside the DC/OS cluster by requesting that the target EdgeLB pool be scheduled onto a public DC/OS agent. However, and in order to accommodate all possible needs, `dklb` supports explicitly specifying a Mesos role for the target EdgeLB pool using the `.role` field of the configuration object:

```yaml
kubernetes.dcos.io/dklb-config: |
  role: "<edgelb-pool-role>"
```

In particular, to expose an ingress to inside DC/OS only, `*` should be used as the value of `<edgelb-pool-role>`. Providing said value will cause `dklb` to request that the target EdgeLB pool be scheduled onto a private DC/OS agent.
❗ This field cannot be changed or removed after the `Ingress` resource is created.
`dklb` provisions the target EdgeLB pool by looking at the ingress's rules and creating one EdgeLB backend per referenced `Service` resource, plus a single EdgeLB frontend. By default, `dklb` uses port `80` as the frontend's bind port. In particular, this means that HTTPS is not supported at the moment (see [limitations]). In some situations, using a different port number as the frontend's bind port may be required. In order to accommodate more advanced use cases, `dklb` supports defining a custom port via the `.frontends.http.port` field of the configuration object:
```yaml
kubernetes.dcos.io/dklb-config: |
  frontends:
    http:
      port: <frontend-bind-port>
```

When this field is provided on the configuration object, `dklb` will use `<frontend-bind-port>` instead of port `80` as the actual frontend bind port.
❗ Changing the value of this field after the `Ingress` resource is created is supported, but may cause disruption (as the target EdgeLB pool will most likely be re-deployed).
`dklb` supports customizing CPU, memory and size requests for the target EdgeLB pool. Custom values for these requests can be specified using the `.cpus`, `.memory` and `.size` fields, respectively:

```yaml
kubernetes.dcos.io/dklb-config: |
  cpus: <edgelb-pool-cpus>
  memory: <edgelb-pool-memory>
  size: <edgelb-pool-size>
```

In the above representation, `<edgelb-pool-cpus>` is a floating-point number (e.g. `0.2`), and `<edgelb-pool-memory>` and `<edgelb-pool-size>` are integers (e.g. `512` and `3`, respectively).
`dklb` supports customizing load balancer instance placement for the target EdgeLB pool. By default, no constraint is specified. A custom value can be specified using the `.constraints` field of the configuration object:

❗ Take special care to escape strings in the placement constraint.

```yaml
kubernetes.dcos.io/dklb-config: |
  constraints: "<Marathon-style constraints for load balancer instance placement>"
```
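For illustration only, and assuming the field accepts Marathon's string form of constraints (an assumption, not something this document specifies), spreading pool instances across distinct agents might look like this:

```yaml
kubernetes.dcos.io/dklb-config: |
  # ASSUMPTION: Marathon-style string constraint; spreads instances
  # across agents with distinct hostnames.
  constraints: "hostname:UNIQUE"
```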
By design, pools exposing Kubernetes ingresses to outside the DC/OS cluster (i.e. pools using the `slave_public` role) must be scheduled onto the DC/OS host network (i.e. the network that the public DC/OS agents run on). Also by design, pools exposing Kubernetes ingresses to inside the DC/OS cluster must be scheduled onto a DC/OS virtual network. By default, these pools are scheduled onto the `dcos` virtual network. It is, however, possible to pick a custom DC/OS virtual network for these pools by using the `.network` field of the configuration object:

```yaml
kubernetes.dcos.io/dklb-config: |
  network: "<edgelb-pool-network>"
```
❗ This field cannot be changed or removed after the `Ingress` resource is created.
In certain scenarios, it may be desirable to use a pre-existing EdgeLB pool to expose a Kubernetes ingress (instead of having `dklb` create one). This can easily be achieved by providing the name of the pre-existing EdgeLB pool as the value of the `.name` field of the configuration object.
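For example, to reuse a pool that already exists (the pool name below is illustrative):

```yaml
kubernetes.dcos.io/dklb-config: |
  name: "my-existing-pool"  # illustrative name of a pre-existing EdgeLB pool
```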
This example illustrates how to expose two different HTTP applications to outside the DC/OS cluster. To start with, two simple "echo" pods will be created:

```shell
$ kubectl run --restart=Never --image hashicorp/http-echo --labels app=http-echo-1,owner=dklb --port 80 http-echo-1 -- -listen=:80 -text='Hello from http-echo-1!'
$ kubectl run --restart=Never --image hashicorp/http-echo --labels app=http-echo-2,owner=dklb --port 80 http-echo-2 -- -listen=:80 -text='Hello from http-echo-2!'
$ kubectl get pod --selector "owner=dklb"
NAME          READY   STATUS    RESTARTS   AGE
http-echo-1   1/1     Running   0          5s
http-echo-2   1/1     Running   0          3s
```
Additionally, each of these pods will be exposed via a service of type `NodePort`:

```shell
$ kubectl expose pod http-echo-1 --port 80 --target-port 80 --type NodePort --name "http-echo-1"
$ kubectl expose pod http-echo-2 --port 80 --target-port 80 --type NodePort --name "http-echo-2"
$ kubectl get svc --selector "owner=dklb"
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
http-echo-1   NodePort   10.100.174.194   <none>        80:32070/TCP   5s
http-echo-2   NodePort   10.100.213.12    <none>        80:30383/TCP   3s
```
Then, an `Ingress` resource annotated for provisioning with EdgeLB and targeting the aforementioned services will be created:

```shell
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: edgelb
    kubernetes.dcos.io/dklb-config: |
      name: dklb-echo
  labels:
    owner: dklb
  name: dklb-echo
spec:
  rules:
  - host: "http-echo-1.com"
    http:
      paths:
      - backend:
          serviceName: http-echo-1
          servicePort: 80
  - host: "http-echo-2.com"
    http:
      paths:
      - backend:
          serviceName: http-echo-2
          servicePort: 80
EOF
ingress.extensions/dklb-echo created
$ kubectl get ingress --selector "owner=dklb"
NAME        HOSTS                             ADDRESS   PORTS   AGE
dklb-echo   http-echo-1.com,http-echo-2.com             80      3s
```
The `kubernetes.dcos.io/dklb-config` annotation defined on this `Ingress` resource will cause `dklb` to expose the ingress using an EdgeLB pool called `dklb-echo`. At this point, querying the EdgeLB API should confirm the existence of a pool called `dklb-echo` exposing port `80`:

```shell
$ dcos edgelb list
NAME       APIVERSION  COUNT  ROLE          PORTS
dklb-echo  V2          1      slave_public  9090, 80
```

This means that `dklb` has successfully created and provisioned the target EdgeLB pool based on the spec of the `dklb-echo` `Ingress` resource.
ℹ️ Host-based routing depends on the establishment of adequate DNS records for each host. Hence, and since DNS configuration is out-of-scope for this example, the requests below set the `Host` header explicitly instead of relying on DNS resolution.
To test connectivity, it is necessary to determine the public IP at which the target EdgeLB pool can be reached. This IP will eventually be reported in the `.status` field of the `Ingress` resource:

```shell
$ kubectl get ingress --selector "owner=dklb"
NAME        HOSTS                             ADDRESS                  PORTS   AGE
dklb-echo   http-echo-1.com,http-echo-2.com   <public-dcos-agent-ip>   80      3s
```
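The IP can also be extracted directly from the status subresource, e.g. for use in scripts (a sketch relying on standard kubectl JSONPath support):

```shell
$ kubectl get ingress dklb-echo -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```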
`curl` may then be used to confirm that the ingress is correctly exposed to outside the DC/OS cluster:

```shell
$ curl -H "Host: http-echo-1.com" http://<public-dcos-agent-ip>
Hello from http-echo-1!
$ curl -H "Host: http-echo-2.com" http://<public-dcos-agent-ip>
Hello from http-echo-2!
```

This means that requests made to the `http-echo-1.com` host are being forwarded to the `http-echo-1` service, and that similar routing is in place between the `http-echo-2.com` host and the `http-echo-2` service.
It should be noted that, since no default backend has been specified in the `dklb-echo` ingress, requests without a matching `Host` header will get `503` as a response:

```shell
$ curl -v http://<public-dcos-agent-ip>
(...)
> Host: <public-dcos-agent-ip>
(...)
< HTTP/1.0 503 Service Unavailable
(...)
```
After testing finishes, cleanup of the Kubernetes pods, services and ingresses, and of the target EdgeLB pool, can be done by running the following commands:

```shell
$ kubectl delete ingress --selector "owner=dklb"
$ kubectl delete svc --selector "owner=dklb"
$ kubectl delete pod --selector "owner=dklb"
```

The `dklb-echo` EdgeLB pool will be automatically deleted.