Design load-balancing and high-availability of services #1788
For L2, the soft anti-affinity would be ideal.
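In Kubernetes terms, "soft" anti-affinity is a preferred (rather than required) pod anti-affinity rule: replicas spread across nodes when possible, but still schedule when they can't. A minimal sketch, with illustrative labels and a hypothetical `lb` app name:

```yaml
# Illustrative pod-spec fragment: prefer not to co-locate replicas of the
# (hypothetical) "lb" app on the same node, without making it a hard rule.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: lb
        topologyKey: kubernetes.io/hostname
```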
We no longer need this, since we provide in-cluster HA for `kube-apiserver` access. If this is desired for out-of-cluster access, we can provide it using a `LoadBalancer` `Service` once we have the infrastructure to support this in place. This also removed the optional deployment of `keepalived`. See: #2103, #1788
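For illustration, a minimal sketch of what such out-of-cluster access could look like, assuming a `LoadBalancer` implementation (e.g. MetalLB) is already in place; the selector-less `Service` plus manually-managed `Endpoints` pairing, and all names and addresses, are hypothetical:

```yaml
# Hypothetical: front the control-plane nodes' kube-apiserver instances with
# a LoadBalancer VIP. A selector-less Service requires Endpoints managed by hand.
apiVersion: v1
kind: Service
metadata:
  name: apiserver-external
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 6443
    targetPort: 6443
---
apiVersion: v1
kind: Endpoints
metadata:
  # Must match the Service name so kube-proxy picks these endpoints up.
  name: apiserver-external
subsets:
- addresses:
  # Control-plane node IPs (placeholders).
  - ip: 10.100.0.11
  - ip: 10.100.0.12
  - ip: 10.100.0.13
  ports:
  - name: https
    port: 6443
```

Note that the apiserver's serving certificate would need the VIP in its SANs for clients to connect without `--insecure-skip-tls-verify`.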
While it's not exactly what I have in mind, https://github.com/redhat-cop/keepalived-operator goes into this direction to a certain extent. This could be a project we contribute to in order to get done what we want to achieve.
I experimented a bit: […] Then, executing HTTP requests from either the router host, or another VM whose routing table pointed to the router for the […]

So, in whatever design, we should consider the ability to BGP-peer with routers for L3 balancing. This could be done using Calico, if we stick to […]
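For reference, Calico configures such peering through its `BGPPeer` resource; a minimal sketch peering every node with an upstream router (peer IP and AS number are illustrative):

```yaml
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  # Router to peer with (illustrative values); applies to all nodes
  # unless restricted with a node/nodeSelector.
  peerIP: 192.168.0.1
  asNumber: 64512
```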
Currently, services exposed using an `Ingress` object can be load-balanced and are highly available, because the `nginx-ingress` `nginx` services are exposed on ports 80 and 443 of every node's workload-plane interface. As such, if a client is able to fail over to other server(s) (for HA), and clients connect randomly or round-robin to nodes in the cluster (LB), we achieve the goals.

However, many clients keep insisting that a single IP address they received, e.g., from DNS must work (at least until some caches get invalidated), or are unable to balance access across multiple endpoints.
As such, we need to provide virtual IPs and failover for them, as well as (potentially) load-balancing.
#779 states MetalLB needs to be deployed. Indeed, this allows creating `Service`s of type `LoadBalancer`: a VIP is then taken from a pool of available addresses, and this VIP is at all times announced via ARP by a single node in the cluster (when using L2 mode, at least; BGP changes the picture). However, this implies that this one node is a bottleneck for all traffic (even though this traffic is then propagated to one of the backing service instances, achieving LB at the backend). There's metallb/metallb#439 (comment), but even when implemented, this doesn't seem to fit nicely into the Kubernetes `Service` model.
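For reference, a minimal sketch of the L2-mode setup described above, using MetalLB's ConfigMap-based configuration (the address pool is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Pool from which LoadBalancer VIPs are allocated (illustrative range).
      - 10.200.0.100-10.200.0.110
```

With this in place, every `Service` of type `LoadBalancer` gets a VIP from the pool, and exactly one elected node answers ARP for it at any time, which is the single-node bottleneck discussed above.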
This ticket is meant to discuss alternative approaches and eventually come up with a design and implementation. Ideally this is not limited to Ingress-fronted services.