Add HealthChecks support for vs/vsr #635

Merged: 1 commit, Aug 6, 2019
94 changes: 76 additions & 18 deletions docs/virtualserver-and-virtualserverroute.md
@@ -7,24 +7,26 @@ This document is the reference documentation for the resources. To see additiona
**Feature Status**: The VirtualServer and VirtualServerRoute resources are available as a preview feature: they are suitable for experimenting and testing; however, they must be used with caution in production environments. Additionally, while the feature is in preview, we might introduce backward-incompatible changes to the resource specification in upcoming releases.

## Contents
- [VirtualServer and VirtualServerRoute Resources](#VirtualServer-and-VirtualServerRoute-Resources)
- [Contents](#Contents)
- [Prerequisites](#Prerequisites)
- [VirtualServer Specification](#VirtualServer-Specification)
- [VirtualServer.TLS](#VirtualServerTLS)
- [VirtualServer.Route](#VirtualServerRoute)
- [VirtualServerRoute Specification](#VirtualServerRoute-Specification)
- [VirtualServerRoute.Subroute](#VirtualServerRouteSubroute)
- [Common Parts of the VirtualServer and VirtualServerRoute](#Common-Parts-of-the-VirtualServer-and-VirtualServerRoute)
- [Upstream](#Upstream)
- [Upstream.TLS](#UpstreamTLS)
- [Upstream.Healthcheck](#UpstreamHealthcheck)
- [Header](#Header)
- [Split](#Split)
- [Rules](#Rules)
- [Condition](#Condition)
- [Match](#Match)
- [Using VirtualServer and VirtualServerRoute](#Using-VirtualServer-and-VirtualServerRoute)
- [Validation](#Validation)
- [Customization via ConfigMap](#Customization-via-ConfigMap)

## Prerequisites

@@ -209,12 +211,68 @@ tls:
| `next-upstream-timeout` | The time during which a request can be passed to the next upstream server. See the [proxy_next_upstream_timeout](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_timeout) directive. The `0` value turns off the time limit. The default is `0`. | `string` | No |
| `next-upstream-tries` | The number of possible tries for passing a request to the next upstream server. See the [proxy_next_upstream_tries](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream_tries) directive. The `0` value turns off this limit. The default is `0`. | `int` | No |
| `tls` | The TLS configuration for the Upstream. | [`tls`](#UpstreamTLS) | No |
| `healthCheck` | The health check configuration for the Upstream. See the [health_check](http://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html#health_check) directive. Note: this feature is supported only in NGINX Plus. | [`healthcheck`](#UpstreamHealthcheck) | No |

### Upstream.TLS
| Field | Description | Type | Required |
| ----- | ----------- | ---- | -------- |
| `enable` | Enables HTTPS for requests to upstream servers. The default is `False`, meaning that HTTP will be used. | `boolean` | No |

### Upstream.Healthcheck

The Healthcheck defines an [active health check](https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/). In the example below, we enable a health check for an upstream and configure all of the available parameters:

```yaml
name: tea
service: tea-svc
port: 80
healthCheck:
  enable: true
  path: /healthz
  interval: 20s
  jitter: 3s
  fails: 5
  passes: 5
  port: 8080
  tls:
    enable: true
  connect-timeout: 10s
  read-timeout: 10s
  send-timeout: 10s
  headers:
  - name: Host
    value: my.service
  statusMatch: "! 500"
```

| Field | Description | Type | Required |
| ----- | ----------- | ---- | -------- |
| `enable` | Enables a health check for an upstream server. The default is `false`. | `boolean` | No |
| `path` | The path used for health check requests. The default is `/`. | `string` | No |
| `interval` | The interval between two consecutive health checks. The default is `5s`. | `string` | No |
| `jitter` | The time within which each health check will be randomly delayed. By default, there is no delay. | `string` | No |
| `fails` | The number of consecutive failed health checks of a particular upstream server after which this server will be considered unhealthy. The default is `1`. | `integer` | No |
| `passes` | The number of consecutive passed health checks of a particular upstream server after which the server will be considered healthy. The default is `1`. | `integer` | No |
| `port` | The port used for health check requests. By default, the port of the upstream is used. Note: in contrast with the port of the upstream, this port is not a service port, but a port of a pod. | `integer` | No |
| `tls` | The TLS configuration used for health check requests. By default, the `tls` field of the upstream is used. | [`upstream.tls`](#UpstreamTLS) | No |
| `connect-timeout` | The timeout for establishing a connection with an upstream server. By default, the `connect-timeout` of the upstream is used. | `string` | No |
| `read-timeout` | The timeout for reading a response from an upstream server. By default, the `read-timeout` of the upstream is used. | `string` | No |
| `send-timeout` | The timeout for transmitting a request to an upstream server. By default, the `send-timeout` of the upstream is used. | `string` | No |
| `headers` | The request headers used for health check requests. NGINX Plus always sets the `Host`, `User-Agent` and `Connection` headers for health check requests. | [`[]header`](#Header) | No |
| `statusMatch` | The expected response status codes of a health check. By default, the response should have status code 2xx or 3xx. Examples: `"200"`, `"! 500"`, `"301-303 307"`. See the documentation of the [match](https://nginx.org/en/docs/http/ngx_http_upstream_hc_module.html?#match) directive. | `string` | No |
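To make the defaulting rules in the table concrete, here is a minimal standalone sketch of how unset fields fall back to their documented defaults (path `/`, interval `5s`, no jitter, one fail, one pass). The `healthCheck` struct and `applyDefaults` helper are illustrative, not the controller's actual types:

```go
package main

import "fmt"

// healthCheck mirrors a subset of the documented fields; names here are
// illustrative stand-ins, not the controller's actual types.
type healthCheck struct {
	Path     string
	Interval string
	Jitter   string
	Fails    int
	Passes   int
}

// applyDefaults fills in the documented defaults for any unset field:
// path "/", interval "5s", jitter "0s", fails 1, passes 1.
func applyDefaults(hc healthCheck) healthCheck {
	if hc.Path == "" {
		hc.Path = "/"
	}
	if hc.Interval == "" {
		hc.Interval = "5s"
	}
	if hc.Jitter == "" {
		hc.Jitter = "0s"
	}
	if hc.Fails == 0 {
		hc.Fails = 1
	}
	if hc.Passes == 0 {
		hc.Passes = 1
	}
	return hc
}

func main() {
	// Only path and fails are set; the rest take their defaults.
	hc := applyDefaults(healthCheck{Path: "/healthz", Fails: 5})
	fmt.Printf("%s %s %s %d %d\n", hc.Path, hc.Interval, hc.Jitter, hc.Fails, hc.Passes) // prints: /healthz 5s 0s 5 1
}
```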

### Header
The header defines an HTTP header:
```yaml
name: Host
value: example.com
```

| Field | Description | Type | Required |
| ----- | ----------- | ---- | -------- |
| `name` | The name of the header. | `string` | Yes |
| `value` | The value of the header. | `string` | No |

### Split

The split defines a weight for an upstream as part of the splits configuration.
33 changes: 29 additions & 4 deletions internal/configs/version2/config.go
@@ -2,10 +2,11 @@ package version2

// VirtualServerConfig holds NGINX configuration for a VirtualServer.
type VirtualServerConfig struct {
	Server        Server
	Upstreams     []Upstream
	SplitClients  []SplitClient
	Maps          []Map
	StatusMatches []StatusMatch
}

// Upstream defines an upstream.
@@ -37,6 +38,7 @@ type Server struct {
	Snippets                  []string
	InternalRedirectLocations []InternalRedirectLocation
	Locations                 []Location
	HealthChecks              []HealthCheck
}

// SSL defines SSL configuration for a server.
@@ -74,6 +76,23 @@ type SplitClient struct {
	Distributions []Distribution
}

// HealthCheck defines a HealthCheck for an upstream in a Server.
type HealthCheck struct {
	Name                string
	URI                 string
	Interval            string
	Jitter              string
	Fails               int
	Passes              int
	Port                int
	ProxyPass           string
	ProxyConnectTimeout string
	ProxyReadTimeout    string
	ProxySendTimeout    string
	Headers             map[string]string
	Match               string
}

// Distribution maps weight to a value in a SplitClient.
type Distribution struct {
	Weight string
@@ -98,3 +117,9 @@ type Parameter struct {
	Value  string
	Result string
}

// StatusMatch defines a Match block for status codes.
type StatusMatch struct {
	Name string
	Code string
}
20 changes: 20 additions & 0 deletions internal/configs/version2/nginx-plus.virtualserver.tmpl
@@ -30,6 +30,12 @@ map {{ $m.Source }} {{ $m.Variable }} {
}
{{ end }}

{{ range $m := .StatusMatches }}
match {{ $m.Name }} {
    status {{ $m.Code }};
}
{{ end }}

{{ $s := .Server }}
server {
    listen 80{{ if $s.ProxyProtocol }} proxy_protocol{{ end }};
@@ -82,6 +88,20 @@ server {
    }
    {{ end }}

    {{ range $hc := $s.HealthChecks }}
    location @hc-{{ $hc.Name }} {
        {{ range $n, $v := $hc.Headers }}
        proxy_set_header {{ $n }} "{{ $v }}";
        {{ end }}
        proxy_connect_timeout {{ $hc.ProxyConnectTimeout }};
        proxy_read_timeout {{ $hc.ProxyReadTimeout }};
        proxy_send_timeout {{ $hc.ProxySendTimeout }};
        proxy_pass {{ $hc.ProxyPass }};
        health_check uri={{ $hc.URI }} port={{ $hc.Port }} interval={{ $hc.Interval }} jitter={{ $hc.Jitter }}
            fails={{ $hc.Fails }} passes={{ $hc.Passes }}{{ if $hc.Match }} match={{ $hc.Match }}{{ end }};
    }
    {{ end }}

    {{ range $l := $s.Locations }}
    location {{ $l.Path }} {
        {{ range $snippet := $l.Snippets }}
116 changes: 110 additions & 6 deletions internal/configs/virtualserver.go
@@ -81,6 +81,92 @@ func (namer *variableNamer) GetNameForVariableForRulesRouteMainMap(rulesIndex in
	return fmt.Sprintf("$vs_%s_rules_%d", namer.safeNsName, rulesIndex)
}

func newHealthCheckWithDefaults(upstream conf_v1alpha1.Upstream, upstreamName string, cfgParams *ConfigParams) *version2.HealthCheck {
	return &version2.HealthCheck{
		Name:                upstreamName,
		URI:                 "/",
		Interval:            "5s",
		Jitter:              "0s",
		Fails:               1,
		Passes:              1,
		Port:                int(upstream.Port),
		ProxyPass:           fmt.Sprintf("%v://%v", generateProxyPassProtocol(upstream.TLS.Enable), upstreamName),
		ProxyConnectTimeout: generateString(upstream.ProxyConnectTimeout, cfgParams.ProxyConnectTimeout),
		ProxyReadTimeout:    generateString(upstream.ProxyReadTimeout, cfgParams.ProxyReadTimeout),
		ProxySendTimeout:    generateString(upstream.ProxySendTimeout, cfgParams.ProxySendTimeout),
		Headers:             make(map[string]string),
	}
}

func generateHealthCheck(upstream conf_v1alpha1.Upstream, upstreamName string, cfgParams *ConfigParams) *version2.HealthCheck {
	if upstream.HealthCheck == nil || !upstream.HealthCheck.Enable {
		return nil
	}

	hc := newHealthCheckWithDefaults(upstream, upstreamName, cfgParams)

	if upstream.HealthCheck.Path != "" {
		hc.URI = upstream.HealthCheck.Path
	}

	if upstream.HealthCheck.Interval != "" {
		hc.Interval = upstream.HealthCheck.Interval
	}

	if upstream.HealthCheck.Jitter != "" {
		hc.Jitter = upstream.HealthCheck.Jitter
	}

	if upstream.HealthCheck.Fails > 0 {
		hc.Fails = upstream.HealthCheck.Fails
	}

	if upstream.HealthCheck.Passes > 0 {
		hc.Passes = upstream.HealthCheck.Passes
	}

	if upstream.HealthCheck.Port > 0 {
		hc.Port = upstream.HealthCheck.Port
	}

	if upstream.HealthCheck.ConnectTimeout != "" {
		hc.ProxyConnectTimeout = upstream.HealthCheck.ConnectTimeout
	}

	if upstream.HealthCheck.ReadTimeout != "" {
		hc.ProxyReadTimeout = upstream.HealthCheck.ReadTimeout
	}

	if upstream.HealthCheck.SendTimeout != "" {
		hc.ProxySendTimeout = upstream.HealthCheck.SendTimeout
	}

	for _, h := range upstream.HealthCheck.Headers {
		hc.Headers[h.Name] = h.Value
	}

	if upstream.HealthCheck.TLS != nil {
		hc.ProxyPass = fmt.Sprintf("%v://%v", generateProxyPassProtocol(upstream.HealthCheck.TLS.Enable), upstreamName)
	}

	if upstream.HealthCheck.StatusMatch != "" {
		hc.Match = generateStatusMatchName(upstreamName)
	}

	return hc
}

func generateStatusMatchName(upstreamName string) string {
	return fmt.Sprintf("%s_match", upstreamName)
}

func generateUpstreamStatusMatch(upstreamName string, status string) version2.StatusMatch {
	return version2.StatusMatch{
		Name: generateStatusMatchName(upstreamName),
		Code: status,
	}
}
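The naming scheme is simple enough to check in isolation. This standalone sketch reproduces `generateStatusMatchName`; the upstream name used below is only an illustrative example of a generated upstream name, not one taken from the controller:

```go
package main

import "fmt"

// generateStatusMatchName mirrors the helper above: the generated `match`
// block for an upstream is named "<upstreamName>_match".
func generateStatusMatchName(upstreamName string) string {
	return fmt.Sprintf("%s_match", upstreamName)
}

func main() {
	// "vs_default_cafe_tea" is an illustrative upstream name.
	fmt.Println(generateStatusMatchName("vs_default_cafe_tea")) // prints: vs_default_cafe_tea_match
}
```

This name is what the template later interpolates into `match {{ $m.Name }} { ... }` and `health_check ... match=...`, tying the generated `match` block to its upstream.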

func generateVirtualServerConfig(virtualServerEx *VirtualServerEx, tlsPemFileName string, baseCfgParams *ConfigParams, isPlus bool) version2.VirtualServerConfig {
	ssl := generateSSLConfig(virtualServerEx.VirtualServer.Spec.TLS, tlsPemFileName, baseCfgParams)

@@ -91,6 +177,8 @@ func generateVirtualServerConfig(virtualServerEx *VirtualServerEx, tlsPemFileNam
	virtualServerUpstreamNamer := newUpstreamNamerForVirtualServer(virtualServerEx.VirtualServer)

	var upstreams []version2.Upstream
	var statusMatches []version2.StatusMatch
	var healthChecks []version2.HealthCheck

	// generate upstreams for VirtualServer
	for _, u := range virtualServerEx.VirtualServer.Spec.Upstreams {
@@ -99,6 +187,13 @@
		ups := generateUpstream(upstreamName, u, virtualServerEx.Endpoints[endpointsKey], isPlus, baseCfgParams)
		upstreams = append(upstreams, ups)
		crUpstreams[upstreamName] = u

		if hc := generateHealthCheck(u, upstreamName, baseCfgParams); hc != nil {
			healthChecks = append(healthChecks, *hc)
			if u.HealthCheck.StatusMatch != "" {
				statusMatches = append(statusMatches, generateUpstreamStatusMatch(upstreamName, u.HealthCheck.StatusMatch))
			}
		}
	}
	// generate upstreams for each VirtualServerRoute
	for _, vsr := range virtualServerEx.VirtualServerRoutes {
@@ -109,6 +204,13 @@
			ups := generateUpstream(upstreamName, u, virtualServerEx.Endpoints[endpointsKey], isPlus, baseCfgParams)
			upstreams = append(upstreams, ups)
			crUpstreams[upstreamName] = u

			if hc := generateHealthCheck(u, upstreamName, baseCfgParams); hc != nil {
				healthChecks = append(healthChecks, *hc)
				if u.HealthCheck.StatusMatch != "" {
					statusMatches = append(statusMatches, generateUpstreamStatusMatch(upstreamName, u.HealthCheck.StatusMatch))
				}
			}
		}
	}

@@ -179,9 +281,10 @@
	}

	return version2.VirtualServerConfig{
		Upstreams:     upstreams,
		SplitClients:  splitClients,
		Maps:          maps,
		StatusMatches: statusMatches,
		Server: version2.Server{
			ServerName:    virtualServerEx.VirtualServer.Spec.Host,
			ProxyProtocol: baseCfgParams.ProxyProtocol,
@@ -194,6 +297,7 @@
			Snippets:                  baseCfgParams.ServerSnippets,
			InternalRedirectLocations: internalRedirectLocations,
			Locations:                 locations,
			HealthChecks:              healthChecks,
		},
	}
}
@@ -252,8 +356,8 @@ func upstreamHasKeepalive(upstream conf_v1alpha1.Upstream, cfgParams *ConfigPara
	return cfgParams.Keepalive != 0
}

func generateProxyPassProtocol(enableTLS bool) string {
	if enableTLS {
		return "https"
	}
	return "http"
@@ -278,7 +382,7 @@ func generateLocation(path string, upstreamName string, upstream conf_v1alpha1.U
		ProxyBuffering:           cfgParams.ProxyBuffering,
		ProxyBuffers:             cfgParams.ProxyBuffers,
		ProxyBufferSize:          cfgParams.ProxyBufferSize,
		ProxyPass:                fmt.Sprintf("%v://%v", generateProxyPassProtocol(upstream.TLS.Enable), upstreamName),
		ProxyNextUpstream:        generateString(upstream.ProxyNextUpstream, "error timeout"),
		ProxyNextUpstreamTimeout: generateString(upstream.ProxyNextUpstreamTimeout, "0s"),
		ProxyNextUpstreamTries:   upstream.ProxyNextUpstreamTries,