
Support upstream directives such as backup in VirtualServerRoute CRD #626

Closed
jrmercier1 opened this issue Jul 17, 2019 · 3 comments
Labels
proposal An issue that proposes a feature request

Comments

@jrmercier1

Is your feature request related to a problem? Please describe.
Feature related to VirtualServer.route (upstream or split block?)

Describe the solution you'd like
Currently you can assign weights using the splits directive in a VirtualServerRoute. I would also like to be able to use a backup directive within an individual path. This would allow us to circuit-break at the subroute level.

subroutes:
- path: /animal/dog
  splits:
  - upstream: dog-v1
  - upstream: external-dog-service-failover-v1
    backup: true # (?)
- path: /animal/cat
  splits:
  - upstream: cat-v1
  - upstream: sorry-upstream
    backup: true # (?)

Or potentially within upstreams block?

@pleshakov pleshakov added the proposal An issue that proposes a feature request label Jul 18, 2019
@pleshakov
Contributor

Hi @jrmercier1

Thanks for the feature request! To make sure we understand the use case correctly, please see a couple of questions below:

What is the problem you're trying to solve? It seems that you want failover behavior for upstreams: If an upstream becomes unavailable, then the IC should start sending requests to the backup upstream. Is that the case?

How would you decide if an upstream fails? All of the endpoints of the corresponding service are unavailable?

Does the backup upstream represent a different version of the same service (application) or a special service that responds with error pages or something similar?

Have you considered any alternatives? For example, why having the endpoints of the dog-v1 and external-dog-service-failover-v1 in one service is not enough?

@jrmercier1
Author

What is the problem you're trying to solve? It seems that you want failover behavior for upstreams: If an upstream becomes unavailable, then the IC should start sending requests to the backup upstream. Is that the case?

Yes, that would be the case. The "backup" is only used if the primary one is down, similar to how we would use the upstream setup below in traditional NGINX Plus.

NGINX Plus example (simplified):

upstream animals-dog {
  ...
  1.2.3.4:8080 fail_timeout=10 max_fails=3 slow_start=300;  # dog-v1
  4.3.2.1:8080 backup;  # external-dog-service-failover-v1
  ...
}

location /animal/dog {
  proxy_pass http://animals-dog;
  ...
}

location @animals-hc {
  proxy_pass http://animals-dog;
  ...
  health_check interval=10 jitter=1 uri=/animal/dog/healthcheck match=healthy;
  ...
}

How would you decide if an upstream fails? All of the endpoints of the corresponding service are unavailable?

This is actually something I was thinking about after submitting the request. I am looking to have this mimic, as closely as possible, the backup directive we would usually use in regular NGINX Plus.

From what I can tell, Ingress health checks are handled a bit differently, since they appear to piggyback on the k8s checks? This may pose a problem when using an ExternalName-oriented service, since I am not specifying anything in a deployment.

In our case we would only want to send traffic to backup if all endpoints associated with the primary service are unavailable or not healthy.

Does the backup upstream represent a different version of the same service (application) or a special service that responds with error pages or something similar?

This would most likely be an ExternalName service. In this case it would be the same app or an error page running on a different K8s cluster or VM. The idea is to circuit-break only if the local service deployment is having issues.
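For context, a minimal sketch of how such a failover target might be exposed as an ExternalName service (the service name and hostname here are illustrative, not from any real setup):

```yaml
# Hypothetical ExternalName service resolving to a failover
# deployment running in another cluster or on a VM.
apiVersion: v1
kind: Service
metadata:
  name: external-dog-service-failover-v1
spec:
  type: ExternalName
  externalName: dog-failover.other-cluster.example.com
```

Because an ExternalName service has no endpoints or pods in the local cluster, there is nothing for k8s-based readiness checks to observe, which is the concern raised above.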

Have you considered any alternatives? For example, why having the endpoints of the dog-v1 and external-dog-service-failover-v1 in one service is not enough?

I believe this is an issue when using external services (ExternalName).

@pleshakov
Contributor

@jrmercier1
thanks for the additional details!

Note that health checks are soon to be added to the IC (#635) and support for externalname services for VirtualServer/VirtualServerRoutes is coming soon as well.

We're also currently designing support for error pages (a wrapper around the error_page directive, http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page). Consider the following example (this is a prototype):

apiVersion: k8s.nginx.org/v1alpha1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  - name: errors
    service: errors-svc
    port: 80
  # enabled per VirtualServer
  errorPages: 
  - codes: # ex. 1 - for responses with 404 and 405 status codes, redirect to https://nginx.org with a code 307 
    - 404
    - 405
    redirect:
      code: 307
      url: https://nginx.org
  - codes: # ex. 2 - for responses with 502 status code, send a request to the upstream errors for path "/" with the new response code 200
    - 502
    upstream: errors
    path: /
    newCode: 200
  - codes: # ex. 3 - for responses with 503 status code, send a request to the upstream errors for path "/errors/$upstream_status.html" ($upstream_status - nginx variable)
    - 503
    upstream: errors
    path: /errors/$upstream_status.html
  - codes:  # ex. 4 - for responses with 504 status code send a request to the upstream errors with special headers set to the values of NGINX variables 
    - 504
    upstream: errors
    path: /
    headers:
    - name: X-Original-URI
      value: $uri
    - name: X-Original-Status
      value: $upstream_status
  routes:
  - path: "/tea"
    upstream: tea
  - path: "/coffee"
    upstream: coffee

Please note that errorPages are configured for all upstreams.

Perhaps the following example would work for your case:

- codes:
  - 502
  upstream: errors
In this case, when no available/healthy endpoints exist in an upstream, NGINX will send a new request to the upstream errors with the same URI, returning a 502 response.
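Applied to the original subroutes example, that suggestion might look something like the sketch below. This uses the prototype syntax from the example above, so field names could change; the dog-svc service name is assumed for illustration.

```yaml
# Illustrative only: prototype errorPages syntax, hypothetical names.
upstreams:
- name: dog-v1
  service: dog-svc   # assumed primary service
  port: 80
- name: errors
  service: errors-svc
  port: 80
errorPages:
- codes:
  - 502              # fires when dog-v1 has no healthy endpoints
  upstream: errors   # failover content served from the errors upstream
routes:
- path: /animal/dog
  upstream: dog-v1
```

The main difference from a true backup directive is that errorPages reacts to the error response rather than marking a peer as backup inside the upstream block.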
