
Specifying group order smaller than the default (i.e. 0) #1875

Closed
hazim1093 opened this issue Mar 10, 2021 · 14 comments · Fixed by #2634
Labels: kind/feature, lifecycle/rotten

Comments

@hazim1093

Problem

I have an ingress with a rule that needs to be evaluated first in the ALB.
To ensure that, I would have to assign group orders to all other ingresses so that this particular ingress has the smallest order, which is 0.
It would be convenient if we could set the order such that one ingress has the smallest order, instead of specifying the order for all other present and future ingresses.

I ran into this while creating a common ingress that redirects HTTP to HTTPS for all routes and holds some common annotations, instead of adding boilerplate config to each ingress. (This request/issue is already tracked in #1768.)
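
For context, this is how an explicit order is assigned today; a minimal sketch, assuming the alb.ingress.kubernetes.io/group.order annotation documented for the controller, with an illustrative value:

# Today, every *other* Ingress in the group would need an explicit order
# larger than the one that must come first, e.g.:
metadata:
  annotations:
    alb.ingress.kubernetes.io/group.name: "dev"
    alb.ingress.kubernetes.io/group.order: "10"   # illustrative value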

Proposed Solution

Introducing an order smaller than the default would solve the problem,
e.g. order -1; the default would still be 0, so all other rules would be added after this one.
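
A hypothetical sketch of what that could look like (a negative value is not accepted by the controller today; it is shown purely to illustrate the proposal):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-loadbalancer-default
  annotations:
    alb.ingress.kubernetes.io/group.name: "dev"
    alb.ingress.kubernetes.io/group.order: "-1"   # hypothetical: sorts before the default order 0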

Current Solution / Workaround

By default the rules are ordered based on the lexical order of the Ingress's namespace/name.
I created a dummy namespace (e.g. 1-ns) that sorts before all others and placed this ingress in it, so that its rule is always evaluated first.

Example Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-loadbalancer-default
  namespace: 1-ns # dummy namespace that sorts first lexically
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: "dev"
    alb.ingress.kubernetes.io/tags: Environment=dev,Name=dev
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - http:
        paths:
          - path: /*
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
@M00nF1sh
Collaborator

M00nF1sh commented Mar 11, 2021

@hazim1093
we are introducing a new way to configure HTTP->HTTPS redirection: #1837, so you only need to add a single annotation alb.ingress.kubernetes.io/ssl-redirect: '443' to one of the Ingresses within the IngressGroup for the task. This feature will be released in v2.2.0.
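
For example, a minimal sketch of the annotation mentioned above, added to one Ingress in the group (the listener ports shown are taken from the example earlier in this issue):

metadata:
  annotations:
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'   # redirect HTTP traffic to HTTPS on port 443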

With the above feature, would you still have this feature request?
If it's still needed, what's your preference: allowing a negative order (the proposed solution here) vs. a controller-level flag to set the default order (e.g. you could set it to 500)? I feel allowing a negative order is more flexible per IngressGroup, but a negative order seems a little weird.

@M00nF1sh added the kind/design label on Mar 17, 2021
@hazim1093
Author

@M00nF1sh
That would make solving this particular case easier, but even then I believe common configuration should be moved to a common place rather than being specified on one of the ingresses.

In a scenario where you have multiple ingresses, having such annotations on one of them would make the other ingresses not "self-sufficient". Looking at an ingress without the common annotations, it would be difficult to see all the aspects that affect it. Similarly, if it is moved to another AWS LB controller or another environment, it would be hard to make sure all the common annotations (which could be spread across a number of ingresses) are present.

On the other hand, having such annotations on all ingresses would create duplication, and maybe even conflicts if different ingresses specify the same annotation and its value is changed in one of them over time.

I agree tinkering with the order seems a bit weird, whether allowing a negative order or defining a custom starting order. Instead, we could introduce an IngressGroup custom resource that would be responsible for provisioning the LB and hold all common configuration. Or it might be even better if we could use the IngressClass resource for this, i.e. specify common annotations or config in the IngressClass instead of the Ingress (roughly as sketched below).
What are your thoughts on that?
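
A rough, hypothetical sketch of the IngressClass idea; only the IngressClass kind and its controller/parameters fields are standard Kubernetes API, while the parameters object and all of its fields are made up here to illustrate where common configuration could live:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: dev-alb
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws        # hypothetical parameters reference
    kind: IngressClassParams
    name: dev-common
---
# Hypothetical object holding the group-wide configuration:
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: dev-common
spec:
  group:
    name: dev                      # hypothetical: shared IngressGroup name
  scheme: internet-facing          # hypothetical: common LB settings in one place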

@hazim1093
Author

I guess I diverted the discussion more towards #1768.

Setting the default order at the controller level seems like a good idea as well, for the case where one ingress should be processed before all the others that use the default order.
I do wonder whether it would confuse people if the default order differed between deployments of the controller.
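
Purely to illustrate that idea, a hypothetical controller argument (this flag does not exist today; the name is made up):

# hypothetical flag on the controller Deployment, not an existing option:
args:
  - --default-group-order=500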

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 20, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 19, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kishorj removed the lifecycle/rotten label on Sep 20, 2021
@kishorj
Collaborator

kishorj commented Sep 20, 2021

/reopen

@k8s-ci-robot
Contributor

@kishorj: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot reopened this on Sep 20, 2021
@MadhavJivrajani

/remove-kind design
/kind feature
kind/design is migrated to kind/feature, see kubernetes/community#6144 for more details

@k8s-ci-robot added the kind/feature label and removed the kind/design label on Oct 11, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 8, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
