
L4 Gateway Sharing #1062

Closed

markmc opened this issue Mar 22, 2022 · 11 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@markmc
Contributor

markmc commented Mar 22, 2022

The Gateway vs Route separation allows a cluster operator (the gateway owner) to share a gateway and its resources with many application developers.

However, in the L4 routing model (i.e. TCPRoute and UDPRoute), routes attaching to a shared gateway will generally be associated with a dedicated gateway listener, thereby preventing an application owner from creating a route without some cooperation from the gateway owner.

Yet the gateway may represent a resource that is desirable to share - for example, a cloud load-balancer, or even a public IP address - so it seems important that sharing be possible.

Sharing an L4 forwarding gateway would require that a cluster operator can define a gateway from which independent application owners can be assigned listener TCP/UDP ports, without any risk of other application owners interfering or conflicting with those ports.

Once a route has attached to such a listener, the application owner would retrieve the assigned port from status, and share this port and the gateway address with clients that wish to connect.
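For illustration, a TCPRoute in an application namespace attaching to such a shared gateway might look roughly like this - the resource kinds and parentRefs fields are the existing v1alpha2 API, while the gateway name, namespaces, and listener name are made up for the example:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: my-app
  namespace: app-team-a        # application owner's namespace (example)
spec:
  parentRefs:
  - name: shared-gateway       # gateway owned by the cluster operator (example name)
    namespace: infra           # example infrastructure namespace
    sectionName: tcp-pool      # listener the port would be allocated from (see below)
  rules:
  - backendRefs:
    - name: my-app
      port: 8080
```

The application owner would then read the allocated port back from the route's status rather than choosing one in spec.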

One way of thinking about this is in the context of "the goal of using Gateway to fill the role that is currently filled by Services of type LoadBalancer" - this would allow independent application developers to have type=LoadBalancer behavior without requiring expensive resources (load-balancers, public IP addresses, etc.) to be allocated to each of them.

For a longer discussion, see Gateway API - Transport Layer (L4) Routing vs Gateway Sharing

#818 is also relevant because the proposed modelling in the doc involves a listener having a port range to allocate from:

If a Listener could be associated with a range of ports, the Gateway controller could be responsible for assigning ports from the range to routes which match that Listener.
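As a rough sketch of what such a listener could look like - note that a portRange field does not exist in the Gateway API today and is shown only to illustrate the idea:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example
  listeners:
  - name: tcp-pool
    protocol: TCP
    # Hypothetical field - not part of the Gateway API. A range of ports the
    # gateway controller may allocate from, instead of a single fixed port.
    portRange:
      start: 31000
      end: 31999
    allowedRoutes:
      namespaces:
        from: All
      kinds:
      - kind: TCPRoute
```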

In Gateway.status, the attachment status of each port would need to be represented somehow. One way is to add a ListenerStatus for each port, or perhaps only for each allocated port. In this way, listeners with port ranges behave like a Listener template from which per-port Listeners are generated.
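Under that model, Gateway.status might grow one ListenerStatus entry per allocated port, along the lines of the sketch below - the generated listener names and the overall shape are hypothetical, while attachedRoutes, supportedKinds, and conditions are existing ListenerStatus fields:

```yaml
status:
  listeners:
  - name: tcp-pool-31000       # hypothetical generated listener, one per allocated port
    attachedRoutes: 1
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: TCPRoute
    conditions:
    - type: Ready
      status: "True"
  - name: tcp-pool-31001       # another generated listener for a second allocated port
    attachedRoutes: 1
    supportedKinds:
    - group: gateway.networking.k8s.io
      kind: TCPRoute
    conditions:
    - type: Ready
      status: "True"
```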

In TCPRoute.status, sufficient information should be provided to determine which generated Listener the route attached to, so the application owner can determine which port was allocated. We could make use of the recently added ParentReference.Port field to expose this.
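For example, the route status could echo the parent reference with the allocated port filled in - the parents, parentRef, and port fields exist in the API today, though using them to report a controller-allocated port is only a proposal:

```yaml
status:
  parents:
  - parentRef:
      name: shared-gateway
      namespace: infra
      sectionName: tcp-pool
      port: 31000              # port allocated by the gateway controller (proposed usage)
    controllerName: example.net/gateway-controller   # example controller name
    conditions:
    - type: Accepted
      status: "True"
```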

See also this work-in-progress branch showing the API changes and this work-in-progress Istio branch showing an Istio-based proof-of-concept.

@markmc markmc added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 22, 2022
@youngnick
Contributor

For this to work, Gateway will need a way to specify a pool of ports (most likely a range) to draw from (probably covered by #818), and will also need the port included in the status (#1060), right?

That gives the Gateway owner a way to specify what ports to pick from, and the Route owner a way to know what they got.

@hbagdi
Contributor

hbagdi commented Apr 1, 2022

Reading this issue reminds me of my recent comment: #1061 (comment)

Spitballing an idea:
spec.listeners is currently expected to be populated and static - static meaning configured by a human. We could consider making spec.listeners dynamically managed, maybe by another controller or by the controller managing the gateway, and then have either another section within the Gateway or a dedicated ReferencePolicy that dictates the rules for binding L4 routes to the gateway (which dynamically creates the listener).
The distinction between dynamic and static listeners in the API semantics could be helpful for other use cases as well.

@youngnick
Contributor

Historically, having spec fields be dynamically managed has been a mistake - there is a section in the API conventions that talks about how this hasn't gone well. The reason for the split between spec and status is that spec reflects the user intent, and status reflects the actual state.

I think that having a way to represent a user's intent to request "please dynamically assign me a port from a list of ports" is a much better user experience.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 2, 2022
@youngnick
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 7, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 5, 2022
@shaneutt
Member

shaneutt commented Oct 5, 2022

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 5, 2022
@shaneutt
Member

shaneutt commented Oct 5, 2022

While I don't think this issue is entirely ready for action, I do think it represents some key interests in the L4 ingress space that I would like to see addressed. I'm going to take this one and try to shepherd it forward, with the goal of fully defining the acceptance criteria that would see it completed and, ideally, getting it into v0.7.0.

@shaneutt shaneutt self-assigned this Oct 5, 2022
@shaneutt shaneutt added this to the v0.7.0 milestone Oct 5, 2022
@aojea
Contributor

aojea commented Oct 25, 2022

/cc

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 23, 2023
@shaneutt shaneutt removed their assignment Jan 30, 2023
@shaneutt shaneutt modified the milestones: v0.7.0, v1.0.0 Feb 21, 2023
@shaneutt shaneutt added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Feb 21, 2023
@shaneutt shaneutt moved this from Backlog to Triage in Gateway API: The Road to GA Mar 8, 2023
@shaneutt shaneutt removed this from the v1.0.0 milestone Mar 8, 2023
@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Mar 14, 2023
@shaneutt shaneutt moved this to FROZEN in Post-GA Refinement Sep 18, 2024
@shaneutt
Member

shaneutt commented Oct 2, 2024

At this point, given how little emphasis there is behind our L4 support (and some doubt about whether it will move forward at all, see #2644 and #2645), it seems reasonable to close this one as unplanned for now. We can always re-open it later if someone wants to make a case for doing so.

@shaneutt shaneutt closed this as not planned Oct 2, 2024
@github-project-automation github-project-automation bot moved this from Blocked/Stalled to Done in Post-GA Refinement Oct 2, 2024