
Kubernetes Ingress Controller Wishlist #1254

Open · 5 of 15 tasks
dpbevin opened this issue Sep 19, 2021 · 15 comments
Labels: Kubernetes Ingress Controller; Type: Idea (This issue is a high-level idea for discussion.)
@dpbevin (Contributor) commented Sep 19, 2021

What should we add or change to make your life better?

I've captured some features that would be good for improving the Kubernetes Ingress controller code. I'm more than happy to contribute heavily in this space, as I previously built an Ingress Controller project based on YARP that included some of these items.

  • Same ingress or service name in multiple Kubernetes namespaces #1389 Support ingresses of same name in multiple namespaces. Reasonably simple - cluster and route IDs don't include namespace, so clashes are possible.
  • Single deployable. Combine the "sample ingress" project and the "controller" project into a single deployable, making it easy to set up and manage.
  • Path rewrite via ingress annotations (a hypothetical example follows this list)
  • Default (fallback) TLS support for all ingresses, utilizing a Kube secret.
  • SNI-based TLS support on a per-ingress basis. Again, based on Kube approach. (work in progress May 2022)
  • Kubernetes Ingress Class Support #1397 Follow the newer IngressClass specification. Allows multiple ingress controllers to be present in the same cluster: not just other ingress technologies, but potentially multiple YARP-based controllers for different usages (e.g. public vs. internal ingresses).
  • Report "scheduled for sync" and similar events to ingress resources, to identify that the YARP controller is managing specific ingresses. Not well documented from a Kube point of view, but something that other ingress controllers do.
  • Docker image and Helm Chart
  • Session affinity via ingress annotations
  • OAuth enforcement for backend services that don't have their own auth. Similar to https://github.com/oauth2-proxy/oauth2-proxy.
  • gRPC routing
  • OpenTracing support. Great for diagnosing issues
  • Request size limiting via ingress annotations
  • Header-based canary routing via ingress annotations
  • Rate limiting via ingress annotations
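
For a concrete picture of the annotation-driven items above (path rewrite, session affinity), here is a purely hypothetical sketch. The annotation names are invented for illustration and are not an existing YARP contract:

```yaml
# Hypothetical annotations -- names invented for illustration only
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    example.yarp.io/path-rewrite: "/api/{**remainder} -> /{**remainder}"  # strip the /api prefix
    example.yarp.io/session-affinity: "Cookie"                            # sticky sessions via cookie
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
```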

Why?

To make this project a great option for Kubernetes ingress!

dpbevin added the "Type: Idea" label Sep 19, 2021
@samsp-msft (Contributor) commented:

We welcome contributions in this space, as K8s is not our area of expertise and we are heads down trying to finish 1.0 for YARP.

I have questions on a couple of features:

  • Single deployable. Combine the "sample ingress" project and the "controller" project into a single deployable, making it easy to set up and manage

We have found with the Service Fabric equivalent that when a number of proxy instances query the backend for changes, it can create quite a bit of load, which is why we are thinking that a separate instance for the controller makes more sense.

  • Docker Image

This is on the roadmap, but it has taken a back seat because we have found that YARP's differentiator is its customizability, and a prebuilt image implies a static binary rather than something built for each scenario. Are you not wanting or needing any customization of the proxy logic, with everything driven by config/annotations?

  • gRPC routing

What are you looking for specifically here? YARP can already proxy gRPC requests.

  • OpenTracing support

I think this is already available when using .NET 6. @shirhatti can you confirm?

  • Header-based canary routing via ingress annotations

What exactly are you looking for here?

  • Rate limiting

We don't have rate limiting support directly in ASP.NET Core, but you can use middleware such as AspNetCoreRateLimit, which will apply to inbound requests through YARP. This is an example of the kind of customization that is easy to implement because of the modular nature of YARP.
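
As a rough sketch of that wiring (the exact registration calls vary by AspNetCoreRateLimit version, so treat this as an outline rather than a verified setup):

```csharp
// Sketch: AspNetCoreRateLimit in front of YARP (package: AspNetCoreRateLimit).
using AspNetCoreRateLimit;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMemoryCache();  // rate limit counters stored in memory
        services.Configure<IpRateLimitOptions>(Configuration.GetSection("IpRateLimiting"));
        services.AddInMemoryRateLimiting();
        services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
        services.AddReverseProxy().LoadFromConfig(Configuration.GetSection("ReverseProxy"));
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseIpRateLimiting();  // runs before the proxy, so inbound requests are limited
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapReverseProxy());
    }
}
```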

@shirhatti commented:

> I think this is already available when using .NET 6. @shirhatti can you confirm?

We are already instrumented for OpenTelemetry (not OpenTracing*) support. The OpenTelemetry project already has support for Jaeger, Zipkin, and OTLP exporters (among others).

* OpenTracing is being retired in favor of OpenTelemetry
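
For reference, a minimal sketch of that wiring on .NET 6 (the exact extension methods and package splits have shifted across OpenTelemetry releases):

```csharp
// Sketch: OpenTelemetry tracing for an ASP.NET Core / YARP host on .NET 6.
// Requires OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore,
// OpenTelemetry.Instrumentation.Http, and OpenTelemetry.Exporter.Jaeger.
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// In ConfigureServices:
services.AddOpenTelemetryTracing(tracing => tracing
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("yarp-ingress"))
    .AddAspNetCoreInstrumentation()   // inbound request spans
    .AddHttpClientInstrumentation()   // outbound (proxied) request spans
    .AddJaegerExporter());            // or AddZipkinExporter / AddOtlpExporter
```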

> We don't have rate limiting support directly in ASP.NET Core, but you can use middleware such as AspNetCoreRateLimit, which will apply to inbound requests through YARP. This is an example of the kind of customization that is easy to implement because of the modular nature of YARP.

This is something that's being worked on in ASP.NET Core for .NET 7. We should be shipping preview packages for rate limiting soon. See aspnet/AspLabs#384.
👀 @BrennanConroy

@macsux (Contributor) commented Sep 20, 2021

A few things that I would like to see added to the list:

  • Custom CRDs as an alternative to using the standard Ingress resource. The Ingress resource is very limiting, and extending it via metadata annotations will get messy very quickly for more advanced routing setups.
  • Ability to combine multiple configuration sources instead of taking everything from the dispatcher. For example, I want routes managed via a Spring config server backed by a Git repository, but I want clusters to be built up dynamically by observing Kubernetes infrastructure. Potentially refactor the Receiver as an IConfigurationProvider so it can be overlaid with other config sources (a sketch follows this list). This would have the added benefit of allowing the current state of the YARP configuration to be monitored via something like the /env endpoint in Steeltoe Actuators.
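
A minimal sketch of what such an overlay could look like using YARP's IProxyConfigProvider extensibility point; the merging strategy here is illustrative, not an existing YARP feature:

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Primitives;
using Yarp.ReverseProxy.Configuration;

// Hypothetical overlay: routes come from one provider (e.g. config-server backed),
// clusters from another (e.g. built by watching Kubernetes).
public class MergedProxyConfigProvider : IProxyConfigProvider
{
    private readonly IProxyConfigProvider _routeSource;
    private readonly IProxyConfigProvider _clusterSource;

    public MergedProxyConfigProvider(IProxyConfigProvider routeSource, IProxyConfigProvider clusterSource)
    {
        _routeSource = routeSource;
        _clusterSource = clusterSource;
    }

    public IProxyConfig GetConfig() =>
        new MergedConfig(_routeSource.GetConfig(), _clusterSource.GetConfig());

    private sealed class MergedConfig : IProxyConfig
    {
        public MergedConfig(IProxyConfig routes, IProxyConfig clusters)
        {
            Routes = routes.Routes;
            Clusters = clusters.Clusters;
            // Reload whenever either underlying source changes.
            ChangeToken = new CompositeChangeToken(new[] { routes.ChangeToken, clusters.ChangeToken });
        }

        public IReadOnlyList<RouteConfig> Routes { get; }
        public IReadOnlyList<ClusterConfig> Clusters { get; }
        public IChangeToken ChangeToken { get; }
    }
}
```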

@dpbevin (Contributor, Author) commented Sep 24, 2021

Hi @samsp-msft. Lots of great follow-up questions there. Most can be answered in a similar fashion...

I rely heavily on a pre-built nginx Ingress controller (single deployable) in my K8S setup and I suspect I'm not alone 😜.

So, step 1. I'd like a drop-in replacement for nginx. I'm lazy, I don't want to have to compile something for every situation.

Nginx isn't without its flaws. I've run into a problem with its support for gRPC streaming just this week (related to client body size limits).

It has its limitations too. Recently, I needed to do auth conversion: client certs on the front side, OIDC client credentials on the backend. Let's just say there was plenty of frustration with Lua, Perl, and JavaScript. I'm a C# nut at heart, so for those occasional "I need extensibility" situations, YARP is a good fit. But these are the exceptions, not the rule.

Back to the single deployable...

Loading certs (from secrets) becomes harder with separate deployables because of the cluster role bindings. The actual ingress needs SNI plus a way of reading the secrets, so the logic to do this would be split between the two deployables.
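
For what it's worth, the per-connection certificate selection itself is straightforward in Kestrel; the hard part is the secret access above. A minimal sketch, assuming a hypothetical certificateStore populated from K8S TLS secrets:

```csharp
// Sketch: SNI-based certificate selection in Kestrel, inside ConfigureWebHostDefaults(webBuilder => ...).
// `certificateStore` and `fallbackCert` are hypothetical, fed from K8S TLS secrets elsewhere.
webBuilder.ConfigureKestrel(kestrel =>
{
    kestrel.ListenAnyIP(443, listen =>
    {
        listen.UseHttps(https =>
        {
            https.ServerCertificateSelector = (connectionContext, sniHostName) =>
                certificateStore.GetForHost(sniHostName)  // per-ingress cert looked up by SNI host
                ?? fallbackCert;                          // default/fallback TLS cert
        });
    });
});
```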

I've not seen high load with nginx personally, though I know there are some potential problems lurking. K8S has a new feature called "EndpointSlices" that would likely improve things. I don't see any of these issues being a deal-breaker for the majority of situations, though.

The rich middleware of ASP.NET Core, combined with its blistering performance, means that a YARP-based ingress controller can easily be superior to many of the ingress controllers available for K8S today.

I'm sure I've missed a couple of items but I think this comment is long enough already 🤣.

Happy to talk it through more.

@GreenShadeZhang commented:

Recently, I have also been looking at projects such as API gateways. The mainstream options (the Istio/Envoy service mesh, nginx) mostly base their extensions on Lua or C++, and of course some also use WebAssembly. Many of them integrate well with K8s, such as ingress-nginx or APISIX. I think we really need a K8s ingress based on .NET. This project is already used as a reverse proxy, so I think your idea is really good, and I look forward to your continued sharing of development ideas for this feature.

@adityamandaleeka (Member) commented:

Should this also go with this list? #1145

We'll need to understand the scope of that work as well.

samsp-msft moved this to 📋 Backlog in YARP 2.x on Jun 9, 2022
samsp-msft moved this from 📋 Backlog to 🏗 In progress in YARP 2.x on Jun 14, 2022
@nquandt commented Sep 22, 2022

Will the Yarp.Kubernetes.Controller package be added to NuGet anytime soon?

@jmezach (Contributor) commented May 3, 2023

@nquandt Good question. I've recently tried it using the daily builds, and although there are definitely some rough edges, it seems to be working quite well already. Having the possibility to use YARP as a Kubernetes Ingress would be very useful for us. So far we've been using a centralised config-based approach, which works well too, but it requires all our teams to make changes to this central configuration, while having the Ingress would allow them to define how to route traffic to their services independently of other teams.

@nquandt commented May 4, 2023

@jmezach I have a packaged version of the 2.0 controller project in my private feed and use it as our primary reverse proxy for our cloud services. I use the ingress YAMLs in Kubernetes to specify new routes any time a new service gets deployed. I have seen almost no issues using it in production in the six months I've been running it, so I don't see why this couldn't be added to NuGet unless there are planned breaking changes.

@jmezach (Contributor) commented May 5, 2023

@nquandt Did you set it up as a separate monitor and actual ingress, or did you combine the two? I did notice during my testing that if the monitor shuts down, the ingress will keep trying to reconnect every half second or so, which could lead to a spike in CPU usage when that happens. I guess some kind of circuit breaker could be added there to avoid this.
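
Something as simple as exponential backoff in the reconnect loop would probably help; a sketch, with ConnectToMonitorAsync standing in for whatever the real connect call is:

```csharp
// Sketch: reconnect loop with exponential backoff instead of a fixed ~500 ms retry.
// ConnectToMonitorAsync is a hypothetical stand-in for the Receiver's real connect call.
async Task RunReceiverAsync(CancellationToken cancellation)
{
    var delay = TimeSpan.FromMilliseconds(500);
    while (!cancellation.IsCancellationRequested)
    {
        try
        {
            await ConnectToMonitorAsync(cancellation);
            delay = TimeSpan.FromMilliseconds(500);  // reset the backoff after a successful session
        }
        catch (Exception)
        {
            await Task.Delay(delay, cancellation);
            delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2,
                                                TimeSpan.FromSeconds(30).Ticks));  // cap at 30 s
        }
    }
}
```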

@nquandt commented May 5, 2023

I am using Monitor + Ingress. I found that I had issues horizontally scaling when combining the two: because the "monitor" updates the status of the ingress YAML in Kube, it causes the other instances to register a change, and they constantly cycle as each updates the ingress YAML. I have not had an instance of my "monitor" crashing, so I haven't hit that CPU issue. I haven't explored that portion of the project much, but could a check be added to the Receiver to prevent spiking in the event of a no-connect? The Limiter seems to already have a "max" of two requests per second after three failures (probably to prevent that).

@samsp-msft (Contributor) commented:

> A few things that I would like to see added to the list:
>
>   • Custom CRDs as an alternative to using the standard Ingress resource. The Ingress resource is very limiting, and extending it via metadata annotations will get messy very quickly for more advanced routing setups.
>   • Ability to combine multiple configuration sources instead of taking everything from the dispatcher. For example, I want routes managed via a Spring config server backed by a Git repository, but I want clusters to be built up dynamically by observing Kubernetes infrastructure. Potentially refactor the Receiver as an IConfigurationProvider so it can be overlaid with other config sources. This would have the added benefit of allowing the current state of the YARP configuration to be monitored via something like the /env endpoint in Steeltoe Actuators.

Looking at the YAML spec for Ingress, and the less than ideal way that custom annotations need to be supplied, I am wondering if a hybrid approach to configuration would be desirable. This would use existing mechanisms, such as YARP JSON configuration, to specify routes and clusters. The difference would be when it comes to destinations: we could extend the schema to specify destinations other than with IP addresses. In the case of Kubernetes, I am thinking in terms of a service name or a label specification for a deployment, which YARP would resolve to produce a list of destinations. It would be something like a combination of route extensibility and config filters. Using a config filter would mean that, in the case of a config server, the resolved information would be passed to the instances using it. For example:

```json
{
  "ReverseProxy": {
    "Routes": {
      "route1": {
        "ClusterId": "cluster1",
        "Match": {
          "Path": "{**catch-all}"
        }
      }
    },
    "Clusters": {
      "cluster1": {
        "Destinations": {},
        "Extensions": {
          "kubernetes": {
            "selector": {
              "app": "app1",
              "region": "westus1"
            }
          }
        }
      }
    }
  }
}
```

So in the example above, a Kubernetes config filter would use a kubernetes config extension on the cluster to specify the rules for resolving destinations. In this case it would find all pods with app=app1 and region=westus1 and add them as destinations to the cluster. This information could come from any config provider, not just JSON files. The config filter would be responsible for the Kubernetes selector resolution and would update the list as the Kubernetes configuration changes.
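
A rough sketch of what such a filter could look like against today's extensibility points, using the cluster Metadata bag in place of the proposed "Extensions" section; the metadata key, namespace, and port below are assumptions for illustration:

```csharp
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using k8s;
using Yarp.ReverseProxy.Configuration;

// Sketch: an IProxyConfigFilter that resolves a label selector into pod destinations.
public class KubernetesSelectorConfigFilter : IProxyConfigFilter
{
    private readonly IKubernetes _client;

    public KubernetesSelectorConfigFilter(IKubernetes client) => _client = client;

    public async ValueTask<ClusterConfig> ConfigureClusterAsync(ClusterConfig cluster, CancellationToken cancel)
    {
        // "kubernetes.selector" is a hypothetical metadata key, e.g. "app=app1,region=westus1".
        if (cluster.Metadata is null || !cluster.Metadata.TryGetValue("kubernetes.selector", out var selector))
        {
            return cluster;
        }

        var pods = await _client.ListNamespacedPodAsync("default", labelSelector: selector,
                                                        cancellationToken: cancel);

        var destinations = pods.Items.ToDictionary(
            pod => pod.Metadata.Name,
            pod => new DestinationConfig { Address = $"http://{pod.Status.PodIP}:8080" }); // assumed port

        return cluster with { Destinations = destinations };
    }

    public ValueTask<RouteConfig> ConfigureRouteAsync(RouteConfig route, ClusterConfig? cluster, CancellationToken cancel)
        => new(route);
}

// Registered alongside the proxy:
// services.AddReverseProxy()
//         .LoadFromConfig(Configuration.GetSection("ReverseProxy"))
//         .AddConfigFilter<KubernetesSelectorConfigFilter>();
```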

@jmezach (Contributor) commented May 31, 2023

@samsp-msft I think I quite like that approach. We currently have an entirely configuration-based approach for our YARP-based gateway running on Kubernetes, which routes requests both to containers within the Kubernetes cluster and to our older Windows-based infrastructure. For routing requests to Kubernetes containers we're using just a single destination, a Kubernetes Service, which then does the load balancing. While that works fine, it doesn't combine very well with YARP's health checking features: with one destination it can only health check one instance at a time, while other instances might be down. Having the actual pods as destinations obviously resolves that issue, but that is not something you'd want to maintain manually within the configuration. Having a config filter as you described would at least level up our current approach.
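
For context, the per-cluster active health check settings we'd want look roughly like this in YARP's JSON config schema (the interval, path, and addresses are placeholders):

```json
"cluster1": {
  "HealthCheck": {
    "Active": {
      "Enabled": "true",
      "Interval": "00:00:10",
      "Timeout": "00:00:05",
      "Policy": "ConsecutiveFailures",
      "Path": "/healthz"
    }
  },
  "Destinations": {
    "pod1": { "Address": "http://10.0.0.11:8080" },
    "pod2": { "Address": "http://10.0.0.12:8080" }
  }
}
```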

That being said, having a full Ingress feature would definitely have some more benefits. Right now the configuration is maintained in a single repository which is modified by all our development teams, and it is not uncommon for them to bump into each other. With the ingress controller feature, teams can do their own thing by just defining an Ingress as part of their deployment process.

@msschl commented Aug 2, 2023

Another wish of mine would be documentation for the available ingress annotations for YARP Kubernetes.

@Joren-Thijs-KasparSolutions commented:

@msschl There is a list of supported annotations in the Ingress Sample readme. However, I too would like to see Kubernetes ingress documentation included with the real docs.
