Create a Controller<K> runtime #148
A few more questions..
Should we? Seems like another place to encourage reconcile to be idempotent.
Yeah, I can see a case for both ways. Essentially I think we just want #52 to be an option.
Thinking of doing this within the controller itself. Mutexing the event queue so we can debounce them.
Yeah, if we're following the Go pattern, then the reconcile function will have to return an enum of "what the controller should do next". E.g.
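A minimal sketch of what such a "what to do next" enum might look like in Rust; the name `ReconcileAction` and its variants are illustrative only, not an existing kube-rs type:

```rust
use std::time::Duration;

// Illustrative only: one possible shape for "what the controller should do next"
// after a reconcile call.
enum ReconcileAction {
    /// Nothing more to do until another watch event arrives.
    Done,
    /// Something went wrong or is incomplete; re-run as soon as possible.
    Requeue,
    /// Re-run after a delay (e.g. waiting for owned objects to become ready).
    RequeueAfter(Duration),
}
```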
Hey 👋 I work on controller-runtime and I'm a (learning) Rust noob. Love what I see here 🙂 This seems pretty closely related to #102 and I noticed you've been sticking pretty close to the controller-runtime patterns. I also saw the draft in #184 with a controller owning its own informers.
In CR there's one informer per GVK shared in a cache owned by the manager. Each controller has a workqueue, which is basically a priority queue based on the next time to reconcile a particular object. Informers drive events into the workqueue. Having at most 1 informer/ListWatcher per GVK helps reduce server load. I don't really know enough about how that maps into Rust -- an Arc<Vec<Informer<T>>> or something? The workqueue itself guarantees an individual controller only processes a single item on one goroutine at a time, since each controller can have multiple goroutines. The workqueue also enforces exponential backoff -- enqueueAfter basically enforces a "notAfter" semantic of adding back to the workqueue with the calculated delay ("Mutexing the event queue so we can debounce them." -> this sounds about right). Anyway, that's not to say anything about how this should work in Rust. I noticed you were feeling out some of these areas in draft PRs and figured I'd share 🙂 https://github.com/kubernetes/client-go/blob/master/util/workqueue/delaying_queue.go
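A rough Rust rendering of the enqueueAfter/backoff semantic described above, assuming a tokio mpsc channel as the work queue; `Request`, `requeue_after_backoff`, and the 5 minute cap are all illustrative choices, not client-go or kube-rs APIs:

```rust
use std::{collections::HashMap, time::Duration};
use tokio::{sync::mpsc, time::sleep};

/// Illustrative reconcile request: "which object", with the event type erased.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Request {
    name: String,
    namespace: String,
}

/// Re-add a failed request to the queue after an exponentially growing delay,
/// mirroring the "notAfter"/enqueueAfter behaviour described above.
fn requeue_after_backoff(
    tx: mpsc::Sender<Request>,
    req: Request,
    failures: &mut HashMap<Request, u32>,
) {
    let attempts = failures.entry(req.clone()).or_insert(0);
    *attempts += 1;
    // 1s, 2s, 4s, ... capped (the 5 minute cap is an arbitrary choice here).
    let delay = Duration::from_secs(1u64 << (*attempts - 1).min(8))
        .min(Duration::from_secs(300));
    tokio::spawn(async move {
        sleep(delay).await;
        // Ignore the error if the queue has already shut down.
        let _ = tx.send(req).await;
    });
}
```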
Thanks! That is super helpful! Have been trying to dig into it. Sounds like we are already trying the same 1:1 informer-to-resource mapping. The queuing behaviour is one of the harder problems. Have not found a good internal queue solution yet, and am very curious about how the exponential backoff interacts with the queue in CR. It looks like it does this:
Which seems sensible, if I've read it right. How we translate all this sensibly to Rust is obviously the bigger question. We are going to rely on async and futures, so right now it's a question of whether we 1. use callbacks with a strictly controlled signature that we call internally with a set number of arguments, or 2. expose some kind of pull-based mechanism into the queue (but somehow still retaining the backoff mechanics). Am leaning a bit towards 2. atm (even if it needs more than one queue to separate public/private + backoff mechanics), but that's all up for debate still. Note that solution 2. looks more like the informer examples herein (no callbacks, just pull events out).
You've got the behavior right on the CR side. Your recommended solution sounds great to me! My intuition tells me there's some nice async-y way to drive this in Rust, and it looks like you already have some of it: https://github.com/clux/kube-rs/pull/184/files#diff-0db9a0cd2ebbcd77c0d69cf94c787b5eR150-R158

re: informer 1:1 with resource -- I meant N controllers can share 1 informer in CR. In your draft it looks like a new informer per controller? Was thinking some sort of cache of informers passed to `new` (can default to nil for new informers, or use some sort of builder pattern?). I'd love to try my hand at writing a finalizer example you mentioned in another issue -- maybe that'll give me some better ideas.
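A sketch of the "cache of informers passed in, defaulting to creating new ones" idea, under the assumption of one informer per GVK shared behind an Arc; `SharedInformers`, `Informer`, and `Gvk` are stand-in names, not kube-rs types:

```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::Mutex;

/// Illustrative key: group/version/kind of the watched resource.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Gvk {
    group: String,
    version: String,
    kind: String,
}

/// Stand-in for an informer driving watch events for one GVK.
struct Informer {
    gvk: Gvk,
}

/// One informer per GVK, shared by however many controllers want it.
#[derive(Default, Clone)]
struct SharedInformers {
    inner: Arc<Mutex<HashMap<Gvk, Arc<Informer>>>>,
}

impl SharedInformers {
    /// Return the existing informer for this GVK, or create (and cache) one.
    async fn get_or_create(&self, gvk: Gvk) -> Arc<Informer> {
        let mut map = self.inner.lock().await;
        map.entry(gvk.clone())
            .or_insert_with(|| Arc::new(Informer { gvk }))
            .clone()
    }
}
```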
Oh, interesting. I didn't even think about the case where we have more than one controller per binary, that sounds unusual. It will also complicate things a bit. Who owns all the Informers in CR then? The Manager? Needs to be something higher level than the Controller, and it feels a bit awkward for the apps to manage this 🤔
Yeah, please. If you get anywhere useful it'd be very much appreciated 👍
Yep -- the manager owns an object cache shared between both the kubeclient and the informers. So the manager has a global-ish cache of all objects any controller under it cares about. There's also a non-cached client, of course, and writes are never cached.
I think it's the common case, rather than the exception -- I rarely see production-level operators with < 2 controllers. Many controllers in one binary describes the core k8s components pretty well 😋 Advanced users of CR might even dynamically load/unload controllers inside their operator code. https://github.com/crossplane/crossplane-runtime does something like this.
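A Rust-flavoured illustration of the split described above (reads served from the manager-owned, informer-fed cache; writes always going straight to the API server); every type here is hypothetical rather than a kube-rs or controller-runtime API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for an object key, a stored object, and an API client.
type ObjectKey = (String, String); // (namespace, name)

#[derive(Clone)]
struct Object {
    key: ObjectKey,
    data: String,
}

struct ApiClient;

impl ApiClient {
    fn get(&self, _key: &ObjectKey) -> Option<Object> { None }
    fn apply(&self, obj: Object) -> Object { obj }
}

/// Reads are served from the informer-fed cache; writes always hit the server.
struct CachedClient {
    cache: HashMap<ObjectKey, Object>, // kept up to date by watch events
    api: ApiClient,
}

impl CachedClient {
    fn get(&self, key: &ObjectKey) -> Option<Object> {
        // Read path: local store first (may be slightly stale).
        self.cache.get(key).cloned()
    }

    fn apply(&mut self, obj: Object) -> Object {
        // Write path: never cached directly; the cache catches up via the watch.
        self.api.apply(obj)
    }
}
```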
I think it seems like a fairly common pattern to have a It's also sometimes not as 1:1 as that. For example, you could consider
Okay, I always understood it as:
This is roughly how Based on this understanding, sharing an
But (from what I can deduce from https://pkg.go.dev/k8s.io/client-go/tools/cache?tab=doc, please correct me if this is still wrong!), this isn't actually how the
That pretty much solves problem 2 from earlier, but at the cost of needing to diff out the changes from the cache... which was just updated from the watch events from the server! That doesn't feel quite right either.
I need to play with the kube-rs code a bit to understand more deeply, but your understanding at the end is basically correct. Re: diffing out changes for the cache, you get this for free via resource version if I understand the question correctly (which is backed by the underlying k/v storage in etcd). I think we're talking about this, more or less: https://github.com/kubernetes/client-go/blob/ede92e0fe62deed512d9ceb8bf4186db9f3776ff/tools/cache/shared_informer.go#L534-L566 (edit: might need something like the fifo delta queue?)
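Roughly the shape being linked to, sketched in Rust: the same delta both updates the shared store and is handed to every registered handler, so consumers never have to diff the cache. `Delta` and `SharedStore` are illustrative names only:

```rust
use std::collections::HashMap;

// Hypothetical event/delta type; in client-go terms, one entry popped off the DeltaFIFO.
enum Delta {
    Applied { namespace: String, name: String, data: String },
    Deleted { namespace: String, name: String },
}

/// SharedInformer-ish shape: one watch feeds a store, and the *same* delta is
/// fanned out to every registered handler, so handlers never diff the cache.
struct SharedStore {
    store: HashMap<(String, String), String>,
    handlers: Vec<Box<dyn Fn(&Delta)>>,
}

impl SharedStore {
    fn handle_delta(&mut self, delta: Delta) {
        // 1. Keep the local cache in sync with the watch stream.
        match &delta {
            Delta::Applied { namespace, name, data } => {
                self.store.insert((namespace.clone(), name.clone()), data.clone());
            }
            Delta::Deleted { namespace, name } => {
                self.store.remove(&(namespace.clone(), name.clone()));
            }
        }
        // 2. Notify every subscriber with the delta itself (no cache diffing needed).
        for handler in &self.handlers {
            handler(&delta);
        }
    }
}
```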
jfyi... I think the pkg.go.dev link is missing quite a bit of content from the godoc.org link on the SharedInformer, which I found illuminating. I wasn't familiar with the reasoning behind the delta FIFO queue, but it appears to provide some of the guarantees enumerated in the long comment on SharedInformer: https://godoc.org/k8s.io/client-go/tools/cache?tab=doc#SharedInformer
Ouch, yeah. Feels misleading that they label pkg.go.dev as the official successor when it has traps like that.
Kind of spoiled this already on Discord, but I took a stab at reimplementing kube-rs's runtime as a series of
Example operator: https://gitlab.com/teozkr/kube-rt/-/blob/master/examples/configmapgen_controller.rs
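For a flavour of the stream-driven approach (not the actual code in the linked example; all names here are stand-ins):

```rust
use futures::StreamExt;

struct ObjectRef { name: String, namespace: String }

enum Event { Applied(ObjectRef), Deleted(ObjectRef) }

/// Drive reconciles by pulling events off a stream instead of registering callbacks.
async fn run(mut events: impl futures::Stream<Item = Event> + Unpin) {
    while let Some(event) = events.next().await {
        let obj = match event {
            Event::Applied(o) | Event::Deleted(o) => o,
        };
        // In the real thing this would be the user-supplied reconciler,
        // with errors feeding back into a requeue/backoff policy.
        if let Err(err) = reconcile(&obj).await {
            eprintln!("reconcile failed for {}/{}: {}", obj.namespace, obj.name, err);
        }
    }
}

async fn reconcile(_obj: &ObjectRef) -> Result<(), Box<dyn std::error::Error>> {
    Ok(())
}
```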
It's so clean. Fantastic work. Going to have a bank holiday field day with it tomorrow :D
FWIW, That said.. What do you think about the future of
/cc #102 since |
Yeah, that's a good breakdown. Very much agree with what you list as pros/cons here. I am very much in favour of option 1, primarily for the ease of maintenance, particularly early on when the patterns are being explored, but also because it does feel like a significant step in a better and more high-level direction. We also seem to have pretty similar viewpoints on what should be responsible for what code-wise, so would very much appreciate being able to pool our limited resources on the various issues herein. Now technically, there are a couple of things that I'd like to prod you about for the controller first, but I'll do that on Discord. For the smaller things: resource version, yeah, sure, that might be an issue if we break Informer (although you're introducing a watcher instead as a higher level api, so maybe they can temporarily co-exist - though not a big fan of the manual resourceVersion APIs anyway so maybe not).
Yeah, they have basically the same feature set and generate the same
Yeah, I don't really see a use case for mucking about with
To provide something like controller-runtime's `Controller`s. If you are not aware of how it works, watch the more opinionated talk from kubecon sandiego on kubebuilder's interface. Many positives in there that we can probably steal ideas from, as well as things to avoid (like how not to deal with Option types).

Wiring
Their interface kind of looks like:
We probably don't need the full idea of a `Manager` to attach a controller to, since I imagine it can self-manage with futures and infinite polling tasks, but we should make our own version of a `Controller` with similar design goals:

We MUST be able to:
- `Controller<K>` where K is the owned kube object (usually the crd)

We SHOULD also:
Reconcile
Additionally, we must be able to define/provide a reconciler for our owned object, and we need to call this reconcile fn whenever it, something it owns, or something it watches, changes.

Deciding what calls this is probably the meatiest part of the runtime. We need to start many `tokio::spawn` tasks that run poll on possibly many informers. Then, when they receive events, we need to figure out the owner of these. Some of this is wired up in controller-runtime via `SetControllerReference`, which looks kind of awkward. It should just be forwarding events to a reconcile fn.

We probably need an event queue that all informers can push to. This can abstract away the type of Informer events (as described in #134 (comment)) and potentially also debounce them (so that we don't cause reconciles back to back too quickly on the same object).
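A minimal sketch of such a queue, assuming debouncing just means "collapse requests for an object that is already pending"; `Request` and `DebouncedQueue` are illustrative names, not kube-rs types:

```rust
use std::collections::{HashSet, VecDeque};

// Hypothetical request key for "reconcile this object"; the event type is erased,
// which is what lets back-to-back events for the same object collapse into one.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Request {
    name: String,
    namespace: String,
}

/// A queue that all informers push into; a request that is already pending is
/// dropped, so rapid repeated events cause only one reconcile.
#[derive(Default)]
struct DebouncedQueue {
    pending: HashSet<Request>,
    order: VecDeque<Request>,
}

impl DebouncedQueue {
    fn push(&mut self, req: Request) {
        if self.pending.insert(req.clone()) {
            self.order.push_back(req);
        }
    }

    fn pop(&mut self) -> Option<Request> {
        let req = self.order.pop_front()?;
        self.pending.remove(&req);
        Some(req)
    }
}
```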
The reconciliation loop can then be written in an idempotent way, which forces the return signature of the function to be a result type to the Controller internals that effectively tells it whether or not we need to re-run the reconcile for this object (requeue), maybe do it in `n` seconds after some objects are up, or maybe do periodic updates.

The function needs to take only two strings: name of object + namespace. The reconcile should almost always start by fetching the CRD (note again that a reconcile request is not always initiated by the CRD changing).
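A sketch of that contract, with illustrative names only (none of this is an existing kube-rs API):

```rust
use std::time::Duration;

// What the reconciler hands back to the Controller internals.
enum ReconcileOutcome {
    Done,                   // nothing more to do until the next change
    Requeue,                // re-run as soon as possible
    RequeueAfter(Duration), // re-run in n seconds (also usable for periodic updates)
}

async fn reconcile(
    name: &str,
    namespace: &str,
) -> Result<ReconcileOutcome, Box<dyn std::error::Error>> {
    // A real reconciler would start by fetching the owned object (the CRD instance)
    // by name + namespace, since the request is not necessarily triggered by the
    // CRD itself changing.
    let _ = (name, namespace);
    Ok(ReconcileOutcome::Done)
}
```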
Examples
Then we need some example controllers that do the basic flow. The kubebuilder example is probably nice. We should also showcase how to set the root object's status.
Open Questions