[Proposal] Switch to use informers to handle k8s resource state/status checking #4467
Comments
Would you like to submit a PR to do this?
Instead of an Informer, I would recommend using a ListWatcher to avoid the extra memory usage.
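For illustration, here is a minimal Go sketch of the ListWatch-style approach suggested above: watch a single object until it reaches a terminal state, without keeping the full in-memory cache a shared informer maintains. This is a hedged sketch, not code from the repo; the pod kind, names, and timeout are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

func waitForPodDone(cfg *rest.Config, namespace, name string) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// ListWatch scoped to a single object by field selector, so memory
	// stays O(1) instead of caching every object of the kind.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", namespace,
		fields.OneTermEqualSelector("metadata.name", name),
	)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	// UntilWithSync lists once, then watches until the condition returns true.
	_, err = watchtools.UntilWithSync(ctx, lw, &corev1.Pod{}, nil,
		func(ev watch.Event) (bool, error) {
			pod, ok := ev.Object.(*corev1.Pod)
			if !ok {
				return false, nil
			}
			done := pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
			if done {
				fmt.Printf("pod %s finished with phase %s\n", name, pod.Status.Phase)
			}
			return done, nil
		})
	return err
}
```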
@sarabala1979 Thanks for the suggestion. My understanding is that an informer leverages the list-watch mechanism, so we only query the server the first time list() is called and read from the local cache after that. Could you clarify what you meant? @alexec Yes, I can look into this when I get a chance and will send a WIP PR if I ever start working on it.
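For reference, a minimal sketch of the informer behavior described in this comment, assuming client-go's shared informer factory: one initial LIST populates a cache that a WATCH keeps current, and later reads are served locally. The namespace and pod name are placeholders.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func run(cfg *rest.Config) error {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // triggers the initial LIST, then a WATCH
	// Block until the cache has been populated from the initial LIST.
	if !cache.WaitForCacheSync(stop, podInformer.Informer().HasSynced) {
		return fmt.Errorf("cache failed to sync")
	}
	// This read is served from the local cache -- no API server round trip.
	pod, err := podInformer.Lister().Pods("default").Get("my-pod")
	if err != nil {
		return err
	}
	fmt.Println(pod.Status.Phase)
	return nil
}
```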
See discussions in #4669 (comment). This should not be an issue anymore.
@jessesuen @alexec @sarabala1979 Do you know if there's a reason for writing the creation and status checking for the resource type template using `kubectl`?
Here's what I am proposing to help address the above issues: rewrite the status checking part using the k8s Go client instead of `kubectl`. Any feedback and suggestions would be appreciated.
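A hedged sketch of what the Go-client-based status check might look like, using the dynamic client so any resource kind can be handled. The GVR and the `status.succeeded` field are illustrative (a Job is used only as an example); this is not the actual executor code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func checkStatus(cfg *rest.Config, namespace, name string) error {
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Any resource kind works; the executor would derive the GVR from the
	// manifest it applied. A Job is used here only as an example.
	gvr := schema.GroupVersionResource{Group: "batch", Version: "v1", Resource: "jobs"}
	obj, err := dyn.Resource(gvr).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Read .status.succeeded generically, the way kubectl's jsonpath would.
	succeeded, found, err := unstructured.NestedInt64(obj.Object, "status", "succeeded")
	if err != nil || !found {
		return fmt.Errorf("status not ready yet: %v", err)
	}
	fmt.Printf("job %s: succeeded=%d\n", name, succeeded)
	return nil
}
```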
This is a yes from me. I think we should build a new template called
So are you suggesting we poll instead of watch? I think that would be reasonable and less taxing on the API server. I guess the next question is how users will get the ability to control this interval.
The choice of using kubectl was a convenience. The resource template is 3+ years old, and at the time dynamic clients did not yet exist in client-go. Now that they do, I agree we could do this without kubectl.
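For the interval question above, a small sketch of how a user-controlled poll interval could be threaded through, assuming client-go's wait helpers; `pollResource` and its parameters are hypothetical names, not an existing executor API.

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollResource runs check immediately and then once per interval until it
// returns true or the timeout expires. The interval would come from user
// configuration (e.g. the template spec) rather than being hard-coded.
func pollResource(interval, timeout time.Duration, check func() (bool, error)) error {
	return wait.PollImmediate(interval, timeout, wait.ConditionFunc(check))
}
```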
Does the user agent not already include the service account? That's unfortunate if it doesn't.
While this will create fewer pods, I don't think it will save on API server load. In any case, if we do decide to do it in an agent, I would still prefer to avoid a new template type and reuse the existing resource template.
How about updating the current resource implementation to use a dynamic informer then? I anticipate that we would be able to reuse that code in the agent.
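A minimal sketch of the dynamic-informer direction suggested here, assuming client-go's dynamicinformer package. The event handler, the completion signal, and all names are illustrative placeholders rather than actual Argo code.

```go
package main

import (
	"sync"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func watchResource(cfg *rest.Config, gvr schema.GroupVersionResource, namespace string) error {
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// A dynamic informer can watch any GVR the resource template creates.
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(dyn, 10*time.Minute, namespace, nil)
	informer := factory.ForResource(gvr).Informer()

	done := make(chan struct{})
	var once sync.Once
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// Real code would evaluate the template's successCondition /
			// failureCondition against newObj; this placeholder just
			// completes on the first observed update.
			once.Do(func() { close(done) })
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	cache.WaitForCacheSync(stop, informer.HasSynced)
	<-done // block until the handler observes the terminal state
	return nil
}
```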
Thank you! I think we can reuse the existing resource template. I am not sure we want to switch to an informer yet (though that was what this issue originally proposed), since that would require additional resources on the executor pod, whereas currently the resource requirements can be customized to be minimal. Just to clarify, the main changes in my proposal would be:
Summary
Switch to use informers to handle k8s resource state/status checking.
Use Cases
Currently the executor calls `kubectl get` on a fixed interval to check the status/state of the k8s resource that was created (related code). This puts a lot of burden on the cluster when there are many steps defined in the form of k8s manifests. We should consider switching to informers so that the status of created resources is checked against a local cache instead of being queried from the remote server on a schedule.
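For context, a simplified sketch of the polling pattern this issue describes (not the actual executor code): shelling out to `kubectl get` on a fixed ticker, where every tick costs a full API server round trip.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pollWithKubectl(namespace, kind, name string) {
	ticker := time.NewTicker(5 * time.Second) // fixed interval, one API call per tick
	defer ticker.Stop()
	for range ticker.C {
		out, err := exec.Command("kubectl", "-n", namespace, "get", kind, name,
			"-o", "jsonpath={.status}").Output()
		if err != nil {
			fmt.Println("kubectl get failed:", err)
			continue
		}
		status := strings.TrimSpace(string(out))
		fmt.Println("status:", status)
		// Real code would compare status against the template's
		// successCondition / failureCondition and stop when matched.
	}
}
```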
Message from the maintainers:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.