Ability to get root container stats at separate housekeeping interval #1247

In order to support out-of-resource monitoring in Kubernetes, I want to be able to get information about the root container at a separate interval from the one I use for containers associated with pods. For example, I would set housekeeping for containers associated with pods at 10s, but for the root container at 100ms.

A potential option is to add a flag:

-root_housekeeping_interval duration
    if specified, perform housekeeping on the root container at the specified interval rather than the default housekeeping interval

/cc @pmorie @ncdc @vishh - Thoughts?
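Should the flag route win out, here is a minimal sketch of how it might be wired. The flag name matches the proposal above; the `rootHousekeepingLoop` and `updateRootStats` names are invented for illustration and this is not cadvisor's actual housekeeping code:

```go
package main

import (
	"flag"
	"time"
)

// Flag as proposed above; zero means "use the default housekeeping interval".
var rootHousekeepingInterval = flag.Duration(
	"root_housekeeping_interval", 0,
	"if specified, perform housekeeping on the root container at the specified interval rather than the default housekeeping interval")

// rootHousekeepingLoop is a hypothetical loop that polls the root container
// on its own ticker, independent of the per-container housekeeping interval.
func rootHousekeepingLoop(updateRootStats func(), stop <-chan struct{}) {
	if *rootHousekeepingInterval <= 0 {
		return // no override requested; the default path covers the root too
	}
	ticker := time.NewTicker(*rootHousekeepingInterval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			updateRootStats()
		case <-stop:
			return
		}
	}
}

func main() {
	flag.Parse()
	stop := make(chan struct{})
	defer close(stop)
	go rootHousekeepingLoop(func() { /* scrape root cgroup stats here */ }, stop)
	time.Sleep(2 * time.Second) // let a few ticks fire in this demo
}
```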
Comments
SGTM
From what I can gather, fsInfo is computed on demand, so no separate housekeeping interval is needed there, but @pmorie has informed me that thin_ls data per container is cached. Either way, for out-of-resource killing we care more about rootfs available bytes and imagefs available bytes.
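As a point of reference, a sketch of reading that headroom through cadvisor's Go manager; I believe `Manager.GetFsInfo` and the `FsInfo` fields used below exist in this era's v2 API, but treat the exact names and the empty-label behavior as assumptions from a reading of the source:

```go
package eviction

import (
	"fmt"

	"github.com/google/cadvisor/manager"
)

// printFsHeadroom reports available bytes per filesystem. GetFsInfo with an
// empty label is assumed to return info for all known filesystems; a label
// such as "docker-images" would narrow it to the imagefs (assumption).
func printFsHeadroom(m manager.Manager) error {
	fsInfo, err := m.GetFsInfo("")
	if err != nil {
		return err
	}
	for _, fs := range fsInfo {
		fmt.Printf("%s mounted at %s: %d of %d bytes available\n",
			fs.Device, fs.Mountpoint, fs.Available, fs.Capacity)
	}
	return nil
}
```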
Yep, thin_ls data is cached, but my WIP hasn't established at what interval it is refreshed.
My understanding is that kubelet mainly needs higher resolution for machine level stats and not for container stats.
+1 for on-demand stats. I'd also like to avoid adding more flags if we can.
@vishh @timstclair https://github.com/google/cadvisor/blob/master/api/versions.go#L483 I will throw another wrinkle in here and broaden the request. I suspect that when we get a little further into the future, we will want to get the stats for certain special containers at a higher frequency. So I want to come back and re-phrase my request: I want to be able to tell cadvisor about a set of special containers that have a shorter housekeeping interval. I am fine not exposing it as a flag on the binary, but I would like to be able to specify it in how Kubernetes starts its internal cadvisor. Thoughts?
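For concreteness, a minimal sketch of what such a per-container override could look like when Kubernetes configures its embedded cadvisor; every name below is hypothetical, since cadvisor exposes no such per-container interval API today:

```go
package housekeeping

import "time"

// Config pairs a default housekeeping interval with per-container overrides.
// All names here are hypothetical; this is not an existing cadvisor API.
type Config struct {
	DefaultInterval time.Duration
	// Overrides maps absolute container names to their own interval.
	Overrides map[string]time.Duration
}

// IntervalFor returns the housekeeping interval to use for a container.
func (c Config) IntervalFor(containerName string) time.Duration {
	if d, ok := c.Overrides[containerName]; ok {
		return d
	}
	return c.DefaultInterval
}

// Example: pod containers every 10s, the root container every 100ms.
var example = Config{
	DefaultInterval: 10 * time.Second,
	Overrides: map[string]time.Duration{
		"/": 100 * time.Millisecond,
	},
}
```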
Can't we address that with on-demand scraping as well? WRT configuring the internal cAdvisor, I opened a proposal in #1224. It's an intermediate step, but it would mean we could stop leaking cAdvisor flags into kube binaries, and avoid the flag-editing needed for things like kubernetes/kubernetes#24771.
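To make the on-demand idea concrete, here is one possible shape for it: callers pass a freshness bound, and collection only happens when the cached sample is older than that. All names are invented for illustration:

```go
package ondemand

import (
	"sync"
	"time"
)

// Sample is a placeholder for a container stats snapshot.
type Sample struct {
	Timestamp time.Time
	// cpu, memory, and filesystem fields elided
}

// Collector serves stats on demand: a scrape runs only when the cached
// sample is older than what the caller can tolerate.
type Collector struct {
	mu     sync.Mutex
	cached Sample
	scrape func() Sample // the actual (expensive) collection
}

// StatsWithMaxAge returns the cached sample if it is fresh enough,
// otherwise it re-scrapes. Each caller states its own freshness need,
// so nothing polls on a timer when nobody is asking.
func (c *Collector) StatsWithMaxAge(maxAge time.Duration) Sample {
	c.mu.Lock()
	defer c.mu.Unlock()
	if time.Since(c.cached.Timestamp) > maxAge {
		c.cached = c.scrape()
	}
	return c.cached
}
```

Under this shape, the eviction path could ask for root-container stats with a 100ms maxAge while pod stats tolerate 10s, which yields the effect of the original request without a second housekeeping ticker.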
@timstclair - I am happy to defer to what you and @vishh think is best; you have more expertise in this area than I do. I just wanted to state where my confusion came from, as all things looked derived from the cached root container. If the desire is to support on-demand scraping instead, that works for me, because I get the same net result as the caller. Any suggestions on how you would want to see this implemented? I am volunteering my time because I think this is needed to make evictions actually useful to Kubernetes end users without having to sacrifice large amounts of reserved memory on the node.
I don't know that I have more experience in this area, but my main concern is that as we have more and more components which want various stats at various intervals, the complexity will get out of hand and stats will be collected unnecessarily often. If we can make it happen, I think a good on-demand model could clean this up and lead to greater efficiency. I think this is probably complex enough to warrant at least an informal design document. I'd be happy to help out with it, but here are a few issues I can think of off the top of my head: