Replies: 1 comment 3 replies
-
So, I confirmed the issue is still happening, this time armed with a bit more insight. While the current behaviour is technically correct, IIUC one of the main reasons for the k8s pagination mechanism is to be able to list objects while keeping a low memory footprint, which is exactly what I'm trying to achieve 🙃. Unfortunately this is bad news when dealing with a large number of objects (I crossed the 1 GB barrier, release build and all... 😅). I'm going to try to open a PR, but I'm not too fluent with the internals of kube-rs. If anyone wants to take the lead, review my code, or collaborate, I'll be more than happy. Thanks!
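In case it helps, here's a rough sketch of the kind of manual paging I have in mind, using `Api::list` with `ListParams::limit` and the continue token. The `paged_list` helper is hypothetical, and the field/method names (`limit`, `continue_token`, `metadata.continue_`) are from the kube version I'm on, so they may differ on other releases:

```rust
// Rough sketch (not kube-rs internals): fetch the initial list in small pages
// so only one page of objects is deserialized and held in memory at a time.
use k8s_openapi::api::core::v1::Pod;
use kube::{api::ListParams, Api};

async fn paged_list(api: &Api<Pod>) -> kube::Result<()> {
    let mut continue_token: Option<String> = None;
    loop {
        // Small page to cap memory; tune to taste.
        let mut lp = ListParams::default().limit(50);
        if let Some(token) = &continue_token {
            // `continue_token` is the field name in the kube version I'm on;
            // it may differ (or not exist) on older releases.
            lp.continue_token = Some(token.clone());
        }
        let page = api.list(&lp).await?;
        for pod in page.items {
            // Handle each object, then let it drop before the next page arrives.
            println!("{}", pod.metadata.name.unwrap_or_default());
        }
        // An empty/absent continue token means we reached the last page.
        continue_token = page.metadata.continue_.filter(|t| !t.is_empty());
        if continue_token.is_none() {
            break;
        }
    }
    Ok(())
}
```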
-
I'm using `kube::runtime::watcher` to watch certain resources in my cluster. It works great, but I suspect that the initial `list` stage causes a rather big memory spike when dealing with a large number of objects (e.g. 500+). I think what happens is that the `list` action lists and deserializes all the objects first, and only then hands them to the iterator. The data itself, plus the deserialization effort, take their toll on the process memory and we see a spike.
I tried setting `page_size` to relatively low numbers (10?), but it didn't seem to affect the memory usage pattern. It's possible that I'm not consuming fast enough or that I'm doing something wrong, though.
This is only my assessment, so I would love to confirm this suspicion with someone here who's more familiar with the code.
In addition, I wanted to ask if there's a possible workaround, or whether `list` could be changed to stream the objects one by one instead of fetching them all first.
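For reference, here's a minimal sketch of roughly what my setup looks like, assuming a kube version where `watcher::Config` exposes a `page_size` field (older releases configured the watcher through `ListParams`, so the knob may be named differently there):

```rust
// Minimal sketch of my setup, in case it helps reproduce the spike.
use futures::{pin_mut, TryStreamExt};
use k8s_openapi::api::core::v1::Pod;
use kube::{
    runtime::{watcher, WatchStreamExt},
    Api, Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::try_default().await?;
    let api: Api<Pod> = Api::all(client);

    // Ask the initial list stage for small pages; in my tests this did not
    // visibly change the memory profile, which is what I'm asking about here.
    let mut cfg = watcher::Config::default();
    cfg.page_size = Some(10);

    // Flatten watcher events into a stream of objects and consume them one by one.
    let stream = watcher(api, cfg).applied_objects();
    pin_mut!(stream);
    while let Some(pod) = stream.try_next().await? {
        println!("saw {}", pod.metadata.name.unwrap_or_default());
    }
    Ok(())
}
```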
Thanks a lot!